Why We Never Had a Shadow AI Problem And How We Built It That Way

Why We Never Had a Shadow AI Problem And How We Built It That Way

I read something interesting this morning.

A VentureBeat article covered a New York startup called Runlayer that's built an entire business around one problem: employees are secretly installing AI agents at work, and IT teams are losing the battle trying to stop them. Their solution is a security wrapper — a governance layer bolted on top of an existing AI installation to catch dangerous commands, block credential leaks, and log every tool call.

It's a smart product. And honestly, it tells me everything about how most companies are approaching enterprise AI right now.

They're reacting. We chose to design it differently from day one.

The Problem Nobody Wants to Talk About

Here's what's actually happening inside most companies.

An employee discovers OpenClaw, or a similar AI agent. They install it on their work laptop. They connect it to their Gmail, their Slack, their Jira. Within a week, they're doing 3 hours of work in 45 minutes.

They don't tell IT. They definitely don't tell security. And they are not going to stop — because why would they?

The VentureBeat article quoted Andy Berman, CEO of Runlayer, saying:

"We passed the point of telling employees no in 2024."

He's right. You can't ban your way out of this. Productivity always wins.

But here's the part that keeps security teams up at night: these personal installations run with root-level shell access. There's no isolation. No logging. No visibility. One malicious email with a hidden prompt injection and the agent is silently exfiltrating API keys, internal Slack messages, or client records — and nobody sees it happen until the damage is done.

Runlayer's solution is ToolGuard — real-time blocking of dangerous commands, credential leak detection, shadow MCP server scanning. It's good engineering. But it's still a patch on a broken architecture.

We asked a different question:

What if the architecture was right from the beginning?

How We Built ClawWorker at Codimite

Over the last couple of weeks, we built our own version of OpenClaw, which is 100% secure, Zero Vulnerability. We deployed ClawWorker, our internal enterprise-hosted AI assistant, across our team.

We're a Google Cloud Partner. We build AI automation for enterprise clients. So when we decided to bring an AI agent into our own workflows, we weren't going to run it the way a solo developer runs it on a personal machine. We needed it to be something we could stand behind, something we could show a CTO or a Head of Security without flinching.

Here's what we built:

Every Person Gets Their Own Isolated Instance

  • Not a shared AI endpoint with a security layer on top.
  • A fully separate, containerized environment — its own domain, its own credentials, its own version.

Admin Dashboard Visibility

  • The admin dashboard tells the whole story at a glance.
  • You can see who has an instance, what domain it's on, what version it's running, whether it's active.
  • Password management, instance controls, access actions — all in one place.
  • It took me about 90 seconds to spin up my own instance for the first time. Type a name, press a button, done.

Infrastructure Control

  • It runs on our infrastructure, on our domain.
  • Not a SaaS subscription we're renting. Not a third-party tool we're hoping stays compliant.
  • Our GCP environment. Our controls. Our responsibility.
  • When a client asks us how their data is handled, the answer isn't a privacy policy URL — it's an architecture diagram and an audit trail.

This is the part that changes everything:

Nobody here ever had a shadow AI problem because there was never a reason to go shadow.
The secure version was never harder than the risky version. We made the right option the easy option.

What Runlayer Gets Right — And What's Still Missing

To be fair, Runlayer is solving a real and urgent problem. Most enterprise AI governance conversations start after the fact — after employees are already running unmanaged agents, after the IT team has already lost visibility. In that situation, a ToolGuard wrapper is genuinely valuable.

But let's be honest about what it is: it's remediation. You're hardening a house that was built without a foundation.

There are a few things that a governance wrapper fundamentally cannot solve:

  • Shared context is still shared context: Even with ToolGuard blocking dangerous outputs, if your AI agent has access to your Gmail and your colleague's Slack integration on the same machine, the isolation problem hasn't been fully solved — it's been managed. Our model eliminates that design flaw entirely.
  • Governance requires ongoing engineering overhead: Runlayer's deployment involves MDM software, IDP integration with Okta or Entra, SIEM connections to Datadog or Splunk. For a large enterprise with a dedicated security team, that's manageable. For a growing company that just wants to move fast safely? That's a 3-month implementation project before anyone gets value.
  • The "security vendor" framing creates distance: When your AI assistant lives inside a security product, the psychological effect is subtle but real — people feel watched, not empowered. Our team uses ClawWorker freely because it was built for them, not built to monitor them.

What We Learned Actually Matters in Enterprise AI

After running ClawWorker internally — and building it for client environments — here's what I've seen move the needle:

  • Speed of trust matters more than speed of deployment: The fastest AI rollout in the world fails if nobody uses it because they're afraid of what happens to their data. When people can see their own instance, on their own subdomain, running their own version, they trust it. That trust is what drives adoption.
  • Isolation is a feature, not just a security control: Separate instances aren't just safer. They're also cleaner. My PRDs and other documents don't bleed into someone else's conversations. My assistant knows my work, my style, my workflows — and only mine.
  • Visibility at the admin layer changes how leadership thinks about AI: The moment a Head of Operations can open a dashboard and see every active instance, every owner, every version — AI stops being a risk conversation and starts being an infrastructure conversation. That shift is everything when you're trying to get budget approved or convince a CTO to expand adoption.
  • Being a Google Cloud Partner isn't just a badge: It means our deployments align with GCP's security posture, compliance frameworks, and audit infrastructure. When enterprise clients ask about data residency, encryption, or access controls — we're answering from inside the same ecosystem they trust for everything else.

Where This Goes Next

The VentureBeat article ends with a quote I found telling:

"The question isn't really whether to allow AI agents — it's how to govern them."

I'd push that further. The question isn't just how to govern them. It's who should own them.

The governance-as-a-service model assumes the AI agent itself is outside the organization's control — that you're always renting someone else's infrastructure and just trying to make it safe enough. We think that's the wrong starting assumption, especially for companies that handle sensitive client data, enterprise contracts, or regulated industries.

Owning your AI stack — really owning it, instance by instance, domain by domain, user by user — is not just a security decision. It's a strategic one. It determines what you can promise clients. It determines what you can build on top. It determines whether AI is a tool you use or infrastructure you control.

At Codimite, we're building toward the second one.

Want to explore how a managed, isolated AI assistant deployment could work for your team?
Explore OpenClaw Hosted on GCP Zero Vulnerability. Maximum Security to learn more.

Codimite Development Team
Codimite
"CODIMITE" Would Like To Send You Notifications
Our notifications keep you updated with the latest articles and news. Would you like to receive these notifications and stay connected ?
Not Now
Yes Please