I read something interesting this morning.
A VentureBeat article covered a New York startup called Runlayer that's built an entire business around one problem: employees are secretly installing AI agents at work, and IT teams are losing the battle trying to stop them. Their solution is a security wrapper — a governance layer bolted on top of an existing AI installation to catch dangerous commands, block credential leaks, and log every tool call.
It's a smart product. And honestly, it tells me everything about how most companies are approaching enterprise AI right now.
They're reacting. We chose to design it differently from day one.
Here's what's actually happening inside most companies.
An employee discovers OpenClaw, or a similar AI agent. They install it on their work laptop. They connect it to their Gmail, their Slack, their Jira. Within a week, they're doing 3 hours of work in 45 minutes.
They don't tell IT. They definitely don't tell security. And they are not going to stop — because why would they?
The VentureBeat article quoted Andy Berman, CEO of Runlayer, saying:
"We passed the point of telling employees no in 2024."
He's right. You can't ban your way out of this. Productivity always wins.
But here's the part that keeps security teams up at night: these personal installations run with root-level shell access. There's no isolation. No logging. No visibility. One malicious email with a hidden prompt injection and the agent is silently exfiltrating API keys, internal Slack messages, or client records — and nobody sees it happen until the damage is done.
Runlayer's solution is ToolGuard — real-time blocking of dangerous commands, credential leak detection, shadow MCP server scanning. It's good engineering. But it's still a patch on a broken architecture.
We asked a different question:
What if the architecture was right from the beginning?
Over the last couple of weeks, we built our own version of OpenClaw, which is 100% secure, Zero Vulnerability. We deployed ClawWorker, our internal enterprise-hosted AI assistant, across our team.
We're a Google Cloud Partner. We build AI automation for enterprise clients. So when we decided to bring an AI agent into our own workflows, we weren't going to run it the way a solo developer runs it on a personal machine. We needed it to be something we could stand behind, something we could show a CTO or a Head of Security without flinching.
Here's what we built:
This is the part that changes everything:
Nobody here ever had a shadow AI problem because there was never a reason to go shadow.
The secure version was never harder than the risky version. We made the right option the easy option.
To be fair, Runlayer is solving a real and urgent problem. Most enterprise AI governance conversations start after the fact — after employees are already running unmanaged agents, after the IT team has already lost visibility. In that situation, a ToolGuard wrapper is genuinely valuable.
But let's be honest about what it is: it's remediation. You're hardening a house that was built without a foundation.
There are a few things that a governance wrapper fundamentally cannot solve:
After running ClawWorker internally — and building it for client environments — here's what I've seen move the needle:
The VentureBeat article ends with a quote I found telling:
"The question isn't really whether to allow AI agents — it's how to govern them."
I'd push that further. The question isn't just how to govern them. It's who should own them.
The governance-as-a-service model assumes the AI agent itself is outside the organization's control — that you're always renting someone else's infrastructure and just trying to make it safe enough. We think that's the wrong starting assumption, especially for companies that handle sensitive client data, enterprise contracts, or regulated industries.
Owning your AI stack — really owning it, instance by instance, domain by domain, user by user — is not just a security decision. It's a strategic one. It determines what you can promise clients. It determines what you can build on top. It determines whether AI is a tool you use or infrastructure you control.
At Codimite, we're building toward the second one.
Want to explore how a managed, isolated AI assistant deployment could work for your team?
Explore
OpenClaw Hosted on GCP Zero Vulnerability.
Maximum Security
to learn more.