Posts

OpenAI Just Validated the Autonomous Agent Category

OpenClaw creator Peter Steinberger just joined OpenAI. Sam Altman personally recruited him. The project, at 200,000+ GitHub stars in under three months, is transitioning to an independent foundation with OpenAI's backing. This isn't just a hire. It's a signal: autonomous agents are the next platform shift. But here's what nobody's talking about: the industry is investing billions in making agents more capable. Almost nobody is investing in making them governable. The numbers:

- 80% of the Fortune 500 are running active AI agents (Microsoft)
- Only 14.4% have full security approval for those agents (Gravitee)
- 88% of organizations have confirmed or suspected security incidents from AI agents
- 29% of employees admit to using unsanctioned agents at work

Three days before the Steinberger hire, Proofpoint acquired Acuvity, an AI governance startup. A major cybersecurity company paid acquisition money for this problem. The governance market isn't theoretical. It...
Recent posts

180,000 Developers Gave an AI Agent Root Access

180,000 GitHub stars in under three months. 25,310 stars in a single day, shattering every previous GitHub record. And every one of those developers gave OpenClaw root-level access to their machine. I'm not here to say autonomous agents don't work. They do. I've been building AI systems for nine years and I've never seen a productivity shift like this. But the numbers should concern you:

- 80% of Fortune 500 companies are now running active AI agents (Microsoft Cyber Pulse Report)
- 29% of employees admit to using unsanctioned AI agents for work
- Cisco's AI Security team called OpenClaw "groundbreaking" from a capability perspective and "an absolute nightmare" from a security perspective

The knee-jerk reaction is to ban autonomous agents. But as Brianne Kimmel of Worklife Ventures puts it: "People are trying these on evenings and weekends." Your best engineers will use the best tools; block them and they'll find workarounds or lea...

Introducing AICtrlNet: AI Orchestration Where Humans Are First-Class Citizens

January 29, 2026 • Srirajasekhar "Bobby" Koritala

We’re excited to announce AICtrlNet, an open core AI orchestration platform that treats humans and AI as equal participants in workflows, not afterthoughts. The Community Edition is MIT licensed and available today on GitHub and PyPI.

The Problem We Kept Running Into

Over the past few years, we’ve built AI systems for enterprises across healthcare, finance, and legal. Every project hit the same wall: AI workflows that work in demos fail in production because they ignore humans. The existing tools fell into two camps:

- Code-first frameworks (LangChain, CrewAI, AutoGen) are powerful but assume developers will handle everything programmatically. There’s no visual way to design workflows, no built-in governance, and adding human approval steps means building custom infrastructure.
- Visual automation tools (n8n, Z...

AI Governance Can't Be an Afterthought

Why a Fortune 500 Company Killed Their AI Project 3 Weeks Before Launch

Last month, a Fortune 500 company killed an AI project three weeks before launch. The AI worked great. Models were accurate, the pipeline was fast, demos impressed everyone. They killed it because Legal couldn't answer one question: "If this AI makes a mistake, who's accountable?"

The Pattern I Keep Seeing

- Months 1-6: Build the AI system. Governance is "we'll figure that out later."
- Month 7: Demo to stakeholders. Everyone's excited.
- Month 8: Legal review. Compliance review. Security review.
- Month 9: "Where's the audit trail?" "Who approved this model?" "What happens if it's wrong?"
- Month 10: Project delayed indefinitely.

According to Deloitte, 62% of enterprise AI projects experience significant delays during compliance review. Average delay: 4.3 months.

The Five Pillars

Governance means answering these questions:

1. Explainability - W...
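The Month 9 questions are concrete enough to sketch. Here is a minimal, illustrative audit record for one AI decision; the field names are my assumptions, not a standard schema or any particular platform's API:

```python
import json
import hashlib
from datetime import datetime, timezone

def log_decision(model_id: str, model_version: str, inputs: dict,
                 output: dict, approver: str,
                 log_path: str = "audit.jsonl") -> dict:
    """Append one AI decision to an append-only JSONL audit trail (illustrative)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,   # with approved_by, answers "who approved this model?"
        # Hash inputs for reproducibility without storing raw, possibly sensitive data
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "approved_by": approver,          # the accountable human
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only JSONL file is the simplest thing that could work; a production system would want tamper-evident storage, but the fields worth capturing are the same.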

The Missing Piece in AI Orchestration

**Why "Just Add an Approval Step" Is Harder Than It Sounds**

Every AI demo follows the same script:

1. "Watch this AI agent analyze your data..."
2. "Now it's generating recommendations..."
3. "And here it executes automatically!"

Then someone asks: "What if the recommendation is wrong?" The presenter pauses. "Well, you could add an approval step." That's where the demo ends. Because in practice, "add an approval step" means:

- Building a custom notification system
- Creating a UI for reviewing AI decisions
- Preserving context so reviewers understand what they're approving
- Handling timeouts, escalations, and edge cases
- Maintaining audit trails for compliance

Gartner found organizations spend 40% of their AI budget on "integration and operationalization", which includes human oversight mechanisms.

**The Three Problems**

In my experience, human-in-the-loop fails for three reasons:

1. **Context ...
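To make that bullet list concrete, here is a minimal, framework-agnostic sketch of an approval step with context preservation, a timeout, and escalation. The names are illustrative assumptions, not any product's API:

```python
import queue
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ApprovalRequest:
    action: str              # what the agent wants to do
    context: dict            # why: inputs, reasoning, confidence score
    requested_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def request_approval(req: ApprovalRequest,
                     reviewer_inbox: queue.Queue,
                     timeout_s: float = 3600.0) -> str:
    """Block the workflow until a reviewer decides, or escalate on timeout."""
    decision: queue.Queue = queue.Queue()
    # Stand-in for a real notification system (email, Slack, a review UI)
    reviewer_inbox.put((req, decision))
    try:
        return decision.get(timeout=timeout_s)   # "approved" or "rejected"
    except queue.Empty:
        return "escalated"                       # hand off to a fallback reviewer
```

In a real deployment the reviewer inbox would be backed by email, Slack, or a review UI and the verdict would arrive from that channel; the in-process queue just keeps the sketch self-contained and runnable. Notice how much of the bullet list (notifications, context, timeouts, escalation) shows up even in this toy version.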

The Protocol Wars: What's Missing from MCP, A2A, and OpenAI's Agents SDK

Anthropic has MCP. Google has A2A. OpenAI has their Agents SDK. The biggest players in AI are racing to define how agents communicate with each other. After spending nine years building AI systems, including several that resulted in patents, I keep noticing a gap in all these protocols: humans.

**The Current Landscape**

- **MCP (Model Context Protocol)** - Anthropic's approach to standardizing how AI models share context
- **A2A (Agent-to-Agent)** - Google's protocol for autonomous agent coordination
- **OpenAI Agents SDK** - A production framework for multi-agent systems

Each solves real technical problems. But they all focus on AI-to-AI communication.

**The Missing Question**

When Agent A hands off to Agent B, what if a human should have been Agent B? From my experience building AI in healthcare, finance, and logistics: the hardest part isn't AI talking to AI. It's AI knowing when to talk to humans, and doing that handoff well. The protocol that figures this out wins. ...
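What would "a human should have been Agent B" look like in code? A rough sketch, assuming nothing beyond a confidence score and a reversibility flag; neither the types nor the thresholds below are part of MCP, A2A, or the Agents SDK:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Handoff:
    task: str
    context: dict        # same context, whether an agent or a person receives it
    confidence: float    # agent's self-reported confidence, 0.0 to 1.0
    reversible: bool     # can the action be undone cheaply?

def route(handoff: Handoff,
          to_agent: Callable[[Handoff], None],
          to_human: Callable[[Handoff], None],
          min_confidence: float = 0.9) -> None:
    """Send the handoff to Agent B, or to a person when a person should be Agent B."""
    if handoff.confidence < min_confidence or not handoff.reversible:
        to_human(handoff)    # low confidence or irreversible: a human decides
    else:
        to_agent(handoff)
```

The point isn't the threshold; it's that the human route exists as a peer destination and receives the same context the agent route would.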

Interview with an AI: What It’s Really Like to Work with Artificial Intelligence

Introduction: In the early days, working with AI felt like commanding a robot. You gave it an instruction, and you got a result: sometimes great, sometimes… not so much. But lately, it’s started to feel different. More like a conversation. More like working with someone. In this piece, I reflect on that shift: how AI is becoming a co-creator, what makes those interactions successful, and why emotion, context, and feedback loops might be just as important as computing power.

1. Working with AI Feels Like Working with a Really Smart (But Literal) Partner

One of the biggest misconceptions about AI is that it just “knows” what you want. The truth? You often need to guide it. A lot. I’ve found myself correcting it midway: “Hmm, that’s not what I meant. Can you focus more on X and less on Y?” And when it responds with “You’re absolutely right,” and pivots, there’s a real moment of collaboration there. That kind of responsiveness is powerful. It reminds me that AI doesn’t ne...