** Why "Just Add an Approval Step" Is Harder Than It Sounds Every AI demo follows the same script: 1. "Watch this AI agent analyze your data..." 2. "Now it's generating recommendations..." 3. "And here it executes automatically!" Then someone asks: "What if the recommendation is wrong?" The presenter pauses. "Well, you could add an approval step." That's where the demo ends. Because in practice, "add an approval step" means: - Building a custom notification system - Creating a UI for reviewing AI decisions - Preserving context so reviewers understand what they're approving - Handling timeouts, escalations, and edge cases - Maintaining audit trails for compliance Gartner found organizations spend 40% of their AI budget on "integration and operationalization"—which includes human oversight mechanisms. **The Three Problems** In my experience, human-in-the-loop fails for three reasons: 1. **Context ...
Anthropic has MCP. Google has A2A. OpenAI has its Agents SDK. The biggest players in AI are racing to define how agents communicate with each other.

After spending nine years building AI systems—including several that resulted in patents—I keep noticing a gap in all these protocols: humans.

**The Current Landscape**

- **MCP (Model Context Protocol)** - Anthropic's approach to standardizing how AI models share context
- **A2A (Agent2Agent)** - Google's protocol for autonomous agent coordination
- **OpenAI Agents SDK** - A production framework for multi-agent systems

Each solves real technical problems. But they all focus on AI-to-AI communication.

**The Missing Question**

When Agent A hands off to Agent B, what if a human should have been Agent B?

From my experience building AI in healthcare, finance, and logistics: the hardest part isn't AI talking to AI. It's AI knowing when to talk to humans—and doing that handoff well.

The protocol that figures this out wins. ...
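To make that missing question concrete, here is a minimal sketch of what a human-aware handoff could look like. The names (`Handoff`, `HumanQueue`, `route_handoff`), the 0.8 confidence threshold, and the risk labels are all hypothetical; none of this is drawn from MCP, A2A, or the OpenAI Agents SDK. The interesting part is the routing decision: the sender does not assume the receiver is another agent.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Handoff:
    task: str
    context: dict      # what the receiving party needs in order to act
    confidence: float  # the sending agent's confidence in its own output
    risk: str          # "low" | "medium" | "high", set by policy


class Receiver(Protocol):
    def accept(self, handoff: Handoff) -> None: ...


class AgentB:
    def accept(self, handoff: Handoff) -> None:
        print(f"Agent B executing: {handoff.task}")


class HumanQueue:
    def accept(self, handoff: Handoff) -> None:
        # In practice this stub is the hard part: notification, review UI,
        # context presentation, timeouts, escalation, audit trail.
        print(f"Queued for human review: {handoff.task}")


def route_handoff(handoff: Handoff, agent: Receiver, humans: Receiver) -> None:
    """Decide whether Agent B or a human should receive the handoff."""
    needs_human = handoff.risk == "high" or handoff.confidence < 0.8
    target = humans if needs_human else agent
    target.accept(handoff)


if __name__ == "__main__":
    route_handoff(
        Handoff(task="approve $50k refund", context={"customer": "acme"},
                confidence=0.62, risk="high"),
        agent=AgentB(),
        humans=HumanQueue(),
    )
```

The routing rule itself is trivial; the unsolved part is everything behind `HumanQueue.accept`, which is exactly the approval-step machinery sketched earlier.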