Agents Do Not Improvise Well

We keep giving AI agents access to our tools and then acting surprised when they do something unexpected. The problem is not the AI. The problem is that we never gave it the rulebook.
For years, workflow automation meant connecting tools through integrations. If this, then that. Trigger here, action there. It worked for simple tasks. It broke under complexity. And it was built for humans who could read error logs and fix broken triggers when things went sideways. AI agents do not work that way. They need context, not just connections.
Context Is the Missing Infrastructure Layer
Three of the most influential voices in technology arrived at the same conclusion in early 2026, from completely different directions.
David Heinemeier Hansson announced that Basecamp is going agent-accessible, calling agents “the killer app for AI” and betting that the future is about making your product callable by agents, not building AI features into it. Jack Dorsey laid out his vision for Block as a “mini AGI”, rebuilt around a continuously updated “world model” where every decision, discussion, and plan is machine-readable and available to every person and agent at the edge. Andrej Karpathy went viral describing how he uses LLMs to build personal knowledge bases that compound over time, arguing that “the tedious part of maintaining a knowledge base is not the reading or the thinking, it is the bookkeeping.”
All three are pointing at the same gap. Agents need structured context to operate. Products need to be callable. Decisions need to be recorded. Knowledge needs to compound. But none of them are asking the harder question: who governs what the agent does once it has that context?
Context without governance is just a smarter way to make unaccountable decisions faster.

Accessible Is Not Enough. Governable Is.
Basecamp made their product agent-accessible. That is necessary but not sufficient. An API lets agents act. It does not tell them what to do or prevent them from doing the wrong thing.
Dorsey is building a company world model. That is the right instinct. But a world model without structured processes is a database of past decisions. It tells agents what happened. It does not govern what happens next.
Karpathy is compiling knowledge bases. That compounds understanding. But a knowledge base is passive. It informs. It does not enforce.
We see the gap play out constantly. A team connects an AI agent to their tools. It starts doing useful work. Then it does something unexpected. Something that would fail an audit. The problem is not the AI. The problem is that the AI had no reliable source of truth about how work is supposed to happen, and no guardrails enforcing that source of truth in real time.
Workflows Are the Context Engine
This is where Model Context Protocol changes the equation. MCP is the standard emerging for how AI systems communicate with the software and data around them. Instead of point-to-point integrations, MCP lets AI agents discover, query, and act on structured operational context.
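The discover-query-act pattern can be sketched in a few lines. This is a toy, in-process illustration of the idea, not the real MCP wire protocol or SDK; the tool name and its return shape are hypothetical.

```python
# Toy illustration of the MCP pattern: an agent discovers structured tools
# and calls them with typed arguments, instead of being wired up through
# point-to-point integrations. Names and schemas here are hypothetical.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., Any]

class ToyContextServer:
    """Registers tools and answers discovery and call requests."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> list[dict[str, str]]:
        # Discovery: the agent learns what it may do, with descriptions.
        return [{"name": t.name, "description": t.description}
                for t in self._tools.values()]

    def call_tool(self, name: str, **kwargs: Any) -> Any:
        # Action: a structured call against a registered tool.
        return self._tools[name].handler(**kwargs)

server = ToyContextServer()
server.register(Tool(
    name="get_workflow_run",
    description="Fetch a workflow run by id",
    handler=lambda run_id: {"id": run_id, "status": "in_progress"},
))

# The agent discovers, then acts on, structured operational context.
print([t["name"] for t in server.list_tools()])    # ['get_workflow_run']
print(server.call_tool("get_workflow_run", run_id="run-42")["status"])
```

A real deployment would speak MCP over a transport via a client/server library; the point here is only the shape of the interaction: discover first, then call with structured arguments.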
Process Street now has an MCP Server. And it does not just make workflows agent-accessible. It makes them agent-governable.
A knowledge base tells an agent what the company knows. A workflow tells the agent what to do, in what order, with what approvals, under what constraints. The difference is the difference between giving someone a policy manual and giving them an operating system.
Process Street workflows are versioned, governed, and auditable. Every step, every approval, every form field, every conditional rule. When that structure is exposed through MCP, an AI agent does not improvise. It operates inside the process, with full context of the policies it is supposed to enforce, and it generates proof that the work was done correctly.
The Access Control Layer for AI
Here is what this looks like in practice. An AI agent runs an employee onboarding workflow. It pulls the new hire’s information from the HRIS, fills the Process Street form fields, triggers the IT provisioning automation, and advances through each step. But when it reaches the manager approval gate, it stops. It notifies the manager. It waits. No amount of agent capability can bypass that gate, because the workflow is deterministic. The approval step is not a suggestion. It is a constraint.

That is what compliance-ready AI actually looks like. The agent has full context of the process. It can fill fields, trigger automations, query previous workflow runs, and advance tasks. But it cannot skip an approval step. It cannot bypass a compliance gate. It cannot take an action that the workflow does not permit.
A Process Street workflow is a gated, deterministic sequence. Steps happen in order. Approvals block progress until a human signs off. Conditional logic routes work based on real data, not agent inference. The agent operates within the workflow, but the workflow decides what the agent is allowed to do next.
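The gate-as-constraint idea can be modeled as a small state machine. This is a minimal sketch under assumptions, not the Process Street data model: step names, the approval API, and the audit-log format are all illustrative.

```python
# Sketch of a gated, deterministic workflow: steps run in order, and an
# approval gate blocks progress until a human signs off, no matter what
# the agent requests. Every action lands in an audit log.

class ApprovalRequired(Exception):
    pass

class WorkflowRun:
    def __init__(self, steps, gates):
        self.steps = steps          # ordered list of step names
        self.gates = gates          # steps that require human approval
        self.position = 0
        self.approvals = set()      # steps a human has signed off on
        self.audit_log = []         # every action, traceable

    def current_step(self):
        return self.steps[self.position]

    def approve(self, step, approver):
        # Only this human-facing call path records an approval.
        self.approvals.add(step)
        self.audit_log.append(("approved", step, approver))

    def advance(self, actor):
        step = self.current_step()
        if step in self.gates and step not in self.approvals:
            self.audit_log.append(("blocked", step, actor))
            raise ApprovalRequired(f"{step} needs human sign-off")
        self.audit_log.append(("completed", step, actor))
        self.position += 1

run = WorkflowRun(
    steps=["collect_hris_data", "manager_approval", "provision_it"],
    gates={"manager_approval"},
)
run.advance(actor="agent")          # no gate here: the agent proceeds
try:
    run.advance(actor="agent")      # the agent hits the approval gate
except ApprovalRequired:
    pass                            # it waits; it cannot bypass the gate
run.approve("manager_approval", approver="manager@example.com")
run.advance(actor="agent")          # now the workflow permits it
```

The constraint lives in `advance`, not in the agent: no amount of agent capability changes what that method allows, which is what makes the sequence deterministic rather than advisory.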
The Companies That Win at AI Will Build Compliance First
AI-ready operations require structured processes that are callable, governable, and auditable. Callable means agent-accessible through MCP. Governable means access-controlled with human-in-the-loop gates that agents cannot bypass. Auditable means every action logged, every decision traceable, every compliance requirement provable.

The companies that win at AI over the next few years will not be the ones that moved fastest. They will be the ones that built compliance into how their AI operates from the start, before the regulators arrived, before the audit surfaced a gap, before the agent did something no one can explain.
Your workflows are already the rulebook. Now they can talk to the agents doing the work.
Connect your workflows to AI with the Process Street MCP Server.
The post Agents Do Not Improvise Well first appeared on Process Street | Compliance Operations Platform.