Here's the disconnect: almost 80% of companies are deploying AI agents… but only 15% will let them run without human oversight.
That finding comes from a Gartner survey of 360 IT leaders published on September 30th. The gap between those two numbers tells you everything you need to know about where AI actually is versus the hype.
The numbers get worse when you look at the details. Fewer than 20% of IT leaders believe their workers can protect against AI hallucinations. Only 13% think their organization has the proper governance to manage agents. Nearly three-quarters view AI agents as a new attack vector.
This matters because AI providers are pushing hard in the opposite direction. Every major tech company launched agentic AI products in 2025. Salesforce has Agentforce. Microsoft is pushing Copilot everywhere. OpenAI released Operator. The message: autonomous agents will transform work.
McKinsey research shows why companies aren't buying it. Eight in 10 report no significant bottom-line impact. McKinsey calls this the "gen AI paradox." Companies deploy tools like chatbots that scale quickly but deliver diffuse gains. Meanwhile, 90% of function-specific AI projects—the ones that could actually transform operations—remain stuck in pilot mode.
MIT published even starker numbers in August. Only about 5% of enterprise AI pilots achieve revenue acceleration. The rest stall. The research covered over 150 interviews with leaders, a survey of 350 employees, and analysis of 300 public AI deployments.
The trust problem has roots in how AI companies behave. Max Goss, senior director analyst at Gartner, said vendors are "repeatedly changing their branding, pricing models and product offerings." It doesn't help that many release new AI tools before building the governance and security capabilities to protect them.
Companies that do deploy AI agents see real problems. PagerDuty surveyed 1,500 IT and business executives across six countries. While 81% of executives now trust AI to manage crises like security breaches, 84% have already experienced AI-related outages. That's a real gap between confidence and reality.
Among organizations that deployed multiple AI agents, 79% believe AI-driven complexity will exceed their management capabilities. That number drops to 57% among companies without AI agents. The more companies use AI agents, the less they trust them.
Gartner also predicts more than 40% of agentic AI projects will be canceled by the end of 2027. The reasons: escalating costs, unclear business value, and inadequate risk controls. Many vendors engage in "agent washing," rebranding existing products like chatbots and RPA tools without adding substantial agentic capabilities.
The same MIT research points to what works: companies that buy specialized AI tools from vendors succeed about 67% of the time, while companies that build internally succeed only one-third as often. And the ROI shows up in back-office automation, not in sales and marketing, where most budgets go.
PwC surveyed 300 executives in May 2025. Among companies adopting AI agents, 66% say the agents deliver measurable value through increased productivity. But most are using embedded agentic features in enterprise apps for routine tasks. That boosts productivity without transforming operations.
Only 14% of IT leaders surveyed by Gartner were confident their organization had consensus on which problems AI should solve. Without alignment between IT, business units, and executives, AI deployments fail.
Design lesson: The problem isn't the technology. It's the gap between what AI agents can reliably do and what organizations need them to do. Most enterprises aren't agent-ready. They lack the APIs, the data infrastructure, the governance frameworks, and the risk controls. Deploying autonomous AI into that environment creates more problems than it solves.
A better approach: Start with low to medium complexity use cases. Repetitive tasks that require some domain knowledge but not complex decision-making. Customer support ticket routing. Appointment scheduling. Document processing. Build confidence and experience before attempting full autonomy.
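To make that pattern concrete, here's a minimal sketch of confidence-gated routing with human escalation, the guardrail structure behind use cases like ticket routing. Everything here is hypothetical: the toy classifier stands in for whatever model or LLM call you'd actually use, and the threshold is an assumption you'd tune from pilot data.

```python
# Minimal sketch: the agent acts alone only when it is confident,
# and escalates to a person otherwise. The classifier is a
# hypothetical stand-in for a real model or LLM call.

from dataclasses import dataclass

@dataclass
class RoutingDecision:
    queue: str          # destination queue, e.g. "billing"
    confidence: float   # model confidence in the label, 0.0-1.0

CONFIDENCE_FLOOR = 0.85  # assumption: tune per use case from pilot data

def classify_ticket(text: str) -> RoutingDecision:
    """Hypothetical classifier; replace with your model or LLM call."""
    if "refund" in text.lower():
        return RoutingDecision(queue="billing", confidence=0.93)
    return RoutingDecision(queue="general", confidence=0.41)

def route(text: str) -> str:
    decision = classify_ticket(text)
    if decision.confidence >= CONFIDENCE_FLOOR:
        # High confidence: route autonomously, but log the decision
        # so humans can audit the failure modes later.
        return f"auto-routed to {decision.queue} (conf={decision.confidence:.2f})"
    # Low confidence: fall back to human triage instead of guessing.
    return f"escalated to human triage (conf={decision.confidence:.2f})"

if __name__ == "__main__":
    print(route("I was double-charged, please refund me"))
    print(route("Something weird happened with my account"))
```

The design choice matches the deployment advice above: start the threshold conservative, audit the logged decisions, and raise the agent's autonomy only as the audit trail earns it.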
Sources:
CIO Dive, September 30, 2025
McKinsey, June 13, 2025
MIT NANDA Report, August 2025
PagerDuty Survey, September 24, 2025
Gartner Prediction, June 25, 2025
The Pattern Across Autonomy
The AI agent trust gap mirrors what we see in physical autonomy.
Waymo operates commercial robotaxis in five cities. Tesla launched its robotaxi service in Austin with human safety monitors still riding in every vehicle. Amazon's warehouse robots work because the environment is controlled. MightyFly's $50M healthcare drone deal works because the routes are fixed and the use case is narrow.
Autonomy succeeds when the operating environment is constrained, the failure modes are understood, and the governance is in place. It fails when organizations try to deploy it everywhere at once without the infrastructure to support it.
The same companies rushing to deploy AI agents are the ones that haven't figured out how to measure AI productivity gains. They're building on unstable foundations.
The difference between 2025 and five years ago: we now have enough real deployments to see the pattern. Autonomous systems work in narrow domains with clear constraints. They struggle in open-ended environments where edge cases multiply.
That's the reality behind the hype.
Companies are deploying AI agents because they feel they have to. But they're not letting them run autonomously because they know what happens when complex systems fail without guardrails.
Forward this to someone building AI systems or managing enterprise tech.

