Companies Struggle to Scale Agentic AI as Data Gaps and Governance Hurdles Mount

Key Points
- McKinsey forecasts the agentic AI market could exceed $199 billion by 2034.
- Gartner predicts over 40% of agentic AI projects will be cancelled by 2027.
- Qlik reports 97% of firms have budgeted for agentic AI, but only 18% have fully deployed it.
- Fragmented data and unclear ownership are the top reasons pilots stall.
- Unstructured internal documents add complexity to AI decision‑making.
- Governance questions—data ownership, action approval, human oversight—are becoming urgent.
- The EU AI Act introduces transparency and accountability requirements for AI systems.
- Model Context Protocol (MCP) offers a standard for secure data sharing among AI tools.
- Experts say solid data foundations and clear accountability are prerequisites for scaling.

Investment in agentic AI is soaring, with McKinsey projecting the market to jump from $5‑7 billion in 2024 to over $199 billion by 2034. Yet pilots are faltering: Gartner forecasts more than 40% of projects will be cancelled by 2027, and Qlik reports only 18% of organizations have fully deployed the technology despite 97% allocating budgets. Executives cite fragmented data, unclear ownership and weak governance as the primary roadblocks. Experts warn that without solid data foundations and clear accountability, the promise of AI‑driven business automation will remain out of reach.
Spending on agentic AI is accelerating at a breakneck pace. McKinsey predicts the market, worth roughly $5‑7 billion in 2024, could swell to more than $199 billion by 2034. The surge reflects a shift from generative AI assistants that merely suggest actions to autonomous agents that plan, interpret and execute tasks across enterprise systems.
Despite the hype, many firms are hitting a wall. Gartner estimates that over 40% of agentic AI initiatives will be scrapped by the end of 2027. A separate study by Qlik finds that while 97% of organizations have earmarked funds for the technology, only 18% have moved beyond pilot phases to full deployment. The gap between ambition and reality is widening.
Data Foundations Hold Back Deployment
One recurring theme is data immaturity. Agentic systems rely on a consistent, trustworthy view of information, yet many companies still wrestle with fragmented databases, duplicated records and murky ownership. In such environments, even the most sophisticated models generate outputs that teams cannot rely on.
Unstructured content compounds the problem. Internal emails, knowledge‑base articles and legacy documents often contain valuable context, but they lack clear provenance. When an AI agent draws on such sources, verifying the timeliness or accuracy of the data becomes a near‑impossible task, eroding confidence in automated decisions.
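One lightweight mitigation is to attach minimal provenance metadata to every piece of retrieved context and flag anything outside a freshness window, rather than letting the agent consume it silently. A sketch of that idea (the `SourcedSnippet` structure and the 90‑day policy are illustrative assumptions, not a specific product's schema):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class SourcedSnippet:
    """A piece of retrieved context with minimal provenance metadata."""
    text: str
    source: str               # e.g. a document path or URL (hypothetical)
    retrieved_at: datetime    # when the snippet was captured

def is_stale(snippet: SourcedSnippet, max_age_days: int = 90) -> bool:
    """Flag context older than the policy window instead of trusting it blindly."""
    age = datetime.now(timezone.utc) - snippet.retrieved_at
    return age > timedelta(days=max_age_days)

recent = SourcedSnippet("Q3 pricing policy", "wiki/pricing.md",
                        datetime.now(timezone.utc) - timedelta(days=10))
old = SourcedSnippet("2019 escalation rules", "legacy/handbook.doc",
                     datetime.now(timezone.utc) - timedelta(days=2000))

print(is_stale(recent), is_stale(old))  # → False True
```

Tagging sources this way does not fix bad data, but it lets downstream checks refuse or escalate decisions built on unverifiable context.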
As agents begin to interact directly with operational workflows—triggering supply‑chain adjustments or initiating financial alerts—the margin for error shrinks dramatically. A misstep that a human could review before execution now translates into a potentially costly automated action.
Governance and Interoperability Challenges
Beyond data, accountability looms large. Companies must answer basic questions: Who owns the data feeding the agent? Who approves the actions it takes? When should a human intervene? Clear lines of responsibility are essential not only for trust but also for compliance, especially when AI‑driven decisions affect revenue, regulatory reporting or risk management.
Regulatory frameworks are beginning to shape the conversation. The European Union’s AI Act, for example, sets expectations around transparency, accountability and risk mitigation early in the development cycle. While some view such rules as a brake on innovation, many executives see them as a roadmap for responsible AI deployment.
Another hurdle is the proliferation of disparate AI assistants across organizations. Different teams often adopt varied tools—analytics platforms, internal bots, external services—creating a fragmented ecosystem. For agents to be effective, they need secure, standardized ways to access trusted data and interact with other systems.
Emerging standards such as the Model Context Protocol (MCP) aim to bridge that gap. By exposing data and analytics through consistent interfaces, MCP enables multiple AI tools to share information while preserving access controls and governance safeguards. Companies that adopt such protocols early can avoid costly custom integrations later.
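Concretely, MCP messages are built on JSON-RPC 2.0, and a client discovers what a server exposes through methods such as `tools/list`. The sketch below shows the general shape of that exchange; the tool name, description and exact schema fields are illustrative assumptions, and real field names vary with the protocol version a server implements:

```python
import json

# Client-side request asking an MCP server which tools it exposes.
# The "tools/list" method name follows the protocol's convention.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

# A hypothetical server response advertising one governed data tool.
# Access controls stay on the server side: the client only sees the
# interface, never the underlying database.
list_tools_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_sales_data",  # hypothetical tool name
                "description": "Read-only access to curated sales metrics",
                "inputSchema": {
                    "type": "object",
                    "properties": {"region": {"type": "string"}},
                },
            }
        ]
    },
}

print(json.dumps(list_tools_request))
```

Because every assistant speaks the same discovery and invocation interface, the governance team can enforce access policy once, at the server, instead of re-implementing it for each tool integration.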
Industry leaders agree that success hinges on preparing the underlying infrastructure before scaling beyond pilots. Strengthening data quality, establishing clear governance and embracing interoperability standards are the first steps toward realizing the transformative potential of agentic AI.
Until those foundations are in place, the promise of autonomous, business‑wide AI remains more aspiration than reality.