What No One Tells You About Agentic AI Solutions

The issue is called “drift.” But first, two questions: what is “agentic AI,” and who are you going to believe?

Who should you trust when it comes to artificial intelligence?

The person selling you the solution, or the professional trained to identify risk? The one repeating hype, or the one working with data?

My name is Norma Berríos, and I am an attorney experienced in risk management, governance, and commercial litigation.

What I am about to explain is uncomfortable, but it is grounded in peer-reviewed research, industry reports, and real operational data. I am not selling you an AI system. I am telling you this: before you click, before you sign, listen carefully.

If you have a business, you have likely received (or will soon receive) an email or offer from a technology company promoting an AI solution designed to “help” your operations. You will hear that it can automate tasks and “save you money” over time.

I. What Is Actually Being Offered

What is often being offered in these cases is what is known as an agentic AI system: a system that does not just assist you, but takes actions on your behalf. These systems are commonly known as “AI agents.”

What does that look like in practice?

It can include systems that respond to customer emails without human review, process requests, generate shipping labels, update records across platforms, or trigger actions such as issuing refunds or responding to complaints automatically. Any AI offering marketed with the words “agent,” “agentic,” or “automation” carries the same problems I explain in this article.
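To make the pattern concrete, here is a minimal sketch of what such a pipeline looks like in code. Every function name here is hypothetical and the “model” is a stand-in keyword match; the point is structural: each branch takes a real-world action with no human review between the model’s decision and the side effect.

```python
# Hypothetical sketch of an "agentic" email pipeline. All names are
# illustrative; the key feature is that no branch pauses for a human.

ACTIONS_TAKEN = []  # audit trail of side effects the agent performed


def classify_intent(body: str) -> str:
    # Stand-in for a model call: a keyword match instead of an LLM.
    if "refund" in body.lower():
        return "refund_request"
    if "where is my order" in body.lower():
        return "shipping_inquiry"
    return "other"


def issue_refund(order_id: str) -> None:
    ACTIONS_TAKEN.append(("refund", order_id))  # money moves, nobody approved it


def generate_shipping_label(order_id: str) -> str:
    ACTIONS_TAKEN.append(("label", order_id))
    return f"LABEL-{order_id}"


def send_reply(sender: str, text: str) -> None:
    ACTIONS_TAKEN.append(("reply", sender))  # model-written reply goes out directly


def handle_inbound_email(email: dict) -> str:
    """Classify a customer email and act on it autonomously."""
    intent = classify_intent(email["body"])
    if intent == "refund_request":
        issue_refund(email["order_id"])
        return "refund_issued"
    if intent == "shipping_inquiry":
        send_reply(email["sender"], generate_shipping_label(email["order_id"]))
        return "label_sent"
    send_reply(email["sender"], "Thanks for reaching out.")
    return "replied"
```

Notice that a single misclassification here, say, a complaint read as a refund request, produces an irreversible action, not an error message.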

This article addresses only that type of agentic AI solution. It does not cover AI systems that require human intervention, review, or approval at each step.

In a follow-up article I will address the different types of AI systems and solutions for readers who are new to the topic. But here is why I urgently started this series with this particular subject: right now, the failure rate of agentic AI, which we will discuss below, is high. Time is of the essence, and you need to know this now.

II. The Number That Should Stop You: 40%

Before we get to the data, it is important that you understand that “Agentic AI” refers to artificial intelligence systems designed to act autonomously, as we explained in the previous section.

These agentic AI systems do not merely respond to a prompt; they plan, make sequential decisions, execute multi-step tasks, and interact with external tools, systems, and data, often without a human in the loop at each step. This is categorically different from a chatbot, a large language model, or a generative AI content tool. An agentic system does not just answer questions; it takes actions with real-world consequences on your behalf.

That distinction matters, because the risk profile is entirely different.

According to a 2025 Gartner forecast, more than 40% of agentic AI projects will be canceled by 2027 as organizations confront escalating costs, unclear business value, and inadequate risk controls.[1]

These systems are being implemented at scale under the promise of efficiency, automation, and cost reduction; however, the problem is not simply that they fail.

The problem is how they fail. They fail quietly.

III. The 91% Reality

There is a technical phenomenon consistently overlooked in mainstream discussions of AI agents: model degradation.

A peer-reviewed study published in Scientific Reports, part of the Nature Portfolio, analyzed 128 AI model-dataset pairs and found temporal quality degradation in 91% of cases, documenting multiple degradation patterns including gradual drift, explosive failure, evolving bias, and latent seasonality.[2]

These systems do not crash, and they do not alert you; instead, they gradually become less accurate. Industry monitoring research confirms that this degradation is often invisible without explicit, continuous oversight.[3]
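What “explicit, continuous oversight” means in practice is worth making concrete. Below is a minimal sketch, under assumptions I am supplying for illustration, of a drift monitor: you log whether each agent decision later turned out to be correct, compare a rolling accuracy against the accuracy measured at deployment, and flag the system when it slips. The class name, window size, and tolerance are hypothetical, not recommendations.

```python
# A minimal drift-monitoring sketch. Assumes you can eventually label
# each agent decision as correct or incorrect; thresholds are illustrative.
from collections import deque


class DriftMonitor:
    """Track rolling accuracy and flag silent degradation."""

    def __init__(self, baseline: float, window: int = 100, tolerance: float = 0.05):
        self.baseline = baseline               # accuracy measured at deployment
        self.tolerance = tolerance             # how far below baseline we allow
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def rolling_accuracy(self) -> float:
        if not self.outcomes:
            return 1.0
        return sum(self.outcomes) / len(self.outcomes)

    def drifted(self) -> bool:
        # The system never "crashes"; it just slips below its baseline.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                       # not enough data to judge yet
        return self.rolling_accuracy() < self.baseline - self.tolerance
```

The uncomfortable part is the assumption in the first comment: someone has to label outcomes. If no one in your organization is doing that, drift is, by definition, invisible.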

IV. The Structural Problem

Artificial intelligence is not static; it is dynamic, contextual, and adaptive. When that intelligence is encapsulated in a task-constrained agent without continuous human interaction, it loses access to real-world feedback; errors compound and deviation becomes normalized. Because 91% of evaluated AI models degrade over time, and that drift is often invisible without explicit oversight, the most dangerous failures are not crashes but silent deviations.

V. The Contradiction No One Explains

A 2026 global survey of 919 enterprise leaders by Dynatrace found that 69% of agentic AI-powered decisions are still verified by humans, 87% of organizations are building agents that require human supervision, and only 13% use fully autonomous agents.[4]

This is the contradiction no vendor explains to you: they sell autonomy while their own industry data confirms that the overwhelming majority of deployments still require constant human verification. When a product is marketed as autonomous but the data shows that 87% of organizations cannot operate it without supervision, that is not a feature gap. It is a gap between marketing and reality, and it raises questions a buyer must ask before signing, because what is disclosed or withheld determines whether consent is clear and informed.

VI. Where This Actually Works and Why

Agentic systems do work inside large corporations such as financial institutions and pharmaceutical companies, where they are supported by multi-million-dollar infrastructure, dedicated monitoring teams, and R&D capacity. These organizations also have a governance discipline that smaller firms often lack. Without those safeguards, the same solutions are incomplete and introduce risk that most organizations are not equipped to absorb.

VII. The Value and Liability Problem

A 2025 MIT study analyzing over 300 AI deployments found that 95% of corporate AI pilots delivered no measurable impact on the profit and loss statement, despite tens of billions of dollars in investment.[5] Failure is not only economic; it can involve direct liability exposure when incorrect outputs affect decisions, contracts, compliance, or third parties.

VIII. The Reality Behind the Narrative

When a system requires constant human monitoring to function safely, marketing it as fully autonomous is a characterization of risk that deserves close scrutiny.

IX. The Only Viable Path Forward

The most viable model is structured human interaction with AI systems. Validation, supervision, and context are not optional features; they are the architecture. Until robust infrastructure becomes accessible to organizations outside the Fortune 500, the responsible path is systems that integrate human oversight by design, not as an afterthought.
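What “oversight by design” looks like in code can be sketched simply. In this hypothetical example, which I am supplying for illustration, the agent is free to propose any action, but consequential ones are routed to a human approval queue instead of being executed. The action names and risk tiers are assumptions; the design point is that the gate lives in the architecture, not in a policy document.

```python
# A sketch of human oversight by design: consequential actions are
# queued for review, not executed. Names and tiers are illustrative.

REQUIRES_APPROVAL = {"issue_refund", "sign_contract", "delete_record"}


def perform(action: str, params: dict) -> str:
    # Stand-in for the real side effect (API call, database write, etc.).
    return f"executed:{action}"


def execute(action: str, params: dict, approval_queue: list) -> str:
    """Run low-risk actions; route high-risk actions to a human."""
    if action in REQUIRES_APPROVAL:
        approval_queue.append((action, params))  # a human decides before anything happens
        return "pending_human_approval"
    return perform(action, params)
```

The design choice worth noting: the default is defined by a list of actions, so a new, unclassified high-risk action could slip through; a more conservative architecture would require approval for anything not on an explicit allow list.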

If you are navigating these decisions in your business, you need someone in your corner who understands both the technology and the legal exposure. That conversation starts before you sign.

X. Conclusion

The most dangerous failure in AI is silent deviation.

You are not purchasing efficiency.

You are assuming risk.

The transition to agentic systems isn't just a technical shift; it's a relational and legal one. If you’re currently navigating these complexities or weighing your options before you sign, I’m happy to exchange notes. Feel free to message me to start a conversation about how to align your AI strategy with actual risk controls.

Sources

[1] Gartner. (2025). Agentic AI projects: Risk and cancellation forecast through 2027. Gartner Research. Corroborated by Reuters, June 25, 2025.