Much has been written about the high failure rates for AI projects. In an increasingly agile world, CIOs and their organizations naturally want to embrace the mindset captured in the book title “Fail Fast, Learn Faster” — in other words, move quickly, experiment and learn along the way.
But too many organizations rush into AI without the fundamentals in place.
Before launching any AI initiative, CIOs need to act like experienced mountain climbers: establish a solid base camp with their business counterparts, align on the critical business problems to solve and opportunities to pursue, and prepare their organizations for the climb ahead.
The reason is straightforward: Achieving value from AI (like any major initiative) requires discipline — not just speed. That discipline shows up as having a clear strategy tied to explicit business outcomes, with success criteria, governance and compliance defined from the start. From here, prioritization is essential. There will always be more AI use cases than resources, so CIOs must focus on the initiatives most likely to deliver measurable business impact — especially as software pricing increasingly ties to a share of cost savings and labor replacement.
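One way to make that prioritization concrete is a simple weighted scorecard. The sketch below is purely illustrative — the criteria, weights, and use-case names are assumptions for demonstration, not a standard model — but it shows how a CIO's office might rank candidate initiatives on business impact, data readiness, and path to scale while penalizing risk.

```python
# Hypothetical sketch: ranking candidate AI use cases by weighted criteria.
# The criteria, weights, and example use cases below are illustrative
# assumptions, not an established scoring framework.

def score_use_case(impact, data_readiness, path_to_scale, risk):
    """Weighted score (higher is better). Inputs are 1-5 ratings."""
    return 0.4 * impact + 0.25 * data_readiness + 0.25 * path_to_scale - 0.1 * risk

# Example portfolio of candidate initiatives (hypothetical names and ratings).
candidates = {
    "invoice triage agent": score_use_case(5, 4, 4, 2),
    "marketing copy assistant": score_use_case(2, 5, 3, 1),
    "autonomous pricing agent": score_use_case(5, 2, 2, 5),
}

# Rank highest-scoring use cases first.
ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```

The point is not the arithmetic but the discipline: every candidate gets scored against the same explicit criteria, so the portfolio conversation shifts from advocacy to evidence.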
Just as important, CIOs need to avoid the endless pilot trap by ensuring selected AI projects have credible paths to scale. Otherwise, pilots pile up without connecting to real work.
Once this groundwork is in place, organizations can move into pilots with calculated risk — using them not only to test technology, but also to rethink business capabilities and processes and, occasionally, as futurist Linda Yates suggests, “unleash the unicorn within.”
What actually separates pilots from production?
Let’s dig into the anatomy of project success and then the causes of high project failure rates.
In our research at Dresner Advisory Services, I found three qualities that differentiate projects that have moved from pilots to production.
- Success with business intelligence (BI). This means an organization’s data is industrialized — i.e., consistent, governed and usable at scale — so it is AI-ready.
- Success with data science and machine learning. This means optimization models already exist for more complex agentic AI and, even more important, that the organization already groks AI, so less organizational learning is needed to sell AI’s value or cost to the organization.
- A data leader exists. A senior data leader with strong business relationships is in place, which means co-creating an AI future is easier and the right AI projects for the business receive prioritization.
These weren’t nice-to-haves. They determined whether projects scaled.
Given this background, I wanted to hear from a major consultant that helps businesses day in and day out with their AI implementations — what are they seeing as they work with clients? Vamsi Duvvuri is Ernst & Young’s AI and data leader. Duvvuri argued that “AI projects fail when speed outpaces structure,” pointing to findings from the firm’s latest EY Technology Pulse Poll, which surveyed 500 U.S. business leaders working in the tech industry:
- 85% of respondents prioritize speed-to-market over extensive vetting of AI.
- 52% of respondents reported that department-level AI initiatives are conducted without formal oversight.
- 78% say adoption is outpacing their ability to manage risk.
This is scary, and reminds me of what CIOs were trying to avoid several years ago — shadow IT that wasn’t vetted, integrated or protected. The difference now is that AI embeds those risks directly into workflows and spreads them faster.
Even worse, the problem extends beyond project prioritization and selection, according to Duvvuri. He said that in practice, projects often slow down because of weak governance, unclear ownership, poor data and numerous disconnected pilots. “The result isn’t failed ambition, it’s stalled value,” he said. “For example, a company launches multiple AI pilots to help analysts work faster, but analysts still reconcile data, manage complexity and noise, and stitch together decisions between those multiple pilot projects. Value shows up briefly, then eventually plateaus.”
This circles back nicely to the three qualities identified at the beginning of this section.
Why more pilots didn’t create more value
Our Dresner data shows that 15% of organizations are in production with agentic AI and 34% are in production with some form of generative AI-based solutions. Our expectation is that this 34% are organizations that meet the three success criteria above — BI maturity, AI and machine learning skills, and a strong data leader.
Meanwhile, 34% of organizations are experimenting with agentic AI, and 53% said they are experimenting with generative AI. That these numbers aren’t closer is surprising, but it implies that IT organizations can roll out a tactical generative AI solution without fixing underlying data and governance, and without deliberating on business priorities.
Given this, a question remains: how do organizations create space for pilots that deliver strategic, measurable, production value?
Clearly, responsible AI must be designed into operations. Professor Pedro Amorim advised that CIOs run a venture-style portfolio: funding many small, time-boxed bets, learning quickly, and doubling down on the winners with a clear path to industrialization.
He added that at the same time, organizations need “basic guardrails in place early (data classification, privacy/IP rules, human-in-the-loop for sensitive decisions, evaluation benchmarks, and explicit no-go criteria), and must make sure there’s budget at the front of the funnel, so you’re not forced into one or two big bets.”
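Those guardrails can be enforced mechanically before any pilot is promoted. The sketch below is a minimal, hypothetical gate — the checklist mirrors the guardrails named in the quote above, but the specific flag names and the dict-based pilot description are assumptions for illustration.

```python
# Hypothetical sketch: a promotion gate that blocks pilots missing basic
# guardrails. The checklist follows the guardrails quoted above; the flag
# names and pilot representation are illustrative assumptions.

REQUIRED_GUARDRAILS = {
    "data_classification",
    "privacy_ip_rules",
    "human_in_the_loop",
    "evaluation_benchmarks",
    "no_go_criteria",
}

def can_promote(pilot: dict) -> tuple:
    """Return (ok, missing) for a pilot described as a dict of guardrail flags."""
    missing = {g for g in REQUIRED_GUARDRAILS if not pilot.get(g, False)}
    return (not missing, missing)

# Example pilot: sensitive decisions are still fully automated,
# so the human-in-the-loop guardrail is not yet satisfied.
pilot = {
    "data_classification": True,
    "privacy_ip_rules": True,
    "human_in_the_loop": False,
    "evaluation_benchmarks": True,
    "no_go_criteria": True,
}
ok, missing = can_promote(pilot)
print("promote" if ok else f"blocked, missing: {sorted(missing)}")
```

A gate like this makes the “explicit no-go criteria” part of the pipeline rather than a slide in a governance deck: a pilot with a missing guardrail simply cannot advance.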
So, smart experimentation includes strong data integrity, embedded cybersecurity and ongoing monitoring for issues like bias and model drift.
Trust is what makes AI sustainable. Transparency, governance, training and clear human oversight are essential so employees understand how AI works and where human judgment still matters.
“Smart experimentation means deciding where complexity should live. It is the CIO’s role to ensure agents absorb variability and orchestration, while humans retain judgment and critical decision‑making,” Duvvuri said.
In practice, that requires fewer, more disciplined experiments — anchored to real workflows, not isolated tasks. This matters because organizations do need to move quickly. But speed without control amplifies breakdowns. For this reason, Duvvuri emphasized that “the issue is control, not momentum.”
Instead of piloting AI to “assist” customer service reps, he said, a CIO should sponsor an experiment where agents handle triage, resolution and routing cases end‑to‑end, then escalate to humans only for exceptions, policy judgment and customer empathy.
Successful pilots prove not just accuracy, but operability. “Smart experimentation requires an AI-native approach to software delivery,” he said.
Account for risk from Day 1
Our research at Dresner shows that the major risks that CIOs and data leaders are worried about include the following:
- Data security/privacy concerns.
- Quality/accuracy of responses.
- Potential for unintended consequences.
- Legal and regulatory compliance.
So how do smart organizations anticipate, assess and mitigate AI risks from the start?
The organizations that thrive have a CIO who brings people together across the organization to co-create needed guardrails. It is critical to remember that minimizing risk isn’t about slowing innovation. It’s about alignment and shared purpose.
For this reason, Duvvuri argued that risk must be designed in from Day 1. “Because AI accelerates action, unmanaged usage creates exposure,” he said, pointing to EY data showing that 45% of technology leaders report a confirmed or suspected sensitive data leak tied to unauthorized generative AI use, and 39% report IP leakage.
That’s not a tooling problem — it’s a design failure.
CIOs need to standardize approved platforms, embed controls directly into workflows, and clearly define where agents act autonomously versus where humans must intervene, he said. Done right, governance becomes a scale enabler, not a brake on innovation.
Duvvuri suggested that CIOs establish approved AI tools, real‑time monitoring for data and IP risk, and clear authority to halt noncompliant deployments.
“Teams will move faster because safe behavior is built into the system, not enforced after the fact. As intelligence becomes cheaper and more available, enterprises don’t get simpler by default. The winners deliberately shift complexity from humans to machines, while keeping judgment, trust and accountability firmly with people,” he said.
Agile with discipline: Build the foundation first
CIOs should apply agile principles to AI — but not without discipline. Organizations need a clear strategy tied to explicit business outcomes, with success criteria, governance, and compliance defined from the outset. Data maturity and well-defined guardrails are essential. This foundation enables smarter experimentation while accounting for risk from the start. More mature organizations have a head start because they’ve already addressed many of these challenges. For CIOs in less mature environments, the priority is clear: invest in the processes and data capabilities needed to generate early wins — then refine, scale, and industrialize data and business processes.

