AI is turning projects into distributed workflows that often don’t look like conventional projects at all. As a result, CIOs are anxiously searching for new ways to track, govern and sign off on work before risk and fragmentation can set in.
The issues arise immediately. Unlike traditional projects, AI initiatives don’t necessarily begin in IT, said Jen Clark, director of AI advisory services at Eisner Advisory Group. “They start within the business whenever someone finds or builds a tool that solves a problem,” she said. This leaves CIOs without clear visibility from Day 1. And unfortunately, the process for scaling rollouts hasn’t evolved to match the speed, coverage and capability of these tools.
There also isn’t the same obvious accountability for project management. In the old days, anything you wanted to know about a project eventually came down to finding the right person to ask, said David White, field CTO for startups at Google. “Any task, any action, any decision could ultimately be traced to an individual who could then be queried about what happened, what the status is and how they got there,” he said.
Tracking down AI is fundamentally more difficult, especially when you have agents that may scale up and down and are somewhat ephemeral, White said. He noted that the agent that made the decision may not even exist anymore. “So how do you ask it how it came to a certain decision?” He advised that organizations plan from the outset how they will leverage AI, how they will engage it and what kind of visibility and tracking will be needed.
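White’s point implies that decision records need to outlive the agents that produce them. A minimal sketch of that idea, using only the Python standard library: a hypothetical append-only `AgentAuditLog` (the class name and fields are illustrative, not from any named product) that persists each decision with enough context to answer “how did it get here?” after the agent itself is gone.

```python
import time
import uuid

# Hypothetical append-only audit log: each agent decision is written out
# with its inputs and rationale, so the record survives the agent itself.
class AgentAuditLog:
    def __init__(self):
        self.records = []  # in production this would be durable, external storage

    def record_decision(self, agent_id, action, inputs, rationale):
        entry = {
            "decision_id": str(uuid.uuid4()),
            "agent_id": agent_id,   # may refer to an agent that no longer exists
            "timestamp": time.time(),
            "action": action,       # what the agent did
            "inputs": inputs,       # what the agent saw
            "rationale": rationale, # why it chose this action
        }
        self.records.append(entry)
        return entry["decision_id"]

    def trace(self, decision_id):
        # Answer "how did it come to this decision?" without querying the agent.
        return next(r for r in self.records if r["decision_id"] == decision_id)

log = AgentAuditLog()
decision_id = log.record_decision(
    agent_id="invoice-agent-42",
    action="flag_invoice",
    inputs={"invoice": "INV-1009", "amount": 12000},
    rationale="Amount exceeds approval threshold of 10000",
)
print(log.trace(decision_id)["rationale"])
```

The design choice that matters here is that the log, not the agent, is the system of record: an auditor queries `trace()` regardless of whether the agent instance still exists.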
Challenge and opportunity
Every function is now embedding AI into workflows through tools such as Copilot, ChatGPT and Claude, according to Clark. “Yet these platforms come with very few built-in controls,” she said. “If you have a license, you essentially have everything, up to the ability to build agents.” This means employees throughout the organization can deploy AI in new ways, without the necessary oversight of IT.
This creative application of AI also extends to the method in which it’s applied: iteratively, not linearly. Traditional projects have a start, a middle and an end, but AI deployment doesn’t work like that, said Peter-Paul Schreuder, CIO at enterprise asset management firm Ultimo.
“You’re dealing with continuous learning, iterative refinement and outputs that change over time, even when nothing in the codebase has changed,” he explained. Such challenges make conventional project tracking — milestones, delivery dates, sign-off gates — a poor fit. “Leaders end up measuring the wrong things and missing what actually matters,” Schreuder said.
Upstream success creates downstream strain, Clark warned. “As teams get more fluent in AI, pressure accumulates in legal, compliance, security and engineering/IT areas.” CIOs often miss this threat because they’re still positioned as builders and approvers rather than as the final validation and hardening layer. “By the time something surfaces, it’s already become a problem,” she said.
Control versus innovation
Enterprises have been trying to increase employee adoption of AI in order to boost productivity and innovation, but this can come with risks if there isn’t clear governance in place. The challenge for CIOs is balancing freedom and experimentation with appropriate guardrails.
Sam Nazari, chief AI architect at Amentum, a technology, engineering and government services contractor, said AI governance should focus on enabling grassroots innovation rather than controlling it. He noted that heavy-handed governance risks stifling organic energy and problem-solving from the ground up.
“The role of governance is to ride alongside those team members working with AI rather than obstructing or micromanaging,” Nazari said. “This approach fosters enthusiasm, creativity and innovation while maintaining oversight.”
Even a light touch must be applied thoughtfully, however. Governance must be taken seriously, advised Aimen Hallou, CTO at Floxy, a web intelligence solutions developer. “It’s important to have version control not just for the code, but also for your data set, retraining process and output data,” he said. “Without proper governance, you’ll lack traceability, therefore making your project vulnerable from a regulatory point of view.”
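Hallou’s point about versioning the data set and retraining process, not just the code, can be made concrete. A minimal sketch using only the standard library (the function names and manifest fields are illustrative assumptions, not from any specific tool): content-hash the training data and record the retraining parameters alongside the code version, so any model output can later be traced to the exact data and configuration that produced it.

```python
import hashlib
import json

def dataset_fingerprint(rows):
    # Content-hash the training data so two runs on identical data
    # produce identical fingerprints.
    h = hashlib.sha256()
    for row in rows:
        h.update(json.dumps(row, sort_keys=True).encode())
    return h.hexdigest()

def build_manifest(code_version, rows, training_params):
    # One record per training run: enough to reproduce or audit it later.
    return {
        "code_version": code_version,        # e.g. a git commit hash
        "data_fingerprint": dataset_fingerprint(rows),
        "training_params": training_params,  # the retraining process inputs
    }

rows = [{"text": "invoice overdue", "label": 1}]
manifest = build_manifest("abc123", rows, {"epochs": 3, "lr": 0.001})
# A later "what data trained this model?" question is answered by
# comparing fingerprints rather than relying on memory.
assert manifest["data_fingerprint"] == dataset_fingerprint(rows)
```

Extending the same manifest to include a hash of the model’s outputs gives the traceability Hallou describes end to end: code, data, process and results all versioned together.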
Schreuder said the most common failure point is the gap between deployment and adoption. “CIOs can see the deployment — it’s a project, it has a go-live date,” he said. What they can’t see is whether people are actually using the system, whether the outputs are trusted and whether the AI is improving or quietly degrading. “That gap is where value leaks out, because it’s invisible in standard reporting and often doesn’t surface until a business leader complains, by which point months of value have already been lost,” Schreuder added.
Final thoughts
The role of IT has changed when it comes to enterprise AI projects. The organizations with successful AI initiatives have stopped asking IT to invent and started asking them to protect, validate and scale, Clark said. She said it’s the business teams who should create first, operating within preapproved guardrails. Engineering and IT teams should enter later — not to approve the idea, but to harden it for production. “Nothing should go live without passing through that gate,” she said.
Similarly, the CIO’s role is also evolving, from a delivery focus toward stewardship, Schreuder said. “Stewardship in this context has specific responsibilities attached,” he explained. “Model and data governance, lifecycle management, auditability — these aren’t abstract concepts, they’re operational requirements.
“CIOs need to be able to demonstrate not just that AI is deployed, but that it’s being governed responsibly and that its behavior can be explained and examined,” Schreuder added. “The CIOs who will thrive are those who stop thinking about AI as an IT project and start thinking about it as a permanent, accountable part of the organization’s operating model.”