
CIOs on the red flags


Not all AI projects will be winners.

So CIOs must apply “fail fast” principles to their AI initiatives, deciding as quickly as possible when a promising idea is just not going to pan out.

That’s easier said than done. The MIT report “State of AI in Business 2025” found that 95% of 153 senior leaders surveyed “are getting zero return.”

To understand how CIOs decide when to stop an AI project, we asked two IT leaders: What is the specific red flag that tells you an AI pilot has become a sunk cost and needs to be killed? Both identified clear, telltale signs that a project is off track.

  • Soo-Jin Behrstock, chief information technology officer at Great Day Improvements, a direct-to-consumer home remodeling company, said missed milestones are a warning sign to pivot, and that careful upfront planning has all but eliminated the need for her to kill an AI project outright.

  • Ed Clark, CIO of California State University, which serves nearly 500,000 students, said stalled progress and weak adoption are clear indicators that a project is foundering — and that it’s crucial to watch for those signs so leaders can redeploy resources to more promising efforts.


Below are Behrstock's and Clark's responses to our question, edited for clarity and length.

Soo-Jin Behrstock, chief information technology officer, Great Day Improvements

Behrstock: ‘Start with: What does success look like?’

“When we take on AI initiatives, I always start with: What does success look like, and how are we going to measure it?

“For example, if we are using AI for sales or marketing predictions, we start with a small sample of data that we know really well. Based on that, we have a good sense of what the output should look like. If the output is not directionally right, then that usually tells us something is off — it could be the data, the process or the model.

“From there, we set short milestones, usually every couple of weeks, to see if we are getting closer to the outcome we defined with measurable results.

“If we are not [getting closer to the outcome], then we pivot or defer. I do not believe in pushing AI forward just for the sake of saying we are doing AI. If success is not clearly defined or we cannot measure progress against it, that is a red flag.

“I don’t know about killing [an AI initiative] unless you determine it’s not aligned with the business.”

 

‘Pivot to get to success’

“One thing I do find is that some developers get into analysis paralysis over how the AI should work. That extends the timeline and the budget. But when the incremental milestones aren’t being met, you need to ask what needs to change to get to success.


“Let me give an example: Right now, we’re working on using AI predictive modeling. We’re taking a small sample of data that we’re really familiar with, and we’re measuring what the output is, so we can say, ‘Here’s what good really looks like. This works.’ Then we’re adding more data into it, so we can measure whether our model is working correctly or whether we need to pivot.

“In such cases [where we need to pivot], it could be that we don’t have the right resources or skills, so we may need to partner with consulting companies to help us.”

The value of being ‘very intentional’

“I haven’t had to be in a position to say, ‘Let’s kill it.’ But I could see doing that if what we thought would make sense for the business, we later determine doesn’t. But I haven’t been in that situation yet because everything’s been very intentional. I’m really careful to set expectations upfront and define success. And so if we’re not hitting milestones, it’s usually [because of issues] around data and process, so we determine where we need to adjust and we just pivot.”

Ed Clark, CIO, California State University System

Clark: A list of red flags

“In my mind, that red flag is when the pilot no longer has a clear path to create strategic value for your organization.


“Another red flag is when the team gets stuck in a loop, when they come back with the same status updates and you’re seeing no progress, when you see the same slides, the same hurdles, when you hear, ‘We’re almost there’ and nothing is happening, and there are no deliverables. Then you know this thing is stuck.

“Another thing to look for is when adoption is weak, when you’ve rolled out something that everyone said, ‘Oh, this is going to be so cool,’ but then no one uses it.

“Also, if the executive sponsorship disappears, that’s another thing I look for.

“And another signal that’s really important — and this happens all the time — is when vendors are making a core capability for their platform [that’s similar to the AI project you’re developing]. We’re not in the business of competing with these vendors.

“And then the last thing — and this happens especially in artificial intelligence — is when the original use case that you’re all excited about is just kind of obsolete because the technology moves so fast.

“Any of those could be red flags.”

Finding the reasons behind the red flags

“You have to ask why some projects end up with red flags.

“It could be that what’s being asked for is beyond what your team is able to accomplish. Then you have to figure out which case you’re in: whether [the AI project] is an idea good enough to pursue, where I want to chase it down and maybe bring in external resources to get it done; whether it’s a pilot where it’s OK for your team to just practice and learn; or whether the executive who was excited about it but won’t meet with us about it really doesn’t care about it anymore, so you shouldn’t be pursuing it.

“All the money and effort you’re spending could be going toward something else that could achieve the aims of the organization.”

The pilot that failed to take hold

“I can tell you a specific example: One of the things that we’re constantly looking at is affordability. And we thought we could make open textbooks — these free textbooks — more accessible to students by creating an AI overlay that [functions as] a tutor.

“So we tried to pilot this thing, but there was no adoption. It was frustrating because we saw a way to make open textbooks more useful for our students by adding this support system.

“It turns out that faculty in general don’t like open textbooks, because they don’t come with the teaching resources they want. And so even though it was a wonderful idea that would help serve our mission and advance our strategic goals — and that executives originally thought would be great — we had to kill the idea.”

What the team learned from killing the project 

“It did hurt to make that call, because I think the amount our students [collectively] spend on textbooks every year is in the hundreds of millions of dollars. But we learned a lot working on that project. Like, if we’re really going to do this, we need to make sure it’s multilingual and that it can handle mathematical symbols. We learned things that are going to be useful for our community that can be applied elsewhere.”


