Despite the billions that enterprises are pouring into GenAI projects, 95% of companies get no measurable return. That statistic, from an MIT NANDA report released in July 2025, caused quite a stir among AI evangelists, skeptics and detractors alike. Eight months later — a long time in the world of AI — companies are still figuring out how to move the needle on AI use cases.
But the fact remains: scaling AI is hard. Vendors don’t always deliver on the expected outcomes. Ramping up AI is expensive. The underlying data quality remains a frequent stumbling block. End users have to adopt the AI tools into their workflows. There are plenty of reasons an exciting AI use case can fail to take hold and scale.
But the time for unfettered experimentation with AI is coming to a close. Enterprise leaders and investors are expecting CIOs to implement AI use cases, scale them and deliver measurable return.
InformationWeek spoke to three CIOs in different industries about AI projects they are scaling at their organizations to understand what’s working — and what isn’t:
- Sean McCormack, CIO at First Student, a North American private provider of student transportation that operates 47,000 vehicles responsible for completing millions of student trips each day. The school transportation company deployed Halo, an integrated AI platform for vehicle tracking, safety, communication and payroll.
- Brian Schaeffer, CIO at OceanFirst Bank, a $14.6 billion regional bank based in New Jersey, with business and retail customers along the East Coast. The bank, which has 150 employees using Microsoft Copilot, aims to use AI to enhance its Bank Secrecy Act (BSA) and Anti-Money Laundering (AML) due diligence work.
- Padma Sastry, CIO at Lowell Community Health Center in Massachusetts, which rolled out an AI-powered voice system to support its patient call center. It fields thousands of inbound patient inquiries per day.
What worked: 5 practices for scaling AI
McCormack, Schaeffer and Sastry operate in different industries and are pursuing different AI use cases, but their approaches to scaling AI share common traits. These five practices helped move their efforts beyond pilots.
1. Identify a workable use case
First Student hired McCormack as its CIO to drive transformation. He has done it before, at Harley-Davidson and Grainger. When he joined First Student, he and the CEO did a technology walkthrough.
“We met with every team and said, ‘Hey, show us what processes you have today. And then how’s technology helping or hurting you?'” McCormack said. “It was really informative because it gave me a view of the entire business.”
The walkthrough helped McCormack identify real pain points for First Student. He saw what the company’s drivers and dispatchers do on a daily basis, knowledge that served as the basis for developing Halo.
“The focus was really to create a modern end-to-end transportation solution to tie everything together — that took everything from the time we get a customer in a contract, all the way through to how we plan the routes, how do we do the dispatching, the day-to-day inspections, payroll, recruiting, everything, bring it all to a single platform,” McCormack said.
OceanFirst’s primary focus was customer due diligence required by BSA and AML regulations, a core function at financial institutions. The bank spends significant time on that, particularly for business customers.
“A business could have 20 relationships to it, and you’ve got to check every single entity on that list. It could take a whole day just to look up one instance, and we get a few hundred of these a month,” Schaeffer said.
“The use case for AI really started zooming in because all you need to do is look at all that detail and analyze it and summarize it and see if there’s any problems,” he added. A search that could take half a day could now be done in minutes.
Lowell Community Health Center manages a complex patient population. More than half of its patients speak a language other than English, and nearly 90% of patients have income below 200% of the federal poverty level, according to Sastry. The call center is a major touchpoint for patients, and it is a challenging workflow to manage. Sastry said she saw the opportunity to evaluate AI vendors for call triage and language support.
“As community health centers, we’ve always been on the front lines of trying to innovate based on the need, as opposed to ‘let’s try and do something cool,'” she said. “That’s a luxury we don’t usually have.”
2. Small and steady wins the race
Each of these three CIOs took a measured approach to launching and ramping up their AI use cases.
“For any future AI initiatives, what I’ve learned is that you always have to start small. You have to have a very contained pilot before expansion,” said Lowell Community Health Center’s Sastry.
OceanFirst’s Schaeffer likened the foundational work that has to be done before enterprises realize any gains from AI to a layer cake. “We knew that layer cake existed,” he said. “We didn’t realize that the cake is bigger than we thought.”
The bank has spent months working on its particular use case, testing it and building a production-ready version that it plans to push out in the second quarter.
“The good news is once you start getting traction, though, it accelerates. You start building off the successes faster because you have a stronger foundation,” Schaeffer said.
Sastry knew that introducing AI into the Lowell Community Health Center call center had to be done slowly to ensure that it did not negatively affect patient experience. First, team members deployed the AI operator after hours. Once they were confident it was working as intended, they began to turn it on during business hours an hour at a time. “After that, we collectively decided it was great to scale it to be 24/7,” Sastry said.
The Halo platform at First Student took about two years from inception to enterprise rollout, McCormack said. He applied the same measured approach to design, implementation and rollout that worked in his previous roles. It began with ideation and A/B testing. He and his team put together clickable prototypes and put them in front of actual users to get feedback. Then, the project moved to development, piloting and, ultimately, enterprise rollout.
3. Picking the right vendor
For enterprises working with outside help, choosing the right vendor is a big part of successfully applying and scaling AI use. Sastry, for example, knew she needed to find a vendor that understood the very specific needs and challenges of a federally qualified health center.
“In my conversations with Attuned, I was very transparent and honest with them to say, ‘Hey, we don’t have the bandwidth to sign on a dotted line on day one. I need to be able to test this out and evaluate the ROI and look at how it fits into the grand scheme of things,'” she said.
CIOs need to define criteria for potential AI vendors and evaluate their options to find one that will partner with them to achieve the goals of a specific AI use case.
4. Tracking success and failure
CIOs need mechanisms in place to track the performance of a particular AI use case before they attempt to scale, during the ramp-up process and on an ongoing basis. All three CIOs agreed it makes little sense to kick off a major project without knowing whether it actually delivers the expected outcomes.
For the Halo platform, McCormack and his team have defined metrics to track outcomes that matter for First Student. AI safety cameras that provide alerts for safe driving, for example, are a part of Halo.
“Are people rolling through stop signs? Are they not wearing their seatbelts? Are they distracted? There are so many things that we can track, and we’re able to show in all the pilots that we had measurable improvements,” he said.
OceanFirst builds on Microsoft’s stack; the bank uses a Power BI dashboard to track AI utilization. “We’re seeing how many times you will click on it, how many times something gets resolved, how accurate we think the answers are. And we take those and we re-measure and tune up what we’re doing,” Schaeffer said.
For Lowell Community Health Center, cost and patient experience are two of the most important metrics. After-hours calls go to a paid third-party answering service. Sastry and her team are watching how AI triages calls and reduces the number that need to go to that paid service.
The health center is also tracking the abandonment rate. With such a high volume of calls, how many patients hang up before they get their questions answered? Since rolling out the frontline voice AI, abandonment has decreased.
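The abandonment rate Sastry’s team watches is, at its core, a simple ratio: calls where the patient hung up unanswered divided by total inbound calls. As a rough illustration only — the record fields and sample data below are hypothetical, not the health center’s actual schema — it might be computed like this:

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    # Hypothetical fields; a real call-center system would log far more.
    answered: bool  # did a human agent or the AI operator pick up?
    resolved: bool  # was the caller's question answered?

def abandonment_rate(calls: list[CallRecord]) -> float:
    """Share of inbound calls abandoned before anyone picked up."""
    if not calls:
        return 0.0
    abandoned = sum(1 for c in calls if not c.answered)
    return abandoned / len(calls)

# Illustrative sample: 1 of 4 callers hung up before being answered.
calls = [
    CallRecord(answered=True, resolved=True),
    CallRecord(answered=False, resolved=False),
    CallRecord(answered=True, resolved=False),
    CallRecord(answered=True, resolved=True),
]
print(f"Abandonment rate: {abandonment_rate(calls):.0%}")  # Abandonment rate: 25%
```

Tracking the same ratio before and after a rollout, as the health center did, is what turns an anecdote (“fewer people seem to hang up”) into a measurable outcome.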
5. Fail fast
CIOs don’t want to find themselves mired in an AI pilot graveyard, but not every AI project will be successful. Rather than attaching themselves to a sinking ship, McCormack suggested identifying a handful of ideas, testing them quickly and identifying those that have potential to provide value.
“Fail fast. That’s not a bad thing,” he said. “If you’re doing it the right way, you want the design thinking, you want the end-user engagements. You want to have those metrics so that you don’t go down a path where you invest six to 12 months’ worth of development and roll something out that falls flat.”
Failure can be an unpleasant prospect for CIOs on the clock to deliver measurable ROI from AI. They can avoid some potential misfires by talking to their industry peers. Schaeffer talks to teams at other banks about what is and isn’t working with their AI efforts. “Nothing’s better than talking to somebody who’s been through it, lived it and tells you what to avoid,” he said.
What doesn’t work? 3 things to avoid
Just as important as what works is what doesn’t. These CIOs pointed to common missteps that can stall AI efforts before they get off the ground.
Enthusiasm without purpose
The AI hype is real. CIOs are inundated with pitches from vendors, pressure from peers and ideas from employees. While excitement isn’t inherently a bad thing, it shouldn’t be the guiding principle.
“Don’t scale AI because it’s exciting. Scale it because it measurably solves a defined operational problem,” Sastry said.
Excitement without a concrete use case and the underlying work required to set that use case up for success will almost certainly end in disappointment.
At OceanFirst, Schaeffer learned some early lessons about leaping into the AI space. “Our initial foray into AI was generative, in chatbots. And we had limited success there,” he said. “We had done a chatbot for HR policies: A ‘How many days off do I get?’ kind of thing. It wasn’t the ‘Ah ha, wow’ moment that we would hope for.”
Forgetting the end user
No matter how thrilling an AI use case looks on paper, it will fail if the intended end users don’t adopt it. That is why communicating with those end users and involving them in the testing process for new AI tools is invaluable.
At First Student, drivers start their day by getting their assignments and inspecting their buses. They take photos of any issues, which then go to the maintenance team. Putting tablets in their hands to digitize that entire process — the ultimate goal of the Halo platform — seemed like a great idea.
“One of the things we didn’t think of is a lot of these drivers are coming in at 4 a.m. in the morning doing their inspections. The tablets that we gave them didn’t have flashlights,” McCormack said. “That’s the value of doing the proof of concept and the on-site testing to really make sure you’ve got it right before you do a large-scale rollout.”
In addition to ensuring the use case actually benefits end users, CIOs need to be prepared to tackle change management. People are often resistant to change, and AI can spark a particularly emotional reaction considering how much conversation there is about its ability to replace human workers. If CIOs ignore the end users’ reaction to a new AI tool or program, they risk poor adoption.
First Student has an entire change management program to ensure workers know how the technology works and how it can work for them. “We are very proactive … helping them understand what’s coming, creating customized training based on role, doing a lot of on-site, white glove type treatment,” McCormack said.
Waiting to tackle data and AI governance
Scaling AI without having governance in place is a classic example of putting the cart before the horse. There’s a reason you constantly hear “garbage in, garbage out” in the AI space. Data needs to be organized, managed and high quality before it can power any truly valuable AI usage.
“Some of our early chatbots … fell on their face because we didn’t think about the data as much as we should [have],” Schaeffer said. Since then, the data foundation has become a priority for the bank.
The AI space is moving so quickly, and the urge to jump in, try things out, and figure out governance later is a strong one. But trying to scale without it opens the door to risks: the risk of an unsuccessful project, security risks and regulatory risks.
“Build the governance early on, before the expansion, define the ROIs before the launch and embed that into the workflows with the human in the loop,” Sastry said. “That is how AI becomes more of a tool, a technology, an infrastructure, rather than just something cool to do.”
The ongoing battle of scale
Once CIOs establish a foundation for AI in their enterprises, they can begin to build on their success. But that doesn’t mean scale will suddenly become easy. The underlying governance remains essential. CIOs have to think about shifting regulatory guidelines, security and the speed at which the technology is changing. CIOs need to consistently iterate, learn from their mistakes, and manage both the technology and human elements of ramping up AI use in their organizations.
“Don’t get discouraged. There will be battle bruises, especially when you’re trying to roll out something new,” Sastry said.