Outsourcing providers often promise 40% to 70% productivity gains from AI-enabled services. The reality, according to a recent Morgan Lewis and Boston Consulting Group roundtable, is “often more challenging,” requiring operating model redesigns that most contracts weren’t built to accommodate.
For CIOs, that gap between promise and delivery is forcing a fundamental rethinking of outsourcing strategy. Contracts structured around headcounts and hourly rates don’t account for AI-driven efficiency — or the new risks that come with it.
As providers embed AI into service delivery, technology leaders are revisiting deal structures, rewriting governance terms, and in some cases, bringing work back in-house. The question isn’t whether AI will reshape outsourcing; it’s who captures the value of AI and who’s on the hook when it fails.
The end of the FTE model
The traditional outsourcing model — paying IT service providers by the full-time equivalent (FTE) — is increasingly misaligned with how AI-enabled work actually gets done.
“We have to move to pay-per-outcome,” said Eduard de Vries Sands, a former CIO and currently an AI executive advisor to digital health provider PatientPoint. “The FTE model incentivizes bad behavior. If you pay by the FTE, why would your provider use AI? That would reduce their revenue and margin.”
The shift looks a bit different from the provider side. AI is automating routine tasks and handling tier-1 work, making outsourced teams more efficient than ever, said Chandra Venkataramani, CIO at business process outsourcing firm TaskUs. To avoid cannibalizing their own revenue, many outsourcing firms are transitioning to outcome-based pricing.
“[It] offers a happy medium, where providers can still generate revenue while their clients enjoy a lower total cost of ownership,” Venkataramani said. But the transition isn’t seamless; clients and providers are still working to determine the fair value of AI-enriched services.
Suppliers are adapting in other ways, too. Gordon Wong, senior partner and operations excellence practice lead at business and technology consultancy West Monroe, said providers are more willing to front-load productivity commitments, betting on themselves to exceed them. “They’re also more open to reopening the contract and coming back to the negotiating table should there be material changes in how services are delivered,” he added.
Some providers are also pushing for longer contract terms — five, seven, even ten years — to recoup their AI investments, said Brad Peterson, a partner at law firm Mayer Brown who advises on outsourcing deals. That puts pressure on CIOs to lock in protections upfront, before the deal economics shift.
Outsourcing contracts must be rewritten for AI
As AI becomes central to service delivery, standard outsourcing agreements often fall short.
Five contract areas need updating, explained Tripp Lake, a member at law firm Dickinson Wright:
- AI tool disclosure so buyers know what’s running on their work.
- Explicit prohibitions on using client data for model training.
- IP ownership clauses that extend to AI-generated outputs.
- Liability frameworks for AI errors and hallucinations.
- Productivity-sharing clauses that prevent providers from capturing all efficiency gains.
“When AI efficiency gains go entirely to the provider’s margin, buyers are subsidizing a competitive advantage they funded,” Lake said.
Evaluating performance gets harder when AI is doing the work. The old model was simpler, said Peterson: the supplier agreed to do the same thing the customer was doing, with lower-cost people — the old “your mess for less” model. “Now you turn it over to AI agents. It’s inherently not the same,” he said. “You can’t use the same service level measurements.”
Accountability is another sticking point. Determining which party bears responsibility for AI hallucinations or mishaps has become a crucial part of contract negotiations, Venkataramani said. Mapping out the full scope of potential AI failures and agreeing on the right human-to-AI ratio are now core to deal-making.
Outsourcing providers, for their part, often try to sidestep responsibility for AI-related issues, especially when using third-party AI models, said Jason Epstein, a partner at Nelson Mullins and co-head of the firm’s technology industry group.
“We have seen a trend now to take a much more specific approach to these issues so that use of AI is not viewed as ‘all bets are off’ in terms of the obligations of a service provider,” Epstein said. It’s a familiar pattern: when software vendors first moved to the cloud, they also tried to avoid taking on hosting responsibilities. “It did not take long until the vendors had to agree to step up and be responsible for hosted services, and the same will ultimately trend for those using AI,” he said.
AI is reshaping the insource vs. outsource calculus
AI isn’t just changing how outsourcing deals are structured. It’s prompting some organizations to reconsider whether to outsource at all.
AI-assisted coding has reduced the need for junior offshore developers and testers, allowing some companies to bring teams back in-house. “We’re able to do with 10 to 15 people what in the past took 40 to 50 offshore developers, QAs [quality assurance specialists], and business analysts,” said de Vries Sands.
Large enterprises are following a similar pattern, building out their own AI centers of excellence and reclaiming certain functions, Wong said. But the trend isn’t universal, he noted. Mid-market companies are actually outsourcing more, recognizing that outsourcing is not just a labor arbitrage play but a way to access talent and thought leadership they couldn’t build internally. “That’s especially true given how difficult it is to hire AI and technical talent right now,” Wong said.
AI introduces new risks into outsourcing
Regardless of whether work stays with providers or comes back in-house, AI adds layers of exposure that CIOs are still learning to manage.
- Data sovereignty tops the list. “When a provider deploys a general-purpose LLM on work that includes your data, your data may become part of the model’s effective memory,” Lake said. Contracts should give customers the right to control and verify how data is used.
- IP contamination is a related concern. If a provider’s AI tools are trained on open-source code, public datasets, or prior client work without proper licensing controls, the deliverables could come with legal strings attached — unresolved ownership issues that are already being litigated in multiple jurisdictions.
- Then there’s what Lake calls “quality drift.” AI outputs can be confidently wrong. And in outsourced contexts — particularly those in which buyers receive summaries or reports rather than source work — hallucinated content can work its way through workflows before anyone notices. And when bots fail, they can fail big. “When bots make mistakes, they can do so at tremendous scale and speed,” Peterson said. That requires different protections than contracts written for human-delivered work.
- There’s also the question of agentic AI. Granting an outsourcer permission to deploy agents that access your environment means trading efficiency for control. “There are still agents that can go rogue,” Wong said. To manage that risk, CIOs can limit autonomous agents to use cases where reverting to the original state is straightforward if something goes wrong.
CIOs take a central role in outsourcing negotiations
Perhaps the most significant shift is who’s leading these conversations.
Outsourcing negotiations that once fell to procurement or operations leaders increasingly require technical depth. Traditionally, the client-side lead might not have had the technical background needed to negotiate AI-centered contracts, Venkataramani said.
“CIOs have the expertise needed to make decisions around whether to use provider-owned or in-house technology, or whether all contracted providers should begin using the same AI technology,” he said.
AI expertise is also becoming embedded in how companies govern their outsourcing relationships. Many clients now require an AI specialist as part of the oversight structure — someone who can evaluate how providers are deploying AI and bring a market perspective on what’s possible, Wong explained.
Chief AI officers and AI centers of excellence are increasingly joining quarterly business reviews with providers, carving out dedicated time to assess how AI is being used and where it can deliver more value.
For CIOs, this is an expansion of both influence and accountability. The role has shifted from requirements-taker to strategic partner in deal structure.
“CIOs have the savvy to push for clearer standards around how AI is trained, monitored, and continuously improved within outsourced environments.”
For now, the timing works in their favor — suppliers are more open to reopening contracts as AI reshapes how services are delivered, Wong noted. But that window won’t stay open forever. The CIOs who act now will shape those deals. The rest will live with what’s handed to them.