
State AI regulations could leave CIOs with unusable systems


As states rush to regulate AI, CIOs face the prospect that systems will become unusable or economically impractical under new laws. These laws threaten to drive up compliance costs, reduce ROI or strand investments altogether.

For instance, when it was reported that ShopRite, a Northeast grocery chain, was using facial recognition to identify repeat shoplifters, some Connecticut lawmakers said last month they would seek to ban facial recognition in retail stores.

Lawmakers in Nebraska want to ban electronic shelf labels (ESLs) in grocery stores larger than 10,000 square feet. They worry that dynamic pricing could displace jobs and set prices based on consumer behavior. Oklahoma has filed a similar proposal for grocers with more than 15,000 square feet.

Maryland’s legislation, by contrast, would prohibit dynamic pricing and the use of surveillance data — sensors, cameras and biometrics — to set individualized prices but would not ban ESLs themselves. Retailers such as Walmart are already rolling out electronic shelf labels and say the technology improves operational efficiency and customer experience.

Retail is only one area where state AI laws are surfacing. 

Patchwork AI laws put existing systems at risk 

Numerous states are considering AI regulations for systems used in medical care, insurance, human resources, finance and other critical areas. Under some proposals, companies would be required to provide regulators with:

  • Documentation of training data.

  • Customer notifications describing how AI systems are used.

Compliance costs tied to new AI laws will rise, said Mahesh Juttiyavar, CIO at Mastek, an IT services provider. He said the laws will add organizational costs and management time for which companies have not accounted.

The compliance costs “are going to add up in future,” Juttiyavar said.

Regulatory compliance carries long-term costs 

Europe’s General Data Protection Regulation (GDPR) illustrates the dynamic, according to a recent study by researchers at Cornell University and Bocconi University in Italy, underwritten by Amazon. It found that Fortune 500 companies spent an average of $15.8 million each on initial GDPR compliance, with recurring annual maintenance costs typically reaching 20% to 30% of that initial investment.

Despite the growing regulatory risk, businesses appear unwilling to slow AI deployments.

“Moving away from AI with the regulation is not going to be an option for us,” Juttiyavar said. He said AI is already deeply embedded in how organizations operate and is essential for speed and competitiveness.

Gregory Dawson, a management professor at Arizona State University’s W. P. Carey School of Business, agreed, explaining, “No company wants to sit on the sidelines while other companies stake their claim in the AI era.” Dawson co-authored a Brookings Institution report tracking the growth of state AI regulations.

Dawson said he expects a surge in AI-related bills as lawmakers and the public become more aware of AI’s risks. At the same time, he noted that some states, such as Arizona, have created steering committees to examine both the risks of AI and how government agencies might use the technology to improve public services.

At the federal level, Congress appears unlikely to preempt state AI laws. A proposal last year to impose a 10-year moratorium on state AI regulation was stripped from a federal budget bill by a 99–1 vote in the U.S. Senate.

That leaves CIOs planning for varying state rules that may change over time. It is the same problem they face today with privacy laws: Congress has never preempted states on privacy, despite decades of on-and-off debate.

Not every AI law will stick

The vast majority of AI bills in statehouses won’t be adopted, and some — if they do make it into law — will lose their bite.

In 2023, New York City adopted a law requiring audits of AI-driven hiring systems to ensure that they are bias-free. The measure was widely seen as a forerunner of broader AI regulations.

But the final law applies only when an AI system makes a “consequential” hiring decision, a limitation that many employers say allows them to avoid compliance by keeping a human nominally involved in the process.

“The law is essentially designed to fail, and practically there’s very little enforcement of it,” said Yelena Ambartsumian, a New York attorney who focuses on privacy and AI governance.

According to Arsen Kourinian, an attorney at Mayer Brown who specializes in data privacy and AI, laws that outright prohibit AI systems are uncommon. More often, lawmakers want to limit how technology is used rather than ban it outright.

Governance and contracts become risk controls

That approach places a premium on governance. If CIOs establish strong internal frameworks for AI deployment, “that helps you react better to legislative change” and anticipate new requirements, Kourinian said.

Still, regulatory shifts can leave companies with systems that are technically sound but legally unusable, said Peter Cassat, a partner at CM Law.

To manage that risk, Cassat advises CIOs to negotiate “change of law” provisions in vendor contracts that provide termination rights if regulations make continued use of a system impossible or impractical. But such provisions do not eliminate the risk of sunk costs.

“If it’s a SaaS provider and you’ve signed a three-year term, they don’t want to necessarily let you walk for free either,” Cassat said.

Beyond legal exposure, CIOs must also anticipate public and political reaction to AI and biometric tools.

“The CIO absolutely has the responsibility to understand how this technology could be perceived — not just internally, but by the public and lawmakers,” said Mark Moccia, an analyst at Forrester Research.

In Connecticut, that reaction to facial recognition was swift. State Senate Majority Leader Bob Duff said in a statement that residents “shouldn’t have to worry about giving up information about themselves while grocery shopping.”

For CIOs, the only thing certain is that states may act in surprising ways on AI, and they will have to be prepared.

Learn more about AI and IT leadership three times a week with the InformationWeek newsletter.


