Friday, February 27, 2026

Who sets AI guardrails? How CIOs can shape AI governance policy


Secretary of Defense Pete Hegseth has reportedly given Anthropic a Friday deadline to waive its AI safeguards for unrestricted military use — or risk losing its defense contracts entirely. While most enterprises aren’t working with AI in a military capacity, this overt pressure to adjust vendor-set AI guardrails raises an industry-agnostic issue. CIOs are being reminded that these safeguards, and broader AI governance, are not set in stone but are vulnerable to commercial incentives, legal exposure and political pressure.

As public discourse around AI ethics rages on, CIOs are contending with the volatility of enterprise AI governance. It is no longer theoretical, but a practical issue that requires a response. And yet, how much of it is truly in their control?

Somewhere between the requirements of government policy, the terms set by the vendor, the pressure of the customer and the guidance of the board, CIOs must chart a path that maximizes AI utility while protecting the business. While they cannot dictate the environment, they can make critical choices within it.



Whose risk is it, anyway?

When an enterprise invests in a new AI product, it also receives the safeguards that the vendor has built into the system. But Dr. Lisa Palmer, CEO and chief research officer at AI advisory firm Neurocollective, cautions that many leaders misunderstand the governance terms of what they are buying. 

“Your AI vendor’s safety posture is a business decision they can change at any time. It is not a product feature, and they won’t ask your opinion before they change it,” Palmer said.

This isn’t inherently nefarious, but rather a practical feature of the business agreement. As Donald Farmer, futurist at Tranquilla AI, explains, the guardrails of a vendor’s AI system reflect that vendor’s assessment of acceptable risk — not the enterprise’s. “That is shaped by their own legal exposure, their broadest possible customer base and their own ethical assumptions,” Farmer said. “This works for many customers, but at the edges there can be tension.”

By definition, these safeguards are designed to improve the security and ethical application of the AI models. In many cases, they function to protect the general public from potentially unethical behavior and are therefore non-negotiable, as noted by Simon Ratcliffe, fractional CIO at Freeman Clarke. But these restrictions, while well-intended, can limit the flexibility of an organization’s individual AI posture, especially when combined with additional governance imposed by external authorities.


“CIOs frequently find themselves caught between vendor-imposed model constraints, government procurement expectations, internal innovation pressure and regulatory compliance requirements,” Ratcliffe said. “This is not merely technical friction. It is a sovereignty question of who sets the rules inside the digital estate.”


The added complexity of governing AI systems

Part of what makes these decisions harder is the nature of AI itself, which operates unlike traditional IT systems. Farmer noted that AI systems are opaque in ways traditional enterprise software is not. “You cannot audit a neural network the way you audit a database,” he said. 

Ratcliffe similarly emphasizes this difference, pointing out that AI systems behave probabilistically, rather than predictably, which means that effective governance cannot rely on a one-time approval. Monitoring, testing and human oversight must be continuous. Chris Hutchins, founder and CEO of Hutchins Data Strategy Consulting, summed it up: “Governance needs to be responsive and proactive instead of reactive and episodic.”

In practice, this puts a lot of responsibility back into the hands of the CIO. Enterprises must take an active role in enforcing governance by documenting data pipelines, logging prompts and model outputs, and recording the controls applied to each model interaction. If they don’t, they leave themselves dangerously exposed when something goes wrong and there is no record of what the system did or which safeguards were in place.
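What that kind of interaction logging could look like in practice can be sketched in a few lines. This is a minimal, hypothetical example — the function names, control labels and the stand-in `call_model` stub are illustrative assumptions, not any vendor’s actual API — but it shows the core discipline the experts describe: every model call leaves an audit record of the model used, the controls applied and a fingerprint of the prompt and output.

```python
import hashlib
import datetime

AUDIT_LOG = []  # in production this would be an append-only, access-controlled store


def call_model(prompt: str) -> str:
    # Stand-in for a real model call; purely illustrative.
    return f"echo: {prompt}"


def governed_call(prompt: str, model: str = "example-model",
                  controls=("pii-filter", "rate-limit")) -> str:
    """Call the model and record an audit entry for the interaction."""
    output = call_model(prompt)
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "controls": list(controls),
        # Hash rather than store raw text, since prompts may contain sensitive data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    })
    return output
```

The point is not the specific fields but that the record is produced automatically on every call, so evidence of governance exists by default rather than by after-the-fact reconstruction.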


Wendy Turner-Williams, chief data architecture and intelligence officer at SymphraAI, put it bluntly: “Every AI agent expands the attack surface.” Without disciplined data management and segmentation, one compromised component can ripple across business functions. The more tightly integrated AI becomes, the greater the potential blast radius.

This requires CIOs to engage actively with governance, even if it seems like they are being handed a list of preset rules. As Palmer said, “traditional IT governance assumes that products stay the same. AI governance has to assume that they will not.” 

Determining the CIO’s sphere of influence

Caught between competing restrictions and changing mandates at the federal level, CIOs may feel powerless to influence much change — but the experts push back on that notion. Turner-Williams described the CIO’s influence as “significant, but not unilateral. The CIO acts as orchestrator and trust agent.”

This is especially true for CIOs working across multiple jurisdictions, making them accountable not only to U.S. law, but also to the EU AI Act, GDPR and other international frameworks. Several experts recommend reframing the governance approach from setting overarching policy to shaping the environment in which that policy is executed. As always, the earlier this is done, the better.

“Most influence comes from the CIO at the initial stage of adoption,” Hutchins said. “A CIO may not dictate how a vendor designs their product, but can influence the environment where AI is implemented, regulated and expanded.”

Farmer agrees with the importance of getting involved early on, before the AI product is deployed. To be most effective, he recommends focusing on the practical realities of the guardrails, rather than high-level theory: “They need to define standards at the level of real decisions: what data the system uses, which humans are in or over the loop and what remediation is possible if something goes wrong,” he said.

Ratcliffe concurred with this need to avoid getting bogged down in the theory. He describes how the CIO, while unable to set the ethical policy, has the ability to shape the architecture through which those ethics are enforced, be it through vendor selection, hosting decisions or data boundary design.

“The CIO’s real leverage is structural,” he said. “Governance follows architecture. If AI access is centralized, monitored and risk-tiered, safeguards become enforceable. If AI is decentralized and shadow-adopted, governance becomes theoretical.”
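Ratcliffe’s idea of centralized, risk-tiered AI access can be made concrete with a small sketch. The tier names and rules below are assumptions invented for illustration — they are not drawn from any standard or from the article — but they show how architecture makes safeguards enforceable: each use case maps to a tier, and the tier mechanically determines which controls apply, with unknown use cases defaulting to the strictest treatment.

```python
# Illustrative risk tiers; the categories and thresholds are assumptions,
# not a published framework.
RISK_TIERS = {
    "public-content": 1,   # low risk: marketing copy, summaries
    "internal-data": 2,    # medium risk: needs logging and output review
    "customer-pii": 3,     # high risk: human approval required
}


def required_controls(use_case: str) -> list[str]:
    """Map a use case to the controls its risk tier demands."""
    tier = RISK_TIERS.get(use_case, 3)  # unknown use cases default to the highest tier
    controls = ["audit-log"]            # every tier is at least logged
    if tier >= 2:
        controls.append("output-review")
    if tier >= 3:
        controls.append("human-approval")
    return controls
```

Because the gate sits in the central access path, a safeguard is no longer a policy document — it is a function every request passes through, which is exactly the difference between enforceable and theoretical governance.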

Compliance as the floor, not the ceiling

The CIO also has an opportunity to leave a mark by establishing the enterprise’s own ethical standards. While a vendor’s guardrails may be non-negotiable, they are also not the limit.

Ratcliffe offers a pragmatic lens, arguing that CIOs should approach this issue as one of reputational strategy, not a compliance exercise. He suggests that CIOs evaluate their AI decisions against corporate purpose, risk appetite and public defensibility. In other words, could the organization explain and defend its deployment choices if challenged by regulators, customers or employees?

AI governance is not just an opportunity to shape standardized policy for a specific enterprise environment; it is also a way to demonstrate broader care. Farmer sees the current AI landscape as one where ethical positioning is already part of brand strategy and differentiation, with many AI vendors emphasizing the higher standards of their own safeguards. CIOs can capitalize on this by introducing their own ethical AI policies that build on their vendors’ preset standards.

Assuming the presets are sufficient is a mistake, Palmer said.

“If your AI ethics policy is ‘We follow the law,’ you do not have an ethics policy; you have a compliance floor,” she said.


