
What CIOs need to know about risk and trust


Managing AI trustworthiness and risk is essential to realizing business value from AI. When asked what organizations must do to capture AI’s benefits while minimizing its downsides, Sibelco Group CIO Pedro Martinez Puig emphasized discipline and strategic focus.

“Capturing AI’s value while minimizing risk starts with discipline,” Puig said. “CIOs and their organizations need a clear strategy that ties AI initiatives to business outcomes, not just technology experiments. This means defining success criteria upfront, setting guardrails for ethics and compliance, and avoiding the trap of endless pilots with no plan for scale.”

For Puig, the work begins by creating strong use cases and rigorous foundations. “CIOs must focus on use cases that are solid enough to deliver measurable impact. In mining and materials, this includes ensuring data integrity from the plant floor to enterprise systems, embedding cybersecurity into AI workflows, and monitoring for risks like bias or model drift.”
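Monitoring for model drift, in particular, can be automated with simple distribution checks. Below is a minimal Python sketch using the population stability index (PSI) on a single feature; the sensor data, threshold, and alert are illustrative assumptions, not Sibelco's actual setup.

    import numpy as np

    def population_stability_index(expected, actual, bins=10):
        # Compare live feature values against the training-time baseline.
        # A PSI above roughly 0.2 is a common rule-of-thumb drift alarm.
        edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
        edges[0] = min(edges[0], actual.min()) - 1e-9   # cover all live data
        edges[-1] = max(edges[-1], actual.max()) + 1e-9
        e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
        a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
        e_frac = np.clip(e_frac, 1e-6, None)            # avoid log(0)
        a_frac = np.clip(a_frac, 1e-6, None)
        return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

    # Invented example: a plant-floor sensor whose readings have shifted.
    rng = np.random.default_rng(0)
    baseline = rng.normal(50, 5, 10_000)   # distribution the model trained on
    live = rng.normal(53, 5, 10_000)       # what production now looks like
    psi = population_stability_index(baseline, live)
    if psi > 0.2:
        print(f"PSI={psi:.3f}: drift detected, flag the model for review")

A check like this can run on a schedule for each model input, with breaches routed into the monitoring workflow Puig describes.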

Puig adds that trust is just as important as technology. “Transparency, governance, and training help people understand how AI decisions are made and where human judgment still matters. The goal isn’t to chase every shiny use case; it’s to create a framework where AI delivers value safely and sustainably.”


Nicole Coughlin, CIO of the Town of Cary, N.C., echoes this view. “It takes governance, collaboration, and inclusion,” she said. “The organizations that thrive at AI will be the ones that bring people together — policy, legal, communications, operations, and IT — to co-create the guardrails. Minimizing risk isn’t about slowing innovation. It’s about alignment and shared purpose.”

Key risks for AI

According to the authors of “Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI,” risk and trust have always been part of AI, but today’s landscape raises the stakes. They write that “AI transformations surface a whole new and complex set of interconnected risks. … AI innovations are taking place in an environment of increased regulatory scrutiny, where consumers, regulators, and business leaders are increasingly concerned about vulnerabilities across cybersecurity, data privacy, and AI systems.”

Given this context, they suggest organizations must prioritize “digital trust.” This involves:

  • Protecting consumer data and maintaining strong cybersecurity.

  • Delivering reliable AI-powered products and services.

  • Ensuring transparency around how data and AI models are used.

Building this trust requires triaging risks, operationalizing risk policies across the organization, and raising awareness so employees understand their role in responsible AI.


In Dresner Advisory Services' 2025 research, we examined the additional risks unique to generative and agentic AI. These risks — which range from use case definition to security and privacy — have undoubtedly hindered the production rollout of GenAI solutions; many of the same concerns also apply to agentic AI, which is built on similar foundational technologies.

Data security and privacy emerge as critical issues, cited by 42% of respondents in the research. While other concerns — such as response quality and accuracy, implementation costs, talent shortages, and regulatory compliance — rank lower individually, they collectively represent substantial barriers.

When aggregated, issues related to data security, privacy, legal and regulatory compliance, ethics, and bias form a formidable cluster of risk factors — clearly indicating that trust and governance are top priorities for scaling AI adoption.

AI governance to generate trust

At its core, governance ensures that data is safe for decision-making and autonomous agents. In “Competing in the Age of AI,” authors Marco Iansiti and Karim Lakhani explain that AI allows organizations to rethink the traditional firm by powering up an “AI factory” — a scalable decision-making engine that replaces manual processes with data-driven algorithms. However, to achieve an AI factory, organizations need an effective data pipeline that gathers, cleans, integrates, and safeguards data in a systematic, sustainable and scalable way.
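To make the pipeline idea concrete, here is a deliberately small Python sketch of the gather, clean, integrate, and safeguard stages; the tables, columns, and rules are invented for illustration.

    import pandas as pd

    def gather_and_integrate() -> pd.DataFrame:
        # Stand-ins for real source systems (plant floor and ERP).
        plant = pd.DataFrame({"asset_id": [1, 2, 2, 3],
                              "output_t": [410.0, 395.5, 395.5, None]})
        erp = pd.DataFrame({"asset_id": [1, 2, 3],
                            "site": ["Antwerp", "Porto", "Lima"]})
        return plant.merge(erp, on="asset_id", how="left")  # integrate

    def clean(df: pd.DataFrame) -> pd.DataFrame:
        df = df.drop_duplicates()              # remove double-loaded rows
        return df.dropna(subset=["output_t"])  # basic quality gate

    def safeguard(df: pd.DataFrame) -> pd.DataFrame:
        allowed = ["asset_id", "output_t", "site"]  # explicit allow-list
        return df[allowed]

    curated = safeguard(clean(gather_and_integrate()))
    print(curated)  # the governed table downstream models may consume

Each stage is trivially replaceable with production tooling; the point is that the steps are explicit, ordered, and repeatable rather than ad hoc.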


A proxy for measuring this kind of industrialization of data is the success of BI implementations. In Dresner's 2025 research, 32% of organizations surveyed said they were completely successful with their BI implementations. In a discussion, Stephanie Woerner of MIT-CISR suggested their latest research numbers were comparable. Combined, these findings suggest that a significant majority of firms — roughly 68% — have yet to establish truly effective data pipelines.

To bridge this gap, organizations must initiate and own a data governance program, something CIOs have historically loathed but that must clearly change in the AI era. Fundamentals include (see the sketch after this list):

  • Data integrity and quality: Ensuring the source of truth is accurate.

  • Clear ownership: Defining who is responsible for specific datasets.

  • Fairness: Actively monitoring for and reducing bias, and ensuring that data is not exposed and is used only for legitimate purposes.
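One way to operationalize these fundamentals is to attach a lightweight contract to every dataset. The Python sketch below is hypothetical; the dataset name, owner, quality rule, and purposes are placeholders.

    from dataclasses import dataclass
    from typing import Callable, Set

    @dataclass
    class DatasetContract:
        name: str
        owner: str                            # clear ownership
        quality_rule: Callable[[list], bool]  # integrity and quality
        allowed_purposes: Set[str]            # legitimate use only

    sensors = DatasetContract(
        name="plant_floor_sensors",
        owner="ops-data-team@example.com",    # invented contact
        quality_rule=lambda rows: all(r.get("output_t") is not None
                                      for r in rows),
        allowed_purposes={"forecasting", "maintenance"},
    )

    def check_access(contract, purpose, rows):
        if purpose not in contract.allowed_purposes:
            raise PermissionError(f"{purpose!r} not approved for "
                                  f"{contract.name} (owner: {contract.owner})")
        return contract.quality_rule(rows)    # False means block the run

    rows = [{"output_t": 410.0}, {"output_t": 395.5}]
    print(check_access(sensors, "forecasting", rows))  # True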

Chris Child, VP of product and data engineering at Snowflake, puts it this way: “Efficiency without governance will cost businesses in the long term.” Agentic AI adds complexity, Child says, because these autonomous systems act on data directly. “The path forward is to unify data, AI, and governance in a single secure architecture,” he said.

Meanwhile, University of Porto Professor Pedro Amorim recommends a “venture-style” approach: “Fund many small, time-boxed bets, learn quickly, and double down on the winners with a clear path to industrialization.”

AI governance to ensure data security

Governance of risk focuses on protecting access to data. Bob Seiner — a leading data governance thought leader — notes that it is critical to formalize accountability and educate people on building governed data habits. Effective security means preventing unauthorized access, loss of integrity, and theft while ensuring the legitimate processing of personal information.

Iansiti and Lakhani argue that trustworthy AI requires “centralized systems for careful data security and governance, defining appropriate checks and balances on access and usage, inventorying the assets carefully, and providing all stakeholders with necessary protection.” Because LLMs rely on large volumes of data — including PII — data must be secured against the unique ways LLMs store and retrieve information.
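A common control, sketched below in Python, is to redact obvious PII before documents are embedded or indexed for an LLM. The regex patterns are illustrative only; production systems lean on dedicated PII-detection tooling.

    import re

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact(text: str) -> str:
        # Mask PII before the text reaches an embedding or index step.
        text = EMAIL.sub("[EMAIL]", text)
        return PHONE.sub("[PHONE]", text)

    doc = "Contact Jane at jane.doe@example.com or +1 (555) 010-2030."
    print(redact(doc))  # Contact Jane at [EMAIL] or [PHONE].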

Amorim suggests putting these guardrails in place early (a brief sketch follows the list):

  • Data classification, privacy/IP rules.

  • Human-in-the-loop for sensitive decisions.

  • Explicit no-go criteria and evaluation benchmarks.
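A minimal Python sketch of how these guardrails might fit together, assuming invented classification labels and use-case names:

    # All labels, thresholds, and use cases below are examples.
    SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}
    NO_GO = {"automated_hiring_decision", "medical_diagnosis"}

    def route(use_case: str, data_class: str) -> str:
        if use_case in NO_GO:
            return "rejected: no-go use case"
        if SENSITIVITY[data_class] >= SENSITIVITY["confidential"]:
            return "queued for human review"  # human-in-the-loop gate
        return "auto-approved"

    print(route("demand_forecast", "internal"))      # auto-approved
    print(route("demand_forecast", "confidential"))  # queued for human review
    print(route("medical_diagnosis", "public"))      # rejected: no-go use case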

He also recommends ensuring there's budget at the front of the funnel so you're not forced into one or two big bets.

Jared Coyle, chief AI officer at SAP, recommends a governance framework based on three pillars:  

  1. Relevant: AI should be designed to work within a specific business process, not in a standalone “AI for AI’s sake” way.

  2. Reliable: The system should deliver consistent, data-accurate output.

  3. Responsible: The process should be certified, follow strict ethical guidelines and carry forward existing security infrastructure.

Parting words

Achieving value with AI requires industrialized data and processes and strong governance.

The starting point is simple: CIOs must ensure their AI initiatives tie directly to business outcomes, establish clear success criteria, and embed ethics and compliance guardrails early to avoid the trap of endless pilots that never scale.

Equally important is business trust in AI. CIOs need transparent AI workflows, strong data foundations, cross-functional collaboration, and training that helps employees understand how AI decisions are made — and where humans remain in control.

Risk remains the biggest barrier to GenAI and agentic AI. Data security and privacy top the list, followed by accuracy, regulatory compliance, bias and ethics — a cluster of interconnected risks that slow production rollout.

Effective governance is the only way to deliver the industrialized data pipelines necessary for trust. This requires formalizing accountability, centralizing data platforms, enforcing access controls, and establishing early guardrails — such as data classification, privacy protections, and human-in-the-loop oversight — to ensure AI is relevant, reliable and responsible.


