
Mastering AI risk: An end-to-end strategy for the modern enterprise

Organizations find themselves navigating an environment where artificial intelligence can be a phenomenal growth engine, while simultaneously introducing unprecedented risks. This leaves executive leadership teams grappling with two critical questions: First, where should an AI cyber risk process begin and end for organizations creating and consuming AI? Second, what governance, training, and security processes should be implemented to protect people, data, and assets against vulnerabilities exposed by human error, AI system bias, and bad actors?

The answers lie in adopting a comprehensive life cycle approach to AI risk management—one that equips the C-suite, IT, AI development teams, and security leaders with the tools to navigate an ever-evolving threat landscape. 

Understanding the faces of AI cyber risk

Trustworthy AI development

Organizations developing AI models or AI applications—whether creating proprietary machine learning models or integrating AI features into existing products—must approach the process with a security-first mindset. If cyber risks and broader security risks are not properly considered at the outset, an organization is needlessly exposed to several dangers:

  • Lack of security-by-design: Models developed without formal oversight or security protocols are more susceptible to data manipulation and adversarial inputs.
  • Regulatory gaps: With emerging regulations and frameworks like the EU AI Act, the NIST AI Risk Management Framework, and ISO 42001, failing to comply invites legal scrutiny and reputational damage.
  • Biased or corrupted data: Poor data quality can yield unreliable outputs, while malicious actors can intentionally feed incorrect data to skew results.

Responsible AI usage

Organizations not actively developing AI are still consumers of the technology—often at scale and without even realizing it. Numerous software-as-a-service (SaaS) platforms incorporate AI capabilities to process sensitive data. Employees might also experiment with generative AI tools, inputting confidential or regulated information that leaves organizational boundaries.

When AI usage is unregulated or poorly understood, organizations face several risks that can lead to serious security gaps, compliance issues, and liability concerns, including:

  • Shadow AI tools: Individuals or departments may purchase, trial, and use AI-enabled apps under the radar, bypassing IT policies and creating security blind spots.
  • Policy gaps: Many businesses lack a dedicated acceptable use policy (AUP) that governs how employees interact with AI tools, potentially exposing them to data leakage, privacy, and regulatory issues.
  • Regional laws and regulations: Many jurisdictions are developing their own AI-specific rules, such as New York City’s Local Law 144, which requires bias audits of automated hiring tools, and the Colorado AI Act. Misuse in hiring, financial decisions, or other sensitive areas can trigger liability.

Defending against malicious AI usage

As much as AI can transform legitimate business practices, it also amplifies the capabilities of cybercriminals. Key risks organizations face from bad actors include:

  • Hyper-personalized attacks: AI models can analyze massive data sets on targets, customizing emails or phone calls to maximize credibility.
  • Increasingly sophisticated deepfakes: Video and voice deepfakes have become so convincing that employees with access to corporate financial accounts and sensitive data have been tricked into paying millions to fraudsters.  
  • Executive and board awareness: Senior leaders are prime targets for whaling attempts (spear-phishing cyber attacks that target high-level executives or individuals with significant authority) that leverage advanced forgery techniques.

A life-cycle approach to managing AI risk

Organizations gain a strategic advantage with a life-cycle approach to AI cyber risk, one that acknowledges that AI technologies evolve rapidly, as do the threats and regulations associated with them.

A true life-cycle approach combines strategic governance, advanced tools, workforce engagement, and iterative improvement. This model is not linear; it forms a loop that continuously adapts to evolving threats and changes in AI capabilities. Here is how each stage contributes.

Risk assessment and governance

  • Mapping AI risk: Conduct an AI usage inventory to identify and categorize existing tools and data flows (a sketch of one possible inventory record follows this list). This comprehensive mapping goes beyond mere code scanning; it evaluates how in-house and third-party AI tools reshape your security posture, impacting organizational processes, data flows, and regulatory contexts.
  • Formal framework implementation: To demonstrate due diligence and streamline audits, align with recognized regulations and frameworks like the EU AI Act, the NIST AI Risk Management Framework, and ISO 42001. In tandem, develop and enforce an explicit acceptable use policy (AUP) that outlines proper data handling procedures.
  • Executive and board engagement: Engage key leaders, including the CFO, general counsel, and board, to ensure they comprehend the financial, legal, and governance implications of AI. This proactive involvement secures the necessary funding and oversight to manage AI risks effectively.
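
The following is a minimal sketch, in Python, of what a single record in such an inventory might look like. The field names, example tools, and risk tiers are illustrative assumptions, not a prescribed schema.

# Minimal sketch of an AI usage inventory record; the schema is illustrative.
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    name: str                        # e.g., "Contract summarizer"
    owner: str                       # accountable team or individual
    vendor: str                      # "in-house" or a third-party provider
    data_categories: list[str] = field(default_factory=list)  # e.g., ["PII"]
    crosses_boundary: bool = False   # does data leave the organization?
    risk_tier: str = "unassessed"    # e.g., "low", "medium", "high"

inventory = [
    AIAssetRecord("Contract summarizer", "legal-ops", "third-party SaaS",
                  ["PII", "contracts"], crosses_boundary=True, risk_tier="high"),
    AIAssetRecord("Internal code assistant", "platform", "in-house",
                  ["source code"], crosses_boundary=False, risk_tier="medium"),
]

# Flag records that send regulated data outside organizational boundaries.
review_queue = [r for r in inventory
                if r.crosses_boundary and "PII" in r.data_categories]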

Technology and tools

  • Advanced detection and response: AI-enabled defenses, including advanced threat detection and continuous behavioral analytics, are critical in today’s environment. By parsing massive data sets at scale, these tools monitor activity in real time for subtle anomalies—such as unusual traffic patterns or improbable access requests—that could signal an AI-enabled attack. A minimal detection sketch appears after this list.
  • Zero trust: Zero trust architecture continuously verifies the identity of every user and device at multiple checkpoints, adopting least-privilege principles and closely monitoring network interactions. This granular control limits lateral movement, making it far more difficult for intruders to access additional systems even if they breach one entry point. A second sketch after this list illustrates such a per-request check.
  • Scalable defense mechanisms: Build flexible systems capable of rapid updates to counter new AI-driven threats. By proactively adapting and fine-tuning defenses, organizations can stay ahead of emerging cyber risks.
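
Here is a minimal sketch of the behavioral analytics idea, assuming per-user event counts are already being collected: flag activity that deviates sharply from a user’s own baseline. The threshold and data are illustrative; production systems use far richer models.

# Minimal sketch of behavioral anomaly detection: flag users whose activity
# deviates sharply from their own historical baseline.
from statistics import mean, stdev

def anomalous(baseline: list[int], todays_count: int,
              z_threshold: float = 3.0) -> bool:
    """Return True if today's event count is a statistical outlier."""
    if len(baseline) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return todays_count != mu
    return abs(todays_count - mu) / sigma > z_threshold

# Example: a user who normally downloads ~10 files suddenly pulls 400.
history = [8, 12, 9, 11, 10, 13, 9]
print(anomalous(history, 400))  # True: worth an analyst's attention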
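
And a minimal sketch of a zero trust checkpoint: deny by default, and allow a request only when identity, device posture, and a least-privilege grant all check out. The Request fields and GRANTS table are hypothetical, not a specific product’s API.

# Minimal sketch of a zero trust policy check: every request is evaluated
# against identity, device posture, and least-privilege grants.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool   # e.g., a patched, managed endpoint
    mfa_verified: bool
    resource: str

# Least-privilege grants: users see only what their role requires.
GRANTS = {"jsmith": {"payroll-db"}, "akumar": {"build-server"}}

def authorize(req: Request) -> bool:
    """Deny by default; allow only when every check passes."""
    return (req.mfa_verified
            and req.device_compliant
            and req.resource in GRANTS.get(req.user, set()))

print(authorize(Request("jsmith", True, True, "payroll-db")))    # True
print(authorize(Request("jsmith", True, True, "build-server")))  # False: no grant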

Training and awareness

  • Workforce education: Ransomware, deepfakes, and social engineering threats are often successful because employees are not primed to question unexpected messages or requests. To bolster defense readiness, offer targeted training, including simulated phishing exercises.
  • Executive and board involvement: Senior leaders must understand how AI can amplify the stakes of a data breach. CFOs, CISOs, and CROs should collaborate to evaluate AI’s unique financial, operational, legal, and reputational risks.
  • Culture of vigilance: Encourage employees to report suspicious activity without fear of reprisal and foster an environment where security is everyone’s responsibility.

Response and recovery

  • AI-powered attack simulations: Traditional tabletop exercises take on new urgency in an era where threats materialize faster than human responders can keep pace. Scenario planning should incorporate potential deepfake calls to the CFO, AI-based ransomware, or large-scale data theft.
  • Continuous improvement: After any incident, collect lessons learned. Were detection times reasonable? Did staff follow the incident response plan correctly? Update governance frameworks, technology stacks, and processes accordingly, ensuring that each incident drives smarter risk management.

Ongoing evaluation

  • Regulatory and threat monitoring: Track legal updates and new attack vectors. AI evolves quickly, so remaining static is not an option.
  • Metrics and continuous feedback: Measure incident response times, security control effectiveness, and training outcomes (a simple metrics sketch follows this list). Use this data to refine policies and reallocate resources as needed.
  • Adaptation and growth: To keep pace with the changing AI landscape, evolve your technology investments, training protocols, and governance structures.
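
Here is a minimal sketch of two such metrics, mean time to detect (MTTD) and mean time to respond (MTTR), assuming incidents are logged with occurred, detected, and contained timestamps. The incident data shown is illustrative.

# Minimal sketch of response metrics computed from incident timestamps.
from datetime import datetime
from statistics import mean

incidents = [
    # (occurred, detected, contained)
    (datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 9, 45),
     datetime(2025, 3, 1, 13, 0)),
    (datetime(2025, 4, 12, 22, 10), datetime(2025, 4, 13, 1, 0),
     datetime(2025, 4, 13, 6, 30)),
]

mttd_hours = mean((d - o).total_seconds() / 3600 for o, d, _ in incidents)
mttr_hours = mean((c - d).total_seconds() / 3600 for _, d, c in incidents)
print(f"MTTD: {mttd_hours:.1f}h, MTTR: {mttr_hours:.1f}h")  # trend these quarterly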

A proactive, integrated approach not only safeguards your systems but also drives continuous improvement throughout the AI life cycle.

As AI development intensifies—propelled by fierce market competition and the promise of transformative insights—leaders must move beyond questioning whether to adopt AI and focus instead on how to do so responsibly. Although AI-driven threats are becoming more complex, a life-cycle approach enables organizations to maintain their competitive edge while safeguarding trust and meeting compliance obligations.

John Verry is the managing director of CBIZ Pivot Point Security, CBIZ’s cybersecurity team, in the National Risk and Advisory Services Division. 

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.
