10.3 C
New York
Monday, April 14, 2025

What Are the Biggest Blind Spots for CIOs in AI Security?


Tension between innovation and security is a tale as old as time. Innovators and CIOs want to blaze trails with new technology. CISOs and other security leaders want to take a more measured approach that mitigates risk. With the rise of AI in recent years regularly being characterized as an arms race, there is a real sense of urgency. But that risk that the security-minded worry about is still there.  

Data leakage. Shadow AI. Hallucinations. Bias. Model poisoning. Prompt injection, direct and indirect. These are known risks associated with the use of AI, but that doesn’t mean business leaders are aware of all the ways they could manifest within their organizations and specific use cases. And now agentic AI is getting thrown into the mix. 

“Organizations are moving very, very quickly down the agentic path,” Oliver Friedrichs, founder and CEO of Pangea, a company that provides security guardrails for AI applications, tells InformationWeek. “It’s eerily similar to the internet in the 1990s when it was somewhat like the Wild West and networks were wide open. Agentic applications really in most cases aren’t taking security seriously because there aren’t really a well-established set of security guardrails in place or available.” 

What are some of the security issues that enterprises might overlook as they rush to grasp the power of AI solutions? 

Related:Transforming Government Cyber Operations with AI

Visibility  

How many AI models are deployed in your organization? The answer to that question may not be as easy to answer as you think.  

“I don’t think people understand how pervasively AI is already deployed within large enterprises,” says Ian Swanson, CEO and founder of Protect AI, an AI and machine learning security company. “AI is not just new in the last two years. Generative AI and this influx of large language models that we’ve seen created a lot of tailwinds, but we also need to take stock an account of what we’ve had deployed.” 

Not only do you need to know what models are in use, you also need visibility into how those models arrive at decisions.  

“If they’re denying, let’s say an insurance claim on a life insurance policy, there needs to be some history for compliance reasons and also the ability to diagnose if something goes wrong,” says Friedrichs.  

If enterprise leaders do not know what AI models are in use and how those models are behaving, they can’t even begin to analyze and mitigate the associated security risks.  

Auditability 

Swanson gave testimony before Congress during a hearing on AI security. He offers a simple metaphor: AI as cake. Would you eat a slice of cake if you didn’t know the recipe, the ingredients, the baker? As tempting as that delicious dessert might be, most people would say no.  

Related:High-Severity Cloud Security Alerts Tripled in 2024

“AI is something that you can’t, and you shouldn’t just consume. You should understand how it’s built. You should understand and make sure that it doesn’t include things that are malicious,” says Swanson.  

Has an AI model been secured throughout the development process? Do security teams have the ability to conduct continuous monitoring?  

“It’s clear that security isn’t a onetime check. This is an ongoing process, and these are new muscles a lot of organizations are currently building,” Swanson adds.  

Third Parties and Data Usage 

Third party risk is a perennial concern for security teams, and that risk balloons along with AI. AI models often have third-party components, and each additional party is another potential exposure point for enterprise data.  

“The work is really on us to go through and understand then what are those third parties doing with our data for our organization,” says Harman Kaur, vice president of AI at Tanium, a cybersecurity and systems management company. 

Do third parties have access to your enterprise data? Are they moving that data to regions you don’t want? Are they using that data to train AI models? Enterprise teams need to dig into the terms of any agreement they make to use an AI model to answer these questions and decide how to move forward, depending on risk tolerance.   

Related:What Health Care CIOs and CISOs Need to Know About the Oracle Breaches

The legal landscape for AI is still very nascent. Regulations are still being contemplated, but that doesn’t negate the presence of legal risk. Already there are plenty of examples of lawsuits and class actions filed in response to AI use.  

“When something bad happens, everybody’s going to get sued. And they’ll point the fingers at each other,” says Robert W. Taylor, of counsel at Carstens, Allen & Gourley, a technology and IP law firm. Developers of AI models and their customers could find themselves liable for outcomes that cause harm.  

And many enterprises are exposed to that kind of risk. “When companies contemplate building or deploying these AI solutions, they don’t do a holistic legal risk assessment,” Taylor observes.  

Now, predicting how the legality around AI will ultimately settle, and when that will even happen, is no easy task. There is no roadmap, but that doesn’t mean enterprise teams should throw up their collective hands and plow ahead with no thought for the legal implications. 

“It’s all about making sure you understand at a deep level where all the risk lies in whatever technologies you’re using and then doing all you can [by] following reasonable practice best practices on how you mitigate those harms and documenting everything,” says Taylor.  

Responsible AI 

Many frameworks for responsible AI use are available today, but the devil is in the details.  

“One of the things that I think a lot of companies struggle with, my own clients included, is basically taking these principles of responsible AI and applying them to specific use cases,” Taylor shares.  

Enterprise teams have to do the legwork to determine the risks specific to their use cases and how they can apply principles of responsible AI to mitigate them.  

Security vs. Innovation  

Embracing security and innovation can feel like balancing on the edge of knife. Slip one way and you feel the cut of falling behind in the AI race. Slip the other way and you might be facing the sting of overlooking security pitfalls. But doing nothing ensures you will fall behind. 

“We’ve seen it paralyzes some organizations. They have no idea how to create a framework to say is this a risk that we’re willing to accept,” says Kaur.  

Adopting AI with a security mindset is not to say that risk is completely avoidable. Of course it isn’t. “The reality is this is such a fast-moving space that it’s like drinking from a firehose,” says Friedrichs.  

Enterprise teams can take some intentional steps to better understand the risks of AI specific to their organizations while moving toward realizing the value of this technology.  

Looking at all of the AI tools available in the market today is akin to being in a cakeshop, to use Swanson’s metaphor. Each one looks more delicious than the next. But enterprises can narrow the decision process down by starting with vendors that they already know and trust. It’s easier to know where that cake comes from and the risks of ingesting it.  

“Who do I already trust and already exists in my organization? What can I leverage from those vendors to make me more productive today?” says Kaur. “And generally, what we’ve seen is with those organizations, our legal team, our security teams have already done extensive reviews. So, there’s just an incremental piece that we need to do.” 

Leverage risk frameworks that are available, such as the AI Risk Management Framework from the National Institute of Standards and Technology (NIST). 

“Start figuring out what pieces are more important to you and what’s really critical to you and start putting all of these tools that are coming in through that filter,” says Kaur.  

Taking that approach requires a multidisciplinary effort. AI is being used across entire enterprises. Different teams will define and understand risk in different ways.  

“Pull in your security teams, pull in your development teams, pull in your business teams, and have a line of sight [on] a process that wants to be improved and work backwards from that,” Swanson recommends.  

AI represents staggering opportunities for enterprise, and we have just begun to work through the learning curve. But security risks, whether or not you see them, will always have to be a part of the conversation.  

“There should be no AI in the enterprise without security of AI. AI has to be safe, trusted, and secure in order for it to deliver on its value,” says Swanson.



Related Articles

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest Articles