AI can be both a shield and a weapon. CISOs are tasked with using the technology to defend their organizations by building in-house AI tools, leveraging vendors’ AI capabilities, and finding new solutions on the market. While they widen their security moats, threat actors find ways to use AI to slip past those defenses. AI-fueled attacks are growing in volume and sophistication.
One mantra, “adopt AI or be left behind and vulnerable to attack,” is widely embraced across the industry, and it is often coupled with a glut of marketing promises that CISOs and their enterprises will get what they need to stay ahead of the curve. As cybersecurity leaders navigate the hype cycle, it is clear that generative AI (GenAI) delivers in some ways and falls short in others.
InformationWeek spoke with four cybersecurity experts to gauge how the technology performs and where users want it to improve.
Effective use cases for AI in cybersecurity
AI gets a lot of buzz as a programming tool, and cybersecurity teams leverage it in that capacity.
“My engineers use things like GitHub Copilot to build the software that we operate in and across our groups,” said Carl Kubalsky, director and deputy CISO at John Deere.
Threat hunters also use AI to augment their capabilities. For example, AI tools can be set loose to find “needle in the haystack” anomalies that human eyes might miss.
Some bad actors attempt to conceal harmful code or instructions by setting text color to match the background. “It doesn’t care if the text is in white or black; it can see it. We wouldn’t see white on white or black on black,” said Keri Pearlson, a senior lecturer and principal research scientist at the MIT Sloan School of Management. “There’s an example of how the technology would be able to aid in finding perhaps malware implanted into a document or a phishing email.”
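The hidden-text trick Pearlson describes can also be checked mechanically. The sketch below, a minimal illustration rather than any vendor's implementation, assumes HTML content with inline styles and flags spans whose declared text color is white; the function name and regex are illustrative only.

```python
import re

def find_hidden_text(html: str) -> list[str]:
    """Flag spans whose inline style sets the text color to white.

    Minimal sketch: a real scanner would also resolve CSS classes,
    rgb() notation, and near-white colors against the actual
    background; this only catches inline white-on-white text.
    """
    hidden = []
    # Match <span> tags with an inline white color (hex or keyword)
    # and capture the text they contain.
    pattern = re.compile(
        r'<span[^>]*style="[^"]*color:\s*(?:#fff(?:fff)?|white)[^"]*"[^>]*>(.*?)</span>',
        re.IGNORECASE | re.DOTALL,
    )
    for match in pattern.finditer(html):
        text = match.group(1).strip()
        if text:
            hidden.append(text)
    return hidden
```

A human reviewing the rendered page would see nothing unusual; a scanner that ignores presentation, as Pearlson notes, reads the hidden text as plainly as any other.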
AI can help threat hunters move faster and better handle the sheer volume of threats an enterprise faces. John Deere, for example, has an agentic security operations center that helps analysts. It can provide context for tickets and offer insight into what analysts should do next, although the human worker decides how to act.
“We’re able to catch more things with AI plus humans,” Kubalsky said. “And that’s more and more necessary as we continue to deal with the rise in the threat landscape.”
At analytics software company FICO, the cybersecurity team has found success using AI for threat modeling, according to CISO Ben Nelson. The team is responsible for ensuring the safety and integrity of software it delivers to clients, and as a part of the design process, security architects look for potential flaws.
“What we’ve been able to do is take our historical record of all the threat models that have been produced and train models on them internally,” he said.
A human security architect is still part of the threat modeling process, but AI has cut the human labor involved by about 80%, according to Nelson. Faster threat modeling translates into a faster development cycle.
The red team at FICO also uses AI tools to build bespoke infrastructure for testing. “They have adopted a generative AI model that actually produces the infrastructure as code snippets that help them produce those bespoke environments more rapidly so they can do rapid testing,” Nelson explained. “That’s been another big win for us on the generative AI front.”
The cybersecurity team at FICO also uses GenAI to spot attack patterns in its historical log data. They then correlate those findings with industry data to understand what a security event might cost had it not been prevented.
“It’s an interesting business tool in that respect because it’s helping us go back and quantify the cost of things that could have happened to help us justify expenses in the cyber space,” Nelson said.
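FICO has not published the internals of its GenAI pipeline, but the underlying idea of mining historical logs for attack patterns can be illustrated with a deliberately simple, non-AI stand-in: counting failed logins per source address and flagging bursts. The event shape, field names, and threshold below are all hypothetical.

```python
from collections import Counter

def flag_bruteforce_sources(events: list[dict], threshold: int = 10) -> list[str]:
    """Flag source IPs with an unusual volume of failed logins.

    Hypothetical event shape: {"src_ip": ..., "action": "login_failed"}.
    This is a stand-in for the kind of pattern a team might mine from
    historical log data; any real pipeline would be far richer.
    """
    # Tally failed-login events by their source address.
    failures = Counter(
        e["src_ip"] for e in events if e.get("action") == "login_failed"
    )
    # Report only sources whose failure count crosses the threshold.
    return [ip for ip, count in failures.items() if count >= threshold]
```

Once a pattern like this is confirmed, the correlation step Nelson describes is a separate exercise: matching the flagged activity against industry breach-cost data to estimate what the event would have cost had it succeeded.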
Where AI must improve for cybersecurity
As CISOs integrate AI tools into their strategies, it becomes easier to spot where the technology must improve to meet vendor promises and users’ needs.
Data remains a fundamental challenge. Users need strong data governance to harness AI tools and achieve hoped-for results. Throwing a slew of solutions at a data estate is unlikely to produce instant results.
“I do think in the market, sometimes … splashy things make that promise. I don’t buy it,” Kubalsky said. “You have to fundamentally solve some of the traditional challenges associated with bringing your data together, bringing the right data governance in, giving the right data, to the right time, to the right AI, to get the outcomes that you want to achieve.”
It is possible to put new cybersecurity measures in place with AI, but there are limits. One such limit is AI’s tendency not to recognize when it hits a wall. “One of the interesting challenges that generative AI in particular has right now is an inability to articulate when it doesn’t know,” Kubalsky said.
Nelson also said that AI-fueled cyber tools have yet to deliver the kind of predictive functions he’d like to see.
“One thing that we’re actually craving from our technology vendors is a more predictive AI-based system that will take historical data and look at real-time threats,” he explained. That system would correlate the data to try to predict potential breaches. “I haven’t seen AI applied to that effectively yet.”
Nelson also noted that GenAI search features in cybersecurity tools are not living up to the hype that rose in the last year and a half.
“Almost every one of our cyber technology vendors added a generative AI search feature to the search interface,” he said. “It’s just super basic. It doesn’t add much value to my teams from an investigative perspective.” He said he hasn’t seen much improvement since that initial burst of marketing.
The issue of trust comes increasingly to the fore in the AI space, whether in a cybersecurity context or otherwise. Lena Smart is a former CISO and currently an ambassador with AIUC-1, a consortium developing standards for agentic AI. She wants vendors to be accountable to standards rather than offer users opaque promises.
“It’s the promise that, ‘You can trust us, don’t worry about it.’ ‘Your data’s safe with us, don’t worry about it,'” Smart said. “Show me the supply chain risk management audit that you got to show me where my data’s going … Show me who has access to it. What are they doing with it?”
Nelson noted that trust is “a mixed bag” among vendors. “A lot of them are turning on AI interfaces without even telling us, which is pretty scary to think about because we don’t know how they’re using the data that we’ve entrusted them,” he said. That use may include training their models or commingling data with that of other clients.
The road ahead for CISOs
AI will be a priority for CISOs as solutions and threats continue to evolve. CISOs are likely to want to spend less time experimenting to see what works and what doesn’t. “Going faster and faster in our evaluations is something that we’re already beginning to do,” Kubalsky said.
Faster evaluations may also help organizations place bets on newer capabilities as they enter the market. Kubalsky and his team keep an eye on startups in this space, and that kind of forward thinking has served them well thus far. “We got engaged with some deepfake detection startup capabilities probably about two years ago, knowing that deep fakes and deceptions were going to be rising in prevalence, and that was a bet that we got right,” he said.
As exciting as new tools will continue to be, CISOs and their teams also need to lean into accountability, both for their vendors and for the in-house AI tools they put to work. Smart frequently fields pitches from vendors and pushes for answers about how data will be used, who has access, and what happens to the data after a contract ends. “If they’ve not got absolutely instinctive, positive, immediate answers to that, the call’s done,” she said.
Of all the resources Nelson could bulk up on, people stand at the top of the list. “Since we’re not getting what we need from our vendors, we’re going to have to jump into some innovation and engineering in-house,” he said.
While the potential replacement of humans cannot be ignored, people remain essential for the responsible use of AI in cybersecurity. “I think in 2026, we will see managers get more control over the AI environments that they hope to bring into their organizations,” Pearlson said.

