As we enter another year defined by the adoption of AI, CISOs face more cyberthreats and increased demand to defend their organizations. InformationWeek spoke with five CISOs to get a sense of what they expect from the technology in 2026: how it will be used in the hands of threat actors, its capabilities as a defensive tool, and what security leaders want and need from the technology as it becomes increasingly ingrained in the fabric of their tech stacks.
The threat landscape
In 2025, threat actors used AI to hone their campaigns and expand the scale of attacks. Phishing attacks got harder to spot; AI easily removes the old tell of poor grammar. And AI makes it easier to cast hyperpersonalized lures for more victims.
“Right now, we’re seeing about 90% of social engineering phishing kits have AI deepfake technology available in them,” said Roger Grimes, CISO advisor at security awareness training company KnowBe4.
Thus far, AI has sharpened old tactics. That trend is only going to ramp up as time goes on. Wendi Whitmore, chief security intelligence officer at cybersecurity company Palo Alto Networks, described most of the attacks fueled by AI as “evolutionary and not revolutionary,” but that could change as threat actors move through their own learning curves.
The cyberattack executed by suspected Chinese state actors who manipulated Anthropic presaged the future of cyberattacks: large-scale and largely autonomous.
“The future of cybercrime is bad guys’ AI bots against good guys’ AI bots, and the best algorithms will win,” Grimes said. “That’s the future of all cybersecurity from here on out.”
As that future approaches, hackers will look for ways to use AI to execute attacks and search for vulnerabilities in the AI systems and tools that enterprises use.
“The thing that is most concerning to a CISO is that the LLMs are going to be the honeypots. That’s going to be the place that any hacker’s going to want to attack because that’s where all the data’s at,” said Jill Knesek, CISO at BlackLine, a cloud-based financial operations management platform.
Grimes also anticipated an increase in hacks of the model context protocol (MCP), Anthropic’s open source standard that enables AI systems to communicate with external systems. Threat actors can leverage methods such as prompt injection to exploit MCP servers.
As more widespread attacks perpetrated by AI occur, CISOs will grapple with questions around identity, according to Whitmore. “I don’t think that the industry collectively has a good understanding yet of who’s responsible when it’s actually a synthetic identity that has created a massive, widespread attack,” she said. That responsibility could rest on the business unit that deployed it, the CISO who approved the use of the tools or the actual team that’s leveraging it.
AI as a cyberdefense tool
As threat actors beef up their AI capabilities, so must defenders. In 2025, CISOs found out just what AI can do for their cybersecurity strategies. AI’s ability to sift through mountains of data and discern patterns proved to be one of its foremost boons for cybersecurity teams.
“Internal for my team, that has been a game changer because we’re seeing that now my threat analysts can take 10 minutes to research something instead of an hour going to separate tools,” said Don Pecha, CISO at FNTS, a managed cloud and mainframe services company for regulated industries.
AI can find the proverbial needle in the haystack of threats, both real and false positives, and enable analysts to make faster decisions. It can automate much of the digging and review that previously represented a lot of tedious, manual work for analysts.
While cybersecurity teams seize these benefits, AI as a cyberdefense tool has plenty of room to grow. “We’re not seeing … really purpose-built AI security for the most part. What you’re seeing is legacy security with some AI capability and functionality,” Knesek said.
As 2026 begins, more AI security solutions will emerge, particularly in the realm of agentic AI. Grimes said he expects that patching bots will be one type of AI agent granted more autonomy within organizations.
“You’re not going to be able to fight AI that’s trying to compromise you with traditional software. You’re going to need a patching bot,” he said.
The phrase “human in the loop” is held up by AI adopters as the gold standard of responsible use, but as agentic AI takes off, CISOs and their teams will have to grapple with questions about how much autonomy these agents are granted. What happens when there is less and less human involvement?
“I think some people are going to say, ‘Oh, this is great. I’m going to believe everything the vendor said. I’m going to give it full autonomy,'” Grimes said. That could lead to operational interruption.
Additionally, as AI agents become more autonomous, they will be frequent targets of malicious actors. “In order to protect the human, you’re going to have to protect the AI agents that the human is using,” Grimes said.
The CISO’s AI wish list
For all the fevered predictions around AI, uncertainties remain for the future. CISOs must keep up with rapidly changing technology. As they press ahead, what do they need and want from the technology?
-
Operational efficiencies. It’s time for AI to deliver. CISOs and CIOs want AI-driven tools that have a measurable impact. “As we move forward, they’re going to expect to see more and more purpose-built capabilities [where] really you can easily measure operational efficiencies,” Knesek said.That will be true of AI across enterprise functions.
-
Faster security reviews. Many enterprises may look at rolling out several new technologies in 2026, a daunting prospect from a security perspective. A winning solution for automating essential security reviews has yet to emerge. “At a tactical level, that’s something that CISOs and CIOs really need to figure out,” Whitmore said. “That’s everything from the process piece of it to the actual technology that’s going to help them accelerate that.”
-
Trust. Customers will ask their vendors tougher questions in order to maintain trust. Companies are reluctant to pull the veil back on their AI models, lest they give up competitive advantage, but that often leaves customers with little more than assurances rather than concrete answers.
“I understand there’s a lot of IP involved, the way that these models are trained … but it’s very difficult to onboard these and have a very fulsome understanding and guarantee that what has been presented in our conversations, even privacy policies, et cetera, is truly happening behind the scenes,” said Chris Henderson, CISO at cybersecurity company Huntress.
-
Better governance. AI governance is going to be top of mind for CISOs in the new year. “That’s really where we’re struggling today because there’s only really two or three products out there that truly provide governance in an enterprise for your AI,” Pecha said.
There are plenty of frameworks available for responsible AI use, but simply checking off items on a list isn’t enough, he added.
“You can’t just go rely on a NIST framework that said an AI should be doing these things. That checklist is necessary, but then how do you put in an operational tool that validates what data the AI was trained [on]?” Pecha asked.
CISOs will need ways to show that AI tools their teams built internally and tools they bring in from third parties are secure and used responsibly, whether through ongoing audits or some kind of external certification.
-
More threat modeling. Grimes said he wants to see more threat modeling of AI, particularly as the use of agentic AI ramps up. Where are the vulnerabilities? What has been done to mitigate them? “The vendors that threat model are going to have safer tools and more trustworthy tools,” Grimes contended.
-
More nuance. AI systems excel at gathering information and simplifying decisions for humans, but they have yet to reach a place where they can stand in for human decision-making.
“At the end of the day, it’s still not as accurate as a human at determining should somebody be woken up at 2 in the morning because of an event that has triggered?” Henderson said. He added that he would like to see AI tools move beyond giving binary responses to offering more nuanced answers that include how certain they are of a decision or recommendation.
-
Midmarket solutions. Pecha said he hopes AI makes security more accessible for small- to medium-sized businesses. These businesses don’t have the security budgets that large enterprises do, but they remain part of the supply chain. “The biggest risk we have today is small, medium businesses are not served by the security community well. They don’t have resources. They don’t have knowledge, but AI could be the stopgap for that,” he said.
While the debate about AI and the future of jobs rages, there seems to be some expectation among CISOs that AI will be a tool to augment the capabilities of human cybersecurity teams rather than a technology to replace them entirely.
“I think they need extensions of their teams rather than replacements of their teams,” said Henderson. “If you look at AI as something that can enable your team to continue to scale without adding additional bodies as opposed to replacing bodies, it’s going to be the path to success.”

