Threat actors are using AI to launch more cyberattacks faster. Recently, they’ve employed autonomous AI to raise the bar even further, putting more businesses and people at risk.
And as more agentic models are rolled out, the malware threats will inevitably increase, putting CISOs and CIOs on alert to prepare.
“The increased throughput in malware is a real threat for organizations. So too is the phenomenon of deepfakes, automatically generated by AI from video clips online, or even from photographs, which are then used in advanced social engineering attacks,” says Richard Watson, EY global and Asia-Pacific cybersecurity consulting leader. “We are starting to see clients suffer these types of attacks.”
“With agentic AI, the ability for malicious code to be produced without any human involvement becomes a real threat,” Watson adds. “We are already seeing deepfake technology evolve at an alarming rate, comparing deepfakes from six months ago with those of today, with a staggering improvement in authenticity,” he says. “As this continues, the ability to discern whether the image on the video screen is real or fake will become increasingly harder, and ‘proof of human’ will become even more critical.”
Autonomous AI is a serious threat to organizations across the globe, according to Doug Saylors, partner and cybersecurity practice lead at global technology research and advisory firm ISG.
“As a new zero-day vulnerability is discovered, attackers [can] use AI to quickly develop multiple attack types and release them at scale,” says Saylors. Attackers are also using AI to analyze large-scale cybersecurity protections, look for patterns that can be exploited, and then develop the exploit, he adds.
How AI Attacks Can Get Worse
“I believe it will get worse as GenAI models become more commonly available and the ability to train them quickly improves. Nation-state adversaries are using this technology today, but when it becomes available to a larger group of bad actors, it will be substantially more difficult to protect against,” Saylors says. For example, common social engineering protections simply don’t work on GenAI-produced attacks because they don’t act like human attackers.
Though malicious tools like FraudGPT have existed for a while, Mandy Andress, CISO at search AI company Elastic, warns the new GhostGPT AI model is a prime example of the tools that help cybercriminals generate code and create malware at scale.
“Like any emerging technology, the impacts of AI-generated code will require new skills for cybersecurity professionals, so organizations will need to invest in skilled teams and deeply understand their company’s business model to balance risk decisions,” says Andress.
The threat to enterprises is already substantial, according to Ben Colman, co-founder and CEO at deepfake and AI-generated media detection platform Reality Defender.
“We’re seeing bad actors leverage AI to create highly convincing impersonations that bypass traditional security mechanisms at scale. AI voice cloning technology is enabling fraud at unprecedented levels, where attackers can convincingly impersonate executives in phone calls to authorize wire transfers or access sensitive information,” Colman says. Meanwhile, deepfake videos are compromising verification processes that previously relied on visual confirmation, he adds.
“These threats are primarily coming from organized criminal networks and nation-state actors who recognize the asymmetric advantage AI offers. They’re targeting communication channels first because they’re the foundation of trust in business operations.”
How Threats Are Evolving
Attackers are using AI capabilities to automate, scale, and disguise traditional attack methods. According to Casey Corcoran, field CISO at SHI company Stratascale, examples include creating more convincing phishing and social engineering attacks, and automatically modifying malware so that it is unique to each attack, thereby defeating signature-based detection.
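To illustrate that last point, here is a minimal, hypothetical sketch (not from any of the vendors quoted) of why per-sample mutation defeats hash-based signature matching: a defender comparing file digests against a known-bad list misses any sample whose bytes differ by even one byte, which is exactly what automated mutation produces.

```python
# Illustrative sketch: why hash-based signature matching fails against malware
# that is regenerated or mutated for every attack.
import hashlib

original = b"...payload bytes of a previously observed sample..."
mutated = original + b"\x00"  # a single extra byte is enough to change the digest

# Hypothetical signature list built from samples the defender has already seen.
KNOWN_BAD_HASHES = {hashlib.sha256(original).hexdigest()}

def matches_signature(payload: bytes) -> bool:
    """Return True if the payload's SHA-256 digest is on the known-bad list."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

print(matches_signature(original))  # True: these exact bytes were seen before
print(matches_signature(mutated))   # False: the unique variant slips past the signature
```

Detection that keys on behavior or anomalies rather than exact bytes is one reason the experts below emphasize AI-assisted monitoring on the defensive side.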
“As AI technology continues to advance, we are sure to see more evasive and adaptive attacks such as deepfake image and video impersonation, AI-guided automated complex attack vector chains, or even the ability to create financial and social profiles of target organizations and personnel at scale to target them more accurately and effectively for and with social engineering attacks,” says Corcoran. An emerging threat is AI-enhanced botnets that will be able to coordinate attacks to challenge DDoS prevention and protection capabilities, he adds.
How CIOs and CISOs Can Better Protect the Organization
Organizations need to embrace “AI for Cyber,” using AI, particularly in threat detection and response, to identify anomalies and indicators of compromise, according to EY’s Watson.
“New technologies should be deployed to monitor data in motion more closely, as well as to better classify data to enable it to be protected,” says Watson. Organizations that have invested in security awareness and are moving accountability for certain cyber risks out of IT and into the business are the ones who stand to be better protected in the age of generative AI, he adds.
As cybercriminals evolve their tactics, organizations must be adaptable and agile, and ensure they are following security fundamentals.
“Security teams that have full visibility into their assets, enforce proper configurations, and stay up to date on patches can mitigate 90% of threats,” says Elastic’s Andress. “While it may seem contradictory, AI-powered tools can take this one step further, providing self-healing capabilities and helping security teams proactively address emerging risks.”
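As a rough illustration of what that hygiene looks like in practice, the sketch below, with a hypothetical inventory and baseline (nothing here comes from Elastic), flags assets running software below the minimum patched versions a security team has approved.

```python
# Minimal sketch with a hypothetical inventory and baseline: flag assets running
# software below the minimum patched version the security team has approved.

# Asset inventory: hostname -> {package: installed version tuple}
inventory = {
    "web-01": {"openssl": (3, 0, 7), "nginx": (1, 24, 0)},
    "db-01":  {"openssl": (1, 1, 1), "postgres": (15, 3)},
}

# Baseline: minimum acceptable versions after the latest advisories (illustrative).
baseline = {"openssl": (3, 0, 7), "nginx": (1, 24, 0), "postgres": (15, 4)}

def find_outdated(inventory, baseline):
    """Yield (host, package, installed, required) for anything below the baseline."""
    for host, packages in inventory.items():
        for package, installed in packages.items():
            required = baseline.get(package)
            if required and installed < required:
                yield host, package, installed, required

for host, package, installed, required in find_outdated(inventory, baseline):
    print(f"{host}: {package} {installed} is below required {required}")
```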
Reality Defender’s Colman believes the best protection strategy is a layered defense that combines technological solutions with human judgment and organizational protocols.
“Critical communication channels need consistent verification methods, whether automated or manual, with clear escalation paths for suspicious interactions,” says Colman. Security teams should establish processes that adapt to emerging threats and regularly test their resilience against new AI capabilities rather than relying on static defenses.
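One layer of such a defense could look like the hypothetical policy check below: high-value requests that arrive over a single channel, such as a phone call that could be an AI voice clone, are not acted on until they are confirmed out of band, and unconfirmed requests follow an escalation path. The threshold, names, and stubbed confirmation step are illustrative only.

```python
# Hypothetical "verify before you act" control for high-value requests, such as a
# wire transfer requested on a call that could be an AI voice clone.
from dataclasses import dataclass

HIGH_VALUE_THRESHOLD_USD = 10_000  # illustrative policy threshold

@dataclass
class Request:
    requester: str
    channel: str       # channel the request arrived on, e.g. "phone"
    amount_usd: float

def confirmed_out_of_band(request: Request) -> bool:
    """Placeholder: in practice, a callback to a number already on file or an
    approval recorded in an internal workflow tool, never the same channel
    the request arrived on."""
    return False  # stub: nothing is confirmed yet

def handle(request: Request) -> str:
    """Low-value requests proceed; high-value ones need second-channel confirmation,
    otherwise they follow the escalation path to a human reviewer."""
    if request.amount_usd < HIGH_VALUE_THRESHOLD_USD:
        return "process"
    if confirmed_out_of_band(request):
        return "process"
    return "escalate to security team"

print(handle(Request("cfo@example.com", "phone", 250_000)))  # -> escalate to security team
```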
Stratascale’s Corcoran says well-resourced organizations are best served by leveraging AI across vendor products and services to stitch telemetry and response together. They also need to focus on cyber hygiene.
Organizations should ensure they protect their people and give them the tools, processes and training needed to combat social engineering traps, Corcoran says. “AI-enhanced automated vulnerability exploitation only works if there are vulnerabilities,” he adds. “Shoring up vulnerability and patch management programs, and pen-testing for unknown gaps will go a long way toward defending against these types of attacks.”
Finally, Corcoran recommends a zero-trust mindset that narrows the aperture of access any attack can achieve, regardless of the sophistication of AI-enabled tactics and techniques.
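A minimal sketch of that narrowing, with hypothetical identities and scopes: every request is checked against an explicit grant and denied by default, so a stolen credential or AI-driven intrusion only reaches what that one grant allows.

```python
# Minimal sketch with hypothetical identities and scopes: deny by default, and allow
# only explicitly granted (resource, action) pairs, so any single compromised
# credential reaches only what its own grant allows.
GRANTS = {
    "svc-reporting": {("billing-db", "read")},
    "svc-deploy":    {("build-artifacts", "read"), ("prod-cluster", "deploy")},
}

def is_allowed(identity: str, resource: str, action: str) -> bool:
    """Check every request against its explicit grant; anything else is denied."""
    return (resource, action) in GRANTS.get(identity, set())

print(is_allowed("svc-reporting", "billing-db", "read"))   # True
print(is_allowed("svc-reporting", "billing-db", "write"))  # False: outside the grant
```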
ISG’s Saylors recommends continuous vigilance of an organization’s perimeter using attack surface management (ASM) platforms, and the adoption and maintenance of defense-in-depth strategies.
Common Mistakes to Avoid
One big mistake is believing generative AI is nowhere in the organization yet, when employees are already using open-source models. Another is believing autonomous threats aren’t real.
“Companies often get a false sense of security because they have a SOC, for example, but if the technology in the SOC has not been refreshed in the last three years, the chances are it’s out of date and you are missing attacks,” EY’s Watson says. “[You should] conduct a thorough capability review of your security operations function and identify the highest priority use cases for your organization to leverage AI in cyber defense.”
Over-reliance on point solutions, regardless of their capabilities, leads to blind spots that adversaries can exploit using AI-enhanced techniques.
“Defending against AI-based threats, like any other, requires a system-of-systems approach that involves integrating multiple independent threat detection and response capabilities and processes to create more complex and capable defenses,” says Corcoran. Organizations should have a risk and controls assessment done with an eye on AI-enhanced threats. An independent assessor who isn’t bound to any technology or framework will be best positioned to help identify weaknesses in an organization’s defenses and recommend solutions spanning processes and technology.
Elastic’s Andress says companies often underestimate the severity of AI-enabled threats and don’t invest in the proper tools or protocols to identify and protect against potential risks.
“Having the right guardrails in place and understanding the overall threat landscape, while also properly training employees, allows companies to anticipate and address threats before they impact the business,” says Andress. “Threats don’t wait for companies to be ready. Leaders must be prepared with the proper defenses to identify and mitigate risks quickly.” Security teams can also leverage GenAI, she adds. “It gives us an ability to be proactive, better understand the content of our environments, and anticipate what threat actors can do.”
Aditya Saxena, founder at no-code chatbot builder Pmfm.ai, says organizations are unnecessarily creating vulnerabilities by relying more on AI-generated code and implementing it without review.
“LLMs aren’t infallible, and we risk inadvertently introducing vulnerabilities that could take down systems at scale,” says Saxena. Bad actors could also train models to subtly introduce vulnerabilities. “For example, we could have a version of DeepSeek that intentionally corrupts the code while still making it work,” he adds.
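A contrived example of the kind of subtle flaw Saxena is describing, and that review of generated code (human or automated) should catch: a query assembled by string interpolation works in testing but is injectable, while the parameterized version is not. The schema and function names are hypothetical.

```python
# Contrived example of a subtle flaw that passes a happy-path test but is exploitable:
# a generated query built by string interpolation, next to the parameterized fix.
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Looks correct and "works," but an attacker-controlled username can rewrite the query.
    return conn.execute(
        f"SELECT id, email FROM users WHERE username = '{username}'"
    ).fetchone()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles quoting, closing the injection path.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchone()
```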
“Up until last year, we were mostly using AI as an assistant to speed up the work, but lately, as agentic AI becomes more common, we could be inadvertently trusting software, like Devin, with sensitive information, such as API keys or company secrets, to take over end-to-end development and deployment processes.”
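One hedged precaution for that scenario is to keep raw secrets out of anything handed to an external agent in the first place, for example by redacting obvious credential patterns before context leaves the organization and issuing short-lived, narrowly scoped tokens instead of long-lived keys. The patterns below are illustrative, not a complete secret-scanning solution.

```python
# Illustrative sketch: strip obvious credential patterns from text before sending
# it to an external agent or LLM. These patterns are examples only.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                             # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),  # key=value style secrets
]

def redact_secrets(text: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

context = "Deploy with API_KEY=sk-live-1234 and bucket key AKIAABCDEFGHIJKLMNOP"
print(redact_secrets(context))
# -> Deploy with [REDACTED] and bucket key [REDACTED]
```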
The biggest mistake companies can make is underestimating the evolving nature of threats or relying on outdated security measures, says Amit Chadha, CEO at L&T Technology Services (LTTS).
“Our advice is clear: Adopt a proactive and cybersecure AI-driven approach, invest in critical infrastructure and threat intelligence tools, and collaborate with trusted technology partners to build a resilient digital ecosystem,” says Chadha. “But the most important factor is the human element as [most] cybercrimes happen due to human errors and mistakes. So workshops must be conducted for all employees to educate them on cybercrime prevention and ensure they do not become the unwitting agents of a leak or data breach. In this case, prevention is the cure.”
ISG’s Saylors warns that organizations are not prioritizing basic maintenance of their cybersecurity stack or taking basic precautions, such as running vulnerability management (VM) scans and promptly patching at least the critical issues.
“We have seen multiple examples of very large companies that are months to years behind on patching because ‘the apps team won’t let us do it,’ or they are running N-3 versions of software because it is too hard to upgrade,” says Saylors. “Those are the organizations that have already been hacked. AI attacks will just increase the speed and severity of the damage if they become a serious target.”
He also thinks boards of directors should be educated on the continually advancing nature of cyberattacks being generated by AI and GenAI platforms.
“The board of directors has the responsibility to prioritize funding for cyber transformation,” says Saylors. “Start a quantum resiliency plan now, and ensure you have multiple copies of immutable backups.”
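On the immutable-backup point, one common approach (a sketch under assumptions, not Saylors’ specific prescription) is object storage with write-once retention, such as S3 Object Lock in compliance mode. The bucket must have been created with Object Lock enabled, and the bucket and key names here are hypothetical.

```python
# Sketch under assumptions: immutable backup copies via S3 Object Lock in COMPLIANCE
# mode. The bucket must have been created with Object Lock enabled; names are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

def store_immutable_backup(bucket: str, key: str, data: bytes, retain_days: int = 90) -> None:
    """Write a backup object that cannot be overwritten or deleted until the
    retention date passes, even by account administrators (COMPLIANCE mode)."""
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=data,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=retain_days),
    )

# Example call (hypothetical names):
# store_immutable_backup("backups-immutable", "db/2025-01-01.dump", dump_bytes)
```

Keeping several such copies in separate accounts or regions lines up with the “multiple copies” advice above.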