Generative AI tools have quickly become indispensable for software development, providing high-octane fuel to accelerate the production of functional code and, in some cases, even helping improve security. But the tools also introduce serious risks to enterprises faster than chief information security officers and their teams can mitigate them.
Governments are striving to put in place legislation and policies governing the use of AI, from the relatively comprehensive EU Artificial Intelligence Act to regulatory efforts in at least 54 countries. In the U.S., AI governance is being addressed at the federal and state levels, and President Donald Trump’s administration is also promoting extensive investment in AI development.
But the gears of government grind slower than the pace of AI innovation and its adoption throughout business. As of June 27, for example, state legislatures had introduced some 260 AI-related bills during the 2025 legislative sessions, but only 22 had passed, according to research by the Brookings Institution. Many of the proposals are also narrowly targeted, addressing infrastructure or training, deepfakes or transparency. Some are designed only to elicit voluntary commitments from AI companies.
With the tangle of global AI laws and regulations evolving almost as fast as the technology itself, companies that wait to be told how to act on potential security pitfalls only increase their risk. They need to understand how to safeguard both the codebase and end users from potential cyber crises.
CISOs need to create their own AI governance frameworks to make the best, safest use of AI and to protect themselves from financial losses and liability.
The risks grow with AI-generated code
The reasons for AI’s rapid growth in software development are easy to see. In Darktrace’s 2025 State of AI Cybersecurity report, 88% of the 1,500 respondents said they are already seeing significant time savings from using AI, and 95% said they believe AI can improve the speed and efficiency of cyber defense. Not only do the vast majority of developers prefer using AI tools, but many CEOs are also beginning to mandate their use.
As with any powerful new technology, however, the other shoe is bound to drop, and it could have a significant impact on enterprise risk. The increased productivity of generative AI tools brings with it an increase in familiar flaws, such as authentication errors and misconfigurations, as well as a new wave of AI-borne threats, such as prompt injection attacks. And the potential for problems could get even worse.
Recent research by Apiiro found that AI tools have increased development speeds by three to four times, but they have also increased risk tenfold. Although AI tools have cleaned up relatively minor mistakes, such as syntax errors (down by 76%) and logic bugs (down by 60%), they are introducing bigger problems. For example, privilege escalation flaws, which allow an attacker to gain higher levels of access, increased by 322%, and architectural design problems jumped by 153%, according to the report.
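To make that category concrete, here is a minimal, hypothetical sketch of the kind of privilege-escalation flaw that can slip into AI-assisted code when it is accepted without review: an endpoint that trusts a role value supplied by the client instead of one derived from server-side state. The framework, route names and fields below are illustrative assumptions, not examples drawn from the Apiiro research.

# Hypothetical sketch of a privilege-escalation pattern sometimes seen in
# generated code: the caller, not the server, decides its own privileges.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

# Insecure pattern: an attacker can simply send {"role": "admin"}.
@app.route("/admin/delete-user", methods=["POST"])
def delete_user_insecure():
    payload = request.get_json(force=True)
    if payload.get("role") == "admin":  # attacker-controlled field
        return jsonify(deleted=payload["user_id"])
    abort(403)

# Safer pattern: privileges come from state the server controls. The lookup
# below is a placeholder standing in for real session or token validation.
def current_user_role() -> str:
    return "user"

@app.route("/v2/admin/delete-user", methods=["POST"])
def delete_user_checked():
    if current_user_role() != "admin":
        abort(403)
    payload = request.get_json(force=True)
    return jsonify(deleted=payload["user_id"])

A security-focused review catches a flaw like this in seconds; the danger is the volume at which generated code can now reach a pull request.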
CISOs are aware that risks are mounting, but not all of them are sure how to handle them. In Darktrace’s report, 78% of CISOs said they believe AI is affecting cybersecurity. Most said they’re better prepared than they were a year ago, but 45% admitted they are still not ready to address the problem.
It’s time for CISOs to implement essential guardrails to mitigate the risks of AI use and establish governance policies that can endure, regardless of which regulatory requirements emerge from the legislative pipelines.
Secure AI use starts with the SDLC
For all the benefits it provides in speed and functionality, AI-generated code is not deployment-ready. According to BaxBench, 62% of code created by large language models (LLMs) is either incorrect or contains a security vulnerability. Veracode researchers studying more than 100 LLMs found that 45% of functional code is insecure, while researchers at Cornell University determined that about 30% contains security vulnerabilities spanning 38 different Common Weakness Enumeration categories. A lack of visibility into and governance over how AI tools are used creates serious risks for enterprises, leaving them open to attacks that result in data theft, financial loss and reputational damage, among other consequences.
Since the weaknesses associated with AI development stem from the quality of the code it generates, enterprises need to incorporate governance into the software development lifecycle (SDLC). A platform (as opposed to point solutions) that focuses on the key issues facing AI software development can help organizations gain control over this ever-accelerating process.
The features of such a platform should include:
Observability: Enterprises should have clear visibility into AI-assisted development. They should know which developers are using which LLMs and in which codebases they are working. Deep visibility also helps curb shadow AI, in which employees use unapproved tools.
Governance: Organizations need a clear idea of how AI is being used and by whom, which requires clear governance policies. Once those policies are in place, a platform can automate their enforcement to ensure that developers using AI meet secure coding standards before their work is accepted for production use (see the sketch after this list).
Risk metrics and benchmarking: Benchmarks can establish the skill levels developers need to create secure code and review AI-generated code, and can measure developers’ progress in training and how well they apply those skills on the job. An effective strategy would include mandatory security-focused code reviews for all AI-assisted code, establishing secure coding proficiency benchmarks for developers and selecting only approved, security-vetted AI tools. Connecting AI-generated code to developer skill levels, the vulnerabilities produced and actual commits enables organizations to understand the true level of security risk being introduced and to keep that risk to a minimum.
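As a rough illustration of what automated policy enforcement could look like, the sketch below imagines a pre-merge gate that applies extra scrutiny to AI-assisted changes. The "AI-Assisted: true" commit trailer, the scan_results.json file and the severity thresholds are assumptions a team would define for itself, not features of any particular product.

"""Minimal sketch of an automated policy gate for AI-assisted changes.

Assumptions: the team tags AI-assisted commits with an "AI-Assisted: true"
trailer, and an earlier pipeline stage writes SAST findings to
scan_results.json as a list of {"severity": ...} records.
"""
import json
import subprocess
import sys

SCAN_RESULTS = "scan_results.json"
BLOCKING_SEVERITIES = {"critical", "high"}


def commit_is_ai_assisted(rev_range: str = "origin/main..HEAD") -> bool:
    """Return True if any commit in the range carries the agreed trailer."""
    log = subprocess.run(
        ["git", "log", "--format=%B", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    return "AI-Assisted: true" in log


def blocking_findings() -> list:
    """Return scan findings severe enough to block the merge."""
    with open(SCAN_RESULTS, encoding="utf-8") as fh:
        findings = json.load(fh)
    return [f for f in findings if f.get("severity", "").lower() in BLOCKING_SEVERITIES]


def main() -> int:
    if not commit_is_ai_assisted():
        return 0  # the gate only targets AI-assisted changes
    blockers = blocking_findings()
    if blockers:
        print(f"Policy gate: {len(blockers)} high/critical findings in an AI-assisted change.")
        return 1
    print("Policy gate: AI-assisted change passed the security scan.")
    return 0


if __name__ == "__main__":
    sys.exit(main())

In practice the same gate could also require sign-off from a designated security reviewer; the point is that the policy is enforced by the pipeline rather than by memory or goodwill.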
There’s no turning back from AI’s growing role in software development, but it doesn’t have to be a reckless charge toward greater productivity at the expense of security. Enterprises can’t afford to take that risk. Government regulations are taking shape, but given the pace of technological advancement, they will likely always be a bit behind the curve.
CISOs, with the support of executive leadership and an AI-focused security platform, can take matters into their own hands by implementing seamless AI governance and observability of AI tool use, while providing learning pathways that build security proficiency among developers. It’s all very possible, but they need to take those steps now to ensure that innovation doesn’t outpace cybersecurity.

