“Developers should also investigate the myriad of tools available to find those that work and consider how to fill the gaps with those that don’t,” says Gabriel. This will require both individual and organizational investment, he adds.
Looking to the future, many anticipate open source leading further AI democratization. “I expect we’ll see a lot more open-source models emerge to address specific use cases,” says David DeSanto, chief product officer at GitLab.
Governance around AI usage
Enhancing developers’ confidence in AI-generated code will also rely on setting guardrails for responsible usage. “With the appropriate guardrails in place to ensure responsible and trusted AI outputs, businesses and developers will become more comfortable starting with AI-generated code,” says Salesforce’s Fernandez.
To get there, leadership must establish clear directions. “Ultimately, it’s about setting clear boundaries for those with access to AI-generated code and putting it through stricter processes to build developer confidence,” says Durkin.
“Ensuring transparency in model training data helps mitigate ethical and intellectual property risks,” says Morgan Stanley’s Gopi. Transparency is just as crucial on the output side. “Having no hold on AI output is critical for advancing AI code generation as a whole,” says GitLab’s DeSanto, who references GitLab Duo’s transparency commitment regarding its underlying models and usage of data.
For security-conscious organizations, on-premises AI may be the answer to avoiding data privacy issues. Running self-hosted models in air-gapped, offline deployments allows AI to be used in regulated environments while maintaining data security, says DeSanto.
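To make the idea concrete, the sketch below shows one minimal way such an offline deployment could look, assuming an open-weight model has already been copied onto local storage and the Hugging Face transformers library (with a PyTorch backend) is installed inside the air-gapped environment. The model directory and prompt are hypothetical; this is not a description of any particular vendor’s product.

```python
import os

# Keep every call local: these environment variables tell the Hugging Face
# libraries never to reach out to the network (illustrative of an air-gapped setup).
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical directory where an open-weight code model was copied in offline.
MODEL_DIR = "/models/local-code-model"

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

# Generate a code completion entirely on local hardware.
prompt = "# Python function that validates an email address\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model weights, prompts, and generated code never leave local infrastructure, source code stays inside the regulated boundary.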
Striking a balance between human and AI
All experts interviewed for this piece believe AI will assist developers rather than replace them wholesale. In fact, most view keeping developers in the loop as imperative for retaining code quality. “For now, human oversight remains essential when using AI-generated code,” says Digital.ai’s Kentosh.
“Building applications will mostly remain in the hands of the creative professionals using AI to supplement their work,” says SurrealDB’s Hitchcock. “Human oversight is absolutely necessary and required in the use of AI coding assistants, and I don’t see that changing,” adds Zhao.
Why? Partly, it’s the ethical challenges. “Complete automation remains unattainable, as human oversight is critical for addressing complex architectures and ensuring ethical standards,” says Gopi. That said, AI reasoning is expected to improve. According to Wilson, the next phase is AI “becoming a legitimate engineering assistant that doesn’t just write code, but understands it.”
Others are even more bullish. “I think that the most valuable AI-driven systems will be those that can be handed over to AI coding entirely,” says Contentful’s Gabriel, although he acknowledges this is not yet a consistent reality. For now, the prevailing outlook still has AI and humans working side by side. “Developers will become more supervisors rather than writing every line of code,” says Perforce’s Cope.
The end goal is striking the right balance between productivity gains from AI and avoiding over-reliance. “If developers rely too heavily on AI without a solid understanding of the underlying code, we risk losing creativity and technical depth, which are crucial for innovation,” says Kentosh.
Wild ride ahead
Amazon recently claimed its AI assistant upgraded its internal Java applications, saving an estimated $260 million. Others are under pressure to prove similar results. “Most companies have made an investment in some type of AI-assisted development service or copilot at this point and will need to see a return on their investment,” says Kentosh.
Whatever the mix of drivers, AI adoption continues to accelerate. “Most every developer I know is using AI in some capacity,” adds Thacker. “For many of them, AI is writing the majority of the code they produce each day.”
Yet, while AI eliminates repetitive tasks effectively, it still requires human intervention to carry the work the last mile. “The majority of code bases are boilerplate and repeatable,” says Crowdbotics’ Hymel. “We’ll see AI being used to lay 51%+ of the ‘groundwork’ of an application that is then taken over by humans to complete.”
The bottom line? “AI-generated code isn’t great—yet,” says Wilson. “But if you’re ignoring it, you’re already behind. The next 12 months are going to be a wild ride.”