What are the dos and don’ts of prompting AI code generators?
Top devops teams create prompt knowledge bases to teach best practices and illustrate how to improve AI-generated code iteratively. Below are some recommendations for prompting code generators, followed by a sketch of what a well-structured prompt can look like.
- Michael Kwok, Ph.D., VP of IBM watsonx Code Assistant and IBM Canada lab director, says, “When prompting AI, be clear and specific, avoid vagueness, and refine iteratively. Always review AI code for correctness, validate against requirements, and run tests.”
- Whiteley, CEO of Coder, suggests, “The best developers approach a prompt by fully understanding the problem and required outcome before engaging genAI-assisted tools. The wrong prompt could result in more time troubleshooting than it’s worth.”
- Reddy of PagerDuty says, “Prompting is becoming one of the most important core engineering skills in 2025. The best prompts are clear, iterative, and constrained. Prompting well is the new debugging—it reveals your clarity of thought.”
- Rahul Jain, CPO at Pendo, says, “Whether you’re a senior developer validating prototypes or a junior developer experimenting with prompts, the key is grounding AI output in real-world usage data and rigorous testing. The future of development lies in pairing AI with deep product insight to ensure what gets shipped actually delivers value.”
- Karen Cohen, director of product management at Apiiro, says, “Developers should treat AI output as untrusted input—crafting precise prompts, avoiding vague requests, and enforcing deep reviews beyond basic scans.”
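To make that advice concrete, here is a minimal sketch, in Python, of how a team might template the clear, specific, constrained prompts these leaders describe. The function and field names are illustrative assumptions, not part of any particular tool; the point is that the prompt states the task, the context, the constraints, and the definition of done before any code is requested.

```python
def build_codegen_prompt(
    task: str,
    context: str,
    constraints: list[str],
    acceptance_criteria: list[str],
) -> str:
    """Assemble a clear, specific, constrained code-generation prompt.

    All names here are hypothetical; the structure is the point:
    state the problem and the definition of done up front.
    """
    parts = [
        f"Task: {task}",
        "",
        "Context:",
        context,
        "",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "",
        "Acceptance criteria (the generated code must satisfy all of these):",
        *[f"- {c}" for c in acceptance_criteria],
        "",
        "Return only the code, with brief comments on nontrivial choices.",
    ]
    return "\n".join(parts)


# Example usage: a specific task with explicit constraints beats
# a vague one-liner like "write me a date validator."
prompt = build_codegen_prompt(
    task="Write a Python function that validates ISO 8601 date strings.",
    context="Python 3.11 service; no third-party dependencies allowed.",
    constraints=[
        "Standard library only",
        "Raise ValueError on invalid input",
    ],
    acceptance_criteria=[
        "Accepts '2025-01-31'",
        "Rejects '2025-13-01'",
    ],
)
print(prompt)
```

Prompts templated this way are also easier to refine iteratively: when the output misses, the team tightens a constraint or adds an acceptance criterion rather than rewriting the request from scratch.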
How should developers review and test AI-generated code?
Developers should not incorporate AI-generated code directly into their codebases without validating and testing it. AI can generate code faster than developers can, but it is less likely to have the full context of business needs, end-user expectations, data governance rules, non-functional acceptance criteria, devsecops non-negotiables, and other compliance requirements.
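As a minimal illustration of that validation step, the pytest sketch below treats a hypothetical AI-generated function as untrusted until it passes tests that encode the real requirements. The `slugify` function and the business rule are invented for the example; the pattern, not the specifics, is what transfers.

```python
# test_slugify.py -- validating AI-generated code before it lands.
import pytest


def slugify(title: str) -> str:
    """Pretend this body came straight from a code generator."""
    return "-".join(title.lower().split())


def test_basic_slug():
    assert slugify("Hello World") == "hello-world"


def test_collapses_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"


@pytest.mark.parametrize("bad", ["", "   "])
def test_rejects_empty_titles(bad):
    # A business rule the generator had no context for: empty titles
    # are invalid. This test fails against the body above, which is
    # exactly the gap the review is meant to surface before merge.
    with pytest.raises(ValueError):
        slugify(bad)
```

Running the suite makes the missing rule visible immediately, and the fix (or a regenerated function) has a test waiting for it.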
“Developers should review AI-generated code for adherence to coding standards, security considerations, and overall code quality,” says Edgar Kussberg, group product manager at Sonar. “Tools like static analyzers, when used from the very beginning of the SDLC, will check the code directly from the IDE and will help avoid code quality issues from slipping into the code. Development teams should also consider integrating security practices such as SAST [static application security testing] into the code generation process, conducting regular security assessments, and leveraging automated security tools to identify and address manual and AI-generated code vulnerabilities.”
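One way to wire that into the pipeline, sketched below, is a small pre-merge script that runs a SAST tool over the Python files changed in a pull request and fails the build on findings. Bandit is used here as a stand-in for whatever analyzer a team has standardized on, and the diff base is an assumption; adapt both to your own CI.

```python
#!/usr/bin/env python3
"""A minimal sketch of gating merges on SAST findings.

Assumes Bandit (https://github.com/PyCQA/bandit) is installed and the
script runs from the repository root in CI; swap in your own analyzer
and diff base as needed.
"""
import os
import subprocess
import sys


def changed_python_files(base: str = "origin/main") -> list[str]:
    # List Python files changed relative to the target branch,
    # skipping any that were deleted in the change.
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "--", "*.py"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f and os.path.exists(f)]


def main() -> int:
    files = changed_python_files()
    if not files:
        print("No Python changes to scan.")
        return 0
    # -ll limits the report to medium severity and above; Bandit exits
    # nonzero when it finds issues, which fails the CI job.
    result = subprocess.run(["bandit", "-ll", *files])
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```

Running the same scan from the IDE, as Kussberg suggests, keeps the CI gate from being the first place developers hear about a problem.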