
Rewriting infrastructure as code for the AI data center



Security oversights like that are far from theoretical. “Configs often miss security best practices,” says Novikov. “No rate limits, wide network exposure (0.0.0.0/0), missing resource limits, open CORS, and no auth on internal APIs.” In one real-world case, a fintech developer used AI to generate ingress for an internal API. “They forgot to add IP whitelisting. The API went public, got scanned in 20 minutes, and attackers found an old debug route.” 
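To make that failure mode concrete, here is a minimal Pulumi program sketch in Python (using the pulumi_aws provider) of the exposure Novikov describes. The resource names and the 203.0.113.0/24 source range are placeholders, and it uses an AWS security group for illustration rather than the Kubernetes ingress from the fintech incident.

import pulumi_aws as aws

# The pattern Novikov describes: an AI-generated rule that exposes an
# internal API to the entire internet.
open_sg = aws.ec2.SecurityGroup(
    "internal-api-open",
    description="Internal API - no source restriction (risky)",
    ingress=[aws.ec2.SecurityGroupIngressArgs(
        protocol="tcp",
        from_port=443,
        to_port=443,
        cidr_blocks=["0.0.0.0/0"],  # wide network exposure
    )],
)

# The restricted version the team intended: only an allowlisted
# VPN/office range (placeholder CIDR) can reach the API.
restricted_sg = aws.ec2.SecurityGroup(
    "internal-api-restricted",
    description="Internal API - allowlisted source range",
    ingress=[aws.ec2.SecurityGroupIngressArgs(
        protocol="tcp",
        from_port=443,
        to_port=443,
        cidr_blocks=["203.0.113.0/24"],  # hypothetical office/VPN range
    )],
)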

A cautious look ahead at AI and infrastructure 

As generative AI becomes more embedded in infrastructure workflows, its role is evolving. “One pattern we’re noticing across several mid-to-large scale orgs is this: AI is being used as a ‘first draft generator,’ but increasingly also as a decision-support tool,” says ControlMonkey’s Yemini. “Engineers aren’t just asking, ‘How do I write this AWS security group?’ they’re asking, ‘What’s the cleanest way to structure this VPC for future scale?’” He notes that these questions aren’t confined to early design stages; they come up mid-sprint, when real-world blockers hit. “From our perspective, the most successful orgs treat generative AI like an untrained junior engineer: useful for accelerating tasks, but requiring validation, structure, and access to internal standards.”

That need for human oversight was a recurring theme with everyone we spoke to. Microsoft’s Vegiraju puts it simply: “Engineers should first understand the code coming out of the LLM before using it.” At Confluent, Mehta emphasizes the importance of safeguards: “We need guardrails built into the system to prevent accidental breaking changes, be it due to human error or due to AI-generated changes.” She points to GitOps systems and peer-reviewed version control as ways to build accountability into the workflow. 
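As a rough illustration of the kind of automated guardrail Mehta is pointing to, the sketch below is a hypothetical CI check that fails a proposed change when a Terraform plan opens a security group to the internet. It assumes a JSON plan exported with terraform show -json; the plan.json filename and the policy itself are assumptions, not a description of Confluent’s setup.

import json
import sys

def open_ingress_rules(plan_path):
    """Yield (resource_address, cidr) pairs for security group ingress
    rules in a Terraform JSON plan that are open to 0.0.0.0/0."""
    with open(plan_path) as f:
        plan = json.load(f)
    for change in plan.get("resource_changes", []):
        if change.get("type") != "aws_security_group":
            continue
        after = (change.get("change") or {}).get("after") or {}
        for rule in after.get("ingress") or []:
            for cidr in rule.get("cidr_blocks") or []:
                if cidr == "0.0.0.0/0":
                    yield change.get("address", "<unknown>"), cidr

if __name__ == "__main__":
    violations = list(open_ingress_rules("plan.json"))
    for address, cidr in violations:
        print(f"BLOCKED: {address} allows ingress from {cidr}")
    # A non-zero exit fails the pipeline, so the change cannot merge unreviewed.
    sys.exit(1 if violations else 0)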
