# AI Tools for DevOps - Footguns
Things that will burn you if you're not careful.
## 1. The Overly Permissive IAM Policy

AI tools love generating IAM policies with `"Action": "*"` or `"Resource": "*"`. Every single time you generate an IAM policy, check it for least privilege. The AI will optimize for "it works," not "it's secure."
AI generated this. Looks fine, right?

```json
{
  "Effect": "Allow",
  "Action": "s3:*",
  "Resource": "*"
}
```

What it should be:

```json
{
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:PutObject"],
  "Resource": "arn:aws:s3:::my-bucket/*"
}
```
## 2. The Hallucinated Terraform Argument

AI tools confidently generate Terraform resource arguments that don't exist or have been deprecated. The code looks valid, `terraform validate` might even pass, but `terraform plan` will fail. Always check the provider docs for the resource you're creating.

Common hallucinations:

- Made-up resource argument names
- Arguments from the wrong provider version
- Arguments that exist on a different resource type
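You can also check arguments against the schema your locked provider actually ships: `terraform providers schema -json` emits every attribute each resource supports. The sketch below diffs a set of used argument names against that output - the nested key path matches the command's documented structure, but treat it as an assumption to verify against your Terraform version:

```python
import json

# Sketch: detect hallucinated resource arguments by diffing them against
# the real provider schema from `terraform providers schema -json`.

def unknown_arguments(schema_json: str, provider: str,
                      resource_type: str, used_args: set[str]) -> set[str]:
    """Return the arguments that the provider schema does not know about."""
    schema = json.loads(schema_json)
    block = (schema["provider_schemas"][provider]
             ["resource_schemas"][resource_type]["block"])
    known = set(block.get("attributes", {})) | set(block.get("block_types", {}))
    return used_args - known

# Toy schema standing in for real `terraform providers schema -json` output.
fake_schema = json.dumps({
    "provider_schemas": {
        "registry.terraform.io/hashicorp/aws": {
            "resource_schemas": {
                "aws_s3_bucket": {
                    "block": {"attributes": {"bucket": {}, "tags": {}}}
                }
            }
        }
    }
})

print(unknown_arguments(
    fake_schema,
    "registry.terraform.io/hashicorp/aws",
    "aws_s3_bucket",
    {"bucket", "enable_encryption"},  # second one is hallucinated
))
```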
3. The "It Works" Security Group¶
# AI generated this to "fix" a connectivity issue
ingress {
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
Yes, it fixes the connectivity issue. It also opens every port to the entire internet.
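This pattern is also easy to catch mechanically. A minimal sketch, with rule dicts mirroring the HCL fields above - the 1024-port threshold is an arbitrary illustrative cutoff, not a standard:

```python
import ipaddress

# Sketch: flag ingress rules that are open to the whole internet
# across a wide port range.

def is_wide_open(rule: dict) -> bool:
    """True if the rule exposes a wide port range to 0.0.0.0/0 (or ::/0)."""
    world = any(
        ipaddress.ip_network(cidr).prefixlen == 0  # a /0 means "everyone"
        for cidr in rule.get("cidr_blocks", [])
    )
    wide_range = (rule["to_port"] - rule["from_port"]) >= 1024
    return world and wide_range

bad = {"from_port": 0, "to_port": 65535, "protocol": "tcp",
       "cidr_blocks": ["0.0.0.0/0"]}
ok = {"from_port": 443, "to_port": 443, "protocol": "tcp",
      "cidr_blocks": ["10.0.0.0/16"]}
print(is_wide_open(bad), is_wide_open(ok))  # True False
```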
## 4. Pasting Secrets Into Prompts

It's 2am, you're debugging a connection issue, and you paste the entire `.env` file into ChatGPT without thinking. Now your database password is sitting in the provider's logs - and depending on your data settings, possibly in future training data. Always sanitize before pasting.
Before pasting, search for: API keys, passwords, tokens, connection strings with credentials, private keys, AWS access keys.
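That search can be a script. The patterns below are illustrative and far from exhaustive - a dedicated scanner like gitleaks or trufflehog covers much more - but even this catches the obvious cases:

```python
import re

# Sketch: scan a snippet for obvious credentials before pasting it
# into an AI chat. Patterns are illustrative, not exhaustive.

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "password assignment": re.compile(r"(?i)\b(password|passwd|pwd)\s*[=:]\s*\S+"),
    "bearer token": re.compile(r"(?i)\bbearer\s+[a-z0-9._\-]{20,}"),
    "connection string": re.compile(r"\b\w+://[^/\s:]+:[^@\s]+@"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of the secret patterns found in text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

env_file = "DB_URL=postgres://app:hunter2@db.internal:5432/app\nDEBUG=true"
print(find_secrets(env_file))  # ['connection string']
```

If this returns anything, redact before you paste.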
## 5. The Auto-Apply Trap

Never do this: pipe AI-generated infrastructure changes straight through `terraform apply -auto-approve`.

Always do this: run `terraform plan`, read the full diff, and apply only after a human has reviewed it.
## 6. Trusting AI-Generated Kubernetes RBAC

AI will happily generate a ClusterRole with `resources: ["*"]` and `verbs: ["*"]`. This is the Kubernetes equivalent of `chmod 777`. Always scope RBAC to specific resources, verbs, and namespaces.
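A sketch of that scoping check, operating on the dict you'd get from `yaml.safe_load()` of a manifest - field names follow the `rbac.authorization.k8s.io/v1` schema:

```python
# Sketch: flag wildcard grants in a parsed Kubernetes RBAC manifest.

def rbac_wildcards(role: dict) -> list[str]:
    """Return a finding for each wildcard in each rule's key fields."""
    findings = []
    for i, rule in enumerate(role.get("rules", [])):
        for field in ("apiGroups", "resources", "verbs"):
            if "*" in rule.get(field, []):
                findings.append(f"rule {i}: wildcard in {field}")
    return findings

cluster_role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRole",
    "metadata": {"name": "ai-generated"},
    "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}],
}
print(rbac_wildcards(cluster_role))  # three findings, one per field
```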
## 7. The Stale Dependency

AI training data has a cutoff date. It might suggest:

- Deprecated Python packages
- Old Terraform provider syntax
- Vulnerable npm packages
- Outdated Kubernetes API versions (`extensions/v1beta1`)
Always check that suggested dependencies are current and not known-vulnerable.
## 8. Copy-Paste Without Understanding
The most dangerous footgun: using AI-generated code you don't understand. If you can't explain what every line does, you can't debug it when it breaks at 3am. AI is for acceleration, not abdication.
Rule: If you can't explain it to a colleague, don't deploy it.
## 9. The Confidently Wrong Answer

AI tools rarely say "I don't know." They'll give you a confident, well-structured, completely wrong answer. This is especially dangerous for:

- Network debugging (AI guesses at topology it can't see)
- Performance tuning (AI suggests generic optimizations that may not apply)
- Security advice (AI may miss context-specific attack vectors)
Mitigation: For critical decisions, verify AI suggestions against official docs or test in a safe environment.
## 10. Context Window Overflow
When you paste a huge Terraform state file or a 500-line error log, the AI may silently drop important context. You'll get an answer that addresses part of the problem but misses the actual root cause buried in the truncated portion.
Fix: Extract the relevant sections. Give the AI the error message, the specific resource, and the relevant config - not the entire state file.
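That extraction step can be scripted too. A sketch that trims a long log to just the lines around likely error markers - the keyword list and context window size are arbitrary starting points:

```python
# Sketch: keep only the lines near error markers so the relevant
# context survives instead of being lost in a truncated paste.

ERROR_MARKERS = ("error", "fatal", "panic", "traceback", "failed")

def relevant_lines(log: str, context: int = 2) -> str:
    """Return only lines within `context` lines of an error marker."""
    lines = log.splitlines()
    keep = set()
    for i, line in enumerate(lines):
        if any(marker in line.lower() for marker in ERROR_MARKERS):
            keep.update(range(max(0, i - context),
                              min(len(lines), i + context + 1)))
    return "\n".join(lines[i] for i in sorted(keep))

log = "\n".join(f"line {i}" for i in range(100)) + "\nError: connection refused"
print(relevant_lines(log))  # just the last three lines
```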