Helm Footguns¶
Mistakes that cause failed deploys, broken releases, or production outages with Helm.
1. Upgrading Without --atomic¶
Your upgrade fails halfway. The release is now in a failed state with a mix of old and new resources. Some pods run the new image, others the old. The next helm upgrade tries to apply on top of this broken state and fails differently.
Fix: Always use --atomic in CI/CD. It auto-rolls back on failure, leaving the release in the last known-good state.
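A minimal CI invocation might look like this (release and chart paths are placeholders; on Helm 3, --atomic implies --wait):

```shell
# Roll back automatically to the last known-good release if anything fails
helm upgrade myapp ./charts/myapp \
  --install \
  --atomic \
  --timeout 5m \
  -f values.yaml
```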
2. --set Type Coercion¶
You pass --set image.tag=v1.2.3. Helm interprets it as a string. You pass --set replicas=03. Helm interprets it as integer 3. You pass --set nodePort=30080. Fine — that field really is an integer. You pass --set podAnnotations.prometheus\.io/port=9090. Helm parses it as integer 9090, but annotation values must be strings, so the API server rejects the manifest.
Fix: Use --set-string for any value that must remain a string. Better yet, use -f values.yaml where YAML typing is explicit.
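In a values file, the intended types are unambiguous (a sketch; the keys mirror the examples above):

```yaml
image:
  tag: "v1.2.3"                  # quoted: always a string
replicas: 3                      # integer, as intended
service:
  nodePort: 30080                # integer field, no quoting needed
podAnnotations:
  prometheus.io/port: "9090"     # quoted so the annotation value stays a string
```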
3. No --wait or --timeout¶
helm upgrade returns success as soon as the resources are submitted to the API server. The pods haven't started yet. Your CI pipeline reports "deploy succeeded" while pods are in CrashLoopBackOff.
Fix: Always use --wait --timeout 5m. Helm will watch the rollout and fail if pods don't become ready. Note that on Helm 3, --atomic already implies --wait.
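A sketch of the difference in a pipeline (release, chart, and deployment names are illustrative):

```shell
# Helm blocks until pods are Ready, or fails after the timeout
helm upgrade myapp ./charts/myapp --wait --timeout 5m

# Without --wait, you would have to verify the rollout yourself:
kubectl rollout status deployment/myapp -n staging --timeout=5m
```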
4. Editing Resources Manually After Helm Deploy¶
You kubectl edit a deployment managed by Helm. The next helm upgrade does a three-way merge, and any field the chart manages is silently reverted to the template's value — or worse, your change conflicts with the new template and the upgrade fails with a patch error.
Fix: All changes go through values files and helm upgrade. Use helm diff to preview. If you must hotfix, document it and roll it into values immediately.
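With the helm-diff plugin installed, previewing an upgrade looks like this (release and chart names are placeholders):

```shell
# One-time plugin install
helm plugin install https://github.com/databus23/helm-diff

# Show what the upgrade would change before applying it
helm diff upgrade myapp ./charts/myapp -f values.yaml -n staging
```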
5. Forgetting helm dependency update¶
You bump a subchart version in Chart.yaml but forget to run helm dependency update. The old version in charts/ is used. Your deploy succeeds but runs the wrong version of the dependency. You don't notice until something breaks.
Fix: Run helm dependency update in CI before every build. Commit Chart.lock to the repo. Fail the pipeline if charts/ is stale.
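A sketch of a CI staleness check, assuming Chart.lock and the charts/ directory are committed to the repo:

```shell
# Refresh subcharts from the versions pinned in Chart.yaml
helm dependency update ./charts/myapp

# Fail the pipeline if the committed copies were stale
git diff --exit-code ./charts/myapp/Chart.lock ./charts/myapp/charts
```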
6. Secrets in --set Arguments¶
You pass --set dbPassword=hunter2 on the command line. It shows up in shell history, process listings (ps aux), CI logs, and Helm release secrets (base64, not encrypted).
Fix: Use --set-file to read from a file, or use external-secrets/sealed-secrets. Never pass secrets via --set.
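A sketch using --set-file; the secret path is hypothetical (e.g. a CI-mounted secrets file):

```shell
# The value is read from the file's contents, so it never appears
# in shell history, process listings, or CI logs
helm upgrade myapp ./charts/myapp \
  --set-file dbPassword=/run/secrets/db-password
```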
7. No Resource Requests/Limits in Chart Defaults¶
Your chart's default values.yaml has no resource requests. Every install lands pods with BestEffort QoS. They get OOMKilled first under memory pressure. Nobody notices until production.
Fix: Set sane default requests and limits in values.yaml. Use LimitRanges as a backstop. Lint for missing resources in CI.
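A sketch of sane defaults in values.yaml (the numbers are illustrative; tune them per workload):

```yaml
resources:
  requests:
    cpu: 100m        # guarantees scheduling capacity and Burstable QoS
    memory: 128Mi
  limits:
    memory: 256Mi    # caps memory so a leak gets this pod killed, not its neighbors
```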
8. Hooks Without Delete Policies¶
Your pre-upgrade Job hook runs a DB migration. It succeeds. Next upgrade, the hook tries to create the same Job — it already exists. The upgrade fails with "Job already exists."
Fix: Always set helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation. This cleans up completed hooks.
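A sketch of the hook Job's metadata (the migration Job itself is assumed):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    # Delete the Job after it succeeds, and delete any leftover copy
    # before the next upgrade creates a fresh one
    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
```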
9. Testing with helm template Only¶
helm template renders locally without talking to the cluster. It can't validate that your ServiceAccount exists, that your PVC references a real StorageClass, or that your resource names are valid. You deploy and get runtime errors.
Fix: Use helm upgrade --dry-run --debug for cluster-validated rendering (on Helm 3.13+, --dry-run=server validates against the live API server). Use helm template for syntax checks only.
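A sketch of the two modes side by side (release and chart names are placeholders; --dry-run=server requires Helm 3.13+):

```shell
# Renders locally: catches template syntax errors only
helm template myapp ./charts/myapp -f values.yaml

# Renders against the live cluster: catches missing StorageClasses,
# invalid resource names, API version mismatches, etc.
helm upgrade myapp ./charts/myapp --install --dry-run=server --debug
```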
10. Namespace Confusion¶
You install myapp in staging, then accidentally helm upgrade myapp without -n staging. Helm creates a new release in the default namespace. Now you have two releases, two sets of resources, and port conflicts.
Fix: Always pass -n <namespace> explicitly. Set namespace in your CI/CD config. Use helm list -A to audit cross-namespace releases.
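A sketch of the habit to build (release and namespace names are illustrative):

```shell
# Always name the namespace explicitly
helm upgrade myapp ./charts/myapp -n staging -f values.yaml

# Audit releases across all namespaces to catch accidental duplicates
helm list -A
```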