- k8s
- l1
- topic-pack
- helm

Portal | Level: L1: Foundations | Topics: Helm | Domain: Kubernetes
Helm - Primer¶
Why This Matters¶
Helm is the package manager for Kubernetes. Most production clusters use it to template, version, and deploy applications. If you operate Kubernetes in any serious capacity, you will encounter Helm charts — whether maintaining your own or consuming upstream charts from vendors. Understanding Helm deeply means the difference between confident deploys and 2 AM rollback scrambles.
Core Concepts¶
1. Charts, Releases, Repositories — The Mental Model¶
Name origin: Helm is named after the ship's wheel (the helm). The nautical metaphor follows Kubernetes (Greek for "helmsman"). Helm charts are named after nautical navigation charts. The original Helm (v1) was created at Deis in 2015. Helm 2 introduced Tiller (a server-side component in the cluster). Helm 3 (2019) removed Tiller entirely for security reasons — Tiller had cluster-admin privileges and was a frequent attack vector.
A chart is a package of Kubernetes manifests bundled with metadata and templating logic. Think of it as a parameterized blueprint for a set of resources.
A release is a running instance of a chart, deployed to a specific namespace with a specific set of values. You can have multiple releases of the same chart (e.g., redis-cache and redis-session both from the redis chart).
A repository is an index of charts served over HTTP (or OCI). Helm fetches chart archives from repositories.
# Add a repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Search for charts
helm search repo nginx --versions
# Install a chart, creating a release
helm install my-nginx bitnami/nginx -n web --create-namespace
# List releases
helm list -n web
Helm tracks each install as a release with revision history, status, and the values used.
2. Chart Structure¶
A Helm chart is a directory with a specific layout:
mychart/
  Chart.yaml          # Required. Name, version, dependencies, metadata
  values.yaml         # Default configuration values
  templates/          # Go template files that render to K8s manifests
    deployment.yaml
    service.yaml
    ingress.yaml
    _helpers.tpl      # Named template definitions (partials)
    NOTES.txt         # Post-install/upgrade user-facing message
  charts/             # Dependency chart archives (.tgz)
  Chart.lock          # Pinned dependency versions
  .helmignore         # Files to exclude from packaging
Key fields in Chart.yaml: apiVersion (v2 for Helm 3 charts), name, version (chart SemVer), appVersion (version of the app being deployed), type (application or library), and dependencies. Scaffold a new chart with helm create mychart; it generates a working starting point that usually needs trimming.
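As a sketch, a Chart.yaml matching the layout above might look like this (the name, versions, and dependency are illustrative, not from any real chart):

```yaml
apiVersion: v2
name: mychart
description: An example application chart
type: application
version: 0.3.1        # chart version (SemVer); bump on any chart change
appVersion: "1.4.2"   # version of the application being deployed
dependencies:
  - name: postgresql
    version: "13.2.x"                              # illustrative version range
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled                  # toggled from values
```

Note that version and appVersion move independently: a values-only chart change bumps version but not appVersion.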
3. Values and Overrides¶
Values are the primary configuration mechanism. They flow from multiple sources with a defined precedence (last wins):
- Chart's values.yaml (defaults)
- Parent chart's values (if this is a subchart)
- Files passed via -f/--values (in order specified)
- --set and --set-string flags
# Override via file
helm install myapp ./mychart -f production.yaml
# Override via --set (dot notation for nested keys)
helm install myapp ./mychart --set replicaCount=3,image.tag=v2.1.0
# Combine (--set wins over -f)
helm install myapp ./mychart -f production.yaml --set image.tag=hotfix-42
# Set string explicitly (prevents type coercion)
helm install myapp ./mychart --set-string nodeSelector."kubernetes\.io/os"=linux
# View computed values for an existing release
helm get values myapp -n production
helm get values myapp -n production --all # includes defaults
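To make the override flow concrete, here is a minimal sketch of the two files used in the commands above (keys chosen to match the --set examples; the registry name is hypothetical):

```yaml
# values.yaml — chart defaults
replicaCount: 1
image:
  repository: registry.example.com/myapp
  tag: v2.0.0

# production.yaml — passed via -f; overrides only the keys it names
replicaCount: 3
image:
  tag: v2.1.0
# merged result: replicaCount=3, image.repository unchanged, image.tag=v2.1.0
# adding --set image.tag=hotfix-42 on top would win over both files
```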
4. Template Basics¶
Helm uses Go templates with the Sprig function library. Templates access values, release metadata, and chart metadata through dot-objects.
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          {{- if .Values.resources }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- end }}
          ports:
            {{- range .Values.service.ports }}
            - containerPort: {{ .port }}
              protocol: {{ .protocol | default "TCP" }}
            {{- end }}
Named templates in _helpers.tpl:
{{- define "mychart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
helm.sh/chart: {{ printf "%s-%s" .Chart.Name .Chart.Version }}
{{- end }}
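The deployment template also calls mychart.selectorLabels, which is conventionally a small, stable subset of the full label set, because a Deployment's selector is immutable after creation. A sketch matching the helm create scaffolding pattern:

```yaml
{{- define "mychart.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
```

Keep version and chart labels out of the selector; putting .Chart.Version there would break every upgrade that bumps the chart.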
Key built-in objects: .Values, .Release.Name, .Release.Namespace, .Chart.Name, .Chart.Version, .Chart.AppVersion, .Capabilities.APIVersions.
Default trap: Helm templates use Go's `text/template` package with the Sprig function library. One of the biggest surprises: unquoted values in YAML templates can be silently coerced. `port: {{ .Values.port }}` with port=8080 works fine, but `name: {{ .Values.name }}` with name="true" becomes a boolean. Always quote strings: `name: {{ .Values.name | quote }}`.
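A small template fragment illustrating the coercion (the value names are illustrative):

```yaml
# Given name: "true" in values.yaml:
name: {{ .Values.name }}           # renders as: name: true    (YAML boolean)
name: {{ .Values.name | quote }}   # renders as: name: "true"  (string)

# Annotation values must be strings, so quote anything numeric there too:
annotations:
  example.com/weight: {{ .Values.weight | quote }}
```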
Use include (not template) when you need to pipe the result:
# Good: include returns a string, pipeable
{{ include "mychart.labels" . | nindent 4 }}
# Bad: template writes directly, cannot pipe
{{ template "mychart.labels" . }}
5. Install, Upgrade, Rollback¶
# First install
helm install myapp ./mychart -n production --create-namespace \
-f values-prod.yaml --wait --timeout 5m
# Upgrade (apply new values or chart version)
helm upgrade myapp ./mychart -n production \
-f values-prod.yaml --set image.tag=v2.2.0
# Idempotent install-or-upgrade (preferred in CI/CD)
helm upgrade --install myapp ./mychart -n production \
-f values-prod.yaml --atomic --timeout 5m
# Rollback to previous revision
helm rollback myapp 3 -n production --wait
# Rollback to the immediately prior revision (omit the revision number, or pass 0)
helm rollback myapp -n production
Critical flags:
| Flag | Effect |
|---|---|
| `--wait` | Wait for all resources to be ready before marking success |
| `--atomic` | Implies `--wait`; auto-rollback on failure |
| `--timeout` | How long to wait (default 5m0s) |
| `--cleanup-on-fail` | Delete new resources on failed upgrade |
| `--force` | Force resource updates via delete/recreate (dangerous) |
| `--dry-run` | Simulate the operation without applying it |
In CI/CD, always use --atomic and --timeout. A half-applied upgrade with no auto-rollback is the worst outcome.
Remember: Mnemonic for Helm CI/CD flags: AWT — Atomic (auto-rollback on failure), Wait (block until ready), Timeout (fail fast if stuck).
`helm upgrade --install --atomic --wait --timeout 5m` is the canonical CI/CD invocation. Missing any of these flags risks silent partial failures.
6. Release Management¶
# List all releases across namespaces
helm list -A
# Filter by status
helm list -n production --failed
helm list -n production --pending
# Detailed status of a release
helm status myapp -n production
# Release history (revisions, timestamps, status)
helm history myapp -n production
Output:
REVISION  UPDATED                   STATUS      CHART          APP VERSION  DESCRIPTION
1         2026-03-01 08:00:00 UTC   superseded  mychart-0.3.0  1.4.0        Install complete
2         2026-03-10 14:22:00 UTC   superseded  mychart-0.3.1  1.4.1        Upgrade complete
3         2026-03-14 09:15:00 UTC   deployed    mychart-0.3.1  1.4.2        Upgrade complete
Inspect what is deployed:
helm get manifest myapp -n production # rendered manifests
helm get values myapp -n production # user-supplied values only
helm get values myapp -n production --all # merged with defaults
helm get values myapp -n production --revision 2 # values for a specific revision
helm get all myapp -n production # everything
7. Debugging¶
When a deploy fails or templates produce unexpected output, use these tools in order:
# 1. Lint the chart (catches structural and syntax errors)
helm lint ./mychart -f values-prod.yaml
helm lint ./mychart --strict # warnings become errors
# 2. Render templates locally (no cluster needed)
helm template myapp ./mychart -f values-prod.yaml > rendered.yaml
# 3. Render with debug info (shows computed values)
helm template myapp ./mychart -f values-prod.yaml --debug
# 4. Dry-run against the cluster (validates against API server)
helm install myapp ./mychart --dry-run --debug -f values-prod.yaml
# 5. Pipe rendered output to kubectl for validation
helm template myapp ./mychart -f values-prod.yaml | kubectl apply --dry-run=client -f -
# 6. Diff what would change (requires helm-diff plugin)
helm diff upgrade myapp ./mychart -f values-prod.yaml
Common error: "YAML parse error on line 42" when the template looks fine. The error refers to the rendered output, not your template source. Use helm template --debug to see rendered output with line numbers and find the actual breakage — usually a bad nindent value or unquoted value injection.
8. Hooks¶
Hooks let you run actions at specific points in the release lifecycle. They are regular Kubernetes resources (usually Jobs or Pods) with special annotations.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-db-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 1
  template:
    spec:
      containers:
        - name: migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          command: ["./migrate.sh"]
      restartPolicy: Never
Hook events: pre-install, post-install, pre-delete, post-delete, pre-upgrade, post-upgrade, pre-rollback, post-rollback, test. Hook weight controls execution order (lower runs first, default 0). Delete policies: before-hook-creation (clean up before retry), hook-succeeded (delete after success), hook-failed (delete after failure).
Common pitfalls:
- Hook Jobs that are not idempotent cause cascading failures on retry
- Missing hook-delete-policy: before-hook-creation means the old Job blocks re-creation on retry
- Hooks without backoffLimit default to 6 retries, running destructive operations repeatedly
- --no-hooks skips hooks entirely — useful for emergency rollbacks but dangerous if hooks enforce invariants
9. Dependencies and Subcharts¶
Declare dependencies in Chart.yaml under the dependencies key (name, version, repository, optional condition/tags). Manage them:
helm dependency update ./mychart # download into charts/, update Chart.lock
helm dependency build ./mychart # build from lock file (reproducible, use in CI)
helm dependency list ./mychart # show deps and their status
Pass values to subcharts by nesting under the subchart name. Global values are available to all subcharts:
postgresql:
  enabled: true
  auth:
    postgresPassword: changeme
global:
  imageRegistry: registry.example.com   # accessible as .Values.global.imageRegistry
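The postgresql.enabled toggle only does anything if the parent chart's dependency declaration wires it up as a condition. A sketch of the parent Chart.yaml entry (version and repository are illustrative):

```yaml
# Chart.yaml of the parent chart
dependencies:
  - name: postgresql
    version: "13.2.x"
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled   # subchart is skipped entirely when false
```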
10. Common Gotchas¶
Values type coercion. Helm treats --set values as their YAML-inferred type. --set foo=true is boolean, --set foo=123 is integer. If you need a string, use --set-string foo=123. This bites you with port numbers, boolean-looking strings, and anything YAML might auto-convert.
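The inferred types, sketched as values-file equivalents:

```yaml
# --set foo=true         -> foo: true     (boolean)
# --set foo=123          -> foo: 123      (integer)
# --set foo=null         -> key removed (overrides a chart default to nothing)
# --set-string foo=123   -> foo: "123"    (string)

# Equivalent explicit values.yaml entries:
enabled: true
port: 8080
buildId: "20260314"   # quote in values files too when a string is required
```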
YAML indentation in templates. The nindent function is your friend and your enemy. A wrong indentation level produces valid YAML that means something completely different. Always check rendered output after changing nindent values. Use {{- toYaml .Values.resources | nindent 12 }} — count the indent level from the left margin of the rendered YAML, not from your template source.
Upgrade vs. install. helm install fails if the release exists. helm upgrade fails if it does not. Use helm upgrade --install in automation. Never use bare helm install in CI/CD pipelines.
Secret management. Never put plaintext secrets in values.yaml committed to git. Use helm-secrets plugin (SOPS/age encryption), External Secrets Operator (Vault/AWS SM), --set from CI/CD variables, or Sealed Secrets.
Three-way merge surprises. Helm 3 uses a three-way merge (old manifest, new manifest, live state). If someone edits a resource via kubectl edit, that change may persist or be silently reverted depending on whether the chart template changes that field. Avoid out-of-band edits to Helm-managed resources.
Under the hood: Helm 3 stores release metadata as Kubernetes Secrets (type `helm.sh/release.v1`) in the release's namespace. Each revision gets its own Secret. This is why `helm list` still works after restarting the Helm client — the state lives in the cluster, not on your workstation. If you need to debug release state, `kubectl get secrets -l owner=helm` shows all Helm-managed Secrets.
Release stuck in pending state. If a deploy crashes mid-way, the release can get stuck in pending-install or pending-upgrade. Check helm history, try helm rollback. If rollback fails, you may need to delete the broken Helm release secret (sh.helm.release.v1.<name>.v<N>) from the namespace.
Namespace split-brain. If templates hardcode a namespace in metadata instead of using {{ .Release.Namespace }}, those resources deploy to the hardcoded namespace but Helm tracks them in the release namespace. helm uninstall will not clean them up. Always use {{ .Release.Namespace }} in templates.
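A sketch of the safe pattern (resource name and data are illustrative):

```yaml
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  namespace: {{ .Release.Namespace }}   # or omit; Helm applies into the release namespace
data:
  app.env: production
```

Omitting the namespace field entirely is equally safe; the hardcoded literal is the only dangerous variant.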
Wiki Navigation¶
Related Content¶
- Case Study: Pod OOMKilled — Memory Leak in Sidecar, Fix Is Helm Values (Case Study, L2) — Helm
- Helm Drills (Drill, L1) — Helm
- Helm Flashcards (CLI) (flashcard_deck, L1) — Helm
- Incident Simulator (18 scenarios) (CLI) (Exercise Set, L2) — Helm
- Interview: Helm Upgrade Broke Prod (Scenario, L2) — Helm
- Lab: Helm Upgrade Rollback (CLI) (Lab, L1) — Helm
- Runbook: Helm Upgrade Failed (Runbook, L1) — Helm
- Skillcheck: Helm & Release Ops (Assessment, L1) — Helm
- Track: Helm & Release Ops (Reference, L1) — Helm
Pages that link here¶
- Anti-Primer: Helm
- Certification Prep: CKAD — Certified Kubernetes Application Developer
- Comparison: GitOps CD
- Comparison: Kubernetes Templating
- Helm
- Helm Drills
- Helm Skill Check
- Incident Replay: Service Has No Endpoints
- Kubernetes Ecosystem - Primer
- Master Curriculum: 40 Weeks
- Production Readiness Review: Answer Key
- Production Readiness Review: Study Plans
- Runbook: Helm Upgrade Failed
- Scenario: Helm Upgrade Broke Prod — Recover Fast
- Symptoms: Pod OOMKilled, Memory Leak Is in Sidecar, Fix Is Helm Values