
Helm


37 cards — 🟢 10 easy | 🟡 15 medium | 🔴 12 hard

🟢 Easy (10)

1. What are the required files and directories in a minimal Helm chart?

Show answer A minimal Helm chart requires: Chart.yaml (chart metadata), templates/ directory (containing at least one template), and optionally values.yaml. Chart.yaml must include apiVersion, name, and version fields. You can scaffold one with `helm create <name>`.
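The layout above can be sketched like this (chart name and fields beyond the required three are illustrative):

```yaml
# mychart/
# ├── Chart.yaml        (required metadata)
# ├── values.yaml       (optional defaults)
# └── templates/
#     └── deployment.yaml

# mychart/Chart.yaml
apiVersion: v2        # v2 = Helm 3 chart format
name: mychart
version: 0.1.0        # chart version (SemVer), bump on every chart change
appVersion: "1.0.0"   # optional: version of the application being packaged
```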

2. How do you preview the rendered manifests without actually deploying them?

Show answer Use `helm template <release> <chart>` to render locally, or `helm install --dry-run --debug <release> <chart>` to render server-side (which also validates against the cluster API). The --debug flag adds extra output including computed values.

3. How do you add a Helm repository and search for available chart versions?

Show answer Use `helm repo add <name> <url>` to register a repo, then `helm repo update` to fetch the index. Search with `helm search repo <keyword>` for repo charts or `helm search hub <keyword>` for Artifact Hub. Use `--versions` flag to see all available versions, not just the latest.

4. What is the difference between helm install and helm upgrade --install?

Show answer `helm install` fails if the release already exists. `helm upgrade --install` (idempotent) installs the release if it does not exist, or upgrades it if it does. This makes it the preferred command in CI/CD pipelines where you want a single command that works for both first-time deploy and subsequent updates.

5. How do you view the release history and compare what changed between two revisions?

Show answer Use `helm history <release>` to list all revisions with status, chart version, and description. To compare two revisions, use the helm-diff plugin: `helm diff revision <release> <rev1> <rev2>`. You can also retrieve the manifest for a specific revision with `helm get manifest <release> --revision <N>` and diff manually.

6. What does helm lint check, and what is the difference between warnings and errors?

Show answer `helm lint <chart-path>` validates chart structure: checks Chart.yaml is well-formed, templates render without error, and rendered YAML is valid. Errors (e.g., missing Chart.yaml, template syntax errors) cause a non-zero exit code. Warnings (e.g., missing icon in Chart.yaml, deprecated API versions) are informational and do not fail the lint. Use `--strict` to treat warnings as errors in CI pipelines.

7. You changed a template but helm install gives a YAML parse error referencing a line number that does not match your template. Why?

Show answer The error line number refers to the rendered output, not the template source. Use helm template --debug to see the fully rendered YAML with line numbers and locate the actual breakage (usually a wrong nindent value or unquoted value injection).
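A minimal sketch of the usual culprit, assuming a hypothetical named template `app.labels`:

```yaml
# templates/deployment.yaml (fragment) -- "app.labels" is a hypothetical helper
metadata:
  labels:
    {{- include "app.labels" . | nindent 4 }}  # 4 = indent level of children of "labels:"
# With the wrong value (e.g. nindent 2) the rendered lines land at the wrong
# column and the parse error reports a line number in the RENDERED output,
# which is why it does not match this template's line numbers.
```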

8. You run --set replicaCount=true intending the string "true" but the template receives a boolean. How do you force a string?

Show answer Use --set-string replicaCount=true instead of --set. Helm's --set infers YAML types automatically (true becomes boolean, 123 becomes integer). The --set-string flag forces the value to remain a string regardless of content.

9. Your CI/CD pipeline runs helm install on every deploy and fails on the second run. What single command fixes this?

Show answer Use helm upgrade --install (the idempotent form). It installs the release if it does not exist, or upgrades it if it does. Combine with --atomic and --timeout for safe CI/CD deploys.

10. How do you retrieve the exact Kubernetes manifests that Helm applied for a specific release revision?

Show answer Run `helm get manifest <release> --revision <N>`. This outputs the rendered YAML manifests as they were applied for that revision. To see the values used, run `helm get values <release> --revision <N>`.
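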

🟡 Medium (15)

1. In what order does Helm merge values when you run helm install -f custom.yaml --set image.tag=v2?

Show answer Helm merges values in this order (last wins):
1) chart's values.yaml,
2) parent chart's values.yaml (if subchart),
3) files passed via -f/--values (in order given),
4) --set and --set-string flags. So --set image.tag=v2 overrides anything in custom.yaml, which overrides the chart default.
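A sketch of that precedence with hypothetical files (two YAML documents shown together for comparison):

```yaml
# chart's values.yaml (default)
image:
  tag: v1          # 1) chart default, lowest precedence
---
# custom.yaml, passed with -f
image:
  tag: v1.5        # 2) overrides the chart default

# helm install myrel ./chart -f custom.yaml --set image.tag=v2
# 3) --set is applied last, so templates render with image.tag = v2
```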

2. What happens to Kubernetes resources when you run helm rollback <release> <revision>?

Show answer Helm creates a new release revision with the manifests from the target revision. It applies a three-way strategic merge patch comparing the old manifest, the rollback target manifest, and the live state. Resources added in later revisions but absent in the rollback target are deleted. The rollback itself becomes a new revision number.

3. How do you declare and manage chart dependencies, and what does helm dependency update do?

Show answer Dependencies are declared in Chart.yaml under the dependencies key (with name, version, repository, and optional condition/tags). Running `helm dependency update` downloads the dependency charts into the charts/ directory as .tgz archives and generates or updates Chart.lock. If Chart.lock exists, `helm dependency build` uses the locked versions instead of resolving again.
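A sketch of the declaration (repository URLs and the second dependency are illustrative):

```yaml
# Chart.yaml (fragment)
dependencies:
  - name: redis
    version: "17.3.2"              # pin exactly when stability matters
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled       # subchart installs only if this value is true
  - name: common
    version: "2.x"
    repository: https://charts.example.com
    tags:
      - backend                    # tags enable/disable groups of subcharts
```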

4. A release is stuck in a "pending-upgrade" state after a failed deploy. How do you recover?

Show answer First, check status with `helm status <release>` and history with `helm history <release>`. If the release is stuck in pending-upgrade or pending-install, you can attempt `helm rollback <release> <last-good-revision>`. If rollback also fails, use `helm uninstall <release>` (possibly with --no-hooks) and reinstall, or as a last resort, manually delete the Helm secret (sh.helm.release.v1.<name>.v<N>) storing the broken state.

5. What approaches exist for managing secrets in Helm charts, and what are their tradeoffs?

Show answer Common approaches:
1) helm-secrets plugin (encrypts values files with SOPS/age/PGP, decrypts at deploy time),
2) External Secrets Operator (syncs from Vault/AWS SM/etc. into K8s secrets),
3) --set with CI/CD variables (secrets never on disk but visible in process lists),
4) Sealed Secrets (encrypt client-side, only cluster can decrypt). Avoid storing plaintext secrets in values files committed to Git.

6. What does helm test <release> do, and how do you write a chart test?

Show answer Helm test runs pods annotated with `helm.sh/hook: test` in the templates/tests/ directory. A test pod typically runs a validation command (e.g., curl the service, check DB connectivity) and exits 0 for success or non-zero for failure. Use `helm test <release> --timeout 5m` to run them. Tests execute in the release namespace and have access to the same service network.
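A minimal test pod sketch (service name, port, and image tag are illustrative):

```yaml
# templates/tests/connection-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Release.Name }}-connection-test
  annotations:
    "helm.sh/hook": test                              # marks this pod as a chart test
    "helm.sh/hook-delete-policy": before-hook-creation  # clean up the old test pod on re-run
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:8.5.0
      # exit 0 = test passes, non-zero = test fails
      args: ["--fail", "http://{{ .Release.Name }}-web:80/healthz"]
```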

7. How do you pass values from a parent chart to a subchart, and what is the global values key?

Show answer Parent charts pass values to subcharts by nesting values under the subchart name: e.g., if the subchart is named "redis", set `redis.replicaCount: 3` in the parent values. Values under the `global` key are automatically available to all subcharts as `.Values.global.*` without prefixing. This is useful for shared settings like image registry or environment labels.
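A sketch of a parent chart's values.yaml, assuming a subchart named `redis` (the specific keys are illustrative):

```yaml
# parent chart's values.yaml
redis:                  # keys here become .Values.* inside the redis subchart
  replicaCount: 3
  auth:
    enabled: false

global:                 # visible to the parent AND all subcharts as .Values.global.*
  imageRegistry: registry.example.com
  environment: staging
```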

8. What happens when you deploy a chart with helm install --namespace foo --create-namespace but some templates hardcode a different namespace in their metadata?

Show answer Helm sets the namespace on resources that do not specify one, but resources with an explicit namespace in their metadata are deployed to that hardcoded namespace. This creates a split-brain situation where `helm uninstall` only removes resources tracked in its release secret (stored in the --namespace), potentially orphaning cross-namespace resources. Avoid hardcoding namespaces in templates; use `{{ .Release.Namespace }}` instead.
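The recommended pattern looks like this (resource name is illustrative):

```yaml
# templates/configmap.yaml (fragment)
metadata:
  name: {{ .Release.Name }}-config
  # Let Helm place the resource where the release is installed,
  # instead of hardcoding e.g. "namespace: monitoring":
  namespace: {{ .Release.Namespace }}
```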

9. How do you push and pull Helm charts using OCI registries instead of traditional chart repositories?

Show answer Helm 3.8+ supports OCI natively. Push: `helm push mychart-0.1.0.tgz oci://registry.example.com/charts`. Pull: `helm pull oci://registry.example.com/charts/mychart --version 0.1.0`. Install directly: `helm install myrelease oci://registry.example.com/charts/mychart --version 0.1.0`. Authenticate with `helm registry login`. OCI registries eliminate the need for a separate index.yaml and support existing container registry infrastructure (ECR, GCR, ACR, GHCR).

10. What does the lookup function do in Helm templates, and when does it return empty?

Show answer The `lookup` function queries live cluster resources during template rendering: `{{ lookup "v1" "Secret" "default" "my-secret" }}`. It returns the resource object or an empty dict if not found.
Important: lookup always returns empty during `helm template` and `--dry-run` because there is no cluster connection. Guard lookup usage with conditionals to handle both cases, or your templates will behave differently in CI vs. actual deploys.
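A common guarded-lookup idiom, sketched with an illustrative Secret name: reuse a live value when it exists, generate one when lookup returns empty (which is always the case under `helm template` / `--dry-run`):

```yaml
# templates/secret.yaml
{{- $existing := lookup "v1" "Secret" .Release.Namespace "my-secret" }}
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  {{- if $existing }}
  password: {{ index $existing.data "password" }}   # keep the live value across upgrades
  {{- else }}
  password: {{ randAlphaNum 16 | b64enc }}          # dry-run/template path: lookup is empty
  {{- end }}
```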

11. What do the --atomic and --wait flags do during helm upgrade, and how do they interact?

Show answer `--wait` makes Helm wait until all resources are in a ready state (Pods running, Deployments available, etc.) before marking the release as successful. `--atomic` implies --wait and adds automatic rollback: if the release fails or times out, Helm rolls back to the previous revision. Use `--timeout` to control the wait duration (default 5m). In CI/CD, --atomic is preferred because a failed deploy does not leave the cluster in a half-upgraded state.

12. Your pre-upgrade hook Job fails on retry because the previous Job resource still exists. What annotation fixes this?

Show answer Add helm.sh/hook-delete-policy: before-hook-creation to the Job metadata. This tells Helm to delete the previous hook resource before creating a new one, preventing name-collision failures on retry.

Gotcha: Without this annotation, the old Job resource blocks creation of the new one (name conflict). This is the #1 cause of hook retry failures.

Remember: Three deletion policies: before-hook-creation, hook-succeeded, hook-failed. Use before-hook-creation for retry safety.
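The fix as a template fragment (Job name is illustrative):

```yaml
# templates/migrate-job.yaml (fragment)
metadata:
  name: {{ .Release.Name }}-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    # Delete the previous hook Job before creating this one,
    # so a retried upgrade does not fail on a name conflict:
    "helm.sh/hook-delete-policy": before-hook-creation
```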

13. Explain the difference between --wait and --atomic on helm upgrade. When does --atomic add value over --wait alone?

Show answer --wait makes Helm wait for resources to become ready before marking success, but a failed deploy stays in the failed state. --atomic implies --wait and additionally auto-rolls back to the previous revision on failure or timeout. --atomic prevents the cluster from being left in a half-upgraded state.

14. An SRE manually scaled a Deployment to 5 replicas via kubectl. The Helm chart says 3 replicas. On next helm upgrade (chart unchanged), what happens to the replica count?

Show answer It stays at
5. Helm 3 uses a three-way merge comparing old manifest, new manifest, and live state. Since the old and new chart manifests both say 3 (no change in that field), Helm does not patch it. But if the chart changes replicas to 4, Helm patches it to 4, overwriting the manual edit.

15. A chart hardcodes namespace: monitoring in a ServiceMonitor template instead of using .Release.Namespace. What operational problem does this cause?

Show answer Helm tracks resources in the release namespace, but the ServiceMonitor is created in the monitoring namespace. When you run helm uninstall, Helm does not delete the ServiceMonitor because it only cleans up resources tracked in its release secret. The resource becomes orphaned. Always use {{ .Release.Namespace }} in templates.

🔴 Hard (12)

1. What are Helm hook weights and deletion policies, and how do they interact during an upgrade?

Show answer Hook weights (helm.sh/hook-weight annotation) control execution order within the same hook event (lower runs first, default 0). Deletion policies (helm.sh/hook-delete-policy) determine when hook resources are cleaned up: before-hook-creation (delete previous instance before running), hook-succeeded (delete after success), hook-failed (delete after failure). Without a deletion policy, hook resources remain until the release is deleted.
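A sketch combining both annotations (note that hook weights must be quoted strings):

```yaml
# A pre-install hook that runs before any weight-"0" hooks in the same event
metadata:
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"                  # lower weight runs first; default is "0"
    "helm.sh/hook-delete-policy": hook-succeeded  # cleaned up only after success,
                                                  # so a failed run is left for debugging
```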

2. What is the difference between include and template in Helm templates, and why does it matter for pipelines?

Show answer The `template` action renders a named template inline but returns nothing (it writes directly to output), so it cannot be piped. The `include` function renders a named template and returns the result as a string, allowing you to pipe it through functions like `nindent` or `quote`. Always prefer `include` when you need post-processing: `{{ include "mychart.labels" . | nindent 4 }}`.
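A sketch of the pattern, assuming a hypothetical `mychart.labels` helper:

```yaml
{{- /* templates/_helpers.tpl -- a named template */}}
{{- define "mychart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

# templates/deployment.yaml (fragment)
metadata:
  labels:
    {{- include "mychart.labels" . | nindent 4 }}  # include returns a string, so it pipes
  # {{ template "mychart.labels" . }} would write directly to output at column 0
  # and cannot be piped through nindent, producing broken indentation here
```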

3. How does the helm-diff plugin help prevent unexpected changes during upgrades?

Show answer The helm-diff plugin (`helm diff upgrade <release> <chart>`) shows a colored diff of what would change between the current deployed release and the proposed upgrade, without applying anything. This is critical for catching unintended changes from upstream chart updates, value drift, or template logic changes. It compares rendered manifests, not live cluster state, so it may miss out-of-band modifications unless combined with --three-way-merge.

4. You run helm install and get "YAML parse error on template.yaml line 42". The template looks correct. How do you debug this?

Show answer Steps:
1) Run `helm template --debug <chart>` to see the fully rendered output with line numbers.
2) Pipe output to `yamllint` or `kubectl apply --dry-run=client -f -`.
3) Check for whitespace issues -- incorrect nindent values, missing hyphens in block scalars (|-), or accidental tab characters.
4) Use `{{ .Values.foo | toYaml | nindent N }}` carefully -- wrong N is the #1 cause of invisible YAML errors.
5) Add `{{- /* debug */ -}}` comments to isolate sections.

5. You have a pre-upgrade hook Job that runs database migrations. The Job fails partway through. What happens to the Helm upgrade, and how should you design the hook for safe retries?

Show answer When a pre-upgrade hook fails, Helm aborts the upgrade and the release stays at the current revision (status: failed). The failed Job pod remains for debugging. For safe retries:
1) make migrations idempotent,
2) set backoffLimit on the Job,
3) use hook-delete-policy: before-hook-creation so the old Job is cleaned up before retrying,
4) set a hook-weight if ordering among multiple hooks matters,
5) consider ttlSecondsAfterFinished to auto-clean completed Jobs.
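The points above combined into one sketch (image, tag reference, and command are illustrative):

```yaml
# templates/migrate-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-db-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": before-hook-creation  # retry-safe: old Job removed first
spec:
  backoffLimit: 1                  # limit Kubernetes-level retries of a failing pod
  ttlSecondsAfterFinished: 600     # auto-clean completed Jobs
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myapp-migrations:{{ .Values.image.tag }}
          command: ["./migrate", "up"]   # the migration itself must be idempotent
```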

6. Explain the three-way merge that Helm 3 uses during upgrades. Why can live edits via kubectl cause surprises?

Show answer Helm 3 compares three sources during upgrade: the old chart manifest (last release), the new chart manifest (current template), and the live cluster state. If someone edited a resource via kubectl (changing replicas, for example), and the old and new chart manifests both say replicas: 2, Helm sees no change in its manifests and keeps the live value. But if the new manifest changes replicas to 3, Helm patches it to 3, overwriting the manual edit. This makes manual edits unpredictable -- they may persist or be silently reverted depending on whether the chart changes that field.

7. How can you prevent Helm from deleting a specific resource (like a PVC) when the release is uninstalled?

Show answer Annotate the resource with `helm.sh/resource-policy: keep`. When Helm uninstalls the release, it skips deletion of resources with this annotation. The resource becomes orphaned (no longer managed by Helm). This is commonly used for PersistentVolumeClaims, databases, or any stateful resource you want to survive release deletion.
Note: on a subsequent `helm install` of the same chart, you may get conflicts if the kept resource already exists.
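The annotation as a template fragment (PVC name is illustrative):

```yaml
# templates/pvc.yaml (fragment)
metadata:
  name: {{ .Release.Name }}-data
  annotations:
    # Survive `helm uninstall`: Helm skips deleting this resource,
    # leaving it orphaned (no longer managed by Helm)
    "helm.sh/resource-policy": keep
```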

8. What is a Helm post-renderer and when would you use one?

Show answer A post-renderer is an executable that receives rendered manifests on stdin and outputs modified manifests on stdout. Invoked with `helm install --post-renderer ./kustomize-wrapper.sh`. Use cases: applying Kustomize overlays on top of third-party charts (inject sidecars, add labels, patch resources) without forking the chart, injecting organization-wide policies, or running OPA/conftest validation as a gate. The post-renderer runs after template rendering but before the manifests are sent to Kubernetes.

9. A release is stuck in pending-upgrade after a deploy crashed. helm rollback also fails. What is the recovery procedure?

Show answer Check `helm history <release>` to identify the broken revision. As a last resort, delete the Helm release secret storing the broken state: `kubectl delete secret sh.helm.release.v1.<name>.v<N> -n <namespace>` (where N is the broken revision number). Then `helm rollback` to the last good revision, or `helm upgrade --install` to redeploy. This is destructive to Helm's state tracking, so use it only when rollback fails.

10. You have a pre-upgrade hook Job running database migrations that is not idempotent. The Job fails partway through, and the SRE retries the upgrade. What goes wrong and how should the hook be redesigned?

Show answer The migration runs again from the start, potentially re-applying already-completed steps and corrupting data. Redesign: make migrations idempotent (use IF NOT EXISTS, migration versioning tables), set backoffLimit: 0 or 1 to prevent Kubernetes-level retries of the broken Job, add hook-delete-policy: before-hook-creation for clean retry, and set a hook-weight if ordering among multiple hooks matters.

11. How does the helm-diff plugin help catch problems before a helm upgrade, and what is its key limitation regarding out-of-band changes?

Show answer helm diff upgrade shows a colored diff of what would change between the current deployed release and the proposed upgrade, without applying anything. This catches unintended changes from upstream chart updates, value drift, or template logic changes. Key limitation: it compares rendered manifests (old release vs new template), not live cluster state, so it may miss resources that were modified out-of-band via kubectl.

12. Your chart has a dependency on redis version 17.x. A colleague runs helm dependency update and the Chart.lock changes from 17.3.2 to 17.5.0. The deploy fails in staging. How should you manage dependency versions to prevent this?

Show answer Use helm dependency build instead of helm dependency update in CI/CD. dependency build uses the pinned versions in Chart.lock (which should be committed to git), while dependency update resolves fresh versions and rewrites the lock file. Pin exact versions in Chart.yaml when stability matters (version: "17.3.2" instead of "17.x"). Treat Chart.lock updates as deliberate changes that go through code review.