

Runbook: ImagePullBackOff

Symptoms

  • Pod status shows ImagePullBackOff or ErrImagePull
  • Pod never starts
  • Events show "Failed to pull image"

Fast Triage

kubectl get pods -n grokdevops
kubectl describe pod -n grokdevops -l app.kubernetes.io/name=grokdevops | grep -A5 Events
kubectl get deployment grokdevops -n grokdevops -o jsonpath='{.spec.template.spec.containers[0].image}'

Likely Causes (ranked)

  1. Wrong image tag — typo or nonexistent version
  2. Image not imported into k3s — local images need docker save | sudo k3s ctr images import -
  3. Private registry without credentials — missing imagePullSecrets
  4. Registry unreachable — network or DNS issue

Evidence Interpretation

What bad looks like:

NAME                          READY   STATUS             RESTARTS   AGE
grokdevops-6b5d4f7c88-x2k9l  0/1     ImagePullBackOff   0          4m
  • RESTARTS=0 — the container never started, so there is nothing to restart.
  • ErrImagePull appears first; Kubernetes then backs off exponentially and the status flips to ImagePullBackOff.
  • Because the container never ran, kubectl logs returns nothing — this is expected, not a second problem.
  • Check Events with kubectl describe pod for the exact pull error (e.g., "manifest unknown", "unauthorized").
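The pull-error strings in Events map directly onto the ranked causes above. A small helper (hypothetical, not part of kubectl or any tool) makes that mapping explicit:

```shell
# Hypothetical helper mapping the raw pull error from pod Events to the
# likely cause. The matched strings are common containerd/registry error
# messages; extend the cases for your registry.
classify_pull_error() {
  case "$1" in
    *"manifest unknown"*) echo "tag does not exist in the registry" ;;
    *unauthorized*)       echo "missing or wrong imagePullSecrets" ;;
    *"no such host"*)     echo "registry DNS/network problem" ;;
    *"not found"*)        echo "image name or repository typo" ;;
    *)                    echo "unclassified: $1" ;;
  esac
}

classify_pull_error 'rpc error: code = NotFound desc = manifest unknown'
# → tag does not exist in the registry
```

Paste the message from the `Failed to pull image` event in; the ordering of cases matters, since registry errors often contain several of these substrings.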

Fix Steps

[!WARNING] In k3s, images built with docker build are invisible to the cluster, because k3s uses its own containerd image store. You must explicitly import them with docker save <image> | sudo k3s ctr images import - before pods can use them. This is the most common cause of ImagePullBackOff in local dev.

  1. Check the image name/tag in the deployment
  2. For k3s local dev:
    docker images | grep grokdevops
    docker save grokdevops:latest | sudo k3s ctr images import -
    
  3. For registry auth issues:
    kubectl create secret docker-registry regcred \
      --docker-server=<registry> --docker-username=<user> --docker-password=<pass> -n grokdevops
    
  4. Fix Helm values and redeploy:
    helm upgrade grokdevops devops/helm/grokdevops -n grokdevops -f devops/helm/values-dev.yaml
    

Verification

kubectl get pods -n grokdevops  # STATUS=Running
kubectl describe pod -n grokdevops -l app.kubernetes.io/name=grokdevops | grep "Successfully pulled"
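The Running check can be scripted. Here the kubectl output is stubbed with a heredoc so the parsing can be tried without a cluster — swap the heredoc for the real kubectl get pods -n grokdevops in practice:

```shell
# Parse `kubectl get pods` output and fail unless every pod is Running.
# The heredoc stands in for real command output (columns assumed to be
# NAME READY STATUS RESTARTS AGE, as in the examples above).
pods_output=$(cat <<'EOF'
NAME                          READY   STATUS    RESTARTS   AGE
grokdevops-6b5d4f7c88-x2k9l   1/1     Running   0          1m
EOF
)
echo "$pods_output" \
  | awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }' \
  && echo "all pods Running"
# → all pods Running
```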

Cleanup

None needed beyond fixing the image reference.

Unknown Unknowns

  • imagePullPolicy: Always forces a registry pull even if the image exists locally — change to IfNotPresent for local dev with k3s.
  • k3s uses its own image store; a docker build image is invisible to k3s until you run docker save | sudo k3s ctr images import -.
  • Private registries need an imagePullSecrets entry on the Pod spec — or attach the secret to the ServiceAccount so every pod in the namespace inherits it.
  • The backoff timer doubles each retry (10s → 20s → 40s … up to 5 min), so deleting the pod is faster than waiting during debugging.
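The doubling in the last bullet is easy to see in numbers — a quick illustration, taking 10s as the base delay and 5 minutes as the cap as described above:

```shell
# Image-pull backoff doubles on each failed attempt, starting around 10s
# and capping at 300s (5 minutes). Print the retry delays up to the cap.
delay=10
steps=""
while [ "$delay" -lt 300 ]; do
  steps="$steps ${delay}s"
  delay=$((delay * 2))
done
steps="$steps 300s(cap)"
echo "retry delays:$steps"
# → retry delays: 10s 20s 40s 80s 160s 300s(cap)
```

By the fifth failure you are already waiting minutes per attempt, which is why deleting the pod (after fixing the image) beats waiting.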

Pitfalls

  • Checking logs — there are none; the container never started. Don't chase a logging problem that doesn't exist.
  • Deleting pods — the Deployment will recreate them with the same bad image. Fix the image reference first.
  • Not verifying the image exists in the registry — always confirm with docker manifest inspect or the registry UI before assuming a cluster-side issue.

See Also

  • training/library/guides/troubleshooting.md
  • training/interactive/runtime-labs/lab-runtime-05-helm-upgrade-rollback/ (bad image tag scenario)
  • training/interactive/incidents/scenarios/imagepull-bad-tag.sh
