CrashLoopBackOff¶
10 cards — 🟢 4 easy | 🟡 4 medium | 🔴 2 hard
🟢 Easy (4)¶
1. What does CrashLoopBackOff mean?
Show answer
CrashLoopBackOff means a container keeps crashing and Kubernetes is applying exponential backoff before restarting it (10s, 20s, 40s, up to 5 minutes). The container starts, crashes, is restarted, and crashes again. It is not a root cause; it is a symptom that something is wrong with the container.
Remember: CrashLoopBackOff = container starts, crashes, Kubernetes restarts it with exponential backoff (10s, 20s, 40s, ... up to 5 minutes). It's not an error itself; it's a symptom.
Remember the debugging steps: kubectl logs pod-name (check app logs), kubectl describe pod pod-name (check events), kubectl exec -it pod-name -- sh (if it stays up long enough).
2. What are the first three commands to diagnose CrashLoopBackOff?
Show answer
1) kubectl logs pod-name (check the application logs; add --previous to see the last crashed container). 2) kubectl describe pod pod-name (check events and the exit code). 3) kubectl exec -it pod-name -- sh (inspect from inside, if the container stays up long enough).
Remember: the most common CrashLoopBackOff causes are: missing environment variables, wrong image tag, insufficient memory limits, and failed health checks (liveness probe killing the container).
3. What is the exponential backoff timing in CrashLoopBackOff?
Show answer
Kubernetes waits 10s after the first crash, then doubles the delay: 10s, 20s, 40s, 80s, 160s, capping at 300s (5 minutes). The backoff resets after the container runs successfully for 10 minutes. During the backoff, the pod status shows CrashLoopBackOff and the container is not running.
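The schedule above can be sketched in a few lines of Python. This is only an illustration of the timing described in the answer, not actual kubelet code; the function name is made up:

```python
# Sketch of the CrashLoopBackOff delay schedule described above:
# start at 10s, double after each crash, cap at 300s (5 minutes).

def backoff_delays(crashes: int, base: int = 10, cap: int = 300) -> list[int]:
    """Return the wait (in seconds) before each of the first `crashes` restarts."""
    delays = []
    delay = base
    for _ in range(crashes):
        delays.append(min(delay, cap))  # never wait longer than the cap
        delay *= 2                      # exponential doubling
    return delays

print(backoff_delays(7))  # [10, 20, 40, 80, 160, 300, 300]
```

Note how the sixth crash onward always waits the full 5 minutes until a successful 10-minute run resets the counter.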
4. What practices help prevent CrashLoopBackOff in production?
Show answer
1) Health check endpoints that are fast and dependency-free for liveness probes. 2) Graceful degradation when dependencies are unavailable. 3) Proper resource limits based on profiling. 4) Config validation before deploying (check that ConfigMaps/Secrets exist). 5) Readiness gates to prevent traffic before the app is ready. 6) Always set initialDelaySeconds for liveness probes.
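Practices 1, 3, and 6 can be combined in a single pod spec. A minimal sketch, where every name, path, port, and value is a hypothetical example rather than a recommendation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app          # hypothetical name
spec:
  containers:
    - name: app
      image: my-app:1.2.3        # hypothetical image
      resources:                 # limits based on profiling, not guesses
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
      readinessProbe:            # gate traffic until the app is ready
        httpGet:
          path: /ready
          port: 8080
      livenessProbe:             # fast, dependency-free endpoint
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15  # don't kill the app while it starts
```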
🟡 Medium (4)¶
1. What are the most common causes of CrashLoopBackOff?
Show answer
1) Application error (segfault, unhandled exception, missing dependency). 2) Misconfigured command or args in the pod spec. 3) Missing ConfigMap/Secret that the app requires. 4) OOMKilled (exit code 137). 5) Failed liveness probe causing restarts. 6) Image built for the wrong architecture (amd64 vs arm64). 7) Permission denied (wrong user or filesystem permissions).
Remember: MCEP: Missing config/secrets, Command error (wrong entrypoint), Exit code non-zero (app crash), Port conflict. Check logs first!
2. How can a liveness probe cause CrashLoopBackOff?
Show answer
If the liveness probe is too aggressive (short timeout, low failure threshold) or checks the wrong thing (an external dependency), it can restart the container even when the app is healthy but slow to respond. Fix: increase initialDelaySeconds, increase timeoutSeconds, and never make liveness probes depend on external services.
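A conservatively tuned liveness probe along these lines. The endpoint path, port, and numbers are illustrative starting points, not recommendations for any particular app:

```yaml
livenessProbe:
  httpGet:
    path: /healthz            # fast endpoint with no external dependencies
    port: 8080
  initialDelaySeconds: 30     # give the app time to start before the first check
  periodSeconds: 10           # how often to probe
  timeoutSeconds: 5           # tolerate slow responses instead of killing
  failureThreshold: 3         # require several consecutive failures before restart
```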
3. What do common container exit codes tell you?
Show answer
Exit code 0: success (container completed; wrong for long-running services). 1: general application error. 2: shell misuse. 126: command invoked but not executable (permission problem). 127: command not found (wrong binary path or missing from the image). 137: SIGKILL (OOMKilled or kubectl delete --force). 143: SIGTERM (graceful shutdown).
Remember: exit 0 = success, 1 = general error, 137 = OOMKilled (128+9 = SIGKILL), 143 = graceful termination (128+15 = SIGTERM). 137 is the most common surprise.
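The "128 + signal number" convention is easy to check. A small Python sketch with an invented helper function, purely to illustrate the decoding:

```python
import signal

# Decode a container exit code per the convention above:
# 0 = success, >128 = killed by signal (code - 128), otherwise an app error.

def explain_exit(code: int) -> str:
    if code == 0:
        return "success"
    if code > 128:
        sig = signal.Signals(code - 128)  # e.g. 137 - 128 = 9 = SIGKILL
        return f"killed by {sig.name}"
    return "application error"

print(explain_exit(137))  # killed by SIGKILL (the OOMKilled case)
print(explain_exit(143))  # killed by SIGTERM (graceful shutdown)
```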
4. How does a missing ConfigMap or Secret cause CrashLoopBackOff?
Show answer
If a pod spec references a ConfigMap or Secret as an environment variable (envFrom or env.valueFrom) and it does not exist, the container fails to start because the kubelet cannot inject the variables. If it is mounted as a volume with optional: false (the default), the pod stays in ContainerCreating rather than CrashLoopBackOff. Check kubectl describe for mount errors.
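A sketch of the two env-injection patterns mentioned above, with optional: true so a missing object does not block startup. The ConfigMap, Secret, and key names are hypothetical:

```yaml
containers:
  - name: app
    image: my-app:1.2.3        # hypothetical image
    envFrom:
      - configMapRef:
          name: app-config
          optional: true       # container starts even if app-config is missing
    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: app-secrets
            key: password
            optional: true     # same escape hatch for a single key
```

Whether optional is appropriate depends on the app: starting without required config just moves the failure from the kubelet into the application.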
🔴 Hard (2)¶
1. How do you debug a container that crashes immediately on startup?
Show answer
1) Override the entrypoint so the container stays alive long enough to inspect: set command: ["sleep", "infinity"] in the pod spec, then kubectl exec in and start the app manually to see the error. 2) Check the previous container's logs. Example: kubectl logs pod-name --previous shows logs from the LAST crashed container. Without --previous, you see the current (probably empty) container.
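The entrypoint override from step 1 looks like this in a pod spec. The pod and image names are placeholders for your own:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod            # hypothetical name
spec:
  containers:
    - name: app
      image: my-app:1.2.3    # placeholder: the image that keeps crashing
      command: ["sleep", "infinity"]   # replaces the crashing entrypoint
```

With the container idling, kubectl exec -it debug-pod -- sh gets you a shell where you can run the real entrypoint by hand and watch it fail.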
2. How do failing init containers cause CrashLoopBackOff?
Show answer
Init containers run sequentially before the main container. If an init container crashes, the pod stays in Init:CrashLoopBackOff. Common causes: migration scripts failing, dependency checks timing out, or wrong database credentials in init containers. Debug with kubectl logs pod-name -c <init-container-name>.
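A sketch of a dependency-check init container like the ones described. The names, host, and the nc-based check are illustrative assumptions:

```yaml
spec:
  initContainers:
    - name: wait-for-db           # hypothetical dependency check
      image: busybox:1.36
      # Loop until the database port answers; if it never does, this
      # container keeps failing and the pod shows Init:CrashLoopBackOff.
      command: ["sh", "-c", "until nc -z db 5432; do sleep 2; done"]
  containers:
    - name: app
      image: my-app:1.2.3         # hypothetical image
```

To see why it is stuck, target the init container by name: kubectl logs pod-name -c wait-for-db.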