Quiz: CrashLoopBackOff¶
4 questions
L0 (1 question)¶
1. What does the CrashLoopBackOff status mean in Kubernetes?
Show answer
CrashLoopBackOff is a status (not an error) meaning the kubelet tried to start a container, the container exited, and the kubelet is now waiting with exponential backoff before retrying. The backoff starts at 10s, doubles after each crash, and caps at 5 minutes. The container keeps restarting indefinitely until the underlying cause is fixed.
L1 (1 question)¶
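The backoff schedule described in that answer can be sketched in shell. This is illustrative only; the actual timing lives inside the kubelet and is not configurable per pod:

```shell
# Illustrative sketch of the kubelet's crash-restart backoff:
# the wait starts at 10s, doubles after each crash, and caps at 300s (5 minutes).
delay=10
for attempt in 1 2 3 4 5 6; do
  echo "crash $attempt: kubelet waits ${delay}s before restarting"
  delay=$((delay * 2))
  if [ "$delay" -gt 300 ]; then delay=300; fi
done
# Prints waits of 10s, 20s, 40s, 80s, 160s, 300s.
```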
1. A pod shows CrashLoopBackOff with exit code 1 and the logs show 'connection refused to postgres:5432'. What is the likely cause and what do you check?
Show answer
Exit code 1 means the application itself exited with an error. Here the app cannot reach its database. Check: (1) Is the postgres Service/pod running? (2) Does the Service DNS name resolve correctly? (3) Are network policies blocking the connection? (4) Is the database port correct? (5) Are the credentials valid? The app likely has no retry logic on startup, so it crashes instead of waiting for the database. *Common mistake:* do not assume it is a container image issue when the app clearly started but failed to connect.
L2 (1 question)¶
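The checklist in that answer can be walked with kubectl. The namespace `myapp` and the `app=postgres` label are placeholders for illustration; substitute your own:

```shell
# Walk the 'connection refused to postgres:5432' checklist.
kubectl -n myapp get svc postgres            # (1) does the Service exist?
kubectl -n myapp get endpoints postgres      # (1) does it have ready backends?
kubectl -n myapp get pods -l app=postgres    # (1) is the database pod Running?
kubectl -n myapp run dns-check --rm -i --restart=Never \
  --image=busybox:1.36 -- nslookup postgres  # (2) does the Service name resolve?
kubectl -n myapp get networkpolicy           # (3) is a NetworkPolicy in the way?
kubectl -n myapp get svc postgres \
  -o jsonpath='{.spec.ports[0].port}'        # (4) is the port actually 5432?
```

An empty Endpoints object with a healthy-looking Service is a classic cause: the Service exists and resolves, but its selector matches no ready pods, so connections are refused.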
1. A pod crashes with exit code 137 immediately on startup, then enters CrashLoopBackOff. The logs are empty. What is happening and how do you diagnose it?
Show answer
Exit code 137 = SIGKILL (128 + 9), usually OOMKilled. Empty logs mean the process was killed before it could write any output. Check: (1) kubectl describe pod, looking for OOMKilled under Last State. (2) Check resource limits: the memory limit may be too low for the container even to start. (3) Raise the memory limit or profile the application's startup memory usage. *Common mistake:* empty logs do not mean the container never started; it means the process was killed too fast to output anything.
L3 (1 question)¶
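Those diagnosis steps as commands; the pod name `crashing-pod` and namespace `myapp` are placeholders:

```shell
# (1) Look for OOMKilled and exit code 137 in the container's Last State.
kubectl -n myapp describe pod crashing-pod | grep -A 5 'Last State'
# A more targeted read of the same information:
kubectl -n myapp get pod crashing-pod \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# (2) Check the container's current memory limit against its startup needs.
kubectl -n myapp get pod crashing-pod \
  -o jsonpath='{.spec.containers[0].resources.limits.memory}'
```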
1. A deployment with 3 replicas is in CrashLoopBackOff after a config change. Only the new pods crash; old pods from the previous ReplicaSet are healthy. How do you diagnose and recover with minimal customer impact?
Show answer
1. Do NOT delete the old pods; they are still serving traffic.
2. Check the new pods' logs and describe output for the crash cause (likely a bad ConfigMap/Secret).
3. Compare the new config with the working config.
4. Roll back: kubectl rollout undo deployment/&lt;name&gt;
5. Fix the config, test in staging, then re-deploy. The RollingUpdate deployment strategy should prevent a full outage if maxUnavailable is configured properly.
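The recovery steps above as commands; the deployment name `web` is a placeholder:

```shell
# 2. Inspect the crashing pods from the new ReplicaSet.
kubectl logs deployment/web --previous          # output from the last crash
kubectl describe deployment/web                 # events, mounted ConfigMaps/Secrets
# 4. Roll back to the last working revision.
kubectl rollout history deployment/web          # list revisions
kubectl rollout undo deployment/web             # revert to the previous one
kubectl rollout status deployment/web --timeout=120s   # wait for pods to go Ready
# 5. Before re-deploying the fix, confirm maxUnavailable keeps capacity up.
kubectl get deployment web \
  -o jsonpath='{.spec.strategy.rollingUpdate.maxUnavailable}'
```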