
Quiz: Kubernetes Debugging


4 questions

L0 (1 question)

1. What are the first three kubectl commands you run when a pod is not working?

Answer:
1. kubectl get pods -n <namespace> -o wide — see status, restart count, and the node each pod is on.
2. kubectl describe pod <pod> -n <namespace> — read the Events section for what Kubernetes tried and where it failed.
3. kubectl logs <pod> -n <namespace> — see application output.
These three answer 80% of questions.
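As a sketch, the triage sequence might look like this for a hypothetical pod `web-0` in a hypothetical namespace `prod` (substitute your own names):

```shell
# First-pass triage for a failing pod. Namespace (prod) and pod name (web-0)
# are placeholders.
kubectl get pods -n prod -o wide          # STATUS, RESTARTS, and the node each pod landed on
kubectl describe pod web-0 -n prod        # scroll to the Events section at the bottom
kubectl logs web-0 -n prod                # application stdout/stderr
kubectl logs web-0 -n prod --previous     # logs from the previous container, if it is crash-looping
```

The `--previous` flag is worth remembering: for a crash-looping pod, the current container's logs are often empty and the failure is only visible in the last terminated instance.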

L1 (1 question)

1. A pod is stuck in Pending state. What are the most common causes and how do you identify which one?

Answer: Common causes: (1) Insufficient resources — Events show 'Insufficient cpu' or 'Insufficient memory'. (2) No node matches the nodeSelector/affinity rules — Events show 'FailedScheduling'. (3) PVC not bound — Events show 'unbound PersistentVolumeClaims'. (4) Node taints without matching tolerations. In every case, the Events section of kubectl describe pod <pod> gives the specific scheduling failure reason.
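A minimal sketch of pulling the scheduling failure reason out directly, again using a hypothetical pod `web-0` in namespace `prod`:

```shell
# Show only the Events section of the pod description.
kubectl describe pod web-0 -n prod | sed -n '/^Events:/,$p'

# Or query events filtered to this pod, newest last:
kubectl get events -n prod \
  --field-selector involvedObject.name=web-0 \
  --sort-by=.lastTimestamp

# If the reason is 'Insufficient cpu/memory', compare pod requests
# against what each node has left:
kubectl describe nodes | grep -A 5 'Allocated resources'
```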

L2 (1 question)

1. A pod is Running but the application is returning 503 errors. kubectl logs shows the app started successfully. How do you debug this?

Answer: The pod is running but is not receiving or handling traffic correctly. Check: (1) Readiness probe — is the pod marked Ready? (2) Service selector — does the Service's selector match the pod's labels? (3) Endpoints — kubectl get endpoints <service> — is the pod's IP listed? (4) Port mapping — does the Service's targetPort match the port the container actually listens on? (5) Network policies — is traffic from the client allowed? Use kubectl port-forward to test the pod directly, bypassing the Service.
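The checks above can be sketched as commands. The Service name `web`, label `app=web`, port `8080`, and path `/healthz` are all hypothetical; substitute what your manifests actually use:

```shell
# Verify Service-to-pod wiring step by step.
kubectl get svc web -n prod -o wide               # note the selector and port/targetPort
kubectl get pods -n prod -l app=web --show-labels # do pod labels actually match that selector?
kubectl get endpoints web -n prod                 # an empty ENDPOINTS column means selector
                                                  # mismatch or pods failing readiness

# Bypass the Service entirely and talk to one pod:
kubectl port-forward pod/web-0 8080:8080 -n prod &
curl -s http://localhost:8080/healthz             # hypothetical health endpoint
```

If the direct curl succeeds while requests through the Service fail, the application is fine and the problem is in the Service/Endpoints/NetworkPolicy layer.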

L3 (1 question)

1. Intermittent 502 errors are hitting your service. Some requests succeed, some fail. Pods look healthy. How do you systematically narrow down the cause?

Answer:
1. Check whether specific pods are unhealthy — curl each pod directly via kubectl port-forward.
2. Check readiness probe — a pod might be flapping between Ready/NotReady.
3. Check if the issue correlates with deployments (new pods not ready yet, old pods terminating).
4. Check ingress/load balancer health check settings vs actual readiness timing.
5. Check for connection draining issues — terminationGracePeriodSeconds and preStop hooks.
6. Use kubectl get events and pod logs with timestamps to correlate errors.
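A rough sketch of steps 1 and 6, assuming a hypothetical deployment labeled `app=web` in namespace `prod` whose containers expose a `/healthz` endpoint on port 8080 and have wget in the image:

```shell
# Correlate 502s with pod lifecycle events (Killing, Unhealthy, etc.), newest last:
kubectl get events -n prod --sort-by=.lastTimestamp

# Watch for pods flapping between Ready and NotReady, or terminating mid-traffic:
kubectl get pods -n prod -l app=web -w

# Probe each backend replica individually to isolate a single bad pod.
# Assumes wget exists in the image and the app serves /healthz on 8080.
for pod in $(kubectl get pods -n prod -l app=web -o name); do
  echo "== $pod =="
  kubectl exec -n prod "$pod" -- wget -qO- http://localhost:8080/healthz || echo FAIL
done
```

If every replica responds when probed directly but clients still see intermittent 502s, suspicion shifts to the ingress/load balancer layer: health-check timing, connection draining, and pod termination ordering.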