Drill: Get Logs from Multi-Container Pods

Goal

Retrieve logs from specific containers in multi-container pods, including init containers and previous crashed instances.

Setup

  • kubectl configured with cluster access
  • A pod with multiple containers or init containers (or create one for testing)

Commands

Get logs from the default container:

kubectl logs <pod-name> -n <namespace>

Specify a container in a multi-container pod:

kubectl logs <pod-name> -c <container-name>

List containers in a pod:

kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'

List init containers:

kubectl get pod <pod-name> -o jsonpath='{.spec.initContainers[*].name}'
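The two jsonpath queries above can be combined into a small loop that walks every container, init containers included. A minimal sketch, assuming a hypothetical pod named my-pod with containers init-db, app, and envoy; the jsonpath output is stubbed in so the loop logic is visible without a live cluster:

```shell
#!/bin/sh
pod=my-pod   # hypothetical pod name; substitute your own

# In a real cluster this list would come from:
#   kubectl get pod "$pod" -o jsonpath='{.spec.initContainers[*].name} {.spec.containers[*].name}'
containers="init-db app envoy"   # stand-in value for illustration

# Build and print the per-container log command for each name
cmds=""
for c in $containers; do
  cmds="$cmds kubectl logs $pod -c $c;"
  echo "kubectl logs $pod -c $c"
done
```

Both jsonpath expressions emit space-separated names, so a plain shell word split is enough to iterate over them.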

Get logs from an init container:

kubectl logs <pod-name> -c <init-container-name>

Get logs from the previous (crashed) instance:

kubectl logs <pod-name> -c <container-name> --previous
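--previous only makes sense once the container has actually restarted, so a common pattern is to check the restart count first. A sketch with hypothetical names (my-pod, app) and a stubbed restart count; the real value would come from the pod's status:

```shell
#!/bin/sh
# In a real cluster the count would come from:
#   kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[?(@.name=="app")].restartCount}'
restarts=3   # stand-in value for illustration

if [ "$restarts" -gt 0 ]; then
  # The container has crashed at least once; fetch the crash logs
  cmd="kubectl logs my-pod -c app --previous"
else
  cmd="kubectl logs my-pod -c app"
fi
echo "$cmd"
```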

Follow logs in real time:

kubectl logs -f <pod-name> -c <container-name>

Get logs from all containers in a pod:

kubectl logs <pod-name> --all-containers=true
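With --all-containers the streams interleave with no indication of which container a line came from; recent kubectl versions also accept a --prefix flag that labels each line with its source. A one-line sketch with a hypothetical pod name, echoed rather than executed:

```shell
#!/bin/sh
pod=my-pod   # hypothetical pod name; substitute your own
cmd="kubectl logs $pod --all-containers=true --prefix"
echo "$cmd"
```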

Get logs with timestamps:

kubectl logs <pod-name> --timestamps=true

Get logs from the last hour:

kubectl logs <pod-name> --since=1h

Tail the last N lines:

kubectl logs <pod-name> --tail=50
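The flags above compose freely, which is how they are usually used in practice: scope the window with --since, cap the volume with --tail, and keep --timestamps for correlation. A sketch assuming a pod named my-pod with a container named app, echoed rather than executed:

```shell
#!/bin/sh
pod=my-pod       # hypothetical pod name
container=app    # hypothetical container name
cmd="kubectl logs $pod -c $container --since=10m --tail=100 --timestamps"
echo "$cmd"
```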

What to Look For

  • Init container logs reveal setup failures before the main containers start
  • --previous shows why a container crashed before it restarted
  • Sidecar container logs (e.g., envoy, fluentd) often hold routing or logging clues
  • Timestamps help correlate events across containers

Common Mistakes

  • Omitting -c on a multi-container pod, which fails with an error that a container name must be specified
  • Forgetting --previous and seeing only the logs of the current (restarted) instance
  • Not checking init container logs when a pod is stuck in an Init status
  • Using -f on a CrashLoopBackOff pod (the follow ends when the container exits)
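The third mistake above suggests a simple triage rule: when the pod's STATUS column reads Init:N/M, the interesting logs are the init container's, not the main container's. A sketch with a hypothetical pod name and a stubbed status string, since the real value would come from kubectl get pod:

```shell
#!/bin/sh
pod=my-pod   # hypothetical pod name; substitute your own

# In a real cluster the status would come from the STATUS column of:
#   kubectl get pod "$pod"
status="Init:0/1"   # stand-in value for illustration

case "$status" in
  Init:*)
    # Pod is blocked on an init container; inspect that, not the main container
    next="kubectl logs $pod -c <init-container-name>"
    ;;
  *)
    next="kubectl logs $pod"
    ;;
esac
echo "$next"
```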

Cleanup

No cleanup needed. These are read-only commands.