Grading Rubric

| Criterion | Strong (3) | Adequate (2) | Weak (1) |
| --- | --- | --- | --- |
| Identified misleading symptom | Immediately checked per-container memory usage; recognized the sidecar was the real consumer | Checked app memory first, then pivoted to the sidecar after seeing it was fine | Spent extended time profiling the application for memory leaks |
| Found root cause in observability domain | Identified the trace sampling change as the cause of sidecar memory growth | Found the sidecar was high on memory, but not why | Assumed the sidecar was a red herring or simply needed more memory |
| Remediated in devops_tooling domain | Reverted the Helm values change and redeployed cleanly | Fixed the issue, but by patching the running pod or changing k8s limits | Increased memory limits without fixing the sampling rate |
| Cross-domain thinking | Explained the full chain: Helm values -> sidecar config -> memory pressure -> misleading OOMKill attribution | Acknowledged it was sidecar-related but missed the Helm connection | Treated it as a straightforward k8s memory limits problem |
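For illustration, the misconfiguration behind the full causal chain might look like the following Helm values change. This is a hedged sketch: the key paths (`tracing.sidecar.samplingRate`) and the sampling values are hypothetical, not taken from the scenario materials.

```yaml
# values.yaml (hypothetical keys) -- the change that triggers the incident.
# Raising the trace sampling rate from 1% to 100% makes the tracing sidecar
# buffer far more spans in memory, so its usage grows until the pod hits its
# memory limit and is OOMKilled -- a kill that is easy to misattribute to
# the application container.
tracing:
  sidecar:
    samplingRate: 1.0   # was 0.01; a Strong remediation reverts this value
```

A Strong remediation reverts this value in the chart's values and rolls it out with a normal `helm upgrade` (or `helm rollback`), rather than patching the live pod or raising memory limits, which only masks the root cause.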

Prerequisite Topic Packs

  • k8s-pods-and-scheduling — needed for understanding OOMKill mechanics and container resource limits
  • oomkilled — needed for Domain A investigation (OOM killer behavior, cgroup memory accounting)
  • observability-deep-dive — needed for Domain B root cause (distributed tracing, sampling rates)
  • helm — needed for Domain C remediation (Helm values, upgrades, diff)
  • containers-deep-dive — needed for understanding sidecar container resource isolation