| Criterion | Excellent | Partial | Poor |
| --- | --- | --- | --- |
| Identified misleading symptom | Immediately checked per-container memory usage; recognized the sidecar was the real consumer | Checked app memory first, then pivoted to the sidecar after seeing the app was fine | Spent extended time profiling the application for memory leaks |
| Found root cause in observability domain | Identified the trace sampling change as the cause of sidecar memory growth | Found the sidecar was high on memory, but not why | Assumed the sidecar was a red herring or simply needed more memory |
| Remediated in devops_tooling domain | Reverted the Helm values change and redeployed cleanly | Fixed the issue, but by patching the running pod or changing Kubernetes limits | Increased memory limits without fixing the sampling rate |
| Cross-domain thinking | Explained the full chain: Helm values -> sidecar config -> memory pressure -> misleading OOMKill attribution | Acknowledged it was sidecar-related but missed the Helm connection | Treated it as a straightforward Kubernetes memory-limits problem |
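The Excellent remediation (reverting the values change rather than raising limits) amounts to restoring the sampling rate in the chart's values and redeploying through Helm, not live-patching the pod. A hypothetical values fragment, with illustrative key names:

```yaml
# Hypothetical Helm values fragment; key names are illustrative.
# The offending change bumped trace sampling to 100%, inflating the
# sidecar's in-memory span buffers until the pod was OOMKilled.
tracing:
  sampling:
    rate: 0.01   # revert from 1.0 back to 1% sampling
```

Applying this via a clean `helm upgrade` (or `helm rollback` to the prior revision) keeps the deployed state and the chart in sync, which is why the rubric scores a `kubectl patch` of the running pod lower even when it stops the OOMKills.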
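The first criterion's top-tier behavior, checking per-container rather than pod-level memory, can be sketched as a small parser over `kubectl top pod <pod> --containers` output. The pod and container names below are hypothetical, and the parser assumes memory is reported in `Mi` units:

```python
def top_memory_container(kubectl_top_output: str) -> str:
    """Given `kubectl top pod <pod> --containers` output, return the
    container consuming the most memory, so an OOMKill can be attributed
    to the right container instead of assumed to be the app."""
    best_name, best_mib = "", -1
    for line in kubectl_top_output.strip().splitlines()[1:]:  # skip header row
        pod, container, cpu, mem = line.split()
        mib = int(mem.rstrip("Mi"))  # assumes Mi units for simplicity
        if mib > best_mib:
            best_name, best_mib = container, mib
    return best_name

# Hypothetical output: the tracing sidecar, not the app, is the real consumer
sample = """POD            NAME           CPU(cores)   MEMORY(bytes)
checkout-7d4   app            120m         180Mi
checkout-7d4   otel-sidecar   40m          730Mi"""
print(top_memory_container(sample))  # -> otel-sidecar
```

A response that runs this check up front lands in the Excellent column; profiling the app first corresponds to the Partial and Poor rows.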