
Pattern: Apply-Without-Reading Manifest

  • ID: FP-047
  • Family: Human Error Amplifier
  • Frequency: Common
  • Blast Radius: Single Service
  • Detection Difficulty: Moderate

The Shape

An engineer applies a Kubernetes manifest from an external source (email, paste, Stack Overflow, a colleague's gist) without fully reading it. The manifest contains a namespace override, sets replicas: 1 for a production service, or targets a different service name than intended. The kubectl apply succeeds. The unintended change (reduced replicas, wrong namespace, overwritten config) causes a production issue that takes time to diagnose because "no deploy was triggered" — the manifest application was informal and not tracked.

How You'll See It

In Kubernetes

An engineer following a tutorial pastes kubectl apply -f - and types in a YAML snippet. The snippet has namespace: default and replicas: 1; the production deployment lives in namespace: production with replicas: 10. With the hardcoded namespace, the apply lands in default and silently does nothing to the intended target. If the snippet instead omits the namespace, kubectl falls back to the current context's namespace — and if that context points at production and the names match, the production deployment is scaled down to replicas: 1.

Or: the engineer runs kubectl apply -f https://example.com/manifest.yaml. The content behind that URL could have changed since it was last reviewed, so what actually gets applied is unknown.
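One way to avoid both traps is to never pipe external YAML straight into kubectl: save it to a file, read the fields that cause most surprises, and preview the change before applying the same reviewed bytes. A minimal sketch — the manifest body below is a stand-in for whatever was fetched, and the kubectl steps are shown commented because they need cluster access:

```shell
# Stand-in for a manifest fetched from an external source
# (e.g. curl -fsSL <url> -o /tmp/manifest.yaml). Content is hypothetical.
cat > /tmp/manifest.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: default
spec:
  replicas: 1
EOF

# Read it before applying: these three fields cause most surprises.
grep -nE 'namespace:|name:|replicas:' /tmp/manifest.yaml

# Preview against the live cluster, then apply the same reviewed file
# (requires cluster access, so shown here as the workflow only):
#   kubectl diff -f /tmp/manifest.yaml
#   kubectl apply -f /tmp/manifest.yaml
```

Applying the saved file (rather than re-fetching the URL) guarantees the bytes you reviewed are the bytes that reach the cluster.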

In CI/CD

A developer saves a rendered Helm manifest to a file while debugging a chart issue. The file has production values hardcoded. A colleague later finds the file and applies it to staging, intending to test — but the file was actually production config, and the staging service is now configured identically to production.
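One mitigation is to make rendered files self-describing: stamp the target environment into the file at render time, and check the stamp before any manual apply. A sketch under those assumptions — the stamp convention is hypothetical, and the rendered body stands in for real helm template output:

```shell
# Stamp the rendered manifest with its target environment at render time.
# The body below is a stand-in for `helm template` output (hypothetical).
{
  echo "# TARGET-ENV: production"
  echo "apiVersion: apps/v1"
  echo "kind: Deployment"
} > rendered.yaml

# Before applying to staging, check the stamp instead of trusting the file.
target=$(grep -m1 '^# TARGET-ENV:' rendered.yaml | awk '{print $3}')
if [ "$target" != "staging" ]; then
  echo "REFUSE: rendered.yaml is for '$target', not staging"
fi
```

Here the check fails loudly: the leftover debug file announces it was rendered for production, so it never reaches the staging cluster.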

The Tell

A change was applied manually (outside the normal CI/CD pipeline). The change wasn't tracked in git or the deployment log. The manifest source was external or generated, not a reviewed repository file.

Common Misdiagnosis

Looks Like             | But Actually                                  | How to Tell the Difference
Bug in production code | Misconfiguration from an unreviewed manifest  | No code change; configuration changed; git blame shows no recent change
Unauthorized change    | Informal manual apply                         | Not unauthorized (the engineer had access); just unreviewed

The Fix (Generic)

  1. Immediate: kubectl diff -f manifest.yaml before any apply — shows what would change.
  2. Short-term: Use --dry-run=server before every manual apply; always cat or less the manifest file before applying.
  3. Long-term: Implement GitOps: all manifests must be committed to git and applied via pipeline; disable direct kubectl apply in production for non-admin users; use admission webhooks that log all applied manifests.
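The short-term habit in step 2 can be made harder to skip with a small pre-apply guard that refuses to proceed when a manifest omits the namespace (so kubectl would fall back to the current context) or targets an unexpected one. A minimal sketch — check_manifest is a hypothetical helper, not a kubectl feature, and it only reads the first namespace: line, so multi-document manifests would need more care:

```shell
# Hypothetical pre-apply guard: fail before kubectl ever sees the file.
check_manifest() {
  local file="$1" expected_ns="$2" ns
  ns=$(grep -m1 -E '^[[:space:]]*namespace:' "$file" | awk '{print $2}')
  if [ -z "$ns" ]; then
    echo "REFUSE: no namespace in $file (would use current context)"
    return 1
  elif [ "$ns" != "$expected_ns" ]; then
    echo "REFUSE: $file targets '$ns', expected '$expected_ns'"
    return 1
  fi
  echo "OK: $file targets '$ns'"
}

# Demo with two local files (contents hypothetical).
cat > /tmp/good.yaml <<'EOF'
metadata:
  name: my-app
  namespace: production
EOF
cat > /tmp/bad.yaml <<'EOF'
metadata:
  name: my-app
EOF

check_manifest /tmp/good.yaml production
check_manifest /tmp/bad.yaml production || true
```

Usage would then be: check_manifest m.yaml production && kubectl diff -f m.yaml, so the diff (and the apply after it) only runs against a manifest that names its target explicitly.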

Real-World Examples

  • Example 1: Engineer applied a debug deployment manifest from a blog post. It had imagePullPolicy: Always and image: my-app:latest. A pod restart pulled a new image version. Mixed versions in production (FP-031).
  • Example 2: YAML from a Stack Overflow answer included terminationGracePeriodSeconds: 0. Applied to a database deployment. Next rolling restart: pods killed immediately without graceful shutdown. Database corruption.

War Story

A new team member was debugging a pod issue. Found a YAML snippet in our internal wiki (which was out of date). Applied it. The YAML had replicas: 1 for a service that should have had replicas: 5. We went from 5 pods to 1. Traffic was fine initially (low traffic hour) then degraded as load increased. We spent 30 minutes looking at "why is this service slow" before someone noticed the pod count. We added kubectl diff as the first command in our runbook for any manual manifest application: "you must run kubectl diff and share the output before applying."

Cross-References

  • Topic Packs: k8s-ops
  • Footguns: k8s-ops/footguns.md — "kubectl apply on manifest you didn't read"
  • Related Patterns: FP-034 (hardcoded namespace override — a specific risk in manifests you haven't read), FP-046 (wrong terminal tab — same "wrong target" human error)