# Pattern: Hardcoded Namespace Override

- **ID:** FP-034
- **Family:** Configuration Landmine
- **Frequency:** Common
- **Blast Radius:** Single Service
- **Detection Difficulty:** Subtle
## The Shape

A Kubernetes manifest explicitly sets `namespace:` in its metadata. That field takes
precedence over the namespace configured in the current kubectl context: running
`kubectl apply -f manifest.yaml` while the context points at staging silently creates
the resource in the manifest's hardcoded namespace. (An explicit `-n staging` that
conflicts with the manifest makes recent kubectl versions refuse with a
namespace-mismatch error, but the resource still never lands where the operator
intended, and wrapper tooling that pre-renders manifests may not surface the conflict
at all.) The result is production deployments when staging was intended, or staging
deployments when the production context was set.
## How You'll See It

### In Kubernetes

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production   # ← hardcoded
spec: ...
```

```console
$ kubectl config set-context --current --namespace=staging
$ kubectl apply -f deploy.yaml
deployment.apps/myapp configured   # in production, not staging: the manifest's namespace wins
```
### In CI/CD
A template generates manifests with hardcoded namespaces for different environments.
A copy-paste creates a staging manifest with namespace: production. The staging
pipeline deploys to production. The staging tests "pass" because the production service
is running; actual staging has the old version.
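One cheap guard against that copy-paste variant is a CI step asserting that each environment's manifests name only their own environment. A sketch, assuming a hypothetical `envs/<env>/` layout (the paths and demo files below are fabricated for illustration):

```shell
# Demo layout with the copy-paste bug baked into staging (illustration only):
mkdir -p /tmp/envs/staging /tmp/envs/production
printf 'metadata:\n  namespace: production\n' > /tmp/envs/staging/deploy.yaml
printf 'metadata:\n  namespace: production\n' > /tmp/envs/production/deploy.yaml

# Flag any environment directory whose manifests pin a different namespace:
for env in staging production; do
  grep -rq "namespace: *${env}\$" "/tmp/envs/${env}/" || echo "namespace mismatch in ${env}"
done
```

In this demo the loop reports a mismatch for `staging`, because its manifest pins `production`; a real pipeline would `exit 1` on that condition.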
## The Tell

- `kubectl get deployment myapp -n staging` returns "not found."
- `kubectl get deployment myapp -n production` shows the recently-applied change.
- The manifest file contains an explicit `namespace:` field in its metadata.
## Common Misdiagnosis

| Looks Like | But Actually | How to Tell the Difference |
|---|---|---|
| Wrong kubectl context | Hardcoded namespace | The context is correct; the manifest's `namespace` field overrides the context's default namespace |
| Deploy to wrong environment | Manifest namespace overrides the intended namespace | Check the manifest metadata for an explicit `namespace:`; without `-n` the manifest wins silently, and a conflicting explicit `-n` errors out on recent kubectl |
| Missing resource | Resource deployed to the wrong namespace | The resource exists, just in a different namespace than expected |
## The Fix (Generic)

- **Immediate:** Delete the misplaced resource; re-apply to the correct namespace (after removing the hardcoded namespace from the manifest).
- **Short-term:** Remove `namespace:` from all manifest metadata; rely on the `-n` flag or Kustomize namespace transforms.
- **Long-term:** Use the Kustomize `namespace:` field in `kustomization.yaml` to set the namespace at deploy time; add a CI check that rejects manifests with hardcoded production namespaces; use separate kubeconfig contexts per environment.
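The long-term Kustomize approach can look like this; a minimal sketch, assuming a conventional `base/` plus per-environment overlay layout (directory names are illustrative):

```yaml
# overlays/staging/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging        # stamped onto every resource at build time
resources:
  - ../../base            # base manifests carry no namespace at all
```

Deploying with `kubectl apply -k overlays/staging` then sets the namespace once, at deploy time, instead of in every manifest.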
## Real-World Examples

- **Example 1:** A developer applied a debug deployment manifest (with `namespace: production` hardcoded) intending `-n staging`. The debug container ran in production for two hours before it was discovered. No data was accessed, but it was a compliance violation.
- **Example 2:** A Helm chart rendered manifests with `namespace: {{ .Release.Namespace }}`. A developer rendered locally and saved the rendered YAML (with the hardcoded production namespace) to git. CI applied that saved YAML, bypassing Helm's templating.
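For context on Example 2, the chart template resolves the namespace only at install time; a sketch of what the relevant excerpt might look like (the chart and resource names are illustrative):

```yaml
# templates/deployment.yaml (Helm chart excerpt, hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: {{ .Release.Namespace }}   # filled in by `helm install -n <ns>`
```

Committing the rendered output freezes whichever namespace it happened to be rendered with, which is exactly how the hardcoded value reached CI.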
## War Story

Staging deploy pipeline: "deploy to staging." We ran it. Everything looked fine. An hour later, production alert: service degraded. Checked the production deployment: new version (the one we had just deployed to "staging"). The manifest had `namespace: production` hardcoded, left over from an old incident where we'd manually created a resource and forgotten to clean up the namespace field. The `-n staging` in our CI script never stood a chance: the hardcoded namespace won. We spent 20 minutes confused before comparing manifest metadata. We removed all hardcoded namespaces and added a grep to CI: `if grep -r 'namespace: production' manifests/; then exit 1; fi`.
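That grep only catches the literal string `namespace: production`; a slightly broader sketch rejects *any* pinned namespace (the `manifests/` path and demo file here are hypothetical fixtures):

```shell
# CI guard: fail if any manifest pins a namespace in its metadata.
mkdir -p /tmp/ci-manifests                                       # demo fixture
printf 'metadata:\n  namespace: production\n' > /tmp/ci-manifests/bad.yaml

if grep -rn '^[[:space:]]*namespace:' /tmp/ci-manifests/; then
  echo "ERROR: hardcoded namespace found in manifests" >&2
  # exit 1   # enable in real CI
fi
```

The `-n` on grep prints file and line number, so the pipeline log points straight at the offending manifest.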
## Cross-References

- **Topic Packs:** k8s-ops
- **Footguns:** `k8s-ops/footguns.md` — "Hardcoding namespace in manifests"
- **Related Patterns:** FP-047 (apply-unread-manifest — same "apply without reading" mistake), FP-046 (wrong terminal tab — wrong environment, different mechanism)