- k8s
- l1
- topic-pack
- kustomize
- k8s-core

Portal | Level: L1: Foundations | Topics: Kustomize, Kubernetes Core | Domain: Kubernetes
# Kustomize - Primer

## Why This Matters
Kustomize is the template-free alternative to Helm for managing Kubernetes manifests. It ships built into kubectl (kubectl apply -k), requires no server-side components, and works by overlaying patches on base manifests rather than injecting values into templates. This means your base YAML is always valid, deployable Kubernetes — not a template with {{ .Values.thing }} placeholders. For teams that find Helm's templating too complex or error-prone, Kustomize provides environment-specific customization (dev/staging/prod) through a layered approach: base manifests + overlays that patch, add, or transform resources. Understanding when to use Kustomize vs Helm is a key architectural decision.
Timeline: Kustomize was created by the Kubernetes SIG-CLI group and first released in 2018. It was integrated into kubectl in v1.14 (March 2019) via `kubectl apply -k`, making it the only configuration management tool shipped inside kubectl itself. The name is a play on "customize" with a K for Kubernetes.
## Core Concepts

### 1. Directory Structure
```
myapp/
  base/
    kustomization.yaml
    deployment.yaml
    service.yaml
    configmap.yaml
  overlays/
    dev/
      kustomization.yaml
      replica-patch.yaml
    staging/
      kustomization.yaml
      replica-patch.yaml
      ingress.yaml
    production/
      kustomization.yaml
      replica-patch.yaml
      hpa.yaml
      resource-limits-patch.yaml
```
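The layout above can be scaffolded in one pass. A minimal sketch (the file names are the ones used in this primer; the extra `ingress.yaml`, `hpa.yaml`, and `resource-limits-patch.yaml` are left out for brevity):

```shell
# Hypothetical scaffold for the base + overlays layout shown above
mkdir -p myapp/base myapp/overlays/dev myapp/overlays/staging myapp/overlays/production

# Every directory Kustomize reads needs a kustomization.yaml
touch myapp/base/kustomization.yaml \
      myapp/base/deployment.yaml myapp/base/service.yaml myapp/base/configmap.yaml
for env in dev staging production; do
  touch "myapp/overlays/$env/kustomization.yaml" "myapp/overlays/$env/replica-patch.yaml"
done
```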
### 2. Base Kustomization

```yaml
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml
commonLabels:
  app.kubernetes.io/name: myapp
  app.kubernetes.io/managed-by: kustomize
commonAnnotations:
  team: platform
```

```yaml
# base/deployment.yaml (plain, valid Kubernetes YAML)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```
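Rendering the base alone (`kubectl kustomize base`) applies the label and annotation transformers to every resource. For the Deployment above the output looks roughly like this (abridged sketch, assuming default transformer behavior):

```yaml
# kubectl kustomize base (abridged sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app.kubernetes.io/name: myapp
    app.kubernetes.io/managed-by: kustomize
  annotations:
    team: platform
spec:
  selector:
    matchLabels:
      app: myapp
      app.kubernetes.io/name: myapp          # commonLabels reaches selectors too
      app.kubernetes.io/managed-by: kustomize
```

Note that `commonLabels` is injected into `spec.selector.matchLabels` as well, which matters later (see the Gotcha at the end of this page).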
### 3. Overlays and Patches
Strategic merge patch (most common):
```yaml
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: production
namePrefix: prod-
patches:
  - path: replica-patch.yaml
  - path: resource-limits-patch.yaml
images:
  - name: myapp
    newName: registry.example.com/myapp
    newTag: v2.3.1
```

```yaml
# overlays/production/replica-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 5
```

```yaml
# overlays/production/resource-limits-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: myapp
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: "2"
              memory: 1Gi
```
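Building this overlay merges the patches into the base and then applies the name, namespace, and image transformers. The result is roughly (abridged sketch, derived from the files above):

```yaml
# kubectl kustomize overlays/production (abridged sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod-myapp          # namePrefix applied after patching
  namespace: production
spec:
  replicas: 5               # from replica-patch.yaml
  template:
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:v2.3.1   # images transformer
```

Patches target the original resource name (`myapp`); the `prod-` prefix is applied afterward, which is why the patch files do not need to know about it.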
JSON patch (for precise operations):
```yaml
# overlays/dev/kustomization.yaml (excerpt)
patches:
  - target:
      kind: Deployment
      name: myapp
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 1
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: DEBUG
          value: "true"
```
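One caveat worth knowing: per RFC 6902, an `add` with an array-append path like `/env/-` fails if the `env` list does not exist yet. If the base container declares no env vars, create the list instead (sketch):

```yaml
- op: add
  path: /spec/template/spec/containers/0/env
  value:
    - name: DEBUG
      value: "true"
```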
### 4. Generators
Kustomize can generate ConfigMaps and Secrets from files or literals, with automatic hash suffixes for rollout triggering.
```yaml
# kustomization.yaml
configMapGenerator:
  - name: app-config
    literals:
      - DATABASE_HOST=db.example.com
      - LOG_LEVEL=info
    files:
      - configs/app.properties
  - name: nginx-config
    files:
      - nginx.conf=configs/nginx.conf
secretGenerator:
  - name: db-credentials
    literals:
      - username=admin
      - password=secret123
    type: Opaque
  - name: tls-cert
    files:
      - tls.crt=certs/server.crt
      - tls.key=certs/server.key
    type: kubernetes.io/tls

# Hash suffix behavior:
# ConfigMap name becomes: app-config-abc123
# Deployment references automatically update
# Changing config values triggers a new rollout (new hash = new name)
#
# This is Kustomize's killer feature for config management:
# change a config value → hash changes → new ConfigMap name →
# Deployment spec changes → Kubernetes rolls out new pods.
# No manual "kubectl rollout restart" needed.

generatorOptions:
  disableNameSuffixHash: false  # default: false (keep hashes)
  labels:
    generated-by: kustomize
```
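For the automatic reference rewrite to work, workloads must reference the generator's declared name; Kustomize rewrites it to the hashed name at build time. A hypothetical fragment of the base Deployment's container spec:

```yaml
# base/deployment.yaml (container fragment, hypothetical)
envFrom:
  - configMapRef:
      name: app-config   # rewritten to app-config-<hash> by kustomize build
```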
### 5. Component Composition
Components are reusable Kustomize "fragments" that can be included in multiple overlays.
```yaml
# components/monitoring/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
  - target:
      kind: Deployment
    patch: |-
      - op: add
        path: /spec/template/metadata/annotations/prometheus.io~1scrape
        value: "true"
      - op: add
        path: /spec/template/metadata/annotations/prometheus.io~1port
        value: "9090"
resources:
  - servicemonitor.yaml
```

```yaml
# components/debug/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
  - target:
      kind: Deployment
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: LOG_LEVEL
          value: debug
```

```yaml
# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
components:
  - ../../components/monitoring
  - ../../components/debug
namespace: staging
```
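Components compose à la carte: each overlay lists only the fragments it needs. A dev overlay that wants debug logging but no monitoring might look like this (sketch):

```yaml
# overlays/dev/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
components:
  - ../../components/debug
namespace: dev
```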
### 6. Transformers
```yaml
# kustomization.yaml

# Add namespace to all resources
namespace: production

# Add prefix/suffix to all resource names
namePrefix: prod-
nameSuffix: -v2

# Override images (without editing base YAML)
images:
  - name: myapp
    newName: registry.example.com/myapp
    newTag: v2.3.1
  - name: nginx
    newName: registry.example.com/nginx
    digest: sha256:abc123...

# Add labels to all resources
commonLabels:
  environment: production
  team: platform

# Add annotations to all resources
commonAnnotations:
  config.kubernetes.io/managed-by: kustomize

# Replacements (field substitution, Kustomize v5+)
replacements:
  - source:
      kind: ConfigMap
      name: app-config
      fieldPath: data.DATABASE_HOST
    targets:
      - select:
          kind: Deployment
          name: myapp
        fieldPaths:
          - spec.template.spec.containers.[name=myapp].env.[name=DB_HOST].value
```
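The replacement above overwrites an existing field, so it assumes the Deployment already declares a `DB_HOST` env var for it to target (hypothetical fragment):

```yaml
# base/deployment.yaml (container fragment, hypothetical)
env:
  - name: DB_HOST
    value: placeholder   # overwritten with data.DATABASE_HOST at build time
```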
### 7. kubectl Integration
```sh
# Preview rendered output (dry run)
kubectl kustomize overlays/production
kustomize build overlays/production

# Apply directly
kubectl apply -k overlays/production

# Diff against live cluster
kubectl diff -k overlays/production

# Delete resources managed by the kustomization
kubectl delete -k overlays/production

# Use a specific kustomize version (standalone binary), then apply
kustomize build overlays/production | kubectl apply -f -

# Generate and pipe to other tools
kustomize build overlays/production | kubeval
kustomize build overlays/production | kube-score score -
```
### 8. Kustomize vs Helm
| Aspect | Kustomize | Helm |
|---|---|---|
| Base YAML | Valid Kubernetes YAML | Go templates (not valid YAML) |
| Customization | Patches and overlays | Template values |
| Complexity | Low (patches are intuitive) | Higher (template logic) |
| Reusability | Components, bases | Charts, subcharts |
| State | Stateless | Release tracking (secrets or DB) |
| Ecosystem | Built into kubectl | Thousands of community charts |
| Best for | Internal apps, simple variations | Shared packages, complex apps |
When to use Kustomize: internal services where you own the YAML and need per-environment variations.
When to use Helm: third-party applications (nginx-ingress, prometheus, cert-manager) where community charts are the standard.
Together: many teams use Helm to install third-party charts and Kustomize to manage their own application manifests. ArgoCD and Flux support both natively.
Gotcha: a common Kustomize mistake is using `commonLabels` when you already have label selectors. Kustomize adds `commonLabels` to both `metadata.labels` and `spec.selector.matchLabels`. Since Kubernetes forbids changing selectors on existing Deployments, adding a `commonLabels` entry after initial deployment fails with an immutable-field error. If selectors are already set, use the `labels` transformer (which by default touches only `metadata.labels`) instead.

Interview tip: when asked "Kustomize vs Helm," the concise answer is: Kustomize patches real YAML (what you see is what you deploy); Helm templates YAML (what you see has `{{ }}` placeholders that resolve at render time). Kustomize is simpler but less powerful; Helm is more powerful but harder to debug when templates produce unexpected output.
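In Kustomize v5, the selector-safe form of label injection is the `labels` transformer, which leaves selectors alone unless told otherwise (a sketch; `includeSelectors` defaults to false):

```yaml
labels:
  - pairs:
      environment: production
    includeSelectors: false   # default; spec.selector.matchLabels untouched
```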
## Quick Reference

```sh
# Build and preview
kubectl kustomize overlays/production   # preview YAML
kustomize build overlays/production     # same, standalone

# Apply
kubectl apply -k overlays/production    # apply to cluster
kubectl diff -k overlays/production     # diff against live

# Validate
kustomize build overlays/production | kubectl apply --dry-run=server -f -

# Edit (run in the directory containing kustomization.yaml)
kustomize edit set image myapp=registry.example.com/myapp:v2.3.1
kustomize edit set namespace production
kustomize edit add resource new-resource.yaml

# Key files
# kustomization.yaml — the index file (resources, patches, generators)
# base/             — shared manifests (valid YAML)
# overlays/<env>/   — per-environment patches
# components/       — reusable fragments
```
## Wiki Navigation

### Prerequisites
- Kubernetes Exercises (Quest Ladder) (CLI) (Exercise Set, L1)
### Related Content
- Adversarial Interview Gauntlet (30 sequences) (Scenario, L2) — Kubernetes Core
- Case Study: Alert Storm — Flapping Health Checks (Case Study, L2) — Kubernetes Core
- Case Study: Canary Deploy Routing to Wrong Backend — Ingress Misconfigured (Case Study, L2) — Kubernetes Core
- Case Study: CrashLoopBackOff No Logs (Case Study, L1) — Kubernetes Core
- Case Study: DNS Looks Broken — TLS Expired, Fix Is Cert-Manager (Case Study, L2) — Kubernetes Core
- Case Study: DaemonSet Blocks Eviction (Case Study, L2) — Kubernetes Core
- Case Study: Deployment Stuck — ImagePull Auth Failure, Vault Secret Rotation (Case Study, L2) — Kubernetes Core
- Case Study: Drain Blocked by PDB (Case Study, L2) — Kubernetes Core
- Case Study: HPA Flapping — Metrics Server Clock Skew, Fix Is NTP (Case Study, L2) — Kubernetes Core
- Case Study: ImagePullBackOff Registry Auth (Case Study, L1) — Kubernetes Core
### Pages that link here
- Anti-Primer: Kustomize
- Certification Prep: CKA — Certified Kubernetes Administrator
- Certification Prep: CKAD — Certified Kubernetes Application Developer
- Chaos Engineering & Fault Injection
- Comparison: Kubernetes Templating
- Kustomize
- Production Readiness Review: Study Plans
- Symptoms
- Symptoms: Alert Storm, Caused by Flapping Health Checks, Fix Is Probe Tuning