
Kustomize: Kubernetes Config Without Templates

  • lesson
  • kustomize
  • helm-comparison
  • kubernetes-configuration
  • gitops
  • multi-environment-deployment
  • strategic-merge-patches

Topics: Kustomize, Helm comparison, Kubernetes configuration, GitOps, multi-environment deployment, strategic merge patches
Level: L1–L2 (Foundations → Operations)
Time: 50–70 minutes
Prerequisites: None (Kubernetes and YAML concepts explained as we go)


The Mission

You maintain a web application that runs in three Kubernetes environments: dev, staging, and production. Today, each environment has its own complete set of YAML files — three copies of every Deployment, Service, and ConfigMap. When you change a container port, you edit nine files. When someone forgets to update the staging copy, the staging deploy breaks at 6pm on a Friday.

Your job: collapse all three environments into a single set of base manifests with thin, per-environment patches. No more copy-paste YAML. No more "forgot to update staging."

You could use Helm templates for this. But Helm solves it by turning your YAML into Go templates — {{ .Values.replicas }} instead of replicas: 3. Your base manifests are no longer valid YAML. They can't be linted, can't be kubectl apply'd directly, and when a template renders wrong, the error message points to the rendered output, not your template. Debugging becomes archaeology.

Kustomize takes a different path: your base manifests stay as plain, valid Kubernetes YAML. You customize them by patching — layering changes on top without touching the originals. What you see in the base directory is what gets deployed (plus patches).

Let's build it.


Part 1: The Simplest Possible Kustomization

Before any theory, let's see the tool work. Here's a complete application base:

# base/deployment.yaml — plain, valid, kubectl-apply-able
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: web-api
          image: registry.example.com/web-api:v1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests: { cpu: 100m, memory: 128Mi }
            limits:   { cpu: 500m, memory: 256Mi }
          env:
            - name: LOG_LEVEL
              value: info
---
# base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-api
spec:
  selector:
    app: web-api
  ports:
    - port: 80
      targetPort: 8080
---
# base/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-api-config
data:
  DATABASE_HOST: "db.internal"
  CACHE_TTL: "300"
  FEATURE_FLAG_NEW_UI: "false"

Nothing unusual — three plain Kubernetes manifests. You could kubectl apply -f each one right now. Now add the glue file:

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml

That's it. The kustomization.yaml lists what resources belong to this base. Run it:

# Preview the rendered output
kubectl kustomize base/

# Or with the standalone binary
kustomize build base/

The output is your three manifests concatenated. No transformation yet — this is the identity operation. But now Kustomize knows about your resources, and you can start layering.

Name Origin: Kustomize is "customize" with a K for Kubernetes. It was created by the Kubernetes SIG-CLI team in 2018 and merged into kubectl in v1.14 (March 2019). It's the only configuration tool that ships inside kubectl — run kubectl apply -k <dir> and you're using Kustomize with zero extra installs.


Part 2: Overlays — Same App, Three Environments

Here's the structure: a base/ directory with your canonical manifests, and an overlays/ directory with one subdirectory per environment. Each overlay has its own kustomization.yaml that references the base and applies patches.
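Concretely, the layout (using the directory and file names from this lesson):

```
.
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── configmap.yaml
└── overlays/
    ├── dev/
    │   ├── kustomization.yaml
    │   └── patches/
    │       └── deployment.yaml
    ├── staging/
    └── production/
        ├── kustomization.yaml
        └── patches/
            ├── deployment.yaml
            └── configmap.yaml
```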

Dev overlay — minimal changes

# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: dev
images:
  - name: registry.example.com/web-api
    newTag: latest
patches:
  - path: patches/deployment.yaml
# overlays/dev/patches/deployment.yaml — just override LOG_LEVEL
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  template:
    spec:
      containers:
        - name: web-api
          env:
            - name: LOG_LEVEL
              value: debug

Production overlay — the interesting one

Production adds a name prefix, annotations, beefier resources, and its own config:

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: production
namePrefix: prod-
commonAnnotations:
  team: platform
  oncall-pager: platform-oncall@example.com
images:
  - name: registry.example.com/web-api
    newTag: v1.0.0
patches:
  - path: patches/deployment.yaml
  - path: patches/configmap.yaml
# overlays/production/patches/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 5
  template:
    spec:
      containers:
        - name: web-api
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: "2"
              memory: 1Gi
          env:
            - name: LOG_LEVEL
              value: warn
# overlays/production/patches/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-api-config
data:
  DATABASE_HOST: "db.production.internal"
  CACHE_TTL: "3600"
  FEATURE_FLAG_NEW_UI: "false"

Staging follows the same pattern with its own replica count (2), its own image tag (v1.0.0-rc.3), and its own database host. You get the idea — the overlay is thin, the base does the heavy lifting.
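Spelled out, a staging overlay consistent with those values might look like this (a sketch; the database host name is assumed, since the lesson doesn't specify it):

```yaml
# overlays/staging/kustomization.yaml -- sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: staging
images:
  - name: registry.example.com/web-api
    newTag: v1.0.0-rc.3
patches:
  - path: patches/deployment.yaml   # sets replicas: 2, staging resources
  - path: patches/configmap.yaml    # sets DATABASE_HOST (e.g. db.staging.internal, assumed)
```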

Now compare all three environments:

# See exactly what each environment gets
kustomize build overlays/dev/        | grep -E "replicas:|image:|namespace:"
kustomize build overlays/staging/    | grep -E "replicas:|image:|namespace:"
kustomize build overlays/production/ | grep -E "replicas:|image:|namespace:"

One base, three thin overlays. Change the container port in base/deployment.yaml and it propagates everywhere. Change production replicas without touching dev. The copy-paste problem is gone.

Mental Model: Think of overlays like CSS. The base is your HTML — the structure. Each overlay is a stylesheet that changes presentation without touching the HTML. Dev gets color: red (debug mode). Production gets font-weight: bold (more replicas, more resources). The HTML stays clean.


Part 3: Strategic Merge Patches — The Heart of Kustomize

That patches/deployment.yaml in each overlay is a strategic merge patch. This is the mechanism that makes Kustomize work, and it's worth understanding because it's different from what you might expect.

A naive merge would replace entire lists. If your base has three environment variables and your patch specifies one, a naive merge would drop the other two. Strategic merge patches are smarter — they merge list items by a key field.

For containers, the key is name. For env vars, the key is also name. This is a Kubernetes-specific extension (not standard JSON Merge Patch from RFC 7386).

Here's what happens when Kustomize applies the production deployment patch:

BASE                           PATCH                        RESULT
───────────────────            ──────────────────           ──────────────────
containers:                    containers:                  containers:
  - name: web-api      ←match→   - name: web-api             - name: web-api
    image: ...v1.0.0                                           image: ...v1.0.0 ← kept
    cpu: 100m                       cpu: 500m                  cpu: 500m        ← replaced
    env:                            env:                       env:
      - name: LOG_LEVEL               - name: LOG_LEVEL          - name: LOG_LEVEL
        value: info        ←match→       value: warn               value: warn  ← replaced

Container matched by name: web-api. Env var matched by name: LOG_LEVEL. Everything else preserved. You only specify what changes.
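The merge-by-key behavior can be sketched in a few lines of Python. This is an illustrative toy, not Kustomize's implementation; the real merge keys come from the Kubernetes API schema, whereas here the key is hardcoded to "name":

```python
def merge_dict(base: dict, patch: dict) -> dict:
    """Recursively merge patch into base, strategic-merge style."""
    out = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_dict(out[key], value)
        elif (isinstance(value, list) and isinstance(out.get(key), list)
              and value and isinstance(value[0], dict) and "name" in value[0]):
            out[key] = merge_list_by_key(out[key], value)
        else:
            out[key] = value  # scalars and unkeyed lists are replaced outright

    return out

def merge_list_by_key(base: list, patch: list, key: str = "name") -> list:
    """Merge list items by their key field instead of replacing the list."""
    merged = {item[key]: item for item in base}
    for item in patch:
        merged[item[key]] = merge_dict(merged.get(item[key], {}), item)
    return list(merged.values())

base = [{"name": "web-api", "image": "web-api:v1.0.0",
         "env": [{"name": "LOG_LEVEL", "value": "info"}]}]
patch = [{"name": "web-api",
          "env": [{"name": "LOG_LEVEL", "value": "warn"}]}]

result = merge_list_by_key(base, patch)
print(result[0]["image"])            # -> web-api:v1.0.0 (kept)
print(result[0]["env"][0]["value"])  # -> warn (replaced)
```

A naive `out[key] = value` for every list would have dropped the image and any env vars the patch didn't mention; matching by name is what preserves them.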

Trivia: Strategic merge patches are unique to Kubernetes. Standard JSON Merge Patch (RFC 7386) replaces entire arrays — if your patch has one container, you lose all other containers. The Kubernetes API server uses strategic merge patches internally for kubectl apply, and Kustomize reuses the same logic client-side. The merge keys for each field are defined in the Kubernetes API schema via the x-kubernetes-patch-merge-key and x-kubernetes-patch-strategy extensions.

When strategic merge isn't enough: JSON patches

Sometimes you need precision that strategic merge can't provide — inserting at a specific array index, removing a field, or operating on resources where merge keys aren't defined. That's when you use JSON Patch (RFC 6902):

# overlays/dev/kustomization.yaml
patches:
  - target:
      kind: Deployment
      name: web-api
    patch: |
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: DEBUG_SQL
          value: "true"
      - op: replace
        path: /spec/replicas
        value: 1

The - at the end of /spec/template/spec/containers/0/env/- means "append to the array." JSON patches are explicit about the operation (add, remove, replace, move, copy, test), which makes them more verbose but unambiguous.

Rule of thumb: Use strategic merge patches for most things. Reach for JSON patches when you need to add to arrays without a merge key, remove fields, or when the implicit merge behavior surprises you.


Flashcard Check #1

Cover the answers and test yourself.

| Question | Answer |
|---|---|
| What makes Kustomize bases different from Helm chart templates? | Bases are valid, deployable Kubernetes YAML. Helm templates contain Go template directives and are not valid YAML until rendered. |
| What is the merge key for containers in a strategic merge patch? | name — Kustomize matches containers by their name field and merges the rest. |
| Which kubectl flag applies a kustomization directory? | kubectl apply -k <dir> |
| What does the namespace: field in a kustomization.yaml do? | It overrides the namespace on all namespaced resources — even ones that already specify a namespace. It's a transformer, not a default. |

Part 4: ConfigMap Generators and the Hash Trick

Here's a problem: you update a ConfigMap, kubectl apply it, and... nothing happens. Your pods keep running with the old config. ConfigMaps mounted as volumes update eventually (kubelet syncs every 60-120 seconds), but ConfigMaps referenced via envFrom never update without a pod restart.

Kustomize solves this elegantly with configMapGenerator:

# base/kustomization.yaml (updated)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml

configMapGenerator:
  - name: web-api-config
    literals:
      - DATABASE_HOST=db.internal
      - CACHE_TTL=300
      - FEATURE_FLAG_NEW_UI=false

Build it and look at the ConfigMap name:

$ kustomize build base/
# ...
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-api-config-g74bgt5d9m    # ← hash suffix!
data:
  CACHE_TTL: "300"
  DATABASE_HOST: db.internal
  FEATURE_FLAG_NEW_UI: "false"

That g74bgt5d9m suffix is a hash of the ConfigMap's contents. Change any value and the hash changes. A new hash means a new ConfigMap name. Kustomize automatically updates every reference to this ConfigMap in your Deployments. A new Deployment spec means Kubernetes triggers a rolling update.

The chain reaction:

Config value changes
  → Hash changes
    → ConfigMap gets a new name (web-api-config-h85cfu6e0n)
      → Deployment spec references the new name
        → Kubernetes sees the Deployment spec changed
          → Rolling update begins
            → New pods start with new config

No kubectl rollout restart. No manual annotation bumps. The config change is the rollout trigger.

Gotcha: If you set disableNameSuffixHash: true because you want a predictable ConfigMap name, you lose this automatic rollout behavior. Pods will keep the old config until manually restarted. Only disable the hash for ConfigMaps that don't need to trigger pod restarts (like those consumed by label selectors or external systems).
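For reference, the opt-out is set per generator (a sketch; weigh the gotcha above before using it):

```yaml
configMapGenerator:
  - name: web-api-config
    options:
      disableNameSuffixHash: true   # stable name, but config changes no longer trigger rollouts
    literals:
      - DATABASE_HOST=db.internal
```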

You can generate ConfigMaps from files too:

configMapGenerator:
  - name: nginx-config
    files:
      - nginx.conf=configs/nginx.conf

secretGenerator:
  - name: db-credentials
    envs:
      - secrets.env    # MUST be in .gitignore
    type: Opaque

War Story: A team disabled the hash suffix on their ConfigMap generator because the hash "looked ugly in kubectl get configmap." Two weeks later, they pushed a database connection string change. The ConfigMap updated, but pods kept the old connection string for 18 hours — until the next deploy triggered a rollout for unrelated reasons. The fix took 5 minutes: re-enable the hash. The debugging took the entire afternoon. The hash exists because Kubernetes ConfigMap updates and pod restarts are fundamentally decoupled. Kustomize's hash suffix bridges that gap.


Part 5: Components — Reusable Cross-Cutting Concerns

Your dev overlay wants debug logging. Your staging overlay also wants debug logging plus Prometheus scrape annotations. Production wants the Prometheus annotations but not debug logging. With just bases and overlays, you'd duplicate the Prometheus patches in both staging and production.

Components solve this. They're reusable fragments that any overlay can include:

# components/prometheus/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
  - target:
      kind: Deployment
    patch: |
      - op: add
        path: /spec/template/metadata/annotations/prometheus.io~1scrape
        value: "true"
      - op: add
        path: /spec/template/metadata/annotations/prometheus.io~1port
        value: "9090"

Now overlays pick what they need — staging gets both components, production gets only Prometheus:

# overlays/staging/kustomization.yaml — uses both components
resources: [../../base]
components: [../../components/prometheus, ../../components/debug]
namespace: staging
# overlays/production/kustomization.yaml — monitoring only, no debug
resources: [../../base]
components: [../../components/prometheus]
namespace: production

The ~1 in the JSON patch path is the RFC 6901 encoding for / — the annotation key prometheus.io/scrape becomes prometheus.io~1scrape in JSON Pointer syntax.
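The debug component referenced by the staging overlay isn't shown above; a minimal sketch (assuming LOG_LEVEL is the container's first env var, as in this lesson's base):

```yaml
# components/debug/kustomization.yaml -- sketch
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
  - target:
      kind: Deployment
    patch: |
      # Assumes env[0] is LOG_LEVEL; a strategic merge patch matching
      # by name would be more robust for real use.
      - op: replace
        path: /spec/template/spec/containers/0/env/0/value
        value: debug
```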

Remember: Components use kind: Component (not Kustomization). They're mixins: the overlay brings the base, the component brings the behavior.


Part 6: Replacements — Variable Substitution Without Templates

Sometimes you need a value from one resource to appear in another. The classic case: you want your app's Deployment to reference the Service name in an environment variable, but you don't want to hardcode it.

Kustomize's replacements (which replaced the deprecated vars in Kustomize 4.5) handle this:

# kustomization.yaml
replacements:
  - source:
      kind: Service
      name: web-api
      fieldPath: metadata.name
    targets:
      - select:
          kind: Deployment
          name: web-api
        fieldPaths:
          - spec.template.spec.containers.[name=web-api].env.[name=SERVICE_NAME].value

This says: "take the metadata.name from the Service named web-api and inject it into the Deployment's SERVICE_NAME environment variable." If a namePrefix changes the Service name to prod-web-api, the environment variable updates automatically.

Gotcha: The replacements syntax is verbose compared to Helm's {{ .Values.thing }}. This is by design — Kustomize trades terseness for explicitness. Every substitution has a clear source and target. The upside: you never wonder "where does this value come from?" because the kustomization.yaml tells you exactly.


Flashcard Check #2

| Question | Answer |
|---|---|
| Why does configMapGenerator append a hash to the ConfigMap name? | So that changing config content produces a new name, which triggers Kubernetes to roll out new pods. Without the hash, pods keep the stale config. |
| What is the difference between a Kustomize Component and a base? | A base provides complete resources. A component provides partial modifications (patches, transformers) that can be mixed into any overlay. Components use kind: Component. |
| What does ~1 mean in a JSON patch path? | It's the RFC 6901 encoding for /. Used when the key itself contains a forward slash (like prometheus.io/scrape). |
| What replaced vars in modern Kustomize? | replacements — more powerful but more verbose. Vars still work but generate deprecation warnings. |

Part 7: Kustomize vs Helm — The Decision Matrix

This is the question that comes up in every architecture review and every Kubernetes interview. The answer is not "one is better" — it's "they solve different problems."

| Dimension | Kustomize | Helm |
|---|---|---|
| Base files | Valid Kubernetes YAML | Go templates (not valid YAML) |
| Customization | Patches layered on top | Values injected into templates |
| Debugging | Diff the patch against the base | Render the template, find the YAML error, trace it back to the template |
| Learning curve | Shallow — YAML + patches | Steeper — Go template syntax, sprig functions, flow control |
| Ecosystem | Built into kubectl, no chart repos | Thousands of community charts on Artifact Hub |
| State tracking | Stateless (pure build tool) | Release history stored as K8s Secrets |
| Rollback | Re-apply previous Git commit | helm rollback <release> <revision> |
| Best for | Your own apps with per-env variations | Third-party apps (nginx-ingress, prometheus, cert-manager) |
| Reusability | Components, bases | Charts, subcharts, library charts |
| Conditionals | No if/else — use overlays or components | Full Go template logic ({{- if }}, range, with) |

When to use Kustomize

  • You own the YAML and need dev/staging/prod variations
  • You want base manifests that are always valid and deployable
  • Your team is allergic to Go template syntax
  • You're already deep in a GitOps workflow (ArgoCD/Flux handle Kustomize natively)

When to use Helm

  • You're deploying third-party software (use the community chart, don't reinvent it)
  • You need complex conditional logic (optional sidecars, feature-flagged resources)
  • You want release management with built-in rollback
  • You're packaging something for others to consume

When to use both

This is the most common pattern in production. Helm installs third-party charts. Kustomize manages your own application manifests. Sometimes they even work together:

# Render a Helm chart to plain YAML, then customize with Kustomize
helm template prometheus prometheus-community/prometheus \
  --values prometheus-values.yaml \
  --namespace monitoring > base/prometheus.yaml

# Now use Kustomize overlays to patch the rendered output
kustomize build overlays/production/

War Story: A team used Helm to manage their own microservices. Each service had a chart with 15 template files and a 200-line values.yaml. When a developer added a new environment variable, they forgot the | quote pipe in the template. The template rendered DEBUG: true instead of DEBUG: "true". YAML parsed it as a boolean. The app read the environment variable as the string "True" (Python's str(True)), which didn't match the expected "true". This took four hours to debug because the rendered YAML looked correct — the bug was invisible without checking YAML type coercion. With Kustomize, the base would have been value: "true" (a plain YAML string, quoted explicitly), and the patch would have been another plain YAML string. No template rendering, no type coercion, no invisible bugs.

Interview Bridge: When asked "Kustomize vs Helm," the concise answer is: Kustomize patches real YAML (WYSIWYG). Helm templates YAML (what you see has {{ }} placeholders that resolve at render time). Kustomize is simpler but less powerful. Helm is more powerful but harder to debug when templates produce unexpected output. Most production teams use both.


Part 8: Kustomize in GitOps — ArgoCD and Flux

Both major GitOps controllers understand Kustomize natively. Point them at an overlay directory and they handle the build + apply + drift detection loop.

# ArgoCD Application — just point source.path at your overlay
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-api-production
  namespace: argocd
spec:
  source:
    repoURL: https://github.com/yourorg/k8s-manifests.git
    targetRevision: main
    path: web-api/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true     # reverts manual kubectl edits

ArgoCD runs kustomize build on that path, compares the output to what's live in the cluster, and syncs differences. If someone runs kubectl scale manually, ArgoCD reverts it on the next sync cycle. Flux works the same way with its Kustomization resource.
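Flux's equivalent, for comparison (a sketch; assumes a GitRepository source named k8s-manifests is already defined):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: web-api-production
  namespace: flux-system
spec:
  interval: 5m                       # how often Flux reconciles
  path: ./web-api/overlays/production
  prune: true                        # delete resources removed from Git
  sourceRef:
    kind: GitRepository
    name: k8s-manifests
```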

Push a new image tag to the production overlay's kustomization.yaml, and within minutes the cluster converges. No kubectl apply, no CI pipeline with cluster credentials.

Mental Model: In GitOps, your overlay directory is the desired state of the environment. overlays/production/ is production — not "a description of production" but the actual source of truth. Kustomize's stateless, deterministic build is perfect for this: same input always produces the same output, so Git history is your audit trail.


Part 9: Real Commands You'll Actually Run

# Build and preview
kustomize build overlays/production/              # render to stdout
kubectl kustomize overlays/production/            # same thing, kubectl built-in

# Apply
kubectl apply -k overlays/production/             # build + apply in one step
kustomize build overlays/production/ | kubectl apply -f -  # explicit two-step (preferred in CI)

# Diff against live cluster
kubectl diff -k overlays/production/

# Validate before applying
kustomize build overlays/production/ | kubectl apply --dry-run=server -f -

# Edit kustomization.yaml programmatically
kustomize edit set image registry.example.com/web-api=registry.example.com/web-api:v1.1.0
kustomize edit set namespace production

# Pipe to validation tools
kustomize build overlays/production/ | kubeval -             # schema validation
kustomize build overlays/production/ | conftest test -       # policy checks (OPA)

Gotcha: The Kustomize version bundled inside kubectl lags behind the standalone binary — often by a year or more. Features like replacements and components may work with kustomize build but fail with kubectl apply -k. In CI, pin the standalone binary and use kustomize build | kubectl apply -f - for consistency.


Part 10: The Footguns

Kustomize is simpler than Helm, but "simpler" doesn't mean "safe." These are the mistakes that actually bite people.

Silent patch skip

You write a patch targeting name: myapp-deployment but the base resource is named name: myapp. Kustomize does not error. It does not warn. It silently skips the patch. Your production deploy goes out with 1 replica instead of 5.

Always verify: kustomize build overlays/production/ | grep replicas: after adding any patch. Add this as a CI step.
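One way to turn that check into a fail-fast CI step (a sketch; `require_field` is a made-up helper, and the real pipeline would feed it `kustomize build` output instead of the stub below):

```shell
#!/usr/bin/env bash
# Sketch of a CI guard: fail when an expected field is missing from the
# rendered output -- the symptom of a silently skipped patch.
require_field() {
  grep -q "$1" || { echo "ERROR: '$1' missing -- patch skipped?" >&2; return 1; }
}

# In CI:  kustomize build overlays/production/ | require_field "replicas: 5"
# Demo with stubbed output standing in for kustomize build:
printf 'spec:\n  replicas: 5\n' | require_field "replicas: 5" && echo "check passed"
```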

The commonLabels trap

You add commonLabels to inject a team: platform label everywhere. Kustomize helpfully adds it to metadata.labels AND spec.selector.matchLabels. Kubernetes forbids changing selectors on existing Deployments. Your next deploy fails with:

The Deployment "web-api" is invalid: spec.selector: Invalid value:
  ... field is immutable

Fix: Use the labels transformer (which only adds to metadata) instead of commonLabels if you have existing Deployments with established selectors.
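A sketch of that alternative, using the list-form labels transformer (available in newer Kustomize releases):

```yaml
# kustomization.yaml -- adds team: platform to metadata only
labels:
  - pairs:
      team: platform
    includeSelectors: false   # the default; selectors are left alone
```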

The images transformer name mismatch

The images transformer matches by exact image name. If your base has image: docker.io/myorg/web-api:v1.0.0, the transformer name must be docker.io/myorg/web-api — not web-api or myorg/web-api. A mismatch means the tag override silently does nothing.

Debug: kustomize build base/ | grep image: to see the exact string to match.
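The working stanza for that example, for reference:

```yaml
images:
  - name: docker.io/myorg/web-api   # full name as it appears in the base, minus tag
    newTag: v2.0.0
```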


Flashcard Check #3

| Question | Answer |
|---|---|
| What happens if a Kustomize patch targets a resource name that doesn't exist in the base? | Nothing. The patch is silently skipped with no error or warning. This is the most dangerous Kustomize behavior. |
| Why is commonLabels dangerous on existing Deployments? | It adds labels to spec.selector.matchLabels, which Kubernetes makes immutable after creation. Adding a new commonLabel to an existing Deployment causes an immutable field error. |
| Which version of Kustomize does kubectl apply -k use? | A bundled version that lags behind the standalone binary, often by a year or more. Features available in the standalone kustomize may not work via kubectl apply -k. |
| What GitOps controllers support Kustomize natively? | ArgoCD and Flux both detect kustomization.yaml and run kustomize build automatically. |

Exercises

Exercise 1: Build and inspect (2 minutes)

Create the base directory from Part 1. Run kustomize build base/. Verify the output is valid with kustomize build base/ | kubectl apply --dry-run=client -f -.

Hint: You need four files in the base/ directory: `kustomization.yaml`, `deployment.yaml`, `service.yaml`, and `configmap.yaml`. Copy them from Part 1.

Exercise 2: Create a dev overlay (5 minutes)

Create an overlay that sets namespace: dev, changes replicas to 1, and adds LOG_LEVEL=debug as an environment variable. Build it and verify the namespace, replica count, and env var are all correct in the output.

Hint: Your overlay's kustomization.yaml needs `resources: [../../base]`, `namespace: dev`, and a patch file. Use a strategic merge patch — match by `name: web-api` for both the Deployment and the container.

Exercise 3: Spot the bug (5 minutes)

This overlay silently fails to change the image tag. Why?

# kustomization.yaml
images:
  - name: web-api
    newTag: v2.0.0

The base deployment has image: registry.example.com/web-api:v1.0.0.

Answer: The `name` field in the images transformer must match the full image name as it appears in the base: `registry.example.com/web-api`. Using just `web-api` doesn't match, so the transformer silently does nothing.

Exercise 4: Component composition (15 minutes)

Create two components: one that adds Prometheus scrape annotations and one that enables debug logging. Create a staging overlay that uses both, and a production overlay that uses only Prometheus. Verify the rendered output for each environment.

Hint: Components use `apiVersion: kustomize.config.k8s.io/v1alpha1` and `kind: Component`. The overlay includes them via `components: [../../components/prometheus]`. Remember that `~1` encodes `/` in JSON patch paths.

Cheat Sheet

| Task | Command / Config |
|---|---|
| Preview rendered YAML | kustomize build overlays/prod/ |
| Apply to cluster | kubectl apply -k overlays/prod/ |
| Diff against live | kubectl diff -k overlays/prod/ |
| Set image tag | kustomize edit set image app=app:v2 |
| Set namespace | kustomize edit set namespace prod |
| Add a resource | kustomize edit add resource new.yaml |
| Validate | kustomize build ... \| kubectl apply --dry-run=server -f - |

| Concept | What it does |
|---|---|
| resources: | Lists YAML files or directories to include |
| patches: | Strategic merge or JSON patches to apply |
| images: | Override image names/tags without patching |
| namespace: | Override namespace on ALL namespaced resources |
| namePrefix: / nameSuffix: | Prepend/append to all resource names |
| commonLabels: | Add labels to metadata AND selectors (careful!) |
| commonAnnotations: | Add annotations to all resources |
| configMapGenerator: | Generate ConfigMaps with hash suffixes |
| secretGenerator: | Generate Secrets with hash suffixes |
| components: | Include reusable component fragments |
| replacements: | Copy a field value from one resource to another |

Takeaways

  • Kustomize patches real YAML; Helm templates it. Your base manifests are always valid, deployable Kubernetes resources — not templates with {{ }} placeholders.

  • Overlays solve the multi-environment problem. One base, thin per-environment patches. Change the base once, all environments inherit it.

  • The ConfigMap hash suffix is the killer feature. Change a config value, get an automatic rolling update. No manual restarts, no annotation hacks.

  • Patches are silent. A typo in a resource name means the patch does nothing — with zero feedback. Always build and inspect before applying.

  • Use Kustomize for your apps, Helm for third-party software. They complement each other. Most production teams use both.

  • GitOps controllers love Kustomize. ArgoCD and Flux natively understand kustomization.yaml. Your overlay directory IS your environment's desired state.