K8s Concept Chain — Footguns

Mistakes people make at each layer of the concept chain.


1. Running bare pods in production

Creating pods directly without a Deployment. The kubelet will restart a crashed container, but if the node dies or the pod is evicted, nothing reschedules it — the pod is simply gone. Nobody notices until users complain.

Fix: always use a Deployment (or StatefulSet/DaemonSet). Bare pods are for one-off debugging only.
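A minimal Deployment sketch — the name, labels, and image here are placeholders, not anything this document prescribes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27  # example image
```

If a pod from this Deployment dies for any reason, the Deployment's ReplicaSet creates a replacement, on another node if necessary.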


2. Hardcoding pod IPs

Discovering a pod IP with kubectl get pod -o wide and putting it in config. The pod restarts, gets a new IP, and everything breaks.

Fix: use a Service. Reference the DNS name, never the pod IP.
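A sketch of a Service fronting the pods (assuming pods labeled app: web, as a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # routes to any pod carrying this label
  ports:
    - port: 80        # port clients connect to
      targetPort: 8080  # port the container listens on
```

Clients in the same namespace reach it as web; elsewhere, as web.default.svc.cluster.local. The name is stable no matter how often the pods behind it churn.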


3. Creating one LoadBalancer per service

Every type: LoadBalancer service provisions a separate cloud load balancer. Ten services means ten load balancers at roughly $18-20/month each, most sitting idle.

Fix: use Ingress with a single Ingress Controller. One load balancer routes to all services.
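A sketch of one Ingress fanning out to multiple services by hostname (hostnames and service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main
spec:
  ingressClassName: nginx      # must match an installed controller's class
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api      # placeholder service
                port:
                  number: 80
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web      # placeholder service
                port:
                  number: 80
```

Both hostnames resolve to the controller's single load balancer; routing happens inside the cluster.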


4. Creating Ingress resources without an Ingress Controller

You apply a perfect Ingress manifest. Nothing happens. No errors, no warnings. Ingress resources are inert without a controller watching for them.

Fix: install an Ingress Controller (ingress-nginx, Traefik, etc.) before creating Ingress resources. Check: kubectl get pods -n ingress-nginx.
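Controllers announce themselves through an IngressClass, and an Ingress opts in via ingressClassName. A sketch of the class ingress-nginx typically registers (installers usually create this for you):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx  # identifies which controller handles this class
```

If your Ingress omits ingressClassName (and no class is marked default), even an installed controller may ignore it — another way "nothing happens".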


5. Storing passwords in ConfigMaps

ConfigMaps have no access controls beyond namespace RBAC. Anyone who can kubectl get configmap sees your database password in plaintext.

Fix: use Secrets for sensitive data. For real security, integrate with an external secrets manager (Vault, AWS Secrets Manager).
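A Secret sketch — the name and key are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # placeholder name
type: Opaque
stringData:              # stringData accepts plaintext; the API stores it base64-encoded
  DB_PASSWORD: change-me
```

Pods consume it via env.valueFrom.secretKeyRef or a volume mount, and RBAC can restrict get/list on Secrets separately from ConfigMaps.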


6. Thinking Secrets are encrypted

Secrets are base64-encoded, not encrypted. echo cGFzc3dvcmQ= | base64 -d reveals the value instantly. Without encryption at rest enabled at the API server level, etcd stores them in cleartext.

Fix: enable etcd encryption at rest. Use external secret operators for production. Restrict Secret access with RBAC.
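Encryption at rest is configured with an EncryptionConfiguration file passed to kube-apiserver via --encryption-provider-config. A sketch (the key name is arbitrary; the key material is elided):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>  # e.g. from: head -c 32 /dev/urandom | base64
      - identity: {}   # fallback so pre-existing unencrypted entries remain readable
```

New and updated Secrets are then encrypted in etcd; existing ones must be rewritten (a blanket kubectl get secrets -A followed by re-apply, or the documented migration) to pick up encryption.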


7. No resource requests or limits

Pods without resource specs get QoS class BestEffort — first to be evicted under memory pressure. One pod with a memory leak consumes 8GB and OOM-kills neighbors.

Fix: always set requests (for scheduling) and limits (for protection). Start with requests = limits (Guaranteed QoS) and relax only when you understand the workload's actual resource profile.
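A resources block with requests equal to limits, which yields Guaranteed QoS — values here are illustrative starting points, not recommendations:

```yaml
    spec:
      containers:
        - name: app
          image: example/app:1.0   # placeholder
          resources:
            requests:
              cpu: 500m            # used by the scheduler to place the pod
              memory: 256Mi
            limits:
              cpu: 500m            # equal to requests -> Guaranteed QoS
              memory: 256Mi
```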


8. Setting CPU limits too tight

CPU limits cause throttling — the kernel restricts your container's CPU time even when the node has spare capacity. Latency spikes appear random.

Fix: set CPU requests, and consider leaving CPU limits unset (or generous) — many teams do. Memory limits should always be set, because an OOM kill is catastrophic, while CPU throttling only degrades latency.
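A sketch of that pattern — CPU request but no CPU limit, memory capped (values illustrative):

```yaml
          resources:
            requests:
              cpu: 250m        # guarantees a scheduling share
              memory: 256Mi
            limits:
              memory: 256Mi    # memory capped; no cpu limit, so no throttling
```

Without a CPU limit the container can burst into the node's spare CPU; the request still guarantees its fair share under contention.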


9. HPA and manual replicas fighting

Setting replicas: 3 in your Deployment manifest while HPA manages the same Deployment. Every kubectl apply resets the count, overriding HPA's decisions.

Fix: when using HPA, remove the replicas field from the Deployment manifest entirely (or use --server-side apply).
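An HPA sketch targeting a Deployment whose manifest omits replicas (names and thresholds are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # this Deployment's manifest should have no replicas field
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

With replicas absent from the Deployment manifest, kubectl apply no longer resets the count and the HPA's decision is the only writer.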


10. No PodDisruptionBudget with autoscaling

Karpenter or Cluster Autoscaler removes a node during a scale-down. All pods on that node are evicted simultaneously. If they're all replicas of the same service, you get a brief outage.

Fix: set a PodDisruptionBudget (minAvailable: 1 or maxUnavailable: 1) so the autoscaler respects availability during node drains.
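A PodDisruptionBudget sketch (assuming replicas labeled app: web, as a placeholder):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 1        # voluntary evictions may not drop the service below 1 ready pod
  selector:
    matchLabels:
      app: web
```

Node drains by Karpenter or Cluster Autoscaler go through the eviction API, which honors this budget — so at least one replica stays up while the rest are rescheduled.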