
K8S Concept Chain


24 cards — 🟢 9 easy | 🟡 10 medium | 🔴 5 hard

🟢 Easy (9)

1. What happens when you run a container as a bare Pod in Kubernetes and it crashes?

Show answer It stays dead. Nobody restarts it. A bare Pod has no controller managing its lifecycle.

Remember: Never run bare Pods in production. Always use a Deployment (or StatefulSet/DaemonSet).

2. What problem does a Deployment solve that bare Pods cannot?

Show answer A Deployment ensures a desired number of replicas are always running. If a Pod dies, the Deployment controller creates a replacement automatically.

Remember: Deployment → ReplicaSet → Pods. The Deployment manages ReplicaSets for rolling updates.
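As a sketch, a minimal Deployment (names and image are illustrative) that keeps three replicas alive:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # desired state: the controller replaces any Pod that dies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web            # must match the selector above
    spec:
      containers:
        - name: web
          image: nginx:1.27 # example image
```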

3. Why can't you hardcode Pod IPs to let services communicate?

Show answer Pods get a new IP every time they restart or reschedule. At scale, IPs change constantly. A Service provides one stable ClusterIP that routes to healthy Pods using label selectors.

Remember: Services use labels, not IPs. Pods die and come back — the Service does not care.
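A minimal Service sketch (names are illustrative) showing the label-selector mechanism — no Pod IPs appear anywhere:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # routes to any healthy Pod carrying this label, whatever its IP
  ports:
    - port: 80         # stable port on the ClusterIP
      targetPort: 8080 # container port on the matched Pods
```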

4. What are the four Kubernetes Service types, from most to least restrictive?

Show answer 1. ClusterIP — internal only (default)
2. NodePort — static port on every node (30000-32767)
3. LoadBalancer — provisions an external cloud LB
4. ExternalName — DNS CNAME redirect, no proxying

Remember: ClusterIP is the default. NodePort builds on ClusterIP and LoadBalancer builds on NodePort; ExternalName is the exception: pure DNS, no proxying.
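The type is a single field on the Service spec; a sketch with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort        # or LoadBalancer / ExternalName; omit for ClusterIP (default)
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # optional; must fall in the 30000-32767 range
```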

5. What problem does Ingress solve that Services alone cannot?

Show answer With Services, each externally-exposed service needs its own LoadBalancer (one cloud LB each, at ~$18-20/month). Ingress provides L7 routing (hostname + path) so one load balancer can serve many services.

Remember: Ingress = one LB, many services, smart routing by host/path.
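A sketch of host + path routing (hostnames and Service names are hypothetical): one Ingress, one LB, two backends:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api            # L7 routing: /api goes to one Service...
            pathType: Prefix
            backend:
              service:
                name: api-svc     # hypothetical backend Service
                port:
                  number: 80
          - path: /               # ...everything else to another
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80
```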

6. Why do Ingress resources do nothing by themselves?

Show answer Ingress is just a set of routing rules. An Ingress Controller (nginx, Traefik, AWS LB Controller) must be deployed to watch Ingress resources and actually configure traffic routing.

Gotcha: No Ingress Controller deployed = Ingress resources are completely inert. No errors, no warnings — just silence.

7. What problem does a ConfigMap solve?

Show answer Hardcoding config inside the container image means rebuilding for every config change and risking wrong values per environment. A ConfigMap externalizes config so the same image runs in dev, staging, and prod with different settings injected at runtime.
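A sketch of the externalized-config pattern (keys and names are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config       # illustrative name
data:
  LOG_LEVEL: "info"      # per-environment values live here, not in the image
  FEATURE_X: "false"
# In the Pod spec, inject every key as an environment variable:
#   envFrom:
#     - configMapRef:
#         name: app-config
```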

8. Why should you use a Secret instead of a ConfigMap for passwords?

Show answer ConfigMaps have no special access controls — anyone who can read ConfigMaps in the namespace sees the data. Secrets have separate RBAC controls and are meant for sensitive data like passwords, tokens, and TLS certs.

Gotcha: Secrets are base64-encoded, NOT encrypted. Enable etcd encryption at rest for real protection.
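A minimal Secret sketch (name and value are placeholders); `stringData` accepts plain text and the API server base64-encodes it on write:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials     # illustrative name
type: Opaque
stringData:
  DB_PASSWORD: "change-me" # placeholder; stored base64-encoded, NOT encrypted
```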

9. What is the Kubernetes concept chain?

Show answer Each K8s abstraction exists because the previous layer has an unresolved problem:
Pod crashes → Deployment
IPs change → Service
Too many LBs → Ingress
Rules need engine → Ingress Controller
Config in image → ConfigMap
Passwords exposed → Secret
Manual scaling → HPA
Nodes full → Karpenter
Rogue resources → Requests & Limits

Remember: Every K8s concept is a solution to a specific problem. Learn the problems, and the solutions make sense.

🟡 Medium (10)

1. What does HPA do and what problem remains after you enable it?

Show answer HPA (Horizontal Pod Autoscaler) watches metrics (CPU, memory, custom) and adjusts Deployment replica count automatically. But HPA only creates Pods — if nodes are full, new Pods sit in Pending state.

Remember: HPA scales Pods. Karpenter/Cluster Autoscaler scales nodes. You often need both.
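A sketch of an autoscaling/v2 HPA targeting a hypothetical Deployment named `web`:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # illustrative target
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out when average CPU exceeds 70% of requests
```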

2. How does Karpenter differ from Cluster Autoscaler?

Show answer Cluster Autoscaler adjusts node group sizes (ASGs/MIGs) — slower, constrained to predefined instance types. Karpenter provisions right-sized nodes directly based on pending Pod requirements — faster, more flexible.

Remember: Karpenter = fast, right-sized nodes. Cluster Autoscaler = adjusts existing node groups.

3. What is the difference between resource requests and limits?

Show answer Requests = minimum guaranteed resources for scheduling. The scheduler places the Pod on a node with enough unrequested capacity.
Limits = maximum a container can consume. CPU limits cause throttling; memory limits cause OOMKill.

Remember: Requests are for the scheduler. Limits are for enforcement.
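A container-spec fragment (names and values illustrative) showing both knobs side by side:

```yaml
containers:
  - name: app              # fragment of a Pod spec; names illustrative
    image: example/app:1.0
    resources:
      requests:
        cpu: "250m"        # scheduler reserves this much on a node
        memory: "256Mi"
      limits:
        cpu: "500m"        # throttled (CFS quota) above this
        memory: "512Mi"    # OOMKilled above this
```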

4. What are the three Kubernetes QoS classes and how are they determined?

Show answer Guaranteed: requests == limits for every container (evicted last)
Burstable: at least one request or limit set, but not meeting Guaranteed (middle)
BestEffort: no requests or limits at all (evicted first)

Remember: QoS class determines eviction order under memory pressure. BestEffort dies first.

5. What is the relationship between Deployment, ReplicaSet, and Pod?

Show answer Deployment manages ReplicaSets. ReplicaSet manages Pods. During a rolling update, the Deployment creates a new ReplicaSet (with the updated spec) and scales it up while scaling the old ReplicaSet down.

Remember: Deployment → ReplicaSet → Pods. Never edit ReplicaSets directly.

6. How does a Kubernetes Service find the right Pods to route traffic to?

Show answer The Service uses label selectors to match Pods. Matching Pods are added to the Service's Endpoints list. kube-proxy (or the CNI) programs iptables/IPVS rules to route Service IP traffic to endpoint Pod IPs.

Remember: No label match = no endpoints = Service returns connection refused.

7. What happens to running Pods when you update a ConfigMap?

Show answer Nothing happens automatically. Pods using env var injection keep the old values until restarted. Pods using volume mounts see updates after the kubelet sync period (default ~60s), but the app must re-read the files.

Gotcha: ConfigMap updates don't trigger Pod restarts. Use `kubectl rollout restart` or a checksum annotation.
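One common checksum-annotation pattern, sketched here assuming a Helm chart (the template path is hypothetical): a config change alters the hash, which changes the Pod template and triggers a rolling restart.

```yaml
# Fragment of a Deployment's Pod template in a Helm chart
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```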

8. Are Kubernetes Secrets encrypted at rest by default?

Show answer No. Secrets are base64-encoded (not encrypted) and stored in etcd in cleartext by default. You must explicitly configure encryption at rest via EncryptionConfiguration at the API server level.

Remember: base64 != encryption. Anyone with etcd access sees all Secrets without encryption at rest.
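A sketch of an EncryptionConfiguration (the key is a placeholder you must generate), passed to kube-apiserver via `--encryption-provider-config`:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>  # placeholder; generate your own
      - identity: {}   # fallback so existing cleartext data stays readable
```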

9. What happens when you set replicas in a Deployment manifest and also use HPA?

Show answer Every time you `kubectl apply` the manifest, it resets the replica count to the hardcoded value, overriding HPA's scaling decisions.

Fix: remove the `replicas` field from the Deployment manifest when using HPA, or use server-side apply.

10. What DNS name does Kubernetes create for a ClusterIP Service?

Show answer <service-name>.<namespace>.svc.cluster.local

Within the same namespace, you can use just the service name. Cross-namespace, use <service-name>.<namespace> or the full FQDN.

Remember: CoreDNS runs in kube-system and auto-discovers Services. Pod /etc/resolv.conf points to it.

🔴 Hard (5)

1. Why do some teams set CPU requests but no CPU limits?

Show answer CPU limits cause kernel-level throttling (CFS quota) even when the node has spare CPU capacity. This creates unpredictable latency spikes. Memory limits should always be set (OOM is catastrophic), but CPU throttling is annoying rather than fatal.

Remember: CPU limit = throttle. Memory limit = OOMKill. Different failure modes require different strategies.
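The pattern, as a resources fragment (values illustrative): CPU request but no CPU limit, memory capped on both sides:

```yaml
resources:
  requests:
    cpu: "500m"       # scheduling guarantee; no cpu limit, so the container can
                      # burst into idle node capacity without CFS throttling
    memory: "512Mi"
  limits:
    memory: "512Mi"   # memory limit kept: OOM is catastrophic, so cap it
```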

2. Why is a PodDisruptionBudget important when using node autoscaling?

Show answer Without a PDB, Karpenter or Cluster Autoscaler can evict all replicas of a service from a node simultaneously during scale-down. A PDB (minAvailable or maxUnavailable) ensures the autoscaler respects application availability during node drains.

Remember: PDB protects against voluntary disruptions — node drains, autoscaler evictions, maintenance.
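A minimal PDB sketch (name and labels illustrative) that keeps at least two replicas up through any voluntary disruption:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb        # illustrative name
spec:
  minAvailable: 2      # drains/evictions may never take the app below 2 ready Pods
  selector:
    matchLabels:
      app: web
```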

3. You applied an Ingress resource but traffic returns 404. What are the three most likely causes?

Show answer 1. No Ingress Controller is deployed in the cluster (Ingress resources are inert)
2. The backend service name or port in the Ingress spec is wrong
3. The backend Service has no endpoints (selector labels don't match Pod labels)

Debug: kubectl get pods -n ingress-nginx, kubectl describe ingress NAME, kubectl get endpoints BACKEND_SVC

4. Pods are stuck in Pending state. Walk through the diagnostic steps.

Show answer 1. kubectl describe pod NAME — check Events for "Insufficient cpu" or "Insufficient memory"
2. kubectl describe nodes — check Allocated resources vs Allocatable
3. Check if node autoscaler (Karpenter/CA) is running and has capacity to provision
4. Check if PersistentVolumeClaims are pending (storage-bound scheduling)
5. Check taints/tolerations and nodeSelector constraints

Remember: Pending = scheduler can't place the Pod. The Events section tells you why.

5. A single Pod with no resource limits consumes all memory on a node. What happens to other Pods?

Show answer The kernel OOM killer fires. It targets processes by OOM score, which Kubernetes adjusts per QoS class: BestEffort Pods (no requests/limits) are killed first, then Burstable, then Guaranteed. The rogue Pod is BestEffort only if it also set no requests; it may or may not be the one killed, depending on OOM scoring.

Fix: always set memory limits. Use LimitRange to enforce defaults namespace-wide.
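A LimitRange sketch (name and namespace illustrative) that backfills memory defaults for any container that omits them:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-defaults     # illustrative name
  namespace: team-a      # illustrative namespace
spec:
  limits:
    - type: Container
      default:           # applied as limits when a container sets none
        memory: "512Mi"
      defaultRequest:    # applied as requests when a container sets none
        memory: "256Mi"
```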