Kubernetes Control Plane as Reconciliation Engine¶
Mental model¶
Thermostat: you set the desired temperature, the system continuously adjusts to match it. Kubernetes is thermostats all the way down.
What it looks like¶
Kubernetes feels like magic. You declare what you want and it somehow happens.
What it really is¶
K8s is a declarative system built on control loops. You describe desired state. Controllers continuously converge actual state toward it.
Each control loop repeats three steps, forever:
1. Observe current state.
2. Compare it to desired state.
3. Take action to close the gap.
Then it starts over.
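The loop above can be sketched in a few lines. This is a minimal illustration, not real controller code: a dict of replica counts stands in for cluster state, and the names are invented for the example.

```python
# Minimal sketch of a reconciliation loop. A dict of replica counts
# stands in for real cluster state; all names are illustrative.

def reconcile(state):
    """One pass of the loop: observe, compare, act."""
    actual, desired = state["actual"], state["desired"]  # 1. observe
    gap = desired - actual                               # 2. compare
    if gap > 0:
        state["actual"] += 1  # 3. act: create one replica this pass
    elif gap < 0:
        state["actual"] -= 1  # 3. act: delete one replica this pass

state = {"desired": 3, "actual": 0}
while state["actual"] != state["desired"]:  # repeat until converged
    reconcile(state)
print(state)  # {'desired': 3, 'actual': 3}
```

Note that each pass closes only part of the gap; convergence is the accumulated effect of many small passes, which is why it takes time.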
Core components:
- API server: the front door. All reads and writes go through it. It is stateless itself and persists everything in etcd.
- etcd: distributed key-value store. Single source of truth for all cluster state.
- Scheduler: assigns unscheduled pods to nodes based on resource requests, constraints, and affinity rules.
- Controller manager: runs built-in controllers (Deployment, ReplicaSet, Node, Job, etc.). Each is an independent loop.
- kubelet: agent on each node. Watches API server for pod specs assigned to its node, ensures containers run.
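The kubelet's "watches for pod specs assigned to its node" behavior is just a filter over shared state. A toy sketch, with a plain list standing in for the API server watch and all names invented:

```python
# Sketch of the kubelet's node filter. In reality the kubelet uses a
# watch on the API server; here a plain list of dicts stands in.

def pods_for_node(all_pods, node_name):
    """A kubelet only acts on pods scheduled to its own node."""
    return [p for p in all_pods if p.get("node") == node_name]

pods = [
    {"name": "web-a", "node": "node-1"},
    {"name": "web-b", "node": "node-2"},
    {"name": "web-c", "node": None},  # not yet scheduled; no kubelet acts
]
print([p["name"] for p in pods_for_node(pods, "node-1")])  # ['web-a']
```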
Why it seems confusing¶
The system is eventually consistent. You say "3 replicas" and it might take seconds to converge. There is no single orchestrator making it happen — multiple independent loops cooperate through shared state in etcd.
What actually matters¶
- Desired state is declared (in manifests). Actual state converges over time.
- Every controller is a separate reconciliation loop watching its own resource type.
- etcd is the single source of truth. Lose etcd, lose the cluster state.
- The API server is the only component that talks to etcd. Everything else talks to the API server.
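The first bullet is concrete in any manifest. A minimal Deployment (the name and image tag here are illustrative) declares desired state; nothing in it is a command:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # desired state; the controller converges toward it
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
```

Changing `replicas` and re-applying the manifest is the correct way to change the cluster; the controllers do the rest.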
Common mistakes¶
- Expecting instant consistency. Convergence takes time, especially under load.
- Manually fixing state (e.g., killing pods) instead of fixing the desired state (the manifest). The controller will just undo your manual fix.
- Ignoring etcd health. A degraded etcd degrades everything.
Small examples¶
What happens when you delete a pod managed by a Deployment:
1. You: kubectl delete pod web-abc123
2. API server: removes pod from etcd
3. ReplicaSet: observes 2/3 replicas running (desired=3, actual=2)
4. ReplicaSet: creates a new pod spec via API server
5. Scheduler: assigns new pod to a node
6. kubelet: pulls image, starts container
7. Result: 3/3 replicas running again
No single component "decided" to fix it. Each loop did its one job.
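The walkthrough above can be simulated with three independent loops cooperating through shared state. This is a toy model, not Kubernetes code: a dict stands in for the API server and etcd, and every name is invented.

```python
# Sketch of independent control loops cooperating through shared state.
# A dict stands in for the API server / etcd; all names are illustrative.
import itertools

store = {
    "desired_replicas": 3,
    "pods": [{"name": f"web-{i}", "node": "node-1", "running": True}
             for i in range(3)],
}
counter = itertools.count(100)  # suffixes for newly created pods

def replicaset_loop(store):
    """Create pod specs until actual count matches desired."""
    while len(store["pods"]) < store["desired_replicas"]:
        store["pods"].append(
            {"name": f"web-{next(counter)}", "node": None, "running": False})

def scheduler_loop(store):
    """Assign any unscheduled pod to a node (placement policy elided)."""
    for pod in store["pods"]:
        if pod["node"] is None:
            pod["node"] = "node-2"

def kubelet_loop(store):
    """Start containers for pods scheduled to a node."""
    for pod in store["pods"]:
        if pod["node"] and not pod["running"]:
            pod["running"] = True

# You: delete one pod (the API server removes it from the store).
store["pods"] = [p for p in store["pods"] if p["name"] != "web-0"]

# Each loop does its one job; together they restore 3 running replicas.
for loop in (replicaset_loop, scheduler_loop, kubelet_loop):
    loop(store)

print(sum(p["running"] for p in store["pods"]))  # 3
```

None of the three functions knows about the others; they only read and write the shared store. That is the whole coordination model.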
One-line summary¶
Kubernetes is independent control loops reconciling actual state toward declared desired state, coordinated through the API server and etcd.