Quiz: etcd
4 questions
L0 (1 question)
1. What does etcd store in a Kubernetes cluster?
Answer:
etcd stores all Kubernetes cluster state: resource definitions (Pods, Deployments, Services), RBAC policies, Secrets, ConfigMaps, leases, and CRD instances. It does NOT store container images, logs, metrics, or persistent volume data. The API server is the only component that communicates with etcd directly.

L1 (1 question)
1. Why should you always run an odd number of etcd members (3 or 5), not an even number like 4?
Answer:
etcd uses Raft consensus, which requires a quorum (a majority of members) to commit writes. A 4-member cluster needs a quorum of 3 and tolerates 1 failure — the same tolerance as a 3-member cluster, but at higher cost and with more replication latency. Odd numbers maximize fault tolerance per member: 3 members tolerate 1 failure, 5 tolerate 2. *Common mistake:* assuming more members always means better fault tolerance; 4 members tolerate the same number of failures as 3.

L2 (1 question)
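The quorum arithmetic behind the previous answer can be checked with a short shell loop. This is an illustrative sketch of the formula (quorum = floor(n/2) + 1), not etcd tooling:

```shell
# For each cluster size n, compute quorum and how many failures are tolerated.
# quorum = floor(n/2) + 1; tolerated = n - quorum.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerates=$tolerated"
done
```

Note that n=4 prints the same tolerance (1) as n=3, which is exactly why even sizes buy nothing.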
1. How do you perform a backup and restore of etcd in a Kubernetes cluster?
Answer:
Backup: etcdctl snapshot save /backup/etcd-snapshot.db --endpoints=https://127.0.0.1:2379 --cacert=<ca.crt> --cert=<client.crt> --key=<client.key> (each TLS flag takes a certificate file path). Verify: etcdctl snapshot status /backup/etcd-snapshot.db. Restore: stop kube-apiserver, run etcdctl snapshot restore with --data-dir pointing to a new directory, update the etcd config to use the new data directory, then restart etcd and kube-apiserver.

L3 (1 question)
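The backup/restore sequence from the previous answer, expanded into concrete commands. The certificate paths shown are common kubeadm defaults and the directories are placeholders — confirm the actual paths for your cluster:

```shell
# 1. Take a snapshot over TLS (cert paths assume kubeadm defaults).
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# 2. Verify the snapshot's hash, revision, and key count.
ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-snapshot.db --write-out=table

# 3. Restore into a NEW data directory (never overwrite the live one),
#    then update the etcd manifest/config to point at it and restart
#    etcd and kube-apiserver.
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --data-dir=/var/lib/etcd-restored
```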
1. Your 3-member etcd cluster has one member with latency spikes causing leader election churn. How do you diagnose and resolve this without downtime?
Answer:
1. Check etcdctl endpoint status --write-out=table for leader changes and Raft term increments.
2. Check disk I/O on the slow member — etcd is latency-sensitive and requires low-latency storage (SSD).
3. Check network latency between members.
4. If disk: migrate the member to SSD storage.
5. If persistent: remove the slow member (etcdctl member remove), fix the underlying issue, add a new member (etcdctl member add). Quorum of 2 maintains availability during single-member replacement.
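The remove-then-replace procedure in steps 4–5 can be sketched as the following command sequence. The member ID, member name, and peer URL below are placeholders, not values from this quiz — substitute your environment's actual values:

```shell
# Find the ID of the slow member (the hex ID below is an example).
etcdctl member list --write-out=table

# Remove the slow member; the remaining 2 of 3 keep quorum.
etcdctl member remove 8e9e05c52164694d

# After fixing the disk/network issue, add a replacement member
# (name and peer URL are placeholders for your node).
etcdctl member add etcd-2 --peer-urls=https://10.0.0.12:2380

# Start etcd on the new node with --initial-cluster-state=existing,
# then confirm health and that leader elections have stopped churning.
etcdctl endpoint status --cluster --write-out=table
```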