Kubernetes Ecosystem — Trivia & Interesting Facts¶
Surprising, historical, and little-known facts about the Kubernetes ecosystem.
Kubernetes was born from Google's Borg system, which ran everything at Google¶
Google's internal Borg system, in use since the early 2000s, was launching billions of containers per week by 2014. When Google decided to open-source a container orchestrator that year, three engineers — Joe Beda, Brendan Burns, and Craig McLuckie — built Kubernetes as a from-scratch reimplementation of Borg's ideas. They deliberately chose Go over C++ (Borg's language) to make the project more accessible to open-source contributors.
The name "Kubernetes" means "helmsman" and was almost "Seven"¶
Kubernetes (Greek: kubernḗtēs) means helmsman or pilot. The project was internally codenamed "Project Seven" after Star Trek's Seven of Nine (a Borg character who became independent — a fitting metaphor for an open-source Borg). The seven-spoked wheel in the Kubernetes logo is a direct reference to this codename. The abbreviation "K8s" replaces the eight letters between the K and the s with the numeral 8.
CNCF hosts over 180 projects and Kubernetes is still the star¶
The Cloud Native Computing Foundation, formed in 2015 with Kubernetes as its seed project, had grown to host over 180 projects by 2024 — including Prometheus, Envoy, Istio, Argo, Flux, Cilium, and many more. Despite this sprawl, Kubernetes remains the gravitational center. A 2023 CNCF survey found that 84% of organizations were using or evaluating Kubernetes, up from 78% in 2022.
The CNCF landscape has over 2,000 entries and requires a magnifying glass¶
The CNCF Cloud Native Landscape (landscape.cncf.io) maps the entire cloud-native ecosystem. It contains over 2,000 projects and products across categories like runtime, orchestration, observability, and security. The graphic is so dense that it became a meme — DevOps engineers joke about needing a poster-sized printout and a magnifying glass to read it. Total funding across all landscape entries exceeds $50 billion.
etcd is the single most critical component in a Kubernetes cluster¶
etcd, a distributed key-value store created by CoreOS in 2013, stores all Kubernetes cluster state. If etcd goes down, the control plane is effectively dead — no new pods can be scheduled, no configuration changes can be made, and no API calls succeed. Despite this criticality, many early Kubernetes deployments ran etcd on the same node as the control plane without dedicated storage, leading to spectacular cluster failures.
The Operator pattern was invented by CoreOS and changed everything¶
Brandon Philips and the CoreOS team introduced the Operator pattern in 2016 to encode operational knowledge into Kubernetes controllers. The first operator managed etcd clusters — automatically handling backup, scaling, and recovery. By 2024, OperatorHub.io listed over 300 operators for databases (PostgreSQL, MySQL, MongoDB), message queues (Kafka, RabbitMQ), and infrastructure services. The pattern turned "Day 2 operations" from runbooks into code.
Kubernetes has a release every four months like clockwork¶
In 2021, Kubernetes moved from four releases per year to three, each on a roughly 15-week cycle. Each release gets 14 months of patch support (12 months of standard support plus a 2-month maintenance window). The release process involves over 40 volunteers organized into teams (release lead, docs, communications, CI signal) and is one of the most structured release processes in open source. Every release is named (e.g., 1.28 "Planternetes," 1.29 "Mandala").
Service mesh adoption plateaued despite massive investment¶
Istio, Linkerd, and Consul Connect collectively received hundreds of millions in venture capital and engineering investment. Yet by 2023, CNCF surveys showed that only about 30% of Kubernetes users had adopted a service mesh, and many adopters cited complexity as the primary challenge. The "sidecar per pod" model adds significant resource overhead — Istio sidecars consume 50-100 MB of memory each, multiplied across thousands of pods.
Gateway API is replacing Ingress after years of fragmentation¶
The Kubernetes Ingress resource, introduced in 2015, was deliberately minimal — it supported only basic HTTP routing. Every ingress controller (NGINX, Traefik, HAProxy, AWS ALB) added non-standard annotations for advanced features, creating significant vendor lock-in. Gateway API, developed under SIG-Network and versioned independently of Kubernetes itself, reached GA for its core resources with the v1.0 release in October 2023; it provides a standardized, expressive API for routing HTTP, gRPC, and TCP traffic without annotations.
Kubernetes is the largest open-source project by contributor count after Linux¶
With over 74,000 contributors across its GitHub organizations, Kubernetes is the second-largest open-source project by contributor count, behind only the Linux kernel. The main kubernetes/kubernetes repository has over 3,400 unique contributors. The project receives contributions from hundreds of companies, though Google, Red Hat, VMware, and Microsoft have historically dominated commit volume.
The SIG system governs Kubernetes development by domain¶
Kubernetes development is organized into roughly two dozen Special Interest Groups (SIGs), each owning a domain: SIG-Network, SIG-Storage, SIG-Node, SIG-Auth, etc. Any significant change requires a Kubernetes Enhancement Proposal (KEP) that must be approved by the relevant SIG. This governance model allows Kubernetes to evolve while maintaining stability — features typically spend 3-6 releases progressing through alpha, beta, and GA stages.
Crossplane and Cluster API turned Kubernetes into a universal control plane¶
Crossplane (2018) lets you manage any cloud resource (databases, buckets, VPCs) using Kubernetes custom resources. Cluster API lets you manage Kubernetes clusters themselves as Kubernetes resources. Together, they enable a pattern where one "management cluster" provisions and controls infrastructure, cloud resources, and application clusters — turning the Kubernetes API into a universal control plane for everything, not just containers.
The first Operator ever built managed etcd¶
The etcd Operator, released alongside the original Operator blog post in 2016, automated etcd cluster creation, scaling, backup, and recovery. It was chosen as the first example because etcd's operational complexity (peer discovery, quorum maintenance, snapshot management) was well-understood and painful to manage manually.
Operator SDK supports three languages — and almost nobody uses two of them¶
The Operator SDK supports building operators in Go, Ansible, and Helm. In practice, over 90% of production operators are written in Go using the controller-runtime library. Ansible-based operators are used primarily by Red Hat's internal teams, and Helm-based operators are mostly a thin wrapper that installs a Helm chart when a CR is created.
The reconciliation loop is intentionally level-triggered, not edge-triggered¶
Operators react to the desired state of a resource (level-triggered) rather than to change events (edge-triggered). This means the controller does not need to track what changed — it compares desired state to actual state every time and takes corrective action. If the controller crashes and restarts, it simply re-reconciles all resources.
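The level-triggered idea can be sketched in plain Go. The types and function below are illustrative stand-ins, not real controller-runtime APIs: the point is that the reconciler derives its action purely from a comparison of desired and observed state, so it never needs a log of missed events.

```go
package main

import "fmt"

// DesiredState and ActualState model a hypothetical replicated service.
// These names are made up for illustration, not controller-runtime types.
type DesiredState struct{ Replicas int }
type ActualState struct{ Replicas int }

// Action is the corrective step the reconciler decides on.
type Action string

// reconcile is level-triggered: it looks only at the current desired vs
// actual state, never at which event woke it up. Re-running it after a
// crash is therefore always safe.
func reconcile(desired DesiredState, actual ActualState) Action {
	switch {
	case actual.Replicas < desired.Replicas:
		return Action(fmt.Sprintf("scale up by %d", desired.Replicas-actual.Replicas))
	case actual.Replicas > desired.Replicas:
		return Action(fmt.Sprintf("scale down by %d", actual.Replicas-desired.Replicas))
	default:
		return "no-op"
	}
}

func main() {
	// A restarted controller simply re-reconciles everything it watches;
	// resources already in the desired state produce a no-op.
	fmt.Println(reconcile(DesiredState{Replicas: 3}, ActualState{Replicas: 1}))
	fmt.Println(reconcile(DesiredState{Replicas: 2}, ActualState{Replicas: 2}))
}
```

An edge-triggered design would instead have to replay every missed create/update/delete event after a restart, which is exactly the bookkeeping this model avoids.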
Custom Resource Definitions have a 1.5 MB size limit per object¶
CRDs store their instances in etcd, whose default maximum request size is 1.5 MiB. This means a custom resource's JSON representation must fit within roughly 1.5 MB. For operators that store complex state in the CR status, this limit can become a real constraint. Common workarounds are sharding state across several objects or keeping large state in external storage (ConfigMaps are subject to a similar limit, so they only help if the state is split up).
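A rough way to reason about the limit is to measure the object's JSON encoding, since that is approximately what the apiserver persists. A minimal sketch, assuming etcd's default 1.5 MiB cap (the function and constant names are invented for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// etcdValueLimit approximates etcd's default 1.5 MiB request size cap.
// Illustrative only: real writes also carry key and metadata overhead.
const etcdValueLimit = 1536 * 1024

// fitsInEtcd marshals an object to JSON and checks the encoded size
// against the limit, returning the size for diagnostics.
func fitsInEtcd(obj interface{}) (bool, int, error) {
	b, err := json.Marshal(obj)
	if err != nil {
		return false, 0, err
	}
	return len(b) <= etcdValueLimit, len(b), nil
}

func main() {
	// A status block that accumulates unbounded entries (events,
	// per-node conditions, etc.) is the typical way a CR outgrows etcd.
	status := map[string]interface{}{"conditions": []string{}}
	ok, size, _ := fitsInEtcd(status)
	fmt.Println(ok, size)
}
```

An operator can run a check like this before updating a CR status and trim or offload history once the object approaches the cap.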
Finalizers are both essential and the #1 cause of stuck resources¶
Operators use finalizers to perform cleanup before a resource is deleted. If the operator is not running (crashed, uninstalled, or watching the wrong namespace), the finalizer is never removed and the resource hangs in the Terminating state indefinitely. The workaround is manually editing the resource to remove the finalizer.
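The manual workaround amounts to editing the resource's finalizer list down to empty, since deletion only completes once the list is empty. A toy Go version of that list surgery (the finalizer name is hypothetical):

```go
package main

import "fmt"

// removeFinalizer returns the finalizer list with one entry removed,
// mimicking the manual edit used to unstick a Terminating resource
// whose operator is gone.
func removeFinalizer(finalizers []string, name string) []string {
	out := make([]string, 0, len(finalizers))
	for _, f := range finalizers {
		if f != name {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	// The resource stays in Terminating until this slice is empty.
	stuck := []string{"example.com/cleanup"} // hypothetical finalizer
	fmt.Println(removeFinalizer(stuck, "example.com/cleanup"))
}
```

In a live cluster the same effect comes from patching `metadata.finalizers` on the stuck object; doing so skips the operator's cleanup, so any external resources the finalizer guarded must then be cleaned up by hand.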
Kubebuilder and Operator SDK merged their codebases¶
Kubebuilder (started by SIG-API-Machinery) and Operator SDK (started by CoreOS/Red Hat) were competing frameworks that used the same underlying controller-runtime library. In 2020, they agreed to align: Operator SDK adopted Kubebuilder's project scaffolding as its base and added OLM integration on top.
Stateful workloads drove operator adoption more than anything else¶
Databases (PostgreSQL, MySQL, MongoDB, Cassandra), message queues (Kafka, RabbitMQ), and search engines (Elasticsearch) are the most common operator targets because they have the most complex operational requirements. The CloudNativePG and Strimzi operators are widely considered best-in-class examples.
Leader election prevents duplicate reconciliation in HA operator deployments¶
When running multiple replicas of an operator for high availability, only one instance should actively reconcile resources. Controller-runtime implements leader election using a Kubernetes Lease object. Without leader election, two controllers reconciling simultaneously can create conflicting state.