Multi-Tenancy — Trivia & Interesting Facts

Surprising, historical, and little-known facts about Kubernetes multi-tenancy.


Kubernetes was not designed for multi-tenancy — it was bolted on

Kubernetes was originally built for a single-organization, high-trust model where all workloads belong to the same entity. Multi-tenancy was retrofitted through namespaces, RBAC, NetworkPolicies, ResourceQuotas, and admission controllers. As a result, the isolation boundaries are additive and easy to misconfigure — missing any one layer (network, resource, RBAC, or pod security) creates a tenant escape path.


Namespaces provide naming isolation, not security isolation

A namespace is a scope for names and a target for RBAC bindings and ResourceQuotas. It does not, by itself, prevent network access between namespaces, restrict resource consumption, or isolate the kernel. Teams that create a namespace per tenant and call it "isolated" are building on a foundation that provides none of the actual isolation properties they expect.


The "noisy neighbor" problem is the #1 multi-tenant complaint

Without proper ResourceQuotas and LimitRanges, one tenant's workload can consume all CPU, memory, or I/O on a node, degrading every other tenant's performance. CPU throttling via CFS bandwidth control helps but introduces latency spikes. Memory is not throttled: a container that exceeds its memory limit is OOM-killed. This asymmetry between CPU (throttled) and memory (killed) surprises teams designing multi-tenant resource policies.
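A minimal sketch of per-tenant caps — the namespace name `tenant-a` and all numbers are illustrative, not recommendations:

```yaml
# Hard caps on aggregate consumption for everything in the tenant's namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
---
# Defaults and per-container bounds, so pods that omit requests/limits
# still get values and count against the quota.
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-a-limits
  namespace: tenant-a
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi
```

Note that a ResourceQuota with `requests.cpu`/`limits.cpu` in `spec.hard` rejects pods that do not specify those values, which is why pairing it with a LimitRange that supplies defaults is the usual pattern.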


NetworkPolicies are required for tenant network isolation — and most CNIs support them

By default, every pod in a Kubernetes cluster can communicate with every other pod. NetworkPolicies are the only built-in mechanism for network-level tenant isolation, but enforcement is delegated to the CNI plugin: on a CNI without NetworkPolicy support, the objects are accepted by the API server and silently ignored. A "default deny all ingress" policy per tenant namespace is the standard starting point. The gaps remain real, though: egress rules match IP blocks rather than DNS names, which makes controlling access to external services awkward; DNS traffic must be explicitly re-allowed under a default-deny egress policy; and traffic to the Kubernetes API server must be separately permitted.
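The standard starting point looks like this — a deny-all policy plus the DNS re-allow it makes necessary (`tenant-a` is an illustrative namespace; the `kubernetes.io/metadata.name` label is set automatically on namespaces since Kubernetes 1.21):

```yaml
# Deny all ingress and egress for every pod in the tenant namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
---
# Default-deny egress also blocks DNS, so explicitly re-allow
# lookups against kube-system (where CoreDNS usually runs).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```

NetworkPolicies in the same namespace are additive, so further allow rules (intra-namespace traffic, specific ingress paths) can be layered on without touching the deny-all policy.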


Virtual clusters run complete Kubernetes control planes inside a host cluster

vCluster (by Loft Labs) and similar projects create lightweight, fully functional Kubernetes clusters as pods in a host cluster. Each virtual cluster has its own API server, controller manager, and etcd (or SQLite), providing complete API isolation — tenants get their own cluster without the infrastructure cost of physical clusters. The virtual cluster's workloads are actually scheduled as pods in the host cluster, mapped through a syncer component.
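In practice the workflow looks roughly like this, using the `vcluster` CLI (command names are current as of recent releases; a sketch, not a full setup guide):

```shell
# Create a virtual cluster whose control plane runs as pods
# in the host cluster's "tenant-a" namespace.
vcluster create tenant-a --namespace tenant-a

# Point kubectl at the virtual cluster's own API server.
vcluster connect tenant-a --namespace tenant-a

# From here, the tenant sees only the virtual cluster's objects --
# its own namespaces, CRDs, and RBAC, not the host cluster's.
kubectl get namespaces
```

From the host cluster's perspective, the tenant's workloads appear as synced pods in the `tenant-a` namespace, which is what the syncer component maintains.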


Hierarchical namespaces tried to solve the namespace sprawl problem

Hierarchical Namespace Controller (HNC), a Kubernetes SIG project, allows parent-child namespace relationships where RBAC, NetworkPolicies, and ResourceQuotas propagate from parent to child namespaces. This models organizational hierarchies (team -> subteam -> project) without duplicating policy. HNC reached v1.0 in 2022 but adoption remains limited because many teams use GitOps to manage namespace policies, making inheritance less compelling.
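With the HNC kubectl plugin (`kubectl-hns`) installed, the hierarchy is managed like this — namespace names are illustrative:

```shell
# Create a subnamespace under "team-a"; RBAC bindings, NetworkPolicies,
# and other propagated objects in team-a flow down into it.
kubectl hns create team-a-prod -n team-a

# Display the resulting tree.
kubectl hns tree team-a
```

Deleting the parent is protected: HNC will not let a parent namespace be removed while subnamespaces still exist beneath it.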


Capsule and Kiosk provide tenant isolation without virtual clusters

Capsule (by Clastix) and Kiosk (by Loft Labs, since archived in favor of vCluster) are multi-tenancy controllers that enforce tenant boundaries through admission webhooks. They prevent tenants from seeing each other's namespaces, enforce resource quotas at the tenant level (across multiple namespaces), and restrict which StorageClasses, IngressClasses, and node pools each tenant can use. These tools occupy the middle ground between "just namespaces" and "full virtual clusters."
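A Capsule tenant is a cluster-scoped CRD; a sketch of the shape (field names follow Capsule's v1beta2 API as documented — verify against the version you run, and the owner name is illustrative):

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: team-a
spec:
  # The users/groups allowed to operate inside this tenant's namespaces.
  owners:
  - name: alice
    kind: User
  # Cap how many namespaces the tenant may create.
  namespaceOptions:
    quota: 5
  # Restrict which StorageClasses the tenant's PVCs may reference.
  storageClasses:
    allowed: ["standard"]
```

IngressClass and node-selector restrictions are expressed with analogous allow-lists on the same Tenant resource. The tenant owner then creates namespaces themselves, and Capsule's webhooks label and constrain them automatically.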


Pod Security Standards replaced PodSecurityPolicies after years of frustration

PodSecurityPolicies (PSP) were the original mechanism for preventing tenants from running privileged containers, mounting hostPaths, or escalating privileges. PSPs were notoriously difficult to configure correctly, and misconfiguration could block all pod creation in a namespace. Kubernetes deprecated PSP in 1.21 and removed it in 1.25, replacing it with Pod Security Standards (PSS) enforced via the Pod Security Admission controller — a simpler, three-level model (privileged, baseline, restricted).
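PSS is applied per namespace via well-known labels read by the built-in Pod Security Admission controller (stable since Kubernetes 1.25); a typical restricted tenant namespace:

```yaml
# Enforce the "restricted" profile for every pod in this namespace,
# and also warn on kubectl apply and record audit annotations at
# the same level. "tenant-a" is an illustrative name.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

A common rollout pattern is to set `warn` and `audit` to `restricted` first, watch for violations, and only then tighten `enforce` — avoiding the PSP failure mode of silently blocking all pod creation.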


Fair resource sharing across tenants requires priority and preemption

ResourceQuotas set hard caps per namespace, but they do not ensure fair sharing of unallocated resources. When the cluster is not fully utilized, all tenants can burst. When it fills up, the tenant who created pods first gets resources. PriorityClasses and preemption rules are needed to define which tenants' workloads survive under pressure. Without them, a first-come-first-served model silently favors tenants who deploy first.
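Tenant tiers can be expressed as PriorityClasses that pods reference via `spec.priorityClassName`; a sketch with illustrative names and values:

```yaml
# Higher value wins under pressure: gold pods may preempt lower tiers
# when the scheduler cannot otherwise place them.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: tenant-gold
value: 100000
globalDefault: false
description: "High-priority tenant tier; may preempt lower tiers."
---
# Best-effort tier: low value, and preemptionPolicy: Never means
# bronze pods wait rather than evicting anyone else.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: tenant-bronze
value: 1000
preemptionPolicy: Never
description: "Best-effort tenant tier."
```

Without something like this, pods default to priority 0 and the scheduler's first-come-first-served behavior decides which tenant's workloads run when the cluster fills up.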


GKE, EKS, and AKS all recommend separate clusters over shared clusters for hard multi-tenancy

All three major managed Kubernetes providers' official documentation recommends separate clusters for tenants requiring strong isolation (different organizations, compliance boundaries, hostile workloads). Shared clusters are recommended only for "soft" multi-tenancy (teams within the same organization). The security community generally agrees: Kubernetes namespace-based isolation is insufficient for untrusted tenants, and the kernel is a shared attack surface.


Cost allocation in multi-tenant clusters is unsolved at the Kubernetes level

Kubernetes has no built-in mechanism for tracking per-tenant resource consumption costs. Tools like Kubecost, OpenCost, and CloudZero fill this gap by correlating pod resource usage with cloud billing data. However, shared resources (control plane, networking overhead, unused reserved capacity) are difficult to attribute fairly. The "showback vs chargeback" debate — showing tenants their costs vs actually billing them — is as much a cultural challenge as a technical one.