How We Got Here: Container Evolution¶
Arc: Infrastructure · Eras covered: 6 · Timeline: ~1979-2025 · Read time: ~14 min
The Original Problem¶
From the earliest days of Unix, people wanted isolation. You wanted to run two applications on the same machine without them stepping on each other — different library versions, different configurations, different users. The OS gave you process isolation and file permissions, but they were coarse-grained and easy to break. If two applications needed different versions of the same shared library, you were stuck.
The VM solved this by giving each application its own entire operating system. But that was wasteful — booting a whole kernel, allocating dedicated RAM, maintaining a full OS just to isolate a Python script from a Java app. There had to be a middle ground.
Era 1: chroot and Early Isolation (~1979-2000)¶
The Solution¶
The chroot system call, introduced in Version 7 Unix (1979) and added to BSD in 1982, changed the apparent root directory for a process. A chrooted process couldn't see or access files outside its new root. It was the first practical isolation primitive in Unix — though, as we'll see, it confined only the filesystem.
What It Looked Like¶
# Create a minimal chroot environment
mkdir -p /srv/jail/{bin,lib,lib64,etc}
cp /bin/bash /srv/jail/bin/
cp /lib/x86_64-linux-gnu/libc.so.6 /srv/jail/lib/
cp /lib/x86_64-linux-gnu/libdl.so.2 /srv/jail/lib/
cp /lib64/ld-linux-x86-64.so.2 /srv/jail/lib64/
# (plus whatever else `ldd /bin/bash` lists on your system)
# Enter the jail
chroot /srv/jail /bin/bash
# Now / is /srv/jail — you can't see the host filesystem
Why It Was Better¶
- Simple and built into every Unix system
- Zero overhead — no hypervisor, no extra kernel
- Useful for build environments and FTP servers
- Conceptually clean — just a filesystem namespace
Why It Wasn't Enough¶
- Only isolated the filesystem — processes, network, users were still shared
- Root in a chroot could break out trivially (mknod, mount, ptrace)
- No resource limits — a chrooted process could consume all CPU and RAM
- No process isolation — could see and signal other processes
- Extremely manual to set up (copy every library dependency by hand)
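That last pain point was usually scripted away. A minimal sketch of the idea (the jail path here is a scratch directory for illustration; a real jail would live somewhere like /srv): use `ldd` to discover a binary's shared-library dependencies and mirror each one into the jail.

```shell
# Build a jail directory for one binary by copying everything ldd reports.
# JAIL is a throwaway location for illustration.
JAIL=$(mktemp -d)
mkdir -p "$JAIL/bin"
cp /bin/bash "$JAIL/bin/"
# ldd prints each dependency's absolute path; extract and copy them.
for lib in $(ldd /bin/bash | grep -o '/[^ )]*'); do
    mkdir -p "$JAIL$(dirname "$lib")"   # mirror the host's library layout
    cp "$lib" "$JAIL$(dirname "$lib")/"
done
ls -R "$JAIL" | head
```

This is exactly the tedium that later image formats (and tools like debootstrap) automated.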
Legacy You'll Still See¶
chroot is still used in build systems (mock for RPM builds, debootstrap for Debian), embedded systems, and as a recovery technique (chroot into a mounted root filesystem to fix a broken boot). The concept of filesystem isolation is the foundation of every container runtime.
Era 2: FreeBSD Jails and Solaris Zones (~2000-2005)¶
The Solution¶
FreeBSD Jails (2000) extended chroot with process isolation, network isolation (each jail got its own IP address), and user isolation. Solaris Zones (2005) went further with resource controls (CPU caps, memory limits) and a robust administrative model. These were the first true OS-level virtualization systems.
What It Looked Like¶
# FreeBSD Jail creation
mkdir -p /jails/web01
# Install a minimal FreeBSD into the jail
fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.0-RELEASE/base.txz
tar -xf base.txz -C /jails/web01
# /etc/jail.conf
web01 {
    host.hostname = "web01.example.com";
    ip4.addr = 192.168.1.10;
    path = "/jails/web01";
    mount.devfs;
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}
# Start the jail
service jail start web01
jexec web01 /bin/sh # enter the jail
Why It Was Better¶
- True isolation: separate process space, network stack, users
- Lightweight: shared kernel, no hypervisor overhead
- Resource management: Zones could cap CPU and memory
- Production-grade: used in hosting environments for real multi-tenancy
- Administrative model: proper lifecycle management (create, start, stop, delete)
Why It Wasn't Enough¶
- BSD-only (Jails) or Solaris-only (Zones) — Linux was winning the server war
- No portable image format — jails were bound to the host OS version
- No ecosystem of pre-built images
- Configuration was expert-level sysadmin work
- The Linux world needed its own answer
Legacy You'll Still See¶
FreeBSD jails are still used in hosting (Hetzner, some CDN providers). Solaris Zones persist in Oracle environments. The iocage tool modernized jail management. The concepts — per-container namespaces and resource controls — were reimplemented in Linux (as namespaces and cgroups) and became the foundation of Docker.
Era 3: LXC and Linux Namespaces (~2006-2013)¶
The Solution¶
Linux namespaces (mount: 2002, UTS/IPC: 2006, PID: 2008, network: 2009, user: 2013) and cgroups (merged in kernel 2.6.24, 2008) brought jail-like isolation to Linux. LXC (Linux Containers, 2008) wrapped these kernel features into a usable tool. You could run a full Linux distribution inside a container with its own init system, network, and filesystem.
What It Looked Like¶
# Create an LXC container
lxc-create -n mycontainer -t ubuntu
# Start it
lxc-start -n mycontainer -d
# Attach to it
lxc-attach -n mycontainer
# Inside: it looks like a full Linux system
root@mycontainer:/# ps aux
root@mycontainer:/# ip addr
root@mycontainer:/# apt-get install nginx
Why It Was Better¶
- Native Linux support — no BSD or Solaris required
- Used real kernel primitives (namespaces, cgroups) — not a hack
- Lightweight: millisecond startup, minimal overhead
- Could run any Linux distribution as a guest
- Paved the way for application-level containers
Why It Wasn't Enough¶
- System containers (full OS) were conceptually close to VMs — still pets
- No standard image format — container definitions were machine-specific
- No image registry or distribution mechanism
- Configuration was complex and poorly documented
- Networking required manual bridge setup
- Security model was immature (user namespaces came late)
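The manual bridge setup mentioned above looked roughly like this (a sketch; requires root, and the bridge name and subnet are illustrative):

```shell
# Create a bridge for container veth interfaces to attach to (root required).
ip link add br0 type bridge
ip addr add 192.168.100.1/24 dev br0
ip link set br0 up
# Each container's veth pair is then enslaved to br0 via the LXC config:
#   lxc.net.0.type = veth
#   lxc.net.0.link = br0
```

Docker later automated exactly this pattern with its default `docker0` bridge.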
Legacy You'll Still See¶
LXC evolved into LXD (Canonical) and is used for system containers where you need a full OS environment. Proxmox uses LXC for lightweight virtualization. The Linux namespaces and cgroups that LXC popularized are the exact same primitives that Docker uses under the hood.
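You can inspect those primitives directly on any modern Linux machine — each process's namespace memberships appear as symlinks under /proc/&lt;pid&gt;/ns:

```shell
# Every process belongs to one namespace of each type; the symlink targets
# are namespace IDs. Processes in the same container share the same IDs.
ls /proc/self/ns                  # cgroup ipc mnt net pid user uts ...
readlink /proc/self/ns/mnt        # e.g. mnt:[4026531841]
# Compare against PID 1 (reading another user's process needs root);
# inside a container these IDs usually differ from the host's.
readlink /proc/1/ns/mnt 2>/dev/null
```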
Era 4: Docker (~2013-2017)¶
The Solution¶
Docker (2013) didn't invent containers — it made them usable. Solomon Hykes and his team at dotCloud combined LXC (later replaced with libcontainer/runc), a layered filesystem (AUFS, later OverlayFS), a Dockerfile build format, an image registry (Docker Hub), and a CLI that made the whole workflow feel like magic. The key insight was application containers, not system containers: one process per container, immutable images, disposable instances.
What It Looked Like¶
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["gunicorn", "app:app", "--bind", "0.0.0.0:8000"]
# Build, tag for the registry, push, and run
docker build -t myapp:1.0 .
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
docker run -d -p 8000:8000 registry.example.com/myapp:1.0
Why It Was Better¶
- Developer-friendly: Dockerfile was readable and learnable in an hour
- Portable images: build once, run anywhere Docker runs
- Docker Hub: thousands of pre-built images for every language and database
- Layered filesystem: fast builds through layer caching
- The "works on my machine" problem was genuinely solved
- Community explosion — Docker became synonymous with containers
Why It Wasn't Enough¶
- Docker daemon ran as root — security concern
- Docker Swarm couldn't compete with Kubernetes for orchestration
- Image sizes were often bloated (1GB+ for a simple app)
- Docker Inc.'s business model struggles created ecosystem uncertainty
- The monolithic Docker daemon was a single point of failure
- Networking and storage were bolted on, not designed in
Legacy You'll Still See¶
Docker is everywhere. Dockerfiles are the standard build format. Docker Compose is the default for local multi-service development. Docker Desktop is on most developer laptops. The Docker image format became the OCI standard. Even when you're "running Kubernetes," the container images are built with Docker.
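Compose's pull is easy to see in a minimal example (service names, ports, and images here are illustrative):

```yaml
# docker-compose.yml — two services, one command to run them together
services:
  web:
    build: .                 # uses the Dockerfile in this directory
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
# Start everything with: docker compose up -d
```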
Era 5: containerd and CRI-O (~2017-2022)¶
The Solution¶
Kubernetes didn't need all of Docker — it needed a container runtime that spoke a standard interface. The Container Runtime Interface (CRI) was born. Docker donated containerd to the CNCF (2017), and Red Hat built CRI-O as a minimal Kubernetes-focused runtime. The monolithic Docker daemon was decomposed: containerd for lifecycle management, runc for the actual container execution.
What It Looked Like¶
# containerd CLI (ctr) — lower-level than Docker
ctr images pull docker.io/library/nginx:latest
ctr run --rm -t docker.io/library/nginx:latest web01
# crictl — Kubernetes CRI debugging tool
crictl images
crictl ps
crictl logs <container-id>
# But most people interact through kubectl, not the runtime directly
kubectl run nginx --image=nginx:latest
Why It Was Better¶
- Smaller attack surface than the full Docker daemon
- Purpose-built for orchestration (CRI-native)
- containerd is more stable and resource-efficient than dockerd
- Kubernetes 1.24 (2022) removed dockershim — clean break
- OCI standardization means images work across all runtimes
Why It Wasn't Enough¶
- Debugging was harder — familiar Docker CLI commands didn't work on K8s nodes
- Image building still usually required Docker (or alternatives like Buildah, kaniko)
- The "Docker is dead" narrative confused many practitioners
- containerd and CRI-O lack Docker's developer experience for local work
Legacy You'll Still See¶
containerd is the default runtime for EKS, GKE, and most managed Kubernetes. CRI-O is the default for OpenShift. Docker remains the standard for building images and local development. The runtime war is over — containerd won for Kubernetes, Docker won for developers.
Era 6: Kata Containers, gVisor, and WebAssembly (~2020-2025)¶
The Solution¶
Standard containers share the host kernel, which means a kernel exploit in one container compromises all containers. Kata Containers (2017, mainstream ~2020) runs each container in a lightweight VM, combining container workflow with VM-level isolation. Google's gVisor intercepts system calls through a user-space kernel. WebAssembly (Wasm) takes a different approach entirely: a sandboxed bytecode format that runs anywhere, starts in microseconds, and provides strong isolation by default.
What It Looked Like¶
# Kata Containers — looks like a normal container but runs in a microVM
# Configure containerd to use kata runtime
# /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
runtime_type = "io.containerd.kata.v2"
# Kubernetes RuntimeClass for Kata
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# Pod using Kata runtime
apiVersion: v1
kind: Pod
metadata:
  name: secure-workload
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: myapp:latest
// WebAssembly component — runs in wasmtime, Fermyon Spin, etc.
// Compiles to .wasm, starts in <1ms, sandboxed by default
use spin_sdk::http::{Request, Response};

#[spin_sdk::http_component]
fn handle(_req: Request) -> Response {
    Response::builder()
        .status(200)
        .body("Hello from Wasm!")
        .build()
}
Why It Was Better¶
- Kata/gVisor: VM-level isolation with container workflow
- Wasm: microsecond cold starts, ~1MB footprint, language-agnostic
- Multi-tenancy becomes safer (run untrusted code without fear)
- Wasm is truly portable — same binary runs on cloud, edge, embedded
- Strong sandboxing by default, not by configuration
Why It Wasn't Enough¶
- Kata adds VM overhead (memory, startup time) to each container
- gVisor has incomplete syscall coverage — some apps don't work
- Wasm ecosystem is immature — limited library support, no standard network/filesystem API yet
- WASI (WebAssembly System Interface) is still being standardized
- Adoption is early — most teams haven't needed to move beyond standard containers
Legacy You'll Still See¶
This is the current frontier. Kata Containers appear in security-sensitive environments (financial services, multi-tenant platforms). gVisor powers GKE Sandbox. Wasm is used by Cloudflare Workers, Fermyon Spin, and Cosmonic. Most teams are watching, not adopting yet.
Where We Are Now¶
Docker-format containers running on containerd are the overwhelming mainstream. Kubernetes orchestrates them. Image building happens with Docker, Buildah, or kaniko. The container itself is a solved problem — the innovation has moved to what runs inside (Wasm) and how securely it runs (Kata, gVisor). The OCI standard ensures that images are portable across runtimes.
Where It's Going¶
WebAssembly is the most credible next-generation container format. Solomon Hykes (Docker's creator) said in 2019: "If WASM+WASI existed in 2008, we wouldn't have needed to create Docker." Mainstream Wasm adoption in server-side computing is likely still 3-5 years out. Meanwhile, Kata and gVisor are on track to become the default for multi-tenant and security-sensitive workloads.
The Pattern¶
Each generation finds a more granular unit of isolation, paying less overhead for the same or a stronger security boundary. The winning approach is always the one that fits naturally into the existing developer workflow — nobody adopts better isolation if it requires a completely different way of working.
Key Takeaway for Practitioners¶
Master Docker and containerd — they are today's reality. Keep an eye on Wasm for the future, but don't bet your production infrastructure on it yet. The most important skill is understanding what containers actually are (namespaces + cgroups + a filesystem image), so you can debug them when the abstraction leaks.
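The cgroup half of that definition is also just a filesystem you can read. A quick look on a cgroup-v2 Linux host (paths vary by distro and cgroup version):

```shell
# Which cgroup does my shell belong to?
cat /proc/self/cgroup
# Controllers and limits live under /sys/fs/cgroup as plain files.
ls /sys/fs/cgroup | head
# A container runtime enforces limits by writing to files like memory.max:
cat /sys/fs/cgroup/memory.max 2>/dev/null || echo "cgroup v1 host (no memory.max here)"
```

When a container is OOM-killed or CPU-throttled, these are the files to check.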
Cross-References¶
- Topic Packs: Docker, Kubernetes, containerd
- Tool Comparisons: Docker vs containerd vs CRI-O
- Evolution Guides: Bare Metal to Serverless, Kubernetes Itself