- devops
- l3
- topic-pack
- wasm-infrastructure

Portal | Level: L3: Advanced | Topics: WebAssembly for Infrastructure | Domain: DevOps & Tooling
WebAssembly for Infrastructure - Primer¶
Why This Matters¶
WebAssembly (Wasm) for infrastructure means running compiled Wasm modules as server-side workloads — not in the browser, but alongside or instead of containers. Wasm binaries are portable, sandboxed by default, and start in microseconds rather than the hundreds of milliseconds a typical container needs. For DevOps teams, this translates to lighter resource footprints, stronger isolation without kernel privileges, and genuine polyglot support: Rust, Go, and C/C++ compile to the same .wasm target, and interpreted languages such as Python can ship an interpreter compiled to Wasm. Solomon Hykes (Docker's creator) said: "If WASM+WASI existed in 2008, we wouldn't have needed to create Docker." While that future is still developing, Wasm is production-ready at the edge (Cloudflare Workers, Fastly Compute), and the Kubernetes integration path is maturing.
Core Concepts¶
1. Key Technologies¶
WASI (WebAssembly System Interface) — the standardized API that gives Wasm modules controlled access to filesystems, environment variables, clocks, and network sockets. Without WASI, a Wasm module has zero host access — capabilities are granted explicitly at runtime. This is a fundamentally different security model from containers (which share a kernel and require capabilities to be removed).
Name origin: WebAssembly was named to evoke "assembly language for the web," but the "Web" part is increasingly misleading. WASI drops the browser assumption entirely — it stands for WebAssembly System Interface, deliberately echoing POSIX (Portable Operating System Interface). The W3C standardized the core Wasm spec; WASI is governed by the Bytecode Alliance.
Wasm Runtimes — execute modules outside the browser:

| Runtime | Maintained by | Strengths |
|---------|--------------|-----------|
| Wasmtime | Bytecode Alliance | Reference implementation, Cranelift JIT |
| WasmEdge | CNCF | Cloud/edge optimized, AI inference support |
| Wasmer | Wasmer Inc. | Package manager (wapm), multi-backend |
| wazero | Go community | Pure Go, zero CGO, embeddable |
Who made it: Wasmtime is the reference implementation from the Bytecode Alliance, a nonprofit founded by Mozilla, Fastly, Intel, and Red Hat in 2019. WasmEdge became a CNCF Sandbox project in 2021. The Bytecode Alliance also maintains the Cranelift compiler backend that Wasmtime uses for JIT compilation.
Spin (Fermyon) — developer framework for serverless Wasm applications.
containerd-wasm-shim — bridges Wasm into Kubernetes via containerd RuntimeClass.
SpinKube — Kubernetes operator that packages the Spin runtime and containerd shim integration.
2. Runtime Examples¶
# Install Wasmtime
curl https://wasmtime.dev/install.sh -sSf | bash
# Run a Wasm module
wasmtime hello.wasm
# Run with WASI capabilities
wasmtime --dir=/data hello.wasm # grant filesystem access to /data
wasmtime --env KEY=VALUE hello.wasm # pass environment variable
wasmtime --tcplisten=127.0.0.1:8080 server.wasm # grant network listening
# Without explicit grants, the module cannot access anything:
wasmtime hello.wasm # no fs, no env, no network — sandboxed by default
# Install WasmEdge
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash
# Run with WasmEdge
wasmedge hello.wasm
wasmedge --dir /data server.wasm
Compile to Wasm from Rust:
# Add Wasm target (newer Rust toolchains rename this to wasm32-wasip1)
rustup target add wasm32-wasi
# Build
cargo build --target wasm32-wasi --release
# Output: target/wasm32-wasi/release/myapp.wasm
# Run
wasmtime target/wasm32-wasi/release/myapp.wasm
Compile to Wasm from Go (TinyGo):
# Install TinyGo
# https://tinygo.org/getting-started/install/
# Build (newer TinyGo releases use -target=wasip1)
tinygo build -o myapp.wasm -target=wasi main.go
# Run
wasmtime myapp.wasm
3. WASI Patterns¶
// Rust: WASI file access
use std::fs;
use std::io::Write;

fn main() {
    // Only works if the host grants --dir=/data
    let data = fs::read_to_string("/data/config.json")
        .expect("Cannot read /data/config.json — was --dir=/data granted?");
    println!("Config: {}", data);

    let mut file = fs::File::create("/data/output.txt").unwrap();
    file.write_all(b"processed data").unwrap();
}
// Rust: Wasm HTTP handler using the Spin SDK
// (Spin's own HTTP trigger; the standardized wasi-http interface is still a proposal)
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;

#[http_component]
fn handle_request(_req: Request) -> anyhow::Result<impl IntoResponse> {
    Ok(Response::builder()
        .status(200)
        .header("content-type", "text/plain")
        .body("Hello from Wasm!")
        .build())
}
4. Spin Framework (Developer Workflow)¶
# Install Spin
curl -fsSL https://developer.fermyon.com/downloads/install.sh | bash
sudo mv spin /usr/local/bin/
# Create a new Spin application
spin new -t http-rust my-api
cd my-api
# Project structure
# spin.toml — application manifest
# src/lib.rs — handler code
# Build the application
spin build
# Run locally (starts HTTP server)
spin up
# Listening on http://127.0.0.1:3000
# Test
curl http://localhost:3000/hello
# Deploy to Fermyon Cloud
spin deploy
spin.toml:
spin_manifest_version = 2
[application]
name = "my-api"
version = "0.1.0"
[[trigger.http]]
route = "/hello"
component = "my-api"
[component.my-api]
source = "target/wasm32-wasi/release/my_api.wasm"
allowed_outbound_hosts = ["https://api.example.com"]
[component.my-api.build]
command = "cargo build --target wasm32-wasi --release"
5. Wasm on Kubernetes¶
# RuntimeClass for Wasm workloads
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm
handler: spin  # must match a containerd shim configured on the node (spin, wasmtime, wasmedge, ...)
---
# Pod using the Wasm runtime
apiVersion: v1
kind: Pod
metadata:
  name: wasm-hello
spec:
  runtimeClassName: wasm
  containers:
    - name: hello
      image: ghcr.io/example/hello-wasm:latest
      # No Linux root filesystem, no container image layers:
      # the OCI artifact contains only the .wasm binary
      command: ["/"]
      ports:
        - containerPort: 80
SpinKube setup:
# Install SpinKube operator
helm repo add spinkube https://spinkube.github.io/charts
helm install spin-operator spinkube/spin-operator \
--namespace spin-operator --create-namespace
# Deploy a SpinApp
kubectl apply -f - <<EOF
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: my-api
spec:
  image: ghcr.io/example/my-api:latest
  replicas: 3
  executor: containerd-shim-spin
EOF
kubectl get spinapps
kubectl get pods -l core.spinkube.dev/app-name=my-api
6. Edge Deployment¶
Wasm's fast startup and small binary size make it ideal for edge computing.
Cloudflare Workers (Wasm at the edge):
# Install the wrangler CLI
npm install -g wrangler
# Create a Rust worker (wrangler v1 syntax shown; newer projects are
# scaffolded from Cloudflare's workers-rs template instead)
wrangler init my-worker --type rust
cd my-worker
wrangler dev # local development
wrangler deploy # deploy to 300+ edge locations
Fastly Compute:
fastly compute init --from https://github.com/example/starter
fastly compute build
fastly compute serve # local test
fastly compute deploy
7. Performance Comparisons¶
| Metric | Container (Alpine) | Wasm (Wasmtime) |
|---|---|---|
| Cold start | 200-500ms | 1-5ms |
| Image size | 5-50MB | 0.5-5MB |
| Memory overhead | 20-50MB base | 1-10MB base |
| Isolation | Linux namespaces + cgroups | Wasm sandbox (no kernel sharing) |
| Syscall access | All (unless restricted) | Only explicitly granted |
| Portability | Linux (mostly) | Any OS with a Wasm runtime |
Analogy: Think of Wasm's capability model like a hotel room safe. A container is more like a room with a lock on the door — you have access to everything inside, and security means adding more locks. A Wasm module starts with an empty safe: every resource (filesystem, network, environment) must be explicitly handed to it by the host. You cannot accidentally leave something accessible because nothing is accessible by default.
Interview tip: When asked "containers vs Wasm," the key distinction is the isolation boundary. Containers share the host kernel and isolate via namespaces/cgroups (the kernel is the trust boundary). Wasm modules run in a language-level sandbox with no kernel access at all — the runtime is the trust boundary. This means Wasm cannot have container escape vulnerabilities by design.
Current limitations:

- WASI is still evolving — networking, threading, and GPU access are in proposal stages
- Not all languages compile well to Wasm (Python support is experimental)
- Debugging tools are less mature than container tooling
- Ecosystem is smaller — fewer pre-built components
- Some CPU-intensive workloads are slower than native (JIT overhead)
8. OCI Artifacts for Wasm¶
Wasm binaries can be distributed as OCI artifacts through standard container registries:
# Push a Wasm module to a registry
# Using wasm-to-oci or regctl
wasm-to-oci push myapp.wasm ghcr.io/myorg/myapp:v1.0.0
# Or with Spin
spin registry push ghcr.io/myorg/my-spin-app:v1.0.0
# Pull, then run locally
wasm-to-oci pull ghcr.io/myorg/myapp:v1.0.0 --out myapp.wasm
wasmtime myapp.wasm
Quick Reference¶
# Runtimes
wasmtime hello.wasm # run with Wasmtime
wasmedge hello.wasm # run with WasmEdge
wasmtime --dir=/data app.wasm # grant filesystem access
# Build
rustup target add wasm32-wasi && cargo build --target wasm32-wasi --release
tinygo build -o app.wasm -target=wasi main.go
# Spin framework
spin new -t http-rust my-app # create project
spin build # compile to Wasm
spin up # run locally
spin deploy # deploy to Fermyon Cloud
# Kubernetes
kubectl apply -f runtimeclass.yaml # register Wasm runtime
kubectl apply -f wasm-pod.yaml # deploy Wasm workload
kubectl get spinapps # SpinKube applications
# Key differences from containers
# - No kernel sharing (Wasm sandbox is self-contained)
# - Capabilities must be explicitly granted (--dir, --env, --tcplisten)
# - Cold starts in milliseconds or less vs hundreds of milliseconds
# - OCI artifacts for distribution (same registries, different content)
Wiki Navigation¶
Prerequisites¶
- Docker Exercises (Quest Ladder) (CLI) (Exercise Set, L0)