
WebAssembly Infrastructure — Trivia & Interesting Facts

Surprising, historical, and little-known facts about WebAssembly in infrastructure contexts.


Solomon Hykes (Docker founder) said Wasm could replace containers

In March 2019, Solomon Hykes tweeted: "If WASM+WASI existed in 2008, we wouldn't have needed to create Docker." This statement from the creator of the container revolution sent shockwaves through the infrastructure community. Hykes clarified that containers wouldn't disappear, but that Wasm's security model and portability made it a better primitive for many use cases. The tweet became one of the most cited quotes in the Wasm infrastructure discussion.


WebAssembly was designed by engineers from all four major browser vendors simultaneously

The WebAssembly specification was developed collaboratively by engineers from Google (Chrome/V8), Mozilla (Firefox/SpiderMonkey), Microsoft (Edge/Chakra), and Apple (Safari/JavaScriptCore). This level of cross-vendor collaboration is extremely rare in web standards. The MVP specification was released in March 2017, and all four browsers shipped support within months of each other, an unprecedented degree of coordination in browser history.


WASI (WebAssembly System Interface) was announced by Mozilla in 2019

WASI, the system interface that allows WebAssembly to run outside browsers, was announced by Mozilla's Lin Clark in March 2019, with engineer Dan Gohman leading its design. WASI provides a capability-based security model where a Wasm module can only access resources explicitly granted to it: no ambient authority. This sandboxing is fundamentally stronger than containers (which share the host kernel) and approaches the isolation of virtual machines at a fraction of the overhead.
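In practice, capabilities are granted at instantiation time. As a sketch using the Wasmtime CLI (assumes wasmtime is installed; module.wasm is a hypothetical WASI binary):

```shell
# Grant the module exactly one directory and one environment variable.
# Anything not granted here simply does not exist from the module's
# point of view: there is no ambient filesystem or env to discover.
wasmtime run --dir=. --env LOG_LEVEL=info module.wasm
```

An attempt by the module to open a path outside the granted directory fails not because of a permission check, but because the capability was never handed to it.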


A Wasm module can cold-start in under 1 millisecond

WebAssembly runtimes like Wasmtime and Wasmer can instantiate and start executing a Wasm module in under 1 millisecond, compared to 100-300ms for a container and 1-10+ seconds for a virtual machine. This cold-start speed makes Wasm compelling for serverless and edge computing, where functions need to respond to requests almost instantly. Cloudflare Workers, built on V8 isolates (the same engine technology that runs Wasm in Chrome), demonstrated this advantage at scale.
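The sub-millisecond figure refers to instantiation, not compilation, which can be done ahead of time and cached. A minimal embedding sketch using Wasmtime's Rust API (a sketch, not a benchmark; assumes the wasmtime and anyhow crates, and the tiny inline WAT module is purely illustrative):

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // Compilation is the expensive step; production systems compile
    // once (or ahead of time) and reuse the Module.
    let module = Module::new(&engine, r#"(module (func (export "run")))"#)?;

    // Instantiation and the first call are the fast path measured
    // in cold-start comparisons.
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call(&mut store, ())?;
    Ok(())
}
```

Separating compilation from instantiation is exactly what lets serverless platforms hand a fresh, isolated instance to every request.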


Cloudflare Workers handles over 10 million requests per second on lightweight isolates

Cloudflare Workers, launched in 2017, uses V8 isolates to run customer code at the network edge across 300+ datacenters worldwide. The platform processes over 10 million requests per second and can spin up new isolates in under 5 milliseconds. While not pure Wasm (it runs JavaScript and Wasm via V8), it demonstrated the infrastructure potential of lightweight sandboxing at massive scale.


Fermyon created a "serverless Wasm platform" that deploys applications in under a second

Fermyon, founded in 2021 by former Microsoft Azure engineers (including Matt Butcher, creator of Helm), built Spin — an open-source framework for building Wasm microservices. Fermyon Cloud can deploy a new application version in under 1 second. The company's pitch is that Wasm enables a serverless experience without the cold-start penalties and vendor lock-in of existing serverless platforms.


WebAssembly's memory-safety guarantees make compromised modules far easier to contain than native processes

Wasm modules operate in a linear memory sandbox: they cannot access memory outside their allocated space, cannot call arbitrary system functions, and cannot access the host filesystem or network without explicit WASI permissions. This means a compromised Wasm module is far more contained than a compromised native process. For infrastructure workloads processing untrusted input (CDN edge logic, plugin systems), this isolation is transformative.


Envoy Proxy uses Wasm for extensibility, replacing Lua and C++ filters

Envoy, the ubiquitous service mesh proxy, added Wasm support for custom filters in 2020. Previously, extending Envoy required writing C++ (complex and risky) or Lua (limited). Wasm filters can be written in Rust, Go, or C++ and dynamically loaded without recompiling Envoy. Istio's Wasm Plugin API builds on this, allowing service mesh users to deploy custom logic across thousands of proxy instances.
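As a sketch of what such a filter looks like, here is a minimal HTTP filter using the proxy-wasm Rust SDK (assumes the proxy-wasm crate and a wasm32 build target; the header name is invented for illustration):

```rust
use proxy_wasm::traits::{Context, HttpContext};
use proxy_wasm::types::{Action, LogLevel};

proxy_wasm::main! {{
    proxy_wasm::set_log_level(LogLevel::Info);
    // Create one filter instance per HTTP stream.
    proxy_wasm::set_http_context(|_context_id, _root_id| -> Box<dyn HttpContext> {
        Box::new(HeaderFilter)
    });
}}

struct HeaderFilter;

impl Context for HeaderFilter {}

impl HttpContext for HeaderFilter {
    // Called when request headers arrive; adds a header, then lets
    // the request continue through the rest of Envoy's filter chain.
    fn on_http_request_headers(&mut self, _num_headers: usize, _end_of_stream: bool) -> Action {
        self.add_http_request_header("x-wasm-filter", "hello");
        Action::Continue
    }
}
```

The compiled .wasm file is then loaded by Envoy's Wasm filter (or distributed via Istio's WasmPlugin API) without recompiling the proxy.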


The Component Model aims to solve Wasm's biggest limitation: module interoperability

WebAssembly's Component Model (under development as of 2024-2025) defines how Wasm modules can interact with each other and the host through typed interfaces, similar to how microservices communicate via APIs. Without the Component Model, sharing complex data types between modules requires manual serialization. The Component Model, combined with WIT (WebAssembly Interface Type) definitions, could make Wasm composability as natural as function calls.
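As a hedged sketch of what a WIT definition looks like (the package, interface, and function names here are invented for illustration):

```wit
package docs:example@0.1.0;

interface greeter {
  // A typed function signature shared between components,
  // with no manual serialization of the string arguments.
  greet: func(name: string) -> string;
}

world service {
  export greeter;
}
```

Tooling generates the host and guest bindings from this definition, so a component written in Rust can call one written in Go as if it were a local function.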


Docker Desktop added Wasm support in 2022, signaling mainstream acceptance

In October 2022, Docker announced beta support for running Wasm workloads alongside Linux containers. The docker run --runtime=io.containerd.wasmedge.v1 command runs a Wasm module using WasmEdge through containerd's standard interface. This integration means Wasm workloads can be managed with the same Docker/Kubernetes tooling that teams already use, dramatically lowering the adoption barrier.
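A sketch of that beta workflow (flag spellings varied across beta releases, and the image name is illustrative):

```shell
# Run a Wasm module through containerd's WasmEdge shim instead of
# a Linux container runtime. The image is a standard OCI artifact,
# so existing registries and tooling work unchanged.
docker run \
  --runtime=io.containerd.wasmedge.v1 \
  --platform=wasi/wasm \
  example/hello-wasm:latest
```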


Bytecode Alliance is the industry consortium governing Wasm's infrastructure future

The Bytecode Alliance, founded in 2019 by Mozilla, Fastly, Intel, and Red Hat, governs the development of Wasmtime (the reference Wasm runtime), WASI, and the Component Model. The consortium's mission is to create a secure-by-default WebAssembly ecosystem for server-side use cases. Its members now include Microsoft, Google, Amazon, and dozens of smaller companies building Wasm infrastructure.