
WebAssembly for Infrastructure Footguns


1. Assuming WASM Has Full POSIX Support

You port a Linux service to WASM expecting it to behave like a container. Your application uses raw sockets, signals, threads, or filesystem operations that aren't supported in WASI Preview 1. It compiles fine but crashes at runtime with "not implemented" or "illegal instruction" traps. Fix: Check WASI capability support before porting. WASI Preview 1 has no general-purpose network sockets (any networking must be provided by the host), limited thread support, and no signals. Use WASI Preview 2 (the Component Model) for broader syscall coverage. Some frameworks (Spin, wasmCloud) expose networking at the framework level rather than through WASI.

Remember: WASI is not POSIX. It is a capability-based security model where the host grants specific abilities. No raw sockets, no fork(), no signals, no /proc. Think of it as a stricter sandbox than containers — that strictness is the security benefit, but it means not everything ports cleanly.
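A minimal sketch of what "the host grants specific abilities" looks like in practice, assuming the wasmtime CLI; app.wasm is a hypothetical module name, not a real artifact:

```shell
# Nothing is granted by default: the module cannot open host files,
# read environment variables, or reach the network.
wasmtime run app.wasm

# Grants are explicit and scoped: only /data is visible to the guest.
wasmtime run --dir=/data app.wasm

# Environment variables are capabilities too -- pass them explicitly.
wasmtime run --dir=/data --env LOG_LEVEL=info app.wasm
```

The flags differ between runtimes (wasmtime, WasmEdge, Wasmer each have their own), but the model is the same: anything not granted on the command line simply does not exist inside the sandbox.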


2. Shipping Debug WASM Binaries to Production — 10x Size Bloat

Your WASM binary built with --debug or without release optimization flags is 15MB. The same code built with --release and wasm-opt -O3 is 800KB. You didn't notice the size difference until production startup times and network transfer overhead became apparent. Fix: Always build for production with release flags: cargo build --target wasm32-wasi --release. Post-process with wasm-opt -O3 -o optimized.wasm input.wasm. Verify binary size in CI: ls -lh *.wasm and fail if above a threshold. Track binary size as a metric over time.
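The CI size gate can be a few lines of shell. This is a sketch: MAX_BYTES and the artifact name are assumptions to tune for your project, and the dd-generated demo.wasm stands in for a real build output so the script runs anywhere:

```shell
#!/bin/sh
# Fail the build if the .wasm artifact exceeds a byte budget.
set -eu

MAX_BYTES=1048576   # 1 MiB budget for the optimized binary (assumed)

check_wasm_size() {
  file="$1"
  size=$(wc -c < "$file")
  if [ "$size" -gt "$MAX_BYTES" ]; then
    echo "FAIL: $file is ${size} bytes (budget ${MAX_BYTES})"
    return 1
  fi
  echo "OK: $file is ${size} bytes"
}

# Demo with a dummy 512 KiB artifact so the sketch is self-contained;
# in CI you would point this at your real build output instead.
dd if=/dev/zero of=demo.wasm bs=1024 count=512 2>/dev/null
check_wasm_size demo.wasm
rm -f demo.wasm
```

Run it as the last step of the build job; a non-zero exit fails the pipeline, and logging the byte count on every run gives you the size-over-time metric for free.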


3. Granting Overly Broad Filesystem Access Via --dir=/

You grant --dir=/ to a WASM module, giving it access to the entire host filesystem to make development easier. The WASI capability model's entire security benefit is that modules can't access what they're not granted. Granting root defeats this. Fix: Grant only the specific directories the module needs: --dir=/data --dir=/tmp. Treat capability grants like Linux filesystem permissions — minimum necessary access. Review capability grants as part of security review for any WASM deployment, especially for untrusted or third-party modules.

Gotcha: --dir=/ in development has a habit of making it into production Dockerfiles and Helm values. Treat WASI capability grants like IAM policies — review them in code review, and flag --dir=/ the same way you would flag "Action": "*" in an IAM policy.
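That review can be automated. A sketch of a pre-merge lint that flags the root grant the same way you would flag a wildcard IAM action; the file name values-demo.yaml and its contents are illustrative stand-ins for your real manifests:

```shell
#!/bin/sh
# Flag --dir=/ (whole-host grant) but not scoped grants like --dir=/data.
set -eu

flag_root_grants() {
  # Match --dir=/ only when followed by end-of-line, whitespace, or a quote,
  # so --dir=/data and friends do not trigger. || true: no match is not an error.
  grep -nE -e '--dir=/($|[[:space:]"])' "$@" || true
}

# Demo input standing in for a Helm values file.
cat > values-demo.yaml <<'EOF'
args:
  - "--dir=/"      # bad: whole host filesystem
  - "--dir=/data"  # fine: scoped grant
EOF

matches=$(flag_root_grants values-demo.yaml)
if [ -n "$matches" ]; then
  echo "REVIEW NEEDED: root filesystem grant found:"
  printf '%s\n' "$matches"
fi
rm -f values-demo.yaml
```

Pointed at your deployment manifests in CI, this turns "someone happens to notice --dir=/ in review" into a deterministic check.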


4. WASM Runtime Class Missing Breaks Kubernetes Scheduling With Confusing Error

You apply a Pod manifest with runtimeClassName: spin to a cluster that doesn't have the Spin containerd shim installed. The pod goes into Pending state indefinitely. kubectl describe pod shows "RuntimeClass.node.k8s.io 'spin' not found" — but this error is easy to miss. Fix: Verify runtime class availability before deploying: kubectl get runtimeclasses. Install the appropriate shim on nodes (e.g., containerd-shim-spin, containerd-shim-wasmtime) and configure containerd's config.toml. Label WASM-capable nodes and use node affinity to schedule WASM pods there.
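The two objects involved look roughly like this. The handler name must match a runtime configured in containerd's config.toml on the node; the handler "spin", the node label, and the image are assumptions for illustration:

```shell
#!/bin/sh
# Emit a sketch of the RuntimeClass and a pod that references it.
manifest=$(cat <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: spin
handler: spin                  # containerd runtime name from config.toml (assumed)
scheduling:
  nodeSelector:
    workload-type: wasm        # assumed label on shim-equipped nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-wasm
spec:
  runtimeClassName: spin       # must exist: kubectl get runtimeclasses
  containers:
    - name: app
      image: registry.example.com/hello-wasm:latest
EOF
)
printf '%s\n' "$manifest"
```

The scheduling.nodeSelector block on the RuntimeClass is what gives you the node-affinity behavior: pods using the class are only scheduled onto nodes carrying the label, so a missing shim surfaces at class creation rather than as an eternally Pending pod.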


5. Treating WASM Startup Time as Free — Ignoring Cold Start Overhead

You deploy serverless WASM functions expecting microsecond cold starts. On Cloudflare Workers this is true. But on Kubernetes with a containerd shim, cold starts include: pulling the OCI image, initializing the runtime, JIT compiling the WASM (for large modules), and WASI setup. Large modules can still take hundreds of milliseconds to start. Fix: Measure actual cold start times in your target runtime, not benchmark figures from different environments. Pre-compile WASM modules at deployment time (AOT compilation) using wasmtime compile my-app.wasm. Keep module size small — split large modules if startup time matters.

Scale note: Cloudflare Workers achieve sub-millisecond cold starts because they use V8 isolates, not standard WASM runtimes. Kubernetes with containerd shims (Spin, wasmtime) have cold starts in the 50-500ms range depending on module size and AOT status. Always benchmark in your actual deployment target.
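A sketch of a cold-start measurement harness. The time_ms helper reports wall-clock milliseconds for any command; the commented wasmtime lines show the intended use and assume the wasmtime CLI is installed (app.wasm is a placeholder name), while the sleep demo lets the harness run anywhere:

```shell
#!/bin/sh
# Measure wall-clock duration of a command in milliseconds (GNU date assumed).
set -eu

time_ms() {
  start=$(date +%s%N)
  "$@" >/dev/null 2>&1 || true
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))
}

# Intended use (requires wasmtime; compare AOT vs JIT startup):
#   wasmtime compile app.wasm -o app.cwasm              # AOT at deploy time
#   time_ms wasmtime run --allow-precompiled app.cwasm  # AOT cold start
#   time_ms wasmtime run app.wasm                       # JIT cold start

# Demo so the harness is self-contained:
elapsed=$(time_ms sleep 0.2)
echo "cold start took ${elapsed} ms"
```

Run the measurement on a node in your actual cluster, not a laptop: image pull, shim initialization, and WASI setup only show up in the real deployment target.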


6. Component Model Incompatibilities Between Different WASM Runtimes

You build a WASM component targeting the Component Model (WASI Preview 2), test it with wasmtime, and deploy to a Kubernetes cluster running WasmEdge. The component fails because WasmEdge's Component Model support is at a different version or has different interface compatibility. Fix: Pin the runtime version in your deployment configuration. Test with the exact runtime that will run in production. The WASM Component Model specification is still evolving — check runtime compatibility matrices before mixing runtimes. Document which WASM specification version your module targets.
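A sketch of a deploy-time guard that refuses to roll out when the node's runtime differs from the version the component was tested against. The pinned value and the stubbed version strings are assumptions; in production you would feed in the output of the runtime's own --version flag:

```shell
#!/bin/sh
# Compare the node's runtime version string against a pinned value.
set -eu

PINNED="wasmtime 21.0.1"   # assumed: the version you tested against

assert_runtime_version() {
  actual="$1"
  if [ "$actual" != "$PINNED" ]; then
    echo "MISMATCH: pinned '$PINNED' but node reports '$actual'"
    return 1
  fi
  echo "runtime version OK"
}

# In production: assert_runtime_version "$(wasmtime --version)"
# Stubbed here to keep the sketch self-contained:
assert_runtime_version "wasmtime 21.0.1"
assert_runtime_version "wasmtime 19.0.0" || status=$?
echo "exit status for mismatch: ${status:-0}"
```

The same check belongs in CI: run the test suite against the pinned runtime binary, and fail the pipeline when the cluster and the test environment drift apart.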


7. Using WASM Where Containers Already Work Well

Your team adopts WASM for all new services because it's new and exciting. Your Java microservices, Postgres-backed APIs, and full Linux daemons need capabilities (networking, threading, filesystem) that WASI doesn't fully support. You spend weeks debugging compatibility issues for no operational benefit over containers. Fix: WASM makes sense for specific use cases: edge functions, plugins, sandboxed untrusted code, and small polyglot functions. For standard server-side workloads, containers are more mature, better tooled, and fully POSIX-compatible. Adopt WASM where it provides specific advantages, not as a blanket replacement for containers.


8. No Observability Into WASM Module Execution

Your WASM function is deployed and processing requests, but you have no metrics, traces, or structured logs. WASM modules can't directly call Prometheus, Jaeger, or other observability backends — all I/O goes through the host/framework. If the module is slow or erroring, you find out through user reports. Fix: Use your framework's observability hooks. Spin provides a wasi:logging interface. For Kubernetes, use the sidecar pattern to collect stdout/stderr logs. Build metrics emission into the host that wraps your WASM module. Use OpenTelemetry with WASM-compatible exporters if your runtime supports WASI sockets or HTTP.
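For the sidecar pattern to work, the module needs to emit something a log collector can parse. Inside the sandbox, stdout is often the only guaranteed output channel, so a common approach is one JSON object per line. A sketch; the field names are illustrative, not a standard:

```shell
#!/bin/sh
# Emit one structured JSON log line to stdout for a sidecar to ship.
set -eu

log_json() {
  level="$1"; msg="$2"
  printf '{"ts":"%s","level":"%s","msg":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$level" "$msg"
}

line=$(log_json error "upstream timeout after 2s")
echo "$line"
```

One line per event keeps the collector's job trivial (Fluent Bit, Vector, and friends all parse newline-delimited JSON out of the box), and the same discipline applies whether the bytes reach stdout via wasi:logging, the framework's log hook, or a plain write to fd 1.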