# Pattern: tmpfs Consuming Hidden RAM

**ID:** FP-007 · **Family:** Resource Exhaustion · **Frequency:** Uncommon · **Blast Radius:** Single Host · **Detection Difficulty:** Subtle
## The Shape
tmpfs filesystems (including `/tmp`, `/dev/shm`, `/run`, and tmpfs-backed container mounts)
are backed by RAM and swap, not disk. Large files written to these locations consume
physical memory, but the consumption is invisible to standard memory monitoring that
only watches process RSS and slab cache: tmpfs pages are accounted as shared memory
(`Shmem`), not charged to any one process. The host can OOM without any single process
showing high memory usage.
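A minimal sketch of the effect, assuming a Linux host with a writable `/dev/shm` (the file name and 16 MiB size are arbitrary): the file raises the kernel's `Shmem` counter while belonging to no process's RSS.

```shell
#!/bin/sh
# Sketch: a file in /dev/shm consumes RAM (the Shmem line in
# /proc/meminfo) without appearing in any process's RSS.
# Cleans up after itself.
shmem_kb() { awk '/^Shmem:/ {print $2}' /proc/meminfo; }

before=$(shmem_kb)
dd if=/dev/zero of=/dev/shm/tmpfs-demo bs=1M count=16 status=none
after=$(shmem_kb)
rm -f /dev/shm/tmpfs-demo

# The delta is ~16 MiB even though dd has already exited.
echo "Shmem grew by $(( (after - before) / 1024 )) MiB while the file existed"
```

The writing process can exit entirely; the memory stays pinned until the file is deleted (or, for open-but-unlinked files, until the last descriptor closes).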
## How You'll See It

### In Linux/Infrastructure
```console
$ free -h
               total        used        free      shared  buff/cache   available
Mem:             16G          2G          1G          8G         13G          6G
                               ↑ used looks low but...  ↑ "shared" is tmpfs/shm usage

$ df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           7.8G  7.5G  300M  97% /dev/shm
```
The `shared` column of `free` is the memory used by tmpfs and shared memory. A process
writing to `/dev/shm` for IPC (databases, video processing, scientific computing) can
fill RAM without showing high RSS in `ps` or `top`.
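To attribute the usage, list every tmpfs mount and then the largest entries under the common ones. A sketch; the paths and `head` counts are assumptions, and root may be needed to see other users' files:

```shell
# All tmpfs mounts with size/used/avail:
df -h -t tmpfs

# Largest entries under the usual suspects:
du -xsh /dev/shm/* /tmp/* /run/* 2>/dev/null | sort -rh | head -20

# Which processes hold files there open (deleted-but-open files
# still consume RAM until the last descriptor closes):
lsof +D /dev/shm 2>/dev/null | head
```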
### In Kubernetes
Kubernetes mounts an `emptyDir` with `medium: Memory` as a RAM-backed tmpfs. Everything
written to that volume is charged against the pod's cgroup memory limit, so if the
application writes more than expected, the pod is OOMKilled even though container RSS
looks normal.
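One mitigation is to cap the volume itself. A sketch, assuming a recent Kubernetes where memory-backed `emptyDir` volumes honor `sizeLimit` (the `SizeMemoryBackedVolumes` feature); the pod name, image, and sizes are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo          # placeholder name
spec:
  containers:
    - name: app
      image: example/app      # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /scratch
      resources:
        limits:
          memory: 2Gi         # tmpfs usage counts against this limit
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory        # RAM-backed tmpfs
        sizeLimit: 256Mi      # cap the tmpfs well below the memory limit
```

With the cap, a runaway writer gets ENOSPC at 256Mi instead of silently eating the 2Gi memory limit and taking the whole pod down.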
### In CI/CD
Docker build layer caches, temporary build directories, or test databases written to
`/tmp` during CI fill the tmpfs. The CI node appears to have free disk space but runs
out of RAM, causing OOM kills of unrelated services on the same host.
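A preflight check along these lines lets a CI job fail fast instead of OOMing mid-build. A sketch; the 80% threshold is an assumption, and it only matters when `/tmp` is tmpfs (it is harmless otherwise):

```shell
#!/bin/sh
# Abort early if the filesystem backing /tmp is nearly full.
use_pct=$(df --output=pcent /tmp | tail -1 | tr -dc '0-9')
if [ "${use_pct:-0}" -gt 80 ]; then
    echo "refusing to start: /tmp is ${use_pct}% full" >&2
    exit 1
fi
echo "/tmp at ${use_pct}%, proceeding"
```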
## The Tell

`free -h` shows a high "shared" column. `df -h /dev/shm` or `df -h /tmp` shows high utilization. No single process has high RSS, but the host is memory-constrained.
## Common Misdiagnosis

| Looks Like | But Actually | How to Tell the Difference |
|---|---|---|
| Memory leak in a process | tmpfs accumulation | Process RSS is low; `df -h /dev/shm` is high |
| Kernel memory leak | tmpfs cache | `slabtop` shows low slab usage; `df -h /run /tmp /dev/shm` shows the culprit |
| Disk full | RAM full via tmpfs | `df -h` shows free disk but `free -h` shows low available RAM |
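The three rows above reduce to a quick side-by-side triage with standard tools:

```shell
# tmpfs vs kernel slab vs disk, in one glance (values in kB):
grep -E '^(Shmem|Slab):' /proc/meminfo   # high Shmem + low Slab => tmpfs
df -h -t tmpfs                           # which tmpfs mount holds the data
df -h /                                  # the disk itself usually has space
```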
## The Fix (Generic)

- **Immediate:** Identify which files are in `/dev/shm` or `/tmp` and delete unnecessary ones: `ls -laSh /dev/shm` and `du -sh /tmp/*`.
- **Short-term:** Set tmpfs size limits in `/etc/fstab`: `tmpfs /dev/shm tmpfs defaults,size=2G 0 0`. Restart processes that use `/dev/shm`.
- **Long-term:** Monitor the `shared` column from `free` or the `node_memory_Shmem_bytes` Prometheus metric; alert at 50% of RAM used by tmpfs.
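The long-term step can be sketched as a Prometheus alerting rule, assuming `node_exporter` metrics are being scraped; the group and alert names are placeholders:

```yaml
groups:
  - name: tmpfs-memory              # placeholder group name
    rules:
      - alert: TmpfsConsumingRam    # placeholder alert name
        expr: node_memory_Shmem_bytes / node_memory_MemTotal_bytes > 0.5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "tmpfs/shm is using {{ $value | humanizePercentage }} of RAM on {{ $labels.instance }}"
```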
## Real-World Examples

- **Example 1:** PostgreSQL with `shared_buffers` allocated via `/dev/shm`. After a misconfiguration set `shared_buffers=12GB` on a 16GB server, `/dev/shm` consumed 12GB; other processes were OOM killed.
- **Example 2:** A machine learning pipeline wrote intermediate tensors to `/tmp` (a 16GB tmpfs). The pipeline processed large batches overnight; the host OOMed at 3am, killing the training job.
## War Story

The host had 32GB RAM and nothing in `top` showed more than 2GB RSS. Total process RSS added up to about 8GB, yet the system was OOMing. Someone finally ran `grep Shmem /proc/meminfo` and found 22GB in shared memory: a legacy real-time data feed was writing its ring buffer to `/dev/shm` and had grown far beyond its expected size after a config change. We capped `/dev/shm` to 8GB in `/etc/fstab` and the OOMs stopped.
## Cross-References
- Topic Packs: linux-memory-management, linux-ops
- Footguns: linux-memory-management/footguns.md — "tmpfs counted against memory"
- Related Patterns: FP-004 (OOM without swap — the kill mechanism), FP-003 (disk full reserved blocks — another invisible storage limit)