
Inode Footguns

Mistakes that cause mysterious disk-full errors, data retention failures, or capacity planning oversights.


1. Only checking df -h and missing inode exhaustion

The application throws "No space left on device." You run df -h and see 40% free. You conclude it is not a disk issue and chase application bugs for hours. Meanwhile, df -i shows 100% inode usage.

Fix: When you see "No space left on device," always check both df -h (blocks) and df -i (inodes). Make this a reflex.
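One way to make the reflex cheap is a tiny helper that prints both numbers in one call (a sketch; `check_space` is a hypothetical name, not a standard tool):

```shell
# print block AND inode usage for a mountpoint; either one at 100% explains ENOSPC
check_space() {
  mnt=${1:-/}
  blocks=$(df -P "$mnt" | awk 'NR == 2 {print $5}')    # Use% column
  inodes=$(df -iP "$mnt" | awk 'NR == 2 {print $5}')   # IUse% column
  echo "$mnt blocks: $blocks inodes: $inodes"
}

check_space /
```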


2. Using rm -rf * on directories with millions of files

You run rm -rf /var/spool/postfix/deferred/*. The shell tries to expand the glob into 3 million arguments. You get "Argument list too long" and nothing is deleted. Meanwhile, the application continues failing.

Fix: Use find /path -type f -delete. It processes files one at a time without shell glob expansion. For extra speed, use find ... -print0 | xargs -0 rm -f.
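The difference is easy to see on a disposable stand-in directory (small here, but the mechanism is the same at 3 million files):

```shell
# stand-in for a spool directory too large for a shell glob
mkdir -p spool
for i in $(seq 1 1000); do touch "spool/msg$i"; done

# find -delete unlinks files one at a time; nothing ever passes through ARG_MAX
find spool -type f -delete

find spool -type f | wc -l    # prints 0: the queue is empty
```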


3. Not setting inode count at filesystem creation time

You format a 500 GB volume for a mail server using default ext4 settings. The default ratio gives ~32 million inodes. The mail server creates 50 million small queue files. You hit inode exhaustion with 300 GB free. ext4 cannot add inodes after creation — you must reformat.

Fix: At format time, pass mkfs.ext4 -i 4096 (one inode per 4 KiB of space) or -N <count> based on your workload. For mail servers and session stores, plan for many small files. Consider XFS, which allocates inodes dynamically.
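The arithmetic behind the bytes-per-inode ratio is worth internalizing; for the 500 GB volume above:

```shell
# how the bytes-per-inode ratio translates into inode counts on a 500 GiB volume
size=$((500 * 1024 * 1024 * 1024))   # 500 GiB in bytes
default=$((size / 16384))            # ext4 default: one inode per 16 KiB -> ~32.8M
dense=$((size / 4096))               # mkfs.ext4 -i 4096: one inode per 4 KiB -> ~131M
echo "default: $default  with -i 4096: $dense"
```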


4. Deleting a large file but not freeing the space

You delete a 10 GB log file with rm /var/log/huge.log. But a process still has the file open. df still shows the space as used. The inode is marked deleted but the data blocks are not freed until the last file descriptor is closed.

Fix: Check for deleted-but-still-open files: lsof +L1 | grep deleted. Then restart the holding process, or truncate the file through its descriptor: > /proc/<pid>/fd/<fd>.
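The whole lifecycle can be reproduced in one shell session (a sketch; fd 3 stands in for the process holding the log open):

```shell
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/huge.log" bs=1M count=10 status=none

exec 3< "$tmpdir/huge.log"    # fd 3 plays the role of the long-running process
rm "$tmpdir/huge.log"         # directory entry gone, but the inode is still live

# the deleted file still occupies its blocks, reachable through the open fd
stat -Lc %s /proc/$$/fd/3     # still 10485760 bytes

# truncating via /proc releases the blocks without restarting anything
: > /proc/$$/fd/3
stat -Lc %s /proc/$$/fd/3     # now 0

exec 3<&-                     # closing the last descriptor finally drops the inode
rmdir "$tmpdir"
```

The `: > /proc/<pid>/fd/<fd>` trick works because /proc fd entries are "magic" symlinks: opening one with truncation reopens the underlying (even deleted) inode.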


5. Creating a symlink for every processed file

You write a script that creates a symbolic link for every processed file. Each symlink consumes its own inode (unlike a hard link, which reuses the target's). Over months, millions of symlinks accumulate, exhausting inodes while consuming almost no disk space.

Fix: Use hard links when possible (same filesystem, no directories). Clean up old symlinks periodically. Monitor df -i alongside df -h.
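The inode cost difference is directly observable with stat:

```shell
f=$(mktemp)
ln "$f" "$f.hard"      # hard link: a second name for the SAME inode
ln -s "$f" "$f.sym"    # symlink: a brand-new inode that stores the target path

stat -c '%i' "$f" "$f.hard" "$f.sym"    # first two numbers match; the third differs

rm -f "$f" "$f.hard" "$f.sym"
```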


6. Ignoring inode usage in monitoring and capacity planning

You set up disk space alerts at 80% and 90%. You never add inode alerts. The filesystem hits 100% inodes at 60% disk usage. No alert fires. The application fails silently for hours.

Fix: Monitor inode usage with the same thresholds as disk space. Prometheus node_exporter exports node_filesystem_files_free. Alert when IUse% exceeds 80%.
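Until proper alerting is wired up, a cron-friendly one-liner covers the gap (a sketch; the 80 threshold mirrors the text, and the `$5 != "-"` guard skips filesystems that report no inode counts):

```shell
# flag any mounted filesystem whose inode usage exceeds the threshold
THRESHOLD=80
df -iP | awk -v t="$THRESHOLD" '
  NR > 1 && $5 != "-" {
    gsub(/%/, "", $5)
    if ($5 + 0 > t) print $6 " inode usage at " $5 "%"
  }'
```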


7. Container overlay layers consuming inodes silently

Docker and overlayfs create whiteout files and layer metadata for every container. A host running hundreds of containers over time accumulates millions of files in /var/lib/docker/overlay2. Inode usage creeps up without any single container being the obvious culprit.

Fix: Run docker system prune regularly. Monitor /var/lib/docker inode usage. Consider placing Docker storage on a separate filesystem with higher inode allocation.
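When the creep has already happened, GNU du can rank layers by inode count to find the heavy ones (requires coreutils du with --inodes support):

```shell
# rank overlay2 layer directories by how many inodes each consumes
du --inodes -x -d1 /var/lib/docker/overlay2 2>/dev/null | sort -rn | head
```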


8. Copying hard-link backups without preserving hard links

You use rsync --link-dest for incremental backups. Hard links share inodes, so identical files across snapshots do not consume extra inodes. But then you copy (not move) the backup tree to another filesystem with a tool that does not preserve hard links. The copy materializes a separate inode for every file in every snapshot, multiplying inode consumption.

Fix: When transferring hard-link-based backups, use rsync -aH (preserve hard links) or cp -al (copy as hard links). Understand that hard links only work within a single filesystem.
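A miniature version of the two behaviors, using a single snapshot directory as a stand-in:

```shell
mkdir -p snap1 && echo data > snap1/f

cp -al snap1 snap2    # hard-link copy: reuses the inode, costs almost nothing
cp -a  snap1 snap3    # regular copy: allocates a fresh inode for every file

stat -c '%i' snap1/f snap2/f snap3/f    # first two match; the third differs

rm -r snap1 snap2 snap3
```

Note cp -al only works within one filesystem; across filesystems, rsync -aH is the option that keeps hard-linked files hard-linked on the destination.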


9. Filesystem journal consuming inodes during recovery

After a crash, the ext4 journal replays uncommitted transactions. During replay, orphaned inodes may not be cleaned up immediately. df -i shows inodes in use that do not correspond to visible files. You cannot find what is consuming them.

Fix: Run fsck on the filesystem (unmounted or read-only). This cleans up orphaned inodes. Check the lost+found directory for recovered fragments.


10. Assuming XFS and btrfs have the same inode behavior as ext4

You move from ext4 to XFS and assume inode limits work the same way. XFS allocates inodes dynamically and does not have a fixed limit at format time. But it can still run out if the inode allocation area fills up (rare). Debugging tools differ — xfs_info instead of tune2fs.

Fix: Know your filesystem. ext4: fixed inodes, set at format. XFS: dynamic, use xfs_info and xfs_db. btrfs: dynamic, practically unlimited. Tune diagnosis commands to match.
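A small dispatcher makes "know your filesystem" actionable (a sketch; the messages echo the tooling named above, and GNU stat reports ext4 under the shared "ext2/ext3" statfs magic):

```shell
# suggest the right inode-diagnosis command for the filesystem under a path
fstype=$(stat -f -c %T "${1:-/}")
case "$fstype" in
  ext2/ext3|ext4) echo "fixed inode table: tune2fs -l <device> | grep -i inode" ;;
  xfs)            echo "dynamic inodes: xfs_info <mountpoint>" ;;
  btrfs)          echo "dynamic, practically unlimited: btrfs filesystem df <mountpoint>" ;;
  *)              echo "unknown fs '$fstype': start with df -i" ;;
esac
```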