Inodes - Street-Level Ops¶
Real-world inode diagnosis and management workflows for production Linux systems.
Task: Diagnose "No Space Left" Despite Free Disk¶
# Application throwing "No space left on device"
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 32G 18G 64% /
# 18 GB free. Check inodes:
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 3276800 3276800 0 100% /
# 100% inode usage — zero inodes free. That is the problem.
Remember: When you see "No space left on device" but `df -h` shows free space, always check `df -i` next. The two resources are independent — you can exhaust inodes (millions of tiny files) while having plenty of disk space, or exhaust space (few huge files) while having plenty of inodes.
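Both checks can be wrapped into one call. A minimal sketch (the function name `space_and_inodes` is illustrative, not a standard tool):

```shell
# sketch: report space and inode usage for one mount point together
space_and_inodes() {
    df -hP "$1" | awk 'NR==2 {print "space:  " $5 " used (" $4 " free)"}'
    df -iP "$1" | awk 'NR==2 {print "inodes: " $5 " used (" $4 " free)"}'
}
space_and_inodes /
```

The `-P` flag forces POSIX single-line output so long device names cannot break the awk column positions.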
Task: Find Which Directory Has Millions of Files¶
# Identify the directory consuming all inodes
$ find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -10
2841023 /var/spool/postfix/deferred
94112 /var/spool/postfix/active
31440 /tmp/php-sessions
8201 /var/log/journal
1247 /usr/share/man/man3
892 /usr/lib/python3/dist-packages
# /var/spool/postfix/deferred has 2.8 million files — mail queue explosion
One-liner: `find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head` is the inode equivalent of `du -sh * | sort -rh | head` for disk space. Memorize it — you will use it at 3 AM.
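Once the scan from `/` has pointed at a rough area, it is much faster to recount only that subtree. A sketch (`count_per_dir` is a made-up helper; the demo builds a throwaway tree, in production you would point it at e.g. `/var/spool`):

```shell
# sketch: recount files per directory under one subtree only
count_per_dir() {
    find "$1" -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head -5
}

# throwaway demo tree: dir "a" holds 3 files, dir "b" holds 1
demo=$(mktemp -d)
mkdir -p "$demo/a" "$demo/b"
touch "$demo/a/f1" "$demo/a/f2" "$demo/a/f3" "$demo/b/f1"
count_per_dir "$demo"    # "a" tops the list with count 3
```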
Task: Clean Up Millions of Small Files¶
# Do NOT use rm -rf * — with millions of files the shell's expansion of *
# exceeds ARG_MAX and fails with "Argument list too long"
# Use find -delete instead (no shell expansion; files are unlinked during the traversal)
$ find /var/spool/postfix/deferred -type f -delete
# For a slower but safer approach (delete in batches):
$ find /var/spool/postfix/deferred -type f -print0 | xargs -0 -n 5000 rm -f
Debug clue: `find -delete` is faster than piping to `xargs rm` for millions of files because `-delete` unlinks each entry during the traversal itself — no extra processes and no second pass over the pathnames. On multi-million-file cleanups the difference can be dramatic. The `-xdev` flag prevents `find` from crossing filesystem boundaries — essential when searching from `/`.
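If you want visibility while a long delete runs, the batched approach can be wrapped with a progress counter. A sketch (`purge_with_progress` is a made-up helper; the batch size is a tunable, and `head -z` requires GNU coreutils):

```shell
# sketch: delete in fixed-size batches, printing a running total to stderr
purge_with_progress() {
    local dir="$1" batch="${2:-10000}" removed=0 n list
    list=$(mktemp)
    while :; do
        # grab up to $batch NUL-delimited paths per round
        find "$dir" -type f -print0 | head -z -n "$batch" > "$list"
        n=$(tr -cd '\0' < "$list" | wc -c)   # count paths by counting NULs
        [ "$n" -eq 0 ] && break
        xargs -0 rm -f -- < "$list"
        removed=$((removed + n))
        echo "removed so far: $removed" >&2
    done
    rm -f "$list"
}
```

The NUL-delimited plumbing keeps filenames with spaces or newlines safe, at the cost of re-walking the directory once per batch.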
# Delete files older than 7 days only
$ find /tmp/php-sessions -type f -mtime +7 -delete
# Verify inode recovery
$ df -i /var/spool
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdb1 1310720 15230 1295490 2% /var/spool
Task: Check a File's Inode Information¶
# Show inode number
$ ls -i /etc/hostname
1048589 /etc/hostname
# Full inode metadata
$ stat /etc/hostname
File: /etc/hostname
Size: 11 Blocks: 8 IO Block: 4096 regular file
Inode: 1048589 Links: 1
Access: (0644/-rw-r--r--) Uid: (0/root) Gid: (0/root)
Modify: 2024-11-15 10:00:02
Change: 2024-11-15 10:00:02
Task: Find and Fix a Deleted-But-Open File Consuming Inodes¶
# Disk shows full, but find cannot account for the space
# A process has a deleted file still open — space is not freed
# Find open file descriptors pointing to deleted files
$ lsof +L1 | grep deleted
python3 9876 app 12u REG 8,1 5368709120 0 1234567 /var/log/app.log (deleted)
# The file is deleted from the directory but the process holds it open
# 5 GB still allocated until the process releases the fd
# Option 1: Restart the process
$ systemctl restart myapp
# Option 2: Truncate the open fd without restarting
$ : > /proc/9876/fd/12
# This empties the file while keeping the fd valid
Under the hood: On Linux, `unlink()` removes the directory entry (the name) but does not free the inode or disk blocks until the last file descriptor referencing that inode is closed. This is why `lsof +L1` (link count < 1) finds these ghost files. The space is only truly freed when the process closes the fd or exits. This is one of the most common causes of "I deleted 10GB of logs but disk usage did not change."
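The behavior is easy to reproduce safely with a scratch file and an fd held open by the current shell:

```shell
# sketch: unlink a file while holding an fd on it, then inspect via /proc
f=$(mktemp)
exec 3> "$f"                        # keep fd 3 on the inode
echo "still here" >&3
rm "$f"                             # removes the name; the inode survives
link_target=$(readlink "/proc/$$/fd/3")
echo "$link_target"                 # path now ends in " (deleted)"
data=$(cat "/proc/$$/fd/3")         # data still readable through the fd
echo "$data"
exec 3>&-                           # close the fd; inode and blocks are freed
```

Opening `/proc/PID/fd/N` performs a fresh open of the underlying inode, which is also the standard trick for recovering a deleted-but-open file's contents before restarting the process.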
Task: Create a Filesystem with More Inodes¶
# Mail server needs millions of inodes
# Default ext4: 1 inode per 16 KB (bytes-per-inode ratio) = ~6.5M inodes on 100 GB
# Create with more inodes (1 per 4 KB)
$ mkfs.ext4 -i 4096 /dev/sdb1
# Or specify an exact inode count
$ mkfs.ext4 -N 20000000 /dev/sdb1
# Check inode count on an existing filesystem
$ tune2fs -l /dev/sda1 | grep -i inode
Inode count: 3276800
Free inodes: 3031909
Inodes per group: 8192
Inode size: 256
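The bytes-per-inode arithmetic above can be sanity-checked in the shell: predicted inode count is simply filesystem size divided by the `-i` ratio.

```shell
# back-of-envelope: inode count = filesystem size / bytes-per-inode ratio
size=$((100 * 1024 * 1024 * 1024))   # 100 GB filesystem
echo $((size / 16384))               # default -i 16384 -> 6553600 (~6.5M inodes)
echo $((size / 4096))                # -i 4096          -> 26214400 (~26M inodes)
```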
Task: Understand Hard Links via Inodes¶
# Create a file and a hard link
$ echo "data" > original.txt
$ ln original.txt hardlink.txt
# Both share the same inode
$ ls -li original.txt hardlink.txt
1048601 -rw-r--r-- 2 root root 5 Mar 15 10:00 hardlink.txt
1048601 -rw-r--r-- 2 root root 5 Mar 15 10:00 original.txt
# Link count is 2. Delete one — data persists:
$ rm original.txt
$ cat hardlink.txt
data
# Hard link does NOT consume an extra inode:
$ df -i . | tail -1
/dev/sda1 3276800 245891 3030909 8% /
$ ln hardlink.txt another.txt
$ df -i . | tail -1
/dev/sda1 3276800 245891 3030909 8% / # Same IUsed
# Symlink DOES consume an inode:
$ ln -s hardlink.txt symlink.txt
$ df -i . | tail -1
/dev/sda1 3276800 245892 3030908 8% / # IUsed +1
Under the hood: Hard links share the same inode (same data blocks, same metadata). Symlinks get their own inode but point to a pathname, not an inode. This is why hard links survive the target being renamed or moved (same inode), but symlinks break (pathname changed). Hard links cannot cross filesystem boundaries because inode numbers are only unique within a filesystem.
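The rename behavior described above can be demonstrated in a scratch directory:

```shell
# sketch: hard links survive a rename of the target; symlinks do not
work=$(mktemp -d)
cd "$work"
echo data > a.txt
ln a.txt hard.txt                    # hard link: same inode
ln -s a.txt soft.txt                 # symlink: points at the pathname "a.txt"
mv a.txt renamed.txt                 # rename the original
cat hard.txt                         # still works (inode unchanged)
cat soft.txt 2>/dev/null || echo "symlink dangling"
stat -c %i hard.txt renamed.txt      # identical inode numbers
```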
Task: Set Up Inode Monitoring¶
# Quick script for cron-based alerting
$ cat > /usr/local/bin/check-inodes.sh <<'SCRIPT'
#!/bin/bash
threshold=90
df -iP | awk -v t="$threshold" 'NR>1 {gsub(/%/,"",$5); if ($5+0 > t) print $6, $5"%"}' | \
while read mount pct; do
logger -p local0.warning "INODE ALERT: ${mount} at ${pct} inode usage"
done
SCRIPT
$ chmod +x /usr/local/bin/check-inodes.sh
# Add to cron (append: piping only the echo to `crontab -` would replace any existing crontab)
$ (crontab -l 2>/dev/null; echo "*/15 * * * * /usr/local/bin/check-inodes.sh") | crontab -
# For Prometheus node_exporter, inode metrics are exported by default:
# node_filesystem_files_free{mountpoint="/"}
# node_filesystem_files{mountpoint="/"}
Gotcha: You cannot add more inodes to an existing ext4 filesystem — the inode count is fixed at `mkfs` time. If you hit inode exhaustion on a production filesystem, your only options are: (1) delete files, (2) move data to a new filesystem created with more inodes, or (3) switch to XFS, which allocates inodes dynamically. Plan inode capacity at provisioning time for mail servers and session stores.
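Before planning remediation, confirm which filesystem type you are actually dealing with; the fixed-inode limitation applies to ext4, not XFS:

```shell
# sketch: check the filesystem type of the affected mount
stat -f -c %T /                                # prints the fs type name (ext4 reports "ext2/ext3")
findmnt -n -o FSTYPE / 2>/dev/null || true     # alternative, if util-linux's findmnt is present
```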
Task: Prevent Inode Exhaustion from Session Files¶
# PHP session files in /tmp accumulating
$ find /tmp -name 'sess_*' -type f | wc -l
483291
# Clean old sessions
$ find /tmp -name 'sess_*' -type f -mtime +1 -delete
# Set up systemd-tmpfiles to auto-clean (note: the 1d age applies to everything in /tmp)
$ cat > /etc/tmpfiles.d/php-sessions.conf <<'EOF'
# clean entries in /tmp not accessed for 1 day
d /tmp 1777 root root 1d
EOF
# Or use tmpreaper (Debian) / tmpwatch (RHEL)
$ tmpreaper 24h /tmp
Emergency: Server Cannot Create PID Files or Sockets¶
# Service fails to start: "Cannot create /var/run/myapp.pid: No space left"
$ df -i /var/run
Filesystem Inodes IUsed IFree IUse% Mounted on
tmpfs 505765 503 505262 1% /var/run
# Inodes are fine here — tmpfs. Check the root filesystem:
$ df -i /
/dev/sda1 3276800 3276800 0 100% /
# Root is out of inodes. Quick fix: find and delete the obvious culprit
$ find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -5
# Delete the offending files, then restart the service
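For future emergencies, the per-mount check is worth keeping as a one-shot triage helper. A sketch (`inode_hot` is a made-up name):

```shell
# sketch: list every mount at or above an inode-usage threshold (default 90%)
inode_hot() {
    df -iP | awk -v t="${1:-90}" 'NR>1 { u=$5; sub(/%/,"",u); if (u+0 >= t) print $6, $5 }'
}
inode_hot 90
```

Run it first thing when a service reports "No space left": it immediately separates an inode emergency from a plain disk-space one.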