
Inodes - Primer

Why This Matters

You SSH into a production server. The application is throwing "No space left on device" errors. You run df -h and see 40% disk free. Nothing makes sense — until you check inodes. This is one of the most common Linux interview questions and one of the most common real-world misdiagnoses. If you have never debugged inode exhaustion, you will eventually be blindsided by it in production.

Core Concepts

Name origin: "Inode" is short for "index node." The concept was created by Ken Thompson and Dennis Ritchie during the original Unix development at Bell Labs (1969-1971). In the original Unix filesystem, inodes were stored in a dedicated region called the "i-list" — a flat array where each inode's position (i-number) served as its unique identifier. The term appeared in Unix documentation by the mid-1970s and has been the standard filesystem metadata structure in every Unix-like OS since.

1. What Is an Inode?

An inode (index node) is a data structure on a Unix/Linux filesystem that stores metadata about a file or directory — everything except the filename and the actual data content.

Each inode contains:

- File type (regular file, directory, symlink, socket, etc.)
- Permissions (owner, group, other)
- Owner UID and GID
- File size in bytes
- Timestamps (atime, mtime, ctime)
- Number of hard links
- Pointers to the data blocks on disk

Every file and directory consumes exactly one inode. The filesystem has a fixed inode table created at format time. Once all inodes are allocated, no new files can be created — even if there is plenty of free disk space.
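The one-file-one-inode rule is easy to see in a throwaway directory (a minimal sketch; the temp directory and file names are arbitrary):

```shell
# Every file consumes exactly one inode, even a zero-byte one.
tmpdir=$(mktemp -d)
for i in $(seq 1 1000); do
    : > "$tmpdir/file$i"          # create an empty file
done
find "$tmpdir" -type f | wc -l    # 1000 files -> 1000 inodes used
du -sh "$tmpdir"                  # yet almost no data blocks consumed
rm -rf "$tmpdir"
```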

Check a file's inode number:

$ ls -i /etc/hostname
1048589 /etc/hostname

View the full inode metadata:

$ stat /etc/hostname
  File: /etc/hostname
  Size: 11            Blocks: 8          IO Block: 4096   regular file
Device: 802h/2050d    Inode: 1048589     Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-12-01 03:22:14.000000000 +0000
Modify: 2024-11-15 10:00:02.000000000 +0000
Change: 2024-11-15 10:00:02.000000000 +0000
 Birth: 2024-11-15 10:00:02.000000000 +0000

2. How to Check Inode Usage

The key diagnostic command is df -i, which shows inode usage per filesystem:

$ df -i
Filesystem       Inodes   IUsed    IFree IUse% Mounted on
/dev/sda1       6553600  245891  6307709    4% /
tmpfs            505765       1   505764    1% /dev/shm
/dev/sdb1       1310720 1310720        0  100% /var/spool

In this example, /var/spool is at 100% inode usage — no new files can be created there, regardless of free disk space.

Compare with regular disk usage:

$ df -h /var/spool
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       5.0G  1.2G  3.8G  24% /var/spool

There it is: 3.8 GB free but zero inodes available. This is the classic "disk full but not full" scenario.
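This check is easy to script for monitoring (a sketch; the 90% threshold is an arbitrary choice, and --output=ipcent requires GNU coreutils):

```shell
# Read the inode-usage percentage for a mount point and warn
# when it crosses a threshold. 90 is an assumed alert level.
iuse=$(df --output=ipcent / | tail -1 | tr -dc '0-9')
if [ "$iuse" -ge 90 ]; then
    echo "WARNING: inode usage at ${iuse}% on /"
fi
```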

3. Inode Exhaustion — Causes, Symptoms, and Diagnosis

Common causes:

- Mail queue explosion — millions of tiny queue files in /var/spool/postfix/
- Session file buildup — PHP/Python session files in /tmp or /var/lib/php/sessions/
- Failed log rotation — millions of small log fragments
- Package manager caches — leftover .deb or .rpm temp files
- Container overlay layers — thousands of whiteout files
- Monitoring agents writing per-metric files

Symptoms:

- "No space left on device" errors despite df -h showing free space
- Cannot create new files, directories, or sockets
- Services fail to start (cannot create PID files or sockets)
- Package installs fail
- Cron jobs stop producing output files

Quick diagnosis:

# Step 1: Check if it's an inode problem
df -i /

# Step 2: If IUse% is high, find which directory has the most files
find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -20

The find command prints the directory name for every file, then counts occurrences. The -xdev flag stays on the same filesystem (does not cross mount points). This is the single most useful command for diagnosing inode exhaustion.

Example output:

2841023 /var/spool/postfix/deferred
  94112 /var/spool/postfix/active
  31440 /tmp/sess
   8201 /var/log/journal
   1247 /usr/share/man/man3

Almost 3 million files in the deferred mail queue — that is your culprit.
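The same pipeline can be tried safely against a throwaway tree before running it on / (a sketch; the directory names are made up):

```shell
# Reproduce the per-directory count on a toy tree instead of /.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/spool" "$tmpdir/logs"
for i in $(seq 1 50); do : > "$tmpdir/spool/msg$i"; done
for i in $(seq 1 5);  do : > "$tmpdir/logs/app$i.log"; done
# Print each file's parent dir, count duplicates, sort descending:
find "$tmpdir" -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn
rm -rf "$tmpdir"
```

The directory with 50 files sorts to the top, exactly as /var/spool/postfix/deferred did above.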

Remember the inode exhaustion diagnostic sequence:

1. df -i — confirm IUse% is high
2. find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head — find the directory with millions of files
3. find /path -type f -delete — clean up (handles "argument list too long")

Mnemonic: "df -i, find, delete" — three commands to diagnose and fix the most common "disk full but not full" problem.

4. Fixing Inode Exhaustion

Once you have identified the offending directory, clean it up:

# Delete all files in a directory (handles millions of files without "argument list too long")
find /var/spool/postfix/deferred -type f -delete

# Delete session files older than 24 hours
find /tmp/sess -type f -mtime +1 -delete

# Delete empty files (often placeholders or failed writes)
find /var/spool -type f -empty -delete

Important: do not use rm -rf * on directories with millions of files. The shell glob expansion will fail with "Argument list too long." Use find ... -delete instead.
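A self-contained illustration of why find ... -delete scales (a sketch using a temp directory):

```shell
# find -delete removes files one at a time inside find itself,
# so it never builds a giant argument list the way `rm *` does.
tmpdir=$(mktemp -d)
for i in $(seq 1 500); do : > "$tmpdir/queue$i"; done
find "$tmpdir" -type f -delete
find "$tmpdir" -type f | wc -l    # 0 remaining; the directory itself survives
rmdir "$tmpdir"
```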

After cleanup, verify:

$ df -i /var/spool
Filesystem       Inodes  IUsed   IFree IUse% Mounted on
/dev/sdb1       1310720  15230 1295490    2% /var/spool

Prevent recurrence:

- Set up cron jobs to purge old session/tmp files
- Configure proper mail queue limits in Postfix (maximal_queue_lifetime)
- Use tmpwatch or systemd-tmpfiles to age out /tmp
- Monitor inode usage in your alerting system (df -i in Prometheus node_exporter)
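As one concrete prevention, systemd-tmpfiles can age out a session directory declaratively (a sketch; the file path, mode, and 24h age are assumptions to adapt):

```
# /etc/tmpfiles.d/php-sessions.conf  (hypothetical drop-in file)
# Type  Path                       Mode  UID   GID   Age
d       /var/lib/php/sessions      1733  root  root  24h
```

systemd-tmpfiles --clean (run daily by systemd-tmpfiles-clean.timer) then deletes session files older than 24 hours before they can pile up.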

War story: A mail relay server ran for months without issue until one morning all services failed to start. df -h showed 70% free disk. The culprit: 2.8 million tiny deferred mail queue files had exhausted all inodes on /var. The fix took 40 minutes of find /var/spool/postfix/deferred -type f -delete. The prevention: a Prometheus alert on node_filesystem_files_free that fires when free inodes drop below 10%. This is now a standard alert in most node-exporter dashboards.

Interview tip: "A server reports 'No space left on device' but df -h shows plenty of free space. What is wrong?" is a classic Linux interview question. The expected answer: check df -i for inode exhaustion. Follow up with the diagnostic sequence (find the directory with millions of files) and the fix (find -delete). Bonus points: explain that XFS allocates inodes dynamically while ext4 has a fixed inode table set at format time.

5. Filesystem Creation and Inode Allocation

The number of inodes is set when the filesystem is created and generally cannot be changed without reformatting (ext4) or is dynamic (XFS, btrfs).

For ext4, the default is one inode per 16 KB of disk space. On a 100 GB disk, that gives roughly 6.5 million inodes. You can override this at creation time:

# Explicitly set the number of inodes
mkfs.ext4 -N 20000000 /dev/sdb1

# Set bytes-per-inode ratio (lower = more inodes)
# One inode per 4096 bytes instead of default 16384
mkfs.ext4 -i 4096 /dev/sdb1

# Check inode settings on an existing filesystem
tune2fs -l /dev/sdb1 | grep -i inode

Example tune2fs output:

Inode count:              6553600
Free inodes:              6307709
Inodes per group:         8192
Inode size:               256
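The bytes-per-inode arithmetic behind these numbers can be checked directly (100 GiB at the 16 KiB default ratio):

```shell
# inode count = filesystem size / bytes-per-inode ratio
echo $(( 100 * 1024 * 1024 * 1024 / 16384 ))   # 6553600, matching the Inode count above
```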

When to increase inodes at format time:

- Mail servers (millions of tiny queue files)
- Build servers (many small object files)
- Session stores on disk
- Any workload that creates many files under 16 KB

XFS note: XFS allocates inodes dynamically, so inode exhaustion is rarer. It can still happen when inode allocation is constrained: with the older inode32 behavior (the default before the inode64 mount option became standard around kernel 3.7), all inodes had to live in the first 1 TB of the filesystem, which could fill up even with free space elsewhere.

Under the hood: On ext4, each inode is a 256-byte structure by default (configurable at filesystem creation). The classic ext2/ext3 inode holds 12 direct block pointers plus single-, double-, and triple-indirect pointers; ext4 normally replaces that scheme with an extent tree rooted in the same space, which is how it addresses files up to 16 TiB (with 4 KiB blocks). With the inline_data feature, ext4 can store very small files (roughly up to the ~60-byte pointer area, plus any spare extended-attribute space in the inode) directly inside the inode, avoiding a separate data block. This matters for directories containing many tiny files.

Gotcha: A process holding an open file descriptor keeps the file's data on disk even after it is "deleted" with rm. The inode's link count drops to 0, but the data blocks are not freed until the file descriptor is closed. This is a common source of "I deleted the huge log file but disk space did not free up." Fix: restart the process holding the fd, or use truncate -s 0 /proc/<pid>/fd/<fd> to zero the file while it is still open.
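The deleted-but-open behavior can be reproduced in a few lines (a sketch; relies on Linux's /proc):

```shell
# An unlinked file's blocks survive while a file descriptor holds it open.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/big.log" bs=1M count=10 status=none
exec 3< "$tmpdir/big.log"    # hold fd 3 open on the file
rm "$tmpdir/big.log"         # directory entry gone, link count now 0
readlink /proc/$$/fd/3       # path ends in "(deleted)" -- blocks still allocated
exec 3<&-                    # close the fd; only now does the kernel free the blocks
rm -rf "$tmpdir"
```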

6. Hard Links and Soft Links

A hard link is an additional directory entry that points to the same inode as an existing file. Both names reference the same data and metadata.

# Create a hard link
$ echo "important data" > original.txt
$ ln original.txt hardlink.txt

# Both files share the same inode number
$ ls -li original.txt hardlink.txt
1048601 -rw-r--r-- 2 root root 15 Mar 15 10:00 hardlink.txt
1048601 -rw-r--r-- 2 root root 15 Mar 15 10:00 original.txt

Notice:

- Same inode number (1048601)
- Link count is 2 (the "2" after permissions)
- Same size, same permissions, same timestamps — they are the same file

Key behaviors:

- Deleting one name does not delete the data — it decrements the link count
- The data is freed only when the link count reaches 0 (and no process has the file open)
- Editing through either name changes the same underlying data
- Hard links cannot cross filesystem boundaries (different filesystems have separate inode tables)
- Hard links cannot point to directories (to prevent filesystem loops)

# Delete the original — hardlink still works
$ rm original.txt
$ cat hardlink.txt
important data

# Link count dropped to 1
$ stat hardlink.txt | grep Links
Device: 802h/2050d    Inode: 1048601     Links: 1

Ops use case: Hard links are used by tools like rsync --link-dest to create space-efficient incremental backups. Identical files across snapshots share inodes instead of duplicating data.
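The snapshot trick can be sketched with cp -al, which copies a tree as hard links (GNU cp; the paths are throwaway examples):

```shell
# A hard-linked "snapshot": unchanged files share inodes across copies.
tmpdir=$(mktemp -d)
mkdir "$tmpdir/snap1"
echo "log line" > "$tmpdir/snap1/app.log"
cp -al "$tmpdir/snap1" "$tmpdir/snap2"   # link instead of copying data
stat -c %i "$tmpdir/snap1/app.log"       # same inode number...
stat -c %i "$tmpdir/snap2/app.log"       # ...in both snapshots
rm -rf "$tmpdir"
```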

A symbolic (soft) link is a separate file with its own inode. Its data content is the path to the target file.

# Create a soft link
$ ln -s /etc/hostname mylink

# Different inode numbers — it's a separate file
$ ls -li /etc/hostname mylink
1048589 -rw-r--r-- 1 root root 11 Nov 15 10:00 /etc/hostname
1048610 lrwxrwxrwx 1 root root 14 Mar 15 10:00 mylink -> /etc/hostname

| Property | Hard Link | Soft Link |
|---|---|---|
| Own inode | No (shares with target) | Yes (separate inode) |
| Consumes an inode | No | Yes |
| Survives target deletion | Yes (data persists) | No (becomes dangling) |
| Cross-filesystem | No | Yes |
| Can point to directories | No | Yes |
| File type in ls -l | Same as target | l (link) |
| Size shown | Target data size | Length of target path string |

Interview gotcha: creating millions of symbolic links can contribute to inode exhaustion because each symlink consumes its own inode. Hard links do not consume additional inodes — they reuse the target's inode.

# Hard link does NOT increase inode usage
$ df -i . | tail -1
/dev/sda1  6553600  245891  6307709    4% /
$ ln existingfile hardlink
$ df -i . | tail -1
/dev/sda1  6553600  245891  6307709    4% /

# Soft link DOES increase inode usage
$ ln -s existingfile softlink
$ df -i . | tail -1
/dev/sda1  6553600  245892  6307708    4% /

Quick Reference

| Task | Command |
|---|---|
| Check inode usage | df -i |
| Show inode number of a file | ls -i filename or stat filename |
| Show full inode metadata | stat filename |
| Find directories with most files | find / -xdev -printf '%h\n' \| sort \| uniq -c \| sort -rn \| head |
| Delete millions of files safely | find /path -type f -delete |
| Check filesystem inode settings | tune2fs -l /dev/sdX \| grep -i inode |
| Create filesystem with more inodes | mkfs.ext4 -N <count> /dev/sdX |
| Create a hard link | ln target linkname |
| Create a soft link | ln -s target linkname |
