mergerfs - Street-Level Ops

Real-world patterns and debugging techniques for mergerfs in production.

Quick Diagnosis Commands

# 1. Is mergerfs mounted?
mount | grep mergerfs

# 2. What branches are in the pool?
getfattr -n user.mergerfs.srcmounts /storage/.mergerfs

# 3. Free space per branch
df -h /mnt/disk*

# 4. Which branch does a specific file live on?
getfattr -n user.mergerfs.fullpath /storage/path/to/file

# 5. Current policy settings
getfattr -d /storage/.mergerfs 2>/dev/null | grep -E 'category|func\.'

Common Scenarios

Scenario 1: Media Server (Plex/Jellyfin) fstab

The standard Perfect Media Server pattern: multiple data drives pooled, one or more parity drives kept separate for SnapRAID.

# /etc/fstab -- data drives (NOT parity)
/dev/disk/by-id/ata-WDC_WD140-SERIAL1-part1  /mnt/disk1  ext4  defaults  0  2
/dev/disk/by-id/ata-WDC_WD140-SERIAL2-part1  /mnt/disk2  ext4  defaults  0  2
/dev/disk/by-id/ata-WDC_WD140-SERIAL3-part1  /mnt/disk3  ext4  defaults  0  2
/dev/disk/by-id/ata-WDC_WD180-SERIAL4-part1  /mnt/disk4  ext4  defaults  0  2

# mergerfs pool -- globs all /mnt/disk* paths
/mnt/disk*  /storage  fuse.mergerfs  defaults,allow_other,use_ino,cache.files=off,moveonenospc=true,dropcacheonclose=true,minfreespace=250G,category.create=epmfs,fsname=mergerfs  0  0

# parity drive -- NEVER in the mergerfs pool
/dev/disk/by-id/ata-WDC_WD180-PARITY1-part1  /mnt/parity1  ext4  defaults  0  2

Why these options:

- allow_other: lets non-root users (and Docker containers) access the mount
- use_ino: proper inode numbers for NFS/Samba compatibility
- cache.files=off: prevents stale metadata with Plex scanning
- dropcacheonclose=true: prevents media files from polluting the page cache
- moveonenospc=true: automatically relocates a file if a branch fills during a write
- minfreespace=250G: reserves space so SnapRAID sync has room for temp files
- category.create=epmfs: new episodes go on the same drive as the rest of the series (path preservation)
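A quick sanity check is to confirm the pool actually mounted with the options you expect. A minimal sketch (the option list checked here mirrors the fstab example above; on a live system you would feed it the real option string via findmnt):

```shell
# Sketch: verify a mergerfs option string contains the settings discussed
# above. On a live system: check_opts "$(findmnt -no OPTIONS /storage)"
check_opts() {
    local opts="$1" want
    for want in allow_other cache.files=off moveonenospc=true minfreespace category.create; do
        case ",$opts," in
            *",$want"*) echo "ok: $want" ;;
            *)          echo "MISSING: $want" ;;
        esac
    done
}

# Checked against the fstab example above:
check_opts "defaults,allow_other,use_ino,cache.files=off,moveonenospc=true,dropcacheonclose=true,minfreespace=250G,category.create=epmfs,fsname=mergerfs"
```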

Scenario 2: Download Staging Setup

For Sonarr/Radarr/qBittorrent where downloads land first, then get hardlinked or moved to the library.

# Fast SSD for active downloads
/dev/nvme0n1p1  /mnt/ssd  ext4  defaults,noatime  0  2

# Separate pool for downloads with SSD prioritized
/mnt/ssd:/mnt/disk*  /storage  fuse.mergerfs  defaults,allow_other,use_ino,cache.files=off,moveonenospc=true,category.create=ff,minfreespace=50G,fsname=mergerfs  0  0

With category.create=ff and the SSD listed first, new downloads always land on the SSD until it drops below minfreespace, after which ff falls through to the spinning disks. Note that moveonenospc only rescues a write that hits an out-of-space error; it does not drain the SSD. To keep the staging drive clear, pair this layout with a scheduled mover script, or point the library at a separate pool that excludes the SSD.
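A minimal sketch of such a mover (the paths and the 60-minute age cutoff are illustrative; critically, the destination must be a pool or branch that does NOT include the SSD, otherwise a first-found policy puts the files straight back on it):

```shell
#!/bin/bash
# Sketch of a staging-drain mover. All paths and thresholds are illustrative.
drain_staging() {
    local src="$1" dst="$2" age_min="$3"
    # Only move files untouched for $age_min minutes (skips active downloads)
    find "$src" -type f -mmin +"$age_min" -print0 |
    while IFS= read -r -d '' f; do
        rel="${f#"$src"/}"
        mkdir -p "$dst/$(dirname "$rel")"
        mv "$f" "$dst/$rel"
    done
}

# Example cron usage (hypothetical paths; destination excludes the SSD):
# drain_staging /mnt/ssd/downloads /mnt/disk1/downloads 60
```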

Scenario 3: General NAS (Even Fill)

/mnt/disk*  /storage  fuse.mergerfs  defaults,allow_other,use_ino,cache.files=off,moveonenospc=true,minfreespace=50G,category.create=pfrd,fsname=mergerfs  0  0

category.create=pfrd (percentage free random distribution) picks a branch at random, weighted by its percentage of free space. That keeps drives balanced by percentage rather than absolute bytes, which is ideal when you mix drives of different sizes (e.g., 8 TB and 14 TB).
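A toy calculation (made-up sizes, not mergerfs internals) shows why percentage matters with mixed drives: the larger disk below has more absolute free space but is proportionally fuller, so a percentage-based policy favors the smaller one.

```shell
# Made-up numbers: "name total_GB free_GB" per branch
printf '%s\n' \
    "disk1 8000 2000" \
    "disk2 14000 3000" |
awk '{ printf "%s: %d GB free (%.0f%% free)\n", $1, $3, 100*$3/$2 }'
# disk1: 2000 GB free (25% free)
# disk2: 3000 GB free (21% free)
```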

SnapRAID Integration

Which Drives Go Where

mergerfs pool:  /mnt/disk1, /mnt/disk2, /mnt/disk3, /mnt/disk4  --> /storage
SnapRAID data:  /mnt/disk1, /mnt/disk2, /mnt/disk3, /mnt/disk4
SnapRAID parity: /mnt/parity1 (NOT in mergerfs pool)

The data drives appear in both mergerfs and snapraid.conf. The parity drives appear only in snapraid.conf and are NEVER added to the mergerfs pool.

snapraid.conf for this layout

parity /mnt/parity1/snapraid.parity

content /var/snapraid/content
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
data d4 /mnt/disk4/

exclude /lost+found/
exclude *.unrecoverable
exclude /tmp/
exclude /.Trash-*/
exclude /.snapraid.*

Daily Sync Workflow

#!/bin/bash
# /usr/local/bin/snapraid-sync.sh -- run via cron daily

# Check for differences; exit codes: 0 = no changes, 2 = sync needed, 1 = error
DIFF_OUT=$(snapraid diff)
DIFF_EXIT=$?
if [ $DIFF_EXIT -eq 1 ]; then
    echo "ERROR: snapraid diff failed" | mail -s "SnapRAID Alert" admin@example.com
    exit 1
fi

# Safety: abort if too many deletions since the last sync. Exit code 2 only
# means "sync needed" -- the deletion count must be parsed from the output.
DEL_THRESHOLD=50
DELETED=$(printf '%s\n' "$DIFF_OUT" | grep -c '^remove ')
if [ "$DELETED" -gt "$DEL_THRESHOLD" ]; then
    echo "ERROR: $DELETED deletions detected, skipping sync" | mail -s "SnapRAID Alert" admin@example.com
    exit 1
fi

# Sync parity (only when diff reported changes)
[ $DIFF_EXIT -eq 2 ] && snapraid sync

# Weekly scrub (run on Sundays): 5% of the array, blocks older than 30 days
if [ "$(date +%u)" -eq 7 ]; then
    snapraid scrub -p 5 -o 30
fi

Adding and Removing Drives at Runtime

Adding a new drive

# 1. Partition and format the new drive
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1

# 2. Create mount point and add to fstab
sudo mkdir /mnt/disk5
echo '/dev/disk/by-id/ata-WDC_WD180-SERIAL5-part1  /mnt/disk5  ext4  defaults  0  2' | sudo tee -a /etc/fstab
sudo mount /mnt/disk5

# 3. Add to mergerfs pool at runtime (no restart needed)
sudo setfattr -n user.mergerfs.srcmounts -v '+>/mnt/disk5' /storage/.mergerfs

# 4. Update fstab mergerfs line if using explicit paths (not needed if using /mnt/disk*)

# 5. Add to snapraid.conf
echo 'data d5 /mnt/disk5/' >> /etc/snapraid.conf

# 6. Run snapraid sync to include new drive in parity
sudo snapraid sync

Draining a drive before removal

# 1. Set the drive to NC (no-create) mode -- stops new files from landing there
sudo setfattr -n user.mergerfs.srcmounts -v '-/mnt/disk3' /storage/.mergerfs
sudo setfattr -n user.mergerfs.srcmounts -v '+>/mnt/disk3=NC' /storage/.mergerfs

# 2. Move files off the drive (rsync within the pool is simplest)
# For each top-level directory on the drive:
rsync -avh --remove-source-files /mnt/disk3/media/ /storage/media/

# 3. Once empty, remove from pool
sudo setfattr -n user.mergerfs.srcmounts -v '-/mnt/disk3' /storage/.mergerfs

# 4. Unmount and remove from fstab
sudo umount /mnt/disk3

# 5. Update snapraid.conf (remove the data line)
# 6. Run snapraid sync
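Before unmounting, it's worth confirming the branch really is empty: rsync --remove-source-files leaves directories behind, and hidden files are easy to miss. A small helper (the /mnt/disk3 path is from the example above):

```shell
#!/bin/bash
# Returns success only if no regular files remain under the given branch.
# (Empty directories left behind by rsync --remove-source-files are fine.)
branch_empty() {
    [ -z "$(find "$1" -type f -print -quit 2>/dev/null)" ]
}

# Usage on the drained branch:
# branch_empty /mnt/disk3 && sudo umount /mnt/disk3
```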

Performance Tuning

Cache Settings for Different Workloads

Media streaming (Plex/Jellyfin):

cache.files=off,dropcacheonclose=true,cache.statfs=10
Files are large and read sequentially. Page cache pollution hurts more than it helps. statfs caching reduces overhead from policy calculations.

Database-like workloads (small random I/O):

cache.files=auto-full,cache.entry=5,cache.attr=5,cache.writeback=true
Caching helps with repeated access to the same files. Writeback aggregates small writes.

Build systems / compilation:

cache.files=partial,cache.entry=1,cache.attr=1
Balance between caching for rapid re-reads and freshness for new output files.

Thread Tuning

For high-concurrency workloads (10+ simultaneous streams or many Docker containers):

read-thread-count=0,process-thread-count=0

This creates separate read and process thread pools, each sized to your CPU core count (max 8). The separation allows read threads to keep accepting FUSE messages while process threads handle them.

fuse_msg_size

Default is 256 pages (1 MiB with 4 KiB pages), which is also the maximum on most kernels, so there is no reason to run lower. If a legacy config has it set lower, raise it:

fuse_msg_size=256

Monitoring

Disk Usage Per Branch

#!/bin/bash
# /usr/local/bin/mergerfs-status.sh
echo "=== mergerfs Branch Status ==="
for disk in /mnt/disk*; do
    # one df call per branch instead of three
    df -h "$disk" | tail -1 | awk -v d="$disk" \
        '{printf "%s: %s used, %s free (%s)\n", d, $3, $4, $5}'
done

echo ""
echo "=== Pool Total ==="
df -h /storage | tail -1

Alerting on Nearly-Full Drives

#!/bin/bash
# Cron: */15 * * * * /usr/local/bin/mergerfs-space-check.sh
THRESHOLD=90
for disk in /mnt/disk*; do
    usage=$(df "$disk" | tail -1 | awk '{print $5}' | tr -d '%')
    if [ "$usage" -gt "$THRESHOLD" ]; then
        echo "WARNING: $disk is ${usage}% full" | \
            mail -s "Disk Space Alert: $disk" admin@example.com
    fi
done

Docker Compose Patterns

Media Server Stack

services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /storage/media:/data/media        # mergerfs mount
      - /opt/plex/config:/config           # config on SSD, NOT on mergerfs
    network_mode: host

  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /storage:/data                     # entire mergerfs pool
      - /opt/sonarr/config:/config

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /storage:/data
      - /opt/radarr/config:/config

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /storage/downloads:/data/downloads  # mergerfs mount
      - /opt/qbit/config:/config

Key pattern: application configs go on a fast local SSD, media data references the mergerfs mount. This way Plex/Sonarr databases are fast while bulk storage is pooled.

UID/GID consistency: set PUID=1000 and PGID=1000 (or whatever your user is) on every container. Mismatched ownership across mergerfs branches causes permission denials.
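A quick audit for ownership drift across branches can catch this before a container hits it. A sketch (UID 1000 matches the PUID above; the /mnt/disk* layout is from the earlier examples):

```shell
#!/bin/bash
# List regular files under a path that are NOT owned by the expected UID.
# Prints nothing when ownership is consistent.
check_ownership() {
    find "$1" -type f ! -uid "$2" -print
}

# Audit every branch directly (hypothetical layout):
# for disk in /mnt/disk*; do check_ownership "$disk" 1000; done
```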

Backup Strategy

Targeting mergerfs Mount vs Individual Branches

Option A: Back up the pool mount (simpler)

restic backup /storage/important/
Applications see the unified view. Restores go back to the pool and land wherever the create policy places them.

Option B: Back up individual branches (more control)

for disk in /mnt/disk*; do
    restic backup "$disk/important/"
done
Preserves the exact drive layout. Useful when you need to restore to a specific drive after replacement.

Recommendation: back up via the pool mount for simplicity. Only back up individual branches if you have a specific reason (e.g., SnapRAID fix requires files on specific drives).

Drive Replacement Workflow

When a drive fails in a mergerfs + SnapRAID setup:

# 1. Identify the failed drive
smartctl -a /dev/sdX  # check SMART data

# 2. Remove from mergerfs pool (if still mounted)
setfattr -n user.mergerfs.srcmounts -v '-/mnt/diskN' /storage/.mergerfs

# 3. Unmount the failed drive
umount /mnt/diskN

# 4. Physically replace the drive

# 5. Partition and format the replacement
mkfs.ext4 -m 0 -T largefile4 /dev/sdX1

# 6. Update fstab with new disk-by-id (serial number changed)
# Find new ID: ls -la /dev/disk/by-id/ | grep sdX

# 7. Mount replacement
mount /mnt/diskN

# 8. Restore data from SnapRAID parity
snapraid fix -d dN

# 9. Re-add to mergerfs pool
setfattr -n user.mergerfs.srcmounts -v '+>/mnt/diskN' /storage/.mergerfs

# 10. Verify
snapraid check -d dN
ls /storage/  # verify files are visible through the pool

Critical: do NOT run snapraid sync before snapraid fix. Syncing after a drive loss recalculates parity without the missing data, destroying your ability to recover.