Disk & Storage Ops Primer

Why This Matters

Every service runs on storage. When a disk fills, a filesystem corrupts, or an I/O bottleneck starves your database, the impact is immediate and often catastrophic. Understanding block devices, partitioning, filesystems, LVM, RAID, and storage monitoring is foundational for any engineer who operates Linux systems. This is not optional knowledge — it is the floor.

Block Devices and Device Naming

Linux exposes storage hardware as block devices under /dev/. The naming convention tells you what kind of hardware you are talking to.

Naming Conventions

| Pattern | Hardware | Example |
|---------|----------|---------|
| /dev/sd[a-z] | SCSI/SATA/SAS disks (also USB) | /dev/sda, /dev/sdb |
| /dev/nvme[0-9]n[1-9] | NVMe SSDs | /dev/nvme0n1, /dev/nvme1n1 |
| /dev/vd[a-z] | Virtio disks (KVM/QEMU VMs) | /dev/vda, /dev/vdb |
| /dev/xvd[a-z] | Xen virtual disks (older AWS) | /dev/xvda |
| /dev/md[0-9] | Linux software RAID arrays | /dev/md0, /dev/md127 |
| /dev/dm-[0-9] | Device-mapper (LVM, LUKS, multipath) | /dev/dm-0 |

Partitions append a number: /dev/sda1, /dev/sda2. NVMe partitions use p: /dev/nvme0n1p1, /dev/nvme0n1p2.
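
These conventions are regular enough to script against, for example in inventory tooling. A minimal sketch (the `disk_kind` helper is our own, not a standard tool):

```shell
# Classify a block device by the naming conventions in the table above.
disk_kind() {
    case "$(basename "$1")" in
        nvme[0-9]*) echo "NVMe SSD" ;;
        sd[a-z]*)   echo "SCSI/SATA/SAS disk" ;;
        vd[a-z]*)   echo "virtio disk" ;;
        xvd[a-z]*)  echo "Xen virtual disk" ;;
        md[0-9]*)   echo "software RAID array" ;;
        dm-[0-9]*)  echo "device-mapper device" ;;
        *)          echo "unknown" ;;
    esac
}

disk_kind /dev/nvme0n1p1   # NVMe SSD
disk_kind /dev/sda2        # SCSI/SATA/SAS disk
```

Note that partition names (sda2, nvme0n1p1) classify the same as their parent disk, which is usually what you want.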

Inspecting Block Devices

# List all block devices with hierarchy, size, type, and mountpoints
lsblk

# Output:
# NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
# sda           8:0    0   500G  0 disk
# ├─sda1        8:1    0     1G  0 part /boot
# ├─sda2        8:2    0   499G  0 part
# │ ├─vg0-root 253:0    0    50G  0 lvm  /
# │ └─vg0-data 253:1    0   449G  0 lvm  /data
# nvme0n1     259:0    0   1.8T  0 disk
# └─nvme0n1p1 259:1    0   1.8T  0 part /fast-storage

# Detailed info for a specific device
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT,UUID /dev/sda

# Show all block devices including empty ones
lsblk -a

# Get device info from udev
udevadm info --query=all --name=/dev/sda

# List SCSI devices
lsscsi

The /dev/disk/ directory contains symlinks organized by ID, UUID, label, and path — useful for stable references that survive reboot:

ls -la /dev/disk/by-uuid/
ls -la /dev/disk/by-id/
ls -la /dev/disk/by-path/
ls -la /dev/disk/by-label/

Partition Tables: MBR vs GPT

A partition table defines how a disk is divided into regions. Two standards exist.

MBR (Master Boot Record)

  • Legacy standard, dating to 1983
  • Maximum disk size: 2 TiB
  • Maximum 4 primary partitions (or 3 primary + 1 extended containing logical partitions)
  • Partition table stored in first 512 bytes of disk
  • No redundancy — if the MBR is corrupted, the partition table is lost
  • Still common on older systems and small disks
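
The 2 TiB ceiling follows directly from the on-disk format: MBR stores sector addresses in 32-bit fields, and with 512-byte sectors the arithmetic caps out at:

```shell
# 32-bit LBA fields x 512-byte sectors = maximum addressable MBR disk
max_bytes=$(( (1 << 32) * 512 ))
echo "$max_bytes bytes"    # 2199023255552 bytes
echo "$(( max_bytes / (1024 * 1024 * 1024 * 1024) )) TiB"    # 2 TiB
```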

GPT (GUID Partition Table)

  • Modern standard, part of UEFI specification
  • Maximum disk size: 9.4 ZiB (effectively unlimited)
  • Up to 128 partitions by default (no need for extended/logical)
  • Stores a backup partition table at the end of the disk
  • CRC32 checksums for integrity
  • Required for disks larger than 2 TiB
  • Required for UEFI boot

# Check what partition table a disk uses
sudo fdisk -l /dev/sda | head -5
# Disk /dev/sda: 500 GiB, 536870912000 bytes, 1048576000 sectors
# Disk model: VBOX HARDDISK
# Disklabel type: gpt         <--- GPT

# Or use parted
sudo parted /dev/sda print | grep "Partition Table"
# Partition Table: gpt

Rule of thumb: Use GPT for everything new. Use MBR only when you must support legacy BIOS boot on old hardware.

Partitioning Tools

fdisk (Interactive, MBR and GPT)

The classic partitioning tool. Modern versions handle GPT fine.

# List partitions on all disks
sudo fdisk -l

# List partitions on a specific disk
sudo fdisk -l /dev/sdb

# Start interactive partitioning
sudo fdisk /dev/sdb
# Common commands inside fdisk:
#   g — create new GPT partition table
#   o — create new MBR partition table
#   n — new partition
#   d — delete partition
#   p — print partition table
#   t — change partition type
#   w — write changes and exit
#   q — quit without saving

parted (Scriptable, GPT-Native)

More powerful than fdisk, supports live resizing, and handles large disks natively.

# Create GPT label on a new disk
sudo parted /dev/sdb mklabel gpt

# Create a partition using all available space
sudo parted /dev/sdb mkpart primary ext4 0% 100%

# Create specific-sized partitions
sudo parted /dev/sdb mkpart primary ext4 1MiB 100GiB
sudo parted /dev/sdb mkpart primary xfs 100GiB 100%

# Print partition layout
sudo parted /dev/sdb print

# Resize a partition (parted can do this; fdisk cannot)
sudo parted /dev/sdb resizepart 1 200GiB

gdisk (GPT-Specific)

Like fdisk but GPT-only. Useful for GPT-specific operations like converting MBR to GPT.

# Interactive GPT partitioning
sudo gdisk /dev/sdb

# Convert MBR to GPT (non-destructive if partitions fit)
sudo gdisk /dev/sdb
# Then: w to write GPT

Filesystems

Filesystem Comparison

| Filesystem | Max File Size | Max Volume Size | Shrink | Online Grow | Key Feature |
|------------|---------------|-----------------|--------|-------------|-------------|
| ext4 | 16 TiB | 1 EiB | Yes (offline) | Yes | Mature, widely supported |
| XFS | 8 EiB | 8 EiB | No | Yes | High performance, default on RHEL |
| Btrfs | 16 EiB | 16 EiB | Yes (online) | Yes | Snapshots, checksums, CoW |
| ZFS | 16 EiB | 256 ZiB | No | Yes | Enterprise, checksums, compression |

Creating Filesystems with mkfs

# ext4 — general purpose, safe default
sudo mkfs.ext4 /dev/sdb1

# ext4 with label and specific block size
sudo mkfs.ext4 -L datastore -b 4096 /dev/sdb1

# XFS — high performance, default on RHEL/Rocky
sudo mkfs.xfs /dev/sdb1

# XFS with label
sudo mkfs.xfs -L fastdata /dev/sdb1

# Btrfs
sudo mkfs.btrfs /dev/sdb1

# Btrfs spanning multiple devices (built-in RAID)
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb1 /dev/sdc1

# Check filesystem type of an existing partition
blkid /dev/sdb1
# /dev/sdb1: UUID="a1b2c3d4..." TYPE="ext4" PARTUUID="..."

ext4 Details

The workhorse filesystem. Journaled, mature, well-understood.

# Tune ext4 parameters
sudo tune2fs -l /dev/sdb1          # list superblock info
sudo tune2fs -m 1 /dev/sdb1        # reduce reserved blocks to 1% (default 5%)
sudo tune2fs -L mydata /dev/sdb1   # set label
sudo tune2fs -c 30 /dev/sdb1       # force fsck after 30 mounts

# Check and repair
sudo e2fsck -f /dev/sdb1           # force check (must be unmounted)

XFS Details

High-performance filesystem, cannot be shrunk (only grown).

# Check and repair (requires unmount)
sudo xfs_repair /dev/sdb1

# Get filesystem info
sudo xfs_info /mount/point

# Grow XFS filesystem online (after extending partition or LV)
sudo xfs_growfs /mount/point

# Defragment
sudo xfs_fsr /mount/point

Btrfs Basics

Copy-on-write filesystem with built-in snapshots, compression, and checksumming.

# Create a subvolume
sudo btrfs subvolume create /mnt/data/@home

# Create a snapshot
sudo btrfs subvolume snapshot /mnt/data/@home /mnt/data/@home-snap

# Enable compression
sudo mount -o compress=zstd /dev/sdb1 /mnt/data

# Show filesystem usage
sudo btrfs filesystem usage /mnt/data

# Scrub (verify checksums, detect corruption)
sudo btrfs scrub start /mnt/data
sudo btrfs scrub status /mnt/data

ZFS Basics

Enterprise-grade filesystem with pooled storage, checksumming, snapshots, and built-in compression. Not in the mainline kernel (license incompatibility) — installed via OpenZFS.

# Create a storage pool
sudo zpool create datapool /dev/sdb /dev/sdc

# Create a mirrored pool (RAID1 equivalent)
sudo zpool create datapool mirror /dev/sdb /dev/sdc

# Create a RAID-Z pool (RAID5 equivalent)
sudo zpool create datapool raidz /dev/sdb /dev/sdc /dev/sdd

# Create a dataset (like a subvolume)
sudo zfs create datapool/databases

# Enable compression
sudo zfs set compression=lz4 datapool/databases

# Snapshot
sudo zfs snapshot datapool/databases@before-migration

# Rollback
sudo zfs rollback datapool/databases@before-migration

# Check pool health
sudo zpool status

Mounting Filesystems

mount and umount

# Mount a partition
sudo mount /dev/sdb1 /mnt/data

# Mount with specific options
sudo mount -o noatime,nodev /dev/sdb1 /mnt/data

# Mount by UUID (survives device renaming)
sudo mount UUID="a1b2c3d4-e5f6-7890-abcd-ef1234567890" /mnt/data

# Mount by label
sudo mount LABEL=datastore /mnt/data

# List all mounts
mount | column -t
findmnt --tree

# Unmount
sudo umount /mnt/data

# Force unmount (when device is busy)
sudo umount -l /mnt/data    # lazy unmount — detaches, completes when idle
sudo umount -f /mnt/nfs     # force — mainly for stale NFS mounts

# Find what is using a mountpoint
sudo lsof +D /mnt/data
sudo fuser -mv /mnt/data

/etc/fstab Format and Options

The /etc/fstab file defines persistent mounts that are applied at boot.

# <device>                                   <mountpoint>  <fstype>  <options>       <dump>  <pass>
UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890   /data         ext4      defaults        0       2
UUID=f9e8d7c6-b5a4-3210-fedc-ba0987654321   /fast         xfs       noatime,nofail  0       2
/dev/vg0/lv_logs                             /var/log      ext4      defaults        0       2
192.168.1.10:/exports/shared                 /mnt/nfs      nfs       _netdev,nofail  0       0

Key mount options:

| Option | Effect |
|--------|--------|
| defaults | rw, suid, dev, exec, auto, nouser, async |
| noatime | Do not update access time on reads (performance boost) |
| nodiratime | Do not update directory access time |
| nofail | Do not halt boot if device is missing |
| _netdev | Wait for network before mounting (NFS, iSCSI) |
| ro | Read-only |
| noexec | Prevent execution of binaries |
| nosuid | Ignore setuid/setgid bits |
| nodev | Ignore device files |
| discard | Enable TRIM for SSDs |
| x-systemd.automount | Mount on first access (systemd) |

Fields 5 and 6 (dump and pass):

  • dump: 0 = skip backup via the dump command (almost always 0 now)
  • pass: 0 = skip fsck, 1 = check first (root), 2 = check after root

Always test fstab changes before rebooting:

# Validate fstab syntax (will attempt to mount everything)
sudo mount -a

# If mount -a succeeds with no errors, fstab is safe to reboot with
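
Before even running mount -a, a purely structural pass can catch truncated lines that would otherwise fail at boot. A sketch (`check_fstab` is our own helper, and the sample file is fabricated):

```shell
# Structural pre-check for fstab: each non-comment line must have 6 fields
# and a pass value of 0, 1, or 2. Exits nonzero if any line is malformed.
check_fstab() {
    awk '!/^[[:space:]]*(#|$)/ {
        if (NF != 6)              { printf "line %d: %d fields (want 6)\n", NR, NF; bad = 1 }
        else if ($6 !~ /^[0-2]$/) { printf "line %d: bad pass value %s\n", NR, $6; bad = 1 }
    } END { exit bad }' "$1"
}

# Demo against a sample file; on a real host run: check_fstab /etc/fstab
cat > /tmp/fstab.sample <<'EOF'
UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890 /data ext4 defaults 0 2
/dev/vg0/lv_logs /var/log ext4 defaults 0
EOF
check_fstab /tmp/fstab.sample || echo "fstab has problems"
```

This only validates shape, not whether the devices exist; mount -a remains the real test.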

LVM (Logical Volume Manager)

LVM adds a flexible abstraction layer between physical disks and filesystems. You can resize volumes, span multiple disks, and take snapshots without downtime.

LVM Architecture

Physical Disks     →  Physical Volumes (PV)  →  Volume Group (VG)  →  Logical Volumes (LV)
/dev/sdb           →  /dev/sdb (PV)          →  vg_data            →  lv_databases
/dev/sdc           →  /dev/sdc (PV)          →                     →  lv_logs
                                                                     →  lv_backups

Creating LVM from Scratch

# Step 1: Create physical volumes
sudo pvcreate /dev/sdb /dev/sdc

# Step 2: Create a volume group spanning both PVs
sudo vgcreate vg_data /dev/sdb /dev/sdc

# Step 3: Create logical volumes
sudo lvcreate -L 100G -n lv_databases vg_data
sudo lvcreate -L 50G -n lv_logs vg_data
sudo lvcreate -l 100%FREE -n lv_backups vg_data   # use all remaining space

# Step 4: Create filesystems
sudo mkfs.ext4 /dev/vg_data/lv_databases
sudo mkfs.xfs /dev/vg_data/lv_logs
sudo mkfs.ext4 /dev/vg_data/lv_backups

# Step 5: Mount
sudo mkdir -p /data/databases /var/log /data/backups
sudo mount /dev/vg_data/lv_databases /data/databases

Inspecting LVM

# Physical volumes
sudo pvs                  # brief summary
sudo pvdisplay            # detailed

# Volume groups
sudo vgs                  # brief
sudo vgdisplay            # detailed

# Logical volumes
sudo lvs                  # brief
sudo lvdisplay            # detailed

# Show free space in a VG
sudo vgs -o +vg_free

Extending Volumes (Online)

This is the primary reason people use LVM — live resizing without downtime.

# Add a new disk to existing VG
sudo pvcreate /dev/sdd
sudo vgextend vg_data /dev/sdd

# Extend an LV by 50G
sudo lvextend -L +50G /dev/vg_data/lv_databases

# Extend an LV to use all free space in VG
sudo lvextend -l +100%FREE /dev/vg_data/lv_databases

# Resize the filesystem to match (ext4)
sudo resize2fs /dev/vg_data/lv_databases

# Resize the filesystem to match (XFS)
sudo xfs_growfs /data/databases

# One-liner: extend LV and resize filesystem together
sudo lvextend -L +50G --resizefs /dev/vg_data/lv_databases

LVM Snapshots

Point-in-time copies using copy-on-write. Useful for consistent backups.

# Create a snapshot (allocate space for changes)
sudo lvcreate -L 10G -s -n lv_databases_snap /dev/vg_data/lv_databases

# Mount the snapshot read-only
sudo mount -o ro /dev/vg_data/lv_databases_snap /mnt/snap

# Back up from the snapshot
tar czf /backup/db_backup.tar.gz -C /mnt/snap .

# Remove snapshot when done
sudo umount /mnt/snap
sudo lvremove /dev/vg_data/lv_databases_snap

# Check snapshot usage (if it fills up, it becomes invalid)
sudo lvs -o lv_name,data_percent,snap_percent
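
Because a full snapshot is silently invalidated, fill percentage is worth polling from cron. A sketch that parses lvs output captured to a file (`snap_alert`, the 80% threshold, and the sample values are all ours; on a real host feed it `sudo lvs --noheadings -o lv_name,data_percent`):

```shell
# Warn when a snapshot's fill percentage crosses a threshold.
snap_alert() {  # usage: snap_alert <lvs-output-file> <threshold-percent>
    awk -v max="$2" '$2 + 0 > max { print $1, "snapshot", $2 "% full" }' "$1"
}

cat > /tmp/lvs.out <<'EOF'
  lv_databases_snap  86.20
  lv_logs_snap       12.05
EOF
snap_alert /tmp/lvs.out 80    # lv_databases_snap snapshot 86.20% full
```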

RAID (Redundant Array of Independent Disks)

RAID Levels

| Level | Min Disks | Capacity | Fault Tolerance | Read Speed | Write Speed | Use Case |
|-------|-----------|----------|-----------------|------------|-------------|----------|
| RAID 0 | 2 | N disks | None | N x | N x | Scratch/temp (speed only) |
| RAID 1 | 2 | 1 disk | 1 disk failure | N x | 1 x | OS, boot drives |
| RAID 5 | 3 | N-1 disks | 1 disk failure | (N-1) x | Slower writes | General storage |
| RAID 6 | 4 | N-2 disks | 2 disk failures | (N-2) x | Slower writes | Large arrays |
| RAID 10 | 4 | N/2 disks | 1 per mirror | N x | (N/2) x | Databases, high IOPS |
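
The capacity column turns into a quick sizing calculator (`raid_capacity_tb` is our own helper; sizes are whole TB per disk for simplicity):

```shell
# Usable capacity by RAID level, following the capacity column above.
raid_capacity_tb() {  # usage: raid_capacity_tb <level> <n_disks> <disk_tb>
    local level=$1 n=$2 size=$3
    case "$level" in
        0)  echo $(( n * size )) ;;
        1)  echo "$size" ;;
        5)  echo $(( (n - 1) * size )) ;;
        6)  echo $(( (n - 2) * size )) ;;
        10) echo $(( n / 2 * size )) ;;
    esac
}

raid_capacity_tb 5 4 4     # 12  (4x 4TB disks in RAID 5)
raid_capacity_tb 10 4 4    # 8
```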

Linux Software RAID with mdadm

# Create a RAID 1 array
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Create a RAID 5 array
sudo mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Create a RAID 10 array
sudo mdadm --create /dev/md2 --level=10 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Check RAID status
cat /proc/mdstat
sudo mdadm --detail /dev/md0

# Save config (survives reboot)
sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf
sudo update-initramfs -u

# Mark a disk as failed and remove it
sudo mdadm --manage /dev/md0 --fail /dev/sdc1
sudo mdadm --manage /dev/md0 --remove /dev/sdc1

# Add a replacement disk
sudo mdadm --manage /dev/md0 --add /dev/sde1

# Monitor rebuild progress
watch cat /proc/mdstat

Spare Disks

# Create RAID 5 with a hot spare
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# The spare auto-activates when a disk fails

The Rebuild Problem

The most dangerous moment is not when a disk fails — it is during the rebuild after failure.

  • Rebuild reads every sector of every surviving disk
  • This stresses disks that are the same age and model
  • A second failure during rebuild = data loss (RAID 5)
  • Large disks mean long rebuilds: 4TB at 100MB/s takes ~11 hours
  • URE (Unrecoverable Read Error) during rebuild can fail the array

| Disk Size | Rebuild Rate | Time |
|-----------|--------------|------|
| 2 TB | 100 MB/s | ~6 hrs |
| 4 TB | 100 MB/s | ~11 hrs |
| 8 TB | 100 MB/s | ~22 hrs |
| 16 TB | 80 MB/s | ~55 hrs |

Production workloads reduce effective rebuild rate. Double these times for busy servers.
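
The table values come straight from size / rate; a one-liner reproduces them (`rebuild_hours` is our own helper, and it deliberately ignores the production-load slowdown):

```shell
# Rough rebuild-time estimate: disk size divided by sustained rebuild rate.
# awk handles the floating-point division.
rebuild_hours() {  # usage: rebuild_hours <size_tb> <rate_mb_per_s>
    awk -v tb="$1" -v rate="$2" \
        'BEGIN { printf "%.1f\n", tb * 1000 * 1000 / rate / 3600 }'
}

rebuild_hours 4 100    # 11.1
rebuild_hours 16 80    # 55.6
```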

Disk Failure Symptoms

Hard failure: Drive completely gone from OS. Clear and immediate.

Slow failure: Drive responds but with high latency. Causes I/O wait spikes. Often worse than hard failure because the array stays "healthy" while performance dies.

# Check for slow disk via iostat
iostat -x 1
# Watch for high await and %util on individual devices

Silent corruption: Drive returns wrong data without error. Run scrubs regularly:

# mdadm array scrub
echo check | sudo tee /sys/block/md0/md/sync_action
cat /sys/block/md0/md/mismatch_cnt   # 0 = clean
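
The health check can also be scripted for alerting: a degraded md array shows an underscore in its [UU] status string in /proc/mdstat. A sketch against a fabricated capture (`degraded_arrays` is our own helper; on a real host point it at /proc/mdstat):

```shell
# Print the name of every md array whose status brackets contain "_",
# e.g. [U_] or [UU_], which marks a missing or failed member.
degraded_arrays() {
    awk '/^md/ { dev = $1 } /\[[U_]*_[U_]*\]/ { print dev }' "$1"
}

cat > /tmp/mdstat.sample <<'EOF'
md0 : active raid1 sdb1[0] sdc1[1]
      488254464 blocks super 1.2 [2/2] [UU]
md1 : active raid5 sdd1[0] sde1[1]
      976508928 blocks super 1.2 [3/2] [UU_]
EOF
degraded_arrays /tmp/mdstat.sample    # md1
```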

Degraded Array Handling

When an array is degraded:

  1. Identify failed disk (cat /proc/mdstat, dmesg | grep -i error)
  2. Check if hot spare kicked in (rebuild should be automatic)
  3. Before replacing: verify the correct disk (match serial number, slot)
  4. Replace disk (hot-swap if supported)
  5. Monitor rebuild (watch cat /proc/mdstat)
  6. Reduce I/O during rebuild if possible

# Tune rebuild speed limits
cat /proc/sys/dev/raid/speed_limit_min   # default: 1000 KB/s
cat /proc/sys/dev/raid/speed_limit_max   # default: 200000 KB/s
echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_min   # faster rebuild

| Scenario | Action |
|----------|--------|
| SMART warns, array OK | Replace soon |
| Single disk failed | Replace now |
| RAID 5, disk failed | Replace ASAP |
| RAID 6, two failed | Emergency |
| Rebuild fails twice | Replace disk |

SMART Monitoring

SMART (Self-Monitoring, Analysis, and Reporting Technology) provides early warning of disk failures.

# Install smartmontools
sudo apt install smartmontools   # Debian/Ubuntu
sudo yum install smartmontools   # RHEL/CentOS

# Check if SMART is enabled
sudo smartctl -i /dev/sda

# Run a short self-test
sudo smartctl -t short /dev/sda

# Run a long self-test (takes hours)
sudo smartctl -t long /dev/sda

# View test results
sudo smartctl -l selftest /dev/sda

# View all SMART attributes
sudo smartctl -A /dev/sda

# Key attributes to watch:
# 5   — Reallocated_Sector_Ct    (bad sectors remapped — rising = failing)
# 187 — Reported_Uncorrect       (uncorrectable errors)
# 188 — Command_Timeout          (commands timing out)
# 197 — Current_Pending_Sector   (sectors waiting to be remapped)
# 198 — Offline_Uncorrectable    (sectors that can't be read)

# Overall health assessment
sudo smartctl -H /dev/sda
# PASSED = ok, FAILED = replace the disk immediately

# Enable automatic monitoring daemon
sudo systemctl enable --now smartd
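
For scripted triage, the key attributes above can be pulled out of captured smartctl -A output. A sketch (`smart_flags` is our own helper; the sample output below is fabricated):

```shell
# Flag the failure-predicting SMART attributes (IDs 5, 187, 188, 197, 198)
# whenever their raw value (last column) is nonzero.
smart_flags() {
    awk '$1 ~ /^(5|187|188|197|198)$/ && $NF + 0 > 0 { print $1, $2, "raw =", $NF }' "$1"
}

# Demo with captured output; on a real host:
#   sudo smartctl -A /dev/sda > /tmp/smart.out
cat > /tmp/smart.out <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   099   099   010    Pre-fail  Always       -       24
  9 Power_On_Hours          0x0032   095   095   000    Old_age   Always       -       4102
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
EOF
smart_flags /tmp/smart.out    # 5 Reallocated_Sector_Ct raw = 24
```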

For NVMe drives:

sudo smartctl -a /dev/nvme0n1

# NVMe-specific health info
sudo nvme smart-log /dev/nvme0n1

Disk I/O Concepts

Key Metrics

| Metric | Definition | Typical Values |
|--------|------------|----------------|
| IOPS | I/O operations per second | HDD: 100-200, SSD: 10K-100K, NVMe: 100K-1M |
| Throughput | Data transfer rate (MB/s) | HDD: 100-200, SSD: 500-3500, NVMe: 3000-7000 |
| Latency | Time per I/O operation | HDD: 5-15ms, SSD: 0.1-1ms, NVMe: 0.02-0.1ms |
| Queue depth | Outstanding I/O requests | Indicates saturation when consistently high |
| %util | Percentage of time device is busy | >80% sustained = bottleneck |
| await | Average wait time (queue + service) | Rising await at flat throughput = queuing |
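
These metrics are linked: throughput is IOPS times I/O size, and by Little's law the average queue depth equals IOPS times latency. A quick check with plausible SSD numbers (the figures are illustrative only):

```shell
# throughput = IOPS x I/O size; avg queue depth = IOPS x latency (Little's law)
iops=20000; io_kb=4; latency_ms=0.5
awk -v iops="$iops" -v kb="$io_kb" -v lat="$latency_ms" 'BEGIN {
    printf "throughput: %.0f MB/s\n", iops * kb / 1024
    printf "avg queue depth: %.0f\n", iops * lat / 1000
}'
# throughput: 78 MB/s
# avg queue depth: 10
```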

Monitoring with iostat

# Install sysstat package
sudo apt install sysstat

# Basic I/O stats, refreshing every 2 seconds
iostat -x 2

# Key columns in extended output:
# rrqm/s  — read requests merged per second
# wrqm/s  — write requests merged per second
# r/s     — reads per second
# w/s     — writes per second
# rMB/s   — read throughput
# wMB/s   — write throughput
# await   — average I/O wait time (ms)
# %util   — device utilization

# Filter to specific devices
iostat -x -d sda nvme0n1 2

# Show human-readable with timestamps
iostat -xhz -t 2
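
For alerting, the %util column (the last field of iostat -x output) can be scanned from a capture. A sketch with fabricated output (`busy_devices` is our own helper):

```shell
# Flag devices whose %util (last column of iostat -x) exceeds 80.
busy_devices() {
    awk 'NF > 2 && $NF + 0 > 80 { print $1, $NF }' "$1"
}

# Demo with captured output; on a real host: iostat -x 1 3 > /tmp/iostat.out
cat > /tmp/iostat.out <<'EOF'
Device   r/s   w/s   rMB/s  wMB/s  await  %util
sda      12.0  3.0   1.2    0.4    4.1    9.50
nvme0n1  900   450   610    220    8.7    97.30
EOF
busy_devices /tmp/iostat.out    # nvme0n1 97.30
```

The header line is skipped automatically because "%util" evaluates to 0 numerically.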

Monitoring with iotop

# See which processes are doing I/O (requires root)
sudo iotop

# Batch mode, show only active processes
sudo iotop -b -o

# Show accumulated I/O totals instead of per-second rates
sudo iotop -o -a

NFS Basics

Network File System — share directories across machines over the network.

Server Setup

# Install NFS server
sudo apt install nfs-kernel-server

# Define exports in /etc/exports
# Format: <directory> <client>(options)
echo '/exports/shared 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)' \
    | sudo tee -a /etc/exports

# Apply changes
sudo exportfs -ra

# Show current exports
sudo exportfs -v

# Start/enable service
sudo systemctl enable --now nfs-kernel-server

Client Setup

# Install NFS client
sudo apt install nfs-common

# Discover exports from a server
showmount -e 192.168.1.10

# Mount manually
sudo mount -t nfs 192.168.1.10:/exports/shared /mnt/nfs

# Add to fstab for persistent mount
# Note: _netdev ensures network is up before mounting; nofail prevents boot hang
# (the old intr option is ignored by modern kernels, so it is omitted here)
echo '192.168.1.10:/exports/shared /mnt/nfs nfs _netdev,nofail,rw,hard 0 0' \
    | sudo tee -a /etc/fstab

NFS Versions

| Version | Features |
|---------|----------|
| NFSv3 | Stateless, UDP or TCP, widely compatible |
| NFSv4 | Stateful, TCP only, single port (2049), built-in ACLs, Kerberos |
| NFSv4.1 | Parallel NFS (pNFS), session trunking |
| NFSv4.2 | Server-side copy, sparse files, space reservation |

iSCSI Basics

iSCSI exposes block devices over the network — the client sees a raw disk, not a shared filesystem.

# Target (server) — using targetcli
sudo targetcli
# /backstores/block create disk1 /dev/sdb
# /iscsi create iqn.2024-01.com.example:storage
# /iscsi/iqn.2024-01.com.example:storage/tpg1/luns create /backstores/block/disk1
# /iscsi/iqn.2024-01.com.example:storage/tpg1/acls create iqn.2024-01.com.example:client1

# Initiator (client) — discover and login
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.10
sudo iscsiadm -m node -T iqn.2024-01.com.example:storage -p 192.168.1.10 --login

# The iSCSI LUN appears as a new block device (e.g., /dev/sdc)
lsblk

# Partition, format, and mount as normal
sudo mkfs.ext4 /dev/sdc
sudo mount /dev/sdc /mnt/iscsi

# Persist across reboots
sudo iscsiadm -m node -T iqn.2024-01.com.example:storage -p 192.168.1.10 --op update \
    -n node.startup -v automatic

Key difference from NFS: iSCSI provides block-level access (one client at a time, unless using clustered filesystem). NFS provides file-level access (multiple clients simultaneously).

Disk Encryption with LUKS/dm-crypt

LUKS (Linux Unified Key Setup) provides full-disk encryption using dm-crypt.

Creating an Encrypted Volume

# Format a partition with LUKS encryption
sudo cryptsetup luksFormat /dev/sdb1
# Confirm with uppercase YES, then enter passphrase

# Open (unlock) the encrypted volume
sudo cryptsetup luksOpen /dev/sdb1 secure_data
# This creates /dev/mapper/secure_data

# Create filesystem on the decrypted mapper device
sudo mkfs.ext4 /dev/mapper/secure_data

# Mount
sudo mount /dev/mapper/secure_data /mnt/secure

Managing Keys

# Add a backup key (LUKS1 has 8 key slots; LUKS2 allows up to 32)
sudo cryptsetup luksAddKey /dev/sdb1

# Remove a key
sudo cryptsetup luksRemoveKey /dev/sdb1

# View key slot status
sudo cryptsetup luksDump /dev/sdb1

# Change a key
sudo cryptsetup luksChangeKey /dev/sdb1

Persistent Encrypted Mount

# Create /etc/crypttab entry (maps name to device)
echo 'secure_data UUID=<uuid-of-sdb1> none luks' | sudo tee -a /etc/crypttab

# Add fstab entry for the decrypted device
echo '/dev/mapper/secure_data /mnt/secure ext4 defaults,nofail 0 2' \
    | sudo tee -a /etc/fstab

# System will prompt for passphrase at boot
# For automated unlock, use a keyfile instead of 'none' in crypttab

Keyfile Unlock (Headless Servers)

# Generate a keyfile
sudo dd if=/dev/urandom of=/root/.luks-keyfile bs=4096 count=1
sudo chmod 400 /root/.luks-keyfile

# Add the keyfile to LUKS
sudo cryptsetup luksAddKey /dev/sdb1 /root/.luks-keyfile

# Update crypttab to use keyfile
# secure_data UUID=<uuid> /root/.luks-keyfile luks
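
A keyfile readable by anyone but root defeats the encryption, so it is worth asserting the mode in a config-management check. A sketch using a scratch file (`key_ok` is our own helper):

```shell
# Verify a LUKS keyfile is locked down to mode 400 before trusting it.
key_ok() {
    [ "$(stat -c '%a' "$1")" = "400" ]
}

# Demo with a scratch file (a real keyfile would live at /root/.luks-keyfile):
head -c 4096 /dev/urandom > /tmp/demo.key
chmod 400 /tmp/demo.key
key_ok /tmp/demo.key && echo "permissions OK"
```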

Closing Encrypted Volumes

# Unmount first
sudo umount /mnt/secure

# Close the LUKS device
sudo cryptsetup luksClose secure_data

Quick Reference

| Task | Command |
|------|---------|
| List block devices | lsblk |
| Show disk UUIDs | blkid |
| Check disk space | df -h |
| Check inode usage | df -i |
| Partition a disk (GPT) | sudo parted /dev/sdb mklabel gpt |
| Create ext4 | sudo mkfs.ext4 /dev/sdb1 |
| Create XFS | sudo mkfs.xfs /dev/sdb1 |
| Mount | sudo mount /dev/sdb1 /mnt |
| Test fstab | sudo mount -a |
| Create PV | sudo pvcreate /dev/sdb |
| Create VG | sudo vgcreate vg0 /dev/sdb |
| Create LV | sudo lvcreate -L 100G -n lv0 vg0 |
| Extend LV + FS | sudo lvextend -L +50G --resizefs /dev/vg0/lv0 |
| RAID status | cat /proc/mdstat |
| SMART health | sudo smartctl -H /dev/sda |
| I/O stats | iostat -x 2 |
| Encrypt disk | sudo cryptsetup luksFormat /dev/sdb1 |
