
Portal | Level: L1: Foundations | Topics: Linux Boot Process, Linux Fundamentals, systemd | Domain: Linux

Linux Boot Process — Primer

Why This Matters

When a Linux system won't boot, you're flying blind unless you understand the boot sequence. Every stage has its own failure modes, its own diagnostic tools, and its own recovery procedures. Understanding the boot process is the difference between "I'll reinstall the OS" and "I'll fix the GRUB config from a rescue shell in 90 seconds."

This primer walks through every stage from power-on to login prompt, with the detail needed to diagnose and repair failures at each stage.


The Boot Sequence Overview

Power On
    ├─ Stage 1: Firmware (BIOS or UEFI)
      └─ POST, hardware init, find boot device
    ├─ Stage 2: Bootloader (GRUB2)
      └─ Load kernel + initramfs into memory
    ├─ Stage 3: Kernel Initialization
      └─ Hardware detection, driver loading, mount initramfs
    ├─ Stage 4: initramfs / initrd
      └─ Early userspace, find real root, switch_root
    ├─ Stage 5: Init System (systemd or SysV)
      └─ Start services, reach target/runlevel
    └─ Stage 6: Login Prompt
       └─ getty, display manager, or SSH

Each stage hands off to the next. A failure at any stage stops the chain.


Stage 1: Firmware — BIOS and UEFI

BIOS (Legacy)

The Basic Input/Output System is firmware stored on a chip on the motherboard. On power-on:

  1. POST (Power-On Self-Test) — tests CPU, RAM, basic hardware
  2. Enumerate boot devices — reads the boot order from CMOS/NVRAM
  3. Load MBR — reads the first 512 bytes of the selected boot device
  4. Jump to bootloader — executes the code in the MBR

The MBR (Master Boot Record) contains:

  • 446 bytes of bootloader code (GRUB stage 1)
  • 64 bytes of partition table (4 entries)
  • 2-byte boot signature (0x55AA)
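The layout above can be checked byte-by-byte. A minimal sketch, using a temporary file in place of a real disk (reading /dev/sda directly would require root):

```shell
# Build a toy 512-byte "MBR" image and verify the 0x55AA boot signature at
# offset 510 -- the same check BIOS firmware performs before jumping to the
# bootloader code.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 status=none
# Write the two signature bytes (octal 125 252 = hex 55 AA) at offset 510
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc status=none
sig=$(dd if="$img" bs=1 skip=510 count=2 status=none | od -An -tx1 | tr -d ' \n')
echo "boot signature: $sig"   # 55aa
rm -f "$img"
```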

Limitations: MBR supports at most 4 primary partitions and about 2 TiB of addressable disk (32-bit sector addresses, 512-byte sectors). This is why UEFI + GPT replaced it.
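The 2 TiB figure falls straight out of the arithmetic: the partition table stores sector addresses as 32-bit LBA values, and a sector is 512 bytes.

```shell
# Largest byte offset addressable through a 32-bit LBA with 512-byte sectors
max_bytes=$(( (1 << 32) * 512 ))
# Convert to TiB (1 TiB = 2^40 bytes)
max_tib=$(( max_bytes / (1 << 40) ))
echo "MBR addressable limit: ${max_tib} TiB"   # 2 TiB
```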

UEFI (Unified Extensible Firmware Interface)

Timeline: BIOS dates back to the IBM PC in 1981 and remained essentially unchanged for 25 years. Intel developed the Extensible Firmware Interface (EFI) starting in 1998 for the Itanium platform, then opened it as UEFI via the UEFI Forum in 2005. Most x86 systems shipped UEFI-capable by 2011, but BIOS compatibility (CSM) lingered until around 2020.

UEFI is the modern replacement for BIOS. Key differences:

Feature              BIOS              UEFI
Partition table      MBR               GPT
Max disk size        2 TB              9.4 ZB
Max partitions       4 primary         128
Boot code location   MBR (512 bytes)   EFI System Partition (FAT32)
Secure Boot          No                Yes
Boot mode            16-bit real mode  32/64-bit protected mode
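A quick way to tell which mode the running system booted in: the kernel exposes /sys/firmware/efi only when it was started via UEFI.

```shell
# If /sys/firmware/efi exists, the kernel was booted through UEFI firmware;
# otherwise it came up through legacy BIOS (or a CSM).
if [ -d /sys/firmware/efi ]; then
    mode=UEFI
else
    mode=BIOS
fi
echo "Booted via: $mode"
```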

UEFI boot process:

  1. POST — same as BIOS
  2. Read NVRAM boot entries — UEFI stores boot entries in firmware variables, not in the MBR
  3. Load EFI application — reads a .efi binary from the EFI System Partition (ESP)
  4. Execute bootloader — typically \EFI\ubuntu\shimx64.efi (with Secure Boot) or \EFI\ubuntu\grubx64.efi

EFI System Partition (ESP)

The ESP is a small FAT32 partition (typically 512 MB) mounted at /boot/efi/:

$ ls /boot/efi/EFI/
BOOT  ubuntu

$ ls /boot/efi/EFI/ubuntu/
grub.cfg  grubx64.efi  mmx64.efi  shimx64.efi

# View UEFI boot entries
$ efibootmgr -v
BootCurrent: 0001
BootOrder: 0001,0000,0002
Boot0000* EFI Network   PciRoot(0x0)/Pci(0x1c,0x0)/.../MAC(...)
Boot0001* ubuntu        HD(1,GPT,abc123...)/File(\EFI\ubuntu\shimx64.efi)
Boot0002* EFI Shell     Fv(...)

Secure Boot

Secure Boot verifies that the bootloader is signed by a trusted key. The chain of trust:

  1. UEFI firmware has Microsoft's key in its database
  2. shimx64.efi is signed by Microsoft, contains Canonical's/Red Hat's key
  3. grubx64.efi is signed by the distro vendor
  4. The kernel (vmlinuz) is signed by the distro vendor

If any signature check fails, boot is refused. You can manage Secure Boot keys with mokutil:

# Check Secure Boot status
$ mokutil --sb-state
SecureBoot enabled

# List enrolled keys
$ mokutil --list-enrolled

Stage 2: GRUB2 (Grand Unified Bootloader)

GRUB2 is the standard Linux bootloader. It's responsible for loading the kernel and initramfs into memory.

GRUB2 Architecture

In BIOS mode, GRUB installs in stages:

  • Stage 1 — in the MBR (446 bytes), just enough to find stage 1.5
  • Stage 1.5 — in the gap between the MBR and the first partition (the "post-MBR gap"), filesystem-aware
  • Stage 2 — in /boot/grub/, the full GRUB environment with modules, themes, and config

In UEFI mode, GRUB is a single EFI binary (grubx64.efi) on the ESP.

GRUB Configuration

The main config is /boot/grub/grub.cfg (generated, do not edit directly):

# Regenerate GRUB config from /etc/default/grub and /etc/grub.d/
$ sudo update-grub              # Debian/Ubuntu
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # RHEL/CentOS

Settings in /etc/default/grub:

GRUB_DEFAULT=0                          # Boot first entry by default
GRUB_TIMEOUT=5                          # Seconds to wait at menu
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"  # Kernel params for default entry
GRUB_CMDLINE_LINUX=""                   # Kernel params for ALL entries
GRUB_DISABLE_RECOVERY="false"           # Show recovery entries

Kernel Command Line Parameters

Parameters passed to the kernel via GRUB:

Parameter                   Purpose
root=/dev/sda2              Root filesystem device
ro                          Mount root read-only initially
quiet                       Suppress most boot messages
splash                      Show graphical splash screen
single or 1                 Boot to single-user mode
init=/bin/bash              Skip init, drop to shell (emergency)
rd.break                    Break into initramfs shell (before switch_root)
systemd.unit=rescue.target  Boot to rescue target
nomodeset                   Disable kernel mode setting (graphics troubleshooting)
net.ifnames=0               Use classic network interface names (eth0)
console=ttyS0,115200        Serial console output
crashkernel=256M            Reserve memory for kdump

You can view the current kernel command line:

$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-5.15.0-91-generic root=UUID=abc123... ro quiet splash
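A small sketch of pulling a single parameter out of the command line; the sample string stands in for the contents of /proc/cmdline on a live system.

```shell
# Kernel parameters are space-separated; splitting them one per line makes
# them easy to grep.  The UUID here is a placeholder.
cmdline='BOOT_IMAGE=/vmlinuz-5.15.0-91-generic root=UUID=abc123 ro quiet splash'
root_param=$(printf '%s\n' "$cmdline" | tr ' ' '\n' | grep '^root=')
echo "$root_param"   # root=UUID=abc123
```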

GRUB Shell

If GRUB can't find its config, you drop to a GRUB shell. Key commands:

grub> ls                        # List disks and partitions
grub> ls (hd0,gpt2)/           # List files on a partition
grub> set root=(hd0,gpt2)      # Set root partition
grub> linux /vmlinuz root=/dev/sda2  # Load kernel
grub> initrd /initrd.img        # Load initramfs
grub> boot                       # Boot with loaded kernel+initrd

Stage 3: Kernel Initialization

Once GRUB loads the kernel (vmlinuz) and initramfs into memory, it transfers control to the kernel.

What the Kernel Does at Boot

  1. Decompress itself — the kernel image is compressed (gzip, xz, lz4, or zstd)
  2. Set up memory management — page tables, zones, slab allocator
  3. Detect hardware — PCI enumeration, ACPI parsing, device tree (on ARM)
  4. Load built-in drivers — drivers compiled into the kernel (not modules)
  5. Mount initramfs as temporary root — at /
  6. Start PID 1 in initramfs — typically /init (a script or systemd in initramfs)

/boot Contents

$ ls -la /boot/
-rw-r--r-- 1 root root 11489272 Mar 14 vmlinuz-5.15.0-91-generic     # Compressed kernel
-rw-r--r-- 1 root root 95932416 Mar 14 initrd.img-5.15.0-91-generic  # initramfs
-rw-r--r-- 1 root root  5836884 Mar 14 System.map-5.15.0-91-generic  # Kernel symbol table
-rw-r--r-- 1 root root   262481 Mar 14 config-5.15.0-91-generic      # Kernel build config
drwxr-xr-x 5 root root     4096 Mar 14 grub/                        # GRUB files

The kernel ring buffer captures early boot messages:

$ dmesg | head -20
[    0.000000] Linux version 5.15.0-91-generic (buildd@lcy02) ...
[    0.000000] Command line: BOOT_IMAGE=/vmlinuz-5.15.0-91-generic root=UUID=...
[    0.000000] BIOS-provided physical RAM map:
[    0.000000]  BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
...
[    0.000345] DMI: Dell Inc. PowerEdge R640/0W23H8, BIOS 2.18.1 02/09/2023
[    0.526812] Memory: 131072000k/134217728k available

Stage 4: initramfs (Initial RAM Filesystem)

Why initramfs Exists

The kernel needs to mount the real root filesystem, but to do that it might need:

  • A filesystem driver (e.g., XFS, Btrfs) that's compiled as a module
  • A block device driver (e.g., RAID controller, NVMe) that's a module
  • LVM activation logic
  • LUKS decryption for an encrypted root
  • A network stack for NFS root

The initramfs is a compressed CPIO archive loaded into RAM. It contains just enough userspace tools and drivers to find and mount the real root.
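You can identify the compression from the image's first bytes (gzip = 1f 8b, xz = fd 37, zstd = 28 b5). A sketch using a freshly made gzip stream in place of a real /boot/initrd.img-* file:

```shell
# Read the first two "magic" bytes of a compressed file -- the same check
# you'd run against an initramfs image to see how it was compressed.
tmp=$(mktemp)
echo "payload" | gzip > "$tmp"
magic=$(od -An -tx1 -N2 "$tmp" | tr -d ' \n')
echo "magic bytes: $magic"   # 1f8b -> gzip
rm -f "$tmp"
```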

initramfs Contents

# Examine initramfs contents
$ lsinitramfs /boot/initrd.img-5.15.0-91-generic | head -20
.
bin
bin/busybox
bin/cat
bin/cp
...
lib/modules/5.15.0-91-generic/kernel/drivers/scsi/
lib/modules/5.15.0-91-generic/kernel/drivers/md/
scripts/local-top
scripts/init-top

# Or unpack it for detailed inspection
$ mkdir /tmp/initramfs && cd /tmp/initramfs
$ unmkinitramfs /boot/initrd.img-5.15.0-91-generic .

initramfs Boot Sequence

  1. Kernel mounts the initramfs as /
  2. Runs /init (busybox-based script or systemd)
  3. Loads required kernel modules (storage drivers, filesystem modules, dm-crypt)
  4. Activates LVM, RAID, LUKS as needed
  5. Mounts the real root filesystem at /sysroot or /root
  6. Calls switch_root to pivot — the real root becomes /
  7. Executes the real /sbin/init (systemd or SysV init)

Regenerating initramfs

When you add drivers, change root filesystem, or update crypttab:

# Debian/Ubuntu (initramfs-tools)
$ sudo update-initramfs -u -k $(uname -r)    # Update current kernel's initramfs
$ sudo update-initramfs -u -k all             # Update all kernels

# RHEL/CentOS/Fedora (dracut)
$ sudo dracut -f /boot/initramfs-$(uname -r).img $(uname -r)   # Force regenerate
$ sudo dracut -f                              # Regenerate for current kernel

# Dracut with verbose output (useful for debugging)
$ sudo dracut -fv /boot/initramfs-$(uname -r).img $(uname -r) 2>&1 | tee /tmp/dracut.log

Dracut Modules

Dracut is the modern initramfs generator. It's modular:

# List available dracut modules
$ dracut --list-modules

# Include specific modules
$ dracut --add "lvm crypt" -f

# Exclude modules (faster boot for simple setups)
$ dracut --omit "plymouth" -f

Stage 5: Init System

SysV Init (Legacy)

The original Unix init system. Runs scripts in /etc/rc.d/ or /etc/init.d/ based on runlevels:

Runlevel  Purpose
0         Halt
1 (S)     Single-user (rescue)
2         Multi-user (no NFS on some distros)
3         Multi-user with networking
4         Unused (custom)
5         Multi-user with GUI
6         Reboot

# Check current runlevel
$ runlevel
N 3

# Change runlevel
$ init 3

# Start/stop services
$ /etc/init.d/nginx start
$ service nginx start

SysV init runs scripts sequentially — services start one at a time. This made boot slow.

Name origin: "SysV" refers to System V (System Five), the AT&T Unix variant released in 1983 that standardized the /etc/init.d/ and runlevel model. The "V" is a Roman numeral, not the letter: System V was the fifth major release of AT&T's commercial Unix line.

systemd

systemd is the modern init system used by nearly all major distros. Key concepts:

Units — the basic managed entity. Types:

  • .service — daemons and processes
  • .target — groups of units (replaces runlevels)
  • .mount — filesystem mounts
  • .timer — scheduled tasks (replaces cron for systemd-managed tasks)
  • .socket — socket activation
  • .device — hardware devices
  • .path — path-based activation
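For orientation, a minimal .service unit might look like this. The unit name and binary path are placeholders for illustration, not from any real package:

```ini
# /etc/systemd/system/mydaemon.service -- minimal sketch; mydaemon is a
# hypothetical binary, not a real service.
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating the file, `sudo systemctl daemon-reload && sudo systemctl enable --now mydaemon.service` registers and starts it; the [Install] section is what ties the unit into multi-user.target at boot.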

Targets replace runlevels:

systemd Target     SysV Runlevel  Purpose
poweroff.target    0              Halt
rescue.target      1              Single-user
multi-user.target  3              Multi-user, no GUI
graphical.target   5              Multi-user with GUI
reboot.target      6              Reboot
emergency.target   -              Emergency shell (minimal)

# View current default target
$ systemctl get-default
multi-user.target

# Set default target
$ sudo systemctl set-default multi-user.target

# List all units loaded during boot
$ systemctl list-units --type=service

# View the dependency tree for a target
$ systemctl list-dependencies multi-user.target

systemd Boot Stages

systemd boots in dependency order, parallelizing where possible:

sysinit.target              ← Early system init (udev, filesystem mounts, swap)
  └─ basic.target           ← Sockets, timers, paths, slices
      └─ multi-user.target  ← All non-GUI services running (network.target reached along the way)
          └─ graphical.target ← Display manager (if applicable)

Boot Performance Analysis

# Overall boot time
$ systemd-analyze
Startup finished in 2.345s (firmware) + 1.234s (loader) + 3.456s (kernel) + 8.901s (userspace) = 15.936s
graphical.target reached after 8.901s in userspace.

# Time per service
$ systemd-analyze blame
          5.234s NetworkManager-wait-online.service
          2.345s snapd.service
          1.123s docker.service
          0.890s systemd-journal-flush.service
          ...

# Critical chain (boot bottleneck path)
$ systemd-analyze critical-chain
graphical.target @8.901s
└─multi-user.target @8.900s
  └─docker.service @6.555s +1.123s
    └─network-online.target @6.554s
      └─NetworkManager-wait-online.service @1.320s +5.234s
        └─NetworkManager.service @1.200s +0.120s

# Generate SVG boot chart
$ systemd-analyze plot > boot-chart.svg

Stage 6: Login

Once the default target is reached:

  • Console: systemd spawns getty login prompts on virtual terminals (sessions tracked by systemd-logind)
  • GUI: A display manager (gdm, lightdm, sddm) starts
  • SSH: sshd.service accepts remote connections

# View active login sessions
$ loginctl list-sessions

# View details of a session
$ loginctl session-status 1

/etc/fstab — Filesystem Table

The filesystem table tells the system what to mount at boot:

# <device>                                 <mountpoint>  <type>  <options>       <dump> <pass>
UUID=abc123-def456-...                     /             ext4    errors=remount-ro 0      1
UUID=789abc-012def-...                     /boot         ext4    defaults          0      2
UUID=345678-9abcde-...                     none          swap    sw                0      0
/dev/mapper/vg0-data                       /data         xfs     defaults,noatime  0      2
192.168.1.10:/export/shared                /mnt/nfs      nfs     defaults,_netdev  0      0

Fields:

  1. Device — UUID (preferred), LABEL, or device path
  2. Mount point — where to mount
  3. Type — filesystem type
  4. Options — mount options (defaults = rw,suid,dev,exec,auto,nouser,async)
  5. Dump — backup utility flag (0 = skip, nearly always 0)
  6. Pass — fsck order (0 = skip, 1 = root first, 2 = everything else)
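A sketch of splitting an fstab line into its fields with awk; the sample line stands in for a row of /etc/fstab.

```shell
# fstab fields are whitespace-separated, so awk's default field splitting
# maps them directly: $1 device, $2 mountpoint, $3 type, ..., $6 fsck pass.
fstab_line='/dev/mapper/vg0-data /data xfs defaults,noatime 0 2'
mnt=$(printf '%s\n' "$fstab_line" | awk '{print $2}')
pass=$(printf '%s\n' "$fstab_line" | awk '{print $6}')
echo "mountpoint=$mnt fsck_pass=$pass"
```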

Always test fstab changes before rebooting:

# After editing fstab, test ALL entries
$ sudo mount -a

# If it fails, FIX IT before rebooting. A bad fstab entry can prevent boot.

Boot Logging

journalctl for Boot Logs

# Logs from current boot
$ journalctl -b

# Logs from previous boot
$ journalctl -b -1

# Logs from two boots ago
$ journalctl -b -2

# List all recorded boots
$ journalctl --list-boots
-2 abc123... Tue 2026-03-17 08:00:00  Tue 2026-03-17 23:59:59
-1 def456... Wed 2026-03-18 08:00:00  Wed 2026-03-18 18:30:00
 0 789abc... Thu 2026-03-19 08:00:00  Thu 2026-03-19 10:45:00

Default trap: On Ubuntu (and some other distros), journald defaults to volatile storage — boot logs are lost on reboot. You only discover this when you need journalctl -b -1 to diagnose a boot failure and the data is gone. Create /var/log/journal/ to enable persistent storage before you need it:

$ sudo mkdir -p /var/log/journal
$ sudo systemd-tmpfiles --create --prefix /var/log/journal
$ sudo systemctl restart systemd-journald

dmesg — Kernel Ring Buffer

# View kernel messages (from current boot)
$ dmesg
$ dmesg -T          # Human-readable timestamps
$ dmesg --level=err # Only errors
$ dmesg -w          # Follow (like tail -f)

Summary

The Linux boot process is a chain of handoffs: firmware finds the bootloader, the bootloader loads the kernel and initramfs, the kernel starts init, and init brings up services. Understanding each stage means you can diagnose where in the chain a failure occurred and intervene at the right level — whether that's editing GRUB from its shell, rebuilding initramfs, or adjusting systemd targets. The boot process is also where performance problems hide: a 5-second NetworkManager-wait-online service is invisible until you look at systemd-analyze blame.

