Portal | Level: L2: Operations | Topics: Linux Fundamentals, systemd | Domain: Linux
Linux Boot Sequence - From Power-On to Full Boot¶
Scope¶
This document walks the Linux boot path from power applied to a usable system. It covers the common modern case first:
- x86_64 hardware
- UEFI firmware
- GRUB or systemd-boot
- Linux kernel + initramfs
- `systemd` as PID 1
It also explains where the path differs for:
- legacy BIOS/MBR systems
- encrypted/LVM/RAID root filesystems
- network/PXE boot
- non-`systemd` init systems
- Secure Boot
This is a mechanical sequence, not a distro-specific beginner guide.
Big Picture — Five Handoffs¶
Every Linux boot is a relay race of five handoffs:
| Layer | Question it answers |
|---|---|
| Firmware | What device/program do I boot next? |
| Bootloader | What kernel do I boot, with what parameters? |
| Kernel | How do I bring up the machine core and run early OS code? |
| Initramfs | How do I find and prepare the real root filesystem? |
| PID 1 | How do I bring up the actual operating system services? |
If you identify which question is failing, you usually identify the layer.
```mermaid
flowchart TD
    A["1. Firmware / UEFI\nInitialize hardware\nSelect boot device"] --> B["2. Bootloader / GRUB\nChoose kernel + initramfs\nAssemble command line"]
    B --> C["3. Kernel\nDecompress, init CPU/memory\nBring up core subsystems"]
    C --> D["4. Initramfs\nLoad modules, unlock storage\nMount real root filesystem"]
    D --> E["5. PID 1 / systemd\nBuild unit dependency graph\nActivate services → target reached"]
    style A fill:#4a6fa5,color:#fff
    style B fill:#6b8cae,color:#fff
    style C fill:#f0ad4e,color:#333
    style D fill:#d9534f,color:#fff
    style E fill:#5cb85c,color:#fff
```
1. Power-On and CPU Reset¶
1.1 Power is applied¶
When you press the power button, the board, PSU, chipset, CPU, RAM, and firmware logic begin their startup sequence.
At a high level:
- PSU stabilizes voltage rails
- platform logic releases reset signals when power is good
- CPU begins execution from a fixed reset vector defined by the architecture/platform
- execution begins in firmware, not in Linux
Linux is nowhere in the picture yet.
1.2 Very early hardware state¶
At this point the machine is not a normal computer yet. It is a pile of hardware barely brought to life:
- DRAM is not fully trained/usable yet
- storage controllers may not be initialized
- USB, PCIe, GPU, and network hardware may be partially or wholly unavailable
- there is no filesystem, no scheduler, no userspace, no processes
Firmware's first job is to turn the machine into something bootable.
2. Firmware Phase¶
There are two broad firmware paths.
2.1 Modern path: UEFI¶
UEFI firmware performs platform initialization, exposes firmware services, tracks boot entries in NVRAM, and launches EFI applications from a boot device.
In simplified form, UEFI does this:
- initializes CPU/chipset/platform
- trains memory
- enumerates buses/devices
- initializes enough drivers to access boot devices
- reads boot policy and boot entries from NVRAM
- chooses a boot option
- loads and starts an EFI executable
2.1.1 Practical UEFI phases¶
The full internal UEFI/PI flow is more elaborate, but the useful admin-level mental model is:
- SEC - temporary entry / earliest trusted firmware code
- PEI - early platform init, memory discovery/training
- DXE - driver execution phase; most firmware drivers/services come alive
- BDS - boot device selection; firmware chooses what to boot
- TSL/RT - transient/runtime handoff concepts after OS launch
You do not usually troubleshoot SEC/PEI/DXE directly unless you are doing firmware engineering or deep platform work.
2.1.2 UEFI boot selection¶
UEFI stores boot entries in variables like:
- `BootOrder`
- `BootNext`
- `Boot####`
A typical Linux host might have a boot entry pointing to something like:
- `\EFI\ubuntu\shimx64.efi`
- `\EFI\ubuntu\grubx64.efi`
- `\EFI\systemd\systemd-bootx64.efi`
- `\EFI\BOOT\BOOTX64.EFI` on removable media
These files live on the EFI System Partition (ESP), usually a FAT filesystem.
2.1.3 Secure Boot layer¶
If Secure Boot is enabled, firmware verifies signatures according to enrolled keys/policies before launching EFI binaries. On many Linux systems the chain is shim → GRUB → kernel, with each stage verifying the signature of the next.
If signatures or trust policy fail, boot may halt before Linux ever starts.
2.2 Legacy path: BIOS¶
Legacy BIOS is older and much simpler.
Typical flow:
- POST runs
- BIOS initializes minimal hardware
- BIOS chooses a boot device by configured order
- BIOS reads the first sector of the device into memory
- BIOS jumps to that code
That first sector is usually the MBR boot sector.
This path is more constrained:
- tiny initial boot code budget
- no EFI applications
- boot chain depends on MBR/post-MBR tricks or partition boot sectors
- no native UEFI NVRAM boot entry model
Modern Linux systems are usually UEFI unless they are older, intentionally configured otherwise, or running in compatibility/CSM mode.
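What makes that first sector "bootable" to the BIOS is the 0x55 0xAA signature in its last two bytes. A minimal sketch that builds a dummy 512-byte boot sector and checks the signature the way firmware conceptually does (the file is a throwaway temp image, not a real disk):

```shell
# check_mbr_signature FILE -> prints "bootable" or "not bootable"
check_mbr_signature() {
    sig=$(od -An -tx1 -j 510 -N 2 "$1" | tr -d ' \n')
    if [ "$sig" = "55aa" ]; then echo bootable; else echo "not bootable"; fi
}

# Build a dummy 512-byte boot sector: 510 zero bytes + 0x55 0xAA signature.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=510 count=1 2>/dev/null
printf '\125\252' >> "$img"     # octal 125 = 0x55, 252 = 0xAA

check_mbr_signature "$img"      # a valid signature prints "bootable"
```

Real BIOSes do essentially this check before jumping to the loaded sector; GPT disks keep a "protective MBR" in this slot so legacy tools do not mistake them for empty.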
3. Firmware Chooses the Boot Target¶
3.1 What firmware is really deciding¶
Firmware is not usually loading Linux directly. It is deciding what next-stage loader to run.
That could be:
- GRUB EFI binary
- systemd-boot EFI binary
- a distro-specific EFI stub path
- a rescue/recovery EFI program
- PXE/network boot program
- Windows Boot Manager
3.2 Typical failure points here¶
If the machine fails here, symptoms include:
- firmware menu loops
- "No bootable device"
- missing boot entries
- wrong disk priority
- broken/missing ESP
- Secure Boot signature refusal
- corrupted EFI binary
Useful tools later, once booted from rescue media:
- `efibootmgr -v`
- `lsblk -f`
- `blkid`
- `findmnt`
- `bootctl status`
- checking `/boot/efi` or `/efi`
4. Bootloader Phase¶
The bootloader is the bridge between firmware and the Linux kernel.
Its job is to:
- present boot entries/menu (sometimes hidden)
- choose kernel version / boot entry
- assemble kernel command line
- load kernel image into memory
- load initramfs/initrd into memory
- optionally load CPU microcode blobs
- pass boot parameters and machine information
- jump into the kernel entry path
4.1 Common Linux bootloaders¶
| Bootloader | Notes |
|---|---|
| GRUB 2 | Most common. Flexible, filesystem-aware, menus/scripting/rescue shell. |
| systemd-boot | UEFI-only. Simpler, EFI boot-entry style. |
| Syslinux/Extlinux | Lightweight, limited use cases. |
| U-Boot | Common in embedded/ARM. |
| EFI stub | Direct kernel boot without separate loader. |
| iPXE/PXE | Network boot. |
4.2 What the bootloader passes to Linux¶
At minimum, the bootloader usually passes:
- the kernel image
- the initramfs image
- the kernel command line
- architecture/platform-specific boot parameters
Common kernel parameters you will actually see:
- `root=`
- `ro` / `rw`
- `quiet`, `splash`
- `loglevel=`
- `console=`
- `systemd.unit=`
- `rd.luks.uuid=` / `rd.lvm.lv=` / `rd.md=` (dracut-style early boot parameters)
- `nomodeset`
- `panic=`
- `init=`
- `resume=`
Why the command line matters so much¶
The kernel command line is the contract between bootloader and early boot.
It tells the kernel and/or initramfs things like:
- where the real root filesystem is
- whether boot should start read-only first
- what console to log to
- whether to boot to rescue/emergency mode
- how to unlock encrypted storage
- whether to disable particular drivers/features
A wrong command line can break boot even if kernel and root disk are fine.
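Early boot code treats the command line as a flat list of words and scans it for the keys it cares about. A sketch of that parsing; the sample cmdline string here is made up for illustration (on a live system you would read `/proc/cmdline`):

```shell
# Extract the value of a key=value kernel parameter from a cmdline string.
get_param() {    # usage: get_param KEY "CMDLINE"
    for word in $2; do
        case "$word" in
            "$1"=*) echo "${word#*=}"; return 0 ;;
        esac
    done
    return 1
}

# Hypothetical command line, shaped like the contents of /proc/cmdline:
cmdline="BOOT_IMAGE=/vmlinuz-6.1.0 root=UUID=1234-abcd ro quiet console=ttyS0,115200"

get_param root "$cmdline"       # -> UUID=1234-abcd
get_param console "$cmdline"    # -> ttyS0,115200
```

This is why a typo in `root=` is fatal: nothing validates the value until the initramfs tries, and fails, to find that device.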
4.3 Microcode loading¶
On many systems the bootloader also loads an early CPU microcode image.
Why:
- CPU errata fixes
- security mitigations
- stability fixes
This usually happens before or alongside kernel loading, depending on distro tooling.
4.4 Bootloader failure points¶
Typical symptoms:
- GRUB prompt only
- boot menu missing entries
- kernel loads but wrong root device
- "file not found" for kernel/initramfs
- bad kernel parameters
- missing initramfs
- broken BLS entries or stale GRUB config
5. Kernel Entry and Early Kernel Startup¶
Once the bootloader transfers control, Linux begins.
This is where people often mentally blur everything together. Do not. The kernel phase and userspace phase are different universes.
5.1 Kernel image entry¶
On x86, the boot protocol defines how the bootloader loads setup code and boot parameters for the kernel.
The kernel is not immediately a fully running multitasking OS. First it must:
- gain control from the bootloader
- set up CPU execution mode as required
- establish early memory layout
- parse boot parameters
- decompress the compressed kernel image if needed
- initialize core architecture code
5.2 Decompression and relocation¶
Typical compressed kernel images (bzImage, etc.) contain compressed payloads. Early boot code unpacks the real kernel image and transfers control to it.
This happens before ordinary userspace, before root is mounted, and before your usual tools exist.
5.3 Early architecture setup¶
The kernel then performs architecture-specific initialization, such as:
- CPU feature detection
- interrupt setup
- page tables / memory management setup
- ACPI or Device Tree discovery
- APIC / timer setup
- NUMA topology discovery
- SMP preparation
- early console support
5.4 Core kernel subsystems initialize¶
This includes pieces like:
- slab/slub allocators
- scheduler
- workqueues
- VFS core
- block layer
- driver model / bus enumeration
- security framework hooks
- cgroups basics
- early device drivers compiled into the kernel
5.5 initcalls and driver initialization¶
Built-in kernel code initializes through ordered initcall stages. This is where a lot of drivers and subsystems come online.
Examples:
- storage controller drivers
- PCI enumeration
- filesystems built into the kernel
- networking core
- console/TTY pieces
- framebuffer/graphics basics
If your storage controller or root filesystem driver is not built into the kernel, it must come from the initramfs later.
5.6 Boot logs during this stage¶
This is the source of the classic kernel messages you see in:
- `dmesg`
- `journalctl -k -b`
Examples of things logged here:
- CPU model/features
- memory map
- ACPI tables
- detected disks/controllers
- driver probe successes/failures
- filesystems registered
- kernel command line
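On a live system you would filter the real log with `journalctl -k -b -p err` or `dmesg --level=err`. A self-contained sketch of the same idea over a fabricated dmesg-style excerpt (the log lines are invented for illustration):

```shell
# Grep a dmesg-style log for probe failures; the excerpt below is fabricated.
log='[    0.812345] ahci 0000:00:17.0: version 3.0
[    1.204567] sd 0:0:0:0: [sda] Attached SCSI disk
[    2.991122] nvme nvme0: probe failed with error -19
[    3.100000] EXT4-fs (sda2): mounted filesystem with ordered data mode'

printf '%s\n' "$log" | grep -iE 'fail|error'
```

A driver probe failure at this stage often explains a later "cannot find root" failure in the initramfs.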
6. initramfs / initrd Phase - Early Userspace¶
This is one of the most important stages in real-world Linux boot.
6.1 What initramfs is¶
The kernel includes or is given an initramfs image - an early userspace archive unpacked into a temporary root filesystem in RAM.
That image usually contains:
- a tiny userspace `/init`
- shell/busybox or systemd/dracut tools
- kernel modules needed before the real root is mounted
- scripts/hooks for storage discovery and activation
The point of initramfs is simple:
The kernel usually cannot mount the final root filesystem by itself in every real-world configuration, so early userspace does the messy discovery/setup work first.
6.2 Why initramfs exists¶
Without initramfs, the kernel would need every possible storage and root-mounting scenario baked in up front.
Real systems often need early logic for:
- LUKS decryption
- LVM activation
- MD RAID assembly
- multipath setup
- NVMe or unusual controller modules
- USB root devices
- NFS/iSCSI/network root
- resume from hibernation setup
- filesystem checks and policy logic
6.3 What happens inside initramfs¶
A common sequence is:
- kernel unpacks initramfs into temporary rootfs in RAM
- kernel executes `/init` from that environment
- `/init` loads required kernel modules
- device discovery occurs
- uevents/udev may populate `/dev`
- encrypted devices are unlocked if needed
- RAID/LVM are assembled/activated if needed
- real root filesystem is found
- real root is mounted
- control switches from initramfs root to the real root
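The sequence above is essentially what a minimal `/init` script does. The sketch below only *prints* the commands (a dry run via the `run` wrapper), since the real steps need root and a real initramfs environment; the module and device names are hypothetical:

```shell
#!/bin/sh
# Dry-run sketch of a minimal initramfs /init: "run" echoes instead of executing.
run() { echo "+ $*"; }

run mount -t proc     proc     /proc
run mount -t sysfs    sysfs    /sys
run mount -t devtmpfs devtmpfs /dev

run modprobe nvme    # storage controller module (hypothetical)
run modprobe ext4    # root filesystem module (hypothetical)

run cryptsetup open /dev/nvme0n1p2 cryptroot    # only if root is encrypted
run mount -o ro /dev/mapper/cryptroot /sysroot  # mount real root read-only

run exec switch_root /sysroot /sbin/init        # hand off to real PID 1
```

Dracut and initramfs-tools generate far more elaborate versions of this, but the skeleton is the same: mount pseudo-filesystems, load modules, assemble storage, mount root, switch.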
6.4 switch_root / pivot_root¶
Once the real root is ready, early userspace hands off.
Common mechanisms:
- `switch_root`
- `pivot_root`
Conceptually:
Temporary RAM-based root
-> mount real root somewhere
-> move critical mounts (/proc, /sys, /dev, etc.) as needed
-> replace old root with real root
-> exec real init (PID 1) from the real root
If this fails, you land in:
- a dracut emergency shell
- BusyBox shell
- initramfs shell
- panic, depending on setup
6.5 Common initramfs generators¶
- dracut - common on RHEL/Fedora family and others
- initramfs-tools - common on Debian/Ubuntu family
- mkinitcpio - common on Arch family
- custom embedded tooling in specialized systems
They produce functionally similar results through different ecosystems.
6.6 Typical breakage in this phase¶
This stage breaks frequently in real life, and the failures are usually specific and fixable rather than evidence that "Linux is broken."
Usual causes:
- wrong `root=` UUID/path
- missing storage controller module
- missing filesystem module
- missing LUKS/LVM/RAID logic in initramfs
- stale initramfs after kernel/storage changes
- broken `/etc/fstab`, crypttab, mdadm, or LVM metadata assumptions
- USB/NVMe enumeration timing issues
- remote root network not available
Classic symptoms:
- `dracut-initqueue timeout`
- `ALERT! UUID=... does not exist`
- dropped to emergency shell
- `VFS: Unable to mount root fs`
- kernel panic after trying to start init
7. Mounting the Real Root Filesystem¶
Once early userspace discovers the true root device, it mounts it.
7.1 What “root filesystem” really means¶
This is the filesystem that becomes / for the live OS.
Examples:
- ext4 on `/dev/sda2`
- XFS on an LVM LV
- Btrfs subvolume on NVMe
- encrypted LUKS container containing LVM containing XFS/ext4
- NFS root
- iSCSI root
7.2 Usual mount order concerns¶
Before the full system is up, boot code typically needs at least these pseudo-filesystems available:
- `/proc`
- `/sys`
- `/dev`
- `/run`
Then the real root can host the normal system tree:
- `/etc`, `/usr`, `/var`, `/home`, `/boot`
- additional mounts from `/etc/fstab`
7.3 Read-only then remount read-write¶
Many systems initially mount root read-only, then later remount read-write after checks/setup. This protects integrity during early boot and allows filesystem checking or recovery policy.
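You can see whether `/` is still read-only by looking at its mount options. A sketch that parses a `/proc/mounts`-style line; the sample text is fabricated so the snippet runs anywhere (on a live system you would feed it `/proc/mounts` itself):

```shell
# Print the mount options of / from /proc/mounts-style input.
root_opts() {
    awk '$2 == "/" { print $4 }'
}

# Fabricated sample; on a live system: root_opts < /proc/mounts
sample='/dev/mapper/vg0-root / ext4 ro,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec 0 0'

printf '%s\n' "$sample" | root_opts    # -> ro,relatime
```

If `/` shows `ro` long after boot, the remount step failed, often because a filesystem check found errors.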
8. Starting PID 1¶
This is the handoff from early boot plumbing to the operating system proper.
8.1 What PID 1 is¶
PID 1 is the first userspace process in the real root environment.
Traditionally it is:
- `systemd`
- or historically `sysvinit`, `OpenRC`, `runit`, etc.
PID 1 has special semantics:
- parent of orphaned processes
- responsible for system startup ordering
- central shutdown target
- often responsible for service supervision and dependency management
If PID 1 dies early, the system is usually effectively dead.
8.2 How PID 1 is chosen¶
Usually the kernel tries standard init paths such as:
- `/sbin/init`
- `/etc/init`
- `/bin/init`
- `/bin/sh`
This can be overridden with the kernel parameter:
init=/path/to/program
That is useful for debugging and rescue.
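For example, from the GRUB menu you can press `e` and append `init=` to the `linux` line; the kernel version, UUID, and paths below are illustrative only:

```text
linux /vmlinuz-6.1.0-generic root=UUID=1234-abcd ro quiet init=/bin/bash
```

Booting with `init=/bin/bash` drops you into a bare root shell with no services running; remount `/` read-write (`mount -o remount,rw /`) before editing files, and sync before rebooting.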
9. systemd Boot Sequence¶
This is the dominant modern Linux userspace boot path.
9.1 What systemd does first¶
When systemd becomes PID 1, it:
- reads unit files
- loads generator output
- builds a dependency graph
- determines the default target
- starts activating units in parallel where possible
- tracks ordering dependencies where required
This is why systemd boots faster than old strictly serial init systems in many cases: it does parallel activation with dependency awareness, not blind one-after-another script execution.
9.2 Generators and early translation¶
Before ordinary units fully come alive, systemd generators can synthesize units dynamically from config/state.
Common examples:
- `/etc/fstab` -> mount units
- `crypttab` -> cryptsetup-related units
- kernel command line -> special boot behavior
These generated units become part of the boot dependency graph.
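systemd derives each mount unit's name from its mount point, so `/var/log` becomes `var-log.mount`. The real escaping is done by `systemd-escape -p --suffix=mount`; the sketch below is a simplified version that handles plain paths only:

```shell
# Simplified mapping of a mount point to a systemd mount unit name.
# Real escaping (systemd-escape) also handles '-', '.', and non-ASCII bytes.
path_to_unit() {
    p="${1#/}"                                   # drop the leading slash
    [ -z "$p" ] && { echo "-.mount"; return; }   # "/" is the special -.mount
    echo "$(echo "$p" | tr / -).mount"
}

path_to_unit /var/log    # -> var-log.mount
path_to_unit /           # -> -.mount
```

This naming is why `systemctl status var-log.mount` works for an fstab entry you never wrote a unit file for.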
9.3 Conceptual target progression¶
A simplified modern systemd boot path looks like this:
initrd.target (inside initramfs, if systemd is used there)
-> switch to real root
-> sysinit.target
-> basic.target
-> multi-user.target or graphical.target
That simplified chain hides many details, but it is the right mental skeleton.
9.4 Important targets¶
| Target | What it means |
|---|---|
| `sysinit.target` | Foundation init: mounts, swap, device setup, tmpfiles, udev, journald basics |
| `basic.target` | OS plumbing ready; normal services can start |
| `multi-user.target` | Non-graphical multi-user (≈ runlevel 3): services, networking, login prompts |
| `graphical.target` | Multi-user + display manager (≈ runlevel 5): GUI login |
| `local-fs.target` | Local filesystems mounted |
| `network.target` | Network stack basic availability |
| `network-online.target` | Stronger: network configured and usable |
| `rescue.target` | Limited admin environment |
| `emergency.target` | Minimal shell, almost nothing else |
| `default.target` | What the system boots into; usually a symlink to multi-user or graphical |
9.5 Services, sockets, mounts, devices, timers¶
systemd is not just starting services.
It is activating different unit types:
`.service`, `.socket`, `.mount`, `.automount`, `.swap`, `.device`, `.target`, `.path`, `.timer`, `.slice`, `.scope`
This matters because boot may appear blocked by:
- a mount unit waiting on a device
- a service waiting on `network-online.target`
- an automount dependency
- a failed generator-produced unit
9.6 Parallelism and ordering¶
Two things are distinct in systemd:
- ordering - what must happen before/after what
- requirement - what depends on what existing successfully
This distinction is critical.
A service can be ordered after another service without requiring it, or require another service without rigid serialization beyond what dependencies enforce.
This is why boot graphs can look non-linear.
9.7 What “full boot” usually means under systemd¶
In practical admin terms, the system is fully booted when the desired default target is reached and the services you care about are active.
That may mean:
- `multi-user.target` active and SSH/login working
- or `graphical.target` active and display manager/login screen ready
- or some appliance-specific target active
`systemd-analyze` and `systemctl is-system-running` help assess this.
Possible states include:
- `starting`
- `running`
- `degraded`
- `maintenance`
- others depending on condition
A system can be "booted" but degraded because some non-critical unit failed.
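A monitoring or health-check script typically branches on that state. On a live system the value comes from `state=$(systemctl is-system-running)`; here it is a hardcoded sample so the sketch runs anywhere:

```shell
# Classify a systemd system state; the input is a sample value instead of
# the live `systemctl is-system-running` output so this runs anywhere.
classify_state() {
    case "$1" in
        running)     echo "healthy" ;;
        degraded)    echo "booted, but some unit failed" ;;
        starting)    echo "still activating units" ;;
        maintenance) echo "rescue or emergency environment active" ;;
        *)           echo "unexpected state: $1" ;;
    esac
}

classify_state degraded    # -> booted, but some unit failed
```

Note that `systemctl is-system-running` exits nonzero for anything other than `running`, so a plain `&&` chain on it will treat a merely degraded system as down.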
10. Login Availability¶
10.1 Text login path¶
If the system boots to a text environment, getty services provide login prompts on local consoles.
Examples:
- `getty@tty1.service`
- serial console gettys such as `serial-getty@ttyS0.service`
At this point you can log in locally.
10.2 Remote login path¶
If network and sshd are up, remote shell access becomes available.
Note the subtlety:
- the OS may have reached `multi-user.target`
- but your operational definition of "usable" may still be "SSH is accepting connections"
Those are not always the same instant.
10.3 Graphical login path¶
If the default target is graphical and the display stack is healthy, the display manager starts and presents a login screen.
Typical path:
graphical.target
-> display-manager.service
-> gdm/sddm/lightdm
-> greeter/login screen
-> user session manager / desktop session
At that point the system is fully booted for ordinary desktop use.
11. What Changes on Legacy SysV-style Init Systems¶
Not every Linux system uses systemd.
If PID 1 is sysvinit-style instead:
- kernel still boots the same way
- initramfs stage is still conceptually similar
- PID 1 then follows `/etc/inittab` or distro-specific init logic
- rc scripts run by runlevel / sequence naming conventions
- boot is more script-serial and less dependency-graph driven
Rough shape:
kernel
-> initramfs
-> /sbin/init
-> read inittab
-> enter runlevel
-> run rc scripts in order
-> start getty/display manager/services
The early boot mechanics are mostly the same. The main difference is the service orchestration model after PID 1 starts.
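The serial ordering comes straight from script names: an `S` (start) or `K` (kill) prefix plus a two-digit sequence number, executed in lexical glob order. A sketch that fakes an `rc3.d`-style directory and shows the resulting start order (the service names are made up):

```shell
# Fake an rc3.d directory; SysV-style rc runs S* scripts in lexical order.
rcdir=$(mktemp -d)
for s in S10network S20syslog S55sshd S99local; do
    printf '#!/bin/sh\necho start %s\n' "${s#S??}" > "$rcdir/$s"
    chmod +x "$rcdir/$s"
done

for script in "$rcdir"/S*; do
    "$script" start    # rc passes "start" here (and "stop" to K* scripts)
done
```

This is the whole dependency model: no graph, just numbers. Getting ordering right means renumbering scripts, which is exactly the pain systemd's dependency graph replaced.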
12. Important Variants and Edge Cases¶
12.1 Encrypted root¶
Extra boot work appears before root mount:
bootloader
-> kernel + initramfs
-> initramfs prompts or obtains key
-> unlock LUKS device
-> activate LVM or mount filesystem within unlocked container
-> switch_root
Failure here often looks like:
- passphrase prompt never appears
- keyboard drivers missing early
- TPM/clevis/keyscript problems
- UUID mismatch
- crypttab/initramfs stale
12.2 LVM root¶
Early userspace must scan volume groups and activate logical volumes before mounting root.
12.3 RAID root¶
Early userspace must assemble arrays before root can be mounted.
12.4 Network root¶
For NFS/iSCSI/PXE boot:
- firmware or PXE ROM may fetch a network bootstrap image
- bootloader and/or kernel are loaded over network
- initramfs brings up networking early
- real root is mounted over network
This adds obvious failure points:
- DHCP
- routing
- link negotiation
- server availability
- storage target availability
12.5 Embedded and ARM systems¶
Differences may include:
- U-Boot instead of GRUB
- Device Tree usage is common
- different kernel image formats
- boot media may be SPI flash, eMMC, SD, network, etc.
The same abstract model still holds.
12.6 Containers¶
Containers do not perform a full hardware boot. They start in an already-running kernel and usually skip firmware/bootloader/kernel-init phases entirely.
Do not confuse container startup with system boot.
13. Failure Diagnosis and Observability¶
| Stage | Failure symptoms | Likely layer | What to inspect |
|---|---|---|---|
| Firmware | No POST, no boot device, Secure Boot refusal | Motherboard/firmware/ESP | Firmware UI, efibootmgr -v, bootctl status |
| Bootloader | GRUB rescue shell, missing entries, wrong kernel | ESP, GRUB config, UUID drift | GRUB menu, boot entry config, ESP contents |
| Kernel | Early panic, hang after initrd load, no console | Kernel image, cmdline, drivers | dmesg, journalctl -k -b |
| Initramfs | Dracut shell, "cannot find root", VFS panic | Stale initramfs, bad root=, missing modules | Emergency shell, /proc/cmdline |
| Userspace | Emergency/rescue target, services fail, no login | Unit deps, mount failures, service config | systemctl --failed, systemd-analyze blame |
Boot time delays come from: firmware (memory training, RAID init), loader (GRUB timeout, slow ESP), kernel (driver probe, storage timeout), or userspace (network-online.target wait, failed mounts, slow services). Use systemd-analyze time to see the breakdown.
Key debug commands (once you have shell access):
- `cat /proc/cmdline`
- `journalctl -b`, `journalctl -k -b`, `dmesg -T`
- `systemctl is-system-running`, `systemctl --failed`
- `systemd-analyze time`, `systemd-analyze blame`, `systemd-analyze critical-chain`
- `efibootmgr -v`, `bootctl status`, `lsblk -f`, `findmnt -A`
14. Common Misconceptions¶
14.1 “GRUB boots Linux”¶
Not exactly. GRUB loads the kernel and hands off. The kernel boots Linux.
14.2 “systemd is the bootloader”¶
No. systemd is usually PID 1. systemd-boot is a boot manager/loader. Different things.
14.3 “The kernel mounts root by itself”¶
Sometimes in simple cases, but in many real systems the initramfs does the important discovery/setup work first.
14.4 “When I see a login prompt, boot is done”¶
Operationally maybe. Technically some background units may still be activating, and the system may even be degraded.
14.5 “Boot problems are all kernel problems”¶
No. Plenty are firmware, ESP, bootloader config, initramfs composition, mount dependency, or service-order problems.
15. Fast Troubleshooting Heuristic¶
When boot breaks, ask these in order:
- Did firmware find something bootable?
- Did the bootloader load the intended kernel/initramfs?
- Did the kernel start and print logs?
- Did initramfs find and mount the real root?
- Did PID 1 reach the expected target?
That sequence cuts through most boot-debug noise.
16. Summary¶
A full Linux boot is a relay race: Firmware → Bootloader → Kernel → Initramfs → PID 1. Each layer solves one problem and hands off to the next. Reaching multi-user.target or graphical.target is the practical definition of full boot.
17. References¶
Official and primary references used to ground this document:
- Linux x86 boot protocol: https://docs.kernel.org/arch/x86/boot.html
- Kernel initramfs/rootfs behavior: https://docs.kernel.org/filesystems/ramfs-rootfs-initramfs.html
- Kernel initrd notes: https://docs.kernel.org/admin-guide/initrd.html
- Kernel command-line parameters: https://docs.kernel.org/admin-guide/kernel-parameters.html
- systemd bootup: https://www.freedesktop.org/software/systemd/man/bootup.html
- systemd special targets: https://www.freedesktop.org/software/systemd/man/systemd.special.html
- systemd-boot: https://www.freedesktop.org/software/systemd/man/systemd-boot.html
- UEFI Boot Manager: https://uefi.org/specs/UEFI/2.11/03_Boot_Manager.html
- UEFI PI overview: https://uefi.org/specs/PI/1.8A/V2_Overview.html