Server Hardware — Trivia & Interesting Facts¶
Surprising, historical, and little-known facts about server hardware.
The x86 architecture that runs most servers began life as a stopgap chip¶
Intel's 8086 processor (1978), the ancestor of every x86 server CPU, was designed as a quick stopgap while Intel's ambitious next-generation iAPX 432 slipped years behind schedule. IBM's decision to use the 8088 variant in the original IBM PC (1981) set x86 on a path from desktop to datacenter. AMD's x86-64 extension (2003), created after Intel bet its 64-bit future on the incompatible Itanium architecture instead of extending x86, is what modern servers actually run.
ECC RAM corrects single-bit errors on the fly, and servers without it are gambling¶
Error-Correcting Code (ECC) memory detects and corrects single-bit errors and detects (but cannot correct) double-bit errors. Google's 2009 field study ("DRAM Errors in the Wild") found that about 8% of DIMMs experience at least one correctable error per year. Most desktop computers ship with non-ECC RAM, so a single bit flip can silently corrupt data, which is one reason ECC memory is considered non-negotiable for production servers.
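The correction trick is easiest to see in a toy Hamming(7,4) code, sketched below in Python. Real DIMMs use a wider SECDED code (72 bits covering 64 data bits), so this illustrates the principle, not the production circuit:

```python
# Toy Hamming(7,4): corrects any single flipped bit in a 7-bit codeword.
# Same principle as ECC DIMMs, at a fraction of the scale.

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4              # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4              # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4              # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming74_decode(c):
    """Return the 4 data bits, correcting a single flipped bit if present."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 = clean, else 1-based error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1          # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[5] ^= 1                       # simulate a cosmic-ray bit flip
assert hamming74_decode(codeword) == data   # corrected transparently
```

The syndrome bits literally spell out the position of the flipped bit in binary, which is why correction is a single XOR and costs almost nothing in hardware.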
Modern server CPUs have more transistors than there are stars visible to the naked eye¶
AMD's EPYC 9004 series (Genoa) contains approximately 90 billion transistors across its chiplets; Intel's Xeon Sapphire Rapids is likewise estimated in the tens of billions across its four compute tiles. For comparison, only about 9,000 stars are visible to the naked eye across the entire sky under ideal dark conditions, roughly half of them at any one time. The transistor count in a single modern server CPU also exceeds the number of neurons in a mouse brain (approximately 70 million) by three orders of magnitude.
The Open Compute Project was started by Facebook to commoditize server hardware¶
In 2011, Facebook open-sourced their datacenter and server designs under the Open Compute Project (OCP). Their motivation was strategic: by making server hardware designs public, they could get manufacturers to compete on price for commodity designs rather than paying a premium for proprietary hardware from Dell, HP, and IBM. OCP designs are now used by Microsoft, Google, and other hyperscalers.
Hot-swap drive bays were invented because pulling drives from live servers was terrifying¶
Before hot-swap drive carriers, replacing a failed drive in a RAID array required shutting down the server, opening the case, and physically swapping the bare drive, a process fraught with risk (static discharge, wrong drive pulled, cable dislodged). SCA-2 connectors and hot-swap bays, standardized in the late 1990s, eliminated most of that risk. The spring-loaded carrier is one of the most human-centered pieces of engineering in server design.
Server power supplies are rated by 80 Plus efficiency certification levels¶
The 80 Plus certification program (launched 2004) rates power supply efficiency at defined load points: 80 Plus (80%), Bronze (82-85%), Silver (85-88%), Gold (87-90%), Platinum (90-94%), and Titanium (91-96%). Enterprise servers typically use Platinum- or Titanium-rated PSUs. The gap between a basic 80 Plus unit and a Titanium unit at 50% load is about 16 percentage points, which, across thousands of servers, can amount to millions of dollars in electricity costs annually.
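The fleet-scale claim is easy to sanity-check. A back-of-the-envelope sketch, with assumed figures (500 W DC load per server, $0.10/kWh, 10,000 servers) that you would swap for your own:

```python
# Annual electricity cost difference between a basic 80 Plus PSU
# (80% efficient at 50% load) and a Titanium unit (96% at the same point).
# Load, tariff, and fleet size below are illustrative assumptions.

def annual_kwh(dc_load_w, efficiency):
    wall_watts = dc_load_w / efficiency       # AC power drawn at the wall
    return wall_watts * 24 * 365 / 1000       # kWh per year

load_w, rate_per_kwh, servers = 500.0, 0.10, 10_000
basic = annual_kwh(load_w, 0.80)
titanium = annual_kwh(load_w, 0.96)
savings = (basic - titanium) * rate_per_kwh * servers
print(f"per-server difference: {basic - titanium:.0f} kWh/yr")
print(f"fleet savings: ${savings:,.0f}/yr")
```

At these assumptions a 10,000-server fleet saves on the order of $900k per year, so "millions" is plausible for hyperscale fleets or pricier electricity.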
NVMe SSDs bypass the SATA/SAS controller, talking directly to the CPU¶
NVMe (Non-Volatile Memory Express), introduced in 2011, communicates directly over PCIe lanes to the CPU, bypassing the traditional SATA/SAS host bus adapter (the drive still has its own onboard flash controller). This removes a major bottleneck: NVMe drives can deliver 7,000+ MB/s reads versus SATA III's ceiling of roughly 600 MB/s. NVMe also replaces AHCI's single queue of 32 commands with up to 65,535 queues of 65,535 commands each, which is why it scales so well under the parallel workloads servers actually run.
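Both bandwidth ceilings fall out of the line encodings. SATA III signals at 6 Gbit/s but uses 8b/10b encoding (8 payload bits per 10 line bits), while PCIe 4.0 uses the denser 128b/130b. Illustrative arithmetic only:

```python
# Payload bandwidth ceilings implied by line rate and encoding overhead.
SATA3_GBPS = 6.0                 # SATA III line rate, Gbit/s
PCIE4_GBPS_PER_LANE = 16.0       # PCIe 4.0 line rate per lane, Gbit/s

sata_mbs = SATA3_GBPS * 1e9 * (8 / 10) / 8 / 1e6            # MB/s
pcie4_x4_mbs = PCIE4_GBPS_PER_LANE * 1e9 * (128 / 130) / 8 / 1e6 * 4

print(f"SATA III payload ceiling:  {sata_mbs:.0f} MB/s")
print(f"PCIe 4.0 x4 payload ceiling: {pcie4_x4_mbs:.0f} MB/s")
```

The arithmetic lands on 600 MB/s for SATA III and just under 7,900 MB/s for a four-lane PCIe 4.0 drive, matching the figures quoted above before protocol overhead.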
Dual-socket servers exist because Moore's Law couldn't keep up with demand¶
When single CPUs couldn't provide enough performance, server designers added a second CPU socket. The two CPUs share memory access via interconnects (Intel QPI/UPI, AMD Infinity Fabric) in a NUMA (Non-Uniform Memory Access) architecture. NUMA-aware software runs dramatically faster than NUMA-unaware software — a single misconfigured NUMA setting can cut performance by 30-50%.
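The NUMA penalty can be sketched as a weighted latency mix. The latencies below are illustrative assumptions (roughly 90 ns local and 140 ns remote is a common two-socket ballpark, not a measurement of any specific system):

```python
# Rough model: effective memory latency as a mix of local and remote accesses.
LOCAL_NS, REMOTE_NS = 90.0, 140.0    # assumed two-socket ballpark figures

def effective_latency(remote_fraction):
    """Average latency when remote_fraction of accesses cross sockets."""
    return (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS

tuned = effective_latency(0.05)      # NUMA-aware: ~5% of accesses go remote
untuned = effective_latency(0.50)    # unaware/interleaved: ~half go remote
print(f"tuned: {tuned:.1f} ns, untuned: {untuned:.1f} ns "
      f"({untuned / tuned - 1:.0%} slower)")
```

Latency alone gives a ~24% hit in this sketch; real-world slowdowns can be larger because remote traffic also saturates the interconnect and pollutes caches.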
The 42U standard rack height accommodates exactly 42 units of equipment¶
A standard datacenter rack is 42U tall, where one rack unit (U) is 1.75 inches, giving 73.5 inches / 186.7 cm of usable space. The number 42 is coincidentally the answer to the "Ultimate Question of Life, the Universe, and Everything" in The Hitchhiker's Guide to the Galaxy, but the commonly cited reason is practical: 42U is roughly the tallest rack most adults can service without a step stool while still fitting under standard ceiling heights. Taller 45U and 48U racks exist for facilities that can accommodate them.
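The height figure is pure unit conversion from the EIA-310 definition of a rack unit (1.75 inches):

```python
# Rack height from the EIA-310 rack unit definition.
RACK_UNIT_IN = 1.75        # 1U = 1.75 inches
units = 42
height_in = units * RACK_UNIT_IN
print(f"{units}U = {height_in} in = {height_in * 2.54:.1f} cm")
```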
BMC firmware runs a separate computer inside every server¶
Every enterprise server contains a Baseboard Management Controller (BMC): an independent, typically ARM-based computer with its own CPU, RAM, storage, and network connection that monitors and manages the host server. The BMC is powered from the standby rail, meaning it runs even when the main server is powered off. It can power the server on and off, access the console, read sensors, and update firmware, making it essentially a backdoor that is also a critical management tool.
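Most modern BMCs expose the DMTF Redfish REST API alongside classic IPMI. A minimal sketch of the standard power-control call, where the host address is a placeholder and the code only builds the request (actually sending it requires a reachable BMC and credentials):

```python
# Sketch: building the standard Redfish ComputerSystem.Reset request.
# The BMC address below is a placeholder; no network traffic is sent here.
import json

def redfish_power_request(bmc_host, system_id="1", reset_type="On"):
    """Return (url, body) for the DMTF Redfish ComputerSystem.Reset action."""
    url = (f"https://{bmc_host}/redfish/v1/Systems/{system_id}"
           "/Actions/ComputerSystem.Reset")
    body = json.dumps({"ResetType": reset_type})
    return url, body

url, body = redfish_power_request("10.0.0.42", reset_type="GracefulShutdown")
print(url)
print(body)
```

Because the BMC answers this even when the host OS is down, the same call works on a hung or powered-off server, which is exactly why BMC networks are kept isolated from production traffic.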
Server fans can move enough air to inflate a small parachute¶
A fully loaded 1U server's fans move approximately 80-120 CFM (cubic feet per minute) of air. A rack of 42 such servers pushes 3,300-5,000 CFM through the hot aisle — roughly equivalent to a medium-sized whole-house fan. At full speed, server fans can produce over 70 dB of noise, which is why datacenter workers wear hearing protection during extended rack work.
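The rack-level figure is straightforward multiplication of the per-server numbers above (1 CFM is approximately 1.699 m³/h, for readers who think in metric):

```python
# Rack airflow from per-server CFM figures; conversion factor 1 CFM ≈ 1.699 m³/h.
per_server_cfm = (80, 120)       # low/high estimate per loaded 1U server
servers = 42
rack_cfm = tuple(c * servers for c in per_server_cfm)
rack_m3h = tuple(round(c * 1.699) for c in rack_cfm)
print(f"{servers} x 1U at {per_server_cfm} CFM -> {rack_cfm} CFM per rack")
print(f"≈ {rack_m3h} m³/h")
```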