Datacenter — Trivia & Interesting Facts¶
Surprising, historical, and little-known facts about datacenters.
The first purpose-built "datacenter" was arguably the 1946 room housing ENIAC¶
The Electronic Numerical Integrator and Computer (ENIAC), completed in 1946 at the University of Pennsylvania, required 1,800 square feet of floor space, weighed 30 tons, and consumed 150 kilowatts of power. It required its own dedicated climate control system — making it arguably the first purpose-built "datacenter." Modern datacenters can pack more compute power into a single 1U server than ENIAC could produce.
Hot aisle / cold aisle containment saves 20-40% on cooling costs¶
The simple idea of arranging server racks so all fronts face one direction (cold aisle) and all backs face the other (hot aisle) was formalized by Robert Sullivan of Upsite Technologies in 2000. Despite being one of the highest-ROI efficiency improvements available, many datacenters built before 2005 didn't use it. Adding physical containment (curtains or doors) to the hot or cold aisle can improve PUE by 0.2-0.4.
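PUE (Power Usage Effectiveness) is simply total facility power divided by power delivered to IT equipment, so the containment improvement above is easy to sanity-check. A minimal sketch, with hypothetical load figures:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """PUE = total facility power / power delivered to IT equipment.

    1.0 is the theoretical ideal (every watt reaches a server).
    """
    return total_facility_kw / it_kw

# Hypothetical figures: containment stops hot/cold air mixing, letting the
# cooling setpoint rise, so cooling draw falls while IT load stays constant.
before = pue(total_facility_kw=1800.0, it_kw=1000.0)  # 1.8
after = pue(total_facility_kw=1500.0, it_kw=1000.0)   # 1.5, an improvement of 0.3
```

A drop from 1.8 to 1.5 means 300 kW less overhead for the same IT load, which is where the 20-40% cooling savings comes from.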
Google's datacenter in Hamina, Finland uses seawater for cooling¶
Google's Hamina datacenter, opened in 2011, is built in a converted paper mill and uses water from the Gulf of Finland for cooling. The seawater passes through a heat exchanger (never touching the servers directly) and is returned to the sea after being cooled in a mixing basin. The facility achieves a PUE of approximately 1.12 — remarkably efficient for its size.
Microsoft sank a datacenter in the ocean and it had fewer failures than land-based ones¶
Project Natick, Microsoft's underwater datacenter experiment, placed a sealed container with 864 servers on the seafloor off Scotland in 2018. After two years, the underwater servers had one-eighth the failure rate of comparable land-based servers. The hypothesis is that the nitrogen atmosphere, absence of humidity fluctuations, and lack of human access (meaning no accidental bumps) contributed to the reliability improvement.
A single hyperscale datacenter can consume as much electricity as a small city¶
A typical hyperscale datacenter consumes 50-100+ MW of power — equivalent to powering 40,000-80,000 homes. Meta's Altoona, Iowa datacenter campus exceeds 200 MW. By some estimates, datacenters consumed 1-1.5% of global electricity in 2022, and AI training workloads are pushing this upward rapidly. Northern Virginia alone hosts more datacenter capacity than most countries.
The Tier classification system was created by the Uptime Institute in 1995¶
The four-tier datacenter classification (Tier I through Tier IV) was developed by the Uptime Institute, originally as internal research at a real estate firm. Tier IV (99.995% uptime, fault-tolerant) requires fully redundant everything — 2N power, 2N cooling, no single points of failure — and can sustain any single component failure without service interruption. Building a Tier IV datacenter costs roughly 3-4x per square foot compared to Tier I.
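The tier percentages translate directly into allowed downtime per year. A quick sketch using the availability targets commonly quoted for each tier (the Tier IV figure matches the 99.995% above):

```python
# Convert each tier's availability target into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

tier_availability = {  # commonly quoted targets, in percent
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

downtime_minutes = {
    tier: MINUTES_PER_YEAR * (1 - pct / 100.0)
    for tier, pct in tier_availability.items()
}
# Tier IV allows roughly 26 minutes of downtime per year;
# Tier I allows roughly 29 hours.
```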
The largest datacenter in the world is in China and covers 10.7 million square feet¶
The China Telecom-Inner Mongolia Information Park in Hohhot, China, spans approximately 10.7 million square feet across multiple buildings. For comparison, that's roughly 186 NFL football fields. The Range International Information Group datacenter in Langfang, China, previously held the record at 6.3 million square feet.
Raised floors in datacenters were originally borrowed from mainframe installations¶
The raised floor design used in traditional datacenters originated in the 1960s for IBM mainframe installations. The space under the floor tiles served as a plenum for cooled air distribution and cable routing. Modern hyperscale datacenters have largely abandoned raised floors in favor of overhead cooling and cable trays, as raised floors limit airflow capacity and add structural cost.
A misplaced comma in a BGP configuration once took out a major datacenter's connectivity¶
In 2019, a configuration error (essentially a typo) in BGP route handling led a major ISP to accept and re-advertise routes for a large chunk of the internet through a path that couldn't handle the traffic. The resulting congestion took down connectivity for thousands of services. This incident reinforced why datacenters insist on multi-homed internet connectivity with diverse providers.
Datacenter operators track "Mean Time to Innocence" as a real metric¶
When an outage occurs, every team in the datacenter scrambles to prove it's not their fault. "Mean Time to Innocence" (MTTI) — the time it takes a team to prove the problem isn't in their domain — is a tongue-in-cheek but genuinely tracked metric. Reducing MTTI (via better monitoring and observability) actually speeds up incident resolution because teams stop finger-pointing faster.
Cage-style colocation was pioneered by Exodus Communications in the 1990s¶
Exodus Communications, founded in 1994, pioneered the colocation model where multiple tenants share a single datacenter facility, each in their own locked cage. The company grew explosively during the dot-com boom, went public in 1998, and at its peak was valued at roughly $30 billion. It went bankrupt in 2001 after the dot-com crash and was acquired by Cable & Wireless for $800 million, a 97% loss in value.
Some datacenters use diesel generators that could power a submarine¶
Large datacenter backup generators can produce 2-3 MW each — the same class of diesel engine used in military submarines and container ships. A hyperscale datacenter campus might have 50-100 of these generators, with enough fuel stored on-site to run for 24-72 hours. The sound of a full generator test is audible from over a mile away.
Out-of-band management exists because of a 3 AM phone call problem¶
Before IPMI and dedicated management ports, the only way to fix a hung server was to physically visit the datacenter or call a remote-hands technician. OOB management was driven by a simple cost calculation: a BMC that adds roughly $50 to a server's build cost pays for itself the first time it avoids a $200 truck roll at 3 AM.
PXE boot was invented by Intel in 1999 and hasn't fundamentally changed¶
The Preboot Execution Environment specification was published by Intel in 1999. Over 25 years later, the core protocol — DHCP to get an IP, TFTP to download a bootloader — is essentially unchanged. The biggest evolution was adding HTTP boot support in UEFI, which took until roughly 2016 to become widely available.
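That DHCP-then-TFTP handshake is simple enough that a single dnsmasq instance can still serve it end to end. A minimal proxy-DHCP sketch (the interface name, subnet, and paths are assumptions for illustration):

```ini
# dnsmasq acting as proxy-DHCP + TFTP server for legacy PXE clients.
# Assumes an existing DHCP server already hands out leases on 192.168.1.0/24.
port=0                          # disable DNS; we only want DHCP/TFTP here
interface=eth0                  # hypothetical provisioning interface
dhcp-range=192.168.1.0,proxy    # proxy mode: supply boot info, not addresses
enable-tftp
tftp-root=/srv/tftp             # pxelinux.0 and its config live here
pxe-service=x86PC,"Network boot",pxelinux
```

The client broadcasts DHCP, merges the lease from the real DHCP server with the boot filename from dnsmasq, then pulls the bootloader over TFTP, exactly as the 1999 spec describes.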
Cobbler was the provisioning tool that made Red Hat deployments practical at scale¶
Cobbler, created by Michael DeHaan (who later created Ansible) in 2006, was the first tool that made Kickstart-based provisioning manageable for large fleets. Before Cobbler, provisioning 100 servers meant maintaining 100 slightly different Kickstart files by hand.
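Cobbler's core trick was rendering one template against per-host variables instead of hand-maintaining a file per machine. Cobbler itself uses Cheetah templates; this sketch shows the same idea with Python's stdlib `string.Template`, and the hostnames and IPs are made up:

```python
from string import Template

# One shared Kickstart template; $-placeholders are filled in per host.
KICKSTART = Template(
    "network --bootproto=static --ip=$ip --hostname=$hostname\n"
    "%packages\n"
    "@core\n"
    "%end\n"
)

# Hypothetical per-host variables: the only thing that differs per machine.
hosts = [
    {"hostname": "db-01", "ip": "10.0.0.11"},
    {"hostname": "db-02", "ip": "10.0.0.12"},
]

rendered = {h["hostname"]: KICKSTART.substitute(h) for h in hosts}
# 100 servers now means 100 small variable sets, not 100 divergent files.
```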
The serial console was the original out-of-band channel¶
Before IPMI and BMCs, serial consoles connected via terminal servers were the primary OOB access method. A 48-port Cyclades terminal server in every rack was standard practice in the early 2000s.
Cloud-init was created for Ubuntu on EC2 and now runs everywhere¶
Scott Moser created cloud-init at Canonical around 2009 to configure Ubuntu instances on Amazon EC2. It has since become the universal standard for VM and cloud instance initialization, supported by every major cloud provider.
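The mechanism is simple: the provider hands the instance a user-data blob at launch, and cloud-init applies it on first boot. A minimal `#cloud-config` sketch (the hostname and package choice are hypothetical):

```yaml
#cloud-config
# Applied by cloud-init on the instance's first boot.
hostname: web-01
packages:
  - nginx                          # installed via the distro's package manager
runcmd:
  - systemctl enable --now nginx   # runs once, at the end of first boot
```

The same file works unchanged across EC2, GCE, Azure, and OpenStack, which is precisely why it became the de facto standard.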
Zero-touch provisioning can go spectacularly wrong¶
In 2017, a misconfigured zero-touch provisioning system at a major hosting provider accidentally re-imaged 12 production database servers during a maintenance window. The DHCP scope for provisioning bled into the production VLAN. This incident is why most mature shops require explicit boot-order changes before provisioning.
The BMC runs its own entire operating system that most admins never see¶
A modern BMC runs a full Linux-based OS on an ARM processor with its own RAM, flash storage, and network stack — completely independent of the host server. Every server in your datacenter is actually two computers: the one you manage and the one that manages it. OpenBMC, started by Facebook in 2014, aimed to make this hidden OS auditable and open-source.