Disaster Recovery — Trivia & Interesting Facts

Surprising, historical, and little-known facts about disaster recovery.


93% of companies without DR that suffer a major data disaster are out of business within one year

A frequently cited University of Texas study found that 93% of companies that lose their data center for 10 or more days file for bankruptcy within one year. While the exact methodology is debated, the directional finding — that prolonged data loss is existentially threatening — has driven DR investments for decades.


The 9/11 attacks transformed disaster recovery from optional to mandatory

Before September 11, 2001, many financial institutions treated disaster recovery as a cost center. The destruction of computing facilities in the World Trade Center forced the financial industry to mandate geographically separated backup sites. Regulatory requirements like FINRA's business continuity rules were direct consequences of 9/11.


RPO and RTO were formalized by IBM in the 1970s

Recovery Point Objective (the maximum window of data you can afford to lose, expressed as time) and Recovery Time Objective (the maximum time you can afford to be down) were first formalized in IBM's disaster recovery planning methodology in the 1970s. These two metrics remain the foundation of every DR plan fifty years later, despite massive changes in technology.
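
As a minimal sketch of how these metrics are applied, the toy Python below checks a backup schedule and a measured recovery time against hypothetical RPO/RTO targets; every name and number is illustrative, not drawn from any standard.

```python
from datetime import timedelta

# Hypothetical targets; real values come from a business impact analysis.
RPO = timedelta(hours=1)   # maximum tolerable data loss, as a window of time
RTO = timedelta(hours=4)   # maximum tolerable downtime

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    # If backups run every N minutes, a failure just before the next
    # backup loses up to N minutes of data.
    return backup_interval

def meets_objectives(backup_interval: timedelta,
                     measured_recovery: timedelta) -> bool:
    return (worst_case_data_loss(backup_interval) <= RPO
            and measured_recovery <= RTO)

# Example: 30-minute backups, and the last DR drill took 3 hours.
print(meets_objectives(timedelta(minutes=30), timedelta(hours=3)))  # True
```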


Netflix's Chaos Monkey was initially considered reckless

When Netflix open-sourced Chaos Monkey in 2012 — a tool that randomly terminates production instances — many operations teams considered it dangerously irresponsible. Netflix argued that deliberately causing small failures built resilience against large ones. The approach is now mainstream, and "chaos engineering" is a recognized discipline.
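
Chaos Monkey itself is Netflix's tool; the Python sketch below only illustrates the core idea of terminating one randomly chosen instance per scheduled run. The instance IDs and the terminate function are hypothetical stand-ins for real cloud-provider calls.

```python
import random

# Toy stand-in for an instance group; a real tool queries the cloud provider.
instances = ["i-0a1", "i-0b2", "i-0c3", "i-0d4"]

def terminate(instance_id: str) -> None:
    # Placeholder: a real implementation calls the provider's API here.
    print(f"terminating {instance_id}")
    instances.remove(instance_id)

def chaos_monkey(probability: float = 0.25) -> None:
    # On each scheduled run, maybe kill one randomly chosen instance.
    # Real chaos tools add guardrails: per-service opt-in, business-hours
    # schedules, and a minimum healthy-instance floor.
    if instances and random.random() < probability:
        terminate(random.choice(instances))

chaos_monkey()
```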


60% of DR plans have never been tested

Multiple surveys consistently find that 50-60% of organizations have never tested their disaster recovery plans end-to-end. Of those that have tested, many discovered critical failures: expired credentials, missing documentation, network configurations that prevented failover, and backup systems that hadn't actually been running.


Google's multi-region Spanner database was designed around atomic clocks

Google Spanner, designed for global disaster recovery, synchronizes transactions across regions using GPS receivers and atomic clocks in every datacenter, exposed through an API called "TrueTime" that reports a bounded uncertainty interval rather than a single timestamp. This hardware-assisted approach to distributed consistency was so novel that the academic community was initially skeptical, until Google published the details in its 2012 Spanner paper.
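
The key trick TrueTime enables is "commit wait": take a commit timestamp from the top of the clock-uncertainty interval, then hold the transaction until the interval has moved entirely past that timestamp, so timestamp order matches real-time order. A toy Python sketch of the idea, using a fake fixed uncertainty in place of real atomic-clock bounds:

```python
import time

# Toy TrueTime: report an interval [earliest, latest] guaranteed to
# contain "true" time. Google derives tight bounds from GPS and atomic
# clocks; here we fake a fixed 7 ms uncertainty around the local clock.
UNCERTAINTY = 0.007  # seconds

def tt_now() -> tuple[float, float]:
    t = time.time()
    return (t - UNCERTAINTY, t + UNCERTAINTY)

def commit(txn_name: str) -> float:
    # Take the commit timestamp from the top of the uncertainty interval,
    s = tt_now()[1]
    # then "commit wait": block until the timestamp is definitely in the
    # past everywhere before making the transaction visible.
    while tt_now()[0] <= s:
        time.sleep(0.001)
    print(f"{txn_name} committed at {s:.6f}")
    return s

commit("txn-1")
```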


The cost of downtime for Fortune 1000 companies averages $100,000 per hour

IDC research estimates that unplanned downtime costs Fortune 1000 companies between $100,000 and $500,000 per hour, with some critical systems (trading platforms, e-commerce) exceeding $1 million per hour. These figures make DR investments that seem expensive — redundant infrastructure, regular testing — look like reasonable insurance.
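
A back-of-the-envelope calculation shows the insurance logic; all figures below are assumptions for illustration, not taken from the IDC research.

```python
# Hypothetical comparison of expected downtime losses vs. DR spend.
cost_per_hour = 100_000        # low end of the cited IDC range, in dollars
outage_hours_per_year = 8      # assumed unplanned downtime without DR
annual_dr_budget = 500_000     # redundant infrastructure plus testing

expected_annual_loss = cost_per_hour * outage_hours_per_year  # $800,000
print(expected_annual_loss > annual_dr_budget)  # True: DR is cheap insurance
```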


Hot/warm/cold site terminology dates from the mainframe era

The classification of DR sites as "hot" (running and synced, failover in minutes), "warm" (hardware ready, data periodically synced), or "cold" (empty facility, days to recover) originated in the mainframe era when physical hardware procurement could take months. In the cloud era, you can spin up a "hot" site in minutes, but the terminology persists.


The OVHcloud datacenter fire of 2021 destroyed customer data permanently

On March 10, 2021, a fire at OVHcloud's SBG2 datacenter in Strasbourg, France destroyed servers and backup systems co-located in the same building. Customers who relied on OVHcloud for both primary and backup storage lost data permanently. The incident is now a textbook example of why backups must be geographically separated from production.


GitOps-based DR can recover entire environments from a Git repository

Modern GitOps practices enable a form of DR where the entire infrastructure and application configuration is stored in Git. In theory, you can rebuild a complete production environment from scratch by pointing Argo CD or Flux at a Git repository. In practice, state (databases, user data) is still the hard part.
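
Below is a minimal sketch of the reconcile loop that tools like Argo CD and Flux run continuously, with placeholder functions standing in for the real Git and cluster APIs; nothing here is an actual Argo CD or Flux interface.

```python
# All function names here are hypothetical placeholders, not a real API.

def desired_state(repo_url: str, revision: str) -> dict[str, str]:
    # Placeholder: a real controller clones the repo and renders manifests.
    return {"deploy/web": "v42", "deploy/api": "v17"}

def live_state() -> dict[str, str]:
    # Placeholder: a real controller queries the cluster API.
    return {}  # an empty cluster, e.g. a freshly provisioned DR region

def reconcile(repo_url: str, revision: str = "main") -> None:
    desired, live = desired_state(repo_url, revision), live_state()
    for resource, version in desired.items():
        if live.get(resource) != version:
            print(f"apply {resource} -> {version}")  # rebuilt from Git alone
    # What this loop cannot restore: database contents and user data,
    # which still need real backups or replication.

reconcile("https://example.com/infra.git")
```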


The "one is none, two is one" principle comes from the U.S. Navy

The disaster recovery principle "two is one, one is none" (meaning a single backup isn't truly redundant, because any one copy can fail) originated as a U.S. Navy saying about equipment reliability. In IT, it manifests as the "3-2-1 backup rule": three copies of your data, on two different types of media, with one copy stored offsite.
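
As a toy illustration, the Python check below encodes the rule against a hypothetical backup inventory; the field names and values are assumptions.

```python
# Toy 3-2-1 check against a hypothetical backup inventory.
backups = [
    {"media": "disk",   "offsite": False},  # local NAS copy
    {"media": "tape",   "offsite": False},  # on-prem tape library
    {"media": "object", "offsite": True},   # cloud bucket in another region
]

def satisfies_3_2_1(copies: list[dict]) -> bool:
    return (len(copies) >= 3                            # three copies
            and len({c["media"] for c in copies}) >= 2  # two media types
            and any(c["offsite"] for c in copies))      # one copy offsite

print(satisfies_3_2_1(backups))  # True
```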