S3 & Object Storage — Trivia & Interesting Facts

Surprising, historical, and little-known facts about S3 and object storage.


S3 was one of Amazon's first public cloud services, launching on March 14, 2006

Amazon S3 (Simple Storage Service) launched on March 14, 2006 — Pi Day. It was one of the first three AWS services (SQS had entered beta earlier; EC2 followed that August). At launch, it offered storage at $0.15 per GB per month; by 2025, standard storage had dropped to $0.023 per GB. S3 stored over 350 trillion objects by 2024, handling well over 100 million requests per second.


S3's durability of 99.999999999% (eleven 9s) means losing one of 10 million objects every 10,000 years

AWS claims S3 Standard provides 99.999999999% (11 nines) durability. Statistically, if you stored 10 million objects, you'd expect to lose one every 10,000 years. This durability is achieved by automatically replicating data across at least 3 Availability Zones. Despite this, S3 has had data availability issues (objects temporarily unreachable) — durability (data not lost) and availability (data accessible right now) are different guarantees.
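The arithmetic behind that claim is easy to verify:

```python
# Back-of-the-envelope check of the eleven-9s durability figure:
# an annual loss probability of roughly 1e-11 per object.
annual_loss_prob = 1 - 0.99999999999   # ~1e-11
objects = 10_000_000                   # 10 million stored objects

expected_losses_per_year = objects * annual_loss_prob   # ~1e-4 objects/year
years_per_lost_object = 1 / expected_losses_per_year    # ~10,000 years
```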


The S3 API became an accidental industry standard

Amazon's S3 API was designed for Amazon's own storage service, but its simplicity (PUT, GET, DELETE on objects addressed by bucket/key) made it so intuitive that it became the de facto standard for object storage everywhere. MinIO, Ceph, Wasabi, Backblaze B2, Google Cloud Storage, and dozens of other products implement S3-compatible APIs. No standards body declared it a standard — the market just adopted it.
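The conceptual model behind that API is small enough to sketch in a few lines. This is a toy in-memory illustration of the core verbs, not a real client — the entire data plane reduces to a key-value interface on (bucket, key) pairs:

```python
# Toy model of the S3 data plane: a flat map from (bucket, key) to bytes.
class ToyObjectStore:
    def __init__(self):
        self._objects = {}  # flat namespace: (bucket, key) -> bytes

    def put(self, bucket, key, data):   # maps to HTTP PUT
        self._objects[(bucket, key)] = data

    def get(self, bucket, key):         # maps to HTTP GET
        return self._objects[(bucket, key)]

    def delete(self, bucket, key):      # maps to HTTP DELETE
        self._objects.pop((bucket, key), None)

store = ToyObjectStore()
store.put("my-bucket", "photos/cat.jpg", b"\x89PNG...")
data = store.get("my-bucket", "photos/cat.jpg")
```

That this fits in a dozen lines is a big part of why so many vendors could clone the API.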


The February 2017 S3 outage was caused by a mistyped command during debugging

On February 28, 2017, an Amazon engineer ran a command to remove a small number of S3 index subsystem servers. A typo caused more servers to be removed than intended, taking down S3 in us-east-1 for nearly 4 hours. The outage cascaded across the internet — countless services that depended on S3 (including the AWS dashboard itself) went down. Amazon later added safeguards preventing rapid removal of subsystem capacity.


Eventually consistent reads in S3 confused developers for 14 years

From 2006 to December 2020, S3 was eventually consistent for overwrite PUTs and DELETEs — meaning you could update an object and immediately read back the old version. This drove developers crazy and spawned countless workarounds. In December 2020, AWS announced strong read-after-write consistency for all S3 operations at no extra cost, retroactively making 14 years of workaround code unnecessary.
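The hazard is easiest to see in a deliberately simplified model (this is an illustration of the symptom, not S3's actual replication design): a write acknowledged by one replica could be followed by a read served from a replica that hadn't applied it yet.

```python
# Toy model of the pre-2020 overwrite hazard: reads may hit a stale replica.
class EventuallyConsistentStore:
    def __init__(self):
        self.primary = {}
        self.replica = {}   # lags behind until sync() runs

    def put(self, key, value):
        self.primary[key] = value      # replica not updated yet

    def get(self, key):
        return self.replica.get(key)   # reads served by the lagging replica

    def sync(self):
        self.replica.update(self.primary)

s = EventuallyConsistentStore()
s.put("report.csv", "v1")
s.sync()
s.put("report.csv", "v2")        # overwrite PUT...
stale = s.get("report.csv")      # ...can still read back "v1"
s.sync()
fresh = s.get("report.csv")      # after propagation: "v2"
```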


MinIO brought the S3 API to on-premises storage

MinIO, created by Anand Babu Periasamy (AB Periasamy) in 2014, is a high-performance, S3-compatible object storage server written in Go. It's the most widely deployed S3-compatible storage for on-premises use, with over 48,000 GitHub stars. MinIO's erasure coding implementation provides data protection similar to RAID but at the object level, distributing data and parity shards across drives and nodes.
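The core idea of erasure coding can be shown with a single XOR parity shard: lose any one shard and the survivors rebuild it. Real systems, MinIO included, use Reed-Solomon codes that tolerate multiple lost shards; this toy shows only the principle.

```python
# Single-parity erasure coding sketch: any ONE lost shard is recoverable.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, n: int):
    size = -(-len(data) // n)                  # ceil division
    padded = data.ljust(size * n, b"\x00")     # pad so data splits evenly
    shards = [padded[i * size:(i + 1) * size] for i in range(n)]
    parity = reduce(xor_bytes, shards)         # XOR of all data shards
    return shards + [parity]

def reconstruct(shards, lost_index):
    # XOR of every surviving shard (parity included) rebuilds the lost one
    survivors = [s for i, s in enumerate(shards) if i != lost_index]
    return reduce(xor_bytes, survivors)

shards = encode(b"hello object storage!", 4)   # 4 data shards + 1 parity
rebuilt = reconstruct(shards, 2)               # pretend shard 2's drive died
```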


Object storage is fundamentally different from file storage — it has no directories

Object storage uses a flat namespace: bucket/key where the key can contain slashes (photos/2024/vacation/img001.jpg) but these aren't real directories. There's no rename operation, no append operation, and no file locking. Objects are immutable — you replace them entirely. This simplicity is what makes object storage infinitely scalable, but it confuses developers accustomed to filesystem semantics.
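The "folders" you see in S3 consoles and list results are synthesized at query time from key prefixes. A toy version of how a ListObjectsV2-style call groups keys under a delimiter into common prefixes (simplified; the real API also paginates):

```python
# Simulate prefix/delimiter listing over a flat key namespace.
def list_objects(keys, prefix="", delimiter="/"):
    contents, common_prefixes = [], set()
    for key in sorted(keys):
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # everything up to the first delimiter is reported as a "folder"
            common_prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return contents, sorted(common_prefixes)

keys = [
    "photos/2024/vacation/img001.jpg",
    "photos/2024/vacation/img002.jpg",
    "photos/2023/skiing/img001.jpg",
    "readme.txt",
]
contents, prefixes = list_objects(keys, prefix="photos/")
# prefixes -> ["photos/2023/", "photos/2024/"]; contents -> []
```

Nothing was ever "inside" a directory: the grouping exists only in the response.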


S3 Glacier is widely believed to store data on magnetic tape

AWS S3 Glacier (and Glacier Deep Archive) stores infrequently accessed data at extremely low cost ($0.00099/GB/month for Deep Archive). AWS has never officially disclosed the underlying medium, but the pricing and the 12-48 hour retrieval window for Deep Archive are widely read as pointing to tape libraries or similar cold media that must be physically located and mounted before a read can begin. If so, it is the modern incarnation of the tape library, abstracted behind an API.


Presigned URLs turn S3 into a content delivery mechanism without a web server

S3 presigned URLs allow anyone to download (or upload) a specific object for a limited time without needing AWS credentials. This simple feature has enabled entire architectures: direct browser uploads to S3 (bypassing application servers), time-limited download links for paid content, and secure file sharing without running any server infrastructure. Many developers don't realize this feature exists.
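In practice you would call an SDK helper such as boto3's `generate_presigned_url`; the sketch below shows roughly what happens under the hood with nothing but the standard library. The bucket, key, and credentials are dummies, and several SigV4 details (header canonicalization, session tokens) are simplified:

```python
# Minimal SigV4 query-string presigner for a GET, stdlib only (simplified).
import hashlib, hmac, urllib.parse
from datetime import datetime, timezone

def presign_get(bucket, key, access_key, secret_key,
                region="us-east-1", expires=3600):
    host = f"{bucket}.s3.{region}.amazonaws.com"
    now = datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"

    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(f"{k}={urllib.parse.quote(v, safe='')}"
                     for k, v in sorted(params.items()))

    # Canonical request: method, path, query, headers, signed headers, payload
    canonical = "\n".join([
        "GET", f"/{urllib.parse.quote(key)}", query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical.encode()).hexdigest(),
    ])
    # Derive the signing key: HMAC chain over date, region, service
    k = f"AWS4{secret_key}".encode()
    for part in (datestamp, region, "s3", "aws4_request"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    sig = hmac.new(k, to_sign.encode(), hashlib.sha256).hexdigest()

    return f"https://{host}/{urllib.parse.quote(key)}?{query}&X-Amz-Signature={sig}"

url = presign_get("my-bucket", "reports/q3.pdf", "AKIAEXAMPLE", "secretEXAMPLE")
```

The key point: signing is entirely local. No server round-trip is needed to mint the link, which is why a static page or a Lambda can hand out time-limited access to private objects.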


The "S3 bucket misconfiguration" data leak has affected nearly every major company

Misconfigured S3 buckets (public read access on private data) have caused data leaks at Verizon (14 million records), Dow Jones (2.2 million records), the US Department of Defense (1.8 billion social media posts), and hundreds of other organizations. The problem was so pervasive that AWS added multiple layers of protection: S3 Block Public Access (2018), bucket policy warnings in the console, and account-level public access blocks.
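The Block Public Access guard is a set of four boolean switches. Sketched below with boto3 (the API call is shown commented out since it needs real credentials; the bucket name is hypothetical):

```python
# The four Block Public Access switches, as passed to the S3 API.
public_access_block = {
    "BlockPublicAcls": True,        # reject new public ACLs on PUT
    "IgnorePublicAcls": True,       # treat existing public ACLs as private
    "BlockPublicPolicy": True,      # reject bucket policies granting public access
    "RestrictPublicBuckets": True,  # restrict access to buckets with public policies
}

# Applying it to one bucket (requires AWS credentials):
# import boto3
# boto3.client("s3").put_public_access_block(
#     Bucket="my-private-bucket",
#     PublicAccessBlockConfiguration=public_access_block,
# )
```

The same configuration can be applied account-wide, which is what finally made the public-bucket leak a deliberate choice rather than a default.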


Object versioning in S3 was designed after customers accidentally deleted production data

S3 versioning, which keeps all previous versions of every object, was introduced in 2010 after numerous customer incidents involving accidental deletions and overwrites. Combined with MFA Delete (requiring a hardware token to permanently delete versioned objects), it provides protection against both human error and ransomware. The storage cost of versioning is often underestimated — a frequently updated object can accumulate hundreds of versions.
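Why versioned buckets quietly grow is easiest to see in a toy model: every overwrite retains the old version, and a plain DELETE only stacks a delete marker on top. (An illustration of the billing behavior, not S3's implementation.)

```python
# Toy versioned bucket: overwrites accumulate, DELETE adds a marker.
import itertools

class VersionedBucket:
    def __init__(self):
        self.versions = {}           # key -> list of (version_id, data)
        self._ids = itertools.count(1)

    def put(self, key, data):
        self.versions.setdefault(key, []).append((next(self._ids), data))

    def delete(self, key):
        # a plain DELETE only hides the object behind a delete marker
        self.versions.setdefault(key, []).append((next(self._ids), None))

    def get(self, key):
        vid, data = self.versions[key][-1]
        return data                  # None means a delete marker is on top

    def stored_bytes(self, key):
        # billing counts every retained version, not just the latest
        return sum(len(d) for _, d in self.versions[key] if d is not None)

b = VersionedBucket()
for _ in range(100):
    b.put("config.json", b"x" * 1024)   # 100 overwrites of a 1 KB object
b.delete("config.json")
# the object now reads as deleted, yet ~100 KB remain billable
```

Lifecycle rules that expire noncurrent versions are the standard remedy for this accumulation.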