CI/CD — Trivia & Interesting Facts

Surprising, historical, and little-known facts about Continuous Integration and Continuous Delivery.


Continuous Integration was first described by Grady Booch in 1991

Grady Booch first used the term "continuous integration" in his 1991 book on object-oriented design. However, the practice as we know it was formalized by Kent Beck as part of Extreme Programming (XP) in 1997. Martin Fowler's influential article "Continuous Integration" (first published in 2000, revised in 2006) is what most practitioners actually read.


CruiseControl was the first CI server, released in 2001

ThoughtWorks released CruiseControl in 2001 as an open-source CI tool. It was Java-based, XML-configured, and painful to use by modern standards. But it established the core CI pattern: watch a repository, build on every commit, and report results. Every CI server since is fundamentally an iteration on this idea.


Jenkins was born from a Sun Microsystems employee's frustration

Kohsuke Kawaguchi created Hudson (later renamed Jenkins) in 2004 at Sun Microsystems because he was tired of manually checking whether his code broke the build. After Oracle acquired Sun in 2010 and claimed the Hudson trademark, the community voted in early 2011 to fork the project as Jenkins. The fork retained over 90% of the developer community.


Jenkins has over 1,900 plugins — and that's both its strength and weakness

Jenkins' plugin ecosystem grew to over 1,900 plugins, making it the most extensible CI system ever built. However, plugin compatibility issues, security vulnerabilities in third-party plugins, and "plugin dependency hell" became Jenkins' most significant operational challenges. Many teams migrated away specifically because of plugin maintenance burden.


Google deploys to production approximately 800,000 times per day

According to Google's DevOps research, Google performs roughly 800,000 deployments per day across all its services. This is possible because deployments are automated and incremental, and services are independently deployable. The average Google change takes about 15 minutes from commit to production.


Amazon deploys every 11.7 seconds

In a now-famous 2011 conference talk, Amazon revealed they performed a production deployment every 11.7 seconds on average. This amounted to over 7,000 deployments per day, and the number has only increased since. Each deployment affects only a single microservice, not the entire application.


The DORA metrics were created by a team of PhDs, not developers

The four DORA metrics (deployment frequency, lead time, change failure rate, mean time to recovery) were identified through rigorous academic research by Dr. Nicole Forsgren, Jez Humble, and Gene Kim. Their research, spanning 2014-2019, surveyed over 31,000 professionals and applied statistical methods that would satisfy academic peer review.


Trunk-based development is older than feature branches

Trunk-based development — committing directly to main — predates the widespread use of feature branches. Feature branches became popular with Git and GitHub around 2010-2012. Before that, most teams worked on a single shared branch because their VCS tools (SVN, CVS) made branching painful or impractical.


The "it works on my machine" problem drove containerized CI

Docker's 2013 launch directly addressed CI's most persistent problem: builds that passed locally but failed in CI (or vice versa) due to environment differences. Containerized CI runners — pioneered by GitLab CI and later adopted by GitHub Actions — eliminated environment drift by running every build in an identical container.


Blue-green deployment was named by Martin Fowler and Daniel Terhorst-North

The term "blue-green deployment" was coined by Martin Fowler and Daniel Terhorst-North to describe the pattern of maintaining two identical production environments and switching traffic between them. Despite the name suggesting colors matter, many teams use "green-blue" or different color schemes, and some use numbered environments instead.
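
Mechanically, the pattern reduces to a router holding a pointer to whichever environment is live. A minimal sketch, with illustrative names rather than any real router API:

```python
# Hedged sketch of blue-green switching; environment names and versions
# are illustrative, not a real load-balancer interface.
class BlueGreenRouter:
    def __init__(self):
        self.environments = {"blue": "v1.0", "green": "v1.0"}
        self.live = "blue"  # all traffic currently routed here

    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version):
        # New versions are deployed only to the idle environment,
        # so live traffic is never touched during the rollout.
        self.environments[self.idle()] = version

    def switch(self):
        # Cut traffic over; the old environment stays warm,
        # making rollback just another switch() call.
        self.live = self.idle()

router = BlueGreenRouter()
router.deploy("v2.0")  # green now runs v2.0, blue still serves traffic
router.switch()        # green goes live
print(router.live, router.environments[router.live])  # green v2.0
```

The key property is that rollback is symmetric with deployment: flipping the pointer back restores the previous version instantly.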


The first known CI/CD pipeline break due to a left-pad-style incident was in 2004

While npm's left-pad incident in 2016 is the most famous dependency-removal disaster, similar incidents happened earlier. In 2004, a maintainer of a popular Java logging library changed its API in a minor version bump, breaking thousands of CI builds simultaneously. This predated semantic versioning (SemVer wasn't published until 2010).


Flaky tests cost the industry billions of dollars annually

Google published a 2016 paper revealing that 1.5% of all their test runs were flaky (passing and failing non-deterministically). They estimated that engineers spent 2-16% of their time dealing with flaky tests. Extrapolated across the industry, flaky tests waste an estimated $3-5 billion in engineering productivity annually.
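
The standard detection approach is brute force: rerun a test many times without changing the code, and flag it if both outcomes appear. A toy sketch (the `flaky_test` function is a stand-in for a real test, not from any cited tooling):

```python
import random

# Stand-in for a test with a non-deterministic failure mode.
def flaky_test():
    return random.random() > 0.3  # passes roughly 70% of the time

def is_flaky(test, runs=50):
    # Rerun the unchanged test; seeing both True and False means
    # the outcome is non-deterministic, i.e. the test is flaky.
    results = {test() for _ in range(runs)}
    return len(results) > 1

random.seed(42)
print(is_flaky(flaky_test))    # True
print(is_flaky(lambda: True))  # False: deterministic tests are never flagged
```

Real systems (including Google's) add layers on top of this, such as quarantining flagged tests and tracking flake rates over time, but repeated execution is the core signal.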


Docker layer caching saves more CI time than any other optimization

Analysis across multiple CI platforms shows that Docker layer caching reduces average build times by 40-70%. Despite this, many teams don't configure it because each CI platform implements caching differently, and the configuration is often non-obvious. GitHub Actions didn't have built-in Docker layer caching until 2021.
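
The optimization itself is mostly about instruction ordering: Docker caches each layer and invalidates everything after the first changed one. A hedged sketch of cache-friendly ordering (file names and the base image are illustrative):

```dockerfile
FROM python:3.12-slim

# Copy only the dependency manifest first: this layer, and the expensive
# install below it, stay cached until requirements.txt itself changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code changes on nearly every commit, so it goes last;
# edits here invalidate only this cheap final layer.
COPY . .
CMD ["python", "app.py"]
```

Reversing the order (copying all source before installing dependencies) forces a full reinstall on every commit, which is exactly the time the cache is supposed to save.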


The "CI tax" is approximately 15-25% of a team's engineering time

Maintaining CI/CD pipelines — updating dependencies, fixing flaky tests, optimizing build times, managing secrets, and debugging pipeline failures — consumes an estimated 15-25% of a typical team's engineering effort. This "CI tax" is rarely tracked or budgeted for, making it one of engineering's largest hidden costs.


Merge queues solve a problem that costs large teams hours per day

Without a merge queue, developers on large teams (20+ contributors) frequently experience "merge races" — rebasing, re-running CI, and finding that someone else merged first. GitHub's merge queue feature, launched in 2023, batches and serializes merges, eliminating the problem that some teams estimated cost 2-3 engineer-hours per day.
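
The underlying mechanism is simple: test each queued change against main plus everything queued ahead of it, and land changes strictly in queue order. A toy sketch (the CI check is a stand-in, not GitHub's actual implementation):

```python
from collections import deque

def run_ci(state):
    # Stand-in for a real CI run against a candidate merge result.
    return "conflict" not in state

def merge_queue(main, queue):
    pending = deque(queue)
    while pending:
        change = pending.popleft()
        candidate = main + [change]  # change applied on top of the queue head
        if run_ci(candidate):
            main = candidate         # merges land serially, in queue order
        # on failure the change is evicted instead of blocking the queue
    return main

result = merge_queue(["base"], ["feat-a", "conflict", "feat-b"])
print(result)  # ['base', 'feat-a', 'feat-b']
```

Because every change is validated against the exact state it will merge into, the "someone merged first, re-run CI" race disappears by construction.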


Secret sprawl in CI/CD is a top security risk

A 2023 GitGuardian report found that CI/CD systems contained an average of 5.5 exposed secrets per repository across their dataset. Pipeline configuration files, build logs, and environment variables are the most common locations. Many teams don't realize that CI logs are often accessible to anyone with repo read access.


Pipeline-as-code was revolutionary because pipelines used to be GUI-configured

Before pipeline-as-code became the norm, CI/CD pipelines were configured through web UIs. This meant pipeline configurations weren't version-controlled, couldn't be code-reviewed, and were lost when the CI server was rebuilt. Travis CI's .travis.yml pioneered the in-repo approach in 2011, GitLab CI popularized it, and Jenkins followed with the Jenkinsfile in Jenkins 2.0 (2016). The shift to .yml files in repos was genuinely transformative.
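
To see why this mattered, compare a pipeline definition that lives in the repo with one locked inside a server UI. A minimal hedged sketch of a GitLab-style .gitlab-ci.yml (job names, images, and commands are illustrative, not from any real project):

```yaml
# Lives in the repo root, so it is versioned and code-reviewed
# alongside the code it builds.
stages:
  - test
  - build

test:
  stage: test
  image: python:3.12-slim
  script:
    - pip install -r requirements.txt
    - pytest

build:
  stage: build
  image: docker:24
  script:
    - docker build -t my-app:$CI_COMMIT_SHORT_SHA .
```

Every change to the pipeline now goes through the same review and history as a code change, and rebuilding the CI server loses nothing.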