DORA Metrics — Trivia & Interesting Facts¶
Surprising, historical, and little-known facts about DORA metrics.
DORA research started because DevOps was all opinion and no data¶
Nicole Forsgren, Jez Humble, and Gene Kim launched the State of DevOps research program in 2014 because the DevOps community was dominated by anecdotes and vendor marketing with almost no rigorous data. Their surveys of tens of thousands of professionals over multiple years produced the first statistically rigorous evidence that DevOps practices improve both speed and stability.
The four DORA metrics were chosen from hundreds of candidates¶
The final four metrics — Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Mean Time to Restore — were selected through rigorous statistical analysis from a much larger initial set. Forsgren, who holds a PhD in Management Information Systems, used cluster analysis to identify these four as the key indicators that differentiate elite, high, medium, and low performers.
Elite performers deploy 973x more frequently than low performers¶
The 2021 State of DevOps report found that elite performers deploy on demand (multiple times per day), while low performers deploy less than once every six months. This 973x gap in deployment frequency was accompanied by 6,570x faster lead times: under one hour for elite performers versus more than six months for low performers.
Google acquired DORA in 2018 for an undisclosed amount¶
Google acquired DORA (DevOps Research and Assessment) in December 2018, bringing Nicole Forsgren and the research team into Google Cloud. This gave DORA access to Google's resources for larger-scale research while Google gained credibility in the DevOps space. The research continues as the annual "Accelerate State of DevOps Report" published under Google Cloud's brand.
The book "Accelerate" proved that speed and stability are not tradeoffs¶
The 2018 book "Accelerate" by Forsgren, Humble, and Kim presented the core DORA finding that shattered a decades-old assumption: teams that deploy more frequently also have lower change failure rates and faster recovery times. Speed and stability are not tradeoffs — they reinforce each other. This finding was based on four years of survey data from over 30,000 professionals.
Change Failure Rate has a paradox: teams that never deploy have 0% failure rate¶
One criticism of DORA metrics is that Change Failure Rate can be gamed by deploying less frequently. A team that deploys once a year might have a 0% change failure rate simply because they have only one data point. DORA addresses this by evaluating all four metrics together as a cluster, so a team cannot appear elite by optimizing a single metric.
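The paradox is easy to see in the arithmetic. A minimal sketch in Python (the records and the 6% failure pattern are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    caused_failure: bool  # did this change require remediation in production?

def change_failure_rate(deployments: list[Deployment]) -> float:
    """Fraction of deployments that led to a failure."""
    if not deployments:
        raise ValueError("no deployments to measure")
    failures = sum(1 for d in deployments if d.caused_failure)
    return failures / len(deployments)

# A team that ships constantly: 3 failures out of 50 deploys = 6% CFR.
frequent = [Deployment(caused_failure=(i % 17 == 0)) for i in range(50)]

# A team that shipped once all year and got lucky: 0% CFR on one data point.
rare = [Deployment(caused_failure=False)]

print(change_failure_rate(frequent))  # 0.06
print(change_failure_rate(rare))      # 0.0 — looks "elite", means nothing
```

The second team's 0% is statistically meaningless, which is exactly why DORA pairs Change Failure Rate with Deployment Frequency rather than reading it alone.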
DORA added a fifth metric — Reliability — in 2021¶
In the 2021 State of DevOps report, DORA added "Reliability" as a fifth metric, measuring how well teams meet their reliability targets (SLOs). This addition acknowledged that software delivery performance means nothing if the resulting service is unreliable. Reliability was the first new metric added since the original four were established in 2014.
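One common way to quantify "meeting reliability targets" is the error-budget model popularized by Google SRE; DORA's survey does not prescribe a formula, so the sketch below is an illustrative convention, not DORA's definition:

```python
def slo_attainment(good_events: int, total_events: int) -> float:
    """Fraction of requests that met the reliability target."""
    return good_events / total_events

def error_budget_remaining(slo_target: float, good: int, total: int) -> float:
    """Share of the error budget still unspent (1.0 = untouched, < 0 = blown)."""
    allowed_failures = (1 - slo_target) * total
    actual_failures = total - good
    return 1 - actual_failures / allowed_failures

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures.
# 600 actual failures spends 60% of the budget, leaving 40%.
print(slo_attainment(999_400, 1_000_000))                          # 0.9994
print(error_budget_remaining(0.999, good=999_400, total=1_000_000))  # 0.4
```

Framing reliability as a budget rather than a pass/fail target is what lets teams trade remaining budget for deployment speed, tying the fifth metric back to the original four.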
Most organizations cannot even measure their DORA metrics¶
Despite DORA metrics being widely discussed, a 2022 survey found that fewer than 50% of organizations could accurately measure all four metrics. The most commonly unmeasured metric was Lead Time for Changes, because many organizations lack automated pipelines that track the time from code commit to production deployment. You cannot improve what you cannot measure.
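For teams that do have commit and deploy timestamps, the measurement itself is simple. A minimal sketch, assuming (commit_time, deploy_time) pairs exported from some CI/CD system (the data here is invented, and DORA-style reporting typically uses the median of the distribution rather than a single value):

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical (commit_time, deploy_time) pairs pulled from a pipeline.
changes = [
    (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 9, 45)),
    (datetime(2024, 5, 1, 11, 0), datetime(2024, 5, 1, 13, 30)),
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 2, 10, 20)),
]

def lead_times(pairs) -> list[timedelta]:
    """Elapsed time from each code commit to its production deployment."""
    return [deploy - commit for commit, deploy in pairs]

def median_lead_time(pairs) -> timedelta:
    """Median resists the outliers that a long-stalled change would create."""
    return median(lead_times(pairs))

print(median_lead_time(changes))  # 0:45:00
```

The hard part in practice is not this arithmetic but the plumbing: without an automated pipeline that records both timestamps for every change, there is nothing to feed into it.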
DORA metrics correlate with organizational performance, not just engineering metrics¶
Perhaps the most surprising DORA finding is that software delivery performance predicts overall organizational performance — including profitability, market share, and productivity. Elite software delivery performers are twice as likely to exceed their organizational goals. This finding elevated DevOps from an engineering concern to a board-level strategic conversation.