How We Got Here: CI/CD Evolution¶
Arc: Deployment · Eras covered: 5 · Timeline: ~2000-2025 · Read time: ~12 min
The Original Problem¶
In 2000, "building the software" meant a senior developer ran make on their workstation, zipped the output, and emailed it to the ops team. Integration happened once — right before release — and it was called "integration hell" for a reason. Two developers who had been working on separate features for three months discovered their code didn't compile together. Tests, if they existed, ran on one person's laptop. Deployments happened quarterly, on weekends, with a war room and a rollback plan written on a whiteboard.
The feedback loop between "code written" and "code running in production" was measured in weeks or months. Bugs found in production had been introduced months earlier, and nobody could trace which change caused which problem.
Era 1: Manual Builds and Nightly Cron Jobs (~2000-2005)¶
The Solution¶
The first step was automating the build. Teams set up a dedicated "build machine" — a server that checked out the code from CVS or SVN and ran the build script on a schedule. CruiseControl (2001) was one of the first continuous integration servers, triggering builds on every commit or nightly.
What It Looked Like¶
<!-- CruiseControl config.xml (~2003) -->
<cruisecontrol>
  <project name="myapp">
    <bootstrappers>
      <svnbootstrapper localWorkingCopy="/builds/myapp"/>
    </bootstrappers>
    <modificationset>
      <svn localWorkingCopy="/builds/myapp"/>
    </modificationset>
    <schedule interval="300">
      <ant buildfile="/builds/myapp/build.xml" target="all"/>
    </schedule>
    <publishers>
      <email mailhost="mail.example.com"
             returnaddress="build@example.com"
             defaultsuffix="@example.com">
        <failure address="dev-team"/>
      </email>
    </publishers>
  </project>
</cruisecontrol>
Why It Was Better¶
- Automated: builds happened without human intervention
- Consistent: same machine, same environment every time
- Fast feedback: broken builds were detected within hours, not weeks
- Cultural shift: "don't break the build" became a team norm
Why It Wasn't Enough¶
- Build machines were pets — when they died, CI died
- No artifact management — build outputs lived on the build machine's disk
- Tests were optional and often skipped in the build
- Deployment was still completely manual
- Configuration was XML hell (CruiseControl configs were notorious)
Legacy You'll Still See¶
Cron-based builds persist in legacy systems. The "build machine under someone's desk" is a meme, but it still exists. The cultural norms of "don't break the build" and "fix it immediately when it breaks" originated here and remain foundational.
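For a concrete picture of the era, here is a minimal sketch of a cron-driven nightly build. The paths, repository layout, make target, and mail address are all illustrative, not taken from any real system:

```shell
#!/bin/sh
# Hypothetical era-1 nightly build script. Installed via a crontab
# entry on the build machine, e.g.:
#   0 2 * * * /builds/myapp/nightly-build.sh >> /builds/myapp/build.log 2>&1
cd /builds/myapp || exit 1
svn update                        # pull the latest code from the repository
if make all; then
    # keep the artifact on the build machine's local disk (the era's
    # only "artifact management")
    cp myapp.tar.gz /builds/artifacts/myapp-$(date +%Y%m%d).tar.gz
else
    # email the whole team on failure -- the era's only feedback channel
    mail -s "NIGHTLY BUILD FAILED" dev-team@example.com < build.log
fi
```

Everything here is fragile by modern standards: state lives on one machine, and feedback arrives the next morning at best.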
Era 2: Jenkins and the Plugin Ecosystem (~2005-2015)¶
The Solution¶
Hudson (2005, forked to Jenkins in 2011) transformed CI from a niche practice into a mainstream one. Jenkins was free, Java-based, and extensible through a massive plugin ecosystem (1800+ plugins). It had a web UI for configuration, supported any build tool, and could trigger builds on SCM changes, schedule, or manual request.
What It Looked Like¶
// Jenkinsfile (Pipeline as Code, introduced ~2016, but the era
// was defined by "Freestyle" jobs configured through the GUI)
//
// Typical Freestyle job:
//   1. GUI: New Item → Freestyle Project
//   2. Source Code Management: Git → repo URL + branch
//   3. Build Triggers: Poll SCM "H/5 * * * *"
//   4. Build Steps: Execute Shell → "mvn clean package"
//   5. Post-build: Archive artifacts, email on failure
//
// Later, the same job as a Declarative Pipeline (Jenkinsfile):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
                junit '**/target/surefire-reports/*.xml'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh staging'
            }
        }
    }
    post {
        failure {
            mail to: 'team@example.com', subject: "Build Failed: ${env.JOB_NAME}"
        }
    }
}
Why It Was Better¶
- Free and open source with massive community
- Plugin for everything: Docker, Kubernetes, Slack, JIRA, SonarQube
- Pipeline as Code (Jenkinsfile) brought CI config into the repo
- Master/agent architecture scaled to large organizations
- Became the de facto standard — every DevOps engineer knew Jenkins
Why It Wasn't Enough¶
- The GUI-configured "Freestyle" jobs were fragile and not version-controlled
- Plugin conflicts and upgrade breakage were constant headaches
- Jenkins required dedicated ops — patching, backup, plugin management
- Groovy pipeline syntax was confusing and poorly documented
- Security vulnerabilities in plugins were frequent
- Resource-hungry: the Jenkins master was a memory hog
Legacy You'll Still See¶
Jenkins is still the most widely deployed CI server. Many enterprises have Jenkins instances running thousands of jobs that nobody wants to migrate. If you join a company with "legacy CI," it's probably Jenkins. Jenkins pipelines, for all their warts, work and are well-understood.
Era 3: Hosted CI/CD Services (~2011-2020)¶
The Solution¶
Travis CI (2011), CircleCI (2011), GitLab CI (2012), and later GitHub Actions (2019) moved CI/CD to managed services. No Jenkins server to maintain. Configuration lived in a YAML file in the repo. Build environments were ephemeral containers or VMs, spun up for each build and destroyed after. The CI system was someone else's problem.
What It Looked Like¶
# .github/workflows/ci.yml (GitHub Actions)
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm test
      - run: npm run build

  deploy:
    needs: build
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789:role/deploy
          aws-region: us-east-1
      - run: ./deploy.sh production
# .gitlab-ci.yml
stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: node:20
  script:
    - npm ci
    - npm test

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy:
  stage: deploy
  only:
    - main
  script:
    - kubectl set image deployment/app app=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
Why It Was Better¶
- Zero maintenance: no servers to patch, no plugins to update
- Configuration as code from day one (YAML in the repo)
- Ephemeral environments: no "works on the build server" problems
- Tight SCM integration (GitHub Actions in GitHub, GitLab CI in GitLab)
- Generous free tiers for open source
- Marketplace of reusable actions/orbs
Why It Wasn't Enough¶
- Vendor lock-in: migrating between CI systems is painful
- YAML configuration became its own complexity ("YAML engineering")
- Debugging CI failures required pushing commits to test changes
- Secrets management was fragile (env vars, vault integrations)
- Build minutes cost money at scale
- Self-hosted runners reintroduced some of the Jenkins maintenance burden
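One mitigation for the push-to-debug loop mentioned above: nektos/act can execute GitHub Actions workflows locally inside Docker containers. A usage sketch, assuming Docker and act are installed and the workflow above is in the repo; exact runner-image behavior differs from GitHub's hosted runners, so treat local results as an approximation:

```shell
# Run GitHub Actions workflows locally with nektos/act (requires Docker)
act push            # simulate a push event; runs all jobs that match
act pull_request    # simulate a pull_request event
act -j build        # run only the "build" job from the workflow
act -n              # dry run: list what would execute without running it
```

This doesn't eliminate the gap entirely (secrets, OIDC roles, and hosted-runner tooling still behave differently), but it shortens the iteration loop considerably.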
Legacy You'll Still See¶
GitHub Actions is the current default for new projects. GitLab CI dominates organizations using GitLab. CircleCI and Travis CI are declining but still in production. If you're starting a new project today, you're almost certainly using hosted CI.
Era 4: GitOps and Continuous Deployment (~2017-2023)¶
The Solution¶
GitOps (Weaveworks, 2017) extended CI/CD by making Git the single source of truth for deployment state. Instead of CI pushing deployments, a controller in the cluster pulled desired state from Git and reconciled. ArgoCD (2018) and Flux (2017) became the standard tools. Continuous deployment — every commit to main automatically reaches production — became achievable for disciplined teams.
What It Looked Like¶
# ArgoCD Application
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/deployments.git
    targetRevision: main
    path: apps/myapp/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

# Workflow:
# 1. Developer pushes code → CI builds image, pushes to registry
# 2. CI updates image tag in deployments repo (or same repo, different path)
# 3. ArgoCD detects change in Git, syncs to cluster
# 4. If sync fails, ArgoCD alerts — Git still shows desired state
# 5. Manual rollback = git revert on the deployments repo
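Step 2 of that workflow (CI bumping the image tag in the deployments repo) is often just a few lines of shell. A minimal, self-contained sketch; the manifest layout, registry name, and sed-based rewrite are assumptions, and a real pipeline would follow the edit with a commit and push that ArgoCD then syncs:

```shell
set -eu
NEW_TAG="${CI_COMMIT_SHA:-abc1234}"   # commit SHA provided by the CI system

# Stand-in for apps/myapp/production/deployment.yaml in the deployments repo
cat > deployment.yaml <<'EOF'
        image: registry.example.com/myapp:old123
EOF

# Rewrite the image tag in place; keep everything before the tag intact
sed -i "s|\(image: registry.example.com/myapp:\).*|\1${NEW_TAG}|" deployment.yaml
grep 'image:' deployment.yaml

# In a real pipeline, this is followed by:
#   git commit -am "deploy myapp ${NEW_TAG}" && git push
# and the rollback path is simply: git revert <that commit> && git push
```

Tools like yq or kustomize edit are common alternatives to raw sed for this step.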
Why It Was Better¶
- Git is the audit trail — every deployment is a commit
- Rollback is git revert, not a custom procedure
- Self-healing: manual cluster changes are overwritten
- Separation of concerns: CI builds, GitOps deploys
- Pull-based: no CI credentials on the cluster
Why It Wasn't Enough¶
- Two-repo model (app + deployment) added workflow complexity
- Secrets in Git remained a challenge (sealed-secrets, SOPS, external-secrets)
- Only native to Kubernetes — non-K8s workloads needed different tools
- Debugging sync failures required understanding both Git and K8s
- "Everything is YAML" fatigue intensified
Legacy You'll Still See¶
GitOps is the current best practice for Kubernetes deployments. ArgoCD is the most popular deployment tool in the Kubernetes ecosystem. The pattern of "CI builds, GitOps deploys" is standard in mature Kubernetes shops.
Era 5: Platform Engineering and Internal Developer Platforms (~2022-2025)¶
The Solution¶
Platform engineering teams build golden paths that abstract away the CI/CD complexity. Developers don't write Jenkinsfiles or GitHub Actions workflows — they declare what their service needs, and the platform provides the pipeline. Tools like Backstage (Spotify), Port, Humanitec, and custom IDPs generate CI/CD configuration from service metadata.
What It Looked Like¶
# Backstage template — developer fills in a form, gets a complete
# project with CI/CD, monitoring, and deployment preconfigured
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: microservice
  title: Create a Microservice
spec:
  parameters:
    - title: Service Details
      properties:
        name:
          title: Service Name
          type: string
        language:
          title: Language
          type: string
          enum: [go, python, node]
  steps:
    - id: scaffold
      action: fetch:template
      input:
        url: ./skeleton
    - id: publish
      action: publish:github
    - id: register
      action: catalog:register
Why It Was Better¶
- Developers get working CI/CD without writing pipeline config
- Consistency: every service follows the same golden path
- Platform team controls security, compliance, and best practices centrally
- Self-service: no tickets, no waiting for the DevOps team
- CI/CD becomes an implementation detail, not a developer concern
Why It Wasn't Enough¶
- Building an IDP is expensive (dedicated platform team)
- Golden paths can feel restrictive for advanced use cases
- Still early — tooling is fragmented and standards are emerging
- The "platform team as a bottleneck" antipattern is common
- Maintaining the platform is itself a significant engineering effort
Legacy You'll Still See¶
Platform engineering is the current frontier. Most organizations are either planning or building their first IDP. The "CI/CD as a platform service" pattern is clearly the direction, but most teams still write their own GitHub Actions workflows.
Where We Are Now¶
GitHub Actions dominates new projects. GitLab CI leads in GitLab-centric organizations. Jenkins persists in enterprises. ArgoCD is the standard for Kubernetes deployment. The trend is toward CI/CD as a platform service — something developers consume, not configure. The YAML fatigue is real, but nobody has found a better declarative format that's both expressive and readable.
Where It's Going¶
AI-assisted CI/CD (generate pipelines from natural language, auto-fix failing builds) is the near-term evolution. Longer term, the pipeline itself may become invisible — you push code, the platform figures out how to build, test, and deploy it based on the project type and organizational policies. Dagger (CI/CD as code in Go/Python/TypeScript) is an interesting bet on replacing YAML with real programming languages.
The Pattern¶
Every CI/CD generation moves the pipeline definition closer to the developer and the pipeline execution further from the developer. The goal is to make deployment feel like git push — and each era gets closer.
Key Takeaway for Practitioners¶
The pipeline is not the product. Spend the minimum time necessary on CI/CD configuration and the maximum time on what the pipeline builds and tests. If your team spends more time debugging CI than debugging the application, your CI is too complex.
Cross-References¶
- Topic Packs: GitHub Actions, Jenkins, ArgoCD
- Tool Comparisons: GitHub Actions vs GitLab CI vs Jenkins
- Evolution Guides: Deployment Strategies, Developer Experience