Make & Build Systems — Street Ops¶
Patterns, debugging techniques, and real-world Makefile examples from production DevOps work.
Common Patterns¶
1. Self-Documenting Help Target¶
The single most useful pattern in any DevOps Makefile. Add a ## comment after any target, and the help target auto-generates usage docs:
.DEFAULT_GOAL := help
help: ## Show available targets
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | \
awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-25s\033[0m %s\n", $$1, $$2}'
Output:
$ make help
build Build Docker image
clean Remove build artifacts
deploy Deploy to current environment
lint Run all linters
test Run test suite
Pro tip: $(MAKEFILE_LIST) includes all Makefiles loaded via include, so targets from included files show up in help automatically.
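A minimal sketch of that behavior (deploy.mk is a hypothetical included file): once it is included, $(MAKEFILE_LIST) expands to "Makefile deploy.mk", the grep scans both files, and deploy appears in make help with no extra wiring.

```make
# Makefile
include deploy.mk            # $(MAKEFILE_LIST) is now "Makefile deploy.mk",
                             # so the help grep scans both files

# deploy.mk
deploy: ## Deploy to current environment
	@echo "deploying..."
```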
2. Docker Build Targets¶
REGISTRY ?= ghcr.io/myorg
IMAGE := $(REGISTRY)/myapp
TAG := $(shell git rev-parse --short HEAD)
docker-build: ## Build Docker image
docker buildx build \
--build-arg GIT_SHA=$(TAG) \
--cache-from type=registry,ref=$(IMAGE):buildcache \
--cache-to type=registry,ref=$(IMAGE):buildcache,mode=max \
-t $(IMAGE):$(TAG) --load .
docker-push: docker-build ## Push to registry
docker push $(IMAGE):$(TAG)
docker-scan: docker-build ## Scan for vulnerabilities
trivy image --severity HIGH,CRITICAL --exit-code 1 $(IMAGE):$(TAG)
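One thing worth adding to targets like these (a suggestion, not in the original): none of them produce a file with their own name, so declare them phony.

```make
# Without this, a stray file named e.g. "docker-build" in the repo root would
# make the target appear permanently up to date and the recipe would never run
.PHONY: help docker-build docker-push docker-scan
```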
3. CI/CD Make Targets¶
Structure CI as Make targets so engineers can reproduce the pipeline locally:
ci: ci-lint ci-test ci-build ## Full CI pipeline
ci-lint:
ruff check . --output-format=github
hadolint Dockerfile
shellcheck scripts/*.sh
ci-test:
pytest --cov=app --cov-fail-under=90 --tb=short -q --junitxml=reports/junit.xml
ci-build:
docker build -t myapp:ci-$(TAG) .
trivy image --exit-code 1 --severity CRITICAL myapp:ci-$(TAG)
In your CI config, each step then becomes a one-liner, e.g. "- run: make ci-lint".
4. Multi-Environment Deploys¶
Use computed variable names to map ENV to context/values:
ENV ?= dev
KUBE_CONTEXT_dev := k3s-local
KUBE_CONTEXT_staging := eks-staging
KUBE_CONTEXT_prod := eks-prod
KUBE_CONTEXT := $(KUBE_CONTEXT_$(ENV))
deploy: ## Deploy to $(ENV) environment
helm upgrade --install myapp devops/helm/myapp \
--kube-context=$(KUBE_CONTEXT) \
-f devops/helm/values-$(ENV).yaml \
--set image.tag=$(TAG) --wait --timeout 300s
promote: ## Promote staging -> prod
$(MAKE) deploy ENV=staging
$(MAKE) smoke-test ENV=staging
@read -p "Promote to production? [y/N] " confirm && [ "$$confirm" = "y" ]
$(MAKE) deploy ENV=prod
Usage: make deploy ENV=staging, make promote. Note that the confirmation prompt uses read -p, a bash extension; set SHELL := bash if your /bin/sh is dash.
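One failure mode of computed variable names: a typo in ENV (say, ENV=stging) expands KUBE_CONTEXT to an empty string, and helm silently falls through to whatever context is current. A small guard (a sketch reusing the variables above) fails fast instead:

```make
# Abort at parse time if ENV doesn't map to a known kube context.
# Note this runs on every invocation, including plain `make help`.
ifeq ($(KUBE_CONTEXT),)
$(error Unknown ENV '$(ENV)'; expected dev, staging, or prod)
endif
```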
5. Dependency-Aware Test Targets¶
Use stamp files so expensive steps only re-run when inputs change:
.stamps/lint: $(shell find . -name '*.py' -not -path './venv/*') | .stamps/
ruff check . && touch $@
.stamps/unit-test: $(shell find . -name '*.py' -not -path './venv/*') | .stamps/
pytest tests/unit/ --tb=short -q && touch $@
test: .stamps/lint .stamps/unit-test ## Run lint + unit tests
.stamps/:
mkdir -p .stamps
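To see the skip behavior concretely, here is a throwaway demonstration (assumes GNU make is installed; all file names are made up): the recipe runs on the first invocation, make -q then reports the target up to date, and touching the input invalidates the stamp again.

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
# A one-rule Makefile using the same stamp pattern (recipe lines need tabs)
printf '.stamps/demo: input.txt | .stamps/\n\twc -l input.txt > /dev/null && touch $@\n.stamps/:\n\tmkdir -p .stamps\n' > Makefile
echo hello > input.txt
make -s .stamps/demo                   # first run: recipe executes, stamp created
state=$(make -q .stamps/demo && echo up-to-date || echo needs-rebuild)
echo "$state"
sleep 1                                # beat coarse filesystem mtime granularity
touch input.txt                        # newer input invalidates the stamp
state2=$(make -q .stamps/demo && echo up-to-date || echo needs-rebuild)
echo "$state2"
cd / && rm -rf "$dir"
```

make -q ("question mode") exits 0 when the target is up to date and 1 when it would rebuild, which is handy for scripting checks like this.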
6. Parallel Make (-j)¶
SERVICES := api worker scheduler cron
build-all: $(addprefix build-,$(SERVICES)) ## make -j4 build-all
build-%:
docker build -t myorg/$*:$(TAG) -f services/$*/Dockerfile services/$*/
make -j$(nproc) build-all builds all services simultaneously. Warning: -j parallelizes at the target level. If two targets share a resource, you get race conditions (see footguns).
Gotcha:
make -j without a number means unlimited parallelism. On a 64-core build machine this can fork-bomb the system by spawning hundreds of compiler processes simultaneously. Always specify a limit: make -j$(nproc) or make -j8.
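A related quality-of-life flag (GNU Make 4.0+, a suggestion beyond the original text): parallel jobs interleave their output, which makes -j logs hard to read. --output-sync buffers each target's output and prints it as one block:

```make
# Group each target's output so parallel build logs stay readable (GNU Make 4.0+)
MAKEFLAGS += --output-sync=target
```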
7. Make for Monorepos¶
Use per-service includes with a unified top-level Makefile:
SERVICES := $(shell ls services/)
include $(foreach svc,$(SERVICES),services/$(svc)/Makefile.mk)
all: $(addprefix build-,$(SERVICES))
test: $(addprefix test-,$(SERVICES))
# CI: test only changed services
CHANGED_FILES := $(shell git diff --name-only HEAD~1)
CHANGED_SERVICES := $(sort $(foreach f,$(CHANGED_FILES),$(word 2,$(subst /, ,$(f)))))
changed-test: ## Test only changed services
@for svc in $(CHANGED_SERVICES); do $(MAKE) test-$$svc; done
Each service provides its own Makefile.mk with build-<svc>, test-<svc>, and lint-<svc> targets.
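One caveat with HEAD~1: on a push containing several commits (or a merge), it only sees the files touched by the last commit. A common alternative (a sketch; assumes an origin/main branch) diffs against the merge-base instead:

```make
# Compare against the fork point with main so every commit on the branch counts
BASE := $(shell git merge-base origin/main HEAD)
CHANGED_FILES := $(shell git diff --name-only $(BASE))
```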
Debugging Make¶
make -n (Dry Run)¶
Print what Make would do without actually running it (caveat: recipe lines prefixed with + and recursive $(MAKE) invocations still execute even under -n). Your first debugging tool:
$ make -n deploy ENV=staging
helm upgrade --install myapp devops/helm/myapp \
--kube-context=eks-staging \
-f devops/helm/values-staging.yaml \
--set image.tag=a1b2c3d \
--wait --timeout 300s
Use this before every production deploy to verify the command looks right.
make -d and make --trace¶
make -d gives a full trace of Make's decision-making (very verbose — pipe through less). make --trace is lighter, showing each target and its recipe as it executes. Use -d to understand why a target rebuilt (or did not); use --trace for a quick overview of execution order.
make -p (Print Database)¶
Dump all variables, rules, and values. Essential for understanding what Make sees after processing:
make -p | grep '^[A-Z_].*=' # Show all variables
make -p | grep -A2 'DOCKER_REG' # Find where a variable is set
MAKEFLAGS¶
export MAKEFLAGS="-j4" makes all Make invocations parallel. MAKEFLAGS auto-propagates to recursive $(MAKE) calls.
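Other flags commonly exported the same way (a suggestion, not from the original; --warn-undefined-variables can be noisy with some built-in rules):

```make
# Quieter recursive builds, and loud warnings on typo'd variable names
MAKEFLAGS += --no-print-directory --warn-undefined-variables
```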
remake Tool¶
remake is a drop-in Make replacement with a debugger (sudo apt install remake). Run remake -X test to get breakpoints, step execution, variable inspection, and a target call stack.
Debugging Variable Expansion¶
debug: ## Print key variables
@echo "TAG=$(TAG) ENV=$(ENV) REGISTRY=$(REGISTRY)"
# Parse-time debugging (before any recipe runs)
$(info DEBUG: TAG is [$(TAG)])
$(info) prints during Makefile parsing; @echo prints during recipe execution. The distinction matters when tracking variable expansion timing issues.
Under the hood: Make has two phases: parse time (reads all Makefiles, expands $(info), resolves variables assigned with :=) and execution time (runs recipes). Variables assigned with = are expanded lazily at execution time. If a variable seems to have the wrong value, the first question is: "which phase is this being evaluated in?"
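A quick illustration of the two assignment flavors (a sketch):

```make
EAGER := $(shell date +%s)   # date runs once, while the Makefile is parsed
LAZY   = $(shell date +%s)   # date runs again each time LAZY is referenced

show:
	@echo "eager: $(EAGER)  lazy: $(LAZY)"
```

If make spends time building other targets before show runs, LAZY will read later than EAGER; an $(info LAZY is $(LAZY)) at the top of the file would force an early expansion and hide the difference.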
Real-World Makefile Examples¶
Python Project¶
VENV := .venv
PIP := $(VENV)/bin/pip
TAG ?= $(shell git rev-parse --short HEAD)
# Venv only rebuilds when requirements change (file-target pattern)
$(VENV)/bin/activate: requirements.txt requirements-dev.txt
python3 -m venv $(VENV)
$(PIP) install -r requirements.txt -r requirements-dev.txt
touch $@
install: $(VENV)/bin/activate ## Install dependencies
test: install ## Run tests
$(VENV)/bin/pytest --cov=app --cov-fail-under=90 --tb=short -q
lint: install ## Lint
$(VENV)/bin/ruff check .
build: ## Build Docker image
docker build -t myapp:$(TAG) .
clean: ## Remove artifacts
rm -rf $(VENV) build/ dist/ .pytest_cache/
Key pattern: the venv target uses $(VENV)/bin/activate as a file target with requirements.txt as a prerequisite.
Default trap: Each line in a Make recipe runs in a separate shell.
cd build && ./configure works, but putting cd build on one line and ./configure on the next does not — the second line starts back in the original directory. Chain commands with && on a single line, or use .ONESHELL: (GNU Make 3.82+).
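A sketch of the .ONESHELL escape hatch, paired with strict bash flags (a common companion, not required):

```make
.ONESHELL:
SHELL := bash
.SHELLFLAGS := -euo pipefail -c

configure:
	cd build
	./configure          # still inside build/ -- the whole recipe is one shell
```

One wrinkle: with .ONESHELL, only a @ or - prefix on the first recipe line takes effect, and it applies to the entire recipe.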
Docker Project¶
Multi-service Docker project with pattern-rule builds and compose orchestration:
COMPOSE := docker compose
REGISTRY ?= ghcr.io/myorg
TAG ?= $(shell git rev-parse --short HEAD)
SERVICES := api worker nginx
build: $(addprefix build-,$(SERVICES)) ## Build all images
build-%: docker/%/Dockerfile
docker build --build-arg TAG=$(TAG) \
-t $(REGISTRY)/$*:$(TAG) -t $(REGISTRY)/$*:latest \
-f $< docker/$*/
push: $(addprefix push-,$(SERVICES)) ## Push all images
push-%: build-%
docker push $(REGISTRY)/$*:$(TAG)
up: ## Start all services
TAG=$(TAG) $(COMPOSE) up -d --wait
down: ## Stop all services
$(COMPOSE) down -v
test: up ## Run integration tests
$(COMPOSE) exec api pytest tests/integration/ --tb=short -q
$(MAKE) down
scan: build ## Scan all images
@for svc in $(SERVICES); do \
trivy image --severity HIGH,CRITICAL $(REGISTRY)/$$svc:$(TAG); \
done
Terraform Project¶
TF := terraform
TF_DIR := modules
MODULE ?= vpc
ENV ?= dev
plan: ## Plan changes for module
cd $(TF_DIR)/$(MODULE) && $(TF) init -backend-config="bucket=$(STATE_BUCKET)"
cd $(TF_DIR)/$(MODULE) && $(TF) plan -var-file=../../vars/$(ENV).tfvars -out=plan.tfplan
apply: ## Apply planned changes
cd $(TF_DIR)/$(MODULE) && $(TF) apply plan.tfplan
apply-all: ## Apply all modules in dependency order
$(MAKE) apply MODULE=vpc && $(MAKE) apply MODULE=iam && $(MAKE) apply MODULE=eks
Kubernetes Deploy¶
Helm-based deployment with rollback and debugging:
RELEASE := myapp
NAMESPACE ?= default
ENV ?= dev
TAG ?= $(shell git rev-parse --short HEAD)
KUBECTL := kubectl -n $(NAMESPACE)
HELM := helm -n $(NAMESPACE)
deploy: ## Deploy to $(ENV)
$(HELM) upgrade --install $(RELEASE) charts/myapp \
-f values-$(ENV).yaml \
--set image.tag=$(TAG) \
--wait --timeout 300s --atomic
rollback: ## Rollback to previous release
$(HELM) rollback $(RELEASE)
$(KUBECTL) rollout status deployment/$(RELEASE) --timeout=120s
status: ## Show deployment status
$(HELM) status $(RELEASE)
$(KUBECTL) get pods -l app.kubernetes.io/name=$(RELEASE)
debug: ## Debug a failing pod
@POD=$$($(KUBECTL) get pods -l app.kubernetes.io/name=$(RELEASE) \
--field-selector=status.phase!=Running -o jsonpath='{.items[0].metadata.name}' 2>/dev/null); \
if [ -z "$$POD" ]; then echo "No failing pods found"; \
else $(KUBECTL) describe pod $$POD && $(KUBECTL) logs $$POD --all-containers --tail=50; fi