# Docker Compose — The Local Cluster

Tags: lesson, docker-compose, container-networking, dns-resolution, volumes, health-checks, environment-variables, multi-container-orchestration

Topics: Docker Compose, container networking, DNS resolution, volumes, health checks, environment variables, multi-container orchestration
Level: L1–L2 (Foundations → Operations)
Time: 60–90 minutes
Prerequisites: None (everything is explained from scratch)
The Mission¶
You're building a local development environment for a web application. The app needs four
services running together: a Python API, a PostgreSQL database, a Redis cache, and an Nginx
reverse proxy. Running four separate docker run commands with the right flags, networks,
and volumes is tedious and error-prone. You need something that describes the whole stack
in one file and brings it up with one command.
That something is Docker Compose. By the end of this lesson, you'll have a working
compose.yaml that wires together all four services, and you'll understand the networking,
storage, and lifecycle primitives that make it work.
Why Compose Exists¶
Without Compose, starting a three-service stack looks like this:
# Create a network
docker network create myapp
# Start the database
docker run -d --name db --network myapp \
-e POSTGRES_PASSWORD=secret \
-v pgdata:/var/lib/postgresql/data \
postgres:16
# Start the cache
docker run -d --name cache --network myapp redis:7-alpine
# Start the app
docker run -d --name api --network myapp \
-e DATABASE_URL=postgresql://postgres:secret@db:5432/app \
-e REDIS_URL=redis://cache:6379 \
-p 8000:8000 myapp:latest
Four commands, a dozen flags among them. Forget one flag or misspell a container name, and the whole stack breaks silently. Now multiply by a team of five developers who each need to remember these incantations.
Compose replaces all of it with a YAML file and docker compose up.
Name Origin: Docker Compose started life as Fig, created in 2013 by Orchard Laboratories (Ben Firshman and Aanand Prasad). Docker acquired Orchard in 2014 and renamed Fig to Docker Compose. The original Fig YAML format became the `docker-compose.yml` we know today. The name "Fig" was a pun — figs grow in orchards.
The V1 → V2 History (It Matters)¶
You'll see two things in the wild: docker-compose (hyphenated) and docker compose
(space). They are different programs.
| | Compose V1 | Compose V2 |
|---|---|---|
| Binary | `docker-compose` (standalone Python script) | `docker compose` (Go plugin for the Docker CLI) |
| Language | Python | Go |
| Install | Separate pip/binary install | Ships with Docker Desktop and modern Docker Engine |
| Config file | `docker-compose.yml` with `version: '3.8'` | `compose.yaml` (no `version:` needed) |
| Status | EOL since July 2023 | Current, actively maintained |
# Check which you have
docker-compose --version # V1: "docker-compose version 1.29.2"
docker compose version # V2: "Docker Compose version v2.24.5"
Gotcha: If you're following a tutorial from 2020 and it says `docker-compose up`, it probably works with V2 too — Docker added a compatibility shim. But if you see `version: '3.8'` at the top of the file, that's a V1 artifact. Compose V2 ignores the `version:` key entirely. You can delete it.
The filename preference also shifted: V1 expected docker-compose.yml. V2 looks for
compose.yaml first, then falls back to docker-compose.yml. Use compose.yaml for
new projects.
Compose File Anatomy¶
Here's the complete compose.yaml for our four-service stack. Read it top to bottom —
every line is annotated.
# compose.yaml — no "version:" needed in Compose V2
services:
  # --- Reverse Proxy ---
  nginx:
    image: nginx:1.25-alpine
    ports:
      - "80:80"                                  # Host port 80 → container port 80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro    # Bind mount, read-only
    depends_on:
      api:
        condition: service_healthy               # Wait for API health check
    networks:
      - frontend
      - backend
    restart: unless-stopped

  # --- Python API ---
  api:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      DATABASE_URL: "postgresql://postgres:${DB_PASSWORD}@db:5432/app"
      REDIS_URL: "redis://cache:6379"
      LOG_LEVEL: "info"
    depends_on:
      db:
        condition: service_healthy               # Don't start until Postgres is ready
      cache:
        condition: service_started               # Redis starts fast, no health check needed
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 15s                          # Grace period on first startup
    networks:
      - backend
    restart: unless-stopped

  # --- Database ---
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${DB_PASSWORD}          # Pulled from .env file
    volumes:
      - pgdata:/var/lib/postgresql/data          # Named volume for persistence
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro   # Run on first start
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - backend
    restart: unless-stopped

  # --- Cache ---
  cache:
    image: redis:7-alpine
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
    volumes:
      - redis-data:/data                         # Persist Redis RDB snapshots
    networks:
      - backend
    restart: unless-stopped

# --- Networks ---
networks:
  frontend:     # Nginx talks to the outside world
  backend:      # Internal: API ↔ DB ↔ Cache

# --- Volumes ---
volumes:
  pgdata:       # Survives docker compose down
  redis-data:   # Survives docker compose down
That's 70 lines of YAML replacing dozens of imperative commands. Let's break down each section.
The Top-Level Keys¶
A Compose file has up to seven top-level keys. You'll use three constantly and encounter the others in larger projects.
| Key | Purpose | Required? |
|---|---|---|
| `services` | The containers to run | Yes |
| `networks` | Custom networks for isolation | No (a default one is created) |
| `volumes` | Named volumes for persistent data | No |
| `configs` | Configuration files injected into containers | No |
| `secrets` | Sensitive data (passwords, keys) | No |
| `extensions` | Reusable YAML fragments (`x-` prefix) | No |
| `name` | Override the project name | No |
Mental Model: Think of `services` as the "what," `networks` as the "who can talk to whom," and `volumes` as the "what survives a restart." Those three keys handle 90% of your Compose files.
Networking: The DNS Resolution Trick¶
This is the single most useful thing Compose does, and most tutorials bury it.
When you define services in a Compose file, each service name becomes a DNS hostname
on the Compose network. Inside the api container:
# These just work — no IP addresses, no configuration
ping db # resolves to the db container's IP
ping cache # resolves to the cache container's IP
curl nginx:80 # resolves to the nginx container's IP
This is why the connection strings use hostnames:
DATABASE_URL: "postgresql://postgres:secret@db:5432/app"
# ^^
# This is the service name, not a hostname you configured
Under the Hood: Compose creates a network per project (named `<project>_default` unless you define custom networks). Docker's embedded DNS server at `127.0.0.11` resolves service names to container IPs. This DNS server is injected via `/etc/resolv.conf` inside each container. When a container starts or restarts and gets a new IP, the DNS records update automatically.
# Verify DNS inside a running container
docker compose exec api cat /etc/resolv.conf
# nameserver 127.0.0.11
# options ndots:0
# Resolve a service name
docker compose exec api nslookup db
# Name: db
# Address: 172.20.0.3
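To see that this is ordinary DNS and nothing more, here's a minimal Python sketch; the `resolve_service` helper is a hypothetical name for illustration, not part of any Docker API:

```python
import socket

def resolve_service(name):
    """A plain DNS lookup, exactly what a database driver does with the
    hostname in a connection string. Inside a Compose container the query
    is answered by Docker's embedded DNS at 127.0.0.11."""
    return socket.gethostbyname(name)

# Inside the api container, resolve_service("db") returns a Compose-network
# address; on your host it raises socket.gaierror, because "db" only exists
# on that network.
```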
Custom Networks for Isolation¶
In our example, we defined two networks: frontend and backend. Nginx is on both.
The database is only on backend. This means:
- Nginx can reach the API (both on `backend`)
- The API can reach the database and cache (all on `backend`)
- Nothing on `frontend` can reach the database directly
This is the same principle as network segmentation in production — the database is not exposed to the outermost layer.
Gotcha: The default bridge network (`docker0`) does NOT provide DNS resolution between containers. Only user-defined networks — which Compose creates automatically — get DNS. If you're running containers with `docker run` and they can't resolve each other by name, you probably forgot `--network`.
Flashcard Check: Networking¶
| Question | Answer (cover this column first) |
|---|---|
| How do containers in Compose find each other? | By service name — Docker's embedded DNS at 127.0.0.11 resolves service names to container IPs |
| What is the default DNS server address inside a Docker container on a user-defined network? | 127.0.0.11 |
| Can a container on the `frontend` network talk to a container only on `backend`? | No — containers can only communicate if they share at least one network |
| What's the difference between the default bridge and a user-defined bridge? | User-defined bridges provide DNS resolution by container name; the default bridge does not |
Volumes: Where Your Data Lives¶
Compose supports two kinds of mounts. Understanding the difference prevents data loss.
Named Volumes¶
volumes:
  pgdata:      # Declared at the top level — Docker manages the storage

services:
  db:
    volumes:
      - pgdata:/var/lib/postgresql/data   # Mount into the container
Named volumes live at /var/lib/docker/volumes/<name>/_data on the host. They survive
docker compose down. They survive docker compose up --force-recreate. They do not
survive docker compose down -v (the -v flag deletes volumes).
# Inspect where a volume lives on disk
docker volume inspect myapp_pgdata
# "Mountpoint": "/var/lib/docker/volumes/myapp_pgdata/_data"
Gotcha: `docker compose down` keeps your volumes. `docker compose down -v` destroys them. If you've trained your fingers to always type `-v` during development, you will eventually destroy production data on the wrong host. Muscle memory is dangerous.
Bind Mounts¶
Bind mounts map a path on your host directly into the container. They're essential for development (edit code on your laptop, see changes in the container) but come with a nasty surprise.
War Story: The Bind Mount Permissions Disaster
You bind-mount your project directory into a container. The container runs as UID 1000 (the `appuser` you defined in the Dockerfile). But your host user is UID 501 (macOS). The container writes a log file owned by UID 1000. On your host, that file is owned by some random system user. You can't delete it without `sudo`. Now flip it: the container needs to read a file you created on the host (UID 501). Inside the container, that file is owned by UID 501, and `appuser` (UID 1000) can't read it.
The fix: align UIDs. Set the container user to match your host UID: `user: "${UID:-1000}:${GID:-1000}"` in your Compose file. Or use `docker compose run --user $(id -u):$(id -g)`. On Linux, this is straightforward. On macOS with Docker Desktop, the VM handles UID mapping for you — but only for bind mounts under `/Users`.
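In a Compose file, that fix looks roughly like this. A sketch with a hypothetical `api` service; note that `UID` and `GID` must be exported in your shell or set in `.env`, Compose does not set them for you:

```yaml
services:
  api:
    build: .
    # Run as the host user's UID/GID so bind-mounted files stay editable
    # from both sides; 1000:1000 is the fallback when the vars are unset.
    user: "${UID:-1000}:${GID:-1000}"
    volumes:
      - .:/app
```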
| Type | Managed by | Survives `down` | Survives `down -v` | Best for |
|---|---|---|---|---|
| Named volume | Docker | Yes | No | Database data, persistent state |
| Bind mount | You | Yes (it's your filesystem) | Yes | Dev configs, source code in development |
Health Checks and depends_on¶
Starting services in the right order is harder than it sounds. The database needs to be accepting connections before the API starts, or the API crashes on startup trying to run migrations.
The naive approach (broken)¶
Listing `db` under `depends_on` as a plain list starts the `db` container before the `api` container. But "started" doesn't mean "ready." PostgreSQL takes 2–3 seconds to initialize. The API connects at second 0 and gets `Connection refused`.
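The naive version uses the short-form list syntax, roughly:

```yaml
services:
  db:
    image: postgres:16-alpine
  api:
    build: .
    depends_on:
      - db   # list form: waits only for the container to START, not to be ready
```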
The correct approach¶
services:
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5

  api:
    depends_on:
      db:
        condition: service_healthy   # Wait until the health check passes
Now Compose won't start api until db's health check reports healthy. The sequence:
1. `db` starts → PostgreSQL initializes → `pg_isready` returns success → status: healthy
2. `cache` starts → immediately available → status: started
3. `api` starts → connects to `db` and `cache` → health check passes → status: healthy
4. `nginx` starts → proxies to `api`
# Watch the startup sequence
docker compose up
# ✔ Container myapp-db-1 Healthy 3.2s
# ✔ Container myapp-cache-1 Started 0.4s
# ✔ Container myapp-api-1 Healthy 5.1s
# ✔ Container myapp-nginx-1 Started 0.2s
The start_period trick¶
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
  interval: 10s
  timeout: 5s
  retries: 3
  start_period: 15s   # Don't count failures during the first 15 seconds
start_period gives the container time to boot before health checks start counting
failures. Without it, a slow-starting Java app might fail three health checks during
JVM startup and get marked as unhealthy before it even loads the first class.
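To make the bookkeeping concrete, here's a toy model of how failures are counted. An illustrative sketch only, not the engine's actual logic (real Docker, for instance, keeps probing after a container turns healthy):

```python
def health_status(probe_results, interval=10, retries=3, start_period=15):
    """Toy model of health-check accounting.

    probe_results: one boolean per probe, the first firing at t=interval.
    Failures inside start_period don't count toward `retries`;
    any success marks the container healthy.
    """
    consecutive_failures = 0
    for i, ok in enumerate(probe_results, start=1):
        t = i * interval
        if ok:
            return "healthy"
        if t > start_period:              # grace period over: failures count
            consecutive_failures += 1
            if consecutive_failures >= retries:
                return "unhealthy"
    return "starting"

# A slow app that fails probes at t=10s and t=20s, then passes at t=30s:
# with start_period=15, only the t=20s failure counts, so it survives.
```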
Trivia: The `condition: service_healthy` syntax was removed in Compose V2.0, then brought back by popular demand in V2.1. For about six months, the official recommendation was to use external tools like `wait-for-it.sh`. The community revolt was swift, and Docker restored the feature. Sometimes good ideas get temporarily removed.
Flashcard Check: Health and Lifecycle¶
| Question | Answer (cover this column first) |
|---|---|
| What does `depends_on: db` (without conditions) guarantee? | Only that `db`'s container starts first — NOT that the service inside is ready |
| What condition makes Compose wait for a health check? | `condition: service_healthy` |
| What does `start_period` do in a health check? | Gives the container a grace period — failures during this window don't count toward retries |
| What PostgreSQL command tests if the database is accepting connections? | `pg_isready` |
Environment Variables and .env Files¶
Hardcoding passwords in compose.yaml is a bad idea. Compose supports .env files for
variable substitution.
The .env file¶
# .env — same directory as compose.yaml
DB_PASSWORD=supersecret_dev_only
API_SECRET_KEY=local-dev-key-do-not-use-in-prod
COMPOSE_PROJECT_NAME=myapp
These variables are available in compose.yaml with ${VARIABLE} syntax:
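For example, the database service's password line interpolates `${DB_PASSWORD}` from that file. A minimal sketch; the `POSTGRES_DB` line is an illustrative extra showing the `:-` default syntax:

```yaml
services:
  db:
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}    # from .env or the shell
      POSTGRES_DB: ${POSTGRES_DB:-app}     # ":-" supplies a fallback default
```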
Variable precedence¶
Compose resolves variables in this order (highest wins):
1. Shell environment (`export DB_PASSWORD=override`)
2. `.env` file
3. Default value in the Compose file (`${DB_PASSWORD:-fallback}`)
# Override for one command
DB_PASSWORD=staging_pw docker compose up -d
# See what Compose resolved
docker compose config # Shows the fully interpolated YAML
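The precedence rules can be sketched as a tiny resolver. A toy model under stated assumptions: `interpolate` is a hypothetical name, and the real logic lives inside Compose itself:

```python
import os

def interpolate(name, dotenv, default=None):
    """Sketch of how Compose resolves ${NAME:-default}:
    the shell environment wins, then the .env file, then the inline default."""
    if name in os.environ:      # 1. shell environment
        return os.environ[name]
    if name in dotenv:          # 2. .env file
        return dotenv[name]
    return default              # 3. ${NAME:-default} fallback

dotenv = {"DB_PASSWORD": "supersecret_dev_only"}
```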
Gotcha: The `.env` file is for Compose variable substitution — it sets values in the Compose file itself. The `environment:` key in a service sets variables inside the container. They're different scopes. A variable in `.env` only reaches the container if the Compose file explicitly passes it through via `environment:` or `env_file:`.
services:
  api:
    env_file:
      - .env.api                    # Load all variables from this file into the container
    environment:
      DB_PASSWORD: ${DB_PASSWORD}   # This one comes from .env via Compose interpolation
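From the container's point of view, only the passed-through variables exist. Here's a sketch of what the API process might do at startup; the helper name and the fallback value are hypothetical, for illustration only:

```python
import os

def container_config(env=None):
    """Read runtime config inside the container. Only variables Compose
    passed via `environment:` or `env_file:` appear in os.environ."""
    env = os.environ if env is None else env
    return {
        "database_url": env.get("DATABASE_URL"),     # set via environment:
        "log_level": env.get("LOG_LEVEL", "info"),   # app-side default if unset
    }
```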
Remember: Never commit `.env` files with real credentials. Add `.env` to your `.gitignore`. For production, use a secrets manager — not environment variables.
Profiles: Optional Services¶
Not every developer needs every service running. The frontend developer doesn't need Prometheus. The backend developer doesn't need the email service.
services:
  api:
    # ... always runs (no profile)

  prometheus:
    image: prom/prometheus:v2.49.0
    profiles: ["monitoring"]   # Only starts when requested
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    ports:
      - "9090:9090"

  mailhog:
    image: mailhog/mailhog:v1.0.1
    profiles: ["email"]
    ports:
      - "8025:8025"
# Normal startup — only services without profiles
docker compose up -d
# Include monitoring
docker compose --profile monitoring up -d
# Include everything
docker compose --profile monitoring --profile email up -d
Profiles keep the default docker compose up fast for everyone while making optional
services available on demand.
Compose Watch: The Development Workflow¶
Compose Watch (V2.22+) replaces the old bind-mount-and-restart dance for development.
services:
  api:
    build: .
    develop:
      watch:
        - action: sync      # Copy changed files into the container
          path: ./src
          target: /app/src
        - action: rebuild   # Rebuild the image when dependencies change
          path: ./requirements.txt
docker compose watch
# Watching for changes...
# src/main.py changed → syncing to /app/src/main.py
# requirements.txt changed → rebuilding api service
Three watch actions:
| Action | What it does | Use when |
|---|---|---|
| `sync` | Copies changed files into the running container | Source code with hot-reload |
| `rebuild` | Rebuilds and replaces the container | Dependency files change |
| `sync+restart` | Copies files, then restarts the container | Config changes that need a process restart |
This is faster than bind mounts because it uses Docker's file sync rather than the filesystem notify layer, which is notoriously slow on macOS with Docker Desktop.
Multi-Stage Builds With Compose¶
Your Dockerfile can use multi-stage builds, and Compose can target a specific stage.
This lets you use one Dockerfile for both development and production:
# Dockerfile
FROM python:3.11-slim AS base
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
FROM base AS dev
RUN pip install debugpy pytest
COPY . .
CMD ["python", "-m", "debugpy", "--listen", "0.0.0.0:5678", "-m", "uvicorn", "app:app", "--reload"]
FROM base AS prod
COPY . .
RUN adduser --disabled-password --no-create-home appuser
USER appuser
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
# compose.yaml — for local dev
services:
  api:
    build:
      context: .
      target: dev       # Use the dev stage
    ports:
      - "8000:8000"
      - "5678:5678"     # Debugger port
    volumes:
      - .:/app          # Live code reload
# compose.prod.yaml — for production-like testing
services:
  api:
    build:
      context: .
      target: prod      # Use the prod stage (no debugger, no dev deps)
    ports:
      - "8000:8000"
Compose vs Kubernetes: When to Graduate¶
This is the question everyone asks. Here's a practical framework:
| Scenario | Use Compose | Use Kubernetes |
|---|---|---|
| Local development | Yes | Overkill |
| CI/CD test environments | Yes | Usually overkill |
| Single-server deployment | Maybe (with `restart: always`) | Overkill |
| Multi-server production | No | Yes |
| Auto-scaling needed | No | Yes |
| Rolling deployments needed | Limited | Yes |
| Team > 5 services in prod | Fragile | Yes |
Mental Model: Compose is a single-host tool. It assumes all containers run on one machine. The moment you need containers spread across multiple hosts, auto-healing, rolling deployments, or horizontal scaling, you've outgrown Compose. Kubernetes, Nomad, or even Docker Swarm (still works, just not actively developed) are the next step.
The good news: the concepts translate directly. Compose services → Kubernetes
Deployments. Compose networks → Kubernetes Services + NetworkPolicies. Compose
volumes → Kubernetes PersistentVolumeClaims. The abstractions change; the thinking
doesn't.
Debugging: When Things Go Wrong¶
Read the logs¶
# All services
docker compose logs
# One service, follow mode, last 100 lines
docker compose logs -f --tail 100 api
# With timestamps (essential for correlating across services)
docker compose logs -f -t api db
Get a shell¶
# Interactive shell in a running container
docker compose exec api /bin/sh
# Run a one-off command
docker compose exec db psql -U postgres -d app
# Run a new container with the service's config (useful for debugging startup issues)
docker compose run --rm api /bin/sh
The difference between exec and run: exec attaches to a running container. run
creates a new container from the service definition. Use run when the container won't
start — it lets you get a shell without running the normal entrypoint.
Port conflicts¶
Something else is using port 5432 on your host. Find it:
# Linux
ss -tlnp | grep 5432
# LISTEN 0 244 0.0.0.0:5432 * users:(("postgres",pid=1234,...))
# macOS
lsof -i :5432
# postgres 1234 user ... TCP *:postgresql (LISTEN)
Options: stop the local PostgreSQL, change the host port mapping ("5433:5432"), or
don't expose the port at all (services on the same Compose network don't need host port
mappings to talk to each other).
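You can also check a port programmatically before bringing the stack up. A stdlib sketch with a hypothetical helper name:

```python
import socket

def host_port_free(port, host="127.0.0.1"):
    """Try to bind the port; an OSError means something already listens there,
    the same condition that makes a Compose port mapping fail with
    'port is already allocated'."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```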
Remember: Containers on the same Compose network talk to each other by service name on the container port. Port mappings (`ports:`) are only needed for access from outside the Compose network (your browser, CLI tools on the host, etc.).
Stuck containers¶
# Force recreate everything
docker compose up -d --force-recreate
# Nuclear option: tear down and rebuild
docker compose down
docker compose build --no-cache
docker compose up -d
# See what Compose thinks the current state is
docker compose ps
# NAME IMAGE COMMAND STATUS
# myapp-api-1 myapp-api "uvicorn app:app..." Up 3 minutes (healthy)
# myapp-db-1 postgres:16 "docker-entrypoint..." Up 3 minutes (healthy)
# myapp-cache-1 redis:7 "redis-server..." Up 3 minutes
# myapp-nginx-1 nginx:1.25 "/docker-entrypoint…" Up 3 minutes
Flashcard Check: Debugging¶
| Question | Answer (cover this column first) |
|---|---|
| What's the difference between `docker compose exec` and `docker compose run`? | `exec` runs a command in a running container; `run` creates a new container from the service definition |
| How do you find what's using a port on Linux? | `ss -tlnp \| grep <port>` |
| Do containers on the same Compose network need `ports:` to talk to each other? | No — they communicate directly by service name on the container port |
| What does `docker compose down -v` do that `down` doesn't? | It also deletes named volumes (potential data loss) |
The Complete .env and Supporting Files¶
For the Compose file above to work, you need a few supporting files.
.env¶
# Same file shown in the Environment Variables section
DB_PASSWORD=supersecret_dev_only
API_SECRET_KEY=local-dev-key-do-not-use-in-prod
COMPOSE_PROJECT_NAME=myapp
nginx.conf¶
events { worker_connections 1024; }

http {
    upstream api {
        server api:8000;   # "api" is the Compose service name
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://api;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /health {
            proxy_pass http://api/health;
        }
    }
}
init.sql¶
-- Runs only on first database creation (when the volume is empty)
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);
Exercises¶
Exercise 1: Bring It Up (2 minutes)¶
Create a directory with the compose.yaml, .env, nginx.conf, and init.sql from
this lesson. You'll need a simple Dockerfile for the API:
FROM python:3.11-slim
WORKDIR /app
RUN pip install fastapi uvicorn psycopg2-binary redis
COPY app.py .
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
And a minimal app.py:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def root():
    return {"status": "ok"}

@app.get("/health")
def health():
    return {"status": "healthy"}
Run docker compose up -d and verify with curl http://localhost/health.
Expected output
A healthy stack returns `{"status":"healthy"}` from the `/health` route. If you get `connection refused`, check `docker compose ps` — the API might still be starting. Wait for the health check to pass.

Exercise 2: Prove DNS Works (3 minutes)¶
Get a shell inside the API container and resolve the other services:
docker compose exec api /bin/sh
Inside the container, try nslookup db, nslookup cache, and ping nginx -c 1.
What to look for
Each service name resolves to an IP on the Docker network (typically 172.x.x.x). This proves the embedded DNS is working. You didn't configure any of this — Compose did it automatically.

Exercise 3: Break the Dependency Chain (5 minutes)¶
Modify the api service to remove the health check condition on db:
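The weakened dependency would look roughly like this (only the `depends_on` block changes):

```yaml
  api:
    depends_on:
      - db   # plain list form: no health-check condition, no waiting for readiness
```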
Now add a startup query to app.py that connects to PostgreSQL immediately. Restart
with docker compose up -d --force-recreate. Does the API start successfully? Why or
why not?
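A stdlib sketch of such a startup probe. A real version would run `SELECT 1` through psycopg2; this stand-in, with a hypothetical helper name, fails the same way when the container has started but Postgres isn't accepting connections yet:

```python
import socket

def db_reachable(host="db", port=5432, timeout=2.0):
    """Startup probe: can we open a TCP connection to the database at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # covers refused connections and timeouts
        return False
```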
Explanation
Without `condition: service_healthy`, Compose starts the API as soon as the DB container is created — before PostgreSQL is ready to accept connections. The API's startup query fails with `Connection refused`. This is why health check conditions matter: container-started is not the same as service-ready.

Exercise 4: Add a Service (10 minutes)¶
Add Adminer (a web-based database UI) to the Compose file:
- Image: `adminer:4`
- Port: 8080 on the host, 8080 in the container
- Network: `backend` only
- Profile: `tools` (so it doesn't start by default)
Start it with docker compose --profile tools up -d and open http://localhost:8080.
Connect to db on port 5432 using the credentials from .env.
Solution
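One possible service block meeting the spec (the service name `adminer` is a choice, not a requirement):

```yaml
  adminer:
    image: adminer:4
    ports:
      - "8080:8080"
    networks:
      - backend
    profiles: ["tools"]
```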
Note that the server hostname is `db` (the service name), not `localhost`. The Compose DNS resolves it.

Cheat Sheet¶
| Command | What it does |
|---|---|
| `docker compose up -d` | Start all services in background |
| `docker compose up -d --build` | Rebuild images, then start |
| `docker compose down` | Stop and remove containers + networks (keeps volumes) |
| `docker compose down -v` | Same, but also deletes named volumes |
| `docker compose ps` | Show running services and their status |
| `docker compose logs -f <service>` | Follow logs for one service |
| `docker compose exec <service> sh` | Shell into a running container |
| `docker compose run --rm <service> sh` | New container from service definition |
| `docker compose config` | Show resolved YAML (verify variable substitution) |
| `docker compose --profile <name> up` | Start services including a profile |
| `docker compose watch` | Auto-sync/rebuild on file changes |
| `docker compose top` | Show processes in all containers |
| Compose YAML key | Purpose |
|---|---|
| `services:` | Container definitions |
| `networks:` | Network isolation boundaries |
| `volumes:` | Persistent storage declarations |
| `depends_on: condition:` | Startup ordering with health awareness |
| `healthcheck:` | Readiness probe for a service |
| `profiles:` | Optional services activated by flag |
| `env_file:` | Load env vars into container from file |
| `develop: watch:` | File sync/rebuild triggers for dev |
Takeaways¶
- Compose replaces imperative `docker run` commands with a declarative YAML file. One file describes your entire local stack — networking, storage, startup order, and environment.
- Service names are DNS hostnames. This is Compose's killer feature. `db`, `cache`, `api` resolve automatically. No IP addresses, no manual `/etc/hosts` edits.
- `depends_on` without `condition: service_healthy` is almost useless. Container-started is not service-ready. Always pair `depends_on` with health checks for databases and APIs.
- Named volumes survive `docker compose down` but not `down -v`. Know which one you're typing. Muscle memory can destroy data.
- Compose is a single-host tool. It's perfect for development and testing. For multi-host production with scaling and rolling deploys, graduate to Kubernetes.
- Compose V2 is a Docker CLI plugin, not a standalone binary. Use `docker compose` (space), not `docker-compose` (hyphen). Use `compose.yaml`, not `docker-compose.yml`.
Related Lessons¶
- What Happens When You `docker build` — how images are constructed from layers
- Deploy a Web App From Nothing — the progression from bare process to containerized deployment
- Why DNS Is Always the Problem — deeper dive into DNS resolution and debugging
- The Disk That Filled Up — when Docker volumes and images consume all your storage
- What Happens When You `kubectl apply` — the next step when you outgrow Compose