# Deploy a Web App From Nothing

Tags: lesson, processes, systemd, nginx, tls, docker, docker-compose, kubernetes

Topics: processes, systemd, nginx, TLS, Docker, docker-compose, Kubernetes
Level: L1–L2 (Foundations → Operations)
Time: 75–90 minutes
Prerequisites: None (everything is explained from scratch)
The Mission¶
You have a Python web app — a single file, app.py. Your job: get it running in production,
reachable from the internet, with TLS encryption, automatic restarts, and logging.
We'll start with the simplest possible deployment (run it directly) and add layers one at a time. Each layer solves a problem that the previous approach couldn't handle. By the end, you'll understand why the production stack has so many layers — not because engineers love complexity, but because each layer exists for a reason.
```python
# app.py — our application
from fastapi import FastAPI
import uvicorn

app = FastAPI()

@app.get("/")
def root():
    return {"status": "ok", "message": "Hello from the app"}

@app.get("/health")
def health():
    return {"status": "healthy"}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
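Both endpoints return JSON, and the /health endpoint is the one every later layer will poll. As a side sketch, here is roughly what a health check amounts to in stdlib Python — the URL default and the `check_health` name are illustrative, not part of the lesson's code:

```python
import json
import urllib.request

def check_health(url: str = "http://127.0.0.1:8000/health", timeout: float = 2.0) -> bool:
    """Return True only if the app answers /health with HTTP 200 and status 'healthy'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = json.loads(resp.read().decode())
            return resp.status == 200 and body.get("status") == "healthy"
    except (OSError, ValueError):
        # Connection refused, timeout, or malformed JSON all count as unhealthy
        return False
```

systemd, Docker, and Kubernetes each run some variant of this loop for you; the endpoint is the contract.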
Level 0: Just Run It¶
Level 0 is simply running the file in a terminal: python3 app.py. It works: you can hit http://server:8000/ and get a response.
Problems:

- Close the terminal → app dies
- App crashes → stays dead
- No logs except what scrolls past in the terminal
- No TLS (plain HTTP)
- Runs as your user (might be root)
- No resource limits (can eat all RAM)
- Rebooting the server kills the app
This is fine for development. For production, we need every one of those problems solved.
Level 1: systemd — Survive Reboots and Crashes¶
The first layer: make the OS manage the process.
```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My Web Application
After=network.target

[Service]
Type=simple
User=appuser
Group=appuser
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/venv/bin/python app.py
Restart=on-failure
RestartSec=5
StandardOutput=journal
StandardError=journal

# Resource limits
MemoryMax=512M
CPUQuota=200%

# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/opt/myapp/data
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```
```bash
# Set it up
sudo useradd -r -s /sbin/nologin appuser
sudo mkdir -p /opt/myapp/data
sudo chown appuser:appuser /opt/myapp /opt/myapp/data

# Install and start
sudo systemctl daemon-reload
sudo systemctl enable --now myapp
sudo systemctl status myapp
```
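One operational detail not shown above: when you later need to tweak the unit (say, to add environment variables), the usual practice is a drop-in override rather than editing the unit file in place. A sketch, with hypothetical variable names:

```ini
# /etc/systemd/system/myapp.service.d/override.conf — drop-in; the .d directory is the convention
[Service]
Environment=LOG_LEVEL=info
Environment=PORT=8000
```

Run sudo systemctl daemon-reload and restart the service after adding a drop-in; systemd merges it over the base unit.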
What we gained:
- App starts on boot (WantedBy=multi-user.target)
- Restarts on crash (Restart=on-failure)
- Runs as non-root (User=appuser)
- Logs go to journal (journalctl -u myapp -f)
- Memory capped at 512M (MemoryMax=512M)
- Filesystem restricted (ProtectSystem=strict)
Still missing:

- No TLS
- Running on port 8000 (not 80/443)
- No load balancing
- No graceful deployments
Mental Model: systemd is your application's babysitter. It starts the app, watches it, restarts it if it crashes, limits its resources, and logs everything it says. Without systemd, you're the babysitter — and you need to sleep.
Level 2: Nginx Reverse Proxy — TLS and Port 443¶
Users expect https://app.example.com, not http://server:8000. We need:
- Port 443 (HTTPS)
- TLS certificate
- Proper HTTP headers
Nginx sits in front of the app, handling TLS and proxying requests:
```nginx
# /etc/nginx/sites-available/myapp
server {
    listen 80;
    server_name app.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000" always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-Frame-Options DENY always;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /health {
        proxy_pass http://127.0.0.1:8000/health;
        access_log off;  # Don't log health checks
    }
}
```
```bash
# Get a free TLS certificate
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d app.example.com

# Enable the site
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```
The architecture now:

    Internet → :443 → Nginx (TLS termination) → 127.0.0.1:8000 → app.py (systemd)
What we gained:

- HTTPS with a real certificate (free, auto-renewing)
- HTTP → HTTPS redirect
- Security headers (HSTS, X-Frame-Options)
- Health check endpoint without access log noise
- The app doesn't need to handle TLS (simpler code)
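One consequence of the proxy worth internalizing: the app now only ever sees connections from 127.0.0.1, so the original client details travel in the X-Forwarded-* headers Nginx sets above. A stdlib sketch of how an app might recover the client IP — the `client_ip` helper is illustrative, not part of the lesson's app.py:

```python
def client_ip(headers: dict, peer: str) -> str:
    """Recover the original client IP behind a trusted reverse proxy.

    X-Forwarded-For is a comma-separated chain: the left-most entry is the
    original client, later entries are proxies the request passed through.
    Only trust the header when the direct peer is our own Nginx.
    """
    if peer == "127.0.0.1":  # request arrived via the local reverse proxy
        forwarded = headers.get("X-Forwarded-For", "")
        if forwarded:
            return forwarded.split(",")[0].strip()
    return peer  # direct connection: the peer address is the client
```

The trust check matters: a client can send a forged X-Forwarded-For, so the header is only meaningful when the request provably came through your proxy.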
Still missing:

- App and all its dependencies are installed directly on the OS
- Deploying a new version means SSH + git pull + restart
- No isolation between app and OS
- "Works on my machine" problems
Level 3: Docker — Isolation and Reproducibility¶
The app, its dependencies, and its runtime environment are now packaged as a single artifact:
```dockerfile
# Dockerfile
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

RUN useradd -r -u 1000 appuser
USER appuser

EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```
```bash
# Build
docker build -t myapp:v1 .

# Run
docker run -d \
  --name myapp \
  --restart unless-stopped \
  -p 127.0.0.1:8000:8000 \
  --memory 512m \
  --init \
  myapp:v1
```
Key flags:
- -p 127.0.0.1:8000:8000 — only accept connections from localhost (Nginx forwards to us)
- --memory 512m — same limit we had in systemd
- --init — tini handles signals and zombie reaping (PID 1 problem)
- --restart unless-stopped — Docker restarts the container on crash
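The PID 1 point deserves a line of code. Without --init, the app itself runs as PID 1, where the kernel applies no default signal handling, so docker stop tends to wait out its grace period and then SIGKILL the process. A minimal sketch of the handler an app can register to exit cleanly on SIGTERM (illustrative; uvicorn installs its own equivalent handlers):

```python
import signal
import sys

def handle_sigterm(signum, frame):
    # Clean shutdown path: flush buffers, close connections, then exit 0
    print("received SIGTERM, shutting down cleanly", flush=True)
    sys.exit(0)

# Register the handler for SIGTERM, the signal docker stop sends first
signal.signal(signal.SIGTERM, handle_sigterm)
```

Even with such a handler, --init is still useful: tini also reaps zombie child processes, which PID 1 is responsible for.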
The architecture now:

    Internet → :443 → Nginx (TLS) → 127.0.0.1:8000 → app container (Docker)
What we gained:
- Identical environment everywhere (dev, staging, prod)
- Dependencies don't conflict with the host OS
- Deployment = docker pull + docker run (no system packages to install)
- Easy rollback: docker run myapp:v0.9 reverts to the old version
Still missing:

- Nginx and the app are managed separately
- Database, cache, workers — all separate manual setup
- Multi-service orchestration
Level 4: Docker Compose — Multi-Service Stack¶
Real apps have a database, a cache, a reverse proxy, and maybe a worker process. docker-compose defines them all in one file:
```yaml
# docker-compose.yml
services:
  app:
    build: .
    init: true
    restart: unless-stopped
    environment:
      - DATABASE_URL=postgresql://app:secret@db:5432/myapp
      - REDIS_URL=redis://cache:6379/0
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    deploy:
      resources:
        limits:
          memory: 512M

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
    depends_on:
      - app

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru

volumes:
  pgdata:
```
```bash
# Start everything
docker compose up -d

# Check status
docker compose ps

# View logs
docker compose logs -f app

# Deploy new version
docker compose build app
docker compose up -d --no-deps app  # Restart only the app service
```
The architecture now:

    Internet → :443 → Nginx container
                          ↓
                    App container → DB container (PostgreSQL)
                          ↓
                    Cache container (Redis)
What we gained:
- Entire stack defined in one file
- docker compose up starts everything in dependency order
- Health checks wait for database before starting the app
- Named volumes persist database data across restarts
- docker compose down stops everything cleanly
Still missing:

- Single host — no redundancy
- No health-based routing
- Scaling = manual
- Secret management is hardcoded
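That last point — the hardcoded POSTGRES_PASSWORD — has a common Compose-level mitigation before you reach Kubernetes Secrets: variable substitution from a .env file kept out of version control. A sketch (DB_PASSWORD is a name I'm assuming, not from the lesson):

```yaml
# docker-compose.yml fragment: docker compose substitutes ${DB_PASSWORD}
# from the shell environment or a .env file next to this file
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
```

This keeps the secret out of the YAML you commit, though anyone with access to the host can still read the .env file.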
Level 5: Kubernetes — Orchestration at Scale¶
When you need multiple instances, automatic failover, rolling deployments, and infrastructure-level health checks, you move to Kubernetes.
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:v1
          ports:
            - containerPort: 8000
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 15
            periodSeconds: 20
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts: [app.example.com]
      secretName: myapp-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```
```bash
kubectl apply -f deployment.yaml

# Watch the rollout
kubectl rollout status deployment/myapp

# Scale up
kubectl scale deployment/myapp --replicas=5

# Deploy new version (rolling update)
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2
```
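By default a Deployment rolls pods over gradually; the churn is tunable with a strategy block in the Deployment spec. A fragment with illustrative values:

```yaml
# Deployment spec fragment: control rollout churn
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # at most one extra pod above the desired replica count
    maxUnavailable: 0  # never dip below the desired replica count mid-rollout
```

With replicas: 3, this means a rollout always keeps 3 ready pods serving while a 4th starts up and passes its readiness probe.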
The architecture now:

    Internet → Ingress (TLS via cert-manager) → Service → 3 app pods
What we gained:

- 3 replicas — survives individual pod failures
- Rolling deployments — zero-downtime updates
- Readiness probes — unhealthy pods removed from rotation automatically
- Liveness probes — stuck pods restarted automatically
- Resource requests/limits — scheduling and OOM protection
- Security context — non-root, read-only filesystem, dropped capabilities
- TLS via cert-manager — automatic certificate provisioning and renewal
The Complete Stack — Why Each Layer Exists¶
```
Level 0: python3 app.py
         Problem: dies when terminal closes
   ↓
Level 1: systemd service
         Solves: restart, boot, logging, resource limits
         Problem: no TLS, runs on weird port
   ↓
Level 2: + Nginx reverse proxy
         Solves: TLS, port 443, security headers
         Problem: app installed directly on OS
   ↓
Level 3: + Docker container
         Solves: isolation, reproducibility, clean deploys
         Problem: multi-service coordination
   ↓
Level 4: + Docker Compose
         Solves: multi-service stack, health checks, volumes
         Problem: single host, no redundancy
   ↓
Level 5: + Kubernetes
         Solves: scaling, failover, rolling deploys, health routing
```
Each level adds complexity. You don't always need Level 5. A personal project is fine at Level 1 or 2. A small team's internal tool might live at Level 4 forever. Kubernetes makes sense when you need the reliability and scaling features — not before.
Mental Model: Production infrastructure is an onion. Each layer wraps around the previous one to solve a specific problem. When you peel back a layer, you should be able to say exactly why it's there. If you can't, maybe it shouldn't be.
Flashcard Check¶
Q1: What does Restart=on-failure in a systemd unit do?
Restarts the service automatically when it exits with a non-zero exit code. Combined with
RestartSec=5, it waits 5 seconds between restarts to avoid rapid restart loops.
Q2: Why does Nginx sit in front of the Python app?
TLS termination (the app doesn't handle encryption), port 443 (unprivileged apps can't bind below 1024), security headers, and potential load balancing across multiple app instances.
Q3: docker run -p 127.0.0.1:8000:8000 — why the 127.0.0.1?
Binds the port to localhost only. External traffic goes through Nginx (which handles TLS), not directly to the container. Without
127.0.0.1, Docker publishes on all interfaces, bypassing Nginx.
Q4: In docker-compose, what does depends_on: db: condition: service_healthy do?
Waits until the database's healthcheck passes before starting the app. Without the condition, depends_on only waits for the container to start, not for the service inside to be ready.
Q5: Kubernetes Deployment has replicas: 3. What happens when one pod crashes?
The kubelet restarts the crashed container (per the pod's restart policy), and the readiness probe keeps the pod out of the Service endpoints until it's healthy again. The other 2 pods handle traffic during recovery.
Q6: When should you NOT use Kubernetes?
When you don't need scaling, failover, or rolling deploys. A single-server app behind Nginx + systemd (Level 2) or Docker Compose (Level 4) is simpler and easier to debug.
Exercises¶
Exercise 1: Build Levels 0-2 (hands-on)¶
Create the app, write the systemd unit, and set up Nginx. You don't need a real domain —
use localhost and a self-signed cert:
```bash
# Self-signed cert for testing
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/selfsigned.key -out /tmp/selfsigned.crt \
  -subj "/CN=localhost"
```
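For this exercise, the two ssl_ lines in the Nginx config point at the generated pair instead of the Let's Encrypt paths; a fragment, assuming the paths from the openssl command above:

```nginx
# Local-testing variant of the HTTPS server block
server_name localhost;
ssl_certificate     /tmp/selfsigned.crt;
ssl_certificate_key /tmp/selfsigned.key;
```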
Can you curl -k https://localhost/health and get a response?
Exercise 2: Dockerize it (hands-on)¶
Write the Dockerfile, build the image, and run it. Verify:
1. The image is under 200MB
2. The container runs as non-root (docker exec myapp id)
3. The health endpoint works through Nginx
Exercise 3: The decision (think, don't code)¶
For each scenario, which level is appropriate?
- A personal blog
- A startup's API serving 100 requests/second
- A bank's transaction processing system
- An internal admin dashboard used by 5 people
- A SaaS platform with 50 microservices
Answers

1. **Level 1-2.** systemd + Nginx + Let's Encrypt. Static site generator if possible. No containers needed — it's overkill for a blog.
2. **Level 3-4.** Docker + Compose for the stack. Maybe Kubernetes if the team grows and needs rolling deploys. 100 rps is easily handled by a single server.
3. **Level 5.** Multi-replica, health-checked, auto-scaling, with strict security contexts. The cost of downtime justifies the operational complexity.
4. **Level 1-2.** systemd + Nginx. Docker optional. Kubernetes would be absurd for 5 users on an internal tool.
5. **Level 5.** 50 microservices need orchestration, service discovery, and independent scaling. This is what Kubernetes was built for.

Cheat Sheet¶
Level 1: systemd¶

```bash
sudo systemctl enable --now myapp
sudo systemctl status myapp
journalctl -u myapp -f
```

Level 2: Nginx reverse proxy¶

```nginx
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
```
Level 3: Docker¶
```bash
docker run -d --name app --restart unless-stopped \
  -p 127.0.0.1:8000:8000 --memory 512m --init myapp:v1
```
Level 4: Docker Compose¶
```bash
docker compose up -d
docker compose logs -f app
docker compose up -d --no-deps app  # Redeploy one service
```
Level 5: Kubernetes¶
```bash
kubectl apply -f deployment.yaml
kubectl rollout status deployment/myapp
kubectl scale deployment/myapp --replicas=5
```
Takeaways¶
- **Each layer solves exactly one problem.** systemd = process lifecycle. Nginx = TLS and routing. Docker = isolation. Compose = multi-service. Kubernetes = orchestration.
- **You don't always need every layer.** Match the infrastructure to the actual need. Kubernetes for a personal blog is like driving a semi truck to pick up groceries.
- **The app doesn't change.** The same app.py runs at every level. The infrastructure wraps around it, never inside it.
- **Health checks thread through everything.** systemd checks process alive. Docker checks container running. Nginx checks backend responding. Kubernetes checks readiness and liveness. Health endpoints are the universal language of "is it working?"
- **Security accumulates with each layer.** Non-root user (systemd) + TLS (Nginx) + container isolation (Docker) + security context (Kubernetes). Each layer adds defense.
Related Lessons¶
- The Hanging Deploy — what happens when these processes and signals go wrong
- What Happens When You Click a Link — the request path through Nginx, TLS, and your app
- What Happens When You docker build — how the Docker image is actually constructed