# HTTP Protocol — Street-Level Ops
## Quick Diagnosis Commands
# Get response headers only (fastest health check)
curl -sI https://api.example.com/health
# Full verbose request/response with TLS handshake details
curl -v https://api.example.com/health 2>&1
# Timing breakdown: DNS, connect, TLS, TTFB, total
curl -o /dev/null -s -w \
"dns: %{time_namelookup}s\n\
connect: %{time_connect}s\n\
tls: %{time_appconnect}s\n\
ttfb: %{time_starttransfer}s\n\
total: %{time_total}s\n\
http: %{http_code}\n\
size: %{size_download} bytes\n" \
https://api.example.com/health
# Follow redirects and show each hop
curl -L -v https://example.com 2>&1 | grep -E '^[<>*]|^< HTTP|^< Location'
# Send a JSON POST request
curl -X POST https://api.example.com/v2/deploy \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{"image": "myapp:v2.1.0", "replicas": 3}'
# Check certificate expiration
echo | openssl s_client -connect api.example.com:443 \
-servername api.example.com 2>/dev/null \
| openssl x509 -noout -enddate -subject
# notAfter=Mar 18 00:00:00 2027 GMT
# Check certificate chain (find missing intermediates)
openssl s_client -connect api.example.com:443 \
-servername api.example.com -showcerts </dev/null 2>/dev/null \
| grep -E 'Certificate chain|s:|i:'
# Test a specific HTTP method
curl -X OPTIONS -v https://api.example.com/v2/deploy 2>&1 \
| grep -E 'Access-Control|Allow'
# Check response compression
curl -sI -H "Accept-Encoding: gzip" https://api.example.com/data \
| grep -i content-encoding
# content-encoding: gzip ← compression active
# Test with a specific Host header (virtual host routing)
curl -s -H "Host: app2.example.com" http://10.0.1.50/health
## Gotcha: 502 Bad Gateway — Backend Not Reachable
Symptom: Users get 502 errors. The application logs show nothing because the request never reached the app.
Rule: 502 means the proxy/load balancer tried to connect to the upstream backend and failed. The problem is between the proxy and the backend, not between the client and the proxy.
Remember: 502 = "I tried to reach the backend and the connection failed, or its response was garbage." 504 = "I reached the backend, but it answered too slowly." 503 = "The service can't take this request right now" (sent by the backend itself, or by the proxy when no healthy upstream exists). These three codes tell you exactly where in the request lifecycle the failure occurred.
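The mnemonic can be made mechanical with a tiny triage helper (the function name and messages are illustrative, not from any tool):

```shell
# Hypothetical triage helper for gateway-class errors.
triage() {
  case "$1" in
    502) echo "proxy-to-backend connect failed: check the backend process" ;;
    503) echo "service overloaded or draining: check health checks and capacity" ;;
    504) echo "backend reached but too slow: check proxy and app timeouts" ;;
    *)   echo "not a gateway-class error" ;;
  esac
}
triage 502   # → proxy-to-backend connect failed: check the backend process
```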
# Step 1: Confirm the backend is listening
curl -s http://backend-host:8000/health
# If connection refused → backend is down
# Step 2: Check from the proxy server itself
ssh proxy-server
curl -s http://localhost:8000/health # if backend is local
curl -s http://backend-ip:8000/health # if backend is remote
# Step 3: Check nginx upstream config
grep -A5 'upstream' /etc/nginx/conf.d/*.conf
# Verify IP, port, and health check settings
# Step 4: Check for connection reuse issues (keepalive mismatch)
# If the backend closes idle connections before nginx expects, nginx
# reuses a dead socket and the request fails with an intermittent 502.
# nginx upstream block (keepalive_timeout here needs nginx >= 1.15.3,
# plus "keepalive N;" to enable pooling; proxy_pass also needs
# proxy_http_version 1.1 and an empty Connection header for reuse):
#   keepalive 32;
#   keepalive_timeout 60s;
# Backend (e.g., gunicorn):
#   --keep-alive 65   # must be LONGER than the proxy's keepalive timeout
# Step 5: Check backend process limits
ss -tlnp | grep 8000
# Is the socket listening? Is the process running?
## Gotcha: 504 Gateway Timeout — Backend Too Slow
Symptom: Slow requests return 504 while fast requests work fine.
# Step 1: Measure actual backend response time
curl -o /dev/null -s -w "ttfb: %{time_starttransfer}s\ntotal: %{time_total}s\n" \
http://backend:8000/slow-endpoint
# Step 2: Check proxy timeout settings
# Nginx:
grep -E 'proxy_read_timeout|proxy_connect_timeout|proxy_send_timeout' \
/etc/nginx/conf.d/*.conf
# Default proxy_read_timeout is 60s
> **Default trap:** Nginx's default `proxy_read_timeout` is 60 seconds, but AWS ALB's default idle timeout is also 60 seconds. If your backend takes exactly 60s, you get a race condition between nginx and ALB timing out. Set ALB idle timeout higher than nginx timeout, or set nginx timeout lower than ALB — never equal.
# Step 3: Increase timeout for specific slow endpoints
# In nginx location block:
# location /api/reports {
# proxy_read_timeout 300s; # 5 minutes for report generation
# }
# Step 4: Verify ALB/ELB idle timeout (AWS)
# Idle timeout is a load balancer attribute, not a target group attribute:
aws elbv2 describe-load-balancer-attributes \
--load-balancer-arn $LB_ARN \
| jq '.Attributes[] | select(.Key == "idle_timeout.timeout_seconds")'
## Pattern: Debugging CORS Errors
CORS errors appear in the browser console but not in server logs. The server is working correctly — the browser is blocking the response.
# Step 1: Reproduce the preflight request
curl -v -X OPTIONS https://api.example.com/data \
-H "Origin: https://dashboard.example.com" \
-H "Access-Control-Request-Method: POST" \
-H "Access-Control-Request-Headers: Content-Type, Authorization" \
2>&1 | grep -E 'Access-Control|HTTP/'
# Expected response headers for working CORS:
# Access-Control-Allow-Origin: https://dashboard.example.com
# Access-Control-Allow-Methods: GET, POST, OPTIONS
# Access-Control-Allow-Headers: Content-Type, Authorization
# Access-Control-Max-Age: 86400
# Step 2: Check if the server returns CORS headers on actual requests
curl -sI -H "Origin: https://dashboard.example.com" \
https://api.example.com/data \
| grep -i access-control
# Common fixes in nginx:
# add_header Access-Control-Allow-Origin "https://dashboard.example.com";
# add_header Access-Control-Allow-Methods "GET, POST, OPTIONS";
# add_header Access-Control-Allow-Headers "Content-Type, Authorization";
# if ($request_method = OPTIONS) { return 204; }
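What the browser actually does in Step 1 boils down to a string comparison. A sketch, with the function name and the commented endpoint being illustrative:

```shell
# check_cors ORIGIN ALLOWED: mimics the browser's origin comparison.
check_cors() {
  if [ "$2" = "$1" ] || [ "$2" = "*" ]; then echo "allowed"; else echo "blocked"; fi
}
# In practice, feed it the live header from Step 2, e.g.:
#   allowed=$(curl -sI -H "Origin: https://dashboard.example.com" https://api.example.com/data \
#     | awk -F': ' 'tolower($1)=="access-control-allow-origin"{print $2}' | tr -d '\r')
check_cors "https://dashboard.example.com" "*"                          # → allowed
check_cors "https://dashboard.example.com" "https://evil.example.com"   # → blocked
```

Note that `*` only passes for credential-less requests; once the request carries cookies or an Authorization header, browsers demand an exact origin match (a nuance this sketch ignores).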
## Pattern: Certificate Expiry Monitoring
# Check cert expiry for a list of domains
for domain in api.example.com app.example.com cdn.example.com; do
expiry=$(echo | openssl s_client -connect ${domain}:443 \
-servername $domain 2>/dev/null \
| openssl x509 -noout -enddate 2>/dev/null \
| cut -d= -f2)
days_left=$(( ($(date -d "$expiry" +%s) - $(date +%s)) / 86400 ))  # GNU date; BSD/macOS parses with date -j -f
echo "$domain: $days_left days ($expiry)"
done
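For a cron-style alert, a threshold check can sit on top of the loop above; the 30-day window is an arbitrary choice:

```shell
# Flag anything expiring inside the warning window.
warn_days=30
check_days_left() {
  # $1 = days until expiry, as computed in the loop above
  if [ "$1" -lt "$warn_days" ]; then echo "RENEW SOON"; else echo "ok"; fi
}
check_days_left 12   # → RENEW SOON
check_days_left 90   # → ok
```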
# Check for incomplete certificate chain
openssl s_client -connect api.example.com:443 \
-servername api.example.com </dev/null 2>&1 \
| grep "Verify return code"
# 0 = OK
# 21 = unable to verify the first certificate (missing intermediate)
> **Gotcha:** "Verify return code: 21" means your server is missing an intermediate certificate. The fix is to concatenate the intermediate cert(s) with your server cert in the right order: server cert first, then intermediate(s), root last (or omitted). Most browsers cache intermediates and work anyway, but curl and API clients fail — so you discover the problem in production, not testing.
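The concatenation fix can be rehearsed offline with throwaway self-signed certs standing in for your real server and intermediate certs (all paths and subject names here are placeholders):

```shell
# Generate two throwaway certs to stand in for server + intermediate.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=server.example.com" \
  -keyout /tmp/server.key -out /tmp/server.crt -days 1 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Example Intermediate CA" \
  -keyout /tmp/int.key -out /tmp/int.crt -days 1 2>/dev/null
# The fix: server cert first, then intermediate(s).
cat /tmp/server.crt /tmp/int.crt > /tmp/fullchain.crt
# Sanity checks: two PEM blocks, and the FIRST cert is the server's
# (openssl x509 reads only the first cert in a bundle).
grep -c "BEGIN CERTIFICATE" /tmp/fullchain.crt   # → 2
openssl x509 -in /tmp/fullchain.crt -noout -subject
```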
## Pattern: HTTP Request Debugging Playbook
# Full request lifecycle analysis
# 1. Check DNS resolution
dig +short api.example.com
# 2. Check TCP connectivity
nc -zv api.example.com 443 2>&1
# 3. Check TLS handshake
openssl s_client -connect api.example.com:443 \
-servername api.example.com </dev/null 2>&1 \
| head -20
# 4. Check HTTP response
curl -sv https://api.example.com/health 2>&1
# 5. Check from different locations (is it network-specific?)
# Use a jump host or different network:
ssh jump-host "curl -sI https://api.example.com/health"
## Pattern: Rate Limit Detection and Handling
# Check rate limit headers
curl -sI https://api.example.com/data | grep -i 'rate\|limit\|retry'
# Common headers:
# X-RateLimit-Limit: 100
# X-RateLimit-Remaining: 0
# X-RateLimit-Reset: 1710754800
# Retry-After: 60
# Decode the reset timestamp
python3 -c "import datetime; print(datetime.datetime.fromtimestamp(1710754800))"
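GNU date can decode the epoch without Python; BSD/macOS date spells the flag differently, so a sketch that tries both:

```shell
# GNU date takes -d @EPOCH; BSD/macOS date takes -r EPOCH.
date -u -d @1710754800 2>/dev/null || date -u -r 1710754800
```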
# Retry loop with exponential backoff
for i in 1 2 4 8 16; do
status=$(curl -s -o /dev/null -w "%{http_code}" https://api.example.com/data)
if [ "$status" != "429" ]; then
echo "Success: HTTP $status"
break
fi
echo "Rate limited (429), retrying in ${i}s..."
sleep $i
done
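A refinement on the loop above: when the 429 carries a Retry-After header, honoring it beats a fixed schedule. The helper name and fallback value are illustrative:

```shell
# retry_delay RETRY_AFTER FALLBACK: prefer the server's hint when present.
retry_delay() {
  if [ -n "$1" ]; then echo "$1"; else echo "$2"; fi
}
# Pull the header on a 429, e.g.:
#   ra=$(curl -sI https://api.example.com/data \
#     | awk -F': ' 'tolower($1)=="retry-after"{print $2}' | tr -d '\r')
retry_delay "" 8    # → 8  (no hint: use the backoff value)
retry_delay 60 8    # → 60 (server said wait 60s)
```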
## Pattern: Debugging Cache Issues
# Check cache headers on a response
curl -sI https://cdn.example.com/static/app.js | grep -iE 'cache|etag|age|expires'
# Cache-Control: public, max-age=31536000, immutable
# ETag: "abc123"
# Age: 3600 ← served from cache, cached 1 hour ago
# Force cache bypass (note: many CDNs ignore request Cache-Control)
curl -sI -H "Cache-Control: no-cache" https://cdn.example.com/static/app.js
# Or use a cache-busting query string:
curl -sI "https://cdn.example.com/static/app.js?v=$(date +%s)"
# Check if CDN is serving stale content
curl -sI https://cdn.example.com/api/config | grep -E 'X-Cache|Age|Date'
# X-Cache: HIT ← CDN served from cache
# X-Cache: MISS ← CDN fetched fresh from origin
## Useful One-Liners
# Test all HTTP methods against an endpoint
# (--max-time caps each probe: -X HEAD in particular can hang, because
# curl still waits for a body that HEAD never sends; use -I for a true HEAD)
for method in GET HEAD POST PUT PATCH DELETE OPTIONS; do
status=$(curl -s -o /dev/null --max-time 5 -w "%{http_code}" -X $method https://api.example.com/test)
echo "$method → $status"
done
# Extract and decode a JWT from a response
# (payload is base64url, often unpadded; base64 -d errors are silenced)
curl -s https://api.example.com/auth/token -d 'user=admin&pass=secret' \
| jq -r '.token' \
| cut -d. -f2 \
| base64 -d 2>/dev/null \
| jq .
# Check HTTP/2 support
curl -sI --http2 https://api.example.com/ 2>&1 | head -1
# HTTP/2 200 ← supports HTTP/2
> **Under the hood:** HTTP/2 multiplexes multiple requests over a single TCP connection. This eliminates head-of-line blocking at the HTTP layer but not at the TCP layer — a single lost packet still stalls all streams. HTTP/3 (QUIC) fixes this by using UDP with per-stream loss recovery.
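The same probe works for HTTP/3 if your curl was built with it; check for the HTTP3 feature flag first (the commented request is illustrative):

```shell
# Does this curl build speak HTTP/3?
curl -V | grep -o HTTP3 || echo "this curl build lacks HTTP/3"
# If it does:
# curl --http3 -sI https://api.example.com/ | head -1   # HTTP/3 200
```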
# Measure TTFB for multiple requests (load test lite)
for i in $(seq 1 10); do
curl -o /dev/null -s -w "%{time_starttransfer}\n" https://api.example.com/health
done | awk '{sum+=$1; n++} END {printf "avg TTFB: %.3fs (n=%d)\n", sum/n, n}'
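The averaging pipeline above extends to min/median/max with a sort; here it is fed canned timings so the awk step is visible offline (swap the printf for the curl loop in practice):

```shell
# min/median/max over a column of timings (seconds).
printf '0.081\n0.102\n0.095\n0.230\n0.088\n' \
  | sort -n \
  | awk '{a[NR]=$1} END {printf "min=%s med=%s max=%s\n", a[1], a[int((NR+1)/2)], a[NR]}'
# → min=0.081 med=0.095 max=0.230
```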
# Check if server supports gzip compression
curl -sI -H "Accept-Encoding: gzip, deflate, br" https://api.example.com/data \
| grep -i content-encoding
# View redirect chain
curl -sIL https://example.com 2>&1 | grep -E 'HTTP/|Location:'