Nginx

26 cards — 🟢 5 easy | 🟡 10 medium | 🔴 5 hard

🟢 Easy (5)

1. How does Nginx achieve high concurrency with minimal resources?

Show answer Nginx uses an event-driven, non-blocking architecture. A master process manages multiple worker processes, each running an event loop (epoll/kqueue) that handles thousands of connections. There is no thread-per-connection overhead, allowing 10K+ concurrent connections on modest hardware. Total max connections = worker_processes × worker_connections.

Remember: nginx config: http→server(vhost)→location(path). Directives inherit downward.

2. What command should you always run before reloading Nginx?

Show answer nginx -t to test the configuration syntax. Always run nginx -t before nginx -s reload. Reload is graceful (no dropped connections); restart drops connections and should be avoided in production.

Remember: nginx = web server + reverse proxy + LB. Event-driven, 10K+ connections.

Fun fact: Created 2004 by Igor Sysoev for C10K problem. "engine-x."

3. How does Nginx select which server block handles a request?

Show answer 1. Match the listen directive (IP:port). 2. Match server_name against the Host header. 3. If neither matches, use the default_server block for that listen socket. A catch-all block conventionally uses server_name _ (an intentionally invalid name that never matches a real host) and returns 444, a non-standard Nginx code that closes the connection without sending a response.

Remember: nginx config: http→server(vhost)→location(path). Directives inherit downward.
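
The selection rules above can be sketched as a catch-all block (a minimal example; the listen port and the 444 response follow the card's convention):

```nginx
# Catch-all for requests whose Host header matches no other server block.
server {
    listen 80 default_server;
    server_name _;     # invalid name — never matches a real host
    return 444;        # non-standard: close the connection without a response
}
```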

4. What is the minimal Nginx reverse proxy configuration for a backend app?

Show answer A server block with listen 80, server_name, and a location / block containing proxy_pass http://backend:port. Add proxy_set_header Host $host and proxy_set_header X-Real-IP $remote_addr so the backend sees the original client info. Without these headers the backend only sees the proxy's IP address.

Remember: nginx = web server + reverse proxy + LB. Event-driven, 10K+ connections.
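
A minimal sketch of the reverse proxy described above (the hostname and backend address are placeholders):

```nginx
server {
    listen 80;
    server_name app.example.com;   # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:8080;          # placeholder backend
        proxy_set_header Host $host;               # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;   # pass the original client IP
    }
}
```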

5. How do you configure custom error pages in Nginx and what are the most common HTTP errors to handle?

Show answer Use error_page directive: error_page 502 503 504 /50x.html with a location = /50x.html block pointing to the file. Common errors: 502 Bad Gateway (backend down), 503 Service Unavailable (overloaded), 504 Gateway Timeout (backend too slow). Check error logs at /var/log/nginx/error.log for root cause.

Remember: nginx = web server + reverse proxy + LB. Event-driven, 10K+ connections.
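
The error_page setup above might look like this (the server name and file paths are illustrative):

```nginx
server {
    listen 80;
    server_name example.com;   # placeholder

    error_page 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;   # directory containing 50x.html
        internal;                     # reachable only via error_page, not directly
    }
}
```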

🟡 Medium (10)

1. What is the Nginx location matching priority order?

Show answer From highest to lowest: 1. = (exact match), 2. ^~ (prefix, skips regex check), 3. ~ (case-sensitive regex) and ~* (case-insensitive regex) evaluated in config order, 4. plain prefix (longest match). Nginx first finds the longest prefix, then checks regex locations. First regex match wins. If no regex matches, the longest prefix is used.

Remember: nginx = web server + reverse proxy + LB. Event-driven, 10K+ connections.
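
A sketch illustrating the four priority tiers (paths and responses are illustrative):

```nginx
location = /status   { return 200 "exact\n"; }              # 1. = exact match wins outright
location ^~ /static/ { root /var/www; }                     # 2. ^~ prefix — suppresses the regex check
location ~ \.php$    { return 403; }                        # 3. regex, evaluated in config order
location /           { proxy_pass http://127.0.0.1:8080; }  # 4. plain prefix (longest match)
```

Note that a request for /static/app.php is served by ^~ /static/, not the regex, because ^~ on the longest matching prefix skips regex evaluation entirely.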

2. Why does the trailing slash in proxy_pass matter?

Show answer Without a URI part (proxy_pass http://backend): the full request path is passed unchanged — with location /app/, a request for /app/page reaches the backend as /app/page. With a trailing slash (proxy_pass http://backend/): the matched location prefix is replaced — /app/page reaches the backend as /page. With a path (proxy_pass http://backend/v2/): the matched prefix is replaced with that path — /app/page becomes /v2/page. This is one of the most common Nginx misconfigurations.

Remember: nginx config: http→server(vhost)→location(path). Directives inherit downward.
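
The three cases side by side (the backend address is a placeholder):

```nginx
location /app/ {
    # No URI part: /app/page → backend receives /app/page
    proxy_pass http://127.0.0.1:8080;

    # Trailing slash:  proxy_pass http://127.0.0.1:8080/;     → /app/page becomes /page
    # With a path:     proxy_pass http://127.0.0.1:8080/v2/;  → /app/page becomes /v2/page
}
```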

3. What load balancing methods does Nginx support and how are upstream health checks configured?

Show answer Methods: round-robin (the default, no directive needed), least_conn, ip_hash (sticky sessions), hash $key [consistent] (consistent hashing), random. Health checks in open-source Nginx are passive only — e.g., max_fails=3 fail_timeout=30s per server: after 3 failed attempts within 30 seconds, the server is marked unavailable for 30 seconds. Active health checks require Nginx Plus. Use keepalive connections to backends for performance.

Remember: nginx = web server + reverse proxy + LB. Event-driven, 10K+ connections.
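
A sketch of passive health checking plus backend keepalive (server addresses are placeholders):

```nginx
upstream app_backend {
    server 10.0.1.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.2:8080 max_fails=3 fail_timeout=30s;
    keepalive 32;    # idle keepalive connections cached per worker
}

server {
    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;          # required for keepalive to backends
        proxy_set_header Connection "";  # clear the default "Connection: close"
    }
}
```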

4. What are the key directives for modern TLS configuration in Nginx?

Show answer ssl_protocols TLSv1.2 TLSv1.3, ssl_ciphers with ECDHE suites, ssl_prefer_server_ciphers off, ssl_stapling on (OCSP stapling), ssl_session_cache shared:SSL:10m, ssl_session_tickets off, and add_header Strict-Transport-Security for HSTS. Redirect HTTP to HTTPS with a separate server block returning 301.

Remember: nginx = web server + reverse proxy + LB. Event-driven, 10K+ connections.
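
Putting the directives together (domain and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;                        # placeholder
    ssl_certificate     /etc/ssl/example.com.crt;   # placeholder path
    ssl_certificate_key /etc/ssl/example.com.key;   # placeholder path

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;          # OCSP stapling (typically also needs a resolver directive)
    ssl_stapling_verify on;

    add_header Strict-Transport-Security "max-age=31536000" always;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;   # HTTP → HTTPS redirect
}
```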

5. How do you define an upstream block and what options control backend behavior?

Show answer upstream app_backend { server 10.0.1.1:8080 weight=3; server 10.0.1.2:8080; server 10.0.1.3:8080 backup; keepalive 32; } The weight parameter controls traffic distribution, backup marks a server as failover-only, and keepalive sets the number of idle connections cached per worker to reduce TCP handshake overhead.

Example: `upstream backend { server 10.0.0.1:8080; server 10.0.0.2:8080; }` — round-robin.

6. What is the difference between return and rewrite in Nginx location blocks?

Show answer return sends an immediate response (e.g., return 301 https://$host$request_uri) and is faster because it stops all further processing. rewrite regex-matches and modifies the URI: with the last flag it restarts location matching against the new URI; with break it continues in the current block. Use return for redirects and rewrite only for internal URI transformations. Mixing both in one block causes hard-to-debug behavior.

Remember: nginx = web server + reverse proxy + LB. Event-driven, 10K+ connections.
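
The contrast in config form (paths are illustrative):

```nginx
# return: immediate response, no further processing
location /old-blog {
    return 301 https://$host/blog$request_uri;
}

# rewrite: internal URI transformation, then continue
location /api/ {
    rewrite ^/api/v1/(.*)$ /v2/$1 last;   # re-run location matching with /v2/...
}
```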

7. What are the performance benefits of SSL termination at the Nginx layer?

Show answer Nginx handles the expensive TLS handshake and encryption, offloading that CPU work from backend servers. Benefits: centralized certificate management, session resumption via ssl_session_cache, OCSP stapling to reduce client-side lookups, and backends can speak plain HTTP on a trusted internal network. The backend CPU savings can be substantial, especially under high connection churn.

Remember: nginx = web server + reverse proxy + LB. Event-driven, 10K+ connections.

8. What security headers should you add to every Nginx server block?

Show answer add_header X-Frame-Options "SAMEORIGIN" (prevents clickjacking)
add_header X-Content-Type-Options "nosniff" (prevents MIME sniffing)
add_header X-XSS-Protection "1; mode=block"
add_header Referrer-Policy "strict-origin-when-cross-origin"
add_header Content-Security-Policy "default-src 'self'"
Use always parameter to add headers on error responses too.

Remember: nginx = web server + reverse proxy + LB. Event-driven, 10K+ connections.

9. How do worker_connections and worker_rlimit_nofile interact for high-concurrency Nginx?

Show answer worker_connections sets max simultaneous connections per worker. Each proxied connection uses two file descriptors (client + backend). worker_rlimit_nofile must be >= worker_connections (ideally 2x for proxying). Total capacity = worker_processes × worker_connections. Also raise the OS ulimit to match.

Remember: `worker_processes auto;` = one per CPU. Thousands of connections via epoll.
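
A sketch of the top-level tuning described above:

```nginx
worker_processes auto;         # one worker per CPU core
worker_rlimit_nofile 20000;    # ≥ 2 × worker_connections when proxying

events {
    worker_connections 10000;  # per worker; total capacity = workers × this
}
```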

10. How do you expose Nginx metrics for Prometheus monitoring?

Show answer Enable stub_status in a location block (returns active connections, accepts, handled, requests). Use nginx-prometheus-exporter sidecar to scrape stub_status and expose /metrics in Prometheus format. For richer metrics (per-upstream, per-location), use the commercial Nginx Plus API or OpenTelemetry module.

Remember: nginx config: http→server(vhost)→location(path). Directives inherit downward.
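
A minimal stub_status endpoint, restricted to local scrapes (the listener and allow list are illustrative):

```nginx
server {
    listen 127.0.0.1:8081;   # internal-only listener

    location /stub_status {
        stub_status;
        allow 127.0.0.1;     # e.g., an nginx-prometheus-exporter sidecar
        deny all;
    }
}
```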

🔴 Hard (5)

1. How do you configure Nginx proxy caching with stale content fallback?

Show answer Define proxy_cache_path with keys_zone, max_size, and inactive time. In the location block: proxy_cache my_cache, proxy_cache_valid 200 302 10m (cache successful responses for 10 minutes), proxy_cache_use_stale error timeout updating http_500 http_502 (serve stale content when backend is down). Add X-Cache-Status header for debugging cache hits vs misses.

Remember: `nginx -t` tests. `nginx -s reload` = zero-downtime. Config: /etc/nginx/nginx.conf.
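
A sketch combining the cache directives above (zone name, paths, and backend are placeholders):

```nginx
# In the http context:
proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m max_size=1g inactive=60m;

server {
    location / {
        proxy_pass http://127.0.0.1:8080;   # placeholder backend
        proxy_cache my_cache;
        proxy_cache_valid 200 302 10m;
        proxy_cache_use_stale error timeout updating http_500 http_502;
        add_header X-Cache-Status $upstream_cache_status;   # HIT / MISS / STALE
    }
}
```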

2. How does Nginx rate limiting work with burst and nodelay?

Show answer limit_req_zone defines a shared memory zone keyed by a variable (e.g., $binary_remote_addr for per-IP limits) with a rate such as 10r/s. In the location: limit_req zone=api burst=20 nodelay. burst=20 queues up to 20 requests above the rate before rejecting; nodelay processes queued burst requests immediately instead of spacing them out. Requests beyond the burst are rejected with 503 by default; set limit_req_status 429 to return 429 Too Many Requests instead.

Example: `limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;`
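
The zone from the example above is then applied in a location (the backend address is a placeholder):

```nginx
location /api/ {
    limit_req zone=one burst=20 nodelay;   # zone "one" defined by limit_req_zone
    limit_req_status 429;                  # default rejection status is 503
    proxy_pass http://127.0.0.1:8080;      # placeholder backend
}
```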

3. How do you configure Nginx to proxy WebSocket connections?

Show answer Set proxy_http_version 1.1, proxy_set_header Upgrade $http_upgrade, and proxy_set_header Connection "upgrade" in the location block. Also set proxy_read_timeout to a high value (e.g., 86400 seconds) to prevent Nginx from closing idle WebSocket connections. Without these headers, the HTTP upgrade handshake fails and WebSocket connections are rejected.

Remember: `nginx -t` tests. `nginx -s reload` = zero-downtime. Config: /etc/nginx/nginx.conf.
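
The WebSocket proxy settings in full (path and backend address are placeholders):

```nginx
location /ws/ {
    proxy_pass http://127.0.0.1:9000;         # placeholder WebSocket backend
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400s;                # keep idle sockets open for a day
}
```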

4. What is microcaching in Nginx and when should you use it?

Show answer Microcaching caches responses for very short periods (1-5 seconds) using proxy_cache_valid 200 1s. Even 1-second caching dramatically reduces backend load during traffic spikes because hundreds of concurrent requests hit the cache instead of the backend. Use proxy_cache_lock on (or fastcgi_cache_lock on for FastCGI backends) to prevent cache stampedes, where many requests try to populate the same cache entry simultaneously.

Remember: nginx = web server + reverse proxy + LB. Event-driven, 10K+ connections.
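
A microcaching sketch (zone name, cache path, and backend are placeholders):

```nginx
# In the http context:
proxy_cache_path /var/cache/nginx/micro keys_zone=micro:10m max_size=100m;

server {
    location / {
        proxy_pass http://127.0.0.1:8080;   # placeholder backend
        proxy_cache micro;
        proxy_cache_valid 200 1s;           # 1-second microcache
        proxy_cache_lock on;                # one request populates; others wait
    }
}
```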

5. How do you implement sticky sessions in Nginx without Nginx Plus?

Show answer Use ip_hash in the upstream block to route clients to the same backend based on their IP address. Limitation: all clients behind the same NAT share one backend. Alternative: use the hash directive with a cookie value (hash $cookie_jsessionid consistent) for more granular stickiness. Consistent hashing minimizes redistribution when backends are added or removed.

Remember: nginx config: http→server(vhost)→location(path). Directives inherit downward.
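
Both approaches in upstream form (server addresses and the cookie name are placeholders):

```nginx
# Option 1: IP-based stickiness
upstream app_ip {
    ip_hash;
    server 10.0.1.1:8080;
    server 10.0.1.2:8080;
}

# Option 2: cookie-based consistent hashing
upstream app_cookie {
    hash $cookie_jsessionid consistent;
    server 10.0.1.1:8080;
    server 10.0.1.2:8080;
}
```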