
Nginx & Web Servers Footguns

  1. Using if inside a location block for conditional logic. The if directive in Nginx creates an implicit nested location, so directives from the enclosing location may not apply: proxy_pass can fail, headers can vanish, and requests can be misrouted. Fix: Use map to set variables based on conditions, and act on them with try_files or return. Inside a location block, if is safe only with return ... and rewrite ... last. Read the wiki page: "If Is Evil."
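    The map-based pattern can be sketched like this (the upstream names and the User-Agent condition are illustrative):

    ```nginx
    # map is evaluated per request at the http level and sets $backend,
    # so no if is needed inside the location block.
    map $http_user_agent $backend {
        default   web_pool;      # assumed upstream block
        ~*mobile  mobile_pool;   # assumed upstream block
    }

    server {
        listen 80;

        location / {
            # The variable chosen by map drives the routing decision.
            proxy_pass http://$backend;
        }
    }
    ```

    Note that when proxy_pass contains a variable, Nginx resolves it at request time, so the names must match defined upstream blocks (or a resolver must be configured).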

  2. Getting the proxy_pass trailing slash wrong. proxy_pass http://backend and proxy_pass http://backend/ behave completely differently. Without the trailing slash, Nginx forwards the full URI. With it, Nginx strips the location prefix. One misplaced character reroutes all traffic. Fix: Test the path transformation explicitly with curl -v through Nginx and confirm what the backend actually receives. Document the expected transformation in a comment above the directive.

    Gotcha: The "off-by-slash" vulnerability (popularized by Orange Tsai at Black Hat) exploits exactly this confusion. A missing trailing slash in the location combined with a trailing slash in proxy_pass can enable path traversal: GET /api../internal/ resolves to the internal endpoint. This is a common finding in security audits.
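    The two behaviors side by side, with the path transformation documented in comments as the fix suggests (backend addresses are placeholders):

    ```nginx
    # Request: GET /api/users
    location /api/ {
        # No trailing slash on proxy_pass: the backend receives
        # the full original URI, /api/users
        proxy_pass http://127.0.0.1:8080;
    }

    # Request: GET /app/users
    location /app/ {
        # Trailing slash: the matched location prefix /app/ is
        # replaced, so the backend receives /users
        proxy_pass http://127.0.0.1:8080/;
    }
    ```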

  3. Skipping nginx -t before reloading. You edit the config and run systemctl reload nginx. The config has a syntax error. On some setups, the old config keeps running (safe). On others, Nginx fails to reload and subsequent restarts break. Fix: Always run nginx -t before nginx -s reload. Script it: nginx -t && nginx -s reload. Put it in your deployment pipeline.
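    A minimal sketch of the deploy-time guard (service name and init system vary by distribution):

    ```shell
    #!/bin/sh
    set -eu
    # nginx -t exits non-zero on any syntax error, so with set -e
    # the reload below never runs against a broken config.
    nginx -t
    systemctl reload nginx
    ```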

  4. Not understanding location matching order. You add a prefix location for /static/ but a regex location ~ \.(jpg|png)$ is matching those requests instead. You add more locations trying to override it, creating a maze nobody can follow. Fix: Use ^~ on prefix locations to prevent regex override (location ^~ /static/). Use = /path for exact matches. Draw out the matching order on paper for complex configs.
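    The matching order can be sketched in one server block (paths and handlers are illustrative):

    ```nginx
    server {
        # 1. Exact match: checked first, wins immediately.
        location = /healthz { return 200; }

        # 2. ^~ on the longest matching prefix suppresses the regex
        #    phase, so /static/logo.jpg is served here, not below.
        location ^~ /static/ { root /var/www; }

        # 3. Regex locations: checked in file order, first match wins.
        location ~* \.(jpg|png)$ { expires 30d; root /var/www; }

        # 4. Plain prefix match: the fallback when nothing else wins.
        location / { proxy_pass http://127.0.0.1:8080; }
    }
    ```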

  5. Forgetting that add_header does not inherit into child blocks. You set security headers at the server level. Then you add add_header X-Custom "value" in a location block. All your security headers vanish from responses for that location because add_header in a child completely replaces the parent's headers. Fix: Repeat all add_header directives in every location block that adds any, or use the ngx_headers_more module (more_set_headers), which does not have this behavior.

    Debug clue: If security headers (CSP, HSTS, X-Frame-Options) suddenly disappear from some responses, check whether any location block adds its own add_header. Use curl -I https://yoursite/path to inspect headers per-path. This is one of the most common Nginx security misconfigurations found in pen tests.
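    The inheritance trap and the repeat-everything fix, sketched (header values are examples):

    ```nginx
    server {
        add_header X-Frame-Options "DENY" always;
        add_header Strict-Transport-Security "max-age=31536000" always;

        location /app/ {
            # Because this block has its own add_header, it inherits
            # NOTHING from the server level — both headers above must
            # be repeated here or /app/ responses silently lose them.
            add_header X-Frame-Options "DENY" always;
            add_header Strict-Transport-Security "max-age=31536000" always;
            add_header X-Custom "value" always;
        }
    }
    ```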

  6. Setting client_max_body_size too low (or not at all). The default is 1MB. Users trying to upload files get a cryptic 413 "Request Entity Too Large" error. You search the backend logs and find nothing because Nginx rejected it before the request reached the backend. Fix: Set client_max_body_size appropriately in the server or location block (e.g., client_max_body_size 50m; for file upload endpoints). For APIs with known payload limits, set it precisely.

    Default trap: The default client_max_body_size is 1m (1 megabyte). This catches almost every file upload feature, large JSON API payload, and multipart form submission. The 413 error appears in the Nginx error log, not the backend — so backend developers searching their own logs find nothing.
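    Scoping the limit per location keeps the attack surface small (sizes and paths are illustrative):

    ```nginx
    server {
        # Conservative ceiling for the whole vhost.
        client_max_body_size 2m;

        location /upload/ {
            # Raise the limit only where large bodies are expected;
            # anything larger is rejected with 413 by Nginx itself,
            # before the request ever reaches the backend.
            client_max_body_size 50m;
            proxy_pass http://127.0.0.1:8080;
        }
    }
    ```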

  7. Using restart instead of reload in production. systemctl restart nginx kills all active connections. Users experience dropped downloads, broken WebSockets, and interrupted API calls. Meanwhile, reload gracefully transitions to the new config with zero downtime. Fix: Always use nginx -s reload or systemctl reload nginx in production. Reserve restart for cases where the binary itself has been upgraded or listen sockets have changed.

  8. Ignoring upstream keepalive connections. Without keepalive, Nginx opens a new TCP connection to the backend for every request. On high-traffic sites, this means thousands of connection setups per second, wasting time and file descriptors. Fix: Configure keepalive in the upstream block and set proxy_http_version 1.1 and proxy_set_header Connection "" in the location block. Start with keepalive 32 and adjust based on traffic.

    Under the hood: Without keepalive, every request costs a TCP handshake (around a millisecond on a LAN, tens to hundreds of milliseconds across regions). At 1000 req/sec, that is 1000 new connections/sec consuming ephemeral ports and file descriptors. With keepalive, the same connections are reused. Monitor ss -s to see connection churn.
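    All three pieces of the fix belong together (the backend address is a placeholder):

    ```nginx
    upstream backend {
        server 127.0.0.1:8080;
        # Keep up to 32 idle connections to this upstream open
        # per worker process, ready for reuse.
        keepalive 32;
    }

    server {
        location / {
            proxy_pass http://backend;
            # Upstream keepalive requires HTTP/1.1 and clearing the
            # Connection header (the proxied default is "close").
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
    ```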

  9. Proxy buffer sizes too small for the application. The backend sends large headers (big cookies, JWT tokens, long redirect URLs). Nginx logs "upstream sent too big header while reading response header" and returns 502. The backend is fine — Nginx cannot hold the response. Fix: Increase proxy_buffer_size (for headers) and proxy_buffers (for body). Check error logs for "too big header" messages. Typical safe values: proxy_buffer_size 16k; proxy_buffers 4 32k;.
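    The typical safe values from the fix, in place (tune to the largest header your backend actually emits):

    ```nginx
    location / {
        proxy_pass http://127.0.0.1:8080;
        # Buffer for the response header alone — must fit the
        # largest Set-Cookie / JWT-bearing header the backend sends,
        # or Nginx returns 502 with "upstream sent too big header".
        proxy_buffer_size 16k;
        # Buffers for the response body: 4 buffers of 32k each.
        proxy_buffers 4 32k;
    }
    ```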

  10. Running Nginx without monitoring connection and error metrics. You have no visibility into active connections, request rates, or error rates. When performance degrades, you have no historical data to correlate. You are flying blind. Fix: Enable the stub_status module (location /nginx_status { stub_status; allow 127.0.0.1; deny all; }). Scrape it with Prometheus (nginx-exporter) or feed it to your monitoring stack. Monitor at minimum: active connections, requests/sec, 4xx rate, 5xx rate.
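    The stub_status endpoint from the fix, expanded into a dedicated local-only server block (the listen port is an assumption):

    ```nginx
    server {
        # Bind to loopback so the metrics endpoint is never
        # reachable from outside the host.
        listen 127.0.0.1:8081;

        location /nginx_status {
            stub_status;        # active connections, accepts, handled, requests
            allow 127.0.0.1;    # local scrapers (e.g. nginx-exporter) only
            deny all;
        }
    }
    ```

    curl http://127.0.0.1:8081/nginx_status then returns the counters (active, accepted, handled, requests, reading/writing/waiting) that an exporter turns into rates.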