# Nginx & Web Servers Footguns
- Using `if` inside a `location` block for conditional logic. The `if` directive in Nginx creates an implicit nested location. Directives from the parent location may not apply, causing `proxy_pass` to fail, headers to vanish, or requests to be misrouted. Fix: Use `map` to set variables based on conditions, and `try_files` or `return` to act on them. `if` is safe only with `return` and `rewrite` in `server` context. Read the wiki page: "If Is Evil."
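  A minimal sketch of the safe pattern (variable and backend names hypothetical): the condition is evaluated once by `map` at the `http` level, and the location acts on the resulting variable with `return`, which is one of the safe uses of `if`:

  ```nginx
  # http-level: compute a variable with map instead of branching in a location
  map $http_user_agent $is_blocked {
      default      0;
      "~*crawler"  1;
  }

  upstream backend {
      server 127.0.0.1:8080;
  }

  server {
      listen 80;

      location / {
          # Acting on the map result with return is safe;
          # putting proxy_pass or add_header inside if is not.
          if ($is_blocked) {
              return 403;
          }
          proxy_pass http://backend;
      }
  }
  ```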
- Getting the `proxy_pass` trailing slash wrong. `proxy_pass http://backend` and `proxy_pass http://backend/` behave completely differently. Without the trailing slash, Nginx forwards the full URI. With it, Nginx strips the location prefix. One misplaced character reroutes all traffic. Fix: Test your `proxy_pass` behavior explicitly with `curl -v` against the backend. Document the expected path transformation in a comment above the directive. Gotcha: The "off-by-slash" vulnerability (popularized by Orange Tsai at Black Hat) exploits exactly this confusion. A missing trailing slash in the `location` combined with a trailing slash in `proxy_pass` can enable path traversal: `GET /api../internal/` resolves to the internal endpoint. This is a common finding in security audits.
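  The behaviors side by side, as a sketch (backend address and paths hypothetical):

  ```nginx
  upstream backend {
      server 127.0.0.1:8080;
  }

  server {
      listen 80;

      # No trailing slash: the full URI is forwarded unchanged.
      # GET /api/users  ->  backend receives /api/users
      location /api/ {
          proxy_pass http://backend;
      }

      # Trailing slash: the location prefix is replaced by the proxy_pass URI.
      # GET /app/users  ->  backend receives /users
      location /app/ {
          proxy_pass http://backend/;
      }

      # Off-by-slash hazard: prefix without a trailing slash plus a
      # proxy_pass URI with one. GET /img../secret is matched by the
      # /img prefix, and ../secret is appended to /static/, resolving
      # to /secret on the backend.
      location /img {
          proxy_pass http://backend/static/;
      }
  }
  ```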
- Skipping `nginx -t` before reloading. You edit the config and run `systemctl reload nginx`. The config has a syntax error. On some setups, the old config keeps running (safe). On others, Nginx fails to reload and subsequent restarts break. Fix: Always run `nginx -t` before `nginx -s reload`. Script it: `nginx -t && nginx -s reload`. Put it in your deployment pipeline.
- Not understanding location matching order. You add a prefix location for `/static/` but a regex location `~ \.(jpg|png)$` is matching those requests instead. You add more locations trying to override it, creating a maze nobody can follow. Fix: Use `^~` on prefix locations to prevent regex override (`location ^~ /static/`). Use `= /path` for exact matches. Draw out the matching order on paper for complex configs.
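  The precedence rules in one config, sketched with hypothetical paths:

  ```nginx
  server {
      listen 80;
      root /var/www/html;

      # 1. Exact match wins immediately; nothing else is checked.
      location = /healthz {
          return 200 "ok\n";
      }

      # 2. ^~ prefix match suppresses the regex pass for this subtree,
      #    so the regex below never sees /static/logo.png.
      location ^~ /static/ {
          expires 30d;
      }

      # 3. Regex locations are tried in file order, and beat plain prefixes.
      location ~ \.(jpg|png)$ {
          expires 7d;
      }

      # 4. Longest plain prefix is the fallback.
      location / {
          try_files $uri $uri/ =404;
      }
  }
  ```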
- Forgetting that `add_header` does not inherit into child blocks. You set security headers at the server level. Then you add `add_header X-Custom "value"` in a location block. All your security headers vanish from responses for that location because `add_header` in a child completely replaces the parent's headers. Fix: Repeat all `add_header` directives in every location block that adds any, or use the `ngx_headers_more` module (`more_set_headers`), which does not have this behavior. Debug clue: If security headers (CSP, HSTS, X-Frame-Options) suddenly disappear from some responses, check whether any `location` block adds its own `add_header`. Use `curl -I https://yoursite/path` to inspect headers per-path. This is one of the most common Nginx security misconfigurations found in pen tests.
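  The inheritance trap in config form (header values illustrative):

  ```nginx
  server {
      listen 443 ssl;

      # Server-level security headers...
      add_header Strict-Transport-Security "max-age=31536000" always;
      add_header X-Frame-Options "DENY" always;

      location / {
          # No add_header here, so the server-level headers still apply.
          try_files $uri $uri/ =404;
      }

      location /downloads/ {
          # This single add_header discards BOTH server-level headers
          # for this location, so they must be repeated to survive.
          add_header X-Custom "value";
          add_header Strict-Transport-Security "max-age=31536000" always;
          add_header X-Frame-Options "DENY" always;
      }
  }
  ```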
- Setting `client_max_body_size` too low (or not at all). The default is 1MB. Users trying to upload files get a cryptic 413 "Request Entity Too Large" error. You search the backend logs and find nothing because Nginx rejected it before the request reached the backend. Fix: Set `client_max_body_size` appropriately in the server or location block (e.g., `client_max_body_size 50m;` for file upload endpoints). For APIs with known payload limits, set it precisely. Default trap: The default `client_max_body_size` is `1m` (1 megabyte). This catches almost every file upload feature, large JSON API payload, and multipart form submission. The 413 error appears in the Nginx error log, not the backend's, so backend developers searching their own logs find nothing.
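  Scoping the limit per endpoint, a sketch (sizes, paths, and backend address hypothetical):

  ```nginx
  server {
      listen 80;

      # Keep the global limit modest for ordinary traffic.
      client_max_body_size 2m;

      location /api/ {
          proxy_pass http://127.0.0.1:8080;
      }

      # Raise the limit only where large bodies are expected.
      location /api/uploads/ {
          client_max_body_size 50m;
          proxy_pass http://127.0.0.1:8080;
      }
  }
  ```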
- Using `restart` instead of `reload` in production. `systemctl restart nginx` kills all active connections. Users experience dropped downloads, broken WebSockets, and interrupted API calls. Meanwhile, `reload` gracefully transitions to the new config with zero downtime. Fix: Always use `nginx -s reload` or `systemctl reload nginx` in production. Reserve `restart` for cases where the binary itself has been upgraded or listen sockets have changed.
- Ignoring upstream keepalive connections. Without keepalive, Nginx opens a new TCP connection to the backend for every request. On high-traffic sites, this means thousands of connection setups per second, wasting time and file descriptors. Fix: Configure `keepalive` in the upstream block and set `proxy_http_version 1.1` and `proxy_set_header Connection ""` in the location block. Start with `keepalive 32` and adjust based on traffic. Under the hood: Without keepalive, every request costs a TCP handshake (~1ms on a LAN, 50-200ms cross-region). At 1000 req/sec, that is 1000 new connections/sec consuming ephemeral ports and file descriptors. With keepalive, the same connections are reused. Monitor `ss -s` to see connection churn.
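  A keepalive-enabled proxy setup, sketched (upstream address hypothetical):

  ```nginx
  upstream backend {
      server 127.0.0.1:8080;
      # Idle connections kept open to the upstream, per worker process.
      keepalive 32;
  }

  server {
      listen 80;

      location / {
          proxy_pass http://backend;
          # Both directives are required for keepalive to take effect:
          # connection reuse needs HTTP/1.1, and the Connection header
          # must be cleared so a client's "close" is not forwarded upstream.
          proxy_http_version 1.1;
          proxy_set_header Connection "";
      }
  }
  ```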
- Proxy buffer sizes too small for the application. The backend sends large headers (big cookies, JWT tokens, long redirect URLs). Nginx logs "upstream sent too big header while reading response header" and returns 502. The backend is fine; Nginx simply cannot hold the response. Fix: Increase `proxy_buffer_size` (for headers) and `proxy_buffers` (for body). Check error logs for "too big header" messages. Typical safe values: `proxy_buffer_size 16k; proxy_buffers 4 32k;`.
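  The values from the text above, applied in a location block (backend address hypothetical):

  ```nginx
  location / {
      proxy_pass http://127.0.0.1:8080;

      # Buffer for the response status line and headers; must fit the
      # largest cookie/JWT-bearing header block the backend emits.
      proxy_buffer_size 16k;

      # Buffers for the response body: 4 buffers of 32k each, per connection.
      proxy_buffers 4 32k;
  }
  ```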
- Running Nginx without monitoring connection and error metrics. You have no visibility into active connections, request rates, or error rates. When performance degrades, you have no historical data to correlate. You are flying blind. Fix: Enable the `stub_status` module (`location /nginx_status { stub_status; allow 127.0.0.1; deny all; }`). Scrape it with Prometheus (nginx-exporter) or feed it to your monitoring stack. Monitor at minimum: active connections, requests/sec, 4xx rate, 5xx rate.
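  For reference, `stub_status` returns a small plain-text report; fetched with `curl http://127.0.0.1/nginx_status`, it looks like this (numbers illustrative):

  ```text
  Active connections: 291
  server accepts handled requests
   16630948 16630948 31070465
  Reading: 6 Writing: 179 Waiting: 106
  ```

  The `accepts` and `handled` counters should match; a persistent gap means connections are being dropped before a worker can handle them.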