nginx Web Servers — Trivia & Interesting Facts¶
Surprising, historical, and little-known facts about nginx and web server technology.
nginx was created to solve the C10K problem¶
Igor Sysoev started developing nginx in 2002 specifically to handle 10,000 concurrent connections on a single server — the "C10K problem" described by Dan Kegel in 1999. Apache's process-per-connection (and later thread-per-connection) model consumed too much memory at scale. nginx used an event-driven, asynchronous architecture that could handle thousands of connections in a single worker process, using a fraction of the memory.
nginx is pronounced "engine-x," not "N-G-I-N-X"¶
Despite the lowercase spelling, nginx is pronounced as two words: "engine X." Igor Sysoev chose the name as a play on "engine" reflecting its role as a web engine. The spelling has confused countless engineers, job interviewees, and podcast hosts. The official documentation consistently uses the lowercase "nginx" spelling.
nginx passed Apache as the most-used web server around 2019¶
Apache httpd dominated the web server market for over 20 years (from 1996 to the late 2010s). nginx gradually overtook it, and by the early 2020s, nginx served more active websites than Apache according to multiple surveys including W3Techs and Netcraft. The shift was driven by nginx's superior performance for static content, its effectiveness as a reverse proxy, and the rise of cloud-native architectures where nginx was the default choice.
The nginx config file syntax was designed by one person and has no formal grammar¶
Unlike Apache's XML-inspired configuration or HAProxy's keyword-based format, nginx's configuration syntax was designed entirely by Igor Sysoev. It uses a C-like block structure with braces, but it has no formal specification or grammar definition. The parser is hand-written C code, and edge cases in the syntax are documented primarily through trial, error, and Stack Overflow answers. This informality sometimes produces surprising behavior with whitespace and quoting.
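As a small illustration of that informality, here is a sketch of the C-like block syntax, including one genuinely surprising quoting rule (the hostname is hypothetical):

```nginx
# Directives end with semicolons; contexts nest with braces;
# values are bare space-separated tokens unless quoting is required.
http {
    server {
        listen 80;
        server_name example.com;   # hypothetical hostname

        # Quirk: a regex containing "{" must be quoted, because an
        # unquoted "{" would be parsed as the start of a block.
        location ~ "^/items/\d{3}$" {
            return 200 "matched\n";
        }
    }
}
```

Remove the quotes around that regex and the parser rejects the file — exactly the kind of edge case that is learned by trial and error rather than from a grammar.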
The "location" block matching order is one of nginx's most misunderstood features¶
nginx evaluates location blocks in a specific, non-obvious order: exact matches (=) win immediately; otherwise the longest matching prefix is remembered; then regular expressions are tried in the order they appear; only if no regex matches is the remembered prefix used. The fact that regex locations override prefix locations (unless the prefix uses the ^~ modifier) has caused countless misconfigurations. The nginx documentation's location matching algorithm section is one of the most-read pages in all of web server documentation.
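The algorithm above can be sketched in a minimal config (paths and responses are hypothetical, chosen only to make each rule visible):

```nginx
server {
    listen 80;

    # 1. Exact match wins immediately.
    location = /status { return 200 "exact\n"; }

    # 2. ^~ prefix: if this is the longest matching prefix,
    #    the regex phase is skipped entirely.
    location ^~ /images/ { return 200 "prefix, regexes skipped\n"; }

    # 3. Regexes are tried in order of appearance...
    location ~ \.png$ { return 200 "regex\n"; }

    # 4. ...and only if none match does the longest plain prefix apply.
    location /legacy/ { return 200 "plain prefix\n"; }
}
```

With this config, a request for /images/logo.png returns "prefix, regexes skipped" (the ^~ modifier suppresses the regex), while /legacy/logo.png returns "regex" — the regex silently overrides the plain prefix, which is the classic surprise.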
nginx's "try_files" directive replaced an entire class of rewrite rules¶
Before try_files was introduced in nginx 0.7.27 (2008), serving an SPA (single-page application) required complex rewrite rules to check if a file existed and fall back to index.html. The try_files $uri $uri/ /index.html pattern became so ubiquitous that it's now cargo-culted into nearly every nginx configuration for modern web applications, often without the operator understanding what each argument does.
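For readers who have copied that line without unpacking it, here is the pattern annotated argument by argument (the root path is hypothetical):

```nginx
server {
    listen 80;
    root /var/www/app;   # hypothetical document root

    location / {
        # $uri         -> serve the request path as a file, if it exists
        # $uri/        -> else treat it as a directory (serve its index file)
        # /index.html  -> else internally rewrite to the SPA entry point,
        #                 letting the client-side router handle the path
        try_files $uri $uri/ /index.html;
    }
}
```

The last argument is a fallback, not a file check: nginx performs an internal rewrite to it unconditionally if the earlier arguments fail, which is what makes deep links into a single-page app work.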
F5 Networks acquired nginx for $670 million in 2019¶
F5 Networks (maker of the BIG-IP hardware load balancer) acquired NGINX, Inc. in March 2019 for $670 million. This was remarkable because nginx started as a side project by a single Russian developer. The acquisition highlighted how an open-source project could become critical Internet infrastructure — at the time of acquisition, nginx powered about a third of all websites globally.
nginx as a reverse proxy created the modern microservices gateway pattern¶
While nginx was originally a web server, its reverse proxy capabilities (proxy_pass) made it the default API gateway and load balancer for microservice architectures. The pattern of nginx sitting in front of application servers (Node.js, Python, Go) handling TLS termination, static file serving, rate limiting, and request routing became so standard that nginx essentially defined the "sidecar proxy" pattern before service meshes formalized it.
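A minimal sketch of that front-proxy pattern — TLS termination, static files served directly, and everything else routed to an application server (hostnames, paths, and the backend port are all hypothetical):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;                 # hypothetical hostname
    ssl_certificate     /etc/ssl/app.pem;        # hypothetical cert paths
    ssl_certificate_key /etc/ssl/app.key;

    # Static assets never touch the app server.
    location /static/ {
        root /var/www/app;
    }

    # Everything else is proxied to the application process.
    location / {
        proxy_pass http://127.0.0.1:3000;        # hypothetical Node/Python/Go app
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The application server behind this only ever sees plain HTTP on localhost; TLS, static content, and client-address bookkeeping are all handled in front of it.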
Worker process tuning follows a simple rule: one worker per CPU core¶
nginx's recommended configuration is one worker process per CPU core (worker_processes auto; since nginx 1.3.8). Each worker handles thousands of connections using epoll (Linux) or kqueue (BSD). Adding more workers than CPU cores typically degrades performance due to context switching. This simplicity is one of nginx's operational advantages — unlike Apache's complex MPM tuning (prefork vs. worker vs. event), nginx's process model has essentially one knob.
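The "one knob" in practice looks like this (the connection count is a typical illustrative value, not an official default):

```nginx
# One worker per CPU core; nginx detects the core count itself.
worker_processes auto;           # available since nginx 1.3.8

events {
    # Maximum simultaneous connections per worker, multiplexed
    # via epoll (Linux) or kqueue (BSD) in a single event loop.
    worker_connections 4096;
}
```

Rough capacity is worker_processes × worker_connections simultaneous connections — and roughly half that when proxying, since each client connection is paired with an upstream connection.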
The "upstream" block's default load balancing is unweighted round-robin¶
When you define an upstream block with multiple servers, nginx uses round-robin by default with no health checking. This means nginx will continue sending traffic to a dead backend until the connection times out. Active health checks (the health_check directive) are only available in nginx Plus (the commercial version). The open-source version relies on passive health checking — a backend is marked unavailable only after max_fails failed requests within fail_timeout, with proxy_next_upstream defining which errors count as failures — which is reactive rather than proactive.
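A sketch of that passive setup, with the open-source failure-detection knobs spelled out (backend addresses are hypothetical):

```nginx
upstream backend {
    # Unweighted round-robin by default. max_fails/fail_timeout are the
    # passive health knobs: after 3 failed requests within 30s, the
    # server is skipped for the next 30s, then tried again.
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # Which failures count against a backend and cause the
        # request to be retried on the next server in rotation.
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

Note this is still reactive: a real client request has to fail before a backend is marked down, which is exactly the gap the commercial active health checks close.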