Synthetic Monitoring — Trivia & Interesting Facts¶
Surprising, historical, and little-known facts about synthetic monitoring.
Synthetic monitoring predates real-user monitoring by over a decade¶
The first synthetic monitoring services (Keynote Systems, founded 1995; Gomez, founded 1997) used scripted transactions to test website availability years before Real User Monitoring (RUM) was technically feasible. RUM required JavaScript injection capabilities that didn't mature until the mid-2000s. Synthetic monitoring still exists because it detects problems before any real user is affected — it's proactive, while RUM is reactive.
Pingdom was one of the first affordable synthetic monitoring tools¶
Pingdom, founded in Sweden in 2007, democratized uptime monitoring by offering affordable HTTP checks that previously required expensive enterprise tools. It was acquired by SolarWinds in 2014 for $18 million. Before Pingdom, small companies often had no external monitoring — they learned about outages when customers called. Pingdom's simple model (check a URL every minute, alert if it's down) established the category for small and medium businesses.
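The check-and-alert model described above can be sketched in a few lines. This is a minimal illustration, not Pingdom's actual implementation; the 3-failure alert threshold and 10-second timeout are illustrative choices (real tools debounce like this to avoid alerting on a single flaky check).

```python
# A minimal sketch of the Pingdom-style model: fetch a URL, record
# up/down, and alert only after several consecutive failures.
# Threshold and timeout values are illustrative assumptions.
import urllib.error
import urllib.request


def check_url(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with an HTTP 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False


def should_alert(history: list[bool], threshold: int = 3) -> bool:
    """Alert once the last `threshold` checks have all failed."""
    return len(history) >= threshold and not any(history[-threshold:])
```

A scheduler would call `check_url` once a minute per target, append the result to `history`, and page someone when `should_alert` flips to `True`.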
Synthetic monitoring from multiple locations can detect CDN and DNS problems invisible to internal monitoring¶
A synthetic check running from Tokyo might see a completely different infrastructure path than one running from New York: different CDN edge nodes, different DNS resolvers, different ISP paths. Internal monitoring sees only the view from inside the network. Many major outages (like CDN misconfigurations or regional DNS failures) are invisible to internal monitoring but immediately visible to geographically distributed synthetic checks.
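The aggregation logic behind this idea is simple: a failure seen from some vantage points while others succeed is the signature of a CDN-edge or regional-DNS problem rather than a full outage. A sketch, with location names and the result format chosen for illustration:

```python
# Flag locations that failed while others succeeded — the pattern
# internal monitoring cannot see, since it has only one vantage point.
def regional_failures(results: dict[str, bool]) -> list[str]:
    """Given per-location pass/fail results, return the locations that
    failed while at least one other location passed."""
    if all(results.values()) or not any(results.values()):
        return []  # globally up, or globally down: not a regional issue
    return sorted(loc for loc, ok in results.items() if not ok)
```

For example, `regional_failures({"tokyo": False, "new-york": True, "frankfurt": True})` returns `["tokyo"]`, pointing at a problem on the Tokyo path (edge node, resolver, or ISP) rather than at the origin.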
Browser-based synthetic monitoring uses real headless Chrome instances¶
Modern synthetic monitoring tools (Datadog Synthetics, New Relic, Checkly) run real headless Chromium browsers in cloud infrastructure to execute test scripts. Each test launches a full browser instance, navigates pages, clicks buttons, fills forms, and measures rendering performance — the same way a real user would interact. This is dramatically more realistic than simple HTTP checks but consumes 100x more resources per test.
The "waterfall" chart for web performance was popularized by WebPageTest¶
WebPageTest, created by Patrick Meenan at AOL in 2008, introduced the waterfall chart showing every resource load in sequence with timing breakdowns (DNS, connect, TLS, TTFB, content download). The visualization was so intuitive that every synthetic monitoring tool adopted it. Meenan open-sourced WebPageTest, and it remains the gold standard for web performance testing. Google later hired Meenan to work on Chrome's loading performance.
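The phase breakdown in a waterfall row is just differences between the absolute timestamps a probe records for one resource. A sketch of that derivation, where the event names mirror the classic breakdown but are illustrative rather than any tool's actual API:

```python
# Convert absolute event timestamps (ms since navigation start) for a
# single resource into the per-phase durations a waterfall row shows.
# The event names are assumptions for illustration.
def waterfall_phases(t: dict[str, float]) -> dict[str, float]:
    return {
        "dns":      t["dns_end"] - t["start"],
        "connect":  t["connect_end"] - t["dns_end"],
        "tls":      t["tls_end"] - t["connect_end"],
        "ttfb":     t["first_byte"] - t["tls_end"],
        "download": t["end"] - t["first_byte"],
    }
```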
Synthetic monitoring catches "silent failures" that real users might not report¶
Some failures don't generate errors — they just return stale data, wrong results, or degraded functionality that users might not notice or might not bother reporting. A synthetic test that checks the actual content of a response (not just the HTTP status code) catches these silent failures. A classic example: an API returning cached data from 3 hours ago with a 200 status code — technically "up" but functionally broken.
Canary deployments get their name from coal mine canaries, not the Canary Islands¶
The practice of routing a small percentage of traffic to a new deployment (to detect problems before full rollout) is called "canary deployment" — named after the practice of bringing canaries into coal mines to detect toxic gases. The birds were more sensitive to carbon monoxide and would die before gas reached lethal levels for humans. Synthetic tests against canary deployments serve the same purpose: detecting problems before they affect real users.
Synthetic monitoring costs can escalate rapidly with complex multi-step tests¶
A simple HTTP uptime check running every minute from 10 locations costs very little. But a 15-step browser-based transaction test (login, navigate, search, add to cart, checkout) running every 5 minutes from 20 locations generates 5,760 test executions per day. At enterprise pricing, complex synthetic test suites can cost $10,000-50,000+ per month. This cost pressure drives teams to carefully select which user journeys to monitor synthetically.
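The execution-count arithmetic above is multiplicative in check frequency and location count, which is why costs escalate so quickly:

```python
# Runs per day for one synthetic test scale with frequency * locations.
def executions_per_day(interval_minutes: int, locations: int) -> int:
    return (24 * 60 // interval_minutes) * locations
```

The 15-step journey from the text works out as `executions_per_day(5, 20)`, i.e. 288 runs per location per day across 20 locations, giving the 5,760 daily executions quoted, each a full browser session.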
The "first byte time" metric from synthetic tests is one of the best server health indicators¶
Time to First Byte (TTFB), measured by synthetic HTTP checks, captures the full path from request to the first byte of the response: network transit, TLS handshake, request queuing, application processing, and response generation. A TTFB spike often indicates server problems (CPU saturation, database latency, memory pressure) before other metrics show it. Many teams set alerts on TTFB percentile changes rather than absolute thresholds.
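Alerting on percentile *change* rather than an absolute value can be sketched as comparing a recent window's p95 against a baseline window. The 1.5x regression ratio is an illustrative choice, not a recommendation:

```python
# Compare recent p95 TTFB against a baseline window and alert on
# relative regression. The 1.5x ratio is an illustrative assumption.
import statistics


def p95(samples: list[float]) -> float:
    """95th-percentile TTFB of a window of samples (needs >= 2 samples)."""
    return statistics.quantiles(samples, n=20)[-1]


def ttfb_regressed(baseline: list[float], recent: list[float],
                   ratio: float = 1.5) -> bool:
    return p95(recent) > ratio * p95(baseline)
```

A relative rule like this adapts to each endpoint's normal latency, whereas a single absolute threshold is either too noisy for slow endpoints or too blind for fast ones.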
Synthetic monitoring helped discover that ISPs were throttling Netflix traffic¶
In the early 2010s, synthetic monitoring from various ISP networks revealed dramatic performance differences in Netflix streaming quality across providers. Tests from Comcast and Verizon networks showed significantly worse Netflix performance than tests from other ISPs, providing evidence of throttling. This data became part of the net neutrality debate and helped Netflix make the case for direct peering agreements with ISPs.