
AWS Lambda — Trivia & Interesting Facts

Surprising, historical, and little-known facts about AWS Lambda.


Lambda was announced at re:Invent 2014 and started the serverless movement

AWS Lambda launched in November 2014 as the first major Function-as-a-Service offering from a cloud provider. It initially supported only Node.js and had a maximum execution time of 60 seconds. The launch essentially created the "serverless" category that Google Cloud Functions and Azure Functions would later enter.


Lambda originally had a 60-second timeout; today it is 15 minutes

The maximum execution time started at 60 seconds in 2014, was extended to 5 minutes in 2016, and finally raised to 15 minutes (900 seconds) in 2018. This limit is hard — if your function runs longer than 15 minutes, it is forcibly terminated with no graceful shutdown opportunity.
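Because there is no graceful shutdown, a common defensive pattern is to check the remaining time via the handler's context object and stop work early. A minimal sketch — `get_remaining_time_in_millis()` is the real Lambda context method, but the handler shape, event fields, and safety margin here are illustrative:

```python
# Sketch: bail out of a batch loop before the hard 15-minute cutoff.
# `get_remaining_time_in_millis()` is the real Lambda context method;
# the event shape and SAFETY_MARGIN_MS value are illustrative.

SAFETY_MARGIN_MS = 10_000  # stop with 10 seconds to spare

def handler(event, context):
    processed = []
    for item in event.get("items", []):
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            break  # return partial progress instead of being killed mid-flight
        processed.append(item)
    return {"processed": processed}
```

Returning partial progress lets a retry (or a Step Functions loop) resume from where the previous invocation left off, rather than losing everything to a forced termination.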


Cold starts happen because Lambda creates a new execution environment

When Lambda has no warm container available, it must download your deployment package, start a new microVM (using Firecracker), initialize the runtime, and run your initialization code. This "cold start" can add 100ms to over 10 seconds depending on the runtime (Java and .NET tend to be slowest). Provisioned Concurrency, launched in 2019, keeps environments warm to eliminate this.
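This is why the standard mitigation is to do expensive setup (SDK clients, database connections, model loads) at module scope: it runs once during the cold start, and every warm invocation reuses it. A sketch of the pattern, with illustrative names:

```python
# Sketch of the standard cold-start mitigation: expensive setup runs
# once at module load (during the cold start), and warm invocations of
# the handler reuse it. Names here are illustrative.

INIT_COUNT = 0

def expensive_init():
    # Stands in for creating SDK clients, opening connections, loading models.
    global INIT_COUNT
    INIT_COUNT += 1
    return {"client": "ready"}

# Module scope: executed once per execution environment, not per request.
RESOURCES = expensive_init()

def handler(event, context):
    # Warm invocations reuse RESOURCES instead of re-initializing.
    return {"status": RESOURCES["client"], "inits": INIT_COUNT}
```

Calling the handler repeatedly in the same environment leaves the init count at one — the cost was paid once, at cold start.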


Lambda runs on Firecracker, an open-source microVM manager

In 2018, AWS open-sourced Firecracker, the lightweight virtual machine monitor that powers Lambda (and Fargate). Firecracker can launch a microVM in as little as 125 milliseconds and uses about 5 MB of memory per VM. It was purpose-built to replace the container-based isolation Lambda originally used.


The default concurrency limit is 1,000 per account, not per function

The default limit is 1,000 simultaneous executions per region, shared across all functions in the account. It can be raised to tens of thousands through a Service Quotas request. Without reserved concurrency set aside for critical functions, one runaway function can consume the entire account's concurrency pool.
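The pool behaves like a counting semaphore: invocations beyond the limit are throttled, and synchronous callers see a 429 `TooManyRequestsException`. A toy model of that behavior (not the AWS API — just the accounting):

```python
# Toy model of Lambda's shared concurrency pool. Invocations past the
# limit are throttled; synchronous callers receive an HTTP 429. This is
# an illustration of the accounting, not an AWS API.

class ConcurrencyPool:
    def __init__(self, limit=1000):  # 1000 mirrors the default account limit
        self.limit = limit
        self.in_flight = 0

    def try_invoke(self):
        if self.in_flight >= self.limit:
            return "429 TooManyRequestsException"  # throttled
        self.in_flight += 1
        return "202 Accepted"

    def finish(self):
        self.in_flight -= 1  # slot freed when an execution completes

pool = ConcurrencyPool()
results = [pool.try_invoke() for _ in range(1001)]  # 1001st is throttled
```

Reserved concurrency effectively partitions this pool, guaranteeing one function a slice while capping how much of the shared pool it can take.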


Lambda@Edge runs your code at CloudFront edge locations worldwide

Introduced in 2017, Lambda@Edge lets you execute Lambda functions close to viewers worldwide: CloudFront replicates your code globally and runs it at AWS locations near the requesting user. Functions can modify HTTP requests and responses at four points in the CloudFront lifecycle (viewer request, origin request, origin response, viewer response). The tradeoff: viewer-triggered functions are limited to 5 seconds, while origin-triggered functions can run for up to 30 seconds.


The maximum deployment package size is 250 MB unzipped

Lambda imposes a 50 MB limit on zipped deployment packages uploaded directly, or 250 MB unzipped (including layers). Container image support, added in 2020, raised this to 10 GB, enabling workloads like machine learning inference that need large model files.
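Checking a build directory against the unzipped limit before deploying is a one-liner's worth of work. A minimal sketch — the limit constant comes from the figures above; the directory layout is whatever your build tool produces:

```python
# Sketch: check a build directory against Lambda's 250 MB unzipped
# deployment limit (which includes layers). The path you pass is your
# build output directory.
import os

UNZIPPED_LIMIT = 250 * 1024 * 1024  # 250 MB, zip-based deployments

def unzipped_size(path):
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def fits_zip_deploy(path):
    return unzipped_size(path) <= UNZIPPED_LIMIT
```

If the check fails, the usual escape hatches are trimming dependencies, moving shared libraries into layers, or switching to a container image deployment with its 10 GB ceiling.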


Lambda Layers were introduced to solve the dependency duplication problem

Before Lambda Layers (launched in 2018), every function that needed a shared library had to bundle it independently. A function can attach up to five layers, each packaging shared code or data that is mounted under /opt in the execution environment. This reduces deployment size and lets common dependencies like numpy or boto3 be maintained once and shared across functions.


Lambda processes over 10 trillion invocations per month

By 2022, AWS reported that Lambda was processing more than 10 trillion function invocations per month across all customers. The service had grown from zero to one of the highest-traffic services in AWS in under eight years, driven by event-driven architectures and API Gateway integrations.


You can run Lambda functions for $0.0000002 per request

Lambda pricing has two components: $0.20 per million requests and $0.0000166667 per GB-second of compute. A function with 128 MB of memory running for 100ms uses 0.0125 GB-seconds, adding roughly $0.0000002 of compute on top of the $0.0000002 request charge — about $0.0000004 per invocation in total. The free tier includes 1 million requests and 400,000 GB-seconds per month, and it never expires.
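The per-invocation math can be sketched directly from those two rates. A minimal calculator using the prices quoted above, ignoring the free tier and any regional or architecture-specific price differences:

```python
# Sketch of Lambda's on-demand cost math using the rates quoted above:
# $0.20 per million requests plus $0.0000166667 per GB-second of compute.
# Ignores the free tier and regional price differences.

REQUEST_PRICE = 0.20 / 1_000_000   # dollars per request
GB_SECOND_PRICE = 0.0000166667     # dollars per GB-second

def invocation_cost(memory_mb, duration_ms):
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return REQUEST_PRICE + gb_seconds * GB_SECOND_PRICE

# 128 MB for 100 ms: ~$0.0000002 request + ~$0.0000002 compute
cost = invocation_cost(128, 100)
```

Note that billing scales linearly with both memory and duration, so doubling memory at the same duration doubles the compute component — which is why right-sizing memory is the main Lambda cost-tuning lever.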