# Terraform — Trivia & Interesting Facts
Surprising, historical, and little-known facts about Terraform.
## Terraform was created by Mitchell Hashimoto, who started HashiCorp in his early 20s
Mitchell Hashimoto co-founded HashiCorp with Armon Dadgar in 2012, when both were in their early twenties. Hashimoto had previously created Vagrant (2010), which automated development environment setup. Terraform was released in July 2014 and became the company's flagship product. HashiCorp went public in December 2021 at a roughly $15 billion valuation; IBM announced its acquisition in April 2024 for approximately $6.4 billion, and the deal closed in early 2025.
## The license change from MPL to BSL in August 2023 forked the entire community
On August 10, 2023, HashiCorp changed Terraform's license from the Mozilla Public License (open source) to the Business Source License (source-available but not open source). Within days, a group of companies (led by Gruntwork, Spacelift, and others) forked Terraform, first announced as "OpenTF" and later renamed "OpenTofu" when the project joined the Linux Foundation. The fork was one of the most significant open-source licensing events since the Redis and Elasticsearch license changes.
## Terraform state files are the most dangerous files in your infrastructure
The Terraform state file (terraform.tfstate) contains a complete record of every resource Terraform manages, including sensitive data like database passwords, API keys, and private IPs — all stored in plaintext JSON. Accidentally committing a state file to git, storing it on an unencrypted S3 bucket, or losing it entirely can be catastrophic. This is why remote state backends with encryption and locking exist — and why state management is Terraform's most criticized design decision.
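A common mitigation is a remote backend with encryption at rest and state locking. A minimal sketch using the S3 backend (the bucket and DynamoDB table names here are hypothetical placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # hypothetical bucket, versioning enabled
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                        # server-side encryption at rest
    dynamodb_table = "example-terraform-locks"   # locking: blocks concurrent applies
  }
}
```

Encryption protects the plaintext secrets in the state; the lock table prevents two operators (or pipelines) from corrupting state with simultaneous writes.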
## "terraform destroy" has no undo and minimal confirmation by default
Running terraform destroy deletes every resource in the state file. The only safeguard is a confirmation prompt asking you to type "yes." There's no undo, no soft delete, no recovery. In CI/CD pipelines, terraform destroy -auto-approve skips even the confirmation prompt. Multiple organizations have experienced catastrophic, unrecoverable deletions from accidental terraform destroy in production. This is why some teams alias terraform destroy to a wrapper script with additional safeguards.
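One sketch of such a wrapper, as a shell function; the function name and the opt-in environment variable are illustrative, not a standard tool:

```shell
#!/usr/bin/env sh
# Hypothetical guard: refuse to run "terraform destroy" unless the
# caller has explicitly opted in via an environment variable.
guarded_destroy() {
  if [ "$ALLOW_DESTROY" != "yes-really" ]; then
    echo "refusing to destroy: set ALLOW_DESTROY=yes-really to proceed" >&2
    return 1
  fi
  terraform destroy "$@"
}
```

Teams typically layer more safeguards on top of this, such as requiring the workspace name to be typed back or blocking the command entirely for production state.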
## HCL (HashiCorp Configuration Language) was created because YAML and JSON weren't good enough
HashiCorp created HCL because JSON lacks comments and is verbose for human authoring, YAML is ambiguous and error-prone (the "Norway problem" — NO is parsed as boolean false), and existing config languages didn't support the interpolation and block structure Terraform needed. HCL occupies a middle ground: more readable than JSON, less ambiguous than YAML, with first-class support for expressions and references.
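A small illustration of what HCL adds over JSON and YAML (comments, expressions, references); the variable and resource names are hypothetical:

```hcl
# Comments are first-class (JSON has none).
variable "env" {
  default = "staging"
}

resource "aws_s3_bucket" "logs" {
  # Expressions and references, not just static strings:
  bucket = "example-logs-${var.env}"                      # interpolation

  tags = {
    Tier = var.env == "prod" ? "critical" : "standard"    # conditional expression
  }
}
```

Because values like `var.env` are typed expressions rather than bare scalars, HCL avoids YAML-style ambiguity about whether a token is a string, boolean, or number.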
## Terraform's plan-apply workflow was inspired by database migrations
The two-phase workflow — terraform plan (show what will change) then terraform apply (make the changes) — was inspired by database migration tools that preview SQL before executing it. This pattern gives operators a chance to catch mistakes before they affect production. Despite this safety mechanism, many teams skip the plan review in CI/CD pipelines, negating its value.
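The workflow is commonly hardened by saving the plan as an artifact and applying exactly that file, so what was reviewed is what runs. A sketch of the commands:

```shell
# Preview changes and save the exact plan...
terraform plan -out=tfplan

# ...then apply precisely that saved plan after a human has reviewed it.
# Applying a saved plan file skips the interactive confirmation,
# because the reviewed plan itself is the confirmation.
terraform apply tfplan
```

In CI/CD, the plan file is typically produced in one pipeline stage, attached to a review step, and consumed by a separate apply stage.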
## The provider ecosystem has over 3,000 providers, most community-maintained
Terraform's provider ecosystem includes over 3,000 providers in the Terraform Registry, covering everything from major cloud providers to DNS services to pizza ordering APIs (yes, really — there's a Dominos provider). HashiCorp maintains only a handful of "official" providers (AWS, Azure, GCP, Kubernetes). The vast majority are community-maintained, with varying levels of quality and update frequency.
## Terraform modules are the primary mechanism for code reuse, but versioning is tricky
Terraform modules allow packaging and reusing infrastructure patterns. The Terraform Registry hosts thousands of public modules. However, module versioning creates a dependency management problem: updating a module version can change infrastructure behavior in unexpected ways. The terraform init -upgrade command updates module versions, and many teams have been surprised by breaking changes in minor version bumps of community modules.
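The usual defense is an explicit version constraint on every registry module, so upgrades happen deliberately rather than on the next `init`. A sketch using a widely used community module:

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.1.0"   # pessimistic constraint: allows 5.1.x patches, blocks 5.2.0+
}
```

The `~>` operator pins the significant version components while still accepting patch releases; an exact pin (`version = "5.1.2"`) is stricter but requires a manual bump for every update.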
## The "blast radius" of a single Terraform state file drives the monorepo vs. multi-repo debate
A single Terraform configuration managing 1,000 resources means that a mistake in any file could potentially affect all 1,000 resources. This "blast radius" problem has spawned intense debate about how to structure Terraform code: one large configuration (simple but dangerous), many small configurations (safe but complex), or tools like Terragrunt that manage multiple configurations. There's no consensus — every approach has significant tradeoffs.
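One common middle-ground layout splits state by environment and by component, so each state file bounds its own blast radius (the directory names are illustrative):

```text
infra/
├── prod/
│   ├── network/     # separate state: a mistake here can't delete databases
│   ├── database/
│   └── app/
└── staging/
    ├── network/
    ├── database/
    └── app/
```

The cost of this safety is wiring: cross-component values (VPC IDs, subnet lists) must be passed between states, typically via remote state data sources or an orchestrator like Terragrunt.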
## Terraform import was broken for years and still isn't fully solved
Importing existing infrastructure into Terraform management was notoriously painful: terraform import only updated state, not configuration. You had to manually write the HCL configuration to match the imported resource. Terraform 1.5 (2023) added import blocks and terraform plan -generate-config-out for automatic configuration generation, but the process is still imperfect — generated configurations often require manual cleanup.
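A sketch of the 1.5+ declarative flow (the resource address and bucket ID are hypothetical):

```hcl
# Terraform 1.5+ import block: declare the import in configuration
# instead of running an imperative "terraform import" command.
import {
  to = aws_s3_bucket.legacy        # address the resource will have in state
  id = "example-legacy-bucket"     # ID of the existing, unmanaged bucket
}

# Then: terraform plan -generate-config-out=generated.tf
# writes a first-draft HCL definition for aws_s3_bucket.legacy,
# which usually still needs manual cleanup before committing.
```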
## Drift detection — finding differences between Terraform state and reality — is an unsolved problem
Terraform only knows about changes it made. If someone modifies infrastructure through the AWS console, Terraform won't know until the next terraform plan. This "drift" problem means that Terraform's state can become inaccurate, leading to surprises during the next apply. Tools like Driftctl and cloud provider config recorders attempt to detect drift, but comprehensive drift detection across all resource types remains an unsolved problem.
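Terraform itself offers a lightweight building block for this: `terraform plan -detailed-exitcode` exits with status 2 when changes are pending, which a scheduled job can turn into an alert. A sketch:

```shell
# Scheduled drift check: a -refresh-only plan compares state against
# real infrastructure without proposing config-driven changes, and
# -detailed-exitcode encodes the result in the exit status.
terraform plan -refresh-only -detailed-exitcode
case $? in
  0) echo "no drift detected" ;;
  2) echo "drift detected: state no longer matches real infrastructure" ;;
  *) echo "plan failed" ;;
esac
```

This only covers resources Terraform already manages; resources created entirely outside Terraform are invisible to it, which is the gap tools like Driftctl try to fill.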
## A typical Terraform apply at a large organization touches 50-200 resources
Enterprise Terraform runs typically manage 50-200 resources per configuration. At this scale, a terraform plan takes 1-5 minutes (making API calls to every cloud resource to check its current state), and an apply can take 10-30 minutes. For organizations managing thousands of resources, the total plan+apply time across all configurations can exceed hours, which is why parallel execution and state splitting are essential.
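Within a single configuration, one knob for shortening runs is the concurrency limit; Terraform walks the resource graph with 10 concurrent operations by default:

```shell
# Raise concurrency from the default of 10; useful for large graphs,
# though cloud provider API rate limits bound the real speedup.
terraform apply -parallelism=30
```

Beyond that, the main lever is structural: splitting state so that independent configurations can plan and apply in parallel pipelines.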