How We Got Here: Secrets Management¶
Arc: Security · Eras covered: 5 · Timeline: ~2010-2025 · Read time: ~11 min
The Original Problem¶
In 2010, your database password was in a configuration file on the production server. Maybe it was in application.properties, maybe in a .env file, maybe hardcoded in the source code. It was committed to Subversion, visible to every developer, and unchanged since the application was first deployed. When someone left the company, nobody rotated the password because nobody knew which systems would break. Secrets were scattered across servers, repositories, wikis, and sticky notes.
The attack surface was enormous and invisible. A single compromised developer laptop could expose every production credential because they were all in the Git repo.
Era 1: Environment Variables and .env Files (~2010-2015)¶
The Solution¶
The Twelve-Factor App (Heroku, 2011) advocated storing configuration in environment variables. The .env file pattern (popularized by the dotenv library, 2012) let developers keep secrets out of code while maintaining convenience. The rule was simple: never commit secrets to version control.
What It Looked Like¶
# .env file (NOT committed to Git)
DATABASE_URL=postgres://myapp:s3cret@db.internal:5432/production
STRIPE_API_KEY=sk_live_abc123def456
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
SMTP_PASSWORD=emailpass123
# .gitignore
.env
.env.*
# Application code
import os
db_url = os.environ['DATABASE_URL']
stripe_key = os.environ['STRIPE_API_KEY']
# Deployment: set env vars on the server
export DATABASE_URL="postgres://myapp:s3cret@db.internal:5432/production"
# Or in systemd unit file
# Environment=DATABASE_URL=postgres://...
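The dotenv pattern itself is simple. A minimal sketch of what a dotenv-style loader does (illustrative only, not the actual python-dotenv implementation):

```python
import os

def load_env_file(path: str, environ=os.environ) -> None:
    """Parse KEY=VALUE lines from a .env-style file into an environment
    mapping, skipping blank lines, comments, and malformed lines. Existing
    keys win, matching the common dotenv default of not overriding the
    real environment."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            environ.setdefault(key.strip(), value.strip().strip('"'))

# Usage: load_env_file(".env") before reading os.environ at startup
```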
Why It Was Better¶
- Secrets separated from code (if the .gitignore was correct)
- Environment variables worked everywhere — any language, any platform
- Simple to understand and implement
- 12-Factor alignment provided a clear mental model
Why It Wasn't Enough¶
- .env files were committed to Git constantly (despite .gitignore)
- Environment variables were visible in process listings (/proc/*/environ)
- No encryption at rest — secrets were plaintext on disk
- No access control — anyone with server access could read all secrets
- No rotation mechanism — changing a secret meant redeploying
- No audit trail — who accessed which secret, when?
Legacy You'll Still See¶
Environment variables remain the standard interface between secrets management systems and applications. Docker --env-file, Kubernetes envFrom, and Lambda environment variables all use this pattern. The .env file is still the first thing new developers create. The difference is where the values come from: increasingly a secrets manager populates them, rather than a hand-edited file.
Era 2: Encrypted Files and Config Management (~2013-2017)¶
The Solution¶
Tools like git-crypt (2012), ansible-vault (2014), and SOPS (Mozilla, 2015) encrypted secrets at rest while allowing them to be stored in version control. The encrypted file was safe to commit; only authorized team members with the decryption key could read it. Chef Encrypted Data Bags and Puppet Hiera eyaml followed the same pattern.
What It Looked Like¶
# SOPS — encrypt specific values in a YAML file
# Original secrets.yaml:
database:
  password: s3cret
  host: db.internal
stripe:
  api_key: sk_live_abc123

# After: sops -e -i secrets.yaml
database:
  password: ENC[AES256_GCM,data:abc123...,tag:def456...]
  host: db.internal  # non-secret values stay plaintext
stripe:
  api_key: ENC[AES256_GCM,data:ghi789...,tag:jkl012...]
sops:
  kms:
    - arn: arn:aws:kms:us-east-1:123456789:key/abc-def
# Ansible Vault
ansible-vault encrypt secrets.yml
# Edit encrypted file
ansible-vault edit secrets.yml
# Use in playbook
ansible-playbook deploy.yml --ask-vault-pass
# ansible secrets.yml (encrypted):
$ANSIBLE_VAULT;1.1;AES256
3532363036613661...
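SOPS's key idea, encrypting only the values while keeping the structure readable, can be sketched as a tree walk over the parsed YAML. This toy version uses base64 purely as a stand-in to show the shape of the output; real SOPS uses AES-256-GCM with a data key managed by KMS or age:

```python
import base64
import re

def encrypt_values(node: dict, encrypted_regex=r"password|api_key|secret") -> dict:
    """Walk a dict parsed from YAML and replace values whose key matches
    encrypted_regex with an ENC[...] marker. base64 here is a placeholder,
    NOT encryption; it only illustrates SOPS's selective-field behavior."""
    out = {}
    for key, value in node.items():
        if isinstance(value, dict):
            out[key] = encrypt_values(value, encrypted_regex)
        elif re.search(encrypted_regex, key):
            blob = base64.b64encode(str(value).encode()).decode()
            out[key] = f"ENC[TOY,data:{blob}]"
        else:
            out[key] = value  # non-secret values stay plaintext
    return out
```

This is why a SOPS-encrypted file still diffs usefully in Git: the keys and non-secret values are untouched, so reviewers can see which fields changed even without the decryption key.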
Why It Was Better¶
- Secrets in version control with encryption (auditable change history)
- Works with existing tools (Git, Ansible, Puppet, Chef)
- KMS integration: decryption key managed by AWS/GCP, not a passphrase
- Selective encryption: only secret values encrypted, structure visible
- SOPS supports per-field encryption (non-secrets stay readable)
Why It Wasn't Enough¶
- Decrypted values still ended up in memory, env vars, or temp files
- Key management was pushed to KMS but still required IAM setup
- No dynamic secrets — passwords were still static, just encrypted
- Rotation required re-encrypting and redeploying
- No fine-grained access control (you could decrypt everything or nothing)
- Merge conflicts on encrypted files were painful
Legacy You'll Still See¶
SOPS is widely used for encrypting secrets in GitOps repositories. Ansible Vault is standard in Ansible-based deployments. The pattern of "encrypted secrets in Git" persists because it fits naturally into the Git workflow. Many teams use SOPS + age or SOPS + KMS as their primary secrets approach.
Era 3: HashiCorp Vault and Centralized Secrets (~2015-2021)¶
The Solution¶
HashiCorp Vault (2015) introduced a centralized secrets management platform with dynamic secrets, leasing, rotation, fine-grained access control, and an audit log. Instead of storing a static password, Vault could generate a short-lived database credential on demand. When the lease expired, the credential was automatically revoked.
What It Looked Like¶
# Enable the database secrets engine
vault secrets enable database

# Configure the Postgres connection
vault write database/config/myapp \
  plugin_name=postgresql-database-plugin \
  connection_url="postgresql://{{username}}:{{password}}@db.internal:5432/production" \
  allowed_roles="myapp-readonly" \
  username="vault_admin" \
  password="admin_password"

# Create a role that generates short-lived credentials
vault write database/roles/myapp-readonly \
  db_name=myapp \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
  default_ttl="1h" \
  max_ttl="24h"

# Application requests a credential
vault read database/creds/myapp-readonly
# Returns: username=v-myapp-abc123, password=A1B2C3..., lease_duration=1h
# After 1h, the credential is automatically revoked

# Vault policy — fine-grained access control
path "database/creds/myapp-readonly" {
  capabilities = ["read"]
}

path "secret/data/myapp/*" {
  capabilities = ["read", "list"]
}

# Deny access to production database admin creds
path "database/creds/admin-*" {
  capabilities = ["deny"]
}
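The dynamic-secrets lifecycle (a unique credential per request, revoked when its lease expires) can be simulated in a few lines. This is an illustrative sketch of the lease model, not how Vault is implemented:

```python
import secrets
import time

class DynamicSecretEngine:
    """Toy model of lease-based credentials: every caller gets a unique,
    short-lived username/password, and expired leases are revoked."""

    def __init__(self, default_ttl: int = 3600):
        self.default_ttl = default_ttl
        self.leases = {}  # lease_id -> (username, expires_at)

    def issue(self, role: str, now=None) -> dict:
        now = time.time() if now is None else now
        username = f"v-{role}-{secrets.token_hex(4)}"
        password = secrets.token_urlsafe(24)
        lease_id = secrets.token_hex(8)
        self.leases[lease_id] = (username, now + self.default_ttl)
        return {"lease_id": lease_id, "username": username, "password": password}

    def is_valid(self, lease_id: str, now=None) -> bool:
        now = time.time() if now is None else now
        lease = self.leases.get(lease_id)
        return lease is not None and now < lease[1]

    def revoke_expired(self, now=None) -> None:
        now = time.time() if now is None else now
        for lease_id, (_, expires) in list(self.leases.items()):
            if now >= expires:
                del self.leases[lease_id]  # Vault would also DROP ROLE in Postgres
```

The payoff of this model: a leaked credential is only useful until its lease expires, and every consumer gets a distinct username, so the audit log can attribute database activity to a specific workload.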
Why It Was Better¶
- Dynamic secrets: short-lived, automatically rotated, unique per consumer
- Fine-grained access control: per-path policies with deny rules
- Complete audit log: every secret read, write, and deletion is logged
- Multiple auth methods: LDAP, OIDC, AWS IAM, Kubernetes service accounts
- PKI engine: Vault as an internal certificate authority
- Transit engine: encryption as a service (apps don't handle keys)
Why It Wasn't Enough¶
- Operational complexity: Vault itself needs to be highly available and backed up
- Unsealing ceremony after restarts (Shamir's secret sharing or auto-unseal)
- Application integration required Vault-aware code or an agent/sidecar
- Single point of failure if not properly HA
- Expensive at enterprise scale (HashiCorp Enterprise licensing)
- Learning curve was significant for operations teams
Legacy You'll Still See¶
Vault is the industry standard for centralized secrets management. Most enterprises that take security seriously run Vault (or are evaluating it). The concepts Vault introduced — dynamic secrets, lease-based access, audit logging — are now expected features of any secrets solution.
Era 4: Kubernetes External Secrets Operators (~2019-2023)¶
The Solution¶
External Secrets Operator (ESO, 2019), the Secrets Store CSI Driver (with cloud provider plugins), and similar tools bridged the gap between centralized secrets stores (Vault, AWS Secrets Manager, Azure Key Vault, GCP Secret Manager) and Kubernetes-native secrets. Instead of applications fetching secrets directly from Vault, the operator synced secrets into Kubernetes Secrets, which pods consumed via env vars or volume mounts.
What It Looked Like¶
# ExternalSecret — sync from AWS Secrets Manager to K8s Secret
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-secrets
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secretsmanager
    kind: ClusterSecretStore
  target:
    name: myapp-secrets
    creationPolicy: Owner
  data:
    - secretKey: database-url
      remoteRef:
        key: production/myapp/database-url
    - secretKey: stripe-api-key
      remoteRef:
        key: production/myapp/stripe-api-key

# Pod consuming the synced secret — no Vault SDK needed
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myapp:latest
      envFrom:
        - secretRef:
            name: myapp-secrets

# ClusterSecretStore — configure once per cluster
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secretsmanager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
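Under the hood, the operator's job is a reconcile loop: read the ExternalSecret spec, fetch each remoteRef from the backend, and write the result as a Kubernetes Secret. A hedged sketch of that loop (the real ESO is a Go controller; the store argument here is any hypothetical object with a get_secret method):

```python
import base64

def reconcile(external_secret: dict, store, cluster_secrets: dict) -> None:
    """Sync one ExternalSecret: fetch every remoteRef from the external
    store and materialize a base64-encoded Kubernetes Secret object."""
    data = {}
    for item in external_secret["spec"]["data"]:
        value = store.get_secret(item["remoteRef"]["key"])
        data[item["secretKey"]] = base64.b64encode(value.encode()).decode()
    target = external_secret["spec"]["target"]["name"]
    cluster_secrets[target] = {"kind": "Secret", "data": data}
    # A real controller would now requeue this object after spec.refreshInterval
```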
Why It Was Better¶
- Applications don't need Vault SDKs or API calls — just read env vars
- Kubernetes-native: standard Secret objects, standard pod mounting
- Automatic refresh: secrets rotate without redeployment
- Multi-backend: same operator works with Vault, AWS SM, Azure KV, GCP SM
- GitOps-friendly: ExternalSecret manifests can be committed (no actual secrets in Git)
Why It Wasn't Enough¶
- Kubernetes Secrets are only base64-encoded, not encrypted (encryption at rest requires separate etcd encryption configuration)
- Secret refresh requires pod restart for env vars (volume mounts can be dynamic)
- Another operator to install, configure, and maintain
- RBAC must be carefully configured (Kubernetes RBAC for secrets is coarse)
- Doesn't solve the "who can access what" problem fully — just moves it
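The env-var limitation above is why rotating secrets are usually volume-mounted: a file can be re-read, an env var cannot. A minimal mtime-based reload sketch (a hypothetical helper, not part of any Kubernetes SDK):

```python
import os

class MountedSecret:
    """Re-read a volume-mounted secret file whenever it changes on disk,
    so rotated values are picked up without a pod restart."""

    def __init__(self, path: str):
        self.path = path
        self._mtime = None
        self._value = None

    def get(self) -> str:
        mtime = os.stat(self.path).st_mtime_ns
        if mtime != self._mtime:  # file changed since last read
            with open(self.path) as f:
                self._value = f.read().strip()
            self._mtime = mtime
        return self._value
```

Note that the kubelet syncs mounted Secret updates with a delay, so the application still sees the old value for a short window after rotation.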
Legacy You'll Still See¶
External Secrets Operator is the current standard for Kubernetes secrets management. Most Kubernetes deployments that use a centralized secrets store also use ESO or a similar syncing mechanism. The pattern of "external store -> K8s Secret -> pod" is well-established.
Era 5: Workload Identity and Secretless Authentication (~2022-2025)¶
The Solution¶
The ultimate evolution: eliminate long-lived secrets entirely. Workload identity (GKE Workload Identity, EKS Pod Identity, Azure Workload Identity) lets Kubernetes pods authenticate to cloud services using their Kubernetes identity — no API keys, no passwords, no credentials to rotate. SPIFFE/SPIRE provides a universal identity framework for workloads across environments.
What It Looked Like¶
# IRSA (IAM Roles for Service Accounts) — no AWS credentials in the pod
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789:role/myapp-production
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      serviceAccountName: myapp  # pod gets AWS credentials automatically
      containers:
        - name: myapp
          image: myapp:latest
          # No AWS_SECRET_ACCESS_KEY needed
          # The SDK automatically uses the projected token

# SPIFFE/SPIRE — universal workload identity
# Every workload gets an identity: spiffe://example.com/production/myapp
# mTLS certificates issued and rotated automatically
# No shared secrets, no API keys, no passwords

# SPIRE registration entry
spire-server entry create \
  -spiffeID spiffe://example.com/production/myapp \
  -parentID spiffe://example.com/node/k8s-worker-01 \
  -selector k8s:pod-label:app:myapp \
  -selector k8s:ns:production
Why It Was Better¶
- No static secrets to leak, rotate, or manage
- Identity-based: workloads prove who they are, not what they know
- Automatic credential rotation (tokens are short-lived by design)
- Eliminated entire categories of credential-related incidents
- Cloud-native IAM integration (AWS, GCP, Azure)
- SPIFFE enables cross-environment identity (cloud + on-prem)
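The "short-lived by design" property is easy to see in miniature: a token that carries its own expiry and a signature needs no revocation list, because verification fails the moment it expires. An illustrative stdlib sketch (real systems use signed JWTs or SPIFFE SVIDs, not this):

```python
import base64
import hashlib
import hmac
import json
import time

def mint_token(identity: str, key: bytes, ttl: int = 900, now=None) -> str:
    """Issue a short-lived token binding an identity to an expiry time."""
    now = int(time.time()) if now is None else now
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": identity, "exp": now + ttl}).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(key, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def verify_token(token: str, key: bytes, now=None):
    """Return the identity if the signature checks out and the token
    has not expired; otherwise return None."""
    now = int(time.time()) if now is None else now
    payload_b64, _, sig_b64 = token.encode().partition(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(key, payload_b64, hashlib.sha256).digest())
    if not hmac.compare_digest(sig_b64, expected):
        return None  # forged or corrupted token
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["sub"] if now < claims["exp"] else None
```

The verifier never stores per-token state: expiry and identity travel with the token, which is what makes "rotate by reissuing" cheaper than "revoke a static secret".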
Why It Wasn't Enough¶
- Only works for cloud provider services (not arbitrary third-party APIs)
- Third-party services still need API keys (Stripe, Twilio, etc.)
- SPIFFE/SPIRE is complex to deploy and operate
- Not all applications support token-based authentication
- Migration from static secrets is gradual and requires application changes
Legacy You'll Still See¶
Workload identity is the current best practice for cloud service authentication. GKE, EKS, and AKS all support it natively. SPIFFE/SPIRE is growing in adoption for multi-environment setups. The trend is clearly toward "fewer secrets, shorter-lived credentials, identity-based access."
Where We Are Now¶
Most organizations use a layered approach: workload identity for cloud services, Vault or cloud-native secrets managers (AWS Secrets Manager, Azure Key Vault) for application secrets, External Secrets Operator for Kubernetes integration, and SOPS for secrets in GitOps repos. The goal of "zero static secrets" is approaching for cloud-native workloads but still distant for legacy applications and third-party integrations.
Where It's Going¶
The direction is clear: fewer secrets, shorter lifetimes, identity-based access. SPIFFE/SPIRE will become the universal workload identity standard. Cloud providers will expand workload identity to more services. The remaining challenge is third-party services that require static API keys — the industry will need to standardize on short-lived, sender-constrained credentials (OAuth 2.0 Token Exchange, DPoP-bound tokens, and similar mechanisms).
The Pattern¶
Every generation of secrets management reduces the number of static credentials and the time they're valid. From permanent passwords to encrypted files to dynamic secrets to ephemeral tokens to "no secrets at all" — the trend is always toward shorter-lived, narrower-scoped, automatically managed credentials.
Key Takeaway for Practitioners¶
The best secret is one that doesn't exist. Use workload identity for cloud services. Use dynamic secrets (Vault) for databases. Store remaining secrets in a proper secrets manager, not environment files. If a credential doesn't have an expiration date, treat it as a security debt that needs to be addressed.
Cross-References¶
- Topic Packs: Vault, AWS Secrets Manager, External Secrets
- Tool Comparisons: Secrets Management Solutions
- Evolution Guides: Supply Chain Security, Kubernetes Itself