
Secrets Management Footguns

Mistakes that leak credentials, break rotation, or give you a false sense of security.


1. Base64 is not encryption

You create a Kubernetes Secret. The values are base64-encoded. You think they're encrypted. They're not: echo "c3VwZXJzZWNyZXQ=" | base64 -d prints supersecret. Anyone with kubectl get secret -o yaml access can read every secret.

Fix: Enable etcd encryption at rest. Use an external secret store (Vault, AWS Secrets Manager). Restrict RBAC access to Secrets.

Under the hood: the Kubernetes API stores Secret values base64-encoded, not encrypted. Etcd encryption at rest protects the bytes on disk, but anyone with get secrets RBAC permission still sees plaintext in API responses. Treat RBAC on Secrets like RBAC on production databases.
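To make the point concrete, here is a local sketch — a hypothetical Secret written to a file stands in for the output of kubectl get secret -o yaml:

```shell
# Anyone who can read the Secret manifest can recover the value instantly.
# This file is a stand-in for what `kubectl get secret -o yaml` returns.
cat <<'EOF' > secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
data:
  password: c3VwZXJzZWNyZXQ=
EOF
# Pull the field out and decode it -- no key material required:
grep 'password:' secret.yaml | awk '{print $2}' | base64 -d
```

The last line prints supersecret. "Decoding" is not "decrypting": there is no secret key involved, only a table lookup.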


2. Secrets in Helm values committed to Git

You put dbPassword: supersecret in values-prod.yaml and commit it. It's in Git history forever, even if you later delete the file and commit the removal. Every developer with repo access now has prod credentials.

Fix: Use Sealed Secrets, SOPS, or helm-secrets plugin. Never put plaintext secrets in any file that touches Git.
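You can watch the "forever" part happen in a throwaway repo — the repo and the value supersecret below are hypothetical stand-ins:

```shell
# Demo: a "deleted" secret is still in Git history.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
echo 'dbPassword: supersecret' > values-prod.yaml
git add . && git commit -qm 'add prod values'
git rm -q values-prod.yaml && git commit -qm 'remove secret'
# The working tree is clean, but history still serves the credential:
git log -p --all | grep supersecret
```

Tools like git filter-repo or BFG can scrub history after the fact, but every existing clone and fork still holds the old objects — rotate the credential regardless.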


3. Rotating a secret without restarting consumers

You update a Kubernetes Secret. Pods that read the secret as environment variables still have the old value in memory. They'll keep using the old credential until they restart. If the old credential is revoked, they break.

Fix: After rotating secrets, roll the pods: kubectl rollout restart deploy/app. For file-mounted secrets, K8s updates the mounted file automatically (after a delay of up to the kubelet sync period, and never for subPath mounts), but your app still needs to watch for file changes.
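No cluster is needed to see the mechanics. A child process — like a container — captures its environment once at start and never sees later changes to the parent's copy; the names below are illustrative:

```shell
# Hypothetical stand-in for a pod consuming a Secret as an env var.
export DB_PASSWORD=old-credential
( sleep 1; echo "consumer still sees: $DB_PASSWORD" ) &   # "pod" starts here
consumer=$!
export DB_PASSWORD=new-credential   # "rotate" the secret in the store
echo "store now has: $DB_PASSWORD"
wait "$consumer"                    # prints old-credential after the rotation
```

The environment is copied at fork time; updating the source afterward changes nothing for running consumers, which is exactly why rotated Secrets require a restart.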


4. Vault token with no TTL

You create a Vault token for your application. You don't set a TTL. The token never expires. If it's leaked, the attacker has permanent access. You won't know until something bad happens.

Fix: Always set TTLs on tokens and leases. Use short-lived dynamic credentials. Implement token renewal in your application. Monitor for expired tokens.

Gotcha: If you lose enough Vault unseal keys to fall below the threshold (or your auto-unseal KMS key is deleted), your Vault data is permanently inaccessible. There is no recovery mechanism. Recovery keys cannot decrypt the master key — they are authorization-only. Test your unseal procedure quarterly and store keys in geographically separate locations.
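One way to make non-expiring tokens impossible is a token role that stamps a period on everything it mints. A sketch of the role payload — field names follow Vault's token role API, but verify them against your Vault version:

```json
{
  "allowed_policies": ["app"],
  "token_period": "1h",
  "renewable": true,
  "orphan": true
}
```

Tokens created from this role die within an hour unless the application renews them, so a leaked token has a bounded lifetime by construction rather than by discipline.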


5. Shared credentials across environments

Your dev, staging, and prod environments all use the same database password. A developer leaks the dev password. An attacker uses it against production because the password is the same.

Fix: Unique credentials per environment. Per-service credentials where possible. Use dynamic secret generation (Vault database secrets engine).
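A sketch of what per-environment dynamic credentials look like with Vault's database secrets engine — the role name, connection name, and TTLs are illustrative, and this assumes the engine is mounted at database/:

```shell
# One role per environment, each bound to its own database connection.
# Leaked dev credentials are short-lived and say nothing about prod.
vault write database/roles/app-dev \
    db_name=app-dev \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
    default_ttl=1h max_ttl=24h
vault read database/creds/app-dev   # mints a fresh, expiring user on demand
```

With a matching app-prod role on a separate connection, there is no shared string to leak: every environment, and every lease within it, has its own credential.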


6. Logging secrets

Your application logs the full request body, which includes an API key in a header. Or your error handler logs the entire config object, including database credentials. Now your secrets are in CloudWatch, Loki, or Splunk — accessible to everyone with log access.

Fix: Implement secret masking in your logging library. Never log full request/response bodies. Audit logs for leaked credentials. Use regex patterns to detect and redact secrets.
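A minimal sketch of the redaction idea — the two patterns below are examples, not a complete list, and in a real deployment the masking belongs in the logging library or log shipper rather than an ad-hoc pipe:

```shell
# Mask obvious credential patterns before log lines reach the aggregator.
echo 'level=error msg="request failed" authorization="Bearer abc123" password=hunter2' \
  | sed -E 's/([Aa]uthorization="?Bearer )[^" ]+/\1[REDACTED]/; s/(password=)[^" ]+/\1[REDACTED]/'
```

This prints the line with abc123 and hunter2 replaced by [REDACTED]. Redaction is a safety net, not a substitute for not logging request bodies and config objects in the first place.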


7. External Secrets Operator with overly broad IAM

Your ESO installation has IAM permissions to read every secret in AWS Secrets Manager. Any namespace in your cluster can create an ExternalSecret pointing to any AWS secret, including production database passwords.

Fix: Scope IAM permissions per namespace or per secret. Use ESO ClusterSecretStore with namespace selectors. Apply RBAC to limit who can create ExternalSecret resources.
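A sketch of the scoped-down shape, assuming IRSA on EKS: a namespaced SecretStore whose service account is bound to an IAM role that can read only that team's secrets. All names are illustrative:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore          # namespaced, unlike ClusterSecretStore
metadata:
  name: app-secrets
  namespace: prod
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: app-eso  # IAM role on this SA permits only prod/app/* reads
```

Because the store and its credentials live inside the namespace, an ExternalSecret in another namespace cannot borrow them to reach production secrets.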


8. No backup for your secret store

Your Vault cluster runs on a single node with no snapshots. The disk fails. Every dynamically generated credential, every encryption key, every secret is gone. Your applications can't authenticate to anything.

Fix: Run Vault HA with Raft or Consul storage. Take regular snapshots. Store unseal keys and recovery keys securely and separately. Test restore procedures.
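With integrated Raft storage, a point-in-time backup is one command — this assumes a token with permission on the raft snapshot endpoint, and the output must be shipped off-host:

```shell
# Snapshot the entire Vault Raft store; schedule this and copy the file
# somewhere the cluster cannot take down with it.
vault operator raft snapshot save "vault-$(date +%F).snap"
```

A snapshot is only half the procedure: restoring it also requires the matching unseal or recovery keys, which is why those are stored separately.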


9. Sealed Secrets cluster key not backed up

You use Sealed Secrets. The cluster is rebuilt (disaster recovery). The Sealed Secrets controller generates a new key pair. Every SealedSecret in Git can no longer be decrypted. You can't deploy anything that needs secrets.

Fix: Back up the Sealed Secrets private key: kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml > sealed-secrets-key.yaml. Store securely outside the cluster.

War story: Teams usually discover this during disaster recovery, when it is already too late — the old private key died with the cluster, the rebuilt controller minted a fresh key pair, and nothing in Git decrypts against it. The backup is a single kubectl get secret command, but you must run it before you need it.
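The restore side of that backup, sketched — the label selector below matches the stock controller manifest, so verify it against your installation:

```shell
# Re-apply the backed-up key Secret into the new cluster, then restart the
# controller so it loads the old private key alongside any newly generated one.
kubectl apply -f sealed-secrets-key.yaml
kubectl delete pod -n kube-system -l name=sealed-secrets-controller
```

After the restart, existing SealedSecrets decrypt with the restored key while new ones are sealed with the current key.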


10. envFrom exposing all secrets to a container

You use envFrom: [secretRef: {name: all-the-secrets}] to mount a secret with 20 keys. Your container only needs 2 of them. The other 18 (including admin keys for other services) are available as environment variables to any code running in the container.

Fix: Mount only the specific keys you need using env[].valueFrom.secretKeyRef. Follow least privilege for secret access, not just RBAC.
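The narrow form, as a sketch with illustrative key names — each entry pulls exactly one key, so the other 18 never enter the container's environment:

```yaml
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: all-the-secrets
        key: db-password
  - name: STRIPE_KEY
    valueFrom:
      secretKeyRef:
        name: all-the-secrets
        key: stripe-key
```

It is more verbose than envFrom, and that verbosity is the point: the pod spec now documents exactly which credentials the workload can see.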