
Postmortem: AWS Credentials Committed to Public Repo — Caught by Pre-Commit Hook

ID: PM-026
Date: 2025-03-12
Severity: Near-Miss
Duration: 0m (no customer impact)
Time to Detect: 0m (blocked at commit time)
Time to Mitigate: 38m (key rotation completed)
Customer Impact: None
Revenue Impact: None
Teams Involved: Platform Engineering, Security, Developer Experience
Postmortem Author: Renata Kwiatkowski
Postmortem Date: 2025-03-14

Executive Summary

On 2025-03-12 at 14:07 UTC, a Platform Engineering engineer staged a .env file containing production AWS credentials with AdministratorAccess permissions and attempted to commit it to a public GitHub repository. The pre-commit hook running gitleaks v8.18.1 blocked the commit immediately and printed a detailed violation report. If the push had succeeded, the credentials would have been exposed in a public repository indexed by GitHub's own secret scanning and by automated attacker tooling — historical data from other companies shows exploitation of exposed AWS keys within 2 to 5 minutes of the push. The pre-commit hook was installed 3 weeks prior as part of Security Hardening Sprint SH-14; without that installation, this commit would have gone straight to origin.

Timeline (All times UTC)

Time Event
13:51 Marcus Delgado (Platform Engineering) begins local testing of new deployment automation script, copies .env.example to .env and populates with real credentials from AWS console
14:02 Marcus runs git add . from repo root, accidentally staging .env alongside intended files
14:07 Marcus runs git commit -m "feat: add deployment automation script" — gitleaks pre-commit hook fires
14:07 Hook output shows two violations: AWS_ACCESS_KEY_ID (rule: aws-access-token) and AWS_SECRET_ACCESS_KEY (rule: generic-api-key); commit is blocked
14:09 Marcus unstages .env with git reset HEAD .env, adds .env to .gitignore, completes commit without credentials
14:11 Marcus notifies security channel #sec-incidents per policy; creates Security ticket SEC-2041
14:15 Security engineer Priya Nambiar confirms the key was not pushed to any remote; checks GitHub audit log and gitleaks output
14:19 Priya initiates key rotation: creates replacement IAM access key via AWS console
14:33 New key deployed to CI/CD secrets store (Vault); old key deactivated in IAM
14:45 Priya confirms old key shows zero API calls after 14:07 (no exfiltration window); key deleted from IAM
14:52 Marcus adds pre-commit hook validation to team onboarding checklist; incident marked contained
15:30 Platform Engineering team retrospective scheduled; Security begins postmortem
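The 14:09 remediation sequence can be sketched as follows; the filenames match the incident, but treat this as an illustrative sketch rather than a transcript of the exact commands run.

```shell
# Unstage the accidentally added .env so it is not part of the commit
git reset HEAD .env

# Make sure the bare filename is ignored going forward
echo ".env" >> .gitignore

# Stage the .gitignore fix and complete the intended commit without the secret
git add .gitignore
git commit -m "feat: add deployment automation script"
```

Note that `git reset HEAD <path>` only unstages; the secret still exists in the working tree, which is why key rotation proceeded regardless.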

Impact

Customer Impact

None — caught before reaching production or any remote repository.

Internal Impact

  • Marcus Delgado: ~1.5 hours (incident response, key rotation coordination)
  • Priya Nambiar (Security): ~1 hour (investigation, IAM key rotation, audit log review)
  • Platform Engineering lead (Soo-Jin Park): ~30 minutes (incident review, policy update)
  • Total: approximately 3 engineering-hours

Data Impact

None. The credentials were never transmitted outside the local workstation.

What Would Have Happened

If the pre-commit hook had not been installed, the commit containing the .env file would have been pushed to the public GitHub repository (github.com/helixtechnologies/platform-deploy) within seconds. GitHub's own secret scanning would have sent an alert to the repository administrator — but that notification workflow has a lag of 5 to 15 minutes. More critically, automated attacker infrastructure continuously scans GitHub for freshly pushed AWS credentials and routinely acts within 2 to 5 minutes of exposure, well before any human could respond.

The exposed key carried the AdministratorAccess managed policy on the Helix Technologies production AWS account (account ID ending in -4821). An attacker with this key could have: launched GPU instances for cryptocurrency mining (estimated $15K-$50K/day in EC2 costs before billing alerts trigger); exfiltrated data from all S3 buckets including the helixtechnologies-customer-pii bucket holding records for approximately 280,000 users; deleted or ransomed RDS snapshots and EBS volumes; created new IAM users with persistent backdoor access that would survive the key rotation.

Beyond the immediate blast radius, the attacker could have established long-lived persistence through IAM users, OIDC identity providers, or Lambda backdoors. Incident response for a full AdministratorAccess compromise is typically a multi-day forensic exercise costing $200K-$500K in engineering time, external IR firm fees, and regulatory notification overhead. Helix Technologies would also have faced mandatory breach notification to regulators given the PII exposure, with potential CCPA/GDPR fines.

Root Cause

What Happened (Technical)

Marcus was developing a deployment automation script that required AWS credentials to test locally. Following the common but insecure practice of using a .env file for local credential injection, he copied .env.example to .env and populated it with a long-lived IAM access key generated directly in the AWS console. The key was associated with a service account (svc-deploy-automation) that had been granted AdministratorAccess during initial setup with a note to "scope down later" — a task that had remained in the backlog for 4 months.

When Marcus ran git add . from the repository root, .env was captured by the wildcard because the repository's .gitignore did not include a .env entry. The .gitignore had entries for .env.local and .env.production, but the bare .env filename was absent — an asymmetry that is easy to miss on quick inspection. This is a common .gitignore misconfiguration pattern.
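The coverage gap is easy to verify with `git check-ignore`, which reports whether any pattern ignores a given path. A sketch reproducing the misconfiguration (run inside any git repository):

```shell
# Reproduce the .gitignore that was in place: variants only, no bare .env
printf '.env.local\n.env.production\n' > .gitignore

# check-ignore exits 0 if the path is ignored, 1 if it is not
git check-ignore -q .env.local && echo ".env.local is ignored"
git check-ignore -q .env || echo ".env is NOT ignored"   # the gap

# The fix shipped as PM026-01: ignore the bare filename too
printf '.env\n*.env\n' >> .gitignore
git check-ignore -q .env && echo ".env is now ignored"
```

Running `git check-ignore -v <path>` additionally prints which file and line matched, which makes periodic audits of ignore coverage straightforward.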

The gitleaks hook, installed via pre-commit framework as part of Security Hardening Sprint SH-14 (completed 2025-02-19), scanned the staged diff and matched both AWS_ACCESS_KEY_ID (pattern: AKIA[0-9A-Z]{16}) and AWS_SECRET_ACCESS_KEY (40-character base-64 string adjacent to the key ID) against its ruleset. The hook exited non-zero and printed a formatted violation report identifying the file, line number, and matched rule, blocking the commit entirely.
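The access-key half of the match is a simple, high-precision regex, and the same check can be approximated outside gitleaks with grep. The key and secret below are AWS's own documentation examples, not real credentials:

```shell
# AWS access key IDs are 20 characters: a 4-letter prefix (AKIA for
# long-lived IAM user keys) followed by 16 uppercase alphanumerics.
cat > .env <<'EOF'
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF

# Exits 0 (match found) because the file contains a key-shaped string
grep -E 'AKIA[0-9A-Z]{16}' .env
```

The secret key has no distinctive prefix, which is why gitleaks falls back to the looser generic-api-key rule and context (adjacency to the key ID) to flag it.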

The .env file was never transmitted to any remote system. Git's pre-commit hook fires before the commit object is created; there was no intermediate state where the secret was recorded in git history.
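This ordering is a property of git itself, not of gitleaks: any executable .git/hooks/pre-commit that exits non-zero aborts the commit before the commit object exists. A minimal demonstration in a scratch repository:

```shell
# A hook that unconditionally rejects commits
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
echo "blocked by pre-commit hook"
exit 1
EOF
chmod +x .git/hooks/pre-commit

git commit -m "this will be rejected"   # fails; no commit object is created
git log --oneline                        # history is unchanged
```

The only caveat is that `git commit --no-verify` skips the hook entirely, which is one reason PM026-04 and PM026-05 add server-side backstops.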

Contributing Factors

  1. Bare .env missing from .gitignore: The repository's .gitignore had variant entries (.env.local, .env.production) but not the unqualified .env. Engineers relying on .gitignore as a safety net got no protection for the most common filename variant.
  2. Over-permissioned long-lived key: The service account svc-deploy-automation held AdministratorAccess rather than least-privilege permissions, and the key was long-lived rather than using IAM Roles Anywhere or temporary credentials. This maximized the blast radius of any exposure.
  3. No team-wide secret management guidance: There was no documented standard for how engineers should inject local development credentials. The gap left each engineer to invent their own approach, with .env files being the path of least resistance.

What We Got Lucky About

  1. The pre-commit hook was installed 3 weeks earlier. Security Hardening Sprint SH-14 completed on 2025-02-19 and included mandatory installation of gitleaks via the pre-commit framework across all Helix Technologies repositories. The hook was installed by an automated bootstrap script, so Marcus's workstation had it without any deliberate action on his part. Before SH-14, the same git add . && git commit sequence would have succeeded silently.
  2. No prior push window. Marcus was working on a new branch he had not yet pushed. There was no remote history to audit for prior accidental inclusions of the .env file.
  3. The AWS key had not been used for any automated jobs yet. Because the key was freshly generated for local testing, the CloudTrail baseline was clean. Priya could confirm with confidence that the only API calls on the key were from Marcus's local testing session — there was no ambiguous activity to investigate.
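The SH-14 bootstrap model can be sketched as a script that drops a pre-commit framework config referencing gitleaks and installs the git hook. The repo URL and hook id below follow gitleaks' published pre-commit integration, but verify them against the version you pin:

```shell
# Write a pre-commit framework config that runs gitleaks on staged changes
cat > .pre-commit-config.yaml <<'EOF'
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.1
    hooks:
      - id: gitleaks
EOF

# Install the git hook if the pre-commit framework is available; a real
# bootstrap would fail hard here instead of skipping.
if command -v pre-commit >/dev/null 2>&1; then
  pre-commit install
else
  echo "pre-commit not installed; hook not active" >&2
fi
```

Pinning `rev` to an exact tag keeps hook behavior reproducible across workstations; CI can then verify the hook is installed (PM026-05's server-side gitleaks run covers the case where it is not).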

Detection

How We Detected

The gitleaks pre-commit hook matched two rules (aws-access-token, generic-api-key) against the staged diff during git commit. The hook printed a formatted report and exited with code 1, which the pre-commit framework interprets as a blocking failure. The commit was never created.

Why This Almost Wasn't Caught

Prior to 2025-02-19, no secret scanning hook existed in this repository. The .gitignore omission of the bare .env filename meant the file would have been staged silently. There is no server-side push protection configured on this repository (GitHub Advanced Security is under evaluation but not yet licensed). Without the pre-commit hook, the only backstop would have been GitHub's post-push secret scanning, which sends an email notification — an inherently reactive control with a 5-15 minute lag.

Response

What Went Well

  1. The engineer immediately recognized the hook output as a real security event rather than a false positive to dismiss, and notified the security channel within 4 minutes of the blocked commit.
  2. Key rotation was completed in under 40 minutes from the blocked commit, including deployment of the new key to Vault and confirmation of zero API calls on the old key during the potential exposure window.
  3. The Security team's runbook for "exposed IAM key" (SEC-RB-009) was current and accurate; Priya executed it without needing to improvise any steps.

What Could Have Gone Better

  1. The service account should never have held AdministratorAccess. The least-privilege scoping was deferred for 4 months, and the resulting blast radius made this near-miss far more dangerous than it needed to be.
  2. The absence of a bare .env entry in .gitignore gave engineers a false sense of protection. The variant entries (.env.local, .env.production) suggested someone had thought about the problem but had not covered the most common case.

Action Items

PM026-01 (P0, Soo-Jin Park, In Progress, due 2025-03-15): Add bare .env and *.env to the repository's .gitignore; audit all Helix repos for the same gap
PM026-02 (P0, Marcus Delgado, Open, due 2025-03-19): Rotate svc-deploy-automation to a least-privilege IAM policy scoped to deployment actions only; delete the long-lived key and migrate to IAM Roles Anywhere
PM026-03 (P1, Developer Experience, Open, due 2025-03-26): Publish internal guide: "How to inject AWS credentials for local dev without .env files" (use aws-vault or IAM Identity Center SSO)
PM026-04 (P1, Security, Open, due 2025-03-26): Enable GitHub Advanced Security secret scanning push protection on all public repositories
PM026-05 (P2, Platform Engineering, Open, due 2025-04-02): Add a CI job that runs gitleaks in server-side check mode (defense in depth behind the pre-commit hook)
PM026-06 (P1, Security, Open, due 2025-03-26): Audit all existing IAM service account keys for AdministratorAccess; document and schedule remediation

Lessons Learned

  1. Pre-commit hooks are force multipliers but require mandatory installation. A hook that engineers can opt out of — or that isn't installed automatically — will not catch the case where it matters most. Automated bootstrap via pre-commit install on repo clone (enforced by CI) is the correct model.
  2. .gitignore false precision creates false confidence. Listing .env.local and .env.production implies the problem is handled. Auditing .gitignore files for coverage gaps (e.g., the bare filename) should be a periodic hygiene task, not a one-time setup.
  3. Blast radius is set long before the incident. The real failure in this near-miss was assigning AdministratorAccess to a service account 4 months ago and deferring the scoping task. When a secret is exposed, the damage it can cause is determined by decisions made weeks or months earlier.

Cross-References

  • Failure Pattern: Secret in Source Control (human error / staging wildcard)
  • Topic Packs: CI/CD Security, IAM Least Privilege, Pre-commit Controls, Secret Management
  • Runbook: SEC-RB-009 — Exposed IAM Key Response; SEC-RB-002 — Pre-commit Hook Installation
  • Decision Tree: Security Triage → Credential Exposure → Was it pushed to remote? → No → Rotate and audit CloudTrail