
Decision Tree: A Secret Was Exposed

Category: Security Response
Starting Question: "A secret (credential, key, token) was exposed — what do I do?"
Estimated traversal: 2-5 minutes
Domains: security, secrets-management, incident-response, git


The Tree

A secret (credential, key, token) was exposed — what do I do?
│
├── FIRST: Revoke or rotate the secret RIGHT NOW — do not wait for full investigation
│   This is always the first action. Even if you are unsure of the blast radius,
│   an invalid secret cannot be used. Rotate first, investigate second.
│   └── Secret revoked/rotated? → Continue investigation below
│
├── What type of secret is it?
│   │
│   ├── API key (third-party service: Stripe, Twilio, AWS, GitHub, etc.)
│   │   └── → Revoke in the provider console immediately; generate replacement
│   │       AWS: IAM console → Access Keys → Deactivate
│   │       GitHub: Settings → Developer settings → Tokens → Delete
│   │       Stripe: Dashboard → Developers → API Keys → Roll key
│   │
│   ├── Database password
│   │   └── → Rotate in DB: `ALTER USER <user> WITH PASSWORD '<new>';`
│   │       Update secret in Vault / Kubernetes secret / SSM Parameter Store
│   │       Restart services that use the old password to pick up the new one
│   │
│   ├── TLS private key
│   │   └── → Revoke cert with CA (Let's Encrypt: certbot revoke; internal CA: CRL/OCSP)
│   │       Issue new cert + key pair; deploy to all endpoints using old cert
│   │
│   ├── SSH private key
│   │   └── → Remove from all authorized_keys files on all systems
│   │       `grep -r "<public-key-fingerprint>" /home/*/.ssh/authorized_keys /root/.ssh/authorized_keys`
│   │       Generate new keypair; distribute new public key
│   │
│   ├── OAuth token / session token
│   │   └── → Revoke via provider API or token revocation endpoint
│   │       Force re-authentication for affected users/services
│   │
│   └── Service account key (GCP SA JSON, AWS long-term credential)
│       └── → Disable key immediately in cloud console
│           GCP: `gcloud iam service-accounts keys disable <key-id> --iam-account=<sa>`
│           AWS: `aws iam update-access-key --access-key-id <id> --status Inactive`
│
├── Where was it exposed?
│   │
│   ├── Public git repository (GitHub, GitLab, public fork)
│   │   │
│   │   ├── How long ago was it committed?
│   │   │   `git log --all --full-history -- <file-path>`
│   │   │   `git log --follow -p -- <file-path> | grep -E "commit|Date|<secret-prefix>"`
│   │   │   │
│   │   │   ├── < 1 hour ago — possibly not indexed by scanners yet
│   │   │   │   Act as if it IS indexed — rotate immediately regardless
│   │   │   │   (GitHub's secret scanning may have already caught it)
│   │   │   │
│   │   │   └── > 1 hour ago — assume it has been crawled and may be in use
│   │   │       Treat as confirmed exposure; audit access logs immediately
│   │   │
│   │   └── → ACTION: Git History Rewrite (public repo)
│   │       Rewrite history to remove the secret + force push
│   │       WARNING: This is destructive — coordinate with all contributors
│   │
│   ├── Private git repository
│   │   └── → Rotate secret; rewrite history to prevent future leaks
│   │       Lower urgency than public repo but still required
│   │
│   ├── Application logs (stdout, log aggregator, Splunk, Datadog)
│   │   │
│   │   ├── Are logs accessible externally or to untrusted parties?
│   │   │   ├── YES → Treat as public exposure; rotate immediately
│   │   │   └── NO → Internal exposure only; rotate and add log scrubbing
│   │   │
│   │   └── → ACTION: Purge the log entry from aggregators
│   │       Splunk: delete the event (requires admin); Datadog: contact support
│   │       Add log scrubbing middleware to prevent recurrence
│   │
│   ├── Error messages / API responses (secret returned in HTTP response)
│   │   └── → Treat as public — rotate immediately; fix the code path that exposed it
│   │
│   ├── Slack / Teams / email / internal chat
│   │   ├── Delete the message immediately (message history may be retained)
│   │   └── Check if the channel is accessible to contractors / guests
│   │       If yes, treat as semi-public; rotate as if confirmed exposure
│   │
│   └── Pastebin / internal wiki / ticket system
│       └── → Delete the entry; rotate the secret; check who had access
│
├── Has the secret been used since exposure?
│   (Check access logs for the service the secret grants access to)
│   │
│   ├── AWS: `aws cloudtrail lookup-events --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=<key>`
│   ├── GCP: `gcloud logging read 'protoPayload.authenticationInfo.principalEmail="<sa>"' --limit=50`
│   ├── GitHub: Settings → Security → Audit log → filter by token
│   ├── DB: `SELECT * FROM pg_stat_activity WHERE usename = '<user>';`
│   │   Check `pg_log` or `audit_log` for login history
│   │
│   ├── YES — the secret was used after exposure
│   │   │
│   │   ├── Were the actions legitimate (known automation, expected service)?
│   │   │   ├── YES → likely automation using the old key before rotation caught up
│   │   │   │   Document; confirm no anomalous actions in the session
│   │   │   └── NO or UNKNOWN → assume unauthorized access
│   │   │       └── → ACTION: Declare Security Incident + Forensic Audit
│   │   │           What did the access do? Was data read/copied/deleted?
│   │   │           Preserve logs before they roll over
│   │   │
│   │   └── Document the used actions regardless for incident record
│   │
│   └── NO — no usage detected since exposure
│       └── → Rotation was fast enough; document timeline and close out
│
└── Does this affect external services or vendors?
    │
    ├── Third-party service (Stripe, Twilio, AWS, Slack, etc.)
    │   └── → ACTION: Notify vendor if SLA or data sharing agreement requires it
    │       Most vendor ToS require breach notification
    │
    └── Internal-only secret
        └── → No external notification required; document internally

Node Details

Check 1: Is the secret still active?

Command/method:

# AWS — test if key is still valid
aws sts get-caller-identity --profile <profile-using-key>

# Generic API key test — try an authenticated call
curl -H "Authorization: Bearer <token>" https://api.example.com/me

# Check if DB password still works
psql -h <host> -U <user> -d <db> -c '\conninfo'
What you're looking for: Whether the credential still authenticates. If it's already been auto-rotated or deactivated by another process, confirm the rotation was complete and correct.

Common pitfall: Some services cache credentials for minutes to hours. A "revoked" API key may still work for up to 15 minutes in some providers (especially CDN edge nodes). Don't conclude "it's safe" immediately after revocation — check logs for the next 30 minutes.
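Given the caching pitfall, a polling loop is safer than a single test. This is a sketch, not part of the runbook: `wait_for_revocation` is a hypothetical helper, and the curl example assumes a placeholder endpoint.

```shell
# Keep re-testing the OLD credential until it stops authenticating.
# "$@" is any command that exits 0 while the credential still works, e.g.:
#   wait_for_revocation curl -fsS -H "Authorization: Bearer $OLD_TOKEN" https://api.example.com/me
wait_for_revocation() {
  checks=0
  while [ "$checks" -lt 30 ]; do            # ~30 minutes at one check per minute
    if ! "$@" >/dev/null 2>&1; then
      echo "old credential rejected after $checks checks"
      return 0
    fi
    checks=$((checks + 1))
    sleep 60
  done
  echo "old credential STILL VALID after 30 minutes - escalate"
  return 1
}
```

The loop exits as soon as the old credential fails, so in the common case it finishes on the first check.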

Check 2: Scan git history for additional leaks

Command/method:

# truffleHog — scan full git history for secrets
docker run --rm -v "$(pwd):/repo" trufflesecurity/trufflehog:latest git file:///repo --since-commit HEAD~50

# git-secrets — check all commits
git secrets --scan-history

# gitleaks — fast, configurable
gitleaks detect --source=. --log-opts="--all"

# Manual check for specific secret patterns
git log --all -p | grep -E "AKIA[0-9A-Z]{16}|sk_live_|ghp_[a-zA-Z0-9]{36}"
What you're looking for: Additional secrets committed anywhere in git history — in other branches, in merge commits, in deleted files that are still in the object store.

Common pitfall: Scanning only the working tree. A secret removed by git rm is still in git history. You must scan --all (all refs) and use git log -p to see patch content.
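The scanners above walk refs; a secret can also survive only as an unreachable object (e.g. after an amended or reset commit). A hedged sketch for checking the local object store — `scan_unreachable` is a made-up helper name:

```shell
# Grep unreachable blobs in the local git object store for a secret pattern.
# $1: extended regex, e.g. 'AKIA[0-9A-Z]{16}'
scan_unreachable() {
  git fsck --unreachable --no-reflogs 2>/dev/null |
    awk '$2 == "blob" { print $3 }' |
    while read -r oid; do
      if git cat-file -p "$oid" | grep -qE "$1"; then
        echo "pattern found in unreachable blob $oid"
      fi
    done
}
```

Run it from the repository root; any hit means the value still exists locally even though no commit references it.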

Check 3: Audit access logs for unauthorized use

Command/method:

# AWS CloudTrail — filter by access key
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=<AKID> \
  --start-time "$(date -d '72 hours ago' --iso-8601=seconds)" \
  --output table

# Check for unusual source IPs
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=<AKID> \
  --query 'Events[].{Time:EventTime,IP:CloudTrailEvent}' \
  | jq '.[].IP | fromjson | .sourceIPAddress'

# Kubernetes secret access audit
kubectl get events --all-namespaces | grep -i secret
What you're looking for: Any API calls from unexpected source IPs, unusual regions, unusual times (3am UTC when your services are all US-based), or calls to sensitive APIs (IAM, S3 bucket listing, data export).

Common pitfall: Looking only for write or destructive operations. An attacker will often first enumerate (ListBuckets, DescribeInstances) and only then exfiltrate. The enumeration calls look benign but confirm the key was in use.
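To make the IP review quicker, the lookup-events output can be summarized per source address. A sketch assuming the JSON has been saved to a file first; `source_ips` is a hypothetical helper (requires jq):

```shell
# Count API calls per source IP for the compromised key.
# $1: file containing the JSON output of `aws cloudtrail lookup-events`
# (Events[].CloudTrailEvent is a JSON-encoded string, hence fromjson)
source_ips() {
  jq -r '.Events[].CloudTrailEvent | fromjson | .sourceIPAddress' "$1" |
    sort | uniq -c | sort -rn
}
```

Any address not in your known egress/NAT list deserves investigation, regardless of what the calls were.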

Check 4: Is there a rotation mechanism in place?

Command/method:

# Check if secret is managed by Vault
vault kv get secret/<path>
vault lease lookup <lease-id>

# Check Kubernetes External Secrets or Sealed Secrets
kubectl get externalsecrets --all-namespaces
kubectl get sealedsecrets --all-namespaces

# Check AWS Secrets Manager rotation config
aws secretsmanager describe-secret --secret-id <name> | jq '.RotationEnabled, .RotationRules'
What you're looking for: Whether you have automated rotation available. If yes, trigger it and verify services pick up the new value. If not, document that manual rotation is required and add "implement auto-rotation" to the backlog.

Common pitfall: Rotating the secret in the secrets manager but forgetting to restart services that loaded the old value at startup. After rotation, check that all consumers restart or reload secrets.


Terminal Actions

✅ Action: Revoke/Rotate Secret (Always First)

Do:
1. Identify the authoritative source for this secret (Vault path, AWS SSM parameter, IAM key, provider dashboard)
2. Generate a new secret value BEFORE revoking the old one (to avoid service outage)
3. Update all consumers of the secret (Kubernetes secrets, CI/CD variables, app configs)
4. Verify services are using the new secret: check application logs for successful auth
5. Revoke/delete the old secret value
6. Verify the old secret no longer authenticates:

# Expect 401/403/error — old secret must fail
curl -H "Authorization: Bearer <old-token>" https://api.example.com/me
Verify: Old secret fails authentication. New secret succeeds. Services are healthy (no auth errors in logs). kubectl get pods shows no CrashLoopBackOff from auth failures.
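The two verify conditions can be scripted so both directions are checked together. A sketch; `verify_rotation` is a hypothetical helper and the curl commands assume a placeholder endpoint:

```shell
# $1: command string that authenticates with the OLD secret (must now fail)
# $2: command string that authenticates with the NEW secret (must succeed)
verify_rotation() {
  if sh -c "$1" >/dev/null 2>&1; then
    echo "FAIL: old secret still authenticates"
    return 1
  fi
  if ! sh -c "$2" >/dev/null 2>&1; then
    echo "FAIL: new secret does not authenticate"
    return 1
  fi
  echo "rotation verified"
}
# Example:
# verify_rotation \
#   "curl -fsS -H 'Authorization: Bearer $OLD' https://api.example.com/me" \
#   "curl -fsS -H 'Authorization: Bearer $NEW' https://api.example.com/me"
```

Checking both directions matters: a passing new secret alone does not prove the old one was actually revoked.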

✅ Action: Scan Git History for Additional Leaks

Do:
1. Run truffleHog against full git history:

docker run --rm -v "$(pwd):/repo" trufflesecurity/trufflehog:latest git file:///repo
2. Also run gitleaks:
gitleaks detect --source=. --log-opts="--all" --report-path=/tmp/gitleaks-report.json
3. Review all findings — not just the secret you already know about
4. Document: which commits, which files, when committed, who committed

Verify: Run the scanner again after any history rewrite — confirm zero findings. Document the scan results in the incident record.

✅ Action: Audit Access Logs for Unauthorized Use

Do:
1. Determine the exact exposure window: when was the secret committed/exposed vs. when was it revoked?
2. Pull all access logs for that service for the full exposure window
3. Identify legitimate callers (known IPs, expected user agents, expected call patterns)
4. Flag all anomalous entries: new IPs, unexpected regions, off-hours access, unusual API calls
5. If any anomalous access found: escalate to incident response immediately

Verify: Full log coverage for the exposure window is documented. Every access is accounted for as either legitimate or investigated.
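When the exposure came via git, the start of the window can be recovered with git's pickaxe search. A sketch; `first_seen` is a hypothetical helper and the argument is whatever value leaked:

```shell
# Earliest commit whose diff introduced (or removed) the leaked value.
# $1: the secret value, or a distinctive prefix of it
first_seen() {
  git log --all --reverse --format='%H %cI %an' -S "$1" | head -n 1
}
# The commit timestamp is the start of the exposure window; the end is the
# rotation time from your incident notes.
```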

✅ Action: Git History Rewrite (Public Repo)

Do:
1. Notify all contributors — this will break their local clones
2. Use git filter-repo (preferred over filter-branch):

pip install git-filter-repo
git filter-repo --path-glob <file-containing-secret> --invert-paths
# Or replace the specific string:
git filter-repo --replace-text <(echo '<old-secret>==><REDACTED>')
3. Force push to all branches:
GUARDRAIL_PUSH_BYPASS=1 git push --force --all
GUARDRAIL_PUSH_BYPASS=1 git push --force --tags
4. Contact GitHub/GitLab support to purge cached views and PR diffs that may still show the secret
5. Check all open PRs — their diffs may still contain the secret; close and reopen if needed

Verify: git log --all -p | grep <secret> returns zero results. The secret does not appear in any open PR diff. Contributor machines are re-cloned (old clones still contain the pre-rewrite history, secret included).

✅ Action: Declare Security Incident + Forensic Audit

When: Confirmed unauthorized use of the exposed secret.

Do:
1. Declare incident in incident management system — set severity based on what the secret grants access to
2. Preserve all logs before they roll over — export to durable storage immediately
3. Enumerate all actions taken with the compromised secret during the exposure window
4. Determine if data was read, copied, modified, or deleted
5. If data was accessed: notify legal/compliance for regulatory assessment
6. Rotate ALL secrets in the same trust domain (attacker may have used the compromised access to harvest more)

Verify: Full forensic timeline documented. All affected secrets rotated. Regulatory obligation assessed. Incident retrospective scheduled.
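For the preserve-logs step, capturing evidence with a checksum makes it possible to show later that it was not altered. A sketch; `capture` is a hypothetical helper, and the CloudTrail example reuses the lookup command from Check 3:

```shell
# Run a log-export command and store its output alongside a SHA-256 checksum.
capture() {
  out="evidence-$(date -u +%Y%m%dT%H%M%SZ).json"
  "$@" > "$out" || return 1
  sha256sum "$out" > "$out.sha256"
  echo "$out"
}
# Example:
# capture aws cloudtrail lookup-events \
#   --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=<AKID>
# then copy both files to durable, access-controlled storage.
```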

✅ Action: Update Secret Management to Prevent Recurrence

Do:
1. Add pre-commit hook to detect secrets before they are committed:

# Install git-secrets
git secrets --install
git secrets --register-aws
# Add custom patterns
git secrets --add 'sk_live_[0-9a-zA-Z]{24}'
2. If secret was in a Kubernetes manifest: migrate to Sealed Secrets or External Secrets Operator
3. If secret was hardcoded in source: refactor to read from environment variable / Vault
4. Update CI/CD to scan for secrets: add truffleHog or gitleaks to pipeline
5. Document the new control in runbook

Verify: Attempt to commit a test secret (a fake key matching the pattern) — pre-commit hook must block it. Secret is no longer in code.
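Before relying on the hook, the custom pattern itself can be dry-run against a fabricated key (the value below is fake and matches only the shape of a Stripe live key):

```shell
# Fabricated key: sk_live_ + 24 alphanumerics, same shape as the pattern above
fake_key="sk_live_ABCDEFGHIJKLMNOPQRSTUVWX"
if printf '%s\n' "$fake_key" | grep -qE 'sk_live_[0-9a-zA-Z]{24}'; then
  echo "pattern matches the fake key"
else
  echo "pattern MISSED the fake key - fix it before relying on the hook"
fi
```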

⚠️ Escalation: Unauthorized Access Confirmed

When: Access logs show the secret was used from an unknown source after exposure.

Who: Security team lead (immediate), CISO (within 1 hour if data was accessed), Legal/Compliance if regulated data is involved.

Include in page: Secret type, exposure method, exposure window (start time to rotation time), evidence of unauthorized use (log lines with timestamps and source IPs), what the secret granted access to, what actions were taken.


Edge Cases

  • The secret is in a dependency's config file you didn't write: Still rotate it — you are responsible for secrets in your deployed artifacts regardless of who wrote the code.
  • Rotating the secret will cause downtime and you need a maintenance window: Apply a WAF or network-level block on external access first, then rotate during the window. Document the temporary control. Do not skip rotation because of inconvenience.
  • Multiple services share the same secret: Rotate all consumers atomically or use a rolling rotation with a grace period where both old and new values are valid.
  • The secret was in a Docker image pushed to a public registry: Rotate the secret immediately; the image is already public and cannot be fully recalled (others may have pulled it). File a support request with the registry to remove the image, but treat the secret as fully compromised.
  • You find old secrets from a year ago in git history that are already revoked: Still remove them from history (they indicate poor practice) and scan to confirm they are truly inactive.
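The shared-secret edge case maps cleanly onto AWS IAM, which permits two active access keys per user, so old and new can overlap during a grace period. A non-executable sketch with placeholder names:

```shell
# 1. Create the NEW key while the old one is still active (IAM allows two)
aws iam create-access-key --user-name <user>

# 2. Roll the new key out to every consumer (Vault/SSM, CI/CD, app configs)
#    and confirm from logs that all of them authenticate with it

# 3. Deactivate the OLD key first — reversible if a forgotten consumer breaks
aws iam update-access-key --user-name <user> --access-key-id <old-id> --status Inactive

# 4. After a quiet grace period with no auth failures, delete it for good
aws iam delete-access-key --user-name <user> --access-key-id <old-id>
```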

Cross-References