# Offensive Security Basics — Footguns
Common security mistakes that ops engineers make, often with good intentions.
## Running security tools on production without authorization
You ran nmap against the production subnet to "just check a few ports." The IDS triggered. The security team got paged. Legal got involved because a client's compliance monitoring flagged it.
Get written authorization before scanning anything. Even your own infra. Even "just one port." Have a signed scope document or at minimum an email from someone with authority. CYA is not optional here.
## "We don't need MFA, we have strong passwords"
You do not have strong passwords. Your users have Company2024! and
Summer2025! and they reuse them on their personal email. Credential stuffing
doesn't care about your password complexity rules.
MFA is not optional. It is the single most effective control against credential-based attacks. Enable it for everything that supports it, starting with admin accounts and VPN.
## Trusting a WAF to stop all web attacks
A WAF is a speed bump, not a wall. Attackers bypass WAFs routinely with encoding tricks, parameter pollution, and zero-day payloads. WAF vendors are always playing catch-up.
The WAF helps. It is not a substitute for parameterized queries, input validation, and secure code. If your app is vulnerable and you're relying on the WAF, you're one bypass away from a breach.
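The difference is easy to demonstrate. A minimal sketch using Python's built-in sqlite3 module; the table and the injection payload are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Attacker-controlled input: a classic injection payload
user_input = "alice' OR '1'='1"

# VULNERABLE: string formatting splices the payload into the SQL,
# so the OR clause becomes syntax and matches every row:
#   f"SELECT * FROM users WHERE name = '{user_input}'"

# SAFE: the driver binds the value as data, never as SQL syntax
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload is just a string that matches no user
```

A WAF might catch this particular payload; the parameterized query makes the whole class of payloads inert.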
## Backing up to the same network the ransomware can reach
Your backups are on a NAS on the same VLAN as your servers. Ransomware encrypts everything it can reach. It can reach your NAS. Now you have encrypted backups of encrypted data.
Backup infrastructure must be network-segmented at minimum. Immutable storage (S3 Object Lock, WORM tape) is better. An air-gapped copy is best. If your backup server authenticates with the same AD credentials as everything else, ransomware will find it.
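The immutable-storage option can be sketched with the AWS CLI. This is a hedged outline, not a complete setup: the bucket name is a placeholder, and region flags, versioning details, and IAM permissions are omitted.

```shell
# Enable Object Lock when the bucket is created
# (versioning is turned on automatically alongside it)
aws s3api create-bucket \
    --bucket example-backup-bucket \
    --object-lock-enabled-for-bucket

# COMPLIANCE mode: no one, including the root account, can delete
# or overwrite a locked object until the retention period expires
aws s3api put-object-lock-configuration \
    --bucket example-backup-bucket \
    --object-lock-configuration '{
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}}
    }'
```

COMPLIANCE mode is deliberately irreversible. GOVERNANCE mode lets privileged users shorten retention, which is exactly the capability an attacker with stolen admin credentials would use.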
## Default credentials on internal systems ("it's behind the firewall")
The iDRAC is still root/calvin. Jenkins is still admin/admin. Redis has
no auth. "It's fine, it's on the internal network."
It's not fine. Attackers who get initial access to one machine pivot internally. Every default credential is a free lateral movement opportunity. The firewall protects against external threats. It does nothing once an attacker is inside.
Change every default password. Put auth on every service. Internal does not mean safe.
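For the Redis example above, putting auth on the service is a single directive. The password shown is a placeholder, not a suggestion; generate a long random secret.

```shell
# redis.conf: refuse every command until the client authenticates
# requirepass replace-with-a-long-random-secret

# Or set it at runtime, then persist it so it survives a restart
redis-cli CONFIG SET requirepass "replace-with-a-long-random-secret"
redis-cli -a "replace-with-a-long-random-secret" CONFIG REWRITE
```

The iDRAC and Jenkins fixes are the same idea: the time to change the credential is the day the service comes up, not after the pentest report.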
War story: The 2013 Target breach started with stolen HVAC vendor credentials and moved laterally through flat internal networks. 40 million credit card numbers were exfiltrated. The attackers never needed to break through the firewall — they were already inside via a trusted vendor with default-strength credentials.
## Not testing backup restoration
You have backups. They run every night. The cron job is green. You have never actually restored from them.
When ransomware hits and you try to restore, you discover the backups have been silently failing for three months, or the restore process takes 48 hours instead of the 4 you promised in the DR plan, or the backup doesn't include the database, just the app files.
Test restoration quarterly at minimum. Time it. Verify the data. Document the process. An untested backup is a hope, not a control.
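The shape of a restore drill can be sketched in a few lines. Here tar stands in for your real backup tooling, and every path is a throwaway placeholder:

```shell
#!/bin/sh
set -eu
# Sketch of a restore drill; tar is a stand-in for real backup tooling
drill=$(mktemp -d)
mkdir -p "$drill/data"
echo "order 1001" > "$drill/data/orders.txt"

# "Backup" step: in production this is your nightly job's output
tar -czf "$drill/backup.tar.gz" -C "$drill" data

# Restore into a SEPARATE location, never over live data
mkdir -p "$drill/restored"
tar -xzf "$drill/backup.tar.gz" -C "$drill/restored"

# Verify byte-for-byte; time the whole drill against your DR plan
diff -r "$drill/data" "$drill/restored/data" && echo "RESTORE OK"
```

The verification step is the point of the exercise: a restore that completes but produces wrong or partial data is the three-months-of-silent-failure scenario above.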
## Assuming HTTPS means the application is secure
HTTPS means the connection between the client and server is encrypted. That's it. It says nothing about:
- Whether the application has SQL injection vulnerabilities
- Whether the server is patched
- Whether the API validates input
- Whether authentication is implemented correctly
- Whether the database is encrypted at rest
HTTPS is table stakes. It's the minimum. It protects data in transit and nothing else.
## Port scanning someone else's infrastructure
You wanted to see if a vendor's API server was properly secured. You ran nmap against their IP range. This is unauthorized access to a computer system in most jurisdictions. It doesn't matter that you "didn't do anything" with the results.
Only scan systems you own or have explicit written authorization to test. For vendor security, ask for their SOC 2 report or pentest results. Don't do your own reconnaissance against their infrastructure.
## Storing API keys in git history
You committed an AWS access key to the repo. You realized the mistake and removed it in the next commit. The key is still in git history, and anyone who clones the repo can recover it with a one-line history search.
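A demonstration in a throwaway repo. The key below is AWS's documented example key, not a live credential:

```shell
#!/bin/sh
set -eu
# Build a tiny repo that commits a key, then "removes" it
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "AWS_KEY=AKIAIOSFODNN7EXAMPLE" > config.env
git add config.env
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "add config"
git rm -q config.env
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "remove the key"

# The key is gone from HEAD but trivially recoverable from history:
# -S (pickaxe) finds every commit that added or removed the string
git log -p --all -S 'AKIA' -- config.env | grep AKIA
```

The final command prints the diff lines containing the key from both the commit that added it and the commit that "removed" it.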
Bots scan GitHub for leaked credentials constantly. Keys get found within minutes of being pushed.
If you committed a secret:
1. Rotate the credential immediately — this is step one, not step two
2. Remove it from history with git filter-repo or BFG Repo-Cleaner (git itself now warns against the older git filter-branch)
3. Add the pattern to .gitignore and set up pre-commit hooks (e.g., git-secrets, gitleaks)
4. Use environment variables or a secrets manager (Vault, AWS Secrets Manager, SOPS)
The secret is compromised the moment it hits a remote. Removing it from HEAD is not enough.
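The pre-commit hook from step 3 can be installed mechanically. A sketch using a throwaway repo; it assumes the gitleaks binary is on PATH when commits run, and the `protect --staged` syntax is per gitleaks v8, so check your version:

```shell
#!/bin/sh
set -eu
# Demo repo; in practice run the hook-install lines in your real repo
repo=$(mktemp -d)
cd "$repo"
git init -q

# Install a hook that scans the staged diff before every commit
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Abort the commit if gitleaks finds a secret in the staged changes
exec gitleaks protect --staged
EOF
chmod +x .git/hooks/pre-commit
echo "HOOK INSTALLED"
```

Per-developer hooks are easy to forget on fresh clones, so teams usually pair this with a server-side or CI scan as a backstop.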
Gotcha: GitHub's secret scanning catches known credential patterns (AWS keys starting with AKIA, GitHub tokens starting with ghp_), but it does not catch database passwords, custom API keys, or secrets in non-standard formats. Pre-commit hooks like gitleaks or trufflehog run locally and catch a broader range of patterns before they ever reach the remote.