Offensive Security Basics — Street Ops

Practical defensive checks you can run on your own infrastructure. Everything here assumes you have authorization to test. If you don't, stop.


Running nmap on your own infra

Scan your own hosts to see what attackers see. Do this regularly.

# Quick scan — top 1000 ports, service detection
nmap -sV -T4 192.168.1.0/24

# Full port scan — all 65535 ports (slower)
nmap -p- -sV 10.0.0.50

# Check specific ports you care about
nmap -p 22,80,443,3306,5432,6379,8080,9200 target-host

# Output to file for diffing later
nmap -sV -oN baseline-scan.txt 10.0.0.0/24

What to look for:

  • Ports open that shouldn't be (Redis on 6379 exposed, Elasticsearch on 9200)
  • Services running unexpected versions
  • Management interfaces exposed (iDRAC, iLO, IPMI on 623)
  • Anything listening on 0.0.0.0 that should be 127.0.0.1

Compare scans over time. New open ports mean something changed — find out what.
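One way to make that comparison mechanical: a small helper (the name `new_ports` is invented for this sketch) that diffs the open-port lines of two `nmap -oN` files.

```shell
# Hypothetical helper: print open-port lines present in the current scan
# but absent from the baseline. Both files are plain `nmap -oN` output.
new_ports() {
  comm -13 <(grep -E '^[0-9]+/(tcp|udp) +open' "$1" | sort) \
           <(grep -E '^[0-9]+/(tcp|udp) +open' "$2" | sort)
}

# Usage (assumes you kept baseline-scan.txt from the command above):
#   nmap -sV -oN current-scan.txt 10.0.0.0/24
#   new_ports baseline-scan.txt current-scan.txt
```

`comm -13` suppresses lines unique to the baseline and lines common to both, leaving only what is newly open.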

Debug clue: If nmap shows a port as "filtered" instead of "closed," a firewall is silently dropping packets rather than sending RST. "Closed" means the host responded with RST (port is reachable, nothing listening). "Filtered" means no response at all. This distinction tells you whether you have a firewall issue or a service issue.


Checking for common web vulns

Header checks

# Check security headers on your sites
curl -sI https://yoursite.com | grep -iE 'strict-transport|content-security|x-frame|x-content-type'

You want to see:

  • Strict-Transport-Security — forces HTTPS
  • Content-Security-Policy — controls allowed script sources
  • X-Frame-Options — prevents clickjacking
  • X-Content-Type-Options: nosniff — prevents MIME sniffing

Missing headers = low-hanging fruit for attackers.
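To check all four in one shot, a reusable sketch: a function (the name `missing_headers` is made up here) that reads response headers on stdin and prints whichever required headers are absent.

```shell
# Read HTTP response headers on stdin; report any of the four security
# headers above that are missing. Case-insensitive, ignores values.
missing_headers() {
  local hdrs; hdrs=$(cat)
  for h in strict-transport-security content-security-policy \
           x-frame-options x-content-type-options; do
    printf '%s\n' "$hdrs" | grep -qi "^$h:" || echo "MISSING: $h"
  done
}

# Usage: curl -sI https://yoursite.com | missing_headers
```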

SQLi sanity checks on staging

On your staging environment (never production):

# Does the app handle special characters in input?
curl -s "https://staging.yoursite.com/search?q=test'%20OR%201=1--"

# Check if error messages leak database info
curl -s "https://staging.yoursite.com/api/users/1'" | grep -i "sql\|mysql\|postgres\|syntax"

If you see SQL error messages in the response, you have a problem.
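The grep above can be turned into a reusable filter. This sketch (`leaks_db_error` is an illustrative name) reads a response body on stdin and exits 0 if it contains common database error strings:

```shell
# Exit 0 if the response body on stdin looks like it leaks DB errors
leaks_db_error() {
  grep -qiE 'sql syntax|mysql|postgres|sqlite|ora-[0-9]{5}'
}

# Usage:
#   curl -s "https://staging.yoursite.com/api/users/1'" | leaks_db_error \
#     && echo "response leaks DB errors"
```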

SSRF checks

Verify internal metadata endpoints are blocked:

# From an application server, ensure the app can't reach cloud metadata
curl -s http://169.254.169.254/latest/meta-data/
# This should be blocked by firewall rules / IMDSv2 enforcement
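Firewall rules and IMDSv2 are the real controls, but an app-level guard adds depth. A crude sketch of the deny-list such a guard enforces; note that string matching alone is trivially bypassed (redirects, DNS rebinding, alternate IP encodings), so a real guard must resolve hostnames and block the entire 169.254.0.0/16 link-local range:

```shell
# Illustrative only — do NOT rely on string matching as your sole SSRF
# defense. Block metadata endpoints at the network layer as well.
is_blocked_target() {
  case "$1" in
    *169.254.169.254*|*metadata.google.internal*|*\[fd00:ec2::254\]*) return 0 ;;
    *) return 1 ;;
  esac
}
```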

Verifying password policies

Check that your policies are enforced, not just documented.

# Test if weak passwords are accepted (on test accounts)
# Try: password, 123456, company name, blank password

# Check password hash algorithm in use
# Linux — look for $6$ (SHA-512), $2b$ (bcrypt), $y$ (yescrypt), $argon2 (argon2)
sudo grep 'test-user' /etc/shadow

# Check PAM password requirements
grep -r 'pam_pwquality\|pam_cracklib\|minlen\|minclass' /etc/pam.d/

# Check password aging policy
chage -l username

Key questions:

  • Is there a minimum length of 12+ characters?
  • Are passwords checked against breach lists?
  • Is MFA enforced for all admin and remote access?
  • Are service accounts using passwords or (better) keys/certificates?
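The shadow-file check can be wrapped into a small classifier. `hash_algo` is a name invented for this sketch; the prefixes are the standard crypt(3) identifiers:

```shell
# Classify a shadow hash by its crypt(3) prefix
hash_algo() {
  case "$1" in
    '$argon2'*)              echo "argon2" ;;
    '$2a$'*|'$2b$'*|'$2y$'*) echo "bcrypt" ;;
    '$y$'*)                  echo "yescrypt" ;;
    '$6$'*)                  echo "sha512" ;;
    '$5$'*)                  echo "sha256" ;;
    '$1$'*)                  echo "md5 (weak, migrate!)" ;;
    *)                       echo "unknown" ;;
  esac
}

# Usage: hash_algo "$(sudo awk -F: '$1=="test-user"{print $2}' /etc/shadow)"
```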


Testing SSH hardening

# Check sshd_config for critical settings
sudo sshd -T | grep -E 'passwordauthentication|permitrootlogin|pubkeyauthentication|permitemptypasswords|maxauthtries|allowagentforwarding'

What you want to see:

passwordauthentication no
permitrootlogin no
pubkeyauthentication yes
permitemptypasswords no
maxauthtries 3
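Those five settings can be checked mechanically. This sketch (the function name is made up) reads `sshd -T` output on stdin and flags deviations:

```shell
# Compare effective sshd settings against the desired values above.
# Prints nothing when all five pass.
audit_sshd() {
  local cfg got key want
  cfg=$(cat)
  while read -r key want; do
    got=$(printf '%s\n' "$cfg" | awk -v k="$key" '$1 == k { print $2; exit }')
    [ "$got" = "$want" ] || echo "FAIL: $key is '${got:-unset}', want '$want'"
  done <<'EOF'
passwordauthentication no
permitrootlogin no
pubkeyauthentication yes
permitemptypasswords no
maxauthtries 3
EOF
}

# Usage: sudo sshd -T | audit_sshd
```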

Test it actually works:

# Attempt password login — should be rejected
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no user@host

# Attempt root login — should be rejected
ssh root@host

Check for weak key types:

# Find DSA or 1024-bit RSA keys in authorized_keys
for user_home in /home/*; do
  if [ -f "$user_home/.ssh/authorized_keys" ]; then
    echo "=== $user_home ==="
    ssh-keygen -l -f "$user_home/.ssh/authorized_keys" 2>/dev/null | grep -E 'DSA|1024'
  fi
done


Checking for default credentials

These get missed constantly, especially on internal systems.

System            Default creds to check
Dell iDRAC        root / calvin
HP iLO            Administrator / (on label or blank)
IPMI              ADMIN / ADMIN
Cisco switches    cisco / cisco, admin / admin
MySQL             root / (blank)
PostgreSQL        postgres / postgres
Redis             no auth by default
Elasticsearch     no auth by default (pre-8.x)
Jenkins           admin / admin
Grafana           admin / admin
MongoDB           no auth by default

# Redis — check if auth is required
redis-cli -h target-host ping
# If you get PONG with no password, auth is missing

# MySQL — check for passwordless root
mysql -h target-host -u root --password='' -e 'SELECT 1' 2>/dev/null

# Elasticsearch — check for open access
curl -s http://target-host:9200/_cluster/health

Every one of these should be changed or locked down on day one.

War story: A Redis instance with no authentication was exposed to the internet for 72 hours during a cloud migration. Attackers used the CONFIG SET command to write an SSH key into the root user's authorized_keys file. The server was fully compromised before anyone noticed. Redis with no auth + internet exposure = root shell in under a minute.
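The fix for that scenario is three lines of redis.conf (the directive names are Redis's own; the password value is a placeholder):

```
# Listen on loopback only
bind 127.0.0.1
# Require a password for every command
requirepass <long-random-string>
# Disable CONFIG so it can't rewrite files even if auth is bypassed
rename-command CONFIG ""
```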


DDoS readiness

Test rate limiting

# Quick test — send 100 rapid requests, check for 429 responses
for i in $(seq 1 100); do
  curl -s -o /dev/null -w "%{http_code}\n" https://yoursite.com/api/endpoint
done | sort | uniq -c

You should start seeing 429 Too Many Requests after your rate limit threshold.

Verify CDN config

  • Is your origin IP hidden? (dig +short yoursite.com should return CDN IPs, not origin)
  • Are CDN caching rules configured for static assets?
  • Is the CDN configured to absorb L7 attacks (challenge pages, JS challenges)?
  • Is there an incident escalation path to your CDN provider?

DDoS incident playbook

Have answers to these before the attack:

  • Who gets paged? What's the communication channel?
  • Can you enable "under attack" mode on your CDN in under 2 minutes?
  • Can you block traffic by country or ASN if needed?
  • What's the process to engage your ISP or cloud provider's DDoS team?
  • Where's the runbook? Is it accessible when your infra is down?


Ransomware readiness

Test backup restoration

If you haven't restored from backup recently, you don't have backups. You have hopes.

# Can you actually restore? Test it.
# 1. Spin up an isolated environment
# 2. Restore from your most recent backup
# 3. Verify data integrity
# 4. Time the process — know your RTO
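Step 3 (verify data integrity) is the part people hand-wave. One concrete approach, assuming you record a `sha256sum` manifest before each backup runs (`verify_restore` is a name invented for this sketch):

```shell
# Compare restored files against the pre-backup checksum manifest.
# verify_restore <manifest> <restore-root>
verify_restore() {
  local manifest
  manifest=$(readlink -f "$1")   # absolute path survives the cd below
  (cd "$2" && sha256sum --check --quiet "$manifest") \
    && echo "restore OK" || echo "INTEGRITY FAILURE"
}
```

`--quiet` suppresses per-file OK lines, so a clean restore prints a single "restore OK".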

Verify immutability

# S3 Object Lock — check bucket configuration
aws s3api get-object-lock-configuration --bucket your-backup-bucket

# Try to delete a backup object (in test bucket)
# It should fail if immutability is working
aws s3api delete-object --bucket your-backup-bucket --key test-backup-file

Verify segmentation

  • Can production hosts reach backup networks? They shouldn't.
  • Can a compromised workstation pivot to backup infrastructure? Test it.
  • Are backup credentials stored separately from production credentials?
  • Is there an air-gapped copy that survives total network compromise?

Log review for attack indicators

Things to look for in your logs:

# Failed SSH auth spikes
journalctl -u sshd --since "1 hour ago" | grep "Failed password" | wc -l

# Unusual outbound connections
ss -tnp | grep -v '127.0.0.1\|::1' | awk '{print $5}' | sort | uniq -c | sort -rn | head -20

# Web server — scan for SQLi/XSS attempts in access logs
grep -iE "(union.*select|<script|\.\.\/|etc\/passwd)" /var/log/nginx/access.log

# Failed sudo attempts
grep 'authentication failure' /var/log/auth.log | tail -20

# New cron jobs (persistence mechanism)
ls -la /etc/cron.d/ /var/spool/cron/crontabs/ 2>/dev/null

Set up alerts for:

  • More than 10 failed auth attempts per minute from one source
  • Outbound connections to known-bad IPs (threat intel feeds)
  • New SUID binaries appearing
  • Unexpected processes listening on network ports
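The failed-auth alert needs a per-source count rather than the raw total from the journalctl one-liner above. A filter for that (the function name is invented here):

```shell
# Read auth log lines on stdin; rank source IPs by failed-password count
top_failed_ips() {
  grep 'Failed password' \
    | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
    | sort | uniq -c | sort -rn
}

# Usage: journalctl -u sshd --since "1 hour ago" | top_failed_ips | head
```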


The 15-minute security audit

New system just handed to you? Run through this:

[ ] Open ports  nmap -sV host (anything unexpected?)
[ ] Default creds  try the usual suspects on exposed services
[ ] SSH config  key-only auth? Root login disabled?
[ ] Patches  how old is the OS? Any known CVEs?
[ ] Users  who has accounts? Any unauthorized?
[ ] Sudo  who has sudo? Is it ALL=(ALL) NOPASSWD?
[ ] Cron  any unexpected scheduled tasks?
[ ] Outbound  any unexpected outbound connections?
[ ] Logs  is logging enabled and shipping somewhere?
[ ] Backups  are they configured? When was the last one?
[ ] Firewall  is it running? What's the default policy?
[ ] Updates  is unattended-upgrades or equivalent enabled?
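The sudo line item is easy to script. This sketch (function name invented) reads sudoers content on stdin and prints any uncommented passwordless grants:

```shell
# Flag uncommented lines granting NOPASSWD; each one is a privilege
# escalation shortcut if that account is ever compromised
nopasswd_sudo() {
  grep -v '^[[:space:]]*#' | grep 'NOPASSWD'
}

# Usage: sudo cat /etc/sudoers /etc/sudoers.d/* 2>/dev/null | nopasswd_sudo
```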

This isn't comprehensive. It's a first pass that catches the worst stuff in the time it takes to drink a coffee.

Remember: The 15-minute audit is a triage, not a hardening exercise. It finds the doors left wide open. A full hardening pass (CIS benchmarks, OpenSCAP, etc.) takes hours per system. But 80% of real-world compromises exploit the basics this checklist covers: default creds, missing patches, and open management ports.