Portal | Level: L2: Operations | Topics: Compliance & Audit, Linux Hardening, Audit Logging | Domain: Security
Compliance & Audit Automation - Primer¶
Why This Matters¶
Compliance is not optional. Whether you're in DoD, healthcare, finance, or just handling customer data, auditors will come knocking. The question is whether you'll spend six weeks scrambling to produce evidence or whether your pipeline produces it continuously. Compliance-as-code turns a dreaded annual exercise into a CI/CD artifact — auditable, repeatable, and honest.
Having done STIG remediation across hundreds of bare-metal servers, I can tell you: manual compliance is a lie. The spreadsheet says "compliant" but the reality drifts within hours of the last scan. Automated compliance is the only compliance that actually holds.
Core Concepts¶
1. The Compliance Landscape¶
| Framework | Sector | Focus | Key Tool |
|---|---|---|---|
| CIS Benchmarks | Universal | OS/app hardening | CIS-CAT, OpenSCAP |
| DISA STIGs | DoD / Government | Security configuration | STIG Viewer, OpenSCAP |
| PCI-DSS | Payment card | Cardholder data protection | InSpec, Qualys |
| HIPAA | Healthcare | Protected health information | InSpec, custom controls |
| SOC 2 | SaaS / Cloud | Trust service criteria | InSpec, cloud-native tools |
| FedRAMP | Government cloud | Cloud security | OSCAL, OpenSCAP |
| NIST 800-53 | Government | Security controls catalog | OpenSCAP, OSCAL |
Name origin: SCAP stands for "Security Content Automation Protocol" -- an ecosystem of standards maintained by NIST. XCCDF (eXtensible Configuration Checklist Description Format) defines the checklist. OVAL (Open Vulnerability and Assessment Language) defines the checks. Together they form the machine-readable backbone of automated compliance scanning.
Compliance Maturity Model:
Level 0 (Manual):     Spreadsheets, screenshots, "trust me"
Level 1 (Scripted):   One-off scan scripts, run before audit
Level 2 (Scheduled):  Cron jobs run scans weekly, reports emailed
Level 3 (Pipeline):   Compliance checks in CI/CD, block bad builds
Level 4 (Continuous): Real-time monitoring + auto-remediation
Level 5 (Codified):   Compliance profiles versioned in git, auditable diffs
2. OpenSCAP — The Swiss Army Knife¶
OpenSCAP is an open-source SCAP (Security Content Automation Protocol) scanner. It evaluates systems against XCCDF profiles (like CIS or STIG) and produces machine-readable results.
# Install OpenSCAP
dnf install -y openscap-scanner scap-security-guide # RHEL/CentOS
apt install -y libopenscap8 ssg-debian # Debian/Ubuntu
# List available profiles
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
# Run a CIS benchmark scan
oscap xccdf eval \
--profile xccdf_org.ssgproject.content_profile_cis \
--results results.xml \
--report report.html \
/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
# Run a STIG scan
oscap xccdf eval \
--profile xccdf_org.ssgproject.content_profile_stig \
--results results.xml \
--report report.html \
/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
# Generate a remediation Ansible playbook from scan results
oscap xccdf generate fix \
--fix-type ansible \
--output remediate.yml \
--result-id "" \
results.xml
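The `--result-id` passed to `generate fix` must name a `TestResult` element inside the results file. One quick way to list the available ids is a plain grep; the sample file and id below are illustrative, not output from a real scan:

```shell
# Create a trimmed sample results file (illustration only; a real one
# comes from `oscap xccdf eval --results results.xml ...`)
cat > sample-results.xml <<'EOF'
<Benchmark xmlns="http://checklists.nist.gov/xccdf/1.2">
  <TestResult id="xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_cis"/>
</Benchmark>
EOF
# Extract every TestResult id so you can pick one for --result-id
grep -o 'TestResult id="[^"]*"' sample-results.xml | cut -d'"' -f2
```

Paste the id you want into `--result-id`; with an empty string, oscap has nothing specific to target.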
OpenSCAP Workflow:
┌───────────────┐     ┌───────────────┐     ┌───────────────┐
│ SCAP Content  │────▶│ Scanner       │────▶│ Results XML   │
│ (XCCDF/OVAL)  │     │ (oscap)       │     │ + HTML Report │
└───────────────┘     └───────────────┘     └───────┬───────┘
                                                    │
┌───────────────┐     ┌───────────────┐             │
│ Remediation   │◀────│ Fix Generator │◀────────────┘
│ Playbook      │     │ (ansible/bash)│
└───────────────┘     └───────────────┘
3. Chef InSpec — Compliance as Code¶
InSpec defines compliance controls as human-readable Ruby code. It's cloud-aware, platform-agnostic, and integrates with CI/CD.
# controls/ssh_config.rb
control 'ssh-01' do
  impact 1.0
  title 'SSH Protocol version must be 2'
  desc 'Ensure only SSH Protocol 2 is used'
  describe sshd_config do
    its('Protocol') { should eq '2' }
  end
end

control 'ssh-02' do
  impact 0.7
  title 'SSH root login must be disabled'
  desc 'Root login over SSH must be prohibited'
  describe sshd_config do
    its('PermitRootLogin') { should eq 'no' }
  end
end

control 'ssh-03' do
  impact 0.7
  title 'SSH idle timeout must be configured'
  describe sshd_config do
    its('ClientAliveInterval') { should cmp <= 300 }
    its('ClientAliveCountMax') { should cmp <= 3 }
  end
end
# Run InSpec profile against a local system
inspec exec /path/to/profile
# Run against a remote host via SSH
inspec exec /path/to/profile -t ssh://admin@10.0.1.50 -i ~/.ssh/key
# Run against AWS
inspec exec /path/to/aws-profile -t aws://us-east-1
# Output formats
inspec exec profile --reporter cli json:results.json html:report.html
# Use a profile from the InSpec Supermarket
inspec exec supermarket://dev-sec/linux-baseline
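The JSON reporter is what makes InSpec results easy to gate on in scripts and pipelines. A minimal sketch of filtering for high-impact failures with jq; the trimmed results file below is hypothetical, a real one comes from `--reporter json:results.json`:

```shell
# Hypothetical, heavily trimmed InSpec JSON output (illustration only)
cat > inspec-sample.json <<'EOF'
{"profiles":[{"controls":[
  {"id":"ssh-01","impact":1.0,"results":[{"status":"failed"}]},
  {"id":"ssh-03","impact":0.7,"results":[{"status":"passed"}]}
]}]}
EOF
# Count controls with impact >= 0.7 that have at least one failed result
jq '[.profiles[].controls[] | select(.impact >= 0.7 and .results[].status == "failed")] | length' inspec-sample.json
```

Here the filter prints `1`: only `ssh-01` is both high-impact and failing. The same filter appears later as a CI gate.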
4. CIS-CAT and CIS Benchmarks¶
CIS Benchmarks are consensus-based configuration guidelines. CIS-CAT is the official assessment tool.
CIS Benchmark Levels:
Level 1: Basic security hardening. Should be applied everywhere.
Minimal performance impact. Few operational tradeoffs.
Example: Disable unused filesystems, configure password policy.
Level 2: Defense in depth. For high-security environments.
May impact performance or usability.
Example: Mandatory access control (SELinux enforcing),
audit all privileged commands.
| CIS Benchmark | Key Controls | Quick Win |
|---|---|---|
| Linux (RHEL/Ubuntu) | SSH hardening, filesystem permissions, audit logging | Disable root SSH login |
| Docker | Daemon config, image signing, resource limits | Don't run containers as root |
| Kubernetes | API server flags, RBAC, pod security | Enable audit logging |
| AWS | IAM policies, S3 bucket policies, CloudTrail | Enable MFA on root account |
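Quick wins can be spot-checked without a full scan. On a live host, `sshd -T | grep -i permitrootlogin` (run as root) prints the effective value; the same check against a config file looks like this, with a sample file used purely for illustration:

```shell
# Sample sshd_config (illustration only; real file is /etc/ssh/sshd_config)
cat > sshd_config.sample <<'EOF'
Port 22
PermitRootLogin no
PasswordAuthentication no
EOF
# Verify the root-login quick win is in place
grep -Ei '^PermitRootLogin' sshd_config.sample
```

Prefer `sshd -T` in practice: it shows the *effective* configuration after defaults, `Match` blocks, and include files are resolved, which a plain grep can miss.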
5. STIG Automation¶
STIGs (Security Technical Implementation Guides) are DoD configuration standards. They're more prescriptive than CIS benchmarks and carry severity ratings.
Remember: STIG severity mnemonic: "CAT I = Career-ending, CAT II = Career-limiting, CAT III = Cleanup." CAT I findings (like no root password) demand immediate action. CAT II findings need a plan. CAT III findings should be fixed but won't sink you.
STIG Severity Categories:
CAT I (High): Direct data loss or system compromise
Must fix. No exceptions without formal waiver.
Example: No password on root account
CAT II (Medium): Potential for degraded security
Should fix. Waivers possible with justification.
Example: Audit log not configured
CAT III (Low): Minor security concerns
Fix when practical.
Example: Warning banner not displayed
# STIG automation with Ansible (RHEL example)
# Using the DISA STIG Ansible role from SCAP Security Guide
ansible-playbook -i inventory stig-remediate.yml \
--extra-vars "rhel9_stig_cat1=true rhel9_stig_cat2=true rhel9_stig_cat3=false"
# Verify STIG compliance
oscap xccdf eval \
--profile xccdf_org.ssgproject.content_profile_stig \
--results stig-results.xml \
--report stig-report.html \
/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
# Count failed rules (XCCDF results are namespaced and the result is a child
# element, so a plain //rule-result[@result='fail'] XPath matches nothing)
xmllint --xpath "count(//*[local-name()='rule-result'][*[local-name()='result']='fail'])" stig-results.xml
6. Compliance-as-Code Pipeline¶
Pipeline Architecture:
┌────────────┐    ┌────────────┐    ┌────────────┐    ┌────────────┐
│ Git Push   │───▶│ CI Build   │───▶│ Compliance │───▶│ Evidence   │
│ (code +    │    │ (image +   │    │ Gate       │    │ Archive    │
│ profiles)  │    │ infra)     │    │ (scan)     │    │ (S3/blob)  │
└────────────┘    └────────────┘    └─────┬──────┘    └────────────┘
                                          │
                                     Pass?│
                                     ┌────┴────┐
                                     │Yes      │No
                                     ▼         ▼
                                   Deploy    Block +
                                             Notify
# .github/workflows/compliance.yml (simplified)
name: Compliance Gate
on: [push]
jobs:
  compliance-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run InSpec against container image
        run: |
          docker build -t myapp:test .
          docker run -d --name test-container myapp:test
          inspec exec compliance/linux-baseline \
            -t docker://test-container \
            --reporter json:results.json cli
          docker rm -f test-container
      - name: Check for critical failures
        run: |
          CRITICAL=$(jq '[.profiles[].controls[] |
            select(.impact >= 0.7 and .results[].status == "failed")] |
            length' results.json)
          if [ "$CRITICAL" -gt 0 ]; then
            echo "BLOCKED: $CRITICAL critical compliance failures"
            exit 1
          fi
      - name: Archive evidence
        uses: actions/upload-artifact@v4
        with:
          name: compliance-evidence-${{ github.sha }}
          path: results.json
7. Automated Remediation¶
# Ansible playbook for automated STIG remediation
---
- name: Automated STIG Remediation
  hosts: all
  become: yes
  tasks:
    - name: "V-230223 - Disable root SSH login (CAT II)"
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd

    - name: "V-230271 - Set password minimum length (CAT II)"
      lineinfile:
        path: /etc/security/pwquality.conf
        regexp: '^#?minlen'
        line: 'minlen = 15'

    - name: "V-230290 - Enable audit service (CAT I)"
      service:
        name: auditd
        state: started
        enabled: yes

    - name: "V-230386 - Set permissions on /etc/shadow (CAT I)"
      file:
        path: /etc/shadow
        mode: '0000'
        owner: root
        group: root

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
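Never point auto-remediation at production first: `ansible-playbook --check --diff` previews every change without applying it. For intuition, the `lineinfile` task for V-230223 behaves roughly like this sed substitution; the sample file is for illustration:

```shell
# Sample config with the commented-out default (illustration only)
cat > sshd_config.test <<'EOF'
#PermitRootLogin yes
EOF
# Like lineinfile: match '^#?PermitRootLogin' and replace the whole line
sed -E 's/^#?PermitRootLogin.*/PermitRootLogin no/' sshd_config.test
```

Unlike sed, `lineinfile` also *appends* the line when no match exists and reports changed/unchanged status, which is what makes the play idempotent.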
8. Audit Evidence Collection¶
Auditors want proof, not promises. Automate evidence collection so it's always fresh.
#!/bin/bash
# Automated evidence collection script
EVIDENCE_DIR="/var/evidence/$(date +%Y-%m-%d)"
mkdir -p "$EVIDENCE_DIR"
# System configuration evidence
cp /etc/ssh/sshd_config "$EVIDENCE_DIR/"
cp /etc/audit/auditd.conf "$EVIDENCE_DIR/"
cp /etc/security/pwquality.conf "$EVIDENCE_DIR/"
ss -tlnp > "$EVIDENCE_DIR/listening_ports.txt"
getent group wheel > "$EVIDENCE_DIR/privileged_users.txt"
# OpenSCAP scan results
oscap xccdf eval \
--profile xccdf_org.ssgproject.content_profile_stig \
--results "$EVIDENCE_DIR/stig-results.xml" \
--report "$EVIDENCE_DIR/stig-report.html" \
/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml 2>/dev/null
# Package inventory
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' | sort > "$EVIDENCE_DIR/packages.txt"
# Checksum the evidence bundle
sha256sum "$EVIDENCE_DIR"/* > "$EVIDENCE_DIR/checksums.sha256"
echo "Evidence collected: $EVIDENCE_DIR"
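Pair every bundle with a manifest recording when, where, and against which profile the evidence was collected, so no artifact is ever undated. A minimal sketch; the demo path and the `ssg_version` field are assumptions, the real script would write into its dated `$EVIDENCE_DIR`:

```shell
# Demo directory; the collection script above uses /var/evidence/<date>
EVIDENCE_DIR="./evidence-demo"
mkdir -p "$EVIDENCE_DIR"
# Record timestamp, hostname, and scan profile alongside the evidence
{
  echo "collected_at: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "hostname: $(hostname)"
  echo "profile: xccdf_org.ssgproject.content_profile_stig"
  echo "ssg_version: $(rpm -q scap-security-guide 2>/dev/null || echo unknown)"
} > "$EVIDENCE_DIR/manifest.txt"
cat "$EVIDENCE_DIR/manifest.txt"
```

Include the manifest in the `sha256sum` pass so the metadata is tamper-evident along with the evidence itself.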
Common Pitfalls¶
- Compliance scan on build, drift in production — Scanning only at build time means production drifts from the golden image. Run scans continuously in production, not just in CI.
- Remediating everything blindly — Auto-remediation of STIGs without understanding the controls can break applications. Test remediation in staging first. Some controls conflict with application requirements.
- Treating compliance as security — Passing a CIS benchmark does not mean you're secure. It means you've met a baseline. Compliance is the floor, not the ceiling.
Analogy: Compliance is like a building code inspection. Passing the inspection means your building meets minimum safety standards -- it doesn't mean it's burglar-proof. Security is a continuous practice; compliance is a periodic checkpoint.
- Evidence without timestamps — An undated scan result is worthless to an auditor. Every evidence artifact must have a timestamp, hostname, and scan profile version.
- One profile for all environments — Dev servers don't need the same STIG hardening as production. Use tiered profiles: relaxed for dev, strict for prod. But always scan both.
- Ignoring CAT III findings — Individually minor, but 50 unfixed CAT III findings signal systemic neglect to an auditor. Fix the easy ones and document exceptions for the rest.
War story: A common pattern in DoD environments: the team passes a STIG scan on Friday, and by Monday production has drifted out of compliance because someone manually changed an sshd setting to debug a connectivity issue. This is why Level 4 (continuous monitoring) matters -- a weekly scan catches drift too late. The fix: run OpenSCAP scans in a cron job and alert on new findings within hours, not weeks.
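A sketch of that drift-detection job. The schedule, log path, and alerting mechanism here are assumptions to adapt; the exit-code check relies on documented `oscap` behavior (exit status 2 means the scan completed but at least one rule failed):

```shell
# Write the job file locally for the demo; on a real host this
# belongs at /etc/cron.d/oscap-drift
cat > oscap-drift.cron <<'EOF'
0 */4 * * * root oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_stig --results /var/log/oscap/last.xml /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml >/dev/null 2>&1; [ $? -eq 2 ] && logger -t oscap-drift "STIG scan reported failed rules"
EOF
cat oscap-drift.cron
```

Every four hours beats weekly by a wide margin; pipe the `logger` tag into whatever alerting your syslog stack already supports so drift surfaces within hours, not at the next audit.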
Wiki Navigation¶
Prerequisites¶
- SELinux & Linux Hardening (Topic Pack, L2)
Related Content¶
- SELinux & Linux Hardening (Topic Pack, L2) — Audit Logging, Compliance & Audit, Linux Hardening
- Infrastructure Forensics (Topic Pack, L2) — Audit Logging, Linux Hardening
- Audit Logging (Topic Pack, L1) — Audit Logging
- Audit Logging Flashcards (CLI) (flashcard_deck, L1) — Audit Logging
- Compliance Flashcards (CLI) (flashcard_deck, L1) — Compliance & Audit
- Deep Dive: Systemd Service Design Debugging and Hardening (deep_dive, L2) — Linux Hardening
- LDAP & Identity Management (Topic Pack, L2) — Linux Hardening
- Linux Security Flashcards (CLI) (flashcard_deck, L1) — Linux Hardening
- Linux Users & Permissions (Topic Pack, L1) — Linux Hardening
- Runbook: Unauthorized Access Investigation (Runbook, L2) — Audit Logging
Pages that link here¶
- Anti-Primer: Compliance Automation
- Audit Logging
- Compliance & Audit Automation
- Infrastructure Forensics
- LDAP & Identity Management
- Linux Users and Permissions
- Master Curriculum: 40 Weeks
- Runbook: Unauthorized Access Investigation
- SELinux & AppArmor
- SELinux & Linux Hardening
- systemd Service Design, Debugging, and Hardening