Supply Chain Security - Primer

Why This Matters

Software supply chain attacks target every link in the path from source code to running production artifact. A compromised dependency, tampered build, or unsigned image can bypass traditional perimeter defenses entirely. For DevOps teams, supply chain security is now a baseline requirement — not optional hardening — driven by compliance mandates (NIST SSDF, FedRAMP), vulnerability management obligations, and the need to prove that what you built is what you deployed.

Fun fact: The 2020 SolarWinds attack (compromised build pipeline injected malware into signed updates sent to 18,000 organizations including US government agencies) was the watershed moment that made supply chain security a board-level concern. The 2021 Log4Shell vulnerability demonstrated the SBOM use case — organizations without an SBOM spent days determining whether they were affected.

Core Concepts

SBOM (Software Bill of Materials) — A machine-readable inventory of every component in an artifact: libraries, versions, licenses. Formats include SPDX and CycloneDX. An SBOM lets you answer "does this image contain log4j?" in seconds rather than days.

Name origin: SPDX (Software Package Data Exchange) was created by the Linux Foundation in 2010 and became ISO standard ISO/IEC 5962:2021. CycloneDX originated from the OWASP community in 2017, designed specifically for security use cases. SPDX is more license-focused; CycloneDX is more vulnerability-focused.

Interview tip: When asked "how do you handle dependency vulnerabilities?", the strong answer is a pipeline: generate SBOM at build time, scan with Grype/Trivy, gate deploys on severity thresholds, and store SBOMs alongside artifacts for retroactive queries when new CVEs drop.
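That pipeline can be sketched in a few lines of CI shell. This is a sketch, not a definitive setup: the image name is a placeholder, and it assumes Syft and Grype are installed (the block skips gracefully when they are not).

```shell
# CI vulnerability gate sketch (ghcr.io/org/myapp is a placeholder image)
if command -v syft >/dev/null && command -v grype >/dev/null; then
  # 1. Generate the SBOM at build time and keep it next to the artifact
  syft ghcr.io/org/myapp:v1.0.0 -o spdx-json > sbom.spdx.json
  # 2. Scan the SBOM; --fail-on makes the build fail at the given severity
  grype sbom:sbom.spdx.json --fail-on high
  GATE_RESULT=ran
else
  echo "syft/grype not installed; gate sketch only"
  GATE_RESULT=skipped
fi
```

Storing sbom.spdx.json alongside the artifact is what enables the retroactive queries when a new CVE drops.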

Under the hood: Generate an SBOM with syft myimage:latest -o spdx-json > sbom.json (older Syft releases used the now-deprecated syft packages subcommand). Scan it for vulnerabilities with grype sbom:sbom.json. Both tools are from Anchore and work with OCI images, filesystems, and archives.
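The "in seconds rather than days" query is just a lookup over stored SBOMs. A minimal sketch using jq against an SPDX-JSON fragment; the two packages below are made-up sample data standing in for a real Syft-generated SBOM:

```shell
# Tiny SPDX-JSON fragment with made-up sample packages (stands in for a
# real SBOM generated by syft)
cat > sbom.json <<'EOF'
{"packages":[{"name":"log4j-core","versionInfo":"2.14.1"},
             {"name":"guava","versionInfo":"31.1-jre"}]}
EOF

# Answer "does this artifact contain log4j?" in one line
jq -r '.packages[] | select(.name | test("log4j"))
       | "\(.name) \(.versionInfo)"' sbom.json
```

Running the same query across an archive of SBOMs, one per released artifact, tells you exactly which deployments were affected.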

SLSA Framework — Supply-chain Levels for Software Artifacts. A graduated security framework (Levels 1-3) that defines requirements for build provenance, source integrity, and build platform hardening. Higher levels provide stronger guarantees that an artifact was built from the claimed source by a trustworthy system.

Name origin: SLSA is pronounced "salsa." Originally from Google's internal Binary Authorization for Borg (BAB), it was open-sourced via the OpenSSF in 2021.

Gotcha: SLSA Level 1 only requires provenance documentation — it does not require signature verification. Many teams claim "SLSA compliance" at Level 1 thinking it provides strong security guarantees, when the real value starts at Level 2 (hosted build, signed provenance) and Level 3 (hardened build platform).

Ref: https://slsa.dev/
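Level 2+ guarantees are checked mechanically rather than claimed. A sketch using the slsa-verifier CLI, assuming a provenance file emitted by a hosted builder; the artifact, file, and repository names are placeholders, and the block skips when the tool is absent:

```shell
# Sketch: verify signed SLSA provenance (all names are placeholders)
if command -v slsa-verifier >/dev/null; then
  slsa-verifier verify-artifact myapp-linux-amd64 \
    --provenance-path myapp.intoto.jsonl \
    --source-uri github.com/org/myapp \
    --source-tag v1.0.0
  VERIFY_RESULT=ran
else
  echo "slsa-verifier not installed; sketch only"
  VERIFY_RESULT=skipped
fi
```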

Sigstore and cosign — An open-source ecosystem for signing, verifying, and recording software artifacts. Cosign signs container images using either key pairs or keyless OIDC-based identity. Signatures are recorded in the Rekor transparency log, creating a tamper-evident audit trail.

Who made it: Sigstore was started by Dan Lorenc at Google together with Luke Hinds at Red Hat and others, and donated to the OpenSSF (Open Source Security Foundation) in 2021. The name "Sigstore" combines "signature" and "store."

In-toto Attestations — Structured metadata that describes how an artifact was produced. Attestations bind a subject (artifact digest) to a predicate (build provenance, vulnerability scan results, SBOM). They enable policy engines like Kyverno to enforce rules such as "only admit images with a signed SLSA provenance attestation."

Name origin: "In-toto" is Latin for "as a whole" — the framework verifies the entire software supply chain end-to-end, not just individual steps. Created by Justin Cappos at NYU's Secure Systems Lab, the same lab that created TUF (The Update Framework) used by Docker and pip.
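In practice, attestations are attached and checked with cosign. A sketch of binding an SBOM predicate to an image subject and verifying it keyless; the image reference and digest are placeholders, and the block skips when cosign is absent:

```shell
# Sketch: attach an SBOM as an in-toto attestation, then verify it
# (placeholder image and digest; keyless OIDC signing)
if command -v cosign >/dev/null; then
  # Bind the SBOM (predicate) to the image digest (subject)
  cosign attest --yes --type cyclonedx --predicate sbom.json \
    ghcr.io/org/myapp@sha256:...
  # Verify the attestation before admitting the image
  cosign verify-attestation --type cyclonedx \
    --certificate-identity-regexp "https://github.com/org/myapp/.*" \
    --certificate-oidc-issuer https://token.actions.githubusercontent.com \
    ghcr.io/org/myapp@sha256:...
  ATTEST_RESULT=ran
else
  echo "cosign not installed; sketch only"
  ATTEST_RESULT=skipped
fi
```

A policy engine such as Kyverno performs the same verify-attestation check at admission time, which is how "only admit images with a signed provenance attestation" is enforced.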

Dependency Scanning — Automated detection of known vulnerabilities in project dependencies. Tools like Grype, npm audit, pip-audit, and govulncheck compare your dependency graph against CVE databases and fail builds when critical issues are found.

Gotcha: Dependency scanning only catches known vulnerabilities (CVEs). It cannot detect intentionally malicious code in packages that have not yet been flagged (typosquatting, account takeover). Combine scanning with lockfile pinning, vendoring critical dependencies, and using private registries that mirror verified packages.
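The ecosystem-native scanners mentioned above are typically one command each. A sketch; each line runs only if the tool is installed, and npm audit assumes a project with a lockfile in the current directory:

```shell
# Per-ecosystem CVE scanners (run from the project root; each line is
# skipped if the tool is not on PATH)
command -v npm        >/dev/null && npm audit --audit-level=high
command -v pip-audit  >/dev/null && pip-audit
command -v govulncheck >/dev/null && govulncheck ./...
SCAN_DONE=yes  # a missing tool should not fail this illustration
```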

Quick Example: Signing a Container Image

# Build and push the image, then read back the immutable digest from
# the registry (docker build writes progress to stderr, so piping its
# output through grep is unreliable)
docker build -t ghcr.io/org/myapp:v1.0.0 .
docker push ghcr.io/org/myapp:v1.0.0
DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' \
  ghcr.io/org/myapp:v1.0.0 | cut -d@ -f2)

# Sign by digest using keyless OIDC (CI-friendly)
cosign sign --yes ghcr.io/org/myapp:v1.0.0@${DIGEST}

# Consumers verify before deploying
cosign verify \
  --certificate-identity-regexp "https://github.com/org/myapp/.*" \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  ghcr.io/org/myapp:v1.0.0@${DIGEST}

This establishes a chain of trust: the image is signed by a known CI identity, the signature is logged in a public transparency log, and consumers can verify both before admitting the image to a cluster.

Remember: The supply chain security layers mnemonic: "SAVD" — SBOM (know what you shipped), Attestation (prove how it was built), Verification (check before deploy), Dependency scanning (catch known vulns). Each layer catches different classes of attack; you need all four.

War story: The 2021 Codecov breach showed how a compromised CI tool can exfiltrate secrets from thousands of downstream projects. Codecov's bash uploader script was modified to send environment variables (containing secrets) to an attacker-controlled server. Projects with locked-down CI environments and pinned script hashes were unaffected. This is why SLSA Level 3 requires a hardened build platform.
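The "pinned script hashes" defense is a one-line check before execution. A minimal sketch; for illustration the script and its known-good hash are generated locally, where a real pipeline would curl the script and compare it against a hash committed to the repo:

```shell
# Stand-in for a downloaded CI helper script (a real pipeline would
# fetch this with curl)
printf 'echo "uploading coverage..."\n' > uploader.sh

# The pinned hash would normally live in version control; here we
# compute it once to simulate the known-good value
PINNED_SHA256=$(sha256sum uploader.sh | awk '{print $1}')

# Refuse to run the script unless its hash matches the pinned value
echo "${PINNED_SHA256}  uploader.sh" | sha256sum -c - && sh uploader.sh
```

Had Codecov's users pinned the uploader this way, the tampered script would have failed the hash check instead of executing with their CI secrets in scope.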
