Backstage Footguns¶
1. Running Production on the Default In-Memory SQLite Database¶
Backstage's default scaffold uses an in-memory SQLite database. You deploy it to production thinking SQLite is fine. On restart all catalog data is gone — entities, locations, user data — because in-memory databases don't persist.
Fix: Use PostgreSQL for any non-development deployment. Set backend.database.client: pg in app-config.yaml with connection details. SQLite is acceptable only for local development with backend.database.client: better-sqlite3.
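A minimal production database section might look like the following sketch; the environment variable names are illustrative, injected at runtime rather than committed:

```yaml
# app-config.production.yaml — connection values come from the environment
backend:
  database:
    client: pg
    connection:
      host: ${POSTGRES_HOST}
      port: ${POSTGRES_PORT}
      user: ${POSTGRES_USER}
      password: ${POSTGRES_PASSWORD}
```

Keeping this in a separate production config file also prevents local runs from accidentally pointing at the production database.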
2. Entity Name Collisions Across Namespaces Go Unnoticed¶
Two teams register components that are both named api-gateway in the default namespace. Backstage silently deduplicates: one entity overwrites the other depending on ingestion order. Neither team notices until ownership looks wrong or metadata goes missing.
Fix: Use explicit namespaces in catalog-info.yaml (metadata.namespace: team-payments). Enforce unique component names in your organization via linting or CI checks on catalog files. The full entity ref format is component:<namespace>/<name>.
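The CI check mentioned above can be a simple lint over parsed catalog files. A minimal sketch (entities are inlined here; in a real pipeline you would parse them from catalog-info.yaml files):

```python
# Hypothetical CI lint: fail the build if two catalog entities resolve to the
# same entity ref (kind:namespace/name). Entities without an explicit
# namespace fall into "default", which is where silent collisions happen.
from collections import Counter

def duplicate_refs(entities):
    refs = [
        f"{e['kind'].lower()}:{e['metadata'].get('namespace', 'default')}/{e['metadata']['name']}"
        for e in entities
    ]
    return [ref for ref, count in Counter(refs).items() if count > 1]

entities = [
    {"kind": "Component", "metadata": {"name": "api-gateway"}},        # default ns
    {"kind": "Component", "metadata": {"name": "api-gateway"}},        # collision!
    {"kind": "Component", "metadata": {"name": "api-gateway",
                                       "namespace": "team-payments"}}, # distinct ref
]
print(duplicate_refs(entities))  # → ['component:default/api-gateway']
```

Failing fast in CI here is far cheaper than debugging why one team's component vanished from the catalog.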
3. Committing App Secrets Directly in app-config.yaml¶
You hardcode GitHub OAuth client secrets, database passwords, or API keys directly in app-config.yaml and commit it. The file is checked into git and distributed with the Docker image.
Fix: Use environment variable substitution: clientSecret: ${AUTH_GITHUB_CLIENT_SECRET}. Inject secrets at runtime via Kubernetes Secrets or a secrets manager (Vault, AWS Secrets Manager). Use app-config.local.yaml (gitignored) for local development secrets.
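Wiring a Kubernetes Secret into the substituted variable is a short Deployment fragment; the secret and key names below are illustrative:

```yaml
# Deployment container spec fragment — provides the value that
# app-config.yaml references as ${AUTH_GITHUB_CLIENT_SECRET}
env:
  - name: AUTH_GITHUB_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: backstage-secrets
        key: github-client-secret
```

The config file in git then contains only the placeholder, never the value.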
Gotcha: app-config.yaml is often committed to git as part of the Backstage scaffold. Secrets hardcoded there persist in git history forever, even after deletion. Use git log --all -p -- app-config.yaml | grep -i secret to audit for historical leaks.
4. Monolithic Plugin Installation Breaks Everything on Upgrade¶
You install many Backstage plugins without version pinning. After running yarn versions:bump to upgrade Backstage core, several plugins break because they depend on older API versions. The entire portal is down.
Fix: Pin plugin versions in package.json. Use yarn versions:check to identify version mismatches before bumping. Test upgrades in a staging environment. Backstage releases are frequent — use a dedicated upgrade cadence (e.g., monthly) rather than ad-hoc upgrades.
5. TechDocs Failing Because mkdocs-material Is Not Installed¶
TechDocs renders a blank page or errors with "Module not found." The default Backstage TechDocs generator runs mkdocs inside a Docker container, but if you switched to runIn: local for performance, you need the dependencies installed on the Backstage server itself.
Fix: When using generator.runIn: local, install dependencies on the Backstage server: pip install mkdocs-material mkdocs-techdocs-core. In the Docker image, add RUN pip install mkdocs-material mkdocs-techdocs-core. Alternatively, use runIn: docker and accept the startup overhead.
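The relevant app-config keys look roughly like this (a sketch of the local-generator setup described above):

```yaml
# app-config.yaml — build and generate TechDocs on the Backstage host itself
techdocs:
  builder: local
  generator:
    runIn: local   # requires mkdocs-material and mkdocs-techdocs-core installed on the server
```

With runIn: docker instead, the dependencies live in the techdocs container image and the host needs no Python tooling.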
6. GitHub Discovery Ingests Too Many Repositories¶
You configure github-discovery with a broad pattern targeting your entire GitHub organization. Backstage attempts to scan 500+ repositories, finds catalog files in unexpected places, and ingests entities you didn't intend — including archived, deprecated, or private repositories.
Fix: Use targeted location entries instead of broad discovery, or configure discovery filters: target: https://github.com/myorg/*/blob/main/catalog-info.yaml. Add discovery exclusion patterns. Regularly audit registered locations with GET /api/catalog/locations.
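If you use the GitHub entity provider, its filter options can constrain the scan. A sketch, assuming the provider-based setup (organization name and regex are illustrative):

```yaml
# app-config.yaml — scoped GitHub discovery instead of a wildcard location
catalog:
  providers:
    github:
      myorg:
        organization: myorg
        catalogPath: /catalog-info.yaml
        filters:
          branch: main
          repository: ^service-.*   # only repos matching this regex are scanned
```

A repository regex like this keeps archived and experimental repos out of the catalog without enumerating every location by hand.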
War story: CVE-2024-53983 demonstrated that Backstage's scaffolder plugin was vulnerable to SSRF via git config injection, allowing attackers to capture privileged git tokens. Broad discovery patterns increase the attack surface — every ingested repo is a potential vector. Restrict discovery scope and keep Backstage updated.
7. owner Field References Non-Existent Groups¶
You set spec.owner: team-platform on a component. Backstage can't resolve the owner because there's no Group entity named team-platform in the catalog. Components show as orphaned, breaking ownership searches, access control, and on-call lookups.
Fix: Register Group entities explicitly before registering components that reference them. Use groups.yaml files that define your organizational structure and register them as a high-priority location. Validate ownership references with the catalog validation API before registering entities.
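A minimal Group entity for the example above looks like this (team name and type are illustrative):

```yaml
# groups.yaml — register as a high-priority location before any components
apiVersion: backstage.io/v1alpha1
kind: Group
metadata:
  name: team-platform
spec:
  type: team
  children: []
```

Once this Group is ingested, spec.owner: team-platform on a component resolves instead of leaving the component orphaned.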
8. Backstage App Runs as Root in Container¶
The default Backstage Dockerfile scaffolded by @backstage/create-app doesn't explicitly set a non-root user. The application runs as root inside the container, violating security policy in hardened Kubernetes clusters (PodSecurityPolicy, Pod Security Standards).
Fix: Add USER node to the production stage of the Dockerfile and ensure file permissions allow the node user to read the app files. Many corporate Kubernetes clusters reject root containers — catch this during template review, not at deployment time.
Default trap: The @backstage/create-app scaffold does not add a USER directive. Clusters enforcing Pod Security Standards at the restricted level will reject the pod with "must not run as root." Add USER 1000 to your Dockerfile and set runAsNonRoot: true in the pod security context.
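The end of the production stage might look like this sketch (paths and the chown step depend on your image layout):

```dockerfile
# End of the production stage — drop root before the entrypoint.
# The official node images ship a "node" user with uid 1000.
RUN chown -R node:node /app
USER node
CMD ["node", "packages/backend", "--config", "app-config.yaml"]
```

Pairing this with securityContext.runAsNonRoot: true in the pod spec makes the non-root requirement explicit, so admission fails loudly if the image ever regresses.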
9. Outdated Backstage Packages Accumulate Over Months¶
Teams deploy Backstage and don't update it for 6+ months. Backstage is under active development with frequent breaking changes. After many versions of drift, running yarn versions:bump introduces dozens of breaking API changes simultaneously.
Fix: Establish a monthly or bi-monthly upgrade cadence. Use npx @backstage/cli versions:check to track drift. Backstage maintainers provide upgrade guides for each release. Staying current is much easier than large batch upgrades.