SQLite Operations & Internals - Street-Level Ops¶
Quick Diagnosis Commands¶
# Open a database file
sqlite3 /path/to/app.db
# Open with useful defaults already set
sqlite3 -column -header /path/to/app.db
# One-liner query from shell
sqlite3 /path/to/app.db "SELECT count(*) FROM users;"
# Dump schema only
sqlite3 /path/to/app.db .schema
# Dump entire database as SQL
sqlite3 /path/to/app.db .dump > backup.sql
# Show database file info (page size, page count, etc.)
sqlite3 /path/to/app.db "PRAGMA page_count; PRAGMA page_size; PRAGMA freelist_count;"
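Those pragmas multiply out to the file's size on disk, which makes a handy sanity check. A sketch against a scratch database (the mktemp path stands in for your real file):

```shell
# page_count * page_size should equal the main database file's size on disk
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE t(x); INSERT INTO t VALUES (randomblob(100000));"
pages=$(sqlite3 "$db" "PRAGMA page_count;")
psize=$(sqlite3 "$db" "PRAGMA page_size;")
echo "$pages pages x $psize bytes = $((pages * psize)) bytes"
```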
# Inside the sqlite3 CLI — useful dot commands
.headers on # show column names
.mode column # column-aligned output
.mode csv # CSV output
.mode json # JSON output (SQLite 3.33+)
.tables # list all tables
.schema tablename # show CREATE statement for a table
.indexes tablename # show indexes on a table
.quit # exit
# Explain query plan
EXPLAIN QUERY PLAN SELECT * FROM orders WHERE user_id = 42;
# Show all attached databases
PRAGMA database_list;
# Check journal mode
PRAGMA journal_mode;
# Check current WAL size
ls -la /path/to/app.db-wal /path/to/app.db-shm 2>/dev/null
# Integrity check (can be slow on large databases)
sqlite3 /path/to/app.db "PRAGMA integrity_check;"
# Returns "ok" if healthy, lists problems otherwise
# Quick integrity check (faster; skips UNIQUE constraint verification and
# does not check that index content matches table content)
sqlite3 /path/to/app.db "PRAGMA quick_check;"
# Check foreign key violations
sqlite3 /path/to/app.db "PRAGMA foreign_key_check;"
# Analyze query statistics (helps optimizer)
sqlite3 /path/to/app.db "ANALYZE;"
# Rebuild database, reclaim free space
sqlite3 /path/to/app.db "VACUUM;"
# Incremental vacuum: frees up to N pages without a full rebuild
# (does nothing unless the database uses auto_vacuum=INCREMENTAL)
sqlite3 /path/to/app.db "PRAGMA incremental_vacuum(100);"
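incremental_vacuum only works if the database is in incremental auto-vacuum mode. A sketch of checking and enabling it, against a scratch database rather than your real file:

```shell
# On an existing database, set the mode and run VACUUM once to convert it;
# on a brand-new database the setting takes effect immediately
db=$(mktemp)
sqlite3 "$db" "PRAGMA auto_vacuum=INCREMENTAL; VACUUM; CREATE TABLE t(x);"
sqlite3 "$db" "PRAGMA auto_vacuum;"   # 0=none, 1=full, 2=incremental
```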
Common Scenarios¶
Scenario 1: Enable WAL Mode for Concurrent Readers¶
Default rollback journal mode allows only one writer and blocks readers during writes. WAL (Write-Ahead Logging) allows concurrent readers and one writer.
# Enable WAL mode (persists across connections)
sqlite3 /path/to/app.db "PRAGMA journal_mode=WAL;"
# Returns: wal
# Verify WAL mode is active
sqlite3 /path/to/app.db "PRAGMA journal_mode;"
# WAL creates two additional files — normal, do not delete
ls -la /path/to/app.db*
# app.db
# app.db-wal <- write-ahead log
# app.db-shm <- shared memory index
# Tune WAL checkpoint behavior
# Auto-checkpoint when WAL reaches 1000 pages (default)
sqlite3 /path/to/app.db "PRAGMA wal_autocheckpoint=1000;"
# Manual checkpoint (consolidates WAL back to main file)
sqlite3 /path/to/app.db "PRAGMA wal_checkpoint(FULL);"
When to use WAL: Any multi-process scenario — web servers, background workers, multiple app instances reading the same file. WAL dramatically reduces lock contention.
Under the hood: In rollback journal mode, writers block readers by locking the entire database file. In WAL mode, writers append to a separate WAL file while readers continue reading from the main database. Readers only see committed transactions that existed when they started reading. This is MVCC (multi-version concurrency control) — the same concept PostgreSQL uses, implemented with files instead of a server.
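That snapshot behavior is easy to see with two sqlite3 processes against a scratch database: one holds an open write transaction for two seconds while the other reads freely.

```shell
# Scratch database; mktemp -u gives an unused path for sqlite3 to create
db=$(mktemp -u)
sqlite3 "$db" "PRAGMA journal_mode=WAL; CREATE TABLE t(x); INSERT INTO t VALUES (1);"  # prints: wal

# Writer: begin a transaction, insert, hold the lock ~2s, then commit
( printf 'BEGIN IMMEDIATE;\nINSERT INTO t VALUES (2);\n'; sleep 2; printf 'COMMIT;\n' ) \
  | sqlite3 "$db" &

sleep 1
sqlite3 "$db" "SELECT count(*) FROM t;"   # -> 1: reader is not blocked, uncommitted row invisible
wait
sqlite3 "$db" "SELECT count(*) FROM t;"   # -> 2: after commit, both rows visible
```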
Scenario 2: Backup Without Downtime¶
# Method 1: SQLite .backup command (uses the online backup API, safe while app is running)
sqlite3 /path/to/app.db ".backup /path/to/backup.db"
# Method 2: VACUUM INTO (SQLite 3.27+), writes a compacted copy in a single transaction
sqlite3 /path/to/app.db "VACUUM INTO '/path/to/backup.db';"
# Method 3: Copy files (only safe when no writes can occur during the copy:
# stop the application, or checkpoint in WAL mode and keep writers paused)
sqlite3 /path/to/app.db "PRAGMA wal_checkpoint(TRUNCATE);"
cp /path/to/app.db /path/to/backup.db
# Method 4: Litestream for continuous replication to S3
# litestream replicate /path/to/app.db s3://mybucket/app.db
# Verify backup integrity
sqlite3 /path/to/backup.db "PRAGMA integrity_check;"
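A backup is only done once it verifies. A sketch of the backup-then-verify flow using VACUUM INTO (3.27+) against scratch files; note that VACUUM INTO refuses to overwrite an existing target:

```shell
db=$(mktemp); backup=$(mktemp -u)   # -u: target must not exist yet for VACUUM INTO
sqlite3 "$db" "CREATE TABLE t(x); INSERT INTO t VALUES (1);"
sqlite3 "$db" "VACUUM INTO '$backup';"
sqlite3 "$backup" "PRAGMA integrity_check;"   # -> ok
sqlite3 "$backup" "SELECT count(*) FROM t;"   # -> 1
```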
Scenario 3: Diagnosing Slow Queries¶
# Enable query timing
sqlite3 /path/to/app.db
> .timer on
> SELECT * FROM orders WHERE customer_id = 123;
> Run Time: real 2.847 user 0.002 sys 0.001
# Check if indexes exist
sqlite3 /path/to/app.db ".indexes orders"
# Explain what the query optimizer will do
sqlite3 /path/to/app.db \
"EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 123;"
# Bad: "SCAN orders" ("SCAN TABLE orders" in older versions) <- full table scan, no index
# Good: "SEARCH orders USING INDEX idx_customer_id (customer_id=?)"
> **One-liner:** If `EXPLAIN QUERY PLAN` says SCAN instead of SEARCH, you are missing an index and performance will degrade linearly with table size.
# Create missing index
sqlite3 /path/to/app.db \
"CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders(customer_id);"
# Update statistics after creating index
sqlite3 /path/to/app.db "ANALYZE orders;"
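The whole loop is easy to reproduce on a scratch database. Exact EXPLAIN QUERY PLAN wording varies by SQLite version, but the SCAN-to-SEARCH flip is what matters:

```shell
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE orders(id INTEGER PRIMARY KEY, customer_id INT);"
sqlite3 "$db" "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 123;"
# without an index: a SCAN step (full table scan)
sqlite3 "$db" "CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders(customer_id);"
sqlite3 "$db" "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 123;"
# with the index: SEARCH ... USING INDEX idx_orders_customer
```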
Scenario 4: Recovering a Corrupted Database¶
# First: check integrity
sqlite3 /path/to/app.db "PRAGMA integrity_check;" 2>&1 | head -20
# If corrupted, try to recover what you can (.recover requires sqlite3 CLI 3.29+)
sqlite3 /path/to/app.db ".recover" | sqlite3 /path/to/recovered.db
# Alternative recovery approach
sqlite3 /path/to/app.db ".dump" 2>/dev/null | \
grep -v "^ROLLBACK" | \
sqlite3 /path/to/recovered.db
# Verify the recovered database
sqlite3 /path/to/recovered.db "PRAGMA integrity_check;"
sqlite3 /path/to/recovered.db ".tables"
# Compare row counts table by table (a single SQL statement can't count rows
# for a dynamically named table, so loop over the table list in the shell)
for t in $(sqlite3 /path/to/app.db "SELECT name FROM sqlite_master WHERE type='table';" 2>/dev/null); do
  echo "$t: $(sqlite3 /path/to/app.db "SELECT count(*) FROM \"$t\";" 2>/dev/null)"
done
sqlite3 /path/to/recovered.db "SELECT count(*) FROM users; SELECT count(*) FROM orders;"
Key Patterns¶
Tuning for Performance¶
# Apply these PRAGMAs at connection open time for best performance
sqlite3 /path/to/app.db << 'EOF'
PRAGMA journal_mode=WAL;
PRAGMA synchronous=NORMAL; -- Faster than FULL, safe with WAL
PRAGMA cache_size=-64000; -- 64MB page cache (negative = KB)
PRAGMA foreign_keys=ON; -- Enable FK enforcement
PRAGMA busy_timeout=5000; -- Wait 5s on lock before SQLITE_BUSY
EOF
Default trap: `busy_timeout` defaults to 0, meaning any lock contention immediately returns SQLITE_BUSY. In a web application, this causes random "database is locked" errors under concurrent load. Always set `busy_timeout` to at least 5000ms (5 seconds) at connection open time.
Gotcha: `PRAGMA synchronous=NORMAL` with WAL mode is safe because WAL provides crash recovery through the WAL file. But `synchronous=OFF` risks data loss on power failure even with WAL. Never use `synchronous=OFF` for data you care about.
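The busy_timeout trap is easy to reproduce against a scratch database: a background writer holds the write lock while a second connection tries to write, first without and then with a timeout.

```shell
db=$(mktemp -u)
sqlite3 "$db" "PRAGMA journal_mode=WAL; CREATE TABLE t(x);"   # prints: wal

# Writer holds the write lock for ~2 seconds
( printf 'BEGIN IMMEDIATE;\nINSERT INTO t VALUES (1);\n'; sleep 2; printf 'COMMIT;\n' ) \
  | sqlite3 "$db" &
sleep 1

# No timeout: fails immediately, error mentions "database is locked"
sqlite3 "$db" "INSERT INTO t VALUES (2);" 2>&1 | grep -i locked

# With a timeout: waits for the writer to commit, then succeeds
sqlite3 "$db" "PRAGMA busy_timeout=5000; INSERT INTO t VALUES (2);"
wait
sqlite3 "$db" "SELECT count(*) FROM t;"   # -> 2
```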
File Locking and Containers¶
# Problem: SQLite uses POSIX advisory locks, which don't work correctly
# across NFS/CIFS mounts or certain container volume types
# Diagnose locking issues
sqlite3 /path/to/app.db "PRAGMA locking_mode;" # normal or exclusive
# If multiple containers share a SQLite file over NFS, exclusive locking sidesteps
# the broken advisory locks, but then only ONE process can use the database at all.
# Note: locking_mode is per-connection; set it in the application at connection open,
# not via a one-off CLI invocation
sqlite3 /path/to/app.db "PRAGMA locking_mode=EXCLUSIVE;"
# Better: don't share SQLite files across containers
# Use Litestream to replicate to S3 and restore per-container
# Or use a proper client-server database (PostgreSQL) for multi-container
# Check if file is locked by another process
fuser /path/to/app.db
lsof /path/to/app.db
When NOT to Use SQLite¶
Do NOT use SQLite when:
- Multiple writers from separate processes/containers (use PostgreSQL/MySQL)
- Data file on NFS or distributed filesystem (locking breaks)
- Database > a few GB (operational complexity increases significantly)
- High concurrent write throughput needed (single writer is a hard limit)
- Network replication between nodes required (use client-server DB)
DO use SQLite when:
- Embedded in an application, single process
- Read-heavy workloads (WAL mode handles many readers)
- Testing/CI (fast, no server to start)
- Edge/IoT devices where a server is impractical
- Development environments mirroring production shape
Embedding in Applications (Operational Checklist)¶
# Schema migrations — track them
CREATE TABLE IF NOT EXISTS schema_migrations (
version INTEGER PRIMARY KEY,
applied_at TEXT DEFAULT (datetime('now'))
);
# Check current schema version
sqlite3 /path/to/app.db \
"SELECT MAX(version) FROM schema_migrations;"
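A minimal migration runner can be sketched around that table. The `apply` helper and the migration SQL below are illustrative, not from any real tool:

```shell
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE IF NOT EXISTS schema_migrations (
  version INTEGER PRIMARY KEY,
  applied_at TEXT DEFAULT (datetime('now'))
);"

# apply VERSION SQL: run SQL once, recording the version in the same transaction
apply() {
  seen=$(sqlite3 "$db" "SELECT count(*) FROM schema_migrations WHERE version=$1;")
  [ "$seen" -eq 0 ] && sqlite3 "$db" \
    "BEGIN; $2; INSERT INTO schema_migrations(version) VALUES($1); COMMIT;"
}

apply 1 "CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);"
apply 1 "CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);"   # no-op, already applied
sqlite3 "$db" "SELECT MAX(version) FROM schema_migrations;"        # -> 1
```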
# Attach multiple databases
sqlite3 /path/to/app.db \
"ATTACH DATABASE '/path/to/archive.db' AS archive;"
# Now query across both: SELECT * FROM archive.old_orders;
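A self-contained sketch of attach-and-query, using scratch databases in place of the real paths:

```shell
main=$(mktemp); arch=$(mktemp)
sqlite3 "$arch" "CREATE TABLE old_orders(id INTEGER); INSERT INTO old_orders VALUES (1),(2);"
sqlite3 "$main" "CREATE TABLE orders(id INTEGER); INSERT INTO orders VALUES (3);"

# ATTACH lasts only for the connection, so attach and query in one invocation
sqlite3 "$main" "ATTACH DATABASE '$arch' AS archive;
SELECT (SELECT count(*) FROM orders) + (SELECT count(*) FROM archive.old_orders);"   # -> 3
```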
# Export table to CSV (-csv emits proper quoting; a bare -separator ',' does not)
sqlite3 -csv -header /path/to/app.db \
"SELECT * FROM users;" > users.csv
# Import CSV (in csv mode, if the target table already exists the first row is
# imported as data; on sqlite3 3.32+ add --skip 1 to .import to drop a header row)
sqlite3 /path/to/app.db << 'EOF'
.mode csv
.import users.csv users
EOF
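A quick roundtrip on scratch files shows why -csv matters: fields containing commas get quoted, so they survive re-import. Paths and table contents here are illustrative:

```shell
db=$(mktemp); out=$(mktemp)
sqlite3 "$db" "CREATE TABLE users(id INTEGER, name TEXT);
INSERT INTO users VALUES (1,'ada'),(2,'smith, pat');"

sqlite3 -csv -header "$db" "SELECT * FROM users;" > "$out"
cat "$out"
# id,name
# 1,ada
# 2,"smith, pat"   <- the comma is quoted, so the row stays intact

# Re-import into a fresh database, dropping the header row first
db2=$(mktemp)
sqlite3 "$db2" "CREATE TABLE users(id INTEGER, name TEXT);"
tail -n +2 "$out" > "$out.body"
sqlite3 "$db2" <<EOF
.mode csv
.import $out.body users
EOF
sqlite3 "$db2" "SELECT name FROM users WHERE id=2;"   # -> smith, pat
```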