
Loki


10 cards — 🟢 3 easy | 🟡 4 medium | 🔴 3 hard

🟢 Easy (3)

1. How does Loki differ from Elasticsearch in its indexing approach?

Answer: Loki indexes only labels (metadata), not the full log content. This makes it much cheaper to operate than Elasticsearch, which indexes all log text.

Remember: Loki indexes labels only. Query content with LogQL filter expressions.

2. How do you select a log stream in LogQL?

Answer: Use a stream selector with labels in curly braces, e.g., {app="nginx", namespace="production"}.

Remember: LogQL: label matchers + pipelines. |= contains, != not, |~ regex.

Example: {app="nginx", namespace="production"} |= "error" selects nginx logs in production containing error.

Name origin: Loki is named after the Norse trickster god — lighter than Elasticsearch, just as Loki is lighter than Thor.

3. What is Promtail and what role does it play in the Loki ecosystem?

Answer: Promtail is a log collection agent that ships log entries to Loki. It tails log files, attaches labels, and forwards entries to Loki for indexing and storage.

Remember: Loki = "Prometheus for logs." Indexes labels, not content. Lightweight.

Fun fact: Grafana Labs created it. Same label approach as Prometheus.
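A minimal Promtail config sketch showing the pieces described above — tail files, attach labels, push to Loki. The Loki URL, ports, and paths are illustrative placeholders:

```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml            # where Promtail records how far it has read

clients:
  - url: http://loki:3100/loki/api/v1/push # Loki push endpoint (placeholder host)

scrape_configs:
  - job_name: nginx
    static_configs:
      - targets: [localhost]
        labels:
          app: nginx
          environment: production
          __path__: /var/log/nginx/*.log   # glob of files to tail
```

The labels set here become the stream labels you query with `{app="nginx", ...}` in LogQL.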

🟡 Medium (4)

1. How do you filter logs by content in LogQL?

Answer: Use pipe operators: |= for contains, != for not contains, |~ for regex match, !~ for not regex match. Example: {app="nginx"} |= "error" != "healthcheck".

Remember: LogQL: label matchers + pipelines. |= contains, != not, |~ regex.

2. How do you parse structured fields from logs in LogQL?

Answer: Use parser stages: | json for JSON logs, | logfmt for logfmt, or | pattern with a template. After parsing, you can filter on extracted fields, e.g., {app="nginx"} | json | status >= 400.

Remember: LogQL: label matchers + pipelines. |= contains, != not, |~ regex.
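As a sketch, the three parser stages applied to the same hypothetical nginx stream (the pattern template below assumes a common access-log layout and is illustrative):

```
{app="nginx"} | json | status >= 400
{app="nginx"} | logfmt | level = "error"
{app="nginx"} | pattern `<ip> - - <_> "<method> <path> <_>" <status> <_>` | status >= 400
```

Fields extracted by a parser (like status or path here) can then be used in label filters and in grouping clauses of metric queries.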

3. How do you derive metrics from logs using LogQL?

Answer: Use metric functions on log queries: count_over_time({app="nginx"} |= "error" [5m]) counts matching log lines, rate() computes per-second rate, and sum by (label) aggregates across streams.

Remember: LogQL: label matchers + pipelines. |= contains, != not, |~ regex.
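For example, combining the three pieces from the answer (stream labels are illustrative):

```
# Error lines per stream over the last 5 minutes
count_over_time({app="nginx"} |= "error" [5m])

# Per-second error rate, summed per namespace
sum by (namespace) (rate({app="nginx"} |= "error" [5m]))
```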

4. Why should you keep label cardinality low in Loki?

Answer: High-cardinality labels (e.g., request_id, user_id) create too many unique streams, which bloats Loki's index and degrades query performance. Use a small set of bounded labels like namespace, app, and environment.

Gotcha: High cardinality kills Loki. ~10-15 values per label max. No request IDs.
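A sketch of the anti-pattern versus the fix — the request ID value is a made-up placeholder:

```
# Bad: request_id as a label creates one stream per request
{app="nginx", request_id="abc123"}

# Good: keep labels bounded, filter on content instead
{app="nginx", namespace="production"} |= "abc123"
```

Content filters scan chunks at query time, so they cost nothing at ingest and do not inflate the index.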

🔴 Hard (3)

1. How would you find the top error paths from Nginx logs in the last hour using LogQL?

Answer: sum by (path) (count_over_time({app="nginx"} | json | status >= 500 [1h])) — this parses JSON, filters for 5xx status codes, counts over one hour, and groups by the path field.

Remember: LogQL: label matchers + pipelines. |= contains, != not, |~ regex.

2. How does Loki's storage architecture handle log retention?

Answer: Loki stores compressed log chunks in object storage (e.g., S3, GCS) and keeps a small separate index (historically BoltDB or DynamoDB; recent versions ship a TSDB index to object storage as well). Retention is configured per tenant, and the compactor deletes chunks older than the retention period.

Remember: Chunks in object storage, small index kept separately. The compactor enforces retention.
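A config fragment for compactor-based retention, assuming a recent Loki version — the directory and period are illustrative:

```yaml
compactor:
  working_directory: /loki/compactor
  retention_enabled: true       # compactor deletes chunks past retention

limits_config:
  retention_period: 744h        # ~31 days; can be overridden per tenant
```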

3. How do you create alert rules in Loki using the Loki Ruler?

Answer: Define LogQL metric queries in alert rule groups (YAML), in the same format as Prometheus alerting rules. Example: sum(rate({namespace="app"} |= "level=error" [5m])) > 1 with a for: duration and severity labels. The Loki Ruler evaluates the rules and sends firing alerts to Alertmanager.

Remember: Loki Ruler = Prometheus-style alert rules, but with LogQL metric queries as expressions.
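A ruler rules file in the Prometheus rule format, using the example query from the answer — group name, threshold, and durations are illustrative:

```yaml
groups:
  - name: app-errors
    rules:
      - alert: HighErrorRate
        expr: sum(rate({namespace="app"} |= "level=error" [5m])) > 1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: High error-log rate in namespace "app"
```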