Portal | Level: L2: Operations | Topics: Tempo | Domain: Observability

Runbook: Tempo Not Receiving Traces

Symptoms

  • Grafana Tempo shows no traces
  • Trace search returns empty
  • Application has OTel instrumentation but no data appears

Fast Triage

# Check Tempo pod
kubectl get pods -n monitoring -l app.kubernetes.io/name=tempo

# Check Tempo health
kubectl port-forward -n monitoring svc/tempo 3200:3200
curl http://localhost:3200/ready

# Check Tempo logs
kubectl logs -n monitoring -l app.kubernetes.io/name=tempo --tail=50

# Verify Grafana Tempo data source
kubectl port-forward -n monitoring svc/kube-prometheus-stack-grafana 3000:80
# Open http://localhost:3000 → Configuration → Data Sources → Tempo

Likely Causes (ranked)

[!NOTE] Tempo is receive-only — unlike Promtail (which actively pulls logs from nodes), Tempo passively waits for traces to be pushed via OTLP. If no application is instrumented with OpenTelemetry, Tempo will be healthy but permanently empty. "No traces" usually means the sender side is misconfigured, not Tempo.

  1. Application not instrumented — OTel SDK not configured (this repo has Tempo ready but app-side tracing is future work)
  2. Wrong OTLP endpoint — app sending to wrong host:port
  3. Tempo pod not running — crashed or evicted
  4. Grafana data source misconfigured — wrong URL for Tempo
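For cause 1, the sender side is normally wired up through the standard OpenTelemetry SDK environment variables. A minimal sketch, assuming the Tempo service and monitoring namespace used elsewhere in this runbook (`my-app` is a placeholder service name):

```shell
# Standard OTel SDK environment variables the app pod would need.
# "my-app" is a placeholder; the endpoint assumes the Tempo service
# from this runbook (tempo in the monitoring namespace).
export OTEL_SERVICE_NAME="my-app"
export OTEL_TRACES_EXPORTER="otlp"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://tempo.monitoring.svc.cluster.local:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"   # gRPC on 4317; use http/protobuf with port 4318
```

In Kubernetes these would live in the Deployment's `env:` block rather than a shell profile; the SDK reads them at process start.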

Evidence Interpretation

What bad looks like:

  • Tempo pod is Running and /ready returns 200, but Grafana Explore shows zero traces.
  • No errors in Tempo logs — because Tempo is waiting to receive traces, and nothing is sending them.
  • Unlike Promtail (which pulls logs), Tempo is purely receive-side. If the application is not instrumented with OpenTelemetry, Tempo will be healthy but empty.
  • Check whether the app has OTEL_EXPORTER_OTLP_ENDPOINT set and whether that endpoint is reachable from the app pod.
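The app-side check can be run directly. A sketch, assuming the app is a Deployment named `my-app` in the default namespace (both are placeholders for your workload):

```shell
# Confirm the app pod actually has an OTLP endpoint configured.
# "deploy/my-app" and the namespace are placeholders.
kubectl exec -n default deploy/my-app -- env | grep OTEL_EXPORTER_OTLP_ENDPOINT

# Confirm Tempo is reachable from inside the app pod
# (wget is common in minimal images; swap in curl if present).
kubectl exec -n default deploy/my-app -- \
  wget -q -O- --timeout=3 http://tempo.monitoring.svc.cluster.local:3200/ready
```

If the env var is missing, the app is not exporting traces at all; if the wget times out, suspect a NetworkPolicy or a wrong service name.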

Fix Steps

  1. Verify Tempo is running:
    kubectl get pods -n monitoring -l app.kubernetes.io/name=tempo
    
  2. Check the Grafana Tempo data source URL; it should be http://tempo:3200
  3. If Tempo is down, restore via Helm:
    helm upgrade tempo grafana/tempo -n monitoring -f devops/observability/values/values-tempo.yaml
    
  4. Note: Application-side OpenTelemetry instrumentation is a future addition per training/library/guides/observability.md.
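Because app-side instrumentation is still future work, you can prove the receive path end to end by hand-crafting a single span and pushing it over OTLP/HTTP. A sketch, assuming port 4318 is exposed on the Tempo service; the trace and span IDs below are arbitrary hex:

```shell
# Build a minimal OTLP/JSON payload containing one span.
NOW_NS=$(( $(date +%s) * 1000000000 ))
cat > /tmp/test-trace.json <<EOF
{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"runbook-smoke-test"}}]},"scopeSpans":[{"spans":[{"traceId":"5b8efff798038103d269b633813fc60c","spanId":"eee19b7ec3c1b174","name":"smoke-test-span","kind":1,"startTimeUnixNano":"$NOW_NS","endTimeUnixNano":"$((NOW_NS + 1000000))"}]}]}]}
EOF
python3 -m json.tool /tmp/test-trace.json > /dev/null && echo "payload OK"

# Push it to Tempo's OTLP/HTTP receiver (first run:
#   kubectl port-forward -n monitoring svc/tempo 4318:4318):
# curl -s -X POST http://localhost:4318/v1/traces \
#   -H "Content-Type: application/json" -d @/tmp/test-trace.json
```

If the span then appears in Grafana Explore under service.name=runbook-smoke-test, Tempo and the data source are fine and the remaining gap is app instrumentation.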

Verification

kubectl port-forward -n monitoring svc/tempo 3200:3200
curl http://localhost:3200/ready  # should return "ready"
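Readiness only proves the process is up. To confirm traces are actually queryable, hit Tempo's query API through the same port-forward:

```shell
# Recent traces, if any; an empty "traces" list means nothing has been ingested.
curl -s "http://localhost:3200/api/search?limit=5"

# Echo endpoint: confirms the query frontend responds at all.
curl -s http://localhost:3200/api/echo
```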

Cleanup

None needed.

Unknown Unknowns

  • The application must be instrumented with the OpenTelemetry SDK and configured to export traces. Tempo does not discover or pull traces from applications.
  • OTLP has two protocol variants: gRPC on port 4317 and HTTP on port 4318. Using the wrong port or protocol causes silent failure — no error in Tempo logs.
  • Tempo is receive-only; if no traces arrive, Tempo logs show nothing wrong. The problem is almost always on the sender side or the network path between app and Tempo.
  • Trace data in Tempo has a retention period. Old traces expire; "no traces" may mean you are searching outside the retention window.

Pitfalls

  • Assuming Tempo collects traces automatically — unlike Promtail for logs, Tempo is passive. The app must push traces via OTLP.
  • Wrong OTLP port — gRPC uses 4317, HTTP uses 4318. Mixing them up causes connection errors or silent drops.
  • Not checking the Grafana data source URL — if Grafana's Tempo data source points to the wrong service or port, queries return empty even when Tempo has data.
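The data source pitfall can be checked without clicking through the UI, via Grafana's data source API. A sketch, assuming the kube-prometheus-stack chart's default admin credentials (replace with your own):

```shell
# Requires: kubectl port-forward -n monitoring svc/kube-prometheus-stack-grafana 3000:80
# "admin:prom-operator" is the chart default; substitute your credentials.
curl -s -u admin:prom-operator http://localhost:3000/api/datasources \
  | python3 -c 'import json,sys; [print(d["name"], d["type"], d["url"]) for d in json.load(sys.stdin)]'
# The Tempo entry should show type "tempo" and url "http://tempo:3200".
```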

See Also

  • training/library/guides/observability.md (Tracing section)
  • training/interactive/incidents/scenarios/tempo-no-traces.sh
