
Wireshark / tshark / tcpdump - Street-Level Ops

Quick Diagnosis Commands

# List available interfaces
tcpdump -D
tshark -D

# Quick capture on interface (Ctrl+C to stop)
tcpdump -i eth0

# Capture to file for later analysis
tcpdump -i eth0 -w /tmp/capture.pcap

# Capture with capture filter (fast, kernel-level)
tcpdump -i eth0 -w /tmp/capture.pcap 'host 10.0.1.5 and port 443'
tcpdump -i eth0 -w /tmp/capture.pcap 'tcp port 8080'
tcpdump -i eth0 -w /tmp/capture.pcap 'net 10.0.0.0/8'

# Rotate files: ~100 MB per file (-C counts millions of bytes), keep 10 files
# (rotated files get numeric suffixes: cap.pcap0, cap.pcap1, ...)
tcpdump -i eth0 -w /tmp/cap.pcap -C 100 -W 10

# Capture all interfaces (useful when unsure which interface)
tcpdump -i any -w /tmp/capture.pcap

# Capture with timestamps in display
tcpdump -i eth0 -tttt 'port 53'

# Don't resolve DNS/ports (faster, clearer)
tcpdump -i eth0 -nn 'port 80'

> **Remember:** Always use `-nn` in production captures. Without it, tcpdump does reverse DNS lookups for every IP, which slows capture, generates DNS traffic that pollutes your capture, and can hang if DNS is the thing you are debugging.

# Show packet contents in hex+ASCII
tcpdump -i eth0 -XX 'port 25'

# Limit packet count
tcpdump -i eth0 -c 1000 -w /tmp/capture.pcap

# tshark: read a pcap and display
tshark -r /tmp/capture.pcap

# tshark: apply display filter when reading
tshark -r /tmp/capture.pcap -Y 'http.response.code == 500'

# tshark: extract specific fields as text
tshark -r /tmp/capture.pcap -T fields -e ip.src -e ip.dst -e tcp.dstport

# tshark: extract fields as JSON
tshark -r /tmp/capture.pcap -T json -Y 'dns' | jq '.[]._source.layers.dns'

# tshark: count packets per protocol
tshark -r /tmp/capture.pcap -qz io,phs

# tshark: conversation statistics (top talkers)
tshark -r /tmp/capture.pcap -qz conv,tcp

# tshark: decode HTTP/2 or gRPC traffic
tshark -r /tmp/capture.pcap -Y 'http2' -V

# tshark: follow TCP stream (text)
tshark -r /tmp/capture.pcap -q -z follow,tcp,ascii,0

# tshark: filter by time
tshark -r /tmp/capture.pcap -Y 'frame.time >= "2024-01-15 14:00:00"'

Common Scenarios

Scenario 1: Diagnosing TLS handshake failures

# Capture TLS traffic to file
tcpdump -i eth0 -w /tmp/tls.pcap 'tcp port 443'

# With tshark, look at the handshake (Wireshark/tshark 3.0+ uses the 'tls'
# filter prefix; on older versions substitute 'ssl')
tshark -r /tmp/tls.pcap -Y 'tls.handshake' -V | less

# Show only TLS alert records (these indicate failures)
tshark -r /tmp/tls.pcap -Y 'tls.alert_message'

# Extract alert level and description
tshark -r /tmp/tls.pcap -Y 'tls.alert_message' \
  -T fields -e ip.src -e ip.dst -e tls.alert_message.level -e tls.alert_message.desc

# Common TLS alert descriptions:
# 40 = handshake_failure (cipher mismatch, cert rejected)
# 42 = bad_certificate
# 44 = certificate_expired
# 48 = unknown_ca
# 70 = protocol_version
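The numeric codes above can be mapped to names on the fly. A small awk sketch (the lookup table and `annotate_alerts` name are illustrative; it assumes the four-column field order of the extraction command above):

```shell
# Annotate tshark alert output (src dst level desc) with readable names.
# Sketch only: extend the lookup table as needed; codes are from RFC 5246.
annotate_alerts() {
  awk 'BEGIN {
         n[40]="handshake_failure"; n[42]="bad_certificate";
         n[44]="certificate_expired"; n[48]="unknown_ca"; n[70]="protocol_version"
       }
       { name = ($4 in n) ? n[$4] : ("alert_" $4)
         print $1, "->", $2, name }'
}
# Usage: tshark -r /tmp/tls.pcap -Y 'tls.alert_message' -T fields \
#          -e ip.src -e ip.dst -e tls.alert_message.level \
#          -e tls.alert_message.desc | annotate_alerts
```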

> **Debug clue:** Alert 48 (`unknown_ca`) is the most common TLS failure in production: it means the client does not trust the server's CA. Either the server is missing an intermediate certificate in its chain, or the client's CA bundle is outdated. Check with: `openssl s_client -connect host:443 -showcerts`.

# To see the server certificate in tshark (TLS 1.2 and below; TLS 1.3
# encrypts certificates after the ServerHello):
tshark -r /tmp/tls.pcap -Y 'tls.handshake.type == 11' -V | grep -A5 'subject'

Scenario 2: Debugging HTTP/2 and gRPC traffic

# Capture gRPC traffic (default port 50051, or whatever port your service uses)
tcpdump -i eth0 -w /tmp/grpc.pcap 'tcp port 50051'

# Decode HTTP/2 frames
tshark -r /tmp/grpc.pcap -Y 'http2' -T fields \
  -e ip.src -e ip.dst -e http2.type -e http2.flags -e http2.streamid

# HTTP/2 frame types: 0=DATA, 1=HEADERS, 4=SETTINGS, 7=GOAWAY, 8=WINDOW_UPDATE
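Those type numbers can be tallied into a quick per-capture summary. A sketch (`h2_frames` is an illustrative name; it assumes http2.type is the third extracted column as in the command above, and note that tshark comma-joins multiple frames carried in one packet, which this sketch does not split):

```shell
# Count HTTP/2 frames by type, with names from RFC 7540.
h2_frames() {
  awk 'BEGIN { n[0]="DATA"; n[1]="HEADERS"; n[3]="RST_STREAM";
               n[4]="SETTINGS"; n[7]="GOAWAY"; n[8]="WINDOW_UPDATE" }
       { t = ($3 in n) ? n[$3] : ("type_" $3); count[t]++ }
       END { for (t in count) print count[t], t }' | sort -rn
}
# Usage: tshark -r /tmp/grpc.pcap -Y 'http2' -T fields \
#          -e ip.src -e ip.dst -e http2.type | h2_frames
```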

# Show gRPC method names (in HEADERS frames)
tshark -r /tmp/grpc.pcap -Y 'http2.header.name == ":path"' \
  -T fields -e ip.src -e http2.header.value

# Show gRPC status codes (grpc-status header)
tshark -r /tmp/grpc.pcap -Y 'http2.header.name == "grpc-status"' \
  -T fields -e ip.src -e ip.dst -e http2.header.value

# A GOAWAY frame means the connection is being closed by the other side
# Extract the error code from GOAWAY
tshark -r /tmp/grpc.pcap -Y 'http2.type == 7' -V | grep -E 'Error|Last Stream'

# If traffic is plaintext HTTP/2 (h2c), tshark may need a hint:
tshark -r /tmp/grpc.pcap -d 'tcp.port==50051,http2' -Y 'http2'

Scenario 3: Finding retransmissions and RST packets

# Find all TCP retransmissions (indicates packet loss)
tshark -r /tmp/capture.pcap -Y 'tcp.analysis.retransmission'

# Count retransmissions per source
tshark -r /tmp/capture.pcap -Y 'tcp.analysis.retransmission' \
  -T fields -e ip.src | sort | uniq -c | sort -rn

# Find RST packets (abrupt connection termination)
tshark -r /tmp/capture.pcap -Y 'tcp.flags.reset == 1' \
  -T fields -e frame.time -e ip.src -e ip.dst -e tcp.srcport -e tcp.dstport

# Find connections reset before any data was sent (relative seq 1; likely rejected)
tshark -r /tmp/capture.pcap -Y 'tcp.flags.reset == 1 and tcp.seq == 1'

# Find duplicate ACKs (precursor to retransmissions)
tshark -r /tmp/capture.pcap -Y 'tcp.analysis.duplicate_ack'

# Zero window events (receiver overwhelmed, sender blocked)
tshark -r /tmp/capture.pcap -Y 'tcp.analysis.zero_window'

> **Under the hood:** TCP retransmissions don't always mean packet loss on the wire. They can indicate the receiver's application is too slow to read from the socket buffer (causing zero-window stalls that look like retransmissions). Check `tcp.analysis.zero_window` alongside retransmissions to distinguish network problems from application-side backpressure.
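To eyeball that correlation, a sketch that tallies both event types per TCP stream (assumes tshark is on PATH; `tcp_health` is an illustrative name):

```shell
# Tally retransmissions and zero-window events per TCP stream. Streams that
# show both point at application backpressure; retransmits alone suggest
# loss on the wire. Pass the pcap path as $1.
tcp_health() {
  local pcap=$1
  for kind in retransmission zero_window; do
    echo "== tcp.analysis.$kind, events per stream =="
    tshark -r "$pcap" -Y "tcp.analysis.$kind" \
      -T fields -e tcp.stream | sort -n | uniq -c | sort -rn | head
  done
}
# Usage: tcp_health /tmp/capture.pcap
```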

# Expert-info summary (includes TCP analysis warnings like retransmits and resets)
tshark -r /tmp/capture.pcap -qz expert

Scenario 4: DNS debugging

# Capture only DNS
tcpdump -i eth0 -w /tmp/dns.pcap 'udp port 53 or tcp port 53'

# Show all DNS queries and responses
tshark -r /tmp/dns.pcap -Y 'dns' -T fields \
  -e frame.time -e ip.src -e ip.dst -e dns.qry.name -e dns.resp.name -e dns.a -e dns.flags.response

# Find DNS queries with no response (potential DNS outage)
tshark -r /tmp/dns.pcap -Y 'dns.flags.response == 0' \
  -T fields -e frame.time -e dns.qry.name | head -20
# Then check if matching responses exist
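One way to automate that check: pair queries and responses on the DNS transaction ID and print the unmatched ones. A bash sketch (process substitution is bash-only; `unanswered_dns` is an illustrative name, and dns.id can collide across clients, so treat hits as leads rather than proof):

```shell
# Print (transaction id, query name) pairs that appear as queries but never
# as responses in the capture. Pass the pcap path as $1.
unanswered_dns() {
  local pcap=$1
  comm -23 \
    <(tshark -r "$pcap" -Y 'dns.flags.response == 0' \
        -T fields -e dns.id -e dns.qry.name | sort -u) \
    <(tshark -r "$pcap" -Y 'dns.flags.response == 1' \
        -T fields -e dns.id -e dns.qry.name | sort -u)
}
# Usage: unanswered_dns /tmp/dns.pcap
```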

# Find NXDOMAIN responses (domain not found)
tshark -r /tmp/dns.pcap -Y 'dns.flags.rcode == 3' \
  -T fields -e frame.time -e ip.src -e dns.qry.name

# DNS response time (dns.time is the query-to-response latency in seconds)
tshark -r /tmp/dns.pcap -Y 'dns.flags.response == 1' \
  -T fields -e dns.qry.name -e dns.time

# Find queries for internal domains going to external resolvers
tshark -r /tmp/dns.pcap -Y 'dns.flags.response == 0 and not ip.dst == 10.0.0.0/8' \
  -T fields -e ip.dst -e dns.qry.name | grep '\.corp\|\.internal\|\.local'

Key Patterns

Capture filter vs display filter

Capture filters (tcpdump syntax, applied at capture time; use these to reduce file size):
  host 10.0.1.5           traffic to/from this IP
  net 10.0.0.0/8          entire subnet
  port 443                any traffic on port 443
  tcp port 8080           TCP on port 8080
  not port 22             exclude SSH
  'src host 1.2.3.4 and dst port 80'   combinators: and, or, not

Display filters (tshark/Wireshark, applied when reading; much richer):
  ip.addr == 10.0.1.5     either src or dst
  tcp.port == 8080         TCP on either port
  http.response.code >= 400   HTTP errors
  dns.qry.name contains "google"   DNS for google
  tcp.analysis.retransmission     retransmits
  tls.alert_message        TLS alerts ('ssl.' on Wireshark older than 3.0)
  http2.header.name == "grpc-status"   gRPC status trailer present

Rule: capture filters are BPF syntax; display filters are Wireshark filter language. They are NOT interchangeable.

> **Gotcha:** `host 10.0.1.5` is a valid capture filter but not a display filter. `ip.addr == 10.0.1.5` is a valid display filter but not a capture filter. Mixing them up is the most common Wireshark beginner mistake, and the error message is usually cryptic rather than helpful.

Remote capture into local Wireshark (no file staging)

# SSH + tcpdump piped directly to local Wireshark
ssh user@remote-host "sudo tcpdump -i eth0 -w - 'port 8080'" | wireshark -k -i -

# Or capture to file on remote, then copy:
ssh user@remote-host "sudo tcpdump -i eth0 -c 5000 -w /tmp/cap.pcap 'port 8080'"
scp user@remote-host:/tmp/cap.pcap /tmp/
wireshark /tmp/cap.pcap    # or: tshark -r /tmp/cap.pcap

Kubernetes pod traffic capture

# Get the node and container runtime info for a pod
kubectl get pod my-pod -o wide
# Note the node name

# Option 1: Use kubectl debug (if ephemeral containers are enabled)
kubectl debug -it my-pod --image=nicolaka/netshoot --target=my-container
# Inside: tcpdump -i eth0 -w /tmp/cap.pcap

# Option 2: Capture on the node using nsenter
# SSH to the node, find the pod's network namespace
POD_IP=$(kubectl get pod my-pod -o jsonpath='{.status.podIP}')
# Find the veth interface for this pod IP:
ip route | grep "$POD_IP"
# Capture on that interface:
tcpdump -i veth<xyz> -w /tmp/pod-cap.pcap
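Finding that veth can be scripted. A sketch that works with CNIs that install a per-pod /32 route on the node (Calico-style setups; bridge CNIs may need a different lookup, and `pod_veth` is an illustrative name):

```shell
# Resolve the host-side veth interface for a pod IP from the node's
# route table. Run on the node; pass the pod IP as $1.
pod_veth() {
  ip route | awk -v ip="$1" \
    '$1 == ip { for (i = 1; i < NF; i++) if ($i == "dev") print $(i+1) }'
}
# Usage: tcpdump -i "$(pod_veth "$POD_IP")" -w /tmp/pod-cap.pcap
```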

# Option 3: Use ksniff (kubectl plugin)
kubectl sniff my-pod -f "port 8080" -o /tmp/pod.pcap

# Copy the capture out of the pod (for Option 1; for node captures, use scp)
kubectl cp my-pod:/tmp/cap.pcap /tmp/cap.pcap

Follow TCP stream equivalent in tshark

# In Wireshark: Right-click a packet -> Follow -> TCP Stream
# In tshark equivalent:

# First, find the stream index
tshark -r /tmp/capture.pcap -Y 'tcp.port == 8080' \
  -T fields -e tcp.stream | sort -u
# Pick a stream number, say 5

# Follow that stream
tshark -r /tmp/capture.pcap -q -z follow,tcp,ascii,5

# For HTTP streams:
tshark -r /tmp/capture.pcap -q -z follow,http,ascii,5

# Output raw bytes:
tshark -r /tmp/capture.pcap -q -z follow,tcp,raw,5
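To dump every stream that matches a filter in one pass, a loop sketch combining the two steps above (`follow_all` is an illustrative name; assumes tshark is on PATH):

```shell
# Follow every TCP stream matching a display filter, one after another.
# $1 = pcap path, $2 = display filter selecting the streams of interest.
follow_all() {
  local pcap=$1 filter=$2
  for s in $(tshark -r "$pcap" -Y "$filter" -T fields -e tcp.stream | sort -un); do
    echo "===== stream $s ====="
    tshark -r "$pcap" -q -z "follow,tcp,ascii,$s"
  done
}
# Usage: follow_all /tmp/capture.pcap 'tcp.port == 8080'
```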