
Wireshark / tshark / tcpdump Footguns

Mistakes that cause outages or wasted hours.


1. Using display filter syntax as a capture filter — tcpdump silently captures everything or errors

You run tcpdump -i eth0 'ip.addr == 10.0.1.5' expecting to filter by IP. tcpdump uses BPF syntax, not Wireshark display filter syntax. The result is either an error or unfiltered capture depending on how the expression parses. You end up with a 2GB file when you needed 2MB. Fix: For tcpdump/capture filters: host 10.0.1.5, tcp port 443, net 10.0.0.0/8. Display filters (ip.addr, tcp.port, http.response.code) work only inside Wireshark or tshark's -Y flag. Keep the two syntaxes mentally separate — they are completely different languages.

Remember: Capture filters = BPF syntax (used by tcpdump, tshark -f). Display filters = Wireshark syntax (used by Wireshark GUI, tshark -Y). BPF: host, port, net, tcp, udp. Wireshark: ip.addr, tcp.port, http.request.method. They cannot be interchanged.


2. Capturing on loopback instead of the actual interface — missing real network traffic

You debug a service that's failing to connect to an external host. You run tcpdump -i lo because the app is local. Loopback only shows traffic destined for 127.0.0.1 — traffic to external IPs goes through eth0 or ens3. You see nothing and conclude the app isn't sending any requests. Fix: Use tcpdump -D to list all interfaces. Use tcpdump -i any to capture on all interfaces simultaneously and narrow down once you've confirmed which interface carries the traffic. Loopback is only correct for debugging services that communicate via localhost.


3. Capturing without -nn — DNS resolution inside tcpdump slows capture and corrupts timing

You run tcpdump without -nn (numeric hosts and ports). For every address it prints, tcpdump does a reverse DNS lookup and a port-to-service-name lookup. During a high-rate capture this causes capture drops, delayed output, and a misleading view of traffic. You also introduce network traffic (DNS queries) while trying to capture traffic. Fix: Always use tcpdump -nn in production captures; -n at minimum skips host resolution. Resolve names afterward by reading the pcap in Wireshark or with tshark's -N option (e.g. -N n for network-address resolution).

Gotcha: Without -nn, tcpdump issues a reverse DNS query for each new source and destination IP it prints. On a busy interface this adds real traffic to the network you are measuring — and if your capture matches DNS, your own lookups appear in the capture, distorting both volume and timing.


4. Trusting tshark's aggregate I/O statistics — they summarize the whole file, not recent traffic

You run tshark -qz io,phs on a long capture and treat the summary as "recent traffic". It isn't: tshark's aggregate statistics cover every packet in the file, from the first timestamp to the last — the same mental-model error as reading the first vmstat row, which is an average since boot, not current. If you're analyzing a long-running capture looking for a brief anomaly, the aggregate hides it. Fix: Use time-windowed display filters: tshark -r cap.pcap -Y 'frame.time >= "2024-01-15 14:02:00" and frame.time <= "2024-01-15 14:03:00"'. Use -qz io,stat,1 for per-second statistics to see where in the timeline the anomaly occurs.


5. Capturing TLS traffic and expecting to see plaintext — forgetting you need the key material

You capture HTTPS traffic expecting to see HTTP request paths, headers, and bodies. tshark shows only encrypted TLS application-data records — you cannot read the content. You spend time looking for a plaintext version or assuming the capture failed. Fix: To decrypt TLS in Wireshark/tshark you need key material: (a) for legacy RSA key exchange, the server's private key — configured under Edit -> Preferences -> Protocols -> TLS -> RSA Keys; (b) for TLS 1.3 or any ECDHE/PFS cipher suite (where the server key alone does not help), an SSLKEYLOGFILE written by the client — set SSLKEYLOGFILE=/tmp/keys.log before starting the client process, then point the dissector at it with tshark -o tls.keylog_file:/tmp/keys.log. Often the right approach is application-level logging or a TLS-terminating proxy instead of packet capture.


6. Not setting -w — losing the capture when the terminal closes

You run tcpdump interactively to diagnose an intermittent issue, watch packets scroll by, think you see something, then the SSH connection drops. The capture is gone. You wait for the issue to recur and start over. Fix: Always write to a file with tcpdump -i eth0 -w /tmp/capture.pcap. Read and filter the file separately: tshark -r /tmp/capture.pcap -Y 'your filter'. This separates capture from analysis, preserves evidence, and lets you apply different filters to the same capture. For long-running captures, rotate files: tcpdump -C 100 -W 20 -w /tmp/cap.pcap.


7. Pcap file too large to open — capturing without a size limit during a slow diagnosis

You set up a capture to catch an intermittent issue and walk away. The disk fills up, other services on the host start failing, or Wireshark runs out of memory trying to open a 10GB pcap — of which 9.5GB is irrelevant traffic. Fix: Always set a capture filter to reduce volume. Use -C 100 -W 10 (rotate through 10 files of roughly 100MB each) for long-running captures. Use -c <count> if you know roughly how many packets you need. Monitor capture file size: watch ls -lh /tmp/capture.pcap. You can also skip the large file entirely: tcpdump --immediate-mode -w - piped to tshark -r - processes packets as a stream without writing them to disk.


8. Using tshark -e without -T fields — getting the default summary instead of your columns

You run tshark -r cap.pcap -e ip.src -e ip.dst expecting clean column output. Without -T fields, the -e flags are ignored and tshark prints its default packet summary. You spend time wondering why the output looks wrong. Fix: Always pair -e with -T fields: tshark -r cap.pcap -T fields -e ip.src -e ip.dst -e tcp.dstport. Add -E header=y for column headers, -E separator=, for CSV output, -E quote=d for double-quoted fields. Note that tab is spelled /t in tshark's -E syntax, not \t. The full pattern: tshark -r cap.pcap -T fields -E header=y -E separator=/t -e field1 -e field2.


9. Missing packets on high-speed interfaces — no -B buffer size specified

You capture on a busy 10Gbps interface and later notice the capture has gaps. tcpdump printed "X packets dropped by kernel" at the end. The ring buffer was too small for the packet rate, so the kernel dropped bursts of packets before they could be written to the file. Fix: Increase the kernel ring buffer: tcpdump -i eth0 -B 65536 -w /tmp/cap.pcap. The default is around 2MB; 65536 sets it to 64MB. Also consider: write to fast storage (SSD or tmpfs), use a capture filter to reduce rate, or use dedicated capture tools (PF_RING, AF_PACKET with TPACKET_V3) for sustained high-rate capture.


10. Diagnosing "packet loss" from retransmission counts without checking RTT — conflating loss and latency

You see TCP retransmissions in a capture and conclude there's packet loss on the network. But retransmissions also occur when the sender's retransmission timeout expires before an ACK arrives — which can happen because of high latency rather than actual loss. You spend time on network hardware when the issue is cross-region latency. Fix: Check TCP RTT alongside retransmissions: tshark -r cap.pcap -Y 'tcp.analysis.ack_rtt' -T fields -e tcp.analysis.ack_rtt | sort -n | tail -20. If RTT is consistently high (>100ms) but loss indicators are rare, the problem is latency (routing, geographic distance, or a slow middlebox), not loss. Retransmissions with low RTT indicate actual packet loss. Use tshark -qz conv,tcp for a per-conversation summary.

Debug clue: Key tshark filters for loss vs. latency: tcp.analysis.retransmission (retransmits), tcp.analysis.duplicate_ack (receiver saw a gap or reordering), tcp.analysis.lost_segment (Wireshark noticed missing sequence numbers). High retransmits + low RTT = loss. High retransmits + high RTT = latency-driven timeouts. Bursts of duplicate ACKs = packets arriving out of order or dropped in flight.