# Kubernetes Networking Cheat Sheet

## Service Types
| Type | Scope | Use Case |
|---|---|---|
| ClusterIP | Internal only | Default. Service-to-service. |
| NodePort | NodeIP:30000-32767 | Dev, bare-metal |
| LoadBalancer | Cloud LB → pods | Production external |
| ExternalName | DNS CNAME | Alias to external service |
Under the hood: ClusterIP creates virtual IP + iptables/IPVS rules. NodePort = ClusterIP + port on every node. LoadBalancer = NodePort + cloud provider's LB pointing at the NodePorts. Each type builds on the previous one.
Gotcha: ExternalName does NOT proxy traffic — it returns a CNAME record. If the external service requires SNI or Host headers, ExternalName may not work as expected.
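The table in manifest form — a minimal ClusterIP Service sketch (the name, labels, and ports are illustrative, not from a real deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-svc            # hypothetical name
spec:
  type: ClusterIP          # the default; NodePort/LoadBalancer build on top of this
  selector:
    app: api               # must match pod labels, or Endpoints will be empty
  ports:
    - port: 8080           # Service (virtual IP) port
      targetPort: 8080     # container port the traffic is forwarded to
```

Changing only `type:` climbs the ladder described above: `NodePort` adds a node port, `LoadBalancer` adds a cloud LB in front of the node ports.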
## DNS Resolution

```
pod → /etc/resolv.conf → CoreDNS (commonly 10.96.0.10)
    → <svc>.<namespace>.svc.cluster.local
    → upstream DNS (for external names)
```

Short names are resolved via the search domains in the pod's resolv.conf: `backend` → `backend.<pod-namespace>.svc.cluster.local`. A fully qualified name like `backend.prod.svc.cluster.local` works across namespaces; note that with the default `ndots:5` even this form is tried against the search domains first — a trailing dot (`backend.prod.svc.cluster.local.`) skips the search list entirely.
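For reference, a pod's resolv.conf typically looks like this (values assume a kubeadm-style cluster with the default service CIDR; the namespace `prod` is hypothetical):

```
# /etc/resolv.conf inside a pod (illustrative defaults)
nameserver 10.96.0.10
search prod.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```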
```bash
# Debug DNS
kubectl exec pod -- nslookup backend-svc
kubectl exec pod -- cat /etc/resolv.conf
kubectl logs -n kube-system -l k8s-app=kube-dns
```
## NetworkPolicy

```yaml
# Default deny all
spec:
  podSelector: {}
  policyTypes: [Ingress, Egress]
```

```yaml
# Allow specific traffic
spec:
  podSelector: { matchLabels: { app: api } }
  ingress:
    - from:
        - podSelector: { matchLabels: { app: frontend } }
      ports: [{ port: 8080 }]
  egress:
    - to:
        - podSelector: { matchLabels: { app: db } }
      ports: [{ port: 5432 }]
    - ports: [{ port: 53, protocol: UDP }]  # DNS
```
Rules:

- No policy selecting a pod = all traffic allowed
- Any policy selecting a pod = default deny for that pod, for that policy type (Ingress/Egress)
- Policies are additive: the union of all matching policies applies
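The default-deny snippet above as a complete, applyable manifest (the name and namespace are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: prod              # hypothetical namespace
spec:
  podSelector: {}              # empty selector = every pod in the namespace
  policyTypes: [Ingress, Egress]
```

Apply this first, then layer allow policies on top; since policies are additive, each allow policy only needs to open the traffic it cares about.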
## Ingress

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts: [app.example.com]
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service: { name: api-svc, port: { number: 8080 } }
          - path: /
            pathType: Prefix
            backend:
              service: { name: frontend, port: { number: 80 } }
```
Remember the K8s networking debug ladder: Pod → Service → Endpoints → DNS → NetworkPolicy. If `curl` to a Service fails, check `kubectl get endpoints <svc>`: if the output is empty, the Service's label selector does not match any pod labels. This is the most common cause of "Service not working" in Kubernetes.
## Debugging Connectivity

```bash
# 1. Is the pod running?
kubectl get pod <name> -o wide

# 2. Does the Service have endpoints?
kubectl get endpoints <svc>
# Empty = label selector doesn't match

# 3. Can you reach it?
kubectl exec test-pod -- curl -v <svc>:<port>

# 4. Direct pod IP (bypass Service)
kubectl exec test-pod -- curl -v <pod-ip>:<port>

# 5. Is something blocking?
kubectl get networkpolicy -n <ns>

# 6. Is the app listening?
kubectl exec <pod> -- ss -tlnp
```
## CIDR Quick Reference

| CIDR | Total IPs |
|---|---|
| /32 | 1 |
| /28 | 16 |
| /24 | 256 |
| /20 | 4,096 |
| /16 | 65,536 |
| /8 | 16,777,216 |

Formula: 2^(32 − prefix) = total IPs
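The formula as a one-line shell function (a quick sketch using bash arithmetic; `cidr_size` is a made-up helper name):

```shell
# Total IPs in a CIDR block: 2^(32 - prefix)
cidr_size() { echo $(( 2 ** (32 - $1) )); }

cidr_size 24   # 256
cidr_size 20   # 4096
cidr_size 16   # 65536
```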
## Common Port Numbers
| Port | Service |
|---|---|
| 80 | HTTP |
| 443 | HTTPS |
| 5432 | PostgreSQL |
| 3306 | MySQL |
| 6379 | Redis |
| 9090 | Prometheus |
| 3000 | Grafana |
| 8080 | Common app port |
| 53 | DNS |
| 6443 | K8s API server |
| 10250 | Kubelet |
| 2379/2380 | etcd client/peer |
## Connection Errors Decoded
| Error | Meaning | Check |
|---|---|---|
| Connection refused | Port not listening | App running? Right port? |
| Connection timed out | Packets dropped | Firewall, NetworkPolicy, cloud security groups |
| Connection reset | Forcibly closed | Backend crash, overload |
| No route to host | Routing failure | Node network, CNI |
| Name resolution failed | DNS issue | CoreDNS, resolv.conf |
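A quick way to tell "refused" from "timed out" without curl, from any container with bash (port 59999 is an arbitrary port assumed to be closed on localhost):

```shell
# A closed port answers with a TCP RST, so the connect fails instantly
# ("Connection refused"). A firewall or NetworkPolicy that drops packets
# would hang until a timeout instead. /dev/tcp is a bash-only feature.
if ! (exec 3<>/dev/tcp/127.0.0.1/59999) 2>/dev/null; then
  echo "refused or unreachable"
fi
```

An instant failure points at the app (not listening, wrong port); a long hang points at something dropping packets along the path.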