Portal | Level: L1: Foundations | Topics: Kubernetes Services & Ingress, Kubernetes Networking | Domain: Kubernetes
Kubernetes Services & Ingress - Primer¶
Why This Matters¶
Pods are ephemeral. They get rescheduled, they crash, they scale up and down. You can't hardcode a pod's IP address because it'll change. Services provide stable networking — a fixed IP and DNS name that routes traffic to the right set of pods regardless of where they're running. Ingress sits on top of services to expose them to the outside world. If you don't understand this layer, you can't debug any production networking issue.
Service Types¶
ClusterIP (Default)¶
Creates a virtual IP address accessible only inside the cluster. This is the most common service type.
apiVersion: v1
kind: Service
metadata:
  name: api-server
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: api-server
  ports:
    - name: http
      port: 80         # Port the service listens on
      targetPort: 8000 # Port the container listens on
      protocol: TCP
Other pods reach this service at api-server.production.svc.cluster.local:80 or just api-server:80 from the same namespace.
NodePort¶
Exposes the service on every node's IP at a static port (default range: 30000-32767).
apiVersion: v1
kind: Service
metadata:
  name: api-server
spec:
  type: NodePort
  selector:
    app: api-server
  ports:
    - port: 80
      targetPort: 8000
      nodePort: 30080  # Optional — Kubernetes assigns one if omitted
Access from outside: http://<any-node-ip>:30080. NodePort also creates a ClusterIP — internal traffic still works normally.
LoadBalancer¶
Provisions an external load balancer from the cloud provider (AWS ELB/NLB, GCP LB, Azure LB). Builds on top of NodePort.
apiVersion: v1
kind: Service
metadata:
  name: api-server
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: api-server
  ports:
    - port: 80
      targetPort: 8000
# Check the assigned external IP
kubectl get svc api-server
# EXTERNAL-IP may show <pending> while the LB is provisioning
ExternalName¶
Maps a service to a DNS CNAME. No proxying. No selectors. Just DNS aliasing.
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
Pods that resolve external-db.default.svc.cluster.local get a CNAME to db.example.com. Useful for abstracting external dependencies behind an in-cluster service name.
Service Discovery¶
DNS (Primary)¶
CoreDNS (the cluster DNS) creates an A record for every service at <service>.<namespace>.svc.cluster.local. From within the same namespace, you can use the short name: api-server. From a different namespace, include the namespace: api-server.production.
# Test DNS resolution from inside a pod
kubectl exec -it debug-pod -- nslookup api-server.production.svc.cluster.local
# Full DNS record format:
# api-server.production.svc.cluster.local -> 10.96.45.12 (ClusterIP)
Environment Variables¶
Kubernetes also injects environment variables for each active service when a pod starts:
# Inside a pod, environment variables for a service named "api-server":
API_SERVER_SERVICE_HOST=10.96.45.12
API_SERVER_SERVICE_PORT=80
The catch: env vars are injected at pod creation time. If the service is created after the pod, the pod won't have the variables. DNS doesn't have this ordering problem. Prefer DNS.
Endpoints and EndpointSlices¶
When you create a Service with a selector, Kubernetes creates Endpoints (and EndpointSlices) that list the IP addresses of all pods matching the selector.
# See which pods a service routes to
kubectl get endpoints api-server -n production
kubectl get endpointslices -l kubernetes.io/service-name=api-server -n production
# Detailed view
kubectl describe endpoints api-server -n production
EndpointSlices replaced the older Endpoints resource for scalability. Each EndpointSlice holds up to 100 endpoints (configurable). For services with thousands of pods, this avoids a single massive Endpoints object.
If kubectl get endpoints shows <none>, the service selector doesn't match any pods — this is the #1 cause of services not routing traffic.
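The fix is to make the pod labels match the service's spec.selector exactly. As a minimal sketch (pod name and image are illustrative), a pod that would register as an endpoint for the api-server service above looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-server-debug
  namespace: production          # must be the same namespace as the Service
  labels:
    app: api-server              # must match the Service's spec.selector exactly
spec:
  containers:
    - name: app
      image: nginx               # placeholder image for illustration
      ports:
        - containerPort: 8000    # should line up with the Service's targetPort
```

Compare `kubectl get svc api-server -o yaml` (the selector) against `kubectl get pods --show-labels` when the endpoints list is empty.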
kube-proxy Modes¶
kube-proxy runs on every node and implements the service abstraction by programming network rules.
iptables Mode (Default)¶
kube-proxy creates iptables rules that DNAT (destination NAT) service IPs to pod IPs. Selection is random (equal probability).
Pros: Simple, well-tested. Cons: O(n) rule evaluation with many services, random backend selection (no least-connections), no connection draining.
IPVS Mode¶
Uses Linux IPVS (IP Virtual Server) for load balancing. Supports multiple algorithms: round-robin, least-connections, destination-hashing, source-hashing.
# Check kube-proxy mode
kubectl get configmap kube-proxy -n kube-system -o yaml | grep mode
# On a node, inspect IPVS rules
ipvsadm -Ln
Pros: O(1) lookup, multiple LB algorithms, better performance with thousands of services. Cons: Requires IPVS kernel modules, slightly more complex debugging.
Session Affinity¶
By default, each request from a client can hit any backend pod. Session affinity pins a client to the same pod.
Kubernetes implements this via the iptables "recent" module, keyed on client source IP. Cookie-based affinity is not supported at the Service layer; for that, use an ingress controller.
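On the Service spec, session affinity looks like the following sketch (the timeout shown is the Kubernetes default of 3 hours):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-server
spec:
  selector:
    app: api-server
  sessionAffinity: ClientIP       # pin each client source IP to one backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800       # affinity expires after 3 hours of inactivity
  ports:
    - port: 80
      targetPort: 8000
```

Note that source-IP affinity breaks down behind NATs or load balancers that SNAT traffic, since many clients then share one source IP.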
Headless Services¶
A headless service has clusterIP: None. Instead of getting a single virtual IP, DNS returns the individual pod IPs.
Under the hood: When you set clusterIP: None, kube-proxy creates no iptables or IPVS rules for this service. DNS is the only discovery mechanism. CoreDNS returns A records for each ready pod endpoint, and the client is responsible for choosing which one to connect to. This is why StatefulSet databases use headless services — the client needs to connect to a specific replica, not a random one.
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
    - port: 5432
# DNS returns multiple A records (one per pod)
kubectl exec -it debug-pod -- nslookup db-headless.default.svc.cluster.local
# Returns: 10.244.1.5, 10.244.2.8, 10.244.3.12
Headless services are essential for StatefulSets. Each StatefulSet pod gets a stable DNS name: <pod-name>.<headless-service>.<namespace>.svc.cluster.local.
# For a StatefulSet named "postgres" with headless service "db-headless":
# postgres-0.db-headless.default.svc.cluster.local
# postgres-1.db-headless.default.svc.cluster.local
External Traffic Policy¶
Controls how traffic from outside the cluster reaches pods.
Cluster (Default)¶
Traffic can be routed to pods on any node. kube-proxy on the receiving node does a second hop to the pod's node.
Pros: Even distribution across all pods. Cons: Extra network hop adds latency. Source IP is lost (SNAT'd to the node's IP).
Local¶
Traffic only goes to pods on the node that received it. If no pod is local, the traffic is dropped.
Pros: Preserves client source IP. No extra hop. Cons: Uneven distribution if pods aren't evenly spread across nodes. Nodes without pods get traffic and drop it (health check on the NodePort fixes this — the LB stops sending to unhealthy nodes).
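The policy is a single field on the Service spec. As a sketch: with Local, Kubernetes also allocates a healthCheckNodePort, which cloud load balancers probe so they stop sending traffic to nodes that have no ready pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-server
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve client source IP; no second hop
  selector:
    app: api-server
  ports:
    - port: 80
      targetPort: 8000
```

Check the allocated health check port with `kubectl get svc api-server -o yaml` (field spec.healthCheckNodePort).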
Internal Traffic Policy¶
Controls routing for traffic originating inside the cluster. Beta since Kubernetes 1.22 and stable (GA) since 1.26.
When set to Local, traffic from a pod only reaches service endpoints on the same node. Useful for node-local caches or DaemonSet-backed services where you want each pod to talk to the instance on its own node.
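A sketch of the DaemonSet-backed pattern (service and label names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-local-cache
spec:
  internalTrafficPolicy: Local   # in-cluster clients only reach endpoints on their own node
  selector:
    app: cache                   # assumed to be backed by a DaemonSet, one pod per node
  ports:
    - port: 6379
```

As with externalTrafficPolicy: Local, traffic is dropped if the client's node has no ready endpoint, so this only makes sense when every node runs one.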
Ingress¶
Ingress exposes HTTP/HTTPS routes from outside the cluster to services inside.
Ingress Resource¶
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls-secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: api-v1
                port:
                  number: 80
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: api-v2
                port:
                  number: 80
IngressClass¶
Tells Kubernetes which ingress controller should handle this Ingress resource. Multiple controllers can coexist in the same cluster.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
Path Types¶
| Type | Behavior |
|---|---|
| Prefix | Matches the URL path prefix. /api matches /api, /api/, /api/users. |
| Exact | Matches the URL path exactly. /api matches only /api, not /api/. |
| ImplementationSpecific | Matching depends on the ingress controller. Avoid unless you need controller-specific behavior. |
Popular Ingress Controllers¶
NGINX Ingress Controller (ingress-nginx) — the most widely deployed. Wraps nginx with a Kubernetes controller that auto-generates nginx config from Ingress resources. Feature-rich via annotations.
Traefik — built-in Let's Encrypt, automatic service discovery, middleware chains. Popular in smaller clusters and edge deployments.
HAProxy Ingress — high-performance, good for TCP/UDP workloads and fine-grained rate limiting.
AWS ALB Ingress Controller (now AWS Load Balancer Controller) — provisions native AWS Application Load Balancers per Ingress. In IP target mode there is no in-cluster proxy; traffic goes directly from the ALB to pod IPs.
Gateway API¶
Gateway API is the successor to Ingress. It's more expressive, role-oriented, and supports TCP/UDP/gRPC natively.
Timeline: The Gateway API reached GA (v1.0) in October 2023 after three years of development. It does not replace the Ingress resource — both coexist. However, new features (traffic splitting, header-based routing, cross-namespace references) are only being added to Gateway API, not Ingress.
Core Resources¶
GatewayClass — defines the controller implementation (like IngressClass).
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: cloud-gateway
spec:
  controllerName: example.com/gateway-controller
Gateway — instantiates infrastructure (load balancer, proxy). Managed by platform team.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gw
  namespace: infra
spec:
  gatewayClassName: cloud-gateway
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-tls
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              gateway-access: "true"
HTTPRoute — defines routing rules. Managed by app developers.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
  namespace: production
spec:
  parentRefs:
    - name: production-gw
      namespace: infra
  hostnames:
    - api.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v2
      backendRefs:
        - name: api-v2
          port: 80
          weight: 90
        - name: api-v2-canary
          port: 80
          weight: 10
Gateway API advantages over Ingress:
- Traffic splitting with weights (canary deployments built-in)
- Header-based routing
- Cross-namespace references with explicit permissions
- TCP/UDP/gRPC routes as first-class resources
- Role-oriented: infrastructure vs application concerns are separated
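Header-based routing is expressed directly in an HTTPRoute match. As a sketch (the header name and backend are illustrative), requests carrying a specific header can be steered to a canary backend:

```yaml
# Fragment of an HTTPRoute spec: only requests with x-canary: true hit the canary
rules:
  - matches:
      - path:
          type: PathPrefix
          value: /v2
        headers:
          - name: x-canary      # illustrative header name
            type: Exact
            value: "true"
    backendRefs:
      - name: api-v2-canary
        port: 80
```

With plain Ingress, the same behavior requires controller-specific annotations; in Gateway API it is part of the portable spec.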
Network Policies¶
NetworkPolicies are firewall rules for pods. By default, all pods can talk to all other pods (flat network). Network policies restrict this.
Important: NetworkPolicies require a CNI that supports them (Calico, Cilium, Weave Net). The default kubenet and some basic CNIs do not enforce network policies — you create the resource but it has no effect.
Default Deny All Ingress¶
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}  # Empty = all pods in namespace
  policyTypes:
    - Ingress
  # No ingress rules = deny all ingress
Allow Specific Traffic¶
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-from-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
        - namespaceSelector:
            matchLabels:
              env: production
      ports:
        - protocol: TCP
          port: 8000
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
    - to:  # Allow DNS
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
Critical: If you set a default-deny policy and forget to allow DNS (port 53), every pod in the namespace loses DNS resolution. Always allow egress to port 53 when using network policies.
Network Policy Logic¶
- Multiple from entries in the same ingress rule are OR'd
- Multiple keys within a single from entry are AND'd
- The same applies for egress / to
ingress:
  - from:
      # These two are AND'd: pod must match label AND be in matching namespace
      - podSelector:
          matchLabels:
            app: frontend
        namespaceSelector:
          matchLabels:
            env: production
  - from:
      # This is a separate OR'd rule
      - podSelector:
          matchLabels:
            app: monitoring
DNS in Kubernetes¶
CoreDNS¶
CoreDNS is the cluster DNS server. It runs as a Deployment in kube-system and is exposed via a ClusterIP service (typically 10.96.0.10).
# Check CoreDNS pods
kubectl get pods -n kube-system -l k8s-app=kube-dns
# Check CoreDNS config
kubectl get configmap coredns -n kube-system -o yaml
Service FQDN Format¶
<service>.<namespace>.svc.<cluster-domain>
# Examples:
api-server.production.svc.cluster.local
postgres.default.svc.cluster.local
For headless services with StatefulSets:
<pod-name>.<service>.<namespace>.svc.<cluster-domain>
# Example:
postgres-0.db-headless.default.svc.cluster.local
ndots and Search Domains¶
Every pod gets a /etc/resolv.conf like:
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
ndots:5 means: if a name has fewer than 5 dots, try appending each search domain before querying it as-is. So resolving api.example.com (2 dots, < 5) causes 4 DNS queries:
1. api.example.com.default.svc.cluster.local (NXDOMAIN)
2. api.example.com.svc.cluster.local (NXDOMAIN)
3. api.example.com.cluster.local (NXDOMAIN)
4. api.example.com. (success)
This 4x DNS amplification is significant at scale. For external domains, use trailing dots (api.example.com.) or lower ndots via the pod's dnsConfig.
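Lowering ndots is a per-pod setting. A minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: external-client
spec:
  dnsConfig:
    options:
      - name: ndots
        value: "2"      # names with 2 or more dots are tried as-is first
  containers:
    - name: app
      image: nginx      # placeholder image
```

With ndots:2, api.example.com (2 dots) goes straight to the upstream resolver, while short in-cluster names like api-server still use the search domains.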
DNS Policies¶
| Policy | Behavior |
|---|---|
| ClusterFirst (default) | Use CoreDNS for cluster names, fall through to upstream for external |
| Default | Inherit DNS config from the node |
| ClusterFirstWithHostNet | Like ClusterFirst but for pods using hostNetwork |
| None | No auto-configured DNS. You must provide dnsConfig |
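With dnsPolicy: None you must supply the full DNS configuration yourself via dnsConfig. A sketch (resolver IP and search domain are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns
spec:
  dnsPolicy: "None"          # disable auto-configured DNS entirely
  dnsConfig:
    nameservers:
      - 1.1.1.1              # illustrative upstream resolver
    searches:
      - example.com          # illustrative search domain
    options:
      - name: ndots
        value: "1"
  containers:
    - name: app
      image: nginx           # placeholder image
```

Note that a pod configured this way cannot resolve cluster service names unless you point a nameserver back at CoreDNS.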
Quick Reference¶
| Concept | Key Point |
|---|---|
| ClusterIP | Internal only, stable VIP, DNS-discoverable |
| NodePort | External access via high port on every node |
| LoadBalancer | Cloud LB provisioned automatically, builds on NodePort |
| ExternalName | DNS CNAME alias, no proxying |
| Headless (clusterIP: None) | DNS returns pod IPs directly, essential for StatefulSets |
| externalTrafficPolicy: Local | Preserves source IP, requires pods on receiving nodes |
| Ingress | HTTP/HTTPS routing with host/path rules, needs a controller |
| Gateway API | Next-gen ingress, role-based, supports traffic splitting |
| NetworkPolicy | Pod-level firewall, requires compatible CNI |
| ndots:5 | Default causes extra DNS lookups for external names |
Wiki Navigation¶
Prerequisites¶
- Kubernetes Ops (Production) (Topic Pack, L2)
- Networking Deep Dive (Topic Pack, L1)
Related Content¶
- Runbook: Ingress 502 Bad Gateway (Runbook, L2) — Kubernetes Networking, Kubernetes Services & Ingress
- API Gateways & Ingress (Topic Pack, L2) — Kubernetes Networking
- Case Study: CNI Broken After Restart (Case Study, L2) — Kubernetes Networking
- Case Study: Canary Deploy Routing to Wrong Backend — Ingress Misconfigured (Case Study, L2) — Kubernetes Networking
- Case Study: CoreDNS Timeout Pod DNS (Case Study, L2) — Kubernetes Networking
- Case Study: Grafana Dashboard Empty — Prometheus Blocked by NetworkPolicy (Case Study, L2) — Kubernetes Networking
- Case Study: Service Mesh 503s — Envoy Misconfigured, RBAC Policy (Case Study, L2) — Kubernetes Networking
- Case Study: Service No Endpoints (Case Study, L1) — Kubernetes Networking
- Cilium & eBPF Networking (Topic Pack, L2) — Kubernetes Networking
- Deep Dive: Kubernetes Networking (deep_dive, L2) — Kubernetes Networking
Pages that link here¶
- API Gateways & Ingress
- Anti-Primer: Kubernetes Services And Ingress
- Certification Prep: CKA — Certified Kubernetes Administrator
- Certification Prep: CKAD — Certified Kubernetes Application Developer
- Certification Prep: CKS — Certified Kubernetes Security Specialist
- Cilium
- Comparison: Ingress Controllers
- Incident Replay: Service Has No Endpoints
- Kubernetes Networking
- Kubernetes Services & Ingress
- Production Readiness Review: Answer Key
- Production Readiness Review: Study Plans
- Runbook: Ingress 502 Bad Gateway
- Scenario: Ingress Returns 404 Intermittently
- Symptoms