
Portal | Level: L2: Operations | Topics: API Gateways & Ingress, Kubernetes Networking, Load Balancing | Domain: Kubernetes

API Gateways & Ingress - Primer

Why This Matters

Your Kubernetes cluster runs 50 services. Users and other systems need to reach them. You could expose each service with its own LoadBalancer (expensive, unmanageable) or NodePort (ugly, insecure). Instead, you put a single entry point in front — an ingress controller — that routes traffic to the right service based on hostnames and paths.

An API gateway takes this further: it adds authentication, rate limiting, request transformation, and observability at the edge, before traffic hits your services. This is the layer where you enforce policy, shed load, and terminate TLS — so your services don't have to.

If you've ever debugged a 502 from Nginx Ingress or wondered why your rate limits aren't working, this primer gives you the foundation.

Analogy: An ingress controller is the bouncer at a nightclub. It stands at the door (cluster edge), checks your ID (TLS termination, auth), decides which room you belong in (host/path routing), and enforces capacity limits (rate limiting). Without it, every service would need its own bouncer.


Ingress Controller Architecture

 External Traffic
 ┌─────▼──────────────────────────────────────┐
 │          Load Balancer (L4)                 │
 │       (cloud LB or MetalLB)                │
 └─────┬──────────────────────────────────────┘
 ┌─────▼──────────────────────────────────────┐
 │       Ingress Controller Pod(s)             │
 │    (nginx, traefik, kong, envoy, etc.)      │
 │                                              │
 │  ┌─────────────────────────────────────┐    │
 │  │  Watches Ingress/HTTPRoute objects  │    │
 │  │  Generates proxy config dynamically │    │
 │  │  Terminates TLS                     │    │
 │  │  Routes by host + path              │    │
 │  └─────────────────────────────────────┘    │
 └─────┬──────────┬──────────┬────────────────┘
       │          │          │
 ┌─────▼────┐ ┌──▼─────┐ ┌─▼────────┐
 │ Service A │ │Service B│ │Service C │
 │ (api)     │ │(web)    │ │(auth)    │
 └──────────┘ └────────┘ └──────────┘

The ingress controller is a reverse proxy that:

  1. Watches the Kubernetes API for Ingress (or Gateway API) resources
  2. Dynamically generates proxy configuration (nginx.conf, traefik rules, etc.)
  3. Routes incoming requests to the correct backend Service
  4. Handles TLS termination, rate limiting, authentication, and more


Ingress Resource Basics

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls-cert
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: api-v1
                port:
                  number: 8080
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: api-v2
                port:
                  number: 8080
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-frontend
                port:
                  number: 80

Gotcha: Annotations are stringly-typed and fail silently. A typo like nginx.ingress.kubernetes.io/rewrit-target (missing an 'e') is ignored without any error or warning. Use kubectl describe ingress and check for annotation validation warnings.

Path Types

PathType                 Matching Behavior                        Example
Exact                    Matches the exact path only              /api matches /api only
Prefix                   Matches path prefixes, split on "/"      /api matches /api, /api/v1 (but not /apiv2)
ImplementationSpecific   Matching is left to the IngressClass     Varies by controller
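
The difference matters when routes overlap. A sketch showing both path types on one host (the service names here are hypothetical):

```yaml
# Exact vs Prefix on the same host.
# /healthz matches only exactly; /api matches /api, /api/v1,
# /api/v1/users, etc. — but not /apiv2, since Prefix matching
# works on whole path elements.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pathtype-demo
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /healthz        # Exact: /healthz only, not /healthz/live
            pathType: Exact
            backend:
              service:
                name: health-svc
                port:
                  number: 8080
          - path: /api            # Prefix: /api and everything below it
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 8080
```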

Host-Based vs Path-Based Routing

 Host-based routing:
 ┌──────────────────────────────────┐
 │ api.example.com  →  api-service  │
 │ web.example.com  →  web-service  │
 │ admin.example.com → admin-svc    │
 └──────────────────────────────────┘

 Path-based routing:
 ┌──────────────────────────────────┐
 │ example.com/api  →  api-service  │
 │ example.com/web  →  web-service  │
 │ example.com/admin → admin-svc    │
 └──────────────────────────────────┘

 Combined (most common):
 ┌──────────────────────────────────────┐
 │ api.example.com/v1  →  api-v1-svc   │
 │ api.example.com/v2  →  api-v2-svc   │
 │ web.example.com/    →  web-svc      │
 └──────────────────────────────────────┘

TLS Termination

The ingress controller handles TLS so your services don't need to:

spec:
  tls:
    - hosts:
        - api.example.com
        - web.example.com
      secretName: wildcard-tls

The TLS secret contains the certificate and private key:

kubectl create secret tls wildcard-tls \
  --cert=fullchain.pem \
  --key=privkey.pem \
  -n production

TLS Modes

Mode            What Happens                                    Use Case
Termination     TLS ends at ingress, plain HTTP to backend      Most common, simplest
Re-encryption   TLS at ingress, new TLS to backend              When backends require TLS
Passthrough     Ingress forwards raw TCP, backend handles TLS   End-to-end encryption

# Passthrough (nginx-ingress). Note: this annotation only works
# if the controller was started with --enable-ssl-passthrough;
# otherwise it is silently ignored.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
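
For the re-encryption mode, nginx-ingress uses the backend-protocol annotation: TLS terminates at the ingress, and the proxy opens a new HTTPS connection to the backend. A minimal sketch:

```yaml
# Re-encryption: the backend pods must themselves serve TLS on
# the Service port for this to work.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
```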

Annotation-Driven Configuration

Ingress controllers use annotations for controller-specific features. This is where most of the power (and confusion) lives.

Nginx Ingress Common Annotations

metadata:
  annotations:
    # Timeouts — raise proxy-read/send-timeout (e.g. to "3600")
    # when proxying WebSockets or other long-lived connections.
    # Don't repeat an annotation key: a metadata map allows each
    # key only once, so a duplicate silently wins or is rejected.
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"

    # Body size
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"

    # Rate limiting
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/limit-connections: "5"

    # CORS
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://web.example.com"

    # Redirects
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"

    # Custom headers
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Request-ID: $req_id";

    # Authentication
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/verify"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/login"

Rate Limiting

Rate limiting at the ingress protects backends from overload:

# Nginx Ingress: limit by client IP
metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"        # 10 requests/second per IP
    nginx.ingress.kubernetes.io/limit-rpm: "300"        # 300 requests/minute per IP
    nginx.ingress.kubernetes.io/limit-connections: "5"  # 5 concurrent per IP
    nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8"  # Exempt internal
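
nginx-ingress also tolerates short bursts above the steady rate: the burst size is limit-rps multiplied by a burst multiplier (default 5). A sketch making that explicit:

```yaml
# With limit-rps: "10" and a burst multiplier of 5, bursts of up
# to 50 requests are queued instead of being rejected outright.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"
```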

Rate Limiting with Kong

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit
spec:
  plugin: rate-limiting
  config:
    minute: 100
    hour: 1000
    policy: redis
    redis_host: redis.default.svc
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    konghq.com/plugins: rate-limit

Rate Limiting with Traefik

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
spec:
  rateLimit:
    average: 50       # requests per second
    burst: 100        # burst capacity
    period: 1s
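
A Traefik Middleware does nothing until a route references it. On a standard Ingress the reference is an annotation of the form `<namespace>-<name>@kubernetescrd` — this sketch assumes the rate-limit Middleware above was created in the default namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    # "<namespace>-<middleware-name>@kubernetescrd"
    traefik.ingress.kubernetes.io/router.middlewares: default-rate-limit@kubernetescrd
```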

Authentication at the Edge

Push authentication to the ingress so services don't each implement it:

 Request → Ingress → Auth Service → Backend
                        ├── 200 OK → forward to backend
                        └── 401 → return 401 to client
# External auth (nginx-ingress)
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "http://auth-service.auth.svc.cluster.local:8080/verify"
    nginx.ingress.kubernetes.io/auth-response-headers: "X-User-ID,X-User-Email"
    nginx.ingress.kubernetes.io/auth-cache-key: "$remote_addr"
    nginx.ingress.kubernetes.io/auth-cache-duration: "200 202 30m, 401 1m"

Ingress Controller Comparison

Feature         Nginx Ingress       Traefik              Kong               Envoy/Istio
Config model    Annotations         CRDs + annotations   CRDs + plugins     CRDs (Istio)
Rate limiting   Basic (per-IP)      Moderate             Advanced (Redis)   Advanced (Envoy)
Auth plugins    External auth       ForwardAuth          JWT, OAuth, OIDC   Full (Istio)
Canary deploys  Annotation-based    Weighted services    Canary plugin      Traffic splitting
Dashboard       None (Prometheus)   Built-in web UI      Kong Manager       Kiali
Complexity      Low                 Low-Medium           Medium             High
Best for        Standard web apps   Dynamic services     API management     Service mesh

cert-manager Integration

Automate TLS certificate management with cert-manager:

# ClusterIssuer for Let's Encrypt
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx

---
# Ingress with automatic cert provisioning
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls-auto  # cert-manager creates this
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080

Gateway API — The Future

Timeline: Kubernetes Ingress was introduced in v1.1 (2015) and went GA in v1.19 (2020). The Gateway API started as a SIG-Network project in 2019, and its core resources (GatewayClass, Gateway, HTTPRoute) reached GA in v1.0.0 (October 2023). Both coexist today, but new routing features land only in the Gateway API.

The Gateway API is the successor to the Ingress resource, offering more expressiveness:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-routes
spec:
  parentRefs:
    - name: main-gateway
  hostnames:
    - "api.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v1
      backendRefs:
        - name: api-v1
          port: 8080
          weight: 90
        - name: api-v2
          port: 8080
          weight: 10    # canary: 10% to v2

Gateway API advantages over Ingress:

  - Role-oriented: separate Gateway (infra team) from Routes (app teams)
  - Native traffic splitting (canary, blue-green)
  - Header-based routing
  - Request/response manipulation
  - Cross-namespace references
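
The HTTPRoute above attaches to a Gateway named main-gateway that isn't shown. A minimal sketch of what that Gateway (typically owned by the infra team) might look like — the gatewayClassName and TLS secret name here are assumptions:

```yaml
# Sketch of the Gateway the HTTPRoute's parentRefs points at.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
spec:
  gatewayClassName: nginx          # assumed GatewayClass
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.example.com"
      tls:
        certificateRefs:
          - name: wildcard-tls     # assumed TLS secret
```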


Key Takeaways

  1. An ingress controller is a dynamically-configured reverse proxy running in your cluster.
  2. Annotations are how you configure controller-specific features — typos fail silently.
  3. Terminate TLS at the ingress unless you have a specific reason for passthrough.
  4. Rate limiting at the edge protects backends from overload and abuse.
  5. Push authentication to the ingress — don't make every service implement it.
  6. cert-manager automates TLS certificates. Set it up once, stop manually managing certs.
  7. Gateway API is replacing Ingress — learn it now, adopt when your controller supports it.

Debug clue: When a 502 comes from Nginx Ingress, check kubectl logs -n ingress-nginx <pod> for upstream connection errors. The most common cause is that the backend Service has no healthy endpoints (kubectl get endpoints <svc-name>).

