
API Gateway


10 cards — 🟢 3 easy | 🟡 4 medium | 🔴 3 hard

🟢 Easy (3)

1. What does an ingress controller do in Kubernetes?

An ingress controller is a dynamically configured reverse proxy that watches the Kubernetes API for Ingress or Gateway API resources, generates proxy configuration, routes incoming requests to the correct backend Service based on hostnames and paths, and handles TLS termination.

Remember: ingress controller = 'smart front door for your cluster.' It watches Kubernetes API for route changes and reconfigures its proxy automatically.

Example: nginx-ingress, Traefik, HAProxy Ingress, Kong Ingress, and Contour are popular implementations.
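As a sketch of the resource such a controller watches (the name, hostname, and backend Service are hypothetical), a minimal Ingress routing one host to a Service looks like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web                   # hypothetical name
spec:
  ingressClassName: nginx     # selects which controller handles this resource
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service # hypothetical backend Service
            port:
              number: 80
```

The controller sees this object appear via the Kubernetes API and regenerates its proxy configuration automatically; no restart or manual reload is needed.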

2. What are the three TLS modes for ingress and when do you use each?

Termination: TLS ends at ingress, HTTP to backend — most common and simplest. Re-encryption: TLS at ingress, new TLS connection to backend — when backends require TLS. Passthrough: ingress forwards raw TCP, backend handles TLS — for end-to-end encryption requirements.

Remember: TTP — Termination (most common), re-encryption (Transit TLS), Passthrough (end-to-end). Mnemonic: 'TLS Terminates, Transits, or Passes.'
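In ingress-nginx terms, the three modes map roughly to the fragments below (annotation names are ingress-nginx-specific; passthrough additionally requires the controller to run with its `--enable-ssl-passthrough` flag):

```yaml
# 1. Termination (default): TLS ends at the ingress, plain HTTP to the backend.
spec:
  tls:
  - hosts: [app.example.com]
    secretName: app-tls       # certificate + key stored in a Kubernetes Secret

# 2. Re-encryption: terminate, then open a new TLS connection to the backend.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"

# 3. Passthrough: forward raw TLS bytes; the backend terminates.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
```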

3. What is the difference between host-based and path-based routing in an ingress?

Host-based routing directs traffic based on the hostname (api.example.com -> api-service, web.example.com -> web-service). Path-based routing uses URL paths on the same host (example.com/api -> api-service, example.com/web -> web-service). Most production setups combine both.

Remember: host-based = virtual hosting (like Apache VirtualHost). Path-based = URL routing. Most setups combine both.
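Both styles can live in one Ingress (hostnames and Service names below are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: routing-demo
spec:
  ingressClassName: nginx
  rules:
  # Host-based: the Host header alone picks the backend.
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
  # Path-based: one host, two URL prefixes, two backends.
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```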

🟡 Medium (4)

1. How do you configure rate limiting at the ingress layer in nginx-ingress?

Use annotations: nginx.ingress.kubernetes.io/limit-rps for requests per second per IP, limit-rpm for requests per minute, limit-connections for concurrent connections per IP, and limit-whitelist to exempt internal IP ranges (e.g., 10.0.0.0/8). Rate limiting at the edge protects backends from overload and abuse.

Gotcha: annotation-based rate limiting is per-IP. Behind a NAT or proxy, all users share one IP — add X-Forwarded-For awareness or use a smarter gateway.
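A sketch of those annotations on an Ingress (the limits chosen are arbitrary examples):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"                # 10 requests/second per client IP
    nginx.ingress.kubernetes.io/limit-rpm: "300"               # 300 requests/minute per client IP
    nginx.ingress.kubernetes.io/limit-connections: "5"         # 5 concurrent connections per client IP
    nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8"  # exempt internal traffic
```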

2. How does external authentication work at the ingress layer?

The ingress controller forwards each request to an auth service URL before routing to the backend. If the auth service returns 200 OK, the request proceeds to the backend with auth response headers (e.g., X-User-ID, X-User-Email) injected. If it returns 401, the client gets a 401. This pushes authentication to the ingress so individual services don't each implement it.

Remember: pushing auth to the ingress layer (edge authentication) means individual services don't implement auth — reducing code duplication and security surface area.
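With ingress-nginx this is configured via the `auth-url` family of annotations (the auth service URL and sign-in page below are hypothetical):

```yaml
metadata:
  annotations:
    # Every request is first sent to this URL; a 2xx response lets it through.
    nginx.ingress.kubernetes.io/auth-url: "http://auth-service.auth.svc.cluster.local/verify"
    # Copy these headers from the auth response onto the upstream request.
    nginx.ingress.kubernetes.io/auth-response-headers: "X-User-ID,X-User-Email"
    # Optional: redirect unauthenticated browser clients to a login page.
    nginx.ingress.kubernetes.io/auth-signin: "https://login.example.com/signin"
```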

3. How does cert-manager automate TLS certificate management with ingress?

Create a ClusterIssuer pointing to Let's Encrypt with an ACME solver (http01 or dns01). Add the annotation cert-manager.io/cluster-issuer to your Ingress resource and specify a secretName under tls. cert-manager automatically provisions, renews, and stores the certificate in the specified Kubernetes secret. Set up once, stop manually managing certs.

Remember: cert-manager automates the entire TLS lifecycle: request, validate (HTTP-01 or DNS-01), issue, store, and renew. Set up once, forget about cert expiry.
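A sketch of the two pieces (the email address, hostnames, and Service name are hypothetical; solver field names can differ slightly between cert-manager versions):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com              # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-account-key     # ACME account key, managed by cert-manager
    solvers:
    - http01:
        ingress:
          class: nginx                  # solve challenges via a temporary Ingress
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts: [api.example.com]
    secretName: api-example-com-tls     # cert-manager creates and renews this Secret
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
```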

4. Why are annotation typos particularly dangerous in ingress controller configuration?

Ingress controllers use annotations for controller-specific features (timeouts, rate limiting, CORS, auth, body size, etc.). Typos in annotation names fail silently — the configuration is simply ignored without any error. This means a misspelled rate-limiting annotation provides zero protection, and you might not discover the gap until an incident occurs.

Remember: if an annotation seems to have no effect, check its spelling first. Unknown annotations are ignored rather than rejected, so verify that the generated proxy configuration actually contains the directives you expect.
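As an illustration (hostname and Service name are hypothetical), a one-character typo below silently disables rate limiting, and the Ingress is still admitted without any error:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    # Typo: should be "limit-rps". The controller ignores the unknown key,
    # the Ingress is accepted, and no rate limit is applied at all.
    nginx.ingress.kubernetes.io/limit-rsp: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
```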

🔴 Hard (3)

1. Compare Nginx Ingress, Traefik, Kong, and Envoy/Istio across key features.

Nginx Ingress: annotation-based config, basic per-IP rate limiting, low complexity; best for standard web apps. Traefik: CRDs + annotations, built-in web UI, low-medium complexity; best for dynamic services. Kong: CRDs + plugins, advanced Redis-backed rate limiting, JWT/OAuth/OIDC auth, medium complexity; best for API management. Envoy/Istio: CRDs with full service mesh, advanced traffic splitting, Kiali dashboard, high complexity; best for service mesh architectures.

Remember: the ingress layer is your cluster's front door. Every request passes through it — making it the ideal place for cross-cutting concerns: TLS, auth, rate limiting, and observability.

2. What advantages does the Gateway API have over the traditional Ingress resource?

Gateway API is the successor to Ingress with: role-oriented design (separate Gateway resource for infra teams from Routes for app teams), native traffic splitting for canary and blue-green deployments, header-based routing, request/response manipulation, and cross-namespace references. It provides these capabilities without relying on controller-specific annotations.

Remember: Gateway API = 'Ingress v2.' Role-oriented (infra team owns Gateway, app team owns Routes), with native canary and header-based routing — no annotation hacks.
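The role split looks roughly like this (namespaces, class name, hostnames, and Service names are hypothetical):

```yaml
# Infra team owns the Gateway: listener, TLS, and which namespaces may attach.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class   # provided by the installed controller
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      certificateRefs:
      - name: wildcard-example-com
    allowedRoutes:
      namespaces:
        from: All
---
# App team owns the route, including header-based matching -- no annotations.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api
  namespace: team-a
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  hostnames: ["api.example.com"]
  rules:
  - matches:
    - headers:
      - name: X-Beta
        value: "true"       # requests with this header go to the beta backend
    backendRefs:
    - name: api-beta
      port: 80
  - backendRefs:            # everything else goes to stable
    - name: api-stable
      port: 80
```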

3. How do you implement canary deployments using the Gateway API HTTPRoute resource?

Define an HTTPRoute with multiple backendRefs under the same rule, each with a weight. For example, api-v1 with weight 90 and api-v2 with weight 10 sends 10% of traffic to the canary. This is native to the Gateway API spec — no controller-specific annotations required. Adjust weights to gradually shift traffic as confidence in the new version increases.

Remember: canary at the ingress layer means traffic splitting without application changes. Adjust weights to control rollout speed. Combine with monitoring for automated rollback.
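The 90/10 split described above can be sketched as (Gateway name, hostname, and Service names are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-canary
spec:
  parentRefs:
  - name: shared-gateway    # hypothetical Gateway this route attaches to
  hostnames: ["api.example.com"]
  rules:
  - backendRefs:
    - name: api-v1
      port: 80
      weight: 90            # 90% of traffic stays on the stable version
    - name: api-v2
      port: 80
      weight: 10            # 10% goes to the canary
```

To roll forward, edit the weights (e.g., 50/50, then 0/100); to roll back, set the canary's weight to 0.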