Connection Refused

Tags: lesson, networking, firewalls, dns, processes, containers, kubernetes-services, systemd

Topics: networking, firewalls, DNS, processes, containers, Kubernetes services, systemd
Level: L1–L2 (Foundations → Operations)
Time: 60–90 minutes
Prerequisites: None (everything is explained from scratch)
The Mission¶
$ curl http://app.example.com:8080/health
curl: (7) Failed to connect to app.example.com port 8080: Connection refused
"Connection refused" is the most common error in all of DevOps. You'll see it from curl,
ssh, psql, redis-cli, mysql, telnet, nc, web browsers, and every application
that talks to a network.
The error looks simple. It isn't. "Connection refused" has at least seven completely different root causes across different layers of the stack, and the fix for each one is different. Guessing the wrong layer wastes hours.
This lesson teaches you to diagnose "connection refused" systematically, layer by layer. You'll learn to ask the right questions in the right order — so instead of trying random things for an hour, you isolate the problem in minutes.
What "Connection Refused" Actually Means¶
At the TCP level, "connection refused" means one specific thing: the client sent a SYN packet, and the server (or something in front of it) replied with a RST (reset) packet.
Client → Server: SYN "I want to connect to port 8080"
Server → Client: RST "Nothing is listening on port 8080"
This is different from other failures:
| What you see | What happened at TCP level | Meaning |
|---|---|---|
| Connection refused | Got RST back | Port exists but nothing is listening, or firewall actively rejects |
| Connection timed out | No response at all | Packet was dropped (firewall, wrong IP, network issue) |
| No route to host | Got ICMP Unreachable | Network layer can't reach the destination at all |
| Name resolution failed | DNS didn't return an IP | DNS is broken or the hostname doesn't exist |
Under the Hood: When a SYN arrives at a Linux host and no process has called `bind()` and `listen()` on that port, the kernel itself generates the RST — no application is involved. The kernel is saying "I received your mail, but nobody lives at this address." This is important: the RST is proof that the host is reachable and the network is fine. The problem is always at the host level or above.

Mental Model: Think of "connection refused" as a closed door with a sign saying "nobody home." "Connection timed out" is a letter that never got delivered — it could be a wrong address, a roadblock, or a black hole. Refused is actually a better error than timeout, because it tells you the host is alive and reachable. You just need to figure out why the door is closed.
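You can reproduce the instant RST on any Linux box. This sketch assumes nothing is listening on the chosen port (port 1 here; any free port behaves the same):

```shell
# A connection attempt to a port with no listener fails instantly:
# the kernel answers the SYN with a RST before any application is involved.
# Port 1 on loopback is assumed to be unused.
bash -c 'exec 3<>/dev/tcp/127.0.0.1/1' 2>&1 | grep -o 'Connection refused' | head -n1
# → Connection refused   (returned immediately, no timeout)
```

Note how fast the error comes back: no waiting, no retry. That speed is itself diagnostic evidence that the host answered.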
The Diagnostic Ladder¶
Here's the systematic approach. Work through these layers in order — each one rules out a class of problems:
Layer 1: Is DNS resolving correctly? → dig / nslookup
Layer 2: Is the host reachable? → ping / traceroute
Layer 3: Is the port open? → ss / netstat
Layer 4: Is the process running? → ps / systemctl
Layer 5: Is it bound to the right address? → ss -tlnp
Layer 6: Is a firewall blocking? → iptables / nftables
Layer 7: Is it a container/k8s issue? → docker / kubectl
Let's work through each one with a real scenario.
Layer 1: Is DNS Resolving Correctly?¶
Before anything else — are you even talking to the right server?
# What IP does the hostname resolve to?
dig +short app.example.com
# → 203.0.113.50
# Is that the IP you expect?
# If this returns nothing, DNS is broken.
# If this returns the wrong IP, you're talking to the wrong server.
This catches:
- Stale DNS cache — you migrated the service to a new IP, but the old record is cached
- Wrong DNS record — someone pointed `app.example.com` to the load balancer, but the load balancer doesn't know about port 8080
- Split-horizon DNS — the hostname resolves to a different IP depending on where you're querying from (inside vs. outside the network)
War Story: A team migrated their database from `10.0.1.5` to `10.0.2.20` and updated the DNS record. But the application server had a local DNS cache with a 24-hour TTL. For the next 24 hours, the app connected to the old IP — where nothing was listening. Error: "connection refused." The network was fine, the new server was running perfectly, and the DNS record was correct. The problem was a stale cache on the client.
# Compare what different DNS servers return
dig @8.8.8.8 app.example.com # Google's resolver
dig @10.0.0.2 app.example.com # Your internal resolver
# Check /etc/hosts (this overrides DNS!)
grep app.example.com /etc/hosts
# Flush DNS cache on the client
# systemd-resolved:
sudo resolvectl flush-caches
# nscd:
sudo nscd -i hosts
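One subtlety worth checking: `dig` queries DNS directly, but applications resolve names through the system resolver (NSS), which consults `/etc/hosts` and local caches first. `getent hosts` follows the same path an application would, so comparing the two exposes overrides and stale caches. A small sketch:

```shell
# getent resolves through /etc/nsswitch.conf, /etc/hosts and nscd,
# exactly like applications do; dig talks straight to a DNS server.
getent hosts localhost
# Compare with DNS alone (may differ if /etc/hosts overrides the name):
# dig +short app.example.com
```

If `getent` and `dig` disagree for a hostname, the client is not using the answer you see in `dig`, and that explains the "wrong IP" class of failures.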
If DNS is fine, you know you're talking to the right IP. Move to Layer 2.
Layer 2: Is the Host Reachable?¶
Can you reach the server at all?
# Basic reachability
ping -c 3 203.0.113.50
# Trace the route (see where it stops)
traceroute -n 203.0.113.50
# or the better tool:
mtr -n 203.0.113.50
If ping works, the host is reachable at the network level. If it doesn't, you have a routing problem, not an application problem.
Gotcha: Many production servers block ICMP (ping) for "security." This means a failed ping doesn't prove the host is unreachable — it might just be ignoring your pings. If ping fails, try connecting to a port you know is open: `nc -zv 203.0.113.50 22` (SSH). If SSH works but your target port doesn't, the host is reachable and the problem is specific to that port.
# Quick TCP connectivity test (doesn't need an HTTP client)
nc -zv 203.0.113.50 8080
# → Connection to 203.0.113.50 8080 port [tcp/*] succeeded!
# or
# → nc: connect to 203.0.113.50 port 8080 (tcp) failed: Connection refused
If the host is reachable but the port is refused, the network is fine. Move to the host.
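The refused-vs-timed-out distinction from the table above can be scripted. A sketch using bash's built-in `/dev/tcp` (the `check_port` helper name is ours, not a standard tool):

```shell
# Classify a TCP connect by how it fails: an instant error means RST
# (refused); hitting the timeout means packets are being dropped.
check_port() {
  local host=$1 port=$2 rc
  timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
  rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "open"
  elif [ "$rc" -eq 124 ]; then
    echo "filtered: no reply before timeout (DROP rule or unreachable)"
  else
    echo "refused: host answered with RST (no listener or REJECT rule)"
  fi
}

check_port 127.0.0.1 1   # port 1 assumed free, so this should report "refused"
```

The three outcomes map directly onto the next steps: "open" means look above TCP, "refused" means look at the host, "filtered" means look at the network or firewall.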
Layer 3: Is the Port Open?¶
SSH into the server (or use whatever access you have) and check if anything is listening on port 8080:
ss -tlnp
This is the single most important debugging command for "connection refused." The output tells you everything:
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1234,...))
LISTEN 0 128 127.0.0.1:8080 0.0.0.0:* users:(("myapp",pid=5678,...))
Let's decode the flags:
| Flag | Meaning |
|---|---|
| `-t` | TCP only (not UDP) |
| `-l` | Listening sockets only |
| `-n` | Show port numbers, not service names |
| `-p` | Show the process that owns the socket |
Look at the output above carefully. See the problem?
127.0.0.1:8080 — the app is listening on localhost only. It accepts connections from
the local machine, but remote connections are refused by the kernel because they arrive on
a different IP (the server's external address).
This is root cause #1 — and probably the most common.
Layer 4: The Process Isn't Running¶
What if ss -tlnp shows nothing on port 8080? Then nothing is listening. Check if the
process is running at all:
# Is the process running?
pgrep -a myapp
# → (nothing) ← it crashed or never started
# If it's a systemd service, check its status
systemctl status myapp
# → Active: failed (Result: exit-code)
# Process: 5678 ExecStart=/opt/myapp/bin/server (code=exited, status=1/FAILURE)
# What happened? Check the logs
journalctl -u myapp -n 50
Common reasons the process isn't running:
| Symptom in journal | Root cause |
|---|---|
| `Address already in use` | Another process grabbed the port first |
| `Permission denied` | Non-root process trying to bind to port < 1024 |
| `No such file or directory` | Wrong path in ExecStart, or missing binary/config |
| `Out of memory` | OOM killer struck (check `dmesg`) |
| No log output at all | Crashed before logging started — check ExecStart command manually |
# Who's on the port? (when "Address already in use")
ss -tlnp | grep :8080
# or
lsof -i :8080
# Check if OOM killer struck
dmesg | grep -i "oom\|killed"
Gotcha: `Permission denied` on ports below 1024 is a Linux security boundary. Ports 1–1023 are "privileged" and require root (or the `CAP_NET_BIND_SERVICE` capability). If your app needs port 80, either run it behind a reverse proxy on port 8080, use `setcap 'cap_net_bind_service=+ep' /usr/bin/myapp`, or use systemd's socket activation to bind the port as root and hand it to the unprivileged process.
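You can observe this boundary directly, without a systemd unit. A sketch (whether it prints "bound" or "Permission denied" depends on your privileges and the `net.ipv4.ip_unprivileged_port_start` sysctl, which some containers lower):

```shell
# Try to bind port 80 as the current user. Without root or
# CAP_NET_BIND_SERVICE this raises the same "Permission denied"
# you would chase in journalctl.
python3 - <<'EOF'
import socket
s = socket.socket()
try:
    s.bind(("127.0.0.1", 80))
    print("bound: running with privileges (root or CAP_NET_BIND_SERVICE)")
except PermissionError:
    print("Permission denied: need root, setcap, or a port >= 1024")
finally:
    s.close()
EOF
```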
Layer 5: Bound to the Wrong Address¶
This is the most common "connection refused" root cause that experienced engineers still miss. The process is running, the port is open, but it's listening on the wrong IP.
| What you see | What it means |
|---|---|
| `0.0.0.0:8080` | Listening on ALL interfaces — accepts connections from anywhere |
| `*:8080` | Same as `0.0.0.0:8080` |
| `127.0.0.1:8080` | Listening on localhost ONLY — rejects remote connections |
| `[::1]:8080` | IPv6 localhost only |
| `[::]:8080` | All IPv6 addresses (and usually IPv4 via dual-stack) |
| `10.0.1.5:8080` | Listening on one specific interface only |
127.0.0.1 is the silent killer. The process looks healthy, the port is open, logs
show it's ready — but it refuses all remote connections because it's only listening on
the loopback interface.
Every framework defaults differently:
| Framework/Tool | Default bind | How to fix |
|---|---|---|
| Flask | `127.0.0.1:5000` | `flask run --host=0.0.0.0` |
| Django `runserver` | `127.0.0.1:8000` | `python manage.py runserver 0.0.0.0:8000` |
| FastAPI / Uvicorn | `127.0.0.1:8000` | `uvicorn app:app --host 0.0.0.0` |
| Node.js Express | All interfaces if `app.listen(3000)` | `app.listen(3000, '0.0.0.0')` to be explicit |
| Go net/http | Depends on `ListenAndServe` arg | `http.ListenAndServe(":8080", handler)` |
| Redis | `127.0.0.1:6379` | `bind 0.0.0.0` in redis.conf |
| PostgreSQL | `127.0.0.1:5432` | `listen_addresses = '*'` in postgresql.conf |
| MySQL | `127.0.0.1:3306` | `bind-address = 0.0.0.0` in my.cnf |
Mental Model: Think of `0.0.0.0` as "accept calls on all phone lines" and `127.0.0.1` as "accept calls only from the phone on my desk." Most services default to desk-only for security — you have to explicitly tell them to answer all lines.

Gotcha: In a Docker container, binding to `127.0.0.1` means the service is only accessible from inside the container itself — even `docker run -p 8080:8080` won't work, because the port-forward delivers traffic to the container's external interface, not loopback. This is the #1 reason "it works locally but not in Docker."
Layer 6: Firewall Is Blocking¶
The process is running, listening on 0.0.0.0:8080, but remote connections still get
refused. Check the firewall.
# Check iptables rules
sudo iptables -L -n -v
# Specifically, check the INPUT chain
sudo iptables -L INPUT -n -v --line-numbers
# Check for REJECT rules (these cause "connection refused")
# DROP rules cause "connection timed out" instead
sudo iptables -L -n | grep -i "reject\|drop"
The difference between DROP and REJECT matters:
| Firewall action | Client sees | When to use |
|---|---|---|
| REJECT | "Connection refused" (RST sent back) | Internal services — gives the client useful feedback |
| DROP | "Connection timed out" (no response) | Public-facing — don't tell attackers the host exists |
# Check nftables (modern replacement for iptables)
sudo nft list ruleset
# Check firewalld (if you're on RHEL/CentOS)
sudo firewall-cmd --list-all
# Check ufw (Ubuntu)
sudo ufw status verbose
War Story: A hospital network engineer added a rule to block outbound SSH (port 22) to restrict which servers staff could SSH to. The rule was broader than intended — it blocked port 22 entirely, including his own management session. The firewall had HA (high availability), so the rule synced to the secondary firewall within seconds. Both firewalls now rejected SSH. 40-minute lockout requiring physical console access. The fix was a policy: every firewall change must have an auto-revert timer (`at now + 10 min` to revert the rule), and an out-of-band management path (cellular modem on the console port).
Cloud security groups¶
If you're on AWS, GCP, or Azure, there's a firewall outside the host that you can't see
from iptables:
# AWS: Check security group rules
aws ec2 describe-security-groups --group-ids sg-xxxxx
# The security group might allow port 22 (SSH) but not 8080
# This is invisible from inside the instance
| Layer | Tool to check | Notes |
|---|---|---|
| Host firewall | `iptables -L -n` / `nft list ruleset` | On the server |
| Cloud security group | AWS console / `aws ec2 describe-security-groups` | Outside the server |
| Network ACL | AWS console / `aws ec2 describe-network-acls` | At the subnet level |
Gotcha: Cloud security groups are stateful (allow return traffic automatically), but Network ACLs are stateless (you need rules in both directions). A missing outbound rule on a NACL will break connections even though the inbound rule allows them. This confuses people who are used to security groups.
Layer 7: Container and Kubernetes Issues¶
If the app runs in a container, there are additional layers between the client and the process.
Docker port mapping¶
# Check if the port is actually published
docker ps
# → PORTS: 0.0.0.0:8080->8080/tcp ← good, port is mapped
# → PORTS: 8080/tcp ← BAD — exposed but not published
# Exposed vs Published:
# EXPOSE in Dockerfile = documentation only, does nothing for networking
# -p 8080:8080 in docker run = actually maps the port
Common Docker networking mistakes:
| Symptom | Cause | Fix |
|---|---|---|
| Connection refused from host | Port not published (`-p` missing) | `docker run -p 8080:8080 ...` |
| Connection refused from host | App bound to `127.0.0.1` inside container | Change app to bind `0.0.0.0` |
| Connection refused between containers | Using `localhost` instead of container name | Use service name or docker network |
| Intermittent failures | Docker bridge NAT issues | Use a named network, not the default bridge |
# Test from inside the container
docker exec -it mycontainer curl localhost:8080
# → Works? Then the problem is outside the container (port mapping or firewall)
# Test from the host
curl localhost:8080
# → Works from host but not remote? Then it's a firewall or security group issue
Kubernetes service issues¶
In Kubernetes, the path from client to pod has several more layers:
Client → Ingress → Service (ClusterIP) → Endpoints → Pod
Each one can cause "connection refused":
# 1. Check the pod is running
kubectl get pods -l app=myapp
# → STATUS: Running? CrashLoopBackOff? ImagePullBackOff?
# 2. Check the service exists and has endpoints
kubectl get svc myapp
kubectl get endpoints myapp
# → If ENDPOINTS is <none>, the service has no healthy backends
# 3. Check the service selector matches the pod labels
kubectl describe svc myapp | grep Selector
kubectl get pods --show-labels | grep myapp
# 4. Check the pod's health checks
kubectl describe pod <pod-name> | grep -A5 "Liveness\|Readiness"
# A failing readiness probe removes the pod from service endpoints
The most common Kubernetes "connection refused" causes:
| What's broken | How to check | Fix |
|---|---|---|
| Pod not running | `kubectl get pods` | Check logs: `kubectl logs <pod>` |
| Service has no endpoints | `kubectl get endpoints myapp` | Service selector doesn't match pod labels |
| Readiness probe failing | `kubectl describe pod <pod>` | Pod is running but not "ready" — check probe config |
| Wrong port in service | `kubectl get svc myapp -o yaml` | Service `targetPort` doesn't match container port |
| Container bound to localhost | `kubectl exec <pod> -- ss -tlnp` | App must bind `0.0.0.0`, not `127.0.0.1` |
| NetworkPolicy blocking | `kubectl get networkpolicy` | Policy may be denying ingress to pod |
Gotcha: In Kubernetes, once you apply a `NetworkPolicy` that selects a pod, all traffic not explicitly allowed by a policy is denied. This is "deny by addition" — before any NetworkPolicy exists, all pod-to-pod traffic is allowed. The moment you create one policy, everything not matching a policy gets blocked. This catches people who create a policy for one service and accidentally block everything else.

Under the Hood: A Kubernetes Service is not a real thing on the network. There's no process listening on the ClusterIP. Instead, `kube-proxy` (or a CNI like Cilium) programs iptables rules (or eBPF maps) on every node that intercept packets destined for the ClusterIP and rewrite the destination to a real Pod IP using DNAT. If the endpoints list is empty (no healthy pods), there's nothing to rewrite to, and the connection gets... refused.
The Complete Decision Tree¶
When you see "connection refused," work through this:
Connection Refused
│
├── Is DNS correct?
│ dig +short hostname
│ └── Wrong IP? → Fix DNS record or flush cache
│
├── Is the host reachable?
│ ping / nc -zv host 22
│ └── No? → Routing problem, not application
│
├── Is anything listening on the port?
│ ssh to host → ss -tlnp | grep :PORT
│ ├── Nothing? → Process isn't running
│ │ └── systemctl status / docker ps / kubectl get pods
│ │
│ └── Listening on 127.0.0.1?
│ └── Wrong bind address → Change to 0.0.0.0
│
├── Is a firewall blocking?
│ iptables -L -n / cloud security group
│ └── REJECT rule? → Add allow rule for port
│
└── Is it a container/k8s issue?
├── Docker: Port not published? → -p flag
├── K8s: Endpoints empty? → Check selector/readiness
└── K8s: NetworkPolicy? → Check policy rules
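The first rungs of that tree can be collapsed into a small helper for the host you're logged into. A sketch (the `triage_port` name is ours; it assumes `ss` from iproute2 is installed):

```shell
# Given a port, report which layer of the ladder to investigate next.
triage_port() {
  local port=$1 line
  # ss filter syntax: list TCP listeners whose source port matches
  line=$(ss -tln "sport = :$port" | tail -n +2)
  if [ -z "$line" ]; then
    echo "nothing listening on :$port -> Layer 4: is the process running?"
  elif echo "$line" | grep -q '127\.0\.0\.1'; then
    echo "loopback-only listener -> Layer 5: fix the bind address"
  else
    echo "listener looks fine -> Layer 6+: firewall, cloud, container"
  fi
}

triage_port 8080
```

This doesn't replace the full tree (it can't see DNS, firewalls, or cloud security groups), but it answers the two most common questions in one command.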
Flashcard Check¶
Q1: "Connection refused" vs "connection timed out" — what's the TCP difference?
Refused = got RST back (host is reachable, port has no listener or firewall rejects). Timed out = no response at all (packet dropped, host unreachable, or firewall DROPs).
Q2: ss -tlnp shows 127.0.0.1:8080. Can remote clients connect?
No. The service is bound to localhost only. Remote connections arrive on the external interface and are refused by the kernel. Change the bind address to `0.0.0.0`.
Q3: What's the difference between EXPOSE and -p in Docker?
`EXPOSE` is documentation — it does nothing for networking. `-p 8080:8080` actually creates the port mapping that makes the service accessible from outside the container.
Q4: A Kubernetes service shows Endpoints: <none>. What's wrong?
No pods match the service's label selector, or all matching pods are failing their readiness probe. Check `kubectl get pods --show-labels` and `kubectl describe pod`.
Q5: Firewall DROP vs REJECT — what does the client see?
DROP = client sees "connection timed out" (no response). REJECT = client sees "connection refused" (RST sent back). DROP hides the host; REJECT gives feedback.
Q6: App works with curl localhost:8080 on the server but not from remote. Why?
Most likely: the app is bound to `127.0.0.1` instead of `0.0.0.0`. Second most likely: a firewall blocking the port for remote connections.
Q7: You just deployed to a new AWS EC2 instance and port 8080 is refused. ss -tlnp
shows the app listening on 0.0.0.0:8080. iptables -L -n shows no blocking rules. What
else should you check?
The AWS Security Group. It's a firewall outside the instance, invisible from inside. Also check Network ACLs at the subnet level.
Q8: In Kubernetes, what happens when you create a NetworkPolicy that selects a pod?
All traffic to that pod not explicitly allowed by a NetworkPolicy is denied. Before any policy existed, all traffic was allowed. This is "deny by addition."
Exercises¶
Exercise 1: Reproduce and diagnose (hands-on)¶
Create a "connection refused" scenario and diagnose it. On any Linux machine:
# Start a Python HTTP server bound to localhost only
python3 -m http.server 8080 --bind 127.0.0.1 &
# Test from the same machine
curl http://localhost:8080
# → Works (200 OK, directory listing)
# Now test using the machine's actual IP
curl http://$(hostname -I | awk '{print $1}'):8080
# → Connection refused
Use ss -tlnp to diagnose why. Then fix it.
Solution
ss -tlnp | grep 8080
# → LISTEN 0 5 127.0.0.1:8080 0.0.0.0:* users:(("python3",pid=...))
# Problem: bound to 127.0.0.1
# Fix: restart with 0.0.0.0
kill %1
python3 -m http.server 8080 --bind 0.0.0.0 &
# Verify
ss -tlnp | grep 8080
# → LISTEN 0 5 0.0.0.0:8080 0.0.0.0:*
curl http://$(hostname -I | awk '{print $1}'):8080
# → Works
Exercise 2: Firewall diagnosis (hands-on)¶
Create a firewall rule that blocks a port, then diagnose it:
# Start a server
python3 -m http.server 9090 --bind 0.0.0.0 &
# Verify it works
curl http://localhost:9090
# Now block it with iptables
sudo iptables -A INPUT -p tcp --dport 9090 -j REJECT
# Try again
curl http://localhost:9090
Diagnose using iptables -L -n. Then fix it and clean up.
Hint

List rules with line numbers: `sudo iptables -L INPUT -n --line-numbers`. Delete by line number: `sudo iptables -D INPUT <line-number>`.

Solution

sudo iptables -L INPUT -n --line-numbers
# → a REJECT rule for tcp dpt:9090 in the INPUT chain
# Delete it by line number, then verify:
sudo iptables -D INPUT <line-number>
curl http://localhost:9090
Note: if the rule had been `DROP` instead of `REJECT`, `curl` would have shown "connection timed out" instead of "connection refused."

Exercise 3: Docker networking diagnosis¶
If you have Docker available, reproduce and diagnose:
# Run a container with the port NOT published
docker run -d --name test-app python:3.11-slim python3 -m http.server 8080
# Try to connect from the host
curl http://localhost:8080
# → Connection refused
# Diagnose: is the port published?
docker ps --format '{{.Names}}\t{{.Ports}}'
# → test-app 8080/tcp ← exposed but NOT published
Fix it without rebuilding the image.
Solution

# The port was never published; rerun the container with -p:
docker rm -f test-app
docker run -d --name test-app -p 8080:8080 python:3.11-slim python3 -m http.server 8080
curl http://localhost:8080
# Note: python3 -m http.server binds all interfaces by default, so only the
# -p flag was missing here. Apps that default to 127.0.0.1 also need their
# bind address changed, so always check both.

Exercise 4: The full triage (think, then do)¶
Your colleague says "the staging API is down — connection refused." You have SSH access to the server. Write down the exact commands you'd run, in order, before you start typing. Then compare to the answer.
The systematic approach
# 1. Verify DNS (from your machine)
dig +short staging-api.example.com
# 2. Test basic connectivity (from your machine)
nc -zv staging-api.example.com 443
# 3. SSH to the server and check the process
systemctl status api-server
# or
docker ps | grep api
# 4. Check what's listening
ss -tlnp | grep -E ':443|:8080'
# 5. If listening on wrong address → fix bind config
# 6. If not listening at all → check logs
journalctl -u api-server -n 50
# or
docker logs api-container --tail 50
# 7. If listening correctly → check firewall
sudo iptables -L INPUT -n
# and cloud security groups if applicable
# 8. If Kubernetes → check the full chain
kubectl get pods -l app=api-server
kubectl get endpoints api-server
kubectl describe svc api-server
Exercise 5: The decision (think, don't code)¶
For each scenario, identify the most likely root cause and the first command you'd run:
1. `curl: (7) Connection refused` — to a service you deployed 5 minutes ago on a new server
2. `curl: (7) Connection refused` — to a service that was working yesterday
3. `psql: could not connect: Connection refused` — to a database on localhost
4. `ssh: connect to host X port 22: Connection refused` — to a server you've SSH'd to before
5. `curl: (7) Connection refused` — inside a Kubernetes pod trying to reach another pod
Answers
1. **New server:** Most likely the cloud security group doesn't have a rule for your port. Second most likely: the app is bound to `127.0.0.1`. Run: `ss -tlnp` on the server, then check the security group.
2. **Was working:** Process crashed, or someone changed a firewall rule. Run: `systemctl status <service>`, then `journalctl -u <service> -n 50`.
3. **Database on localhost:** The database process isn't running, or it's listening on a different port or only on a Unix socket. Run: `ss -tlnp | grep 5432`, then `systemctl status postgresql`.
4. **SSH refused:** sshd crashed, was reconfigured to another port, or a firewall rule changed. Run: `nc -zv X 22` from another host, then use console or out-of-band access to check `systemctl status sshd`.
5. **Pod to pod:** The target app is bound to `127.0.0.1` inside its container, the Service has no endpoints, or a NetworkPolicy blocks the traffic. Run: `kubectl get endpoints <svc>`, then `kubectl exec <pod> -- ss -tlnp`.

Cheat Sheet¶
Quick Diagnosis¶
| Step | Command | What it tells you |
|---|---|---|
| DNS | `dig +short hostname` | Are you hitting the right IP? |
| Reachability | `nc -zv host port` | Can you reach the port at all? |
| Listening | `ss -tlnp \| grep :PORT` | Is something listening? On which address? |
| Process | `systemctl status svc` | Is the service running? |
| Firewall | `sudo iptables -L INPUT -n` | Is a rule blocking the port? |
| Container | `docker ps --format '{{.Ports}}'` | Is the port published? |
| K8s | `kubectl get endpoints svc` | Does the service have healthy backends? |
Bind Address Reference¶
| Address | Accepts connections from |
|---|---|
| `0.0.0.0` | Anywhere (all interfaces) |
| `127.0.0.1` | Local machine only |
| `10.0.1.5` | Only that specific interface |
| `::` | Anywhere (IPv6 + usually IPv4) |
| `::1` | Local machine only (IPv6) |
Firewall Actions¶
| Action | Client sees | Use when |
|---|---|---|
| `ACCEPT` | Connection works | Allowing traffic |
| `REJECT` | "Connection refused" (RST) | Internal services — give feedback |
| `DROP` | "Connection timed out" | Public-facing — hide the host |
Common Default Bind Addresses¶
| Service | Default | To bind everywhere |
|---|---|---|
| Flask | `127.0.0.1:5000` | `--host 0.0.0.0` |
| Uvicorn | `127.0.0.1:8000` | `--host 0.0.0.0` |
| Express | Depends on code | `app.listen(port, '0.0.0.0')` |
| PostgreSQL | `127.0.0.1` | `listen_addresses = '*'` |
| Redis | `127.0.0.1` | `bind 0.0.0.0` |
| MySQL | `127.0.0.1` | `bind-address = 0.0.0.0` |
Takeaways¶
- "Connection refused" means the host is reachable. That's actually good news. The network is fine. The problem is at the host level or above.
- Check the bind address first. `127.0.0.1` is the #1 cause of "connection refused" for newly deployed services. `ss -tlnp` is your best friend.
- Work the layers in order. DNS → reachability → port → process → bind address → firewall → container/k8s. Don't jump to firewalls before checking if the process is even running.
- REJECT vs DROP produce different errors. REJECT = "connection refused" (RST). DROP = "connection timed out" (silence). Knowing this tells you what kind of blocking you're dealing with.
- Cloud security groups are invisible from inside the instance. If `iptables` looks clean and `ss` shows the port listening on `0.0.0.0`, check the cloud layer.
- In Kubernetes, check endpoints. An empty endpoints list means no healthy pods — either the selector is wrong or readiness probes are failing. The Service isn't a real listener; it's iptables/eBPF rules rewriting packet destinations.
Related Lessons¶
- The Hanging Deploy — when processes don't respond to signals
- What Happens When You Click a Link — the full path from browser to server
Pages that link here¶
- Api Gateways The Front Door To Your Microservices
- Aws Ec2 The Virtual Server You Never See
- Cross-Domain Lessons
- Dns Ops When Nslookup Isnt Enough
- Envoy The Proxy Thats Everywhere
- Kubernetes Debugging When Pods Wont Behave
- Kubernetes Services How Traffic Finds Your Pod
- Python For Ops The Bash Experts Bridge