RBAC - Street-Level Ops¶
Real-world workflows for auditing, debugging, and managing Kubernetes RBAC in production.
Check Permissions¶
# Can the current user create deployments in production?
kubectl auth can-i create deployments -n production
# yes
# Can a specific service account list pods?
kubectl auth can-i list pods -n staging \
--as=system:serviceaccount:ci:ci-deployer
# no
# List ALL permissions for the current user in a namespace
kubectl auth can-i --list -n production
# Resources Non-Resource URLs Resource Names Verbs
# pods [] [] [get list watch]
# deployments.apps [] [] [get list create update patch]
# Check cluster-scoped permissions
kubectl auth can-i create namespaces
kubectl auth can-i '*' '*' # am I cluster-admin?
Remember: RBAC object mnemonic: R-B-S — Role (what verbs on what resources), Binding (who gets the role), Subject (the who — User, Group, or ServiceAccount). Roles are namespace-scoped; ClusterRoles are cluster-scoped. Bindings wire them together.
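The mnemonic maps directly onto manifests. A minimal sketch, assuming a hypothetical pod-reader Role bound to user alice (all names here are illustrative):

```
cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role                        # R: what verbs on what resources
metadata:
  name: pod-reader
  namespace: staging
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding                 # B: who gets the role
metadata:
  name: read-pods
  namespace: staging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:                         # S: the who
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
EOF
```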
Find Bindings for a Subject¶
# Find all RoleBindings and ClusterRoleBindings for a specific user
kubectl get rolebindings,clusterrolebindings -A -o json | \
jq -r '.items[] | select(.subjects[]? | .name=="alice") | "\(.kind) \(.metadata.namespace // "cluster-wide")/\(.metadata.name) -> \(.roleRef.name)"'
# Find all bindings for a service account
kubectl get rolebindings,clusterrolebindings -A -o json | \
jq -r '.items[] | select(.subjects[]? | .name=="ci-deployer" and .kind=="ServiceAccount") | "\(.kind) \(.metadata.namespace // "cluster-wide")/\(.metadata.name) -> \(.roleRef.name)"'
# Audit the default service account — what permissions does it have?
kubectl get rolebindings,clusterrolebindings -A -o json | \
jq -r '.items[] | select(.subjects[]? | .name=="default" and .kind=="ServiceAccount") | "\(.metadata.namespace // "cluster"): \(.roleRef.name)"'
Inspect a Role¶
# See what a Role or ClusterRole allows
kubectl describe role deployer -n staging
# Rules:
# Resources Verbs
# --------- -----
# deployments.apps [get list create update patch]
# services [get list create update patch]
# configmaps [get list create update patch]
# pods [get list watch]
kubectl describe clusterrole secret-reader
Create a Scoped Service Account¶
# Create a service account for CI
kubectl create serviceaccount ci-deployer -n ci
# Create a role with minimum permissions
cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: staging
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "configmaps"]
  verbs: ["get", "list", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
EOF
# Bind the role to the service account
kubectl create rolebinding ci-deployer-staging \
--role=deployer \
--serviceaccount=ci:ci-deployer \
-n staging
# Verify
kubectl auth can-i create deployments -n staging \
--as=system:serviceaccount:ci:ci-deployer
# yes
kubectl auth can-i create deployments -n production \
--as=system:serviceaccount:ci:ci-deployer
# no (only has access in staging)
Disable Default SA Token¶
# Patch a service account to not auto-mount tokens
kubectl patch serviceaccount default -n production \
-p '{"automountServiceAccountToken": false}'
# Verify new pods are not getting tokens (existing pods keep their mounts until restarted)
kubectl exec -it myapp-abc123 -- ls /var/run/secrets/kubernetes.io/serviceaccount/
# ls: cannot access: No such file or directory
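Automount can also be disabled per pod, and the pod-level field takes precedence over the service account's setting. A minimal sketch (pod and image names are made up):

```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: no-token-demo
  namespace: production
spec:
  automountServiceAccountToken: false   # pod field overrides the SA-level setting
  containers:
  - name: app
    image: nginx:1.25
EOF
```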
Gotcha: In Kubernetes 1.24+, the default service account no longer gets an auto-generated long-lived Secret. Pods get short-lived projected tokens instead, auto-rotated by the kubelet. If you have legacy workloads that expect a static Secret named default-token-xxxxx, they will break after upgrading. Use the TokenRequest API or explicit Secret creation instead.
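For workloads that still need a token in hand, the TokenRequest API is exposed through kubectl create token (available since 1.24). A sketch, reusing the ci-deployer service account created earlier:

```
# Mint a short-lived token (it simply expires; it is not rotated in place)
kubectl create token ci-deployer -n ci --duration=1h
# eyJhbGciOiJSUzI1NiIs...   (prints the JWT to stdout)
```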
Audit Stale Bindings¶
# List all ClusterRoleBindings with cluster-admin
kubectl get clusterrolebindings -o json | \
jq -r '.items[] | select(.roleRef.name=="cluster-admin") | "\(.metadata.name): \([.subjects[]? | "\(.kind)/\(.name)"] | join(", "))"'
# Find bindings referencing users (not groups or SAs) — often stale
kubectl get rolebindings,clusterrolebindings -A -o json | \
jq -r '.items[] | select(.subjects[]? | .kind=="User") | "\(.metadata.namespace // "cluster")/\(.metadata.name): \([.subjects[] | select(.kind=="User") | .name] | join(", "))"'
# Count bindings per namespace
kubectl get rolebindings -A --no-headers | awk '{print $1}' | sort | uniq -c | sort -rn
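The counting pipeline is plain awk/sort/uniq, so it can be sanity-checked offline against fake kubectl output (the namespace names below are made up):

```shell
# Two bindings in "prod", one in "staging"; expect prod first with count 2
printf 'prod rb1\nprod rb2\nstaging rb3\n' \
  | awk '{print $1}' | sort | uniq -c | sort -rn
#   2 prod
#   1 staging
```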
Test RBAC Before Deploying¶
# Create a temporary service account for testing
kubectl create serviceaccount test-rbac-sa -n staging
# Apply the role and binding under test
kubectl create rolebinding test-rbac-binding \
--role=deployer \
--serviceaccount=staging:test-rbac-sa \
-n staging
# Test positive cases (should return "yes")
kubectl auth can-i create deployments -n staging \
--as=system:serviceaccount:staging:test-rbac-sa
# Test negative cases (should return "no")
kubectl auth can-i delete namespaces \
--as=system:serviceaccount:staging:test-rbac-sa
# Clean up
kubectl delete rolebinding test-rbac-binding -n staging
kubectl delete serviceaccount test-rbac-sa -n staging
Debug 403 Forbidden Errors¶
# Check audit logs for RBAC denials (self-managed clusters)
grep "Forbidden" /var/log/kubernetes/audit.log | tail -20
# Check who is making the request
kubectl auth whoami # K8s 1.26+
# Common issue: pods/log and pods/exec are separate subresources
kubectl auth can-i get pods -n production
# yes
kubectl auth can-i get pods/log -n production
# no — need separate rule for subresources
Debug clue: A 403 that mentions system:anonymous means the request had no credentials at all: the token was missing or malformed. A 403 mentioning a specific user or SA means the identity is authenticated but lacks the right Role. Check kubectl auth whoami (1.26+) to confirm what identity the API server sees.
Default trap: A RoleBinding can reference a ClusterRole, granting its permissions only within that namespace. This is the recommended pattern for reusable roles: create one ClusterRole like deployer, then bind it per-namespace with RoleBindings. A ClusterRoleBinding grants the permissions cluster-wide, which is almost never what you want for service accounts.
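The reusable pattern described in the note can be sketched imperatively; role and namespace names here are illustrative:

```
# One ClusterRole, defined once for the whole cluster
kubectl create clusterrole deployer \
  --verb=get,list,create,update,patch \
  --resource=deployments.apps

# Bound per namespace: permissions apply only inside staging
kubectl create rolebinding deployer-staging \
  --clusterrole=deployer \
  --serviceaccount=ci:ci-deployer \
  -n staging
```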
Aggregated ClusterRoles for CRDs¶
# Extend the built-in view role for custom resources
cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: widget-viewer
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["myapp.example.com"]
  resources: ["widgets"]
  verbs: ["get", "list", "watch"]
EOF
# Now anyone with the built-in "view" ClusterRole can also read widgets
Under the hood: Aggregated ClusterRoles work via label selectors. The built-in view, edit, and admin ClusterRoles have aggregation labels. When you create a ClusterRole with rbac.authorization.k8s.io/aggregate-to-view: "true", the controller automatically merges its rules into the view ClusterRole. This happens at runtime: the view role's rules list grows dynamically.
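To confirm the merge actually happened, inspect the view role's selectors and look for the widget rule (assumes the widget-viewer ClusterRole above was applied):

```
# The aggregationRule shows which labels "view" pulls in
kubectl get clusterrole view -o jsonpath='{.aggregationRule}{"\n"}'

# The merged rules should now include the widgets rule
kubectl get clusterrole view -o json | \
  jq '.rules[] | select(.apiGroups[]? == "myapp.example.com")'
```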