A quick k8s environment setup
Introduction
If you intend to thwart attacks against your cloud environment, awareness and knowledge of the current and emerging threat landscape, attack vectors, and attack paths are vital. Attackers generally start by poking at externally hosted assets: they enumerate services, the domains and sub-domains a company owns, the web apps hosted on those domains, any mobile apps and the sensitive information they leak, which hosting or cloud provider the apps run on, whether the company uses public Git repositories, the technology stack of the apps, and its supply chain. These knowledge-gathering tasks are collectively called Open-Source Intelligence (OSINT) gathering techniques. Attackers leave no stone unturned; after all, they have all the time in the world.
Therefore, it is paramount for us cyber security folks to stay on top of the latest threats and exploits, and this blog series is an attempt to present a catalogue of various cloud-based attacks and their remediation. We start with Kubernetes (k8s): we come across so many companies that have implemented highly containerised, micro-services-based applications; some have mature solutions running on multiple cloud providers, while others are just starting the journey. So it becomes all the more important to be able to emulate k8s attacks, remediate our cloud solutions, and bolster the security posture, starting with the piece at the centre: k8s itself.
Coming back to this blog item, this specific instalment covers the initial lab setup and a k8s attack that uses a vulnerable application to gain initial access. To start with, we will build our own Kubernetes (k8s) cluster locally on a laptop, together with a local registry. The aim is to recreate an environment representative of a home-grown k8s setup hosted on a public cloud offering such as AWS EC2, which a lot of the clients we've come across do.
This blog is aimed at anyone keen to develop knowledge of implementing container environments locally and subsequently using them as a playground to emulate and defend against cloud-based attacks. Throughout this series we shall be referencing the MITRE ATT&CK framework and the Kubernetes threat matrix, going back and forth to them.
What is needed to set up a local Kubernetes lab?
As mentioned above, to emulate attacks on the underlying k8s we need to obtain initial access and some sort of remote code execution. One way to achieve this is to exploit a web application vulnerability. To ensure we dive straight into the k8s attacks, we have used a containerised version of DVWA (Damn Vulnerable Web Application) to achieve code execution.
Kind (Kubernetes in Docker)
Kind runs Kubernetes nodes inside Docker containers. It was initially developed to test Kubernetes itself, and it has several advantages for a lab: you can use any newer or older version of k8s, and you can run multiple clusters simultaneously.
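As a sketch of that flexibility, two clusters on different k8s versions can run side by side. The cluster names and the v1.21.1 node image tag below are illustrative choices of ours; pin node images to the digests published in the kind release notes for your kind version.

```shell
# Two clusters on different Kubernetes versions, side by side.
# Names and the older image tag are illustrative; pin images to
# the digests listed in the kind release notes.
kind create cluster --name lab-new --image kindest/node:v1.23.4
kind create cluster --name lab-old --image kindest/node:v1.21.1

kind get clusters                    # lists both clusters
kind delete cluster --name lab-old   # tear one down independently
```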
Firstly, we will install Kind on our local machine. We've installed Kind on both Mac and Linux, and both work; for consistency, the following command installs Kind on Arch Linux.
yay -Sy kind
If Docker is not already installed, install it too:
yay -Sy docker
If you would like a graphical interface to manage Docker containers, registries, images and so on, consider Portainer, which can itself be run as a container with the command below:
docker run -d -p 9000:9000 --name=portainer --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
Do we need a registry?
We need a registry to keep local images for the lab. Several container-specific attacks arise from running vulnerable container images; in this blog series we will push vulnerable images to the registry and build containers/pods from them. We shall also look at tools for scanning these images for vulnerabilities, since building secure images is paramount in k8s security.
A local registry can be implemented as follows:
#!/bin/sh
set -o errexit

# create registry container unless it already exists
reg_name='kind-registry'
reg_port='5001'
if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
  docker run \
    -d --restart=always -p "127.0.0.1:${reg_port}:5000" --name "${reg_name}" \
    registry:2
fi

# connect the registry to the cluster network if not already connected
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${reg_name}")" = 'null' ]; then
  docker network connect "kind" "${reg_name}"
fi

# Document the local registry
# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
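As an example of the push workflow this registry enables, the DVWA image can be retagged and pushed locally. The local name web-dvwa:latest is an arbitrary choice of ours, and trivy is just one of several image scanners you could use (installed separately):

```shell
# Retag the DVWA image for the local registry and push it.
# "web-dvwa:latest" is an arbitrary local name we chose.
docker pull vulnerables/web-dvwa
docker tag vulnerables/web-dvwa localhost:5001/web-dvwa:latest
docker push localhost:5001/web-dvwa:latest

# Optionally scan the image for known vulnerabilities, e.g. with
# trivy (one of several scanners; install it separately).
trivy image vulnerables/web-dvwa
```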
Create the Cluster
Now we can create the cluster with one control-plane node and three worker nodes, as follows.
cat <<EOF | kind create cluster --name randomrobbie --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.23.4@sha256:0e34f0d0fd448aa2f2819cfd74e99fe5793a6e4938b328f657c8e3f81ee0dfb9
- role: worker
  image: kindest/node:v1.23.4@sha256:0e34f0d0fd448aa2f2819cfd74e99fe5793a6e4938b328f657c8e3f81ee0dfb9
- role: worker
  image: kindest/node:v1.23.4@sha256:0e34f0d0fd448aa2f2819cfd74e99fe5793a6e4938b328f657c8e3f81ee0dfb9
- role: worker
  image: kindest/node:v1.23.4@sha256:0e34f0d0fd448aa2f2819cfd74e99fe5793a6e4938b328f657c8e3f81ee0dfb9
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5001"]
    endpoint = ["http://kind-registry:5000"]
EOF
Creating cluster "randomrobbie" ...
 ✓ Ensuring node image (kindest/node:v1.23.4) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-randomrobbie"
You can now use your cluster with:

kubectl cluster-info --context kind-randomrobbie

Have a nice day! 👋
> kubectl config get-contexts
CURRENT   NAME                CLUSTER             AUTHINFO            NAMESPACE
          kind-kind           kind-kind           kind-kind
*         kind-randomrobbie   kind-randomrobbie   kind-randomrobbie
          minikube            minikube            minikube            default
The new cluster should already be the current context (marked with *). If not, set it as the default as follows:
kubectl config use-context kind-randomrobbie
Deploy the App in the Cluster
The following command deploys the DVWA application in the cluster.
kubectl apply -f dvwa.yml
> cat dvwa.yml
---
kind: Service
apiVersion: v1
metadata:
  name: dvwa-svc
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: dvwa
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: dvwa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dvwa
  template:
    metadata:
      labels:
        app: dvwa
    spec:
      containers:
        - name: dvwa-web
          image: vulnerables/web-dvwa
          ports:
            - containerPort: 80
          readinessProbe:
            tcpSocket:
              port: 80
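Since one purpose of the lab registry is letting pods pull locally built images, the Deployment's container spec could reference it instead of Docker Hub. This is a sketch, assuming you have already tagged and pushed the DVWA image as localhost:5001/web-dvwa:latest:

```yaml
# Container spec pulling DVWA from the local kind registry instead of
# Docker Hub (assumes a prior push as localhost:5001/web-dvwa:latest).
      containers:
        - name: dvwa-web
          image: localhost:5001/web-dvwa:latest
```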
The following command lists the services deployed within the cluster, including the HTTP service we shall be using to attack the cluster.
kubectl get services
However, since we haven't deployed an Ingress controller, services deployed in the cluster are not accessible from the host machine, so we need to port-forward to reach the service:
kubectl port-forward svc/dvwa-svc 8000:80
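With the port-forward running, a quick check from another terminal confirms the service responds (a sketch; DVWA serves its login page over plain HTTP):

```shell
# With kubectl port-forward running in another terminal, the
# forwarded DVWA service should answer on localhost:8000.
curl -i http://localhost:8000/
```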
Finally, browse to the service at http://localhost:8000.
Summary
In this article we built a local k8s cluster with a local registry enabled, and we deployed a vulnerable app on the cluster so that we can conduct initial-access attacks and eventually gain shell access on a k8s pod.
In the next article we shall conduct the actual attacks: gain a shell, perform privilege-escalation attacks, and subsequently escape the pod.