Kubernetes Hardening Guide with CIS 1.6 Benchmark
On August 3rd, 2021, the National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) released Kubernetes Hardening Guidance, a cybersecurity technical report detailing the complexities of securely managing Kubernetes. This blog post will show you how you can harden your Kubernetes cluster based on CISA's best practices.
Parts of the K8S Security Lab series
Container Runtime Security
- Part1: How to deploy CRI-O with Firecracker?
- Part2: How to deploy CRI-O with gVisor?
- Part3: How to deploy containerd with Firecracker?
- Part4: How to deploy containerd with gVisor?
- Part5: How to deploy containerd with kata containers?
Advanced Kernel Security
- Part1: Hardening Kubernetes with seccomp
- Part2: Linux user namespace management with CRI-O in Kubernetes
- Part3: Hardening Kubernetes with seccomp
Network Security
- Part1: RKE2 Install With Calico
- Part2: RKE2 Install With Cilium
- Part3: CNI-Genie: network separation with multiple CNI
- Part3: Configure network with nmstate operator
- Part3: Kubernetes Network Policy
- Part4: Kubernetes with external Ingress Controller with vxlan
- Part4: Kubernetes with external Ingress Controller with bgp
- Part4: Central authentication with oauth2-proxy
- Part5: Secure your applications with Pomerium Ingress Controller
- Part6: CrowdSec Intrusion Detection System (IDS) for Kubernetes
- Part7: Kubernetes audit logs and Falco
Secure Kubernetes Install
- Part1: Best Practices for keeping Kubernetes Clusters Secure
- Part2: Kubernetes Secure Install
- Part3: Kubernetes Hardening Guide with CIS 1.6 Benchmark
- Part4: Kubernetes Certificate Rotation
User Security
- Part1: How to create kubeconfig?
- Part2: How to create Users in Kubernetes the right way?
- Part3: Kubernetes Single Sign-on with Pinniped OpenID Connect
- Part4: Kubectl authentication with Kuberos Deprecated !!
- Part5: Kubernetes authentication with Keycloak and gangway Deprecated !!
- Part6: kube-openid-connect 1.0 Deprecated !!
Image Security
Pod Security
- Part1: Using Admission Controllers
- Part2: RKE2 Pod Security Policy
- Part3: Kubernetes Pod Security Admission
- Part4: Kubernetes: How to migrate Pod Security Policy to Pod Security Admission?
- Part5: Pod Security Standards using Kyverno
- Part6: Kubernetes Cluster Policy with Kyverno
Secret Security
- Part1: Kubernetes and Vault integration
- Part2: Kubernetes External Vault integration
- Part3: ArgoCD and kubeseal to encrypt secrets
- Part4: Flux2 and kubeseal to encrypt secrets
- Part5: Flux2 and Mozilla SOPS to encrypt secrets
Monitoring and Observability
- Part6: K8S Logging And Monitoring
- Part7: Install Grafana Loki with Helm3
Backup
Disable swap
free -h
swapoff -a
sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab
free -h
Project Longhorn Prerequisites
yum install -y iscsi-initiator-utils
modprobe iscsi_tcp
echo "iscsi_tcp" >/etc/modules-load.d/iscsi-tcp.conf
systemctl enable iscsid
systemctl start iscsid
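A quick optional sanity check that the iSCSI module is loaded and the daemon is running:
lsmod | grep iscsi_tcp
systemctl is-active iscsid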
Install and configure containerd
yum install -y yum-utils device-mapper-persistent-data lvm2 git nano wget iproute-tc vim-common
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y containerd.io
## Configure containerd
sudo mkdir -p /etc/containerd
sudo containerd config default > /etc/containerd/config.toml
nano /etc/containerd/config.toml
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
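If the generated config already contains the SystemdCgroup line set to false, a sed one-liner can flip it instead of editing by hand (just a convenience, the nano edit above achieves the same):
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml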
systemctl enable --now containerd
systemctl status containerd
echo "runtime-endpoint: unix:///run/containerd/containerd.sock" > /etc/crictl.yaml
cd /tmp
wget https://github.com/containerd/nerdctl/releases/download/v0.12.0/nerdctl-0.12.0-linux-amd64.tar.gz
tar -xzf nerdctl-0.12.0-linux-amd64.tar.gz
mv nerdctl /usr/local/bin
nerdctl ps
kubeadm pre-configuration
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
#
# protectKernelDefaults
#
kernel.keys.root_maxbytes = 25000000
kernel.keys.root_maxkeys = 1000000
kernel.panic = 10
kernel.panic_on_oops = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
EOF
sysctl --system
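To verify that the modules and sysctls took effect (purely a check, not required):
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward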
cd /opt
git clone https://github.com/devopstales/k8s_sec_lab
mkdir /etc/kubernetes/
head -c 32 /dev/urandom | base64
nano /opt/k8s_sec_lab/k8s-manifest/002-etcd-encription.yaml
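For reference, a minimal EncryptionConfiguration looks roughly like the sketch below; the actual content of 002-etcd-encription.yaml in the repo may differ. Paste the base64 key generated above into the secret field:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64 key generated above>
      - identity: {}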
cp /opt/k8s_sec_lab/k8s-manifest/001-audit-policy.yaml /etc/kubernetes/audit-policy.yaml
cp /opt/k8s_sec_lab/k8s-manifest/002-etcd-encription.yaml /etc/kubernetes/etcd-encription.yaml
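Similarly, 001-audit-policy.yaml is a standard Kubernetes audit Policy object; a minimal example that logs every request at Metadata level would look like this (the repo's policy is likely more fine-grained):
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  - level: Metadata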
Install kubeadm
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install epel-release
yum install -y kubeadm kubelet kubectl
echo 'KUBELET_KUBEADM_ARGS="--node-ip=172.17.13.10 --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock"' > /etc/sysconfig/kubelet
kubeadm config images pull --config 010-kubeadm-conf-1-22-2.yaml
systemctl enable kubelet.service
nano 010-kubeadm-conf-1-22-2.yaml
# add the IPs, short hostnames, and FQDNs of the nodes, plus the IP of the loadbalancer, as certSANs to the apiServer config.
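As a sketch, the relevant part of the kubeadm ClusterConfiguration could look like this; the hostnames below are placeholders and the IPs are the node IP and loadbalancer IP used in this lab (the repo's 010-kubeadm-conf-1-22-2.yaml contains the full configuration, including the audit and encryption settings):
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.2
controlPlaneEndpoint: "172.17.9.10:6443"
apiServer:
  certSANs:
    - "172.17.9.10"             # loadbalancer IP
    - "172.17.13.10"            # node IP
    - "master1"                 # short hostname (placeholder)
    - "master1.mydomain.intra"  # FQDN (placeholder)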
kubeadm init --skip-phases=addon/kube-proxy --config 010-kubeadm-conf-1-22-2.yaml
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# In my kubeadm config I forced the usage of PSP.
# At the beginning there is no PSP deployed, so none of the pods can start.
# This is true for the kube-apiserver too.
kubectl apply -f 011-psp.yaml
kubectl get csr --all-namespaces
kubectl get csr -oname | xargs kubectl certificate approve
kubectl apply -f 012-k8s-clusterrole.yaml
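For orientation, a permissive PodSecurityPolicy plus a ClusterRole that grants the use verb on it typically looks like the sketch below; the actual 011-psp.yaml and 012-k8s-clusterrole.yaml in the repo may be stricter or split differently, and presumably also contain the ClusterRoleBinding that binds the role to the system service accounts:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities: ["*"]
  volumes: ["*"]
  hostNetwork: true
  hostPID: true
  hostIPC: true
  hostPorts:
    - min: 0
      max: 65535
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-privileged
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["privileged"]
    verbs: ["use"]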
Cilium
mount | grep /sys/fs/bpf
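If the command above returns nothing, the BPF filesystem can be mounted manually (Cilium normally mounts it itself, so this is only a fallback):
sudo mount bpffs /sys/fs/bpf -t bpf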
yum install -y https://harbottle.gitlab.io/harbottle-main/7/x86_64/harbottle-main-release.rpm
yum install -y kubectx helm
helm repo add cilium https://helm.cilium.io/
kubectl taint nodes --all node-role.kubernetes.io/master-
helm upgrade --install cilium cilium/cilium \
--namespace kube-system \
-f 031-cilium-helm-values.yaml
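Because kube-proxy was skipped during kubeadm init, the Helm values have to enable Cilium's kube-proxy replacement and point it at the API server. A minimal sketch of 031-cilium-helm-values.yaml (the repo's file likely sets more options):
kubeProxyReplacement: strict
k8sServiceHost: 172.17.9.10
k8sServicePort: 6443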
kubectl get pods -A
kubectl apply -f 013-k8s-cert-approver.yaml
Harden Kubernetes
There is an open-source tool, kube-bench, that tests the CIS best practices against your cluster. We will use it to check the results.
# kube-bench
# https://github.com/aquasecurity/kube-bench/releases/
yum install -y https://github.com/aquasecurity/kube-bench/releases/download/v0.6.5/kube-bench_0.6.5_linux_amd64.rpm
useradd -r -c "etcd user" -s /sbin/nologin -M etcd
chown etcd:etcd /var/lib/etcd
chmod 700 /var/lib/etcd
# kube-bench
kube-bench
kube-bench | grep "\[FAIL\]"
There is no FAIL, just WARNING. Yay!
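If you want to re-check only one area later, kube-bench can be limited to specific targets:
kube-bench run --targets master,etcd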
Join nodes
First we need to get the join command from the master:
# on master1
kubeadm token create --print-join-command
kubeadm join 172.17.9.10:6443 --token c2t0rj.cofbfnwwrb387890 \
--discovery-token-ca-cert-hash sha256:a52f4c16a6ce9ef72e3d6172611d17d9752dfb1c3870cf7c8ad4ce3bcb97547e
If the next node is a worker, we can just use the command we got. If the next node is a master, we need to generate a certificate-key. You need a separate certificate-key for every new master.
# on master1
## generate cert key
kubeadm certs certificate-key
29ab8a6013od73s8d3g4ba3a3b24679693e98acd796356eeb47df098c47f2773
## store cert key in secret
kubeadm init phase upload-certs --upload-certs --certificate-key=29ab8a6013od73s8d3g4ba3a3b24679693e98acd796356eeb47df098c47f2773
# on master2
kubeadm join 172.17.9.10:6443 --token c2t0rj.cofbfnwwrb387890 \
--discovery-token-ca-cert-hash sha256:a52f4c16a6ce9ef72e3d6172611d17d9752dfb1c3870cf7c8ad4ce3bcb97547e \
--control-plane --certificate-key 29ab8a6013od73s8d3g4ba3a3b24679693e98acd796356eeb47df098c47f2773
# on master3
kubeadm join 172.17.9.10:6443 --token c2t0rj.cofbfnwwrb387890 \
--discovery-token-ca-cert-hash sha256:a52f4c16a6ce9ef72e3d6172611d17d9752dfb1c3870cf7c8ad4ce3bcb97547e \
--control-plane --certificate-key 29ab8a6013od73s8d3g4ba3a3b24679693e98acd796356eeb47df098c47f2773
Finally, with every new node we need to approve the certificate signing requests for that node.
kubectl get csr -oname | xargs kubectl certificate approve
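## on every new master node: the etcd user and permissions below are needed on each control-plane node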
useradd -r -c "etcd user" -s /sbin/nologin -M etcd
chown etcd:etcd /var/lib/etcd
chmod 700 /var/lib/etcd
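Once the CSRs are approved, all nodes should report Ready:
kubectl get nodes -o wide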