Secure k3s with gVisor
In this post I will show you how you can secure k3s with gVisor.
Parts of the K8S Security series
- Part1: Best Practices to keeping Kubernetes Clusters Secure
- Part2: Kubernetes Hardening Guide with CIS 1.6 Benchmark
- Part3: RKE2 The Secure Kubernetes Engine
- Part4: RKE2 Install With cilium
- Part5: Kubernetes Certificate Rotation
- Part6: Hardening Kubernetes with seccomp
- Part7a: RKE2 Pod Security Policy
- Part7b: Kubernetes Pod Security Admission
- Part7c: Pod Security Standards using Kyverno
- Part8: Kubernetes Network Policy
- Part9: Kubernetes Cluster Policy with Kyverno
- Part10: Using Admission Controllers
- Part11a: Image security Admission Controller
- Part11b: Image security Admission Controller V2
- Part11c: Image security Admission Controller V3
- Part12: Continuous Image security
- Part13: K8S Logging And Monitoring
- Part14: Kubernetes audit logs and Falco
- Part15a Image Signature Verification with Connaisseur
- Part15b Image Signature Verification with Connaisseur 2.0
- Part15c Image Signature Verification with Kyverno
- Part16a Backup your Kubernetes Cluster
- Part16b How to Backup Kubernetes to git?
- Part17a Kubernetes and Vault integration
- Part17b Kubernetes External Vault integration
- Part18a: ArgoCD and kubeseal to encrypt secrets
- Part18b: Flux2 and kubeseal to encrypt secrets
- Part18c: Flux2 and Mozilla SOPS to encrypt secrets
- Part19: ArgoCD auto image updater
- Part20: Secure k3s with gVisor
- Part21: How to use imagePullSecrets cluster-wide?
- Part22: Automatically change registry in pod definition
In a previous post I showed you how to install a k3s cluster. Now we will modify the containerd configuration to use a different low-level container runtime.
What is gVisor
gVisor is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system.
gVisor includes an Open Container Initiative (OCI) runtime called runsc that makes it easy to work with existing container tooling. The runsc runtime integrates with Docker, containerd, and Kubernetes, making it simple to run sandboxed containers.
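As a quick illustration of the Docker integration (separate from the k3s setup below), this is a minimal sketch assuming Docker is installed on a test machine and runsc already sits in /usr/local/bin, which is where the install script later in this post puts it:
# register runsc as an additional Docker runtime
# note: this overwrites any existing /etc/docker/daemon.json
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
EOF
sudo systemctl restart docker
# inside the sandbox, dmesg shows gVisor's own kernel messages, not the host's
docker run --rm --runtime=runsc alpine dmesg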
Bootstrap the k3s cluster
k3sup install \
--ip=172.17.8.101 \
--user=vagrant \
--sudo \
--cluster \
--k3s-channel=stable \
--k3s-extra-args "--no-deploy=traefik --no-deploy=servicelb --flannel-iface=enp0s8 --node-ip=172.17.8.101" \
--merge \
--local-path $HOME/.kube/config \
--context=k3s-ha
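Because of --merge and --context, k3sup adds the new cluster to the local kubeconfig under the k3s-ha context, so kubectl can already talk to the first server before the other nodes join:
kubectl config use-context k3s-ha
kubectl get node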
k3sup join \
--ip 172.17.8.102 \
--user vagrant \
--sudo \
--k3s-channel stable \
--server \
--server-ip 172.17.8.101 \
--server-user vagrant \
--sudo \
--k3s-extra-args "--no-deploy=traefik --no-deploy=servicelb --flannel-iface=enp0s8 --node-ip=172.17.8.102"
k3sup join \
--ip 172.17.8.103 \
--user vagrant \
--sudo \
--k3s-channel stable \
--server \
--server-ip 172.17.8.101 \
--server-user vagrant \
--sudo \
--k3s-extra-args "--no-deploy=traefik --no-deploy=servicelb --flannel-iface=enp0s8 --node-ip=172.17.8.103"
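With all three servers joined, the cluster should report three Ready nodes:
kubectl get nodes -o wide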
Install gVisor
Install the runsc runtime on every node; tmux-cssh opens a synchronized SSH session to all three machines, so the commands only have to be typed once.
tmux-cssh -u vagrant 172.17.8.101 172.17.8.102 172.17.8.103
sudo su -
yum install nano wget -y
nano gvisor.sh
#!/bin/bash
(
set -e
URL=https://storage.googleapis.com/gvisor/releases/release/latest
wget ${URL}/runsc ${URL}/runsc.sha512 \
${URL}/gvisor-containerd-shim ${URL}/gvisor-containerd-shim.sha512 \
${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512
sha512sum -c runsc.sha512 \
-c gvisor-containerd-shim.sha512 \
-c containerd-shim-runsc-v1.sha512
rm -f *.sha512
chmod a+rx runsc gvisor-containerd-shim containerd-shim-runsc-v1
sudo mv runsc gvisor-containerd-shim containerd-shim-runsc-v1 /usr/local/bin
)
bash gvisor.sh
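If the script finished without errors, the binaries are now in /usr/local/bin; a quick sanity check:
ls -l /usr/local/bin/runsc /usr/local/bin/containerd-shim-runsc-v1
runsc --version
k3s regenerates its containerd config.toml on every start, so the customization has to go into config.toml.tmpl, which k3s renders instead of its built-in template: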
cp /var/lib/rancher/k3s/agent/etc/containerd/config.toml \
/var/lib/rancher/k3s/agent/etc/containerd/config.toml.back
cp /var/lib/rancher/k3s/agent/etc/containerd/config.toml.back \
/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
nano /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
...
  [plugins.cri.containerd]
    disable_snapshot_annotations = true
    snapshotter = "overlayfs"
    disabled_plugins = ["restart"]
  [plugins.linux]
    shim_debug = true
  [plugins.cri.containerd.runtimes.runsc]
    runtime_type = "io.containerd.runsc.v1"
  [plugins.cri.cni]
...
systemctl restart k3s
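After the restart, k3s renders the template into the live config; make sure the runsc runtime is present in the generated file:
grep -A 2 runsc /var/lib/rancher/k3s/agent/etc/containerd/config.toml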
exit
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
EOF
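The RuntimeClass maps the gvisor name used in pod specs to the runsc handler configured in containerd. Verify it was created:
kubectl get runtimeclass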
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: www-runc
spec:
  containers:
  - image: nginx:1.18
    name: www
    ports:
    - containerPort: 80
EOF
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: untrusted
  name: www-gvisor
spec:
  runtimeClassName: gvisor
  containers:
  - image: nginx:1.18
    name: www
    ports:
    - containerPort: 80
EOF
kubectl get po
NAME         READY   STATUS    RESTARTS   AGE
www-gvisor   1/1     Running   0          9s
www-runc     1/1     Running   0          1m
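Both pods are running, but only www-gvisor is sandboxed. An easy way to see the difference is to look at the kernel each pod talks to; inside the gVisor pod, dmesg returns the sandbox's own kernel messages, while in the runc pod it returns the host kernel log (or a permission error, depending on the image's capabilities):
kubectl exec www-runc -- dmesg
# the gVisor pod prints messages from gVisor's application kernel
kubectl exec www-gvisor -- dmesg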