RKE2 Install With Cilium
In this post I will show you how to install RKE2 with Cilium and an encrypted VXLAN overlay.
Parts of the K8S Security Lab series
Container Runtime Security
- Part1: How to deploy CRI-O with Firecracker?
- Part2: How to deploy CRI-O with gVisor?
- Part3: How to deploy containerd with Firecracker?
- Part4: How to deploy containerd with gVisor?
- Part5: How to deploy containerd with kata containers?
Advanced Kernel Security
- Part1: Hardening Kubernetes with seccomp
- Part2: Linux user namespace management with CRI-O in Kubernetes
- Part3: Hardening Kubernetes with seccomp
Network Security
- Part1: RKE2 Install With Calico
- Part2: RKE2 Install With Cilium
- Part3: CNI-Genie: network separation with multiple CNI
- Part3: Configure network with nmstate operator
- Part3: Kubernetes Network Policy
- Part4: Kubernetes with external Ingress Controller with vxlan
- Part4: Kubernetes with external Ingress Controller with bgp
- Part4: Central authentication with oauth2-proxy
- Part5: Secure your applications with Pomerium Ingress Controller
- Part6: CrowdSec Intrusion Detection System (IDS) for Kubernetes
- Part7: Kubernetes audit logs and Falco
Secure Kubernetes Install
- Part1: Best Practices to keeping Kubernetes Clusters Secure
- Part2: Kubernetes Secure Install
- Part3: Kubernetes Hardening Guide with CIS 1.6 Benchmark
- Part4: Kubernetes Certificate Rotation
User Security
- Part1: How to create kubeconfig?
- Part2: How to create Users in Kubernetes the right way?
- Part3: Kubernetes Single Sign-on with Pinniped OpenID Connect
- Part4: Kubectl authentication with Kuberos (Deprecated)
- Part5: Kubernetes authentication with Keycloak and gangway (Deprecated)
- Part6: kube-openid-connect 1.0 (Deprecated)
Image Security
Pod Security
- Part1: Using Admission Controllers
- Part2: RKE2 Pod Security Policy
- Part3: Kubernetes Pod Security Admission
- Part4: Kubernetes: How to migrate Pod Security Policy to Pod Security Admission?
- Part5: Pod Security Standards using Kyverno
- Part6: Kubernetes Cluster Policy with Kyverno
Secret Security
- Part1: Kubernetes and Vault integration
- Part2: Kubernetes External Vault integration
- Part3: ArgoCD and kubeseal to encrypt secrets
- Part4: Flux2 and kubeseal to encrypt secrets
- Part5: Flux2 and Mozilla SOPS to encrypt secrets
Monitoring and Observability
- Part6: K8S Logging And Monitoring
- Part7: Install Grafana Loki with Helm3
Backup
What is Cilium?
Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes.
At the foundation of Cilium is a new Linux kernel technology called eBPF, which enables the dynamic insertion of powerful security visibility and control logic within Linux itself. Because eBPF runs inside the Linux kernel, Cilium security policies can be applied and updated without any changes to the application code or container configuration. (Source: cilium.io )
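To make the policy idea concrete, here is a minimal CiliumNetworkPolicy sketch (the labels, namespace and port are made up for illustration) that only lets pods labeled app=frontend reach pods labeled app=backend on TCP port 8080, with no changes to the application or container configuration:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: backend          # the policy applies to these endpoints
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend   # only this peer may connect
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP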
What is Hubble?
Hubble is a fully distributed networking and security observability platform. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner.
By building on top of Cilium, Hubble can leverage eBPF for visibility. By relying on eBPF, all visibility is programmable and allows for a dynamic approach that minimizes overhead while providing deep and detailed visibility as required by users. Hubble has been created and specifically designed to make best use of these new eBPF powers. (Source: cilium.io )
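Once Hubble Relay and the UI are enabled (done in the Cilium configuration later in this post), flows can also be inspected from the command line. A quick sketch, assuming the hubble CLI is installed on your workstation and the relay service keeps its default name and port:

# Forward the Hubble Relay service to localhost (service name/port may differ)
kubectl -n kube-system port-forward service/hubble-relay 4245:80 &
# Stream live flows for the default namespace
hubble observe --server localhost:4245 --namespace default --follow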
RKE2 Setup
Project Longhorn Prerequisites
yum install -y epel-release
yum install -y nano curl wget git tmux jq vim-common
yum install -y iscsi-initiator-utils
modprobe iscsi_tcp
echo "iscsi_tcp" >/etc/modules-load.d/iscsi-tcp.conf
systemctl enable iscsid
systemctl start iscsid
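A quick check that the iSCSI prerequisites are in place:

systemctl is-active iscsid
lsmod | grep iscsi_tcp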
cat <<EOF>> /etc/NetworkManager/conf.d/rke2-canal.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:flannel*
EOF
systemctl reload NetworkManager
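To confirm NetworkManager merged the drop-in, you can print its effective configuration (a hedged check; the output format varies by NetworkManager version):

sudo NetworkManager --print-config | grep unmanaged-devices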
RKE2 RPM Install
cat << EOF > /etc/yum.repos.d/rancher-rke2-1-20-latest.repo
[rancher-rke2-common-latest]
name=Rancher RKE2 Common Latest
baseurl=https://rpm.rancher.io/rke2/latest/common/centos/8/noarch
enabled=1
gpgcheck=1
gpgkey=https://rpm.rancher.io/public.key
[rancher-rke2-1-20-latest]
name=Rancher RKE2 1.20 Latest
baseurl=https://rpm.rancher.io/rke2/latest/1.20/centos/8/x86_64
enabled=1
gpgcheck=1
gpgkey=https://rpm.rancher.io/public.key
EOF
yum -y install rke2-server
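A quick sanity check that the packages were installed from the Rancher repository (exact versions depend on the current repo state):

rpm -qa | grep rke2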
Kubectl, Helm & RKE2
Install kubectl, helm and RKE2 on the host system:
sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens
echo 'PATH=$PATH:/usr/local/bin' >> /etc/profile
echo 'PATH=$PATH:/var/lib/rancher/rke2/bin' >> /etc/profile
source /etc/profile
sudo dnf copr -y enable cerenit/helm
sudo dnf install -y helm
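To verify the tooling (note that kubectl and the other RKE2 binaries under /var/lib/rancher/rke2/bin only appear after rke2-server has been started for the first time):

helm version --short
which kubectx kubens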
RKE2 specific ports
sudo firewall-cmd --add-port=9345/tcp --permanent
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=2379/tcp --permanent
sudo firewall-cmd --add-port=2380/tcp --permanent
sudo firewall-cmd --add-port=30000-32767/tcp --permanent
# Used for monitoring
sudo firewall-cmd --add-port=9796/tcp --permanent
sudo firewall-cmd --add-port=19090/tcp --permanent
sudo firewall-cmd --add-port=6942/tcp --permanent
sudo firewall-cmd --add-port=9091/tcp --permanent
### CNI specific ports
# 4244/TCP is required when the Hubble Relay is enabled and therefore needs to connect to all agents to collect the flows
sudo firewall-cmd --add-port=4244/tcp --permanent
# Cilium healthcheck related permits:
sudo firewall-cmd --add-port=4240/tcp --permanent
sudo firewall-cmd --remove-icmp-block=echo-request --permanent
sudo firewall-cmd --remove-icmp-block=echo-reply --permanent
# Since we are using Cilium with GENEVE as overlay, we need the following port too:
sudo firewall-cmd --add-port=6081/udp --permanent
### Ingress Controller specific ports
sudo firewall-cmd --add-port=80/tcp --permanent
sudo firewall-cmd --add-port=443/tcp --permanent
### To get DNS resolution working, simply enable Masquerading.
sudo firewall-cmd --zone=public --add-masquerade --permanent
### Finally apply all the firewall changes
sudo firewall-cmd --reload
Verification:
sudo firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eno1
sources:
services: cockpit dhcpv6-client ssh wireguard
ports: 9345/tcp 6443/tcp 10250/tcp 2379/tcp 2380/tcp 30000-32767/tcp 4240/tcp 6081/udp 80/tcp 443/tcp 4244/tcp 9796/tcp 19090/tcp 6942/tcp 9091/tcp
protocols:
masquerade: yes
forward-ports:
source-ports:
icmp-blocks:
rich rules:
Basic Configuration
mkdir -p /etc/rancher/rke2
cat << EOF > /etc/rancher/rke2/config.yaml
write-kubeconfig-mode: "0644"
profile: "cis-1.5"
selinux: true
# add ips/hostname of hosts and loadbalancer
tls-san:
- "k8s.mydomain.intra"
- "172.17.9.10"
# Make a etcd snapshot every 6 hours
etcd-snapshot-schedule-cron: "0 */6 * * *"
# Keep 56 etcd snapshots (equals 2 weeks at 4 snapshots a day)
etcd-snapshot-retention: 56
cni:
- cilium
disable:
- rke2-canal
- rke2-kube-proxy
disable-cloud-controller: true
disable-kube-proxy: true
EOF
Note: I disabled rke2-canal and rke2-kube-proxy since I plan to install Cilium as the CNI in kube-proxy-less mode (kubeProxyReplacement: "strict"). Do not disable rke2-kube-proxy if you use another CNI - it will not work afterwards!
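The settings above only cover the scheduled snapshots; RKE2 can also take one on demand. A sketch, assuming a running server (depending on the RKE2 version the subcommand is rke2 etcd-snapshot or rke2 etcd-snapshot save):

# Take an on-demand etcd snapshot with a custom name
sudo rke2 etcd-snapshot save --name manual-backup
# By default, snapshots end up under the server's data directory
ls /var/lib/rancher/rke2/server/db/snapshots/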
sudo cp -f /usr/share/rke2/rke2-cis-sysctl.conf /etc/sysctl.d/60-rke2-cis.conf
sysctl -p /etc/sysctl.d/60-rke2-cis.conf
useradd -r -c "etcd user" -s /sbin/nologin -M etcd
mkdir -p /var/lib/rancher/rke2/server/manifests/
cat << EOF > /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
name: rke2-ingress-nginx
namespace: kube-system
spec:
valuesContent: |-
controller:
metrics:
service:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "10254"
EOF
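Once the cluster is running you can confirm that RKE2 picked up this override (HelmChartConfig is a CRD shipped by RKE2, so this only works after the server has started):

kubectl -n kube-system get helmchartconfig rke2-ingress-nginx -o yaml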
Prevent RKE2 Package Updates
In order to provide more stability, I chose to "mark/hold" the RKE2-related packages in DNF/YUM so that a dnf update/yum update does not mess around with them. Add the following line to /etc/dnf/dnf.conf and/or /etc/yum.conf:
exclude=rke2-*
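For example, appended and verified in one go:

echo 'exclude=rke2-*' | sudo tee -a /etc/dnf/dnf.conf
grep ^exclude /etc/dnf/dnf.conf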
Cilium Prerequisites
Ensure the eBPF filesystem is mounted (which should already be the case on RHEL 8.3):
mount | grep /sys/fs/bpf
# if present should output, e.g. "none on /sys/fs/bpf type bpf"...
If that's not the case, mount it using the commands below:
sudo mount bpffs -t bpf /sys/fs/bpf
sudo bash -c 'cat <<EOF >> /etc/fstab
none /sys/fs/bpf bpf rw,relatime 0 0
EOF'
Deploy Cilium
Cilium's eBPF kube-proxy replacement currently cannot be used together with Transparent Encryption, which is why encryption stays disabled (enabled: false) in the configuration below.
cat << EOF > /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
name: rke2-cilium
namespace: kube-system
spec:
valuesContent: |-
cilium:
kubeProxyReplacement: "strict"
k8sServiceHost: 10.0.2.15
k8sServicePort: 6443
operator:
replicas: 1
encryption:
enabled: false
type: wireguard
l7Proxy: false
hubble:
metrics:
enabled:
- dns:query;ignoreAAAA
- drop
- tcp
- flow
- icmp
- http
relay:
enabled: true
ui:
enabled: true
replicas: 1
ingress:
enabled: true
hosts:
- hubble.k8s.intra
annotations:
cert-manager.io/cluster-issuer: ca-issuer
tls:
- secretName: ingress-hubble-ui-tls
hosts:
- hubble.k8s.intra
prometheus:
enabled: true
# Default port value (9090) needs to be changed since the RHEL cockpit also listens on this port.
port: 19090
# Configure this serviceMonitor section AFTER Rancher Monitoring is enabled!
#serviceMonitor:
# enabled: true
EOF
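After the cluster is up (next section), it is worth verifying that the kube-proxy replacement is really active. A hedged sketch that execs into one of the Cilium agent pods (assuming the standard k8s-app=cilium label):

# Query the agent status; with kubeProxyReplacement: "strict" this
# should report "KubeProxyReplacement: Strict"
kubectl -n kube-system exec $(kubectl -n kube-system get pods -l k8s-app=cilium -o name | head -n1) -- cilium status | grep KubeProxyReplacement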
Starting RKE2
Enable the rke2-server
service and start it:
sudo systemctl enable rke2-server --now
Verification:
sudo systemctl status rke2-server
sudo journalctl -u rke2-server -f
Configure Kubectl (on RKE2 Host)
mkdir -p ~/.kube
ln -s /etc/rancher/rke2/rke2.yaml ~/.kube/config
chmod 600 /root/.kube/config
ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
kubectl get node
crictl ps
crictl images
Verification:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s.mydomain.intra NotReady etcd,master 2m4s v1.18.16+rke2r1
Deploy demo app
kubens default
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/minikube/http-sw-app.yaml
kubectl apply -f k8s_sec_lab/manifest/cilium_demo_rb.yaml
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
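Both requests should answer with "Ship landed" since no policy is in place yet. As an optional follow-up you can apply the L3/L4 policy from the same Cilium demo, which only allows pods labeled org=empire to request landing; the xwing request should then be dropped:

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/minikube/sw_l3_l4_policy.yaml
# This call now times out because Cilium drops the traffic
kubectl exec xwing -- curl -s --max-time 5 -XPOST deathstar.default.svc.cluster.local/v1/request-landing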