Best Practices for Keeping Kubernetes Clusters Secure

Kubernetes offers rich configuration options, but the defaults are usually the least secure, and most sysadmins do not know how to secure a Kubernetes cluster. So this is my best-practice list for keeping Kubernetes clusters secure.

Parts of the K8S Security Lab series

Container Runtime Security
Advanced Kernel Security
Network Security
Secure Kubernetes Install
User Security
Image Security
  • Part1: Image security Admission Controller
  • Part2: Image security Admission Controller V2
  • Part3: Image security Admission Controller V3
  • Part4: Continuous Image security
  • Part5: trivy-operator 1.0
  • Part6: trivy-operator 2.1: Trivy-operator is now an Admission controller too!
  • Part7: trivy-operator 2.2: Patch release for Admission controller
  • Part8: trivy-operator 2.3: Patch release for Admission controller
  • Part8: trivy-operator 2.4: Patch release for Admission controller
  • Part8: trivy-operator 2.5: Patch release for Admission controller
  • Part9: Image Signature Verification with Connaisseur
  • Part10: Image Signature Verification with Connaisseur 2.0
  • Part11: Image Signature Verification with Kyverno
  • Part12: How to use imagePullSecrets cluster-wide?
  • Part13: Automatically change registry in pod definition
  • Part14: ArgoCD auto image updater
Pod Security
Secret Security
Monitoring and Observability
Backup

    Use firewalld

    In most tutorials, the first step of a Kubernetes installation is to disable the firewall, because that is easier than configuring it properly. Instead, open only the ports Kubernetes actually needs, and remember that --permanent rules only take effect after a firewall-cmd --reload:

    # master
    firewall-cmd --permanent --add-port=6443/tcp          # Kubernetes API server
    firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd server client API
    firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
    firewall-cmd --permanent --add-port=10251/tcp         # kube-scheduler
    firewall-cmd --permanent --add-port=10252/tcp         # kube-controller-manager
    firewall-cmd --permanent --add-port=10255/tcp         # kubelet read-only API
    firewall-cmd --permanent --add-port=8472/udp          # VXLAN overlay (e.g. flannel)
    firewall-cmd --add-masquerade --permanent             # NAT for pod traffic
    firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort Services
    firewall-cmd --reload
    
    # worker
    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --permanent --add-port=10255/tcp
    firewall-cmd --permanent --add-port=8472/udp
    firewall-cmd --permanent --add-port=30000-32767/tcp
    firewall-cmd --add-masquerade --permanent
    firewall-cmd --reload
    
    # frontend
    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --permanent --add-port=10255/tcp
    firewall-cmd --permanent --add-port=8472/udp
    firewall-cmd --permanent --add-port=30000-32767/tcp
    firewall-cmd --add-masquerade --permanent
    firewall-cmd --permanent --zone=public --add-service=http
    firewall-cmd --permanent --zone=public --add-service=https
    firewall-cmd --reload
    

    Enabling signed kubelet serving certificates

    By default the kubelet serving certificate deployed by kubeadm is self-signed. This means a connection from external services like the metrics-server to a kubelet cannot be secured with TLS.

    To configure the kubelets in a new kubeadm cluster to obtain properly signed serving certificates you must pass the following minimal configuration to kubeadm init:

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    serverTLSBootstrap: true
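
    With serverTLSBootstrap: true the kubelets request their serving certificates from the cluster CA, but these CSRs are not approved automatically; you (or an external approver) must approve them before the certificates are issued:

    kubectl get csr
    kubectl certificate approve <csr-name>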
    

    If you want to know more about certificates and their rotation, check my blog post.

    Pod network add-on

    Several external projects provide Kubernetes Pod networks using CNI, some of which also support Network Policy. Pick one that does, so you can restrict traffic between pods; see the example after the list.

    • Calico
    • Canal
    • Weave Net
    • Contiv
    • Cilium

    See the list of available networking and network policy add-ons.
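
    As an example of what a Network Policy capable plugin gives you, here is a minimal policy that denies all ingress traffic to every pod in a namespace (the namespace name my-app is illustrative):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: my-app
    spec:
      podSelector: {}
      policyTypes:
      - Ingress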

    Using RBAC Authorization

    Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. To use it, you create Role or ClusterRole objects and then assign them to users or service accounts with a RoleBinding or ClusterRoleBinding.

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: deployer
      namespace: $NAMESPACE
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: deployer-access
      namespace: $NAMESPACE
    rules:
    - apiGroups: ["", "extensions", "apps"]
      resources: ["*"]
      verbs: ["*"]
    - apiGroups: ["batch"]
      resources:
      - jobs
      - cronjobs
      verbs: ["*"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: deployer
      namespace: $NAMESPACE
    subjects:
    - kind: ServiceAccount
      name: deployer
      namespace: $NAMESPACE
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: deployer-access
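
    The manifest uses a $NAMESPACE placeholder; one way to fill it in before applying is envsubst (the file name deployer-rbac.yaml is just an example):

    export NAMESPACE=my-app
    envsubst < deployer-rbac.yaml | kubectl apply -f -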
    

    PodSecurityPolicy

    In the default configuration, users in Docker containers share the same UID and GID pool as users on the host system. So if an unprivileged user runs a container as root and mounts the host's filesystem into that container, they can do whatever they want on your host. Docker has an option to remap the IDs used inside containers, but Kubernetes does not support it. RBAC controls access to an apiGroup, for example the right to create Deployments, but it does not let you restrict the options that can be used inside those Deployments.

    A PodSecurityPolicy is a cluster-level resource for managing security aspects of a pod specification.

    PSPs allow you to control:

    • The ability to run privileged containers and control privilege escalation
    • Access to host filesystems
    • Usage of volume types
    • And a few other aspects including SELinux, AppArmor, sysctl, and seccomp profiles

    Pod Security Policies are implemented as an admission controller in Kubernetes. To enable PSPs in your cluster, make sure to include PodSecurityPolicy in the enable-admission-plugins list that is passed as a parameter to your Kubernetes API server configuration. Be aware that enabling the plugin before any policy exists prevents all new pods from being created:

    nano /etc/kubernetes/manifests/kube-apiserver.yaml
    ...
    --enable-admission-plugins=...,PodSecurityPolicy
    ...
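
    Since kube-apiserver runs as a static pod, the kubelet restarts it automatically when the manifest changes. One way to check that the flag took effect (the component=kube-apiserver label is set by kubeadm):

    kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins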
    

    Creating Pod Security Policies

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restricted
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
        apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
        seccomp.security.alpha.kubernetes.io/defaultProfileName:  'runtime/default'
        apparmor.security.beta.kubernetes.io/defaultProfileName:  'runtime/default'
    spec:
      privileged: false
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      readOnlyRootFilesystem: false
      hostNetwork: false
      hostIPC: false
      hostPID: false
      requiredDropCapabilities:
        - ALL
      volumes:
        - 'configMap'
        - 'emptyDir'
        - 'projected'
        - 'secret'
        - 'downwardAPI'
        - 'persistentVolumeClaim'
      hostPorts:
        - min: 0
          max: 0
      seLinux:
        rule: 'RunAsAny'
      runAsUser:
        rule: 'MustRunAsNonRoot'
      supplementalGroups:
        rule: 'MustRunAs'
        ranges:
          - min: 1
            max: 65535
      fsGroup:
        rule: 'MustRunAs'
        ranges:
          - min: 1
            max: 65535
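
    Assuming you saved the manifest above as restricted-psp.yaml, create and verify the policy with:

    kubectl apply -f restricted-psp.yaml
    kubectl get psp restricted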
    

    Assigning Pod Security Policies

    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: psp:restricted
    rules:
    - apiGroups:
      - policy
      resources:
      - podsecuritypolicies
      resourceNames:
      - restricted # the psp we are giving access to
      verbs:
      - use
    ---
    # This applies psp/restricted to all authenticated users
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: psp:restricted
    subjects:
    - kind: Group
      name: system:authenticated # All authenticated users
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: psp:restricted # A references to the role above
      apiGroup: rbac.authorization.k8s.io
    

    Audit Log

    Enabling audit logging is a best practice in any cluster. Let's go ahead and create a basic policy, saved on our master.

    mkdir -p /etc/kubernetes
    
    cat > /etc/kubernetes/audit-policy.yaml <<EOF
    apiVersion: audit.k8s.io/v1beta1
    kind: Policy
    rules:
    # Do not log from kube-system accounts
    - level: None
      userGroups:
      - system:serviceaccounts:kube-system
      - system:nodes
    - level: None
      users:
      - system:apiserver
      - system:kube-scheduler
      - system:volume-scheduler
      - system:kube-controller-manager
      - system:node
    # Don't log these read-only URLs.
    - level: None
      nonResourceURLs:
      - /healthz*
      - /version
      - /swagger*
    # limit level to Metadata so token is not included in the spec/status
    - level: Metadata
      omitStages:
      - RequestReceived
      resources:
      - group: authentication.k8s.io
        resources:
        - tokenreviews
    EOF
    
    mkdir -p /var/log/kubernetes/apiserver
    
    # in a kubeadm cluster, add these flags to the kube-apiserver static pod
    # manifest (/etc/kubernetes/manifests/kube-apiserver.yaml)
    kube-apiserver --audit-log-path=/var/log/kubernetes/apiserver/audit.log \
    --audit-policy-file=/etc/kubernetes/audit-policy.yaml
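
    Once the API server is running again, audit events are appended as JSON lines; you can follow them with:

    tail -f /var/log/kubernetes/apiserver/audit.log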
    

    Image security

    It doesn't matter how secure your Kubernetes network or infrastructure is if you run outdated, insecure images. Always update your base images and scan them for known vulnerabilities. For applications, use hardened base images and install as few components as you can. Some applications for image scanning (a quick example follows the list):

    • Anchore Engine
    • Clair
    • trivy
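
    For example, a quick trivy scan of a public image (the image name is just an illustration):

    trivy image nginx:1.25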

    Find the right baseimage

    I think the best choice for a base image is Distroless, a set of images made by Google that were created with security in mind. These images contain the bare minimum that's needed for your app.

    # final stage of a multi-stage build: only the app from the build-env
    # stage is copied into the distroless runtime image
    FROM gcr.io/distroless/python3
    COPY --from=build-env /app /app
    WORKDIR /app
    CMD ["hello.py", "/etc"]
    

    Least privileged user

    Create a dedicated user and group in the image, with the minimal permissions needed to run the application, and use that same user to run the process. For example, the official Node.js image has a built-in generic user called node:

    USER node
    CMD node index.js
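
    If your base image has no such user, create one yourself. A minimal sketch for an Alpine-based image (the app and user names are illustrative):

    FROM alpine:3.19
    RUN addgroup -S app && adduser -S -G app app
    USER app
    CMD ["./myapp"]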
    

    Store secrets encrypted in etcd

    Kubernetes' default Secret store is not secure, because it stores the data as base64-encoded plain text in etcd.

    The kube-apiserver process accepts an argument --encryption-provider-config that controls how API data is encrypted in etcd. An example configuration is provided below.

    mkdir /etc/kubernetes/etcd-enc/
    
    # generate a random 32-byte key, base64 encoded
    head -c 32 /dev/urandom | base64
    
    nano /etc/kubernetes/etcd-enc/etcd-encryption.yaml
    ---
    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
      - resources:
        - secrets
        providers:
        # the first provider is used to encrypt new writes, so an encrypting
        # provider must come before identity
        - aesgcm:
            keys:
            - name: key1
              secret: <BASE 64 ENCODED SECRET>
        # identity allows reading Secrets that are still stored unencrypted
        - identity: {}
    

    In this example, key1 is the name of the key that holds the encryption/decryption secret; paste the base64 output generated above in its place. Remember that the provider order matters: the first provider encrypts new writes, so if identity came first, new Secrets would be stored unencrypted.

    nano /etc/kubernetes/manifests/kube-apiserver.yaml
    ...
        - --encryption-provider-config=/etc/kubernetes/etcd-enc/etcd-encryption.yaml
    ...
        volumeMounts:
    ...
        - mountPath: /etc/kubernetes/etcd-enc
          name: etc-kubernetes-etcd-enc
          readOnly: true
      hostNetwork: true
    ...
      - hostPath:
          path: /etc/kubernetes/etcd-enc
          type: DirectoryOrCreate
        name: etc-kubernetes-etcd-enc
    status: {}
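
    Encryption only applies when data is written, so force a rewrite of all existing Secrets once the API server has restarted:

    kubectl get secrets --all-namespaces -o json | kubectl replace -f -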
    

    The CIS Kubernetes Benchmark

    The Center for Internet Security (CIS) Kubernetes Benchmark is a reference document that can be used by system administrators, security and audit professionals and other IT roles to establish a secure configuration baseline for Kubernetes.

    Create kube-bench job

    kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/master/job.yaml
    kubectl get jobs --watch
    

    Get job output from logs

    kubectl logs $(kubectl get pods -l app=kube-bench -o name)
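
    The report is long; to focus on the failed checks you can, for example, grep for the [FAIL] markers in the output:

    kubectl logs $(kubectl get pods -l app=kube-bench -o name) | grep "\[FAIL\]"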