Continuous Image Security
In this post I will show you a tool to continuously scan the images deployed in your Kubernetes cluster.
Parts of the K8S Security series
- Part1: Best Practices to keeping Kubernetes Clusters Secure
- Part2: Kubernetes Hardening Guide with CIS 1.6 Benchmark
- Part3: RKE2 The Secure Kubernetes Engine
- Part4: RKE2 Install With cilium
- Part5: Kubernetes Certificate Rotation
- Part6: Hardening Kubernetes with seccomp
- Part7a: RKE2 Pod Security Policy
- Part7b: Kubernetes Pod Security Admission
- Part7c: Pod Security Standards using Kyverno
- Part8: Kubernetes Network Policy
- Part9: Kubernetes Cluster Policy with Kyverno
- Part10: Using Admission Controllers
- Part11a: Image security Admission Controller
- Part11b: Image security Admission Controller V2
- Part11c: Image security Admission Controller V3
- Part12: Continuous Image Security
- Part13: K8S Logging And Monitoring
- Part14: Kubernetes audit logs and Falco
- Part15a Image Signature Verification with Connaisseur
- Part15b Image Signature Verification with Connaisseur 2.0
- Part15c Image Signature Verification with Kyverno
- Part16a Backup your Kubernetes Cluster
- Part16b How to Backup Kubernetes to git?
- Part17a Kubernetes and Vault integration
- Part17b Kubernetes External Vault integration
- Part18a: ArgoCD and kubeseal to encrypt secrets
- Part18b: Flux2 and kubeseal to encrypt secrets
- Part18c: Flux2 and Mozilla SOPS to encrypt secrets
- Part19: ArgoCD auto image updater
- Part20: Secure k3s with gVisor
- Part21: How to use imagePullSecrets cluster-wide?
- Part22: Automatically change registry in pod definition
In previous posts we talked about admission controllers that scan the image at deploy time, like Banzai Cloud's anchore-image-validator and Anchore's own admission controller. But what if you run your images for a long time? Last week I realised I was running containers with images older than a year. In that time period many new vulnerabilities came up.
I found a tool called trivy-scanner that does almost what I want. It scans the Docker images in all namespaces with the label trivy=true
and exposes the results on a Prometheus endpoint. It is based on Shell Operator running a small Python script. I made my own version of it:
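The heart of such a scanner script is turning trivy's JSON report into per-severity counts that can be exported as metrics. Below is a minimal sketch of that aggregation step, assuming trivy's JSON report layout (`Results[].Vulnerabilities[].Severity`); the CVE IDs and image name in the sample report are made up for illustration:

```python
import json
from collections import Counter

# A trimmed trivy JSON report (hypothetical values; the layout
# Results[].Vulnerabilities[].Severity is assumed from trivy's
# `--format json` output).
report = json.loads("""
{
  "Results": [
    {
      "Target": "nginx:1.19 (debian 10.8)",
      "Vulnerabilities": [
        {"VulnerabilityID": "CVE-2021-0001", "Severity": "HIGH"},
        {"VulnerabilityID": "CVE-2021-0002", "Severity": "CRITICAL"},
        {"VulnerabilityID": "CVE-2021-0003", "Severity": "HIGH"}
      ]
    }
  ]
}
""")

def severity_counts(report):
    """Count vulnerabilities per severity across all scan targets."""
    counts = Counter()
    for result in report.get("Results", []):
        # "Vulnerabilities" can be missing or null when nothing is found
        for vuln in result.get("Vulnerabilities") or []:
            counts[vuln["Severity"]] += 1
    return counts

print(severity_counts(report))
```

The real hook in the repository does more (it also discovers the images to scan), but every variant ends with a fold like this before writing the metrics out.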
Deploy the app
git clone https://github.com/devopstales/trivy-scanner
nano trivy-scanner/deploy/kubernetes/kustomization.yaml
namespace: trivy-scanner
...
kubectl create ns trivy-scanner
kubectl apply -k trivy-scanner/deploy/kubernetes/
Demo
Test with the guestbook-demo namespace:
kubectl label namespaces guestbook-demo trivy=true
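If you manage your namespaces declaratively, the same trivy=true label can be set in the manifest instead of with kubectl (the namespace name here is just the demo one from above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: guestbook-demo
  labels:
    # opt this namespace in to image scanning
    trivy: "true"
```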
kubectl get service -n trivy-scanner
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
trivy-scanner ClusterIP 10.43.179.39 <none> 9115/TCP 15m
curl -s http://10.43.179.39:9115/metrics | grep so_vulnerabilities
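If you want to post-process the scraped metrics outside Prometheus, the text format is easy to parse. A small sketch, assuming label names like image and severity on the so_vulnerabilities metric (check the exporter's actual output for the real label set; the sample lines and values are hypothetical):

```python
import re

# Hypothetical sample of the exporter's text output.
sample = """\
so_vulnerabilities{image="nginx:1.19",severity="CRITICAL"} 2
so_vulnerabilities{image="nginx:1.19",severity="HIGH"} 7
"""

METRIC_RE = re.compile(r'so_vulnerabilities\{(.+?)\}\s+(\d+)')

def parse(text):
    """Map (image, severity) -> vulnerability count."""
    out = {}
    for line in text.splitlines():
        m = METRIC_RE.match(line)
        if not m:
            continue
        labels = dict(kv.split('=', 1) for kv in m.group(1).split(','))
        labels = {k: v.strip('"') for k, v in labels.items()}
        out[(labels['image'], labels['severity'])] = int(m.group(2))
    return out

print(parse(sample))
```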
Now you need to add the trivy-scanner Service as a target for your Prometheus. I created a ServiceMonitor object for that:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    serviceapp: trivy-exporter-servicemonitor
    release: prometheus
  name: trivy-exporter-servicemonitor
spec:
  selector:
    matchLabels:
      app: trivy-scanner
  endpoints:
  - port: metrics
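Once Prometheus scrapes the exporter, you can also alert on the results. A sketch of a PrometheusRule, assuming a severity label on the so_vulnerabilities metric (verify the label name against the curl output above before using it):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: trivy-exporter-rules
  labels:
    release: prometheus
spec:
  groups:
  - name: trivy.rules
    rules:
    - alert: CriticalVulnerabilityInImage
      # The severity label name is an assumption - check the
      # exporter's actual metric output.
      expr: so_vulnerabilities{severity="CRITICAL"} > 0
      for: 15m
      labels:
        severity: warning
      annotations:
        summary: "A deployed image has critical vulnerabilities"
```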
If you use my Grafana dashboard from the repo you can see something like this: