Kubernetes CephFS volume with CSI driver
In this post I will show you how you can use CephFS with the CSI driver for persistent storage on Kubernetes.
Parts of the Kubernetes series
- Part1a: Install K8S with ansible
- Part1b: Install K8S with kubeadm
- Part1c: Install K8S with kubeadm and containerd
- Part1d: Install K8S with kubeadm and allow swap
- Part1e: Install K8S with kubeadm in HA mode
- Part2: Install metal-lb with K8S
- Part2: Install metal-lb with BGP
- Part3: Install Nginx ingress to K8S
- Part4: Install cert-manager to K8S
- Part5a: Use local persistent volume with K8S
- Part5b: Use ceph persistent volume with K8S
- Part5c: Use ceph CSI persistent volume with K8S
- Part5d: Kubernetes CephFS volume with CSI driver
- Part5e: Use Project Longhorn as persistent volume with K8S
- Part5f: Use OpenEBS as persistent volume with K8S
- Part5f: vSphere persistent storage for K8S
- Part6: Kubernetes volume expansion with Ceph RBD CSI driver
- Part7a: Install k8s with IPVS mode
- Part7b: Install k8s with IPVS mode
- Part8: Use Helm with K8S
- Part9: Tillerless helm2 install
- Part10: Kubernetes Dashboard SSO
- Part11: Kuberos for K8S
- Part12: Gangway for K8S
- Part13a: Velero Backup for K8S
- Part13b: How to Backup Kubernetes to git?
- Part14a: K8S Logging And Monitoring
- Part14b: Install Grafana Loki with Helm3
The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to Kubernetes. Using CSI, third-party storage providers can write and deploy plugins that expose their storage systems in Kubernetes. Before we begin, let's ensure that we have the following requirements:
- Kubernetes cluster v1.14+
- allow-privileged flag enabled for both kubelet and API server
- Running Ceph cluster
- Created CephFS
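To quickly verify the Ceph side before continuing, check the cluster health and list the existing CephFS filesystems (the filesystem name used later in this post, k8s-etc-nvme, is just an example from my lab):

# check cluster health and make sure the CephFS filesystem exists
ceph -s
ceph fs ls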
First, we need to create a namespace for the storage provider and switch to it:
kubectl create ns ceph-csi-cephfs
kubens ceph-csi-cephfs
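kubens is part of the kubectx project; if you don't have it installed, plain kubectl can switch the current context's namespace just as well:

kubectl config set-context --current --namespace ceph-csi-cephfs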
Log in to the Ceph cluster and gather the cluster fsid, the monitor addresses, and the admin key; we will need them in the Helm values file:
ceph config generate-minimal-conf > ceph-minimal.conf
cat ceph-minimal.conf
# minimal ceph.conf for e285a458-7c95-4187-8129-fbd6c370c537
[global]
fsid = e285a458-7c95-4187-8129-fbd6c370c537
mon_host = [v2:192.168.10.11:3300/0,v1:192.168.10.11:6789/0] [v2:192.168.10.12:3300/0,v1:192.168.10.12:6789/0] [v2:192.168.10.13:3300/0,v1:192.168.10.13:6789/0]
ceph auth get-key client.admin
QVFDWDNuVmtNV3NvSlJBQUFvazIxMCszZXFxNmF6SmpT5WJjaUE9PQ==
helm repo add ceph-csi https://ceph.github.io/csi-charts
helm show values ceph-csi/ceph-csi-cephfs > defaultValues.yaml
Create the Helm values file:
nano values.yaml
---
csiConfig:
  - clusterID: e285a458-7c95-4187-8129-fbd6c370c537
    monitors:
      - 192.168.10.11:6789
      - 192.168.10.12:6789
      - 192.168.10.13:6789
    cephFS:
      subvolumeGroup: "csi"
secret:
  name: csi-cephfs-secret
  adminID: admin
  adminKey: QVFDWDNuVmtNV3NvSlJBQUFvazIxMCszZXFxNmF6SmpT5WJjaUE9PQ==
  create: true
storageClass:
  create: true
  name: k8s-cephfs
  clusterID: e285a458-7c95-4187-8129-fbd6c370c537
  # (required) CephFS filesystem name into which the volume shall be created
  fsName: k8s-etc-nvme
  reclaimPolicy: Delete
  allowVolumeExpansion: true
  volumeNamePrefix: "poc-k8s-"
  provisionerSecret: csi-cephfs-secret
  controllerExpandSecret: csi-cephfs-secret
  nodeStageSecret: csi-cephfs-secret
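The values file references a CephFS subvolume group called csi. Depending on your ceph-csi version the driver may create it automatically on first provisioning; if a PVC stays Pending and the provisioner logs complain about a missing group, you can create it by hand on the Ceph cluster (k8s-etc-nvme is the filesystem name from the values above):

ceph fs subvolumegroup create k8s-etc-nvme csi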
Deploy the Helm chart:
helm upgrade --install ceph-csi-cephfs ceph-csi/ceph-csi-cephfs --values ./values.yaml
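Before moving on, check that the provisioner and nodeplugin pods are running and that the StorageClass exists (pod names will differ in your cluster):

kubectl get pods -n ceph-csi-cephfs
kubectl get storageclass k8s-cephfs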
Demo time
nano pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: k8s-cephfs
kubectl apply -f pvc.yaml
kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-cephfs-pvc   Bound    pvc-51526639-6fef-4abd-b453-c2b03c08781f   1Gi        RWX            k8s-cephfs     31m
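To see the volume in action, a throwaway pod can mount the claim. The manifest below is only an example; the important part is that claimName matches the PVC we just created:

nano pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-cephfs-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: csi-cephfs-pvc
        readOnly: false

kubectl apply -f pod.yaml
kubectl exec -it csi-cephfs-demo-pod -- df -h /var/lib/www/html

If everything works, df should show a ceph mount on /var/lib/www/html with the 1Gi size requested by the PVC.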