Kubernetes Ceph RBD volume with CSI driver
In this post I will show you how you can use Ceph RBD with the CSI driver for persistent storage on Kubernetes.
Parts of the Kubernetes series
- Part1a: Install K8S with ansible
- Part1b: Install K8S with kubeadm
- Part1c: Install K8S with kubeadm and containerd
- Part1d: Install K8S with kubeadm and allow swap
- Part1e: Install K8S with kubeadm in HA mode
- Part2: Install metal-lb with K8S
- Part2: Install metal-lb with BGP
- Part3: Install Nginx ingress to K8S
- Part4: Install cert-manager to K8S
- Part5a: Use local persistent volume with K8S
- Part5b: Use ceph persistent volume with K8S
- Part5c: Use ceph CSI persistent volume with K8S
- Part5d: Kubernetes CephFS volume with CSI driver
- Part5e: Use Project Longhorn as persistent volume with K8S
- Part5f: Use OpenEBS as persistent volume with K8S
- Part5f: vSphere persistent storage for K8S
- Part6: Kubernetes volume expansion with Ceph RBD CSI driver
- Part7a: Install k8s with IPVS mode
- Part7b: Install k8s with IPVS mode
- Part8: Use Helm with K8S
- Part9: Tillerless helm2 install
- Part10: Kubernetes Dashboard SSO
- Part11: Kuberos for K8S
- Part12: Gangway for K8S
- Part13a: Velero Backup for K8S
- Part13b: How to Backup Kubernetes to git?
- Part14a: K8S Logging And Monitoring
- Part14b: Install Grafana Loki with Helm3
The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to Kubernetes. Using CSI, third-party storage providers can write and deploy plugins that expose their storage systems in Kubernetes. Before we begin, let's ensure that we have the following requirements:
- Kubernetes cluster v1.14+
- allow-privileged flag enabled for both kubelet and API server
- Running Ceph cluster
```
git clone https://github.com/ceph/ceph-csi.git
cd ceph-csi/deploy/rbd/kubernetes/v1.14+/
kubectl create -f csi-nodeplugin-rbac.yaml
kubectl create -f csi-provisioner-rbac.yaml
```
```
ceph config generate-minimal-conf
# minimal ceph.conf for 54530b3e-9823-4c84-9c39-a65470e961e8
[global]
fsid = 54530b3e-9823-4c84-9c39-a65470e961e8
mon_host = [v2:192.168.1.31:3300/0,v1:192.168.1.31:6789/0],[v2:192.168.1.32:3300/0,v1:192.168.1.32:6789/0],[v2:192.168.1.33:3300/0,v1:192.168.1.33:6789/0]
```
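To build the `monitors` list for the ceph-csi ConfigMap, the v1 endpoints can be pulled out of the minimal ceph.conf with a bit of shell. This is just a sketch; the `/tmp/ceph-minimal.conf` path is a throwaway copy used for illustration:

```shell
# Write a copy of the minimal ceph.conf to a scratch file for parsing.
cat > /tmp/ceph-minimal.conf <<'EOF'
[global]
fsid = 54530b3e-9823-4c84-9c39-a65470e961e8
mon_host = [v2:192.168.1.31:3300/0,v1:192.168.1.31:6789/0],[v2:192.168.1.32:3300/0,v1:192.168.1.32:6789/0],[v2:192.168.1.33:3300/0,v1:192.168.1.33:6789/0]
EOF

# Pick out the v1 (port 6789) endpoints and strip the "v1:" prefix,
# leaving one "ip:port" per line to paste into the "monitors" array.
grep '^mon_host' /tmp/ceph-minimal.conf \
  | grep -oE 'v1:[0-9.]+:[0-9]+' \
  | sed 's/^v1://'
```

This prints the three `192.168.1.3x:6789` endpoints used in the ConfigMap below.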
nano csi-config-map.yaml
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "54530b3e-9823-4c84-9c39-a65470e961e8",
        "monitors": [
          "192.168.1.31:6789",
          "192.168.1.32:6789",
          "192.168.1.33:6789"
        ]
      }
    ]
```
```
kubectl create -f csi-config-map.yaml
kubectl create -f csi-rbdplugin-provisioner.yaml
kubectl create -f csi-rbdplugin.yaml
```
```
ceph auth get-key client.admin | base64
QVFDTDliVmNEb21I32SHoPxXNGhmRkczTFNtcXM0ZW5VaXlTZEE977==
```
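Every value under `data:` in a Kubernetes Secret must be base64-encoded, so the user ID needs the same treatment as the key. The `-n` flag matters; without it a trailing newline gets encoded into the value:

```shell
# Kubernetes Secret "data" values must be base64-encoded.
# Encode the CephX user ID; -n keeps a trailing newline out of the value.
echo -n admin | base64
# Quick round-trip check that the encoded value decodes back cleanly:
echo -n YWRtaW4= | base64 -d
```

The first command prints `YWRtaW4=`, which is what goes into the `userID` field of the Secret below; the second prints `admin` back.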
nano csi-rbd-secret.yaml
```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
data:
  userID: YWRtaW4=
  userKey: QVFDTDliVmNEb21I32SHoPxXNGhmRkczTFNtcXM0ZW5VaXlTZEE977==
```
The `clusterID` here must match the fsid used in the ceph-csi ConfigMap; the monitor addresses are looked up from that ConfigMap, so they are not repeated in the StorageClass.
nano rbd-csi-sc.yaml
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 54530b3e-9823-4c84-9c39-a65470e961e8
  pool: rbd
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-publish-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-publish-secret-namespace: default
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
mountOptions:
  - discard
```
```
kubectl create -f csi-rbd-secret.yaml
kubectl create -f rbd-csi-sc.yaml
kubectl get storageclass
NAME      PROVISIONER        AGE
csi-rbd   rbd.csi.ceph.com   15s
```
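The claim below requests a raw block device. For contrast, a regular filesystem-mounted volume from the same StorageClass would look almost identical, except `volumeMode` is `Filesystem` (the default) and the access mode is `ReadWriteOnce`, since an ext4-formatted RBD image can only be mounted on one node at a time. A sketch, with an illustrative claim name:

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-fs-pvc        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce       # filesystem-mode RBD cannot be shared across nodes
  volumeMode: Filesystem  # the default; shown here for contrast
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd
```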
nano raw-block-pvc.yaml
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd
```
```
kubectl create -f raw-block-pvc.yaml
kubectl get pvc
NAME            STATUS   VOLUME
raw-block-pvc   Bound    pvc-fd66b4d6-757d-22e9-8f9e-4f86e2356a59
```
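To actually consume the volume, a pod attaches the claim through `volumeDevices` rather than `volumeMounts`, so the RBD image shows up as an unformatted device node inside the container. A sketch along the lines of the raw-block example shipped in the ceph-csi repo; the pod name and device path are illustrative:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume   # illustrative name
spec:
  containers:
    - name: centos
      image: centos
      command: ["/bin/sleep", "infinity"]
      volumeDevices:
        # Block-mode PVCs appear as a device node, not a mounted filesystem
        - name: data
          devicePath: /dev/xvda     # illustrative device path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
```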