Kubernetes Ceph RBD for dynamic provisioning
In this post I will show you how you can use Ceph RBD for persistent storage on Kubernetes.
Parts of the Kubernetes series
- Part1a: Install K8S with ansible
- Part1b: Install K8S with kubeadm
- Part1c: Install K8S with kubeadm and containerd
- Part1d: Install K8S with kubeadm and allow swap
- Part1e: Install K8S with kubeadm in HA mode
- Part2: Install metal-lb with K8S
- Part2: Install metal-lb with BGP
- Part3: Install Nginx ingress to K8S
- Part4: Install cert-manager to K8S
- Part5a: Use local persistent volume with K8S
- Part5b: Use ceph persistent volume with K8S
- Part5c: Use ceph CSI persistent volume with K8S
- Part5d: Kubernetes CephFS volume with CSI driver
- Part5e: Use Project Longhorn as persistent volume with K8S
- Part5f: Use OpenEBS as persistent volume with K8S
- Part5g: vSphere persistent storage for K8S
- Part6: Kubernetes volume expansion with Ceph RBD CSI driver
- Part7a: Install k8s with IPVS mode
- Part7b: Install k8s with IPVS mode
- Part8: Use Helm with K8S
- Part9: Tillerless helm2 install
- Part10: Kubernetes Dashboard SSO
- Part11: Kuberos for K8S
- Part12: Gangway for K8S
- Part13a: Velero Backup for K8S
- Part13b: How to Backup Kubernetes to git?
- Part14a: K8S Logging And Monitoring
- Part14b: Install Grafana Loki with Helm3
Environment
# kubernetes cluster
192.168.1.41 kubernetes01 # master node
192.168.1.42 kubernetes02 # frontend node
192.168.1.43 kubernetes03 # worker node
192.168.1.44 kubernetes04 # worker node
192.168.1.45 kubernetes05 # worker node
# ceph cluster
192.168.1.31 ceph01
192.168.1.32 ceph02
192.168.1.33 ceph03
Prerequisites
The RBD volume provisioner needs the admin key from Ceph to provision storage. To get the admin key from the Ceph cluster, use this command:
sudo ceph --cluster ceph auth get-key client.admin
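The key returned above goes into the Secret's data field, which must be base64-encoded. One way to produce the encoded value (assuming the standard base64 utility is available on the Ceph node):
sudo ceph --cluster ceph auth get-key client.admin | base64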
nano ceph-admin-secret.yaml
apiVersion: v1
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: kubernetes.io/rbd
I will also create a separate Ceph pool for Kubernetes, with a dedicated client key:
sudo ceph --cluster ceph osd pool create k8s 1024 1024
sudo ceph --cluster ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rwx pool=k8s'
sudo ceph --cluster ceph auth get-key client.k8s
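You can verify the new client and its capabilities before using it; the output should list mon 'allow r' and osd 'allow rwx pool=k8s':
sudo ceph --cluster ceph auth get client.k8s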
nano ceph-secret-k8s.yaml
apiVersion: v1
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2ctS2htOFNSZnRvclJPRk1jdXc9PQ==
kind: Secret
metadata:
  name: ceph-secret-k8s
  namespace: kube-system
type: kubernetes.io/rbd
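Alternatively, kubectl can build an equivalent Secret straight from the raw key and handle the base64 encoding itself; a sketch, assuming the Ceph CLI and kubectl are usable from the same shell:
kubectl create secret generic ceph-secret-k8s \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(sudo ceph --cluster ceph auth get-key client.k8s)" \
  -n kube-system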
# on all kubernetes nodes
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg \
-O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
chmod +r /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
echo deb http://download.proxmox.com/debian/ceph-luminous $(lsb_release -sc) main \
> /etc/apt/sources.list.d/ceph.list
apt-get update
apt-get install ceph-common -y
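Before moving on, it is worth confirming that the client tools are installed and that the kernel rbd module, through which the volumes will be mapped, loads on every node:
ceph --version
sudo modprobe rbd && lsmod | grep rbd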
cat <<EOF > rbd-provisioner.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    # resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: rbd-provisioner
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: "quay.io/external_storage/rbd-provisioner:v1.0.0-k8s1.10"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccountName: rbd-provisioner
EOF
kubectl create -n kube-system -f rbd-provisioner.yaml
kubectl get pods -l app=rbd-provisioner -n kube-system
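If the pod does not reach the Running state, the provisioner's logs usually point at the cause, typically RBAC or monitor connectivity problems:
kubectl logs -n kube-system -l app=rbd-provisioner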
Please check that the quay.io/external_storage/rbd-provisioner:latest image contains the same Ceph version as your Ceph cluster:
docker pull quay.io/external_storage/rbd-provisioner:latest
docker history quay.io/external_storage/rbd-provisioner:latest | grep CEPH_VERSION
# Proxmox Ceph uses Luminous, so I'll use v1.0.0-k8s1.10
docker history quay.io/external_storage/rbd-provisioner:v1.0.0-k8s1.10 | grep CEPH_VERSION
<missing> 13 months ago /bin/sh -c #(nop) ENV CEPH_VERSION=luminous 0B
# on one kubernetes master node
nano k8s-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: k8s
parameters:
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  imageFeatures: layering
  imageFormat: "2"
  monitors: 192.168.1.31.xip.io:6789,192.168.1.32.xip.io:6789,192.168.1.33.xip.io:6789
  pool: k8s
  userId: k8s
  userSecretName: ceph-secret-k8s
provisioner: ceph.com/rbd
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
kubectl apply -f ceph-admin-secret.yaml
kubectl apply -f ceph-secret-k8s.yaml
kubectl apply -f k8s-storage.yaml
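To verify that dynamic provisioning works end to end, create a small test claim against the new storage class; the claim name and size below are arbitrary:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: k8s
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-rbd-claim
The claim should go to Bound within a few seconds, and kubectl get pv will show the dynamically created RBD-backed volume.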