Use local persistent volume with K8S
In this post I will show you how to use a local folder as a persistent volume in Kubernetes.
Parts of the Kubernetes series
- Part1a: Install K8S with ansible
- Part1b: Install K8S with kubeadm
- Part1c: Install K8S with kubeadm and containerd
- Part1d: Install K8S with kubeadm and allow swap
- Part1e: Install K8S with kubeadm in HA mode
- Part2a: Install metal-lb with K8S
- Part2b: Install metal-lb with BGP
- Part3: Install Nginx ingress to K8S
- Part4: Install cert-manager to K8S
- Part5a: Use local persistent volume with K8S
- Part5b: Use ceph persistent volume with K8S
- Part5c: Use ceph CSI persistent volume with K8S
- Part5d: Kubernetes CephFS volume with CSI driver
- Part5e: Use Project Longhorn as persistent volume with K8S
- Part5f: Use OpenEBS as persistent volume with K8S
- Part5g: vSphere persistent storage for K8S
- Part6: Kubernetes volume expansion with Ceph RBD CSI driver
- Part7a: Install k8s with IPVS mode
- Part7b: Install k8s with IPVS mode
- Part8: Use Helm with K8S
- Part9: Tillerless helm2 install
- Part10: Kubernetes Dashboard SSO
- Part11: Kuberos for K8S
- Part12: Gangway for K8S
- Part13a: Velero Backup for K8S
- Part13b: How to Backup Kubernetes to git?
- Part14a: K8S Logging And Monitoring
- Part14b: Install Grafana Loki with Helm3
For a production environment this is not an ideal setup, because the data lives on a single host: if that host dies, your data is lost. For this demo I will use a separate disk to store the PV folders, so you can back up or replicate that disk independently.
Configure the disk
# create a volume group and a logical volume on the dedicated disk
vgcreate local-vg /dev/sdd
lvcreate -l 100%FREE -n local-lv local-vg /dev/sdd
# format it with XFS and mount it persistently
mkfs.xfs -f /dev/local-vg/local-lv
mkdir -p /mnt/local-storage/
mount /dev/local-vg/local-lv /mnt/local-storage
echo "/dev/local-vg/local-lv /mnt/local-storage xfs defaults 0 0" >> /etc/fstab
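As a quick sanity check, verify that the new filesystem is mounted where the PV folders will live:

df -h /mnt/local-storage
mount | grep local-storage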
Without a provisioner, you have to create every PV and its PVC manually. First create a folder for the volume, then the PersistentVolume and PersistentVolumeClaim objects:
mkdir /mnt/local-storage/pv-tst
cat pv-tst.yaml
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-tst
spec:
  capacity:
    storage: 1Gi
  local:
    path: /mnt/local-storage/pv-tst
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kubernetes03.devopstales.intra
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-tst
  namespace: tst
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pv-tst
  storageClassName: local
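Apply the manifest and check that the claim binds to the volume (the tst namespace is assumed to exist; create it first if it does not):

kubectl create namespace tst    # only if it does not exist yet
kubectl apply -f pv-tst.yaml
kubectl get pv pv-tst
kubectl get pvc pv-tst -n tst

As a minimal sketch, a pod can then mount the claim like any other PVC; the pod name below is hypothetical, and the scheduler will place it on kubernetes03 because of the PV's node affinity:

---
apiVersion: v1
kind: Pod
metadata:
  name: pv-tst-pod          # hypothetical name, for illustration
  namespace: tst
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data      # the PV's folder shows up here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pv-tst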
Add an automated hostpath-provisioner
This is a Persistent Volume Claim (PVC) provisioner for Kubernetes. It dynamically provisions hostPath volumes to provide storage for PVCs.
git clone https://github.com/torchbox/k8s-hostpath-provisioner
cd k8s-hostpath-provisioner
kubectl apply -f deployment.yaml
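Check that the provisioner pod started; its namespace depends on the manifest, so search across all namespaces:

kubectl get pods --all-namespaces | grep hostpath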
nano local-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: auto-local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: torchbox.com/hostpath
parameters:
  pvDir: /mnt/local-storage
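Apply the StorageClass and verify that it is registered as the default:

kubectl apply -f local-sc.yaml
kubectl get storageclass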
Test the provisioner by creating a new PVC:
cat testpvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
kubectl create -f testpvc.yaml
kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
testpvc   Bound    pvc-145c785e-ab83-11e7-9432-4201ac1fd019   50Gi       RWX            auto-local     10s
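When you are done testing, delete the claim. Whether the provisioner also removes the backing folder under /mnt/local-storage depends on its reclaim policy, so check the directory afterwards:

kubectl delete -f testpvc.yaml
kubectl get pv
ls /mnt/local-storage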