Configure block storage CSI provisioner
To use persistent volumes in a cluster, storage and PersistentVolume objects generally must be configured manually. In Kubernetes Engine, you can set up a CSI (Container Storage Interface) Provisioner to use KakaoCloud Block Storage as persistent volumes. Once the CSI Provisioner is configured in the cluster, you can create a persistent volume simply by creating a PersistentVolumeClaim.
Here is how to configure the CSI Provisioner.
Bare Metal Server node pools do not support the CSI Provisioner.
Step 1. Perform prerequisites
Before configuring the CSI Provisioner, the following prerequisites are required. These steps are performed only once per cluster.
- Create a cluster for dynamic provisioning of PVs (refer to the Create cluster documentation).
- Configure kubectl control of the cluster so that you can send dynamic provisioning commands for PVs.
Step 2. Configure dynamic volume provisioning
Deploy KakaoCloud's CSI Provisioner to configure dynamic provisioning of persistent volumes via PVC (PersistentVolumeClaim). This can be done using either a YAML file or Helm.
If you deploy the block storage CSI Provisioner after creating a Multi-AZ cluster whose node pools run in different AZs in the kr-central-2 region, a PersistentVolumeClaim is created in each AZ where the node pools are running (e.g., kr-central-2-a, kr-central-2-b).
Deploy CSI Provisioner using YAML file
Run the following command in the terminal to install the CSI Provisioner.
kubectl --kubeconfig=$KUBE_CONFIG apply -f https://raw.githubusercontent.com/kakaoicloud-guide/kubernetes-engine/main/guide-samples/dynamicPV/cinder-csi.yaml
Deploy CSI Provisioner using Helm
Set up dynamic volume provisioning using Helm, the Kubernetes package management tool.
- Before setting up dynamic volume provisioning, install the Helm client. For detailed instructions on installing Helm on different operating systems, refer to the Helm official documentation > Installing Helm.
- Run the following command to add the official Helm chart repository.
Add Helm chart repository command:
$ helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack
"cpo" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "cpo" chart repository
Update Complete. ⎈Happy Helming!⎈
- Enter the following command in the terminal to deploy the CSI Provisioner to the cluster. This deploys resources such as the namespace and service for the CSI Provisioner in one step.
Helm CSI Provisioner deployment command:
$ helm install cinder-csi cpo/openstack-cinder-csi \
--version 2.3.0 \
--set secret.enabled=true \
--set secret.name=cloud-config \
--namespace kube-system
NAME: cinder-csi
LAST DEPLOYED: Mon Mar 13 14:05:04 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Use the following storageClass csi-cinder-sc-retain and csi-cinder-sc-delete only for RWO volumes.
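As the chart notes above indicate, both storage classes are intended for RWO volumes; they differ only in reclaim policy. As a sketch, a claim whose underlying block volume should survive deletion of the claim can reference the retain class (the file name and claim name pvc-retain-example are illustrative, not part of the deployment):

```yaml
# pvc-retain-example.yaml (illustrative name)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-retain-example
spec:
  accessModes:
    - ReadWriteOnce        # per the chart notes, use these classes only for RWO volumes
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-cinder-sc-retain   # Retain keeps the PV and block volume after the PVC is deleted
```

With csi-cinder-sc-retain, deleting the PVC leaves the PersistentVolume in the Released state, so the data on the block volume can still be recovered or reattached manually.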
Step 3. Verify CSI Provisioner deployment
After deploying the CSI Provisioner, run the following command to verify that the resources have been successfully created.
kubectl --kubeconfig=$KUBE_CONFIG get ds,deploy -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/openstack-cinder-csi-nodeplugin 1 1 1 1 1 <none> 3m5s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/openstack-cinder-csi-controllerplugin 1/1 1 1 3m5s
Step 4. (Optional) Deploy CSI Provisioner to specific AZ in Multi-AZ cluster
When node pools in a cluster created in the Multi-AZ supported kr-central-2 region run in different AZs, you can create a PersistentVolumeClaim in a specific AZ.
Specify the AZ information for the PersistentVolumeClaim in the StorageClass and apply the modified StorageClass YAML file.
- After deploying the CSI Provisioner, verify that the StorageClass resources have been created.
Check StorageClass resource command:
kubectl --kubeconfig=$KUBE_CONFIG get StorageClass -n kube-system
Result
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION
csi-cinder-sc-delete cinder.csi.openstack.org Delete Immediate true
csi-cinder-sc-retain cinder.csi.openstack.org Retain Immediate true
- Add the allowedTopologies field to the StorageClass, and configure matchLabelExpressions under allowedTopologies. In the YAML file, check the following configuration items:
  - allowedTopologies: set matchLabelExpressions
  - matchLabelExpressions under allowedTopologies: set key: topology.cinder.csi.openstack.org/zone and, under values, enter the specific AZ where the PVC will be created (kr-central-2-a or kr-central-2-b)
- Download the following YAML file locally to modify and redeploy the StorageClass with the configuration above.
- You need to add the allowedTopologies field to both StorageClass resources (csi-cinder-sc-delete, csi-cinder-sc-retain) that were created when the CSI Provisioner was deployed.
Modify StorageClass YAML for creating a PVC in a specific AZ in a Multi-AZ cluster:
# edit-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cinder-sc-delete
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.cinder.csi.openstack.org/zone
        values:
          - "{kr-central-2-a or kr-central-2-b}"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cinder-sc-retain
provisioner: cinder.csi.openstack.org
reclaimPolicy: Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.cinder.csi.openstack.org/zone
        values:
          - "{kr-central-2-a or kr-central-2-b}"
- Run the following command to apply the modified StorageClass YAML file.
  - When applying the YAML file, replace {YAML file name} with the actual name of the file saved locally.
Deploy modified StorageClass command:
kubectl --kubeconfig=$KUBE_CONFIG apply -f {YAML file name}.yaml
Result
storageclass.storage.k8s.io/csi-cinder-sc-delete configured
storageclass.storage.k8s.io/csi-cinder-sc-retain configured
- Verify that the StorageClass resources have been successfully modified.
Step 5. Test dynamic provisioning of PersistentVolume
Test the dynamic creation of PV (PersistentVolume) by applying a PVC (PersistentVolumeClaim), and confirm that the volume is attached to a pod.
Apply PVC and verify PV creation
- Deploy the following YAML file to create a PVC (PersistentVolumeClaim).
Example PVC creation file:
# pvc-test.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-cinder-sc-delete
Deploy PVC command:
kubectl --kubeconfig=$KUBE_CONFIG apply -f pvc-test.yaml
- Verify that the PV (PersistentVolume) has been dynamically created based on the PVC (PersistentVolumeClaim).
Check PV and PVC status command:
kubectl --kubeconfig=$KUBE_CONFIG get pv,pvc
Result
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-c2456546-ddc2-4bd6-9d79-35f6ba53e7fc 10Gi RWO Delete Bound default/pvc-test csi-cinder-sc-delete 16s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pvc-test Bound pvc-c2456546-ddc2-4bd6-9d79-35f6ba53e7fc 10Gi RWO csi-cinder-sc-delete 16s
Create pod using PVC
Here is an example of deploying a pod that uses the previously created PV.
- Deploy the following YAML file to create the pod.
Example pod deployment file:
# task-pv-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: pvc-test
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
Pod deployment command:
kubectl --kubeconfig=$KUBE_CONFIG apply -f task-pv-pod.yaml
- Run the following command to check whether the PV has been successfully mounted to the pod.
Check pod command:
kubectl --kubeconfig=$KUBE_CONFIG get pods                       # Check the pod
kubectl --kubeconfig=$KUBE_CONFIG exec -ti task-pv-pod -- df -h  # Check the container of the retrieved pod
Result
NAME READY STATUS RESTARTS AGE
task-pv-pod 1/1 Running 0 53s
Filesystem Size Used Avail Use% Mounted on
/dev/vdb 9.8G 24K 9.8G 1% /usr/share/nginx/html
- Verify that the pod has been created successfully and that the PV is mounted.
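Because both storage classes report ALLOWVOLUMEEXPANSION as true, the test volume can later be enlarged by raising the claim's storage request and re-applying the manifest. The sketch below reuses pvc-test.yaml from the steps above with the request raised to 20Gi (an illustrative size; volumes can be grown but not shrunk):

```yaml
# pvc-test.yaml with the storage request raised from 10Gi to 20Gi
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi   # was 10Gi; expansion is allowed, shrinking is not
  storageClassName: csi-cinder-sc-delete
```

After applying the change, the new capacity should eventually appear in kubectl get pvc and in df -h inside the pod.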