Configure block storage CSI provisioner
To use persistent volumes in a cluster, you typically need to configure both storage and a PersistentVolume object.
In Kubernetes Engine, you can configure the CSI (Container Storage Interface) provisioner to use KakaoCloud Block Storage as persistent storage. Once the CSI provisioner is configured, you can simply create a PersistentVolumeClaim to dynamically provision persistent volumes.
The CSI provisioner is not supported on Bare Metal Server node pool types.
Step 1. Perform prerequisites
Before configuring the CSI provisioner, the following tasks must be completed. These prerequisites need to be performed only once per cluster.
- Refer to Create cluster to create a cluster for dynamic PV provisioning.
- Configure `kubectl` to manage the cluster. See Configure kubectl control for details.
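Once `kubectl` is configured, a quick sanity check confirms that it can reach the cluster before you deploy anything:

```bash
# Any read-only command works; listing nodes is a common smoke test
kubectl --kubeconfig=$KUBE_CONFIG get nodes
```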
Step 2. Configure dynamic volume provisioning
To provision persistent volumes dynamically via PersistentVolumeClaims (PVC), deploy KakaoCloud's CSI provisioner.
You can deploy it using a YAML file or with Helm.
If you deploy the block storage CSI provisioner to a multi-AZ cluster where node pools are running in different AZs (e.g., kr-central-2-a, kr-central-2-b), a PersistentVolumeClaim is created for each AZ where a node is running.
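Before deploying, you can check which AZ each node runs in by listing nodes with the zone topology label (a quick check, assuming your nodes carry the standard topology.kubernetes.io/zone label):

```bash
# Show each node together with its availability zone label
kubectl --kubeconfig=$KUBE_CONFIG get nodes -L topology.kubernetes.io/zone
```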
Deploy CSI provisioner using YAML file
Run the following command in your terminal to install the CSI provisioner:

```bash
kubectl --kubeconfig=$KUBE_CONFIG apply -f https://raw.githubusercontent.com/kakaoenterprise/kakaocloud-tutorials/refs/heads/k8se-public-guides/dynamicPV/cinder-csi.yaml
```
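If you want to wait for the provisioner to come up before continuing, one option is to watch the rollouts of the workloads it creates (names as shown in Step 3):

```bash
# Wait for the CSI controller plugin Deployment to finish rolling out
kubectl --kubeconfig=$KUBE_CONFIG rollout status deployment/openstack-cinder-csi-controllerplugin -n kube-system

# Wait for the node plugin DaemonSet as well
kubectl --kubeconfig=$KUBE_CONFIG rollout status daemonset/openstack-cinder-csi-nodeplugin -n kube-system
```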
Deploy CSI provisioner using Helm
Use Helm, the Kubernetes package manager, to configure dynamic volume provisioning.
- First, install the Helm client. Refer to Helm documentation > Install Helm for OS-specific instructions.
- Add the official Helm chart repository:

  ```bash
  helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack
  ```

  Output:

  ```
  "cpo" has been added to your repositories
  ```

  Then update the repository:

  ```bash
  helm repo update
  ```

  Output:

  ```
  ...
  Successfully got an update from the "cpo" chart repository
  ```

- Deploy the CSI provisioner to your cluster:

  ```bash
  helm install cinder-csi cpo/openstack-cinder-csi \
    --version 2.3.0 \
    --set secret.enabled=true \
    --set secret.name=cloud-config \
    --namespace kube-system
  ```

  Output:

  ```
  STATUS: deployed
  ```
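To confirm the release installed cleanly, you can also query Helm directly (standard Helm commands, not specific to this chart):

```bash
# Show the release status, revision, and notes
helm status cinder-csi -n kube-system

# List all releases deployed to kube-system
helm list -n kube-system
```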
Use the csi-cinder-sc-retain and csi-cinder-sc-delete StorageClasses only for RWO (ReadWriteOnce) volumes.
Step 3. Verify CSI provisioner deployment
To verify that the CSI provisioner was deployed correctly, run:

```bash
kubectl --kubeconfig=$KUBE_CONFIG get ds,deploy -n kube-system
```

Expected output:

```
daemonset.apps/openstack-cinder-csi-nodeplugin ...
deployment.apps/openstack-cinder-csi-controllerplugin ...
```
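You can also confirm that the CSI driver object registered with the cluster; its name should match the provisioner field (cinder.csi.openstack.org) used by the StorageClasses in Step 4:

```bash
# The Cinder CSI driver should be listed as cinder.csi.openstack.org
kubectl --kubeconfig=$KUBE_CONFIG get csidrivers
```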
Step 4. (Optional) Deploy CSI provisioner to a specific AZ in multi-AZ cluster
In a multi-AZ cluster, you can target PersistentVolumeClaims to a specific AZ by editing the StorageClass with the appropriate `allowedTopologies`.
- Verify that the StorageClass resources were created after deploying the CSI provisioner:

  ```bash
  kubectl --kubeconfig=$KUBE_CONFIG get StorageClass -n kube-system
  ```

- Add the `allowedTopologies` field to your StorageClass YAML. Example targeting a specific AZ:

  ```yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-cinder-sc-delete
  provisioner: cinder.csi.openstack.org
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  allowedTopologies:
    - matchLabelExpressions:
        - key: topology.cinder.csi.openstack.org/zone
          values:
            - kr-central-2-a
  ---
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-cinder-sc-retain
  provisioner: cinder.csi.openstack.org
  reclaimPolicy: Retain
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  allowedTopologies:
    - matchLabelExpressions:
        - key: topology.cinder.csi.openstack.org/zone
          values:
            - kr-central-2-a
  ```

- Apply the modified YAML:

  ```bash
  kubectl --kubeconfig=$KUBE_CONFIG apply -f edit-storageclass.yaml
  ```
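To confirm the change took effect, you can print the full StorageClass definition and check the `allowedTopologies` field:

```bash
# Inspect the updated StorageClass; allowedTopologies should list the target AZ
kubectl --kubeconfig=$KUBE_CONFIG get storageclass csi-cinder-sc-delete -o yaml
```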
Step 5. Test dynamic provisioning of persistent volume
Create a PVC (PersistentVolumeClaim) and confirm that a PV (PersistentVolume) is dynamically created and mounted to a pod.
Apply PVC and verify PV creation
- Create a PVC using the following YAML:

  ```yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvc-test
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
    storageClassName: csi-cinder-sc-delete
  ```

  Apply the PVC:

  ```bash
  kubectl --kubeconfig=$KUBE_CONFIG apply -f pvc-test.yaml
  ```
- Check the dynamically created PV:

  ```bash
  kubectl --kubeconfig=$KUBE_CONFIG get pv,pvc
  ```

  Expected output:

  ```
  persistentvolume/pvc-xxxxx... 10Gi RWO Delete Bound pvc-test csi-cinder-sc-delete ...
  persistentvolumeclaim/pvc-test Bound ...
  ```
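If the PVC stays in Pending instead of Bound, the claim's events usually explain why (for example, no schedulable node in the requested AZ):

```bash
# Inspect provisioning and binding events for the test claim
kubectl --kubeconfig=$KUBE_CONFIG describe pvc pvc-test
```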
Create pod using PVC
- Deploy a pod that mounts the PVC:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: task-pv-pod
  spec:
    volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: pvc-test
    containers:
      - name: task-pv-container
        image: nginx
        ports:
          - containerPort: 80
            name: "http-server"
        volumeMounts:
          - mountPath: "/usr/share/nginx/html"
            name: task-pv-storage
  ```

  Apply the pod:

  ```bash
  kubectl --kubeconfig=$KUBE_CONFIG apply -f task-pv-pod.yaml
  ```
- Verify that the volume is mounted:

  ```bash
  kubectl --kubeconfig=$KUBE_CONFIG get pods
  kubectl --kubeconfig=$KUBE_CONFIG exec -ti task-pv-pod -- df -h
  ```

  Expected output:

  ```
  /dev/vdb 9.8G 24K 9.8G 1% /usr/share/nginx/html
  ```

This confirms that the pod was successfully created and the PV is properly mounted.
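When you are done testing, you can remove the test resources. Because csi-cinder-sc-delete uses reclaimPolicy: Delete, deleting the PVC also removes the dynamically provisioned volume:

```bash
# Delete the test pod first so the volume can be detached, then delete the claim
kubectl --kubeconfig=$KUBE_CONFIG delete -f task-pv-pod.yaml
kubectl --kubeconfig=$KUBE_CONFIG delete -f pvc-test.yaml

# The PV should disappear automatically (reclaimPolicy: Delete)
kubectl --kubeconfig=$KUBE_CONFIG get pv
```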