Configure block storage CSI provisioner

To use persistent volumes in a cluster, you typically need to configure both storage and a PersistentVolume object.
In Kubernetes Engine, you can configure the CSI (Container Storage Interface) provisioner to use KakaoCloud Block Storage as persistent storage. Once the CSI provisioner is configured, you can simply create a PersistentVolumeClaim to dynamically provision persistent volumes.

info

The CSI provisioner is not supported on Bare Metal Server node pool types.

Step 1. Perform prerequisites

Before configuring the CSI provisioner, the following tasks must be completed. These prerequisites need to be performed only once per cluster.

  1. Refer to Create cluster to create a cluster for dynamic PV provisioning.

  2. Configure kubectl to manage the cluster. See Configure kubectl control for details.
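
The commands in this guide reference the cluster's kubeconfig through a $KUBE_CONFIG environment variable. A minimal sketch of setting it up (the file path is illustrative; use the kubeconfig file downloaded for your cluster):

Set kubeconfig variable
# Illustrative path: point this at the kubeconfig file for your cluster
export KUBE_CONFIG=$HOME/.kube/kakaocloud-cluster-kubeconfig.yaml
kubectl --kubeconfig=$KUBE_CONFIG get nodes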

Step 2. Configure dynamic volume provisioning

To provision persistent volumes dynamically via PersistentVolumeClaims (PVC), deploy KakaoCloud's CSI provisioner.
You can deploy it using a YAML file or with Helm.

info

If you deploy the block storage CSI provisioner to a multi-AZ cluster where node pools run in different AZs (e.g., kr-central-2-a, kr-central-2-b), a PersistentVolume can be provisioned in any AZ where a node is running. To restrict volumes to a specific AZ, see Step 4.
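
To see which AZs the nodes in your cluster are running in, you can list the standard zone label on each node (this uses the well-known topology.kubernetes.io/zone label; the CSI driver's own zone key appears in Step 4):

Check node availability zones
kubectl --kubeconfig=$KUBE_CONFIG get nodes -L topology.kubernetes.io/zone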

Deploy CSI provisioner using YAML file

Run the following command in your terminal to install the CSI provisioner:

Apply deployment
kubectl --kubeconfig=$KUBE_CONFIG apply -f https://raw.githubusercontent.com/kakaoenterprise/kakaocloud-tutorials/refs/heads/k8se-public-guides/dynamicPV/cinder-csi.yaml
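
If you want to wait until the controller is ready before proceeding, you can watch the rollout. The deployment name and namespace below are assumed to match the resources verified in Step 3:

Wait for controller rollout
kubectl --kubeconfig=$KUBE_CONFIG rollout status deployment/openstack-cinder-csi-controllerplugin -n kube-system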

Deploy CSI provisioner using Helm

Use Helm, the Kubernetes package manager, to configure dynamic volume provisioning.

  1. First, install the Helm client. Refer to Helm documentation > Install Helm for OS-specific instructions.

  2. Add the official Helm chart repository:

    Add Helm chart repository
    helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack
    Output
    "cpo" has been added to your repositories
    Update repository
    helm repo update
    Output
    ...Successfully got an update from the "cpo" chart repository
  3. Deploy the CSI provisioner to your cluster:

    Deploy CSI provisioner using Helm
    helm install cinder-csi cpo/openstack-cinder-csi \
    --version 2.3.0 \
    --set secret.enabled=true \
    --set secret.name=cloud-config \
    --namespace kube-system
    Output
    STATUS: deployed
    Use the following storageClass csi-cinder-sc-retain and csi-cinder-sc-delete only for RWO volumes.
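
To confirm that the release was installed, you can list the Helm releases in the kube-system namespace (a quick sanity check using the release name from the command above):

Check Helm release
helm list -n kube-system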

Step 3. Verify CSI provisioner deployment

To verify that the CSI provisioner was deployed correctly, run:

Check resources
kubectl --kubeconfig=$KUBE_CONFIG get ds,deploy -n kube-system
Expected output
daemonset.apps/openstack-cinder-csi-nodeplugin ...
deployment.apps/openstack-cinder-csi-controllerplugin ...
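
You can also confirm that the CSI driver object is registered with the cluster. The driver name shown below is inferred from the provisioner cinder.csi.openstack.org used in the StorageClass examples later in this guide:

Check CSI driver registration
kubectl --kubeconfig=$KUBE_CONFIG get csidriver
Expected output
cinder.csi.openstack.org ...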

Step 4. (Optional) Deploy CSI provisioner to a specific AZ in multi-AZ cluster

In a multi-AZ cluster, you can pin PersistentVolumeClaims to a specific AZ by adding the appropriate allowedTopologies constraint to the StorageClass.

  1. Verify that the StorageClass resources were created after deploying the CSI provisioner:

    Check StorageClass
    kubectl --kubeconfig=$KUBE_CONFIG get storageclass
  2. Add the allowedTopologies field to your StorageClass YAML:

    Example: StorageClass targeting specific AZ
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-cinder-sc-delete
    provisioner: cinder.csi.openstack.org
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    allowedTopologies:
      - matchLabelExpressions:
          - key: topology.cinder.csi.openstack.org/zone
            values:
              - kr-central-2-a
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-cinder-sc-retain
    provisioner: cinder.csi.openstack.org
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    allowedTopologies:
      - matchLabelExpressions:
          - key: topology.cinder.csi.openstack.org/zone
            values:
              - kr-central-2-a
  3. Apply the modified YAML:

    Apply modified StorageClass
    kubectl --kubeconfig=$KUBE_CONFIG apply -f edit-storageclass.yaml
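
Note that most StorageClass fields are immutable after creation, so if the classes already exist you may need to delete them (kubectl delete storageclass <name>) and re-apply rather than update them in place. To check that the topology restriction took effect:

Verify StorageClass topology
kubectl --kubeconfig=$KUBE_CONFIG describe storageclass csi-cinder-sc-delete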

Step 5. Test dynamic provisioning of persistent volume

Create a PVC (PersistentVolumeClaim) and confirm that a PV (PersistentVolume) is dynamically created and mounted to a pod.

Apply PVC and verify PV creation

  1. Create a PVC using the following YAML:

    PVC example
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-test
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: csi-cinder-sc-delete
    Apply PVC
    kubectl --kubeconfig=$KUBE_CONFIG apply -f pvc-test.yaml
  2. Check the dynamically created PV:

    Check PV and PVC
    kubectl --kubeconfig=$KUBE_CONFIG get pv,pvc
    Expected output
    persistentvolume/pvc-xxxxx...   10Gi  RWO  Delete  Bound  pvc-test  csi-cinder-sc-delete ...
    persistentvolumeclaim/pvc-test Bound ...
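
If the PVC stays in a Pending state, its events usually explain why (for example, no node available in the allowed topology):

Inspect PVC events
kubectl --kubeconfig=$KUBE_CONFIG describe pvc pvc-test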

Create pod using PVC

  1. Deploy a pod that mounts the PVC:

    Pod example
    apiVersion: v1
    kind: Pod
    metadata:
      name: task-pv-pod
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: pvc-test
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage
    Apply pod
    kubectl --kubeconfig=$KUBE_CONFIG apply -f task-pv-pod.yaml
  2. Verify that the volume is mounted:

    Check pod and mount
    kubectl --kubeconfig=$KUBE_CONFIG get pods
    kubectl --kubeconfig=$KUBE_CONFIG exec -ti task-pv-pod -- df -h
    Expected output
    /dev/vdb   9.8G   24K   9.8G   1% /usr/share/nginx/html
  3. This confirms that the pod was successfully created and the PV is properly mounted.
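
When you are finished testing, you can remove the test resources. Because csi-cinder-sc-delete uses the Delete reclaim policy, deleting the PVC also removes the dynamically provisioned PV and its backing block storage volume:

Clean up test resources
kubectl --kubeconfig=$KUBE_CONFIG delete pod task-pv-pod
kubectl --kubeconfig=$KUBE_CONFIG delete pvc pvc-test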