Configure block storage CSI provisioner

To use persistent volumes in a cluster, you generally have to provision the storage and create PersistentVolume objects yourself. In Kubernetes Engine, you can set up a CSI (Container Storage Interface) Provisioner to use KakaoCloud Block Storage as persistent volumes. Once the CSI Provisioner is configured in the cluster, you can create a persistent volume simply by creating a PersistentVolumeClaim.
Here is how to configure the CSI Provisioner.

info

Bare Metal Server node pools do not support the CSI Provisioner.

Step 1. Perform prerequisites

Before configuring the CSI Provisioner, the following prerequisites are required. These steps are performed only once per cluster.

  1. Create a cluster for dynamic provisioning of PVs. (Refer to the Create cluster documentation)

  2. Set up kubectl access to the cluster so that you can run the dynamic provisioning commands for PVs, as in the sketch below.
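
    The commands in the following steps reference the cluster's kubeconfig through the $KUBE_CONFIG environment variable. A minimal sketch, assuming the kubeconfig file has already been downloaded locally (the path below is only an example):

    Set kubeconfig environment variable command
    export KUBE_CONFIG=$HOME/.kube/my-cluster-kubeconfig.yaml   # example path; point this at your downloaded kubeconfig file
    kubectl --kubeconfig=$KUBE_CONFIG get nodes                 # quick connectivity check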

Step 2. Configure dynamic volume provisioning

Deploy KakaoCloud's CSI Provisioner to configure dynamic provisioning of persistent volumes via PVC (PersistentVolumeClaim). This can be done using either a YAML file or Helm.

info

If you deploy the block storage CSI Provisioner after creating a Multi-AZ cluster where node pools run in different AZs in the kr-central-2 region, a PersistentVolumeClaim will be created in each AZ where the node pools are running (e.g., kr-central-2-a, kr-central-2-b).

Deploy CSI Provisioner using YAML file

Run the following command in the terminal to install the CSI Provisioner.

Deployment command
kubectl --kubeconfig=$KUBE_CONFIG apply -f https://raw.githubusercontent.com/kakaoicloud-guide/kubernetes-engine/main/guide-samples/dynamicPV/cinder-csi.yaml
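
If you prefer to review the manifest before applying it, you can download the same file first and apply the local copy. This is only an alternative to the one-line command above; the file name comes from the URL:

Download and apply manifest command
curl -sLO https://raw.githubusercontent.com/kakaoicloud-guide/kubernetes-engine/main/guide-samples/dynamicPV/cinder-csi.yaml
kubectl --kubeconfig=$KUBE_CONFIG apply -f cinder-csi.yaml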

Deploy CSI Provisioner using Helm

Set up dynamic volume provisioning using Helm, the Kubernetes package management tool.

  1. Before setting up dynamic volume provisioning, install the Helm client. For detailed instructions on installing Helm for different operating systems, refer to the Helm official documentation > Installing Helm.

  2. Run the following command to add the official Helm chart repository.

    Add Helm chart repository command
    $ helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack

    "cpo" has been added to your repositories

    $ helm repo update

    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "cpo" chart repository
    Update Complete. ⎈Happy Helming!
  3. Enter the following command in the terminal to deploy the CSI Provisioner to the cluster. This will deploy resources like the namespace and service for the CSI Provisioner in one step.

    Helm CSI Provisioner deployment command
    $ helm install cinder-csi cpo/openstack-cinder-csi \
    --version 2.3.0 \
    --set secret.enabled=true \
    --set secret.name=cloud-config \
    --namespace kube-system
    NAME: cinder-csi
    LAST DEPLOYED: Mon Mar 13 14:05:04 2023
    NAMESPACE: kube-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Use the following storageClass csi-cinder-sc-retain and csi-cinder-sc-delete only for RWO volumes.
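
    If you want to double-check the Helm release itself, the standard Helm status commands work as usual. These are general Helm commands, not specific to this chart:

    Check Helm release command
    $ helm list -n kube-system
    $ helm status cinder-csi -n kube-system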

Step 3. Verify CSI Provisioner deployment

After deploying the CSI Provisioner, run the following command to verify that the resources have been successfully created.

Check resources command
kubectl --kubeconfig=$KUBE_CONFIG get ds,deploy -n kube-system
Execution result
NAME                                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/openstack-cinder-csi-nodeplugin    1         1         1       1            1           <none>          3m5s

NAME                                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/openstack-cinder-csi-controllerplugin    1/1     1            1           3m5s
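
If either resource is not yet ready, inspecting the individual CSI pods is a reasonable next step. The grep filter below simply matches the resource names shown above:

Check CSI pods command
kubectl --kubeconfig=$KUBE_CONFIG get pods -n kube-system | grep cinder-csi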

Step 4. (Optional) Deploy CSI Provisioner to specific AZ in Multi-AZ cluster

When node pools in a cluster created in the Multi-AZ supported kr-central-2 region run in different AZs, you can create a PersistentVolumeClaim in a specific AZ.
Specify the AZ information for the PersistentVolumeClaim in the StorageClass and apply the modified StorageClass YAML file.

  1. After deploying the CSI Provisioner, verify that the StorageClass resource has been created.

    Check StorageClass resource command
    kubectl --kubeconfig=$KUBE_CONFIG get StorageClass -n kube-system
    Execution result
    NAME                   PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION
    csi-cinder-sc-delete   cinder.csi.openstack.org   Delete          Immediate           true
    csi-cinder-sc-retain   cinder.csi.openstack.org   Retain          Immediate           true
  2. Add the allowedTopologies field to the StorageClass, and configure the matchLabelExpressions under allowedTopologies. In the YAML file, check the following configuration items:

    • For matchLabelExpressions, set the key to topology.cinder.csi.openstack.org/zone and enter the specific AZ where the PVC will be created.
    • allowedTopologies: matchLabelExpressions
    • matchLabelExpressions under allowedTopologies:
      - key: topology.cinder.csi.openstack.org/zone
        values:
        - kr-central-2-a or kr-central-2-b
  3. Download the following YAML file locally to modify and redeploy the StorageClass with the configuration from step 2.

    • You need to add the allowedTopologies field to both StorageClass resources (csi-cinder-sc-delete, csi-cinder-sc-retain) that were created after deploying the CSI Provisioner.
    Modify StorageClass YAML for creating PVC in specific AZ in Multi-AZ cluster
    # edit-storageclass.yaml

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-cinder-sc-delete
    provisioner: cinder.csi.openstack.org
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    allowedTopologies:
    - matchLabelExpressions:
      - key: topology.cinder.csi.openstack.org/zone
        values:
        - "{kr-central-2-a or kr-central-2-b}"
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-cinder-sc-retain
    provisioner: cinder.csi.openstack.org
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    allowedTopologies:
    - matchLabelExpressions:
      - key: topology.cinder.csi.openstack.org/zone
        values:
        - "{kr-central-2-a or kr-central-2-b}"
  4. Run the following command to apply the modified StorageClass YAML file.

    • When applying the YAML file, replace the filename with the actual YAML file name saved locally.

    Deploy modified StorageClass command
    kubectl --kubeconfig=$KUBE_CONFIG apply -f {YAML file name}.yaml   
    Result
    storageclass.storage.k8s.io/csi-cinder-sc-delete configured
    storageclass.storage.k8s.io/csi-cinder-sc-retain configured
  5. Verify that the StorageClass resources have been successfully modified, as shown in the example below.
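
    One way to check is to print the full StorageClass objects and confirm that the allowedTopologies field now appears. This is a plain kubectl read, shown here as a sketch:

    Check modified StorageClass command
    kubectl --kubeconfig=$KUBE_CONFIG get storageclass csi-cinder-sc-delete -o yaml
    kubectl --kubeconfig=$KUBE_CONFIG get storageclass csi-cinder-sc-retain -o yaml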

Step 5. Test dynamic provisioning of PersistentVolume

Test the dynamic creation of PV (PersistentVolume) by applying a PVC (PersistentVolumeClaim), and confirm that the volume is attached to a pod.

Apply PVC and verify PV creation

  1. Deploy the following YAML file to create a PVC (PersistentVolumeClaim).

    Example PVC creation file
    # pvc-test.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-test
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: csi-cinder-sc-delete
    Deploy PVC
    kubectl --kubeconfig=$KUBE_CONFIG apply -f pvc-test.yaml
  2. Verify that the PV (PersistentVolume) has been dynamically created based on the PVC (PersistentVolumeClaim).

    Check PV and PVC status command
    kubectl --kubeconfig=$KUBE_CONFIG get pv,pvc
    Result
    NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS           REASON   AGE
    persistentvolume/pvc-c2456546-ddc2-4bd6-9d79-35f6ba53e7fc   10Gi       RWO            Delete           Bound    default/pvc-test   csi-cinder-sc-delete            16s

    NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
    persistentvolumeclaim/pvc-test    Bound    pvc-c2456546-ddc2-4bd6-9d79-35f6ba53e7fc   10Gi       RWO            csi-cinder-sc-delete   16s
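
    If the PVC remains in Pending instead of Bound, its events usually point to the cause; kubectl describe is the standard way to view them:

    Inspect PVC events command
    kubectl --kubeconfig=$KUBE_CONFIG describe pvc pvc-test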

Create pod using PVC

Here is an example of deploying a pod that uses the previously created PV.

  1. Deploy the following YAML file to create the pod.

    Example pod deployment file
    # task-pv-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: task-pv-pod
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: pvc-test
      containers:
      - name: task-pv-container
        image: nginx
        ports:
        - containerPort: 80
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
    Pod deployment command
    kubectl --kubeconfig=$KUBE_CONFIG apply -f task-pv-pod.yaml
  2. Run the following command to check if the PV has been successfully mounted to the pod.

    Check pod command
    kubectl --kubeconfig=$KUBE_CONFIG get pods                          # check the pod
    kubectl --kubeconfig=$KUBE_CONFIG exec -ti task-pv-pod -- df -h     # check the mounted filesystem inside the pod's container
    Result
    NAME          READY   STATUS    RESTARTS   AGE
    task-pv-pod   1/1     Running   0          53s

    Filesystem   Size   Used   Avail   Use%   Mounted on
    /dev/vdb     9.8G   24K    9.8G    1%     /usr/share/nginx/html
  3. Verify that the pod has been created successfully and that the PV is mounted inside it. An additional write test is shown below.
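
    As an additional check, you can write a file to the mounted path and read it back. This only uses the shell inside the nginx image and the mountPath defined in the pod spec above:

    Write test command
    kubectl --kubeconfig=$KUBE_CONFIG exec -ti task-pv-pod -- sh -c 'echo "hello from the persistent volume" > /usr/share/nginx/html/index.html'
    kubectl --kubeconfig=$KUBE_CONFIG exec -ti task-pv-pod -- cat /usr/share/nginx/html/index.html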