
Setting up Jupyter Notebook environment using Kubeflow

This guide explains the steps to configure a Jupyter Notebook environment using the Kubeflow service on KakaoCloud's Kubernetes platform.

Basic information
  • Estimated time: 30 minutes
  • User environment
    • Recommended OS: macOS, Ubuntu
    • Region: kr-central-2
  • Prerequisites: a VPC and subnet for the cluster network, plus the Kubernetes cluster and file storage created in Prerequisite work below

Before you start

Using KakaoCloud's Kubernetes Engine and Kubeflow, you can establish an efficient foundation for an MLOps environment. In this document, you'll learn how to perform data analysis and model training using Jupyter Notebook, and how to optimize machine learning workflows using various features of Kubeflow.

About this scenario

In this scenario, we guide you through creating Kubeflow on the KakaoCloud console, accessing the dashboard, and creating a Jupyter Notebook instance. The main topics covered in this scenario are:

  • Setting up a Kubernetes cluster and file storage
  • Performing data analysis and model training by creating a Jupyter Notebook

Prerequisite work

As a prerequisite for setting up the Kubeflow environment, create and configure a Kubernetes cluster and file storage.

1. Create Kubernetes cluster

Configure a basic Kubernetes cluster for the Kubeflow environment. This cluster serves as the foundation for deploying various Kubeflow components.

  1. In the KakaoCloud Console > Container Pack > Kubernetes Engine, click [Create cluster].

    Cluster settings
    - Cluster name: kc-handson
    - Kubernetes version: 1.28
    - Cluster network settings: Select a network with an IP range that supports external communication from the created VPC and subnet
    info

    If the cluster's network is a private subnet, nodes in the private subnet cannot communicate with the internet. To enable communication with external resources such as container registries, NAT is required.

    You can use a NAT Instance for NAT communication. For more details, refer to Appendix. NAT instance.

    Node pool settings

    pool-ingress
    - Node pool type: Virtual Machine
    - Instance type: m2a.large
    - Volume type/size: 50GB
    - Node count: 1
    - Autoscale: Disabled

    pool-worker
    - Node pool type: Virtual Machine
    - Instance type: m2a.xlarge
    - Volume type/size: 100GB
    - Node count: 6
    - Autoscale: Disabled

    pool-gpu
    - Node pool type: GPU
    - Instance type: p2i.6xlarge
    - Volume type/size: 100GB
    - Node count: 1
    - Autoscale: Disabled
  2. Ensure that the status of the created node pool is Running.

  3. Follow the steps in Kubectl control setup to configure the kubectl file for the cluster.
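
    After kubectl is configured, you can run a quick check from your terminal to confirm that the cluster is reachable and that the nodes from pool-ingress, pool-worker, and pool-gpu are ready. This is a minimal sketch; the exact node names depend on your environment.

    # Confirm that kubectl can reach the kc-handson cluster
    kubectl cluster-info

    # All nodes created by the three node pools should report a Ready status
    kubectl get nodes -o wide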

2. Create file storage

Create file storage required for data management and storage in Kubeflow. This storage will be used as a Persistent Volume for the notebook instance, ensuring safe storage of data and models. Configure the file storage instance in the same network and subnet as the selected cluster.

  1. In the KakaoCloud Console > Beyond Storage Service > File Storage, click [Create instance].

    - Instance name: kc-handson-fs
    - Volume size: 1TB
    - Network settings: Same as the Kubernetes cluster
    - Subnet settings: Same as the Kubernetes cluster
    - Access control settings: Allow access from all private IPs within the configured network
    - Mount information: handson
  2. Ensure that the status of the created instance changes to Active.


Getting started

The main steps for configuring the Jupyter Notebook environment are as follows.

Step 1. Create Kubeflow

Deploy and configure Kubeflow on the prepared Kubernetes cluster. This process ensures that you can utilize Kubeflow's various features through the initial configuration.

  1. In the KakaoCloud Console > AI Service > Kubeflow menu, click [Create Kubeflow]. Refer to the configuration values below to create Kubeflow.

    Kubeflow settings
    - Kubeflow name: kc-handson
    - Kubeflow version: 1.8
    - Kubeflow service type: Essential+HPT+ServingAPI

    Cluster settings
    - Cluster connection: kc-handson
    - Ingress node pool: pool-ingress
    - Worker node pool: pool-worker
    - CPU node pool: pool-worker
    - GPU node pool: pool-gpu
    - GPU MIG: 1g.10gb - 7 count
    - Default file storage: kc-handson-fs

    Authentication information for users and workloads

    Object storage settings
    - Object storage type: Object Storage or MinIO

    Kubeflow owner settings
    - Owner email account: ${ADMIN_EMAIL} (example@kakaocloud.com)
    - Namespace name: kubeflow-tutorial
    - Namespace file storage: kc-handson-fs

    DB settings
    - DB type: Kubeflow Internal DB
    - Port: 3306
    - Password: ${DB_PASSWORD}
    - Confirm password: ${DB_PASSWORD}

    Domain connection (optional): Enter a valid domain format
  2. Ensure that the created Kubeflow status changes to Active.
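
    Once the status is Active, you can also confirm from your terminal that the Kubeflow components were deployed to the cluster. This is a minimal sketch; it assumes the components run in the standard kubeflow namespace, and the exact pod list depends on the selected service type.

    # Kubeflow system components (assumed to be deployed in the kubeflow namespace)
    kubectl get pods -n kubeflow

    # The owner namespace created above, including its file-storage-backed volume claims
    kubectl get pods,pvc -n kubeflow-tutorial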

Step 2. Access the dashboard

To access the deployed Kubeflow environment, connect to the dashboard. From here, you can manage various Kubeflow resources and configure the Jupyter Notebook environment.

There are two main methods to access the Kubeflow dashboard: via the Load Balancer public IP or by using kubectl port forwarding. The steps below use the Load Balancer public IP; a port-forwarding sketch follows the steps.

  1. In the KakaoCloud Console, go to Load Balancing > Load Balancer.

  2. Find the load balancer named kube_service_{project_id}_{IKE_cluster_name}_ingress-nginx_ingress-nginx-controller created for Kubeflow's Ingress and check its Public IP. If there is no Public IP, assign a new one from the options menu.

    Assign Public IP

  3. Open your browser and access the Public IP of the load balancer on port 80.

    open http://{LB_PUBLIC_IP}
  4. After accessing the dashboard, log in using the owner email account provided during the Kubeflow creation step and the initial password sent to the owner's email.
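
If you prefer the port-forwarding method mentioned above instead of exposing a Public IP, you can forward the ingress controller's service port to your local machine. The namespace and service name below are assumptions based on the load balancer name shown in step 2; adjust them to match your cluster.

    # Forward local port 8080 to the cluster's ingress controller (assumed: ingress-nginx)
    kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80

    # Then open the dashboard locally
    open http://localhost:8080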

Step 3. Create Jupyter Notebook

Through the dashboard, users can create a Jupyter Notebook instance. In this step, you will select the specifications for the notebook and configure the necessary settings.

  1. In the Kubeflow dashboard, click on the Notebooks tab on the left side.

    Create Jupyter Notebook

  2. Navigate to the Notebooks page and click the [+ New Notebook] button at the top right. Refer to the information below to create a new notebook.

  3. For a GPU-based notebook, refer to the following configuration details:

    Name
    - Name: Used to identify the notebook instance in the Kubeflow dashboard
    - Namespace: Kubernetes namespace where the notebook instance will be created

    Docker Image
    - Image: Specify the Docker image to use for the notebook

    CPU / RAM
    - Minimum CPU: The number of CPU cores, specifying the amount of CPU resources the notebook instance will use
    - Minimum Memory Gi: The amount of memory in GiB that the notebook instance will use

    GPUs
    - Number of GPUs: GPU resources to be used by the notebook instance

    Affinity / Tolerations
    - Affinity Config: Select the node pool where the notebook will be created; this specifies the node on which the notebook instance will run
    - Tolerations Group: Allow specific node taint settings
  4. Enter the information for the notebook you want to create. Refer to the example values below.

    - Name: handson
    - Image: kc-kubeflow/jupyter-pyspark-pytorch:v1.8.0.py311.1a
    - Minimum CPU: 2
    - Minimum Memory Gi: 12
    - Number of GPUs: 4
    - GPU Vendor: NVIDIA MIG - 1g.10gb
    - Affinity Config: pool-gpu
  5. Click the [LAUNCH] button to create the notebook.

Step 4. Access Jupyter Notebook

Once the Jupyter Notebook instance is created, you can access it to work on real machine learning projects.

  1. Click the [CONNECT] button next to the created notebook instance to access it.

    Click Connect button

  2. Select the Python3 kernel from the Notebook.

    Click Python3 button

  3. Enter the following example code. After running the code, verify the output message to confirm the results.

    import torch

    def check_gpu_available():
        if torch.cuda.is_available():
            print("GPU is available on this system.")
        else:
            print("GPU is not available on this system.")

    check_gpu_available()
    info

    Unlike in this tutorial, when using a single GPU instance you need to set the environment variable CUDA_VISIBLE_DEVICES to 0, as in the example below.

    import torch
    import os

    def set_cuda_devices():
        os.environ["CUDA_VISIBLE_DEVICES"] = "0"

    def check_gpu_available():
        if torch.cuda.is_available():
            print("GPU is available on the current system.")
        else:
            print("GPU is not available on the current system.")

    set_cuda_devices()
    check_gpu_available()
  4. For notebooks using GPUs, access the terminal within the Notebook and run the nvidia-smi command to check the NVIDIA devices.

    Check NVIDIA devices
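
    To list the individual GPU or MIG devices visible to the notebook, you can also pass the -L option to nvidia-smi. This is a minimal sketch; with the MIG configuration used in this tutorial, each device should appear as a 1g.10gb MIG instance.

    # List every GPU/MIG device visible to the notebook
    nvidia-smi -L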