Kubeflow basic workflow
This tutorial series is designed to help you learn the core features of Kubeflow on KakaoCloud for model development and experimentation.
You will explore key functionalities of Kubeflow—such as Jupyter Notebook, pipeline execution, hyperparameter tuning, and model serving—through practical examples.
This series is ideal for developers new to Kubeflow or those just getting started with MLOps environments. Each hands-on exercise introduces core concepts and usage patterns in a natural, step-by-step manner.
Tutorial structure
📄️ Deploy Jupyter Notebooks on Kubeflow
Build an MLOps environment on KakaoCloud's Kubernetes using the Kubeflow service.
📄️ Predictive modeling in Kubeflow Notebooks
Use a sample dataset for hands-on predictive modeling practice within a Jupyter Notebook environment.
📄️ Predictive model training in Kubeflow Pipelines
Through the pipeline exercise, you will create and run experiments and train a prediction model.
📄️ ML experiment management with Kubeflow TensorBoard
Use TensorBoard within the Kubeflow environment to monitor training metrics.
📄️ Kubeflow hyperparameter tuning
Create an AutoML experiment through a sample hyperparameter tuning exercise.
📄️ Parallel training with Kubeflow MIG
By configuring MIG (Multi-Instance GPU), you can partition GPU resources into multiple instances and use them to build prediction models in both Notebooks and Pipelines.
📄️ Kubeflow model serving API setup
This tutorial demonstrates how to train a model using a sample dataset within a Kubeflow pipeline and expose it as an API.