
Manage nodes

A node is a server on which container applications run. In Kubernetes Engine, nodes are managed in node pools: groups of nodes that share the same instance type.

Create and manage node pools

Here’s how to manage node pools in the Kubernetes Engine service.

Create node pool

After creating a cluster, you can create a node pool.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. In the Cluster menu, select the cluster in which to create a node pool.

  3. On the cluster details page, click the Node pool tab, then click the [Create node pool] button.

  4. In Create node pool, enter the required information and click the [Create] button.


    Item | Description
    Node pool type | Select the type of node pool to create
    - Bare Metal Server type is supported only in the kr-central-2 region
    Basic settings | Set the basic information for the node pool
    - Node pool name: Enter the name of the node pool
    - Node pool description (optional): Enter a description within 60 characters
    Image | Select one image to use for the nodes
    - Available images vary depending on the node pool type
    Instance type | Select the instance type for the node pool

    ⚠️ For proper Kubernetes Engine service usage, use instances with at least 1 GiB of memory
    Volume | Configure the volume type and size for the instances
    - Currently, the volume type is fixed as SSD, and the size can be set from 30 GB to 5,120 GB

    ⚠️ Volume settings are not available for Bare Metal Server type node pools
    Number of nodes | Set the number of nodes in the node pool
    Node pool network settings | Select the VPC and subnet where the nodes will run
    - VPC: Same as the cluster's VPC; cannot be modified
    - Subnet: Select the subnet where the nodes will run from the subnets chosen during cluster creation
      - For Multi-AZ in the kr-central-2 region, select subnets from different availability zones to increase availability
    Resource-based auto scaling (optional) | Automatically expand the number of nodes when resources are insufficient to schedule pods, and automatically shrink it when resource utilization remains below a threshold

    ⚠️ Autoscaling is not available for Bare Metal Server type node pools

    - Minimum node count: The minimum number of nodes when autoscaling shrinks the node count
    - Maximum node count: The maximum number of nodes when autoscaling expands the node count
    Key pair | Configure the key pair used for SSH access to the node instances in the node pool
    - Select an existing key pair or create a new one
    - To create a new key pair: click Create key pair, enter a key pair name, and click Create and download to download the .pem file
    - The key pair assigned to the node pool is not displayed in the instance details
    - The key pair cannot be changed after cluster creation; if needed, create a new node pool to apply new settings
    Network bonding | For Bare Metal Server type node pools, network bonding is applied automatically
    - Network bonding configures two interfaces with the same IP on each node in the pool, based on the subnet selected in the network settings
    - Only a single availability zone is supported
    Advanced settings (optional) | Set advanced configurations for the node pool
    - Node label (optional): Kubernetes labels applied to all nodes in the pool; usable with nodeSelector
    - Node taint (optional): Kubernetes taints applied to all nodes in the pool; usable with tolerations
    - CPU multithreading: An option to tune performance by controlling the number of threads per CPU core
      - Some instance types require CPU multithreading
      - Disable it for high-performance computing (HPC) workloads

    ⚠️ Multithreading settings are not available for Bare Metal Server type node pools
    - User script (optional): A shell script executed when nodes in the pool are created
      - Use it for additional node configuration, up to 16 KB; user scripts cannot be modified once set
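
    As an illustration of how node labels and taints set here are consumed by workloads, suppose (hypothetically) the pool is labeled team=batch and tainted dedicated=batch:NoSchedule; a pod that should run only on those nodes would declare a matching nodeSelector and toleration:

    Example pod spec (label and taint values are placeholders)
    apiVersion: v1
    kind: Pod
    metadata:
      name: batch-worker
    spec:
      nodeSelector:
        team: batch              # matches the node pool's node label
      tolerations:
      - key: dedicated
        operator: Equal
        value: batch
        effect: NoSchedule       # tolerates the node pool's taint
      containers:
      - name: worker
        image: busybox:1.28
        command: ["sleep", "3600"]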

Configure node pool

Check or modify information and the number of nodes in a node pool.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. Select the cluster with the node pool settings to modify from the Cluster menu.

  3. On the cluster detail page, go to the Node pool tab, then click the [More] icon > Configure node pool.

  4. Review the information in the popup. If needed, modify the settings and click [Save].

    Item | Description
    Node pool information | Information about the node pool
    - Node pool name: Cannot be changed
    - Node pool description (optional): Review or modify the description within 60 characters
    Number of nodes | The current number of nodes in the node pool
    - You can modify the node count

View node pool details

View detailed information about the node pool and the nodes within it.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. Select the cluster that contains the node pool to view from the Cluster menu.

  3. On the cluster detail page, go to the Node pool tab, then select the node pool to view.

  4. Review the detailed information of the node pool.

    Tab | Item | Description
    Details | Key pair | Information about the key pair assigned to the node pool
    Details | Created at | The creation date of the node pool
    Details | Volume type | Volume type settings for the node pool
    Details | Image | Detailed information about the image installed on the nodes
    Details | VPC | Information about the VPC used by the cluster
    Details | Subnet | Information about the subnet where the node pool's nodes are running
    Details | Pod scheduling | Pod scheduling configuration for the node pool
    Details | Node label | Labels applied to nodes in the node pool
    Details | Node taint | Taints applied to nodes in the node pool
    Details | User script | User script applied to the nodes in the node pool
    Scaling | Resource-based auto scaling | Create and manage resource-based auto scaling policies
    Scaling | Schedule-based auto scaling | Create and manage schedule-based auto scaling policies
    - View schedule-based auto scaling events
    Node | Node | Information about the nodes
    - Click a node name to view detailed node information
    Node | Node status | Node status information
    - Running: The node is ready and running
    - Running (Scheduling Disable): The node no longer schedules new pods (pods already assigned keep running)
    - Provisioned: Node provisioning is complete
    - Deleted: Node deletion is complete
    - Pending: Node provisioning is in progress
    - Provisioning: The node is being provisioned
    - Deleting: The node is being deleted
    - Failed: The node requires user intervention
    Node | Node pool | The node pool the node belongs to
    Node | Private IP | The private IP of the node
    Node | Availability zone | The availability zone of the subnet where the node is running
    Node | Uptime | The total time elapsed since the node request was made, not the node's creation date
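
    The statuses above are console-side lifecycle states of the underlying instances. From inside the cluster, the corresponding kubelet view can be cross-checked with kubectl:

    Check node status from the cluster
    kubectl get nodes -o wide   # STATUS shows Ready or Ready,SchedulingDisabled per node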

Configure node labels

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. Select the cluster from the Cluster menu that contains the node pool.

  3. On the cluster detail page, go to the Node pool tab, then click the Set node label button in the details.

  4. Enter the Key and Value, then click [Save]. The labels will be applied to all nodes in the node pool.

    Item | Description
    Key | A key for identifying the label; up to 50 labels per node
    Value | The value of the label
    [Trash icon] | Click to delete the label
info

Keywords reserved by KakaoCloud or Kubernetes cannot be used as label keys.
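
A quick way to confirm the labels were applied to every node in the pool is kubectl; the key and value below are placeholders for whatever you entered:

Check node labels
kubectl get nodes -L team         # show the value of the 'team' label key as a column
kubectl get nodes -l team=batch   # list only nodes labeled team=batch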

Configure user script

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
  2. Select the cluster from the Cluster menu that contains the node pool.
  3. On the cluster detail page, go to the Node pool tab, then click the Set user script button in the details.
  4. Upload or enter the User script and click [Save].
info

The user script will only apply to newly created nodes after it is set.
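
For illustration, a minimal user script of the kind that might be supplied here, assuming an Ubuntu-based node image (the package installed is an arbitrary example):

Example user script
#!/bin/bash
# Runs once when a node in the pool is created (not on reboot).
set -euo pipefail
echo "node bootstrapped at $(date -u)" >> /var/log/user-script.log
# Arbitrary example of extra node configuration: install a diagnostics package
apt-get update -y && apt-get install -y sysstat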

Delete node pool

You can delete a node pool that is no longer needed.

caution

Deleting a node pool will delete all nodes in the pool, and this action cannot be undone.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
  2. Select the cluster from the Cluster menu where the node pool is located.
  3. Go to the Node pool tab, click the [More] icon next to the node pool, and select Delete node pool.
  4. Enter the information in the Delete node pool popup and click [Delete].

Pod scheduling configuration

Configure pod scheduling (node pool)

You can configure the pod scheduling status for all nodes in a node pool.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
  2. Select the cluster from the Cluster menu where the node pool is located.
  3. On the cluster detail page, go to the Node pool tab, then click the [More] icon for the node pool and select Set pod scheduling.
  4. Review the information in the popup. Modify settings as needed and click [Save].
    • You can allow or block pod scheduling regardless of the node pool's status. The pod scheduling settings apply to all nodes in the pool.

Configure pod scheduling (single node)

Configure pod scheduling status for individual nodes.

info

Pod scheduling settings for a single node do not override the settings for the entire node pool. The most recent scheduling settings will apply to individual nodes.

Action | Description
Block node scheduling | Pods are no longer assigned to the node after scheduling is blocked; existing pods continue to run
Allow node scheduling | Pods are scheduled on the node normally
  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
  2. Select the cluster from the Cluster menu that contains the target node.
  3. On the cluster detail page, go to the Node tab and click the [More] icon for the target node, then select Block node scheduling.
    • Depending on the node's current scheduling status, either Block node scheduling or Allow node scheduling will appear.
  4. Review the information in the popup and click [Apply].
    • Depending on the current scheduling status, the confirmation will allow or block scheduling.
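
Blocking scheduling in the console corresponds conceptually to cordoning the node in Kubernetes, so the same state can be inspected or reproduced with kubectl (the node name is a placeholder):

Cordon and uncordon a node
kubectl cordon <node-name>     # mark the node unschedulable; STATUS shows SchedulingDisabled
kubectl uncordon <node-name>   # allow scheduling on the node again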

Node pool scaling management

info
  • The previous node pool autoscaling setting is now called Resource-based auto scaling.
  • Both resource-based and schedule-based auto scaling are available, but only one type can be enabled at a time.
    - Future improvements will allow both autoscaling types to be used simultaneously.

Configure resource-based auto scaling

Resource-based auto scaling adjusts the number of nodes based on the resource usage of the node pool. If available resources are insufficient to schedule pods, nodes are automatically added, and when resource utilization remains low, nodes are automatically removed.
This feature is based on the Kubernetes Cluster Autoscaler project.

info
  • Resource-based auto scaling can be configured independently for each node pool.
  • Automatic scale-in can be configured for the entire cluster and is applied if autoscaling is set during cluster/node pool creation.
  • The autoscaling feature is not available for Bare Metal Server type node pools.
caution
  • Resource-based auto scaling operates based on the request value defined for pod resources.
  • If pod resource request values are not defined, the autoscaling feature will not function.
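
As the caution above notes, scale-out is driven entirely by pod resource requests, so every workload that should trigger autoscaling must declare them. A minimal fragment of a pod template (the values are arbitrary):

Example resource requests
resources:
  requests:
    cpu: 250m        # summed against node capacity when deciding if a new node is needed
    memory: 256Mi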
  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. Select the cluster from the Cluster menu for autoscaling configuration.

  3. On the cluster detail page, go to the Node pool tab and select the node pool.

  4. In the Scaling tab of the Node pool details, click the Resource-based auto scaling button.

  5. Review the information in the popup. Modify the settings as needed and click [Save].

    Item | Description
    Resource-based auto scaling | Enable or disable automatic adjustment of the node count based on node resource usage
    Minimum node count | The minimum number of nodes when autoscaling scales down
    - Automatic scale-in is configured at the cluster level; for details, refer to Configure scale-in for clusters
    Maximum node count | The maximum number of nodes when autoscaling scales up

Configure HPA and load testing

You can set up HorizontalPodAutoscaler (HPA) with Cluster Autoscaler for more efficient resource management. This example demonstrates how to test if automatic scaling works as expected by applying HPA.

info

HPA monitors resource usage (such as CPU) and automatically adjusts the number of pods in a workload (such as Deployment, StatefulSet). For more details, refer to the Kubernetes documentation on HPA.

  1. Install Helm client before configuring HPA. For installation instructions by operating system, refer to the Helm official documentation > Installing Helm.

  2. Install metrics-server to monitor pod metrics for HPA load testing.

    • Add the metrics-server chart repository, then install metrics-server.
    metrics-server installation command
    helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
    helm upgrade --install metrics-server metrics-server/metrics-server --set hostNetwork.enabled=true --set containerPort=4443
  3. Check if the resource usage of the nodes is being monitored correctly. It may take up to 5 minutes to collect monitoring information after installing the metrics-server.

    Check node resource usage
    kubectl top node
  4. Deploy the php-server app to use for HPA and Cluster Autoscaler load testing.

    Deploy php-server app
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: php-apache
    spec:
      selector:
        matchLabels:
          run: php-apache
      replicas: 1
      template:
        metadata:
          labels:
            run: php-apache
        spec:
          containers:
          - name: php-apache
            image: ike-controlplane-provider.kr-central-1.kcr.dev/ike-cr/hpa-example:latest
            ports:
            - containerPort: 80
            resources:
              limits:
                cpu: 500m
              requests:
                cpu: 500m
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: php-apache
      labels:
        run: php-apache
    spec:
      ports:
      - port: 80
      selector:
        run: php-apache
    Apply the manifest to deploy the php-server app
    kubectl apply -f php-apache.yaml
  5. Create an HPA for load testing.

    Create HPA
    kubectl autoscale deployment php-apache --cpu-percent=10 --min=1 --max=10   # Create HPA
    kubectl get hpa                                                             # Check HPA settings
    Result
    NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    php-apache   Deployment/php-apache   46%/50%   1         10        5          28m
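
    The imperative command above is equivalent to the following declarative manifest (autoscaling/v2), which you could apply instead if you prefer versioned configuration:

    Equivalent HPA manifest
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: php-apache
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: php-apache
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 10   # matches --cpu-percent=10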
  6. Run a load-generating pod to test the HPA and auto-scaling configuration.

    Execute pod
    kubectl run -i --tty load-generator --rm --image=ike-controlplane-provider.kr-central-1.kcr.dev/ike-cr/busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
  7. Confirm that the number of pods and nodes increases appropriately as the load increases.

    • The HPA for the php-apache server is triggered, and some pods that cannot be scheduled due to insufficient resources enter the Pending state.

    • Since some pods cannot be scheduled, the number of nodes is automatically scaled to three to add resources.

      Check the results of the HPA and auto-scaling operations.
      kubectl get pods -w  # Check for changes in the number of pods
      kubectl get nodes -w # Check for changes in the number of nodes
      Result
      NAME                          READY   STATUS    RESTARTS   AGE
      php-apache-766d5cdd5b-2t5p8   0/1     Pending   0          44s
      php-apache-766d5cdd5b-5mhlk   0/1     Pending   0          29s
      php-apache-766d5cdd5b-5vjt6   0/1     Pending   0          14s
      php-apache-766d5cdd5b-74z87   1/1     Running   0          44s
      php-apache-766d5cdd5b-d49g9   0/1     Pending   0          29s
      php-apache-766d5cdd5b-fnlld   1/1     Running   0          44s
      php-apache-766d5cdd5b-nr5f2   0/1     Pending   0          29s
      php-apache-766d5cdd5b-t7zr8   0/1     Pending   0          29s
      php-apache-766d5cdd5b-vjjlz   1/1     Running   0          2m49s
      php-apache-766d5cdd5b-whjhw   0/1     Pending   0          14s

      NAME                STATUS   ROLES    AGE    VERSION
      host-10-187-5-177   Ready    <none>   51s    v1.24.6
      host-10-187-5-189   Ready    <none>   9m5s   v1.24.6
      host-10-187-5-98    Ready    <none>   69s    v1.24.6
  8. When a node is added, all pods that were in the Pending state change to the Running state.

Configure schedule-based auto scaling

Schedule-based auto scaling adjusts the number of nodes in a node pool at scheduled times. It lets you automate node-count changes around predictable traffic patterns. For example, a workload that regularly sees lower traffic on weekends and higher traffic during the week can be managed with rules such as the following:

  • Schedule-based auto scaling
    • Rule 1: Every Monday at 8:30 AM | increase to the maximum node count
    • Rule 2: Every Friday at 7:30 PM | decrease to the minimum node count

This gives the application enough nodes to handle peak weekday traffic while shedding unnecessary nodes during the relatively quiet weekends, optimizing both cost and performance.

info
  • Up to 2 scheduling rules can be created for schedule-based auto scaling.
  • Since at most 2 rules can be used, it is recommended to pair one rule that scales up at a specific time with another that scales back down to the original node count.
  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
  2. In the Cluster menu, select the cluster that contains the node pool.
  3. On the cluster's detail page, go to the Node pool tab and select the node pool to configure.
  4. In the node pool details, open the Scaling tab and click the Create rule button in the schedule-based auto scaling section.
  5. In the Create rule popup, enter the rule information and click the [Create] button.
    Item | Description
    Name | The name of the schedule-based auto scaling rule
    Rule | Desired number of nodes
    - Set the desired number of nodes at the specified time
    Recurrence settings | Set how the rule repeats (see Set recurrence below)
    Start | The start date and time when the rule takes effect
    - Recurrence points are determined based on the start date and time
    Scheduled execution date | Displays the nearest upcoming execution date, based on the start date and time

Set recurrence

To create a recurring schedule, select [Daily], [Weekly], or [Monthly] when creating the rule. The currently available recurrence periods are as follows:

Recurrence period | Description
Once | Runs once at the start date and time
Daily | Repeats every day at the time of the start date
- Start date: 2024/05/01 (Wed) 10:00
- Recurrence point: daily at 10:00
Weekly | Repeats every week on the day of the week and time of the start date
- Start date: 2024/05/01 (Wed) 10:00
- Recurrence point: every Wednesday at 10:00
Monthly | Repeats every month on the day and time of the start date
- Start date: 2024/05/01 (Wed) 10:00
- Recurrence point: the 1st of every month at 10:00

Delete schedule-based auto scaling

To delete a schedule-based auto scaling rule, follow these steps:

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
  2. In the Cluster menu, select the cluster that contains the node pool.
  3. On the cluster's detail page, go to the Node pool tab and select the node pool to configure.
  4. In the node pool details, under the Scaling tab, click the Trash can button in the schedule-based auto scaling rules list.
  5. In the popup, enter the rule name and click the [Delete] button.
info
  • If the node pool's status is changing, you cannot delete schedule rules.
  • Transitional statuses: ScalingUp, ScalingDown, Updating

Check schedule-based auto scaling events

You can check the results of schedule-based auto scaling in the event list. Up to 20 event logs are retained per rule. Schedule-based auto scaling events can also be viewed through the Cloud Trail service.

info
  • If a schedule-based auto scaling event shows the Failed status, an internal issue occurred during auto scaling and the node pool's status has dropped to Failed.
  • This may be caused by exceeding the node pool quota, timeouts during capacity adjustments, and so on. For events you cannot resolve yourself, please contact the Helpdesk.
  • When a rule is deleted, its associated events are also deleted.
    - Logs recorded in the Cloud Trail service are retained.
Category | Description
Event time | The time when the rule was executed
Rule name | The name of the executed rule
Result | The result of the executed rule
- [Success] or [Failure]; detailed event results can be checked in a pop-up

Manage nodes

The following methods are used to manage nodes in the Kubernetes Engine service.

View node details

You can check detailed information about a node.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine menu.

  2. Select the cluster where the node pool to check is located from the Cluster menu.

  3. Click on the Node tab on the cluster's detail page and select the node to check.

    • Alternatively, select the node from the Node pool > Node tab to view the details.
  4. Check the information in the Details tab.

    Category | Description
    Instance ID | The ID of the VM instance corresponding to the node
    - Clicking the instance ID opens the VM instance page
    Instance type | The VM instance type of the node and the node pool type
    Created at | The creation date of the VM instance corresponding to the node
    Kubernetes version | The Kubernetes version of the node
    Availability zone | The availability zone of the subnet where the node is running
    Volume | The volume attached to the VM instance corresponding to the node
    Key pair | The key pair set for the node
    - The key pair specified through the node pool is not exposed in the instance details
    Private IP | The private IP of the node
    Image | Detailed information about the image installed on the node
    CPU multithreading | Whether the CPU multithreading feature is enabled
    Summary of node | Performance and status information about the node; click the [Refresh] icon to update to the latest information
    - Pod: Pods currently running on the node
    - Node condition: Detailed status information for the node
    - Taint: Taints set on the node
    - Label: Labels set on the node
    - Annotation: Annotations set on the node
    - Allocatable resource: Current allocatable resources on the node
    - Event: Events that have occurred on the node

Node monitoring

You can check monitoring information such as resource usage and trends of nodes in chart format for a specific period.

info

To monitor nodes in Kubernetes Engine, node-exporter is installed on port 59100 of the node. Please note that port 59100, where node-exporter is installed, cannot be used separately.
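
node-exporter serves Prometheus-format metrics over HTTP, so the same data backing these charts can be scraped directly from within the VPC; the standard /metrics path is assumed here, and the node IP is a placeholder:

Scrape node-exporter directly
curl http://<node-private-ip>:59100/metrics | head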

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine menu.
  2. Select the cluster where the node pool to check is located from the Cluster menu.
  3. Click on the Node tab on the cluster's detail page and select the node to check.
    • Alternatively, select the node from the Node pool > Node tab to view the details.
  4. In the Monitoring tab, select the period and Node/Pod, and check the information.

Node monitoring

When you select Node, you can check the resource usage and trend information of that node.

Category | Description
CPU usage (millicores) | The CPU usage of the node
Memory usage (Bytes) | The memory usage of the node
Disk usage (Bytes) | The disk usage of the node
RX Network (byte/s) | Bytes received per second over the network by the node
TX Network (byte/s) | Bytes sent per second over the network by the node
Reserved CPU Capacity (%) | The percentage of CPU reserved for the node's components
Reserved Memory Capacity (%) | The percentage of memory reserved for the node's components
Pods (Count) | The number of pods running on the node
Containers (Count) | The number of containers running on the node

Pod monitoring

When you select Pod, you can check the resource usage and trend information of the pods running on the node.

Category | Description
CPU usage (millicores) | The CPU usage of the pod
Memory usage (Bytes) | The memory usage of the pod
RX Network (byte/s) | Bytes received per second over the network by the pod
TX Network (byte/s) | Bytes sent per second over the network by the pod
Reserved CPU Capacity (%) | The percentage of CPU reserved for the pod
Reserved Memory Capacity (%) | The percentage of memory reserved for the pod

Recover nodes

You can recover nodes that are in Failed status.

caution

When you recover a node, the node is drained and a new node is created, after which the existing node is deleted. Running services may be affected, and deleted nodes cannot be restored. Note that the IP of the VM backing the newly created node will change.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine menu.
  2. Select the cluster where the node pool to check is located from the Cluster menu.
  3. In the cluster's detail page, go to the Node tab, click the [More] icon of the node to recover, and select Recover node.
  4. In the Recover node pop-up window, enter the information and click the [Recover] button.

Update nodes

When the cluster has been updated to a newer Kubernetes version, or when newer node images with updated node components are available, you can update the nodes.

When executing a node update, a rolling update will be performed as follows:

  1. A new node with the latest image version is created.

  2. Pods running on the existing node are evicted, and the node is switched to an unschedulable state (conceptually the same as draining the node; see the sketch after this list).

  3. The evicted pods run on the new node, and once evictions are complete, the existing node is deleted.

  4. This process is repeated sequentially for all existing nodes.
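
Steps 2 and 3 mirror what a manual kubectl drain does, which is useful for reasoning about the PDB-related failures described below; the node name is a placeholder:

Manual drain equivalent
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# Evicts pods while respecting PodDisruptionBudgets and cordons the node
# (marks it unschedulable) for the duration.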

Considerations before updating

If any of the following conditions are not met, the update will not proceed.

Condition | Description
Cluster status | Must be in the Provisioned status
- If in any other status, the update button is not displayed
Node pool status | Must be in the Running status
- If in any other status, the update button is not displayed

Update procedures

If the node pool meets the conditions to start the update, you can update the node pool.

caution

Once the update starts, it cannot be canceled, and you cannot revert to the previous state.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine menu.
  2. Select the cluster where the node pool to update is located from the Cluster menu.
  3. Click on the Node pool tab, and then click the Kubernetes version > [Update] button for the node pool to update.
  4. In the pop-up window, check the information and click the [Update] button.
  5. Once the node pool update starts, the status of the node pool will change to Updating. Once the node pool update is complete, it will change to Running. During the update, adding new node pools and configuring existing node pools will not be possible.

Check for update failures

During the rolling update, if the eviction of pods fails due to PDB (Pod Disruption Budget) settings, the update may fail. In case of an update failure, you can try the following methods. For more detailed explanations, refer to the official Kubernetes documentation.

  • Modify the Min Available and Max Unavailable values of the PDB so that the pods can be evicted successfully. Be aware that if the Max Unavailable value is 0, evicting pods from nodes for the update will fail (see the example PDB manifest after this list).
  • Back up the PDB, then delete it. After the update is complete, reset the PDB.
  • If the pods are managed by a Deployment, StatefulSet, etc., and the number of pods is adjusted through a ReplicaSet, eviction of pods may fail. In this case, back up and delete the Deployment, StatefulSet, etc., in advance.
  • Additionally, you can find guidelines for safely draining nodes in the official Kubernetes documentation: Safely Drain a Node.
  • Node pool updates are conducted in a rolling update manner, and you must be able to create the same number of nodes as the current nodes. Therefore, if there are insufficient VM and IaaS resources available for your project, the update may fail.
  • If a node transitions to the Failed status during the update, leaving the update pending in the Updating status, you can proceed with node recovery. Once the node is recovered, the update will proceed normally again. If the node pool remains in the Updating state for an extended period, please contact the Helpdesk > Technical Support.
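
For reference, a minimal PodDisruptionBudget of the shape discussed above; the name and selector are illustrative and reuse the php-apache sample from the HPA section:

Example PDB manifest
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: php-apache-pdb
spec:
  maxUnavailable: 1        # a value of 0 here would make node drains (and updates) fail
  selector:
    matchLabels:
      run: php-apache      # matches the sample Deployment's pods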