Manage nodes

A node is a server that runs containerized applications. In Kubernetes Engine, nodes are managed in groups called node pools; all nodes in a pool share the same instance type.

Create and manage node pools

The following explains how to manage node pools in the Kubernetes Engine service.

Create node pool

You must first create a cluster before creating a node pool.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. In the Cluster menu, select the cluster in which to create the node pool.

  3. On the cluster's detail page, go to the Node pool tab and click the [Create node pool] button.

  4. Enter the required information in the Create node pool form and click [Create].

    Node pool type: Select the type of node pool to create
    Basic settings: Enter basic node pool information
    - Node pool name
    - Node pool description (optional): up to 60 characters
    Image: Select one image for the nodes
    - Available options vary by node pool type
    Instance type: Choose an instance type
    - ⚠️ At least 1 GiB required
    Volume: Set the volume type and size
    - SSD type only; size between 30 and 5,120 GB
    - ⚠️ Not available for Bare Metal Server node pools
    Node count: Set the number of nodes in the node pool
    Node pool network settings: Select the VPC and subnet where nodes are deployed
    - VPC: Same as the cluster (not editable)
    - Subnet: Choose from the subnets selected during cluster creation
      - Multi-AZ environments support multiple subnets for higher availability
    - Security group (optional): Applied to the nodes
    Key pair: Configure SSH access
    - Select an existing key pair or create a new one
    - After creation, download the .pem file
    - Once set, the key pair cannot be changed
    Network bonding: For Bare Metal Server node pools only
    - Enables dual network interfaces for higher availability
    - Available only in single-AZ environments
    Advanced settings (optional):
    - Node labels: Applied to all nodes in the pool; usable with nodeSelector (see the example after this table)
    - Node taints: Applied to all nodes; usable with tolerations
    - CPU multithreading: Optimize performance (required by some instance types)
      - Recommended to disable for HPC workloads
    - User script: Runs when a node is created
      - Up to 16 KB; not editable later
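
Node labels and taints configured here can be targeted from pod manifests. The following is a minimal sketch assuming a hypothetical label team: web and a hypothetical taint dedicated=web:NoSchedule were set on the node pool:

    Example pod spec using nodeSelector and tolerations
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-app              # placeholder name
    spec:
      nodeSelector:
        team: web                # assumed node label configured on the pool
      tolerations:
      - key: "dedicated"         # assumed node taint configured on the pool
        operator: "Equal"
        value: "web"
        effect: "NoSchedule"
      containers:
      - name: app
        image: nginx             # placeholder image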

Configure node pool

View or modify node pool details and node count.

  1. Go to Container Pack > Kubernetes Engine in the KakaoCloud Console.

  2. In the Cluster menu, select the cluster.

  3. In the Node pool tab, click the [More] icon next to the target pool > Configure node pool.

  4. In the popup, review or update details and click [Save].

    Node pool info:
    - Name: Cannot be changed
    - Description: Optional; up to 60 characters
    Node count:
    - Update the number of nodes

View node pool details

You can check detailed information about a node pool and the nodes belonging to it.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. In the Cluster menu, select the cluster that contains the node pool.

  3. On the cluster's detail page, go to the Node pool tab and select the node pool you want to view.

  4. On the node pool detail page, check the information.

    [Details tab]
    Key pair: Key pair applied to the nodes in the node pool
    Created date: Date the node pool was created
    Volume type: Volume type configured for the node pool
    Image: Detailed information about the image installed on the nodes
    Node labels: Labels configured on the node pool
    Node taints: Taints configured on the node pool
    User script: User script configured for the node pool

    [Network tab]
    VPC: VPC of the cluster
    Subnet: Subnet where the node pool's nodes are deployed
    Security group: Security group applied to the node pool's nodes

    [Scaling tab]
    Resource-based auto scaling: Create and manage resource-based auto scaling policies
    Scheduled auto scaling: Create and manage scheduled auto scaling policies
    - View scheduled auto scaling events

    [Node tab]
    Node: Node information
    - Click the node name to view detailed node information
    Node status: Node status details
    - Running: Node is ready and running
    - Running (Scheduling Disable): New pods cannot be scheduled to the node (does not affect already running pods)
    - Provisioned: Node provisioning is complete
    - Deleted: Node has been deleted
    - Pending: Node provisioning is pending
    - Provisioning: Node provisioning is in progress
    - Deleting: Node deletion is in progress
    - Failed: User intervention is required
    Node pool: Node pool the node belongs to
    Private IP: Private IP of the node
    Availability zone: Availability zone of the node's subnet
    Uptime: Time elapsed since the node creation request (not the creation date)

Configure node labels

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. In the Cluster menu, select the cluster containing the target node pool.

  3. On the cluster detail page, go to the Node pool tab and click [Configure node labels].

  4. Enter the Key and Value for the label and click [Save]. The label will apply to all nodes in the node pool.

    Key: Key used to distinguish the label; up to 50 characters
    Value: Value of the label
    [Trash] icon: Click to delete the corresponding label
info
  • Reserved keys from KakaoCloud and Kubernetes cannot be used.
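
After saving, you can confirm the result with kubectl. The label key team below is an assumption for illustration:

    Verify node labels with kubectl
    # Show all labels on every node
    kubectl get nodes --show-labels
    # Show an assumed label key "team" as its own column
    kubectl get nodes -L team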

Configure user script

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
  2. In the Cluster menu, select the cluster containing the target node pool.
  3. On the cluster detail page, go to the Node pool tab and click [Configure user script].
  4. Load or enter the script and click [Save].
info
  • The user script is applied only to nodes created after the setting.
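
For illustration only, a user script is an ordinary shell script that runs when a node is created; the package below is an assumption:

    Example user script (minimal sketch)
    #!/bin/bash
    # Runs once at node creation; keep the script under the 16 KB limit.
    apt-get update -y
    apt-get install -y htop   # example package; replace with what your nodes need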

Delete node pool

You can delete node pools that are no longer in use.

caution

When a node pool is deleted, all nodes in it are also deleted and cannot be recovered.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
  2. In the Cluster menu, select the cluster that contains the node pool.
  3. In the Node pool tab, click the [More] icon next to the target node pool > Delete node pool.
  4. In the Delete node pool popup, enter the required information and click [Delete].

Configure pod scheduling

Configure whether pods can be scheduled to specific nodes.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. From the Cluster list, select the cluster containing the target node.

  3. On the cluster detail page, go to the Node pool tab, select the node pool, then go to the Node tab.

  4. Select the node and click the [Configure pod scheduling] button.

    Allow (uncordon): Pods can be scheduled to this node
    Block (cordon): New pods cannot be scheduled to this node
  5. In the Configure pod scheduling popup, select a value and click [Apply].
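
These console options correspond to the standard kubectl commands, which you can also run directly against the cluster:

    Cordon and uncordon a node with kubectl
    # Block new pods from being scheduled to the node
    kubectl cordon <node-name>
    # Allow scheduling on the node again
    kubectl uncordon <node-name>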

Manage node pool scaling

info
  • The previous “Auto Scaling” setting has been renamed to Resource-based auto scaling.
  • You can configure either Resource-based auto scaling or Scheduled auto scaling, but not both simultaneously.
    - Support for simultaneous configuration is planned.

Configure resource-based auto scaling

Resource-based auto scaling automatically increases or decreases the number of nodes in a node pool depending on resource usage.
If available node resources are insufficient and pods cannot be scheduled, the number of nodes is automatically increased.
Conversely, if the resource usage remains below a defined threshold, the number of nodes is automatically reduced.
This feature is based on the official Kubernetes Cluster Autoscaler project.

info
  • Auto scaling is not available for node pools using the Bare Metal Server type.
caution
  • Resource-based auto scaling operates based on the resource requests defined in pod specs.
  • If pods do not define resource requests, auto scaling will not function; see the example Deployment below.
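
For reference, here is a minimal sketch of a Deployment that defines resource requests; the names and values are placeholders:

    Example Deployment with resource requests
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sample-app          # placeholder name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: sample-app
      template:
        metadata:
          labels:
            app: sample-app
        spec:
          containers:
          - name: app
            image: nginx        # placeholder image
            resources:
              requests:         # required for resource-based auto scaling to work
                cpu: 250m
                memory: 256Mi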
  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. In the Cluster menu, select the desired cluster.

  3. On the cluster’s detail page, go to the Node pool tab and select the node pool.

  4. On the Node pool details, go to the Scaling tab and click the [Configure resource-based auto scaling] button.

  5. In the Configure resource-based auto scaling popup, enter or edit the settings, then click [Save].

    [Resource-based auto scaling]
    Desired node count: Current number of nodes in the node pool
    - Can be changed
    Minimum node count: Minimum number of nodes when scaling down
    Maximum node count: Maximum number of nodes when scaling up

    [Auto scale-down rules]
    Scale-down threshold condition: CPU/memory usage threshold that triggers scale-down
    - Range: 1 ~ 100 (%)
    - Default: 50%
    Threshold duration: How long resource usage must stay below the threshold
    - Range: 1 ~ 86400 (seconds), 1 ~ 1440 (minutes)
    - Default: 10 minutes
    Exclude monitoring period after scale-up: How long new nodes are excluded from scale-down monitoring after an auto scale-up
    - Range: 1 ~ 86400 (seconds), 1 ~ 1440 (minutes)
    - Default: 10 minutes

Configure HPA and load testing

Configuring HPA (HorizontalPodAutoscaler) alongside the Cluster Autoscaler allows for more efficient resource management.
The following test example demonstrates automatic scaling in action.

info

HPA automatically adjusts the number of pods in workloads (e.g., Deployments, StatefulSets) based on CPU or other resource usage.
For more details, see the Kubernetes official documentation.

  1. Before setting up HPA, install the Helm client.
    Refer to the Helm installation guide for instructions by OS.

  2. Install metrics-server for monitoring pods and resources.

    Install metrics-server
    helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
    helm upgrade --install metrics-server metrics-server/metrics-server --set hostNetwork.enabled=true --set containerPort=4443
  3. Verify that node resource usage is being collected correctly.
    It may take up to 5 minutes after installation.

    Check node resource usage
    kubectl top node
  4. Deploy the php-apache app to test HPA and the Cluster Autoscaler together.

    php-apache app manifest (php-apache.yaml)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: php-apache
    spec:
      selector:
        matchLabels:
          run: php-apache
      template:
        metadata:
          labels:
            run: php-apache
        spec:
          containers:
          - name: php-apache
            image: ke-container-registry.kr-central-2.kcr.dev/ke-cr/hpa-example:latest
            ports:
            - containerPort: 80
            resources:
              limits:
                cpu: 500m
              requests:
                cpu: 500m
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: php-apache
      labels:
        run: php-apache
    spec:
      ports:
      - port: 80
      selector:
        run: php-apache

    Deploy the php-apache app
    kubectl apply -f php-apache.yaml
  5. Create the HPA:

    Create HPA
    kubectl autoscale deployment php-apache --cpu-percent=10 --min=1 --max=10
    kubectl get hpa
    Sample output
    NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    php-apache   Deployment/php-apache   46%/10%   1         10        5          28m
  6. Run a pod to generate load:

    Run load generator pod
    kubectl run -i --tty load-generator --rm --image=ke-container-registry.kr-central-2.kcr.dev/ke-cr/busybox:latest --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
  7. Observe the increasing number of pods and nodes as load increases.

    • HPA triggers scaling due to rising load, but some pods remain in Pending due to insufficient resources.
    • Nodes are automatically scaled up to accommodate pending pods.
    Verify HPA and autoscaling behavior
    kubectl get pods -w     # Monitor pod count changes
    kubectl get nodes -w    # Monitor node count changes
    Sample output
    NAME                          READY   STATUS    RESTARTS   AGE
    php-apache-766d5cdd5b-2t5p8   0/1     Pending   0          44s
    ...
    NAME                STATUS   ROLES    AGE   VERSION
    host-10-187-5-177   Ready    <none>   51s   v1.24.6
    ...
  8. Once nodes are added, all Pending pods transition to the Running state.
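
When the test is finished, remove the test resources so the autoscaler can scale the node pool back down:

    Clean up test resources
    kubectl delete hpa php-apache
    kubectl delete -f php-apache.yaml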

Configure scheduled auto scaling

Scheduled auto scaling automatically adjusts the number of nodes in a node pool at specified times.
This is useful when traffic patterns are predictable, such as higher load on weekdays and lower load on weekends.
You can optimize costs and performance by defining scaling rules like the following:

  • Scheduled auto scaling example:
    - Rule 1: Every Monday at 08:30 AM | Scale up to the desired maximum node count
    - Rule 2: Every Friday at 07:30 PM | Scale down to the desired minimum node count
info
  • You can define up to two scheduled scaling rules.
  • It is recommended to configure one rule for scale-up and another for scale-down at specific times.
  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. In the Cluster menu, select the cluster where the node pool is configured.

  3. On the cluster’s detail page, go to the Node pool tab and select the node pool.

  4. In the Scaling tab, under Scheduled auto scaling, click the [Create rule] button.

  5. In the Create rule popup, enter the rule name and configuration, then click [Create].

    Name: Name of the scheduled auto scaling rule
    Rule: Desired number of nodes
    - Set the desired node count at the scheduled time
    - Recurrence options: [Once], [Daily], [Weekly], [Monthly]
    Start time: The time at which the rule takes effect
    - Determines the recurrence trigger
    Next execution date: Displays the upcoming execution time based on the defined start time

Recurrence options

To create a recurring schedule, select one of the recurrence types below when creating a rule:

Once: Executes only once at the specified time
Daily: Repeats daily at the time of the start date
- e.g., Start time 2024/05/01 (Wed) 10:00 → repeats daily at 10:00
Weekly: Repeats weekly on the same weekday and time
- e.g., Start time 2024/05/01 (Wed) 10:00 → repeats every Wednesday at 10:00
Monthly: Repeats monthly on the same day and time
- e.g., Start time 2024/05/01 (Wed) 10:00 → repeats on the 1st of each month at 10:00

Delete scheduled auto scaling

To delete a scheduled auto scaling rule:

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
  2. Select the target cluster.
  3. On the cluster’s detail page, go to the Node pool tab and select the node pool.
  4. In the Scaling tab, locate the rule and click the [Trash can] icon.
  5. In the Delete rule popup, enter the rule name and click [Delete].
info
  • You cannot delete a schedule while the node pool is in a transitional state (ScalingUp, ScalingDown, or Updating).

Scheduled auto scaling events

You can view the result of scheduled scaling executions in the event history.
Each rule maintains up to 20 execution history entries. These events are also available via the Cloud Trail service.

info
  • If a rule execution results in a Failed status, it means the node pool entered a failed state during the scaling process.
  • This could be due to issues like quota limits or timeouts during capacity changes.
    If the failure cannot be resolved manually, please contact the Help Desk.
  • Deleting a rule also deletes its associated event history.
    - However, related logs in Cloud Trail are preserved.
Event time: Time the rule was executed
Rule name: Name of the executed rule
Result: Execution result
- [Success] or [Failure], with a detailed popup view

Manage nodes

This section describes how to manage nodes in the Kubernetes Engine service.

View node details

You can view the detailed information of a node.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. In the Cluster menu, select the cluster that contains the node pool.

  3. On the cluster's detail page, click the Node tab and select the node you want to inspect.

  4. In the Details tab, review the following information:

    Instance ID: VM instance ID of the node
    - Click the instance ID to open the VM instance page
    Instance type: VM instance type and associated node pool type
    Instance creation date: Creation date of the VM instance
    Kubernetes version: Kubernetes version of the node
    Availability zone: Availability zone where the node is running
    Volume: Volume information attached to the VM instance
    Key pair: Key pair assigned to the node
    - Key pairs set through the node pool are not displayed in the instance details
    Private IP: Private IP address of the node
    Image: Detailed information about the image installed on the node
    CPU multithreading: Whether CPU multithreading is enabled
    Node summary: Node performance and state data; refresh with the [Refresh] icon
    - Pods: Pods currently running on the node
    - Node conditions: Detailed health status of the node
    - Taints: Taints set on the node
    - Labels: Labels assigned to the node
    - Annotations: Annotations set on the node
    - Allocatable resources: Resources currently allocatable on the node
    - Events: Node-related events

Monitor nodes

You can check node resource usage trends and metrics through time-series charts.

info

Node monitoring in Kubernetes Engine requires node-exporter to be installed on port 59100 of each node. This port cannot be used for other purposes.
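
If you need to confirm that node-exporter is reachable, a quick check from a host on the node's network might look like the following; the node IP is a placeholder:

    Check the node-exporter endpoint (node IP is a placeholder)
    curl -s http://<node-private-ip>:59100/metrics | head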

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.

  2. In the Cluster menu, select the cluster that contains the node pool.

  3. On the cluster's detail page, click the Node tab and select a node to inspect.

    • Alternatively, go to Node pool tab > Node tab and select a node.
  4. In the Monitoring tab, select the period and choose Node or Pod to view metrics.

Monitor nodes

Selecting Node displays the following metrics:

CPU usage (millicores): CPU usage of the node
Memory usage (bytes): Memory usage of the node
Disk usage (bytes): Disk usage of the node
RX network (bytes/s): Bytes received over the network
TX network (bytes/s): Bytes transmitted over the network
Reserved CPU (%): CPU reserved by node components
Reserved memory (%): Memory reserved by node components
Pods (count): Number of pods running on the node
Containers (count): Number of containers running on the node

Monitor pod

Selecting Pod displays metrics for each pod running on the selected node:

CPU usage (millicores): CPU usage of the pod
Memory usage (bytes): Memory usage of the pod
RX network (bytes/s): Bytes received by the pod
TX network (bytes/s): Bytes transmitted by the pod
Reserved CPU (%): CPU reserved for the pod
Reserved memory (%): Memory reserved for the pod

Recreate nodes

You can manually recreate a node regardless of its current state.

caution

Recreating a node will drain the current node, create a new one, and delete the existing node. This may disrupt running services. The deleted node is unrecoverable, and the new node will have a different IP.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
  2. In the Cluster menu, select the cluster that contains the node pool.
  3. In the cluster’s Node tab, click the [More] icon next to the target node > Recreate node.
  4. In the Recreate node popup, enter the required information and click [Recreate].

Update nodes

If the cluster control plane is updated to a newer Kubernetes version or node image components have updates (e.g., OS), you can update your nodes to the latest image.

Node updates are performed as rolling updates through the following steps:

  1. A new node with the latest image is created.
  2. The old node is marked unschedulable (cordoned), and its pods are evicted (drained); see the example commands after this list.
  3. The evicted pods are launched on the new node.
  4. The old node is deleted after the migration is complete.
  5. These steps repeat for each node in the node pool.
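
Steps 2 and 4 correspond to the standard cordon-and-drain flow. If you want to reproduce the eviction behavior manually, the usual commands are as follows; the node name is a placeholder:

    Cordon and drain a node manually
    # Mark the node unschedulable
    kubectl cordon <node-name>
    # Evict pods, ignoring DaemonSet-managed pods
    kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data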
Prerequisites

Node updates cannot proceed unless the following conditions are met:

Cluster status: Must be Provisioned
- The update button is not visible in other statuses
Node pool status: Must be Running
- The update button is not visible in other statuses

Update procedures

If the node pool meets the update prerequisites, you can initiate an update.

caution

Once started, the update cannot be canceled or rolled back.

  1. Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
  2. In the Cluster menu, select the cluster that contains the node pool.
  3. In the Node pool tab, click the Kubernetes version of the node pool > [Update].
  4. In the Node pool version update popup, confirm the information and click [Update].
  5. The node pool status will change to Updating. Once complete, it changes back to Running.
    • During the update, no new node pools can be added and no changes can be made to the existing node pool.

Troubleshoot update failures

Rolling updates may fail if pods cannot be drained due to PDB (PodDisruptionBudget) settings.

Troubleshooting options:

  • Adjust the minAvailable or maxUnavailable values in the PDB; a maxUnavailable of 0 blocks node draining (see the example PDB below).
  • Back up and delete the PDB before the update, then reapply it afterward.
  • If the blocking pods belong to a workload such as a Deployment or StatefulSet, back up the workload and temporarily delete it to allow draining.
  • Refer to the Kubernetes documentation, Safely Drain a Node, for safe node-draining techniques.
  • Ensure enough VM/IaaS resources are available to create the new nodes required for the rolling update.
  • If a node enters a Failed state during the update and the node pool is stuck in Updating, you can recreate the failed node to resume the update process.
  • If the node pool remains in Updating for an extended period, please contact the Help Desk > Technical Inquiry.
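
A minimal PDB sketch for reference; the names, labels, and values are placeholders. With maxUnavailable set to 1, a drain can evict one pod at a time instead of being blocked:

    Example PodDisruptionBudget (placeholder values)
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: web-pdb             # placeholder name
    spec:
      maxUnavailable: 1         # a value of 0 would block node draining
      selector:
        matchLabels:
          app: web              # placeholder label

To back up and temporarily remove a blocking PDB before the update:

    Back up, delete, and later reapply a PDB
    kubectl get pdb web-pdb -o yaml > web-pdb-backup.yaml
    kubectl delete pdb web-pdb
    # After the update completes:
    kubectl apply -f web-pdb-backup.yaml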