Manage nodes
A node is a server where container applications run. In Kubernetes Engine, nodes are managed in node pools, which are groups of nodes with the same instance type.
Create and manage node pools
Here’s how to manage node pools in the Kubernetes Engine service.
Create node pool
After creating a cluster, you can create a node pool.
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
- In the Cluster menu, select the cluster in which to create a node pool.
- On the cluster details page, click the Node pool tab, then click the [Create node pool] button.
- In Create node pool, enter the required information and click the [Create] button.
Item | Description |
---|---|
Node pool type | Select the type of node pool to create - ⚠️ The Bare Metal Server type is supported only in the kr-central-2 region |
Basic settings | Set the basic information for the node pool - Node pool name: enter the name of the node pool - Node pool description (optional): enter a description of up to 60 characters |
Image | Select one image to use for the nodes - Available images vary depending on the node pool type |
Instance type | Select the instance type for the node pool - ⚠️ For proper Kubernetes Engine service usage, use instances with at least 1 GiB of memory |
Volume | Configure the volume type and size for the instances - The volume type is currently fixed as SSD, and the size can be set from 30 to 5,120 GB - ⚠️ Volume settings are not available for Bare Metal Server type node pools |
Number of nodes | Set the number of nodes in the node pool |
Node pool network settings | Select the VPC and subnet where the nodes will run - VPC: same as the cluster's VPC; cannot be modified - Subnet: select the subnet where the nodes will run from the subnets chosen during cluster creation - For Multi-AZ in the kr-central-2 region, select subnets in different availability zones to increase availability |
Resource-based auto scaling (optional) | Automatically expands the number of nodes when resources are insufficient to schedule pods, and automatically shrinks it when resource utilization stays below a threshold - ⚠️ Autoscaling is not available for Bare Metal Server type node pools - Minimum node count: the minimum number of nodes when autoscaling shrinks the pool - Maximum node count: the maximum number of nodes when autoscaling expands the pool |
Key pair | Configure a key pair for SSH access to the node instances in the node pool - Select an existing key pair or create a new one - To create a new key pair: click Create key pair, enter a key pair name, and click Create and download to download the .pem file - The key pair assigned to the node pool is not displayed in the instance details - The key pair cannot be changed after the node pool is created; to apply a different key pair, create a new node pool |
Network bonding | Applied automatically when creating a Bare Metal Server type node pool - Network bonding sets two interfaces with the same IP on each node in the pool, based on the subnet selected in the network settings - Only a single availability zone is supported |
Advanced settings (optional) | Set advanced configurations for the node pool - Node label (optional): Kubernetes labels applied to all nodes in the pool; can be used with nodeSelector (see the example after this table) - Node taint (optional): Kubernetes taints applied to all nodes in the pool; can be used with tolerations - CPU multithreading: an option to optimize performance by specifying a single thread per CPU core; some instance types require CPU multithreading; disable it for high-performance computing (HPC) workloads; ⚠️ multithreading settings are not available for Bare Metal Server type node pools - User script (optional): a shell script executed when nodes in the pool are created; use it for additional node configuration, up to 16 KB; user scripts cannot be modified after configuration |
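For reference, once a node pool has a label and a taint, a workload can be pinned to it with nodeSelector and tolerations. The following is a minimal sketch, assuming a hypothetical label `pool: dedicated` and taint `dedicated=true:NoSchedule` set in the advanced settings above; the workload name and image are illustrative only.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pool-pinned-app            # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pool-pinned-app
  template:
    metadata:
      labels:
        app: pool-pinned-app
    spec:
      nodeSelector:
        pool: dedicated            # hypothetical node label set on the node pool
      tolerations:
        - key: dedicated           # hypothetical node taint set on the node pool
          operator: Equal
          value: "true"
          effect: NoSchedule
      containers:
        - name: app
          image: nginx:1.25        # illustrative image
```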
Configure node pool
Check or modify information and the number of nodes in a node pool.
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
- In the Cluster menu, select the cluster whose node pool settings you want to modify.
- On the cluster detail page, go to the Node pool tab, then click the [More] icon > Configure node pool.
- Review the information in the popup. If needed, modify the settings and click [Save].
Item | Description |
---|---|
Node pool information | Information about the node pool - Node pool name: cannot be changed - Node pool description (optional): review or modify the description within 60 characters |
Number of nodes | The current number of nodes in the node pool - You can modify the node count |
View node pool details
View detailed information about the node pool and the nodes within it.
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
- In the Cluster menu, select the cluster that contains the node pool to view.
- On the cluster detail page, go to the Node pool tab, then select the node pool to view.
- Review the detailed information of the node pool.
Tab | Item | Description |
---|---|---|
Details | Key pair | Information about the key pair assigned to the node pool |
Details | Created at | The creation date of the node pool |
Details | Volume type | Volume type settings for the node pool |
Details | Image | Detailed information about the image installed on the nodes |
Details | VPC | Information about the VPC used by the cluster |
Details | Subnet | Information about the subnet where the node pool's nodes are running |
Details | Pod scheduling | Pod scheduling configuration for the node pool |
Details | Node label | Labels applied to nodes in the node pool |
Details | Node taint | Taints applied to nodes in the node pool |
Details | User script | User script applied to the nodes in the node pool |
Scaling | Resource-based auto scaling | Create and manage resource-based auto scaling policies |
Scaling | Schedule-based auto scaling | Create and manage schedule-based auto scaling policies - View schedule-based auto scaling events |
Node | Node | Information about the nodes - Click a node name to view detailed node information |
Node | Node status | Node status information - `Running`: the node is ready and running - `Running (Scheduling Disabled)`: the node no longer schedules new pods (pods already assigned keep running) - `Provisioned`: node provisioning is complete - `Deleted`: node deletion is complete - `Pending`: node provisioning is pending - `Provisioning`: the node is being provisioned - `Deleting`: the node is being deleted - `Failed`: requires user intervention |
Node | Node pool | Information about the node pool the node belongs to |
Node | Private IP | The private IP of the node |
Node | Availability zone | The availability zone of the subnet where the node is running |
Node | Uptime | The total time elapsed since the node was requested, not the node's creation date |
Configure node labels
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
- In the Cluster menu, select the cluster that contains the node pool.
- On the cluster detail page, go to the Node pool tab, then click the Set node label button in the details.
- Enter the Key and Value, then click [Save]. The labels are applied to all nodes in the node pool.
Item | Description |
---|---|
Key | A key identifying the label; up to 50 labels per node |
Value | The value of the label |
[Trash icon] | Click to delete the label |
Keywords reserved by KakaoCloud or Kubernetes cannot be used as label keys.
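After saving, you can confirm the labels with kubectl; `env=prod` below is a hypothetical key/value, not one set by the service.

```bash
kubectl get nodes --show-labels   # List every label on each node
kubectl get nodes -l env=prod     # List only nodes carrying the hypothetical env=prod label
```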
Configure user script
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
- Select the cluster from the Cluster menu that contains the node pool.
- On the cluster detail page, go to the Node pool tab, then click the Set user script button in the details.
- Upload or enter the User script and click [Save].
The user script will only apply to newly created nodes after it is set.
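As an illustration only (not a script shipped by the service), a user script is an ordinary shell script that runs once when a node is created. This sketch assumes an apt-based node image:

```bash
#!/bin/bash
# Hypothetical user script: runs once at node creation.
set -euo pipefail

# Install an extra diagnostic package (assumes an apt-based image).
apt-get update -y
apt-get install -y sysstat

# Leave a marker so you can confirm the script ran.
echo "user script completed at $(date)" > /var/log/user-script.log
```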
Delete node pool
You can delete a node pool that is no longer needed.
Deleting a node pool will delete all nodes in the pool, and this action cannot be undone.
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
- Select the cluster from the Cluster menu where the node pool is located.
- Go to the Node pool tab, click the [More] icon next to the node pool, and select Delete node pool.
- Enter the information in the Delete node pool popup and click [Delete].
Pod scheduling configuration
Configure pod scheduling (node pool)
You can configure the pod scheduling status for all nodes in a node pool.
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
- Select the cluster from the Cluster menu where the node pool is located.
- On the cluster detail page, go to the Node pool tab, then click the [More] icon for the node pool and select Set pod scheduling.
- Review the information in the popup. Modify settings as needed and click [Save].
- You can allow or block pod scheduling regardless of the node pool's status. The pod scheduling settings apply to all nodes in the pool.
Configure pod scheduling (single node)
Configure pod scheduling status for individual nodes.
Pod scheduling settings for a single node do not override the settings for the entire node pool; whichever scheduling setting was applied most recently takes effect on an individual node.
Node | Description |
---|---|
Blocked node scheduling | Pods will not be assigned to a node after scheduling is blocked, but existing pods will continue to run. |
Allowed node scheduling | Pods will be scheduled normally on the node. |
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
- Select the cluster from the Cluster menu that contains the target node.
- On the cluster detail page, go to the Node tab and click the [More] icon for the target node, then select Block node scheduling.
- Depending on the node's current scheduling status, either Block node scheduling or Allow node scheduling will appear.
- Review the information in the popup and click [Apply].
- Depending on the current scheduling status, the confirmation will allow or block scheduling. A kubectl equivalent is sketched below.
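If you also operate the cluster with kubectl, the console's block/allow actions correspond to Kubernetes cordon/uncordon semantics. A sketch, where `<node-name>` is a node name as reported by `kubectl get nodes`:

```bash
kubectl cordon <node-name>     # Block scheduling: no new pods are assigned; existing pods keep running
kubectl uncordon <node-name>   # Allow scheduling again
```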
Node pool scaling management
- The previous node pool autoscaling setting is now called Resource-based auto scaling.
- Both resource-based and schedule-based auto scaling are available, but only one type of autoscaling can be set at a time.
  - Future improvements will allow both autoscaling types to be used simultaneously.
Configure resource-based auto scaling
Resource-based auto scaling adjusts the number of nodes based on the resource usage of the node pool. If available resources are insufficient to schedule pods, nodes are automatically added, and when resource utilization remains low, nodes are automatically removed.
This feature is based on the Kubernetes Cluster Autoscaler project.
- Resource-based auto scaling can be configured independently for each node pool.
- Automatic scale-in can be configured for the entire cluster and is applied if autoscaling is set during cluster/node pool creation.
- The autoscaling feature is not available for Bare Metal Server type node pools.
- Resource-based auto scaling operates based on the request values defined for pod resources. If pod resource requests are not defined, the autoscaling feature will not function (see the example after this list).
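Because the autoscaler sizes the pool from pod resource requests, every pod it should account for must declare them. A minimal sketch; the pod name, image, and values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: requests-example          # hypothetical pod name
spec:
  containers:
    - name: app
      image: nginx:1.25           # illustrative image
      resources:
        requests:                 # the autoscaler schedules against these values
          cpu: 250m
          memory: 256Mi
        limits:
          cpu: 500m
          memory: 512Mi
```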
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine.
- In the Cluster menu, select the cluster to configure autoscaling for.
- On the cluster detail page, go to the Node pool tab and select the node pool.
- In the Scaling tab of the node pool details, click the Resource-based auto scaling button.
- Review the information in the popup. Modify the settings as needed and click [Save].
Item | Description |
---|---|
Resource-based auto scaling | Enable or disable automatic adjustment of the node count based on node resource usage |
Minimum node count | The minimum number of nodes when autoscaling scales in - Automatic scale-in is configured at the cluster level; for details, refer to Configure scale-in for clusters |
Maximum node count | The maximum number of nodes when autoscaling scales out |
Configure HPA and load testing
You can set up HorizontalPodAutoscaler (HPA) with Cluster Autoscaler for more efficient resource management. This example demonstrates how to test if automatic scaling works as expected by applying HPA.
HPA monitors resource usage (such as CPU) and automatically adjusts the number of pods in a workload (such as Deployment, StatefulSet). For more details, refer to the Kubernetes documentation on HPA.
- Install the Helm client before configuring HPA. For installation instructions by operating system, refer to the Helm official documentation > Installing Helm.
- Install metrics-server so pod metrics can be collected for the HPA load test. Add the metrics-server chart repository, then install metrics-server:

```bash
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm upgrade --install metrics-server metrics-server/metrics-server --set hostNetwork.enabled=true --set containerPort=4443
```
- Check that node resource usage is being collected correctly. It may take up to 5 minutes after installing metrics-server for monitoring information to appear.

```bash
# Check node resource usage
kubectl top node
```
- After setting up the HPA and Cluster Autoscaler, deploy the php-apache server for load testing.

```yaml
# php-apache.yaml: deploy the php-apache app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
        - name: php-apache
          image: ike-controlplane-provider.kr-central-1.kcr.dev/ike-cr/hpa-example:latest
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 500m
            requests:
              cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
    - port: 80
  selector:
    run: php-apache
```

Deploy the php-apache app:

```bash
kubectl apply -f php-apache.yaml
```
- Create an HPA for load testing.

```bash
kubectl autoscale deployment php-apache --cpu-percent=10 --min=1 --max=10   # Create HPA
kubectl get hpa                                                             # Check HPA settings
```

Result:

```
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   46%/50%   1         10        5          28m
```
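Equivalently, the same HPA can be defined declaratively with an autoscaling/v2 manifest instead of `kubectl autoscale`; a sketch matching the command above (the file name is hypothetical):

```yaml
# hpa.yaml: declarative equivalent of the kubectl autoscale command
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 10   # matches --cpu-percent=10
```

Apply it with `kubectl apply -f hpa.yaml`.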
- Run a load-generating pod to test the HPA and autoscaling configuration.

```bash
kubectl run -i --tty load-generator --rm --image=ike-controlplane-provider.kr-central-1.kcr.dev/ike-cr/busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
```
- Confirm that the number of pods and nodes increases appropriately as the load increases.
  - The HPA for the php-apache server is triggered, and pods that cannot be scheduled due to insufficient resources enter the `Pending` state.
  - Since some pods cannot be scheduled, the number of nodes is automatically scaled out to three to add resources.
- Check the results of the HPA and autoscaling operations:

```bash
kubectl get pods -w    # Watch for changes in the number of pods
kubectl get nodes -w   # Watch for changes in the number of nodes
```

Result:

```
NAME                          READY   STATUS    RESTARTS   AGE
php-apache-766d5cdd5b-2t5p8   0/1     Pending   0          44s
php-apache-766d5cdd5b-5mhlk   0/1     Pending   0          29s
php-apache-766d5cdd5b-5vjt6   0/1     Pending   0          14s
php-apache-766d5cdd5b-74z87   1/1     Running   0          44s
php-apache-766d5cdd5b-d49g9   0/1     Pending   0          29s
php-apache-766d5cdd5b-fnlld   1/1     Running   0          44s
php-apache-766d5cdd5b-nr5f2   0/1     Pending   0          29s
php-apache-766d5cdd5b-t7zr8   0/1     Pending   0          29s
php-apache-766d5cdd5b-vjjlz   1/1     Running   0          2m49s
php-apache-766d5cdd5b-whjhw   0/1     Pending   0          14s

NAME                STATUS   ROLES    AGE    VERSION
host-10-187-5-177   Ready    <none>   51s    v1.24.6
host-10-187-5-189   Ready    <none>   9m5s   v1.24.6
host-10-187-5-98    Ready    <none>   69s    v1.24.6
```
- When a node is added, all pods that were in the `Pending` state change to the `Running` state.
Set schedule-based auto scaling
Schedule-based auto scaling adjusts the number of nodes in a node pool at scheduled times. Use it to automate node-count changes around predictable increases or decreases in traffic. For example, a workload whose traffic regularly drops on weekends and rises during the week can be managed with rules such as the following:
- Schedule-based auto scaling
  - Rule 1: Every Monday at 8:30 AM | increase the desired node count
  - Rule 2: Every Friday at 7:30 PM | decrease the desired node count
This gives the application enough nodes to handle peak weekday traffic while reducing unnecessary nodes during the relatively quiet weekend, optimizing both cost and performance.
- Up to 2 scheduling rules can be created for schedule-based auto scaling.
- Since at most 2 rules can be used, it is recommended to pair one rule that scales out at a specific time with another that scales back to the original number of nodes.
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine menu.
- In the Cluster menu, select the cluster that contains the node pool.
- Choose the node pool to configure in the Node pool tab on the cluster's detail page.
- In the node pool details, click the Create rule button in the schedule-based auto scaling section under the Scaling tab.
- In the Create rule pop-up window, enter the rule name and description, then click the [Create] button.
Item | Description |
---|---|
Name | The name of the schedule-based auto scaling rule |
Rule | Desired number of nodes - Set the desired number of nodes at the specified time |
Recurrence settings | Start: the start date and time when the rule operates; recurrence points are determined based on the start date and time - Scheduled execution date: displays the nearest scheduled date based on the start date and time |
Set recurrence
To create a recurring schedule, select [Daily], [Weekly], or [Monthly] when creating the rule. The available recurrence periods are as follows:
Recurrence Period | Description |
---|---|
Once | Performs once at the start date and time |
Daily | Repeats daily based on the time of the start date - Start Date: 2024/05/01 (Wed) 10:00 - Recurrence Point: Daily at 10:00 |
Weekly | Repeats weekly based on the day of the week and time of the start date - Start Date: 2024/05/01 (Wed) 10:00 - Recurrence Point: Every Wednesday at 10:00 |
Monthly | Repeats monthly based on the day and time of the start date - Start Date: 2024/05/01 (Wed) 10:00 - Recurrence Point: Every 1st of the month at 10:00 |
Delete schedule-based auto scaling
To delete a schedule-based auto scaling rule, follow these steps:
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine menu.
- In the Cluster menu, select the cluster that contains the node pool.
- Choose the node pool to configure in the Node pool tab on the cluster's detail page.
- In the node pool details, click the [Trash can] button in the schedule-based auto scaling rule list under the Scaling tab.
- In the pop-up window, enter the rule name and click the [Delete] button.
- Scheduling rules cannot be deleted while the node pool's status is changing.
- Changing states: ScalingUp, ScalingDown, Updating
Check schedule-based auto scaling events
You can check the results of schedule-based auto scaling in the event list. Up to 20 logs are retained per rule. Schedule-based auto scaling events can also be viewed through the Cloud Trail service.
- If a schedule-based auto scaling event shows Failed, an internal issue occurred during scaling and the node pool's status has also changed to Failed.
- Possible causes include exceeding the node pool quota or timeouts during capacity adjustments. For events you cannot resolve yourself, contact the Helpdesk.
- When a rule is deleted, the associated events are also deleted.
  - Logs recorded in the Cloud Trail service are retained.
Category | Description |
---|---|
Event time | The time when the rule was executed |
Rule name | The name of the executed rule |
Result | The result of the executed rule: [Success] or [Failure] - Detailed event results can be checked in a pop-up. |
Manage nodes
The following methods are used to manage nodes in the Kubernetes Engine service.
View node details
You can check detailed information about a node.
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine menu.
- In the Cluster menu, select the cluster that contains the node to check.
- Click the Node tab on the cluster's detail page and select the node to check.
  - Alternatively, select the node from the Node pool > Node tab to view the details.
- Check the information in the Details tab.
Category | Description |
---|---|
Instance ID | The VM instance ID corresponding to the node - Clicking the instance ID takes you to the VM instance page |
Instance type | The VM instance type corresponding to the node, along with the node pool type |
Created at | The creation date of the VM instance corresponding to the node |
Kubernetes version | The Kubernetes version of the node |
Availability zone | The availability zone of the subnet where the node is running |
Volume | Information about the volume attached to the VM instance corresponding to the node |
Key pair | Information about the key pair set for the node - The key pair specified through the node pool is not exposed in the instance details |
Private IP | The private IP information of the node |
Image | Detailed information about the image installed on the node |
CPU multithreading | Whether the CPU multithreading feature is enabled |
Summary of node | Performance and status information about the node; click the [Refresh] icon to load the latest information - Pod: pods currently running on the node - Node condition: detailed status information about the node - Taint: taints set on the node - Label: labels set on the node - Annotation: annotations set on the node - Allocatable resource: current status of allocatable resources on the node - Event: events that have occurred on the node |
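If you need shell access to a node, you can connect with the node pool's key pair. A sketch, assuming the `.pem` file downloaded at key pair creation and a host that can reach the node's private IP; the login account (e.g., `ubuntu`) depends on the node image and is an assumption here:

```bash
chmod 400 my-keypair.pem                        # hypothetical key file name
ssh -i my-keypair.pem ubuntu@<node-private-ip>  # account name depends on the node image
```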
Node monitoring
You can check monitoring information such as resource usage and trends of nodes in chart format for a specific period.
To monitor nodes in Kubernetes Engine, node-exporter is installed on port 59100 of each node. Port 59100 is therefore reserved and cannot be used for other purposes.
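To confirm the exporter is serving metrics, you can query it from a host with network access to the node; this check is an assumption based on node-exporter's standard `/metrics` endpoint, and `<node-private-ip>` is a placeholder:

```bash
curl -s http://<node-private-ip>:59100/metrics | head
```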
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine menu.
- Select the cluster where the node pool to check is located from the Cluster menu.
- Click on the Node tab on the cluster's detail page and select the node to check.
- Alternatively, select the node from the Node pool > Node tab to view the details.
- In the Monitoring tab, select the period and Node/Pod, and check the information.
Node monitoring
When you select Node, you can check the resource usage and trend information of that node.
Category | Description |
---|---|
CPU usage (millicores) | The CPU usage of the node |
Memory usage (Bytes) | The memory usage of the node |
Disk usage (Bytes) | The disk usage of the node |
RX Network (byte/s) | The number of bytes received over the network by the node |
TX Network (byte/s) | The number of bytes sent over the network by the node |
Reserved CPU Capacity (%) | The percentage of CPU reserved for the node's components |
Reserved Memory Capacity (%) | The percentage of memory reserved for the node's components |
Pods (Count) | The number of pods running on the node |
Containers (Count) | The number of containers running on the node |
Pod monitoring
When you select Pod, you can check the resource usage and trend information of the pods running on the node.
Category | Description |
---|---|
CPU usage (millicores) | The CPU usage of the pod |
Memory usage (Bytes) | The memory usage of the pod |
RX Network (byte/s) | The number of bytes received over the network by the pod |
TX Network (byte/s) | The number of bytes sent over the network by the pod |
Reserved CPU Capacity (%) | The percentage of CPU reserved for the pod |
Reserved Memory Capacity (%) | The percentage of memory reserved for the pod |
Recover nodes
You can recover nodes that are in Failed status.
When recovering a node, the node will be drained, and a new node will be created, while the existing node will be deleted. Running services may be affected, and deleted nodes cannot be recovered. Note that the IP of the VM corresponding to the newly created node will change.
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine menu.
- Select the cluster where the node pool to check is located from the Cluster menu.
- In the cluster's detail page, go to the Node tab, click the [More] icon of the node to recover, and select Recover node.
- In the Recover node pop-up window, enter the information and click the [Recover] button.
Update nodes
When the cluster has been updated to a newer Kubernetes version, or when node images with the latest node components are available, you can update the nodes.
When executing a node update, a rolling update will be performed as follows:
- A new node with the latest image version is created.
- Pods running on the existing node are evicted, and the node is switched to an unschedulable state.
- The evicted pods run on the new node; once eviction is complete, the existing node is deleted.
- This process repeats sequentially for all existing nodes.
If any of the following conditions are not met, the update will not proceed.
Condition | Description |
---|---|
Cluster Status | Provisioned status - If in any other status, the update button will not be displayed |
Node pool Status | Running status - If in any other status, the update button will not be displayed |
Update procedures
If the node pool meets the conditions to start the update, you can update the node pool.
Once the update starts, it cannot be canceled, and you cannot revert to the previous state.
- Go to KakaoCloud Console > Container Pack > Kubernetes Engine menu.
- Select the cluster where the node pool to update is located from the Cluster menu.
- Click on the Node pool tab, and then click the Kubernetes version > [Update] button for the node pool to update.
- In the pop-up window, check the information and click the [Update] button.
  - Once the node pool update starts, the node pool's status changes to `Updating`; when the update is complete, it changes to `Running`. During the update, adding new node pools and configuring existing node pools is not possible.
Check for update failures
During the rolling update, if the eviction of pods fails due to PDB (Pod Disruption Budget) settings, the update may fail. In case of an update failure, you can try the following methods. For more detailed explanations, refer to the official Kubernetes documentation.
- Modify the Min Available and Max Unavailable values of the PDB so that the pods can be evicted successfully. Be aware that if the Max Unavailable value is `0`, draining nodes for the update will fail.
- Back up the PDB, then delete it. After the update is complete, restore the PDB.
- If the pods are managed by a Deployment, StatefulSet, etc., and the pod count is maintained by a ReplicaSet, pod eviction may fail. In this case, back up and delete the Deployment, StatefulSet, etc., in advance.
- Additionally, you can find guidelines for safely draining nodes in the official Kubernetes documentation: Safely Drain a Node (see the command sketch after this list).
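For example, you can inspect the PDBs in the cluster and rehearse the drain manually with standard kubectl commands; `<node-name>` is a placeholder:

```bash
kubectl get pdb -A                 # List PodDisruptionBudgets and how many disruptions each allows
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data   # Safely evict pods from a node
kubectl uncordon <node-name>       # Return the node to service after maintenance
```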
- Node pool updates are performed as rolling updates, so your project must be able to create as many nodes as currently exist. If the VM and IaaS resources available to your project are insufficient, the update may fail.
- If a node transitions to the `Failed` status during the update and the `Updating` status stalls, you can proceed with node recovery; once the node is recovered, the update proceeds normally again. If the node pool remains in the `Updating` state for an extended period, contact the Helpdesk > Technical Support.