Updating a Node Pool
Precautions
Changes to the container engine, OS, or pre-/post-installation scripts in a node pool take effect only on new nodes. To apply these changes to existing nodes, manually reset them.
Changes to the system disk size, data disk size, or data disk space allocation rules of a node pool apply only to new nodes. These settings cannot be synchronized to existing nodes even if the nodes are reset.
Changes to resource tags and Kubernetes labels/taints in a node pool will be automatically synchronized to existing nodes. You do not need to reset these nodes.
Updating a Node Pool
Log in to the CCE console.
Click the cluster name to access the cluster console. Choose Nodes in the navigation pane. In the right pane, click the Node Pools tab.
Click Update next to the name of the node pool you want to edit. Configure the parameters on the displayed Update Node Pool page.
Basic Settings
Table 1 Basic settings
Parameter
Description
Node Pool Name
Name of a node pool.
Node Configuration
Table 2 Node configuration parameters
Parameter
Description
Specifications
Select node specifications that best fit your service needs.
Note
If a node pool is configured with multiple node flavors, only the flavors (which can be located in different AZs) of the same node type are supported. For example, a node pool consisting of general computing-plus nodes supports only general computing-plus node flavors, but not the flavors of general computing nodes.
Nodes added to a single node pool must have the same GPU type. For example, if you select the nvidia-v100 flavor, you are not allowed to select the nvidia-t4 flavor.
A maximum of 10 node flavors can be added to a node pool (the flavors in different AZs are counted separately). When adding a node flavor, you can choose multiple AZs, but you need to specify them.
Nodes in a newly created node pool are created using the default flavor. If the resources for the default flavor are insufficient, node creation will fail.
After a node pool is created, the flavors of existing nodes cannot be deleted.
Container Engine
The container engines supported by CCE include Docker and containerd, which may vary depending on cluster types, cluster versions, and OSs. Select a container engine based on the information displayed on the CCE console. For details, see Mapping Between Node OSs and Container Engines.
Note
The modified container engine automatically applies to newly added nodes. Manually reset existing nodes for the modification to take effect.
OS
Select an OS type. Different types of nodes support different OSs.
Public image: Select a public image for the node.
Private image: Select a private image for the node.
Note
Service container runtimes share the kernel and underlying calls of nodes. To ensure compatibility, select a Linux distribution version that is the same as or close to that of the final service container image for the node OS.
The changed OS automatically applies to newly added nodes. Manually reset existing nodes for the modification to take effect.
Edit Key Pair
Only node pools that use key pairs for login support key pair editing. You can select another key pair.
Note
The changed key pair automatically applies to any new nodes added. Manually reset existing nodes for the modification to take effect.
Modify Login Settings
After this function is enabled, you can modify the node login mode.
Key Pair
Select the key pair used to log in to the node. You can select a shared key.
A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click Create Key Pair.
Note
The modified login setting automatically applies to any new nodes added. Manually reset existing nodes for the modification to take effect.
Storage Settings
Table 3 Configuration parameters
Parameter
Description
System Disk
System disk used by the node OS. The value ranges from 40 GiB to 1024 GiB. The default value is 50 GiB.
Note
After the system disk configuration is modified, the modification takes effect only on newly added nodes. The configuration cannot be synchronized to existing nodes even if they are reset.
System Disk Encryption: System disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption setting. Only the nodes of the Elastic Cloud Server (VM) type in certain regions support system disk encryption. For details, see the console.
Not encrypted is selected by default.
After setting System Disk Encryption to Enabled (key), choose an existing key. If no key is available, click View Key List and create a key. After the key is created, click the refresh icon next to the text box.
Note
The modified system disk encryption automatically applies to new nodes. Manually reset existing nodes for the modification to take effect.
System Component Storage
Select a disk for storing system components.
Data Disk: added for storing container runtime and kubelet components by default. The disk size ranges from 20 GiB to 32768 GiB. The default value is 100 GiB. This data disk cannot be deleted or detached. Otherwise, the node will be unavailable.
System Disk: stores CCE resources such as downloaded images, ephemeral storage for containers, and container stdout logs. If the system disk is fully occupied, it will negatively affect the stability of the node.
Note
In clusters of v1.23.18-r0, v1.25.13-r0, v1.27.10-r0, v1.28.8-r0, v1.29.4-r0, or later, you can select a disk for storing system components. If CCE Node Problem Detector is used, ensure that its version is 1.19.2 or later.
Customizing the pod base size for a node pool will prevent you from changing System Component Storage to System Disk.
The modified system component storage setting automatically applies to new nodes. Manually reset existing nodes for the modification to take effect.
Data Disk
At least one default data disk must be added for storing container runtime and kubelet components if System Component Storage is set to Data Disk. This data disk cannot be deleted or detached. Otherwise, the node will be unavailable. This function is available for clusters of a version earlier than v1.23.18-r0, v1.25.13-r0, v1.27.10-r0, v1.28.8-r0, or v1.29.4-r0.
Default data disk: used for container runtime and kubelet components. The disk size ranges from 20 GiB to 32768 GiB. The default value is 100 GiB.
Other common data disks: You can set the data disk size to a value ranging from 10 GiB to 32768 GiB. The default value is 100 GiB.
If System Component Storage is set to System Disk, you do not need to add a default data disk. In this case, all data disks are common ones: You can set the data disk size to a value ranging from 10 GiB to 32768 GiB. The default value is 100 GiB. This function is available for clusters of v1.23.18-r0, v1.25.13-r0, v1.27.10-r0, v1.28.8-r0, v1.29.4-r0, or later versions.
Note
After the data disk configuration is modified, the modification takes effect only on newly added nodes. The configuration cannot be synchronized to existing nodes even if they are reset.
Advanced Settings
Expand the area and configure the following parameters:
Data Disk Space Allocation: allocates space for container engines, images, and ephemeral storage for them to run properly. For details about how to allocate data disk space, see Space Allocation of a Data Disk.
Note
After the data disk space allocation configuration is modified, the modification takes effect only for new nodes. The configuration cannot apply to the existing nodes even if they are reset.
Enabled: Data disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption setting.
Not encrypted is selected by default.
After setting Data Disk Encryption to Enabled, choose an existing key. If no key is available, click View Key List and create a key. After the key is created, click the refresh icon next to the text box.
Note
After the data disk encryption is modified, the modification only applies to newly added nodes. The configuration cannot be synchronized to existing nodes even if they are reset.
Adding data disks
A maximum of 16 data disks can be attached to an ECS. By default, a raw disk is created without any processing.
You can also click Expand and select any of the following options:
Default: By default, a raw disk is created without any processing.
Mount Disk: The data disk is attached to a specified directory.
Use as PV: applicable when there is a high performance requirement on PVs. The node.kubernetes.io/local-storage-persistent label is added to the node with PV configured, with a value of linear or striped.
Use as ephemeral volume: applicable when there is a high performance requirement on emptyDir.
PVs and EVs support the following write modes:
Linear: A linear logical volume integrates one or more physical volumes. Data is written to the next physical volume when the previous one is used up.
Striped: A striped logical volume stripes data into blocks of the same size and stores them in multiple physical volumes in sequence, allowing data to be read and written concurrently. A storage pool consisting of striped volumes cannot be scaled out. This option can be selected only when there are multiple volumes.
Note
Local PVs are supported only when the cluster version is v1.21.2-r0 or later and the Everest add-on version is 2.1.23 or later.
Local EVs are supported only when the cluster version is v1.21.2-r0 or later and the Everest add-on version is 1.2.29 or later.
Local Disk Description
If the node flavor is disk-intensive or ultra-high I/O, one data disk can be a local disk.
Local disks may fail and do not ensure data reliability. Store your service data on EVS disks, which are more reliable than local disks.
Advanced Settings
Table 4 Advanced settings
Parameter
Description
Resource Tag
You can add resource tags to classify resources.
You can create predefined tags on the TMS console. These tags are available to all resources that support tags. You can use these tags to improve the tag creation and resource migration efficiency.
CCE will automatically create the CCE-Dynamic-Provisioning-Node=Node ID tag.
Note
Modified resource tags automatically apply to new nodes.
Kubernetes Label
A key-value pair added to a Kubernetes object (such as a pod). After specifying a label, click Add Label for more. A maximum of 20 labels can be added.
Labels can be used to distinguish nodes. With workload affinity settings, container pods can be scheduled to a specified node. For more information, see Labels and Selectors.
Note
Modified Kubernetes labels automatically apply to new nodes as well as existing nodes if Kubernetes labels synchronized is selected in Synchronization for Existing Nodes.
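As a plain illustration of how such labels drive scheduling, the following Python sketch mimics matchLabels-style node selection. The node names and labels here are hypothetical examples, not values CCE creates:

```python
# Plain-Python illustration of matchLabels-style node selection.
# Node names and labels below are hypothetical examples.
nodes = {
    "node-1": {"pool": "gpu", "disk": "ssd"},
    "node-2": {"pool": "general"},
}

def select(selector: dict) -> list:
    # A node matches only if every key-value pair in the selector
    # appears in the node's labels.
    return [name for name, labels in nodes.items()
            if all(labels.get(k) == v for k, v in selector.items())]
```

For example, select({"pool": "gpu"}) returns ["node-1"], so a workload with that affinity setting would be scheduled only to matching nodes.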
Taint
This parameter is left blank by default. You can add taints to configure anti-affinity for the node. A maximum of 20 taints are allowed for each node. Each taint contains the following parameters:
Key: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key.
Value: A value must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed.
Effect: Available options are NoSchedule, PreferNoSchedule, and NoExecute.
For details, see Managing Node Taints.
Note
Modified taints automatically apply to new nodes as well as existing nodes if Kubernetes taints synchronized is selected in Synchronization for Existing Nodes.
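The character rules above can be checked before submitting a taint. The following Python sketch derives validation regexes from the constraints stated in this section; the regexes are an interpretation of this documentation, not an official CCE or Kubernetes validator:

```python
import re

# Regexes derived from the rules stated above (an interpretation of
# this documentation, not an official validator).
NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._-]{0,62}$")   # 1-63 chars
PREFIX_RE = re.compile(r"^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$")  # DNS subdomain
VALID_EFFECTS = {"NoSchedule", "PreferNoSchedule", "NoExecute"}

def valid_taint_key(key: str) -> bool:
    # A DNS subdomain prefix (e.g. "example.com/") may precede the key name.
    prefix, _, name = key.rpartition("/")
    if prefix and not (len(prefix) <= 253 and PREFIX_RE.match(prefix)):
        return False
    return bool(NAME_RE.match(name))

def valid_taint_value(value: str) -> bool:
    return bool(NAME_RE.match(value))
```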
Synchronization for Existing Nodes
After the options are selected, changes to resource tags and Kubernetes labels/taints in a node pool will be synchronized to existing nodes in the node pool.
Note
When you update a node pool, pay attention to the following if you change the state of Resource tags synchronized:
After the option is selected:
CCE will synchronize the resource tags configured in the node pool to existing nodes. If an ECS already has a resource tag whose key matches one configured in the node pool, the tag's value on the ECS will be changed to the value configured in the node pool.
Typically, it takes less than 10 minutes to synchronize resource tags onto existing nodes, depending on the number of nodes in the node pool.
Issue a resource tag synchronization request only after the previous synchronization is complete. Otherwise, the resource tags may be inconsistent across existing nodes.
When you update a node pool, pay attention to the following if you change the state of Kubernetes labels or Taints:
When these options are deselected, the Kubernetes labels/taints of the existing and new nodes in the node pool may be inconsistent. If service scheduling relies on node labels or taints, the scheduling may fail or the node pool may fail to scale.
When these options are selected:
If you have modified or added labels or taints in the node pool, the modifications will be automatically synchronized to existing nodes typically in 10 minutes after Kubernetes labels or Taints is selected.
If you have deleted a label or taint in the node pool, you must manually delete the label or taint on the node list page after Kubernetes labels or Taints is selected.
If you have manually changed the key or effect of a taint on an existing node, selecting Kubernetes labels or Taints will add a new taint to that node rather than restore the edited one: the new taint differs from the manually changed taint in the key or effect you edited but matches it in the other fields. This is because Kubernetes natively identifies a taint by its key and effect pair, so taints with different keys or effects are treated as two separate taints.
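The key-and-effect identity described above can be illustrated with a small Python sketch that models a node's taint set as a dictionary keyed by (key, effect):

```python
# Model a node's taints as a dictionary: Kubernetes identifies a taint by
# its (key, effect) pair, so changing either one creates a new entry
# instead of replacing the old taint.
taints = {}

def set_taint(key: str, value: str, effect: str) -> None:
    taints[(key, effect)] = value

set_taint("pool", "a", "NoSchedule")
set_taint("pool", "a", "NoExecute")   # different effect: a second taint
set_taint("pool", "b", "NoSchedule")  # same key and effect: value replaced
```

This is why a manually edited key or effect leaves two taints on the node after synchronization, while an edited value is simply overwritten.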
New Node Scheduling
Default scheduling policy for the nodes newly added to a node pool. If you select Unschedulable, newly created nodes in the node pool will be labeled as unschedulable. In this way, you can perform some operations on the nodes before pods are scheduled to these nodes.
Scheduled Scheduling: After scheduled scheduling is enabled, new nodes automatically become schedulable after the custom time period expires.
Disabled: By default, scheduled scheduling is not enabled for new nodes. To manually enable this function, go to the node list. For details, see Configuring a Node Scheduling Policy in One-Click Mode.
Custom: the timeout for keeping new nodes unschedulable. The value ranges from 0 to 99 minutes.
Note
If auto scaling of node pools is also required, ensure that the scheduled scheduling time is less than 15 minutes. If a node added by Autoscaler cannot be scheduled for more than 15 minutes, Autoscaler determines that the scale-out has failed and triggers another scale-out. If the node remains unschedulable for more than 20 minutes, Autoscaler will scale it in.
After this function is enabled, nodes will be tainted with node.cloudprovider.kubernetes.io/uninitialized during a node pool creation or update.
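The timing constraints described above can be sketched in a few lines of Python. The thresholds are taken from this section; the helper itself is illustrative, not a CCE API:

```python
# Checks the scheduled-scheduling thresholds described above.
# The helper is illustrative, not a CCE API.
def check_unschedulable_timeout(minutes: int, autoscaler_enabled: bool) -> list:
    issues = []
    if not 0 <= minutes <= 99:
        issues.append("custom timeout must be 0-99 minutes")
    if autoscaler_enabled and minutes >= 15:
        issues.append("keep the timeout under 15 minutes, or Autoscaler "
                      "treats the scale-out as failed and retries")
    return issues
```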
ECS Group
An ECS group logically groups ECSs. The ECSs in the same ECS group comply with the same policy associated with the ECS group.
Anti-affinity: ECSs in an ECS group are deployed on different physical hosts to improve service reliability.
Select an existing ECS group, or click Add ECS Group to create one. After the ECS group is created, click the refresh icon.
Pre-installation Command
Installation script command. The script will be Base64-transcoded. The pre-installation and post-installation scripts are counted together, and the total number of characters after transcoding cannot exceed 10240.
The script will be executed before Kubernetes software is installed. Note that if the script is incorrect, Kubernetes software may fail to be installed.
Note
The modified pre-installation setting automatically applies to newly added nodes. Manually reset existing nodes for the modification to take effect.
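The 10240-character limit can be checked before submitting the scripts. A minimal Python sketch, assuming standard Base64 transcoding of the UTF-8 script text:

```python
import base64

LIMIT = 10240  # combined character limit after Base64 transcoding

def encoded_len(script: str) -> int:
    # Length of the script after UTF-8 encoding and Base64 transcoding.
    return len(base64.b64encode(script.encode("utf-8")))

def scripts_within_limit(pre_script: str, post_script: str) -> bool:
    # Pre- and post-installation scripts are counted together.
    return encoded_len(pre_script) + encoded_len(post_script) <= LIMIT
```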
Post-installation Command
Installation script command. The script will be Base64-transcoded. The pre-installation and post-installation scripts are counted together, and the total number of characters after transcoding cannot exceed 10240.
The script will be executed after Kubernetes software is installed, which does not affect the installation. During post-installation script execution, pods can be scheduled normally. However, if the script execution times out, node installation will fail. To prevent pods from being scheduled to nodes with incomplete script execution, enable the option to schedule pods only after the post-installation script execution completes.
Caution
Do not use the reboot command in the post-installation script to restart the system immediately. Instead, use the shutdown -r 1 command to restart the system with a one-minute delay.
Note
The modified post-installation setting automatically applies to newly added nodes. Manually reset existing nodes for the modification to take effect.
Agency
If you need to share ECS resources with other accounts or delegate a more professional person or team to manage the resources, you can create an agency on IAM and grant the agency the permissions to manage ECS resources. The delegated account can log in to the cloud system and switch to your account to manage resources. You do not need to share security credentials (such as passwords) with other accounts, ensuring the security of your account.
If you have created an agency, select the agency from the drop-down list. If no agency is available, click Create Agency on the right to create one.
Note
After an agency is modified, the modification will only apply to new nodes and not to existing ones, even if they are reset.
Custom Prefix and Suffix
Custom name prefix and suffix of a node in a node pool. After the configuration, the nodes in the node pool will be named with the configured prefix and suffix. For example, if the prefix is prefix- and the suffix is -suffix, the nodes in the node pool will be named in the format of "prefix-Node pool name with five-digit random characters-suffix".
A prefix and suffix can be customized only when a node pool is created, and they cannot be modified after the node pool is created.
A prefix can end with a special character, and a suffix can start with a special character.
A node name consists of a maximum of 56 characters in the format of "Prefix-Node pool name with five-digit random characters-Suffix".
A node name does not support the combination of a period (.) and special characters (such as .., .-, or -.).
This function is available only in clusters of v1.28.1, v1.27.3, v1.25.6, v1.23.11, v1.21.12, or later.
Note
After the custom name prefix and suffix are modified, the modification will only apply to new nodes and not to existing ones, even if they are reset.
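To estimate whether a prefix and suffix will fit within the 56-character limit, the following Python sketch can help. The exact separator placement around the five random characters is an assumption, so verify the final name on the console:

```python
MAX_NODE_NAME_LEN = 56
RANDOM_CHARS = 5

def name_fits(prefix: str, pool_name: str, suffix: str) -> bool:
    # Assumed layout: prefix + pool name + "-" + five random characters
    # + suffix. The separator placement is an assumption, not confirmed.
    total = len(prefix) + len(pool_name) + 1 + RANDOM_CHARS + len(suffix)
    return total <= MAX_NODE_NAME_LEN
```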
After the configuration, click OK.
After node pool parameters are modified, you can find the update on the Nodes page. Reset the nodes in the target node pool to synchronize the configuration update.