Creating a CCE Standard/Turbo Cluster

CCE standard and Turbo clusters provide an enterprise-class Kubernetes cluster hosting service that supports full lifecycle management of containerized applications. They offer a highly scalable, high-performance solution for deploying and managing cloud native applications. On the CCE console, you can easily create CCE standard and Turbo clusters. After a cluster is created, CCE hosts the master nodes, so you only need to create worker nodes. This keeps O&M cost-effective and service deployment efficient.

Before creating a CCE standard or Turbo cluster, you are advised to read What Is CCE?, Networking Overview, and Planning CIDR Blocks for a Cluster.

Step 1: Configure Basic Settings

Basic settings define the core architecture and underlying resource rules of a cluster, providing a framework for cluster running and resource allocation.

  1. Log in to the CCE console. Use the region selector in the upper left corner of the page to select a region for your cluster. Select a region close to where your resources are deployed to reduce network latency and speed up access.

    After confirming the region, click Create Cluster. If you are using CCE for the first time, create an agency as instructed.

  2. Configure the basic settings of the cluster. For details, see Table 1.

    Table 1 Basic settings of a cluster (applicable to standard and Turbo clusters)

    Parameter

    Description

    Modifiable After Cluster Creation

    Type

    Select CCE Standard Cluster or CCE Turbo Cluster as required.

    • CCE standard clusters provide highly reliable and secure containers for commercial use.

    • CCE Turbo clusters use the high-performance cloud native network. Such clusters provide cloud native hybrid scheduling, achieving higher resource utilization and wider scenario coverage.

    For details, see cluster types.

    No

    Cluster Name

    Enter a cluster name. Cluster names under the same account must be unique.

    Enter 4 to 128 characters. Start with a lowercase letter and do not end with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.

    Yes

    Cluster Version

    Select a Kubernetes version. The latest commercial version is recommended because it provides more stable and reliable features.

    Yes

    Cluster Scale

    Select a cluster scale as required. This parameter controls the maximum number of worker nodes that a cluster can manage.

    Yes

    After a cluster is created, its scale can only be increased. For details, see Changing a Cluster Scale.

    Master Nodes

    Select the number of master nodes. The master nodes are automatically hosted by CCE and deployed with Kubernetes cluster management components such as kube-apiserver, kube-controller-manager, and kube-scheduler.

    • 3 Masters: Three master nodes will be created for high cluster availability.

    • Single: Only one master node will be created in your cluster.

      Note

      If more than half of the master nodes in a CCE cluster are faulty, the cluster cannot function properly.

    You can also select AZs for deploying the master nodes of a specific cluster. By default, AZs are allocated automatically for the master nodes.

    • Automatic: Master nodes are randomly distributed in different AZs for cluster DR. If there are not enough AZs available, CCE will prioritize assigning nodes in AZs with enough resources to ensure cluster creation. However, this may result in AZ-level DR not being guaranteed.

    • Custom: Master nodes are deployed in specific AZs.

      If there is one master node in a cluster, you can select one AZ for the master node. If there are multiple master nodes in a cluster, you can select multiple AZs for the master nodes.

      • AZ: Master nodes are deployed in different AZs for cluster DR.

      • Host: Master nodes are deployed on different hosts in the same AZ for cluster DR.

      • Custom: Master nodes are deployed in the AZs you specified.

    No

    After the cluster is created, the number of master nodes and the AZs where they are deployed cannot be changed.
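    Once the cluster is available and you have configured kubectl access to it, you can verify the basic settings from the command line. A minimal check, assuming kubectl is already connected to the cluster:

      # Confirm the Kubernetes version selected during creation
      kubectl version

      # List worker nodes; master nodes are hosted by CCE and do not appear here
      kubectl get nodes -o wide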

Step 2: Configure Network Settings

Network configuration follows a hierarchical model. Coordinated configuration of the cluster network, container network, and Service network ensures end-to-end connectivity and security for containerized applications.

  • Cluster network: handles communication between nodes, transmitting pod and Service traffic while ensuring cluster infrastructure connectivity and security.

  • Container network: assigns each pod an independent IP address, enabling direct container communication and cross-node communication.

  • Service network: establishes a stable access entry, supports load balancing, and optimizes traffic management for Services within a cluster.

Before configuring the network settings, you are advised to learn the concepts and relationships of the three types of networks. For details, see Networking Overview.
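How the three networks cooperate can be illustrated with a minimal workload. In the sketch below (the nginx-demo name is illustrative), the pod receives an IP address from the container network, the Service receives a ClusterIP from the Service network, and the node running the pod holds an address from the cluster network's node subnet:

  # Create a test Deployment and expose it with a ClusterIP Service
  kubectl create deployment nginx-demo --image=nginx:alpine
  kubectl expose deployment nginx-demo --port=80 --target-port=80

  # The pod IP comes from the container network; the node IP from the node subnet
  kubectl get pods -o wide

  # The CLUSTER-IP comes from the Service CIDR block
  kubectl get service nginx-demo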

Configuring Network Settings for a CCE Standard Cluster

  1. Configure cluster network settings. For details, see Table 2.

    Table 2 Cluster network settings

    Parameter

    Description

    Modifiable After Cluster Creation

    VPC

    Select a VPC for a cluster.

    If no VPC is available, click Create VPC to create one. After the VPC is created, click the refresh icon.

    No

    Default Node Subnet

    Select a subnet. Once selected, all nodes in the cluster will automatically use the IP addresses assigned within that subnet. However, during node or node pool creation, the subnet settings can be reconfigured.

    No

    Default Node Security Group

    Select the security group automatically generated by CCE or select an existing one.

    The default node security group must allow traffic from certain ports to ensure normal communication. Otherwise, the node cannot be created. For details, see Configuring Cluster Security Group Rules.

    Yes

    IPv6

    After this function is enabled, the cluster supports IPv4/IPv6 dual-stack networking, meaning each worker node can have both an IPv4 and an IPv6 address. Both IP addresses support private and public network access. Before enabling this function, ensure that Default Node Subnet includes an IPv6 CIDR block.

    • CCE standard clusters (using VPC networks): IPv6 is not supported.

    No

  2. Configure container network parameters. For details, see Table 3.

    Table 3 Container network settings

    Parameter

    Description

    Modifiable After Cluster Creation

    Network Model

    The network model used by the container network in a cluster.

    • Tunnel network: applies to large clusters (with up to 2000 nodes) and scenarios that do not demand high performance, such as web applications and data middle- and back-end services with low access traffic.

    • VPC network: applies to small clusters (with 1000 nodes or fewer) and scenarios that demand high performance, such as AI and big data computing.

    For more details about their differences, see Overview.

    No

    DataPlane V2 (supported by clusters using VPC networks)

    eBPF is used at the kernel layer for Kubernetes network acceleration, enhancing high-performance service communication through ClusterIP Services, enabling precise traffic control via network policies, and providing intelligent egress bandwidth management. For details, see DataPlane V2 Network Acceleration.

    This function has the restrictions below. For details, see the restrictions in DataPlane V2 Network Acceleration.

    • Restrictions on clusters: This function can be enabled only for clusters of v1.27.16-r30, v1.28.15-r20, v1.29.13-r0, v1.30.10-r0, v1.31.6-r0, or later that use VPC networks.

    • Restrictions on memory: After this function is enabled, CCE will automatically deploy the cilium-agent on every node in a cluster. Each cilium-agent will use 80 MiB of memory, and the memory usage will increase by 10 KiB whenever a new pod is added.

    • Restrictions on OSs: After a node is created, it can only use HCE OS 2.0.

    Note

    CCE DataPlane V2 is available on a restricted basis. To use this feature, submit a service ticket to CCE.

    No

    Network Policies (supported by clusters using tunnel networks)

    Policy-based network control for a cluster. For details, see Configuring Network Policies to Restrict Pod Access. A minimal example is provided after Table 3.

    After this function is enabled, if your service CIDR blocks conflict with on-premises CIDR blocks, links to newly added gateways may fail to be established.

    For example, if a cluster accesses an external address over a Direct Connect connection and the external switch does not support ip-option, enabling network policies may cause network access failures.

    Yes

    Container CIDR Block

    CIDR block used by containers. This parameter determines the maximum number of containers in the cluster. CCE standard clusters support:

    • Manually set: You can customize the container CIDR blocks as needed. For cross-VPC passthrough networking, make sure the container CIDR block does not overlap with the VPC CIDR block to be accessed to prevent conflicts. For details, see Planning CIDR Blocks for a Cluster. The VPC network model allows you to configure multiple CIDR blocks, and container CIDR blocks can be added even after the cluster is created. For details, see Expanding the Container CIDR Block of a Cluster That Uses a VPC Network.

    • Auto select: CCE will randomly allocate a non-conflicting CIDR block from the ranges 172.16.0.0/16 to 172.31.0.0/16, or from 10.0.0.0/12, 10.16.0.0/12, 10.32.0.0/12, 10.48.0.0/12, 10.64.0.0/12, 10.80.0.0/12, 10.96.0.0/12, and 10.112.0.0/12. Since the allocated CIDR block cannot be modified after the cluster is created, you are advised to manually configure the CIDR blocks, especially in commercial scenarios.

      Note

      After a cluster using a container tunnel network is created, the container CIDR block cannot be expanded. To prevent IP address exhaustion, set the container CIDR block mask to at most 19 bits (a /19 block provides 8,192 IP addresses).

    No

    After a cluster using a VPC network is created, you can add container CIDR blocks to the cluster but cannot modify or delete the existing ones.

    Pod IP Addresses Reserved for Each Node (supported by clusters using the VPC networks)

    The number of pod IP addresses that can be allocated on each node (alpha.cce/fixPoolMask). This parameter determines the maximum number of pods that can be created on each node.

    In a container network, each pod is assigned a unique IP address. If the number of pod IP addresses reserved for each node is insufficient, pods cannot be created. For details, see Number of Allocatable Pod IP Addresses on a Node.

    No
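    If Network Policies is enabled in a cluster that uses a tunnel network, pod access can be restricted declaratively. A minimal sketch (the app: backend and app: frontend labels are illustrative):

      # Allow only frontend pods to reach backend pods on TCP port 8080
      kubectl apply -f - <<EOF
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-frontend-only
      spec:
        podSelector:
          matchLabels:
            app: backend          # the policy applies to backend pods
        policyTypes:
          - Ingress
        ingress:
          - from:
              - podSelector:
                  matchLabels:
                    app: frontend # only frontend pods may connect
            ports:
              - protocol: TCP
                port: 8080
      EOF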

  3. Configure Service network parameters. For details, see Table 4.

    Table 4 Service network settings

    Parameter

    Description

    Modifiable After Cluster Creation

    Service CIDR Block

    Configure an IP address range for the ClusterIP Services in a cluster. This parameter controls the maximum number of ClusterIP Services in a cluster. ClusterIP Services enable communication between containers in a cluster. The Service CIDR block cannot overlap with the node subnet or container CIDR block.

    No

    Request Forwarding

    Configure load balancing and route forwarding of Service traffic in a cluster. IPVS and iptables are supported. For details, see Comparing iptables and IPVS.

    • iptables: the traditional kube-proxy mode. It applies to scenarios with a small number of Services or a large number of concurrent short connections from clients. IPv6 clusters do not support iptables.

    • IPVS: allows higher throughput and faster forwarding. It is suitable for large clusters or when there are a large number of Services.

    No
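    To confirm that ClusterIP Services are allocated from the configured Service CIDR block, you can list the ClusterIPs currently in use. A quick check, assuming kubectl access to the cluster:

      # Every ClusterIP should fall within the Service CIDR block chosen at creation
      kubectl get services -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.clusterIP}{"\n"}{end}'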

Configuring Network Settings for a CCE Turbo Cluster

  1. Configure cluster network settings. For details, see Table 5.

    Table 5 Cluster network settings

    Parameter

    Description

    Modifiable After Cluster Creation

    VPC

    Select a VPC for a cluster.

    If no VPC is available, click Create VPC to create one. After the VPC is created, click the refresh icon.

    No

    Default Node Subnet

    Select a subnet. Once selected, all nodes in the cluster will automatically use the IP addresses assigned within that subnet. However, during node or node pool creation, the subnet settings can be reconfigured.

    No

    Default Node Security Group

    Select the security group automatically generated by CCE or select an existing one.

    The default node security group must allow traffic from certain ports to ensure normal communication. Otherwise, the node cannot be created. For details, see Configuring Cluster Security Group Rules.

    Yes

    IPv6

    After this function is enabled, the cluster supports IPv4/IPv6 dual-stack networking, meaning each worker node can have both an IPv4 and an IPv6 address. Both IP addresses support private and public network access. Before enabling this function, ensure that Default Node Subnet includes an IPv6 CIDR block.

    • CCE standard clusters (using VPC networks): IPv6 is not supported.

    No

  2. Configure container network parameters. For details, see Table 6.

    Table 6 Container network settings

    Parameter

    Description

    Modifiable After Cluster Creation

    Network Model

    The network model used by the container network in a cluster. Only Cloud Native Network 2.0 is supported.

    For more details about this network model, see Overview.

    No

    DataPlane V2

    eBPF is used at the kernel layer for Kubernetes network acceleration, enhancing high-performance service communication through ClusterIP Services, enabling precise traffic control via network policies, and providing intelligent egress bandwidth management. For details, see DataPlane V2 Network Acceleration.

    This function has the restrictions below. For details, see the restrictions in DataPlane V2 Network Acceleration.

    • Restrictions on clusters: The cluster version must be v1.27.16-r10, v1.28.15-r0, v1.29.10-r0, v1.30.6-r0, or later.

    • Restrictions on memory: After this function is enabled, CCE will automatically deploy the cilium-agent on every node in a cluster. Each cilium-agent will use 80 MiB of memory, and the memory usage will increase by 10 KiB whenever a new pod is added.

    • Restrictions on OSs: After a node is created, it can only use HCE OS 2.0.

    Note

    CCE DataPlane V2 is available on a restricted basis. To use this feature, submit a service ticket to CCE.

    No

    Pod Subnet

    Select the subnet used by pods. If no subnet is available, click Create Subnet to create one. The pod subnet determines the maximum number of containers in a cluster. You can add more pod subnets after a cluster is created.

    Yes

    Default Security Group

    Select the security group automatically generated by CCE or select an existing one. This parameter controls inbound and outbound traffic to prevent unauthorized access.

    The default security group of containers must allow access from specified ports to ensure proper communication between containers in the cluster. For details about how to configure security group ports, see How Do I Harden the Automatically Created Security Group Rules for CCE Cluster Nodes?

    Yes
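    In a CCE Turbo cluster, Cloud Native Network 2.0 assigns each pod an IP address directly from the selected pod subnet, making pod IPs routable within the VPC. A quick way to confirm this, assuming kubectl access:

      # Pod IPs should fall within the pod subnet's CIDR block rather than a separate container CIDR block
      kubectl get pods -A -o wide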

  3. Configure Service network parameters. For details, see Table 7.

    Table 7 Service network settings

    Parameter

    Description

    Modifiable After Cluster Creation

    Service CIDR Block

    Configure an IP address range for the ClusterIP Services in a cluster. This parameter controls the maximum number of ClusterIP Services in a cluster. ClusterIP Services enable communication between containers in a cluster. The Service CIDR block cannot overlap with the node subnet or container CIDR block.

    No

    Request Forwarding

    Configure load balancing and route forwarding of Service traffic in a cluster. IPVS and iptables are supported. For details, see Comparing iptables and IPVS.

    • iptables: the traditional kube-proxy mode. It applies to scenarios with a small number of Services or a large number of concurrent short connections from clients. IPv6 clusters do not support iptables.

    • IPVS: allows higher throughput and faster forwarding. It is suitable for large clusters or when there are a large number of Services.

    No

    IPv6 Service CIDR Block

    Configure IPv6 addresses for Services. This parameter is only available after IPv6 is enabled.

    No
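    If IPv6 is enabled, Services can be created as dual-stack using standard Kubernetes fields. A minimal sketch (the web-dual-stack name and app: web selector are illustrative; the cluster must have an IPv6 Service CIDR block configured):

      kubectl apply -f - <<EOF
      apiVersion: v1
      kind: Service
      metadata:
        name: web-dual-stack
      spec:
        ipFamilyPolicy: PreferDualStack   # assign both IPv4 and IPv6 ClusterIPs when available
        ipFamilies:
          - IPv4
          - IPv6
        selector:
          app: web
        ports:
          - port: 80
            targetPort: 8080
      EOF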

(Optional) Step 3: Configure Advanced Settings

Advanced settings extend and strengthen previous settings, enhancing security, stability, and compliance within clusters. This is achieved through capabilities like improved authentication, resource management, and security mechanisms.

Table 8 Advanced settings

Parameter

Description

Modifiable After Cluster Creation

IAM Authentication

CCE clusters support IAM authentication. You can call IAM-authenticated APIs to access CCE clusters.

No

Certificate Authentication

Certificate authentication is used for identity authentication and access control. It ensures that only authorized users or services can access a specific cluster.

  • Automatically generated: CCE automatically creates, hosts, maintains, and rotates X.509 certificates for your clusters.

  • Bring your own: You can add a custom certificate to your cluster and use it for authentication. In this case, you need to upload the CA root certificate, the client certificate, and the client certificate private key.

    Caution:

    • Upload a file smaller than 1 MiB. The CA certificate and client certificate can be in .crt or .cer format. The private key of the client certificate can only be uploaded unencrypted.

    • The validity period of the client certificate must be longer than five years.

    • The uploaded CA root certificate is used by the authentication proxy and for configuring the kube-apiserver aggregation layer. If any of the uploaded certificates is invalid, the cluster cannot be created.

    • In clusters of v1.25 and later, Kubernetes no longer supports certificate authentication generated using the SHA1WithRSA or ECDSAWithSHA1 algorithms. You are advised to use the certificates generated using the SHA-256 algorithm for authentication.

No

CPU Management

CPU management policies allow precise control over CPU allocation for pods. For details, see CPU Policy.

  • Disabled: The default CPU affinity policy is used. No affinity beyond the OS scheduler's default behavior is provided, and workloads cannot exclusively use any CPUs even when many remain available in the shared pool.

  • Enabled: Workload pods can exclusively use CPUs. If a pod with a QoS class of Guaranteed requests an integer number of CPUs, the containers within the pod are pinned to physical CPUs on the host node. This mode benefits workloads that are sensitive to CPU cache hit ratio and scheduling latency. A minimal example is provided after Table 8.

Yes

Disk Encryption for Master Nodes

After this function is enabled, dynamic and static data on master node disks is encrypted, providing strong security protection for your data.

After encryption, disk read/write performance deteriorates. This function is available only for clusters of v1.25 or later.

No

Overload Control

After this function is enabled, concurrent requests will be dynamically controlled based on the resource demands received by master nodes, ensuring stable running of the master nodes and the cluster. For details, see Enabling Overload Control for a Cluster.

Yes

Cluster Deletion Protection

After this function is enabled, you will not be able to delete or unsubscribe from clusters on CCE. This prevents accidental deletion of clusters through the console or APIs. You can change this setting in Settings after the cluster is created.

Yes

Time Zone

The cluster's scheduled tasks and nodes are subject to the chosen time zone.

No

Resource Tag

Adding tags to resources allows for customized classification and organization. A maximum of 20 resource tags can be added.

You can create predefined tags on the TMS console. These tags are available to all resources that support tags. You can use these tags to improve the tag creation and resource migration efficiency.

Yes

Description

Cluster description helps users and administrators quickly understand the basic settings, status, and usage of a cluster. The description can contain a maximum of 200 characters.

Yes
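When CPU Management is enabled, a pod qualifies for exclusive CPUs only if its QoS class is Guaranteed and it requests an integer number of CPUs. A minimal sketch of such a pod (the cpu-pinned-demo name and image are illustrative):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: cpu-pinned-demo
  spec:
    containers:
      - name: app
        image: nginx:alpine
        resources:
          requests:
            cpu: "2"        # integer CPU count, equal to the limit
            memory: 2Gi
          limits:
            cpu: "2"        # requests == limits -> Guaranteed QoS; integer CPUs -> pinned
            memory: 2Gi
  EOF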

Step 4: Select Add-ons

CCE provides a variety of add-ons to extend cluster functions and enhance the functionality and flexibility of containerized applications. You can select add-ons as required. Some basic add-ons are set as mandatory by default. If non-basic add-ons are not installed during cluster creation, they can still be added later on the Add-ons page after the cluster is created.

  1. Click Next: Select Add-on. On the page displayed, select the add-ons to be installed during cluster creation.

  2. Select basic add-ons to ensure the proper running of the cluster. For details, see Table 9.

    Table 9 Basic add-ons

    Add-on

    Description

    CCE Container Network (Yangtse CNI)

    This add-on provides network connectivity, Internet access, and security isolation for pods in a cluster. It is the basic cluster add-on.

    CCE Container Storage (Everest)

    This add-on is installed by default. It is a cloud native container storage system based on CSI and supports cloud storage services such as EVS.

    CoreDNS

    This add-on is installed by default. It provides DNS resolution for your cluster and can be used to access the in-cloud DNS server.

    NodeLocal DNSCache

    (Optional) After you select this option, CCE will automatically install NodeLocal DNSCache. NodeLocal DNSCache improves cluster DNS performance by running a DNS cache proxy on cluster nodes.
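    After the cluster is created, you can verify that CoreDNS resolves in-cluster names as expected. A quick test using a disposable pod (the busybox image is illustrative):

      kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default.svc.cluster.local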

  3. Select the observability add-ons to experience the full observability function. For details, see Table 10.

    Table 10 Observability add-ons

    Add-on

    Description

    Cloud Native Cluster Monitoring

    (Optional) After you select this option, CCE will automatically install Cloud Native Cluster Monitoring. Cloud Native Cluster Monitoring collects monitoring metrics for your cluster and reports the metrics to AOM. The agent mode does not support HPA based on custom Prometheus statements. If related functions are required, install this add-on manually after the cluster is created.

    Cloud Native Log Collection

    (Optional) After you select this option, CCE will automatically install Cloud Native Log Collection. Cloud Native Log Collection helps report logs to LTS. After the cluster is created, you are allowed to obtain and manage collection rules on the Logging page of the CCE cluster console.

    CCE Node Problem Detector

    (Optional) After you select this option, CCE will automatically install CCE Node Problem Detector to detect faults and isolate nodes for prompt cluster troubleshooting.
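    Once CCE Node Problem Detector is running, the faults it detects surface as additional node conditions. You can inspect them with standard kubectl (the exact condition names depend on the add-on's configuration):

      # Conditions reported by the node problem detector appear alongside Ready, MemoryPressure, and so on
      kubectl describe node <node-name>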

Step 5: Configure Add-ons

Configure the selected add-ons to ensure they operate stably and accurately and meet service requirements.

  1. Click Next: Configure Add-on.

  2. Configure the basic add-ons. For details, see Table 11.

    Table 11 Basic add-on settings

    Add-on

    Description

    CCE Container Network (Yangtse CNI)

    This add-on is unconfigurable.

    CCE Container Storage (Everest)

    This add-on cannot be configured during cluster creation. After the cluster is created, you can modify its settings on the Add-ons page.

    CoreDNS

    This add-on cannot be configured during cluster creation. After the cluster is created, you can modify its settings on the Add-ons page.

    NodeLocal DNSCache

    This add-on cannot be configured during cluster creation. After the cluster is created, you can modify its settings on the Add-ons page.

  3. Configure the observability add-ons. For details, see Table 12.

    Table 12 Observability add-on settings

    Add-on

    Description

    Cloud Native Cluster Monitoring

    Select an AOM instance for Cloud Native Cluster Monitoring to report metrics. If no AOM instance is available, click Creating Instance to create one.

    Cloud Native Log Collection

    Select the logs to be collected. If enabled, a log group named k8s-log-{clusterId} will be automatically created, and a log stream will be created for each selected log type.

    • Container log: Standard output logs of containers are collected. The corresponding log stream is named in the format of stdout-{Cluster ID}.

    • Kubernetes Events: Kubernetes logs are collected. The corresponding log stream is named in the format of event-{Cluster ID}.

    • Kubernetes Audit Logs: Audit logs of the master nodes are collected. The log streams are named in the format of audit-{Cluster ID}.

    • Control Plane Logs: Logs from critical components such as kube-apiserver, kube-controller-manager, and kube-scheduler that run on the master nodes are collected. The log streams are named in the format of kube-apiserver-{Cluster ID}, kube-controller-manager-{Cluster ID}, and kube-scheduler-{Cluster ID}, respectively.

    If log collection is disabled here, you can enable it later by choosing Logging in the navigation pane of the cluster console.

    CCE Node Problem Detector

    This add-on cannot be configured during cluster creation. After the cluster is created, you can modify its settings on the Add-ons page.