• Cloud Container Engine

Creating a VM Cluster

Before you create a containerized application, you must have at least one available cluster. At present, a maximum of five clusters can be created.

Basic Resources of a Cluster

Table 1 lists the basic resources that you need for creating a cluster.

Table 1 Basic resources of a cluster



Masters and related resources

Associated with CCE resource tenants, and invisible to you.

ECSs (optional)

An ECS corresponds to a cluster node that provides computing resources.

An ECS is named in the format Cluster name-Random number; the name can be customized. ECSs created in batches are named in the format Cluster name-Random number 1-Random number 2.

Security groups

Two security groups are created for a cluster: one for managing cluster masters, and the other for managing cluster nodes.


To ensure that a cluster runs properly, retain the settings of security groups and security group rules configured during cluster creation.

  1. Security group for masters

    Name format: Cluster name-cce-control-Random number


    • Allows outbound traffic.
    • Allows other nodes to access Kubernetes services of masters.
  2. Security group for nodes

    Name format: Cluster name-cce-node-Random number


    • Allows outbound traffic.
    • Allows remote login to Linux nodes over port 22 (SSH) and to Windows nodes over port 3389 (RDP).
    • Allows communication between Kubernetes components over ports 4789 (VXLAN) and 10250 (kubelet).
    • Allows external access to Kubernetes NodePort services over ports 30000 to 32767.
    • Allows communication between nodes in the same security group.
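As a sketch, the node security group's inbound rules above can be modeled as port ranges with a small helper that checks whether a given port is reachable. This is illustrative Python, not CCE's actual rule format:

```python
# Illustrative model of the node security group's inbound rules listed
# above (description, first port, last port). Not a CCE API.
NODE_INBOUND_RULES = [
    ("Remote login to Linux (SSH)", 22, 22),
    ("Remote login to Windows (RDP)", 3389, 3389),
    ("VXLAN overlay traffic", 4789, 4789),
    ("kubelet API", 10250, 10250),
    ("Kubernetes NodePort services", 30000, 32767),
]

def port_allowed(port: int) -> bool:
    """Return True if the port falls inside any allowed inbound range."""
    return any(low <= port <= high for _, low, high in NODE_INBOUND_RULES)
```

For example, `port_allowed(31000)` is true because it falls in the NodePort range, while an arbitrary application port such as 8080 is not opened by these rules.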

Disks (optional)

Two disks are configured for each node. One is the system disk, and the other is the data disk used to run Docker.

Elastic IP address (optional)

An elastic IP address (EIP) must be associated with a node in order to enable communication with the Internet.


Prerequisites

You have created a VPC and key pair as described in Creating a VPC and a Key Pair.

Creating a Cluster

  1. Log in to the CCE console. On the Dashboard page, click Create VM Cluster.
  2. Set the parameters listed in Table 2. The parameters marked with * are mandatory.

    Table 2 Parameters for creating a cluster



    * Region

    Physical location of a cluster.

    * Cluster Name

    Name of the cluster to be created.

    * Version

    Cluster version, which corresponds to the Kubernetes base version.

    * Size

    Maximum number of nodes that can be managed by the cluster. If you select 50 nodes, the cluster can manage a maximum of 50 nodes.

    Each cluster consists of at least one master node and one worker node. In this document, the terms worker node and node are used interchangeably.
    • Master node: manages and controls the entire cluster. The master node is automatically created along with the cluster.
    • Worker node: runs the applications you deploy. You can create worker nodes either when or after creating the cluster. The master node assigns a worker node to each deployable component of your application. If a worker node goes down, the master node migrates its applications to another worker node.

    * High Availability

    • Yes: Three master nodes will be created in the same AZ for the cluster. The master nodes manage and control the entire cluster. The cluster remains available even if one of the master nodes is faulty.

      If the cluster is an HA cluster, you can enable cross-AZ deployment of master nodes by clicking Advanced Settings > Configure and setting Multiple AZs to On. In this way, three master nodes will be located in different AZs. The cluster remains available even when one of the AZs is down. For details, see Table 3.

    • No: Only one master node is created for the cluster. The cluster becomes unavailable if the master node is faulty, but running applications are not affected.

    * VPC

    VPC where the new cluster is located.

    If no VPC is available, click Create a VPC and create one. For details, see Creating a VPC and a Key Pair.

    * Subnet

    Subnet in which the nodes run.

    * Network Model

    • Tunnel network: A virtual network built on top of a VPC network, applicable to common scenarios.
    • VPC network: A VPC network that delivers higher performance and applies to high-performance and intensive interaction scenarios. Only one cluster using the VPC network model can be created under a VPC.

    * Container CIDR Block

    Select a container classless inter-domain routing (CIDR) block that best fits your needs. The IP addresses in the selected CIDR block will be assigned to the container instances.

    • If Automatically select is deselected, you must select a CIDR block. If the CIDR block you select conflicts with a subnet CIDR block, the system prompts you to select another one. The console also recommends CIDR blocks that you can choose from.
    • If Automatically select is selected, the system automatically assigns a CIDR block that does not conflict with any subnet CIDR block.
    • The container CIDR block is a one-time configuration and cannot be changed after the cluster is created. If you want to use another container CIDR block, you have to create a new cluster and assign the new container CIDR block to the cluster.
    • Plan container CIDR blocks before creating a cluster. If different clusters share a container CIDR block, IP address conflicts and application access exceptions will occur.
    • The mask determines the maximum number of nodes in a cluster. Set the mask in the container CIDR block to an appropriate value that matches the cluster size.
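To see how the mask caps the node count, here is a small illustrative Python helper. It assumes, hypothetically, that each node is allocated a /24 pod subnet out of the container CIDR block; the actual per-node allocation size depends on the cluster configuration:

```python
import ipaddress

def max_nodes(container_cidr: str, per_node_prefix: int = 24) -> int:
    """How many per-node pod subnets fit in the container CIDR block.

    per_node_prefix=24 is an illustrative assumption (256 pod IPs per
    node), not a CCE default.
    """
    block = ipaddress.ip_network(container_cidr)
    if per_node_prefix < block.prefixlen:
        return 0  # container CIDR is smaller than one per-node subnet
    return 2 ** (per_node_prefix - block.prefixlen)
```

Under this assumption, a /16 container CIDR block supports up to 256 nodes, while a /22 supports only 4, which is why the mask must match the planned cluster size.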


    Description

    Description of the cluster.

    * Advanced Settings

    • Skip: Skip the advanced settings.
    • Configure: Configure the advanced settings listed in Table 3.
    Table 3 Advanced settings



    Multiple AZs

    • Off: The cluster's master nodes are deployed in the same AZ. If the AZ is down, the cluster stops serving new applications, but existing applications are not affected.
    • On: The cluster's master nodes are distributed across multiple AZs. When one of the AZs is down, the cluster can continue to serve new applications.

    Service Forwarding Mode

    • iptables: Traditional kube-proxy mode.
    • ipvs: Optimized kube-proxy mode with higher throughput and faster forwarding, suitable for large clusters.
      • Compared with iptables, ipvs supports more complex load balancing algorithms.
      • ipvs supports server health checks and connection retries.
      • Only clusters of Kubernetes v1.11 or later support the ipvs mode.
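The version gate on ipvs can be sketched as a simple check against the cluster's Kubernetes base version. This is illustrative code, not part of CCE:

```python
def supports_ipvs(version: str) -> bool:
    """True if a Kubernetes version string (e.g. "v1.11" or "v1.21.3")
    is v1.11 or later, the minimum for the ipvs forwarding mode."""
    major, minor = version.lstrip("v").split(".")[:2]
    return (int(major), int(minor)) >= (1, 11)
```

Note that the comparison is done on numeric components, since a plain string comparison would wrongly rank "1.9" above "1.11".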

  3. After the configuration is complete, click Next.
  4. Select whether to create a node in the cluster.

    • No: Create a cluster without nodes. Go to 7.
    • Yes: Create the first node for the cluster.

  5. Set the parameters listed in Table 4, and click Next.

    Table 4 Parameters for creating a node




    AZ

    Physical location where resources use independent power supplies and networks. AZs are physically isolated but interconnected through an internal network. To improve application reliability, you are advised to create cloud servers in different AZs.

    Node Name

    Name of the node to be created.


    Specifications

    • General-purpose: provides general computing, storage, and network configurations for the majority of application scenarios; typically used for web servers, development and test environments, and small database applications.
    • Memory-optimized: provides instances with a larger memory size; typically used for memory-intensive applications that process large amounts of data, such as relational databases and NoSQL databases.


    DeH

    A Dedicated Host (DeH) is a physical server dedicated for your use. You can create nodes on a DeH to physically isolate them from the nodes of other tenants.

    • Skip: Skip this setting.
    • Configure: Select an existing DeH.


    OS

    Only EulerOS 2.2 is supported.

    OS upgrade is not supported in the current version.


    Nodes

    Number of nodes to be created.


    EIP

    An independent public IP address. If a node needs to access the Internet, assign a new EIP or use an existing one.


    By default, the SNAT function of VPCs is disabled on CCE. If SNAT is enabled, EIPs are not required for accessing external networks.

    • Do not use: A cloud server without an EIP cannot access the Internet. It can be used only as a cloud server for deploying services or clusters on a private network.
    • Automatically assign: An EIP with exclusive bandwidth is automatically assigned to each cloud server. When creating an ECS, ensure that the EIP quota is sufficient. Set the specifications and bandwidth as required.
    • Specify: An existing EIP is assigned to the cloud server.


    Disks

    Disk type, which can be System Disk or Data Disk.

    • The system disk capacity is configurable and ranges from 40 to 1024 GB. The default value is 40 GB.
    • The data disk capacity is configurable and ranges from 100 to 32768 GB. The default value is 100 GB.

    Data disks deliver three levels of I/O performance:

    • Common I/O: EVS disks of this level provide reliable block storage and a maximum IOPS of 1,000 per disk. They are suitable for applications that do not require high read/write performance.
    • High I/O: EVS disks of this level provide a maximum IOPS of 3,000 and a minimum read/write latency of 1 ms. They are suitable for RDS, NoSQL, data warehouse, and file system applications.
    • Ultra-high I/O: EVS disks of this level provide a maximum IOPS of 20,000 and a minimum read/write latency of 1 ms. They are suitable for RDS, NoSQL, and data warehouse applications.
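As a sketch, the three I/O levels above can be expressed as a lookup that picks the lowest level whose maximum IOPS meets a required figure. The helper name and structure are illustrative, not a CCE API:

```python
# Illustrative mapping of the EVS I/O levels to their maximum IOPS,
# ordered from lowest to highest performance.
DISK_TIERS = [
    ("Common I/O", 1000),
    ("High I/O", 3000),
    ("Ultra-high I/O", 20000),
]

def pick_tier(required_iops: int) -> str:
    """Return the lowest I/O level whose max IOPS meets the requirement."""
    for name, max_iops in DISK_TIERS:
        if required_iops <= max_iops:
            return name
    raise ValueError(f"no EVS I/O level satisfies {required_iops} IOPS")
```

For instance, a workload needing 5,000 IOPS can only be served by an Ultra-high I/O disk, while 800 IOPS fits within Common I/O.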

    Key Pair

    Used for identity authentication when you remotely log in to a node. Select an existing key pair.

    If no key pair is available, click Create a key pair and create one.

    Advanced Settings

    • Skip: Skip the advanced settings.
    • Configure: Configure advanced settings as follows.


    • Set File Injection.

      Use the file injection function to inject script files into ECSs to:

      • Simplify ECS configuration.
      • Initialize ECS OS configuration.
      • Upload your scripts to an ECS during ECS creation.
      1. Click Add File.
      2. Enter the file path or the file name. In Linux, enter the file path that contains the file name (for example, /etc/foo.txt). The file name can contain only letters and digits.
      3. Click Select File, and select a script that meets the OS requirements.
    • Set Subnet IP Address.

      Select Automatically assign or Manually assign.
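The file-name rule above can be sketched as a validator: an absolute Linux path whose components contain only letters and digits. Allowing a trailing extension is an assumption drawn from the /etc/foo.txt example:

```python
import re

# Absolute path of one or more alphanumeric components, optionally
# ending in an alphanumeric extension (e.g. /etc/foo.txt).
# The extension allowance is an assumption, not a documented rule.
_PATH_RE = re.compile(r"^(/[A-Za-z0-9]+)+(\.[A-Za-z0-9]+)?$")

def valid_injection_path(path: str) -> bool:
    """Return True if the path satisfies the file injection naming rule."""
    return _PATH_RE.fullmatch(path) is not None
```

Under this rule, /etc/foo.txt is accepted, while a relative path or a name containing a hyphen is rejected.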

  6. Click Create Now and install add-ons.

    System resource add-ons are mandatory and will be installed now.

    Advanced add-ons are optional. You can install them now or after the cluster is created. For details, see Add-on Management.

  7. Click Create Now, review the details, and click Submit.

    It takes 6 to 10 minutes to create a cluster. Information about the progress of the creation process will be displayed.

Related Operations

After creating a cluster, you can:

  • Create a namespace. You can create multiple namespaces in a cluster and classify them into different logical groups to share cluster resources. The logical groups can be managed separately. For more information about how to create a namespace for a cluster, see Managing Namespaces.
  • Click the cluster name to view cluster details. Table 5 describes the cluster details tabs.
    Table 5 Cluster details



    Cluster Details

    View the details and operating status of the cluster.


    Monitoring

    Check the CPU and memory usage of the cluster over the past 1 hour, 3 hours, or 12 hours.


    Events

    • View cluster events on the Events tab page.
    • Set search criteria. For example, you can set a time segment or enter an event name to view corresponding events.

    Auto Scaling

    Cluster auto scaling dynamically adjusts the number of nodes in a cluster to meet your service requirements. When applications cannot be scheduled because cluster resources are insufficient, auto scaling adds nodes automatically, reducing manual operations.

    For details, see Cluster Auto Scaling.


    kubectl

    To access a Kubernetes cluster from a client, you can use the Kubernetes command line tool kubectl.

    For details, see Connecting to a Kubernetes Cluster Using kubectl.