High-Performance Computing ECSs

Overview

H1 ECSs provide a large number of CPU cores, a large memory size, and high throughput. They are suitable for compute-intensive applications whose performance is constrained by processing power.

H2 ECSs are designed to meet high-end computational needs, such as molecular modeling and computational fluid dynamics. In addition to substantial CPU power, H2 ECSs offer low-latency RDMA networking over EDR InfiniBand NICs to support memory-intensive computational workloads.

HL1 ECSs are second-generation high-performance computing ECSs that feature a large memory capacity. They interconnect with each other over 100 Gbit/s RDMA InfiniBand NICs and support 56 Gbit/s shared high I/O storage.

Specifications

Table 1 H1 ECS specifications

ECS Type | vCPUs | Memory (GB) | Flavor | Virtualization Type
High-performance computing | 2 | 4 | h1.large | XEN
High-performance computing | 4 | 8 | h1.xlarge | XEN
High-performance computing | 8 | 16 | h1.2xlarge | XEN
High-performance computing | 16 | 32 | h1.4xlarge | XEN
High-performance computing | 32 | 64 | h1.8xlarge | XEN
High-performance computing | 2 | 8 | h1.large.4 | XEN
High-performance computing | 4 | 16 | h1.xlarge.4 | XEN
High-performance computing | 8 | 32 | h1.2xlarge.4 | XEN
High-performance computing | 16 | 64 | h1.4xlarge.4 | XEN
High-performance computing | 32 | 128 | h1.8xlarge.4 | XEN
High-performance computing | 2 | 16 | h1.large.8 | XEN
High-performance computing | 4 | 32 | h1.xlarge.8 | XEN
High-performance computing | 8 | 64 | h1.2xlarge.8 | XEN
High-performance computing | 16 | 128 | h1.4xlarge.8 | XEN
High-performance computing | 32 | 256 | h1.8xlarge.8 | XEN

Table 2 H2 ECS specifications

ECS Type | vCPUs | Memory (GB) | Flavor | Virtualization Type | Local Disks | Capacity of One Local Disk (TB) | Network Type
High-performance computing | 16 | 128 | h2.3xlarge.10 | KVM | 1 | 3.2 | 100 Gbit/s EDR InfiniBand
High-performance computing | 16 | 256 | h2.3xlarge.20 | KVM | 1 | 3.2 | 100 Gbit/s EDR InfiniBand

Table 3 HL1 ECS specifications

ECS Type | vCPUs | Memory (GB) | Flavor | Virtualization Type | Network Type
High-performance computing | 32 | 256 | hl1.8xlarge.8 | KVM | 100 Gbit/s EDR InfiniBand

Scenarios

  • Applications

    First-generation high-performance computing ECSs are suitable for applications, such as scientific computing, that require large-scale parallel computing resources and high-performance infrastructure services to meet computing and storage demands and deliver high rendering efficiency.

    Second-generation high-performance computing ECSs are suitable for applications such as HPC, big data, and artificial intelligence (AI).

    • High-performance hardware: The memory-to-vCPU ratio is 8:1, and the ECSs provide a large number of multithreaded physical CPU cores, high-performance storage I/O, and high-throughput network connections.
    • Dedicated for HPC clusters: Multiple HL1 ECSs can form a cluster on which scalable cluster file systems, such as Lustre, are installed. HPC applications running on H2 ECSs can read and modify the data stored on these ECSs.
    • RDMA network connection: Like H2 ECSs, HL1 ECSs provide RDMA networking over 100 Gbit/s EDR InfiniBand. HL1 ECSs can communicate with H2 ECSs over the RDMA protocol. In addition, to achieve high storage I/O, HL1 ECSs can access EVS disks over the RDMA protocol, and the full storage path can reach a bandwidth of 56 Gbit/s.
  • Application scenarios

    H1 ECSs apply to computing and storage systems for genetic engineering, games, animations, biopharmaceuticals, and scientific computing.

    H2 and HL1 ECSs provide computing capabilities for clusters with large memory, good connectivity between nodes, and high storage I/O. Typical application scenarios include HPC, big data, and AI. In HPC solutions, HL1 ECSs are well suited to the Lustre parallel distributed file system, which is commonly used for large-scale cluster computing (see the sketch below).

    For example, in an HPC scenario, H2 ECSs can be used as compute nodes and HL1 ECSs as storage nodes.
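
Deploying Lustre itself is outside the scope of this section, but as a minimal sketch of the scenario above, an H2 compute node could mount a Lustre file system served by the HL1-based storage cluster over the InfiniBand (o2ib) LNet transport. The MGS address, file system name, and mount point below are placeholders, and the Lustre client packages must already be installed on the ECS.

```python
import subprocess

# Placeholders: replace with the MGS NID of the Lustre storage cluster
# (reachable over the InfiniBand/o2ib LNet transport), the file system
# name chosen when the file system was formatted, and a local mount point.
MGS_NID = "192.168.0.10@o2ib"   # hypothetical MGS address
FSNAME = "lustrefs"             # hypothetical file system name
MOUNT_POINT = "/mnt/lustre"

# Create the mount point and mount the Lustre client (requires root).
subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
subprocess.run(
    ["mount", "-t", "lustre", f"{MGS_NID}:/{FSNAME}", MOUNT_POINT],
    check=True,
)
```

Once mounted, HPC applications on the compute nodes read and write the shared data under the mount point, as described above.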

Features

High-performance computing ECSs have the following features:

  • Large memory capacity and more processor cores than other types of ECSs
  • Up to 32 vCPUs
  • H2 and HL1 ECSs use InfiniBand NICs that provide a bandwidth of 100 Gbit/s.
  • HL1 ECSs can use the following types of EVS disks as the system disk and data disks:
    • High I/O (performance-optimized I)
    • Ultra-high I/O (latency-optimized)

  • HL1 ECSs support 56 Gbit/s shared high I/O storage.

    To support 56 Gbit/s shared high I/O storage, you only need to attach high I/O (performance-optimized I) or ultra-high I/O (latency-optimized) EVS disks to target HL1 ECSs.

Notes on Using H1 ECSs

  • H1 ECSs do not support NIC hot swapping.
  • H1 ECSs support specifications modification if the source and target ECSs are of the same type.
  • H1 ECSs also support specifications modification between H1 and general-purpose (S1, C1, C2, or M1) flavors.
  • H1 ECSs support the following OSs:
    • CentOS 6.8 64bit
    • CentOS 7.2 64bit
    • CentOS 7.3 64bit
    • Windows Server 2008
    • Windows Server 2012
    • Windows Server 2016
    • SUSE Enterprise Linux Server 11 SP3 64bit
    • SUSE Enterprise Linux Server 11 SP4 64bit
    • SUSE Enterprise Linux Server 12 SP1 64bit
    • SUSE Enterprise Linux Server 12 SP2 64bit
    • Red Hat Enterprise Linux 6.8 64bit
    • Red Hat Enterprise Linux 7.3 64bit
  • The primary NIC and extension NIC of an H1 ECS are intended for different application scenarios. For details, see Table 4.
    Table 4 Application scenarios of the NICs of an H1 ECS

    NIC Type | Application Scenario | Remarks
    Primary NIC | Applies to vertical layer 3 communication. | N/A
    Extension NIC | Applies to horizontal layer 2 communication. | To improve network performance, you can set the MTU of the extension NIC to 8888.
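
Table 4 suggests raising the MTU of the extension NIC to 8888, but the guide does not prescribe a specific procedure. As one possible sketch on a Linux H1 ECS, the MTU could be changed with the iproute2 ip tool. The interface name eth1 below is a placeholder for the extension NIC, and the change does not persist across reboots unless it is also added to the NIC's configuration file for your distribution.

```python
import subprocess

# Placeholder: replace eth1 with the name of the extension NIC on the ECS.
EXTENSION_NIC = "eth1"

# Raise the MTU to 8888 as suggested in Table 4 (requires root privileges).
subprocess.run(
    ["ip", "link", "set", "dev", EXTENSION_NIC, "mtu", "8888"],
    check=True,
)
# Verify the new MTU.
subprocess.run(["ip", "link", "show", "dev", EXTENSION_NIC], check=True)
```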

Notes on Using H2 ECSs

  • H2 ECSs do not support OS reinstallation or change.
  • H2 ECSs do not support specifications modification.
  • H2 ECSs do not support cold migration, live migration, or HA.
  • H2 ECSs support the following OSs:
    • For public images:
      • CentOS 7.3 64bit
      • SUSE Linux Enterprise Server 11 SP4 64bit
      • SUSE Linux Enterprise Server 12 SP2 64bit
    • For private images:
      • CentOS 6.5 64bit
      • CentOS 7.2 64bit
      • CentOS 7.3 64bit
      • SUSE Linux Enterprise Server 11 SP4 64bit
      • SUSE Linux Enterprise Server 12 SP2 64bit
      • Red Hat Enterprise Linux 7.2 64bit
      • Red Hat Enterprise Linux 7.3 64bit
  • H2 ECSs use InfiniBand NICs that provide a bandwidth of 100 Gbit/s.
  • Each H2 ECS uses one 3.2 TB PCIe SSD card for temporary local storage.
  • If an H2 ECS is created using a private image, install the InfiniBand NIC driver after the ECS is created. Download the required version (4.2-1.0.0.0) of the InfiniBand NIC driver from the official Mellanox website and install it by following the instructions provided by Mellanox.
  • For SUSE H2 ECSs, if IP over InfiniBand (IPoIB) is required, you must manually configure an IP address for the InfiniBand NIC after installing the InfiniBand driver. For details, see section How Can I Manually Configure an IP Address for an InfiniBand NIC? An illustrative sketch also follows these notes.
  • After an H2 ECS is deleted, the data stored on the local SSDs is automatically cleared. Therefore, do not store persistent data on the SSDs while the ECS is running.
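
The authoritative IPoIB procedure is in the FAQ referenced above. Purely as an illustrative sketch, assigning a temporary IPoIB address on a Linux ECS could look like the following, where the interface name ib0 and the address 192.168.100.2/24 are placeholders chosen for this example. An address set this way does not persist across reboots; for a persistent configuration, follow the FAQ.

```python
import subprocess

# Placeholders: the IPoIB interface name and address depend on your setup.
IB_INTERFACE = "ib0"
IPOIB_ADDRESS = "192.168.100.2/24"

# Bring the IPoIB interface up and assign the address (requires root).
subprocess.run(["ip", "link", "set", IB_INTERFACE, "up"], check=True)
subprocess.run(
    ["ip", "addr", "add", IPOIB_ADDRESS, "dev", IB_INTERFACE],
    check=True,
)
# Show the resulting configuration.
subprocess.run(["ip", "addr", "show", "dev", IB_INTERFACE], check=True)
```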

Notes on Using HL1 ECSs

  • HL1 ECSs only support the attachment of high I/O (performance-optimized I) and ultra-high I/O (latency-optimized) EVS disks.

    To support 56 Gbit/s shared high I/O storage, you only need to attach high I/O (performance-optimized I) or ultra-high I/O (latency-optimized) EVS disks to target HL1 ECSs.

  • HL1 ECSs do not support specifications modification.
  • HL1 ECSs use InfiniBand NICs that provide a bandwidth of 100 Gbit/s.
  • HL1 ECSs created using a private image must have the InfiniBand NIC driver installed. Download the required version (4.2-1.0.0.0) of the InfiniBand NIC driver from the official Mellanox website and install it by following the instructions provided by Mellanox (a post-installation verification sketch follows these notes).
    • InfiniBand NIC type: Mellanox Technologies ConnectX-4 InfiniBand HBA (MCX455A-ECAT)
    • Mellanox official website: http://www.mellanox.com/
  • For SUSE HL1 ECSs, if IPoIB is required, you must manually configure an IP address for the InfiniBand NIC after installing the InfiniBand driver. For details, see section How Can I Manually Configure an IP Address for an InfiniBand NIC?
  • HL1 ECSs support the following OSs:
    • For public images:
      • CentOS 7.3 64bit
      • SUSE Linux Enterprise Server 11 SP4 64bit
      • SUSE Linux Enterprise Server 12 SP2 64bit
    • For private images:
      • CentOS 6.5 64bit
      • CentOS 7.2 64bit
      • CentOS 7.3 64bit
      • SUSE Linux Enterprise Server 11 SP4 64bit
      • SUSE Linux Enterprise Server 12 SP2 64bit
      • Red Hat Enterprise Linux 7.2 64bit
      • Red Hat Enterprise Linux 7.3 64bit
  • Billing for an HL1 ECS stops when the ECS is stopped.
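
After installing the Mellanox driver on an H2 or HL1 ECS created from a private image, following Mellanox's own instructions, you may want to confirm that the ConnectX-4 HBA is visible and that the InfiniBand link is active. The sketch below is illustrative only and assumes that lspci and the ibstat diagnostic tool installed with the driver packages are available on the ECS.

```python
import subprocess

# List PCI devices and print any Mellanox entries (the ConnectX-4 HBA
# should appear here if the hardware is presented to the ECS correctly).
lspci = subprocess.run(["lspci"], capture_output=True, text=True, check=True)
for line in lspci.stdout.splitlines():
    if "Mellanox" in line:
        print(line)

# Query the InfiniBand port state; "State: Active" indicates the link is up.
subprocess.run(["ibstat"], check=True)
```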