Overview
Container Storage
CCE container storage is implemented based on Kubernetes storage APIs, including the Container Storage Interface (CSI). CCE integrates multiple types of cloud storage to cover different application scenarios and is fully compatible with Kubernetes native storage volumes, such as emptyDir, hostPath, secrets, and ConfigMaps.
CCE allows workload pods to use multiple types of storage.
In terms of implementation, storage is provided either through the Container Storage Interface (CSI) or through Kubernetes native storage.
Type | Description |
---|---|
CSI | An out-of-tree volume add-on that defines a standard container storage interface. It allows storage vendors to develop plugins against this interface and expose volumes through PVs and PVCs, without adding their plugin source code to the Kubernetes repository for unified building, compilation, and release. CSI is the recommended mechanism in Kubernetes 1.13 and later versions. |
Kubernetes native storage | An in-tree volume add-on that is built, compiled, and released with the Kubernetes repository. |
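As a sketch of how a CSI-backed volume is requested, a PVC references a StorageClass whose provisioner is a CSI driver. The StorageClass name below is illustrative; list the classes actually available in your cluster with `kubectl get sc`.

```yaml
# Illustrative PVC requesting a CSI-provisioned volume.
# The StorageClass name "csi-disk" is an assumption; substitute
# a class name that exists in your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce        # block storage such as EVS is attached to one node
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-disk
```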
In terms of storage media, storage can be classified into cloud storage, local storage, and Kubernetes resource objects.
Type | Description | Application Scenario |
---|---|---|
Cloud storage | The storage media is provided by storage vendors. Volumes of this type are mounted through PVCs and PVs. | Data that requires high availability or needs to be shared, such as logs and media resources. Select a cloud storage type based on the application scenario. For details, see Cloud Storage Comparison. |
Local storage | The storage media is a local data disk or the memory of the node. The local PV is a custom storage type provided by CCE and is mounted through PVCs and PVs using CSI. The other local storage types are Kubernetes native storage. | Data that does not require high availability but demands high I/O and low latency. Select a local storage type based on the application scenario. For details, see Local Storage Comparison. |
Kubernetes resource objects | ConfigMaps and secrets are resources created in clusters. They are special storage types backed by tmpfs (a RAM-based file system) on the Kubernetes API server. | ConfigMaps inject configuration data into pods. Secrets pass sensitive information, such as passwords, to pods. |
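For example, a ConfigMap and a secret can both be mounted into a pod as volumes, so their data appears as files under the mount paths. The object names and image below are illustrative.

```yaml
# Illustrative pod mounting a ConfigMap and a secret as volumes.
# "my-config" and "my-secret" are assumed to exist in the namespace.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: app-config
          mountPath: /etc/config       # each ConfigMap key becomes a file here
        - name: app-secret
          mountPath: /etc/secret
          readOnly: true               # secrets are best mounted read-only
  volumes:
    - name: app-config
      configMap:
        name: my-config                # illustrative ConfigMap name
    - name: app-secret
      secret:
        secretName: my-secret          # illustrative secret name
```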
Cloud Storage Comparison
Item | EVS | SFS | SFS Turbo | OBS |
---|---|---|---|---|
Definition | EVS offers scalable block storage for cloud servers. With high reliability, high performance, and rich specifications, EVS disks can be used for distributed file systems, dev/test environments, data warehouses, and high-performance computing (HPC) applications. | Expandable to petabytes, SFS provides fully hosted shared file storage, highly available and stable to handle data- and bandwidth-intensive applications in HPC, media processing, file sharing, content management, and web services. | Expandable to 320 TB, SFS Turbo provides fully hosted shared file storage, which is highly available and stable, to support small files and applications requiring low latency and high IOPS. You can use SFS Turbo in high-traffic websites, log storage, compression/decompression, DevOps, enterprise OA, and containerized applications. | Object Storage Service (OBS) provides massive, secure, and cost-effective data storage for you to store data of any type and size. You can use it in enterprise backup/archiving, video on demand (VoD), video surveillance, and many other scenarios. |
Data storage logic | Stores binary data. Files cannot be stored directly; to store files, format the disk with a file system first. | Stores files and organizes and displays data in a hierarchy of files and folders. | Stores files and organizes and displays data in a hierarchy of files and folders. | Stores objects. Files are stored as objects with automatically generated system metadata, which users can also customize. |
Access mode | Accessible only after being attached to ECSs and initialized. | Mounted to ECSs over network protocols. A network address must be specified or mapped to a local directory before access. | Supports the Network File System (NFS) protocol (NFSv3 only). You can seamlessly integrate existing applications and tools with SFS Turbo. | Accessible through the Internet or Direct Connect (DC). Specify the bucket address and use a transfer protocol such as HTTP or HTTPS. |
Static storage volumes | Supported. For details, see Using an Existing EVS Disk Through a Static PV. | Supported. For details, see Using an Existing SFS File System Through a Static PV. | Supported. For details, see Using an Existing SFS Turbo File System Through a Static PV. | Supported. For details, see Using an Existing OBS Bucket Through a Static PV. |
Dynamic storage volumes | Supported. For details, see Using an EVS Disk Through a Dynamic PV. | Supported. For details, see Using an SFS File System Through a Dynamic PV. | Not supported | Supported. For details, see Using an OBS Bucket Through a Dynamic PV. |
Features | Non-shared storage. Each volume can be mounted to only one node. | Shared storage featuring high performance and throughput | Shared storage featuring high performance and bandwidth | Shared, user-mode file system |
Application scenarios | HPC, enterprise core cluster applications, enterprise application systems, and dev/test Note HPC apps here require high-speed and high-IOPS storage, such as industrial design and energy exploration. | HPC, media processing, content management, web services, big data, and analysis applications Note HPC apps here require high bandwidth and shared file storage, such as gene sequencing and image rendering. | High-traffic websites, log storage, DevOps, and enterprise OA | Big data analytics, static website hosting, online video on demand (VoD), gene sequencing, intelligent video surveillance, backup and archiving, and enterprise cloud boxes (web disks) |
Capacity | TB | SFS 1.0: PB | General-purpose: TB | EB |
Latency | 1-2 ms | SFS 1.0: 3-20 ms | General-purpose: 1-5 ms | 10 ms |
Max. IOPS | 2200-256000, depending on flavors | SFS 1.0: 2000 | General-purpose: up to 100,000 | Tens of millions |
Bandwidth | MB/s | SFS 1.0: GB/s | General-purpose: up to GB/s | TB/s |
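The static-provisioning pattern referenced in the table (using an existing disk through a static PV) amounts to declaring a PV that points at a pre-created volume. The sketch below shows the shape of such a manifest; the CSI driver name and volume handle are placeholders, and the CCE documentation linked above gives the exact values for each storage type.

```yaml
# Sketch of statically binding an existing disk as a PV.
# "example.csi.driver" and the volumeHandle are placeholders;
# consult your platform's CSI driver documentation for real values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-disk-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # keep the disk if the PV is deleted
  csi:
    driver: example.csi.driver           # placeholder CSI driver name
    volumeHandle: "<existing-disk-id>"   # ID of the pre-created disk
```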
Local Storage Comparison
Item | Local PV | Local Ephemeral Volume | emptyDir | hostPath |
---|---|---|---|---|
Definition | A node's local disks form a storage pool (VolumeGroup) through LVM. LVM divides the pool into logical volumes (LVs) that are mounted to pods. | Kubernetes native emptyDir, where a node's local disks form a storage pool (VolumeGroup) through LVM. LVs are created as the storage medium of emptyDir and mounted to pods. LVs deliver better performance than the default storage medium of emptyDir. | Kubernetes native emptyDir, whose lifecycle is the same as that of its pod. Memory can be specified as the storage medium. When the pod is deleted, the emptyDir volume is deleted and its data is lost. | Mounts a file or directory of the host where a pod runs to a specified mount point in the pod. |
Features | A low-latency, high-I/O, non-HA PV. Volumes of this type are non-shared and bound to nodes through labels, so each volume can be mounted to only a single pod. | A local ephemeral volume whose storage space comes from local LVs. | A local ephemeral volume whose storage space comes from the local kubelet root directory or memory. | Mounts files or directories of the host file system. Host directories can be created automatically. Pods are not bound to nodes and can be migrated. |
Storage volume mounting | Static storage volumes are not supported. Using a Local PV Through a Dynamic PV is supported. | For details, see Using a Local EV. | For details, see Using a Temporary Path. | For details, see hostPath. |
Application scenarios | High I/O requirements and applications with built-in HA solutions, for example, deploying MySQL in HA mode. | | | Requiring access to node files, for example, mounting the node's /var/lib/docker directory when Docker is used. NOTICE: Avoid hostPath volumes whenever possible because they pose security risks. If a hostPath volume must be used, mount only files or directories, and mount them in read-only mode. |
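The emptyDir and hostPath behaviors compared above can be sketched in a single pod spec. The image, paths, and size limit are illustrative; note that the hostPath volume is mounted read-only, per the notice above.

```yaml
# Illustrative pod combining a memory-backed emptyDir (tmpfs) with a
# read-only hostPath mount. Data in the emptyDir is lost when the pod
# is deleted; /var/lib/docker must exist on the node.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch
        - name: docker-dir
          mountPath: /var/lib/docker
          readOnly: true               # hostPath mounted read-only, as advised
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory                 # use RAM (tmpfs) instead of node disk
        sizeLimit: 256Mi               # illustrative cap on memory usage
    - name: docker-dir
      hostPath:
        path: /var/lib/docker
        type: Directory                # fail if the directory does not exist
```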