Differences Between Dedicated and Shared Load Balancers
Each type of load balancer has its own advantages.
Note
In the eu-de region, you can create both dedicated and shared load balancers, and you can create either type of load balancer on the management console or by calling APIs.
In the eu-nl region, you can only create dedicated load balancers, either on the console or by calling APIs.
Feature Comparison
Dedicated load balancers provide more powerful forwarding performance, while shared load balancers are less expensive. You can select the appropriate load balancer based on your application needs. The following tables compare the features supported by the two types of load balancers. (Y indicates that an item is supported, and x indicates that an item is not supported.)
Item | Dedicated Load Balancers | Shared Load Balancers |
---|---|---|
Deployment mode | Each dedicated load balancer has exclusive use of its underlying resources, so its performance is not affected by other load balancers. You can select different specifications based on your requirements. | Shared load balancers are deployed in clusters, and all the load balancers share underlying resources, so the performance of one load balancer can be affected by other load balancers. |
Concurrent connections | A dedicated load balancer in a single AZ can establish up to 20 million concurrent connections. If you deploy a dedicated load balancer in multiple AZs, the number of concurrent connections multiplies accordingly. For example, a dedicated load balancer deployed in two AZs can handle up to 40 million concurrent connections. | |
Protocol | Description | Dedicated Load Balancers | Shared Load Balancers |
---|---|---|---|
QUIC | If you use UDP as the frontend protocol, you can select QUIC as the backend protocol, and select the connection ID algorithm to route requests with the same connection ID to the same backend server. QUIC has the advantages of low latency, high reliability, and no head-of-line blocking (HOL blocking), and is very suitable for the mobile Internet. No new connections need to be established when you switch between a Wi-Fi network and a mobile network. | Y | x |
HTTP/2 | Hypertext Transfer Protocol 2.0 (HTTP/2) is a new version of the HTTP protocol. HTTP/2 is compatible with HTTP/1.x and provides improved performance and security. Currently, only HTTPS listeners support this feature. (A client-side check is sketched after this table.) | Y | Y |
TCP/UDP (Layer 4) | After receiving TCP or UDP requests from the clients, the load balancer directly routes the requests to backend servers. Load balancing at Layer 4 features high routing efficiency. | Y | Y |
HTTP/HTTPS (Layer 7) | After receiving a request, the listener needs to identify the request and forward data based on the fields in the HTTP/HTTPS packet header. Though the routing efficiency is lower than that at Layer 4, load balancing at Layer 7 provides some advanced features such as encrypted transmission and cookie-based sticky sessions. | Y | Y |
WebSocket | WebSocket is a new HTML5 protocol that provides full-duplex communication between the browser and the server. WebSocket saves server resources and bandwidth, and enables real-time communication. | Y | Y |
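The protocol rows above note that HTTP/2 is available only on HTTPS listeners. If you want to check what a listener actually negotiates, one rough client-side approach is to offer both h2 and http/1.1 through TLS ALPN and inspect the selected protocol. The sketch below uses only Python's standard library; the host name elb.example.com and the port are placeholders for your own listener address, not values from this guide.

```python
import socket
import ssl

HOST = "elb.example.com"  # placeholder: the domain name or EIP of your HTTPS listener
PORT = 443                # placeholder: your listener port

# Offer both HTTP/2 ("h2") and HTTP/1.1 via ALPN and see which one is negotiated.
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # "h2" means the listener accepted HTTP/2; "http/1.1" (or None) means
        # the connection fell back to HTTP/1.x.
        print("Negotiated protocol:", tls.selected_alpn_protocol())
```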
Backend Type | Description | Dedicated Load Balancers | Shared Load Balancers |
---|---|---|---|
IP as backend servers | You can add servers in a VPC connected using a VPC peering connection, in a VPC connected through a cloud connection, or in an on-premises data center at the other end of a Direct Connect or VPN connection, by using the server IP addresses. In this way, incoming traffic can be flexibly distributed to cloud servers and on-premises servers for hybrid load balancing. | Y | x |
ECS | You can use load balancers to distribute incoming traffic across ECSs. | Y | Y |
Component | Condition | Description | Dedicated Load Balancers | Shared Load Balancers |
---|---|---|---|---|
Forwarding rule | Domain name | Load balancers can route requests based on domain names. The domain name in the request must exactly match that in the forwarding policy. (A simplified matching sketch follows this table.) | Y | Y |
Forwarding rule | URL | Load balancers can route requests based on URLs. There are three URL matching rules: exact match, prefix match, and regular expression match. | Y | Y |
Forwarding rule | HTTP request method | You can route requests based on any HTTP method. The options include GET, POST, PUT, DELETE, PATCH, HEAD, and OPTIONS. | Y | x |
Forwarding rule | HTTP header | You can route requests based on the value of any HTTP header. An HTTP header consists of a key and one or more values. You need to configure the key and values separately. | Y | x |
Forwarding rule | Query string | You can route requests based on the query string. | Y | x |
Forwarding rule | CIDR block (source IP addresses) | You can route requests based on the source IP addresses from which the requests originate. | Y | x |
Action | Forward to a backend server group | Requests are forwarded to the specified backend server group for processing. | Y | Y |
Action | Redirect to another listener | Requests are redirected to another listener, which then routes the requests to its associated backend server group. | Y | x |
Action | Redirect to another URL | Requests are redirected to the configured URL. When clients access website A, the load balancer returns 302 or any other 3xx status code and automatically redirects the clients to website B. You can customize the redirection URL that will be returned to the clients. | Y | x |
Action | Return a specific response body | Load balancers return a fixed response to the clients. You can customize the status code and response body that the load balancer returns directly to the clients, without routing the requests to backend servers. | Y | x |
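The condition and action rows above combine into forwarding policies. The sketch below is not the service's implementation; it only illustrates, under simplified assumptions, how the listed condition types (domain name, URL with exact/prefix/regular-expression matching, HTTP request method, HTTP header, query string, and source CIDR block) could be evaluated against a request before the chosen action is applied. The policy and request structures are hypothetical.

```python
import ipaddress
import re

def url_matches(rule_type: str, pattern: str, path: str) -> bool:
    """URL matching as described above: exact, prefix, or regular expression."""
    if rule_type == "exact":
        return path == pattern
    if rule_type == "prefix":
        return path.startswith(pattern)
    if rule_type == "regex":
        return re.fullmatch(pattern, path) is not None
    return False

def request_matches(policy: dict, request: dict) -> bool:
    """Return True if every condition in the (hypothetical) policy matches the request."""
    if "domain" in policy and request["host"] != policy["domain"]:
        return False  # domain names must match exactly
    if "url" in policy and not url_matches(policy["url"]["type"],
                                           policy["url"]["value"],
                                           request["path"]):
        return False
    if "methods" in policy and request["method"] not in policy["methods"]:
        return False
    if "headers" in policy:
        for key, values in policy["headers"].items():
            if request["headers"].get(key) not in values:
                return False
    if "query" in policy:
        for key, values in policy["query"].items():
            if request["query"].get(key) not in values:
                return False
    if "source_cidr" in policy and not any(
            ipaddress.ip_address(request["client_ip"]) in ipaddress.ip_network(cidr)
            for cidr in policy["source_cidr"]):
        return False
    return True

# Hypothetical policy: route API traffic from a trusted CIDR block to a chosen backend server group.
policy = {
    "domain": "www.example.com",
    "url": {"type": "prefix", "value": "/api/"},
    "methods": ["GET", "POST"],
    "source_cidr": ["192.168.0.0/24"],
}
request = {
    "host": "www.example.com", "path": "/api/orders", "method": "GET",
    "headers": {}, "query": {}, "client_ip": "192.168.0.15",
}
print(request_matches(policy, request))  # True -> apply the policy's action
```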
Feature | Description | Dedicated Load Balancers | Shared Load Balancers |
---|---|---|---|
Multiple specifications | Load balancers allow you to select appropriate specifications based on your requirements. For details, see Specifications of Dedicated Load Balancers. | Y | x |
HTTPS support | Load balancers can receive HTTPS requests from clients and route them to an HTTPS backend server group. | Y | x |
Slow start | You can enable slow start for HTTP or HTTPS listeners. After you enable it, the load balancer linearly increases the proportion of requests to send to a backend server that is in slow start mode. Slow start gives applications time to warm up and respond to requests with optimal performance. (A ramp-up sketch follows this table.) | Y | x
Mutual authentication | To enable mutual authentication, you need to deploy both a server certificate and a client certificate. Mutual authentication is supported only by HTTPS listeners. | Y | Y
SNI | Server Name Indication (SNI) is an extension to TLS and is used when a server uses multiple domain names and certificates. After SNI is enabled, certificates corresponding to the domain names are required. | Y | Y |
Passing the EIP of each load balancer to backend servers | When you add an HTTPS or HTTP listener, you can store the EIP bound to the load balancer in the HTTP header and pass it to backend servers. | Y | Y |
Custom timeout durations | You can configure and modify timeout durations (idle timeout, request timeout, and response timeout) for your listeners to meet varied demands. For example, if the size of a request from an HTTP or HTTPS client is large, you can increase the request timeout duration to ensure that the request can be successfully routed. | Y | Y |
Security policies | When you add HTTPS listeners, you can select appropriate security policies to improve service security. A security policy is a combination of TLS protocols and cipher suites. | Y | Y |
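The slow start row above says the load balancer linearly increases the share of requests sent to a backend server that is still warming up. A minimal sketch of that ramp-up idea follows; the 30-second duration and the weight values are hypothetical and chosen for illustration only, not the service's actual settings.

```python
def slow_start_weight(full_weight: int, elapsed_s: float, duration_s: float = 30.0) -> float:
    """Linearly scale a backend server's effective weight during its warm-up window.

    full_weight: the weight configured for the server once warm-up ends.
    elapsed_s:   seconds since the server was added to the backend server group.
    duration_s:  hypothetical slow-start duration used for this illustration.
    """
    if elapsed_s >= duration_s:
        return float(full_weight)                   # warm-up finished: full share of requests
    return full_weight * (elapsed_s / duration_s)   # proportion grows linearly from zero

# A server with weight 10 added 6 seconds ago receives about 20% of its eventual share.
print(slow_start_weight(10, elapsed_s=6.0))   # 2.0
print(slow_start_weight(10, elapsed_s=45.0))  # 10.0
```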
Feature | Description | Dedicated Load Balancers | Shared Load Balancers |
---|---|---|---|
Customized cross-AZ deployment | You can create a load balancer in multiple AZs. Each AZ selects an optimal path to process requests. In addition, the AZs back up each other, improving service processing efficiency and reliability. If you deploy a load balancer in multiple AZs, its performance, such as the number of new connections and the number of concurrent connections, will multiply. For example, a dedicated load balancer deployed in two AZs can handle up to 40 million concurrent connections. | Y | x |
Load balancing algorithms | Load balancers support weighted round robin, weighted least connections, and source IP hash. (A simplified sketch follows this table.) | Y | Y |
Load balancing over public and private networks | You can use load balancers to distribute traffic over both public networks (the Internet) and private networks (within a VPC). | Y | Y |
Modifying the bandwidth | You can modify the bandwidth used by the EIP bound to the load balancer as required. | Y | Y |
Binding/Unbinding an IP address | You can bind an IP address to a load balancer or unbind the IP address from a load balancer based on service requirements. | Y | Y |
Sticky session | If you enable sticky sessions, requests from the same client will be routed to the same backend server during the session. | Y | Y |
Access control | You can add IP addresses to a whitelist or blacklist to control access to a listener. | Y | Y |
Health check | Load balancers periodically send requests to backend servers to check whether they can process requests. | Y | Y |
Certificate management | You can create two types of certificates: server certificates and CA certificates. If you need an HTTPS listener, you need to bind a server certificate to it. To enable mutual authentication, you also need to bind a CA certificate to the listener. You can also replace a certificate that is already used by a load balancer. | Y | Y
Tagging | If you have a large number of cloud resources, you can assign different tags to the resources to quickly identify them and use these tags to easily manage your resources. | Y | Y |
Monitoring | You can use Cloud Eye to monitor load balancers and associated resources and view metrics on the management console. | Y | Y |
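The load balancing algorithms row above lists weighted round robin, weighted least connections, and source IP hash. The sketch below shows, under simplified assumptions, how weighted round robin and source IP hash might select a backend server; it is an illustration only, and the server addresses and weights are hypothetical.

```python
import hashlib
import itertools
from typing import Dict

# Hypothetical backend server group: server address -> configured weight.
servers: Dict[str, int] = {"192.168.0.10": 3, "192.168.0.11": 1}

def weighted_round_robin(servers: Dict[str, int]):
    """Yield servers in proportion to their weights (simplified, non-smooth variant)."""
    expanded = [addr for addr, weight in servers.items() for _ in range(weight)]
    return itertools.cycle(expanded)

def source_ip_hash(servers: Dict[str, int], client_ip: str) -> str:
    """Map the same client IP to the same backend server, as source IP hash does."""
    ordered = sorted(servers)
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return ordered[int(digest, 16) % len(ordered)]

rr = weighted_round_robin(servers)
print([next(rr) for _ in range(4)])         # 192.168.0.10 is picked three times as often as .11
print(source_ip_hash(servers, "10.0.0.7"))  # the same client IP always maps to the same server
```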