This section describes the functions of the MRS web interface, which are as follows:
To expand a cluster and handle peak service loads, add Core or Task nodes.
To scale in a cluster, reduce the number of Core or Task nodes based on service requirements, so that MRS provides the required storage and computing capabilities at lower O&M costs.
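As an illustration of both the scale-out and scale-in operations, such a resize could also be triggered programmatically. The endpoint path, payload fields, and identifiers in the following sketch are hypothetical assumptions for illustration, not the documented MRS API.

    import requests

    # Illustrative sketch only: the endpoint path, payload fields, and token
    # handling are assumptions, not the documented MRS REST API.
    MRS_ENDPOINT = "https://mrs.example.com/v1/clusters"  # hypothetical base URL
    CLUSTER_ID = "example-cluster-id"                     # hypothetical cluster ID

    def resize_cluster(node_group: str, delta: int, token: str) -> None:
        """Add (delta > 0) or remove (delta < 0) nodes in a node group."""
        payload = {
            "node_group": node_group,  # e.g. "core_node" or "task_node"
            "count_delta": delta,      # hypothetical field name
        }
        resp = requests.post(
            f"{MRS_ENDPOINT}/{CLUSTER_ID}/resize",
            json=payload,
            headers={"X-Auth-Token": token},
            timeout=30,
        )
        resp.raise_for_status()

    # Scale out by two Task nodes for a load peak, then scale back in afterwards:
    # resize_cluster("task_node", +2, token="...")
    # resize_cluster("task_node", -2, token="...")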
Computing resources are adjusted automatically based on service requirements and preset policies: the number of Task nodes is scaled out or in as the service load changes, ensuring stable service running.
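A minimal sketch of what such a policy might look like, expressed as a plain Python dictionary. Every field name, metric, and threshold here is an assumption for illustration; real policies are configured on the MRS web interface.

    # Illustrative auto scaling policy; field names and values are assumptions.
    auto_scaling_policy = {
        "node_group": "task_node",
        "min_capacity": 1,          # never scale in below this many Task nodes
        "max_capacity": 10,         # never scale out beyond this many Task nodes
        "rules": [
            {   # scale out when available YARN memory runs low
                "metric": "yarn_memory_available_percentage",
                "comparison": "<",
                "threshold": 25,
                "duration_minutes": 5,
                "action": {"type": "scale_out", "node_count": 2},
            },
            {   # scale in when the cluster is mostly idle
                "metric": "yarn_memory_available_percentage",
                "comparison": ">",
                "threshold": 70,
                "duration_minutes": 10,
                "action": {"type": "scale_in", "node_count": 1},
            },
        ],
        "cooldown_minutes": 10,     # wait between consecutive scaling actions
    }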
If the system or a cluster becomes faulty, Elastic BigData collects the fault information and reports it to the network management system so that maintenance personnel can locate the fault.
Operation information is recorded to help locate faults when a cluster becomes abnormal.
MRS can import data from OBS to HDFS for processing and export the processed and analyzed result data back to OBS. Data can also be stored directly in HDFS.
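As a sketch of moving data between OBS and HDFS, Hadoop's DistCp tool can copy data between filesystems, assuming the cluster's Hadoop client is configured with an OBS connector. The bucket name and paths below are placeholders.

    import subprocess

    def distcp(src: str, dst: str) -> None:
        """Copy data between filesystems with Hadoop's DistCp tool."""
        subprocess.run(["hadoop", "distcp", src, dst], check=True)

    # Import raw data from OBS into HDFS for processing:
    # distcp("obs://my-bucket/input/", "hdfs:///user/example/input/")

    # Export processed results from HDFS back to OBS:
    # distcp("hdfs:///user/example/output/", "obs://my-bucket/output/")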
A job is an executable program provided by MRS to process and analyze user data. Currently, MRS supports MapReduce jobs, Spark jobs, and Hive jobs, and allows users to submit Spark SQL statements online to query and analyze data.
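For illustration, a Spark job of the kind MRS runs could also be submitted from a cluster client node with spark-submit. The main class, JAR path, and arguments below are placeholders.

    import subprocess

    # Placeholders: replace the class, JAR, and paths with real values.
    subprocess.run(
        [
            "spark-submit",
            "--master", "yarn",
            "--class", "com.example.WordCount",  # hypothetical main class
            "/opt/jobs/wordcount.jar",           # hypothetical JAR path
            "hdfs:///user/example/input/",       # job input
            "hdfs:///user/example/output/",      # job output
        ],
        check=True,
    )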
Tags are cluster identifiers. Adding tags to clusters can help you identify and manage your cluster resources.
You can add a maximum of 10 tags to a cluster when creating it, or add them later on the cluster details page.
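The tag model is simple key-value pairs with a per-cluster limit of 10. The helper below only illustrates that limit; it is not part of any MRS SDK.

    MAX_TAGS_PER_CLUSTER = 10

    def add_tag(tags: dict, key: str, value: str) -> None:
        """Attach a key-value tag, enforcing the 10-tag-per-cluster limit."""
        if key not in tags and len(tags) >= MAX_TAGS_PER_CLUSTER:
            raise ValueError("a cluster supports at most 10 tags")
        tags[key] = value

    cluster_tags = {}
    add_tag(cluster_tags, "env", "production")
    add_tag(cluster_tags, "owner", "data-platform")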
Bootstrap actions let you run your own scripts on specified cluster nodes before or after the big data components start. You can use them to install third-party software, modify the cluster running environment, and perform other customizations. If you run bootstrap actions when expanding a cluster, they are executed on the newly added nodes in the same way.
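A bootstrap action script might look like the following sketch. Such scripts are commonly shell scripts; this Python version assumes python3, pip access, and root privileges on the node, and the package name and file path are illustrative.

    #!/usr/bin/env python3
    """Illustrative bootstrap-action script (not from the MRS documentation)."""
    import subprocess
    import sys

    def main() -> int:
        # Install an example third-party Python library before components start.
        subprocess.run([sys.executable, "-m", "pip", "install", "requests"],
                       check=True)
        # Example environment customization: raise the open-file limit.
        with open("/etc/security/limits.d/99-custom.conf", "w") as f:
            f.write("* soft nofile 65536\n* hard nofile 65536\n")
        return 0

    if __name__ == "__main__":
        sys.exit(main())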
Jobs can be managed, stopped, and deleted, and you can view the details and configurations of completed jobs. Spark SQL jobs, however, cannot be stopped.