MapReduce Service
Introduction to Jobs

A job is an executable program provided by MRS to process and analyze user data. All added jobs are displayed in Job Management, where you can add, query, and manage jobs.

Job Types

An MRS cluster allows you to create and manage the following jobs:

  • MapReduce: provides the capability to process massive amounts of data quickly and in parallel. It is a distributed data processing model and execution environment. MRS supports the submission of MapReduce JAR programs.
  • Spark: a distributed in-memory computing framework. MRS supports the submission of Spark, Spark Script, and Spark SQL jobs.
    • Spark: submits a Spark program, executes the Spark application, and computes and processes user data.
    • Spark Script: submits a Spark Script and executes Spark SQL statements in batches.
    • Spark SQL: uses Spark SQL statements (similar to standard SQL) to query and analyze user data in real time.
  • Hive: an open-source data warehouse built on Hadoop. MRS supports the submission of Hive Script jobs, which execute HiveQL statements in batches.
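The MapReduce model behind the first job type can be illustrated with a minimal, single-process Python sketch. This is not MRS- or Hadoop-specific code, and the function names (`map_phase`, `shuffle`, `reduce_phase`) are illustrative only; it simply shows the map, shuffle, and reduce stages that a submitted MapReduce JAR performs at cluster scale:

```python
from collections import defaultdict

def map_phase(records):
    """Map stage: emit (key, value) pairs from each input record."""
    for line in records:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle stage: group all values by key (done by the framework in a real job)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce stage: aggregate the grouped values for each key."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["to be or not to be"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["to"])  # 2
```

In an actual MRS job the map and reduce functions run in parallel across the cluster's nodes, and the shuffle is handled by the framework rather than by user code.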

If you fail to create a job in a Running cluster, check the component health status on the cluster management page. For details, see Viewing the System Overview.

Job List

By default, jobs in the job list are sorted in reverse chronological order, with the most recent jobs at the top. Table 1 describes the parameters of the job list.

Table 1 Parameters of the job list

Job name
This parameter is set when a job is added.

Unique identifier of a job
This parameter is automatically assigned when a job is added.

Job type
Possible types include:
  • Distcp (data import and export)
  • MapReduce
  • Spark
  • Spark Script
  • Spark SQL
  • Hive Script
After you import or export data on the File Management page, you can view the Distcp job on the Job Management page.

Job status
Possible states include:
  • Running
  • Completed
  • Terminated
  • Abnormal
By default, each cluster supports a maximum of 10 running jobs.

Execution result of a job
  • Successful
  • Failed
You cannot execute a successful or failed job again, but you can add or copy the job. After setting the job parameters, you can submit the job again.

Time when a job starts

Duration (min)
Duration of executing a job, from the time when the job starts to the time when it is completed or stopped.
Unit: minute

Operations that can be performed on a job:
  • View Log: click View Log to view the log information of a job. For details, see Viewing Job Configurations and Logs.
  • View: click View to view job details. For details, see Viewing Job Configurations and Logs.
  • More
    • Stop: click Stop to stop a running job. For details, see Stopping Jobs.
    • Copy: click Copy to copy and add a job. For details, see Replicating Jobs.
    • Delete: click Delete to delete a job. For details, see Deleting Jobs.

Note:
  • Spark SQL jobs cannot be stopped.
  • Deleted jobs cannot be recovered. Exercise caution when deleting a job.
  • If you configure the system to save job logs to an HDFS or OBS path, the system compresses the logs and saves them to the specified path after job execution is complete. In this case, the job remains in the Running state after execution finishes and changes to the Completed state only after the logs are successfully saved. The time required depends on the log size and generally takes a few minutes.
Table 2 Button description

In the drop-down list, select a job state to filter jobs:

  • All (Num): displays all jobs.
  • Completed (Num): displays jobs in the Completed state.
  • Running (Num): displays jobs in the Running state.
  • Terminated (Num): displays jobs in the Terminated state.
  • Abnormal (Num): displays jobs in the Abnormal state.

Enter a job name in the search bar and click the search icon to search for a job.

Click the refresh icon to manually refresh the job list.