Spark Job Management¶
DLI is built on open-source Apache Spark. It optimizes performance and restructures services while remaining compatible with the Apache Spark ecosystem and interfaces, and it executes batch processing jobs.
DLI also allows you to use Spark jobs to access DLI metadata.
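For example, once a job has been created with DLI metadata access enabled, Spark SQL inside the job can read DLI databases and tables directly. The following is a minimal PySpark sketch; the database name `demo_db` and table name `demo_table` are placeholders, not names from this guide.

```python
# Minimal sketch: querying DLI metadata from inside a Spark job.
# Assumes the job was created with DLI metadata access enabled and that
# demo_db/demo_table already exist; both names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("dli_metadata_demo")
    .enableHiveSupport()
    .getOrCreate()
)

# List the tables registered in the DLI database, then query one of them.
spark.sql("SHOW TABLES IN demo_db").show()
spark.sql("SELECT * FROM demo_db.demo_table LIMIT 10").show()

spark.stop()
```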
Spark job management provides the following functions:

- Re-executing a job
- Searching for a job
- Terminating a job

In addition, you can click Quick Links to go to the related details in the User Guide.
Spark Jobs Page¶
On the Overview page, click Spark Jobs to go to the Spark job management page. Alternatively, you can choose Job Management > Spark Jobs. The page displays all Spark jobs. If there are a large number of jobs, they are displayed on multiple pages. DLI allows you to view jobs in all statuses.

The job list contains the following parameters:
| Parameter | Description |
|---|---|
| Job ID | ID of a submitted Spark job, which is generated by the system by default. |
| Name | Name of a submitted Spark job. |
| Queues | Queue where the submitted Spark job runs. |
| Username | Name of the user who executed the Spark job. |
| Status | Status of the job. |
| Created | Time when the job was created. Jobs can be sorted in ascending or descending order of creation time. |
| Last Modified | Time when the job was completed. |
| Operation | **Edit**: modify the job parameters and execute the job again. **More > Terminate Job**: stop a running job. |
Re-executing a Job¶
On the Spark Jobs page, click Edit in the Operation column of the job. On the Spark job creation page that is displayed, modify the parameters as required and execute the job again.
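A job can also be re-submitted programmatically through DLI's batch job REST API. The sketch below assumes a `POST /v2.0/{project_id}/batches` endpoint that accepts an IAM token and a Livy-style job definition; the endpoint host, request-body fields, and all values are placeholders to verify against the DLI API Reference.

```python
import requests

# All values below are placeholders; check the DLI API Reference for the
# exact endpoint and request-body fields supported in your region.
ENDPOINT = "https://dli.example-region.myhuaweicloud.com"
PROJECT_ID = "<project_id>"
TOKEN = "<iam_token>"  # obtained from IAM

def submit_spark_job(jar_path: str, main_class: str, queue: str) -> str:
    """Submit (or re-submit) a Spark batch job and return its job ID."""
    resp = requests.post(
        f"{ENDPOINT}/v2.0/{PROJECT_ID}/batches",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        json={
            "file": jar_path,         # program package, e.g. an OBS path
            "className": main_class,  # main class of the Spark application
            "queue": queue,           # queue the job runs on
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

job_id = submit_spark_job("obs://mybucket/spark/demo.jar",
                          "com.example.Demo", "demo_queue")
print("submitted:", job_id)
```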
Searching for a Job¶
On the Spark Jobs page, filter jobs by Status or Queues. The job list then displays only the jobs that match the filter criteria.
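The same kind of filtering can be done client-side over the batch job list API. This sketch assumes a `GET /v2.0/{project_id}/batches` endpoint returning a Livy-style `sessions` array; both the endpoint and the response layout are assumptions to confirm in the DLI API Reference.

```python
import requests

# Placeholder values; the list endpoint and response layout are assumptions.
ENDPOINT = "https://dli.example-region.myhuaweicloud.com"
PROJECT_ID = "<project_id>"
TOKEN = "<iam_token>"

def list_jobs_by_state(state: str):
    """Fetch all batch jobs and keep those whose state matches."""
    resp = requests.get(
        f"{ENDPOINT}/v2.0/{PROJECT_ID}/batches",
        headers={"X-Auth-Token": TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    sessions = resp.json().get("sessions", [])
    return [job for job in sessions if job.get("state") == state]

for job in list_jobs_by_state("running"):
    print(job["id"], job.get("name"))
```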
Terminating a Job¶
On the Spark Jobs page, choose More > Terminate Job in the Operation column of the job that you want to stop.
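Termination is also possible through the API. The sketch below assumes a `DELETE /v2.0/{project_id}/batches/{batch_id}` cancellation endpoint; confirm it, and all placeholder values, in the DLI API Reference.

```python
import requests

# Placeholder values; the cancellation endpoint is an assumption.
ENDPOINT = "https://dli.example-region.myhuaweicloud.com"
PROJECT_ID = "<project_id>"
TOKEN = "<iam_token>"

def terminate_job(batch_id: str) -> None:
    """Ask DLI to stop the given Spark batch job."""
    resp = requests.delete(
        f"{ENDPOINT}/v2.0/{PROJECT_ID}/batches/{batch_id}",
        headers={"X-Auth-Token": TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    print("terminate requested for", batch_id)

terminate_job("<job_id>")
```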