This node enables you to execute DLI SQL statements during periodic or real-time job scheduling. You can use parameter variables to perform incremental imports and to process partitions in your data warehouses.
If you select the SQL statement mode, the DataArts Factory module cannot parse the parameters contained in the SQL statement.
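As an illustration of parameter variables, the sketch below shows a hypothetical DLI SQL script that loads a single day's partition. The table names, column names, and the parameter name yesterday are assumptions for this example; the ${...} placeholder is resolved from the value configured on the job node.

```sql
-- Illustrative incremental-import script (table and parameter names are hypothetical).
-- The ${yesterday} placeholder is replaced with the value configured
-- for the script parameter when the job runs.
INSERT INTO dw_orders_daily
SELECT order_id, amount, dt
FROM ods_orders
WHERE dt = '${yesterday}';
```

With a parameter like this, each scheduled run imports only the partition for the configured date instead of rescanning the whole source table.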
Database Name
Yes
Database that is configured in the SQL script. The value can be changed.
DLI Environmental Variable
No
The name of the environment variable must start with dli.sql. or spark.sql.
If the key of the environment variable is dli.sql.shuffle.partitions or dli.sql.autoBroadcastJoinThreshold, the environment variable cannot contain the greater-than (>) or less-than (<) sign.
If a parameter with the same name is configured in both a job and a script, the parameter value configured in the job will overwrite that configured in the script.
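For example, environment variables with the required prefixes might look like the following; the values shown are illustrative, not recommendations:

```
dli.sql.shuffle.partitions = 200
spark.sql.autoBroadcastJoinThreshold = 10485760
```

If the job sets dli.sql.shuffle.partitions to 200 and the script sets it to 100, the job's value (200) takes effect.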
Queue Name
Yes
Name of the DLI queue configured in the SQL script. The value can be changed.
You can create a resource queue using either of the following methods:
Click the button next to the queue name. On the Queue Management page of DLI, create a resource queue.
Go to the DLI console to create a resource queue.
Script Parameter
No
If the associated SQL script uses a parameter, the parameter name is displayed. Set the parameter value in the text box next to the parameter name. The parameter value can be an EL expression.
If the parameters of the associated SQL script are changed, click the refresh button to refresh the parameters.
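As an example of a parameter value written as an EL expression, the fragment below sets a hypothetical script parameter named yesterday to the day before the job's planned execution time. DateUtil and Job.planTime are DataArts Factory EL objects; the parameter name itself is an assumption for this example:

```
yesterday = #{DateUtil.format(DateUtil.addDays(Job.planTime, -1), "yyyy-MM-dd")}
```

At run time, the expression is evaluated first, so a job planned for 2024-05-02 would pass 2024-05-01 to the script.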
Node Name
Yes
Name of the node. By default, it is the name of the SQL script, and the value can be changed. The name must contain 1 to 128 characters, including only letters, numbers, underscores (_), hyphens (-), slashes (/), less-than signs (<), and greater-than signs (>).
Record Dirty Data
Yes
Specify whether to record dirty data.
If you enable this option, dirty data will be recorded.
If you disable this option, dirty data will not be recorded.
Polling Cycle (s)
Yes
Specifies how often the system checks whether the node task is complete. The value ranges from 1 to 60 seconds.
Max. Node Execution Duration
Yes
Execution timeout interval for the node. If retry is configured and the execution is not complete within the timeout interval, the node will not be retried and will be set to the failed state.
Retry upon Failure
Yes
Indicates whether to re-execute a node task if its execution fails. Possible values:
Yes: The node task will be re-executed, and the following parameters must be configured:
Maximum Retries
Retry Interval (seconds)
No: The node task will not be re-executed. This is the default setting.
Note
If Timeout Interval is configured for the node, the node will not be executed again after the execution times out. Instead, the node is set to the failure state.
Failure Policy
Yes
Operation that will be performed if the node task fails to be executed. Possible values:
End the current job execution plan: stops running the current job. The job instance status is Failed.
Go to the next node: ignores the execution failure of the current node. The job instance status is Failure ignored.
Suspend current job execution plan: suspends running the current job. The job instance status is Waiting.
Suspend execution plans of the subsequent nodes: stops running subsequent nodes. The job instance status is Failed.