Data Lake Insight


Creating a Flink Jar job

Function

This API is used to create custom jobs. Such jobs currently support only the JAR format and run in dedicated queues.

URI

  • URI format

    POST /v1.0/{project_id}/streaming/flink-jobs

  • Parameter description

    Table 1 URI parameter

    • project_id — String, mandatory. Project ID, which is used for resource isolation. For details about how to obtain its value, see Obtaining a Project ID.
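For illustration, the URI above can be assembled in client code. A minimal Python sketch, assuming a placeholder endpoint (substitute the DLI endpoint for your region; see Endpoints):

```python
# Build the request URL for POST /v1.0/{project_id}/streaming/flink-jobs.
# NOTE: the endpoint and project ID below are placeholders, not real values.
def build_flink_jobs_uri(endpoint: str, project_id: str) -> str:
    """Return the full URL for the Flink Jar job creation API."""
    return f"{endpoint}/v1.0/{project_id}/streaming/flink-jobs"

url = build_flink_jobs_uri("https://dli.example.com", "48cc2c48765f481480c7db940d6409d1")
```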

Request Parameters

Table 2 Parameter description

  • name — String, mandatory. Name of the job. The value can contain 1 to 57 characters.

  • desc — String, optional. Job description. Length range: 0 to 512 characters.

  • queue_name — String, optional. Name of a queue. The value can contain 0 to 128 characters.

  • cu_number — Integer, optional. Number of CUs selected for a job.

  • manager_cu_number — Integer, optional. Number of CUs on the management node selected for a job, which corresponds to the number of Flink JobManagers. The default value is 1.

  • parallel_number — Integer, optional. Number of parallel operations selected for a job.

  • log_enabled — Boolean, optional. Whether to enable the job log function. true enables it; false disables it. Default value: false.

  • obs_bucket — String, optional. OBS bucket where users are authorized to save logs when log_enabled is set to true.

  • smn_topic — String, optional. SMN topic. If a job fails, the system sends a message to users subscribed to this topic.

  • main_class — String, optional. Job entry class.

  • entrypoint_args — String, optional. Job entry parameters. Multiple parameters are separated by spaces.

  • restart_when_exception — Boolean, optional. Whether to restart the job upon exceptions. The default value is false.

  • entrypoint — String, optional. Name of the package uploaded to OBS: the custom JAR file that contains the main job class. For Flink 1.15 or later, you can only select packages from OBS, not from DLI. Example: obs://bucket_name/test.jar

  • dependency_jars — Array of strings, optional. Names of packages uploaded to OBS: other dependency JAR packages of the job. For Flink 1.15 or later, you can only select packages from OBS, not from DLI. Example: obs://bucket_name/test1.jar, obs://bucket_name/test2.jar

  • dependency_files — Array of strings, optional. Names of resource packages uploaded to OBS: dependency files of the job. For Flink 1.15 or later, you can only select packages from OBS, not from DLI. Example: [obs://bucket_name/file1, obs://bucket_name/file2]. To access a dependency file from the application, use the following call, where fileName is the name of the file to be accessed and ClassName is the name of the class that accesses it:

    ClassName.class.getClassLoader().getResource("userData/fileName")

  • tm_cus — Integer, optional. Number of CUs for each TaskManager. The default value is 1.

  • tm_slot_num — Integer, optional. Number of slots in each TaskManager. The default value is (parallel_number*tm_cus)/(cu_number-manager_cu_number).

  • resume_checkpoint — Boolean, optional. Whether an abnormal restart is recovered from the checkpoint.

  • resume_max_num — Integer, optional. Maximum number of retries upon exceptions, in times/hour. Value range: -1 or greater than 0. The default value is -1, indicating that the number of retries is unlimited.

  • checkpoint_path — String, optional. Storage address of the job's checkpoints. The path must be unique.

  • tags — Array of objects, optional. Tags of a Flink JAR job. For details, see Table 3.

  • runtime_config — String, optional. Custom optimization parameters applied while the Flink job is running.
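The default tm_slot_num formula can be sketched as a quick calculation. The helper below is hypothetical (not part of any SDK) and simply applies the documented expression:

```python
# Sketch of the documented default for tm_slot_num:
# (parallel_number * tm_cus) / (cu_number - manager_cu_number)
def default_tm_slot_num(parallel_number: int, tm_cus: int,
                        cu_number: int, manager_cu_number: int = 1) -> int:
    """Compute the default number of slots per TaskManager."""
    worker_cus = cu_number - manager_cu_number  # CUs left for TaskManagers
    if worker_cus <= 0:
        raise ValueError("cu_number must exceed manager_cu_number")
    return (parallel_number * tm_cus) // worker_cus  # integer slot count

# With cu_number=2, manager_cu_number=1, parallel_number=1, tm_cus=1
# (the values used in the example request), the default is 1.
print(default_tm_slot_num(1, 1, 2, 1))  # 1
```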

Table 3 tags parameter

  • key — String, mandatory. Tag key.

    Note: A tag key can contain a maximum of 36 characters. Special characters (=*<>\|) are not allowed, and the key cannot start with a space.

    Note: A tag key can contain a maximum of 128 characters. Only letters, digits, spaces, and special characters (_.:=+-@) are allowed, but the key cannot start or end with a space or start with _sys_.

  • value — String, mandatory. Tag value.

    Note: A tag value can contain a maximum of 43 characters. Special characters (=*<>\|) are not allowed, and the value cannot start with a space.
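As an illustration of the constraints in Table 3, a client-side pre-check might look like the sketch below. It assumes the 36/43-character limits from the first notes apply; the service performs the authoritative validation.

```python
import re

# Hypothetical pre-check of the tag constraints listed in Table 3.
# Assumes: key 1-36 chars, value up to 43 chars, no =*<>\| characters,
# and neither key nor value may start with a space.
_FORBIDDEN = re.compile(r"[=*<>\\|]")

def validate_tag(key: str, value: str) -> bool:
    """Return True if a tag appears to satisfy the documented constraints."""
    if not (1 <= len(key) <= 36) or len(value) > 43:
        return False
    if key.startswith(" ") or value.startswith(" "):
        return False
    return not (_FORBIDDEN.search(key) or _FORBIDDEN.search(value))

print(validate_tag("team", "data-platform"))  # True
print(validate_tag("bad>key", "x"))           # False
```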

Response Parameters

Table 4 Response parameters

  • is_success — String, optional. Whether the request is successfully executed. The value true indicates success.

  • message — String, optional. Message content.

  • job — Object, optional. Information about the job status. For details, see Table 5.

Table 5 job parameters

  • job_id — Long, mandatory. Job ID.

  • status_name — String, optional. Name of the job status.

  • status_desc — String, optional. Status description: causes of and suggestions for an abnormal status.

Example Request

Create a Flink Jar job named test, run it on the queue testQueue, set the number of CUs used by the job, and enable the job log function.

{
    "name": "test",
    "desc": "job for test",
    "queue_name": "testQueue",
    "manager_cu_number": 1,
    "cu_number": 2,
    "parallel_number": 1,
    "tm_cus": 1,
    "tm_slot_num": 1,
    "log_enabled": true,
    "obs_bucket": "bucketName",
    "smn_topic": "topic",
    "main_class": "org.apache.flink.examples.streaming.JavaQueueStream",
    "restart_when_exception": false,
    "entrypoint": "javaQueueStream.jar",
    "entrypoint_args":"-windowSize 2000 -rate 3",
    "dependency_jars": [
        "myGroup/test.jar",
        "myGroup/test1.jar"
    ],
    "dependency_files": [
        "myGroup/test.csv",
        "myGroup/test1.csv"
    ]
}
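The request body above could be submitted, for example, with Python's standard library. The endpoint and token below are placeholders, and the body is abbreviated; Open Telekom Cloud APIs authenticate with an X-Auth-Token header.

```python
import json
import urllib.request

# Abbreviated version of the example request body above.
body = {
    "name": "test",
    "queue_name": "testQueue",
    "cu_number": 2,
    "manager_cu_number": 1,
    "parallel_number": 1,
    "log_enabled": True,
    "main_class": "org.apache.flink.examples.streaming.JavaQueueStream",
    "entrypoint": "javaQueueStream.jar",
}

# Placeholder endpoint, project ID, and token: substitute real values.
req = urllib.request.Request(
    url="https://dli.example.com/v1.0/{project_id}/streaming/flink-jobs",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json", "X-Auth-Token": "<token>"},
    method="POST",
)
# resp = urllib.request.urlopen(req)  # commented out: requires real credentials
```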

Example Response

{
  "is_success": true,
  "message": "A Flink job is created successfully.",
  "job": {
    "job_id": 138,
    "status_name": "job_init",
    "status_desc": ""
  }
}
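A minimal sketch of consuming the response fields described above (a hypothetical helper, not part of any SDK):

```python
# Check is_success, then pull out the job ID and initial status
# from a response shaped like the example above.
def parse_create_response(resp: dict) -> tuple:
    """Return (job_id, status_name), raising on a failed request."""
    if not resp.get("is_success"):
        raise RuntimeError(resp.get("message", "job creation failed"))
    job = resp["job"]
    return job["job_id"], job["status_name"]

example = {
    "is_success": True,
    "message": "A Flink job is created successfully.",
    "job": {"job_id": 138, "status_name": "job_init", "status_desc": ""},
}
print(parse_create_response(example))  # (138, 'job_init')
```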

Status Codes

Table 6 describes the status codes.

Table 6 Status codes

  • 200 — The custom Flink job is created successfully.

  • 400 — The input parameter is invalid.

Error Codes

If an error occurs when this API is invoked, the system does not return a result like the preceding example; instead, it returns an error code and an error message. For details, see Error Codes.

last updated: 2025-06-16 14:07 UTC - commit: 2d6c283406071bb470705521bc41e86fa3400203
© T-Systems International GmbH