Data Lake Insight

Importing a Flink Job¶

Function¶

This API is used to import Flink job data.

URI¶

  • URI format

    POST /v1.0/{project_id}/streaming/jobs/import

  • Parameter description

    Table 1 URI parameter¶

    Parameter  | Mandatory | Type   | Description
    -----------|-----------|--------|------------
    project_id | Yes       | String | Project ID, which is used for resource isolation. For details about how to obtain its value, see Obtaining a Project ID.

Request¶

Table 2 Request parameters¶

Parameter | Mandatory | Type    | Description
----------|-----------|---------|------------
zip_file  | Yes       | String  | Path of the job ZIP file imported from OBS. You can enter a folder path to import all ZIP files in the folder. Note: the folder can contain only .zip files.
is_cover  | No        | Boolean | Whether to overwrite an existing job when the imported job has the same name as a job that already exists in the service.

Response¶

Table 3 Response parameters¶

Parameter   | Mandatory | Type             | Description
------------|-----------|------------------|------------
is_success  | No        | String           | Whether the request is successfully executed. Value true indicates success.
message     | No        | String           | System prompt. If execution succeeds, this parameter may be left blank.
job_mapping | No        | Array of objects | Information about the imported jobs. For details, see Table 4.

Table 4 job_mapping parameter description¶

Parameter  | Mandatory | Type   | Description
-----------|-----------|--------|------------
old_job_id | No        | Long   | ID of the job before the import.
new_job_id | No        | Long   | ID of the job after the import. If is_cover is set to false and a job with the same name already exists in the service, the value of this parameter is -1.
remark     | No        | String | Import result of the job.

Example Request¶

In this example, Flink job data is imported from OBS, overwriting the existing job if the imported job has the same name as a job that already exists in the service.

{
    "zip_file": "test/ggregate_1582677879475.zip",
    "is_cover": true
}
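The request above can be sent with a short Python sketch. The endpoint host shown is a hypothetical placeholder, and the X-Auth-Token header is an assumption based on typical IAM-token authentication; substitute the DLI endpoint for your region and a valid token.

```python
# Minimal sketch of issuing POST /v1.0/{project_id}/streaming/jobs/import.
# DLI_ENDPOINT is a hypothetical placeholder, not a real service address.
import json

DLI_ENDPOINT = "https://dli.example.com"  # assumption: replace with your region's endpoint

def build_import_request(project_id, zip_file, is_cover=True):
    """Return the URL and JSON body for the Flink job import API."""
    url = f"{DLI_ENDPOINT}/v1.0/{project_id}/streaming/jobs/import"
    body = {"zip_file": zip_file, "is_cover": is_cover}
    return url, json.dumps(body)

# Sending it (requires the third-party requests package and a valid IAM token):
# import requests
# url, body = build_import_request("my-project-id", "test/ggregate_1582677879475.zip")
# resp = requests.post(url, data=body,
#                      headers={"X-Auth-Token": token,
#                               "Content-Type": "application/json"})
```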

Example Response¶

{
    "is_success": true,
    "message": "The job is imported successfully.",
    "job_mapping": [
        {
            "old_job_id": 100,
            "new_job_id": 200,
            "remark": "Job successfully created"
        }
    ]
}
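A response like the one above can be unpacked with a minimal sketch, assuming only the fields documented in Table 3 and Table 4; the helper name is hypothetical. A new_job_id of -1 marks a job that was skipped because is_cover was false and a job with the same name already existed.

```python
# Hypothetical helper that splits job_mapping into imported and skipped jobs.
# new_job_id == -1 is the documented sentinel for a name conflict with
# is_cover set to false.
def summarize_import(response):
    imported, skipped = [], []
    for m in response.get("job_mapping", []):
        if int(m["new_job_id"]) == -1:
            skipped.append(m["old_job_id"])
        else:
            imported.append((m["old_job_id"], m["new_job_id"]))
    return imported, skipped

example = {
    "is_success": True,
    "message": "The job is imported successfully.",
    "job_mapping": [
        {"old_job_id": 100, "new_job_id": 200, "remark": "Job successfully created"}
    ],
}
imported, skipped = summarize_import(example)
print(imported, skipped)  # → [(100, 200)] []
```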

Status Codes¶

Table 5 describes status codes.

Table 5 Status codes¶

Status Code | Description
------------|------------
200         | The job is imported successfully.
400         | The input parameter is invalid.

Error Codes¶

If an error occurs when this API is invoked, the system does not return a result similar to the preceding example; instead, it returns an error code and an error message. For details, see Error Codes.
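Because error responses carry an error code and message instead of the fields in Table 3, a caller can gate on both the HTTP status and the is_success flag. A hedged sketch (the helper name is hypothetical; is_success is accepted as a Boolean or the string "true" since Table 3 declares it a String):

```python
# Sketch of result checking: HTTP 200 plus is_success means the import
# went through; anything else is surfaced with the raw body so the error
# code and message can be inspected.
def check_import_result(status_code, body):
    if status_code == 200 and body.get("is_success") in (True, "true"):
        return body
    raise RuntimeError(f"DLI job import failed (HTTP {status_code}): {body}")
```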

last updated: 2025-06-16 14:07 UTC - commit: 2d6c283406071bb470705521bc41e86fa3400203
© T-Systems International GmbH