Deploying Services

Function

This API is used to deploy a model as a service.

Debugging

You can debug this API through automatic authentication in API Explorer or use the SDK sample code generated by API Explorer.

URI

POST /v1/{project_id}/services

Table 1 Path Parameters

Parameter

Mandatory

Type

Description

project_id

Yes

String

Project ID. For details, see Obtaining a Project ID and Name.

Request Parameters

Table 2 Request header parameters

Parameter

Mandatory

Type

Description

X-Auth-Token

Yes

String

User token. It can be obtained by calling the IAM API that is used to obtain a user token. The value of X-Subject-Token in the response header is the user token.
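As an illustrative sketch (not part of any official SDK), the following Python snippet assembles a deploy request with the X-Auth-Token header described above. The endpoint, project ID, and token values are placeholders, not real credentials.

```python
# Minimal sketch of preparing a call to POST /v1/{project_id}/services
# with the X-Auth-Token header. All values below are placeholders.
import json

def build_deploy_request(endpoint: str, project_id: str, token: str, body: dict):
    """Assemble the URL, headers, and JSON payload for the deploy-service API."""
    url = f"{endpoint}/v1/{project_id}/services"
    headers = {
        "X-Auth-Token": token,            # value of X-Subject-Token from the IAM token API
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_deploy_request(
    "https://modelarts.example.com",      # placeholder endpoint
    "my-project-id",                      # placeholder project ID
    "my-iam-token",                       # placeholder user token
    {"service_name": "mnist", "infer_type": "real-time", "config": []},
)
# A real call would then send the request, e.g. with urllib.request:
#   import urllib.request
#   req = urllib.request.Request(url, data=payload.encode(), headers=headers, method="POST")
#   resp = urllib.request.urlopen(req)
```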

Table 3 Request body parameters

Parameter

Mandatory

Type

Description

workspace_id

No

String

ID of the workspace to which a service belongs. The default value is 0, indicating the default workspace.

schedule

No

Array of Schedule objects

Service scheduling configuration, which can be set only for real-time services. By default, this parameter is not used and the service runs continuously.

cluster_id

No

String

Dedicated resource pool ID. By default, this parameter is left blank, indicating that no dedicated resource pool is used. When using a dedicated resource pool to deploy services, ensure that the cluster is running properly. After this parameter is configured, the network configuration of the cluster is used and the vpc_id parameter does not take effect. If both this parameter and cluster_id in the real-time config are configured, cluster_id in the real-time config takes precedence.

pool_name

No

String

Specifies the ID of the new dedicated resource pool. By default, this parameter is left blank, indicating that no dedicated resource pool is used. When using a dedicated resource pool to deploy services, ensure that the pool is running properly. If both this parameter and pool_name in the real-time config are configured, pool_name in the real-time config takes precedence.

infer_type

Yes

String

Inference mode, which can be real-time, edge, or batch. real-time indicates a real-time service: the model is deployed as a web service with a real-time test UI and monitoring, and the service keeps running. edge indicates an edge service, which is deployed on IEF edge nodes (see the nodes parameter). batch indicates a batch service, which performs inference on batch data and automatically stops after the data is processed.

vpc_id

No

String

ID of the VPC to which a real-time service instance is deployed. By default, this parameter is left blank. In this case, ModelArts allocates a dedicated VPC to each user, and users are isolated from each other. To access other service components in the VPC of the service instance, set this parameter to the ID of the corresponding VPC. Once a VPC is configured, it cannot be modified. If both vpc_id and cluster_id are configured, only the dedicated resource pool takes effect.

service_name

Yes

String

Service name, which consists of 1 to 64 characters.

description

No

String

Service description, which contains a maximum of 100 characters. By default, this parameter is left blank.

security_group_id

No

String

Security group. By default, this parameter is left blank. This parameter is mandatory if vpc_id is configured. A security group is a virtual firewall that provides secure network access control policies for service instances. A security group must contain at least one inbound rule to permit the requests whose protocol is TCP, source address is 0.0.0.0/0, and port number is 8080.

subnet_network_id

No

String

ID of a subnet. By default, this parameter is left blank. This parameter is mandatory if vpc_id is configured. Enter the network ID displayed in the subnet details on the VPC management console. A subnet provides dedicated network resources that are isolated from other networks.

config

Yes

Array of ServiceConfig objects

Model running configurations. If infer_type is batch, you can configure only one model. If infer_type is real-time, you can configure multiple models and assign traffic weights based on service requirements; however, the versions of the configured models must be unique.

additional_properties

No

Map<String,serviceAdditionalProperties>

Additional service attributes, which facilitate service management
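The body constraints above can be checked client-side before the request is sent. The following is an illustrative sketch of the documented rules (service_name length, infer_type values, description length, mandatory config), not part of any official SDK:

```python
# Client-side sanity checks mirroring the request-body constraints in Table 3.
# The service performs its own authoritative validation.

VALID_INFER_TYPES = {"real-time", "edge", "batch"}

def validate_deploy_body(body: dict) -> list:
    """Return a list of constraint violations (empty if the body looks valid)."""
    errors = []
    name = body.get("service_name", "")
    if not 1 <= len(name) <= 64:
        errors.append("service_name must consist of 1 to 64 characters")
    if body.get("infer_type") not in VALID_INFER_TYPES:
        errors.append("infer_type must be real-time, edge, or batch")
    if len(body.get("description", "")) > 100:
        errors.append("description must not exceed 100 characters")
    if not body.get("config"):
        errors.append("config is mandatory and must contain at least one model")
    return errors
```

For example, a body with an empty service_name and an unknown infer_type would return three violations, while the real-time sample request later on this page passes cleanly.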

Table 4 Schedule

Parameter

Mandatory

Type

Description

duration

Yes

Integer

Duration in the unit specified by time_unit. For example, to stop the task after two hours, set time_unit to HOURS and duration to 2.

time_unit

Yes

String

Scheduling time unit. Possible values are DAYS, HOURS, and MINUTES.

type

Yes

String

Scheduling type. Only the value stop is supported.

Table 5 ServiceConfig

Parameter

Mandatory

Type

Description

custom_spec

No

CustomSpec object

Custom resource specifications

envs

No

Map<String,String>

Common parameter. (Optional) Environment variable key-value pair required for running a model. By default, this parameter is left blank.

specification

Yes

String

Common parameter. Resource flavor. The current version supports modelarts.vm.cpu.2u, modelarts.vm.gpu.p4 (permission required), modelarts.vm.ai1.a310 (permission required), and custom (available only when the service is deployed in a dedicated resource pool). To request a restricted flavor, obtain permissions from ModelArts O&M engineers. If this parameter is set to custom, the custom_spec parameter must also be specified.

weight

No

Integer

Weight of traffic allocated to a model. This parameter is mandatory only when infer_type is set to real-time, and the sum of all weights must equal 100. If multiple model versions are configured with different traffic weights in a real-time service, ModelArts continuously forwards the prediction requests arriving at the service's prediction API to the model instances of the corresponding versions based on the weights.

model_id

Yes

String

Common parameter. Model ID, which can be obtained by calling the API for obtaining a model list.

src_path

No

String

Mandatory for batch services. OBS path to the input data of a batch job

req_uri

No

String

Mandatory for batch services. Inference API called in a batch task, which is the RESTful API exposed in the model image. You must select an API URL from the config.json file of the model for inference. If a built-in inference image of ModelArts is used, the API is displayed as /.

mapping_type

No

String

Mandatory for batch services. Mapping type of the input data. The value can be file or csv. file indicates that each inference request corresponds to a file in the input data directory. If this parameter is set to file, req_uri of the model can have only one input parameter and the type of this parameter is file. If this parameter is set to csv, each inference request corresponds to a row of data in the CSV file. When csv is used, the file in the input data directory can only be suffixed with .csv, and the mapping_rule parameter must be configured to map the index of each parameter in the inference request body to the CSV file.

cluster_id

No

String

Optional for real-time services. ID of a dedicated resource pool. This parameter is left blank by default, indicating that no dedicated resource pool is used. When using a dedicated resource pool to deploy services, ensure that the resource pool is running properly. After this parameter is configured, the network configuration of the cluster is used, and the vpc_id parameter does not take effect.

pool_name

No

String

Specifies the ID of the new dedicated resource pool. By default, this parameter is left blank, indicating that no dedicated resource pool is used. When using a dedicated resource pool to deploy services, ensure that the pool is running properly. If pool_name is configured both here and at the service level, this parameter (the one in the real-time config) takes precedence.

nodes

No

Array of strings

Mandatory for edge services. Edge node ID array. The node ID is the edge node ID on IEF, which can be obtained after the edge node is created on IEF.

mapping_rule

No

Object

Optional for batch services. Mapping between input parameters and CSV data. This parameter is mandatory only when mapping_type is set to csv. The mapping rule is similar to the definition of the input parameters in the model's config.json file: under each parameter of the string, number, integer, or boolean type, configure an index parameter indicating which CSV column supplies that parameter's value in the inference request. Multiple pieces of CSV data are separated by commas (,). Index values start from 0; an index of -1 means the parameter is ignored. For details, see the sample of creating a batch service.

src_type

No

String

Mandatory for batch services. Data source type, which can be ManifestFile. By default, this parameter is left blank, indicating that only files in the src_path directory are read. If this parameter is set to ManifestFile, src_path must be set to a specific manifest path. Multiple data paths can be specified in the manifest file. For details, see the manifest inference specifications.

dest_path

No

String

Mandatory for batch services. OBS path to the output data of a batch job

instance_count

Yes

Integer

Common parameter. Number of instances deployed for a model. The maximum number of instances is 5. To use more instances, submit a service ticket.

additional_properties

No

Map<String,ModelAdditionalProperties>

Additional attributes for model deployment, facilitating service instance management
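To illustrate how the index values in mapping_rule select CSV columns (as described under mapping_type and mapping_rule above), here is a minimal sketch. The rule shown is a trimmed, hypothetical example in the same shape as the batch-service sample later on this page:

```python
# Sketch of how mapping_rule index values map CSV columns onto a request body.

def apply_mapping(rule: dict, row: list):
    """Recursively replace each leaf carrying an "index" with the CSV column at that index."""
    if "index" in rule:
        idx = rule["index"]
        return None if idx == -1 else row[idx]   # index -1 means: ignore this parameter
    if rule.get("type") == "object":
        return {k: apply_mapping(v, row) for k, v in rule.get("properties", {}).items()}
    if rule.get("type") == "array":
        return [apply_mapping(item, row) for item in rule.get("items", [])]
    return None

# Hypothetical two-column rule: input1 reads column 1, input2 reads column 0.
rule = {
    "type": "object",
    "properties": {
        "input1": {"type": "number", "index": 1},
        "input2": {"type": "number", "index": 0},
    },
}
# One CSV row "0.5,0.8" becomes the request body {"input1": "0.8", "input2": "0.5"}.
body = apply_mapping(rule, "0.5,0.8".split(","))
```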

Table 6 CustomSpec

Parameter

Mandatory

Type

Description

gpu_p4

No

Float

Number of GPUs, which can be a decimal with up to two decimal places. The value cannot be smaller than 0. This parameter is optional and is not used by default.

memory

Yes

Integer

Memory in MB, which must be an integer

cpu

Yes

Float

Number of CPU cores, which can be a decimal. The value cannot be smaller than 0.01.

ascend_a310

No

Integer

Number of Ascend chips. This parameter is optional and is not used by default. Configure either this parameter or gpu_p4, but not both.

Table 7 ModelAdditionalProperties

Parameter

Mandatory

Type

Description

persistent_volumes

Yes

Array of persistent_volumes objects

Persistent storage mounting

log_volume

Yes

Array of log_volume objects

Host directory mounting. This parameter takes effect only if a dedicated resource pool is used. If a public resource pool is used to deploy services, this parameter cannot be configured. Otherwise, an error will occur.

Table 8 persistent_volumes

Parameter

Mandatory

Type

Description

name

Yes

String

Image name

mount_path

Yes

String

Mount path of an image in the container

Table 9 log_volume

Parameter

Mandatory

Type

Description

host_path

Yes

String

Log path to be mapped on the host

mount_path

Yes

String

Path to the logs in the container

Table 10 serviceAdditionalProperties

Parameter

Mandatory

Type

Description

smn_notification

Yes

Map<String,smnNotification>

SMN message notification structure, which is used to notify the user of the service status change

Table 11 smnNotification

Parameter

Mandatory

Type

Description

topic_urn

Yes

String

URN of an SMN topic

events

Yes

Array of integers

Event ID. Options:

1: failed
3: running
7: concerning
11: pending
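When consuming SMN notifications, the event IDs above can be decoded with a simple lookup table. This is an illustrative sketch, not an official mapping API:

```python
# Lookup table for the documented service-event IDs in SMN notifications.
SERVICE_EVENTS = {1: "failed", 3: "running", 7: "concerning", 11: "pending"}

def describe_events(event_ids):
    """Translate a list of event IDs into their documented status names."""
    return [SERVICE_EVENTS.get(e, "unknown") for e in event_ids]
```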

Response Parameters

Status code: 200

Table 12 Response body parameters

Parameter

Type

Description

service_id

String

Service ID

resource_ids

Array of strings

Array of resource IDs generated for the deployed model

Example Requests

  • Sample request of creating a real-time service

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "infer_type" : "real-time",
      "service_name" : "mnist",
      "description" : "mnist service",
      "config" : [ {
        "specification" : "modelarts.vm.cpu.2u",
        "weight" : 100,
        "model_id" : "0e07b41b-173e-42db-8c16-8e1b44cc0d44",
        "instance_count" : 1
      } ]
    }
    
  • Sample request of creating a real-time service and configuring multiple versions for traffic distribution

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "mnist",
      "description" : "mnist service",
      "infer_type" : "real-time",
      "config" : [ {
        "model_id" : "xxxmodel-idxxx",
        "weight" : "70",
        "specification" : "modelarts.vm.cpu.2u",
        "instance_count" : 1,
        "envs" : {
          "model_name" : "mxnet-model-1",
          "load_epoch" : "0"
        }
      }, {
        "model_id" : "xxxxxx",
        "weight" : "30",
        "specification" : "modelarts.vm.cpu.2u",
        "instance_count" : 1
      } ]
    }
    
  • Sample request of creating a real-time service in a dedicated resource pool with custom specifications

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "realtime-demo",
      "description" : "",
      "infer_type" : "real-time",
      "cluster_id" : "8abf68a969c3cb3a0169c4acb24b0000",
      "config" : [ {
        "model_id" : "eb6a4a8c-5713-4a27-b8ed-c7e694499af5",
        "weight" : "100",
        "cluster_id" : "8abf68a969c3cb3a0169c4acb24b0000",
        "specification" : "custom",
        "custom_spec" : {
          "cpu" : 1.5,
          "memory" : 7500,
          "gpu_p4" : 0,
          "ascend_a310" : 0
        },
        "instance_count" : 1
      } ]
    }
    
  • Sample request of creating a real-time service and configuring it to automatically stop

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "service-demo",
      "description" : "demo",
      "infer_type" : "real-time",
      "config" : [ {
        "model_id" : "xxxmodel-idxxx",
        "weight" : "100",
        "specification" : "modelarts.vm.cpu.2u",
        "instance_count" : 1
      } ],
      "schedule" : [ {
        "type" : "stop",
        "time_unit" : "HOURS",
        "duration" : 1
      } ]
    }
    
  • Sample request of creating a batch service and setting mapping_type to file

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "batchservicetest",
      "description" : "",
      "infer_type" : "batch",
      "cluster_id" : "8abf68a969c3cb3a0169c4acb24b****",
      "config" : [ {
        "model_id" : "598b913a-af3e-41ba-a1b5-bf065320f1e2",
        "specification" : "modelarts.vm.cpu.2u",
        "instance_count" : 1,
        "src_path" : "https://infers-data.obs.xxxxx.com/xgboosterdata/",
        "dest_path" : "https://infers-data.obs.xxxxx.com/output/",
        "req_uri" : "/",
        "mapping_type" : "file"
      } ]
    }
    
  • Sample request of creating a batch service and setting mapping_type to csv

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "batchservicetest",
      "description" : "",
      "infer_type" : "batch",
      "config" : [ {
        "model_id" : "598b913a-af3e-41ba-a1b5-bf065320f1e2",
        "specification" : "modelarts.vm.cpu.2u",
        "instance_count" : 1,
        "src_path" : "https://infers-data.obs.xxxxx.com/xgboosterdata/",
        "dest_path" : "https://infers-data.obs.xxxxx.com/output/",
        "req_uri" : "/",
        "mapping_type" : "csv",
        "mapping_rule" : {
          "type" : "object",
          "properties" : {
            "data" : {
              "type" : "object",
              "properties" : {
                "req_data" : {
                  "type" : "array",
                  "items" : [ {
                    "type" : "object",
                    "properties" : {
                      "input5" : {
                        "type" : "number",
                        "index" : 0
                      },
                      "input4" : {
                        "type" : "number",
                        "index" : 1
                      },
                      "input3" : {
                        "type" : "number",
                        "index" : 2
                      },
                      "input2" : {
                        "type" : "number",
                        "index" : 3
                      },
                      "input1" : {
                        "type" : "number",
                        "index" : 4
                      }
                    }
                  } ]
                }
              }
            }
          }
        }
      } ]
    }
    
  • Sample request for creating an edge service

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "service-edge-demo",
      "description" : "",
      "infer_type" : "edge",
      "config" : [ {
        "model_id" : "eb6a4a8c-5713-4a27-b8ed-c7e694499af5",
        "specification" : "custom",
        "custom_spec" : {
          "cpu" : 1.5,
          "memory" : 7500,
          "gpu_p4" : 0,
          "ascend_a310" : 0
        },
        "envs" : { },
        "nodes" : [ "2r8c4fb9-t497-40u3-89yf-skui77db0472" ]
      } ]
    }
    

Example Responses

Status code: 200

Service deployed

{
  "service_id" : "10eb0091-887f-4839-9929-cbc884f1e20e",
  "resource_ids" : [ "INF-f878991839647358@1598319442708" ]
}
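The response body can be parsed directly; for example, service_id from the sample above identifies the new service for subsequent queries, and resource_ids lists the resources created for the model:

```python
# Parsing the sample deployment response shown above.
import json

response_text = '''
{
  "service_id" : "10eb0091-887f-4839-9929-cbc884f1e20e",
  "resource_ids" : [ "INF-f878991839647358@1598319442708" ]
}
'''
resp = json.loads(response_text)
service_id = resp["service_id"]    # use this ID when querying or updating the service
```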

Status Codes

Status Code

Description

200

Service deployed

Error Codes

See Error Codes.