Deploying a Model as a Batch Service

After a model is prepared, you can deploy it as a batch service. The Service Deployment > Batch Services page lists all batch services. To find a specific service, enter its name in the search box in the upper right corner and click the search icon.

Prerequisites

  • A model in the Normal state has been created in ModelArts.

  • The data to be batch processed is ready and has been uploaded to an OBS directory.

  • At least one empty folder has been created on OBS for storing the output.

Background

  • A maximum of 1,000 batch services can be created.

  • Different parameters are entered depending on the input request (JSON or other files) defined by the model. If the model input is a JSON file, a configuration file is required to generate a mapping file. If the model input is another type of file, no mapping file is required.

  • Batch services can be deployed only in a public resource pool, not in a dedicated resource pool.

Procedure

  1. Log in to the ModelArts management console. In the left navigation pane, choose Service Deployment > Batch Services. By default, the Batch Services page is displayed.

  2. In the batch service list, click Deploy in the upper left corner. The Deploy page is displayed.

  3. Set parameters for a batch service.

    1. Set the basic information, including Name and Description. A name is generated by default, for example, service-bc0d. You can change Name and Description as required.

    2. Set other parameters, including model configurations. For details, see Table 1.

      Table 1 Parameters

      • Model and Version: Select a model and version in the Normal state.

      • Input Path: Select the OBS directory where the input data has been uploaded. You can select a folder or a .manifest file. For details about the specifications of the .manifest file, see Manifest File Specifications.
        Note: If the input data is images, ensure that each image is smaller than 10 MB.

      • Request Path: API URI of the batch service.

      • Output Path: Select the path for saving the batch prediction result. You can select the empty folder that you created.

      • Specifications: The system provides compute resources matching your model. Select an available resource from the drop-down list.

      • Instances: Number of instances for the current model version. If you set Instances to 1, the standalone computing mode is used. If you set it to a value greater than 1, the distributed computing mode is used. Select a computing mode based on your actual requirements.

      • Environment Variable: Environment variables to be injected into the container instance. To ensure data security, do not enter sensitive information, such as plaintext passwords, in environment variables.
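      For reference, the following is a minimal sketch of how an injected environment variable can be read inside a model's inference code. The variable name LOG_LEVEL is a hypothetical example, not one defined by ModelArts.

      import os

      # Hypothetical example: read an environment variable injected at deployment time.
      # LOG_LEVEL is an illustrative name; define whatever variables your model expects.
      log_level = os.environ.get("LOG_LEVEL", "INFO")
      print("Running inference with log level:", log_level)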

  4. After setting the parameters, deploy the model as a batch service as prompted. Service deployment usually takes several minutes to tens of minutes, depending on the amount of data and the resources you selected.

    You can view the basic information about the service in the batch service list. When the status of the newly deployed service changes from Deploying to Running, the service has been deployed successfully.

Manifest File Specifications

Batch services on the inference platform support manifest files. A manifest file describes the input and output locations of data.

Example input manifest file

  • File name: test.manifest

  • File content:

    {"source": "<obs path>/test/data/1.jpg"}
    {"source": "https://infers-data.obs.xxx.com:443/xgboosterdata/data.csv?AccessKeyId=2Q0V0TQ461N26DDL18RB&Expires=1550611914&Signature=wZBttZj5QZrReDhz1uDzwve8GpY%3D&x-obs-security-token=gQpzb3V0aGNoaW5hixvY8V9a1SnsxmGoHYmB1SArYMyqnQT-ZaMSxHvl68kKLAy5feYvLDM..."}
    
  • File requirements:

    1. The file name extension must be .manifest.

    2. The file content is in JSON format. Each row describes one piece of input data, which must point to a specific file rather than a folder.

    3. A source field must be defined for the JSON content. The field value is the OBS URL of the file: <obs path>/{{Bucket name}}/{{Object name}}.
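As a quick illustration, the following minimal sketch writes an input manifest for a list of OBS objects. The bucket and object names are hypothetical examples.

    # Minimal sketch: generate an input .manifest file from a list of OBS object paths.
    import json

    obs_paths = [
        "obs://obs-data-bucket/test/data/1.jpg",
        "obs://obs-data-bucket/test/data/2.jpg",
    ]

    with open("test.manifest", "w") as f:
        for path in obs_paths:
            # Each line is a standalone JSON object whose "source" field
            # points to a specific file, not a folder.
            f.write(json.dumps({"source": path}) + "\n")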

Example output manifest file

If you use an input manifest file, the output directory will contain an output manifest file.

  • Assume that the output path is //test-bucket/test/. The results are stored in the following directory structure:

    OBS bucket/directory name
    ├── test-bucket
    │   ├── test
    │   │   ├── infer-result-0.manifest
    │   │   ├── infer-result
    │   │   │   ├── 1.jpg_result.txt
    │   │   │   ├── 2.jpg_result.txt
    
  • Content of the infer-result-0.manifest file:

    {"source": "<obs path>/obs-data-bucket/test/data/1.jpg",  "inference-loc": "<obs path>/test-bucket/test/infer-result/1.jpg_result.txt"}
    {"source ": "https://infers-data.obs.xxx.com:443/xgboosterdata/2.jpg?AccessKeyId=2Q0V0TQ461N26DDL18RB&Expires=1550611914&Signature=wZBttZj5QZrReDhz1uDzwve8GpY%3D&x-obs-security-token=gQpzb3V0aGNoaW5hixvY8V9a1SnsxmGoHYmB1SArYMyqnQT-ZaMSxHvl68kKLAy5feYvLDMNZWxzhBZ6Q-3HcoZMh9gISwQOVBwm4ZytB_m8sg1fL6isU7T3CnoL9jmvDGgT9VBC7dC1EyfSJrUcqfB...",  "inference-loc": "obs://test-bucket/test/infer-result/2.jpg_result.txt"}
    
  • File format:

    1. The file name is infer-result-{{index}}.manifest, where index is the instance ID. Each running instance of a batch service generates a manifest file.

    2. An infer-result directory is created in the same directory as the manifest file to store the results.

    3. The file content is in JSON format. Each row describes the output result of a piece of input data.

    4. The content contains two fields:

      1. source: input data description, which is the same as that of the input manifest file

      2. inference-loc: output result path in the format of <obs path>/{{Bucket name}}/{{Object name}}
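To make the format concrete, here is a minimal sketch that reads an output manifest and prints where each result was written. It assumes the infer-result-0.manifest file from the example above.

    # Minimal sketch: parse an output manifest and list each result location.
    import json

    with open("infer-result-0.manifest") as f:
        for line in f:
            record = json.loads(line)
            # "source" is the original input; "inference-loc" is the result path.
            print(record["source"], "->", record["inference-loc"])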

Example Mapping

The following example shows the relationship between the configuration file, mapping rule, CSV data, and inference request.

Assume that the apis parameter in the configuration file used by your model is as follows:

[
    {
        "protocol": "http",
        "method": "post",
        "url": "/",
        "request": {
            "type": "object",
            "properties": {
                "data": {
                    "type": "object",
                    "properties": {
                        "req_data": {
                            "type": "array",
                            "items": [
                                {
                                    "type": "object",
                                    "properties": {
                                        "input_1": {
                                            "type": "number"
                                        },
                                        "input_2": {
                                            "type": "number"
                                        },
                                        "input_3": {
                                            "type": "number"
                                        },
                                        "input_4": {
                                            "type": "number"
                                        }
                                    }
                                }
                            ]
                        }
                    }
                }
            }
        }
    }
]

The corresponding mapping relationship is shown below. The ModelArts management console automatically parses the mapping relationship from the configuration file. When you call a ModelArts API directly, construct the mapping relationship yourself according to this rule.

{
    "type": "object",
    "properties": {
        "data": {
            "type": "object",
            "properties": {
                "req_data": {
                    "type": "array",
                    "items": [
                        {
                            "type": "object",
                            "properties": {
                                "input_1": {
                                    "type": "number",
                                    "index": 0
                                },
                                "input_2": {
                                    "type": "number",
                                    "index": 1
                                },
                                "input_3": {
                                    "type": "number",
                                    "index": 2
                                },
                                "input_4": {
                                    "type": "number",
                                    "index": 3
                                }
                            }
                        }
                    ]
                }
            }
        }
    }
}

The CSV data for inference is in the following format. The values must be separated by commas (,).

5.1,3.5,1.4,0.2
4.9,3.0,1.4,0.2
4.7,3.2,1.3,0.2

Based on the defined mapping relationship, the inference request is constructed as shown below. The format is similar to that of a real-time service request.

{
    "data": {
        "req_data": [{
            "input_1": 5.1,
            "input_2": 3.5,
            "input_3": 1.4,
            "input_4": 0.2
        }]
    }
}
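To show how the index fields tie the mapping to the CSV columns, here is a minimal sketch that builds such a request from one CSV row. The conversion logic is an illustrative assumption, not ModelArts code; the field names and row values come from the examples above.

import json

# Minimal sketch: convert one CSV row into an inference request using the
# "index" values from the mapping above (field name -> CSV column index).
mapping = {"input_1": 0, "input_2": 1, "input_3": 2, "input_4": 3}

csv_row = "5.1,3.5,1.4,0.2"
values = csv_row.split(",")

# Each field takes the CSV column given by its index in the mapping.
req_data = {name: float(values[col]) for name, col in mapping.items()}
request = {"data": {"req_data": [req_data]}}

print(json.dumps(request, indent=4))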