Importing a Meta Model from OBS

If you have developed and trained a model using a mainstream AI engine, you can import the model to ModelArts and use it to create an AI application. In this way, your AI applications can be centrally managed on ModelArts.

Prerequisites

  • The model has been developed and trained, and the type and version of the AI engine used by the model are supported by ModelArts. For details, see Supported AI Engines for ModelArts Inference.

  • The trained model package, inference code, and configuration file have been uploaded to OBS.

  • The OBS directory you use and ModelArts are in the same region.

Creating an AI Application

  1. Log in to the ModelArts console, and choose AI Application Management > AI Applications from the navigation pane. The AI Applications page is displayed.

  2. Click Create in the upper left corner.

  3. On the displayed page, configure parameters.

    1. Enter basic information about the AI application. For details, see Table 1.

      Table 1 Basic information

      • Name: Name of the AI application. The value can contain 1 to 64 visible characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

      • Version: Version of the AI application. The default value is 0.0.1 for the first import. Note that after an AI application is created, you can create new versions using different meta models for optimization.

      • Description: Brief description of the AI application.

    2. Select the meta model source and configure related parameters. Set Meta Model Source to OBS. For details about the parameters, see Table 2.

      To import a meta model from OBS, edit the inference code and configuration file by following the model package specifications, and place them in the model folder that stores the meta model. If the selected directory does not comply with these specifications, the AI application cannot be created.

      Table 2 Meta model source parameters

      • Meta Model: OBS path for storing the meta model. The path cannot contain spaces; otherwise, the creation of the AI application will fail.

      • AI Engine: AI engine used by the meta model, set automatically according to the model storage path you select. If AI Engine is Custom, configure the following parameters:

        • Container API: Protocol and port number for starting the model. The default request protocol is HTTPS and the default port is 8080.

        • Health Check: Health check on the model. This parameter is configurable only if the health check API is configured in the custom image; otherwise, the AI application deployment will fail.

          • Check Mode: Select HTTP request or Command.

          • Health Check URL: Health check URL, which defaults to /health. Displayed when Check Mode is set to HTTP request.

          • Health Check Command: Health check command. Displayed when Check Mode is set to Command.

          • Health Check Period: Interval between health checks, in seconds. Enter an integer from 1 to 2147483647. The default value is 5.

          • Delay: Delay before health checks start after the instance has started, in seconds. Enter an integer from 0 to 2147483647. The default value is 12.

          • Maximum Failures: Enter an integer from 1 to 2147483647. If the service fails this many consecutive health checks during startup, it enters the abnormal state; if it fails this many during operation, it enters the alarm state. The default value is 12.

      • Runtime Dependency: Dependencies that the selected model has on the environment.

      • AI Application Description: Descriptions that help other developers better understand and use your application. Click Add AI Application Description and enter a document name and URL. You can add up to three descriptions.

      • Configuration File: By default, the system associates the configuration file stored in OBS. After enabling this function, you can review and edit the model configuration file. Note: This function is to be discontinued. After that, you can modify the model configuration by setting AI Engine, Runtime Dependency, and API Configuration.

      • Deployment Type: Service types available for deploying the application. The types you select here are the only options available at deployment time. For instance, selecting Real-Time Services means the AI application can be deployed only as real-time services.

      • API Configuration: Enable this function to edit RESTful APIs that define the AI application input and output formats. The API configuration must comply with ModelArts specifications. For details, see the apis parameters in Specifications for Editing a Model Configuration File. Code Example of apis Parameters shows an example.

    3. Check the information and click Create now.

      In the AI application list, you can view the created AI application and its version. When the status changes to Normal, the AI application is created. On this page, you can perform such operations as creating versions, publishing AI applications, and deploying services.
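The configuration file and API configuration referenced above can be sketched as a minimal config.json. The fragment below is illustrative only: the field names follow the model package specifications, but the concrete values (model_algorithm, model_type, the request/response schemas) are placeholders for a hypothetical TensorFlow image classifier; consult Specifications for Editing a Model Configuration File for the authoritative schema.

```python
# Illustrative model configuration with an "apis" block, serialized to JSON.
# Field names follow the model package specifications; the concrete values
# are placeholders for a hypothetical TensorFlow image classifier.
import json

config = {
    "model_algorithm": "image_classification",  # placeholder
    "model_type": "TensorFlow",                 # must match the AI engine used
    "apis": [
        {
            "url": "/",            # inference path exposed by the service
            "method": "post",
            "request": {
                "Content-type": "multipart/form-data",
                "data": {
                    "type": "object",
                    "properties": {"images": {"type": "file"}},
                },
            },
            "response": {
                "Content-type": "application/json",
                "data": {
                    "type": "object",
                    "properties": {"predicted_label": {"type": "string"}},
                },
            },
        }
    ],
}

# Save this next to the meta model as model/config.json before uploading to OBS.
config_json = json.dumps(config, indent=4)
```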

Follow-Up Operations

Deploying an AI Application as a Service: In the AI application list, click the down arrow on the left of an AI application name to view all of its versions. Locate the row containing the target version, click Deploy in the Operation column, and select a deployment type from the drop-down list. An AI application can be deployed only using the deployment types selected during its creation.
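A deployed real-time service is then invoked over HTTPS. The sketch below builds such a request in Python under the assumption that the service exposes a JSON API at the address shown on its details page; the URL pattern, token, and payload are placeholders, not the definitive API.

```python
# Hedged sketch of invoking a deployed real-time service over HTTP.
# Use the exact API address shown on your service's details page;
# all values below are placeholders.
import json
import urllib.request

def build_predict_request(api_url: str, token: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request against a real-time service."""
    return urllib.request.Request(
        api_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-Auth-Token": token},
        method="POST",
    )

# req = build_predict_request("https://<endpoint>/v1/infers/<service-id>",
#                             token="<IAM-token>", payload={"data": [1, 2, 3]})
# with urllib.request.urlopen(req) as resp:   # actually sends the request
#     print(json.loads(resp.read()))
```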