Deploying a Model as a Service

Deploying a Model

You can deploy a model as a real-time service that provides a real-time test UI and monitoring capabilities. After model training is complete, you can deploy a version that has the ideal accuracy and is in the Successful status as a service. The procedure is as follows:

  1. On the Train Model tab page, wait until the training status changes to Successful. Click Deploy in the Version Manager pane to deploy the model as a real-time service.

  2. In the Deploy dialog box, select a resource flavor, set the Auto Stop function, and click OK to start the deployment. (A sketch of how these settings might map to an API request follows this procedure.)

    • Specifications: GPU specifications provide better performance, while CPU specifications are more cost-effective.

    • Compute Nodes: The default value is 1 and cannot be changed.

    • Auto Stop: After this function is enabled and the auto stop time is set, a service automatically stops at the specified time.

    The options are 1 hour later, 2 hours later, 4 hours later, 6 hours later, and Custom. If you select Custom, you can enter any integer from 1 to 24 hours in the text box on the right.

  3. After the model deployment is started, view the deployment status on the Service Deployment page.

    Deploying a model takes some time. When the status in the Version Manager pane changes from Deploying to Running, the deployment is complete.

    Note

    On the ExeML page, trained models can be deployed only as real-time services. For details about how to deploy them as batch services, see Where Are Models Generated by ExeML Stored? What Other Operations Are Supported?
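
Beyond the console, a real-time service can also be created programmatically through the ModelArts service management API. The following Python sketch only illustrates how the Deploy dialog settings above (a resource flavor, one compute node, and an Auto Stop time) might be expressed in a deployment request; the endpoint, payload field names, and flavor ID are assumptions for illustration rather than the exact ModelArts contract, so verify them against the ModelArts API Reference for your region.

  # Hedged sketch: creating a real-time service with settings similar to the
  # Deploy dialog (resource flavor, 1 compute node, Auto Stop). The endpoint
  # path, payload fields, and flavor ID are illustrative assumptions; check
  # the ModelArts API Reference before relying on them.
  import requests

  ENDPOINT = "https://modelarts.<region>.myhuaweicloud.com"  # placeholder region endpoint
  PROJECT_ID = "<your-project-id>"                           # placeholder
  TOKEN = "<your-iam-token>"                                 # placeholder IAM token
  MODEL_ID = "<trained-model-id>"                            # placeholder

  payload = {
      "service_name": "exeml-image-classification",
      "infer_type": "real-time",                   # ExeML models are deployed as real-time services
      "config": [{
          "model_id": MODEL_ID,
          "specification": "modelarts.vm.cpu.2u",  # assumed CPU flavor ID; a GPU flavor performs better
          "instance_count": 1,                     # fixed at 1, matching the Compute Nodes setting
      }],
      # Assumed representation of Auto Stop: stop the service 1 hour after it starts.
      "schedule": [{"type": "stop", "time_unit": "HOURS", "duration": 1}],
  }

  resp = requests.post(
      f"{ENDPOINT}/v1/{PROJECT_ID}/services",
      json=payload,
      headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
      timeout=30,
  )
  resp.raise_for_status()
  print("Service ID:", resp.json().get("service_id"))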

Testing a Service

  • On the Service Deployment page, select a service type. For example, on the ExeML page, the object detection model is deployed as a real-time service by default. On the Real-Time Services page, click Prediction in the Operation column of the target service to perform a service test.

  • The following describes how to test the service after an image classification model is deployed on the ExeML page.

    1. After the model is deployed, test the service using an image. On the ExeML page, click the target project, go to the Deploy Service tab page, select the service version in the Running status, click Upload in the service test area, and upload a local image to perform the test.

    2. Click Prediction to conduct the test. After the prediction is complete, the label sunflowers and its confidence score are displayed in the prediction result area on the right. Table 1 describes the parameters in the prediction result. If the model accuracy does not meet your expectations, add more images on the Label Data tab page, label them, and then train and deploy the model again. If you are satisfied with the prediction result, call the API to access the real-time service as prompted. For details, see "Accessing a Real-Time Service" and the example request at the end of this section.

      Currently, only JPG, JPEG, BMP, and PNG images are supported.

      Table 1 Parameters in the prediction result

      Parameter        Description
      predict_label    Image prediction label
      scores           Prediction confidence of the top 5 labels

      Note

      A running real-time service keeps consuming resources. If you do not need to use the real-time service, click Stop in the Version Manager pane to stop the service. If you want to use the service again, click Start.
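
Once the service is running, it can also be called over HTTP instead of through the console test UI. The following Python sketch sends a local image to the service and reads predict_label and scores from the JSON response; the inference URL, the X-Auth-Token header, and the multipart field name "images" are assumptions for illustration, so follow the prompts under "Accessing a Real-Time Service" for the exact request format of your service.

  # Hedged sketch: calling a deployed real-time image classification service
  # and printing the fields described in Table 1. The inference URL, the
  # X-Auth-Token header, and the "images" field name are illustrative
  # assumptions; use the request details shown for your own service.
  import requests

  SERVICE_URL = "https://<inference-endpoint>/v1/infers/<service-id>"  # placeholder, copied from the service page
  TOKEN = "<your-iam-token>"                                           # placeholder IAM token

  with open("sunflower.jpg", "rb") as f:  # only JPG, JPEG, BMP, and PNG images are supported
      resp = requests.post(
          SERVICE_URL,
          headers={"X-Auth-Token": TOKEN},
          files={"images": ("sunflower.jpg", f, "image/jpeg")},
          timeout=30,
      )
  resp.raise_for_status()

  result = resp.json()
  # Illustrative response shape, matching Table 1:
  # {"predict_label": "sunflowers", "scores": [["sunflowers", "0.98"], ...]}
  print("predict_label:", result.get("predict_label"))
  for label, score in result.get("scores", []):
      print(f"  {label}: {score}")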