Introduction to Model Management

AI development and optimization require frequent iteration and debugging. Changes to datasets, training code, or hyperparameters all affect model quality. If the metadata produced during development is not centrally managed, you may be unable to reproduce the optimal model.

ModelArts model management allows you to import meta models from training jobs, OBS, and container images. In this way, you can centrally manage all iterated and debugged models.

Constraints

  • In an ExeML project, a model is automatically added to the model management list after it is deployed. Models generated by ExeML cannot be downloaded; they can be used only for deployment and rollout.

Meta Model Sources

  • Imported from a training job: Create and run a training job in ModelArts. After obtaining a satisfactory trained model, use it to create a model and deploy that model as a service.

  • Imported from OBS: If you develop and train a model locally with a mainstream framework, you can upload the model package to an OBS bucket according to the model package specifications, import it from OBS into ModelArts, and use it to create a model for service deployment.

  • Imported from a container image: If ModelArts does not support your AI engine, you can still use that engine to build a model, package it into a custom image, import the image to ModelArts, use the image to create a model, and deploy the model as a service.
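For the OBS import path, the model package must follow the ModelArts model package specifications. As a rough illustration only (the exact required layout should be checked against those specifications; the assumption here is a top-level `model/` directory containing the model files and a `config.json`), a quick local check before uploading might look like:

```python
import json
import os
import tempfile
from pathlib import Path

# Hypothetical pre-upload check for an OBS model package.
# Assumed layout (verify against the ModelArts model package
# specifications): a top-level "model/" directory holding the
# model files plus a "config.json" describing the model.
REQUIRED = ["model/config.json"]

def validate_package(root: str) -> list:
    """Return the required files missing from the package (empty list means OK)."""
    base = Path(root)
    return [p for p in REQUIRED if not (base / p).exists()]

# Build a minimal sample package in a temp directory and check it.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "model"))
with open(os.path.join(tmp, "model", "config.json"), "w") as f:
    json.dump({"model_type": "TensorFlow"}, f)

print(validate_package(tmp))  # [] -> package passes the check
```

Running such a check locally is cheaper than discovering a malformed package after the OBS upload and import attempt.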

Functions of Model Management

Table 1 Functions of model management

| Supported Function | Description |
| --- | --- |
| Creating a Model | Import trained models into ModelArts and create models for centralized management. An operation guide is provided for each import method. |
| Viewing Details About a Model | After a model is created, you can view its information on the details page. |
| Managing ModelArts Models | To facilitate traceback and model tuning, ModelArts provides version management. You can manage models by version. |

Supported AI Engines for ModelArts Inference

When you import a model from a template or OBS to create a model, the following AI engines and versions are supported.

Note

  • Runtime environments marked recommended are unified runtime images, which will serve as the mainstream base inference images.

  • Images of older versions will be discontinued; use the unified images instead.

  • Base images scheduled for removal are no longer maintained.

  • Naming a unified runtime image: <AI engine name and version> - <Hardware and version: CPU, CUDA, or CANN> - <Python version> - <OS version> - <CPU architecture>

Table 2 Supported AI engines and their runtime

| Engine | Runtime |
| --- | --- |
| TensorFlow | tensorflow_1.15.0-cann_6.3.0-py_3.7-euler_2.8.3-aarch64 |
| MindSpore | mindspore_2.0.0-cann_6.3.0-py_3.7-euler_2.8.3-aarch64 |
| PyTorch | pytorch_1.11.0-cann_6.3.0-py_3.7-euler_2.8.3-aarch64 |
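The unified image names above follow the naming convention from the note, so they can be unpacked mechanically. A small sketch (the dictionary field names are assumptions chosen here to mirror the convention, not ModelArts terminology):

```python
# Parse a unified runtime image name into its components, following the
# convention <engine>-<hardware>-<python version>-<OS>-<CPU architecture>.
def parse_runtime(name: str) -> dict:
    engine, hardware, python, osver, arch = name.split("-")
    return {
        "engine": engine,      # AI engine name and version
        "hardware": hardware,  # CPU, CUDA, or CANN, with version
        "python": python,      # Python version
        "os": osver,           # operating system and version
        "arch": arch,          # CPU architecture
    }

info = parse_runtime("pytorch_1.11.0-cann_6.3.0-py_3.7-euler_2.8.3-aarch64")
print(info["engine"])    # pytorch_1.11.0
print(info["hardware"])  # cann_6.3.0
print(info["arch"])      # aarch64
```

Reading the name this way makes it easy to confirm, for example, that all three runtimes in the table target CANN 6.3.0 on EulerOS 2.8.3 for the aarch64 architecture.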