This article will help you to understand how Model Management works and what the functions on its interface enable you to do.
What is Model Management?
Model Management is powered by MLflow, an open-source machine learning platform that can manage the full machine learning lifecycle.
When you run experiments, metrics and parameters are logged to a remote tracking server, in this case backed by Amazon S3.
Artifacts such as text files, JSON files, and images are saved in your chosen data lake and can be downloaded from there, or viewed directly within Model Management.
What can you do in Model Management?
Model Management enables users to monitor the performance of their models without opening a Workspace. Information about the trained models is accessible on Peak within Model Management.
It enables you to:
Track the performance of your models.
Compare your model's performance across different model runs.
Save artifacts, such as images and text files, from each model run.
Before you start using Model Management
If you are using Model Management for the first time, you will need to activate the Experiments page.
For details of what you need to do, see Activating Model Management.
Navigating Model Management
To open Model Management, go to Factory > Model Management.
The Experiments page appears.
This screen lists your current experiments.
If you have just activated Model Management for the first time, the Experiments page shows only the default experiment created by MLflow.
Hovering over an experiment reveals options to tag, rename, or delete it.
Viewing information about experiment runs
Clicking an experiment lists the details of each of its runs, along with general information about the experiment.
The information is displayed in two tabs:
This tab shows details of each experiment run, such as its Run ID, Model Parameters, and Model Metrics.
Clicking an individual run within this tab gives you detailed information on that run. It also lists all of the run's saved artifacts, for example text files, JSON files, and images.
This tab provides general information on the experiment, such as Experiment ID and Artifact Location.