The Gradient model repository is a hub for importing, managing, and deploying ML models.
Gradient Models can be created in two ways:
- Generate models from your workloads, i.e. Workflows or Notebooks
- Import models into Gradient by uploading them directly from the Web UI or CLI. Learn more here.
Gradient has a repository of Models per project. The model repository holds reference to the model artifacts (files generated during training) as well as optional summary metrics associated with the model's performance such as accuracy and loss.
Supported Model formats:
- ONNX (Open Neural Network Exchange)
How to create a model
There are two ways to create a Model in Gradient, and both can be done via the web UI or CLI:
Run a workload that generates a model
View the full CLI/SDK Docs for Models here.
Upload a model
- Web UI
From your project's Models list, click Upload a Model +
This will open an Upload a Model modal, where you can drag and drop a Model file from your local machine (or click to browse for it locally), select the model Type, and provide a Name, custom Summary, and any Additional Notes as metadata. Alternatively, you can click "Or Upload a directory" to select a local folder.
Then click Upload Model. This will upload and register the Model in Gradient.
You can upload a Model via the CLI with the gradient models upload subcommand:
gradient models upload downloads/squeezenet1.1.onnx --name squeezenet --modelType ONNX
Whether you use the Web UI or CLI, you've now successfully uploaded a Model into Gradient!
Note: Uploaded Models will not be associated with an Experiment.
Now that you have a Model, whether uploaded or generated by running an Experiment, read on to learn how you can use it to create a Deployment.
How to view models in the model repository
You can view your team's models in the model repository via the Web UI or CLI, as seen below.
- Web UI
As you can see, the Web UI view shows your Model ID, when the model was created, the S3 bucket location of your model, your metrics summary data, the Experiment ID, the model type, and whether it is currently deployed on Paperspace.
You can click Deploy Model to create a deployment with your Model. And you can click Open Detail to see a more detailed view of the Model's performance metrics. This will also show a list of all of the checkpoint files (artifacts) generated by the Experiment, as well as the final Model at hand, and you can download any of those files.
$ gradient models list

| Name | ID | Model Type | Project ID | Experiment ID |
| --- | --- | --- | --- | --- |
| None | moilact08jpaok | Tensorflow | prcl68pnk | eshq20m4egwl8i |
| None | mos2uhkg4yvga0p | Tensorflow | prcl68pnk | ejuxcxp2zbv0a |
| None | moc7i8v6bsrhzk | Custom | prddziv0z | e5rxj0aqtgt2 |
The following parameters can be used with the gradient models list command:

| Parameter | Description |
| --- | --- |
| --experimentId | Filter models list by Experiment ID |
| --projectId | Filter models list by Project ID |
How to rename a model
In the Web UI, simply click on the model's name to rename it.
How to delete a model
You can delete a Model via the CLI with the gradient models delete subcommand:

gradient models delete --id <your model id>
How to add metadata to models
To store Models in the Models list, add the following Model-specific parameters to the Experiment command when running an Experiment.
--modelType defines the type of model that is being generated by the experiment. For example, --modelType Tensorflow will ensure that the model checkpoint files being generated are recognized as TensorFlow model files.
| Model Type Values | Description |
| --- | --- |
| Tensorflow | TensorFlow compatible model outputs |
| ONNX | ONNX model outputs |
| Custom | Custom model type (e.g., a simple Flask server) |
How to create custom model metadata
If modelType is not specified, custom model metadata can be associated with the model for later reference by creating a gradient-model-metadata.json file in the modelPath directory. Any valid JSON data can be stored in this file.
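As a minimal sketch, the metadata file can be generated with a few lines of Python. The directory name and metadata keys below are purely illustrative; substitute your own modelPath directory, and store whatever JSON data is useful to you.

```python
import json
import os

# Hypothetical modelPath directory; substitute the directory your
# experiment writes its model artifacts to.
model_path = "artifacts"
os.makedirs(model_path, exist_ok=True)

# Any valid JSON data can be stored in the metadata file; these
# example keys are illustrative, not required by Gradient.
metadata = {
    "framework": "flask",
    "notes": "simple model server",
    "metrics": {"accuracy": 0.94, "loss": 0.18},
}

# Write gradient-model-metadata.json into the modelPath directory.
with open(os.path.join(model_path, "gradient-model-metadata.json"), "w") as f:
    json.dump(metadata, f, indent=2)
```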
For models of type Tensorflow, metadata is automatically generated for your experiment, so any custom model metadata will be ignored.
An example of custom model metadata JSON is as follows:
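The keys below are purely illustrative; since any valid JSON is accepted, you can structure the file however suits your model.

```json
{
  "framework": "flask",
  "notes": "simple model server",
  "metrics": {
    "accuracy": 0.94,
    "loss": 0.18
  }
}
```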