

The Gradient model repository is a hub for importing, managing, and deploying ML models.



Gradient Models can be created in two ways:

  1. Generate models from your workloads, i.e. Workflows or Notebooks
  2. Import models into Gradient by uploading them directly via the Web UI or CLI.

Gradient has a repository of Models per project. The model repository holds reference to the model artifacts (files generated during training) as well as optional summary metrics associated with the model's performance such as accuracy and loss.

Supported Model formats:

  • TensorFlow
  • ONNX (Open Neural Network Exchange)
  • Custom

How to create a model

There are two ways to create a Model in Gradient, and both can be done via the web UI or CLI:

Run a workload that generates a model

You can do this using a Gradient Action or the SDK. This places your Model in your Project's Model repository.


See the full CLI/SDK documentation for Models.

Upload a model

To upload a Model via the Web UI, first navigate to the **Models** page.

From there, click Upload a Model +

This opens the Upload a Model modal, where you can drag and drop a Model file from your local machine (or click to browse for it), select the model Type, and provide a Name, custom Summary, and any Additional Notes as metadata. You can also click "Or Upload a directory" to select a local folder instead.

Then click Upload Model. This will upload and register the Model in Gradient.
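The same upload can be done from the CLI. A minimal sketch, assuming the `gradient` CLI is installed and authenticated with an API key; the file name, model name, and project ID below are placeholders:

```shell
# Upload a local model file and register it in the project's model repository
gradient models upload ./model.onnx \
  --name "my-model" \
  --modelType ONNX \
  --projectId <your-project-id>
```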

How to view models in the model repository

You can view your team's models in the model repository via the Web UI or CLI, as seen below.

Navigate to **Models** in the side nav to see your list of trained Models:

Models are available within projects

A single Model card in your Models list

As you can see, the Web UI view shows your Model ID, when the model was created, the S3 bucket location of your model, your metrics summary data, the Experiment ID, the model type, and whether it is currently deployed on Paperspace.
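The same listing is available from the CLI. A sketch, assuming an authenticated `gradient` CLI; the project ID is a placeholder:

```shell
# List the models in a project; the output includes each model's ID,
# name, type, and associated project
gradient models list --projectId <your-project-id>
```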

You can click Deploy Model to create a deployment with your Model, or click Open Detail to see a more detailed view of the Model's performance metrics. The detail view also lists all of the checkpoint files (artifacts) generated by the Experiment, as well as the final Model itself, and you can download any of those files.

Expanded Model Details showing performance metrics

Expanded Model Details showing model and checkpoint files

How to rename a model

Just click on the name to rename your model.

How to delete a model

Navigate to **Models** in the side nav to see your list of trained Models. From here you can delete models by clicking the delete button.
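Deletion is also available from the CLI. A sketch, assuming an authenticated `gradient` CLI; the model ID is a placeholder, and the `--id` flag follows the CLI's usual conventions:

```shell
# Delete a model from the repository by its model ID
gradient models delete --id <model-id>
```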

How to add metadata to models

Model Parameters

To store Models in the model repository, add the following Model-specific parameters to the Experiment command when running an Experiment.

Model Type

--modelType defines the type of model that is being generated by the experiment. For example, --modelType Tensorflow will ensure that the model checkpoint files being generated are recognized as TensorFlow model files.

| Model Type Values | Description |
| --- | --- |
| "Tensorflow" | TensorFlow-compatible model outputs |
| "ONNX" | ONNX model outputs |
| "Custom" | Custom model type (e.g., a simple Flask server) |
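Putting this together, an Experiment command might look like the sketch below. This assumes the `gradient` CLI is installed and authenticated; the project ID, container image, machine type, training script, and model path are all placeholders:

```shell
# Sketch: run a single-node experiment whose output is registered
# as a TensorFlow model in the project's model repository
gradient experiments run singlenode \
  --projectId <your-project-id> \
  --container tensorflow/tensorflow:latest \
  --machineType P4000 \
  --command "python train.py" \
  --modelType Tensorflow \
  --modelPath /artifacts
```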

How to create custom model metadata

When modelType is not specified, custom model metadata can be associated with the model for later reference by creating a gradient-model-metadata.json file in the modelPath directory. Any valid JSON data can be stored in this file.

For models of type Tensorflow, metadata is automatically generated for your experiment, so any custom model metadata will be ignored.

An example of custom model metadata JSON is as follows:

{
  "metrics": [
    {
      "name": "accuracy-score",
      "numberValue": 60
    }
  ]
}
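In a training script, this file can simply be written into the model output directory before the workload finishes. A sketch; the temporary directory below stands in for your actual modelPath directory:

```shell
# Stand-in for the modelPath directory of your workload
model_dir="$(mktemp -d)"

# Write custom model metadata; any valid JSON is accepted
cat > "$model_dir/gradient-model-metadata.json" <<'EOF'
{
  "metrics": [
    {
      "name": "accuracy-score",
      "numberValue": 60
    }
  ]
}
EOF
```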