Paperspace Models can be generated by running machine learning Experiments. They are stored in your Project's Models list, which holds references to the model and checkpoint files generated during training, as well as summary metrics associated with the model's performance, such as accuracy and loss.
You can see all of your Models in the web UI or via the CLI.
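For instance, with the Gradient CLI installed, Models can be listed from the terminal. This is a sketch, not a definitive invocation: the project ID is a placeholder, and flag spellings may differ across CLI versions.

```shell
# List Models, filtered to a single Project (placeholder project ID)
gradient models list --projectId "prj0ztwij"
```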
Currently Supported Models:

- Tensorflow

Future Supported Models:

- ONNX (Open Neural Network Exchange)
To store Models in the Models list, add the following Model-specific parameters to the Experiment command when running an Experiment.
--modelType defines the type of model being generated by the experiment. For example, --modelType Tensorflow ensures that the generated model checkpoint files are recognized as TensorFlow model files.
Model Type Values

- Tensorflow: TensorFlow-compatible model outputs
- ONNX: ONNX model outputs
- Custom: custom model type (e.g., a simple Flask server)
Note: Paperspace Deployments currently support TensorFlow models using TensorFlow Serving, so you'll want to use the Tensorflow option to take advantage of Deployments.
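Putting this together, a single-node Experiment run might tag its output as TensorFlow like so. This is a hedged sketch: the project ID, experiment name, container image, machine type, and training command are all placeholders, and flag names other than --modelType may differ by CLI version.

```shell
# Run a single-node experiment whose checkpoint files are
# recognized as TensorFlow model files (all values are placeholders)
gradient experiments run singlenode \
  --projectId "prj0ztwij" \
  --name "mnist-train" \
  --container "tensorflow/tensorflow:1.13.1-py3" \
  --machineType "K80" \
  --command "python train.py" \
  --modelType Tensorflow
```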
--modelPath defines where the model checkpoint files are stored within the context of the experiment. This key argument enables the evaluation and upload of the generated model files. One option is to set --modelPath "/artifacts", which keeps the checkpoint files only for the lifetime of the experiment. Another is to set --modelPath "/storage/models", which gives you permanent access to the generated model files in your Paperspace persistent storage.
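Combining both flags, an Experiment that writes its checkpoints to persistent storage might look like the following sketch. The --modelType and --modelPath values come from this page; every other value is a hypothetical placeholder.

```shell
# Train with checkpoints written to persistent storage, so the
# generated model files remain available after the experiment ends
gradient experiments run singlenode \
  --projectId "prj0ztwij" \
  --container "tensorflow/tensorflow:1.13.1-py3" \
  --machineType "K80" \
  --command "python train.py" \
  --modelType Tensorflow \
  --modelPath "/storage/models"
```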
Enabling Support for Models with GradientCI
You can also specify the model path and model type parameters when running Experiments with GradientCI. See GradientCI Models for more info.