Gradient integrates with Jupyter Notebooks and Jupyter Lab, making it easy to get a coding environment provisioned in seconds. Gradient Notebooks make it easy to explore data and coding concepts, and collaborate with other people on projects.
Experiments are designed for training machine learning models on GPUs (and other chips) without managing any infrastructure. Experiments are used to create and start either a single Job or multiple Jobs (e.g., for a hyperparameter search or distributed training).
Experiments are part of a larger suite of tools that work seamlessly with Gradient Notebooks, Models, and Deployments, which together form a production-ready ML/AI pipeline.
Jobs execute generic tasks on remote infrastructure and can be used to perform a variety of functions, from compiling a model to running an ETL operation. Jobs are made up of a collection of code, data, and a container that are packaged together and remotely executed.
Persistent storage is a filesystem automatically mounted on every Experiment, Job, and Notebook, and is ideal for storing data such as images, datasets, model checkpoints, and more. Learn more here.
Artifact storage is collected and made available after the Experiment or Job run in the CLI and web interface. You can download any files that your job has placed in the /artifacts directory from the CLI or UI. If you need to get result data out of Gradient after a job run, use the /artifacts directory. Learn more here.
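A minimal sketch of how a job script might place results where Gradient will collect them. The directory is parameterized here so the snippet can run outside Gradient; inside a running job you would write to /artifacts directly (the file name results.json is just an example):

```python
import json
import os

def save_results(results: dict, artifacts_dir: str = "/artifacts") -> str:
    """Write a results dict as JSON into the artifacts directory.

    Inside a Gradient job, files placed under /artifacts are collected
    after the run and become downloadable from the CLI or web UI.
    """
    os.makedirs(artifacts_dir, exist_ok=True)
    path = os.path.join(artifacts_dir, "results.json")
    with open(path, "w") as f:
        json.dump(results, f)
    return path
```

Anything the script writes under that directory, not just JSON, is collected the same way.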
Workspace storage is typically imported from the local directory in which you started your job. The contents of that directory are zipped up and uploaded to the container in which your job runs. The Workspace exists for the duration of the job run. If you need to push code up to Gradient and run it, Workspace storage is the way to do it. Learn more here.
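The upload step can be pictured in plain Python: the CLI effectively archives your working directory and ships the archive to the job's container. A rough local equivalent (the archive naming below is illustrative, not the CLI's actual format):

```python
import shutil

def package_workspace(workspace_dir: str, archive_base: str) -> str:
    """Zip the contents of workspace_dir, roughly as the Gradient CLI
    does before uploading a workspace to the job container.

    Returns the path of the created .zip archive.
    """
    # shutil.make_archive appends the .zip extension to archive_base.
    return shutil.make_archive(archive_base, "zip", root_dir=workspace_dir)
```

This is why large files in the working directory slow down job submission: everything in the directory goes into the archive.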
Gradient provides the ability to mount S3 compatible object storage buckets to an experiment at runtime. Learn more here.
You can either upload a model directly or generate one from an Experiment; in both cases the model can be interpreted and stored in the Gradient Model Repository.
Once a model is created, you can easily serve it as a high-performance, low-latency microservice with a RESTful API. Learn more here.