Repositories in a Deployment
This page provides a reference guide for getting repositories into a Gradient Deployment.
The best way to bring applications into a deployment is to include everything in a single Docker container. This improves repeatability and consistency between replicas and deployment specs.
If you don't want to package everything into a Docker container, you can clone one or more Git repositories into the deployment instead. Supply the optional `repositories` configuration in your deployment spec:
- `dataset`: the dataset reference into which you want to sync the repositories. This can be either the dataset ID or the dataset name.
- `mountPath`: the directory where you want to mount the repositories
- `url`: the URL of the Git repository that you want to clone
- `name`: the name to give the repository directory in your container
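For instance, since the dataset reference accepts either the dataset ID or the dataset name, both of the following forms are equivalent (the dataset name shown here is illustrative; the ID and mount path are taken from the example further down):

```yaml
repositories:
  # Reference the dataset by its ID…
  dataset: dstflhstfexl6w3
  # …or, equivalently, by its name (assuming a dataset with this name already exists)
  # dataset: repo-cache
  mountPath: /opt/repositories
```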
Note that if you are using a private Git repository, you can configure authenticated access by also supplying the optional `username` and `password` parameters. The `password` field will need to reference a secret and should not contain the plaintext password itself.
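For example, assuming a secret named github-token has already been created for your project or team, a private repository entry would reference it by name rather than embedding the token itself (the repository URL, username, and secret name below are placeholders):

```yaml
- url: https://github.com/my-org/private-repo.git
  name: private-repo
  username: my-github-username
  # Reference the stored secret; never put the plaintext token here
  password: secret:github-token
```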
Notes:
- GitHub no longer supports authenticating with a username and password, so private GitHub repositories require a username and a personal access token instead. Details on how to create a GitHub access token can be found here.
- You must have already created a dataset to use this feature, since the dataset reference is required to mount a Git repository: Gradient clones your repositories into the specified dataset and mounts that dataset into your deployment at the specified path.
For example, let's say that I want to clone two repositories: one that is public, and one that is private and requires a username and password. That can be accomplished with a spec like the one below:
enabled: true
image: ubuntu:20.04
port: 8501
resources:
  replicas: 1
  instanceType: C4
repositories:
  dataset: dstflhstfexl6w3
  mountPath: /opt/repositories
  repositories:
    - url: https://github.com/tensorflow/serving.git
      name: tf-serving
    - url: https://github.com/paperspace/ml-private.git
      name: paperspace-ml
      username: paperspace-user
      password: secret:paperspace-github-token
Once this completes, I will have access to the TensorFlow Serving repository at `/opt/repositories/tf-serving`, and to the private ML repository at `/opt/repositories/paperspace-ml`.
Note: Certain paths do not work for repository mounting locations. Repositories must be mounted to absolute paths outside of root. Therefore, mount paths like `/` or `.` will cause an error in the Deployment.
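For example, a mount path like the first one below is accepted, while the commented-out values would be rejected (the directory name is just an illustration):

```yaml
repositories:
  # Valid: an absolute path outside of root
  mountPath: /opt/repositories
  # Invalid: mounting at root or at a relative path will fail
  # mountPath: /
  # mountPath: .
```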
Container Registry integration
Private Docker images can be used by referencing a container registry integration that contains the credentials needed to access the image.
To configure a Container Registry, follow this short guide.
Once configured, you can specify a `containerRegistry` value in your deployment spec with the name of the container registry you synced to your Workspace above.
enabled: true
image: paperspace/ml_server
containerRegistry: dockerhub-prod
port: 8501
models:
  - id: model-id
    path: /opt/models
env:
  - name: MODEL_BASE_PATH
    value: /opt/models
  - name: MODEL_NAME
    value: my-tf-model
resources:
  replicas: 1
  instanceType: C4