This page provides a reference guide to autoscaling for Gradient Deployments.
Configure autoscaling to handle scale-up and scale-down events for your Gradient deployment.
How autoscaling works
Gradient autoscaling uses the Kubernetes Horizontal Pod Autoscaler, with defaults chosen to make it easy to scale a deployment up and down quickly.
Autoscaling scales the deployment up and down based on a chosen summary function and a specified target value. The number of current replicas for each deployment will never fall below replicas or rise above maxReplicas.
To change the autoscaling configuration, update the spec through the CLI.
enabled (default: true) - Turns autoscaling on or off.
maxReplicas - The upper bound on the number of replicas the deployment can run. The deployment's active replica count always falls between the value of replicas and maxReplicas.
metric - Sets the metric used to decide when to scale up or down.
summary - Sets the function used to summarize the metric when calculating scale events.
value - The summarized metric value that triggers the deployment to scale.
Multiple metrics can be used in the spec to determine when to scale. If you provide multiple metric blocks, the deployment calculates a proposed replica count for each metric, then scales to the highest of those proposed counts.
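The "highest proposed count wins" rule can be sketched with the standard Kubernetes HPA formula, ceil(currentReplicas × currentValue / targetValue). This is a simplified illustration, not Gradient's internal code, and the metric readings in it are made up:

```python
import math

def proposed_replicas(current_replicas, current_value, target_value):
    # Kubernetes HPA formula: scale proportionally to how far the
    # observed value is from the target, rounding up.
    return math.ceil(current_replicas * current_value / target_value)

def desired_replicas(current_replicas, readings, min_replicas, max_replicas):
    # readings: list of (current_value, target_value) pairs, one per metric.
    # Each metric proposes a count; the highest proposal wins,
    # clamped to the [min_replicas, max_replicas] range.
    proposals = [
        proposed_replicas(current_replicas, cur, target)
        for cur, target in readings
    ]
    return min(max(max(proposals), min_replicas), max_replicas)

# Made-up readings: CPU at 80% against a 60% target proposes 3 replicas;
# memory at 50% against a 70% target proposes only 2, so CPU wins.
print(desired_replicas(2, [(80, 60), (50, 70)], 1, 5))  # -> 3
```

Clamping last is what guarantees the count stays between replicas and maxReplicas no matter how far a single metric overshoots.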
The following metrics can be used:
|Metric|Summary|Description|Type|
|---|---|---|---|
|cpu|average|Average CPU utilization across each replica (% of 100)|Integer|
|memory|average|Average memory utilization across each replica (% of 100)|Integer|
|requestDuration|average|Average request duration over a 5-minute period across all IPs behind the proxy (seconds)|Integer|
A spec that configures all of the available autoscaling metrics lists one metric block per metric; each block also sets a summary and a value. See the example scenario below:
- metric: requestDuration
- metric: cpu
- metric: memory
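Filled out, such a spec fragment might look like the following. This is a hedged sketch: the metric names and the average summary come from the table above, but the nesting under resources, the maxReplicas bound, and the target values (150 ms request duration, 60% CPU, 70% memory) are illustrative assumptions rather than values from this page:

```yaml
resources:
  replicas: 1            # lower bound: the deployment never scales below this
  autoscaling:
    enabled: true
    maxReplicas: 5       # upper bound on active replicas (assumed value)
    metrics:
      - metric: requestDuration
        summary: average
        value: 0.15      # seconds (150 ms), per the table's units
      - metric: cpu
        summary: average
        value: 60        # percent of 100
      - metric: memory
        summary: average
        value: 70        # percent of 100
```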
Example Scenario: As requests begin to come in, the average request duration over a 5-minute period exceeds 150 ms. As a result, the deployment scales up from 1 to 2 replicas. Over the next 5-minute interval the request duration still exceeds 150 ms, and the deployment scales to 3 replicas. Once request times fall below 150 ms and the 5-minute stabilization period has passed, the deployment begins to scale back down.
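The scenario's arithmetic can be reproduced with the same proportional HPA formula. The observed durations below are invented to match the story, and the sketch ignores the 5-minute scale-down stabilization delay, which in practice holds the replica count steady before shrinking it:

```python
import math

def proposed(current, observed_ms, target_ms=150):
    # HPA-style proportional scaling, rounding up (simplified illustration).
    return math.ceil(current * observed_ms / target_ms)

# Made-up observations for three consecutive 5-minute windows:
# 200 ms, 200 ms, then 100 ms once the extra replicas absorb the load.
replicas = 1
for observed_ms in (200, 200, 100):
    replicas = min(max(proposed(replicas, observed_ms), 1), 3)
    print(replicas)  # 1 -> 2, 2 -> 3, then back down to 2
```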