Optimizing TensorFlow Models for Inference

Gradient supports deploying models that conform to industry standards. A variety of optimizations can be applied to a trained neural network to reduce its size and its latency at inference time, such as quantization, weight pruning, and constant folding. Because Gradient uses TensorFlow Serving for TensorFlow models, these optimized graphs can be deployed just like unoptimized ones.
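To make the size reduction concrete, here is a minimal, dependency-free sketch of symmetric int8 quantization, the idea behind post-training quantization. This is an illustration of the arithmetic only, not the TensorFlow API; the function names are hypothetical.

```python
from array import array

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8.

    Maps [-max_abs, max_abs] onto [-127, 127] and returns the
    quantized values plus the scale needed to dequantize them.
    (Illustrative sketch; real toolchains quantize per-tensor or
    per-channel and calibrate activations too.)
    """
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = array('b', (round(w / scale) for w in weights))  # 1 byte per weight
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.81, -0.45, 0.12, -0.98, 0.33]
f32 = array('f', weights)          # 4 bytes per weight
q, scale = quantize_int8(weights)  # 1 byte per weight

print(len(f32) * f32.itemsize)     # storage as float32: 20 bytes
print(len(q) * q.itemsize)         # storage as int8: 5 bytes
print(max(abs(w - d) for w, d in zip(weights, dequantize(q, scale))))
```

Storing weights as int8 cuts the parameter footprint to a quarter of float32, at the cost of a small, bounded rounding error per weight; this is why quantized graphs are both smaller on disk and faster to serve on hardware with int8 kernels.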