We will showcase many Ray Tune highlights, including how to:
Set up distributed hyperparameter search in under 10 lines of code
Scale from a single machine to a cluster with minimal code changes
Try leading search algorithms (ASHA, BOHB, PBT, and more) with built-in support
Visualize results with TensorBoard or MLflow
We will also share stories from users who are finding their best-performing models while cutting compute costs and maximizing CPU/GPU utilization with Ray Tune.