Hyperparameter Tuning

Scalable and easy distributed hyperparameter tuning

Ray Tune is a Python library for fast hyperparameter tuning at scale. Easily distribute your trial runs to quickly find the best hyperparameters. You can tune your favorite machine learning framework (PyTorch, XGBoost, Scikit-Learn, TensorFlow and Keras, and more) by running state-of-the-art algorithms such as Population Based Training (PBT) and HyperBand/ASHA. Tune further integrates with a wide range of additional hyperparameter optimization tools, including Ax, BayesOpt, BOHB, Dragonfly, FLAML, HEBO, Hyperopt, Nevergrad, Optuna, SigOpt, skopt, and ZOOpt.

Use Cases

Because Ray Tune is flexible, it supports a wide range of use cases.

Hyperparameter Optimization: This is the standard use case, in which you iterate over a search space to find the optimal set of hyperparameters for a given algorithm.
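
A minimal sketch of this pattern with the Ray 2.x Tuner API (the objective function and search space here are purely illustrative stand-ins for a real training job):

    from ray import tune

    def objective(config):
        # Stand-in for a real training run: score the sampled hyperparameters.
        return {"score": (config["x"] - 3) ** 2 + config["y"]}

    search_space = {
        "x": tune.uniform(-10, 10),
        "y": tune.choice([0, 1, 2]),
    }

    tuner = tune.Tuner(
        objective,
        param_space=search_space,
        tune_config=tune.TuneConfig(metric="score", mode="min", num_samples=20),
    )
    results = tuner.fit()
    print(results.get_best_result().config)

Each call to objective is one trial; Tune samples num_samples configurations from the search space and reports back the best one according to the chosen metric.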

AutoML: Ray Tune allows you to choose and iterate over many frameworks, algorithms, and hyperparameters. A common application is to iterate over many forecasting algorithms and select the one best suited to the given data. Another example is to iterate over many deep learning architectures to identify the best one for the given data, which is akin to a simple form of neural architecture search.
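
As a hedged sketch of that pattern, the model family itself can be treated as a hyperparameter and searched over with tune.choice; the dataset and estimators below are illustrative, and any scikit-learn-compatible models would work:

    from ray import tune
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def train_model(config):
        X, y = load_breast_cancer(return_X_y=True)
        if config["model"] == "random_forest":
            model = RandomForestClassifier(n_estimators=config["n_estimators"])
        else:
            model = GradientBoostingClassifier(n_estimators=config["n_estimators"])
        # Report cross-validated accuracy so Tune can compare model families.
        return {"cv_accuracy": float(cross_val_score(model, X, y, cv=3).mean())}

    search_space = {
        "model": tune.choice(["random_forest", "gradient_boosting"]),
        "n_estimators": tune.randint(50, 300),
    }

    tuner = tune.Tuner(
        train_model,
        param_space=search_space,
        tune_config=tune.TuneConfig(metric="cv_accuracy", mode="max", num_samples=16),
    )
    print(tuner.fit().get_best_result().config)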

Many-model training: you can use Ray Tune to distribute the training of many models, each trained on a separate partition of a given dataset.
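
One way to sketch this, using synthetic per-partition data in place of a real partitioned dataset: each value of partition_id becomes its own trial, and Ray runs the trials in parallel across the cluster.

    import numpy as np
    from ray import tune

    def train_on_partition(config):
        # Synthetic stand-in for loading one partition from storage.
        rng = np.random.default_rng(config["partition_id"])
        X = rng.normal(size=(1_000, 4))
        y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=1_000)
        # Fit an independent least-squares model on this partition only.
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return {"train_mse": float(np.mean((X @ coef - y) ** 2))}

    tuner = tune.Tuner(
        train_on_partition,
        # grid_search creates exactly one trial per partition id.
        param_space={"partition_id": tune.grid_search(list(range(16)))},
    )
    results = tuner.fit()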


Key Concepts

In essence, Tune has six core components.

First, you define the hyperparameters you want to tune in a search space and pass them into a trainable that specifies the objective you want to tune. Then you select a search algorithm to effectively optimize your parameters and optionally use a scheduler to stop searches early and speed up your experiments. Together with other configuration, your trainable, search algorithm, and scheduler are passed into Tuner, which runs your experiments and creates trials. These trials can then be used in analyses to inspect your experiment results. The following figure shows an overview of these components, which we cover in detail in the next sections.

[Figure: overview of Ray Tune's key components]
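
The sketch below ties these components together under the Ray 2.x API: a search space, a function trainable, a search algorithm (Optuna, assuming the optuna package is installed), an ASHA scheduler, the Tuner, and the resulting analysis. Note that the in-trial reporting call has moved between Ray versions; recent releases expose it as ray.train.report, while older 2.x releases used ray.air.session.report.

    from ray import train, tune
    from ray.tune.schedulers import ASHAScheduler
    from ray.tune.search.optuna import OptunaSearch

    def trainable(config):
        # Toy objective: pretend the loss improves over training iterations.
        loss = (config["lr"] * 100 - 1) ** 2 + config["momentum"]
        for step in range(100):
            loss *= 0.99
            # Intermediate reports let the scheduler stop bad trials early.
            train.report({"loss": loss, "step": step})

    search_space = {
        "lr": tune.loguniform(1e-4, 1e-1),
        "momentum": tune.uniform(0.0, 0.9),
    }

    tuner = tune.Tuner(
        trainable,
        param_space=search_space,
        tune_config=tune.TuneConfig(
            metric="loss",
            mode="min",
            num_samples=32,
            search_alg=OptunaSearch(),
            scheduler=ASHAScheduler(max_t=100, grace_period=10),
        ),
    )
    results = tuner.fit()  # a ResultGrid with one Result per trial
    best = results.get_best_result()
    print(best.config, best.metrics["loss"])
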
Why Ray Tune?

Here are just a few of the many capabilities that set Ray Tune apart from other hyperparameter optimization libraries.

State-of-the-art algorithms


Maximize model performance and minimize training costs by using the latest algorithms such as PBT, HyperBand, ASHA, and more.
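
For example, Population Based Training can be attached as a scheduler; the sketch below reuses the trainable from the Key Concepts example and omits the checkpointing that PBT normally relies on to clone the best performers:

    from ray import tune
    from ray.tune.schedulers import PopulationBasedTraining

    pbt = PopulationBasedTraining(
        time_attr="training_iteration",
        perturbation_interval=5,
        # Parameters listed here are perturbed or resampled as the population trains.
        hyperparam_mutations={
            "lr": tune.loguniform(1e-4, 1e-1),
            "momentum": tune.uniform(0.0, 0.9),
        },
    )

    tuner = tune.Tuner(
        trainable,  # the function trainable sketched under Key Concepts
        param_space={
            "lr": tune.loguniform(1e-4, 1e-1),
            "momentum": tune.uniform(0.0, 0.9),
        },
        tune_config=tune.TuneConfig(
            metric="loss", mode="min", num_samples=8, scheduler=pbt
        ),
    )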

Library agnostic


Ray Tune supports all the popular machine learning frameworks, including PyTorch, TensorFlow, XGBoost, LightGBM, and Keras — use your favorite!
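
As one hedged example outside deep learning, an XGBoost classifier can be tuned with an ordinary function trainable (requires the xgboost and scikit-learn packages; the dataset is a toy stand-in):

    import xgboost as xgb
    from ray import tune
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    def train_xgb(config):
        X, y = load_breast_cancer(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
        model = xgb.XGBClassifier(
            max_depth=config["max_depth"],
            learning_rate=config["eta"],
            n_estimators=50,
        )
        model.fit(X_train, y_train)
        # Report held-out accuracy as the metric to maximize.
        return {"accuracy": float(model.score(X_test, y_test))}

    tuner = tune.Tuner(
        train_xgb,
        param_space={"max_depth": tune.randint(2, 10), "eta": tune.loguniform(1e-3, 0.3)},
        tune_config=tune.TuneConfig(metric="accuracy", mode="max", num_samples=20),
    )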

Built-in distributed mode


With built-in multi-GPU and multi-node support and seamless fault tolerance, you can easily parallelize your hyperparameter search jobs.
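
Requesting resources per trial is a one-line change; a sketch, reusing the trainable and search space from the Key Concepts example:

    from ray import tune

    # Each trial reserves 4 CPUs and 1 GPU; Tune runs as many trials in
    # parallel as the cluster's resources allow.
    gpu_trainable = tune.with_resources(trainable, {"cpu": 4, "gpu": 1})
    tuner = tune.Tuner(gpu_trainable, param_space=search_space)
    results = tuner.fit()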

Power up existing workflows


Have an existing workflow in another library like HyperOpt or Ax? Integrate Ray Tune to improve performance with minimal code changes.
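
For instance, an existing HyperOpt setup can be plugged in as a Tune search algorithm (requires the hyperopt package); the search space stays in Tune's format and HyperOptSearch translates it:

    from ray import tune
    from ray.tune.search.hyperopt import HyperOptSearch

    tuner = tune.Tuner(
        trainable,  # any objective like the one sketched under Key Concepts
        param_space={
            "lr": tune.loguniform(1e-4, 1e-1),
            "momentum": tune.uniform(0.0, 0.9),
        },
        tune_config=tune.TuneConfig(
            metric="loss", mode="min", num_samples=50, search_alg=HyperOptSearch()
        ),
    )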

10x your productivity


Start using Ray Tune by changing just a couple of lines of code. Enjoy simpler code, automatic checkpointing, and integrations with tools like MLflow and TensorBoard.
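
A sketch of the MLflow integration (module paths have shifted between Ray releases; the ones below assume Ray 2.x, where the callback lives in ray.air.integrations.mlflow). TensorBoard needs no extra setup, since Tune writes TensorBoard-compatible logs to the trial directories by default.

    from ray import tune
    from ray.air import RunConfig
    from ray.air.integrations.mlflow import MLflowLoggerCallback

    tuner = tune.Tuner(
        trainable,  # the function trainable sketched under Key Concepts
        param_space=search_space,
        run_config=RunConfig(
            name="tune_mlflow_demo",
            # Logs each trial's parameters and metrics to an MLflow experiment.
            callbacks=[MLflowLoggerCallback(experiment_name="tune_mlflow_demo")],
        ),
    )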

Hooks into the Ray ecosystem


Use Ray Tune on its own, or combine it with other Ray libraries such as XGBoost-Ray and RLlib.
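
A sketch of the RLlib combination (requires ray[rllib]): registered algorithm names such as "PPO" can be passed to the Tuner directly, and the param_space doubles as the algorithm configuration. The reported metric name depends on the RLlib version; older releases use episode_reward_mean.

    from ray import tune

    tuner = tune.Tuner(
        "PPO",  # RLlib registers its algorithms as named trainables
        param_space={
            "env": "CartPole-v1",
            "lr": tune.loguniform(1e-5, 1e-3),
        },
        tune_config=tune.TuneConfig(
            metric="episode_reward_mean", mode="max", num_samples=4
        ),
    )
    results = tuner.fit()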