Ray 1.12 is here! This release includes Ray AI Runtime (alpha), a new unified experience for Ray libraries. In addition, we’ve implemented a lightweight usage statistics and data collection mechanism in Ray (turned off by default).
Want to tune hyperparameters more quickly without compromising quality? In this article, we’ll demonstrate how to use the Ray Tune library to distribute hyperparameter tuning across multiple machines.
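As a rough sketch of what that looks like with the Ray 1.x `tune.run` API (the objective function and search space here are made up for illustration), pointing `ray.init` at an existing cluster is enough to spread trials across machines:

```python
import ray
from ray import tune

# Connect to an existing Ray cluster; drop the address argument to run locally.
ray.init(address="auto")

def objective(config):
    # Toy objective standing in for a real training run.
    score = (config["x"] - 2) ** 2 + config["y"]
    tune.report(score=score)

analysis = tune.run(
    objective,
    config={
        "x": tune.uniform(-5, 5),
        "y": tune.loguniform(1e-3, 1e-1),
    },
    num_samples=50,  # trials are scheduled across every machine in the cluster
)
print(analysis.get_best_config(metric="score", mode="min"))
```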
In this hands-on article, we’ll explain how to tune XGBoost hyperparameters. You only need to know some Python to follow along, and we’ll show you how to deploy machine learning models and then optimize their performance.
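A minimal sketch of that workflow, assuming the Ray 1.x Tune API and using scikit-learn’s built-in breast cancer dataset as a stand-in for real data:

```python
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from ray import tune

def train_xgb(config):
    data, labels = load_breast_cancer(return_X_y=True)
    x_tr, x_va, y_tr, y_va = train_test_split(data, labels, test_size=0.25)
    results = {}
    xgb.train(
        config,
        xgb.DMatrix(x_tr, label=y_tr),
        evals=[(xgb.DMatrix(x_va, label=y_va), "eval")],
        evals_result=results,
        verbose_eval=False,
    )
    # Report the final validation error of each trial back to Tune.
    tune.report(error=results["eval"]["error"][-1])

tune.run(
    train_xgb,
    config={
        "objective": "binary:logistic",
        "eval_metric": ["logloss", "error"],
        "max_depth": tune.randint(2, 10),
        "eta": tune.loguniform(1e-4, 3e-1),
    },
    num_samples=20,
)
```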
Introducing Ray Train, an easy-to-use library for distributed deep learning. In this post, we show how Ray Train improves developer velocity, is production-ready, and comes with batteries included.
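For flavor, here is a minimal sketch assuming the `ray.train.Trainer` API that shipped in this era; the training function is a placeholder, and a real one would build a model and run the training loop:

```python
from ray.train import Trainer

def train_func():
    # Placeholder: real code would construct a model, wrap it for
    # distributed data-parallel training, and iterate over the data here.
    return "done"

trainer = Trainer(backend="torch", num_workers=2, use_gpu=False)
trainer.start()
results = trainer.run(train_func)  # one return value per worker
trainer.shutdown()
print(results)
```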
FLAML is a lightweight Python library from Microsoft Research that finds accurate machine learning models efficiently and economically, using cutting-edge algorithms designed to be resource-efficient and easily parallelizable. FLAML can also uti...
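To give a feel for the library, here is a small sketch using FLAML’s `AutoML` interface on a toy scikit-learn dataset (the time budget and estimator list are arbitrary choices):

```python
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

automl = AutoML()
automl.fit(
    X, y,
    task="classification",
    time_budget=60,                      # seconds to spend on the search
    estimator_list=["lgbm", "xgboost"],  # optionally restrict the model families
)
print(automl.best_estimator, automl.best_config)
```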
Ray Lightning is a new plugin that makes running multi-node GPU training with PyTorch Lightning fast and easy.
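A minimal sketch, assuming the plugin’s original `RayPlugin` class (renamed `RayStrategy` in later releases) and a throwaway regression module:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from ray_lightning import RayPlugin  # `RayStrategy` in newer releases

class LitRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=1e-2)

data = TensorDataset(torch.randn(256, 8), torch.randn(256, 1))
loader = DataLoader(data, batch_size=32)

# Distribute training over 4 Ray workers; flip use_gpu=True on a GPU cluster.
trainer = pl.Trainer(max_epochs=2, plugins=[RayPlugin(num_workers=4, use_gpu=False)])
trainer.fit(LitRegressor(), loader)
```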
XGBoost-Ray is a novel backend for distributed XGBoost training. It features multi-node and multi-GPU training, distributed data loading, advanced fault tolerance such as elastic training, and seamless integration with hyperparameter optimization f...
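A short sketch of the drop-in training API, following the pattern the library documents (the dataset and actor counts here are placeholders):

```python
from xgboost_ray import RayDMatrix, RayParams, train
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
train_set = RayDMatrix(X, y)

evals_result = {}
booster = train(
    {"objective": "binary:logistic", "eval_metric": ["logloss", "error"]},
    train_set,
    evals_result=evals_result,
    evals=[(train_set, "train")],
    ray_params=RayParams(num_actors=4, cpus_per_actor=2),  # 4 distributed workers
)
booster.save_model("model.xgb")
```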
In Ray 1.2.0, we’ve added a library of “collective communication primitives” to Ray. These primitives can be used in your Ray program to simplify simultaneous information exchange across many distributed processes, speeding up certain dis...
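As a rough sketch of the idea, assuming the `ray.util.collective` API with a CPU-friendly gloo backend (the official examples use NCCL with GPU tensors, so exact backend and array support may differ):

```python
import numpy as np
import ray
import ray.util.collective as col

@ray.remote
class Worker:
    def __init__(self):
        self.buffer = np.ones((4,), dtype=np.float32)

    def setup(self, world_size, rank):
        # Join a named collective group so workers can communicate directly.
        col.init_collective_group(world_size, rank, backend="gloo", group_name="default")

    def compute(self):
        # Sum the buffers of all workers; every worker ends up with the result.
        col.allreduce(self.buffer, group_name="default")
        return self.buffer

ray.init()
workers = [Worker.remote() for _ in range(2)]
ray.get([w.setup.remote(2, i) for i, w in enumerate(workers)])
print(ray.get([w.compute.remote() for w in workers]))  # each buffer becomes [2. 2. 2. 2.]
```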
Ray is a popular framework for distributed Python that can be paired with PyTorch to rapidly scale machine learning applications.
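For instance, a few lines are enough to fan PyTorch work out across Ray workers (the model and data here are throwaway examples):

```python
import ray
import torch

ray.init()

@ray.remote
def evaluate(seed):
    # Each task runs in its own Ray worker process, in parallel with the others.
    torch.manual_seed(seed)
    model = torch.nn.Linear(16, 1)
    x = torch.randn(128, 16)
    return float(model(x).mean())

# Scales from the CPUs on one laptop to every node in a cluster.
print(ray.get([evaluate.remote(seed) for seed in range(8)]))
```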
Hydra, from Facebook AI, is a framework for elegantly configuring complex applications. Since its initial release, Hydra has become a popular framework adopted by researchers and practitioners. We are happy to announce that users can now scale and la...
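A rough sketch of how that looks, assuming the hydra-ray-launcher plugin is installed and that the `hydra/launcher=ray` override selects it (the application and the `lr` parameter are made up for illustration):

```python
# my_app.py -- an ordinary Hydra application. With the hydra-ray-launcher
# plugin installed, a sweep can be sent to Ray from the command line, e.g.:
#   python my_app.py --multirun hydra/launcher=ray +lr=0.001,0.01,0.1
import hydra
from omegaconf import DictConfig

@hydra.main(config_path=None)
def my_app(cfg: DictConfig) -> None:
    print(f"Training with config: {cfg}")

if __name__ == "__main__":
    my_app()
```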