Posts by Richard Liaw

04 . 14 . 2022

Ray 1.12: Ray AI Runtime (alpha), usage data collection, and more

Ray 1.12 is here! This release includes Ray AI Runtime (alpha), a new unified experience for Ray libraries. In addition, we’ve implemented a lightweight usage statistics collection mechanism in Ray (turned off by default).

02 . 15 . 2022

How to distribute hyperparameter tuning using Ray Tune

Want to tune hyperparameters more quickly without compromising quality? In this article, we’ll demonstrate how to use the Ray Tune library to distribute hyperparameter tuning across several machines.
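To give a flavor of the approach, here is a minimal sketch of distributing a search with Ray Tune; the objective function and search space are made up for illustration.

```python
from ray import tune

def objective(config):
    # Stand-in for a real training loop; report a score back to Tune.
    score = (config["lr"] - 0.01) ** 2 + config["momentum"]
    tune.report(score=score)

analysis = tune.run(
    objective,
    config={
        "lr": tune.loguniform(1e-4, 1e-1),
        "momentum": tune.uniform(0.1, 0.9),
    },
    num_samples=20,  # trials are scheduled across the Ray cluster
)
print(analysis.get_best_config(metric="score", mode="min"))
```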

02 . 09 . 2022

How to tune hyperparameters on XGBoost

In this hands-on article, we’ll explain how to tune hyperparameters on XGBoost. You just need to know some Python to follow along, and we’ll show you how to easily deploy machine learning models and then optimize their performance.
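For a taste of what the walkthrough covers, here is a small, self-contained sketch of training XGBoost with a handful of hand-picked hyperparameters; the values are illustrative, not recommendations.

```python
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

params = {
    "objective": "binary:logistic",
    "max_depth": 4,    # tree depth controls model complexity
    "eta": 0.1,        # learning rate
    "subsample": 0.8,
}
booster = xgb.train(
    params,
    dtrain,
    num_boost_round=100,
    evals=[(dtest, "eval")],
    early_stopping_rounds=10,
)
```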

01 . 25 . 2022

Distributed deep learning with Ray Train is now in Beta

Introducing Ray Train, an easy-to-use library for distributed deep learning. In this post, we show how Ray Train improves developer velocity, is production-ready, and comes with batteries included.
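As a rough sketch of the beta-era Trainer API (the training function below is a placeholder, not the post’s example):

```python
from ray.train import Trainer

def train_func():
    # Each distributed worker runs this function; a real version would build
    # a model, wrap it for distributed training, and run an optimization loop.
    return "done"

trainer = Trainer(backend="torch", num_workers=2)
trainer.start()
results = trainer.run(train_func)  # returns one result per worker
trainer.shutdown()
```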

08 . 24 . 2021

Fast AutoML with FLAML + Ray Tune

FLAML is a lightweight Python library from Microsoft Research that finds accurate machine learning models in an efficient and economical way, using cutting-edge algorithms designed to be resource-efficient and easily parallelizable. FLAML can also utilize Ray Tune to distribute its search.
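A minimal sketch of FLAML’s AutoML interface on a toy dataset (the time budget and estimator list are arbitrary choices for illustration):

```python
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

automl = AutoML()
automl.fit(
    X_train=X,
    y_train=y,
    task="classification",
    time_budget=60,              # seconds to spend searching
    estimator_list=["xgboost"],  # restrict the search to XGBoost models
)
print(automl.best_config)
```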

08 . 19 . 2021

Introducing Ray Lightning: Multi-node PyTorch Lightning training made easy

Ray Lightning is a new plugin that makes running multi-node GPU training with PyTorch Lightning fast and easy.
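A short sketch of the plugin pattern, assuming the RayPlugin class from the initial ray_lightning release; the toy model and data are placeholders:

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset
from ray_lightning import RayPlugin  # plugin class name as of the initial release

class ToyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

data = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=16)

# The only Ray-specific change: hand the Trainer a RayPlugin.
plugin = RayPlugin(num_workers=2, use_gpu=False)
trainer = pl.Trainer(max_epochs=1, plugins=[plugin])
trainer.fit(ToyModel(), data)
```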

06 . 16 . 2021

Introducing Distributed XGBoost Training with Ray

XGBoost-Ray is a novel backend for distributed XGBoost training. It features multi-node and multi-GPU training, distributed data loading, advanced fault tolerance such as elastic training, and seamless integration with hyperparameter optimization frameworks such as Ray Tune.
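A compact sketch of the xgboost_ray training API on a toy dataset; the parameters and actor count are illustrative:

```python
from sklearn.datasets import load_breast_cancer
from xgboost_ray import RayDMatrix, RayParams, train

X, y = load_breast_cancer(return_X_y=True)
train_set = RayDMatrix(X, y)  # data is sharded across the training actors

booster = train(
    {"objective": "binary:logistic", "eval_metric": ["logloss"]},
    train_set,
    num_boost_round=50,
    ray_params=RayParams(num_actors=4, cpus_per_actor=1),
)
booster.save_model("model.xgb")
```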

05 . 28 . 2021

Introducing Collective Communication Primitive APIs in Ray

In Ray 1.2.0, we’ve added a library of collective communication primitives to Ray. These primitives can be used in your Ray program to simplify the exchange of information across many distributed processes at the same time, speeding up certain distributed workloads.
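A rough sketch of the pattern, assuming the ray.util.collective module; the group name, backend choice, and use of NumPy arrays are assumptions made for illustration:

```python
import numpy as np
import ray
import ray.util.collective as col

@ray.remote
class Worker:
    def __init__(self, rank):
        self.rank = rank
        self.buf = np.ones(4, dtype=np.float32) * (rank + 1)

    def setup(self, world_size):
        # Join a named collective group together with the other workers.
        col.init_collective_group(world_size, self.rank,
                                  backend="gloo", group_name="default")

    def do_allreduce(self):
        col.allreduce(self.buf, group_name="default")  # in-place sum across the group
        return self.buf

ray.init()
workers = [Worker.remote(rank) for rank in range(2)]
ray.get([w.setup.remote(2) for w in workers])
print(ray.get([w.do_allreduce.remote() for w in workers]))
```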

03 . 02 . 2021

Getting Started with Distributed Machine Learning with PyTorch and Ray

Ray is a popular framework for distributed Python that can be paired with PyTorch to rapidly scale machine learning applications.
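A tiny sketch of the pairing: plain Ray tasks running a toy PyTorch model in parallel (the model and inputs are placeholders):

```python
import ray
import torch

ray.init()

@ray.remote
def score(batch):
    model = torch.nn.Linear(8, 1)  # stand-in for a real trained model
    with torch.no_grad():
        return model(batch).sum().item()

batches = [torch.randn(16, 8) for _ in range(4)]
print(ray.get([score.remote(b) for b in batches]))  # the 4 batches run in parallel
```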

01 . 26 . 2021

Configuring and Scaling ML with Hydra + Ray

Hydra, from Facebook AI, is a framework for elegantly configuring complex applications. Since its initial release, Hydra has become a popular framework adopted by researchers and practitioners. We are happy to announce that users can now scale and launch their Hydra applications with Ray.
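A compact sketch of the Hydra side of the pattern; the config file path and its fields (num_workers, lr) are hypothetical:

```python
import hydra
from omegaconf import DictConfig
import ray

@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # conf/config.yaml might contain, for example:
    #   num_workers: 4
    #   lr: 0.01
    ray.init()
    print(f"launching {cfg.num_workers} workers with lr={cfg.lr}")

if __name__ == "__main__":
    main()
```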
