Posts by Chandler Gibbons

04.26.2022

What is distributed training?

In this article, we’ll introduce distributed training and explain how it works by parallelizing the workload across multiple processors (data parallelism or model parallelism). Then, we’ll discuss how to choose among distributed machine learning tools.
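To make the data-parallelism idea concrete, here is a minimal, framework-free sketch (all names are illustrative, and the "workers" run serially for clarity): each worker computes a gradient on its own shard of the batch, the gradients are averaged, and one update is applied to the shared weights.

```python
# Toy model: fit y = w * x by minimizing mean squared error.
# Data parallelism: split the batch into shards, compute a gradient per
# shard, average the gradients (the "all-reduce" step), then update w once.

def grad_on_shard(w, shard):
    # Gradient of 0.5 * (w*x - y)^2 w.r.t. w, averaged over the shard.
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, num_workers=2, lr=0.1):
    # Split the batch into one shard per worker.
    shards = [batch[i::num_workers] for i in range(num_workers)]
    # Each worker computes its local gradient independently.
    grads = [grad_on_shard(w, s) for s in shards]
    # Average the gradients across workers and apply a single update.
    return w - lr * sum(grads) / len(grads)

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # y = 2x
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, batch)
print(round(w, 2))  # converges toward 2.0
```

In a real framework the shards would live on different GPUs or machines and the averaging would be a collective communication step, but the arithmetic is the same: because the per-shard gradients are averaged, the update is equivalent to training on the full batch on a single processor.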

04.14.2022

Ray 1.12: Ray AI Runtime (alpha), usage data collection, and more

Ray 1.12 is here! This release includes Ray AI Runtime (alpha), a new unified experience for Ray libraries. In addition, we’ve implemented a lightweight usage statistics and data collection mechanism in Ray (turned off by default).

03.23.2022

Ray Summit 2022 Call for Papers is now open

Ray Summit, the annual Ray user conference, is back this August, and the Call for Papers is open until April 18! In this blog post, we’ll give you all the information you need to prepare a stellar talk proposal.

03.22.2022

5 reasons to attend this month’s Production RL Summit

We’re holding the first-ever Production RL Summit on March 29. Read more about our featured speakers and sessions and learn why you should join if you’re a reinforcement learning practitioner — or even if you have just a passing interest in RL.

03.09.2022

Ray 1.11: Redisless Ray, a docs redesign, and Python 3.9 support

Ray 1.11 is here! Ray no longer starts Redis by default, opening up the possibility for better support of fault tolerance and high availability in future releases. Plus, there’s a new, more intuitive design for the Ray docs.

03.02.2022

Deploying XGBoost models with Ray Serve

In this article, we’ll cover how to deploy XGBoost with two frameworks: Flask and Ray Serve. We’ll also highlight the advantages of Ray Serve over other serving solutions when comparing models in production.

02.23.2022

Serving PyTorch models with FastAPI and Ray Serve

In this article, we’ll highlight the options available for taking a PyTorch model into production and deploying it with several frameworks, such as TorchServe, Flask, and FastAPI.

02.17.2022

Three ways to speed up XGBoost model training

In a previous blog post, we covered the advantages and disadvantages of several approaches for speeding up XGBoost model training. In this article, we’ll dive into three different approaches, with code snippets so you can follow along.

02.07.2022

Ray 1.10: Windows support beta, enhanced job submission, and more

Ray 1.10 is here! The highlights include Windows support — now in beta — and enhancements to Ray job submission, including log streaming and custom headers for authentication.