All Posts

05.05.2022

Handling files and packages on your cluster with Ray runtime environments

When developing software on a distributed cluster, managing files and packages can be tricky, especially when these dependencies are constantly changing. In this post, see how Ray runtime environments simplify this process.
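
A runtime environment in Ray is described as a plain dictionary passed to `ray.init()`. A minimal sketch, where the `./project` directory, the pinned packages, and the `MODE` variable are all hypothetical placeholders:

```python
# Hypothetical sketch: declare the files and packages each Ray worker needs.
# "working_dir" is uploaded to the cluster; "pip" packages are installed in
# an isolated environment on every node that runs the job.
runtime_env = {
    "working_dir": "./project",        # hypothetical local code directory
    "pip": ["requests", "pendulum==2.1.2"],
    "env_vars": {"MODE": "staging"},   # hypothetical environment variable
}

# Attach the environment to the whole job:
#   import ray
#   ray.init(runtime_env=runtime_env)
```

Because the environment travels with the job, workers no longer depend on packages being pre-baked into the cluster image.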

05.03.2022

Riot Games and deep reinforcement learning in gaming

Learn how Riot Games is doing more than just training an agent to play a game — they're using deep RL to create a better user experience for gamers.

05.03.2022

Reinforcement learning based on market simulation at JPMorgan

Learn how JPMorgan is using RL with market simulation to model complex economic systems and learn policies efficiently.

05.03.2022

Smart supply chain management with reinforcement learning at Dow

Find out how Dow is creating reinforcement learning-based agents for production scheduling to enable better and faster decision making that positively impacts customers, financial performance, and shareholder value.

05.03.2022

Siemens brings reinforcement learning to energy, transportation and logistics

Learn how Siemens Technology has applied reinforcement learning in real-world use cases.

04.26.2022

What is distributed training?

In this article, we’ll introduce distributed training and explain how it works by parallelizing the workload across multiple processors (data parallelism or model parallelism). Then, we’ll discuss how to choose among distributed machine learning tools.
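
To make the data-parallelism idea concrete: each worker computes a gradient on its own shard of the batch, the gradients are averaged (an "all-reduce"), and the shared model is updated once. A toy sketch in plain Python, with no framework; all function names here are illustrative, not part of any library:

```python
# Toy data parallelism: fit y = w * x by least squares, with the batch
# split across simulated workers.

def grad_on_shard(w, shard):
    # dL/dw for L = mean((w*x - y)^2) over this worker's shard
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, num_workers, lr=0.01):
    # Each "worker" sees a disjoint shard of the batch.
    shards = [batch[i::num_workers] for i in range(num_workers)]
    grads = [grad_on_shard(w, s) for s in shards if s]
    avg_grad = sum(grads) / len(grads)   # the "all-reduce" step
    return w - lr * avg_grad             # one synchronized update

batch = [(x, 3.0 * x) for x in range(1, 9)]  # ground truth: w = 3
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, batch, num_workers=4)
```

Model parallelism, by contrast, splits the model itself (layers or parameter shards) across devices rather than splitting the data.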

04.19.2022

New in KubeRay 0.2.0: Autoscaling (alpha), simplified installation, and more

KubeRay is a popular toolkit for managing Ray clusters on Kubernetes. The v0.2.0 release introduces several important enhancements, including autoscaling support (alpha), gRPC service and CLI for easy integration, and simplified installation.

04.14.2022

Ray 1.12: Ray AI Runtime (alpha), usage data collection, and more

Ray 1.12 is here! This release includes Ray AI Runtime (alpha), a new unified experience for Ray libraries. In addition, we’ve implemented a lightweight usage statistics and data collection mechanism in Ray (turned off by default).

04.06.2022

Biggest takeaways from our RL tutorial: Long-term rewards, offline RL, and more

At the RL Summit, Sven Mika, lead maintainer for Ray RLlib, led a hands-on tutorial on RL and RLlib for recommender systems. In this blog post, we share a quick summary of the tutorial, along with a few big takeaways from the session.

03.24.2022

Large-scale distributed training with TorchX and Ray

We're excited to introduce a new Ray Scheduler for TorchX — a joint effort from the PyTorch and Anyscale teams that allows developers to run scalable and distributed PyTorch workloads without setting up infrastructure or changing training scripts.