When developing software on a distributed cluster, managing files and packages can be tricky, especially when these dependencies are constantly changing. In this post, see how Ray runtime environments simplify this process.
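As a quick illustration, here is a minimal sketch of declaring a runtime environment when connecting to Ray; the working directory and package versions below are placeholders for illustration, not taken from the post.

```python
import ray

# A minimal sketch: declare the job's code directory and Python dependencies
# up front instead of baking them into the cluster image. The paths and
# package versions below are illustrative placeholders.
runtime_env = {
    "working_dir": "./my_project",          # shipped to every worker
    "pip": ["requests==2.27.1", "pandas"],  # installed per job on each worker
}

ray.init(runtime_env=runtime_env)

@ray.remote
def check_import():
    # Workers see the same dependencies without any manual cluster setup.
    import pandas
    return pandas.__version__

print(ray.get(check_import.remote()))
```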
Learn how Riot Games is doing more than just training an agent to play a game — they're using deep RL to create a better user experience for gamers.
Learn how JPMorgan is using RL to model complex economic systems and to learn policies efficiently.
Find out how Dow is creating reinforcement learning-based agents for production scheduling to enable better and faster decision making that positively impacts customers, financial performance, and shareholder value.
Learn how Siemens Technology has applied reinforcement learning in real-world use cases.
In this article, we’ll introduce distributed training and explain how it works by parallelizing the workload across multiple processors, using either data parallelism or model parallelism. Then, we’ll discuss how to choose among distributed machine learning tools.
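To make the data-parallel idea concrete, here is a toy sketch in plain Python/NumPy, not tied to any particular framework: each worker computes gradients on its own shard of the batch, and the gradients are averaged before the shared model is updated.

```python
import numpy as np

# Toy data-parallel step for a linear model y = X @ w (illustrative only).
# Each "worker" holds a shard of the batch and computes a local gradient;
# averaging the gradients is the core idea behind data parallelism.
def local_gradient(X_shard, y_shard, w):
    preds = X_shard @ w
    return 2 * X_shard.T @ (preds - y_shard) / len(y_shard)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(128, 4)), rng.normal(size=128)
w = np.zeros(4)

num_workers, lr = 4, 0.1
X_shards = np.array_split(X, num_workers)
y_shards = np.array_split(y, num_workers)

for _ in range(100):
    grads = [local_gradient(Xs, ys, w) for Xs, ys in zip(X_shards, y_shards)]
    w -= lr * np.mean(grads, axis=0)   # all-reduce (average), then update
```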
KubeRay is a popular toolkit for managing Ray clusters on Kubernetes. The v0.2.0 release introduces several important enhancements, including autoscaling support (alpha), gRPC service and CLI for easy integration, and simplified installation.
Ray 1.12 is here! This release includes Ray AI Runtime (alpha), a new unified experience for Ray libraries. In addition, we’ve implemented a lightweight usage statistics and data collection mechanism in Ray (turned off by default).
At the RL Summit, Sven Mika, lead maintainer for Ray RLlib, led a hands-on tutorial on RL and RLlib for recommender systems. In this blog post, we share a quick summary of the tutorial, along with a few big takeaways from the session.
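For readers new to RLlib, here is a minimal sketch of the Ray 1.x-style training loop. CartPole stands in for the recommender environment covered in the tutorial; this is not the session's code, just an assumed generic starting point.

```python
import ray
from ray.rllib.agents.ppo import PPOTrainer

# Minimal RLlib training loop (Ray 1.x-style API). CartPole is a stand-in
# environment; the tutorial itself targeted recommender-system setups.
ray.init()
trainer = PPOTrainer(config={"env": "CartPole-v0", "framework": "torch", "num_workers": 2})

for i in range(5):
    result = trainer.train()
    print(i, result["episode_reward_mean"])
```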
We're excited to introduce a new Ray Scheduler for TorchX — a joint effort from the PyTorch and Anyscale teams that allows developers to run scalable and distributed PyTorch workloads without setting up infrastructure or changing training scripts.