Ray version 1.9 has been released! Release highlights include:
- Ray Train is now in beta!
- Ray Docker images for multiple CUDA versions are now provided!
- We are making continued progress in improving Ray stability and usability on Windows. We encourage you to try it out and report feedback or issues at https://github.com/ray-project/ray/issues.
- We are launching a Ray Job Submission server plus CLI and SDK clients to make it easier to submit and monitor Ray applications.
You can run `pip install -U ray` to access these features and more. With that, let's go over the highlights.
Ray Train is now in beta! The beta version includes various usability improvements for distributed PyTorch training and checkpoint management, support for Ray Client, and an integration with Ray Datasets for distributed data ingest.
Ray Datasets now supports groupby and aggregations! This includes multi-column and multi-lambda aggregations, making it easy to compute several aggregations across multiple columns in a single pass.
Ray Docker images for multiple CUDA versions are now provided! You can specify a `-cuXXX` suffix to pick a specific version. Note that `ray-ml:cpu` images are now deprecated and that the `ray-ml` images are now only built for GPU.
Check out PR #19505 to learn more.
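Pulling a CUDA-specific image might look like this (the exact tags below are illustrative; check the `rayproject` repositories on Docker Hub for the published tags):

```shell
# Pull the default GPU image (tag is illustrative).
docker pull rayproject/ray-ml:1.9.0-gpu

# Pull an image built against a specific CUDA version via the -cuXXX suffix
# (hypothetical example for CUDA 11.2).
docker pull rayproject/ray:1.9.0-cu112
```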
We are launching a Ray Job Submission server plus CLI and SDK clients. The goal of Ray Job Submission is to provide a lightweight mechanism for users to submit a locally developed and tested application to a running remote Ray cluster. This lets users package, deploy, and manage their Ray applications as jobs, submitted through the job manager of their choice.
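A submission from the CLI might look like the following sketch (the address, script name, and exact flags are illustrative; they assume a running cluster with the job server enabled):

```shell
# Point the CLI at a running cluster's job server (address is illustrative).
export RAY_ADDRESS="http://127.0.0.1:8265"

# Submit a locally developed script as a job; everything after -- is the
# job's entrypoint command.
ray job submit -- python my_script.py
```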
This post sums up just some of the release highlights. To learn about all the features and enhancements in this release, including continued progress in improving Ray stability and usability on Windows, visit the release notes. If you would like to keep up to date with all things Ray, follow @raydistributed on Twitter and sign up for the Ray newsletter.