Ray version 1.8 has been released

By Michael Galarnyk   

Ray version 1.8 has been released! Release highlights include: 

  1. Ray SGD has been renamed to Ray Train.

  2. Ray Datasets is now in beta! The beta release includes a new integration with Ray Train yielding scalable ML ingest for distributed training. 

  3. Experimental support for Ray on Apple Silicon (M1 Macs).

You can run pip install -U ray to access these features and more. With that, let’s go over the highlights.

Ray SGD has been renamed to Ray Train

The distributed deep learning library Ray SGD has been rebranded as Ray Train! Ray Train, like its predecessor, allows you to easily scale your model training with PyTorch, TensorFlow, or Horovod. The new documentation landing page can be found here. Also in this release, we’ve developed a tight integration with Ray Datasets for distributed data loading while training. See the section below for more info on this integration!
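To make the rename concrete, here is a minimal sketch of a Ray Train job using the ray.train.Trainer API; it assumes PyTorch is installed (for the "torch" backend), and the toy train_func below is a placeholder rather than a real training loop.

```python
import ray
from ray import train
from ray.train import Trainer

def train_func():
    # Each of the num_workers distributed workers runs this function.
    # A real job would build and train a model here; we just report a fake metric.
    for epoch in range(3):
        train.report(epoch=epoch, loss=1.0 / (epoch + 1))

ray.init()
trainer = Trainer(backend="torch", num_workers=2)
trainer.start()
results = trainer.run(train_func)  # one result entry per worker
trainer.shutdown()
```

The same Trainer interface is meant to cover TensorFlow and Horovod as well, by selecting the corresponding backend.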

Ray Datasets is now in beta

Ray Datasets is now in beta! The beta release includes a new integration with Ray Train yielding scalable ML ingest for distributed training. This integration is meant to support use cases like: 

  • Large Datasets: With Ray Datasets, you can easily work with datasets that are too big to fit on a single node. Ray Datasets will distribute the dataset across the Ray Cluster and allow you to perform dataset operations (map, filter, etc.) on the distributed dataset, as shown in the sketch after this list.

  • Automatic locality-aware sharding: If provided a Ray Dataset, Ray Train will automatically shard the dataset and assign each shard to a training worker while minimizing cross-node data transfer. Unlike with standard Torch or TensorFlow datasets, each training worker will only load its assigned shard into memory rather than the entire Dataset.

  • Pipelined Execution: Ray Datasets also supports pipelining, meaning that data processing operations can be run concurrently with training. Training is no longer blocked on expensive data processing operations (such as global shuffling), which minimizes the amount of time your GPUs are idle. See Dataset Pipelines for more information.
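The sketch below illustrates the first point: creating a Dataset and applying distributed operations to it. It uses ray.data.range as a stand-in for a real data source such as a Parquet read, and the lambdas are placeholder transformations.

```python
import ray

ray.init()

# Stand-in for a real source such as ray.data.read_parquet("s3://...");
# the data is split into blocks spread across the Ray cluster.
ds = ray.data.range(100_000)

# These operations execute in parallel over the distributed blocks.
ds = ds.map(lambda x: x * 2)
ds = ds.filter(lambda x: x % 4 == 0)

print(ds.count())
ds.show(5)
```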

Check out the docs here, try it out for your ML ingest and batch inference workloads, and let us know how it goes!
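To make the locality-aware sharding described above concrete, here is a hedged sketch of passing a Ray Dataset to Ray Train. The dataset, worker count, and batch size are illustrative; each worker pulls batches only from its own shard via train.get_dataset_shard.

```python
import ray
from ray import train
from ray.train import Trainer

def train_func():
    # Ray Train hands each worker its locality-aware shard of the Dataset.
    shard = train.get_dataset_shard()
    for epoch in range(2):
        for batch in shard.iter_batches(batch_size=1024):
            pass  # feed the batch to your model here

ray.init()
dataset = ray.data.range(100_000).map(lambda x: x * 2)

trainer = Trainer(backend="torch", num_workers=4)
trainer.start()
trainer.run(train_func, dataset=dataset)
trainer.shutdown()
```

For pipelined execution, the Dataset Pipelines documentation covers how to turn a Dataset into a pipeline so that preprocessing overlaps with training instead of blocking it.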

Experimental support for Ray on Apple Silicon (M1 Macs)

Apple is transitioning its laptops and desktops from Intel chips to its own M1 chip. The M1 chip promises to be faster because it is optimized for Mac systems in which small size and power efficiency are critically important. From a CPU architecture perspective, the M1 has a couple of significant differences from previous Intel CPUs:

  • The CPU architecture went from x86 to ARM

  • The on-chip GPU is now made by Apple

  • Custom Apple accelerators are now available such as the Apple Neural Engine (ANE)

  • System RAM is shared by the CPU, GPU, and ANE cores

These changes mean that for optimal performance on M1 Macs, software needs to be built to support these chips. In Ray version 1.8, we are adding experimental support for Ray on M1 Macs. To try it out without running into dependency issues, make sure to check out the installation instructions.
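As a quick, hypothetical smoke test after following those instructions (not something from the release notes), you can check that Ray imports and starts natively on the M1:

```python
# Sanity check that the M1 install worked and Python is running natively.
import platform
import ray

ray.init()
print("Ray", ray.__version__, "on", platform.machine())  # expect "arm64" on Apple Silicon
print(ray.cluster_resources())
ray.shutdown()
```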

Learn more

That sums up the release highlights. To learn about all the features and enhancements in this release, visit the release notes. If you would like to keep up to date with all things Ray, follow @raydistributed on Twitter, and sign up for the Ray newsletter.
