Announcing Ray 1.0

By Eric Liang   


Today, we’re happy to announce the release of Ray 1.0. Ray 1.0 brings a stable API and new general purpose serverless features, both important steps towards the goal of providing a universal API for distributed computing. This release has seen 67 contributors and 458 commits, making it among the largest yet for Ray. In addition, 1.0 brings many new community library integrations to the growing Ray ecosystem.

New Features

Ray 1.0 makes it easier than ever to build and compose highly scalable libraries, applications, and services. Here are the highlights:

Resources, Not Machines: Building distributed applications that run portably across different machine types, clusters, and clouds is a challenging task. Ray 1.0 makes this easy with an autoscaler that intelligently selects the best node types for an application’s resource requests. In addition, Ray 1.0 introduces a placement group API for fine-grained control over scheduling.

Production Serving: A general purpose serverless framework hosts both offline batch and online serving workloads. Ray 1.0 ships with Ray Serve, a production microservice and ML serving library. For custom serving applications, Ray 1.0 also introduces detached actor lifetimes, AsyncIO actors, and application-level metrics via Prometheus. Ray serving applications can be deployed on the major cloud providers and on Kubernetes.

Automatic Memory Management: Users of Ray 1.0 can say goodbye to “object evicted” errors, thanks to fully automated memory management. Application performance and memory usage can be debugged in the Ray dashboard. To learn more about how Ray implements distributed reference counting with high performance, reliability, and fault tolerance, check out the new Ray 1.0 whitepaper.

Java and Windows Support: Ray 1.0 brings native support for the Java and Windows platforms. This means that you can now use Ray to build cross-language, distributed Java applications, and install Ray on Windows.

Community Update

Community Integrations: There are a growing number of community libraries that integrate with Ray 1.0 for distributed execution: ClassyVision, Dask, Flambe, Horovod, HuggingFace, Intel Analytics Zoo, MARS, Modin, RayDP, Seldon Alibi, and SpaCy. This means users of these libraries can now scale their applications with Ray, and Ray users can easily leverage these libraries in their distributed applications.

Open Source: At Anyscale, we’re proud to develop Ray along with the open source community. Many key Ray contributions are driven by the community — for example, ongoing projects around high availability, multi-tenancy, and placement groups are led by Ant Group, and improved autoscaler support for different clouds has come from Amazon and Microsoft.

More Information

To learn more about Ray, join us at Ray Summit, which runs from September 30 to October 1. You can also find out more about Ray 1.0 through the Ray Slack or the Documentation.
