Simplest path to scaling Python

Develop on your laptop and then scale the same Python code elastically across hundreds of nodes or GPUs on any cloud — with no changes.

Learning Ray | eBook

Learning Ray is the first and only comprehensive book on Ray and its growing ecosystem.

Download now

Ray Core Tutorial | On Demand

New to Ray? Jump-start your learning with our "Getting Started" guide.

Read now

Ray Summit 2023 | On Demand

Watch all the keynotes and sessions from Ray Summit 2023 on demand.

Watch now

Parallelize Python with minimal code changes

Simple, flexible Python primitives

Ray translates existing Python concepts to the distributed setting, allowing any serial application to be easily parallelized with minimal code changes.
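For example, a single decorator turns an ordinary Python function into a distributed task whose calls return futures and run in parallel. A minimal sketch with a toy function:

```python
import ray

ray.init()  # starts a local Ray runtime (or connects to an existing cluster)

# One decorator turns an ordinary Python function into a distributed task.
@ray.remote
def square(x):
    return x * x

# Calls return futures immediately; tasks run in parallel across the cluster.
futures = [square.remote(i) for i in range(100)]
print(ray.get(futures))  # [0, 1, 4, 9, ...]
```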

Distributed libraries

Easily scale compute-heavy machine learning workloads like deep learning, model serving, and hyperparameter tuning with a strong ecosystem of distributed libraries.

Integrations

Scale existing workloads (e.g., PyTorch) on Ray with minimal effort by tapping into its ecosystem of integrations.
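As an illustrative sketch, a PyTorch training loop can be scaled out with Ray Train's TorchTrainer (assumes Ray 2.x; the worker count is arbitrary and the loop body is a placeholder):

```python
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker(config):
    # Your existing PyTorch training loop goes here; Ray Train sets up the
    # torch.distributed process group and device placement for each worker.
    pass

trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),  # illustrative
)
result = trainer.fit()
```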

Building on top of Ray has allowed us to deliver a state-of-the-art low-code deep learning platform that lets our users focus on obtaining best-in-class machine learning models for their data, not distributed systems and infrastructure.

Travis Addair, CTO, Predibase and Maintainer, Horovod / Ludwig AI

Scalable machine learning libraries

Native Ray libraries — such as Ray Tune and Ray Serve — lower the effort to scale the most compute-intensive machine learning workloads, such as hyperparameter tuning, training deep learning models, and reinforcement learning. For example, get started with distributed hyperparameter tuning in just 10 lines of code.
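A sketch of what that can look like with the Tuner API (assumes Ray 2.x; the objective function and search space are toy placeholders):

```python
from ray import tune

# Toy objective: a function trainable can simply return its final metrics.
def objective(config):
    return {"score": (config["x"] - 3) ** 2}

tuner = tune.Tuner(
    objective,
    param_space={"x": tune.grid_search([0, 1, 2, 3, 4, 5])},
)
results = tuner.fit()  # trials run in parallel across available resources
print(results.get_best_result(metric="score", mode="min").config)
```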

Build and run distributed apps

(no distributed systems expertise needed)

Creating distributed apps is hard. Ray handles all aspects of distributed execution so that developers can create performant distributed apps without building and managing infrastructure or becoming distributed systems experts.

Flawless distributed operations

Ray handles all aspects of distributed execution from scheduling and sequencing to scaling and fault tolerance.

Autoscaling

Ray dynamically provisions new nodes (or removes them) to handle variable workload needs.
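The autoscaler can also be hinted programmatically via the autoscaler SDK; a sketch assuming a running autoscaling cluster (the CPU counts are illustrative):

```python
from ray.autoscaler.sdk import request_resources

# Ask the autoscaler to provision capacity for 64 CPUs ahead of a burst of
# work rather than waiting for tasks to queue (the number is illustrative).
request_resources(num_cpus=64)

# A zero request releases the hint so the cluster can scale back down.
request_resources(num_cpus=0)
```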

Fault-tolerant

Ray gracefully handles machine failures to deliver uninterrupted execution.
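For tasks, this is surfaced directly in the API as automatic retries; a minimal sketch (the retry count shown is illustrative):

```python
import ray

ray.init()

# max_retries asks Ray to transparently re-execute the task if the worker
# process or node running it dies mid-flight (the count is illustrative).
@ray.remote(max_retries=3)
def flaky_step(x):
    return x + 1

print(ray.get(flaky_step.remote(41)))  # 42, even across a worker failure
```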

My team at Afresh trains large time-series forecasters with a massive hyperparameter space. We googled PyTorch hyperparameter tuning and found Ray Lightning. It took me 20 minutes to integrate into my code, and it worked beautifully. I was honestly shocked.

Philip Cerles, Senior Machine Learning Engineer

Scale on any cloud or infrastructure

Public clouds, private data centers, bare metal, Kubernetes clusters — Ray runs anywhere. Or choose Anyscale, and leave the infrastructure to us.
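Concretely, the same script can move from a laptop to a remote cluster by changing only how it connects, for example via Ray Client (the address below is a placeholder):

```python
import ray

# On a laptop: start a local Ray runtime.
ray.init()

# On a cluster: the identical code, pointed at the head node via Ray Client.
# The address below is a placeholder, not a real endpoint.
# ray.init("ray://<head-node-host>:10001")
```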

Trusted by leading AI and machine learning teams

From geospatial anomaly detection to real-time recommendations, explore the stories of teams scaling machine learning on Ray.

Uber logo
Amazon logo
Visa logo
Ant Group logo
Nvidia logo
Dow logo
Intel logo

Get started with Ray

From dedicated Slack channels to in-person meetups, we have the resources you need to get started and be successful with Ray.

Ray logo

Documentation

Reference guides, tutorials, and examples to help you start and advance your Ray journey.

Chat bubbles icon

Discussion Forum

Join the forum to get technical help and share best practices and tips with the Ray community.

Slack icon in black

Slack

Connect with other users and project maintainers on the Ray Slack channel.

Sign up for product updates

By signing up, you agree to receive occasional emails from Anyscale. Your personal data will be processed in accordance with Anyscale’s Privacy Policy.