Develop on your laptop and then scale the same Python code elastically across hundreds of nodes or GPUs on any cloud — with no changes.
Ray translates existing Python concepts to the distributed setting, allowing any serial application to be parallelized with minimal code changes.
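To make that concrete, here is a minimal sketch of Ray Core's task API: an ordinary Python function becomes a distributed task with a single decorator. The square function and the range of inputs are illustrative, not from the original page.

import ray

ray.init()  # starts Ray locally; on a cluster, this connects to it instead

# An ordinary serial function becomes a distributed task
# by adding the @ray.remote decorator.
@ray.remote
def square(x):
    return x * x

# .remote() launches tasks in parallel and returns futures (ObjectRefs).
futures = [square.remote(i) for i in range(100)]

# ray.get() blocks until the distributed results are available.
print(ray.get(futures))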
Easily scale compute-heavy machine learning workloads like deep learning, model serving, and hyperparameter tuning with a strong ecosystem of distributed libraries.
Scale existing workloads (e.g., PyTorch) on Ray with minimal effort by tapping into its ecosystem of integrations, as sketched below.
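As one example, here is a hedged sketch of the PyTorch integration via Ray Train. TorchTrainer and ScalingConfig are real Ray Train classes, but the empty training function is a placeholder for your own loop, and import paths vary slightly across Ray 2.x releases.

from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker():
    # Your existing PyTorch training loop goes here; Ray Train sets up
    # the distributed process group and device placement for each worker.
    ...

# Run the same loop on four workers; flip use_gpu=True to train on GPUs.
trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=4, use_gpu=False),
)
result = trainer.fit()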
Building on top of Ray has allowed us to deliver a state-of-the-art low-code deep learning platform that lets our users focus on obtaining best-in-class machine learning models for their data, not distributed systems and infrastructure.
Native Ray libraries — such as Ray Tune and Ray Serve — lower the effort to scale the most compute-intensive machine learning workloads, such as hyperparameter tuning, training deep learning models, and reinforcement learning. For example, get started with distributed hyperparameter tuning in just 10 lines of code.
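The snippet below is a minimal sketch of that kind of tuning run, modeled on Ray Tune's classic tune.run API; the objective function and the search-space values are illustrative only.

from ray import tune

# A stand-in objective: in practice this would train and evaluate a model.
def trainable(config):
    loss = (config["alpha"] - 0.5) ** 2 + config["beta"] * 0.1
    tune.report(mean_loss=loss)  # report the result back to Tune

# Tune runs one trial per combination drawn from the search space.
analysis = tune.run(
    trainable,
    config={
        "alpha": tune.grid_search([0.001, 0.01, 0.1, 0.5]),
        "beta": tune.choice([1, 2, 3]),
    },
)
print("Best config:", analysis.get_best_config(metric="mean_loss", mode="min"))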
Creating distributed apps is hard. Ray handles all aspects of distributed execution so that developers can create performant distributed apps without building and managing infrastructure or becoming distributed systems experts.
Ray handles all aspects of distributed execution from scheduling and sequencing to scaling and fault tolerance.
Ray dynamically provisions new nodes (or removes them) to handle variable workload needs.
Ray gracefully handles machine failures to deliver uninterrupted execution.
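For instance, here is a small sketch of Ray's fault-tolerance knobs. max_retries for tasks and max_restarts/max_task_retries for actors are real Ray options; the task and actor bodies are contrived for illustration.

import ray

ray.init()

# If the node running this task dies, Ray re-executes it up to 3 times.
@ray.remote(max_retries=3)
def work(x):
    return x + 1

# If this actor's process crashes, Ray restarts it and retries
# its in-flight method calls.
@ray.remote(max_restarts=2, max_task_retries=2)
class Counter:
    def __init__(self):
        self.n = 0

    def incr(self):
        self.n += 1
        return self.n

counter = Counter.remote()
print(ray.get([work.remote(i) for i in range(4)]))
print(ray.get(counter.incr.remote()))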
My team at Afresh trains large time-series forecasters with a massive hyperparameter space. We googled "PyTorch hyperparameter tuning" and found Ray Lightning. It took me 20 minutes to integrate into my code, and it worked beautifully. I was honestly shocked.
Public clouds, private data centers, bare metal, Kubernetes clusters — Ray runs anywhere. Or choose Anyscale, and leave the infrastructure to us.
From detection of geospatial anomalies to real-time recommendation, explore the stories of teams scaling machine learning on Ray.
From dedicated Slack channels to in-person meetups, we have the resources you need to get started and be successful with Ray.
Reference guides, tutorials, and examples to help you start and advance your Ray journey.
Join the forum to get technical help and share best practices and tips with the Ray community.
Connect with other users and project maintainers on the Ray Slack channel.