10 must-attend Ray Summit sessions: Generative AI, scalable ML workloads, and more

By Jules S. Damji and Ben Lorica   

This year’s Ray Summit has it all: developer deep dives, Ray use cases across industries, Ray in production, scalable AI/ML workloads, Generative AI, and more.

The Ray Summit brings together the Ray community to delve into all aspects of AI and beyond on the Ray platform, while providing opportunities to create, learn, and discover. This year's event marks our second in-person gathering, featuring an impressive lineup of keynote speakers, breakout sessions, and lightning talks. 


The program spans developer-focused Ray sessions, Ray applications across different industries, practical deployment strategies for building scalable machine learning platforms, and Generative AI sessions, catering to the diverse needs of AI/ML practitioners.

The full agenda 📅 is posted, and here are just a few of the sessions that Ben and I are looking forward to attending this year:

🗣 Solving Generative AI challenges with Ray: Anyscale senior staff software engineer Jun Gong will focus on Ray's suitability for large-scale generative AI model training and the infrastructure challenges it solves. He will cover recent improvements to the platform that make it easier and more effective for the community to train and serve large generative models. If you want insight into how Ray powers the large language models at the heart of Generative AI, this session is for you.

🗣 How Spotify Built a Robust Ray Platform with a Frictionless Developer Experience: Spotify ML Platform engineers Keshi Dai and David Xia will present how their team improved the developer experience by simplifying access to computational resources and streamlining the coding process. Their presentation will highlight improvements to Spotify's platform in reliability, scalability, performance, and cost-efficiency.

🗣 Forecasting Covid Infections for the UK's National Health Service using Ray and Kubernetes: During the COVID pandemic, the UK's National Health Service (NHS) used AI to address the risk of hospitals running out of capacity. The team re-architected the system with Ray Core and KubeRay to create a highly stable architecture. Software engineer Alex Remedios will give an overview of this integration, which complemented the AWS Karpenter autoscaler to form a powerful platform for Bayesian modeling.

🗣 Scaling Computer Vision Models with Ray: A Cost-Effective and Efficient Distributed Training Framework: David Scott of Instacart will discuss the implementation of a computer vision model using the open-source Ray framework alongside Kubeflow, another popular distributed training framework. He will evaluate the cost-effectiveness, training speed, GPU utilization, and throughput of training the model with both frameworks.

🗣 Ray breaks the $1/TB barrier as the world’s most cost-efficient sorting system: Frank Luan, a PhD student at UC Berkeley, will share how the Sky Computing Lab at UC Berkeley developed Exoshuffle, a new architecture for building high-performance distributed shuffle. Building on this architecture with Ray, the team set a new world record on the CloudSort benchmark, sorting 100TB of data on the public cloud for only $97 worth of cloud resources, or $0.97 per terabyte. This is 33% more cost-efficient than the previous record, set by Apache Spark in 2016, and 15% cheaper when factoring in decreasing hardware costs.
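Those headline figures are easy to sanity-check with a back-of-the-envelope calculation. Only the numbers quoted above are used here; the implied previous-record price is our own inference, not a figure from the talk:

```python
# Sanity-check the CloudSort figures quoted above.
total_cost_usd = 97      # reported cloud spend for the run
data_sorted_tb = 100     # CloudSort benchmark input size

cost_per_tb = total_cost_usd / data_sorted_tb
print(f"Exoshuffle on Ray: ${cost_per_tb:.2f}/TB")      # $0.97/TB

# A 33% improvement implies the previous record cost roughly:
implied_prev = cost_per_tb / (1 - 0.33)
print(f"Implied previous record: ~${implied_prev:.2f}/TB")  # ~$1.45/TB
```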

🗣 From Apache Spark to Ray: An Exabyte-Scale Production Migration Case Study: Patrick Ames, principal software engineer, will discuss a case study about Amazon's successful transition of exabyte-scale data catalog management jobs from Apache Spark to Ray. Patrick will revisit key milestones, challenges faced, and concessions made during the transition, including future plans to incorporate Ray more deeply into critical batch and streaming business intelligence pipelines at Amazon. 

🗣 Serving Large Language Models with KubeRay on TPUs: Google's Richard Liu and Winston Chiang will discuss the growing popularity of large language models (LLMs) and the challenges of serving them, given the massive computational resources they require. They will introduce KubeRay on TPUs as a solution for improving LLM serving performance.

🗣 Heterogeneous Training Cluster with Ray at Netflix: In this talk, ML engineers Pablo Delgado and Lingyi Lui will discuss the growing computational resources required to train complex deep learning models and explore the benefits of using Ray to build a training cluster with a mix of CPU and GPU instances. They’ll cover the steps involved in setting up such a cluster and demonstrate how to run distributed training jobs using Ray's automatic resource allocation and management to schedule different types of workers.
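As a rough illustration of what a mixed cluster can look like, here is a minimal KubeRay RayCluster sketch with separate CPU and GPU worker groups. The group names, replica counts, and resource sizes are hypothetical and not taken from the Netflix talk:

```yaml
# Illustrative RayCluster manifest with heterogeneous worker groups.
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: hetero-training   # hypothetical name
spec:
  headGroupSpec:
    rayStartParams: {}
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:latest
            resources:
              limits: {cpu: "4", memory: 16Gi}
  workerGroupSpecs:
    - groupName: cpu-workers      # data loading / preprocessing
      replicas: 8
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray:latest
              resources:
                limits: {cpu: "16", memory: 64Gi}
    - groupName: gpu-workers      # gradient computation
      replicas: 2
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray-ml:latest
              resources:
                limits: {cpu: "8", memory: 64Gi, nvidia.com/gpu: "1"}
```

With a cluster like this, a Ray task or actor simply declares what it needs (for example, `@ray.remote(num_gpus=1)`), and Ray's scheduler places it on a matching worker automatically.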

🗣 Supercharging self-driving algorithms development with Ray: scaling simulation workloads and democratizing autotuning at Zoox: Zoox's Yunpeng Pan and Ritwik Bera will share how their team created an autotuning platform that speeds up algorithm development through distributed simulation and metrics evaluation. The Zoox ML team used Ray to scale simulation and metrics workloads, enabling developers to improve autonomous driving algorithms without modifying code. Their talk will walk through the autotuning process and demonstrate how Ray helped scale these workloads at Zoox.

🗣 Modernizing DoorDash Model Serving Platform with Ray Serve: The DoorDash ML platform team has implemented Ray for model training and inference, and has built a new model serving platform that prioritizes flexibility and self-service. DoorDash software engineers Siddharth Kodwani and Kornel Csernai will explain how they evaluated various frameworks and found that Ray Serve's user-friendly approach was the best fit for their needs. They will detail their experience of transitioning prediction services from their previous generation to the Ray Serve ecosystem.

That’s just a sample, and by no means our only picks, of what we have in store at this year’s Ray Summit. Over the two days of the conference, you’ll learn how companies such as Amazon, Ant Group, Cruise, DoorDash, IBM, Instacart, Google, Microsoft, Netflix, Spotify, Uber, Verizon, and more are building cutting-edge AI/ML platforms and applications with Ray.

Also, you’ll hear keynotes from Generative AI and ML luminaries such as Aidan Gomez, Co-founder and CEO, Cohere; Albert Greenberg, VP of Engineering, Uber; Brian McClendon, Senior Vice President of Engineering, Niantic, Inc.; Robert Nishihara, Co-founder and CEO, Anyscale; John Schulman, Co-founder, OpenAI; Ion Stoica, Co-founder, Executive Chairman and President, Anyscale, and Professor, UC Berkeley; and Ya Xu, VP of Engineering, Head of Data and AI, LinkedIn.

Finally, at the Raydiate Bar, we’ve planned community activities and a fun-filled happy hour. Register now — early bird registration is open until May 31, 2023.

Next steps

Anyscale's Platform in your Cloud

Get started today with Anyscale's self-service AI/ML platform:

  • Powerful, unified platform for all your AI jobs from training to inference and fine-tuning
  • Powered by Ray. Built by the Ray creators. Ray is the high-performance technology behind many of the most sophisticated AI projects in the world (OpenAI, Uber, Netflix, Spotify)
  • AI App building and experimentation without the Infra and Ops headaches
  • Multi-cloud and on-prem hybrid support