Ray Deep Dives

Scaling Training and Batch Inference: A Deep Dive into AIR's Data Processing Engine

Ray Summit 2022

Are you looking to scale your ML pipeline across multiple machines? Are you hitting an ingest bottleneck that keeps you from saturating your GPUs? This talk covers how Ray AIR uses Ray Datasets for efficient data loading and preprocessing in both training and batch inference, and dives into how Datasets delivers the high performance and scalability behind AIR.

We start with an overview of creating training and batch inference pipelines using Ray AIR. Next, we dive into Ray Datasets internals, detailing features such as distributed data sharding, parallel and distributed I/O and transformations, pipelining of CPU and GPU compute, autoscaling pools of inference workers, and efficient per-epoch shuffling. Finally, we present case studies of users who have deployed such AIR workloads to production and seen the performance and scalability benefits.
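As a concrete illustration of the kind of pipeline discussed in the talk, here is a minimal sketch assuming the Ray 2.x AIR API (XGBoostTrainer, BatchPredictor, ScalingConfig). The dataset path and column names are placeholder examples, not something taken from the talk itself:

import ray
from ray.air.config import ScalingConfig
from ray.train.xgboost import XGBoostTrainer, XGBoostPredictor
from ray.train.batch_predictor import BatchPredictor

# Ray Datasets performs parallel, distributed I/O: the file is read into
# distributed blocks that are sharded across training workers.
# (Illustrative dataset path; substitute your own data.)
dataset = ray.data.read_csv("s3://anonymous@air-example-data/breast_cancer.csv")
train_ds, valid_ds = dataset.train_test_split(test_size=0.3)

# Distributed training: each of the two workers ingests a shard of the dataset.
trainer = XGBoostTrainer(
    scaling_config=ScalingConfig(num_workers=2, use_gpu=False),
    label_column="target",
    params={"objective": "binary:logistic"},
    datasets={"train": train_ds, "valid": valid_ds},
)
result = trainer.fit()

# Batch inference: BatchPredictor maps the trained checkpoint over a Dataset
# using a pool of inference workers.
predictor = BatchPredictor.from_checkpoint(result.checkpoint, XGBoostPredictor)
predictions = predictor.predict(valid_ds.drop_columns(["target"]))
predictions.show(5)

Features such as per-epoch shuffling and CPU/GPU pipelining are typically enabled through the trainer's dataset configuration rather than in user code; see the Ray Datasets documentation linked below for details.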

You can learn more about Ray Datasets here: https://docs.ray.io/en/latest/data/dataset.html

About Clark

Clark Zinzow is a software engineer at Anyscale, working on Ray's dataplane and ML ecosystem. He enjoys working on data-intensive distributed systems and scaling ML infrastructure.

Clark Zinzow

Software Engineer, Anyscale

Ready to Register?

Come connect with the global community of thinkers and disruptors who are building and deploying the next generation of AI and ML applications.

Save your spot

Join the Conversation

Ready to get involved in the Ray community before the conference? Ask a question in the forums. Open a pull request. Or share why you’re excited with the hashtag #RaySummit on Twitter.