Webinar

Scaling Machine Learning Workloads with Ray

Wednesday, November 17, 5:00 PM UTC

Modern machine learning (ML) workloads, such as deep learning and large-scale model training, are compute-intensive and require distributed execution. Ray was created in the UC Berkeley RISELab to make it easy for every engineer to scale their applications and ML workloads without requiring any distributed systems expertise.
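
To give a flavor of how Ray hides that complexity, here is a minimal sketch (not from the webinar) of Ray's core task API: an ordinary Python function becomes a distributed task with a single decorator.

```python
import ray

ray.init()  # start a local Ray runtime (or connect to an existing cluster)

@ray.remote
def square(x):
    # A plain Python function, turned into a distributed task by @ray.remote.
    return x * x

# Each .remote() call schedules a task and immediately returns a future.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # blocks until results are ready -> [0, 1, 4, 9]
```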

Join Jules S. Damji, lead developer advocate at Anyscale, and Antoni Baum, software engineer at Anyscale, for an introduction to Ray for scaling your ML workloads. Learn how Ray libraries (e.g., Ray Tune and Ray Serve) help you easily scale every step of your ML pipeline, from model training and hyperparameter search to inference and production serving.
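
As an illustration of that library layer, the sketch below shows a toy hyperparameter search with Ray Tune, using the Ray 1.x-era API (tune.run and tune.report); the objective function and learning-rate values are invented for the example.

```python
from ray import tune

def trainable(config):
    # Stand-in for a real training loop: score one hyperparameter setting.
    loss = (config["lr"] - 0.1) ** 2
    tune.report(loss=loss)

# Evaluate several learning rates as parallel trials across the cluster.
analysis = tune.run(trainable, config={"lr": tune.grid_search([0.01, 0.1, 1.0])})
print(analysis.get_best_config(metric="loss", mode="min"))
```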

Highlights include: 

  • Ray overview & core concepts

  • Library ecosystem and use cases

  • Demo: Ray for scaling ML workflows

  • Getting started resources

View slides >>>

Webinar Q&A >>>

Speakers

Jules Damji

Lead developer advocate, Anyscale

Jules S. Damji is a lead developer advocate at Anyscale and an MLflow contributor. He is a hands-on developer with more than 20 years of experience and has built large-scale distributed systems at leading companies such as Sun Microsystems, Netscape, @Home, Opsware/Loudcloud, VeriSign, ProQuest, Hortonworks, and Databricks. He holds a B.Sc. and an M.Sc. in Computer Science (from Oregon State University and Cal State Chico, respectively) and an MA in Political Advocacy and Communication (from Johns Hopkins University).

Antoni Baum

Software engineer, Anyscale

Antoni Baum is a software engineer at Anyscale, working on Ray Tune and other ML libraries, and an MSc student in Computer Science & Econometrics. In his spare time, he contributes to various open source projects, working to make machine learning more accessible and approachable.