Modern machine learning workloads are compute-intensive and increasingly require distributed execution. Ray is an open-source, general-purpose distributed computing framework that makes it easy to scale Python applications and ML workloads from a laptop to a cluster.
This talk covers an overview of Ray: its architecture, core concepts, and design patterns. We will demonstrate how Ray scales training, hyperparameter tuning, and inference from a single node to a cluster, with tangible performance benefits.
Jules S. Damji is a lead developer advocate at Anyscale and an MLflow contributor. He is a hands-on developer with more than 20 years of experience and has built large-scale distributed systems at leading companies, including Sun Microsystems, Netscape, @Home, Opsware/Loudcloud, VeriSign, ProQuest, Hortonworks, and Databricks. He holds a B.Sc. and M.Sc. in Computer Science (from Oregon State University and Cal State, Chico, respectively) and an M.A. in Political Advocacy and Communication from Johns Hopkins University.