Ray Summit 2022
We are delighted to host an exclusive Ray Summit Meetup on the eve of the summit, organized by Anyscale and featuring Ray community talks. Invited Ray community speakers will share how they use Ray to scale and solve challenging ML problems.
You don't have to be registered for the Ray Summit to attend. The meetup is free for the community. Join us for the Ray Summit Happy Hour from 5:00 - 6:00 p.m., followed immediately by the meetup.
Agenda (times are approximate and may vary slightly)
Talk 1: Ray + Arize: Close the ML infrastructure loop
Detecting, diagnosing, and resolving ML model performance issues can be difficult for even the most sophisticated ML engineers. As more machine learning models are deployed into production, it is imperative we have tools to monitor, troubleshoot, and explain model decisions. Join Aparna Dhinakaran, chief product officer at Arize AI, in a discussion on the state of commonly seen ML production monitoring challenges. Learn how to use ML observability from training through production environments to find upstream model issues faster, monitor your models in real time at scale, and improve model interpretability and explainability.
This talk will highlight common challenges seen in models deployed in production, including model drift, data quality issues, distribution changes, outliers, and bias. The talk will also cover best practices to address these challenges, and how to use observability and explainability to help identify model issues before they impact your business.
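As one concrete example of how distribution drift is typically quantified, the Population Stability Index (PSI) compares a feature's training-time distribution against its production distribution. This is a minimal sketch of the standard metric, not Arize's specific implementation; all names are illustrative.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training) sample
    and a production sample of one feature. PSI near 0 means no shift;
    values above roughly 0.2 are commonly read as significant drift.
    (Illustrative sketch; not Arize's drift implementation.)"""
    # Bin edges come from the baseline so both samples share the same bins
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins to avoid log(0) and division by zero
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

In a monitoring pipeline, a score like this would be computed per feature on a rolling window of production traffic and alerted on when it crosses a threshold.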
Talk 2: Approaching Cluster Multi-tenancy with Ray Job
Maintaining a highly available service with close to zero downtime and optimal performance is both challenging and almost mandatory in the world of data-intensive operations. Our strategy for achieving scalability and reliability without compromising on simplicity of implementation involves utilizing Ray API functionality in simple but clever ways. We'll explore how to incorporate multi-cluster sanity, seamlessly swappable dependencies, and fail-safe cluster transitions with long-running Ray clusters to meet the demands of modern cloud infrastructure. We will also touch on challenges that we faced at Ikigai when implementing this strategy and how we overcame them to reach stability.
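One generic shape of a fail-safe cluster transition is: bring up the new cluster, prove it healthy with a representative smoke-test workload, and only then retire the old one, rolling back on any failure. The plain-Python sketch below illustrates that pattern only; it is not Ikigai's implementation, and every name in it is hypothetical.

```python
def transition(old, new, smoke_test):
    """Fail-safe swap between two long-running clusters.
    `old` and `new` are handles exposing .healthy() and .shutdown();
    `smoke_test` runs a small representative job on a cluster and
    returns True on success. All names here are illustrative."""
    if not new.healthy():
        return old          # new cluster never became ready: keep serving from old
    if not smoke_test(new):
        new.shutdown()      # roll back; traffic never left the old cluster
        return old
    old.shutdown()          # cut over only after the new cluster proved itself
    return new
```

The key property is that the old cluster is torn down last, so every failure path leaves a known-good cluster serving traffic.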
Talk 3: Large-scale distributed approximate nearest neighbor search with Ray
One of the simplest and most reliable learning methods in AI is to use memory: retrieve the training data points that are closest to the testing data. However, as datasets grow, such search becomes prohibitively expensive. This talk will describe how we use Ray to develop approximate, large-scale, distributed nearest neighbor search. In particular, I will describe applications to fraud detection in images of the scientific literature. Next, I will describe how we harness Ray to process tens of millions of scientific articles and images and billions of keypoints extracted from these images. Then, I will describe how we use Ray to integrate GPUs into this computation, effectively cutting down billion-scale searches from days to seconds. Finally, I will discuss how we use Ray Serve to provide this search as a service, hiding all the complexity behind a simple interface. I will end by discussing how Ray compares to other tools we have attempted to use in the past, including Message Passing Interface (MPI) and Dask, and discuss the advantages and disadvantages of Ray.
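The core pattern behind a distributed nearest neighbor search — shard the reference set, search each shard in parallel, then merge the per-shard top-k results — can be sketched in plain NumPy. In a Ray deployment, a function like `knn_shard` would typically become a `@ray.remote` task invoked once per shard; all names below are illustrative, not taken from the talk.

```python
import numpy as np

def knn_shard(queries, shard, k):
    """Exact top-k search within one shard of the reference set.
    With Ray, this is the unit of work that runs as a remote task."""
    # Pairwise squared Euclidean distances: shape (n_queries, n_shard)
    d = ((queries[:, None, :] - shard[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return idx, np.take_along_axis(d, idx, axis=1)

def knn_search(queries, shards, k):
    """Merge per-shard top-k results into a global top-k."""
    offsets = np.cumsum([0] + [len(s) for s in shards[:-1]])
    all_idx, all_dist = [], []
    for off, shard in zip(offsets, shards):
        idx, dist = knn_shard(queries, shard, k)
        all_idx.append(idx + off)   # map shard-local ids to global ids
        all_dist.append(dist)
    idx = np.concatenate(all_idx, axis=1)
    dist = np.concatenate(all_dist, axis=1)
    order = np.argsort(dist, axis=1)[:, :k]
    return np.take_along_axis(idx, order, axis=1)
```

Because each shard returns only its local top-k, the merge step touches k results per shard rather than the full dataset, which is what makes the pattern scale; approximate methods replace the exact per-shard scan with an index lookup.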
Daniel Acuna is an assistant professor in the School of Information Studies at Syracuse University, Syracuse, NY. His current research aims to understand decision-making in science, from helping hiring committees to predict future academic success to removing the potential biases that scientists and funding agencies commit during peer review. To achieve these tasks, Dr. Acuna harnesses vast datasets about scientific activities and applies machine learning and artificial intelligence to uncover rules that make publication, collaboration, and funding decisions more successful. Recently, he has been interested in artificial intelligence biases and developing methods for detecting them.
Jaehyun Sim is the director of engineering at Ikigai Labs, where he is building a highly scalable and interactive data pipelining platform for raw data. He is a CNCF-certified CKA and CKAD and enjoys solving big data problems with cloud-native tools such as Kubernetes and AWS. He is currently working on making big data more transparent by making data pipelines both massively scalable and easily visualizable. He worked previously at Celect, Inc. as a data engineer and has undergraduate degrees in computer science and statistics from UC Berkeley.
Aparna Dhinakaran is the co-founder and chief product officer at Arize AI, a pioneer and early leader in machine learning (ML) observability. A frequent speaker at top conferences and thought leader in the space, Dhinakaran was recently named to the Forbes 30 Under 30. Before Arize, Dhinakaran was an ML engineer and leader at Uber, Apple, and TubeMogul (acquired by Adobe). During her time at Uber, she built several core ML infrastructure platforms, including Michelangelo. She has a bachelor's from Berkeley's Electrical Engineering and Computer Science program, where she published research with Berkeley's AI Research group.
Jules S. Damji is a lead developer advocate at Anyscale and an MLflow contributor. He is a hands-on developer with over 20 years of experience and has worked at leading companies such as Sun Microsystems, Netscape, @Home, Opsware/Loudcloud, VeriSign, ProQuest, Hortonworks, and Databricks, building large-scale distributed systems. He holds a BSc and MSc in computer science (from Oregon State University and Cal State, Chico, respectively), and an MA in political advocacy and communication (from Johns Hopkins University).
Come connect with the global community of thinkers and disruptors who are building and deploying the next generation of AI and ML applications.
Save your spot