Ray Use Cases

Reducing cost, latency, and manual efforts in hyperparameter tuning

Ray Summit 2022

Building machine and deep learning models is a demanding task: it involves multiple sub-tasks such as data preparation, model architecture design, hyperparameter tuning, and extensive experimentation. Each set of hyperparameters spawns a new experiment, and if this experimentation is not managed efficiently it becomes highly compute intensive, driving up both cost and time. Ray Tune gives us out-of-the-box features to manage compute resources efficiently: its scheduling algorithms cut down the compute required by pruning bad trials early and reusing the freed resources to schedule new ones. Hyperparameter tuning and extensive experimentation also bring the problem of tracking and logging all these experiments and saving the best model; Ray Tune's integrations with tools like MLflow and Weights & Biases save the day. We will talk about how we have used Ray Tune to reduce manual effort, compute costs, and logging hassle while training our models.
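As a rough illustration of the pattern the abstract describes, here is a minimal Ray Tune sketch combining an ASHA scheduler (one of Tune's trial-pruning schedulers) with the MLflow logger callback. The `train_model` objective, the hyperparameter names, and the `hpo-demo` experiment name are hypothetical stand-ins, not the speaker's actual setup:

```python
from ray import tune
from ray.tune.schedulers import ASHAScheduler
from ray.tune.integration.mlflow import MLflowLoggerCallback

def train_model(config):
    # Hypothetical objective standing in for a real training loop.
    for step in range(1, 101):
        # Pretend the loss improves with steps and depends on the learning rate.
        loss = (config["lr"] - 0.01) ** 2 + 1.0 / step
        # Reporting intermediate results lets the scheduler prune bad trials early.
        tune.report(loss=loss)

# ASHA stops under-performing trials early so their resources
# can be reused to schedule new trials.
scheduler = ASHAScheduler(
    metric="loss",
    mode="min",
    grace_period=5,   # minimum iterations before a trial can be stopped
    max_t=100,
)

analysis = tune.run(
    train_model,
    config={
        "lr": tune.loguniform(1e-4, 1e-1),
        "batch_size": tune.choice([32, 64, 128]),
    },
    num_samples=20,                  # number of trials to sample
    scheduler=scheduler,
    resources_per_trial={"cpu": 1},
    # Log every trial's parameters and metrics to MLflow; a WandbLoggerCallback
    # can be used the same way for Weights & Biases.
    callbacks=[MLflowLoggerCallback(experiment_name="hpo-demo")],
)

print(analysis.get_best_config(metric="loss", mode="min"))
```

With a setup along these lines, trials that plateau are stopped after the grace period, and every trial's hyperparameters and metrics land in the MLflow tracking UI without per-experiment bookkeeping.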

About Arun CS Kumar

Arun CS Kumar is a Lead AI Engineer at Ridecell. He earned a Ph.D. in computer science from the University of Georgia, where his research focused on 3D computer vision. He has been working with computer vision and lidar perception teams in autonomous driving for the past three years.

Arun CS Kumar

Lead AI Engineer, Ridecell

Ready to Register?

Come connect with the global community of thinkers and disruptors who are building and deploying the next generation of AI and ML applications.

Save your spot

Join the Conversation

Ready to get involved in the Ray community before the conference? Ask a question in the forums. Open a pull request. Or share why you’re excited with the hashtag #RaySummit on Twitter.