Ray Use Cases

Reducing cost, latency, and manual efforts in hyperparameter tuning

Tuesday, August 23
11:30 AM - 12:00 PM

Building machine and deep learning models is a tough task; it involves multiple sub-tasks such as data preparation, model architecture creation, hyperparameter tuning, and extensive experimentation. Each set of hyperparameters spawns a new experiment, and if tuning is not done efficiently it becomes highly compute intensive, driving up both compute costs and time. Ray Tune gives us out-of-the-box features to manage compute resources efficiently: its scheduling algorithms cut down the compute required by pruning bad trials and reusing the freed resources to schedule new ones. Hyperparameter tuning and extensive experimentation also bring the problem of tracking and logging all these experiments and saving the best model; Ray Tune's integrations with tools like MLflow and Weights & Biases save the day. We will talk about how we have used Ray Tune to reduce manual effort, compute costs, and logging hassle while training our models.
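For a flavor of the pattern the talk describes, here is a minimal, hypothetical sketch (not the speaker's actual pipeline) using Ray Tune's ASHA scheduler to prune underperforming trials while an MLflow callback logs every trial; it assumes the tune.run-style API from the Ray 1.x/2.0 era, and the training function and search space are placeholders:

```python
from ray import tune
from ray.tune.schedulers import ASHAScheduler
from ray.tune.integration.mlflow import MLflowLoggerCallback

def train_model(config):
    # Placeholder training loop; reporting a metric each epoch lets the
    # scheduler compare trials and stop the bad ones early.
    accuracy = 0.0
    for epoch in range(10):
        accuracy += config["lr"] * 0.5  # stand-in for real training
        tune.report(accuracy=accuracy)

analysis = tune.run(
    train_model,
    config={"lr": tune.loguniform(1e-4, 1e-1)},  # hypothetical search space
    num_samples=20,  # 20 hyperparameter trials
    # ASHA halts trials whose reported metric lags the rest of the cohort.
    scheduler=ASHAScheduler(metric="accuracy", mode="max"),
    # Every trial's config and metrics are logged to MLflow automatically.
    callbacks=[MLflowLoggerCallback(experiment_name="tune-demo")],
)
print("Best config:", analysis.get_best_config(metric="accuracy", mode="max"))
```

Because pruned trials release their CPUs and GPUs back to the pool, the same cluster can run new samples instead of finishing doomed ones, which is where the cost and time savings come from.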

About Prajjwal

Prajjwal Mishra is a deep learning engineer working on the Perception team for the Nemo product at Ridecell. He focuses on building machine learning pipelines and works on computer vision problems to extract useful information from images.

Prajjwal Mishra

Deep Learning Engineer, Ridecell

Ready to Register?

Come connect with the global community of thinkers and disruptors who are building and deploying the next generation of AI and ML applications.

Save your spot

Join the Conversation

Ready to get involved in the Ray community before the conference? Ask a question in the forums. Open a pull request. Or share why you’re excited with the hashtag #RaySummit on Twitter.