Webinar

Building a scalable ML model serving API with Ray Serve

Ray Serve is a framework-agnostic, Python-first model serving library built on Ray. In this introductory webinar, we will highlight how Ray Serve makes it easy to deploy, operate, and scale a machine learning API.

The core of the webinar will be a live demo showing how to build a scalable API backed by natural language processing (NLP) models.

The demo will show how to:
- Deploy a trained Python model and scale it out across a cluster using Ray Serve
- Improve the HTTP API using Ray Serve’s native FastAPI integration
- Compose multiple independently scalable models into a single application and run them in parallel to minimize latency (a rough sketch of this pattern follows below)
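To give a flavor of what the demo covers, here is a minimal sketch of the deployment, FastAPI ingress, and parallel composition pattern. It assumes a recent Ray Serve 2.x API (deployment handles whose responses are awaited directly); the `Sentiment`, `Summarizer`, and `APIIngress` classes, the `/analyze` route, and the replica counts are illustrative placeholders, not the exact code shown in the webinar.

```python
from fastapi import FastAPI
from ray import serve

app = FastAPI()


@serve.deployment(num_replicas=2)  # each model scales independently via num_replicas
class Sentiment:
    def __call__(self, text: str) -> str:
        # Placeholder for a real NLP sentiment model.
        return "positive"


@serve.deployment(num_replicas=2)
class Summarizer:
    def __call__(self, text: str) -> str:
        # Placeholder for a real summarization model.
        return text[:80]


@serve.deployment
@serve.ingress(app)  # native FastAPI integration: routes are ordinary FastAPI handlers
class APIIngress:
    def __init__(self, sentiment_handle, summarizer_handle):
        self.sentiment = sentiment_handle
        self.summarizer = summarizer_handle

    @app.get("/analyze")
    async def analyze(self, text: str) -> dict:
        # Fan out to both models before awaiting either, so they run in parallel
        # and the end-to-end latency is roughly the slower of the two calls.
        sentiment_ref = self.sentiment.remote(text)
        summary_ref = self.summarizer.remote(text)
        return {"sentiment": await sentiment_ref, "summary": await summary_ref}


# Compose the deployments into a single application that can run on a Ray cluster.
composed_app = APIIngress.bind(Sentiment.bind(), Summarizer.bind())
# serve.run(composed_app)  # or from the CLI: `serve run my_module:composed_app`
```

Each deployment keeps its own replica count, so the sentiment and summarization models can be scaled independently of the HTTP ingress.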

View slides >>>

Speakers

Tricia Fu

Product Manager, Anyscale

Tricia is currently a Product Manager at Anyscale. Before that, she was a Product Manager at Google and a Software Engineer at LinkedIn. She holds a BS in Electrical Engineering and Computer Science from UC Berkeley. In her free time, she loves taking her dog on alpine lake hikes!

Other Events

Ray Libraries in Practice: Multimodal AI Workloads

06.25.2025, 05:00 PM (PST)

Ray Libraries in Practice: LLM Workflows

06.04.2025, 05:00 PM (PST)

Why RAG Breaks at Scale: The Data Pipeline Problem

05.22.2025, 05:00 PM (PST)