Serve models at scale

Model serving in production

Challenge

There are four common patterns in production machine learning: pipeline, ensemble, business logic, and online learning. Implementing these patterns typically involves a tradeoff between ease of development and production readiness.
Web frameworks are simple and work out of the box, but they can only serve single predictions and cannot deliver the required performance or scale. Custom tooling can glue systems together, but it is hard to develop, deploy, and manage. Specialized systems are great at serving ML models, but they are less flexible, harder to use, and can be costly.

Solutions

Anyscale helps you go beyond existing model serving limitations with Ray and Ray Serve, which offers scalable, efficient, composable, and flexible serving. Ray Serve provides:

  • A better developer experience and abstraction
  • The ability to flexibly compose multiple models and independently scale them
  • Built-in request batching to help you meet your performance objectives
  • Resource management (CPUs, GPUs), including fractional resource requirements

Ray Serve is the scalable, flexible way to do model serving

Flexible environment

Effective machine learning serving frameworks need to be open to meet different demands. Ray Serve lets you bring your own Docker image, supports multiple frameworks (e.g., TensorFlow, PyTorch, Scikit-learn, XGBoost), and allows different runtime environments, and even different framework versions, per task and actor.

A web server and an ML serving compute library

With native support for FastAPI, Ray Serve allows you to bridge the gap between web servers and specialized model serving frameworks. Leverage automatic documentation, typed Python (Pydantic), validation, security and authentication, performance, asynchronicity, and routing.

Author complex inference pipelines

Chaining, parallelization, ensemble, and dynamic dispatch patterns can be easily expressed with plain Python code. Test locally and deploy to production with no code changes and different runtime environments per task and actor. Clearly define the separation and boundaries between code and deployments.
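The chaining and ensemble patterns above really are just plain Python, as this toy sketch shows; in Ray Serve each function would become a deployment and the calls would go through deployment handles, but the structure is the same (the model functions here are stand-ins, not real models).

```python
def preprocess(text: str) -> str:
    # Chaining: a shared preprocessing step feeds every downstream model.
    return text.strip().lower()


def model_a(text: str) -> float:
    return len(text) / 100.0


def model_b(text: str) -> float:
    return text.count("a") / 10.0


def ensemble(text: str) -> float:
    # Ensemble: fan out to both models, then average their scores.
    cleaned = preprocess(text)
    scores = [model_a(cleaned), model_b(cleaned)]
    return sum(scores) / len(scores)


print(ensemble("  Banana  "))  # -> 0.18
```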

Optimize developer productivity and resource management

By building on top of Ray, Ray Serve is horizontally scalable, lightning fast, and efficient, with support for fractional and fine-grained resource allocation.

Iterate and move to production fast with Ray Serve and Anyscale


Wildlife Studios serves in-game offers 3X faster, while simultaneously reducing infrastructure spend by 95%, with Ray Serve.

Read Their Story

See how a small team of data scientists built a dynamic, scalable data pipeline service for their users with Ray Serve.

Read Their Story

Already using open source Ray?

Migrate your existing workloads to Anyscale with no code changes. Experience the magic of infinite scale at your fingertips.