Productionizing Machine Learning with Observability, Quality and Flexibility at Scale

Tuesday, February 7, 6:00PM UTC

See how Ray and Arize combine to provide highly scalable and easily managed ML deployments, with automatic issue detection and quick troubleshooting.

Watch this on-demand webinar hosted by Anyscale, the company behind Ray, the unified framework for scalable computing, and Arize, the leader in machine learning observability. See how Ray and Arize combine to simplify AI/ML development and observability, with the ability to surface performance, data quality, and drift issues.

Hear how leading AI teams:

  • Bridge the gap between development and production:

    Understand how to scale ML workloads from your laptop to the cloud with no code changes.

    With a single script, prepare data, tune, train, and scale your workloads.

  • Scale across multiple dimensions:

    Hear how organizations are benefiting from embarrassingly parallel experiments and scaling across multiple cores, nodes, and data sources.

  • Increase developer velocity and speed experimentation:

    See how to speed model development and iterations without scaling complexity.

    Visualize, optimize, collaborate on, and standardize models and data pipelines.

  • Understand model drift:

    Track distribution changes in upstream data, predictions, and actuals to proactively gauge model performance and find retraining opportunities.

  • Automate monitoring at scale:

    Catch performance degradation of key metrics and surface unknown issues with performance, drift, and data quality monitors.

  • Find and fix problems faster:

    Reduce time-to-resolution for even the most complex models with purpose-built workflows for root cause analysis.
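The "single script from laptop to cloud" workflow above can be sketched with Ray's task API. This is a minimal illustration, not the webinar's actual demo: `train_trial` and its mock scoring are placeholders standing in for a real training routine.

```python
import ray

# Start Ray locally. Pointing ray.init() at a cluster address instead
# scales the same script out to many nodes with no code changes.
ray.init()

@ray.remote
def train_trial(learning_rate):
    # Placeholder for a real training run; returns a mock score.
    return {"lr": learning_rate, "score": 1.0 / (1.0 + learning_rate)}

# Launch embarrassingly parallel trials; Ray schedules them across all
# available cores (or cluster nodes, when connected to a cluster).
futures = [train_trial.remote(lr) for lr in (0.1, 0.01, 0.001)]
results = ray.get(futures)
best = max(results, key=lambda r: r["score"])
```

Because the trials are independent tasks, adding hardware increases throughput without restructuring the script.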
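One common way to quantify the distribution changes described above is the Population Stability Index (PSI), which compares a baseline sample (e.g. training data) against a production sample. The sketch below is a generic standard-library illustration of the metric, not Arize's API; the `psi` function and its equal-width binning are assumptions for the example.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Buckets both samples into equal-width bins spanning the baseline
    ("expected") range and sums (a - e) * ln(a / e) over the bins.
    Larger values indicate more drift; near zero means stable.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(sample, i):
        left, right = lo + i * width, lo + (i + 1) * width
        count = sum(left <= x < right or (i == bins - 1 and x == hi)
                    for x in sample)
        return max(count / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )
```

Running this on baseline versus live prediction distributions on a schedule is the kind of check that drift monitors automate at scale.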



Dat Ngo

ML Solutions Architect, Arize AI

Phi Nguyen

GTM Tech Lead, Anyscale

Phi has been working with Fortune 500 customers in retail, CPG, HCLS, and financial services, as well as startups, to accelerate their machine learning practices. His engagements range from helping teams organize and build a center of excellence for ML, establish MLOps processes and automation, and assess ML use-case development and feasibility, to providing cloud best practices that combine Ray with public clouds such as AWS and GCP or with open-source projects running on Kubernetes.