Ray + Arize: Productionize ML for Scale and Usability

By Dat Ngo   

Don't miss Arize’s presentation at Ray Summit 2022!

If you’ve ever had the opportunity to bring a machine learning project to life from rapid prototyping all the way into production, you know that it is nothing short of yeoman’s work. Some would call it fun, some type-II fun, and others absolute hell – it all depends on your sense of humor and affinity for productionizing things.

But if you are on this hero’s journey, there are two arrows in your ML quiver that can make this process much more enjoyable and likely to hit the mark: Ray and Arize AI. This piece covers why you should consider using Ray’s distributed ML framework and ecosystem alongside Arize’s ML observability platform, and how you can get started.

Background

Imagine you just finished your ML prototype after weeks and weeks of searching for the necessary data, cleaning and preprocessing it, finding the right model architecture, training, testing, and iterating over and over until you finally have a working model.

You present your model to the product team and tell an amazing story of how your model will improve the day-to-day operations, increase business KPIs, and a lot more. They love the presentation! They want to move forward and ask you the following: “How do we get this model serving the entire business line? And when the model goes into production, how will you know how it’s performing against our goals?”

What they are REALLY asking (in ML speak) is this: how do we get this off your laptop and take this into a very real production system at scale? And if issues around data quality, data drift or performance degradation happen, how will we catch and fix them quickly so that business outcomes aren’t negatively affected?

This blog is written to help you understand how to answer these questions and start tackling these productionization tasks.

The Technologies

In this section, we will briefly review both the Ray and Arize technologies and the problems each solves.

What is Ray?

Ray is an open-source project developed at UC Berkeley’s RISELab. It is a general-purpose, universal distributed compute framework that lets you flexibly run any compute-intensive Python workload, from distributed training and hyperparameter tuning to deep reinforcement learning and production model serving.

As ML practitioners, we set out to bring value to our business through the ML models we build, but we often get sidetracked learning and managing how to bring those models to a larger scale.

This is where Ray comes in. Ray enables you to run Python code in parallel and across multiple machines without confining you to a specific framework – basically, imagine Apache Spark, but with all of Python available to you.

This makes it more of a general-purpose clustering and parallelization framework that can be used to build and run any type of distributed application. Because of how Ray Core is architected, it is often thought of as a framework for building frameworks.

You can break down Ray into two main components. The first is Ray Core, which is a distributed computing framework. The second is the Ray Ecosystem, which, broadly speaking, is a set of task-specific libraries that come packaged with Ray.
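To make that concrete, here is a minimal sketch of Ray Core in action: an ordinary Python function turned into a distributed task (this assumes Ray is installed and running locally, and is not part of the notebook that follows).

import ray

ray.init()  # start Ray locally (or connect to an existing cluster)

@ray.remote
def square(x):
    # an ordinary Python function, now schedulable on any worker in the cluster
    return x * x

# each .remote() call returns a future immediately; the work runs in parallel
futures = [square.remote(i) for i in range(4)]

# block until all results are ready
print(ray.get(futures))  # [0, 1, 4, 9]

The task-specific libraries in the Ray Ecosystem (Ray Tune, Ray Serve, XGBoost-Ray, and others) are built on top of these same primitives.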

TL;DR on Ray:

  • Very intuitive to scale in a language that you’re comfortable with (going from laptop to distributed workloads in Python)

  • Vast ML ecosystem; not constrained by certain technologies or frameworks

  • Allows users to focus on building their ML use case, not distributed technologies

Want to go Deeper? Here are some resources:

Ray Tutorial

Ray Core

What is Arize?

For many ML teams, the moment the model starts interacting with the real world is where the rubber meets the road. This is where Arize comes in. Arize is an ML observability platform that allows ML practitioners to easily tackle the myriad issues they are likely to encounter in the real world, such as:

  • Model Performance Issues: almost all models will experience some sort of performance degradation 

  • Model and Data Drift: the real world or the model changing over time, putting performance at risk

  • Data Quality Issues: we all know this one

  • Model Explainability: knowing WHY your model is making the predictions it’s making

  • Model Fairness: treating groups or protected classes equitably

When it comes to model monitoring, it’s not just that we want to be alerted when there is an issue. Once a monitor fires, we want the ability to know where and why the issue happened, and how we can fix it quickly. Arize makes finding these issues intuitive and automated. Just like in software development, if you don’t know where the bug is or have no visibility into the problem, triaging the situation can be painstakingly long and arduous.

Arize is built to do three things well. The first is to let you know when something has gone wrong. The second is to help you understand where that issue is and give you workflows to fix it quickly. Both contribute to the third: continually improving ML models once they’re in production.

As you think about scaling the infrastructure around ML models, you also want to think about scaling team capabilities. If your team is spending copious amounts of time maintaining basic model analytics and systems not purpose-built for ML monitoring and observability, there is less time spent building newer, better models for the business.

TL;DR on Arize:

  • Automated monitoring for issues your model will encounter in the wild

  • Strong troubleshooting workflows to fix issues quickly

  • Built for scale, intuition, and ease of use

Want to go Deeper? Here are some resources:

Machine Learning Observability 101

Arize Docs

Let’s See it in Action

Below is a code example of Ray and Arize working together.

It’s a simple example, but it shows the scaffolding of the two technologies working in tandem. Let’s break the notebook into two major parts.

# install dependencies

!pip install ray
!pip install arize
!pip install xgboost_ray

# import data, assign features and target

import pandas as pd
import numpy as np
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()

feature_names = data.feature_names
target_names = data.target_names
target = data.target
df = pd.DataFrame(data.data, columns=feature_names)

# train a model on the breast cancer dataset using Ray

from xgboost_ray import RayDMatrix, RayParams, train

train_x, train_y = load_breast_cancer(return_X_y=True)
train_set = RayDMatrix(train_x, train_y)

evals_result = {}
bst = train(
    {
        "objective": "binary:logistic",
        "eval_metric": ["logloss", "error"],
    },
    train_set,
    evals_result=evals_result,
    evals=[(train_set, "train")],
    verbose_eval=False,
    ray_params=RayParams(
        num_actors=2,  # Number of remote actors
        cpus_per_actor=1))

bst.save_model("model.xgb")
print("Final training error: {:.4f}".format(
    evals_result["train"]["error"][-1]))

# model predictions using Ray (SHAP values are computed in the next section)

from xgboost_ray import predict
import xgboost as xgb

batch = RayDMatrix(train_x)
bst = xgb.Booster(model_file="model.xgb")
pred_ray = predict(bst, batch, ray_params=RayParams(num_actors=2))

print(pred_ray.shape)

The first part is likely familiar, and that familiarity is one of the advantages of using Ray. Here you are training your model and using it to predict on the breast cancer dataset (the SHAP values are computed at the start of the second part).
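For comparison, here is a minimal sketch of roughly the same training step with plain, single-machine xgboost (same objective and metrics as the Ray example above; this is an illustration, not part of the notebook):

# single-machine xgboost training, for comparison with the Ray version above
import xgboost as xgb
from sklearn.datasets import load_breast_cancer

train_x, train_y = load_breast_cancer(return_X_y=True)
dtrain = xgb.DMatrix(train_x, label=train_y)

evals_result = {}
bst_local = xgb.train(
    {"objective": "binary:logistic", "eval_metric": ["logloss", "error"]},
    dtrain,
    evals=[(dtrain, "train")],
    evals_result=evals_result,
    verbose_eval=False,
)

The Ray version swaps DMatrix for RayDMatrix and adds a ray_params argument; everything else is the familiar xgboost API.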

A lot of the code should feel familiar, akin to using sklearn’s fit() and predict(). Here, you are using Ray to distribute the work across two actors.

An actor is essentially a stateful worker (or a service). When a new actor is instantiated, a new worker is created, and methods of the actor are scheduled on that specific worker and can access and mutate the state of that worker. This allows you to distribute the work needed to train, predict, and compute SHAP (or do any other action that is computationally heavy).
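To make the actor concept concrete, here is a minimal, standalone sketch of a Ray actor (unrelated to the XGBoost example, purely an illustration of stateful workers):

import ray

ray.init(ignore_reinit_error=True)  # safe to call even if Ray is already running

@ray.remote
class Counter:
    """A stateful worker: the instance lives in its own worker process and keeps its own state."""
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

# instantiating the actor starts a new worker
counter = Counter.remote()

# method calls are scheduled on that worker and can mutate its state
futures = [counter.increment.remote() for _ in range(3)]
print(ray.get(futures))  # [1, 2, 3]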

# compute SHAP values with Ray and create SHAP value column names for Arize

inf_shap_values = predict(bst, batch, ray_params=RayParams(num_actors=2), pred_contribs=True)

shap_values_column_names_mapping = {
    f"{feat}": f"{feat}_shap" for feat in data["feature_names"]
}

# instantiate Arize client

from arize.pandas.logger import Client, Schema
from arize.utils.types import ModelTypes, Environments

SPACE_KEY = "SPACE_KEY"
API_KEY = "API_KEY"
arize_client = Client(space_key=SPACE_KEY, api_key=API_KEY)

model_id = "breast_cancer_prediction_SHAP"
model_type = ModelTypes.SCORE_CATEGORICAL

if SPACE_KEY == "SPACE_KEY" or API_KEY == "API_KEY":
    raise ValueError("❌ NEED TO CHANGE SPACE AND/OR API_KEY")
else:
    print("Step 2 ✅: Import and Setup Arize Client Done! Now we can start using Arize!")

# create functions to simulate UUIDs and timestamps for Arize

import uuid
from datetime import datetime, timedelta

# Prediction ID is required for logging any dataset
def generate_prediction_ids(df):
    return pd.Series((str(uuid.uuid4()) for _ in range(len(df))), index=df.index)

# OPTIONAL: We can directly specify when inferences were made
def simulate_production_timestamps(df, days=30):
    t = datetime.now()
    current_t, earlier_t = t.timestamp(), (t - timedelta(days=days)).timestamp()
    return pd.Series(np.linspace(earlier_t, current_t, num=len(df)), index=df.index)

# assign predictions, labels, and SHAP values for Arize

prediction_label = pd.Series(map(lambda v: target_names[v], (pred_ray > 0.5).astype(int)))
prediction_score = pd.Series(pred_ray)
actual_label = pd.Series(map(lambda v: target_names[v], target))
actual_score = pd.Series(target)
# drop the last column (the bias term) and rename feature columns to their "<feature>_shap" names
shap_values = pd.DataFrame(inf_shap_values[:, :-1], columns=data["feature_names"])
shap_values = shap_values.rename(columns=shap_values_column_names_mapping)

# create data frame to send to Arize

production_dataset = pd.DataFrame(train_x, columns=data["feature_names"]).join(
    [
        pd.DataFrame(
            {
                "prediction_id": generate_prediction_ids(pd.DataFrame(train_x)),
                "prediction_ts": simulate_production_timestamps(pd.DataFrame(train_x)),
                "prediction_label": prediction_label,
                "actual_label": actual_label,
                "prediction_score": prediction_score,
                "actual_score": actual_score,
            }
        ),
        shap_values,
    ]
)

production_dataset.head(5)

# Define a Schema() object for Arize to pick up data from the correct columns for logging
production_schema = Schema(
    prediction_id_column_name="prediction_id",  # REQUIRED
    timestamp_column_name="prediction_ts",
    prediction_label_column_name="prediction_label",
    prediction_score_column_name="prediction_score",
    actual_label_column_name="actual_label",
    actual_score_column_name="actual_score",
    feature_column_names=feature_names,
    shap_values_column_names=shap_values_column_names_mapping,
)

# arize_client.log returns a Response object from Python's requests module
response = arize_client.log(
    dataframe=production_dataset,
    schema=production_schema,
    model_id=model_id,
    model_type=model_type,
    environment=Environments.PRODUCTION,
)

# If successful, the server will return a status_code of 200
if response.status_code != 200:
    print(
        f"❌ logging failed with response code {response.status_code}, {response.text}"
    )
else:
    print(
        f"✅ You have successfully logged {len(production_dataset)} data points to Arize!"
    )

In the second part, you are prepping your production data – the data your model predicted on – to be sent to Arize. Here, you are instantiating the Arize client, defining your schema, and logging your predictions to your Arize account.

Whether you use a real-time or batch architecture, you can log inference data to Arize to monitor and observe how the model is doing in production. In doing so, you have good visibility into when the model encounters performance degradation, drift, or data quality issues. If you come across these model issues, you will have the ability to quickly find and fix them with Arize.

Food For Thought

As you think about your current ML operations, there is one thing you could probably use much more of: time. Ray and Arize can help. Instead of spending a lot of time learning how distributed technologies work or monitoring and troubleshooting models that are in production, it is worth considering offloading these tasks to technology to keep your team focused on what they do best: using deep business domain knowledge to build and deploy high-value ML models.

This blog was written in partnership with Dat Ngo, ML Solutions Architect at Arize AI
