The Ray team at Ant Group developed the Mars on Ray scientific computing framework. Combining Mars with XGBoost on Ray and other Ray machine learning libraries makes it easy to implement an end-to-end AI pipeline in one job and use one Python script...
This blog, Part 2, will explain how to use Ray to speed up Deep Learning forecasting when training one large global model in order to predict many target time series. We will train an LSTM version of RNN with GRN building blocks, Encoder-Decoder, and...
Why Heureka chose Ray for cost-effective machine learning.
XGBoost is an open-source implementation of gradient boosting designed for speed and performance. However, even XGBoost training can sometimes be slow. This post reviews some approaches for accelerating this process, such as changing tree construction me...
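As a hedged illustration of one such approach (not the post's own benchmark code), switching the tree construction method in plain XGBoost is a single-parameter change; the synthetic dataset and parameter values below are assumptions made for the sketch.

```python
import numpy as np
import xgboost as xgb

# Synthetic data purely for illustration.
X = np.random.rand(10_000, 20)
y = (X[:, 0] + np.random.rand(10_000) > 1.0).astype(int)
dtrain = xgb.DMatrix(X, label=y)

# "hist" builds trees from feature histograms and is typically much faster
# than the "exact" method on larger datasets.
params = {"objective": "binary:logistic", "tree_method": "hist"}
model = xgb.train(params, dtrain, num_boost_round=100)
```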
Announcing our $100M Series C and general availability of the Anyscale managed Ray offering.
Ray version 1.9 has been released! Release highlights include: Ray Train is now in beta, Ray Datasets now supports groupby and aggregations, Ray Docker images for multiple CUDA versions, improved Windows support, and a Ray Job Submission server.
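As a rough sketch of the new Datasets groupby support (the exact aggregation API may differ between Ray versions, so treat this as an assumption rather than release documentation), grouping and aggregating a small in-memory dataset might look like this:

```python
import ray

ray.init()

# A toy dataset of dict rows; groupby and aggregations landed in the 1.9 Datasets API.
ds = ray.data.from_items([{"group": i % 3, "value": i} for i in range(100)])

# Group rows by "group" and sum "value" within each group.
grouped = ds.groupby("group").sum("value")
print(grouped.take())
```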
Distributed libraries allow improved performance by exploiting the full bandwidth of distributed memory, and give greater programmability. But how does that actually work? What does the code look like?
In this post, we’ll be looking at a concrete...
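As a minimal, hedged sketch of the kind of code involved (not the post's actual example), Ray's shared object store lets many tasks read a large array in place instead of shipping a copy to each worker:

```python
import numpy as np
import ray

ray.init()

# Put a large array in the distributed object store once.
big_array = ray.put(np.random.rand(1_000_000))

@ray.remote
def partial_sum(arr, start, stop):
    # Each task reads its slice of the shared array from the object store.
    return float(np.sum(arr[start:stop]))

# Fan the work out across the cluster and aggregate the results.
chunks = [
    partial_sum.remote(big_array, i, i + 250_000)
    for i in range(0, 1_000_000, 250_000)
]
print(sum(ray.get(chunks)))
```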
Why Jaikumar Ganesh joined Anyscale as Head of Cloud Engineering.
What statistical forecasting is, and how you can use ARIMA and Prophet on Ray to speed up your forecasting.
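As a hedged sketch of the pattern (not the post's code), fitting many independent ARIMA models in parallel with Ray can be as simple as wrapping the per-series fit in a remote task; the series data here is synthetic and the order parameters are arbitrary.

```python
import numpy as np
import ray
from statsmodels.tsa.arima.model import ARIMA

ray.init()

@ray.remote
def fit_arima(series):
    # Fit one ARIMA(1, 1, 1) model for a single target series
    # and return a 7-step-ahead forecast.
    model = ARIMA(series, order=(1, 1, 1)).fit()
    return model.forecast(steps=7)

# Synthetic target series standing in for many real ones.
series_list = [np.cumsum(np.random.randn(200)) for _ in range(8)]
forecasts = ray.get([fit_arima.remote(s) for s in series_list])
```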
Running and monitoring distributed ML systems can be challenging. The need to manage multiple servers, and the fact that those servers emit different logs, mean that there can be a lot of overhead involved in scaling up a distributed ML system. Fort...