Introducing Ray Train, an easy-to-use library for distributed deep learning. In this post, we show how Ray Train improves developer velocity, is production-ready, and comes with batteries included.
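For context, here is a minimal sketch of what a Ray Train job can look like, using the `TorchTrainer` API; the model, data, and hyperparameters below are placeholders, and the exact API surface has changed across Ray versions:

```python
# A minimal sketch of distributed PyTorch training with Ray Train.
# The linear model and random batches are placeholders.
import torch
import torch.nn as nn
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer, prepare_model

def train_loop_per_worker(config):
    # Each worker builds its own model; Ray wraps it for DDP.
    model = prepare_model(nn.Linear(10, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=config["lr"])
    loss_fn = nn.MSELoss()
    for _ in range(config["epochs"]):
        X, y = torch.randn(32, 10), torch.randn(32, 1)  # placeholder batch
        loss = loss_fn(model(X), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"lr": 1e-2, "epochs": 3},
    scaling_config=ScalingConfig(num_workers=2),  # two parallel workers
)
result = trainer.fit()
```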
This blog post, Part 2, explains how to use Ray to speed up deep learning forecasting when training one large global model to predict many target time series. We will train an LSTM variant of an RNN with GRN building blocks, an encoder-decoder architecture, and...
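As a rough illustration only (not the exact model from the post, which also adds GRN building blocks), a stripped-down encoder-decoder LSTM forecaster in PyTorch might look like this:

```python
# A hypothetical, minimal encoder-decoder LSTM forecaster sketch.
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    def __init__(self, n_features, hidden_size, horizon):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(1, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)
        self.horizon = horizon

    def forward(self, history):
        # history: (batch, lookback, n_features)
        _, (h, c) = self.encoder(history)
        # Decode the horizon one step at a time, feeding back predictions.
        step = history[:, -1:, :1]  # seed with the last observed target
        outputs = []
        for _ in range(self.horizon):
            out, (h, c) = self.decoder(step, (h, c))
            step = self.head(out)
            outputs.append(step)
        return torch.cat(outputs, dim=1)  # (batch, horizon, 1)

model = Seq2SeqForecaster(n_features=8, hidden_size=64, horizon=24)
preds = model(torch.randn(16, 48, 8))  # 16 series, 48-step lookback
```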
Ray Lightning is a new plugin that makes running multi-node GPU training with PyTorch Lightning fast and easy.
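A minimal sketch of the plugin in use, assuming the `RayPlugin` interface from the `ray_lightning` package (later releases renamed it; the tiny model and random data are placeholders):

```python
# A minimal sketch: the plugin swaps Lightning's built-in DDP launcher
# for Ray actors, so the same script scales to a multi-node cluster.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from ray_lightning import RayPlugin

class LitRegressor(pl.LightningModule):
    # Tiny placeholder model; any LightningModule works unchanged.
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(10, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.mse_loss(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)

data = DataLoader(
    TensorDataset(torch.randn(256, 10), torch.randn(256, 1)), batch_size=32
)
trainer = pl.Trainer(
    max_epochs=2,
    plugins=[RayPlugin(num_workers=4, use_gpu=False)],  # 4 Ray workers
)
trainer.fit(LitRegressor(), data)
```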
XGBoost-Ray is a novel backend for distributed XGBoost training. It features multi-node and multi-GPU training, distributed data loading, advanced fault tolerance such as elastic training, and seamless integration with hyperparameter optimization frameworks.
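A minimal usage sketch, closely following the documented XGBoost-Ray API (the breast-cancer dataset and actor counts here are just illustrative):

```python
# Distributed XGBoost training with XGBoost-Ray: a drop-in train() call
# where RayDMatrix shards the data and RayParams sizes the actor pool.
from sklearn.datasets import load_breast_cancer
from xgboost_ray import RayDMatrix, RayParams, train

X, y = load_breast_cancer(return_X_y=True)
train_set = RayDMatrix(X, y)  # distributed data loading

evals_result = {}
bst = train(
    {"objective": "binary:logistic", "eval_metric": ["logloss", "error"]},
    train_set,
    evals_result=evals_result,
    evals=[(train_set, "train")],
    num_boost_round=10,
    # Two training actors with one CPU each; scale these up on a cluster.
    ray_params=RayParams(num_actors=2, cpus_per_actor=1),
)
bst.save_model("model.xgb")
print("Final training error:", evals_result["train"]["error"][-1])
```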
Hugging Face Transformers recently added the Retrieval Augmented Generation (RAG) model, a new NLP architecture that leverages external documents (like Wikipedia) to augment its knowledge and achieve state-of-the-art results on knowledge-intensive tasks.
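A minimal sketch of running RAG with the Transformers API, following the documented `facebook/rag-token-nq` example (`use_dummy_dataset=True` avoids downloading the full Wikipedia retrieval index):

```python
# Generate an answer with RAG: the retriever fetches supporting passages,
# and the generator conditions on them to produce the output text.
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained(
    "facebook/rag-token-nq", retriever=retriever
)

inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```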
In this blog post, we're announcing two new integrations with Ray and MLflow: Ray Tune+MLflow Tracking and Ray Serve+MLflow Models, which together make it much easier to build ML models and take them to production.
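As a sketch of the Tune side of this integration: the `MLflowLoggerCallback` logs each trial's configuration and reported metrics to an MLflow experiment. The training function below is a placeholder, and the callback's import path has moved in newer Ray versions:

```python
# Ray Tune + MLflow Tracking: every trial run by tune.run() is recorded
# as an MLflow run under the named experiment.
from ray import tune
from ray.tune.integration.mlflow import MLflowLoggerCallback

def train_fn(config):
    # Placeholder objective: report a fake decreasing loss per step.
    for step in range(10):
        tune.report(loss=config["lr"] * (10 - step))

tune.run(
    train_fn,
    config={"lr": tune.grid_search([0.001, 0.01, 0.1])},
    callbacks=[
        MLflowLoggerCallback(experiment_name="tune_mlflow_demo", save_artifact=True)
    ],
)
```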