Webinar

Faster time series forecasting at scale using Ray and Anyscale

Tuesday, January 18, 5:00 PM UTC

Forecasting is an important part of running any business. Per-series time series algorithms such as ARIMA, Prophet, and NeuralProphet fit one independent model per series, which makes training thousands of series an “embarrassingly parallel” distributed computing problem. For global deep learning algorithms, Ray makes both data and model parallelism straightforward.
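As a preview of the first pattern, here is a minimal sketch of per-series parallelism with Ray: each Prophet fit becomes an independent Ray task, so every series trains concurrently. The `fit_and_forecast` helper and the synthetic `series_dfs` data are illustrative assumptions, not code from the webinar itself.

```python
import numpy as np
import pandas as pd
import ray
from prophet import Prophet

ray.init()  # uses all local cores; point this at a cluster address to scale out

@ray.remote
def fit_and_forecast(series_df: pd.DataFrame, horizon: int = 30) -> pd.DataFrame:
    """Fit one Prophet model on a single series and forecast `horizon` days ahead."""
    model = Prophet()
    model.fit(series_df)  # Prophet expects columns `ds` (timestamp) and `y` (value)
    future = model.make_future_dataframe(periods=horizon)
    return model.predict(future)

# Synthetic stand-in for real per-item series: one DataFrame per item ID.
dates = pd.date_range("2021-01-01", periods=365, freq="D")
series_dfs = {
    f"item_{i}": pd.DataFrame({"ds": dates, "y": np.random.rand(365).cumsum()})
    for i in range(100)
}

# One Ray task per series; all 100 models fit concurrently.
refs = {item_id: fit_and_forecast.remote(df) for item_id, df in series_dfs.items()}
forecasts = {item_id: ray.get(ref) for item_id, ref in refs.items()}
```

Because `fit_and_forecast.remote()` returns immediately with an object reference, all of the fits are scheduled at once and Ray spreads them across whatever cores, or cluster nodes, are available.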

In this webinar, we will demo both patterns. For the deep learning algorithm, we will show how to take Google’s Temporal Fusion Transformer, available in PyTorch Forecasting, and turn its training and inference into distributed applications using Ray. Ray code runs in parallel across your laptop’s cores, and the exact same code can run in parallel across any cluster in any cloud.
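Here is a hedged sketch of the data-parallel training pattern using Ray Train’s `TorchTrainer` (Ray 2.x API). A small stand-in network and synthetic tensors take the place of the Temporal Fusion Transformer and a real forecasting dataset; the structure, not the model, is the point.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer, prepare_model, prepare_data_loader

def train_loop_per_worker(config):
    # Stand-in model; the webinar swaps in the Temporal Fusion Transformer
    # from PyTorch Forecasting here.
    model = prepare_model(  # wraps the model in DistributedDataParallel
        nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    )

    # Synthetic data for illustration; real code would load a forecasting dataset.
    x, y = torch.randn(1024, 8), torch.randn(1024, 1)
    loader = prepare_data_loader(DataLoader(TensorDataset(x, y), batch_size=64))

    optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])
    loss_fn = nn.MSELoss()
    for _ in range(config["epochs"]):
        for xb, yb in loader:  # each worker sees its own shard of the data
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            optimizer.step()

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"lr": 1e-3, "epochs": 2},
    scaling_config=ScalingConfig(num_workers=4),  # 4 laptop cores or 4 cluster workers
)
result = trainer.fit()
```

Changing `ScalingConfig(num_workers=4)` to request more workers (or GPU workers, via `use_gpu=True`) is the only edit needed to move this loop from a laptop to a cloud cluster, which is the “same code, any cluster” property described above.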

Speakers

Christy Bergman

Developer Advocate, Anyscale

Christy is a Developer Advocate at Anyscale. Her work involves figuring out how to parallelize different AI algorithms and creating demos and tutorials on how to use Ray and Anyscale. Before that, she was a Senior AI/ML Specialist Solutions Architect at AWS and a Data Scientist at several other companies. In her spare time, she enjoys hiking and bird watching.

Amog Kamsetty

Software Engineer, Anyscale

Amog Kamsetty is a software engineer at Anyscale, where he works on building distributed training libraries and integrations on top of Ray. He previously completed his MS at UC Berkeley, working with Ion Stoica on machine learning for database systems.