
Featured Posts and News

04.09.2025

Simplifying AI Development at Scale: Google Cloud Integrates Anyscale's RayTurbo with GKE

By Gabe Monroy, VP & GM Cloud Runtimes, and Keerti Melkote, CEO of Anyscale

02.12.2025

Deploying DeepSeek R1 on Anyscale

By The Anyscale Team

10.03.2024

Ray Summit 2024: Breaking Through the AI Complexity Wall

By The Anyscale Team

10.01.2024

Ray Data GA

By Hao Chen, Richard Liaw and Praveen Gorthy

10.01.2024

Accelerated Metadata Fetching in Ray Data up to 4.5x Faster on Anyscale

By Balaji Veeramani, Hao Chen, Richard Liaw, Matthew Connor and Praveen Gorthy

10.01.2024

Autoscaling Large AI Models up to 5.1x Faster on Anyscale

By Christopher Chou, Austin Kuo, Richard Liaw, Edward Oakes and Chris Sivanich

10.01.2024

Announcing RayTurbo

By Akshay Malik, Praveen Gorthy and Richard Liaw

10.01.2024

Batch LLM Inference on Anyscale slashes AWS Bedrock costs by up to 6x

By Cody Yu, Scott Lee, Ricky Xu, William Lin, Praveen Gorthy and Richard Liaw

11.07.2023

Ray 2.8 features Ray Data extensions, AWS Neuron cores support, and Dashboard improvements

By Jules S. Damji and Richard Liaw

09.18.2023

Ray 2.7 features major stability improvements to Ray AI Libraries and KubeRay and introduces RayLLM

By Jules S. Damji and Richard Liaw

07.25.2023

Ray 2.6 features streaming for Serve and Train and new Multi-GPU Learner API

By Jules S. Damji and Richard Liaw

06.22.2023

How continuous batching enables 23x throughput in LLM inference while reducing p50 latency

By Cade Daniel, Chen Shen, Eric Liang and Richard Liaw

06.13.2023

Ray 2.5 features training and serving for LLMs, Multi-GPU training in RLlib, and enhanced Ray Data support

By Richard Liaw and Jules S. Damji

04.27.2023

Announcing Ray 2.4.0: Infrastructure for LLM training, tuning, inference, and serving

By Richard Liaw, Jules S. Damji and Jiajun Yao

04.10.2023

How to fine tune and serve LLMs simply, quickly and cost effectively using Ray + DeepSpeed + HuggingFace

By Waleed Kadous, Jun Gong, Antoni Baum and Richard Liaw

© Anyscale, Inc 2025 - Privacy Policy
