Building LLM Apps for Production, Part 1

While setting up a naive RAG stack is straightforward, making these applications production-ready means addressing a long tail of quality, evaluation, and scalability challenges.
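To make "naive RAG" concrete, here is a minimal stdlib-only sketch of the pattern: embed documents, retrieve the most similar ones for a query, and stuff them into a prompt for an LLM. The bag-of-words "embedding", `retrieve`, and `build_prompt` helpers are illustrative stand-ins, not LlamaIndex APIs; a real stack would use a neural embedding model and a vector store.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real stacks use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top_k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    # "Stuff" the retrieved context into a prompt for the LLM call.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Ray is a framework for scaling Python workloads.",
    "LlamaIndex connects LLMs to external data.",
    "RAG retrieves relevant context before generation.",
]
print(build_prompt("What does Ray do?", docs))
```

Every production concern the workshop covers (chunking strategy, retrieval quality, evaluation, serving at scale) hides behind one of these simple functions.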

This multi-part workshop series will guide you through using LlamaIndex and Ray to implement reliable and scalable RAG.

You'll learn how to build RAG pipelines and approach scalability challenges, design experiments to optimize key application components, and use scalable workflows to compare them quantitatively.
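The experiment-and-compare workflow can be sketched as a parallel sweep over configurations, each scored by an evaluation function. This sketch uses the stdlib `ThreadPoolExecutor` as a stand-in for Ray's distributed tasks (in a Ray-based stack, `evaluate_config` would typically be an `@ray.remote` task); the `evaluate_config` metric and the grid values are hypothetical placeholders for a real eval over a labeled dataset.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def evaluate_config(chunk_size: int, top_k: int) -> float:
    # Hypothetical stand-in metric: a real pipeline would build an index
    # with this chunk_size, query it with top_k, and score the answers.
    return 1.0 / (abs(chunk_size - 512) + 1) + 0.1 * top_k

# Grid of candidate configurations to compare.
grid = list(product([128, 512, 1024], [1, 3, 5]))

# Evaluate all configurations in parallel (Ray tasks in the real workflow).
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(lambda cfg: evaluate_config(*cfg), grid))

best_cfg, best_score = max(zip(grid, scores), key=lambda p: p[1])
print(f"best config: chunk_size={best_cfg[0]}, top_k={best_cfg[1]}, "
      f"score={best_score:.2f}")
```

The point of the pattern is that once evaluation is a function of a configuration, comparing components quantitatively reduces to a (parallelizable) sweep.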

Instructors:

Simon (co-founder/CTO, LlamaIndex)

Adam (Technical Trainer, Anyscale)
