Build a chat assistant fast using Canopy from Pinecone and Anyscale Endpoints

Wednesday, December 13, 4:00PM UTC

This webinar will explore the challenges of building a chat assistant and how Canopy and Anyscale Endpoints provide a fast, easy way to build your RAG-based applications for free. We will walk through the architecture, a live example, and a guide to getting started building your own chat assistant.

Canopy is a flexible framework built on top of the Pinecone vector database that provides libraries and a simple API for chunking, embedding, chat history management, query optimization, and context retrieval.
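To give a feel for the kind of work Canopy handles for you, here is a minimal, generic sketch of the chunking step in a RAG pipeline. The function and parameter names are illustrative, not Canopy's actual API; Canopy wraps this step (along with embedding and upserting to Pinecone) behind its own libraries.

```python
def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping word-window chunks.

    Each chunk would then be embedded and upserted into the Pinecone
    vector database, ready for context retrieval at query time.
    """
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for i in range(0, len(words), step):
        chunks.append(" ".join(words[i : i + chunk_size]))
        if i + chunk_size >= len(words):
            break
    return chunks
```

The overlap between consecutive chunks helps preserve context that would otherwise be cut at chunk boundaries, a common choice in RAG ingestion pipelines.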

Anyscale Endpoints is a fast, performant LLM API for building your AI-based applications. It provides a serverless service for serving and fine-tuning open LLMs such as Llama-2 and Mistral. Anyscale Endpoints now offers an embedding endpoint and lets you fine-tune even the largest Llama-2 model (70B), giving you flexibility with open LLMs through an API.
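As a sketch of what calling such an LLM API looks like, here is a minimal chat-completions request built with the Python standard library. The endpoint URL and model name are assumptions for illustration; check the Anyscale Endpoints documentation for the exact values and use your own API key.

```python
import json
import urllib.request

# Illustrative values -- consult the Anyscale Endpoints docs for the real ones
API_URL = "https://api.endpoints.anyscale.com/v1/chat/completions"
API_KEY = "YOUR_ANYSCALE_API_KEY"

payload = {
    "model": "meta-llama/Llama-2-70b-chat-hf",  # an open model name, assumed here
    "messages": [
        {"role": "system", "content": "You are a helpful chat assistant."},
        {"role": "user", "content": "What is Canopy?"},
    ],
    "temperature": 0.7,
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# Sending the request requires a valid API key:
# with urllib.request.urlopen(request) as response:
#     reply = json.load(response)["choices"][0]["message"]["content"]
```

In a RAG application, the user message would be augmented with the context retrieved from Pinecone before the request is sent.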