Building an LLM open source search engine in 100 lines using LangChain and Ray

By Waleed Kadous   

This is part 1 of a blog series. In this blog, we’ll introduce you to LangChain and Ray Serve and how to use them to build a search engine using LLM embeddings and a vector database. In future parts, we will show you how to turbocharge embeddings and how to combine a vector database and an LLM to create a fact-based question answering service. Additionally, we will optimize the code and measure performance: cost, latency and throughput.

In this blog, we'll cover:

  • An introduction to LangChain and why it’s awesome.

  • An explanation of how Ray complements LangChain by: 

    • Showing how, with a few minor changes, we can speed up parts of the process by a factor of 4x or more

    • Making LangChain’s capabilities available in the cloud using Ray Serve

    • Using self-hosted models by running Ray Serve, LangChain and the model all in the same Ray cluster without having to worry about maintaining individual machines.

Introduction

Ray is a very powerful framework for ML orchestration, but with great power comes voluminous documentation. 120 megabytes in fact. How can we make that documentation more accessible?

Answer: make it searchable! Creating your own high-quality search used to be hard, but with LangChain we can build it in about 100 lines of code.

This is where LangChain comes in. LangChain provides an amazing suite of tools for everything around LLMs. It’s kind of like HuggingFace, but specialized for LLMs. There are tools (chains) for prompting, indexing, generating, and summarizing text. LangChain is an amazing tool on its own, but using Ray with it makes it even more powerful. In particular, Ray can:

  • Simply and quickly help you deploy a LangChain service. 

  • Rather than relying on remote API calls, allow chains to run co-located with, and auto-scale alongside, the LLM itself. This brings all the advantages we discussed in a previous blog post: lower cost, lower latency, and control over your data.

Building the LangChain index

[Figure: Build a document index easily with Ray and LangChain]

First we will build the index via the following steps. 

  1. Download the content we want to index locally. 

  2. Read the content and cut it into tiny little pieces (about a sentence each). This is because it is easier to match queries against pieces of a page rather than the whole page.

  3. Use the Sentence Transformers library from HuggingFace to generate a vector representation of each sentence. 

  4. Store those vectors in a vector database (we use FAISS, but you could use whatever you like). 

The amazing thing about this code is how simple it is (see here). As you will see, thanks to LangChain, all the heavy lifting is done for us. Let’s pick a few excerpts. 

Assuming we’ve downloaded the Ray docs, this is all we have to do to read all the docs in:

from langchain.document_loaders import ReadTheDocsLoader

loader = ReadTheDocsLoader("docs.ray.io/en/master/")
docs = loader.load()

The next step is to break each document into little chunks. LangChain uses text splitters to do this, so all we have to do is create a splitter and call it:

from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(chunk_size=300)  # chunk size is illustrative; tune it for your corpus
chunks = text_splitter.create_documents(
    [doc.page_content for doc in docs], 
    metadatas=[doc.metadata for doc in docs])

We want to preserve which URL each chunk originally came from, so we retain the metadata along with these documents. 

Now that we have the chunks, we can embed them as vectors. LLM providers offer APIs for doing this remotely (and this is how most people use LangChain), but with just a little bit of glue we can download a Sentence Transformers model from HuggingFace and run it locally (inspired by LangChain’s support for llama.cpp). Here’s the glue code.
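
As a rough sketch of that glue (this assumes the sentence-transformers package and LangChain’s Embeddings base class; the model name matches the one used later in this post), the wrapper might look like this:

from typing import List

from langchain.embeddings.base import Embeddings
from sentence_transformers import SentenceTransformer

class LocalHuggingFaceEmbeddings(Embeddings):
    def __init__(self, model_id: str):
        # Download the model from the HuggingFace Hub and run it locally.
        self.model = SentenceTransformer(model_id)

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # Encode a batch of document chunks into dense vectors.
        return self.model.encode(texts).tolist()

    def embed_query(self, text: str) -> List[float]:
        # Encode a single query string.
        return self.model.encode(text).tolist()

With this in place, LocalHuggingFaceEmbeddings can be handed to FAISS below and reused later when we shard the indexing work.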

By doing so, we reduce latency, stay on open source technologies, and don’t need a HuggingFace key or to pay for API usage. 

Finally, now that we can compute embeddings, we can use a vector database (in this case FAISS) to store them. Vector databases are optimized for fast searches in high-dimensional spaces. Again, LangChain makes this effortless.

from langchain.vectorstores import FAISS

embeddings = LocalHuggingFaceEmbeddings('multi-qa-mpnet-base-dot-v1')
db = FAISS.from_documents(chunks, embeddings)
db.save_local(FAISS_INDEX_PATH)

And that’s it. The code for this is here. Now we can build the store.

% python build_vector_store.py

This takes about 8 minutes to execute. Most of that time is spent doing the embeddings. Of course, it’s not a big deal in this case, but imagine if you were indexing hundreds of gigabytes instead of hundreds of megabytes.

Accelerating indexing using Ray

[Note: This is a slightly more advanced topic and can be skipped on first reading. It just shows how we can do the indexing 4x to 8x more quickly.]

How can we speed up indexing? The great thing is that embedding is easy to parallelize. What if we: 

  1. Sliced the list of chunks into 8 shards. 

  2. Embedded each of the 8 shards separately.

  3. Merged the shards back together. 

[Figure: Build a document index 4-8x faster with Ray]

One key thing to realize is that embedding is GPU-accelerated, so if we want to split the work this way, we need 8 GPUs. Thanks to Ray, those 8 GPUs don’t have to be on the same machine, but even on a single machine there are significant advantages to using Ray. And you don’t have to take on the complexity of setting up a Ray cluster: all you need to do is pip install ray[default] and then import ray.
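
As a minimal sketch of that setup (nothing here beyond what the paragraph above describes):

import ray

# With no arguments, this starts a local Ray runtime on the current machine;
# to join an existing cluster you would pass its address instead, e.g. ray.init(address="auto").
ray.init()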

This requires some minor surgery to the code. Here’s what we have to do.

First, create a task that creates the embedding model and then uses it to index a shard. Note the Ray decorator, which tells Ray that each task will need a whole GPU.

# Each invocation of this remote task reserves one whole GPU.
@ray.remote(num_gpus=1)
def process_shard(shard): 
    embeddings = LocalHuggingFaceEmbeddings('multi-qa-mpnet-base-dot-v1')
    result = FAISS.from_documents(shard, embeddings)
    return result

Next, split the workload into shards. NumPy to the rescue! This is a single line: 

shards = np.array_split(chunks, db_shards)

Then, create one task for each shard and wait for the results.

futures = [process_shard.remote(shards[i]) for i in range(db_shards)]
results = ray.get(futures)

Finally, let’s merge the shards together. We do this using simple linear merging. 

db = results[0]
for i in range(1, db_shards):
    db.merge_from(results[i])

Here’s what the sped-up code looks like.

You might be wondering: does this actually work? We ran some tests on a g4dn.metal instance with 8 GPUs. The original code took 313 seconds to create the embeddings; the new code took 70 seconds. That’s about a 4.5x improvement. There are still some one-time overheads for creating tasks, setting up the GPUs, and so on, and these shrink as the data grows. For example, in a simple test with 4 times the data, we reached around 80% of the theoretical maximum performance (i.e. 6.5x faster versus the theoretical maximum of 8x from the 8 GPUs). 

We can use the Ray Dashboard to see how hard those GPUs are working. Sure enough, they’re all close to 100% utilization running the process_shard method we just wrote.

[Figure: The Ray Dashboard shows GPU utilization maxed out across all instances]

It turns out merging vector databases is pretty fast, taking only 0.3 seconds for all 8 shards to be merged. 

Serving

[Figure: Serve search queries with Ray and LangChain]

Serving is another area where the combination of LangChain and Ray Serve shows its power. This is just scratching the surface: we’ll explore amazing capabilities like independent auto scaling and request batching in our next blog post in the series.

The steps required to do this are:

  1. Load the FAISS database we created and instantiate the embedding model.

  2. Start using FAISS to do similarity searches.

Ray Serve makes this magically easy. Ray Serve uses a “deployment” to wrap a simple Python class. The __init__ method does the loading, and __call__ is what actually does the work. Ray takes care of connecting the deployment to the internet, bringing up the HTTP service, and so on.

Here’s a simplified version of the code:

from typing import List

from langchain.vectorstores import FAISS
from ray import serve
from starlette.requests import Request

@serve.deployment
class VectorSearchDeployment:
    def __init__(self):
        self.embeddings = … 
        self.db = FAISS.load_local(FAISS_INDEX_PATH, self.embeddings)

    def search(self, query): 
        results = self.db.max_marginal_relevance_search(query)
        retval = <some string processing of the results>
        return retval

    async def __call__(self, request: Request) -> List[str]:
        return self.search(request.query_params["query"])

deployment = VectorSearchDeployment.bind()

That’s it!

Let’s now start this service with the command line (of course Serve has more deployment options than this, but this is an easy way):

% serve run serve_vector_store:deployment

Now we can write a simple Python script to query the service and get relevant results (it’s just a web server running on port 8000).

import requests
import sys
query = sys.argv[1]
response = requests.post(f'http://localhost:8000/?query={query}')
print(response.content.decode())

And now let’s try it out:

$ python query.py 'Does Ray Serve support batching?'
From http://docs.ray.io/en/master/serve/performance.html

You can check out our microbenchmark instructions
to benchmark Ray Serve on your hardware.
Request Batching#
====

From http://docs.ray.io/en/master/serve/performance.html

You can enable batching by using the ray.serve.batch decorator. Let’s take a look at a simple example by modifying the MyModel class to accept a batch.
from ray import serve
import ray
@serve.deployment
class Model:
    def __call__(self, single_sample: int) -> int:
        return single_sample * 2
====

From http://docs.ray.io/en/master/ray-air/api/doc/ray.train.lightgbm.LightGBMPredictor.preferred_batch_format.html

native batch format.
DeveloperAPI: This API may change across minor Ray releases.
====

From http://docs.ray.io/en/master/serve/performance.html

Machine Learning (ML) frameworks such as Tensorflow, PyTorch, and Scikit-Learn support evaluating multiple samples at the same time.
Ray Serve allows you to take advantage of this feature via dynamic request batching.
====

Conclusion

We showed in the above code just how easy it is to build key components of an LLM-based search engine and serve its responses to the entire world by combining the power of LangChain and Ray Serve. And we didn’t have to deal with a single pesky API key! 

Tune in for Part 2, where we will show how to turn this into a ChatGPT-like answering system. We’ll use open source LLMs like Dolly 2.0 to do that. 

And finally, we’ll share Part 3, where we’ll talk about scalability and cost. The above is fine for a few hundred queries per second, but what if you need to scale to a lot more? And are the claims about latency correct?

Next Steps

See part 2 here.

Review the code and data used in this blog in the following GitHub repo.

See our earlier blog series on solving Generative AI infrastructure with Ray.

If you are interested in learning more about Ray, see Ray.io and Docs.Ray.io.

To connect with the Ray community join #LLM on the Ray Slack or our Discuss forum.

If you are interested in our Ray hosted service for ML Training and Serving, see Anyscale.com/Platform and click the 'Try it now' button.

Ray Summit 2023: If you are interested in learning more about how Ray can be used to build performant and scalable LLM applications and to fine-tune/train/serve LLMs on Ray, join Ray Summit on September 18-20th! We have a great set of keynote speakers, including John Schulman from OpenAI and Aidan Gomez from Cohere, community and tech talks about Ray, as well as practical training focused on LLMs.
