Building an LLM-powered GitHub bot to improve your pull requests

By Max Pumperla   

Update June 2024: Anyscale Endpoints (Anyscale's LLM API Offering) and Private Endpoints (self-hosted LLMs) are now available as part of the Anyscale Platform. Click here to get started on the Anyscale platform.

[ Code | Install the bot | Anyscale Endpoints | Ray Docs ]  · 30 min read


Technical writing is hard. Developers want to focus on their core coding responsibilities, but it’s inevitable for technical experts to also participate in documentation efforts. Often, nobody else could even write the content in question. This means that many pull requests contain both code and plain English (or other natural languages). At Anyscale, we’ve embraced the Google developer doc style guide, and recommend using Vale to enforce it. But in practice it’s easy to slip up and introduce minor syntax or grammar errors. In fact, if you’re focusing mostly on code quality, writing quality might simply not be top of mind, and ineffective or misleading phrasing can be the result.

By developing a bot that helps spot mistakes and inconsistencies in an automated fashion, developers get valuable feedback – directly in their PRs on GitHub. Let’s have a look at what Docu Mentor does for you in practice.

In this guide you’ll learn to:

  • 💻 Build an LLM-powered bot, using Anyscale Endpoints and Anyscale Services, from scratch.

  • 🚀 Scale out the bot’s workload efficiently using Ray.

  • ✅ Evaluate the bot’s performance with the help of GPT-4 as evaluator by writing automated tests.

  • 🔀 Learn to leverage the GitHub API to hook into your pull requests and issues.

  • 📦 Deploy your bot and make it publicly available.

  • 💡 Create, deploy and share your own GitHub bots.

Introducing Docu Mentor

[Image: documentor-image]

Two months ago we introduced Anyscale Endpoints as a cost-effective way to work with powerful open-source LLMs. We offer Endpoints to our customers, but we also use it internally to run experiments and constantly improve our own products. For instance, we concluded that you can run summarization tasks as well as GPT-4 does with much cheaper Endpoints models, and we fine-tuned models available on Endpoints for domain-specific tasks.

In this post we’re building a GitHub bot called “Docu Mentor” that you can mention in your pull requests to help you improve your writing. The bot is already up, and you can install it from GitHub in just a few clicks. The code for this whole app clocks in at just 250 lines of code and is freely available on GitHub. You can fork it, modify it, and build your own app in no time, and we’ll show you exactly how in this post.

Docu Mentor in action

After you install the bot in your repo, whenever you open a new pull request, the @docu-mentor bot will write a friendly helper message as a GitHub issue comment into your PR.

[Image: documentor-pull-request]

You can trigger the bot by creating a comment of your own saying “@docu-mentor run”. The bot then analyzes all the changes you’ve made in this PR and gives you suggestions on how to improve them. Often, most of the files in your PR will just be code changes, so if you’re seeking feedback on a single file or a specific folder, you can point Docu Mentor to those files. Here’s an example of me asking the bot to comment on the changes I’ve made in my README.md:

[Image: documentor-bot]

After a couple of seconds, Docu Mentor will get back to you with the results of its analysis. You can then incorporate that feedback into your PR to the extent you want.

[Image: docu-mentor-bot2]

How Docu Mentor works

At the core of our bot is a call to the codellama/CodeLlama-34b-Instruct-hf model, which we run on Anyscale Endpoints. Our Python code uses the “openai” package from OpenAI. In fact, Anyscale Endpoints is a drop-in replacement for the OpenAI API, provided you set the right API key and endpoint URL as follows:

import os
import openai

ANYSCALE_API_ENDPOINT = "https://api.endpoints.anyscale.com/v1"
openai.api_base = ANYSCALE_API_ENDPOINT
# Set this env variable to your Anyscale Endpoints API KEY
openai.api_key = os.environ.get("ANYSCALE_API_KEY")

To query any LLM, we have to craft a prompt and pass it to the LLM. This is the main instruction for the Docu Mentor bot:

SYSTEM_CONTENT = """You are a helpful assistant.
Improve the following <content>. Criticise syntax, grammar, punctuation, style, etc.
Recommend common technical writing knowledge, such as used in Vale
and the Google developer documentation style guide.
If the content is good, don't comment on it.
You can use GitHub-flavored markdown syntax in your answer.
"""

PROMPT = """Improve this content.
Don't comment on file names or other meta data, just the actual text.
The <content> will be in JSON format and contains file name keys and text values. Make sure to give very concise feedback per file.
"""

We’ve set this instruction up so that the system takes pairs of file names and content, as this is the natural way to represent changes in pull requests. We’ll take care of the intricacies of the GitHub API much later. For now, consider the following “mentor” function as the central building block of our app:

def mentor(
        content,
        model="codellama/CodeLlama-34b-Instruct-hf",
        system_content=SYSTEM_CONTENT,
        prompt=PROMPT
    ):
    # Query the model with the system instruction and the content to review.
    result = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": system_content},
            {"role": "user", "content": f"This is the content: {content}. {prompt}"},
        ],
        temperature=0,  # deterministic output for reproducible suggestions
    )
    # Track token usage so we can report it back in the PR comment.
    usage = result.get("usage")
    prompt_tokens = usage.get("prompt_tokens")
    completion_tokens = usage.get("completion_tokens")
    content = result["choices"][0]["message"]["content"]

    return content, model, prompt_tokens, completion_tokens

As you can see, this code snippet is leveraging the OpenAI API, but because we set an Anyscale Endpoints URL and API key, we can query faster and cheaper models such as CodeLlama here.
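For illustration, here’s a minimal, hypothetical way to call “mentor” locally. The file name and text are made up, and the snippet assumes the Anyscale Endpoints configuration from above:

# Hypothetical input: file names mapped to the text that changed.
sample_content = {
    "README.md": "This libary lets you deploys models in one click.",
}

feedback, model, prompt_tokens, completion_tokens = mentor(sample_content)
print(feedback)  # Docu Mentor's suggestions, formatted in GitHub-flavored markdown
print(prompt_tokens, completion_tokens)  # token usage reported by the endpoint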

Parallelizing LLM queries efficiently with Ray

We have not explicitly discussed yet what the “content” is that we want to send to the “mentor” function. Our goal is to analyze pull requests. PRs are expressed as differences (“diffs”) between the branch in which you made changes and the base branch that you want to merge your changes to. Normally, a PR touches many files, so it’s natural to analyze diffs per file. Now, if your PR not only changes many files, but also touches many lines of code, naively passing the complete diff as “content” to Docu Mentor will not be a good idea.

To address this issue, we can instead send one query per file diff to “mentor” by leveraging Ray. This improves response quality and speed, as the number of input tokens per request stays relatively small. We can do this quite easily by first creating a Ray task called “mentor_task” that simply passes through to our previous “mentor” function; all it takes is adding the “@ray.remote” decorator.

import ray

@ray.remote
def mentor_task(content, model, system_content, prompt):
    return mentor(content, model, system_content, prompt)

With that, we can define a distributed version of “mentor” by calling “mentor_task.remote(content, ...)” on the per-file content of a PR like this:

def ray_mentor(
        content: dict,
        model="codellama/CodeLlama-34b-Instruct-hf",
        system_content=SYSTEM_CONTENT,
        prompt="Improve this content."
    ):
    # Launch one remote "mentor" task per file and collect the results.
    futures = [
        mentor_task.remote(v, model, system_content, prompt)
        for v in content.values()
        ]
    suggestions = ray.get(futures)
    content = {k: v[0] for k, v in zip(content.keys(), suggestions)}
    prompt_tokens = sum(v[2] for v in suggestions)
    completion_tokens = sum(v[3] for v in suggestions)

    # Aggregate per-file suggestions into a single response.
    print_content = ""
    for k, v in content.items():
        print_content += f"{k}:\n\t{v}\n\n"
    logger.info(print_content)

    return print_content, model, prompt_tokens, completion_tokens

Note how “ray_mentor” is just splitting up files and remotely executing “mentor_task” on individual file content. The results then need to be aggregated to fit into a single response that we can write to a GitHub comment. Here’s a visualization of this process:

[Image: docu-mentor-process]
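To sketch how the distributed version is used, the snippet below (with made-up file contents) starts a local Ray cluster and fans out one task per file. It assumes the module-level logger defined later in the app:

import ray

ray.init()  # start a local Ray cluster; on Anyscale this is already running

pr_changes = {  # hypothetical per-file content extracted from a PR
    "README.md": "This libary lets you deploys models in one click.",
    "docs/guide.md": "Their are two ways to configure the service.",
}

feedback, model, prompt_tokens, completion_tokens = ray_mentor(pr_changes)
print(feedback)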

Evaluating the approach

Before we throw any real user data at our “mentor” query, we have to investigate whether this model and prompt give us the results we want. But what are realistic expectations for this system, and what makes our bot’s answers good or suitable? After all, we are not aware of any grammar- or style-correcting benchmarks we could test our system against. Human judgment, as is often the case with applications like this, is the ultimate litmus test, but extensive experimentation with human feedback is expensive and, by definition, impossible to automate.

An interesting approach that can be fully automated is the following:

  • Pick the strongest available general-purpose LLM on the market, currently GPT-4.

  • Use this model to generate a collection of sentences and paragraphs that have grammatical, syntactic, or stylistic deficiencies.

  • Use this collection as test fixtures to define a set of tests for our bot:

  • First, run our “mentor” function against this collection and return the results.

  • Next, use GPT-4 as an evaluator of the results, by asking this stronger LLM if the proposed “mentor” results are considered good.

  • Run the tests and generate a report that evaluates how much GPT-4 agrees with the results of our bot.

If a certain threshold of agreement (X% of test sentences yield good results) is surpassed, consider the test as passing and the bot as strong enough.

[Image: automated-process]
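As a rough illustration of what such a test could look like, here’s a minimal sketch. The fixture sentences and the agreement check are made up for this example, and the GPT-4 call assumes a client pointed at OpenAI (not Anyscale Endpoints) with a valid key:

import openai

# Hypothetical fixtures: flawed sentences generated with GPT-4 beforehand.
FLAWED_SENTENCES = [
    "This libary lets you deploys models in one click.",
    "Their are two ways to configure the service.",
]

def gpt4_agrees(original, suggestion):
    """Ask GPT-4 whether the suggested correction is good. Returns True/False."""
    judgement = openai.ChatCompletion.create(
        model="gpt-4",  # assumes openai.api_base/api_key point at OpenAI here
        messages=[
            {"role": "system", "content": "You evaluate technical writing suggestions."},
            {"role": "user", "content": f"Original: {original}\nSuggestion: {suggestion}\n"
                                        "Is the suggestion a good correction? Answer YES or NO."},
        ],
        temperature=0,
    )
    return "YES" in judgement["choices"][0]["message"]["content"].upper()

def test_mentor_agreement(threshold=0.9):
    results = [mentor({"fixture.md": s})[0] for s in FLAWED_SENTENCES]
    agreement = sum(
        gpt4_agrees(s, r) for s, r in zip(FLAWED_SENTENCES, results)
    ) / len(FLAWED_SENTENCES)
    assert agreement >= threshold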

We implemented this approach as unit tests in the docu-mentor GitHub repo. In our experience, GPT-4 can sometimes be overly critical in this scenario, which is why it’s not easy to set a good “agreement” threshold. Frankly speaking, it’s not trivial for two humans to agree on matters of style either. But judging from the evaluation results, Docu Mentor passes GPT-4’s critical review at least 90% of the time when correcting flawed sentences, and around 70% of the time on more complex paragraphs containing stylistic slips. In our opinion, running “mentor” with Llama-2 finds all grave errors, including typos, wrong word order, incorrect grammar, and blatantly wrong formulations, with ease. While GPT-4 sometimes deviates in judgment, we find Docu Mentor to be useful in practice, especially as a safeguard that catches 90% of the mistakes you don’t want in your docs and codebase.

The components of our GitHub bot

Now that we’ve discussed the methodology of Docu Mentor in detail, let’s return to building and deploying the actual bot. Before we dive deeper into the code, let’s first give you an overview of what the bot consists of and how we’re going to deploy it. 

  • At its core, building a GitHub app means writing a web application that can handle GitHub Webhooks. That’s ultimately just a service that reacts to events from GitHub (such as “pull request created” or “issue comment updated”).

  • We build such a service in Python and deploy it in a scalable manner on Anyscale using Ray Serve.

  • This service handles all business logic and talks to Anyscale Endpoints via the “mentor” function we defined earlier. 

  • We make it so that when you ask the bot for help, a GitHub webhook is triggered. Our Anyscale Service reacts to that hook and generates a response, which it then writes back to the same thread in your pull request by using the GitHub API.

  • To use the bot, simply talk to it in your pull requests, for instance by saying “@docu-mentor run”. The bot will then simply create a new comment on GitHub that addresses your issues.

The following diagram summarizes this execution plan visually:

[Image: execution-plan]

How to implement our bot

Let’s start with a very simple Ray Serve app that uses FastAPI for its API definition. We first define a skeleton that will serve as the basis of our bot. Assuming you’ve installed Ray Serve and FastAPI (e.g. with pip install fastapi "ray[serve]") and stored the following code in a file called “main.py”, you can start a local web service on localhost:8000 by running the command “serve run main:bot”:

from fastapi import FastAPI, Request
from ray import serve

app = FastAPI()

async def handle_webhook(request: Request):
    return {}  # This will do all the work

@serve.deployment(route_prefix="/")
@serve.ingress(app)
class ServeBot:
    @app.get("/")
    async def root(self):
        return {"message": "Docu Mentor reporting for duty!"}

    @app.post("/webhook/")
    async def handle_webhook_route(self, request: Request):
        return await handle_webhook(request)

# "serve run main:bot" picks up this bound deployment.
bot = ServeBot.bind()

Our app consists of two routes: the root (“/”) that we’ll simply use for smoke-testing our application, and the “/webhook/” route that does all the heavy lifting. To implement the “handle_webhook” function, note that we want our bot to react to two events:

  1. When a new PR gets opened by a user and the bot is installed, we want our bot to let us know of its existence and offer help.

  2. When a user mentions the bot with instructions, we want it to answer in the thread on GitHub directly below the user’s question.

Let’s tackle these steps one after the other.

Step 1: Make your bot offer help

For the purpose of this blog post, we will not worry about authentication and focus on the core application. If you’re interested in that aspect, you can check out the Docu Mentor GitHub repo, which shows you how to use the GitHub app ID and the private key generated by GitHub to securely create a JWT. That token is then used to create a GitHub installation access token, so that Docu Mentor can authenticate as a GitHub app installation on your repos. This makes sure that, when you call “@docu-mentor” commands on a PR, GitHub is actually allowed to send your project information (we need “pull_request” and “issue” access) to the Docu Mentor backend via webhook.
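As a rough, simplified sketch of how that token exchange can work (not the exact code from the repo), assuming the app ID, the private key read from the downloaded .pem file, and the installation ID that GitHub includes in each webhook payload:

import time
import jwt      # pyjwt, listed in the service dependencies
import httpx

def generate_jwt(app_id, private_key):
    # Short-lived JWT signed with the app's private key (RS256), as required by GitHub.
    now = int(time.time())
    payload = {"iat": now - 60, "exp": now + 600, "iss": app_id}
    return jwt.encode(payload, private_key, algorithm="RS256")

async def get_installation_access_token(jwt_token, installation_id):
    # Exchange the app JWT for an installation access token scoped to the repos
    # where Docu Mentor is installed.
    async with httpx.AsyncClient() as client:
        response = await client.post(
            f"https://api.github.com/app/installations/{installation_id}/access_tokens",
            headers={
                "Authorization": f"Bearer {jwt_token}",
                "Accept": "application/vnd.github+json",
            },
        )
        return response.json()["token"]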

Coming back to the app itself, you can easily figure out if a PR has been opened by checking if “pull_request” is a key in the request data and an “opened” action has been triggered. If that’s the case, we can asynchronously evaluate this incoming request (using the “httpx” library) and write back a greeting to the “issue_url” of the PR.

import sys
import string
import httpx
import logging
import os

from fastapi.responses import JSONResponse

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger("Docu Mentor")

@app.post("/webhook/")
async def handle_webhook(request: Request):
    data = await request.json()

    # "installation_access_token" comes from the GitHub app authentication
    # we skipped above; see the repo for details.
    headers = {
        "Authorization": f"token {installation_access_token}",
        "User-Agent": "docu-mentor-bot",
        "Accept": "application/vnd.github.VERSION.diff",
    }

    # Ensure PR exists and is opened
    if "pull_request" in data.keys() and (
        data["action"] in ["opened", "reopened"]
    ):
        pr = data.get("pull_request")

        # GREETING is the friendly helper message, defined as a constant in the repo.
        async with httpx.AsyncClient() as client:
            await client.post(
                f"{pr['issue_url']}/comments",
                json={"body": GREETING}, headers=headers
            )

        return JSONResponse(content={}, status_code=200)

This is the full implementation of the first step!

Step 2: Make the bot analyze your PRs

The next step is slightly more involved and proceeds in the following substeps:

  • For each webhook event from GitHub, check if an issue was created or edited.

  • Make sure the issue in question belongs to a PR.

  • If so, understand if our bot is mentioned in the issue comment body.

  • If the bot is mentioned in the right way (we implement several commands for it), retrieve all data for the pull request in question.

  • On demand, the bot should analyze the changes proposed in this PR.

  • “Docu Mentor” analyzes grammar and style of the writing in your PR, but you can easily imagine a similar bot that criticizes your code as well.

  • We’re using the LLM “meta-llama/Llama-2-70b-chat-hf” that’s available on Anyscale Endpoints, through our drop-in-replacement OpenAI Python integration.

    # Continuation of the "handle_webhook" function from step 1.
    # Check if the event is a new or modified issue comment.
    if "issue" in data.keys() and data.get("action") in ["created", "edited"]:
        issue = data["issue"]

        # Check if the issue is a pull request
        if "/pull/" in issue["html_url"]:
            pr = issue.get("pull_request")

            # Get the comment body
            comment = data.get("comment")
            comment_body = comment.get("body")

            # Remove all whitespace characters except regular spaces
            comment_body = comment_body.translate(
                str.maketrans("", "", string.whitespace.replace(" ", ""))
            )

            # Don't react if the bot talks about itself
            author_handle = comment["user"]["login"]

            # Check if the bot is mentioned in the comment
            if (
                author_handle != "docu-mentor[bot]" and "@docu-mentor run" in comment_body
            ):
                async with httpx.AsyncClient() as client:
                    # Fetch diff from GitHub
                    url = get_diff_url(pr)
                    diff_response = await client.get(url, headers=headers)
                    diff = diff_response.text

                    files_with_lines = parse_diff_to_line_numbers(diff)

                    # Get head branch of the PR
                    headers["Accept"] = "application/vnd.github.full+json"
                    head_branch = await get_pr_head_branch(pr, headers)

                    # Get files from head branch
                    head_branch_files = await get_branch_files(pr, head_branch, headers)

                    # Enrich diff data with context from the head branch.
                    context_files = get_context_from_files(
                        head_branch_files, files_with_lines
                    )

                    # Run mentor functionality
                    content, model, prompt_tokens, completion_tokens = \
                        ray_mentor(context_files) if ray.is_initialized() \
                        else mentor(context_files)

                    # Let's comment on the PR issue with the LLM response
                    await client.post(
                        f"{comment['issue_url']}/comments",
                        json={
                            "body": f":rocket: Docu Mentor finished "
                            + "analysing your PR! :rocket:\n\n"
                            + "Take a look at your results:\n"
                            + f"{content}\n\n"
                            + "This bot is proudly powered by "
                            + "[Anyscale Endpoints](https://app.endpoints.anyscale.com/).\n"
                            + f"It used the model {model}, used {prompt_tokens} prompt tokens, "
                            + f"and {completion_tokens} completion tokens in total."
                        },
                        headers=headers,
                    )

Note that, instead of using the diffs of the PR directly, we retrieve context information for it. First, we find the line numbers affected by the pull request and then get the full lines from the head branch of the pull request. This is much cleaner, as the raw diffs can be outright useless in certain situations. If a PR just changes one word, we still want Docu Mentor to critique the style of the full sentence containing that word.
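For a sense of what such a helper can look like, here is a simplified sketch of “parse_diff_to_line_numbers” (the real implementation lives in the repo and may differ). It walks the unified diff and records, per file, the affected line numbers on the new side of each hunk:

import re

def parse_diff_to_line_numbers(diff):
    """Map each changed file to the line numbers (on the new side) its diff touches."""
    files_with_lines = {}
    current_file, line_number = None, 0
    for line in diff.splitlines():
        if line.startswith("+++ b/"):
            current_file = line[len("+++ b/"):]
            files_with_lines[current_file] = []
        elif line.startswith("@@"):
            # Hunk header such as "@@ -12,4 +15,6 @@": take the start line of the new side.
            match = re.search(r"\+(\d+)", line)
            if match:
                line_number = int(match.group(1))
        elif current_file and line.startswith("+") and not line.startswith("+++"):
            # An added line: remember its position and advance the counter.
            files_with_lines[current_file].append(line_number)
            line_number += 1
        elif current_file and line.startswith(" "):
            # Unchanged context lines also advance the new-side counter.
            line_number += 1
    return files_with_lines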

In essence, this is the full code used by our bot. We omitted some details about authentication, some helper functions, and nuances about the bot commands, but this is the gist of it. To set the bot up for production, all that’s left is to deploy it on a server and make it available on the GitHub app marketplace. Let’s start with deploying to the Anyscale platform.

Deploying the app on Anyscale

Deploying a Ray Serve app (like the one we just created in “main.py”) on Anyscale is simple and works in three steps:

  • Set up a new Anyscale Workspace first. In our case, we need the bot source code, and we need to set all credentials (like an Anyscale Endpoints API key). In your workspace, simply run:

git clone https://github.com/ray-project/docu-mentor.git
cd docu-mentor
cp .env_template .env 
# Set all credentials before you continue!
  • The second step is to define an Anyscale Service YAML file. We already created one in the docu-mentor repo, but we put it here for the sake of completeness. Essentially, you just need to define a name, the right cluster environment, your dependencies, and the main entrypoint for your Ray Serve app.

# service.yaml
name: docu-mentor-bot
cluster_env: default_cluster_env_2.6.3_py39:1
ray_serve_config:
    import_path: main:entrypoint
    runtime_env:
        working_dir: .
        pip: [fastapi, uvicorn, httpx, python-dotenv, openai, pyjwt, cryptography]
config:
  access:
    use_bearer_token: False
  • The last step is to roll out the service. If you want to update the Service after initial deployment, simply run the same command again:

anyscale service rollout -f service.yaml

After you run this last command, our app will launch as an Anyscale Service and you can retrieve its Service URL directly from Anyscale. The very last thing to do is to use this URL to create a GitHub app. We’ll link to the necessary resources directly on GitHub, but we’ll also guide you through how it’s done.
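As a quick sanity check before registering the GitHub app, you can hit the root route of the deployed service. A minimal sketch, with the Service URL kept as a placeholder you substitute yourself:

import httpx

SERVICE_URL = "https://<your-anyscale-service-url>"  # placeholder from the Anyscale console

response = httpx.get(f"{SERVICE_URL}/")
print(response.json())  # expected: {"message": "Docu Mentor reporting for duty!"}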

Registering the bot as a GitHub app

To make our bot available to the public and use it on GitHub, we first need to register a new GitHub app. Each app needs a name; ours is “Docu Mentor”. This translates to the following app URL for our bot: https://github.com/apps/docu-mentor. In the app settings we need to configure a couple of things:

  • We have to set an app name, a description, a public Homepage URL (e.g. the link to the GitHub repository of your bot) and, most importantly, the Webhook URL. The latter has to be of the form https://<your-anyscale-service-url>/webhook/ to be correct. The webhook must be set to “Active”. Make sure to save all changes at the end.

  • In the general app settings, you can also generate a private key for your app. Do so and download the .pem file. Then read out the private key and put it in your .env file in your Anyscale Workspace as PRIVATE_KEY.

  • You will also see your GitHub app ID, which you should store as APP_ID in your .env file.

  • In the “Permissions & events” tab of the app settings, make sure to subscribe to all “Issue”, “Issue comment” and “Pull request” events, as otherwise the bot can’t read and write that information.

  • Finally, in the “Advanced” section, make the app public.

That’s it! If you follow these steps, you can create an app just like Docu Mentor on your own.

[Image: subscribe-events]

Conclusion

In this post, we’ve shown you how to create and deploy your own bots on GitHub. We built Docu Mentor to analyze the writing style in your PRs, but you could use this scaffold to do many other useful things. For example, you could let an LLM analyze weaknesses in your test coverage, or point out issues in your Python programs. If you enjoyed this post and like Docu Mentor, please help us improve it. We’d like to hear from you. In any case, feel free to fork and extend this solution, and share your bots with the community.

Ready to try Anyscale?

Access Anyscale today to see how companies using Anyscale and Ray benefit from rapid time-to-market and faster iterations across the entire AI lifecycle.