ChatGPT with LangChain


What we're going to cook

In this recipe, we will build a simple version of ChatGPT using the LangChain framework and its ChatOpenAI model.

To keep it simple, we won't add memory to this Chat Model. However, you will be able to find a full example with a basic memory in our examples repository.

Here's the final result:

[Screenshot: final result of what you're going to build]

Let's code

Init AgentLabs

First, we'll init the AgentLabs SDK, create our agent, and open the connection with the server.
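A minimal sketch of that setup could look like the following (the constructor fields, helper names, and environment variable names here are assumptions; the full example has the reference version):

```python
import os

from agentlabs.project import Project

# Assumption: credentials are read from environment variables.
project = Project(
    project_id=os.environ["AGENTLABS_PROJECT_ID"],
    agentlabs_url=os.environ["AGENTLABS_URL"],
    secret=os.environ["AGENTLABS_SECRET"],
)

# Get a handle to the agent created in the AgentLabs console.
agent = project.agent(os.environ["AGENTLABS_AGENT_ID"])

# Open the connection with the server.
project.connect()
```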


Looking at the full example, you will see we created a parse_env_or_raise() method. But you can handle the configuration variables however you want.
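For reference, a helper along those lines can be as simple as this (hypothetical sketch; the version in the examples repository may differ):

```python
import os


def parse_env_or_raise(name: str) -> str:
    """Return the value of an environment variable, failing loudly if it is unset."""
    value = os.environ.get(name)
    if not value:
        raise ValueError(f"Missing required environment variable: {name}")
    return value
```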


Prepare LangChain

Then, we'll init LangChain and the ChatOpenAI model. Let's import every dependency we need:
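These imports assume the LangChain package layout that was current when this recipe was written (newer releases moved ChatOpenAI into the separate langchain_openai package):

```python
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import HumanMessage, LLMResult, SystemMessage
```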

Now that we have imported our dependencies, let's init our model by adding the following line.
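```python
# streaming=True makes the model emit the response token by token.
# The OpenAI API key is read from the OPENAI_API_KEY environment variable.
llm = ChatOpenAI(streaming=True)
```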


Setting streaming to True lets us receive fragments of the response as they arrive, instead of waiting for the entire response to be available. You can find more info about streaming in the LangChain docs.

Now, we'll create a class that extends LangChain's BaseCallbackHandler to handle the stream fragments as they arrive.

What we want is to process every incoming token and forward it to the client.
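A sketch of such a handler is shown below. The AgentLabs stream helpers it relies on (create_stream, write, and end) are assumptions based on the behavior described after the code; check the full example for the exact API:

```python
from typing import Any, Dict, List


class AgentStreamHandler(BaseCallbackHandler):
    """Forwards LangChain streaming events to an AgentLabs stream."""

    def __init__(self, agent, conversation_id: str) -> None:
        self.agent = agent
        self.conversation_id = conversation_id
        self.stream = None

    def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) -> None:
        # On LLM start, open a stream so the client sees the answer being typed.
        self.stream = self.agent.create_stream(conversation_id=self.conversation_id)

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # Forward each token to the client as soon as it arrives.
        self.stream.write(token)

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        # On LLM end, close the stream.
        self.stream.end()
```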

This handler is pretty straightforward:

  • On LLM start, we create a stream for our agent

  • When we receive a token, we stream it using our agent

  • On LLM end, we close the stream for our agent

Handle incoming messages

We're mostly done! We initialized AgentLabs and configured LangChain; now we need to handle the incoming messages.

To do so, we'll use the on_chat_message method provided by AgentLabs.

This method takes a handler function as an argument. Let's define it.
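Here's a sketch of what it could look like (the IncomingChatMessage import path and the message fields, text and conversation_id, are assumptions; the full example has the reference version):

```python
from agentlabs.chat import IncomingChatMessage


def handle_message(message: IncomingChatMessage) -> None:
    # A first system message gives the model context on how to behave.
    messages = [
        SystemMessage(content="You are a helpful assistant. Answer as accurately as possible."),
        HumanMessage(content=message.text),
    ]

    # The callback instance streams the response back to this conversation.
    llm(messages, callbacks=[AgentStreamHandler(agent, message.conversation_id)])


project.on_chat_message(handle_message)
project.wait()  # keep the process alive so the agent keeps listening
```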

In this function, we handle the incoming message from the user and then pass it to the LLM.

We also pass it a first system message to give it some context, so it knows how to handle the user's input.

As a second argument, we give it an instance of the callback class we created earlier.

Now, every time a user sends a message, the LLM will receive it, and we'll stream the LLM responses back to the user.

Et voilà 🎉

Congrats, you created your own version of ChatGPT! You can retrieve the full example of this recipe here ☺️
