
ChatGPT with LangChain


You can retrieve the full example of this recipe here. 🎉

What we're going to cook

In this recipe, we will build a simple version of ChatGPT using the LangChain framework and its ChatOpenAI model.

To keep it simple, we won't add memory to this Chat Model. However, you will be able to find a full example with basic memory in our examples repository.

Here's the final result:

(Image: final result of what you're going to build)

Let's code

Init AgentLabs

First, we'll init the AgentLabs SDK and our agent, and open the connection with the server.

from agentlabs.agent import Agent
from agentlabs.chat import IncomingChatMessage, MessageFormat
from agentlabs.project import Project

if __name__ == "__main__":
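    # parse_env_or_raise() is a small helper from the full example that reads
    # the required configuration from environment variables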
    env = parse_env_or_raise()
    project = Project(
        project_id=env.project_id,
        agentlabs_url=env.agentlabs_url,
        secret=env.secret,
    )

    agent = project.agent(id=env.agent_id)

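    # Open the connection to AgentLabs and keep the process alive to receive events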
    project.connect()
    project.wait()

Don't forget to set your OPENAI_API_KEY environment variable if you want everything to work.

Looking at the full example, you will see we created a parse_env_or_raise() method. But you can handle the configuration variables the way you want.
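
For reference, here's a minimal sketch of what such a helper might look like. This is an assumption, not the code from the full example, and the environment variable names are made up:

from dataclasses import dataclass
import os

@dataclass
class Env:
    project_id: str
    agentlabs_url: str
    secret: str
    agent_id: str

def parse_env_or_raise() -> Env:
    def require(name: str) -> str:
        # Fail fast if a required configuration variable is missing
        value = os.environ.get(name)
        if not value:
            raise ValueError(f"Missing required environment variable: {name}")
        return value

    # The variable names below are illustrative only
    return Env(
        project_id=require("AGENTLABS_PROJECT_ID"),
        agentlabs_url=require("AGENTLABS_URL"),
        secret=require("AGENTLABS_SECRET"),
        agent_id=require("AGENT_ID"),
    )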

Prepare LangChain

Then, we'll init LangChain and the ChatOpenAI model. Let's import every dependency we need:

from typing import Any, Dict, List

from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema.messages import BaseMessage, HumanMessage, SystemMessage
from langchain.schema.output import LLMResult

Now that we have imported our dependencies, let's init our model by adding the following line.

llm = ChatOpenAI(streaming=True)

Setting streaming to True allows us to get fragments of the response as they arrive, instead of waiting for the entire response to be available. You can find more info about streaming in the LangChain docs.
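
To see what streaming gives you in isolation, here is a tiny standalone sketch (not part of the recipe) that just prints each fragment to stdout as it arrives; the callback class name and prompt are purely illustrative:

from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema.messages import HumanMessage

class StdoutStreamingCallback(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once per fragment instead of once per full response
        print(token, end="", flush=True)

demo_llm = ChatOpenAI(streaming=True)
demo_llm([HumanMessage(content="Say hello!")], callbacks=[StdoutStreamingCallback()])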

Now, we'll create a class that extends LangChain's BaseCallbackHandler to handle the stream fragments as they arrive. What we want is to process every incoming fragment and forward it to the client.

class AgentLabsStreamingCallback(BaseCallbackHandler):
    def __init__(self, agent: Agent, conversation_id: str):
        super().__init__()
        self.agent = agent
        self.conversation_id = conversation_id

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
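        # Open a streamed message in the conversation as soon as the LLM starts generating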
        self.stream = self.agent.create_stream(format=MessageFormat.MARKDOWN, conversation_id=self.conversation_id)

    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
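        # Forward each token to the client as it arrives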
        self.stream.write(token)

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
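        # Close the stream once the full response has been generated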
        self.stream.end()

This handler is pretty straightforward:

  • On LLM start, we create a stream for our agent

  • When we receive a token, we stream it using our agent

  • On LLM end, we close the stream for our agent

We're mostly done! We initiated AgentLabs and configured LangChain; now we need to handle the incoming messages.

Handle incoming messages

To do so, we'll use the on_chat_message method provided by AgentLabs.

if __name__ == "__main__":
    env = parse_env_or_raise()

    project = Project(
        project_id=env.project_id,
        agentlabs_url=env.agentlabs_url,
        secret=env.secret,
    )

    llm = ChatOpenAI(streaming=True)

    agent = project.agent(id=env.agent_id)
    project.on_chat_message(handle_task) # ADDED THIS LINE

    project.connect()
    project.wait()

This method takes a handler function as an argument. Let's define it.

def handle_task(message: IncomingChatMessage):
    print(f"Handling message: {message.text} sent by {message.member_id}")
    messages: List[BaseMessage] = [
        SystemMessage(content="You are a general assistant designed to help people with their daily tasks. You should format your answers in markdown format as you see fit."),
        HumanMessage(content=message.text),
    ]
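    # Stream the LLM's answer back into the right conversation via our callback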
    callback = AgentLabsStreamingCallback(agent=agent, conversation_id=message.conversation_id)
    llm(messages, callbacks=[callback])

In this function, we handle the incoming message from the user and then pass it to the LLM.

We also pass a system message first to give the model some context, so it knows how to handle the user's input.

As a second argument, we pass an instance of the callback class we created earlier.

Now, every time a user sends a message, the LLM will receive it, and we'll stream the LLM responses back to the user.


Et voilà

Congrats, you created your own version of ChatGPT! You can retrieve the full example of this recipe here. 🎉
