ChatGPT with LangChain
What we're going to cook
In this recipe, we will build a simple version of ChatGPT using the LangChain framework and its ChatOpenAI model.
To keep things simple, we won't add memory to this chat model. However, you can find a full example with basic memory in our examples repository (we also sketch one possible approach at the end of this recipe).
Here's the final result: a chat interface where the model's answers are streamed back to the user token by token.
Let's code
Init AgentLabs
First, we'll init the AgentLabs SDK, create our agent, and open the connection with the server.
from agentlabs.agent import Agent
from agentlabs.chat import IncomingChatMessage, MessageFormat
from agentlabs.project import Project

if __name__ == "__main__":
    env = parse_env_or_raise()

    project = Project(
        project_id=env.project_id,
        agentlabs_url=env.agentlabs_url,
        secret=env.secret,
    )

    agent = project.agent(id=env.agent_id)

    project.connect()
    project.wait()
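Note that parse_env_or_raise isn't shown in this snippet: it's a small helper that reads the AgentLabs settings from the environment. Here's a minimal sketch of what it could look like; the Env dataclass and the variable names are assumptions, so adapt them to your setup:

import os
from dataclasses import dataclass

@dataclass
class Env:
    project_id: str
    agentlabs_url: str
    secret: str
    agent_id: str

def parse_env_or_raise() -> Env:
    # Read each required variable, failing fast when one is missing.
    # The variable names below are assumptions; use whatever your deployment defines.
    def require(name: str) -> str:
        value = os.environ.get(name)
        if not value:
            raise RuntimeError(f"Missing required environment variable: {name}")
        return value

    return Env(
        project_id=require("AGENTLABS_PROJECT_ID"),
        agentlabs_url=require("AGENTLABS_URL"),
        secret=require("AGENTLABS_SECRET"),
        agent_id=require("AGENTLABS_AGENT_ID"),
    )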
Don't forget to set your OPENAI_API_KEY environment variable if you want everything to work.
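ChatOpenAI reads the key from the environment, so a missing key would only surface when the first message comes in. If you prefer to fail fast, you can add a small guard at startup; this check is optional and not part of the original recipe:

import os

# Fail fast at startup instead of on the first incoming message.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set")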
Prepare LangChain
Then, we'll init LangChain and the ChatOpenAI model. Let's import every dependency we need:
from typing import Any, Dict, List

from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema.messages import BaseMessage, HumanMessage, SystemMessage
from langchain.schema.output import LLMResult
Now that we've imported our dependencies, let's init our model by adding the following line.
llm = ChatOpenAI(streaming=True)
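streaming=True is the only option this recipe needs. If you want to pin a specific model or tune sampling, ChatOpenAI accepts additional keyword arguments; for example (the values here are just illustrative):

llm = ChatOpenAI(
    streaming=True,
    model_name="gpt-3.5-turbo",  # which OpenAI chat model to call
    temperature=0.7,             # sampling temperature
)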
Now, we'll create a class that extends LangChain's BaseCallbackHandler to handle the stream fragments as they arrive.
What we want is to forward every fragment of the incoming stream to the client as soon as we receive it.
class AgentLabsStreamingCallback(BaseCallbackHandler):
    def __init__(self, agent: Agent, conversation_id: str):
        super().__init__()
        self.agent = agent
        self.conversation_id = conversation_id

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        # Open a markdown stream on the conversation as soon as the LLM starts
        self.stream = self.agent.create_stream(
            format=MessageFormat.MARKDOWN,
            conversation_id=self.conversation_id,
        )

    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        # Forward each token to the client as it arrives
        self.stream.write(token)

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        # Close the stream once the LLM is done
        self.stream.end()
This handler is pretty straightforward:
On LLM start, we create a stream for our agent
When we receive a token, we stream it using our agent
On LLM end, we close the stream for our agent
Handle incoming messages
We're mostly done! We initialized AgentLabs and configured LangChain; now we need to handle the incoming messages.
To do so, we'll use the on_chat_message method provided by AgentLabs.
if __name__ == "__main__":
    env = parse_env_or_raise()

    project = Project(
        project_id=env.project_id,
        agentlabs_url=env.agentlabs_url,
        secret=env.secret,
    )

    llm = ChatOpenAI(streaming=True)
    agent = project.agent(id=env.agent_id)

    project.on_chat_message(handle_task)  # ADDED THIS LINE

    project.connect()
    project.wait()
This method takes a handler function as an argument. Let's define it.
def handle_task(message: IncomingChatMessage):
    print(f"Handling message: {message.text} sent by {message.member_id}")

    messages: List[BaseMessage] = [
        SystemMessage(content="You are a general assistant designed to help people with their daily tasks. You should format your answers in markdown format as you see fit."),
        HumanMessage(content=message.text),
    ]

    callback = AgentLabsStreamingCallback(
        agent=agent, conversation_id=message.conversation_id
    )
    llm(messages, callbacks=[callback])
In this function, we take the incoming message from the user and then pass it to the LLM.
We also prepend a system message to give the model some context, so it knows how to handle the user's input.
As a second argument, we pass an instance of the callback class we created earlier.
Now, every time a user sends a message, the LLM will receive it, and we'll stream the LLM responses back to the user.
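As noted at the beginning, this chat model has no memory: each message is handled in isolation. The full memory example lives in our examples repository, but here is one minimal sketch of what per-conversation memory could look like, assuming you keep histories in a plain dict keyed by conversation_id (handle_task_with_memory and histories are hypothetical names, and the names llm, agent, and AgentLabsStreamingCallback come from earlier in this recipe):

from typing import Dict, List

# Hypothetical per-conversation memory: replay past messages on every call.
histories: Dict[str, List[BaseMessage]] = {}

def handle_task_with_memory(message: IncomingChatMessage):
    # Fetch (or create) the history for this conversation.
    history = histories.setdefault(message.conversation_id, [
        SystemMessage(content="You are a general assistant designed to help people with their daily tasks."),
    ])
    history.append(HumanMessage(content=message.text))

    callback = AgentLabsStreamingCallback(
        agent=agent, conversation_id=message.conversation_id
    )

    # Chat models return the AI message, so we can store it for the next turn.
    ai_message = llm(history, callbacks=[callback])
    history.append(ai_message)

You would register it with project.on_chat_message(handle_task_with_memory) instead of handle_task. Keep in mind that an unbounded history will eventually exceed the model's context window, so a real implementation should truncate or summarize it.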
Et voilà 🎉
Congrats, you created your own version of ChatGPT! You can retrieve the full example of this recipe here ☺️