RAG Chatbot: Embedchain + Chainlit
Learn how to build a streaming RAG chatbot with Embedchain, OpenAI, Chainlit for the chat UI, and Aporia Guardrails.
Setup
Install required libraries:
pip3 install chainlit embedchain --upgrade
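Embedchain's OpenAI integration reads your API key from the OPENAI_API_KEY environment variable, so export it before launching the app (the value below is a placeholder):
export OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"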
Import libraries:
import chainlit as cl
from embedchain import App
import uuid
Build a RAG chatbot
When Chainlit starts, initialize a new Embedchain app that uses GPT-3.5 with streaming enabled.
This is where you can add documents to be used as knowledge for your RAG chatbot. For more information, see the Embedchain docs.
@cl.on_chat_start
async def chat_startup():
    app = App.from_config(config={
        "app": {
            "config": {
                "name": "my-chatbot",
                "id": str(uuid.uuid4()),
                "collect_metrics": False
            }
        },
        "llm": {
            "config": {
                "model": "gpt-3.5-turbo-0125",
                "stream": True,
                "temperature": 0.0,
            }
        }
    })

    # Add documents to be used as knowledge base for the chatbot
    app.add("my_knowledge.pdf", data_type='pdf_file')

    cl.user_session.set("app", app)
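Embedchain can ingest other source types as well; for instance, you could also index a web page (the URL below is just a placeholder):
app.add("https://example.com/docs", data_type="web_page")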
When a user sends a message in the chat UI, call the Embedchain RAG app:
@cl.on_message
async def on_new_message(message: cl.Message):
    app = cl.user_session.get("app")

    msg = cl.Message(content="")

    # app.chat is synchronous, so wrap it with cl.make_async to avoid
    # blocking Chainlit's event loop; with streaming enabled it yields
    # response chunks, which we forward to the UI as they arrive.
    for chunk in await cl.make_async(app.chat)(message.content):
        await msg.stream_token(chunk)

    await msg.send()
To run the application:
chainlit run <your script>.py
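For example, if your script is saved as app.py (a hypothetical filename), you can enable auto-reload during development with the -w flag:
chainlit run app.py -w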
Integrate Aporia Guardrails
Next, to integrate Aporia Guardrails, get your Aporia API key and base URL as described in the OpenAI proxy documentation.
You can then add them to the Embedchain app's configuration like this:
app = App.from_config(config={
    "llm": {
        "config": {
            "base_url": "https://gr-prd.aporia.com/<PROJECT_ID>",
            "model_kwargs": {
                "default_headers": { "X-APORIA-API-KEY": "<YOUR_APORIA_API_KEY>" }
            },
            # ...
        }
    },
    # ...
})
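For reference, merging this with the earlier setup, the full configuration might look like the following sketch (the project ID and API key are placeholders):
app = App.from_config(config={
    "app": {
        "config": {
            "name": "my-chatbot",
            "id": str(uuid.uuid4()),
            "collect_metrics": False
        }
    },
    "llm": {
        "config": {
            "model": "gpt-3.5-turbo-0125",
            "stream": True,
            "temperature": 0.0,
            # Route requests through Aporia's OpenAI proxy
            "base_url": "https://gr-prd.aporia.com/<PROJECT_ID>",
            "model_kwargs": {
                "default_headers": { "X-APORIA-API-KEY": "<YOUR_APORIA_API_KEY>" }
            }
        }
    }
})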
AGT Test
You can now test the integration using the AGT Test. Try this prompt:
X5O!P%@AP[4\PZX54(P^)7CC)7}$AGT-STANDARD-GUARDRAILS-TEST-MSG!$H+H*
If the integration is working, the guardrails should detect this message and the chatbot should return a block response instead of a normal answer.
Conclusion
That’s it. You have successfully created an LLM application using Embedchain, Chainlit, and Aporia.