Overview
Gemini is a family of generative AI models that lets developers generate content and solve problems. These models are designed and trained to handle both text and images as input. LangChain is a framework designed to make the integration of Large Language Models (LLMs) like Gemini easier for applications. Aporia allows you to mitigate hallucinations and embarrassing responses in customer-facing RAG applications. In this tutorial, you'll learn how to create a basic application using Gemini, LangChain, and Aporia.

Setup
First, you must install the required packages and set the necessary environment variables.

Installation
Install LangChain's Python library, langchain, along with the Gemini integration package, langchain-google-genai.
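Both packages can be installed with pip:

```shell
pip install -U langchain langchain-google-genai
```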
Grab API Keys
To use Gemini and Aporia, you need API keys. For Gemini, you can create an API key with one click in Google AI Studio. To grab your Aporia API key, create a project in Aporia and copy the API key from the user interface. You can follow the quickstart tutorial.

Import the required libraries
Initialize Gemini
You must import the ChatGoogleGenerativeAI LLM from LangChain to initialize your model.
In this example, you will use gemini-pro. To learn more about this text model, read Google AI's language documentation.
You can configure model parameters, such as temperature or top_p, by passing the appropriate values when creating the ChatGoogleGenerativeAI LLM. To learn more about the parameters and their uses, read Google AI's concepts guide.
Wrap Gemini with Aporia Guardrails
We'll now wrap the Gemini LLM object with Aporia Guardrails. Since Aporia doesn't natively support Gemini yet, we can use the REST API integration, which is LLM-agnostic. Copy this adapter code (to be uploaded as a standalone langchain-aporia pip package):
Aporia <> Langchain adapter code
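As a rough illustration of what such an adapter can do, here is a minimal, hypothetical sketch: it wraps any LangChain chat model, sends each question/answer pair to a guardrails REST endpoint, and returns a revised answer if the service provides one. The endpoint URL, header names, and response fields below are placeholders, not Aporia's actual API; use the real adapter code and Aporia's REST integration docs for the actual contract.

```python
import json
import urllib.request


class GuardrailedLLM:
    """Hypothetical adapter sketch: wraps a LangChain chat model and posts
    each question/answer pair to a guardrails REST endpoint. The URL and
    payload/response shapes are placeholders, not Aporia's real API."""

    def __init__(self, llm, api_key, url="https://guardrails.example.com/validate"):
        self.llm = llm          # any object with an .invoke(prompt) method
        self.api_key = api_key  # your Aporia API key
        self.url = url          # placeholder endpoint

    def invoke(self, prompt: str) -> str:
        # 1. Call the underlying model (e.g. Gemini via ChatGoogleGenerativeAI).
        response = self.llm.invoke(prompt)
        answer = getattr(response, "content", str(response))

        # 2. Send the pair to the guardrails service for validation.
        payload = json.dumps({"question": prompt, "answer": answer}).encode()
        request = urllib.request.Request(
            self.url,
            data=payload,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {self.api_key}",  # placeholder auth scheme
            },
        )
        with urllib.request.urlopen(request) as resp:
            verdict = json.load(resp)

        # 3. Placeholder response shape: assume the service may return a
        #    revised answer; otherwise pass the original through.
        return verdict.get("revised_response", answer)
```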