Gemini is a family of generative AI models that lets developers generate content and solve problems. These models are designed and trained to handle both text and images as input.
LangChain is a framework that makes it easier to integrate Large Language Models (LLMs) like Gemini into applications.
Aporia lets you mitigate hallucinations and embarrassing responses in customer-facing RAG applications.
In this tutorial, you’ll learn how to create a basic application using Gemini, Langchain, and Aporia.
To initialize your model, import the ChatGoogleGenerativeAI chat model from LangChain.
In this example, you’ll use gemini-pro. To learn more about the text model, read Google AI’s language documentation.
You can configure model parameters such as temperature or top_p by passing the appropriate values when creating the ChatGoogleGenerativeAI LLM. To learn more about these parameters and their uses, read Google AI’s concepts guide.
```python
from langchain_google_genai import ChatGoogleGenerativeAI

# If there is no env variable set for the API key, you can pass the API key
# to the parameter `google_api_key` of the `ChatGoogleGenerativeAI` function:
# `google_api_key="key"`.
llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    temperature=0.7,
    top_p=0.85,
    google_api_key=GEMINI_API_KEY,
)
```
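As a quick sanity check, you can invoke the model directly (this assumes `GEMINI_API_KEY` holds a valid Google AI API key):

```python
# Send a single prompt to Gemini; invoke() returns an AIMessage
# whose .content attribute holds the generated text.
response = llm.invoke("Explain what a guardrail is in one sentence.")
print(response.content)
```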
We’ll now wrap the Gemini LLM object with Aporia Guardrails. Since Aporia doesn’t natively support Gemini yet, we can use the REST API integration, which is LLM-agnostic.
Copy this adapter code (to be uploaded as a standalone langchain-aporia pip package):
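The full adapter isn’t reproduced here; as a stand-in, here’s a minimal sketch of the idea, wrapping any LangChain chat model so every answer passes through Aporia’s REST API before reaching the user. The endpoint path, payload fields, and `X-APORIA-API-KEY` header below are placeholders, not Aporia’s documented schema; substitute the values from your Aporia project’s REST integration settings.

```python
# A minimal sketch of the adapter idea -- not the packaged langchain-aporia
# code. The endpoint path, payload fields, and header name are placeholders;
# use the values from your Aporia REST API integration settings.
import requests
from langchain_core.messages import AIMessage
from langchain_core.runnables import RunnableLambda

APORIA_BASE_URL = "https://your-aporia-endpoint.example.com"  # placeholder
APORIA_API_KEY = "your-aporia-api-key"  # placeholder


def aporia_guard(message: AIMessage) -> AIMessage:
    """Send the model's answer to Aporia Guardrails and return the
    (possibly revised) response."""
    resp = requests.post(
        f"{APORIA_BASE_URL}/validate",  # placeholder path
        headers={"X-APORIA-API-KEY": APORIA_API_KEY},  # placeholder header
        json={"response": message.content},  # placeholder payload
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # If Aporia blocked or revised the answer, prefer the revised text.
    return AIMessage(content=data.get("revised_response", message.content))


# Wrap the Gemini LLM: every generation now passes through the guardrails.
guarded_llm = llm | RunnableLambda(aporia_guard)
```

Because the wrapper only sees messages in and messages out, the same pattern works with any LangChain-compatible LLM, which is what makes the REST integration LLM-agnostic.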
You’ll use LangChain’s PromptTemplate to generate prompts for your task.
```python
from langchain_core.prompts import PromptTemplate

# To query Gemini
llm_prompt_template = """You are a helpful assistant.
The user asked this question: "{text}"
Answer:"""
llm_prompt = PromptTemplate.from_template(llm_prompt_template)
```
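To tie the pieces together, you can chain the prompt into the guarded model with LangChain’s pipe syntax (`guarded_llm` comes from the adapter sketch above; if you skipped the guardrails step, plain `llm` works the same way):

```python
from langchain_core.output_parsers import StrOutputParser

# Prompt -> guarded Gemini -> plain string.
chain = llm_prompt | guarded_llm | StrOutputParser()

answer = chain.invoke({"text": "What are Aporia Guardrails?"})
print(answer)
```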