Gemini is a family of generative AI models that lets developers generate content and solve problems. These models are designed and trained to handle both text and images as input.
LangChain is a framework designed to make it easier to integrate Large Language Models (LLMs) like Gemini into applications.
Aporia allows you to mitigate hallucinations and embarrassing responses in customer-facing RAG applications.
In this tutorial, you’ll learn how to create a basic application using Gemini, Langchain, and Aporia.
You must import the ChatGoogleGenerativeAI LLM from LangChain to initialize your model.
In this example you will use gemini-pro. To learn more about the text model, read Google AI’s language documentation.
You can configure model parameters such as temperature or top_p by passing the appropriate values when creating the ChatGoogleGenerativeAI LLM. To learn more about the parameters and their uses, read Google AI’s concepts guide.
```python
from langchain_google_genai import ChatGoogleGenerativeAI

# If there is no env variable set for the API key, you can pass the API key
# to the parameter `google_api_key` of the `ChatGoogleGenerativeAI` function:
# `google_api_key="key"`.
llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    temperature=0.7,
    top_p=0.85,
    google_api_key=GEMINI_API_KEY,
)
```
We’ll now wrap the Gemini LLM object with Aporia Guardrails. Since Aporia doesn’t natively support Gemini yet, we can use the REST API integration, which is LLM-agnostic.
Copy this adapter code (to be uploaded as a standalone langchain-aporia pip package):
```python
import requests
from typing import Any, AsyncIterator, Dict, Iterator, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import BaseMessage
from langchain_core.outputs import ChatResult
from pydantic import PrivateAttr
from langchain_community.adapters.openai import convert_message_to_dict


class AporiaGuardrailsChatModelWrapper(BaseChatModel):
    base_model: BaseChatModel = PrivateAttr(default_factory=None)
    aporia_url: str = PrivateAttr(default_factory=None)
    aporia_token: str = PrivateAttr(default_factory=None)

    def __init__(
        self,
        base_model: BaseChatModel,
        aporia_url: str,
        aporia_token: str,
        **data,
    ):
        super().__init__(**data)
        self.base_model = base_model
        self.aporia_url = aporia_url
        self.aporia_token = aporia_token

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # Get response from underlying model
        llm_response = self.base_model._generate(messages, stop, run_manager)

        if len(llm_response.generations) > 1:
            raise NotImplementedError()

        # Run Aporia Guardrails on both the prompt and the model's response
        messages_dict = [convert_message_to_dict(m) for m in messages]

        guardrails_result = requests.post(
            url=f"{self.aporia_url}/validate",
            headers={
                "X-APORIA-API-KEY": self.aporia_token,
            },
            json={
                "messages": messages_dict,
                "validation_target": "both",
                "response": llm_response.generations[0].message.content,
            },
        )

        # Replace the model's answer with the revised response from Aporia
        revised_response = guardrails_result.json()["revised_response"]
        llm_response.generations[0].text = revised_response
        llm_response.generations[0].message.content = revised_response

        return llm_response

    @property
    def _llm_type(self) -> str:
        """Get the type of language model used by this chat model."""
        return self.base_model._llm_type

    @property
    def _identifying_params(self) -> Dict[str, Any]:
        return self.base_model._identifying_params
```
Then, override your LLM object with the guardrailed version:
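A minimal sketch of what this can look like; the `APORIA_BASE_URL` and `APORIA_API_KEY` variables are placeholders you would fill in with the base URL and API key from your Aporia project settings:

```python
# Wrap the Gemini LLM with the Aporia Guardrails adapter defined above.
# APORIA_BASE_URL and APORIA_API_KEY are placeholder variables: set them to
# the values from your Aporia project settings.
llm = AporiaGuardrailsChatModelWrapper(
    base_model=llm,
    aporia_url=APORIA_BASE_URL,
    aporia_token=APORIA_API_KEY,
)
```

From this point on, the rest of your LangChain code can keep using `llm` as before; every call now passes through Aporia Guardrails.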
You’ll use LangChain’s PromptTemplate to generate prompts for your task.
```python
from langchain_core.prompts import PromptTemplate

# To query Gemini
llm_prompt_template = """You are a helpful assistant.
The user asked this question: "{text}"
Answer:"""

llm_prompt = PromptTemplate.from_template(llm_prompt_template)
```
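To tie everything together, here is a minimal sketch that chains the prompt template into the guardrailed Gemini model and runs a query. It assumes the `llm` and `llm_prompt` objects defined above; the question text is just an illustrative example:

```python
from langchain_core.output_parsers import StrOutputParser

# Pipe the prompt into the guardrailed model and parse the output as a string.
chain = llm_prompt | llm | StrOutputParser()

# Example invocation; any user question can be passed as `text`.
print(chain.invoke({"text": "What is LangChain used for?"}))
```

Because the guardrails run inside the wrapped chat model, the answer returned by the chain is already the revised response from Aporia.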