
Overview

Gemini is a family of generative AI models that lets developers generate content and solve problems. These models are designed and trained to handle both text and images as input. Langchain is a framework that makes it easier to integrate Large Language Models (LLMs) like Gemini into applications. Aporia lets you mitigate hallucinations and embarrassing responses in customer-facing RAG applications. In this tutorial, you’ll learn how to create a basic application using Gemini, Langchain, and Aporia.

Setup

First, you must install the packages and set the necessary environment variables.

Installation

Install Langchain’s Python library, langchain.
pip install --quiet langchain
Install Langchain’s integration package for Gemini, langchain-google-genai.
pip install --quiet langchain-google-genai
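The adapter code later in this tutorial also uses requests and Langchain’s community package, langchain-community. If they aren’t already in your environment, install them the same way (a sketch; pin versions as needed for your setup).
pip install --quiet requests langchain-community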

Grab API Keys

To use Gemini and Aporia, you need API keys. For Gemini, you can create an API key with one click in Google AI Studio. To get your Aporia API key, create a project in Aporia and copy the API key from the user interface. You can follow the quickstart tutorial.
APORIA_BASE_URL = "https://gr-prd.aporia.com/<PROJECT_ID>"
APORIA_API_KEY = "..."
GEMINI_API_KEY = "..."
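
If you prefer not to hardcode secrets, you can load them from the environment instead. A minimal sketch (the variable names simply match this tutorial’s conventions; getpass is only a fallback prompt):
import os
from getpass import getpass

# Read the keys from environment variables, prompting only if they're missing
APORIA_BASE_URL = os.environ["APORIA_BASE_URL"]  # e.g. https://gr-prd.aporia.com/<PROJECT_ID>
APORIA_API_KEY = os.environ.get("APORIA_API_KEY") or getpass("Aporia API key: ")
GEMINI_API_KEY = os.environ.get("GEMINI_API_KEY") or getpass("Gemini API key: ")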

Import the required libraries

from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

Initialize Gemini

You must import the ChatGoogleGenerativeAI LLM from Langchain to initialize your model. In this example you’ll use gemini-pro. To learn more about the text model, read Google AI’s language documentation. You can configure model parameters such as temperature or top_p by passing the appropriate values when creating the ChatGoogleGenerativeAI LLM. To learn more about the parameters and their uses, read Google AI’s concepts guide.
from langchain_google_genai import ChatGoogleGenerativeAI

# If there is no env variable set for API key, you can pass the API key
# to the parameter `google_api_key` of the `ChatGoogleGenerativeAI` function:
# `google_api_key="key"`.

llm = ChatGoogleGenerativeAI(
  model="gemini-pro",
  temperature=0.7,
  top_p=0.85,
  google_api_key=GEMINI_API_KEY,
)
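
To sanity-check the setup, you can call the model directly before adding any guardrails (optional; assumes the API key above is valid):
# A one-off call to verify the model is reachable
print(llm.invoke("Say hello in one short sentence.").content)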

Wrap Gemini with Aporia Guardrails

We’ll now wrap the Gemini LLM object with Aporia Guardrails. Since Aporia doesn’t natively support Gemini yet, we can use the REST API integration, which is LLM-agnostic. Copy this adapter code (to be uploaded as a standalone langchain-aporia pip package):
import requests
from typing import Any, AsyncIterator, Dict, Iterator, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import BaseMessage
from langchain_core.outputs import ChatResult
from langchain_community.adapters.openai import convert_message_to_dict


class AporiaGuardrailsChatModelWrapper(BaseChatModel):
  base_model: BaseChatModel
  aporia_url: str
  aporia_token: str

  def __init__(
    self,
    base_model: BaseChatModel,
    aporia_url: str,
    aporia_token: str,
    **data
  ):
    # Pass the fields through pydantic's constructor so they are validated
    # and set on the model (PrivateAttr is meant for underscore-prefixed
    # attributes and would leave these public fields unset).
    super().__init__(
      base_model=base_model,
      aporia_url=aporia_url,
      aporia_token=aporia_token,
      **data,
    )

  def _generate(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    **kwargs: Any,
  ) -> ChatResult:
    # Get response from underlying model
    llm_response = self.base_model._generate(messages, stop, run_manager, **kwargs)
    if len(llm_response.generations) > 1:
        raise NotImplementedError()

    # Run Aporia Guardrails
    messages_dict = [convert_message_to_dict(m) for m in messages]
    guardrails_result = requests.post(
        url=f"{self.aporia_url}/validate",
        headers={
            "X-APORIA-API-KEY": self.aporia_token,
        },
        json={
            "messages": messages_dict,
            "validation_target": "both",
            "response": llm_response.generations[0].message.content
        }
    )
    guardrails_result.raise_for_status()

    # Aporia returns the original response, or a revised one if a policy fired
    revised_response = guardrails_result.json()["revised_response"]

    llm_response.generations[0].text = revised_response
    llm_response.generations[0].message.content = revised_response

    return llm_response

  @property
  def _llm_type(self) -> str:
    """Get the type of language model used by this chat model."""
    return self.base_model._llm_type

  @property
  def _identifying_params(self) -> Dict[str, Any]:
    return self.base_model._identifying_params

Then, override your LLM object with the guardrailed version:
llm = AporiaGuardrailsChatModelWrapper(
  base_model=llm,
  aporia_url=APORIA_BASE_URL,
  aporia_token=APORIA_API_KEY,
)
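
The wrapper is itself a BaseChatModel, so the guardrailed llm behaves like any other Langchain chat model (a quick check, assuming your Aporia project is configured):
# This call goes through Gemini first, then Aporia's /validate endpoint
print(llm.invoke("Hello!").content)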

Create prompt templates

You’ll use Langchain’s PromptTemplate to generate prompts for your task.
# To query Gemini
llm_prompt_template = """
  You are a helpful assistant.
  The user asked this question: "{text}"
  Answer:
"""

llm_prompt = PromptTemplate.from_template(llm_prompt_template)
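
You can render the template yourself to see exactly what will be sent to the model (optional; no LLM call is made):
# Fills the {text} placeholder without invoking the model
print(llm_prompt.format(text="Hey, how are you?"))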

Prompt the model

chain = llm_prompt | llm | StrOutputParser()

print(chain.invoke("Hey, how are you?"))
#   ==> I am well, thank you for asking. How are you doing today?
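
Since the chain is a standard LCEL Runnable, it also supports batching (a sketch; each input runs through the same prompt | llm | parser pipeline):
# Answers are returned in the same order as the inputs
for answer in chain.batch(["What is Gemini?", "What is RAG?"]):
    print(answer)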

AGT Test

Aporia provides a dedicated test string you can send to verify that guardrails are wired up correctly. Read more here: AGT Test.
print(chain.invoke("X5O!P%@AP[4\PZX54(P^)7CC)7}$AGT-STANDARD-GUARDRAILS-TEST-MSG!$H+H*"))
#   ==> Aporia Guardrails Test: AGT detected successfully!

Conclusion

That’s it. You have successfully created an LLM application using Langchain, Gemini, and Aporia.