Overview

Gemini is a family of generative AI models that developers can use to generate content and solve problems. These models are designed and trained to handle both text and images as input.

Langchain is a framework that simplifies the integration of Large Language Models (LLMs) like Gemini into applications.

Aporia allows you to mitigate hallucinations and embarrassing responses in customer-facing RAG applications.

In this tutorial, you’ll learn how to create a basic application using Gemini, Langchain, and Aporia.

Setup

First, install the required packages and set the necessary environment variables.

Installation

Install Langchain’s Python library, langchain.

pip install --quiet langchain

Install Langchain’s integration package for Gemini, langchain-google-genai.

pip install --quiet langchain-google-genai

Grab API Keys

To use Gemini and Aporia, you need API keys.

For Gemini, you can create an API key with one click in Google AI Studio.

To grab your Aporia API key, create a project in Aporia and copy the API key from the user interface. You can follow the quickstart tutorial.

APORIA_BASE_URL = "https://gr-prd.aporia.com/<PROJECT_ID>"
APORIA_API_KEY = "..."
GEMINI_API_KEY = "..."
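As an alternative to passing the Gemini key explicitly later, you can export it as an environment variable; langchain-google-genai reads GOOGLE_API_KEY by default:

import os

# langchain-google-genai falls back to the GOOGLE_API_KEY environment
# variable when no explicit `google_api_key` argument is passed.
os.environ["GOOGLE_API_KEY"] = GEMINI_API_KEY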

Import the required libraries

from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

Initialize Gemini

Import the ChatGoogleGenerativeAI chat model from the langchain-google-genai integration package to initialize your model. In this example, you will use gemini-pro. To learn more about the text model, read Google AI’s language documentation.

You can configure model parameters such as temperature or top_p by passing the appropriate values when creating the ChatGoogleGenerativeAI instance. To learn more about the parameters and their uses, read Google AI’s concepts guide.

from langchain_google_genai import ChatGoogleGenerativeAI

# If there is no env variable set for API key, you can pass the API key
# to the parameter `google_api_key` of the `ChatGoogleGenerativeAI` function:
# `google_api_key="key"`.

llm = ChatGoogleGenerativeAI(
  model="gemini-pro",
  temperature=0.7,
  top_p=0.85,
  google_api_key=GEMINI_API_KEY,
)
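Before adding guardrails, you can sanity-check the model with a direct call (the exact output will vary):

# `invoke` returns an AIMessage; `.content` holds the generated text.
print(llm.invoke("Say hello in one short sentence.").content)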

Wrap Gemini with Aporia Guardrails

We’ll now wrap the Gemini LLM object with Aporia Guardrails. Since Aporia doesn’t natively support Gemini yet, we can use the REST API integration, which is LLM-agnostic.

Copy this adapter code (to be uploaded as a standalone langchain-aporia pip package):
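Since the package isn’t published yet, here is a minimal sketch of what the adapter can look like. It subclasses Langchain’s BaseChatModel, delegates generation to the wrapped model, and then posts the conversation and candidate response to Aporia’s REST API (it also needs the requests package). The /validate path, the X-APORIA-API-KEY header, and the payload/response fields below are assumptions about Aporia’s REST integration; check Aporia’s REST API reference for the exact contract.

from typing import Any, List, Optional

import requests
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


class AporiaGuardrailsChatModelWrapper(BaseChatModel):
    """Wraps a chat model and validates every response with Aporia Guardrails."""

    base_model: BaseChatModel
    aporia_url: str
    aporia_token: str

    @property
    def _llm_type(self) -> str:
        return "aporia-guardrails-wrapper"

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[Any] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # 1. Get the raw response from the underlying model (Gemini here).
        ai_message = self.base_model.invoke(messages, stop=stop, **kwargs)
        response_text = ai_message.content

        # 2. Ask Aporia Guardrails to validate the prompt and the response.
        #    Endpoint path, header name, and payload shape are assumptions;
        #    see Aporia's REST API reference for the exact contract.
        resp = requests.post(
            f"{self.aporia_url}/validate",
            headers={"X-APORIA-API-KEY": self.aporia_token},
            json={
                # Non-AI messages are folded into "user" for this simple sketch.
                "messages": [
                    {
                        "role": "assistant" if m.type == "ai" else "user",
                        "content": m.content,
                    }
                    for m in messages
                ],
                "response": response_text,
                "validation_target": "both",
            },
            timeout=30,
        )
        resp.raise_for_status()

        # 3. If Aporia revised the response (e.g. blocked a hallucination),
        #    return the revised text; otherwise pass the original through.
        final_text = resp.json().get("revised_response", response_text)
        return ChatResult(
            generations=[ChatGeneration(message=AIMessage(content=final_text))]
        )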

Then, override your LLM object with the guardrailed version:

llm = AporiaGuardrailsChatModelWrapper(
  base_model=llm,
  aporia_url=APORIA_BASE_URL,
  aporia_token=APORIA_API_KEY,
)
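Because the wrapper is itself a Langchain chat model, everything downstream of it (prompts, chains, output parsers) continues to work unchanged.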

Create prompt templates

You’ll use Langchain’s PromptTemplate to generate prompts for your task.

# To query Gemini
llm_prompt_template = """
  You are a helpful assistant.
  The user asked this question: "{text}"
  Answer:
"""

llm_prompt = PromptTemplate.from_template(llm_prompt_template)
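You can preview the rendered prompt before wiring it into a chain; PromptTemplate.from_template infers the {text} input variable automatically:

# Render the template with a sample question:
print(llm_prompt.format(text="Hey, how are you?"))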

Prompt the model

chain = llm_prompt | llm | StrOutputParser()

print(chain.invoke("Hey, how are you?"))
#   ==> I am well, thank you for asking. How are you doing today?

AGT Test

The AGT string works much like the EICAR test file for antivirus software: send it through your chain to verify that the guardrails are active. Read more here: AGT Test.

print(chain.invoke(r"X5O!P%@AP[4\PZX54(P^)7CC)7}$AGT-STANDARD-GUARDRAILS-TEST-MSG!$H+H*"))
#   ==> Aporia Guardrails Test: AGT detected successfully!

Conclusion

That’s it. You have successfully created an LLM application using Langchain, Gemini, and Aporia.