What is Aporia Guardrails?

Aporia Guardrails provides real-time protection for LLM-based systems by mitigating risks such as hallucinations, inappropriate responses, and prompt injection attacks. Positioned between your LLM provider (e.g., OpenAI, Bedrock, Mistral) and your application, Guardrails ensures that your AI models perform within safe and reliable boundaries.

Creating Projects

To make managing Guardrails easy, we’re introducing Projects—your central hub for configuring and organizing multiple policies. With Projects, you can:

  1. Group and manage policies for different applications.
  2. Monitor guardrail activity, including policy activations and detected violations.
  3. Use a Master Switch to toggle all guardrails on or off for any project.

Integration Options

Aporia Guardrails can be integrated into your LLM applications using two methods:

  1. OpenAI Proxy: The simplest and fastest way to start using Guardrails if your LLM provider is OpenAI or Azure OpenAI. This method supports streaming responses, making it ideal for real-time applications (see the sketch after this list).
  2. Aporia REST API: For teams that need more control or use LLMs beyond OpenAI, the REST API provides fine-grained policy enforcement and is compatible with any LLM provider (a second sketch follows below).
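
With the OpenAI Proxy, the only change to your application is the base URL of your OpenAI client. Here is a minimal sketch using the official openai Python SDK; the proxy host and the extra key header are placeholders (assumptions for the example), so copy the real values from your Aporia project settings:

```python
from openai import OpenAI

# Point the standard OpenAI client at the Guardrails proxy instead of
# api.openai.com. Both values below are placeholders -- use the real proxy
# URL and key from your Aporia project settings.
client = OpenAI(
    base_url="https://<your-guardrails-proxy-host>/v1",  # assumed placeholder
    api_key="<YOUR_OPENAI_API_KEY>",
    default_headers={"X-APORIA-API-KEY": "<YOUR_APORIA_API_KEY>"},  # assumed header name
)

# Requests now pass through Guardrails, which applies the project's policies
# before the response reaches your application. Streaming works as usual.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```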
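
With the REST API, your application calls the LLM itself and then asks Guardrails to validate the exchange. The sketch below is illustrative only: the endpoint path, payload fields, and response shape are assumptions for the example, so consult the Aporia REST API reference for the exact contract:

```python
import requests

# Hypothetical endpoint and payload shape -- check the Aporia REST API
# reference for the exact contract.
GUARDRAILS_URL = "https://<your-guardrails-host>/validate"

user_prompt = "What is your refund policy?"
llm_response = "..."  # the raw answer from any LLM provider (OpenAI, Bedrock, Mistral, ...)

resp = requests.post(
    GUARDRAILS_URL,
    json={
        "messages": [{"role": "user", "content": user_prompt}],
        "response": llm_response,
    },
    headers={"Authorization": "Bearer <YOUR_APORIA_API_KEY>"},  # assumed auth scheme
    timeout=10,
)
resp.raise_for_status()
result = resp.json()

# Hypothetical result handling: if a policy fired, serve a safe fallback
# instead of the original LLM output.
if result.get("action") == "block":
    final_answer = result.get("revised_response", "Sorry, I can't answer that.")
else:
    final_answer = llm_response
```

Because the validation step is a plain HTTP call, this pattern works the same way whether the answer came from OpenAI, Bedrock, Mistral, or a self-hosted model.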

Guardrails Policies

Along with this release, we’re introducing our first set of Guardrails policies, including:

  1. RAG Hallucination Detection: Prevents incorrect or irrelevant responses by evaluating the relevance of both the retrieved context and the generated answer.
  2. Prompt Injection Protection: Defends your application from malicious prompt injection attacks and jailbreaks by recognizing and blocking dangerous inputs.
  3. Restricted Topics: Enforces restrictions on sensitive or off-limits topics to ensure safe, compliant conversations.