February 1st, 2024
We’re thrilled to officially announce Aporia Guardrails, our breakthrough solution designed to protect your LLM applications from unintended behavior, hallucinations, prompt injection attacks, and more.
What is Aporia Guardrails?
Aporia Guardrails provides real-time protection for LLM-based systems by mitigating risks such as hallucinations, inappropriate responses, and prompt injection attacks. Positioned between your LLM provider (e.g., OpenAI, Bedrock, Mistral) and your application, Guardrails ensures that your AI models perform within safe and reliable boundaries.
Creating Projects
To make managing Guardrails easy, we’re introducing Projects—your central hub for configuring and organizing multiple policies. With Projects, you can:
- Group and manage policies for different applications.
- Monitor guardrail activity, including policy activations and detected violations.
- Use a Master Switch to toggle all guardrails on or off for any project.
Integration Options:
Aporia Guardrails can be integrated into your LLM applications using two methods:
- OpenAI Proxy: A simple, fast way to start using Guardrails if your LLM provider is OpenAI or Azure OpenAI. This method supports streaming responses, which makes it ideal for real-time applications (see the first sketch after this list).
- Aporia REST API: For teams that need more control or use LLMs beyond OpenAI, our REST API provides detailed policy enforcement and works with any LLM provider (see the second sketch below).
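To make the proxy option concrete, here's a minimal sketch using the OpenAI Python SDK. The proxy host and the Aporia key header shown here are illustrative placeholders rather than documented values; the idea is that you only point the client at the Guardrails proxy, and the rest of your OpenAI code stays unchanged.

```python
# Hypothetical sketch: routing OpenAI traffic through an Aporia Guardrails proxy.
# The base_url and the X-APORIA-API-KEY header below are placeholders, not
# documented values — check the Aporia docs for the real proxy endpoint.
from openai import OpenAI

client = OpenAI(
    api_key="<YOUR_OPENAI_API_KEY>",
    base_url="https://<your-guardrails-proxy-host>/v1",          # placeholder proxy URL
    default_headers={"X-APORIA-API-KEY": "<YOUR_APORIA_API_KEY>"},  # assumed auth header
)

# Requests look exactly like normal OpenAI calls; the proxy applies your
# project's policies before the response reaches your application.
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
    stream=True,  # streaming responses are supported with the proxy integration
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```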
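If you use the REST API integration instead, the flow looks roughly like the following sketch: send the prompt, the LLM's raw response, and any retrieved context to Guardrails, then act on the verdict. The endpoint path, payload fields, and response shape here are assumptions for illustration only; refer to the Aporia documentation for the actual API contract.

```python
# Hypothetical sketch of calling a Guardrails REST endpoint directly.
# The URL, payload fields, and response fields are illustrative assumptions.
import requests

APORIA_API_KEY = "<YOUR_APORIA_API_KEY>"
GUARDRAILS_URL = "https://<your-guardrails-host>/validate"  # placeholder endpoint

payload = {
    "messages": [{"role": "user", "content": "What is our refund policy?"}],
    "response": "Refunds are processed within 5 business days.",   # raw LLM output to check
    "context": "Refund policy: customers may request a refund within 30 days of purchase.",
}

resp = requests.post(
    GUARDRAILS_URL,
    json=payload,
    headers={"Authorization": f"Bearer {APORIA_API_KEY}"},
    timeout=10,
)
resp.raise_for_status()
result = resp.json()

# Typical pattern: if any policy was triggered, fall back to a safe answer
# instead of returning the original LLM response to the user.
if result.get("action") == "block":
    print("Response blocked by guardrails:", result.get("policy"))
else:
    print("Response passed guardrails.")
```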
Guardrails Policies:
Along with this release, we’re introducing our first set of Guardrails policies, including:
- RAG Hallucination Detection: Prevents incorrect or irrelevant responses by evaluating the relevance of both the retrieved context and the generated answer.
- Prompt Injection Protection: Defends your application from malicious prompt injection attacks and jailbreaks by recognizing and blocking dangerous inputs.
- Restricted Topics: Enforces restrictions on sensitive or off-limits topics to ensure safe, compliant conversations.