This guide provides an overview of, and comparison between, the different integration methods offered by Aporia Guardrails.
Aporia Guardrails can be integrated into LLM-based applications using two distinct methods: the OpenAI Proxy and Aporia’s REST API.
Just getting started and using OpenAI or Azure OpenAI? Skip this guide and use the OpenAI proxy integration.
In this method, Aporia acts as a proxy: it forwards your requests to OpenAI and simultaneously invokes guardrails. The response you receive is either OpenAI's original or a version modified to enforce Aporia's policies.
This is the simplest option to get started with, especially if you use OpenAI or Azure OpenAI.
To enable it, route your OpenAI requests through Aporia's proxy and pass your Aporia API key in the X-APORIA-API-KEY header. In the case of Azure OpenAI, also add the X-AZURE-OPENAI-ENDPOINT header.

Ideal for those seeking a hassle-free setup with minimal changes, particularly when the LLM provider is OpenAI or Azure OpenAI.
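As a rough sketch, with the official openai Python client this amounts to pointing base_url at Aporia's proxy and attaching the headers above as default headers. The proxy URL and model name below are placeholders, not actual values; use the proxy URL shown for your Aporia project:

```python
from openai import OpenAI

# Minimal sketch of the proxy integration. The base_url is a placeholder;
# substitute the proxy URL from your Aporia project's integration page.
client = OpenAI(
    api_key="<your-openai-api-key>",
    base_url="https://<your-aporia-proxy-host>/v1",
    default_headers={
        "X-APORIA-API-KEY": "<your-aporia-api-key>",
        # Azure OpenAI only: also pass your Azure endpoint.
        # "X-AZURE-OPENAI-ENDPOINT": "https://<resource>.openai.azure.com",
    },
)

# From here on, use the client exactly as you would with OpenAI directly;
# Aporia forwards the request and applies guardrails to the response.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```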
This approach involves making explicit calls to Aporia’s REST API at two key stages: before sending the prompt to the LLM to check for prompt-level policy violations (e.g. Prompt Injection) and after receiving the response to apply response-level guardrails (e.g. RAG Hallucinations).
Suited for developers requiring detailed control over policy enforcement and customization, especially when using LLM providers other than OpenAI or Azure OpenAI.
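The snippet below sketches that two-stage flow: a prompt-level check before the LLM call and a response-level check after it. The endpoint URL, payload fields, and response fields (validation_target, action, revised_response) are hypothetical placeholders rather than Aporia's actual contract; consult the REST API reference for the real schema:

```python
import requests
from openai import OpenAI

# Hypothetical endpoint and headers; replace with values from your project.
APORIA_VALIDATE_URL = "https://<your-aporia-host>/<project-id>/validate"
APORIA_HEADERS = {"X-APORIA-API-KEY": "<your-aporia-api-key>"}

client = OpenAI()  # any LLM provider works here; OpenAI is just an example


def guarded_completion(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]

    # Stage 1: prompt-level check (e.g. Prompt Injection) before the LLM call.
    prompt_check = requests.post(
        APORIA_VALIDATE_URL,
        headers=APORIA_HEADERS,
        json={"messages": messages, "validation_target": "prompt"},
    )
    prompt_check.raise_for_status()
    result = prompt_check.json()
    if result.get("action") == "block":  # assumed field names
        return result.get("revised_response", "Blocked by guardrails.")

    # Call the LLM as usual.
    answer = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    ).choices[0].message.content

    # Stage 2: response-level check (e.g. RAG Hallucinations) on the output.
    response_check = requests.post(
        APORIA_VALIDATE_URL,
        headers=APORIA_HEADERS,
        json={
            "messages": messages,
            "response": answer,
            "validation_target": "response",
        },
    )
    response_check.raise_for_status()
    return response_check.json().get("revised_response", answer)


print(guarded_completion("What is your refund policy?"))
```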
If you’re just getting started, the OpenAI Proxy is recommended due to its straightforward setup. Developers requiring more control and detailed policy management should consider transitioning to Aporia’s REST API later on.