In this method, Aporia acts as a proxy, forwarding your requests to OpenAI and simultaneously invoking guardrails. The returned response is either the original from OpenAI or a modified version enforced by Aporia’s policies.This is the simplest option to get started with, especially if you use OpenAI or Azure OpenAI.
This approach involves making explicit calls to Aporia’s REST API at two key stages: before sending the prompt to the LLM to check for prompt-level policy violations (e.g. Prompt Injection) and after receiving the response to apply response-level guardrails (e.g. RAG Hallucinations).
Detailed Feedback: Returns logs detailing which policies were triggered and what actions were taken.
Custom Actions: Enables the implementation of custom responses or actions instead of using the revised response provided by Aporia, offering flexibility in handling policy violations.
LLM Provider Flexibility: Any LLM is supported with this method (OpenAI, AWS Bedrock, Vertex AI, OSS models, etc.).
Suited for developers requiring detailed control over policy enforcement and customization, especially when using LLM providers other than OpenAI or Azure OpenAI.
Simplicity vs. Customizability: The OpenAI Proxy offers simplicity for OpenAI users, whereas Aporia’s REST API offers flexible, detailed control suitable for any LLM provider.
Streaming Capabilities: Present in the OpenAI Proxy and planned for future addition to Aporia’s REST API.
If you’re just getting started, the OpenAI Proxy is recommended due to its straightforward setup. Developers requiring more control and detailed policy management should consider transitioning to Aporia’s REST API later on.