LiteLLM is an open-source AI gateway. For more information on integrating Aporia with AI gateways, see this guide.
Integration Guide
Installation
To configure LiteLLM with Aporia, start by installing LiteLLM:
pip install 'litellm[proxy]'
For more details, see the LiteLLM Getting Started guide.
Use LiteLLM AI Gateway with Aporia Guardrails
In this tutorial we will use LiteLLM Proxy with Aporia to detect PII in requests.
1. Setup guardrails on Aporia
Pre-Call: Detect PII
Add the PII - Prompt guardrail to your Aporia project.
2. Define Guardrails on your LiteLLM config.yaml
- Define your guardrails under the guardrails section of your LiteLLM config.yaml:
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY

guardrails:
  - guardrail_name: "aporia-pre-guard"
    litellm_params:
      guardrail: aporia
      mode: "during_call"
      api_key: os.environ/APORIA_API_KEY_1
      api_base: os.environ/APORIA_API_BASE_1
Supported values for mode:
- pre_call: run before the LLM call, on input
- post_call: run after the LLM call, on input & output
- during_call: run during the LLM call, on input
3. Start LiteLLM Gateway
litellm --config config.yaml --detailed_debug
4. Test request
Expect this request to fail, since ishaan@berri.ai in the request is PII:
curl -i http://localhost:4000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [
{"role": "user", "content": "hi my email is ishaan@berri.ai"}
],
"guardrails": ["aporia-pre-guard"]
}'
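The same request can be built from Python with only the standard library. A minimal sketch, reusing the endpoint, key, and body from the curl example above:

```python
import json
import urllib.request

# Request body, including the per-request guardrails selection.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hi my email is ishaan@berri.ai"}],
    "guardrails": ["aporia-pre-guard"],
}

req = urllib.request.Request(
    "http://localhost:4000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-npnwjPQciVRok5yNZgKmFQ",
    },
)

# urllib.request.urlopen(req) would send it; because the message contains
# PII, the gateway is expected to reject it with HTTP 400.
print(req.get_method())  # -> POST
```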
Expected response on failure:
{
  "error": {
    "message": {
      "error": "Violated guardrail policy",
      "aporia_ai_response": {
        "action": "block",
        "revised_prompt": null,
        "revised_response": "Aporia detected and blocked PII",
        "explain_log": null
      }
    },
    "type": "None",
    "param": "None",
    "code": "400"
  }
}
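Client code can detect a guardrail block by inspecting that error payload. A minimal sketch (the helper name guardrail_action is illustrative, not part of LiteLLM's SDK):

```python
import json

def guardrail_action(error_body: str):
    """Return the Aporia action (e.g. "block") from a LiteLLM guardrail
    error payload, or None if it isn't a guardrail error."""
    payload = json.loads(error_body)
    message = payload.get("error", {}).get("message", {})
    if isinstance(message, dict) and "aporia_ai_response" in message:
        return message["aporia_ai_response"].get("action")
    return None

# The failure response from the example above:
sample = '''{"error": {"message": {"error": "Violated guardrail policy",
  "aporia_ai_response": {"action": "block", "revised_prompt": null,
  "revised_response": "Aporia detected and blocked PII", "explain_log": null}},
  "type": "None", "param": "None", "code": "400"}}'''

print(guardrail_action(sample))  # -> block
```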
5. Control Guardrails per Project (API Key)
Use this to control which guardrails run per project. In this tutorial, we only want the following guardrails to run for one project (API key):
guardrails: ["aporia-pre-guard", "aporia"]
Step 1: Create a key with guardrail settings
curl -X POST 'http://0.0.0.0:4000/key/generate' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "guardrails": ["aporia-pre-guard", "aporia"]
  }'
Step 2: Test it with the new key
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Authorization: Bearer sk-jNm1Zar7XfNdZXp49Z1kSQ' \
--header 'Content-Type: application/json' \
--data '{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "my email is ishaan@berri.ai"
}
]
}'
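An end-to-end version of this call from Python might look like the following sketch. It assumes the gateway from step 3 is running locally; on a guardrail block the gateway returns HTTP 400, which urllib surfaces as HTTPError:

```python
import json
import urllib.error
import urllib.request

def chat_with_key(api_key, content, base="http://0.0.0.0:4000"):
    """Send a chat completion through the gateway with a virtual key.
    Returns (status_code, parsed_body); (None, None) if unreachable."""
    body = json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": content}],
    }).encode()
    req = urllib.request.Request(
        base + "/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, json.load(resp)
    except urllib.error.HTTPError as e:
        # e.g. 400 when a guardrail attached to the key blocks the request
        return e.code, json.load(e)
    except urllib.error.URLError:
        return None, None  # gateway not reachable

status, body = chat_with_key("sk-jNm1Zar7XfNdZXp49Z1kSQ",
                             "my email is ishaan@berri.ai")
print(status)
```

Because the guardrails are attached to the key itself, the request body no longer needs a guardrails field.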