# Policies API

This REST API documentation outlines methods for managing policies in the Aporia Policies Catalog. It includes detailed descriptions of endpoints for creating, updating, and deleting policies, complete with example requests and responses.

### Get All Policy Templates

**Endpoint:** GET `https://guardrails.aporia.com/api/v1/policies`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Response Fields:**

The response is a `list`. Each object in the list contains the following fields:

* `type` - The policy type.
* `category` - The policy category.
* `name` - The policy's default name.
* `description` - Description of the policy.

**Response JSON Example:**

```json
[
  {
    "type": "aporia_guardrails_test",
    "category": "test",
    "name": "AGT Test",
    "description": "Test and verify that Guardrails are activated. Activate the policy by sending the following prompt: X5O!P%@AP[4\\PZX54(P^)7CC)7}$AGT-STANDARD-GUARDRAILS-TEST-MSG!$H+H*"
  },
  {
    "type": "competition_discussion_on_prompt",
    "category": "topics",
    "name": "Competition Discussion - Prompt",
    "description": "Detects any user attempt to start a discussion including the competition mentioned in the policy."
  },
  {
    "type": "competition_discussion_on_response",
    "category": "topics",
    "name": "Competition Discussion - Response",
    "description": "Detects any response including reference to the competition mentioned in the policy."
  },
  {
    "type": "basic_restricted_topics_on_prompt",
    "category": "topics",
    "name": "Restricted Topics - Prompt",
    "description": "Detects any user attempt to start a discussion on the topics mentioned in the policy."
  },
  {
    "type": "basic_restricted_topics_on_response",
    "category": "topics",
    "name": "Restricted Topics - Response",
    "description": "Detects any response including discussion on the topics mentioned in the policy."
  },
  {
    "type": "sql_restricted_tables",
    "category": "security",
    "name": "SQL - Restricted Tables",
    "description": "Detects generation of SQL statements with access to specific tables that are considered sensitive. It is recommended to activate the policy and define system tables, as well as other tables with sensitive information."
  },
  {
    "type": "sql_allowed_tables",
    "category": "security",
    "name": "SQL - Allowed tables",
    "description": "Detects SQL operations on tables that are not within the limits we set in the policy. Any operation on, or with another table that is not listed in the policy, will trigger the action configured in the policy. Enable this policy for achieving the finest level of security for your SQL statements."
  },
  {
    "type": "sql_read_only_access",
    "category": "security",
    "name": "SQL - Read-Only Access",
    "description": "Detects any attempt to use SQL operations which requires more than read-only access. Activating this policy is important to avoid accidental or malicious run of dangerous SQL queries like DROP, INSERT, UPDATE and others."
  },
  {
    "type": "sql_load_limit",
    "category": "security",
    "name": "SQL - Load Limit",
    "description": "Detects SQL statements that are likely to cause significant system load and affect performance."
  },
  {
    "type": "basic_allowed_topics_on_prompt",
    "category": "topics",
    "name": "Allowed Topics - Prompt",
    "description": "Ensures the conversation adheres to specific and well-defined topics."
  },
  {
    "type": "basic_allowed_topics_on_response",
    "category": "topics",
    "name": "Allowed Topics - Response",
    "description": "Ensures the conversation adheres to specific and well-defined topics."
  },
  {
    "type": "prompt_injection",
    "category": "prompt_injection",
    "name": "Prompt Injection",
    "description": "Detects any user attempt of prompt injection or jailbreak."
  },
  {
    "type": "rag_hallucination",
    "category": "hallucinations",
    "name": "RAG Hallucination",
    "description": "Detects any response that carries a high risk of hallucinations, thus maintaining the integrity and factual correctness of the information."
  },
  {
    "type": "pii_on_prompt",
    "category": "security",
    "name": "PII - Prompt",
    "description": "Detects existence of PII in the user message, based on the configured sensitive data types."
  },
  {
    "type": "pii_on_response",
    "category": "security",
    "name": "PII - Response",
    "description": "Detects potential responses containing PII, based on the configured sensitive data types."
  },
  {
    "type": "toxicity_on_prompt",
    "category": "toxicity",
    "name": "Toxicity - Prompt",
    "description": "Detects user messages containing toxicity."
  },
  {
    "type": "toxicity_on_response",
    "category": "toxicity",
    "name": "Toxicity - Response",
    "description": "Detects potential responses containing toxicity."
  }
]
```

### Get Specific Policy Template

**Endpoint:** GET `https://guardrails.aporia.com/api/v1/policies/{template_type}`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Path Parameters:**

* `template_type` - The type identifier of the policy template to retrieve.

**Response Fields:**

* `type` - The policy type.
* `category` - The policy category.
* `name` - The policy's default name.
* `description` - Description of the policy.

**Response JSON Example:**

```json
{
  "type": "competition_discussion_on_prompt",
  "category": "topics",
  "name": "Competition Discussion - Prompt",
  "description": "Detects any user attempt to start a discussion including the competition mentioned in the policy."
}
```

### Create Custom Policy

**Endpoint:** POST `https://guardrails.aporia.com/api/v1/policies/custom_policy`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Request Fields:**

* The name of the custom policy.
* The target of the policy, either `prompt` or `response`.

There are two configuration modes for a custom policy, `simple` and `advanced`, each with its own condition config.

For simple mode, the following parameters must be passed:

* `evaluation_instructions` - Instructions that define how the policy should evaluate inputs.
* `modality` - Defines whether the instructions trigger a violation if they evaluate to `TRUE` or `FALSE`.

```json
{
  "configuration_mode": "simple",
  "evaluation_instructions": "The {answer} is relevant to the {question}",
  "modality": "violate"
}
```

For advanced mode, the following parameters must be passed:

* `system_prompt` - The system prompt that will be passed to the LLM.
* `top_p` - Top-P sampling probability, between 0 and 1. Defaults to 1.
* `temperature` - Sampling temperature to use, between 0 and 2. Defaults to 1.
* `modality` - Defines whether the instructions trigger a violation if they evaluate to `TRUE` or `FALSE`.

```json
{
  "configuration_mode": "advanced",
  "system_prompt": "You will be given a question and an answer, return TRUE if the answer is relevant to the question, return FALSE otherwise. {question} {answer}",
  "top_p": 1.0,
  "temperature": 0,
  "modality": "violate"
}
```

**Response Fields:**

* `type` - The custom policy type identifier.
* `category` - The policy category, typically `custom` for user-defined policies.
* `name` - The default name for the policy template, as provided in the request.
* `description` - A description of the policy based on the evaluation instructions.

**Response JSON Example:**

```json
{
  "type": "custom_policy_e1dd9b4a-84e5-4a49-9c59-c62dd94572ae",
  "category": "custom",
  "name": "Your Custom Policy Name",
  "description": "Evaluate whether specific conditions are met as per the provided instructions."
}
```

### Edit Custom Policy

**Endpoint:** PUT `https://guardrails.aporia.com/api/v1/policies/custom_policy/{custom_policy_type}`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Path Parameters:**

* `custom_policy_type` - The custom policy type identifier to update, as returned from the `Create Custom Policy` endpoint.

**Request Fields:**

Identical to the `Create Custom Policy` endpoint: the policy name, the target (`prompt` or `response`), and either a `simple` or `advanced` condition configuration.

**Response Fields:**

* `type` - The custom policy type identifier.
* `category` - The policy category, typically `custom` for user-defined policies.
* `name` - The default name for the policy template.
* `description` - Updated description of the policy based on the new evaluation instructions.

**Response JSON Example:**

```json
{
  "type": "custom_policy_e1dd9b4a-84e5-4a49-9c59-c62dd94572ae",
  "category": "custom",
  "name": "Your Custom Policy Name",
  "description": "Evaluate whether specific conditions are met as per the new instructions."
}
```

### Delete Custom Policy

**Endpoint:** DELETE `https://guardrails.aporia.com/api/v1/policies/custom_policy/{custom_policy_type}`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Path Parameters:**

* `custom_policy_type` - The custom policy type identifier to delete, as returned from the `Create Custom Policy` endpoint.

**Response:** `200` OK

### Create policies for multiple projects

**Endpoint:** PUT `https://guardrails.aporia.com/api/v1/policies/`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Request Fields:**

* The project IDs to create the policies in.
* A list of policies to create. Each policy has the following attributes: `policy_type` (string), `priority` (int), `condition` (dict), `action` (dict).

# Projects API

This REST API documentation outlines methods for managing projects and policies on the Aporia platform. It includes detailed descriptions of endpoints for creating, updating, and deleting projects and their associated policies, complete with example requests and responses.

### Get All Projects

**Endpoint:** GET `https://guardrails.aporia.com/api/v1/projects`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Response Fields:**

The response is a `list`. Each object in the list contains the following fields:

* `id` - The project ID.
* `name` - The project name.
* `description` - The project description.
* `icon` - The project icon. Possible values are `codepen`, `chatBubbleLeftRight`, `serverStack`, `academicCap`, `bookOpen`, `commandLine`, `creditCard`, `rocketLaunch`, `envelope`, `identification`.
* `color` - The project color. Possible values are `turquoiseBlue`, `mustard`, `cornflowerBlue`, `heliotrope`, `spray`, `peachOrange`, `shocking`, `white`, `manz`, `geraldine`.
* `organization_id` - The organization ID.
* `is_active` - Boolean indicating whether the project is active.
* `policies` - List of policies. Each policy has the following attributes: `id` (uuid), `policy_type` (string), `name` (string), `enabled` (bool), `condition` (dict), `action` (dict).
* `project_extractions` - List of [extractions](/fundamentals/extractions) defined for the project. Each extraction contains the following fields:
  * `descriptor_type` - Either `default` or `custom`. Default extractions are supported by all Aporia policies, and it is recommended to define them for optimal results. Custom extractions are user-defined and more versatile, but not all policies can utilize them.
  * `descriptor` - A descriptor of what exactly is extracted by the extraction. For `default` extractions, the supported descriptors are `question`, `context`, and `answer`.
  * `extraction_target` - Either `prompt` or `response`, based on where data should be extracted from.
  * `extraction` - The extraction method, either `RegexExtraction` or `JSONPathExtraction` (see below).
* `context_extraction` - Extraction method for context, either `RegexExtraction` or `JSONPathExtraction`.
* `question_extraction` - Extraction method for question, either `RegexExtraction` or `JSONPathExtraction`.
* `answer_extraction` - Extraction method for answer, either `RegexExtraction` or `JSONPathExtraction`.
* `prompt_policy_timeout_ms` - Maximum runtime for policies on prompt, in milliseconds.
* `response_policy_timeout_ms` - Maximum runtime for policies on response, in milliseconds.
* `integration_status` - Project integration status. Possible values are `pending`, `failed`, `success`.
* `size` - The size of the project. Possible values are `0`, `1`, `2`, `3`; defaults to `0`.

`RegexExtraction` is an object containing `type` (a string equal to `regex`) and `regex` (a string containing the regex expression to extract with). For example:

```json
{
  "type": "regex",
  "regex": "(.+)"
}
```

`JSONPathExtraction` is an object containing `type` (a string equal to `jsonpath`) and `path` (a string specifying the JSONPath expression used to navigate and extract specific data from a JSON document). For example:

```json
{
  "type": "jsonpath",
  "path": "$.context"
}
```

**Response JSON Example:**

```json
[
  {
    "id": "123e4567-e89b-12d3-a456-426614174000",
    "name": "Test",
    "description": "Project to test",
    "icon": "chatBubbleLeftRight",
    "color": "mustard",
    "organization_id": "123e4567-e89b-12d3-a456-426614174000",
    "is_active": true,
    "policies": [
      {
        "id": "1",
        "policy_type": "aporia_guardrails_test",
        "name": null,
        "enabled": true,
        "condition": {},
        "action": {
          "type": "block",
          "response": "Aporia Guardrails Test: AGT detected successfully!"
        }
      }
    ],
    "project_extractions": [
      {
        "descriptor": "question",
        "descriptor_type": "default",
        "extraction": {"regex": "(.+)", "type": "regex"},
        "extraction_target": "prompt"
      },
      {
        "descriptor": "context",
        "descriptor_type": "default",
        "extraction": {"regex": "(.+)", "type": "regex"},
        "extraction_target": "prompt"
      },
      {
        "descriptor": "answer",
        "descriptor_type": "default",
        "extraction": {"regex": "(.+)", "type": "regex"},
        "extraction_target": "response"
      }
    ],
    "context_extraction": {"type": "regex", "regex": "(.+)"},
    "question_extraction": {"type": "regex", "regex": "(.+)"},
    "answer_extraction": {"type": "regex", "regex": "(.+)"},
    "prompt_policy_timeout_ms": null,
    "response_policy_timeout_ms": null,
    "integration_status": "success",
    "size": 0
  }
]
```

### Get Project by ID

**Endpoint:** GET `https://guardrails.aporia.com/api/v1/projects/{project_id}`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Path Parameters:**

* `project_id` - The ID of the project to retrieve.

**Response Fields:**

Identical to a single project object returned by the `Get All Projects` endpoint, including `policies`, `project_extractions`, and the extraction fields described there.

**Response JSON Example:**

```json
{
  "id": "123e4567-e89b-12d3-a456-426614174000",
  "name": "Test",
  "description": "Project to test",
  "icon": "chatBubbleLeftRight",
  "color": "mustard",
  "organization_id": "123e4567-e89b-12d3-a456-426614174000",
  "is_active": true,
  "policies": [
    {
      "id": "1",
      "policy_type": "aporia_guardrails_test",
      "name": null,
      "enabled": true,
      "condition": {},
      "action": {
        "type": "block",
        "response": "Aporia Guardrails Test: AGT detected successfully!"
      }
    }
  ],
  "project_extractions": [
    {
      "descriptor": "question",
      "descriptor_type": "default",
      "extraction": {"regex": "(.+)", "type": "regex"},
      "extraction_target": "prompt"
    },
    {
      "descriptor": "context",
      "descriptor_type": "default",
      "extraction": {"regex": "(.+)", "type": "regex"},
      "extraction_target": "prompt"
    },
    {
      "descriptor": "answer",
      "descriptor_type": "default",
      "extraction": {"regex": "(.+)", "type": "regex"},
      "extraction_target": "response"
    }
  ],
  "context_extraction": {"type": "regex", "regex": "(.+)"},
  "question_extraction": {"type": "regex", "regex": "(.+)"},
  "answer_extraction": {"type": "regex", "regex": "(.+)"},
  "prompt_policy_timeout_ms": null,
  "response_policy_timeout_ms": null,
  "integration_status": "success",
  "size": 1
}
```

### Create Project

**Endpoint:** POST `https://guardrails.aporia.com/api/v1/projects`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Request Fields:**

* `name` - The name of the project.
* `description` - The description of the project.
* `prompt_policy_timeout_ms` - Maximum runtime for policies on prompt, in milliseconds.
* `response_policy_timeout_ms` - Maximum runtime for policies on response, in milliseconds.
* `icon` - Icon of the project, with possible values: `codepen`, `chatBubbleLeftRight`, `serverStack`, `academicCap`, `bookOpen`, `commandLine`, `creditCard`, `rocketLaunch`, `envelope`, `identification`.
* `color` - Color of the project, with possible values: `turquoiseBlue`, `mustard`, `cornflowerBlue`, `heliotrope`, `spray`, `peachOrange`, `shocking`, `white`, `manz`, `geraldine`.
* `project_extractions` - List of [extractions](/fundamentals/extractions) to define for the project. See the full explanation of `project_extractions` in the `Get All Projects` endpoint.
* `context_extraction` - Extraction method for context. Defaults to `RegexExtraction` with a predefined regex: `(.+)`.
* `question_extraction` - Extraction method for question. Defaults to `RegexExtraction` with a predefined regex: `(.+)`.
* `answer_extraction` - Extraction method for answer. Defaults to `RegexExtraction` with a predefined regex: `(.+)`.
* `is_active` - Boolean indicating whether the project is active. Defaults to `true`.
* `size` - The size of the project. Possible values are `0`, `1`, `2`, `3`; defaults to `0`.

**Request JSON Example:**

```json
{
  "name": "New Project",
  "description": "Description of the new project",
  "prompt_policy_timeout_ms": 1000,
  "response_policy_timeout_ms": 1000,
  "icon": "chatBubbleLeftRight",
  "color": "turquoiseBlue",
  "project_extractions": [
    {
      "descriptor": "question",
      "descriptor_type": "default",
      "extraction": {"regex": "(.+)", "type": "regex"},
      "extraction_target": "prompt"
    },
    {
      "descriptor": "context",
      "descriptor_type": "default",
      "extraction": {"regex": "(.+)", "type": "regex"},
      "extraction_target": "prompt"
    },
    {
      "descriptor": "answer",
      "descriptor_type": "default",
      "extraction": {"regex": "(.+)", "type": "regex"},
      "extraction_target": "response"
    }
  ],
  "is_active": true,
  "size": 0
}
```

**Response Fields:**

The response fields mirror those of the ProjectRead object, as described in the `Get Project by ID` endpoint.

**Response JSON Example:**

Identical in structure to the response of the `Get Project by ID` endpoint.

### Update Project

**Endpoint:** PUT `https://guardrails.aporia.com/api/v1/projects/{project_id}`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Path Parameters:**

* `project_id` - The ID of the project to update.

**Request Fields:**

* `name` - The name of the project.
* `description` - The description of the project.
* `prompt_policy_timeout_ms` - Maximum runtime for policies on prompt, in milliseconds.
* `response_policy_timeout_ms` - Maximum runtime for policies on response, in milliseconds.
* `icon` - Icon of the project, with possible values like `codepen`, `chatBubbleLeftRight`, etc.
* `color` - Color of the project, with possible values like `turquoiseBlue`, `mustard`, etc.
* `project_extractions` - List of [extractions](/fundamentals/extractions) to define for the project. See the full explanation of `project_extractions` in the `Get All Projects` endpoint.
* `context_extraction` - Extraction method for context. Defaults to `RegexExtraction` with a predefined regex.
* `question_extraction` - Extraction method for question. Defaults to `RegexExtraction` with a predefined regex.
* `answer_extraction` - Extraction method for answer. Defaults to `RegexExtraction` with a predefined regex.
* `is_active` - Boolean indicating whether the project is active.
* `size` - The size of the project. Possible values are `0`, `1`, `2`, `3`; defaults to `0`.
* Boolean indicating whether to allow project resizing (used when downgrading a project that has surpassed the maximum tokens for the new project size).
* Boolean indicating whether to remove the scheduled size from the project.
* A list of policy IDs to keep when downgrading the project.
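For illustration, an update request like the one below can be assembled with Python's standard library. This is a minimal sketch, not an official client: `build_update_project_request` is a hypothetical helper written for this doc, and the API key is assumed to live in an `APORIA_API_KEY` environment variable.

```python
import json
import os
import urllib.request

API_BASE = "https://guardrails.aporia.com/api/v1"

def build_update_project_request(project_id: str, payload: dict,
                                 api_key: str) -> urllib.request.Request:
    """Assemble a PUT request for the Update Project endpoint."""
    return urllib.request.Request(
        url=f"{API_BASE}/projects/{project_id}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="PUT",
    )

req = build_update_project_request(
    "123e4567-e89b-12d3-a456-426614174000",
    {"name": "Updated Project", "is_active": False},
    os.environ.get("APORIA_API_KEY", "YOUR_API_KEY"),
)
# Send it with: urllib.request.urlopen(req)  (network call omitted here)
```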
**Request JSON Example:**

```json
{
  "name": "Updated Project",
  "description": "Updated description of the project",
  "prompt_policy_timeout_ms": 2000,
  "response_policy_timeout_ms": 2000,
  "icon": "serverStack",
  "color": "cornflowerBlue",
  "project_extractions": [
    {
      "descriptor": "question",
      "descriptor_type": "default",
      "extraction": {"regex": "(.+)", "type": "regex"},
      "extraction_target": "prompt"
    },
    {
      "descriptor": "context",
      "descriptor_type": "default",
      "extraction": {"regex": "(.+)", "type": "regex"},
      "extraction_target": "prompt"
    },
    {
      "descriptor": "answer",
      "descriptor_type": "default",
      "extraction": {"regex": "(.+)", "type": "regex"},
      "extraction_target": "response"
    }
  ],
  "is_active": false
}
```

**Response Fields:**

The response fields mirror those of the ProjectRead object, as described in the `Get Project by ID` endpoint.

**Response JSON Example:**

Identical in structure to the response of the `Get Project by ID` endpoint.

### Delete Project

**Endpoint:** DELETE `https://guardrails.aporia.com/api/v1/projects/{project_id}`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Path Parameters:**

* `project_id` - The ID of the project to delete.

**Response Fields:**

The response fields mirror those of the ProjectRead object, as described in the `Get Project by ID` endpoint.

**Response JSON Example:**

Identical in structure to the response of the `Get Project by ID` endpoint.

### Get All Policies of a Project

**Endpoint:** GET `https://guardrails.aporia.com/api/v1/projects/{project_id}/policies`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Path Parameters:**

* `project_id` - The ID of the project whose policies you want to retrieve.

**Response Fields:**

The response is a `list`. Each object in the list contains the following fields:

* `id` - The unique identifier of the policy.
* `action` - Configuration details of the action to be taken by this policy. `ActionConfig` is an object containing a `type` field, with possible values `modify`, `rephrase`, `block`, and `passthrough`:
  * For the `modify` action, the extra fields are `prefix` and `suffix`, both optional strings. The value of `prefix` is added at the beginning of the response, and the value of `suffix` is added at the end of the response.
  * For the `rephrase` action, the extra fields are `prompt` (required) and `llm_model_to_use` (optional). `prompt` is a string that is appended to the question sent to the LLM. `llm_model_to_use` is a string naming the LLM model to use; the default value is `gpt3.5_1106`.
  * For the `block` action, the extra field is `response`, a required string that replaces the original response from the LLM.
  * For the `passthrough` action, there are no extra fields.
* `enabled` - Boolean indicating whether the policy is currently enabled.
* `condition` - Conditions under which the policy is triggered. The condition schema varies per policy.
* `policy_type` - Type of the policy, defining its nature and behavior.
* `priority` - The order of priority of this policy among others within the same project. Priorities must be unique.

**Response JSON Example:**

```json
[
  {
    "id": "1",
    "action": {
      "type": "block",
      "response": "Aporia Guardrails Test: AGT detected successfully!"
    },
    "enabled": true,
    "condition": {},
    "policy_type": "aporia_guardrails_test",
    "priority": 0
  },
  {
    "id": "2",
    "action": {
      "type": "block",
      "response": "Toxicity detected: Message blocked because it includes toxicity. Please rephrase."
    },
    "enabled": true,
    "condition": {
      "type": "toxicity",
      "categories": ["harassment", "hate", "self_harm", "sexual", "violence"],
      "top_category_threshold": 0.6,
      "bottom_category_threshold": 0.1
    },
    "policy_type": "toxicity_on_prompt",
    "priority": 1
  }
]
```

### Get Policy by ID

**Endpoint:** GET `https://guardrails.aporia.com/api/v1/projects/{project_id}/policies/{policy_id}`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + your copied Aporia API key

**Path Parameters:**

* `project_id` - The ID of the project from which to retrieve a specific policy.
* `policy_id` - The ID of the policy to retrieve.

**Response Fields:**

Identical to a single policy object returned by the `Get All Policies of a Project` endpoint, including the `ActionConfig` structure described there.
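To make the `ActionConfig` semantics above concrete, here is a small illustrative sketch of how each action type transforms an LLM response. `apply_action` is a hypothetical helper written for this doc, not part of the Aporia API, and the `rephrase` action is omitted because it requires an LLM call:

```python
def apply_action(action: dict, llm_response: str) -> str:
    """Sketch of how each ActionConfig type affects the final response."""
    action_type = action.get("type", "passthrough")
    if action_type == "modify":
        # Optional prefix/suffix are added around the original response.
        return action.get("prefix", "") + llm_response + action.get("suffix", "")
    if action_type == "block":
        # The configured response replaces the original one.
        return action["response"]
    # passthrough: the response is returned unchanged.
    return llm_response

print(apply_action({"type": "block", "response": "Blocked."}, "some toxic reply"))
# prints "Blocked."
```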
**Response JSON Example:** ```json { "id": "2", "action": { "type": "block", "response": "Toxicity detected: Message blocked because it includes toxicity. Please rephrase." }, "enabled": true, "condition": { "type": "toxicity", "categories": [ "harassment", "hate", "self_harm", "sexual", "violence" ], "top_category_threshold": 0.6, "bottom_category_threshold": 0.1 }, "policy_type": "toxicity_on_prompt", "priority": 1 } ``` ### Create Policies **Endpoint:** POST `https://guardrails.aporia.com/api/v1/projects/{project_id}/policies` **Headers:** * `Content-Type`: `application/json` * `Authorization`: `Bearer` + Your copied Aporia API key **Path Parameters:** The ID of the project within which the policy will be created. **Request Fields:** The reuqest field is a `list`. each object in the list contains the following fields: The type of policy, which defines its behavior and the template it follows. The action that the policy enforces when its conditions are met. `ActionConfig` is an object containing `type` field, with possible values of: `modify`, `rephrase`, `block`, `passthrough`. For `modify` action, extra fields will be `prefix` and `suffix`, both optional strings. The value in `prefix` will be added in the beginning of the response, and the value of `suffix` will be added in the end of the response. For `rephrase` action, extra fields will be `prompt` (required) and `llm_model_to_use` (optional). `prompt` is a string that will be used as an addition to the question being sent to the llm. `llm_model_to_use` is a string representing the llm model that will be used. default value is `gpt3.5_1106`. For `block` action, extra field will be `response`, which is a required string. This `response` will replace the original response from the llm. For `passthrough` action, there will be no extra fields. The conditions under which the policy will trigger its action. defauls to `{}`. The condition changes per policy. 
The priority of the policy within the project, affecting the order in which it is evaluated against others. There must be no duplicates. **Request JSON Example:** ```json [{ "policy_type": "toxicity_on_prompt", "action": { "type": "block", "response": "Toxicity detected: Message blocked because it includes toxicity. Please rephrase." }, "condition": { "type": "toxicity", "categories": ["harassment", "hate", "self_harm", "sexual", "violence"], "top_category_threshold": 0.6, "bottom_category_threshold": 0.1 }, "enabled": true, "priority": 2 }] ``` **Response Fields:** The response fields will mirror those specified in the PolicyRead object, with additional details specific to the newly created policy. **Response JSON Example:** ```json [{ "id": "123e4567-e89b-12d3-a456-426614174000", "policy_type": "toxicity_on_prompt", "action": { "type": "block", "response": "Toxicity detected: Message blocked because it includes toxicity. Please rephrase." }, "condition": { "type": "toxicity", "categories": ["harassment", "hate", "self_harm", "sexual", "violence"], "top_category_threshold": 0.6, "bottom_category_threshold": 0.1 }, "enabled": true, "priority": 2 }] ``` ### Update Policy **Endpoint:** PUT `https://guardrails.aporia.com/api/v1/projects/{project_id}/policies/{policy_id}` **Headers:** * `Content-Type`: `application/json` * `Authorization`: `Bearer` + Your copied Aporia API key **Path Parameters:** The ID of the project within which the policy will be updated. The ID of the policy to be updated. **Request Fields:** Specifies the action that the policy enforces when its conditions are met. `ActionConfig` is an object containing `type` field, with possible values of: `modify`, `rephrase`, `block`, `passthrough`. For `modify` action, extra fields will be `prefix` and `suffix`, both optional strings. The value in `prefix` will be added in the beginning of the response, and the value of `suffix` will be added in the end of the response. 
For the `rephrase` action, the extra fields are `prompt` (required) and `llm_model_to_use` (optional); `prompt` is a string added to the question sent to the LLM, and `llm_model_to_use` names the LLM model to use (default: `gpt3.5_1106`). For the `block` action, the extra field is `response`, a required string that replaces the original LLM response. The `passthrough` action has no extra fields.
* `condition`: Defines the conditions under which the policy will trigger its action. The condition schema varies per policy.
* `enabled`: Indicates whether the policy should be active.
* `priority`: The priority of the policy within the project, affecting the order in which it is evaluated against other policies. There must be no duplicate priorities.

**Request JSON Example:**

```json
{
  "action": {
    "type": "block",
    "response": "Updated action response to conditions."
  },
  "condition": {
    "type": "updated_condition",
    "value": "new_condition_value"
  },
  "enabled": false,
  "priority": 1
}
```

**Response Fields:**

The response fields mirror those of the `PolicyRead` object, updated to reflect the changes made to the policy.

**Response JSON Example:**

```json
{
  "id": "2",
  "action": {
    "type": "block",
    "response": "Updated action response to conditions."
  },
  "condition": {
    "type": "updated_condition",
    "value": "new_condition_value"
  },
  "enabled": false,
  "policy_type": "toxicity_on_prompt",
  "priority": 1
}
```

### Delete Policy

**Endpoint:** DELETE `https://guardrails.aporia.com/api/v1/projects/{project_id}/policies/{policy_id}`

**Headers:**

* `Content-Type`: `application/json`
* `Authorization`: `Bearer` + Your copied Aporia API key

**Path Parameters:**

* `project_id`: The ID of the project from which a policy will be deleted.
* `policy_id`: The ID of the policy to be deleted.

**Response Fields:**

* `id`: The unique identifier of the policy.
* `action`: Configuration details of the action that was enforced by this policy.
`ActionConfig` is an object containing a `type` field with possible values `modify`, `rephrase`, `block`, and `passthrough`. For the `modify` action, the extra fields are `prefix` and `suffix`, both optional strings; the value of `prefix` is added at the beginning of the response and the value of `suffix` at the end. For the `rephrase` action, the extra fields are `prompt` (required) and `llm_model_to_use` (optional); `prompt` is a string added to the question sent to the LLM, and `llm_model_to_use` names the LLM model to use (default: `gpt3.5_1106`). For the `block` action, the extra field is `response`, a required string that replaces the original LLM response. The `passthrough` action has no extra fields.
* `enabled`: Indicates whether the policy was enabled at the time of deletion.
* `condition`: The conditions under which the policy triggered its action.
* `policy_type`: The type of the policy, defining its nature and behavior.
* `priority`: The priority this policy held within the project, affecting the order in which it was evaluated against other policies. There must be no duplicate priorities.

**Response JSON Example:**

```json
{
  "id": "2",
  "action": {
    "type": "block",
    "response": "This policy action will no longer be triggered."
  },
  "enabled": false,
  "condition": {
    "type": "toxicity",
    "categories": ["harassment", "hate", "self_harm", "sexual", "violence"]
  },
  "policy_type": "toxicity_on_prompt",
  "priority": 1
}
```

# Directory sync

Directory Sync helps teams manage their organization membership from a third-party identity provider like Google Directory or Okta. Like SAML, Directory Sync is only available for Enterprise Teams and can only be configured by Team Owners.

When Directory Sync is configured, changes to your Directory Provider will automatically be synced with your team members.
The previously existing permissions/roles will be overwritten by Directory Sync, including those of the current user performing the sync. Make sure that you still have the right permissions/role after configuring Directory Sync; otherwise, you might lock yourself out. All team members will receive an email detailing the change.

For example, if a new user is added to your Okta directory, that user will automatically be invited to join your Aporia Team. If a user is removed, they will automatically be removed from the Aporia Team.

You can configure a mapping between your Directory Provider's groups and an Aporia Team role. For example, your ML Engineers group on Okta can be configured with the member role on Aporia, and your Admin group can use the owner role.

## Configuring Directory Sync

To configure directory sync for your team:

1. Ensure your team is selected in the scope selector.
2. From your team's dashboard, select the Settings tab, and then Security & Privacy.
3. Under SAML Single Sign-On, select the Configure button. This opens a dialog to guide you through configuring Directory Sync for your Team with your Directory Provider.
4. Once you have completed the configuration walkthrough, configure how Directory Groups should map to Aporia Team roles.
5. Finally, an overview of all synced members is shown. Click Confirm and Sync to complete the syncing.
6. Once confirmed, Directory Sync will be successfully configured for your Aporia Team.

## Supported providers

Aporia supports the following third-party directory providers:

* Okta
* Google
* Azure
* SAML
* OneLogin

# Multi-factor Authentication (MFA)

## MFA setup guide

To set up multi-factor authentication (MFA) for your user, follow these steps:

1. [Log into your Aporia Guardrails account.](https://guardrails.aporia.com)
2. On the sidebar, click **Settings**.
3. Select the **Profile** tab and go to the **Authentication** section.
4. Click **Setup a new Factor**.
5. Provide a memorable name to identify this factor (e.g.
Bitwarden, Google Authenticator, iPhone 14, etc.)
6. Click **Set factor name**.
7. A QR code will appear; scan it in your MFA app and enter the generated code.
8. Click **Enable Factor**. All done!

# Security & Compliance

Aporia uses and provides a variety of tools, frameworks, and features to ensure that your data is secure.

## Ownership: You own and control your data

* You own your inputs and outputs
* You control how long your data is retained (by default, 30 days)

## Control: You decide who has access

* Enterprise-level authentication through SAML SSO
* Fine-grained control over access and available features
* Custom policies are yours alone to use and are not shared with anyone else

## Security: Comprehensive compliance

* We've been audited for SOC 2 and HIPAA compliance
* Aporia can be deployed in the same cloud provider (AWS, GCP, Azure) and region as your workloads
* Private Link can be set up so all data stays in your cloud provider's backbone and does not traverse the Internet
* Data encryption at rest (AES-256) and in transit (TLS 1.2+)
* Bring Your Own Key encryption so you can revoke access to data at any time
* Visit our [Trust Portal](https://security.aporia.com/) to understand more about our security measures
* Aporia code is peer-reviewed by developers with security training. Significant design documents go through comprehensive security reviews.

# Self Hosting

This document provides an overview of the Aporia platform architecture, design choices, and security features that enable your team to securely add guardrails to their models without exposing any sensitive data.

# Overview

The Aporia architecture is split into two planes to **avoid sensitive data exposure** and **simplify maintenance**.

* The control plane lives in Aporia's cloud and serves the policy configuration, along with the UI and metadata.
* The data plane can be deployed in your cloud environment, runs the policies themselves, and provides an [OpenAI-compatible endpoint](/fundamentals/integration/openai-proxy).

# Architecture

Built on a robust Kubernetes architecture, the data plane is designed to expand horizontally, adapting to the volume and demands of your LLM applications.

The data plane lives in your cloud provider account, and it's a fully stateless application where all configuration is retrieved from the control plane. Any LLM prompt & response is processed in-memory only, unless users opt to store them in a Postgres database in the customer's cloud. Users can either use the OpenAI proxy or call the detection API directly.

The data plane generates non-sensitive metadata that is pushed to the control plane (e.g. toxicity score, hallucination score).

## Data plane modes

The data plane supports two modes:

* **Azure OpenAI mode** - In this basic mode, all policies run using Azure OpenAI. While in this mode you can run the data plane without any GPUs, it does not support policy fine-tuning, and the accuracy/latency of the policies will be lower.
* **Full mode** - In this mode, we'll run our fine-tuned small language models (SLMs) on your infrastructure. This achieves our state-of-the-art accuracy and latency but requires access to GPUs.

The following architecture image describes the full mode:

# Dependencies

* Kubernetes (e.g. Amazon EKS)
* Postgres (e.g. Amazon RDS)
* RabbitMQ (e.g. Amazon MQ)

# Security

## Networking

All communication to Aporia is done via a single HTTPS port. You can choose your own internal domain for Aporia, provide your own TLS certificates, and put Aporia behind your existing API gateway. Communication is encrypted with industry-standard security protocols such as TLS 1.3.

By default, Aporia will configure networking for you, but you can also control data plane networking with a customer-managed VPC or VNet.
Aporia does not change or modify any of your security and governance policies. Local firewalls complement security groups and subnet firewall policies to block unexpected inbound connections.

## Application

The data plane runs in your cloud provider account in a Kubernetes cluster. Aporia supports AWS, Google Cloud, and Azure.

Aporia automatically runs the latest hardened base images, which are typically updated every 2-4 weeks. All containers run in unprivileged mode as non-root users.

Every release is scanned for vulnerabilities, including container OS and third-party libraries, as well as static and dynamic code scanning. Aporia code is peer-reviewed by developers with security training. Significant design documents go through comprehensive security reviews. Issues are tracked against the timeline shown in this table.

Aporia's founding team comes from the elite cybersecurity Unit 8200 of the Israeli Defense Forces.

# Single sign-on (SSO)

To manage the members of your team through a third-party identity provider like Okta or Auth0, you can set up the Security Assertion Markup Language (SAML) feature from the team settings. To enable this feature, the team must be on the Enterprise plan and you must hold an owner role.

All team members will be able to log in using your identity provider (which you can also enforce), and, similar to the team email domain feature, any new users signing up with SAML will automatically be added to your team.

## Configuring SAML SSO

SAML can be configured from the team settings, under the SAML Single Sign-On section. Clicking Configure will open a walkthrough that helps you configure SAML SSO for your team with your identity provider of choice. After completing the steps, SAML will be successfully configured for your team.

## Authenticating with SAML SSO

Once you have configured SAML, your team members can use SAML SSO to log in or sign up to Aporia. Click "SSO" on the authentication page, then enter your work email address.
## Enforcing SAML

For additional security, SAML SSO can be enforced for a team so that team members cannot access any team information unless their current session was authenticated with SAML SSO.

You can only enforce SAML SSO for a team if your current session was authenticated with SAML SSO. This ensures that your configuration is working properly before tightening access to your team information, preventing loss of access to the team.

# RAG Chatbot: Embedchain + Chainlit

Learn how to build a streaming RAG chatbot with Embedchain, OpenAI, Chainlit for chat UI, and Aporia Guardrails.

## Setup

Install the required libraries:

```bash
pip3 install chainlit embedchain --upgrade
```

Import libraries:

```python
import chainlit as cl
from embedchain import App
import uuid
```

## Build a RAG chatbot

When Chainlit starts, initialize a new Embedchain app using GPT-3.5 with streaming enabled. This is where you can add documents to be used as knowledge for your RAG chatbot. For more information, see the [Embedchain docs](https://docs.embedchain.ai/components/data-sources/overview).
```python
@cl.on_chat_start
async def chat_startup():
    app = App.from_config(config={
        "app": {
            "config": {
                "name": "my-chatbot",
                "id": str(uuid.uuid4()),
                "collect_metrics": False
            }
        },
        "llm": {
            "config": {
                "model": "gpt-3.5-turbo-0125",
                "stream": True,
                "temperature": 0.0,
            }
        }
    })

    # Add documents to be used as knowledge base for the chatbot
    app.add("my_knowledge.pdf", data_type='pdf_file')

    cl.user_session.set("app", app)
```

When a user writes a message in the chat UI, call the Embedchain RAG app:

```python
@cl.on_message
async def on_new_message(message: cl.Message):
    app = cl.user_session.get("app")

    msg = cl.Message(content="")

    for chunk in await cl.make_async(app.chat)(message.content):
        await msg.stream_token(chunk)

    await msg.send()
```

To run the application, run:

```bash
chainlit run .py
```

## Integrate Aporia Guardrails

Next, to integrate Aporia Guardrails, get your Aporia API key and base URL per the [OpenAI proxy](/fundamentals/integration/) documentation. You can then add it like this to the Embedchain app from the configuration:

```python
app = App.from_config(config={
    "llm": {
        "config": {
            "base_url": "https://gr-prd.aporia.com/",
            "model_kwargs": {
                "default_headers": {
                    "X-APORIA-API-KEY": ""
                }
            },
            # ...
        }
    },
    # ...
})
```

### AGT Test

You can now test the integration using the [AGT Test](/policies/agt-test). Try this prompt:

```
X5O!P%@AP[4\PZX54(P^)7CC)7}$AGT-STANDARD-GUARDRAILS-TEST-MSG!$H+H*
```

# Conclusion

That's it. You have successfully created an LLM application using Embedchain, Chainlit, and Aporia.

# Basic Example: Langchain + Gemini

Learn how to build a basic application using Langchain, Google Gemini, and Aporia Guardrails.

## Overview

[Gemini](https://ai.google.dev/models/gemini) is a family of generative AI models that lets developers generate content and solve problems. These models are designed and trained to handle both text and images as input.
[Langchain](https://www.langchain.com/) is a framework designed to make integration of Large Language Models (LLMs) like Gemini easier for applications.

[Aporia](https://www.aporia.com/) allows you to mitigate hallucinations and embarrassing responses in customer-facing RAG applications.

In this tutorial, you'll learn how to create a basic application using Gemini, Langchain, and Aporia.

## Setup

First, you must install the packages and set the necessary environment variables.

### Installation

Install Langchain's Python library, `langchain`.

```bash
pip install --quiet langchain
```

Install Langchain's integration package for Gemini, `langchain-google-genai`.

```bash
pip install --quiet langchain-google-genai
```

### Grab API Keys

To use Gemini and Aporia you need *API keys*. In Gemini, you can create an API key with one click in [Google AI Studio](https://makersuite.google.com/).

To grab your Aporia API key, create a project in Aporia and copy the API key from the user interface. You can follow the [quickstart](/get-started/quickstart) tutorial.

```python
APORIA_BASE_URL = "https://gr-prd.aporia.com/"
APORIA_API_KEY = "..."
GEMINI_API_KEY = "..."
```

### Import the required libraries

```python
from langchain import PromptTemplate
from langchain.schema import StrOutputParser
```

### Initialize Gemini

You must import the `ChatGoogleGenerativeAI` LLM from Langchain to initialize your model. In this example you will use **gemini-pro**. To know more about the text model, read Google AI's [language documentation](https://ai.google.dev/models/gemini).

You can configure model parameters such as ***temperature*** or ***top\_p*** by passing the appropriate values when creating the `ChatGoogleGenerativeAI` LLM. To learn more about the parameters and their uses, read Google AI's [concepts guide](https://ai.google.dev/docs/concepts#model_parameters).
```python
from langchain_google_genai import ChatGoogleGenerativeAI

# If there is no env variable set for the API key, you can pass the API key
# to the parameter `google_api_key` of the `ChatGoogleGenerativeAI` function:
# `google_api_key="key"`.
llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    temperature=0.7,
    top_p=0.85,
    google_api_key=GEMINI_API_KEY,
)
```

# Wrap Gemini with Aporia Guardrails

We'll now wrap the Gemini LLM object with Aporia Guardrails. Since Aporia doesn't natively support Gemini yet, we can use the [REST API](/fundamentals/integration/rest-api) integration, which is LLM-agnostic.

Copy this adapter code (to be uploaded as a standalone `langchain-aporia` pip package):

```python
import requests
from typing import Any, AsyncIterator, Dict, Iterator, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import BaseMessage
from langchain_core.outputs import ChatResult
from pydantic import PrivateAttr
from langchain_community.adapters.openai import convert_message_to_dict


class AporiaGuardrailsChatModelWrapper(BaseChatModel):
    base_model: BaseChatModel = PrivateAttr(default_factory=None)
    aporia_url: str = PrivateAttr(default_factory=None)
    aporia_token: str = PrivateAttr(default_factory=None)

    def __init__(
        self,
        base_model: BaseChatModel,
        aporia_url: str,
        aporia_token: str,
        **data
    ):
        super().__init__(**data)
        self.base_model = base_model
        self.aporia_url = aporia_url
        self.aporia_token = aporia_token

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # Get response from underlying model
        llm_response = self.base_model._generate(messages, stop, run_manager)
        if len(llm_response.generations) > 1:
            raise NotImplementedError()

        # Run Aporia Guardrails
        messages_dict = [convert_message_to_dict(m) for m in messages]

        guardrails_result = requests.post(
            url=f"{self.aporia_url}/validate",
            headers={
                "X-APORIA-API-KEY": self.aporia_token,
            },
            json={
                "messages": messages_dict,
                "validation_target": "both",
                "response": llm_response.generations[0].message.content
            }
        )

        revised_response = guardrails_result.json()["revised_response"]

        llm_response.generations[0].text = revised_response
        llm_response.generations[0].message.content = revised_response

        return llm_response

    @property
    def _llm_type(self) -> str:
        """Get the type of language model used by this chat model."""
        return self.base_model._llm_type

    @property
    def _identifying_params(self) -> Dict[str, Any]:
        return self.base_model._identifying_params
```

Then, override your LLM object with the guardrailed version:

```python
llm = AporiaGuardrailsChatModelWrapper(
    base_model=llm,
    aporia_url=APORIA_BASE_URL,
    aporia_token=APORIA_API_KEY,
)
```

### Create prompt templates

You'll use Langchain's [PromptTemplate](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/) to generate prompts for your task.

```python
# To query Gemini
llm_prompt_template = """
You are a helpful assistant.
The user asked this question: "{text}"
Answer:
"""

llm_prompt = PromptTemplate.from_template(llm_prompt_template)
```

### Prompt the model

```python
chain = llm_prompt | llm | StrOutputParser()
print(chain.invoke("Hey, how are you?"))
# ==> I am well, thank you for asking. How are you doing today?
```

### AGT Test

Read more here: [AGT Test](/policies/agt-test).

```python
print(chain.invoke("X5O!P%@AP[4\PZX54(P^)7CC)7}$AGT-STANDARD-GUARDRAILS-TEST-MSG!$H+H*"))
# ==> Aporia Guardrails Test: AGT detected successfully!
```

# Conclusion

That's it. You have successfully created an LLM application using Langchain, Gemini, and Aporia.

# Cloudflare AI Gateway

Cloudflare integration is upcoming, stay tuned!
# LiteLLM integration

[LiteLLM](https://github.com/BerriAI/litellm) is an open-source AI gateway. For more information on integrating Aporia with AI gateways, [see this guide](/fundamentals/ai-gateways/overview).

## Integration Guide

### Installation

To configure LiteLLM with Aporia, start by installing LiteLLM:

```bash
pip install 'litellm[proxy]'
```

For more details, visit the [LiteLLM - Getting Started guide](https://docs.litellm.ai/docs/).

## Use LiteLLM AI Gateway with Aporia Guardrails

In this tutorial we will use LiteLLM Proxy with Aporia to detect PII in requests.

## 1. Setup guardrails on Aporia

### Pre-Call: Detect PII

Add the `PII - Prompt` policy to your Aporia project.

## 2. Define Guardrails on your LiteLLM config.yaml

* Define your guardrails under the `guardrails` section and set the `mode` for each guardrail:

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY

guardrails:
  - guardrail_name: "aporia-pre-guard"
    litellm_params:
      guardrail: aporia  # supported values: "aporia", "lakera"
      mode: "during_call"
      api_key: os.environ/APORIA_API_KEY_1
      api_base: os.environ/APORIA_API_BASE_1
```

### Supported values for `mode`

* `pre_call` Run **before** LLM call, on **input**
* `post_call` Run **after** LLM call, on **input & output**
* `during_call` Run **during** LLM call, on **input**

## 3. Start LiteLLM Gateway

```shell
litellm --config config.yaml --detailed_debug
```

## 4. Test request

Expect this to fail, since `ishaan@berri.ai` in the request is PII:

```shell
curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi my email is ishaan@berri.ai"}
    ],
    "guardrails": ["aporia-pre-guard"]
  }'
```

Expected response on failure:

```shell
{
  "error": {
    "message": {
      "error": "Violated guardrail policy",
      "aporia_ai_response": {
        "action": "block",
        "revised_prompt": null,
        "revised_response": "Aporia detected and blocked PII",
        "explain_log": null
      }
    },
    "type": "None",
    "param": "None",
    "code": "400"
  }
}
```

A request without PII should pass:

```shell
curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi what is the weather"}
    ],
    "guardrails": ["aporia-pre-guard"]
  }'
```

## 5. Control Guardrails per Project (API Key)

Use this to control what guardrails run per project.
In this tutorial we only want the following guardrails to run for one project (API key):

* `guardrails`: \["aporia-pre-guard", "aporia"]

**Step 1** Create a key with guardrail settings:

```shell
curl -X POST 'http://0.0.0.0:4000/key/generate' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "guardrails": ["aporia-pre-guard", "aporia"]
  }'
```

Or update an existing key with guardrail settings:

```shell
curl --location 'http://0.0.0.0:4000/key/update' \
  --header 'Authorization: Bearer sk-1234' \
  --header 'Content-Type: application/json' \
  --data '{
    "key": "sk-jNm1Zar7XfNdZXp49Z1kSQ",
    "guardrails": ["aporia-pre-guard", "aporia"]
  }'
```

**Step 2** Test it with the new key:

```shell
curl --location 'http://0.0.0.0:4000/chat/completions' \
  --header 'Authorization: Bearer sk-jNm1Zar7XfNdZXp49Z1kSQ' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "my email is ishaan@berri.ai"
      }
    ]
  }'
```

# Overview

By integrating Aporia with your AI Gateway, every new LLM-based application gets out-of-the-box guardrails. Teams can then add custom policies for their project.

## What is an AI Gateway?

An AI Gateway (or LLM Gateway) is a centralized proxy for LLM-based applications within an organization. This setup enhances governance, management, and control for enterprises.

By routing LLM requests through a centralized gateway rather than directly to LLM providers, you gain multiple benefits:

1. **Less vendor lock-in:** Facilitates easier migrations between different LLM providers.
2. **Cost control:** Manage and monitor expenses on a team-by-team basis.
3. **Rate limit control:** Enforces request limits on a team-by-team basis.
4. **Retries & Caching:** Improves performance and reliability of LLM calls.
5. **Analytics:** Provides insights into usage patterns and operational metrics.
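In client code, the gateway pattern boils down to pointing the same OpenAI-compatible request at the gateway's URL instead of the provider's. A minimal sketch, where the gateway URL, token, and helper function are hypothetical placeholders rather than real endpoints:

```python
# Sketch: with an AI Gateway, the OpenAI-style request body stays identical;
# only the destination URL (and the team's gateway token) changes.
# All names and URLs below are hypothetical placeholders.
PROVIDER_URL = "https://api.openai.com/v1/chat/completions"
GATEWAY_URL = "https://ai-gateway.example.internal/v1/chat/completions"

def build_chat_request(base_url: str, token: str, messages: list) -> dict:
    """Assemble an OpenAI-compatible chat completion request."""
    return {
        "url": base_url,
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"model": "gpt-3.5-turbo", "messages": messages},
    }

messages = [{"role": "user", "content": "Hello!"}]
direct = build_chat_request(PROVIDER_URL, "sk-team-a-token", messages)
via_gateway = build_chat_request(GATEWAY_URL, "sk-team-a-token", messages)

# The payload is unchanged, only the URL differs, which is what makes
# migrating between LLM providers a configuration-level change.
assert direct["json"] == via_gateway["json"]
```

Because the request shape is untouched, the gateway can layer cost tracking, rate limits, retries, and guardrails on top without any application rewrite.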
## Aporia Guardrails & AI Gateways

Aporia Guardrails is a great fit for AI Gateways: every new LLM app automatically gets default out-of-the-box guardrails for hallucinations, inappropriate responses, prompt injections, data leakage, and more.

If a specific team needs to [customize guardrails for their project](/fundamentals/customization), they can log in to the Aporia dashboard and edit the different policies.

Specific integration examples:

* [LiteLLM](/fundamentals/ai-gateways/litellm)
* [Portkey](/fundamentals/ai-gateways/portkey)
* [Cloudflare AI Gateway](/fundamentals/ai-gateways/cloudflare)

If you're using an AI Gateway not listed here, please contact us at [support@aporia.com](mailto:support@aporia.com). We'd be happy to add more examples!

# Portkey integration

### 1. Add Aporia API Key to Portkey

* Inside Portkey, navigate to the "Integrations" page under "Settings".
* Click on the edit button for the Aporia integration and add your API key.

### 2. Add Aporia's Guardrail Check

* Navigate to the "Guardrails" page inside Portkey.
* Search for the "Validate - Project" Guardrail Check and click on `Add`.
* Input your corresponding Aporia Project ID where you are defining the policies.
* Save the check, set any actions you want on the check, and create the Guardrail!

| Check Name | Description | Parameters | Supported Hooks |
| --- | --- | --- | --- |
| Validate - Projects | Runs a project containing policies set in Aporia and returns a `PASS` or `FAIL` verdict | Project ID: `string` | `beforeRequestHooks`, `afterRequestHooks` |

Your Aporia Guardrail is now ready to be added to any Portkey request you'd like!

### 3. Add Guardrail ID to a Config and Make Your Request

* When you save a Guardrail, you'll get an associated Guardrail ID. Add this ID to the `before_request_hooks` or `after_request_hooks` methods in your Portkey Config.
* Save this Config and pass it along with any Portkey request you're making!

Your requests are now guarded by your Aporia policies, and you can see the verdict and any actions taken directly in Portkey logs! More detailed logs for your requests will also be available on your Aporia dashboard.

***

# Customization

Aporia Guardrails is highly customizable, and we continuously add more customization options. Learn how to customize guardrails for your needs.

## Get Started

To begin customizing your project, log into the [Aporia dashboard](https://guardrails.aporia.com), select your project, and click on the **Policies** tab. Here, you can add new policies, customize them, and delete existing ones.

A policy in Aporia is a specific safeguard against a single LLM risk. Examples include RAG hallucinations, Restricted topics, or Prompt Injection. Each policy allows for various customizations, such as adjustable sensitivity levels or topics to restrict.

## Adding a policy

To add a new policy, click **Add policy** to enter the policy catalog. Select the policies you'd like to add and click **Add to project**.

## Editing a policy

Next to the policy you want to edit, select the ellipses (…) menu and click **Edit configuration**.

Overview of the edit configuration page:

1. **Policy Detection Customization:** Use this section to customize the policy detection algorithm (e.g. topics to restrict). The configuration options here depend on the type of policy you are editing.
2. **Action Customization:** Customize the actions taken when a violation is detected.
3. **Sandbox:** Test your policy configurations using the chatbot sandbox. Enable or disable a policy using the **Policy State** toggle.
4. **Save Changes:** Click this button to save and implement your changes.

The [Quickstart](/get-started/quickstart) guide includes an end-to-end example of how to customize a policy.

## Deleting a policy

To delete a policy:

1. [Log into your Aporia Guardrails account.](https://guardrails.aporia.com)
2. Select your project and click on the **Policies** tab.
3. Next to the policy you'd like to remove, select the ellipses (…) and then select **Delete policy** from the menu.

## Custom policies

You can also build your own custom policies by writing a prompt. See the [Custom Policy](/policies/custom-policy) documentation for more information.

# Extractions

Extractions are specific parts of the prompt or response that you define, such as a **question**, **answer**, or **context**. These help Aporia know exactly what to check when running policies on your prompts or responses.

## Why Do You Need to Define Extractions?

Defining extractions ensures that our policies run accurately on the correct parts of your prompts or responses. For example, if we want to detect prompt injection, we need to check the user's question part, not the system prompt. Without this distinction, there could be false positives.

## How and Why Do We Use Extractions?

The logic behind extractions is straightforward. Aporia checks the last message received:

1. If it matches an extraction, we run the policy on this part.
2. If it doesn't match, we move to the previous message, and so on.

Make sure to define **question**, **context**, and **answer** extractions for optimal policy performance.

To give you a sense of how it looks in "real life," here's an example:

### Prompt:

```
You are a tourist guide. Help answer the user's question according to the text book.

Text: Paris, the capital city of France, is renowned for its rich history, iconic landmarks, and vibrant culture.
Known as the "City of Light," Paris is famous for its artistic heritage, with landmarks such as the Eiffel Tower, the Louvre Museum, and Notre-Dame Cathedral. The city is a hub of fashion, cuisine, and art, attracting millions of tourists each year. Paris is also celebrated for its charming neighborhoods, such as Montmartre and Le Marais, and its lively café culture. The Seine River flows through the heart of Paris, adding to the city's picturesque beauty.

User's question: What is the capital of France?
```

### Response:

```
The capital of France is Paris.
```

# Overview

This guide provides an overview and comparison of the different integration methods provided by Aporia Guardrails.

Aporia Guardrails can be integrated into LLM-based applications using two distinct methods: the OpenAI Proxy and Aporia's REST API.

Just getting started and using OpenAI or Azure OpenAI? [Skip this guide and use the OpenAI proxy integration.](/fundamentals/integration/openai-proxy)

## Method 1: OpenAI Proxy

### Overview

In this method, Aporia acts as a proxy, forwarding your requests to OpenAI and simultaneously invoking guardrails. The returned response is either the original from OpenAI or a modified version enforced by Aporia's policies. This is the simplest option to get started with, especially if you use OpenAI or Azure OpenAI.

### Key Features

* **Ease of Setup:** Modify the base URL and add the `X-APORIA-API-KEY` header. In the case of Azure OpenAI, also add the `X-AZURE-OPENAI-ENDPOINT` header.
* **Streaming Support:** Ideal for real-time applications and chatbots, fully supporting streaming.
* **LLM Provider Specific:** Can only be used if the LLM provider is OpenAI or Azure OpenAI.

### Recommended Use

Ideal for those seeking a hassle-free setup with minimal changes, particularly when the LLM provider is OpenAI or Azure OpenAI.
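Concretely, the proxy method changes only the client configuration. A minimal sketch, where all concrete values are placeholders to be replaced with your own (the header names are the ones listed above; the commented-out client call assumes the OpenAI Python SDK is installed):

```python
# Method 1 in a nutshell: keep your OpenAI client code, change two settings.
# All values below are placeholders.
base_url = "https://gr-prd.example-aporia-host/"  # host URL copied from the Aporia dashboard
default_headers = {"X-APORIA-API-KEY": "<your Aporia API key>"}

# Azure OpenAI users also forward their endpoint:
# default_headers["X-AZURE-OPENAI-ENDPOINT"] = "<your Azure OpenAI endpoint>"

# The OpenAI SDK accepts both settings directly:
# from openai import OpenAI
# client = OpenAI(api_key="<openai key>", base_url=base_url,
#                 default_headers=default_headers)
```

Everything else in your application, including the request and response shapes, stays exactly as it was with OpenAI.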
## Method 2: Aporia's REST API

### Overview

This approach involves making explicit calls to Aporia's REST API at two key stages: before sending the prompt to the LLM, to check for prompt-level policy violations (e.g. Prompt Injection), and after receiving the response, to apply response-level guardrails (e.g. RAG Hallucinations).

### Key Features

* **Detailed Feedback:** Returns logs detailing which policies were triggered and what actions were taken.
* **Custom Actions:** Enables the implementation of custom responses or actions instead of using the revised response provided by Aporia, offering flexibility in handling policy violations.
* **LLM Provider Flexibility:** Any LLM is supported with this method (OpenAI, AWS Bedrock, Vertex AI, OSS models, etc.).

### Recommended Use

Suited for developers requiring detailed control over policy enforcement and customization, especially when using LLM providers other than OpenAI or Azure OpenAI.

## Comparison of Methods

* **Simplicity vs. Customizability:** The OpenAI Proxy offers simplicity for OpenAI users, whereas Aporia's REST API offers flexible, detailed control suitable for any LLM provider.
* **Streaming Capabilities:** Present in the OpenAI Proxy and planned for future addition to Aporia's REST API.

If you're just getting started, the OpenAI Proxy is recommended due to its straightforward setup. Developers requiring more control and detailed policy management should consider transitioning to Aporia's REST API later on.

# OpenAI Proxy

## Overview

In this method, Aporia acts as a proxy, forwarding your requests to OpenAI and simultaneously invoking guardrails. The returned response is either the original from OpenAI or a modified version enforced by Aporia's policies.

This integration supports real-time applications through streaming capabilities, making it particularly useful for chatbots. If you're just getting started and your app is based on OpenAI or Azure OpenAI, **this method is highly recommended**.
All you need to do is replace the OpenAI Base URL and add Aporia's API Key header.

## Prerequisites

To use this integration method, ensure you have:

1. [Created an Aporia Guardrails project.](/fundamentals/projects#creating-a-project)

## Integration Guide

### Step 1: Gather Aporia's Base URL and API Key

1. Log into the [Aporia dashboard](https://guardrails.aporia.com).
2. Select your project and click on the **Integration** tab.
3. Under Integration, ensure that **Host URL** is active.
4. Copy the **Host URL**.
5. Click on **"API Keys Table"** to navigate to your keys table.
6. Create a new API key and **save it somewhere safe and accessible**. If you lose this secret key, you'll need to create a new one.

### Step 2: Integrate into Your Code

1. Locate the section in your codebase where you use OpenAI's API.
2. Replace the existing `base_url` in your code with the URL copied from the Aporia dashboard.
3. Add the `X-APORIA-API-KEY` header to your HTTP requests using the `default_headers` parameter provided by OpenAI's SDK.

## Code Example

Here is a basic example of how to configure the OpenAI client to use Aporia's OpenAI Proxy method:

```python Python (OpenAI)
from openai import OpenAI

client = OpenAI(
    api_key='',
    base_url='',
    default_headers={'X-APORIA-API-KEY': ''},
)

chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Hello world",
        }
    ],
    user="",
)
```

```javascript Node.js (OpenAI)
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: "",
  baseURL: "",
  defaultHeaders: {"X-APORIA-API-KEY": ""},
});

async function chat() {
  const completion = await openai.chat.completions.create({
    messages: [{
      role: "system",
      content: "You are a helpful assistant."
    }],
    model: "gpt-3.5-turbo",
    user: "",
  });
}
```

```javascript LangChain.js
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  apiKey: "",
  configuration: {
    baseURL: "",
    defaultHeaders: {"X-APORIA-API-KEY": ""},
  },
  user: "",
});

const response = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);

console.log(response);
```

## Azure OpenAI

To integrate Aporia with Azure OpenAI, use the `X-AZURE-OPENAI-ENDPOINT` header to specify your Azure OpenAI endpoint.

```python Python (Azure OpenAI)
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="/azure",  # Note the /azure!
    azure_deployment="",
    api_version="",
    api_key="",
    default_headers={
        "X-APORIA-API-KEY": "",
        "X-AZURE-OPENAI-ENDPOINT": "",
    },
)

chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Hello world",
        }
    ],
    user="",
)
```

# REST API

## Overview

Aporia’s REST API method involves explicit API calls to enforce guardrails before and after LLM interactions. It is suitable for applications requiring a high level of customization and control over content policy enforcement.

## Prerequisites

Before you begin, ensure you have [created an Aporia Guardrails project](/fundamentals/projects#creating-a-project).

## Integration Guide

### Step 1: Gather Aporia's API Key

1. Log into the [Aporia dashboard](https://guardrails.aporia.com) and select your project.
2. Click on the **Integration** tab.
3. Ensure that **REST API** is activated.
4. Note down the API Key displayed.

### Step 2: Integrate into Your Code

1. Locate where your code makes LLM calls, such as OpenAI API calls.
2. Before sending the prompt to the LLM, and after receiving the LLM's response, incorporate calls to Aporia’s REST API to enforce the respective guardrails.
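The two calls in Step 2 can be sketched as follows. This is an illustrative skeleton, not a drop-in implementation: the validate URL (including its project-specific path segment), the API key, and the `call_llm` function are placeholders you would replace with your own values and LLM call.

```python
import requests

# Placeholders — copy the exact validate URL and API key from the Aporia dashboard.
APORIA_VALIDATE_URL = "https://gr-prd.aporia.com/<project>/validate"  # hypothetical path
APORIA_API_KEY = "your-aporia-api-key"

def build_payload(messages, target, response=None):
    """Assemble the request body for Aporia's /validate endpoint."""
    payload = {"messages": messages, "validation_target": target}
    if response is not None:
        payload["response"] = response  # required when the target includes "response"
    return payload

def validate(messages, target, response=None):
    """Run guardrails on the prompt or on the LLM's response."""
    r = requests.post(
        APORIA_VALIDATE_URL,
        json=build_payload(messages, target, response),
        headers={"X-APORIA-API-KEY": APORIA_API_KEY},
        timeout=5,
    )
    r.raise_for_status()
    return r.json()

def guarded_chat(messages, call_llm):
    """Wrap an LLM call with prompt-level and response-level guardrails."""
    # 1) Prompt-level guardrails, before the LLM call.
    verdict = validate(messages, target="prompt")
    if verdict["action"] == "block":
        return verdict.get("revised_response", "Blocked by guardrails.")
    # 2) Your own LLM call (OpenAI, Bedrock, Vertex AI, ...).
    llm_response = call_llm(messages)
    # 3) Response-level guardrails, after the LLM call.
    verdict = validate(messages, target="response", response=llm_response)
    if verdict["action"] in ("modify", "block", "rephrase"):
        return verdict.get("revised_response", llm_response)
    return llm_response
```

The full request and response schema used by these calls is documented in the next section.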
### API Endpoint and JSON Structure

**Endpoint:** POST `https://gr-prd.aporia.com//validate`

**Headers:**

* `Content-Type`: `application/json`
* `X-APORIA-API-KEY`: Your copied Aporia API key

**Request Fields:**

* `messages`: OpenAI-compatible array of messages. Each message should include `role` and `content`. Possible `role` values are `system`, `user`, `assistant`, or `other` for any unsupported roles.
* `validation_target`: The target of the validation, which can be `prompt`, `response`, or `both`.
* `response`: The raw response from the LLM before any modifications. It is required if `validation_target` includes `response`.
* `explain`: Whether to return detailed explanations for the actions taken by the guardrails.
* `session_id`: An optional session ID to track related interactions across multiple requests.
* `user`: An optional user ID to associate sessions with a specific user and monitor user activity.

**Response Fields:**

* `action`: The action taken by the guardrails; possible values are `modify`, `passthrough`, `block`, and `rephrase`.
* `revised_response`: The revised version of the LLM's response based on the applied guardrails.
* `explain_log`: A detailed log of each policy's application, including the policy ID, target, result, and details of the action taken.
* `policy_execution_result`: The final result of the policy execution, detailing the log of policies applied and the specific actions taken for each.

**Request JSON Example:**

```json
{
  "messages": [
    {
      "role": "user",
      "content": "This is a test prompt"
    }
  ],
  "response": "Response from LLM here",

  // Optional
  // "validation_target": "both",
  // "explain": false,
  // "session_id": "optional-session-id"
  // "user": "optional-user-id"
}
```

**Response JSON Example:**

```json
{
  "action": "modify",
  "revised_response": "Modified response based on policy",
  "explain_log": [
    {
      "policy_id": "001",
      "target": "response",
      "result": "issue_detected",
      "details": { ... }
    },
    ...
  ],
  "policy_execution_result": {
    "policy_log": [
      {
        "policy_id": "001",
        "policy_type": "content_check",
        "target": "response"
      }
    ],
    "action": {
      "type": "modify",
      "revised_message": "Modified response based on policy"
    }
  }
}
```

## Best practices

### Request timeout

Set a timeout of 5 seconds on the HTTP request in case there's any failure on Aporia's side.

If you are using the `fetch` API in JavaScript, you can provide an abort signal using the [AbortController API](https://developer.mozilla.org/en-US/docs/Web/API/AbortController) and trigger it with `setTimeout`. [See this example.](https://dev.to/zsevic/timeout-with-fetch-api-49o3)

If you are using the requests library in Python, you can simply provide a `timeout` argument:

```python
import requests

requests.post(
    "https://gr-prd.aporia.com//validate",
    timeout=5,
    ...
)
```

# Projects overview

To integrate Aporia Guardrails, you need to create a Project, which groups the configurations of multiple policies. Learn how to set up projects with this guide.

To integrate Aporia Guardrails, you need to create a **Project**. A Project groups the configurations of multiple policies.

A policy is a specific safeguard against a single LLM risk. Examples include [RAG hallucinations](/policies/rag-hallucination), [Restricted topics](/policies/restricted-topics), or [Prompt Injection](/policies/prompt-injection). Each policy offers various customization capabilities, such as adjustable sensitivity levels, or topics to restrict.

Each project in Aporia can be connected to one or more LLM applications, *as long as they share the same policies*.

## Creating a project

To create a new project:

1. On the Aporia [dashboard](https://guardrails.aporia.com/), click **Add project**.
2. In the **Project name** field, enter a friendly project name (e.g., *Customer support chatbot*). Alternatively, select one of the suggested names.
3. Optionally, provide a description for your project in the **Description** field.
4.
Optionally, choose an icon and a color for your project.
5. Click **Add**.

## Managing a Project

Each Aporia Guardrails project features a dedicated dashboard to monitor its activity, customize policies, and more.

### Master switch

Each project includes a **master switch** that allows you to toggle all guardrails on or off with a single click.

Notes:

* When the master switch is turned off, the [OpenAI Proxy](/fundamentals/integration/openai-proxy) proxies all requests directly to OpenAI, bypassing any guardrails policy.
* With the master switch turned off, detectors do not operate, meaning you will not see any logs or statistics from the period during which it is off.

### Project overview

The **Overview** tab allows you to monitor the activity of your guardrails policies within this project. You can use the time period dropdown to select the time period you wish to focus on.

If a specific message (e.g., a user's question in a chatbot, or an LLM response) is evaluated by a specific policy (e.g., Prompt Injection), and the policy does not detect an issue, the message is tagged as legitimate. Otherwise, it is tagged as a violation.

You can currently view the following data:

* **Total Messages:** Total number of messages evaluated by the guardrails system. Each message can be either a prompt or a response. This count includes both violations and legitimate messages.
* **Policy Activations:** Total number of policy violations detected by all policies in this project.
* **Actions:** Statistics on the actions taken by the guardrails.
* **Activity:** This chart displays the number of violations (red) versus legitimate messages (green) over time.
* **Violations:** This chart provides a detailed breakdown of the specific violations detected (e.g., restricted topics, hallucinations, etc.).

### Policies

The **Policies** tab allows you to view the policies that are configured for this project.

For each policy, you can see its name (e.g., SQL - Allowed tables), the category the policy is part of (e.g., Security), the action to be taken if a violation is detected (e.g., Override response), and a **State** toggle to turn the policy on or off.

To quickly edit or delete a policy, hover over it to reveal the More Options menu.

## Integrating your LLM app

See [Integration](/fundamentals/integration/integration-overview).

# Streaming support

Aporia Guardrails provides guardrails for both prompt-level and response-level streaming, which is critical for building reliable chatbot experiences.

Aporia Guardrails includes streaming support for completions requested from LLM providers. This feature is particularly crucial for real-time applications, such as chatbots, where immediate responsiveness is essential.

## Understanding Streaming

### What is Streaming?

Typically, when a completion is requested from an LLM provider such as OpenAI, the entire content is generated and then returned to the user in a single response. This can lead to significant delays, resulting in a poor user experience, especially with longer completions.

Streaming mitigates this issue by delivering the completion in parts, enabling the initial parts of the output to be displayed while the remaining content is still being generated.

### Challenges in Streaming + Guardrails

While streaming improves response times, it introduces complexities in content moderation. Streaming partial completions makes it challenging to fully assess the content for issues such as toxicity, prompt injections, and hallucinations. Aporia Guardrails is designed to address these challenges effectively within a streaming context.

## Aporia's Streaming Support

Currently, Aporia supports streaming through the [OpenAI proxy integration](/fundamentals/integration/openai-proxy). Integration via the [REST API](/fundamentals/integration/rest-api) is planned for a future release.
By default, Aporia processes the chunks of partial completions received from OpenAI and executes all policies simultaneously on every chunk, with historical context, without significantly increasing latency or token usage.

You can also set the `X-RESPONSE-CHUNKED: false` HTTP header to wait until the entire response is retrieved, run guardrails, and then simulate a streaming experience for the user.

# Team Management

Learn how to manage team members on Aporia, and how to assign roles to each member with role-based access control (RBAC).

As the organization owner, you have the ability to manage your organization's composition and the roles of its members, controlling the actions they can perform. These role assignments, governed by Role-Based Access Control (RBAC) permissions, define the access level each member has across all projects within the team's scope.

## Adding team members and assigning roles

1. [Log into your Aporia Guardrails account.](https://guardrails.aporia.com)
2. On the sidebar, click **Settings**.
3. Select the **Organizations** tab and go to the **Members** section.
4. Click **Invite Members**.
5. Enter the email address of the person you would like to invite, assign their role, and select the **Send Invite** button. You can invite multiple people at once using the **Add another one** button.
6. You can view all pending invites in the **Pending Invites** tab. Once a member has accepted an invitation to the team, they'll be displayed as a team member with their assigned role.
7. Once a member has been accepted onto the team, you can edit their role using the **Change Role** button located alongside their assigned role in the Members section.

## Delete a member

Organization admins can delete members:

1. [Log into your Aporia Guardrails account.](https://guardrails.aporia.com)
2. On the sidebar, click **Settings**.
3. Select the **Organizations** tab and go to the **Members** section.
4. Next to the name of the person you'd like to remove, select the ellipses (…) and then select **Remove** from the menu.

# Introduction

Aporia Guardrails mitigates LLM hallucinations, inappropriate responses, prompt injection attacks, and other unintended behaviors in **real-time**. Positioned between the LLM (e.g., OpenAI, Bedrock, Mistral) and your application, Aporia enables scaling from a few beta users to millions confidently.

## Setting up

The first step to world-class LLM-based apps is setting up guardrails.

* Try Aporia in a no-code sandbox environment
* Learn why guardrails are a must-have for enterprise-grade LLM apps
* Learn how to quickly integrate Aporia into your LLM-based apps

## Make it yours

Customize Aporia's built-in policies and add new ones to make them perfect for your app.

* Customize Aporia's built-in policies for your needs
* Create a new custom policy from scratch

# Quickstart

Add Aporia Guardrails to your LLM-based app in under 5 minutes by following this quickstart tutorial.

Welcome to Aporia! This guide introduces you to the basics of our platform. Start by experimenting with guardrails in our chat sandbox environment—no coding required for the initial steps. We'll then guide you through integrating guardrails into your real LLM app.

If you don't have an account yet, [book a 20 min call with us](https://www.aporia.com/demo/) to get access.