🗃️ Aim Security
Quick Start
🗃️ Aporia
Use Aporia to detect PII in requests and profanity in responses
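Guardrails configured on the proxy can also be attached per request. Below is a minimal sketch, assuming a guardrail named "aporia-pre-guard" in your proxy config and a proxy running at http://localhost:4000; the guardrail name, URL, and key are placeholders:

```python
# Minimal sketch: run a named, pre-configured guardrail for one request
# through the LiteLLM proxy. "aporia-pre-guard", the base_url, and the
# api_key are placeholders for your own setup.
from openai import OpenAI

client = OpenAI(
    api_key="sk-1234",                 # your LiteLLM proxy key
    base_url="http://localhost:4000",  # your LiteLLM proxy URL
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hi, my phone number is 555-0100"}],
    # LiteLLM-specific body field selecting which configured guardrails to run
    extra_body={"guardrails": ["aporia-pre-guard"]},
)
print(response.choices[0].message.content)
```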
🗃️ Azure Content Safety Guardrail
LiteLLM supports Azure Content Safety guardrails via the Azure Content Safety API.
🗃️ Bedrock Guardrails
If you haven't set up or authenticated your Bedrock provider yet, see the Bedrock Provider Setup & Authentication Guide.
🗃️ CrowdStrike AIDR
The CrowdStrike AIDR guardrail uses configurable detection policies to identify and mitigate risks in AI application traffic.
🗃️ Custom Code Guardrail
Write custom guardrail logic using Python-like code that runs in a sandboxed environment.
🗃️ Custom Guardrail
Use this if you want to write your own code to run a custom guardrail.
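As a rough sketch of what that looks like (the CustomGuardrail base class and the async_pre_call_hook hook name follow LiteLLM's custom-guardrail docs; treat the exact import paths and signatures as version-dependent assumptions):

```python
# Sketch of a custom guardrail that blocks requests containing a banned word.
# Class and hook names follow LiteLLM's custom-guardrail docs; exact import
# paths and signatures may differ by version.
from typing import Optional, Union

from litellm.caching import DualCache
from litellm.integrations.custom_guardrail import CustomGuardrail
from litellm.proxy._types import UserAPIKeyAuth


class MyBannedWordGuardrail(CustomGuardrail):
    async def async_pre_call_hook(
        self,
        user_api_key_dict: UserAPIKeyAuth,
        cache: DualCache,
        data: dict,
        call_type: str,
    ) -> Optional[Union[Exception, str, dict]]:
        # Runs before the request reaches the LLM; raising here rejects it.
        for message in data.get("messages", []):
            content = message.get("content")
            if isinstance(content, str) and "forbidden" in content.lower():
                raise ValueError("Request blocked by MyBannedWordGuardrail")
        return data  # return the (optionally modified) request payload
```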
🗃️ DynamoAI Guardrails
LiteLLM supports DynamoAI guardrails for content moderation and policy enforcement on LLM inputs and outputs.
🗃️ EnkryptAI Guardrails
LiteLLM supports EnkryptAI guardrails for content moderation and safety checks on LLM inputs and outputs.
🗃️ Gray Swan Cygnal Guardrail
Use Gray Swan Cygnal to continuously monitor conversations for policy violations, indirect prompt injection (IPI), jailbreak attempts, and other safety risks.
🗃️ Guardrails AI
Use Guardrails AI (guardrailsai.com) to add checks to LLM output.
🗃️ HiddenLayer Guardrails
LiteLLM ships with a native integration for HiddenLayer. The proxy sends every request/response to HiddenLayer's /detection/v1/interactions endpoint so you can block or redact unsafe content before it reaches your users.
🗃️ IBM Guardrails
LiteLLM works with IBM's FMS Guardrails for content safety. You can use it to detect jailbreaks, PII, hate speech, and more.
🗃️ Javelin Guardrails
Javelin provides AI safety and content moderation services with support for prompt injection detection, trust & safety violations, and language detection.
🗃️ Lakera AI
Supported endpoints: The Lakera v2 integration only supports the chat completions endpoint (/v1/chat/completions). It is not supported for the Responses API, /v1/messages, MCP, A2A, or other proxy endpoints.
🗃️ Lasso Security
Use Lasso Security to protect your LLM applications from prompt injection attacks, harmful content generation, and other security threats through comprehensive input and output validation.
🗃️ Google Cloud Model Armor
LiteLLM supports Google Cloud Model Armor guardrails via the Model Armor API.
🗃️ Noma Security
Use Noma Security to protect your LLM applications with comprehensive AI content moderation and safety guardrails.
🗃️ Onyx Security
Quick Start
🗃️ OpenAI Moderation
Overview
🗃️ Pangea
The Pangea guardrail uses configurable detection policies (called recipes) from its AI Guard service to identify and mitigate risks in AI application traffic, including:
🗃️ PANW Prisma AIRS
LiteLLM supports PANW Prisma AIRS (AI Runtime Security) guardrails via the Prisma AIRS Scan API. This integration provides Security-as-Code for AI applications using Palo Alto Networks' AI security platform.
🗃️ PII, PHI Masking - Presidio
Overview
🗃️ Pillar Security
Pillar Security integrates with LiteLLM Proxy via the Generic Guardrail API, providing comprehensive AI security scanning for your LLM applications.
🗃️ In-memory Prompt Injection Detection
LiteLLM supports the following methods for detecting prompt injection attacks:
🗃️ Qualifire
Use Qualifire to evaluate LLM outputs for quality, safety, and reliability. Detect prompt injections, hallucinations, PII, harmful content, and validate that your AI follows instructions.
🗃️ ✨ Secret Detection/Redaction (Enterprise-only)
Use this to REDACT API keys and secrets sent in requests to an LLM.
🗃️ LiteLLM Tool Permission Guardrail
The LiteLLM Tool Permission Guardrail lets you control which tools a model is allowed to invoke, using configurable allow/deny rules. This offers fine-grained, provider-agnostic control over tool execution (e.g., OpenAI Chat Completions `tool_calls`, Anthropic Messages `tool_use`, MCP tools).
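To make the rule semantics concrete, here is a hypothetical illustration; the glob-style rule format and the deny-over-allow precedence below are assumptions for exposition, not LiteLLM's actual configuration schema:

```python
# Hypothetical illustration of allow/deny tool-permission semantics. The rule
# format and precedence are assumptions for exposition only.
import fnmatch

ALLOW_RULES = ["get_*", "search_docs"]  # tools the model may invoke
DENY_RULES = ["get_secrets"]            # denied even if an allow rule matches


def is_tool_allowed(tool_name: str) -> bool:
    """Deny rules take precedence; otherwise the tool must match an allow rule."""
    if any(fnmatch.fnmatch(tool_name, rule) for rule in DENY_RULES):
        return False
    return any(fnmatch.fnmatch(tool_name, rule) for rule in ALLOW_RULES)


assert is_tool_allowed("get_weather")      # matches the get_* allow rule
assert not is_tool_allowed("get_secrets")  # deny wins over the get_* allow
assert not is_tool_allowed("delete_user")  # no allow rule matches
```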
🗃️ Zscaler AI Guard
Overview