
EnkryptAI Guardrails

LiteLLM supports EnkryptAI guardrails for content moderation and safety checks on LLM inputs and outputs.

Quick Start

1. Define Guardrails on your LiteLLM config.yaml

Define your guardrails under the guardrails section:

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY

guardrails:
  - guardrail_name: "enkryptai-guard"
    litellm_params:
      guardrail: enkryptai
      mode: "pre_call"
      api_key: os.environ/ENKRYPTAI_API_KEY
      detectors:
        toxicity:
          enabled: true
        nsfw:
          enabled: true
        pii:
          enabled: true
          entities: ["email", "phone", "secrets"]
        injection_attack:
          enabled: true

Supported values for mode

  • pre_call - Run before the LLM call, on input
  • post_call - Run after the LLM call, on output
  • during_call - Run during the LLM call, on input. Same checks as pre_call, but run in parallel with the LLM call
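
For example, to run the same checks in parallel with the LLM call, only the mode field changes from the quick-start config (a minimal sketch):

guardrails:
  - guardrail_name: "enkryptai-guard-parallel"
    litellm_params:
      guardrail: enkryptai
      mode: "during_call"  # checks run concurrently with the LLM call
      api_key: os.environ/ENKRYPTAI_API_KEY
      detectors:
        toxicity:
          enabled: true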

Available Detectors

EnkryptAI supports multiple content detection types (a combined configuration sketch follows the list):

  • toxicity - Detect toxic language
  • nsfw - Detect NSFW (Not Safe For Work) content
  • pii - Detect personally identifiable information
    • Configure entities: ["pii", "email", "phone", "secrets", "ip_address", "url"]
  • injection_attack - Detect prompt injection attempts
  • keyword_detector - Detect custom keywords/phrases
  • policy_violation - Detect policy violations
  • bias - Detect biased content
  • sponge_attack - Detect sponge attacks
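
As a sketch, several of these detectors can be enabled side by side. Only the enabled flag and the pii entities list are documented here; any detector-specific options beyond these (for example, keyword lists for keyword_detector) are not covered on this page and are omitted:

detectors:
  toxicity:
    enabled: true
  bias:
    enabled: true
  pii:
    enabled: true
    entities: ["email", "phone", "ip_address", "url"]
  keyword_detector:
    enabled: true
  policy_violation:
    enabled: true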

2. Set Environment Variables

export ENKRYPTAI_API_KEY="your-api-key"

3. Start LiteLLM Gateway

litellm --config config.yaml --detailed_debug

4. Test Request


curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "Hello, how can you help me today?"}
    ],
    "guardrails": ["enkryptai-guard"]
  }'

Response: HTTP 200 Success

Content passes all detector checks and is allowed through.
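
The same request can be sent through the OpenAI SDK pointed at the gateway; a minimal sketch, assuming the standard LiteLLM proxy pattern of passing guardrails via extra_body:

from openai import OpenAI

# Point the OpenAI SDK at the LiteLLM gateway
client = OpenAI(
    base_url="http://localhost:4000",
    api_key="sk-1234",  # your LiteLLM key
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how can you help me today?"}],
    # Non-standard params are forwarded to LiteLLM via extra_body
    extra_body={"guardrails": ["enkryptai-guard"]},
)
print(response.choices[0].message.content)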


Advanced Configuration

Using Custom Policies

You can specify a custom EnkryptAI policy:

guardrails:
  - guardrail_name: "enkryptai-custom"
    litellm_params:
      guardrail: enkryptai
      mode: "pre_call"
      api_key: os.environ/ENKRYPTAI_API_KEY
      policy_name: "my-custom-policy"  # Sent via x-enkrypt-policy header
      detectors:
        toxicity:
          enabled: true

Using Deployments

Specify an EnkryptAI deployment:

guardrails:
  - guardrail_name: "enkryptai-deployment"
    litellm_params:
      guardrail: enkryptai
      mode: "pre_call"
      api_key: os.environ/ENKRYPTAI_API_KEY
      deployment_name: "production"  # Sent via X-Enkrypt-Deployment header
      detectors:
        toxicity:
          enabled: true

Monitor Mode (Logging Without Blocking)

Set block_on_violation: false to log violations without blocking requests:

guardrails:
  - guardrail_name: "enkryptai-monitor"
    litellm_params:
      guardrail: enkryptai
      mode: "pre_call"
      api_key: os.environ/ENKRYPTAI_API_KEY
      block_on_violation: false  # Log violations but don't block
      detectors:
        toxicity:
          enabled: true
        nsfw:
          enabled: true

In monitor mode, all violations are logged but requests are never blocked.

Input and Output Guardrails

Configure separate guardrails for input and output:

guardrails:
  # Input guardrail
  - guardrail_name: "enkryptai-input"
    litellm_params:
      guardrail: enkryptai
      mode: "pre_call"
      api_key: os.environ/ENKRYPTAI_API_KEY
      detectors:
        pii:
          enabled: true
          entities: ["email", "phone", "ssn"]
        injection_attack:
          enabled: true

  # Output guardrail
  - guardrail_name: "enkryptai-output"
    litellm_params:
      guardrail: enkryptai
      mode: "post_call"
      api_key: os.environ/ENKRYPTAI_API_KEY
      detectors:
        toxicity:
          enabled: true
        nsfw:
          enabled: true
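
A request can then reference both guardrails by name, following the same per-request pattern as the quick-start curl example:

curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "My email is jane@example.com"}
    ],
    "guardrails": ["enkryptai-input", "enkryptai-output"]
  }'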

Configuration Options

| Parameter | Type | Description | Default |
|---|---|---|---|
| api_key | string | EnkryptAI API key | ENKRYPTAI_API_KEY env var |
| api_base | string | EnkryptAI API base URL | https://api.enkryptai.com |
| policy_name | string | Custom policy name (sent via x-enkrypt-policy header) | None |
| deployment_name | string | Deployment name (sent via X-Enkrypt-Deployment header) | None |
| detectors | object | Detector configuration | {} |
| block_on_violation | boolean | Block requests on violations | true |
| mode | string | When to run: pre_call, post_call, or during_call | Required |
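
Putting the options together, a configuration using every parameter from the table might look like the following sketch (all values are illustrative):

guardrails:
  - guardrail_name: "enkryptai-full"
    litellm_params:
      guardrail: enkryptai
      mode: "pre_call"
      api_key: os.environ/ENKRYPTAI_API_KEY
      api_base: "https://api.enkryptai.com"
      policy_name: "my-custom-policy"
      deployment_name: "production"
      block_on_violation: true
      detectors:
        toxicity:
          enabled: true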

Observability

EnkryptAI guardrail logs include:

  • guardrail_status: success, guardrail_intervened, or guardrail_failed_to_respond
  • guardrail_provider: enkryptai
  • guardrail_json_response: Full API response with detection details
  • duration: Time taken for guardrail check
  • start_time and end_time: Timestamps

These logs are available through your configured LiteLLM logging callbacks.
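
As an illustration only (the field values and exact shape below are placeholders, not actual EnkryptAI output), a logged guardrail entry carries these fields in roughly this form:

{
  "guardrail_status": "success",
  "guardrail_provider": "enkryptai",
  "guardrail_json_response": { ... },
  "duration": 0.42,
  "start_time": "2024-01-01T00:00:00Z",
  "end_time": "2024-01-01T00:00:00.420Z"
}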

Error Handling

The guardrail handles errors gracefully:

  • API Failures: Logs error and raises exception
  • Rate Limits (429): Logs error and raises exception
  • Invalid Configuration: Raises ValueError on initialization

Set block_on_violation: false to continue processing even when violations are detected (monitor mode).

Support

For more information, refer to the EnkryptAI documentation.