
Pillar Security

Use Pillar Security for comprehensive LLM security including:

  • Prompt Injection Protection: Prevent malicious prompt manipulation
  • Jailbreak Detection: Detect attempts to bypass AI safety measures
  • PII Detection & Monitoring: Automatically detect sensitive information
  • Secret Detection: Identify API keys, tokens, and credentials
  • Content Moderation: Filter harmful or inappropriate content
  • Toxic Language: Filter offensive or harmful language

Quick Start​

1. Get API Key​

  1. Sign up for a Pillar Security account at the Pillar Dashboard
  2. Get your API key from the dashboard
  3. Set your API key as an environment variable:
    export PILLAR_API_KEY="your_api_key_here"
    export PILLAR_API_BASE="https://api.pillar.security" # Optional; this is the default

2. Configure LiteLLM Proxy​

Add Pillar Security to your config.yaml:

🌟 Recommended Configuration:

model_list:
  - model_name: gpt-4.1-mini
    litellm_params:
      model: openai/gpt-4.1-mini
      api_key: os.environ/OPENAI_API_KEY

guardrails:
  - guardrail_name: "pillar-monitor-everything"  # rename as you like
    litellm_params:
      guardrail: pillar
      mode: [pre_call, post_call]   # Monitor both input and output
      api_key: os.environ/PILLAR_API_KEY    # Your Pillar API key
      api_base: os.environ/PILLAR_API_BASE  # Pillar API endpoint
      on_flagged_action: "monitor"  # Log threats but allow requests
      fallback_on_error: "allow"    # Gracefully degrade if Pillar is down (default)
      timeout: 5.0                  # Timeout for Pillar API calls in seconds (default)
      persist_session: true         # Keep conversations visible in Pillar dashboard
      async_mode: false             # Request synchronous verdicts
      include_scanners: true        # Return scanner category breakdown
      include_evidence: true        # Include detailed findings for triage
      default_on: true              # Enable for all requests

general_settings:
  master_key: "your-secure-master-key-here"

litellm_settings:
  set_verbose: true  # Enable detailed logging

Note: Virtual key context is automatically passed as headers - no additional configuration needed!

3. Start the Proxy​

litellm --config config.yaml --port 4000

Guardrail Modes​

Overview​

Pillar Security supports three execution modes for comprehensive protection:

Mode        | When It Runs              | What It Protects           | Use Case
pre_call    | Before LLM call           | User input only            | Block malicious prompts, prevent prompt injection
during_call | In parallel with LLM call | User input only            | Input monitoring with lower latency
post_call   | After LLM response        | Full conversation context  | Output filtering, PII detection in responses
  • βœ… Complete Protection: Guards both incoming prompts and outgoing responses
  • βœ… Prompt Injection Defense: Blocks malicious input before reaching the LLM
  • βœ… Response Monitoring: Detects PII, secrets, or inappropriate content in outputs
  • βœ… Full Context Analysis: Pillar sees the complete conversation for better detection

Alternative Configurations​

Input-only protection (pre_call) is best for:

  • πŸ›‘οΈ Input Protection: Block malicious prompts before they reach the LLM
  • ⚑ Simple Setup: Single guardrail configuration
  • 🚫 Immediate Blocking: Stop threats at the input stage
model_list:
  - model_name: gpt-4.1-mini
    litellm_params:
      model: openai/gpt-4.1-mini
      api_key: os.environ/OPENAI_API_KEY

guardrails:
  - guardrail_name: "pillar-input-only"
    litellm_params:
      guardrail: pillar
      mode: "pre_call"              # Input scanning only
      api_key: os.environ/PILLAR_API_KEY    # Your Pillar API key
      api_base: os.environ/PILLAR_API_BASE  # Pillar API endpoint
      on_flagged_action: "block"    # Block malicious requests
      persist_session: true         # Keep records for investigation
      async_mode: false             # Require an immediate verdict
      include_scanners: true        # Understand which rule triggered
      include_evidence: true        # Capture concrete evidence
      default_on: true              # Enable for all requests

general_settings:
  master_key: "YOUR_LITELLM_PROXY_MASTER_KEY"

litellm_settings:
  set_verbose: true

Configuration Reference​

Environment Variables​

You can configure Pillar Security using environment variables:

export PILLAR_API_KEY="your_api_key_here"
export PILLAR_API_BASE="https://api.pillar.security"
export PILLAR_ON_FLAGGED_ACTION="monitor"
export PILLAR_FALLBACK_ON_ERROR="allow"
export PILLAR_TIMEOUT="5.0"

Session Tracking​

Pillar supports comprehensive session tracking using LiteLLM's metadata system:

curl -X POST "http://localhost:4000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-key" \
  -d '{
    "model": "gpt-4.1-mini",
    "messages": [...],
    "user": "user-123",
    "metadata": {
      "pillar_session_id": "conversation-456"
    }
  }'

This provides clear, explicit conversation tracking that works seamlessly with LiteLLM's session management. When using monitor mode, the session ID is returned in the x-pillar-session-id response header for easy correlation and tracking.
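As a minimal sketch, a small helper can attach the session metadata before sending the request through the proxy. The helper name `build_chat_request` is illustrative, not part of LiteLLM; only the `metadata.pillar_session_id` and `user` fields come from the documentation above.

```python
# Sketch: build a /v1/chat/completions payload that pins a Pillar session.
# `build_chat_request` is an illustrative helper, not a LiteLLM API.

def build_chat_request(messages, user_id, session_id, model="gpt-4.1-mini"):
    """Attach LiteLLM metadata so Pillar groups turns into one session."""
    return {
        "model": model,
        "messages": messages,
        "user": user_id,
        "metadata": {"pillar_session_id": session_id},
    }

payload = build_chat_request(
    [{"role": "user", "content": "Hello!"}],
    user_id="user-123",
    session_id="conversation-456",
)
```

Send `payload` with any HTTP client (e.g. `requests.post(url, json=payload, headers={"Authorization": "Bearer your-key"})`); in monitor mode you can then read `response.headers.get("x-pillar-session-id")` to confirm the correlation.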

Actions on Flagged Content​

Block​

Raises an exception and prevents the request from reaching the LLM:

on_flagged_action: "block"

Monitor (Default)​

Logs the violation but allows the request to proceed:

on_flagged_action: "monitor"

Response Headers:

You can opt in to receiving detection details in response headers by configuring include_scanners: true and/or include_evidence: true. When enabled, these headers are included for every requestβ€”not just flagged onesβ€”enabling comprehensive metrics, false positive analysis, and threat investigation.

  • x-pillar-flagged: Boolean string indicating Pillar's blocking recommendation ("true" or "false")
  • x-pillar-scanners: URL-encoded JSON object showing scanner categories (e.g., %7B%22jailbreak%22%3Atrue%7D) β€” requires include_scanners: true
  • x-pillar-evidence: URL-encoded JSON array of detection evidence (may contain items even when flagged is false) β€” requires include_evidence: true
  • x-pillar-session-id: URL-encoded session ID for correlation and investigation

Understanding flagged vs Scanner Results

The flagged field is Pillar's policy-level blocking recommendation, which may differ from individual scanner results:

  • flagged: true β†’ Pillar recommends blocking based on your configured policies
  • flagged: false β†’ Pillar does not recommend blocking, but individual scanners may still detect content

For example, the toxic_language scanner might detect profanity (scanners.toxic_language: true) while flagged remains false if your Pillar policy doesn't block on toxic language alone. This allows you to:

  • Monitor threats without blocking users
  • Build metrics on detection rates vs block rates
  • Analyze false positive rates by comparing scanner results to user feedback
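The flagged-vs-scanners distinction can be sketched as a small triage helper. This is an illustrative pattern (the `triage` function and its return labels are not part of LiteLLM or Pillar); the header names and encoding come from the documentation above.

```python
from urllib.parse import unquote
import json

def triage(headers):
    """Classify a response from Pillar's headers: 'block' when the policy
    flags it, 'review' when a scanner fired without a policy flag, else
    'clean'. Illustrative helper, not a Pillar/LiteLLM API."""
    flagged = headers.get("x-pillar-flagged", "false") == "true"
    scanners = json.loads(unquote(headers.get("x-pillar-scanners", "%7B%7D")))
    if flagged:
        return "block"
    if any(scanners.values()):
        return "review"
    return "clean"

# Scanner detected toxicity, but policy did not flag -> worth human review
print(triage({
    "x-pillar-flagged": "false",
    "x-pillar-scanners": "%7B%22toxic_language%22%3Atrue%7D",
}))  # prints: review
```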

The x-pillar-scanners, x-pillar-evidence, and x-pillar-session-id headers use URL encoding (percent-encoding) to convert JSON data into an ASCII-safe format. This is necessary because HTTP headers only support ISO-8859-1 characters and cannot contain raw JSON special characters ({, ", :) or Unicode text. To read these headers, first URL-decode the value, then parse it as JSON.

LiteLLM truncates the x-pillar-evidence header to a maximum of 8 KB per header to avoid proxy limits. Note that most proxies and servers also enforce a total header size limit of approximately 32 KB across all headers combined. When truncation occurs, each affected evidence item includes an "evidence_truncated": true flag and the metadata contains pillar_evidence_truncated: true.
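Before relying on header evidence for audit or alerting, it is worth checking both truncation signals described above. A minimal sketch (the helper name is illustrative; the flag names come from the documentation):

```python
import json
from urllib.parse import unquote

def full_evidence_available(headers, metadata):
    """Return False if any evidence was truncated to fit the 8 KB header
    budget; in that case, fall back to logs for full details.
    Illustrative helper, not a LiteLLM API."""
    if metadata.get("pillar_evidence_truncated"):
        return False
    evidence = json.loads(unquote(headers.get("x-pillar-evidence", "%5B%5D")))
    return not any(item.get("evidence_truncated") for item in evidence)
```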

Example Response Headers (URL-encoded):

x-pillar-flagged: true
x-pillar-session-id: abc-123-def-456
x-pillar-scanners: %7B%22jailbreak%22%3Atrue%2C%22prompt_injection%22%3Afalse%2C%22toxic_language%22%3Afalse%7D
x-pillar-evidence: %5B%7B%22category%22%3A%22prompt_injection%22%2C%22evidence%22%3A%22Ignore%20previous%20instructions%22%7D%5D

After Decoding:

// x-pillar-scanners
{"jailbreak": true, "prompt_injection": false, "toxic_language": false}

// x-pillar-evidence
[{"category": "prompt_injection", "evidence": "Ignore previous instructions"}]

Decoding Example (Python):

from urllib.parse import unquote
import json

# Step 1: URL-decode the header value (converts %7B to {, %22 to ", etc.)
# Step 2: Parse the resulting JSON string
scanners = json.loads(unquote(response.headers["x-pillar-scanners"]))
evidence = json.loads(unquote(response.headers["x-pillar-evidence"]))

# Session ID is a plain string, so only URL-decode is needed (no JSON parsing)
session_id = unquote(response.headers["x-pillar-session-id"])
Tip: LiteLLM mirrors the encoded values onto metadata["pillar_response_headers"] so you can inspect exactly what was returned. When truncation occurs, it sets metadata["pillar_evidence_truncated"] to true and marks affected evidence items with "evidence_truncated": true. Evidence text is shortened with a ...[truncated] suffix, and entire evidence entries may be removed if necessary to stay under the 8 KB header limit. Check these flags to determine if full evidence details are available in your logs.

This allows your application to:

  • Track threats without blocking legitimate users
  • Implement custom handling logic based on threat types
  • Build analytics and alerting on security events
  • Correlate threats across requests using session IDs

Resilience and Error Handling​

Graceful Degradation (fallback_on_error)​

Control what happens when the Pillar API is unavailable (network errors, timeouts, service outages):

fallback_on_error: "allow"  # Default - recommended for production resilience

Available Options:

  • allow (Default - Recommended): Proceed without scanning when Pillar is unavailable

    • No service interruption if Pillar is down
    • Best for production where availability is critical
    • Security scans are skipped during outages (logged as warnings)
    guardrails:
      - guardrail_name: "pillar-resilient"
        litellm_params:
          guardrail: pillar
          fallback_on_error: "allow"  # Graceful degradation
  • block: Reject all requests when Pillar is unavailable

    • Fail-secure approach - no request proceeds without scanning
    • Service interruption during Pillar outages
    • Returns 503 Service Unavailable error
    guardrails:
      - guardrail_name: "pillar-fail-secure"
        litellm_params:
          guardrail: pillar
          fallback_on_error: "block"  # Fail secure
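The two options above boil down to one decision made when the Pillar API call itself fails. A sketch of that logic (these names are illustrative, not the actual LiteLLM internals):

```python
# Sketch of the fallback decision when the Pillar API is unreachable
# (timeout, network error). Illustrative names, not LiteLLM internals.

class GuardrailUnavailableError(Exception):
    """Surfaced as a 503 Service Unavailable when failing secure."""

def handle_pillar_outage(fallback_on_error: str) -> bool:
    """Return True to let the request continue unscanned ('allow');
    raise to reject the request ('block')."""
    if fallback_on_error == "allow":
        # Logged as a warning; the request proceeds without scanning
        return True
    raise GuardrailUnavailableError("Pillar unreachable; failing secure")
```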

Timeout Configuration​

Configure how long to wait for Pillar API responses:

Example Configurations:

# Production: default - fast with graceful degradation
guardrails:
  - guardrail_name: "pillar-production"
    litellm_params:
      guardrail: pillar
      timeout: 5.0                # Default - fast failure detection
      fallback_on_error: "allow"  # Graceful degradation

Environment Variables:

export PILLAR_FALLBACK_ON_ERROR="allow"
export PILLAR_TIMEOUT="5.0"

Advanced Configuration​

Quick takeaways

  • Every request still runs all Pillar scanners; these options only change what comes back.
  • Choose richer responses when you need audit trails, lighter responses when latency or cost matters.
  • Blocking is controlled by LiteLLM’s on_flagged_action configurationβ€”Pillar headers do not change block/monitor behaviour.

Pillar Security executes the full scanner suite on each call. The settings below tune the Protect response headers LiteLLM sends, letting you balance fidelity, retention, and latency.

Response Control​

Data Retention (persist_session)​

persist_session: false  # Default: true
  • Why: Controls whether Pillar stores session data for dashboard visibility.
  • Set false for: Ephemeral testing, privacy-sensitive interactions.
  • Set true for: Production monitoring, compliance, historical review (default behaviour).
  • Impact: false means the conversation will not appear in the Pillar dashboard.

Response Detail Level​

The following toggles grow the payload size without changing detection behaviour.

include_scanners: true  # → plr_scanners (default true in LiteLLM)
include_evidence: true  # → plr_evidence (default true in LiteLLM)
  • Minimal response (include_scanners=false, include_evidence=false)

    {
      "session_id": "abc-123",
      "flagged": true
    }

    Use when you only care about whether Pillar detected a threat.

    πŸ“ Note: flagged: true means Pillar’s scanners recommend blocking. Pillar only reports this verdictβ€”LiteLLM enforces your policy via the on_flagged_action configuration (no Pillar header controls it):

    • on_flagged_action: "block" β†’ LiteLLM raises a 400 guardrail error
    • on_flagged_action: "monitor" β†’ LiteLLM logs the threat but still returns the LLM response
  • Scanner breakdown (include_scanners=true)

    {
      "session_id": "abc-123",
      "flagged": true,
      "scanners": {
        "jailbreak": true,
        "prompt_injection": false,
        "pii": false,
        "secret": false,
        "toxic_language": false
        /* ... more categories ... */
      }
    }

    Use when you need to know which categories triggered.

  • Full context (both toggles true)

    {
      "session_id": "abc-123",
      "flagged": true,
      "scanners": { /* ... */ },
      "evidence": [
        {
          "category": "jailbreak",
          "type": "prompt_injection",
          "evidence": "Ignore previous instructions",
          "metadata": { "start_idx": 0, "end_idx": 28 }
        }
      ]
    }

    Ideal for debugging, audit logs, or compliance exports.

Processing Mode (async_mode)​

async_mode: true  # Default: false
  • Why: Queue the request for background processing instead of waiting for a synchronous verdict.
  • Response shape:
    {
      "status": "queued",
      "session_id": "abc-123",
      "position": 1
    }
  • Set true for: Large batch jobs, latency-tolerant pipelines.
  • Set false for: Real-time user flows (default).
  • ⚠️ Note: Async mode returns only a 202 queue acknowledgment (no flagged verdict). LiteLLM treats that as β€œno block,” so the pre-call hook always allows the request. Use async mode only for post-call or monitor-only workflows where delayed review is acceptable.
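The warning above can be made concrete with a small response interpreter. This is an illustrative sketch (the `pillar_verdict` helper is not a real API); the queued response shape and the "no verdict" behavior come from the documentation above.

```python
def pillar_verdict(resp: dict) -> str:
    """Interpret a Pillar Protect response. In async mode Pillar returns
    only a queue acknowledgment, so there is no verdict to enforce
    pre-call. Illustrative helper, not a Pillar/LiteLLM API."""
    if resp.get("status") == "queued":
        return "no-verdict"  # 202 ack; LiteLLM treats this as "no block"
    return "flagged" if resp.get("flagged") else "clean"

print(pillar_verdict({"status": "queued", "session_id": "abc-123", "position": 1}))
# prints: no-verdict
```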

Complete Examples​

guardrails:
  # Production: full fidelity & dashboard visibility
  - guardrail_name: "pillar-production"
    litellm_params:
      guardrail: pillar
      mode: [pre_call, post_call]
      persist_session: true
      include_scanners: true
      include_evidence: true
      on_flagged_action: "block"

  # Testing: lightweight, no persistence
  - guardrail_name: "pillar-testing"
    litellm_params:
      guardrail: pillar
      mode: pre_call
      persist_session: false
      include_scanners: false
      include_evidence: false
      on_flagged_action: "monitor"

Keep in mind that LiteLLM forwards these values as the documented plr_* headers, so any direct HTTP integrations outside the proxy can reuse the same guidance.

Examples​

Safe request

# Test with safe content
curl -X POST "http://localhost:4000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_LITELLM_PROXY_MASTER_KEY" \
  -d '{
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "Hello! Can you tell me a joke?"}],
    "max_tokens": 100
  }'

Expected response (Allowed):

{
  "id": "chatcmpl-BvQhm0VZpiDSEbrssSzO7GLHgHCkW",
  "object": "chat.completion",
  "created": 1753027050,
  "model": "gpt-4.1-mini-2025-04-14",
  "system_fingerprint": null,
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Sure! Here's a joke for you:\n\nWhy don't scientists trust atoms? \nBecause they make up everything!",
        "tool_calls": null,
        "function_call": null,
        "annotations": []
      },
      "provider_specific_fields": {}
    }
  ],
  "usage": {
    "completion_tokens": 22,
    "prompt_tokens": 16,
    "total_tokens": 38,
    "completion_tokens_details": {
      "accepted_prediction_tokens": 0,
      "audio_tokens": 0,
      "reasoning_tokens": 0,
      "rejected_prediction_tokens": 0
    },
    "prompt_tokens_details": {
      "audio_tokens": 0,
      "cached_tokens": 0,
      "text_tokens": null,
      "image_tokens": null
    }
  },
  "service_tier": "default"
}

Support​

Feel free to contact us at support@pillar.security

πŸ“š Resources​