guardrails, logging, virtual key management, new models
:::info

Get a 7 day free trial for LiteLLM Enterprise here. No call needed.

:::
## New Features
### ✨ Log Guardrail Traces
Track guardrail failure rates and catch when a guardrail is going rogue and failing requests. Start here.
**Traced Guardrail Success**

**Traced Guardrail Failure**
### `/guardrails/list`

`/guardrails/list` allows clients to view the available guardrails and their supported guardrail params.
```shell
curl -X GET 'http://0.0.0.0:4000/guardrails/list'
```
Expected response:

```json
{
  "guardrails": [
    {
      "guardrail_name": "aporia-post-guard",
      "guardrail_info": {
        "params": [
          {
            "name": "toxicity_score",
            "type": "float",
            "description": "Score between 0-1 indicating content toxicity level"
          },
          {
            "name": "pii_detection",
            "type": "boolean"
          }
        ]
      }
    }
  ]
}
```
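As a sketch, a client could parse this response to discover which params each guardrail supports. The snippet below works directly on the sample payload shown above (in practice you would fetch it from the proxy's `/guardrails/list` endpoint):

```python
import json

# Sample /guardrails/list response (copied from the expected response above)
response = json.loads("""
{
  "guardrails": [
    {
      "guardrail_name": "aporia-post-guard",
      "guardrail_info": {
        "params": [
          {"name": "toxicity_score", "type": "float",
           "description": "Score between 0-1 indicating content toxicity level"},
          {"name": "pii_detection", "type": "boolean"}
        ]
      }
    }
  ]
}
""")

# Map each guardrail name to the names of its supported params
supported = {
    g["guardrail_name"]: [p["name"] for p in g["guardrail_info"]["params"]]
    for g in response["guardrails"]
}
print(supported)  # {'aporia-post-guard': ['toxicity_score', 'pii_detection']}
```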
### ✨ Guardrails with Mock LLM
Send `mock_response` to test guardrails without making an LLM call. More info on `mock_response` here.
```shell
curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi my email is ishaan@berri.ai"}
    ],
    "mock_response": "This is a mock response",
    "guardrails": ["aporia-pre-guard", "aporia-post-guard"]
  }'
```
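The same request body can be built from Python and POSTed to the proxy's `/v1/chat/completions` endpoint. This is a minimal sketch of the payload construction only (the URL and key are the placeholders from the curl example; the actual HTTP call is left as a comment so no server is needed):

```python
import json

# Build the same request body as the curl example above
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hi my email is ishaan@berri.ai"}],
    # mock_response skips the real LLM call; guardrails still run
    "mock_response": "This is a mock response",
    "guardrails": ["aporia-pre-guard", "aporia-post-guard"],
}
body = json.dumps(payload)
# POST `body` to http://localhost:4000/v1/chat/completions with
# Content-Type: application/json and an Authorization: Bearer header,
# e.g. via requests.post or httpx.post (not executed here)
print(json.loads(body)["guardrails"])
```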
### Assign Keys to Users
You can now assign keys to users via the Proxy UI.
## New Models
- `openrouter/openai/o1`
- `vertex_ai/mistral-large@2411`
## Fixes
- Fix `vertex_ai/` mistral model pricing: https://github.com/BerriAI/litellm/pull/7345
- Missing `model_group` field in logs for `aspeech` call types: https://github.com/BerriAI/litellm/pull/7392