
v1.56.3

· 2 min read

guardrails, logging, virtual key management, new models

info

Get a 7-day free trial for LiteLLM Enterprise here. No call needed.

New Features

✨ Log Guardrail Traces

Track guardrail failure rates and spot when a guardrail is going rogue and failing requests. Start here

Traced Guardrail Success

Traced Guardrail Failure

/guardrails/list

/guardrails/list allows clients to view the available guardrails and their supported parameters

curl -X GET 'http://0.0.0.0:4000/guardrails/list'

Expected response

{
  "guardrails": [
    {
      "guardrail_name": "aporia-post-guard",
      "guardrail_info": {
        "params": [
          {
            "name": "toxicity_score",
            "type": "float",
            "description": "Score between 0-1 indicating content toxicity level"
          },
          {
            "name": "pii_detection",
            "type": "boolean"
          }
        ]
      }
    }
  ]
}
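A client can parse this response to discover which parameters each guardrail supports. The sketch below is a minimal example against the sample payload above; the `list_guardrail_params` helper is hypothetical, not part of LiteLLM.

```python
import json

# Sample response from GET /guardrails/list (copied from the docs above).
response_body = """
{
  "guardrails": [
    {
      "guardrail_name": "aporia-post-guard",
      "guardrail_info": {
        "params": [
          {"name": "toxicity_score", "type": "float",
           "description": "Score between 0-1 indicating content toxicity level"},
          {"name": "pii_detection", "type": "boolean"}
        ]
      }
    }
  ]
}
"""

def list_guardrail_params(body: str) -> dict:
    """Map each guardrail name to the names of its supported params."""
    data = json.loads(body)
    return {
        g["guardrail_name"]: [p["name"] for p in g["guardrail_info"]["params"]]
        for g in data["guardrails"]
    }

print(list_guardrail_params(response_body))
# {'aporia-post-guard': ['toxicity_score', 'pii_detection']}
```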

✨ Guardrails with Mock LLM

Send a mock_response to test guardrails without making an LLM call. More info on mock_response here

curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi my email is ishaan@berri.ai"}
    ],
    "mock_response": "This is a mock response",
    "guardrails": ["aporia-pre-guard", "aporia-post-guard"]
  }'
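The same request body can be built programmatically, e.g. when writing guardrail tests in Python. This is a minimal sketch; the `build_mock_guardrail_request` helper is hypothetical, and the body mirrors the curl example above.

```python
import json

def build_mock_guardrail_request(user_message: str,
                                 guardrails: list,
                                 mock_response: str = "This is a mock response") -> dict:
    """Build the JSON body for a guardrail test call that skips the real LLM."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": user_message}],
        "mock_response": mock_response,  # proxy returns this instead of calling the LLM
        "guardrails": guardrails,        # guardrails still run against the request
    }

payload = build_mock_guardrail_request(
    "hi my email is ishaan@berri.ai",
    ["aporia-pre-guard", "aporia-post-guard"],
)
print(json.dumps(payload, indent=2))
```

POSTing this payload to /v1/chat/completions exercises the configured guardrails without incurring an LLM call.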

Assign Keys to Users

You can now assign keys to users via the Proxy UI

New Models

  • openrouter/openai/o1
  • vertex_ai/mistral-large@2411

Fixes