
v1.56.3

Krrish Dholakia
CEO, LiteLLM
Ishaan Jaffer
CTO, LiteLLM

guardrails, logging, virtual key management, new models

info

Get a 7-day free trial for LiteLLM Enterprise here (no call needed).

New Features​

✨ Log Guardrail Traces​

Track guardrail failure rates and spot when a guardrail is going rogue and failing requests. Start here
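As a rough sketch of what this tracking enables, the logged traces can be aggregated into a per-guardrail failure rate. The record schema below (`guardrail_name`, `status`) is hypothetical, for illustration only; the actual trace fields are defined by your logging integration.

```python
# Hypothetical sketch: computing a guardrail failure rate from logged traces.
# The record fields used here are illustrative, not LiteLLM's actual schema.
from collections import Counter

traces = [
    {"guardrail_name": "aporia-post-guard", "status": "success"},
    {"guardrail_name": "aporia-post-guard", "status": "failure"},
    {"guardrail_name": "aporia-post-guard", "status": "success"},
]

counts = Counter(t["status"] for t in traces)
failure_rate = counts["failure"] / len(traces)
print(f"failure rate: {failure_rate:.1%}")
```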

Traced Guardrail Success​

Traced Guardrail Failure​

/guardrails/list​

/guardrails/list allows clients to view the available guardrails and their supported parameters

curl -X GET 'http://0.0.0.0:4000/guardrails/list'

Expected response

{
  "guardrails": [
    {
      "guardrail_name": "aporia-post-guard",
      "guardrail_info": {
        "params": [
          {
            "name": "toxicity_score",
            "type": "float",
            "description": "Score between 0-1 indicating content toxicity level"
          },
          {
            "name": "pii_detection",
            "type": "boolean"
          }
        ]
      }
    }
  ]
}
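As a sketch, a client could parse this response to discover which parameters each guardrail supports. The field names below are taken from the example response above:

```python
import json

# Example /guardrails/list response body, copied from the docs above
response_body = """
{
  "guardrails": [
    {
      "guardrail_name": "aporia-post-guard",
      "guardrail_info": {
        "params": [
          {"name": "toxicity_score", "type": "float",
           "description": "Score between 0-1 indicating content toxicity level"},
          {"name": "pii_detection", "type": "boolean"}
        ]
      }
    }
  ]
}
"""

data = json.loads(response_body)

# Build a {guardrail_name: {param_name: param_type}} index
supported_params = {
    g["guardrail_name"]: {
        p["name"]: p["type"]
        for p in g.get("guardrail_info", {}).get("params", [])
    }
    for g in data["guardrails"]
}
print(supported_params)
```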

✨ Guardrails with Mock LLM​

Send a mock_response to test guardrails without making an LLM call. More info on mock_response here

curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi my email is ishaan@berri.ai"}
    ],
    "mock_response": "This is a mock response",
    "guardrails": ["aporia-pre-guard", "aporia-post-guard"]
  }'
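The same request can be built from Python. This is a minimal sketch using only the standard library; the proxy URL and key are the placeholders from the curl example above, and the request is constructed but not sent:

```python
import json
import urllib.request

# Placeholder proxy URL and key, matching the curl example above
PROXY_URL = "http://localhost:4000/v1/chat/completions"
API_KEY = "sk-npnwjPQciVRok5yNZgKmFQ"

# mock_response skips the real LLM call; the listed guardrails still run
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hi my email is ishaan@berri.ai"}],
    "mock_response": "This is a mock response",
    "guardrails": ["aporia-pre-guard", "aporia-post-guard"],
}

def build_request() -> urllib.request.Request:
    """Assemble the POST request without sending it."""
    return urllib.request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

# To actually send it against a running proxy:
#   urllib.request.urlopen(build_request())
```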

Assign Keys to Users​

You can now assign keys to users via the Proxy UI

New Models​

  • openrouter/openai/o1
  • vertex_ai/mistral-large@2411

Fixes​