[Pre-Release] v1.79.0-stable - Search APIs
Deploy this version

Docker:

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:v1.79.0.rc.1
```

Pip:

```shell
pip install litellm==1.79.0
```
Major Changes
- Cohere models will now be routed to the Cohere v2 API by default - PR #15722

Key Highlights
- Search APIs - Native /v1/search endpoint with support for Perplexity, Tavily, Parallel AI, Exa AI, DataforSEO, and Google PSE, with cost tracking
- Vector Stores - Vertex AI Search API integration as a vector store through LiteLLM, with passthrough endpoint support
- Guardrails Expansion - Apply guardrails across the Responses API, Image Gen, Text completions, Audio transcriptions, Audio Speech, Rerank, and the Anthropic Messages API via the unified apply_guardrails function
- New Guardrail Providers - Gray Swan, Dynamo AI, IBM Guardrails, Lasso Security v3, and Bedrock Guardrail apply_guardrail endpoint support
- Video Generation API - Native support for OpenAI Sora-2 and Azure Sora-2 (Pro, Pro-High-Res) with cost tracking and logging support
- Azure AI Speech (TTS) - Native Azure AI Speech integration with cost tracking for standard and HD voices
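The new /v1/search endpoint is called like any other gateway route. A minimal sketch of the request shape, assuming a JSON body with a `query` field and a `search_provider` selector (the field names and provider identifiers here are assumptions for illustration, not a confirmed schema; check the LiteLLM docs):

```python
import json

# Hypothetical request to the gateway's new /v1/search endpoint.
GATEWAY_URL = "http://localhost:4000/v1/search"

payload = {
    "query": "latest LiteLLM release notes",
    "search_provider": "tavily",  # provider name is an assumed value
}
headers = {
    "Authorization": "Bearer sk-1234",  # placeholder virtual key
    "Content-Type": "application/json",
}

body = json.dumps(payload)  # this is what would be POSTed to GATEWAY_URL
```

Because search requests are cost-tracked (PR #15821), each call made this way is logged with spend like any other gateway request.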
New Models / Updated Models

New Model Support
| Provider | Model | Context Window | Input ($/1M tokens) | Output ($/1M tokens) | Features |
|---|---|---|---|---|---|
| Bedrock | anthropic.claude-3-7-sonnet-20250219-v1:0 | 200K | $3.60 | $18.00 | Chat, reasoning, vision, function calling, prompt caching, computer use |
| Bedrock GovCloud | us-gov-west-1/anthropic.claude-3-7-sonnet-20250219-v1:0 | 200K | $3.60 | $18.00 | Chat, reasoning, vision, function calling, prompt caching, computer use |
| Vertex AI | mistral-medium-3 | 128K | $0.40 | $2.00 | Chat, function calling, tool choice |
| Vertex AI | codestral-2 | 128K | $0.30 | $0.90 | Chat, function calling, tool choice |
| Bedrock | amazon.titan-image-generator-v1 | - | - | - | Image generation - $0.008/image, $0.01/premium image |
| Bedrock | amazon.titan-image-generator-v2 | - | - | - | Image generation - $0.008/image, $0.01/premium image |
| OpenAI | sora-2 | - | - | - | Video generation - $0.10/video/second |
| Azure | sora-2 | - | - | - | Video generation - $0.10/video/second |
| Azure | sora-2-pro | - | - | - | Video generation - $0.30/video/second |
| Azure | sora-2-pro-high-res | - | - | - | Video generation - $0.50/video/second |
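Since video generation is billed per second, spend for a clip follows directly from the per-second rates in the table above; a quick sketch of the arithmetic:

```python
# Per-second video generation prices from the table above ($/video/second).
VIDEO_PRICE_PER_SECOND = {
    "sora-2": 0.10,
    "sora-2-pro": 0.30,
    "sora-2-pro-high-res": 0.50,
}

def video_cost(model: str, duration_seconds: float) -> float:
    """Estimated cost of one generated video of the given duration."""
    return VIDEO_PRICE_PER_SECOND[model] * duration_seconds

# A 10-second sora-2-pro clip costs 0.30 * 10 = $3.00.
```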
Features

- Bedrock
  - Add AWS us-gov-west-1 Claude 3.7 Sonnet costs - PR #15775
  - Fix the date for Sonnet 3.7 in GovCloud - PR #15800
  - Use the proper Bedrock model name in health checks - PR #15808
  - Support for embeddings_by_type response format in Bedrock Cohere Embed v1 - PR #15707
  - Add Titan image generation with cost tracking - PR #15916
- Vertex AI
  - Add Mistral Medium 3 and Codestral 2 on Vertex - PR #15887
- Databricks
  - Allow prompt caching to be used for Anthropic Claude on Databricks - PR #15801
- OpenAI
  - OpenAI videos refactoring - PR #15900
- General
  - Read from custom-llm-provider header - PR #15528
LLM API Endpoints

Features
- Responses API
  - Add GPT-4.1 pricing for the responses endpoint - PR #15593
  - Fix incorrect status value in Responses API with Gemini - PR #15753
  - Simplify reasoning item handling for gpt-5-codex - PR #15815
  - Fix ErrorEvent ValidationError when the OpenAI Responses API returns a nested error structure - PR #15804
  - Fix reasoning item ID auto-generation causing encrypted content verification errors - PR #15782
  - Support tags in metadata - PR #15867
  - Security: prevent User A from retrieving User B's response if response.id is leaked - PR #15757
- Search API
  - Add def search() APIs for web search - Perplexity API - PR #15769
  - Add Tavily Search API - PR #15770
  - Add Parallel AI Search API - PR #15772
  - Add Exa AI Search API to LiteLLM - PR #15774
  - Add /search endpoint on the LiteLLM Gateway - PR #15780
  - Add DataforSEO Search API - PR #15817
  - Add Google PSE search provider - PR #15816
  - Add cost tracking for Search API requests - Google PSE, Tavily, Parallel AI, Exa AI - PR #15821
  - Backend: allow storing configured Search APIs in the DB - PR #15862
  - Exa Search API - ensure request params are sent to Exa AI - PR #15855
- Vector Stores
  - Support Vertex AI Search API as a vector store through LiteLLM - PR #15781
  - Azure AI - Search vector stores - PR #15873
  - Vertex AI Search vector store - passthrough endpoint support + vector store search cost tracking - PR #15824
  - Don't raise an error if a managed object is not found - PR #15873
  - Show config.yaml vector stores on the UI - PR #15873
  - Cost tracking for search spend - PR #15859
- Images
  - Pass user-defined headers and extra_headers to image-edit calls - PR #15811
- Bedrock Passthrough
  - Fix: hooks broken on /bedrock passthrough due to missing metadata - PR #15849
- Realtime API
  - Fix: OpenAI Realtime API integration fails due to websockets.exceptions.PayloadTooBig error - PR #15751
Management Endpoints / UI

Features

- Passthrough
- Organizations
  - Allow org admins to create teams on the UI - PR #15924
- Search Tools
- General
  - Fix routing for custom server root path - PR #15701
Logging / Guardrail / Prompt Management Integrations

Features
- Sentry
  - Add SENTRY_ENVIRONMENT configuration for the Sentry integration - PR #15760
- Helicone
  - Fix JSON serialization error in Helicone logging by removing the OpenTelemetry span from metadata - PR #15728
- MLflow
  - Fix MLflow tags - split request_tags into (key, value) pairs when a request_tag contains a colon - PR #15914
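The MLflow fix splits colon-delimited request tags into key/value pairs. Roughly (the fallback for tags without a colon is an assumption here, not taken from the PR):

```python
def split_request_tag(tag: str) -> tuple[str, str]:
    """Split a request tag like "env:prod" into ("env", "prod").

    Only the first colon splits, so "a:b:c" becomes ("a", "b:c").
    Tags without a colon are kept whole with an empty value (assumed fallback).
    """
    if ":" in tag:
        key, value = tag.split(":", 1)
        return key, value
    return tag, ""
```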
- General
  - Rename configured_cold_storage_logger to cold_storage_custom_logger - PR #15798
Guardrails

- Dynamo AI
  - New guardrail - Dynamo AI Guardrail - PR #15920
- IBM Guardrails
  - IBM Guardrails integration - PR #15924
- Bedrock
  - Implement Bedrock Guardrail apply_guardrail endpoint support - PR #15892
- General
  - Responses API, Image Gen, Text completions, Audio transcriptions, Audio Speech, Rerank, and Anthropic Messages API support via the unified apply_guardrails function - PR #15706
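Guardrails are wired up through the proxy's config.yaml. A hedged sketch of what a Bedrock guardrail entry might look like (the identifier values are placeholders, and the exact keys for the providers added in this release may differ):

```yaml
guardrails:
  - guardrail_name: "bedrock-pre-guard"
    litellm_params:
      guardrail: bedrock              # guardrail provider
      mode: "pre_call"                # run before the LLM call
      guardrailIdentifier: "gr-id-placeholder"
      guardrailVersion: "DRAFT"
```

With the unified apply_guardrails function (PR #15706), an entry like this now covers the Responses API, image generation, text completions, audio endpoints, rerank, and Anthropic Messages calls as well.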
Spend Tracking, Budgets and Rate Limiting

- Rate Limiting
MCP Gateway

- OAuth
Performance / Loadbalancing / Reliability improvements

- Database
  - Minimize the occurrence of deadlocks - PR #15281
- Redis
  - Apply max_connections configuration to the Redis async client - PR #15797
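max_connections is set through the proxy's cache configuration; a sketch, assuming the key sits under cache_params alongside the usual Redis connection settings (placement is an assumption):

```yaml
litellm_settings:
  cache: true
  cache_params:
    type: redis
    host: "localhost"
    port: 6379
    max_connections: 100   # now also applied to the async Redis client
```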
- Caching
  - Add documentation for the enable_caching_on_provider_specific_optional_params setting - PR #15885
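The setting itself is presumably a litellm_settings flag in config.yaml, along these lines:

```yaml
litellm_settings:
  enable_caching_on_provider_specific_optional_params: true
```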
Documentation Updates

- Provider Documentation
New Contributors
- @tlecomte made their first contribution in PR #15528
- @tomhaynes made their first contribution in PR #15645
- @talalryz made their first contribution in PR #15720
- @1vinodsingh1 made their first contribution in PR #15736
- @nuernber made their first contribution in PR #15775
- @Thomas-Mildner made their first contribution in PR #15760
- @javiergarciapleo made their first contribution in PR #15721
- @lshgdut made their first contribution in PR #15717
- @kk-wangjifeng made their first contribution in PR #15530
- @anthonyivn2 made their first contribution in PR #15801
- @romanglo made their first contribution in PR #15707
- @mythral made their first contribution in PR #15859
- @mubashirosmani made their first contribution in PR #15866
- @CAFxX made their first contribution in PR #15281
- @reflection made their first contribution in PR #15914
- @shadielfares made their first contribution in PR #15917
PR Count Summary

10/26/2025
- New Models / Updated Models: 20
- LLM API Endpoints: 29
- Management Endpoints / UI: 5
- Logging / Guardrail / Prompt Management Integrations: 10
- Spend Tracking, Budgets and Rate Limiting: 2
- MCP Gateway: 2
- Performance / Loadbalancing / Reliability improvements: 3
- Documentation Updates: 5

