/search
Feature | Supported |
---|---|
Supported Providers | perplexity, tavily, parallel_ai, exa_ai |
Cost Tracking | ✅ |
Logging | ✅ |
Load Balancing | ✅ |
LiteLLM follows the Perplexity API request/response format for the Search API.
Supported from LiteLLM v1.78.7+.
LiteLLM Python SDK Usage
Quick Start
from litellm import search
import os
os.environ["PERPLEXITYAI_API_KEY"] = "pplx-..."
response = search(
    query="latest AI developments in 2024",
    search_provider="perplexity",
    max_results=5
)

# Access search results
for result in response.results:
    print(f"{result.title}: {result.url}")
    print(f"Snippet: {result.snippet}\n")
Async Usage
from litellm import asearch
import os, asyncio
os.environ["PERPLEXITYAI_API_KEY"] = "pplx-..."
async def search_async():
    response = await asearch(
        query="machine learning research papers",
        search_provider="perplexity",
        max_results=10,
        search_domain_filter=["arxiv.org", "nature.com"]
    )

    # Access search results
    for result in response.results:
        print(f"{result.title}: {result.url}")
        print(f"Snippet: {result.snippet}")
asyncio.run(search_async())
Optional Parameters
response = search(
    query="AI developments",
    search_provider="perplexity",
    # Unified parameters (work across all providers)
    max_results=10,                       # Maximum number of results (1-20)
    search_domain_filter=["arxiv.org"],   # Filter to specific domains
    country="US",                         # Country code filter
    max_tokens_per_page=1024              # Max tokens per page
)
LiteLLM AI Gateway Usage
LiteLLM provides a Perplexity API-compatible /search endpoint for search calls.
Setup
Add this to your litellm proxy config.yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: gpt-4
      api_key: os.environ/OPENAI_API_KEY

search_tools:
  - search_tool_name: perplexity-search
    litellm_params:
      search_provider: perplexity
      api_key: os.environ/PERPLEXITYAI_API_KEY
  - search_tool_name: tavily-search
    litellm_params:
      search_provider: tavily
      api_key: os.environ/TAVILY_API_KEY
Start litellm
litellm --config /path/to/config.yaml
# RUNNING on http://0.0.0.0:4000
Test Request
Option 1: Search tool name in URL (Recommended: keeps the request body Perplexity-compatible)
curl http://0.0.0.0:4000/v1/search/perplexity-search \
-H "Authorization: Bearer sk-1234" \
-H "Content-Type: application/json" \
-d '{
"query": "latest AI developments 2024",
"max_results": 5,
"search_domain_filter": ["arxiv.org", "nature.com"],
"country": "US"
}'
Option 2: Search tool name in body
curl http://0.0.0.0:4000/v1/search \
-H "Authorization: Bearer sk-1234" \
-H "Content-Type: application/json" \
-d '{
"search_tool_name": "perplexity-search",
"query": "latest AI developments 2024",
"max_results": 5
}'
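The same request can be sent from Python with a plain HTTP client. A minimal sketch using the requests library, following the Option 2 style above (the base URL and sk-1234 key are the placeholder values from these examples):

import requests

# Call the proxy's /search endpoint with the search tool name in the body
# (placeholder URL and API key taken from the curl examples above)
response = requests.post(
    "http://0.0.0.0:4000/v1/search",
    headers={
        "Authorization": "Bearer sk-1234",
        "Content-Type": "application/json",
    },
    json={
        "search_tool_name": "perplexity-search",
        "query": "latest AI developments 2024",
        "max_results": 5,
    },
)
print(response.json())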
Load Balancing
Configure multiple search providers for automatic load balancing and fallbacks:
search_tools:
  - search_tool_name: my-search
    litellm_params:
      search_provider: perplexity
      api_key: os.environ/PERPLEXITYAI_API_KEY
  - search_tool_name: my-search
    litellm_params:
      search_provider: tavily
      api_key: os.environ/TAVILY_API_KEY
  - search_tool_name: my-search
    litellm_params:
      search_provider: exa_ai
      api_key: os.environ/EXA_API_KEY

router_settings:
  routing_strategy: simple-shuffle # or 'least-busy', 'latency-based-routing'
Test with load balancing:
curl http://0.0.0.0:4000/v1/search/my-search \
-H "Authorization: Bearer sk-1234" \
-H "Content-Type: application/json" \
-d '{
"query": "AI developments",
"max_results": 10
}'
Request/Response Format
LiteLLM follows the Perplexity Search API specification.
See the official Perplexity Search documentation for complete details.
Example Request
{
  "query": "latest AI developments 2024",
  "max_results": 10,
  "search_domain_filter": ["arxiv.org", "nature.com"],
  "country": "US",
  "max_tokens_per_page": 1024
}
Request Parameters
Parameter | Type | Required | Description |
---|---|---|---|
query | string or array | Yes | Search query. Can be a single string or an array of strings |
search_provider | string | Yes (SDK) | The search provider to use: "perplexity", "tavily", "parallel_ai", or "exa_ai" |
search_tool_name | string | Yes (Proxy) | Name of the search tool configured in config.yaml |
max_results | integer | No | Maximum number of results to return (1-20). Default: 10 |
search_domain_filter | array | No | List of domains to filter results (max 20 domains) |
max_tokens_per_page | integer | No | Maximum tokens per page to process. Default: 1024 |
country | string | No | Country code filter (e.g., "US", "GB", "DE") |
Query Format Examples:
# Single query
query = "AI developments"
# Multiple queries
query = ["AI developments", "machine learning trends"]
Response Format
The response follows Perplexity's search format with the following structure:
{
  "object": "search",
  "results": [
    {
      "title": "Latest Advances in Artificial Intelligence",
      "url": "https://arxiv.org/paper/example",
      "snippet": "This paper discusses recent developments in AI...",
      "date": "2024-01-15"
    },
    {
      "title": "Machine Learning Breakthroughs",
      "url": "https://nature.com/articles/ml-breakthrough",
      "snippet": "Researchers have achieved new milestones...",
      "date": "2024-01-10"
    }
  ]
}
Response Fields
Field | Type | Description |
---|---|---|
object | string | Always "search" for search responses |
results | array | List of search results |
results[].title | string | Title of the search result |
results[].url | string | URL of the search result |
results[].snippet | string | Text snippet from the result |
results[].date | string | Optional publication or last updated date |
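In the Python SDK these fields are exposed as attributes on the response object, as in the Quick Start example. A minimal sketch (assuming the optional date field may be absent on some results):

print(response.object)  # "search"

for result in response.results:
    print(result.title)
    print(result.url)
    print(result.snippet)
    # date is optional and may not be set on every result
    print(getattr(result, "date", None))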
Supported Providers
Provider | Environment Variable | search_provider Value |
---|---|---|
Perplexity AI | PERPLEXITYAI_API_KEY | perplexity |
Tavily | TAVILY_API_KEY | tavily |
Exa AI | EXA_API_KEY | exa_ai |
Parallel AI | PARALLEL_AI_API_KEY | parallel_ai |
Perplexity AI
Get API Key: https://www.perplexity.ai/settings/api
LiteLLM Python SDK
import os
from litellm import search

os.environ["PERPLEXITYAI_API_KEY"] = "pplx-..."

response = search(
    query="latest AI developments",
    search_provider="perplexity",
    max_results=5
)
LiteLLM AI Gateway
1. Setup config.yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: gpt-4
      api_key: os.environ/OPENAI_API_KEY

search_tools:
  - search_tool_name: perplexity-search
    litellm_params:
      search_provider: perplexity
      api_key: os.environ/PERPLEXITYAI_API_KEY
2. Start the proxy
litellm --config /path/to/config.yaml
# RUNNING on http://0.0.0.0:4000
3. Test the search endpoint
curl http://0.0.0.0:4000/v1/search/perplexity-search \
-H "Authorization: Bearer sk-1234" \
-H "Content-Type: application/json" \
-d '{
"query": "latest AI developments",
"max_results": 5
}'
Tavily
Get API Key: https://tavily.com
LiteLLM Python SDK
import os
from litellm import search

os.environ["TAVILY_API_KEY"] = "tvly-..."

response = search(
    query="latest AI developments",
    search_provider="tavily",
    max_results=5
)
LiteLLM AI Gateway
1. Setup config.yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: gpt-4
      api_key: os.environ/OPENAI_API_KEY

search_tools:
  - search_tool_name: tavily-search
    litellm_params:
      search_provider: tavily
      api_key: os.environ/TAVILY_API_KEY
2. Start the proxy
litellm --config /path/to/config.yaml
# RUNNING on http://0.0.0.0:4000
3. Test the search endpoint
curl http://0.0.0.0:4000/v1/search/tavily-search \
-H "Authorization: Bearer sk-1234" \
-H "Content-Type: application/json" \
-d '{
"query": "latest AI developments",
"max_results": 5
}'
Exa AI
Get API Key: https://exa.ai
LiteLLM Python SDK
import os
from litellm import search

os.environ["EXA_API_KEY"] = "exa-..."

response = search(
    query="latest AI developments",
    search_provider="exa_ai",
    max_results=5
)
LiteLLM AI Gateway
1. Setup config.yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: gpt-4
      api_key: os.environ/OPENAI_API_KEY

search_tools:
  - search_tool_name: exa-search
    litellm_params:
      search_provider: exa_ai
      api_key: os.environ/EXA_API_KEY
2. Start the proxy
litellm --config /path/to/config.yaml
# RUNNING on http://0.0.0.0:4000
3. Test the search endpoint
curl http://0.0.0.0:4000/v1/search/exa-search \
-H "Authorization: Bearer sk-1234" \
-H "Content-Type: application/json" \
-d '{
"query": "latest AI developments",
"max_results": 5
}'
Parallel AI
Get API Key: https://www.parallel.ai
LiteLLM Python SDK
import os
from litellm import search

os.environ["PARALLEL_AI_API_KEY"] = "..."

response = search(
    query="latest AI developments",
    search_provider="parallel_ai",
    max_results=5
)
LiteLLM AI Gateway
1. Setup config.yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: gpt-4
      api_key: os.environ/OPENAI_API_KEY

search_tools:
  - search_tool_name: parallel-search
    litellm_params:
      search_provider: parallel_ai
      api_key: os.environ/PARALLEL_AI_API_KEY
2. Start the proxy
litellm --config /path/to/config.yaml
# RUNNING on http://0.0.0.0:4000
3. Test the search endpoint
curl http://0.0.0.0:4000/v1/search/parallel-search \
-H "Authorization: Bearer sk-1234" \
-H "Content-Type: application/json" \
-d '{
"query": "latest AI developments",
"max_results": 5
}'
Provider-specific parameters
All providers support provider-specific parameters; just pass them in the request body, or as extra keyword arguments when using the SDK, as shown below.
Tavily Search
import os
from litellm import search

os.environ["TAVILY_API_KEY"] = "tvly-..."

response = search(
    query="latest tech news",
    search_provider="tavily",
    max_results=5,
    # Tavily-specific parameters
    topic="news",              # 'general', 'news', 'finance'
    search_depth="advanced",   # 'basic', 'advanced'
    include_answer=True,       # Include AI-generated answer
    include_raw_content=True   # Include raw HTML content
)
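Through the AI Gateway, the same Tavily-specific parameters go in the request body, per the note above. A minimal sketch against the tavily-search tool from the earlier config, assuming extra body fields are forwarded to the provider (placeholder URL and key from the gateway examples):

import requests

# Send Tavily-specific parameters in the request body via the proxy
response = requests.post(
    "http://0.0.0.0:4000/v1/search/tavily-search",
    headers={
        "Authorization": "Bearer sk-1234",
        "Content-Type": "application/json",
    },
    json={
        "query": "latest tech news",
        "max_results": 5,
        "topic": "news",
        "search_depth": "advanced",
    },
)
print(response.json())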
Exa AI Search
import os
from litellm import search

os.environ["EXA_API_KEY"] = "exa-..."

response = search(
    query="AI research papers",
    search_provider="exa_ai",
    max_results=10,
    search_domain_filter=["arxiv.org"],
    # Exa-specific parameters
    type="neural",            # 'neural', 'keyword', or 'auto'
    contents={"text": True},  # Request text content
    use_autoprompt=True       # Enable Exa's autoprompt
)
Parallel AI Search
import os
from litellm import search

os.environ["PARALLEL_AI_API_KEY"] = "..."

response = search(
    query="latest developments in quantum computing",
    search_provider="parallel_ai",
    max_results=5,
    # Parallel AI-specific parameters
    processor="pro",           # 'base' or 'pro'
    max_chars_per_result=500   # Max characters per result
)