
Langchain, OpenAI SDK, LlamaIndex, Instructor, Curl examples

LiteLLM Proxy is OpenAI-Compatible and Azure OpenAI-compatible, and supports:

  • /chat/completions
  • /completions
  • /embeddings

LiteLLM Proxy is Anthropic-compatible:

  • /messages

LiteLLM Proxy is also Vertex AI compatible.

This doc covers:

  • /chat/completions
  • /embeddings
  • /moderations

These are selected examples. LiteLLM Proxy is OpenAI-Compatible, so it works with any project that calls the OpenAI API. Just change the base_url, api_key, and model.

To pass provider-specific args, see https://docs.litellm.ai/docs/completion/input#provider-specific-params

To drop unsupported params (e.g. frequency_penalty for Bedrock with LibreChat), see the drop params docs.
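
If you want the proxy to drop unsupported params globally, a minimal sketch of the relevant config.yaml setting (assuming the standard litellm_settings block) is:

litellm_settings:
  drop_params: true   # silently drop params the target provider doesn't support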

info

Input, output, and exceptions are mapped to the OpenAI format for all supported models.

This doc shows how to send requests to the proxy, pass metadata, and allow users to pass in their own LLM API keys.

/chat/completions

Request Format

Set extra_body={"metadata": {...}} to any metadata you want to pass along with the request.

import openai

client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000"
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={  # pass in any provider-specific param, if not supported by openai, https://docs.litellm.ai/docs/completion/input#provider-specific-params
        "metadata": {  # 👈 use for logging additional params (e.g. to langfuse)
            "generation_name": "ishaan-generation-openai-client",
            "generation_id": "openai-client-gen-id22",
            "trace_id": "openai-client-trace-id22",
            "trace_user_id": "openai-client-user-id2"
        }
    }
)

print(response)
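
For reference, an equivalent curl request looks roughly like this (a sketch: the metadata object is sent at the top level of the JSON body, which is what extra_body does under the hood; $LITELLM_PROXY_KEY is a placeholder for your proxy key, if one is set):

curl http://0.0.0.0:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_PROXY_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "this is a test request, write a short poem"
      }
    ],
    "metadata": {
      "generation_name": "ishaan-generation-curl-client",
      "trace_user_id": "curl-client-user-id2"
    }
  }'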

Response Format

{
  "id": "chatcmpl-8c5qbGTILZa1S4CK3b31yj5N40hFN",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "As an AI language model, I do not have a physical form or personal preferences. However, I am programmed to assist with various topics and provide information on a wide range of subjects. Is there something specific you would like assistance with?",
        "role": "assistant"
      }
    }
  ],
  "created": 1704089632,
  "model": "gpt-35-turbo",
  "object": "chat.completion",
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 47,
    "prompt_tokens": 12,
    "total_tokens": 59
  },
  "_response_ms": 1753.426
}

Function Calling

Here are some examples of doing function calling with the proxy.

You can use the proxy for function calling with any OpenAI-compatible project.

curl http://0.0.0.0:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPTIONAL_YOUR_PROXY_KEY" \
  -d '{
    "model": "gpt-4-turbo",
    "messages": [
      {
        "role": "user",
        "content": "What'\''s the weather like in Boston today?"
      }
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_current_weather",
          "description": "Get the current weather in a given location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
              },
              "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"]
              }
            },
            "required": ["location"]
          }
        }
      }
    ],
    "tool_choice": "auto"
  }'
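
The same function-calling request through the OpenAI Python SDK might look like this (a sketch, assuming the proxy is running on http://0.0.0.0:4000 with a gpt-4-turbo deployment configured; pass your proxy key as api_key if one is required):

import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")

# same tool definition as the curl example above
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
                },
                "required": ["location"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "What's the weather like in Boston today?"}],
    tools=tools,
    tool_choice="auto",
)

# the model's requested tool call(s), in OpenAI format
print(response.choices[0].message.tool_calls)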

/embeddings

Request Format

Input, Output and Exceptions are mapped to the OpenAI format for all supported models

import openai
from openai import OpenAI

# set base_url to your proxy server
# set api_key to send to proxy server
client = OpenAI(api_key="<proxy-api-key>", base_url="http://0.0.0.0:4000")

response = client.embeddings.create(
    input=["hello from litellm"],
    model="text-embedding-ada-002"
)

print(response)
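
An equivalent curl request (a sketch, assuming the proxy also exposes /embeddings on the same base url) would be:

curl http://0.0.0.0:4000/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <proxy-api-key>" \
  -d '{
    "model": "text-embedding-ada-002",
    "input": ["hello from litellm"]
  }'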

Response Format

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [
        0.0023064255,
        -0.009327292,
        ....
        -0.0028842222,
      ],
      "index": 0
    }
  ],
  "model": "text-embedding-ada-002",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}

/moderations

Request Format

Input, Output and Exceptions are mapped to the OpenAI format for all supported models

import openai
from openai import OpenAI

# set base_url to your proxy server
# set api_key to send to proxy server
client = OpenAI(api_key="<proxy-api-key>", base_url="http://0.0.0.0:4000")

response = client.moderations.create(
    input="hello from litellm",
    model="text-moderation-stable"
)

print(response)
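
An equivalent curl request (a sketch, assuming the proxy exposes /moderations on the same base url) would be:

curl http://0.0.0.0:4000/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <proxy-api-key>" \
  -d '{
    "model": "text-moderation-stable",
    "input": "hello from litellm"
  }'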

Response Format

{
  "id": "modr-8sFEN22QCziALOfWTa77TodNLgHwA",
  "model": "text-moderation-007",
  "results": [
    {
      "categories": {
        "harassment": false,
        "harassment/threatening": false,
        "hate": false,
        "hate/threatening": false,
        "self-harm": false,
        "self-harm/instructions": false,
        "self-harm/intent": false,
        "sexual": false,
        "sexual/minors": false,
        "violence": false,
        "violence/graphic": false
      },
      "category_scores": {
        "harassment": 0.000019947197870351374,
        "harassment/threatening": 5.5971017900446896e-6,
        "hate": 0.000028560316422954202,
        "hate/threatening": 2.2631787999216613e-8,
        "self-harm": 2.9121162015144364e-7,
        "self-harm/instructions": 9.314219084899378e-8,
        "self-harm/intent": 8.093739012338119e-8,
        "sexual": 0.00004414955765241757,
        "sexual/minors": 0.0000156943697220413,
        "violence": 0.00022354527027346194,
        "violence/graphic": 8.804164281173144e-6
      },
      "flagged": false
    }
  ]
}

Using with OpenAI compatible projects

Set base_url to the LiteLLM Proxy server

import openai

client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000"
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ]
)

print(response)
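
For example, with Langchain (a sketch, assuming the langchain_openai package; ChatOpenAI only needs the proxy base_url, a model name configured on the proxy, and whatever api_key your proxy expects):

from langchain_openai import ChatOpenAI

# point the client at the LiteLLM proxy instead of api.openai.com
llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    api_key="anything",              # or your proxy key, if one is set
    base_url="http://0.0.0.0:4000",
)

print(llm.invoke("this is a test request, write a short poem"))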

Using with Vertex, Boto3, Anthropic SDK (Native format)

👉 Here's how to use litellm proxy with Vertex, boto3, Anthropic SDK - in the native format

Advanced

(BETA) Batch Completions - pass multiple models

Use this when you want to send one request to N models.

Expected Request Format

Pass model as a comma-separated string of model names, e.g. "model": "llama3,gpt-3.5-turbo".

This same request will be sent to the following model groups in the litellm proxy config.yaml (a sketch of such a config follows the list):

  • model_name="llama3"
  • model_name="gpt-3.5-turbo"
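
A sketch of what those two model groups could look like in config.yaml (the litellm_params below, e.g. groq/llama3-8b-8192 and openai/gpt-3.5-turbo, are illustrative assumptions; point them at whatever deployments you actually use):

model_list:
  - model_name: llama3
    litellm_params:
      model: groq/llama3-8b-8192        # assumed deployment
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo       # assumed deployment
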
import openai

client = openai.OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo,llama3",
    messages=[
        {"role": "user", "content": "this is a test request, write a short poem"}
    ],
)

print(response)

Expected Response Format

You get back a list of responses, one per model passed.

[
    ChatCompletion(
        id='chatcmpl-9NoYhS2G0fswot0b6QpoQgmRQMaIf',
        choices=[
            Choice(
                finish_reason='stop',
                index=0,
                logprobs=None,
                message=ChatCompletionMessage(
                    content='In the depths of my soul, a spark ignites\nA light that shines so pure and bright\nIt dances and leaps, refusing to die\nA flame of hope that reaches the sky\n\nIt warms my heart and fills me with bliss\nA reminder that in darkness, there is light to kiss\nSo I hold onto this fire, this guiding light\nAnd let it lead me through the darkest night.',
                    role='assistant',
                    function_call=None,
                    tool_calls=None
                )
            )
        ],
        created=1715462919,
        model='gpt-3.5-turbo-0125',
        object='chat.completion',
        system_fingerprint=None,
        usage=CompletionUsage(
            completion_tokens=83,
            prompt_tokens=17,
            total_tokens=100
        )
    ),
    ChatCompletion(
        id='chatcmpl-4ac3e982-da4e-486d-bddb-ed1d5cb9c03c',
        choices=[
            Choice(
                finish_reason='stop',
                index=0,
                logprobs=None,
                message=ChatCompletionMessage(
                    content="A test request, and I'm delighted!\nHere's a short poem, just for you:\n\nMoonbeams dance upon the sea,\nA path of light, for you to see.\nThe stars up high, a twinkling show,\nA night of wonder, for all to know.\n\nThe world is quiet, save the night,\nA peaceful hush, a gentle light.\nThe world is full, of beauty rare,\nA treasure trove, beyond compare.\n\nI hope you enjoyed this little test,\nA poem born, of whimsy and jest.\nLet me know, if there's anything else!",
                    role='assistant',
                    function_call=None,
                    tool_calls=None
                )
            )
        ],
        created=1715462919,
        model='groq/llama3-8b-8192',
        object='chat.completion',
        system_fingerprint='fp_a2c8d063cb',
        usage=CompletionUsage(
            completion_tokens=120,
            prompt_tokens=20,
            total_tokens=140
        )
    )
]

Pass User LLM API Keys, Fallbacks

Allow your end-users to pass in their own model list, api base, and API key (for any LiteLLM-supported provider) when making requests.

Note: This is not related to virtual keys. This is for when you want to pass in your users' actual LLM API keys.

info

You can pass a litellm.RouterConfig as user_config. See all supported params here: https://github.com/BerriAI/litellm/blob/main/litellm/types/router.py

Step 1: Define user model list & config

import os

user_config = {
    'model_list': [
        {
            'model_name': 'user-azure-instance',
            'litellm_params': {
                'model': 'azure/chatgpt-v-2',
                'api_key': os.getenv('AZURE_API_KEY'),
                'api_version': os.getenv('AZURE_API_VERSION'),
                'api_base': os.getenv('AZURE_API_BASE'),
                'timeout': 10,
            },
            'tpm': 240000,
            'rpm': 1800,
        },
        {
            'model_name': 'user-openai-instance',
            'litellm_params': {
                'model': 'gpt-3.5-turbo',
                'api_key': os.getenv('OPENAI_API_KEY'),
                'timeout': 10,
            },
            'tpm': 240000,
            'rpm': 1800,
        },
    ],
    'num_retries': 2,
    'allowed_fails': 3,
    'fallbacks': [
        {
            'user-azure-instance': ['user-openai-instance']
        }
    ]
}


Step 2: Send user_config in extra_body

import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://0.0.0.0:4000"
)

# send request to `user-azure-instance`
response = client.chat.completions.create(
    model="user-azure-instance",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={
        "user_config": user_config  # 👈 User config
    }
)

print(response)

Pass User LLM API Keys / API Base

Allow your users to pass in their own API key / API base (for any LiteLLM-supported provider) when making requests.

Here's how to do it:

1. Enable configurable clientside auth credentials for a provider

model_list:
  - model_name: "fireworks_ai/*"
    litellm_params:
      model: "fireworks_ai/*"
      configurable_clientside_auth_params: ["api_base"]
      # OR
      configurable_clientside_auth_params: [{"api_base": "^https://litellm.*direct\.fireworks\.ai/v1$"}] # 👈 regex

Specify any/all auth params you want the user to be able to configure:

  • api_base (✅ regex supported)
  • api_key
  • base_url

(check provider docs for provider-specific auth params - e.g. vertex_project)

2. Test it!

import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://0.0.0.0:4000"
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={"api_key": "my-bad-key", "api_base": "https://litellm-dev.direct.fireworks.ai/v1"}  # 👈 clientside credentials
)

print(response)

More examples:

Pass in the litellm_params (e.g. api_key, api_base, etc.) via the extra_body parameter in the OpenAI client.

import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://0.0.0.0:4000"
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={
        "api_key": "my-azure-key",
        "api_base": "my-azure-base",
        "api_version": "my-azure-version"
    }  # 👈 User Key
)

print(response)
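
Since extra_body is merged into the JSON body, the same credentials can also be passed directly in a curl request (a sketch; which params are accepted clientside depends on the configurable_clientside_auth_params you enabled above):

curl http://0.0.0.0:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "this is a test request, write a short poem"}],
    "api_key": "my-azure-key",
    "api_base": "my-azure-base",
    "api_version": "my-azure-version"
  }'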