LiteLLM Proxy CLI
The litellm-proxy CLI is a command-line tool for managing your LiteLLM proxy server. It provides commands for managing models, credentials, API keys, users, and more, as well as making chat completion and custom HTTP requests to the proxy server.
| Feature | What you can do |
|---|---|
| Models Management | List, add, update, and delete models |
| Credentials Management | Manage provider credentials |
| Keys Management | Generate, list, and delete API keys |
| User Management | Create, list, and delete users |
| Chat Completions | Run chat completions |
| HTTP Requests | Make custom HTTP requests to the proxy server |
Quick Start
- Install the CLI

  If you have uv installed, you can try this:

  uv tool install 'litellm[proxy]'

  If that works, you'll see something like this:

  ...
  Installed 2 executables: litellm, litellm-proxy

  and now you can use the tool by just typing litellm-proxy in your terminal:

  litellm-proxy
- Set up environment variables

  export LITELLM_PROXY_URL=http://localhost:4000
  export LITELLM_PROXY_API_KEY=sk-your-key

  (Replace with your actual proxy URL and API key)
- Make your first request (list models)

  litellm-proxy models list

  If the CLI is set up correctly, you should see a list of available models or a table output.
- Troubleshooting

  If you see an error, check your environment variables and proxy server status (see the sketch below).
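A quick way to rule out connectivity problems is to hit the proxy directly before debugging the CLI itself. This is a minimal sketch; it assumes your proxy exposes LiteLLM's /health/liveliness endpoint at the URL in LITELLM_PROXY_URL:

  # Confirm the environment variable is actually set
  echo "Proxy URL: $LITELLM_PROXY_URL"

  # Probe the liveliness endpoint (assumed enabled on your deployment)
  curl -sS "$LITELLM_PROXY_URL/health/liveliness" || echo "Proxy not reachable"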
Authentication using CLI

You can use the CLI to authenticate to the LiteLLM Gateway. This is great if you're trying to give a large number of developers self-serve access to the LiteLLM Gateway.

For an in-depth guide, see CLI Authentication.
- Set up the proxy URL

  export LITELLM_PROXY_URL=http://localhost:4000

  (Replace with your actual proxy URL)
- Login

  litellm-proxy login

  This will open a browser window to authenticate. If you have connected LiteLLM Proxy to your SSO provider, you can log in with your SSO credentials. Once logged in, you can use the CLI to make requests to the LiteLLM Gateway.
- Test your authentication

  litellm-proxy models list

  This will list all the models available to you.
Main Commands

Models Management
- List, add, update, get, and delete models on the proxy.

- Example:

  litellm-proxy models list

  litellm-proxy models add gpt-4 \
    --param api_key=sk-123 \
    --param max_tokens=2048

  litellm-proxy models update <model-id> -p temperature=0.7

  litellm-proxy models delete <model-id>
Credentials Management

- List, create, get, and delete credentials for LLM providers.

- Example:

  litellm-proxy credentials list

  litellm-proxy credentials create azure-prod \
    --info='{"custom_llm_provider": "azure"}' \
    --values='{"api_key": "sk-123", "api_base": "https://prod.azure.openai.com"}'

  litellm-proxy credentials get azure-cred

  litellm-proxy credentials delete azure-cred
Keys Management

- List, generate, get info, and delete API keys.

- Example:

  litellm-proxy keys list

  litellm-proxy keys generate \
    --models=gpt-4 \
    --spend=100 \
    --duration=24h \
    --key-alias=my-key

  litellm-proxy keys info --key sk-key1

  litellm-proxy keys delete --keys sk-key1,sk-key2 --key-aliases alias1,alias2
User Management

- List, create, get info, and delete users.

- Example:

  litellm-proxy users list

  litellm-proxy users create \
    --email=user@example.com \
    --role=internal_user \
    --alias="Alice" \
    --team=team1 \
    --max-budget=100.0

  litellm-proxy users get --id <user-id>

  litellm-proxy users delete <user-id>
Chat Completions

- Ask for chat completions from the proxy server.

- Example:

  litellm-proxy chat completions gpt-4 -m "user:Hello, how are you?"
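Messages use the role:content format shown above. As a sketch, assuming the -m flag can be repeated to build up a conversation, a system prompt can be combined with a user message:

  litellm-proxy chat completions gpt-4 \
    -m "system:You are a helpful assistant" \
    -m "user:Write a haiku about caching"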
General HTTP Requests

- Make direct HTTP requests to the proxy server.

- Example:

  litellm-proxy http request \
    POST /chat/completions \
    --json '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'
Environment Variables

- LITELLM_PROXY_URL: Base URL of the proxy server
- LITELLM_PROXY_API_KEY: API key for authentication
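Both variables can also be scoped to a single invocation with standard shell syntax, instead of being exported:

  LITELLM_PROXY_URL=http://localhost:4000 \
  LITELLM_PROXY_API_KEY=sk-your-key \
  litellm-proxy models list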
Examples

- List all models:

  litellm-proxy models list

- Add a new model:

  litellm-proxy models add gpt-4 \
    --param api_key=sk-123 \
    --param max_tokens=2048

- Create a credential:

  litellm-proxy credentials create azure-prod \
    --info='{"custom_llm_provider": "azure"}' \
    --values='{"api_key": "sk-123", "api_base": "https://prod.azure.openai.com"}'

- Generate an API key:

  litellm-proxy keys generate \
    --models=gpt-4 \
    --spend=100 \
    --duration=24h \
    --key-alias=my-key

- Chat completion:

  litellm-proxy chat completions gpt-4 \
    -m "user:Write a story"

- Custom HTTP request:

  litellm-proxy http request \
    POST /chat/completions \
    --json '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'
Error Handling
The CLI will display error messages for:
- Server not accessible
- Authentication failures
- Invalid parameters or JSON
- Nonexistent models/credentials
- Any other operation failures
Use the --debug flag for detailed debugging output.
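For example, a hedged sketch (whether --debug is accepted globally or per subcommand isn't documented here; consult the CLI README):

  # Assumption: --debug is accepted on the subcommand being run
  litellm-proxy models list --debug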
For full command reference and advanced usage, see the CLI README.