
LiteLLM Proxy CLI

The litellm-proxy CLI is a command-line tool for managing your LiteLLM proxy server. It provides commands for managing models, credentials, API keys, users, and more, as well as making chat and HTTP requests to the proxy server.

Feature                 What you can do
Models Management       List, add, update, and delete models
Credentials Management  Manage provider credentials
Keys Management         Generate, list, and delete API keys
User Management         Create, list, and delete users
Chat Completions        Run chat completions
HTTP Requests           Make custom HTTP requests to the proxy server

Quick Start

  1. Install the CLI

    If you have uv installed, you can try this:

    uvx --from='litellm[proxy]' litellm-proxy

    and if things are working, you should see something like this:

    Usage: litellm-proxy [OPTIONS] COMMAND [ARGS]...

      LiteLLM Proxy CLI - Manage your LiteLLM proxy server

    Options:
      --base-url TEXT  Base URL of the LiteLLM proxy server  [env var:
                       LITELLM_PROXY_URL]
      --api-key TEXT   API key for authentication  [env var:
                       LITELLM_PROXY_API_KEY]
      --help           Show this message and exit.

    Commands:
      chat         Chat with models through the LiteLLM proxy server
      credentials  Manage credentials for the LiteLLM proxy server
      http         Make HTTP requests to the LiteLLM proxy server
      keys         Manage API keys for the LiteLLM proxy server
      models       Manage models on your LiteLLM proxy server

    If this works, you can make the tool more convenient to use by installing it:

    uv tool install 'litellm[proxy]'

    If that works, you'll see something like this:

    ...
    Installed 2 executables: litellm, litellm-proxy

    and now you can use the tool by just typing litellm-proxy in your terminal:

    litellm-proxy

    If you want to upgrade later, you can do so with:

    uv tool upgrade litellm

    or if you want to uninstall, you can do so with:

    uv tool uninstall litellm

    If you don't have uv or otherwise want to use pip, you can activate a virtual environment and install the package manually:

    pip install 'litellm[proxy]'
  2. Set up environment variables

    export LITELLM_PROXY_URL=http://localhost:4000
    export LITELLM_PROXY_API_KEY=sk-your-key

    (Replace with your actual proxy URL and API key)

  3. Make your first request (list models)

    litellm-proxy models list

    If the CLI is set up correctly, this prints the models available on your proxy.

  4. Troubleshooting

    • If you see an error, check that LITELLM_PROXY_URL and LITELLM_PROXY_API_KEY are set correctly and that the proxy server is running and reachable.

Configuration

You can configure the CLI using environment variables or command-line options:

  • LITELLM_PROXY_URL: Base URL of the LiteLLM proxy server (default: http://localhost:4000)
  • LITELLM_PROXY_API_KEY: API key for authentication
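
Both settings can also be supplied per invocation through the top-level --base-url and --api-key options shown in the --help output above, which is handy for one-off commands:

    litellm-proxy --base-url http://localhost:4000 --api-key sk-your-key models list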

Main Commands

Models Management

  • List, add, update, get, and delete models on the proxy.

  • Example:

    litellm-proxy models list
    litellm-proxy models add gpt-4 \
      --param api_key=sk-123 \
      --param max_tokens=2048
    litellm-proxy models update <model-id> -p temperature=0.7
    litellm-proxy models delete <model-id>

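    A get subcommand is mentioned above but not shown. A minimal sketch, assuming the model id is passed directly (the exact argument shape is an assumption; check litellm-proxy models get --help):

    # assumption: the id may instead be passed via a flag such as --id
    litellm-proxy models get <model-id>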

Credentials Management

  • List, create, get, and delete credentials for LLM providers.

  • Example:

    litellm-proxy credentials list
    litellm-proxy credentials create azure-prod \
      --info='{"custom_llm_provider": "azure"}' \
      --values='{"api_key": "sk-123", "api_base": "https://prod.azure.openai.com"}'
    litellm-proxy credentials get azure-prod
    litellm-proxy credentials delete azure-prod


Keys Management

  • List, generate, get info, and delete API keys.

  • Example:

    litellm-proxy keys list
    litellm-proxy keys generate \
      --models=gpt-4 \
      --spend=100 \
      --duration=24h \
      --key-alias=my-key
    litellm-proxy keys info --key sk-key1
    litellm-proxy keys delete --keys sk-key1,sk-key2 --key-aliases alias1,alias2

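    A freshly generated key can then serve as the CLI's own credential, since the CLI reads LITELLM_PROXY_API_KEY; the sk-... value below is a placeholder for whatever keys generate returned:

    export LITELLM_PROXY_API_KEY=sk-generated-key  # placeholder
    litellm-proxy models list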

User Management

  • List, create, get info, and delete users.

  • Example:

    litellm-proxy users list
    litellm-proxy users create \
      --email=user@example.com \
      --role=internal_user \
      --alias="Alice" \
      --team=team1 \
      --max-budget=100.0
    litellm-proxy users get --id <user-id>
    litellm-proxy users delete <user-id>


Chat Completions

  • Request chat completions from the proxy server.

  • Example:

    litellm-proxy chat completions gpt-4 -m "user:Hello, how are you?"

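    The -m flag takes role:content pairs. A minimal sketch, assuming -m can be repeated to seed a longer conversation (verify with litellm-proxy chat completions --help):

    # assumption: -m is repeatable
    litellm-proxy chat completions gpt-4 \
      -m "system:You are a terse assistant" \
      -m "user:Hello, how are you?"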

General HTTP Requests

  • Make direct HTTP requests to the proxy server.

  • Example:

    litellm-proxy http request \
      POST /chat/completions \
      --json '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'

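    The same command covers simple GET endpoints as well; for example, listing models via the proxy's OpenAI-compatible route (the path is an assumption about your deployment):

    # assumption: the proxy exposes /models; try /v1/models otherwise
    litellm-proxy http request GET /models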

Environment Variables

  • LITELLM_PROXY_URL: Base URL of the proxy server
  • LITELLM_PROXY_API_KEY: API key for authentication

Examples

  1. List all models:

    litellm-proxy models list
  2. Add a new model:

    litellm-proxy models add gpt-4 \
      --param api_key=sk-123 \
      --param max_tokens=2048
  3. Create a credential:

    litellm-proxy credentials create azure-prod \
      --info='{"custom_llm_provider": "azure"}' \
      --values='{"api_key": "sk-123", "api_base": "https://prod.azure.openai.com"}'
  4. Generate an API key:

    litellm-proxy keys generate \
      --models=gpt-4 \
      --spend=100 \
      --duration=24h \
      --key-alias=my-key
  5. Chat completion:

    litellm-proxy chat completions gpt-4 \
      -m "user:Write a story"
  6. Custom HTTP request:

    litellm-proxy http request \
      POST /chat/completions \
      --json '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'

Error Handling

The CLI will display error messages for:

  • Server not accessible
  • Authentication failures
  • Invalid parameters or JSON
  • Nonexistent models/credentials
  • Any other operation failures

Use the --debug flag for detailed debugging output.
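
For example (where --debug goes relative to the subcommand is an assumption; check --help):

    # assumption: --debug may need to precede the subcommand instead
    litellm-proxy models list --debug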

For full command reference and advanced usage, see the CLI README.