Per-Team/Project Credential Routing

Route the same model to different LLM provider endpoints (e.g. different Azure instances) based on which team or project makes the request.

Overview

In multi-tenant deployments, different teams often need the same model name (e.g., gpt-4) to hit different provider endpoints — for example, separate Azure OpenAI instances per business unit for cost isolation, data residency, or rate limit separation.

Credential routing lets you configure this in team/project metadata using the existing credentials table, without duplicating model definitions or creating separate model groups per team.

Hotel Team → gpt-4 → https://hotel-eastus.openai.azure.com/
Flight Team → gpt-4 → https://flight-centralus.openai.azure.com/

Precedence Chain

When a request comes in, the system walks this precedence chain (first match wins):

  1. Clientside credentials — api_base/api_key passed in the request body
  2. Project model-specific — override for this exact model in the project's model_config
  3. Project default — defaultconfig in the project's model_config
  4. Team model-specific — override for this exact model in the team's model_config
  5. Team default — defaultconfig in the team's model_config
  6. Deployment default — the model's litellm_params as configured in config.yaml
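
The chain above can be sketched in Python. This is an illustrative model of the resolution order, not LiteLLM's actual internals; the function name, the sentinel return value, and the config shapes are assumptions for this sketch.

```python
# Illustrative sketch of the precedence chain (first match wins).
# Config dicts mirror the model_config schema described later on this page.

def resolve_credential(model, request_kwargs, project_config, team_config,
                       deployment_default):
    # 1. Clientside credentials passed in the request body win outright
    if "api_base" in request_kwargs or "api_key" in request_kwargs:
        return "<clientside>"
    # 2-3. Project model-specific, then project defaultconfig
    # 4-5. Team model-specific, then team defaultconfig
    for config in (project_config, team_config):
        if not config:
            continue
        for key in (model, "defaultconfig"):
            entry = config.get(key)
            if entry:
                # Take the first provider block's credential name
                provider_block = next(iter(entry.values()))
                return provider_block["litellm_credentials"]
    # 6. Fall back to the deployment's litellm_params from config.yaml
    return deployment_default

# Example: a team with a gpt-4 override plus a default
team_cfg = {
    "defaultconfig": {"azure": {"litellm_credentials": "hotel-azure-eastus"}},
    "gpt-4": {"azure": {"litellm_credentials": "hotel-azure-westus"}},
}
```

With no project config, a gpt-4 request resolves to the team's model-specific credential, any other model falls through to the team default, and clientside keys short-circuit everything.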

Quick Start

Step 1: Create Credentials

Store your Azure endpoint credentials in the credentials table. You can do this via the UI or API:

# Create credential for Hotel team's Azure endpoint
curl -X POST 'http://0.0.0.0:4000/credentials' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "credential_name": "hotel-azure-eastus",
    "credential_values": {
      "api_base": "https://hotel-eastus.openai.azure.com/",
      "api_key": "sk-azure-hotel-key-xxx"
    }
  }'

# Create credential for Flight team's Azure endpoint
curl -X POST 'http://0.0.0.0:4000/credentials' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "credential_name": "flight-azure-centralus",
    "credential_values": {
      "api_base": "https://flight-centralus.openai.azure.com/",
      "api_key": "sk-azure-flight-key-xxx"
    }
  }'

Step 2: Set model_config on Teams

Add a model_config key to the team's metadata referencing the credential by name:

# Hotel team — default Azure endpoint for all models
curl -X PATCH 'http://0.0.0.0:4000/team/update' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "team_id": "hotel-team-id",
    "metadata": {
      "model_config": {
        "defaultconfig": {
          "azure": {
            "litellm_credentials": "hotel-azure-eastus"
          }
        }
      }
    }
  }'

# Flight team — default Azure endpoint for all models
curl -X PATCH 'http://0.0.0.0:4000/team/update' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "team_id": "flight-team-id",
    "metadata": {
      "model_config": {
        "defaultconfig": {
          "azure": {
            "litellm_credentials": "flight-azure-centralus"
          }
        }
      }
    }
  }'

Step 3: Make Requests

Requests are automatically routed to the correct Azure endpoint based on the API key's team:

# Request using Hotel team's API key → routes to hotel-eastus.openai.azure.com
curl http://localhost:4000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-hotel-team-key' \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'

# Request using Flight team's API key → routes to flight-centralus.openai.azure.com
curl http://localhost:4000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-flight-team-key' \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'

Per-Model Overrides

You can set different credentials for specific models while keeping a default for everything else:

curl -X PATCH 'http://0.0.0.0:4000/team/update' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "team_id": "hotel-team-id",
    "metadata": {
      "model_config": {
        "defaultconfig": {
          "azure": {
            "litellm_credentials": "hotel-azure-eastus"
          }
        },
        "gpt-4": {
          "azure": {
            "litellm_credentials": "hotel-azure-westus"
          }
        }
      }
    }
  }'

With this config:

  • gpt-4 requests → hotel-azure-westus credential (model-specific)
  • All other models → hotel-azure-eastus credential (default)
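
The shadowing rule above can be expressed as a two-step lookup: check for a model-specific entry first, then fall back to defaultconfig. A minimal sketch (the function name is an assumption, not part of LiteLLM's API):

```python
# Sketch: a model-specific entry shadows defaultconfig within one model_config

def lookup_credential(model_config, model):
    entry = model_config.get(model) or model_config.get("defaultconfig")
    return entry["azure"]["litellm_credentials"] if entry else None

# The Hotel team config from the curl example above
hotel_cfg = {
    "defaultconfig": {"azure": {"litellm_credentials": "hotel-azure-eastus"}},
    "gpt-4": {"azure": {"litellm_credentials": "hotel-azure-westus"}},
}
```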

Project-Level Overrides

Projects inherit their team's model_config but can override at the project level. Project overrides take precedence over team overrides.

# Project overrides the team default for all models
curl -X PATCH 'http://0.0.0.0:4000/project/update' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "project_id": "hotel-rec-app-id",
    "metadata": {
      "model_config": {
        "defaultconfig": {
          "azure": {
            "litellm_credentials": "hotel-rec-azure"
          }
        },
        "gpt-4-vision": {
          "azure": {
            "litellm_credentials": "hotel-rec-vision"
          }
        }
      }
    }
  }'

Full Example: Hotel Team with Two Projects

Setup:

  • Hotel Team: default hotel-azure-eastus, GPT-4 override to hotel-azure-westus
  • Hotel Rec App (project): default hotel-rec-azure, GPT-4-Vision override to hotel-rec-vision
  • Hotel Review App (project): no overrides — inherits team config

Resolution:

| Request | Resolved Credential | Why |
| --- | --- | --- |
| Hotel Rec App → gpt-4 | hotel-rec-azure | Project default (no project model-specific match for gpt-4) |
| Hotel Rec App → gpt-4-vision | hotel-rec-vision | Project model-specific |
| Hotel Review App → gpt-3.5 | hotel-azure-eastus | Team default (no project config) |
| Hotel Review App → gpt-4 | hotel-azure-westus | Team model-specific |

model_config Schema

The model_config key is a JSON object in team/project metadata:

{
  "model_config": {
    "defaultconfig": {
      "<provider>": {
        "litellm_credentials": "<credential-name>"
      }
    },
    "<model-name>": {
      "<provider>": {
        "litellm_credentials": "<credential-name>"
      }
    }
  }
}
| Field | Description |
| --- | --- |
| defaultconfig | Fallback credential for any model not explicitly listed |
| <model-name> | Model-specific override — must match the LiteLLM model group name |
| <provider> | Provider key (e.g. azure, openai, bedrock). When the model name includes a provider prefix (e.g. azure/gpt-4), the system prefers the matching provider key |
| litellm_credentials | Name of a credential in the credentials table |
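
The provider-prefix preference in the <provider> row can be sketched as follows. The function name, and the fallback of taking the first provider block when no prefix matches, are assumptions for illustration:

```python
# Sketch: prefer the provider key matching the model's prefix, e.g. "azure/gpt-4"

def pick_provider_block(entry, model):
    if "/" in model:
        prefix = model.split("/", 1)[0]
        if prefix in entry:
            return entry[prefix]
    # No prefix (or no matching key): take the first provider block listed
    return next(iter(entry.values()))

# An entry with two provider blocks for the same model
entry = {
    "azure": {"litellm_credentials": "hotel-azure-eastus"},
    "openai": {"litellm_credentials": "hotel-openai"},
}
```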

Credential Values

The referenced credential can contain any combination of:

| Key | Description |
| --- | --- |
| api_base | Provider endpoint URL |
| api_key | API key for the provider |
| api_version | API version (e.g. for Azure) |

Only keys present in the credential are applied. Keys already in the request (e.g. clientside api_version) are never overwritten.
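
This never-overwrite merge is easy to model with dict.setdefault. A minimal sketch, assuming credential values are applied onto the request parameters (the function name is illustrative):

```python
# Sketch: apply credential keys only where the request doesn't already set them

def apply_credential(request_params, credential_values):
    merged = dict(request_params)
    for key, value in credential_values.items():
        merged.setdefault(key, value)  # existing request keys are never overwritten
    return merged

# A clientside api_version survives; the credential supplies only api_base
req = {"api_version": "2024-06-01"}
cred = {"api_base": "https://hotel-eastus.openai.azure.com/",
        "api_version": "2023-05-15"}
```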

Enabling the Feature

This feature is disabled by default. To enable it, set the flag in your proxy config.yaml:

litellm_settings:
  enable_model_config_credential_overrides: true
The feature flag must be enabled before model_config entries in team/project metadata take effect. Without it, credential routing is completely inert — no metadata is read, no credentials are resolved.