Arize Phoenix Prompt Management

Use prompt versions from Arize Phoenix with the LiteLLM SDK and Proxy.

Quick Start

SDK

import litellm

response = litellm.completion(
    model="gpt-4o",
    prompt_id="UHJvbXB0VmVyc2lvbjox",
    prompt_integration="arize_phoenix",
    api_key="your-arize-phoenix-token",
    api_base="https://app.phoenix.arize.com/s/your-workspace",
    prompt_variables={"question": "What is AI?"},
)
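litellm.completion returns an OpenAI-style response object, so the completion text is read the usual way:

print(response.choices[0].message.content)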

Proxy

1. Add the prompt to your config.yaml

prompts:
  - prompt_id: "simple_prompt"
    litellm_params:
      prompt_id: "UHJvbXB0VmVyc2lvbjox"
      prompt_integration: "arize_phoenix"
      api_base: https://app.phoenix.arize.com/s/your-workspace
      api_key: os.environ/PHOENIX_API_KEY
      ignore_prompt_manager_model: true # optional: use the model from the config instead
      ignore_prompt_manager_optional_params: true # optional: ignore temperature, max_tokens from the prompt
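For context, a fuller config.yaml would pair the prompts block above with a model_list entry; the model and key below are placeholders for whatever your deployment uses:

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY

# ...followed by the prompts: block from step 1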

2. Make a request

curl -X POST 'http://0.0.0.0:4000/chat/completions' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "gpt-3.5-turbo",
    "prompt_id": "simple_prompt",
    "prompt_variables": {
      "question": "Explain quantum computing"
    }
  }'
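You can make the same request with the OpenAI Python SDK pointed at the proxy, passing the prompt fields via extra_body (a sketch, assuming your proxy accepts an empty messages list when a prompt_id is supplied):

from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[],  # the prompt body comes from Phoenix
    extra_body={
        "prompt_id": "simple_prompt",
        "prompt_variables": {"question": "Explain quantum computing"},
    },
)
print(response.choices[0].message.content)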

Configuration

Get Arize Phoenix Credentials

  1. API Token: Get from Arize Phoenix Settings
  2. Workspace URL: https://app.phoenix.arize.com/s/{your-workspace}
  3. Prompt ID: Found in the prompt version URL (see the decoding note below)
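As an aside, the prompt version ID appears to be a base64-encoded Phoenix global ID; decoding the example ID used on this page shows the underlying format (illustration only — always copy the ID from the Phoenix UI):

import base64

# "UHJvbXB0VmVyc2lvbjox" decodes to "PromptVersion:1"
print(base64.b64decode("UHJvbXB0VmVyc2lvbjox").decode())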

Set the environment variable:

export PHOENIX_API_KEY="your-token"
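In the SDK you can then read the token from the environment rather than hard-coding it (a minimal sketch reusing the Quick Start call):

import os
import litellm

response = litellm.completion(
    model="gpt-4o",
    prompt_id="UHJvbXB0VmVyc2lvbjox",
    prompt_integration="arize_phoenix",
    api_key=os.environ["PHOENIX_API_KEY"],  # set via the export above
    api_base="https://app.phoenix.arize.com/s/your-workspace",
    prompt_variables={"question": "What is AI?"},
)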

SDK + Proxy Options

| Parameter | Required | Description |
|---|---|---|
| prompt_id | Yes | Arize Phoenix prompt version ID |
| prompt_integration | Yes | Set to "arize_phoenix" |
| api_base | Yes | Workspace URL |
| api_key | Yes | Access token |
| prompt_variables | No | Variables for the template |

Proxy-only Options

| Parameter | Description |
|---|---|
| ignore_prompt_manager_model | Use the config model instead of the prompt's model |
| ignore_prompt_manager_optional_params | Ignore temperature, max_tokens from the prompt |

Variable Templates

Arize Phoenix uses Mustache/Handlebars syntax:

# Template: "Hello {{name}}, question: {{question}}"
prompt_variables = {
    "name": "Alice",
    "question": "What is ML?",
}
# Result: "Hello Alice, question: What is ML?"
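To make the substitution concrete, here is a toy renderer for the {{variable}} form shown above (illustration only — the actual rendering is handled by the integration, and full Mustache also supports sections and escaping):

import re

def render(template: str, variables: dict) -> str:
    # Replace each {{name}} with its value; leave unknown variables untouched
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

print(render("Hello {{name}}, question: {{question}}",
             {"name": "Alice", "question": "What is ML?"}))
# Hello Alice, question: What is ML?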

Combine with Additional Messages

response = litellm.completion(
    model="gpt-4o",
    prompt_id="UHJvbXB0VmVyc2lvbjox",
    prompt_integration="arize_phoenix",
    api_key="your-arize-phoenix-token",
    api_base="https://app.phoenix.arize.com/s/your-workspace",
    prompt_variables={"question": "Explain AI"},
    messages=[
        {"role": "user", "content": "Keep it under 50 words"}
    ],
)
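The messages you pass are merged with the messages rendered from the Phoenix template. As a rough illustration (the exact ordering here is an assumption — verify against your litellm version), the merged request would look like:

merged_messages = [
    {"role": "user", "content": "Explain AI"},              # rendered from the Phoenix template
    {"role": "user", "content": "Keep it under 50 words"},  # passed via messages=
]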

Error Handling

try:
    response = litellm.completion(
        model="gpt-4o",
        prompt_id="invalid-id",
        prompt_integration="arize_phoenix",
        api_base="https://app.phoenix.arize.com/s/workspace",
    )
except Exception as e:
    print(f"Error: {e}")
    # 404: Prompt not found
    # 401: Invalid credentials
    # 403: Access denied
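For transient failures you might wrap the call in a small retry helper (a sketch; completion_with_retry is a hypothetical helper, and the 4xx errors listed above will not succeed on retry):

import time
import litellm

def completion_with_retry(max_attempts: int = 3, **kwargs):
    """Retry litellm.completion with exponential backoff (hypothetical helper)."""
    for attempt in range(max_attempts):
        try:
            return litellm.completion(**kwargs)
        except Exception:
            # 401/403/404 (as above) are not retryable; re-raise on the last attempt
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)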

Support