
Braintrust - Evals + Logging

Braintrust manages evaluations, logging, prompt playground, and data management for AI products.

Quick Start

# pip install litellm
import litellm
import os

# set env
os.environ["BRAINTRUST_API_KEY"] = ""
os.environ["OPENAI_API_KEY"] = ""

# set braintrust as a callback, litellm will send the data to braintrust
litellm.callbacks = ["braintrust"]

# openai call
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ],
)

OpenAI Proxy Usage

  1. Add keys to env
BRAINTRUST_API_KEY="" 
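
The proxy config in the next step also reads OPENAI_API_KEY from the environment, so set both keys. A minimal sketch for a bash-style shell (adjust for your environment):

export BRAINTRUST_API_KEY=""
export OPENAI_API_KEY=""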
  2. Add braintrust to callbacks
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  callbacks: ["braintrust"]
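
Before testing, start the proxy against this config. A minimal sketch, assuming the config above is saved as config.yaml (the proxy listens on port 4000 by default):

litellm --config config.yaml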
  3. Test it!
curl -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
    "model": "gpt-3.5-turbo",
    "messages": [
        { "role": "system", "content": "Use your tools smartly"},
        { "role": "user", "content": "What time is it now? Use your tool"}
    ]
}'

Advanced - pass Project ID

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ],
    metadata={
        "project_id": "my-special-project"
    }
)
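
When calling through the proxy, the same metadata can be sent in the request body. This is a sketch that assumes the proxy forwards the body's metadata field to the Braintrust logger, as it does for other logging callbacks (sk-1234 and the model name are placeholders):

curl -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
    "model": "gpt-3.5-turbo",
    "messages": [
        { "role": "user", "content": "Hello from the proxy"}
    ],
    "metadata": {
        "project_id": "my-special-project"
    }
}'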

Full API Spec

Here's everything you can pass in metadata for a Braintrust request:

braintrust_* - any metadata field starting with braintrust_ will be passed as metadata to the logging request

project_id - set the project ID for a Braintrust call. Defaults to litellm.
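
For example, a sketch combining both fields in one SDK call (braintrust_environment and braintrust_run_id are illustrative names; per the rule above, any key prefixed with braintrust_ is forwarded as metadata on the logging request):

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ],
    metadata={
        "project_id": "my-special-project",   # log to this Braintrust project (default: litellm)
        "braintrust_environment": "staging",  # braintrust_* keys are passed through as metadata
        "braintrust_run_id": "run-42",
    }
)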