Callbacks
Use callbacks to send output data to PostHog, Sentry, and other providers.

LiteLLM provides `input_callback`, `success_callback`, and `failure_callback`, making it easy to send data to a particular provider depending on the status of your responses.
tip
New to LiteLLM Callbacks? Check out our comprehensive Callback Management Guide to understand when to use different callback hooks like `async_log_success_event` vs `async_post_call_success_hook`.
LiteLLM supports:
- Custom Callback Functions
- Callback Management Guide - Comprehensive guide for choosing the right hooks
- Lunary
- Langfuse
- LangSmith
- Helicone
- Traceloop
- Athina
- Sentry
- PostHog
- Slack
This is not an exhaustive list. Please check the dropdown for all logging integrations.
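The "Custom Callback Functions" entry above means you can also pass your own Python function instead of a provider string. A minimal sketch, assuming the documented four-argument custom-callback signature (`kwargs`, `completion_response`, `start_time`, `end_time`); the function name and log format here are illustrative:

```python
def custom_callback(kwargs, completion_response, start_time, end_time):
    """Invoked after each successful completion call."""
    duration = (end_time - start_time).total_seconds()
    model = kwargs.get("model", "unknown")
    line = f"model={model} duration={duration:.2f}s"
    print(line)  # replace with your own logging/metrics sink
    return line

# Registering it is one line (requires litellm installed):
# import litellm
# litellm.success_callback = [custom_callback]
```

Functions and provider strings can be mixed in the same callback list.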
Quick Start
```python
import os
import litellm
from litellm import completion

# set callbacks
litellm.input_callback = ["sentry"]  # for Sentry breadcrumbing - logs the input sent to the API
litellm.success_callback = ["posthog", "helicone", "langfuse", "lunary", "athina"]
litellm.failure_callback = ["sentry", "lunary", "langfuse"]

## set env variables
os.environ["LUNARY_PUBLIC_KEY"] = ""
os.environ["SENTRY_DSN"] = ""
os.environ["SENTRY_API_TRACE_RATE"] = ""
os.environ["POSTHOG_API_KEY"] = "api-key"
os.environ["POSTHOG_API_URL"] = "api-url"
os.environ["HELICONE_API_KEY"] = ""
os.environ["TRACELOOP_API_KEY"] = ""
os.environ["ATHINA_API_KEY"] = ""
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["LANGFUSE_HOST"] = ""

messages = [{"role": "user", "content": "Hello, world"}]
response = completion(model="gpt-3.5-turbo", messages=messages)
```
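If the `completion` call above raises, the entries in `failure_callback` fire instead of the success ones. A custom failure handler can be a plain function too; a hedged sketch, assuming the same four-argument signature and that the raised error is surfaced under `kwargs["exception"]` (both are assumptions based on the custom-callback docs, not verified here):

```python
def log_failure(kwargs, completion_response, start_time, end_time):
    """Invoked when a completion call fails."""
    err = kwargs.get("exception")       # assumption: exception is passed via kwargs
    model = kwargs.get("model", "unknown")
    line = f"FAILED model={model} error={err}"
    print(line)  # replace with your alerting sink (Slack, PagerDuty, ...)
    return line

# Registering it (requires litellm installed):
# import litellm
# litellm.failure_callback = [log_failure]
```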