Lunary - Logging and tracing LLM input/output

Lunary is an open-source AI developer platform providing observability, prompt management, and evaluation tools for AI developers.

Use Lunary to log requests across all LLM providers (OpenAI, Azure, Anthropic, Cohere, Replicate, PaLM).

LiteLLM provides callbacks, making it easy to log data based on the status of your responses.

info

We want to learn how we can make the callbacks better! Meet the founders or join our Discord.

Using Callbacks

First, sign up to get a public key on the Lunary dashboard.

With just two lines of code, you can instantly log your responses across all providers with Lunary:

litellm.success_callback = ["lunary"]
litellm.failure_callback = ["lunary"]

Complete code

import os

import litellm
from litellm import completion

## set env variables
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# set callbacks
litellm.success_callback = ["lunary"]
litellm.failure_callback = ["lunary"]

# openai call
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
    user="ishaan_litellm",
)
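Because `litellm.success_callback` and `litellm.failure_callback` are plain Python lists of callback names, Lunary can be combined with other loggers in the same configuration. A minimal sketch (the second callback name, `langfuse`, is shown only as an illustration of running multiple loggers side by side):

```python
import litellm

# Multiple loggers can be active at once; each name in the list is invoked
# for every matching response.
litellm.success_callback = ["lunary", "langfuse"]  # "langfuse" is illustrative

# Failure callbacks are configured separately, so errors can be routed
# to a different (or smaller) set of destinations.
litellm.failure_callback = ["lunary"]
```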

Templates

You can use Lunary to manage prompt templates and use them across all your LLM providers.

Make sure to have lunary installed:

pip install lunary

Then, use the following code to pull templates from Lunary:

import litellm
from litellm import completion
import lunary

template = lunary.render_template("template-slug", {
"name": "John", # Inject variables
})

litellm.success_callback = ["lunary"]

result = completion(**template)
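As a rough sketch of what happens above: `render_template` returns the keyword arguments stored with the template, and `**template` unpacks them into `completion`. The exact field names below are illustrative (they depend on what your template defines, not on anything fixed by Lunary), and per-call options can be layered on with a standard dict merge before unpacking:

```python
# Illustrative shape of a rendered template -- the real dict comes from
# lunary.render_template and depends on the template's contents.
template = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello John"}],  # variables already injected
    "temperature": 0.7,
}

# Merge in per-call options; keys on the right override template keys.
call_kwargs = {**template, "user": "ishaan_litellm"}
# completion(**call_kwargs) would then issue the call with these arguments.
```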

Support & Talk to Founders