
Supabase Tutorial

Supabase is an open source Firebase alternative. Start your project with a Postgres database, Authentication, instant APIs, Edge Functions, Realtime subscriptions, Storage, and Vector embeddings.

Use Supabase to log requests and see total spend across all LLM providers (OpenAI, Azure, Anthropic, Cohere, Replicate, PaLM).

LiteLLM provides success and failure callbacks, making it easy to send data to a provider of your choice based on the status of your responses.

In this case, we want to log requests to Supabase in both scenarios - when a request succeeds and when it fails.

Create a Supabase table

In your Supabase project, open the SQL Editor and create a new table with this configuration.

Note: You can change the table name. Just don't change the column names.

create table
  public.request_logs (
    id bigint generated by default as identity,
    created_at timestamp with time zone null default now(),
    model text null default ''::text,
    messages json null default '{}'::json,
    response json null default '{}'::json,
    end_user text null default ''::text,
    status text null default ''::text,
    error json null default '{}'::json,
    response_time real null default '0'::real,
    total_cost real null,
    additional_details json null default '{}'::json,
    litellm_call_id text unique,
    primary key (id)
  ) tablespace pg_default;
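Each logged request becomes one row in this table. As a rough illustration of the row shape, here is what a successful call might produce - the values below are invented for the example; in practice LiteLLM populates them for you:

```python
# Illustrative only: LiteLLM fills these fields in for you.
# All values here are made up to show the shape of one log row.
example_row = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hi 👋 - i'm openai"}],
    "response": {"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]},
    "end_user": "ishaan22",
    "status": "success",
    "error": {},
    "response_time": 0.42,
    "total_cost": 0.000021,
    "additional_details": {},
    "litellm_call_id": "unique-call-id",
}

# The fields mirror the non-generated columns of the table above.
expected_columns = {
    "model", "messages", "response", "end_user", "status",
    "error", "response_time", "total_cost", "additional_details",
    "litellm_call_id",
}
assert set(example_row) == expected_columns
```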

Use Callbacks

With just two lines of code, you can instantly see costs and log your responses across all providers with Supabase:

litellm.success_callback=["supabase"]
litellm.failure_callback=["supabase"]

Complete code

import os
import litellm
from litellm import completion

## set env variables
### SUPABASE
os.environ["SUPABASE_URL"] = "your-supabase-url"
os.environ["SUPABASE_KEY"] = "your-supabase-key"

## LLM API KEY
os.environ["OPENAI_API_KEY"] = ""

# set callbacks
litellm.success_callback=["supabase"]
litellm.failure_callback=["supabase"]

# openai call
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
    user="ishaan22"  # identify users
)

# bad call - expect this call to fail and get logged via the failure callback
try:
    response = completion(
        model="chatgpt-test",  # invalid model name
        messages=[{"role": "user", "content": "Hi 👋 - i'm a bad call to test error logging"}]
    )
except Exception:
    pass  # the failure is still logged to Supabase

Additional Controls

Identify end-user

Pass user to litellm.completion to map your LLM call to an end-user.

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
    user="ishaan22"  # identify users
)
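Because the user value is stored in the end_user column alongside total_cost, the logged rows let you break spend down per user. Here is a local sketch of that aggregation over a few invented rows - in practice you would run an equivalent query against the request_logs table, e.g. from the Supabase SQL Editor:

```python
from collections import defaultdict

# Invented sample rows mirroring the end_user / total_cost columns.
rows = [
    {"end_user": "ishaan22", "total_cost": 0.002},
    {"end_user": "ishaan22", "total_cost": 0.001},
    {"end_user": "krrish1", "total_cost": 0.004},
]

# Sum spend per end-user.
spend = defaultdict(float)
for row in rows:
    spend[row["end_user"]] += row["total_cost"]

print(dict(spend))  # per-user totals
```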

Different Table Name

If you modified your table name, here's how to pass the new name.

litellm.modify_integration("supabase",{"table_name": "litellm_logs"})

Support & Talk to Founders