LangGraph

Call LangGraph agents through LiteLLM using the OpenAI chat completions format.

Property | Details
Description | LangGraph is a framework for building stateful, multi-actor applications with LLMs. LiteLLM supports calling LangGraph agents via their streaming and non-streaming endpoints.
Provider Route on LiteLLM | langgraph/{agent_id}
Provider Doc | LangGraph Platform ↗

Prerequisites: You need a running LangGraph server. See Setting Up a Local LangGraph Server below.

Quick Start​

Model Format​

Model Format
langgraph/{agent_id}

Example:

  • langgraph/agent - calls the default agent
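
Any graph id registered in your project's langgraph.json can be called the same way. A minimal sketch, assuming a hypothetical graph registered as my_custom_agent:

Custom Agent Completion
import litellm

# "my_custom_agent" is a hypothetical agent id used for illustration;
# replace it with a graph id defined in your langgraph.json.
response = litellm.completion(
    model="langgraph/my_custom_agent",
    messages=[{"role": "user", "content": "Hello!"}],
    api_base="http://localhost:2024",
)

print(response.choices[0].message.content)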

LiteLLM Python SDK​

Basic LangGraph Completion
import litellm

response = litellm.completion(
    model="langgraph/agent",
    messages=[
        {"role": "user", "content": "What is 25 * 4?"}
    ],
    api_base="http://localhost:2024",
)

print(response.choices[0].message.content)
Streaming LangGraph Response
import litellm

response = litellm.completion(
    model="langgraph/agent",
    messages=[
        {"role": "user", "content": "What is the weather in Tokyo?"}
    ],
    api_base="http://localhost:2024",
    stream=True,
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
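
LiteLLM also exposes an async interface via litellm.acompletion. A minimal sketch of the same streaming call, assuming the same local server:

Async Streaming LangGraph Response
import asyncio

import litellm


async def main():
    # Same request as above, but non-blocking; assumes the local
    # LangGraph server is running on port 2024.
    response = await litellm.acompletion(
        model="langgraph/agent",
        messages=[{"role": "user", "content": "What is the weather in Tokyo?"}],
        api_base="http://localhost:2024",
        stream=True,
    )
    async for chunk in response:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")


asyncio.run(main())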

LiteLLM Proxy​

1. Configure your model in config.yaml​

LiteLLM Proxy Configuration
model_list:
  - model_name: langgraph-agent
    litellm_params:
      model: langgraph/agent
      api_base: http://localhost:2024

2. Start the LiteLLM Proxy​

Start LiteLLM Proxy
litellm --config config.yaml

3. Make requests to your LangGraph agent​

Basic Request
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "langgraph-agent",
    "messages": [
      {"role": "user", "content": "What is 25 * 4?"}
    ]
  }'
Streaming Request
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "langgraph-agent",
    "messages": [
      {"role": "user", "content": "What is the weather in Tokyo?"}
    ],
    "stream": true
  }'
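
Because the proxy exposes the OpenAI chat completions format, you can also point the official OpenAI Python SDK at it. A minimal sketch; the api_key value is a placeholder for whatever key your proxy is configured with:

OpenAI SDK Request
from openai import OpenAI

# base_url points at the LiteLLM proxy started above;
# "sk-1234" is a placeholder for your proxy key.
client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-1234")

response = client.chat.completions.create(
    model="langgraph-agent",
    messages=[{"role": "user", "content": "What is 25 * 4?"}],
)

print(response.choices[0].message.content)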

Environment Variables​

Variable | Description
LANGGRAPH_API_BASE | Base URL of your LangGraph server (default: http://localhost:2024)
LANGGRAPH_API_KEY | Optional API key for authentication
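
Assuming LiteLLM picks these variables up from the environment as described above, api_base and api_key can then be omitted from the call. A minimal sketch:

Using Environment Variables
import os

import litellm

# Assumes LiteLLM reads these variables as listed in the table above.
os.environ["LANGGRAPH_API_BASE"] = "http://localhost:2024"
# os.environ["LANGGRAPH_API_KEY"] = "your_key_here"  # only if your server requires auth

response = litellm.completion(
    model="langgraph/agent",
    messages=[{"role": "user", "content": "What is 25 * 4?"}],
)

print(response.choices[0].message.content)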

Supported Parameters​

Parameter | Type | Description
model | string | The agent ID in the format langgraph/{agent_id}
messages | array | Chat messages in OpenAI format
stream | boolean | Enable streaming responses
api_base | string | LangGraph server URL
api_key | string | Optional API key
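
A single call exercising all of the parameters above; api_key is only needed if your LangGraph server enforces authentication, and "your_key_here" is a placeholder:

All Supported Parameters
import litellm

response = litellm.completion(
    model="langgraph/agent",               # langgraph/{agent_id}
    messages=[{"role": "user", "content": "What is 25 * 4?"}],
    stream=False,                          # set True to stream
    api_base="http://localhost:2024",      # LangGraph server URL
    api_key="your_key_here",               # optional; placeholder value
)

print(response.choices[0].message.content)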

Setting Up a Local LangGraph Server​

Before using LiteLLM with LangGraph, you need a running LangGraph server.

Prerequisites​

  • Python 3.11+
  • An LLM API key (OpenAI or Google Gemini)

1. Install the LangGraph CLI​

pip install "langgraph-cli[inmem]"

2. Create a new LangGraph project​

langgraph new my-agent --template new-langgraph-project-python
cd my-agent

3. Install dependencies​

pip install -e .

4. Set your API key​

echo "OPENAI_API_KEY=your_key_here" > .env

5. Start the server​

langgraph dev

The server will start at http://localhost:2024.

Verify the server is running​

curl -s --request POST \
  --url "http://localhost:2024/runs/wait" \
  --header 'Content-Type: application/json' \
  --data '{
    "assistant_id": "agent",
    "input": {
      "messages": [{"role": "human", "content": "Hello!"}]
    }
  }'
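
The same check from Python, using only the standard library:

Verify with Python
import json
import urllib.request

# POST a single message to /runs/wait and print the raw JSON response.
payload = {
    "assistant_id": "agent",
    "input": {"messages": [{"role": "human", "content": "Hello!"}]},
}
request = urllib.request.Request(
    "http://localhost:2024/runs/wait",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))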

Further Reading​