
Manus

Use Manus AI agents through LiteLLM's OpenAI-compatible Responses API.

| Property | Details |
|---|---|
| Description | Manus is an AI agent platform for complex reasoning tasks, document analysis, and multi-step workflows with asynchronous task execution. |
| Provider Route on LiteLLM | `manus/{agent_profile}` |
| Supported Operations | `/responses` (Responses API) |
| Provider Doc | Manus API ↗ |

Model Format​

manus/{agent_profile}

Examples:

  • manus/manus-1.6 - General purpose agent
  • manus/manus-1.6-lite - Lightweight agent for simple tasks
  • manus/manus-1.6-max - Advanced agent for complex analysis

LiteLLM Python SDK​

Basic Usage

```python
import litellm
import os
import time

# Set API key
os.environ["MANUS_API_KEY"] = "your-manus-api-key"

# Create task
response = litellm.responses(
    model="manus/manus-1.6",
    input="What's the capital of France?",
)

print(f"Task ID: {response.id}")
print(f"Status: {response.status}")  # "running"

# Poll until complete
task_id = response.id
while response.status == "running":
    time.sleep(5)
    response = litellm.get_response(
        response_id=task_id,
        custom_llm_provider="manus",
    )
    print(f"Status: {response.status}")

# Get results
if response.status == "completed":
    for message in response.output:
        if message.role == "assistant":
            print(message.content[0].text)
```
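The bare `while` loop above polls forever if a task never finishes. A minimal sketch of a bounded poll with exponential backoff; the retrieval call is injected as a plain zero-argument callable (the helper name and parameters here are illustrative, not part of LiteLLM):

```python
import time

def poll_until_done(fetch, timeout=300.0, interval=2.0, max_interval=30.0):
    """Call fetch() until the returned object leaves the "running" state,
    sleeping with exponential backoff between attempts.

    fetch: zero-argument callable returning an object with a .status
    attribute (e.g. a lambda wrapping litellm.get_response).
    """
    deadline = time.monotonic() + timeout
    while True:
        result = fetch()
        if result.status != "running":
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"task still running after {timeout:.0f}s")
        time.sleep(interval)
        interval = min(interval * 2, max_interval)  # back off, capped
```

Usage with the SDK call from the example above:

```python
final = poll_until_done(
    lambda: litellm.get_response(response_id=task_id, custom_llm_provider="manus")
)
```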

LiteLLM AI Gateway​

Setup​

config.yaml

```yaml
model_list:
  - model_name: manus-agent
    litellm_params:
      model: manus/manus-1.6
      api_key: os.environ/MANUS_API_KEY
```

Start Proxy

```shell
litellm --config config.yaml
```
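To expose all three agent profiles behind the proxy, the same pattern repeats per entry; the `model_name` aliases below are arbitrary, while the `model` values are the profiles listed under Model Format:

```yaml
model_list:
  - model_name: manus-agent
    litellm_params:
      model: manus/manus-1.6
      api_key: os.environ/MANUS_API_KEY
  - model_name: manus-agent-lite
    litellm_params:
      model: manus/manus-1.6-lite
      api_key: os.environ/MANUS_API_KEY
  - model_name: manus-agent-max
    litellm_params:
      model: manus/manus-1.6-max
      api_key: os.environ/MANUS_API_KEY
```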

Usage​

Create Task

```shell
# Create task
curl -X POST http://localhost:4000/responses \
  -H "Authorization: Bearer your-proxy-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "manus-agent",
    "input": "What is the capital of France?"
  }'
```

Response:

```json
{
  "id": "task_abc123",
  "status": "running",
  "metadata": {
    "task_url": "https://manus.im/app/task_abc123"
  }
}
```
Poll for Completion

```shell
# Check status (repeat until status is "completed")
curl http://localhost:4000/responses/task_abc123 \
  -H "Authorization: Bearer your-proxy-key"
```

When completed:

```json
{
  "id": "task_abc123",
  "status": "completed",
  "output": [
    {
      "role": "user",
      "content": [{"text": "What is the capital of France?"}]
    },
    {
      "role": "assistant",
      "content": [{"text": "The capital of France is Paris."}]
    }
  ]
}
```
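Once a task reports `completed`, the assistant's reply is nested inside the `output` list. A small helper to pull it out, assuming only the response shape shown above (the function name is illustrative):

```python
def extract_assistant_text(response: dict) -> str:
    """Join the text parts of every assistant message in output."""
    parts = []
    for message in response.get("output", []):
        if message.get("role") == "assistant":
            for chunk in message.get("content", []):
                if "text" in chunk:
                    parts.append(chunk["text"])
    return "\n".join(parts)
```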

How It Works​

Manus operates as an asynchronous agent API:

  1. Create Task: When you call litellm.responses(), Manus creates a task and returns immediately with status: "running"
  2. Task Executes: The agent works on your request in the background
  3. Poll for Completion: You must repeatedly call litellm.get_response() or client.responses.retrieve() until the status changes to "completed"
  4. Get Results: Once completed, the output field contains the full conversation

Task Statuses:

  • running - Agent is actively working
  • pending - Agent is waiting for input
  • completed - Task finished successfully
  • error - Task failed
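A client loop has to react differently to each of the four statuses above. One way to keep that logic in a single place is a small dispatch table; the action strings here are just descriptions of the client's next step, not API values:

```python
def next_action(status: str) -> str:
    """Map a Manus task status to the client's next step
    (mirrors the four statuses listed above)."""
    return {
        "running": "wait and poll again",
        "pending": "agent is waiting for input; send a follow-up",
        "completed": "read response.output",
        "error": "inspect the failure and retry or surface it",
    }.get(status, "unknown status")
```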
Production Usage

For production applications, use webhooks instead of polling to get notified when tasks complete.
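This page does not document the webhook payload, so the handler below is purely illustrative: it assumes a JSON body carrying the same `id` and `status` fields as the polling responses shown earlier, and shows only the dispatch logic (stdlib only, no web framework):

```python
import json

def handle_task_webhook(raw_body: bytes) -> str:
    """Hypothetical webhook handler body. The payload shape is an
    assumption modeled on the polling responses above, not a
    documented contract; verify against the Manus webhook docs."""
    payload = json.loads(raw_body)
    task_id, status = payload["id"], payload["status"]
    if status == "completed":
        return f"fetch results for {task_id}"
    if status == "error":
        return f"log failure for {task_id}"
    return f"ignore interim update for {task_id}"
```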

Supported Parameters​

| Parameter | Supported | Notes |
|---|---|---|
| `input` | ✅ | Text, images, or structured content |
| `stream` | ✅ | Fake streaming (task runs async) |
| `max_output_tokens` | ✅ | Limits response length |
| `previous_response_id` | ✅ | For multi-turn conversations |