# Vertex AI Agent Engine

Call Vertex AI Agent Engine (Reasoning Engines) in the OpenAI Request/Response format.

| Property | Details |
|----------|---------|
| Description | Vertex AI Agent Engine provides hosted agent runtimes that can execute agentic workflows with foundation models, tools, and custom logic. |
| Provider Route on LiteLLM | `vertex_ai/agent_engine/{RESOURCE_NAME}` |
| Supported Endpoints | `/chat/completions`, `/v1/messages`, `/v1/responses`, `/v1/a2a/message/send` |
| Provider Doc | Vertex AI Agent Engine ↗ |

## Quick Start

### Model Format

```
vertex_ai/agent_engine/{RESOURCE_NAME}
```

Example:

- `vertex_ai/agent_engine/projects/1060139831167/locations/us-central1/reasoningEngines/8263861224643493888`
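If you keep the project, location, and engine ID as separate values, the model string can be assembled from them. A minimal sketch (the variable names are illustrative; the values are the example IDs from above):

```python
# Assemble the LiteLLM model string for an Agent Engine deployment.
# Project number, location, and engine ID below are the example values
# used throughout this page.
project = "1060139831167"
location = "us-central1"
engine_id = "8263861224643493888"

resource_name = f"projects/{project}/locations/{location}/reasoningEngines/{engine_id}"
model = f"vertex_ai/agent_engine/{resource_name}"

print(model)
```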

## LiteLLM Python SDK

Basic agent completion:

```python
import litellm

response = litellm.completion(
    model="vertex_ai/agent_engine/projects/1060139831167/locations/us-central1/reasoningEngines/8263861224643493888",
    messages=[
        {"role": "user", "content": "Explain machine learning in simple terms"}
    ],
)

print(response.choices[0].message.content)
```
Streaming agent responses (note that `acompletion` is a coroutine, so `await` must run inside an async function):

```python
import asyncio

import litellm


async def main():
    response = await litellm.acompletion(
        model="vertex_ai/agent_engine/projects/1060139831167/locations/us-central1/reasoningEngines/8263861224643493888",
        messages=[
            {"role": "user", "content": "What are the key principles of software architecture?"}
        ],
        stream=True,
    )

    async for chunk in response:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")


asyncio.run(main())
```

## LiteLLM Proxy

### 1. Configure your model in config.yaml

```yaml
model_list:
  - model_name: vertex-agent-1
    litellm_params:
      model: vertex_ai/agent_engine/projects/1060139831167/locations/us-central1/reasoningEngines/8263861224643493888
      vertex_project: your-project-id
      vertex_location: us-central1
```

### 2. Start the LiteLLM Proxy

```shell
litellm --config config.yaml
```

### 3. Make requests to your Vertex AI Agent Engine

```shell
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "vertex-agent-1",
    "messages": [
      {"role": "user", "content": "Summarize the main benefits of cloud computing"}
    ]
  }'
```
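The same request can be made from Python. This sketch uses only the standard library and assumes the proxy is running locally on port 4000 with `LITELLM_API_KEY` set in the environment; the network call itself is left commented out so the snippet can be read (and run) without a live proxy:

```python
import json
import os
import urllib.request

# Build the same chat completion request the curl example sends.
payload = {
    "model": "vertex-agent-1",
    "messages": [
        {"role": "user", "content": "Summarize the main benefits of cloud computing"}
    ],
}

req = urllib.request.Request(
    "http://localhost:4000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('LITELLM_API_KEY', '')}",
    },
    method="POST",
)

# With the proxy running, send the request and print the reply:
# with urllib.request.urlopen(req) as resp:
#     body = json.load(resp)
#     print(body["choices"][0]["message"]["content"])
```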

## LiteLLM A2A Gateway

You can also connect to Vertex AI Agent Engine through LiteLLM's A2A (Agent-to-Agent) Gateway UI. This provides a visual way to register and test agents without writing code.

### 1. Navigate to Agents

From the sidebar, click "Agents" to open the agent management page, then click "+ Add New Agent".


### 2. Select Vertex AI Agent Engine Type

Click "A2A Standard" to see available agent types, then select "Vertex AI Agent Engine".


### 3. Configure the Agent

Fill in the following fields:

- **Agent Name** - A friendly name for your agent (e.g., `my-vertex-agent`)
- **Reasoning Engine Resource ID** - The full resource path from the Google Cloud Console (e.g., `projects/1060139831167/locations/us-central1/reasoningEngines/8263861224643493888`)
- **Vertex Project** - Your Google Cloud project ID
- **Vertex Location** - The region where your agent is deployed (e.g., `us-central1`)


You can find the Resource ID in the Google Cloud Console under **Vertex AI > Agent Engine**.


You can find the Project ID on the Google Cloud Console dashboard.


### 4. Create Agent

Click "Create Agent" to save your configuration.


### 5. Test in Playground

Go to "Playground" in the sidebar to test your agent.


### 6. Select A2A Endpoint

Click the endpoint dropdown and select `/v1/a2a/message/send`.


### 7. Select Your Agent and Send a Message

Pick your Vertex AI Agent Engine from the dropdown and send a test message.
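If you later call the `/v1/a2a/message/send` endpoint programmatically, the request body is JSON-RPC based. The sketch below shows the rough payload shape; the field names follow the public A2A specification and are an assumption here, not LiteLLM-specific documentation, so check your gateway's behavior before relying on them:

```python
import json
import uuid

# Rough shape of an A2A message/send request body (JSON-RPC 2.0).
# Field names are taken from the public A2A spec and may differ from
# what your gateway expects -- treat this as a sketch.
payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello, agent!"}],
            "messageId": str(uuid.uuid4()),
        }
    },
}

print(json.dumps(payload, indent=2))
```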


## Environment Variables

| Variable | Description |
|----------|-------------|
| `GOOGLE_APPLICATION_CREDENTIALS` | Path to service account JSON key file |
| `VERTEXAI_PROJECT` | Google Cloud project ID |
| `VERTEXAI_LOCATION` | Google Cloud region (default: `us-central1`) |

```shell
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
export VERTEXAI_PROJECT="your-project-id"
export VERTEXAI_LOCATION="us-central1"
```
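In your own code, these variables might be read like this (a minimal sketch; the helper name is illustrative, and the `us-central1` fallback mirrors the default in the table above):

```python
import os


def vertex_settings():
    """Read Vertex AI connection settings from the environment."""
    return {
        "credentials": os.environ.get("GOOGLE_APPLICATION_CREDENTIALS"),
        "project": os.environ.get("VERTEXAI_PROJECT"),
        # Fall back to the documented default region.
        "location": os.environ.get("VERTEXAI_LOCATION", "us-central1"),
    }


print(vertex_settings())
```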

## Further Reading