LiteLLM Prompt Management (GitOps)

Store prompts as .prompt files in your repository and use them directly with LiteLLM. No external services required.

Quick Start

1. Create a .prompt file

Create prompts/hello.prompt:

---
model: gpt-4
temperature: 0.7
---
System: You are a helpful assistant.

User: {{user_message}}

2. Use with LiteLLM

import litellm

# Set the global prompt directory
litellm.global_prompt_directory = "prompts/"

response = litellm.completion(
    model="dotprompt/gpt-4",
    prompt_id="hello",
    prompt_variables={"user_message": "What is the capital of France?"}
)

.prompt File Format

.prompt files use YAML frontmatter for metadata and support Jinja2 templating:

---
model: gpt-4 # Model to use
temperature: 0.7 # Optional parameters
max_tokens: 1000
input:
  schema:
    user_message: string  # Input validation (optional)
---
System: You are a helpful {{role}} assistant.

User: {{user_message}}
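
Assuming the file above is saved as prompts/support.prompt (the filename and values here are illustrative), each template variable is supplied through prompt_variables:

import litellm

litellm.global_prompt_directory = "prompts/"  # as in Quick Start

# {{role}} and {{user_message}} are rendered from prompt_variables
response = litellm.completion(
    model="dotprompt/gpt-4",
    prompt_id="support",  # prompts/support.prompt
    prompt_variables={
        "role": "billing",
        "user_message": "Why was I charged twice this month?",
    },
)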

Advanced Features

Multi-role conversations, interleaving System, User, and Assistant turns:

---
model: gpt-4
temperature: 0.3
---
System: You are a helpful coding assistant.

User: {{user_question}}

Assistant: Happy to help. Could you paste the exact error message?

User: {{follow_up_question}}
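
A sketch of calling this prompt, assuming it is saved as prompts/coding.prompt; both user turns are filled from prompt_variables:

response = litellm.completion(
    model="dotprompt/gpt-4",
    prompt_id="coding",  # prompts/coding.prompt
    prompt_variables={
        "user_question": "Why does my recursive function overflow the stack?",
        "follow_up_question": "RecursionError: maximum recursion depth exceeded",
    },
)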

Dynamic model selection:

---
model: "{{preferred_model}}" # Model can be a variable
temperature: 0.7
---
System: You are a helpful assistant specialized in {{domain}}.

User: {{user_message}}
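
Since the frontmatter model is a template variable, the caller can select the underlying model per request. A minimal sketch, assuming the file is saved as prompts/expert.prompt (filename and variable values are illustrative):

response = litellm.completion(
    model="dotprompt/gpt-4",  # the dotprompt/ prefix still routes to the .prompt file
    prompt_id="expert",  # prompts/expert.prompt
    prompt_variables={
        "preferred_model": "gpt-4",
        "domain": "distributed systems",
        "user_message": "When is Raft preferable to Paxos?",
    },
)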

API Reference

For dotprompt integration, use these parameters:

model: dotprompt/<base_model>      # required (e.g., dotprompt/gpt-4)
prompt_id: str                     # required - the .prompt filename without extension
prompt_variables: Optional[dict]   # optional - variables for template rendering

Example API call:

response = litellm.completion(
    model="dotprompt/gpt-4",
    prompt_id="hello",
    prompt_variables={"user_message": "Hello world"},
    messages=[{"role": "user", "content": "This will be ignored"}]
)