Meta Llama

Description: Meta's Llama API provides access to Meta's family of large language models.
Provider Route on LiteLLM: meta_llama/
Supported Endpoints: /chat/completions, /completions, /responses
API Reference: Llama API Reference ↗

Required Variables

Environment Variables
os.environ["LLAMA_API_KEY"] = ""  # your Meta Llama API key

Usage - LiteLLM Python SDK

Non-streaming

Meta Llama Non-streaming Completion
import os
from litellm import completion

os.environ["LLAMA_API_KEY"] = ""  # your Meta Llama API key

messages = [{"content": "Hello, how are you?", "role": "user"}]

# Meta Llama call
response = completion(model="meta_llama/Llama-3.3-70B-Instruct", messages=messages)
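
The return value is an OpenAI-compatible response object, so the generated text lives under choices[0].message.content; continuing the example above:

Read the Response Content
# Extract the assistant's reply from the OpenAI-format response
print(response.choices[0].message.content)
# Token usage, when the API reports it
print(response.usage.total_tokens)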

Streaming

Meta Llama Streaming Completion
import os
from litellm import completion

os.environ["LLAMA_API_KEY"] = ""  # your Meta Llama API key

messages = [{"content": "Hello, how are you?", "role": "user"}]

# Meta Llama call with streaming
response = completion(
    model="meta_llama/Llama-3.3-70B-Instruct",
    messages=messages,
    stream=True
)

for chunk in response:
    print(chunk)
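
Each streamed chunk is an OpenAI-format delta rather than a complete message. Instead of printing the raw chunk objects, the loop can assemble the text deltas into the full reply; a sketch that consumes the same stream once, in place of the loop above:

Assemble Streamed Text
# Accumulate the text deltas into the complete reply
full_reply = ""
for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta is not None:
        full_reply += delta
print(full_reply)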

Usage - LiteLLM Proxy

Add the following to your LiteLLM Proxy configuration file (the os.environ/LLAMA_API_KEY value tells the proxy to read the key from the environment rather than hard-coding it):

config.yaml
model_list:
  - model_name: meta_llama/Llama-3.3-70B-Instruct
    litellm_params:
      model: meta_llama/Llama-3.3-70B-Instruct
      api_key: os.environ/LLAMA_API_KEY

  - model_name: meta_llama/Llama-3.3-8B-Instruct
    litellm_params:
      model: meta_llama/Llama-3.3-8B-Instruct
      api_key: os.environ/LLAMA_API_KEY

Start your LiteLLM Proxy server:

Start LiteLLM Proxy
litellm --config config.yaml

# RUNNING on http://0.0.0.0:4000
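
Because the proxy exposes an OpenAI-compatible REST API, any HTTP client can call it directly; a minimal sketch using the requests library (the URL and key are placeholders for your deployment):

Call the Proxy with requests
import requests

# Placeholder proxy URL and key for your deployment
resp = requests.post(
    "http://localhost:4000/chat/completions",
    headers={"Authorization": "Bearer your-proxy-api-key"},
    json={
        "model": "meta_llama/Llama-3.3-70B-Instruct",
        "messages": [{"role": "user", "content": "Hello, how are you?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
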
Meta Llama via Proxy - Non-streaming
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="your-proxy-api-key"       # Your proxy API key
)

# Non-streaming response
response = client.chat.completions.create(
    model="meta_llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Write a short poem about AI."}]
)

print(response.choices[0].message.content)
Meta Llama via Proxy - Streaming
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="your-proxy-api-key"       # Your proxy API key
)

# Streaming response
response = client.chat.completions.create(
    model="meta_llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Write a short poem about AI."}],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")

For more detailed information on using the LiteLLM Proxy, see the LiteLLM Proxy documentation.