Text Completion

Usage

from litellm import text_completion

response = text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="Say this is a test",
    max_tokens=7,
)

Input Params

LiteLLM accepts and translates the OpenAI Text Completion params across all supported providers.

Required Fields

  • model: string - ID of the model to use
  • prompt: string or array - The prompt(s) to generate completions for

Optional Fields

  • best_of: integer - Generates best_of completions server-side and returns the "best" one.
  • echo: boolean - Echo back the prompt in addition to the completion.
  • frequency_penalty: number - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far.
  • logit_bias: map - Modify the likelihood of specified tokens appearing in the completion.
  • logprobs: integer - Include the log probabilities of the logprobs most likely tokens. The maximum value is 5.
  • max_tokens: integer - The maximum number of tokens to generate.
  • n: integer - How many completions to generate for each prompt.
  • presence_penalty: number - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far.
  • seed: integer - If specified, the system will make a best effort to sample deterministically.
  • stop: string or array - Up to 4 sequences where the API will stop generating further tokens.
  • stream: boolean - Whether to stream back partial progress. Defaults to false.
  • suffix: string - The suffix that comes after a completion of inserted text.
  • temperature: number - The sampling temperature to use, between 0 and 2.
  • top_p: number - An alternative to sampling with temperature, called nucleus sampling.
  • user: string - A unique identifier representing your end-user.
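As a sketch of how the optional fields above fit together, the dict below collects a few of them with illustrative values (the values are examples, not recommendations). These kwargs would be passed alongside model and prompt, e.g. text_completion(model=..., prompt=..., **optional_params):

```python
# Illustrative optional params for text_completion; values are
# examples only. Types follow the field list above.
optional_params = {
    "max_tokens": 7,      # integer - cap on generated tokens
    "temperature": 0.2,   # number between 0 and 2
    "n": 1,               # completions per prompt
    "stop": ["\n\n"],     # up to 4 stop sequences
    "stream": False,      # stream partial progress
    "user": "user-1234",  # end-user identifier
}
```

Calling with these would look like text_completion(model="gpt-3.5-turbo-instruct", prompt="Say this is a test", **optional_params), which requires a configured provider API key.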

Output Format

Here's the exact JSON output format you can expect from text completion calls:

Follows OpenAI's output format

{
  "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
  "object": "text_completion",
  "created": 1589478378,
  "model": "gpt-3.5-turbo-instruct",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [
    {
      "text": "\n\nThis is indeed a test",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}
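To pull the generated text and token counts out of a response with this shape, you can index into choices and usage. The sketch below uses a plain dict mirroring the sample JSON above (a real litellm response object exposes the same fields):

```python
# Sample response shaped like the JSON output format above.
response = {
    "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
    "object": "text_completion",
    "created": 1589478378,
    "model": "gpt-3.5-turbo-instruct",
    "system_fingerprint": "fp_44709d6fcb",
    "choices": [
        {
            "text": "\n\nThis is indeed a test",
            "index": 0,
            "logprobs": None,
            "finish_reason": "length",
        }
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
}

# Read the first completion and its metadata.
text = response["choices"][0]["text"]
finish = response["choices"][0]["finish_reason"]
total = response["usage"]["total_tokens"]
```

A finish_reason of "length" (as here) means generation stopped because max_tokens was reached rather than at a natural stopping point.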

Supported Providers

| Provider     | Link to Usage |
| ------------ | ------------- |
| OpenAI       | Usage         |
| Azure OpenAI | Usage         |