GitHub Copilot

https://docs.github.com/en/copilot

tip

We support the GitHub Copilot Chat API with automatic authentication handling.

| Property | Details |
|---|---|
| Description | GitHub Copilot Chat API provides access to GitHub's AI-powered coding assistant. |
| Provider Route on LiteLLM | `github_copilot/` |
| Supported Endpoints | `/chat/completions` |
| API Reference | [GitHub Copilot docs](https://docs.github.com/en/copilot) |

Authentication

GitHub Copilot uses OAuth device flow for authentication. On first use, you'll be prompted to authenticate via GitHub:

  1. LiteLLM will display a device code and verification URL
  2. Visit the URL and enter the code to authenticate
  3. Your credentials will be stored locally for future use
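LiteLLM performs this credential check for you, but conceptually it looks something like the sketch below. The environment variables are the documented overrides from the Configuration section of this page; the fallback directory is an illustrative assumption, not LiteLLM's actual default:

```python
import os
from pathlib import Path

# Illustrative sketch only: LiteLLM handles this internally. The env vars
# are the documented overrides; the fallback directory is an assumption.
def cached_token_path() -> Path:
    token_dir = os.environ.get(
        "GITHUB_COPILOT_TOKEN_DIR",
        os.path.join(os.path.expanduser("~"), ".config", "litellm", "github_copilot"),
    )
    token_file = os.environ.get("GITHUB_COPILOT_ACCESS_TOKEN_FILE", "access-token")
    return Path(token_dir) / token_file

def needs_device_flow() -> bool:
    # No cached token file means the OAuth device flow will be started.
    return not cached_token_path().is_file()
```

Once the token file exists, subsequent requests skip the device-flow prompt entirely.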

Usage - LiteLLM Python SDK

Chat Completion

GitHub Copilot Chat Completion

```python
from litellm import completion

response = completion(
    model="github_copilot/gpt-4",
    messages=[{"role": "user", "content": "Write a Python function to calculate fibonacci numbers"}],
    extra_headers={
        "editor-version": "vscode/1.85.1",
        "Copilot-Integration-Id": "vscode-chat"
    }
)
print(response)
```
GitHub Copilot Chat Completion - Streaming

```python
from litellm import completion

stream = completion(
    model="github_copilot/gpt-4",
    messages=[{"role": "user", "content": "Explain async/await in Python"}],
    stream=True,
    extra_headers={
        "editor-version": "vscode/1.85.1",
        "Copilot-Integration-Id": "vscode-chat"
    }
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```

Usage - LiteLLM Proxy

Add the following to your LiteLLM Proxy configuration file:

config.yaml

```yaml
model_list:
  - model_name: github_copilot/gpt-4
    litellm_params:
      model: github_copilot/gpt-4
```

Start your LiteLLM Proxy server:

Start LiteLLM Proxy

```shell
litellm --config config.yaml

# RUNNING on http://0.0.0.0:4000
```
GitHub Copilot via Proxy - Non-streaming

```python
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="your-proxy-api-key"       # Your proxy API key
)

# Non-streaming response
response = client.chat.completions.create(
    model="github_copilot/gpt-4",
    messages=[{"role": "user", "content": "How do I optimize this SQL query?"}],
    extra_headers={
        "editor-version": "vscode/1.85.1",
        "Copilot-Integration-Id": "vscode-chat"
    }
)

print(response.choices[0].message.content)
```

Getting Started

  1. Ensure you have GitHub Copilot access (paid GitHub subscription required)
  2. Run your first LiteLLM request - you'll be prompted to authenticate
  3. Follow the device flow authentication process
  4. Start making requests to GitHub Copilot through LiteLLM

Configuration

Environment Variables

You can customize token storage locations:

Environment Variables

```shell
# Optional: Custom token directory
export GITHUB_COPILOT_TOKEN_DIR="~/.config/litellm/github_copilot"

# Optional: Custom access token file name
export GITHUB_COPILOT_ACCESS_TOKEN_FILE="access-token"

# Optional: Custom API key file name
export GITHUB_COPILOT_API_KEY_FILE="api-key.json"
```
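If you prefer to configure these from Python rather than the shell, setting the same variables via `os.environ` before your first LiteLLM request works too (a small sketch using the same example values as above):

```python
import os

# Same optional overrides as the shell exports above, set from Python.
# Do this before the first LiteLLM request so the auth flow picks them up.
os.environ["GITHUB_COPILOT_TOKEN_DIR"] = os.path.expanduser("~/.config/litellm/github_copilot")
os.environ["GITHUB_COPILOT_ACCESS_TOKEN_FILE"] = "access-token"
os.environ["GITHUB_COPILOT_API_KEY_FILE"] = "api-key.json"
```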

Headers

GitHub Copilot supports various editor-specific headers:

Common Headers

```python
extra_headers = {
    "editor-version": "vscode/1.85.1",            # Editor version
    "editor-plugin-version": "copilot/1.155.0",   # Plugin version
    "Copilot-Integration-Id": "vscode-chat",      # Integration ID
    "user-agent": "GithubCopilot/1.155.0"         # User agent
}
```