# Model Configuration

## Overview

`models.json` is a configuration file used to customize the model list and control the model dropdown display. This configuration supports two levels:

- **User-level**: `~/.codebuddy/models.json`, a global configuration applicable to all projects
- **Project-level**: `<workspace>/.codebuddy/models.json`, a project-specific configuration with higher priority than user-level
## Configuration File Locations

### User-level Configuration

`~/.codebuddy/models.json`

### Project-level Configuration

`<project-root>/.codebuddy/models.json`

## Configuration Priority
Configuration merge priority, from highest to lowest:

1. Project-level `models.json`
2. User-level `models.json`
3. Built-in default configuration

Project-level configuration overrides user-level configuration for model definitions with the same `id`. The `availableModels` field is an exception: it is not merged, and the project-level value completely replaces the user-level value.
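For example, assuming a user-level file that exposes three model IDs and a project-level file that lists only one (the IDs here are illustrative), the project-level `availableModels` replaces the user-level list entirely:

User-level (`~/.codebuddy/models.json`):

```json
{
  "availableModels": ["gpt-4o", "gpt-4-turbo", "my-custom-model"]
}
```

Project-level (`<workspace>/.codebuddy/models.json`):

```json
{
  "availableModels": ["my-custom-model"]
}
```

With both files in place, the dropdown shows only `my-custom-model`.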
## Configuration Structure

```json
{
  "models": [
    {
      "id": "model-id",
      "name": "Model Display Name",
      "vendor": "vendor-name",
      "apiKey": "sk-actual-api-key-value",
      "maxInputTokens": 200000,
      "maxOutputTokens": 8192,
      "url": "https://api.example.com/v1/chat/completions",
      "supportsToolCall": true,
      "supportsImages": true
    }
  ],
  "availableModels": ["model-id-1", "model-id-2"]
}
```

## Configuration Field Description
### models

Type: `Array<LanguageModel>`

Defines the custom model list. You can add new models or override built-in model configurations.

#### LanguageModel Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `id` | string | ✓ | Unique model identifier |
| `name` | string | - | Model display name |
| `vendor` | string | - | Model vendor (e.g., OpenAI, Anthropic, Google) |
| `apiKey` | string | - | API key (the actual key value, not an environment variable name) |
| `maxInputTokens` | number | - | Maximum input tokens |
| `maxOutputTokens` | number | - | Maximum output tokens |
| `url` | string | - | API endpoint URL (must be the complete interface path, typically ending with `/chat/completions`) |
| `supportsToolCall` | boolean | - | Whether tool calls are supported |
| `supportsImages` | boolean | - | Whether image input is supported |
| `supportsReasoning` | boolean | - | Whether reasoning mode is supported |
**Important Notes:**

- Currently, only the OpenAI-compatible API format is supported
- The `url` field must be the complete interface path, typically ending with `/chat/completions`
- Examples: `https://api.openai.com/v1/chat/completions` or `http://localhost:11434/v1/chat/completions`
### availableModels

Type: `Array<string>`

Controls which models are displayed in the model dropdown list. Only model IDs listed in this array will be shown in the UI.

- If not configured, or configured as an empty array, all models are displayed
- When configured, only the listed model IDs are displayed
- The list can include both built-in and custom model IDs
## Use Cases

### 1. Add a Custom Model

Add a new model configuration at the user or project level:

```json
{
  "models": [
    {
      "id": "my-custom-model",
      "name": "My Custom Model",
      "vendor": "OpenAI",
      "apiKey": "sk-custom-key-here",
      "maxInputTokens": 128000,
      "maxOutputTokens": 4096,
      "url": "https://api.myservice.com/v1/chat/completions",
      "supportsToolCall": true
    }
  ]
}
```

### 2. Override a Built-in Model Configuration
Modify the default parameters of a built-in model:

```json
{
  "models": [
    {
      "id": "gpt-4-turbo",
      "name": "GPT-4 Turbo (Custom Endpoint)",
      "vendor": "OpenAI",
      "url": "https://my-proxy.example.com/v1/chat/completions",
      "apiKey": "sk-your-key-here"
    }
  ]
}
```

### 3. Limit the Available Model List
Only display specific models in the dropdown list:

```json
{
  "availableModels": [
    "gpt-4-turbo",
    "gpt-4o",
    "my-custom-model"
  ]
}
```

### 4. Project-Specific Configuration
Use different models or API endpoints for specific projects.

Project A (`.codebuddy/models.json`):

```json
{
  "models": [
    {
      "id": "project-a-model",
      "name": "Project A Model",
      "vendor": "OpenAI",
      "url": "https://project-a-api.example.com/v1/chat/completions",
      "apiKey": "sk-project-a-key",
      "maxInputTokens": 100000,
      "maxOutputTokens": 4096
    }
  ],
  "availableModels": ["project-a-model", "gpt-4-turbo"]
}
```

## Hot Reload
The configuration file supports hot reload:

- File changes are detected automatically
- A 1-second debounce delay avoids frequent reloads
- Configuration updates are automatically synced to the application

Monitored files:

- `~/.codebuddy/models.json` (user-level)
- `<workspace>/.codebuddy/models.json` (project-level)
## Tagging System

Models added through `models.json` are automatically tagged with the `custom` tag for easy identification and filtering in the UI.
## Merge Strategy

Configuration merging uses the SmartMerge strategy:

- Model configurations with the same ID are overridden
- Models with different IDs are appended
- Project-level configuration takes priority over user-level configuration
- `availableModels` filtering is executed after all merging is complete
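As a sketch of how these rules combine (the model IDs and URLs below are illustrative, not built-ins), suppose both levels define `shared-model` and each adds one model of its own:

User-level (`~/.codebuddy/models.json`):

```json
{
  "models": [
    { "id": "shared-model", "name": "Shared (User)", "url": "https://user-api.example.com/v1/chat/completions" },
    { "id": "user-only-model", "name": "User Only" }
  ]
}
```

Project-level (`<workspace>/.codebuddy/models.json`):

```json
{
  "models": [
    { "id": "shared-model", "name": "Shared (Project)", "url": "https://project-api.example.com/v1/chat/completions" },
    { "id": "project-only-model", "name": "Project Only" }
  ]
}
```

The effective list contains `shared-model` with the project-level definition, plus `user-only-model` and `project-only-model` appended.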
## Example Configurations

### API Endpoint URL Format

Must use the complete path: all custom model `url` fields should typically end with `/chat/completions`.
✅ Correct Examples:

- `https://api.openai.com/v1/chat/completions`
- `https://api.myservice.com/v1/chat/completions`
- `http://localhost:11434/v1/chat/completions`
- `https://my-proxy.example.com/v1/chat/completions`

❌ Incorrect Examples:

- `https://api.openai.com/v1`
- `https://api.myservice.com`
- `http://localhost:11434`

### OpenRouter Platform Configuration Example
Using OpenRouter to access various models:

```json
{
  "models": [
    {
      "id": "openai/gpt-4o",
      "name": "open-router-model",
      "url": "https://openrouter.ai/api/v1/chat/completions",
      "apiKey": "sk-or-v1-your-openrouter-api-key",
      "maxInputTokens": 128000,
      "maxOutputTokens": 4096,
      "supportsToolCall": true,
      "supportsImages": false
    }
  ]
}
```

### DeepSeek Platform Configuration Example
Using DeepSeek models:

```json
{
  "models": [
    {
      "id": "deepseek-chat",
      "name": "DeepSeek Chat",
      "vendor": "DeepSeek",
      "url": "https://api.deepseek.com/v1/chat/completions",
      "apiKey": "sk-your-deepseek-api-key",
      "maxInputTokens": 32000,
      "maxOutputTokens": 4096,
      "supportsToolCall": true,
      "supportsImages": false
    }
  ]
}
```

### Complete Example
```json
{
  "models": [
    {
      "id": "gpt-4o",
      "name": "GPT-4o",
      "vendor": "OpenAI",
      "apiKey": "sk-your-openai-key",
      "maxInputTokens": 128000,
      "maxOutputTokens": 16384,
      "supportsToolCall": true,
      "supportsImages": true
    },
    {
      "id": "my-local-llm",
      "name": "My Local LLM",
      "vendor": "Ollama",
      "url": "http://localhost:11434/v1/chat/completions",
      "apiKey": "ollama",
      "maxInputTokens": 8192,
      "maxOutputTokens": 2048,
      "supportsToolCall": true
    }
  ],
  "availableModels": [
    "gpt-4o",
    "my-local-llm"
  ]
}
```

## Troubleshooting
### Configuration Not Taking Effect

- Check that the JSON format is valid
- Confirm the file path is correct
- View the log output to confirm the configuration was loaded
- Confirm each model's `apiKey` is set to an actual key value (not an environment variable name)
### Model Not Showing in List

- Check whether the model ID is listed in `availableModels`
- Confirm the `models` configuration is correct
- Verify the required `id` field is provided, and that field names such as `name` and `vendor` are spelled correctly
### Hot Reload Not Triggered

- Configuration file changes are subject to a 1-second debounce delay
- Ensure the file has actually been saved to disk
- Check whether file watching started normally (view the debug logs)