BYOK Models
HELIX uses a Bring Your Own Key (BYOK) approach to AI models. You provide your own API keys and choose which models power your agents. No AI usage fees from HELIX — you pay your provider directly.
Supported Providers
| Provider | Example Models | Base URL |
|---|---|---|
| Moonshot | kimi-k2.5, moonshot-v1-auto | https://api.moonshot.cn/v1 |
| OpenAI | gpt-4o, gpt-4o-mini, o1 | https://api.openai.com/v1 |
| Anthropic | claude-sonnet-4-20250514, claude-3.5-haiku | https://api.anthropic.com/v1 |
| NVIDIA | Various NIM models | NVIDIA NIM endpoint |
| Custom | Any OpenAI-compatible API | Your custom endpoint |
The Custom provider works with any API that follows the OpenAI chat completions format. This includes local models (Ollama, vLLM, LM Studio), cloud providers (Together AI, Groq, Fireworks), and enterprise deployments.
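Any such endpoint accepts the same request shape. The sketch below builds a standard chat completions request; the Ollama URL and model name in the example are placeholders for your own deployment:

```python
import json

def build_chat_request(base_url, api_key, model, user_message):
    """Build an HTTP request in the OpenAI chat completions format.
    Works with any OpenAI-compatible endpoint, local or hosted."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return url, headers, json.dumps(payload)

# Example: a local Ollama server exposing its OpenAI-compatible API
url, headers, body = build_chat_request(
    "http://localhost:11434/v1", "ollama", "llama3", "Hello"
)
```

Because every provider in the table above speaks this format, HELIX only needs the base URL and key to differ between them.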
Organization Default Model
The default model is used by all agents unless overridden. Configure it during onboarding or in Settings > AI Models.
Setting Up a Model
- Go to Settings > AI Models
- Click + Add Model
- Fill in:
  - Provider: Select from the dropdown
  - Model Name: The model identifier (e.g., gpt-4o)
  - Display Name: Friendly name shown in the UI
  - API Key: Your provider's API key
  - Base URL: Auto-filled for known providers, editable for custom
- Click Test Connection to verify
- Click Set as Default if this should be the organization default
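The form fields above amount to one configuration record per model. A hypothetical sketch (field names are illustrative, not HELIX's actual schema):

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    provider: str          # e.g. "openai", "moonshot", "custom"
    model_name: str        # the provider's model identifier
    display_name: str      # friendly name shown in the UI
    api_key: str
    base_url: str          # auto-filled for known providers
    is_default: bool = False

cfg = ModelConfig(
    provider="openai",
    model_name="gpt-4o",
    display_name="GPT-4o",
    api_key="sk-...",
    base_url="https://api.openai.com/v1",
)
```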
Model Settings
| Setting | Description | Default |
|---|---|---|
| Context Window | Maximum input tokens | 256,000 |
| Max Tokens | Maximum output tokens per request | 8,192 |
| Base URL | API endpoint | Auto-detected |
Per-Agent Model Overrides
Different agents can use different models. This is configured in the agent's settings:
- Go to Agents and select an agent
- In the agent settings, find AI Model
- Select a different model from the configured models
- Save — this agent now uses its own model for all tasks
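Conceptually, model resolution is a simple fallback: the agent's own override wins if set, otherwise the organization default applies. A minimal sketch (not HELIX's internal code):

```python
def resolve_model(agent_override, org_default):
    """Return the model an agent will use for its tasks: its own
    configured override if present, else the organization default."""
    return agent_override if agent_override is not None else org_default
```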
Use Cases for Overrides
- Coding agents might use a model specialized for code generation
- Writing agents might use a model known for creative writing
- Cost optimization — use cheaper models for simple tasks, premium models for complex ones
- Speed — use faster models for time-sensitive agents
Testing a Model
After adding a model configuration, use the Test Connection button to verify:
- HELIX sends a simple test prompt to the API
- If successful, you see a confirmation with the model's response
- If it fails, check your API key, model name, and base URL
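The check behind Test Connection can be approximated as a single tiny round trip to the chat completions endpoint. A stdlib-only sketch, assuming an OpenAI-compatible API (the error-code hints mirror the troubleshooting advice above):

```python
import json
import urllib.request
from urllib.error import HTTPError

def failure_hint(status_code):
    """Map an HTTP error code to the most likely fix."""
    return {
        401: "check your API key",
        404: "check the base URL and model name",
    }.get(status_code, "check provider status")

def check_connection(base_url, api_key, model):
    """Send one short prompt; return (True, reply) on success,
    or (False, hint) describing what to check on failure."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 5,
    }).encode()
    req = urllib.request.Request(url, data=payload, headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    })
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            reply = json.load(resp)["choices"][0]["message"]["content"]
        return True, reply
    except HTTPError as e:
        return False, f"HTTP {e.code}: {failure_hint(e.code)}"
```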
Multiple Models
You can keep multiple model configurations at once:
- One default model for general use
- Specialized models for specific agents
- Backup models in case a provider has issues
Switch between models at any time — changes take effect on the next task execution.
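HELIX itself has you switch models manually, but a backup configuration lends itself to a simple failover pattern in your own tooling. A hypothetical sketch, not documented HELIX behavior:

```python
def call_with_fallback(models, send):
    """Try each configured model name in order; return (model, reply)
    from the first provider that responds. `send(model)` is any callable
    that raises on provider errors."""
    last_error = None
    for model in models:
        try:
            return model, send(model)
        except Exception as exc:
            last_error = exc
    raise RuntimeError(f"all models failed: {last_error}")
```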
Environment Variables
Models can also be configured via environment variables in .env:
```bash
MODEL_PROVIDER=moonshot          # Provider name
MODEL_NAME=kimi-k2.5             # Model identifier
MODEL_API_KEY=sk-xxx             # Your API key
MODEL_BASE_URL=                  # Custom endpoint (blank = auto)
MODEL_DISPLAY_NAME="Kimi K2.5"   # Friendly name (quoted: contains a space)
MODEL_CONTEXT_WINDOW=256000      # Max context tokens
MODEL_MAX_TOKENS=8192            # Max output tokens
```

These set the initial default model. Models added through the UI are stored in the database and take precedence.
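That precedence rule can be sketched as: a database-stored default (added through the UI) wins, and the environment variables only seed the initial value. Field names here are illustrative:

```python
import os

def effective_default_model(db_default):
    """Return the organization default model configuration.
    A config stored in the database (added via the UI) takes precedence;
    otherwise fall back to the environment-seeded values from .env."""
    if db_default is not None:
        return db_default
    return {
        "provider": os.environ.get("MODEL_PROVIDER"),
        "model_name": os.environ.get("MODEL_NAME"),
        "base_url": os.environ.get("MODEL_BASE_URL") or None,  # blank = auto
    }
```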