BYOK Models

HELIX uses a Bring Your Own Key (BYOK) approach to AI models. You provide your own API keys and choose which models power your agents. No AI usage fees from HELIX — you pay your provider directly.

Supported Providers

| Provider  | Example Models                             | Base URL                     |
|-----------|--------------------------------------------|------------------------------|
| Moonshot  | kimi-k2.5, moonshot-v1-auto                | https://api.moonshot.cn/v1   |
| OpenAI    | gpt-4o, gpt-4o-mini, o1                    | https://api.openai.com/v1    |
| Anthropic | claude-sonnet-4-20250514, claude-3.5-haiku | https://api.anthropic.com/v1 |
| NVIDIA    | Various NIM models                         | NVIDIA NIM endpoint          |
| Custom    | Any OpenAI-compatible API                  | Your custom endpoint         |

The Custom provider works with any API that follows the OpenAI chat completions format. This includes local models (Ollama, vLLM, LM Studio), cloud providers (Together AI, Groq, Fireworks), and enterprise deployments.
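Because all of these endpoints accept the same chat completions request shape, only the base URL, key, and model name change between providers. A minimal sketch of building such a request (the local Ollama URL and model name below are illustrative assumptions, not HELIX defaults):

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build an OpenAI-format chat completions request.

    The same request shape works against OpenAI, Ollama, vLLM, LM Studio,
    Groq, and any other OpenAI-compatible endpoint.
    """
    url = f"{base_url.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Pointing at a local Ollama server (URL and model are illustrative):
url, headers, body = build_chat_request(
    "http://localhost:11434/v1", "ollama", "llama3", "Hello"
)
# POST `body` to `url` with `headers` using any HTTP client.
```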

Organization Default Model

The default model is used by all agents unless overridden. Configure it during onboarding or in Settings > AI Models.

Setting Up a Model

  1. Go to Settings > AI Models
  2. Click + Add Model
  3. Fill in:
    • Provider: Select from the dropdown
    • Model Name: The model identifier (e.g., gpt-4o)
    • Display Name: Friendly name shown in the UI
    • API Key: Your provider's API key
    • Base URL: Auto-filled for known providers, editable for custom
  4. Click Test Connection to verify
  5. Click Set as Default if this should be the organization default

Model Settings

| Setting        | Description                       | Default       |
|----------------|-----------------------------------|---------------|
| Context Window | Maximum input tokens              | 256,000       |
| Max Tokens     | Maximum output tokens per request | 8,192         |
| Base URL       | API endpoint                      | Auto-detected |

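The two token limits interact: input plus requested output must fit within the provider's context window. A hypothetical sketch of such a budget check (this is not HELIX's actual enforcement logic):

```python
def output_budget(prompt_tokens: int,
                  context_window: int = 256_000,
                  max_tokens: int = 8_192) -> int:
    """Largest output a request can ask for: capped by Max Tokens and by
    whatever room the Context Window leaves after the input."""
    remaining = context_window - prompt_tokens
    if remaining <= 0:
        raise ValueError("prompt exceeds the context window")
    return min(max_tokens, remaining)

# A short prompt gets the full Max Tokens cap:
output_budget(1_000)    # -> 8192
# A 250,000-token prompt leaves only 6,000 tokens of headroom:
output_budget(250_000)  # -> 6000
```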
Per-Agent Model Overrides

Different agents can use different models. This is configured in the agent's settings:

  1. Go to Agents and select an agent
  2. In the agent settings, find AI Model
  3. Select a different model from the configured models
  4. Save — this agent now uses its own model for all tasks

Use Cases for Overrides

  • Coding agents might use a model specialized for code generation
  • Writing agents might use a model known for creative writing
  • Cost optimization — use cheaper models for simple tasks, premium models for complex ones
  • Speed — use faster models for time-sensitive agents
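Resolution is straightforward: an agent's override wins, otherwise the organization default applies. A hypothetical sketch of that lookup (agent names and model assignments are made up):

```python
def resolve_model(agent_overrides: dict, agent_id: str, org_default: str) -> str:
    """Return the model an agent should use: its per-agent override
    if one is configured, else the organization default."""
    return agent_overrides.get(agent_id, org_default)

# Illustrative mapping:
overrides = {"coding-agent": "gpt-4o", "writer": "claude-sonnet-4-20250514"}
resolve_model(overrides, "coding-agent", "kimi-k2.5")  # -> "gpt-4o"
resolve_model(overrides, "support-bot", "kimi-k2.5")   # -> "kimi-k2.5"
```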

Testing a Model

After adding a model configuration, use the Test Connection button to verify:

  1. HELIX sends a simple test prompt to the API
  2. If successful, you see a confirmation with the model's response
  3. If it fails, check your API key, model name, and base URL
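A connection test amounts to sending a one-prompt request and interpreting the failure mode. A hypothetical sketch of how HTTP errors might map to the checks above (these status codes are typical for OpenAI-style APIs, but individual providers can differ):

```python
def diagnose(status: int) -> str:
    """Map a failed test request's HTTP status to the most likely fix."""
    if status in (401, 403):
        return "check your API key"
    if status == 404:
        return "check the model name and base URL"
    if status == 429:
        return "rate limited; the key works, retry later"
    if status >= 500:
        return "provider-side issue; retry or check provider status"
    return "check the base URL and request format"

diagnose(401)  # -> "check your API key"
```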

Multiple Models

You can keep multiple model configurations side by side:

  • One default model for general use
  • Specialized models for specific agents
  • Backup models in case a provider has issues

Switch between models at any time — changes take effect on the next task execution.

Environment Variables

Models can also be configured via environment variables in .env:

```bash
MODEL_PROVIDER=moonshot          # Provider name
MODEL_NAME=kimi-k2.5             # Model identifier
MODEL_API_KEY=sk-xxx             # Your API key
MODEL_BASE_URL=                  # Custom endpoint (blank = auto)
MODEL_DISPLAY_NAME="Kimi K2.5"   # Friendly name (quote values with spaces)
MODEL_CONTEXT_WINDOW=256000      # Max context tokens
MODEL_MAX_TOKENS=8192            # Max output tokens
```

These set the initial default model. Models added through the UI are stored in the database and take precedence.

Built by HelixNode