StationOne supports multiple AI model providers. Once configured, models become available across all workspaces. Access model configuration via Settings → Models.
## Configuring AI Providers
- Open Settings (gear icon or menu bar).
- Go to the Models tab.
- Select a provider and enter the required credentials.
## Supported Providers
| Provider | Requirements |
|---|---|
| OpenAI | API Key. Supports GPT-5 variants, o3/o1 series, GPT-4. Optional vision model fallback and custom API base URL. |
| Anthropic | API Key. Supports Claude model family. |
| Google | API Key. Supports Gemini models. |
| xAI | API Key. Supports Grok models. |
| Meta | API Key. Supports Llama models. |
| Mistral AI | API Key. |
| DeepSeek | API Key. |
| Groq | API Key. Fast inference. |
| Cerebras | API Key. |
| Ollama | API Base URL (local). No API key needed. |
| LM Studio | API Base URL (local). No API key needed. |
| Azure | API endpoint, credentials, and deployment details; requires creating a deployment first. |
| OpenRouter | API Key. Includes provider order configuration. |
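Providers fall into a few credential patterns: hosted providers need an API key, local runtimes (Ollama, LM Studio) need only a base URL, and Azure needs full deployment details. A minimal sketch of how such requirements could be checked, assuming a hypothetical config schema (the field and provider names below are illustrative, not StationOne's actual format):

```python
# Required fields per provider (illustrative subset, assumed schema).
REQUIREMENTS = {
    "openai": {"api_key"},
    "anthropic": {"api_key"},
    "ollama": {"api_base_url"},      # local runtime: no API key needed
    "lmstudio": {"api_base_url"},    # local runtime: no API key needed
    "azure": {"endpoint", "api_key", "deployment"},
}

def missing_fields(provider: str, config: dict) -> set:
    """Return the required fields that are absent or empty in a provider config."""
    provided = {k for k, v in config.items() if v}
    return REQUIREMENTS.get(provider, set()) - provided

# An Azure config without a deployment name would fail validation:
missing_fields("azure", {"endpoint": "https://example", "api_key": "k"})
```

Local providers validate with just a base URL, which is why no key prompt appears for them in the UI.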
## Common Options
Most providers offer:
- API Key input (masked).
- Vision model fallback selection.
- Toggle to disable plugins for all models.
## LLM Proxy (Subscription Feature)
Subscribers with a Pro+ plan can access LLM models through StationOne’s proxy without managing their own API keys:
- Models are automatically available when your subscription is active.
- No API key configuration needed for proxied models.
- Model alias resolution maps your requests to the appropriate provider.
## Workspace Model Management
Each workspace can configure which models are available:
- Right-click workspace → Settings → Models.
- Select online or local models.
- Check/uncheck specific models to make them available or unavailable.
- Use Prune Models to remove entries for models that providers no longer offer.
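Conceptually, pruning keeps only the workspace's configured models that still appear in the provider's current catalog. A minimal sketch of that filter (function and model names are illustrative):

```python
def prune_models(configured: list, available: set) -> list:
    """Drop configured models that the provider no longer lists."""
    return [m for m in configured if m in available]

# A retired model id disappears from the workspace list:
prune_models(["gpt-4", "retired-model"], {"gpt-4", "gpt-4o"})
```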