Models Setup

StationOne supports multiple AI model providers. Once configured, models become available across all workspaces. Access model configuration via Settings → Models.

Configuring AI Providers

  1. Open Settings (gear icon or menu bar).
  2. Go to the Models tab.
  3. Select a provider and enter the required credentials.

Supported Providers

  • OpenAI: API Key. Supports GPT-5 variants, o3/o1 series, and GPT-4. Optional vision model fallback and custom API base URL.
  • Anthropic: API Key. Supports the Claude model family.
  • Google: API Key. Supports Gemini models.
  • xAI: API Key. Supports Grok models.
  • Meta: API Key. Supports Llama models.
  • Mistral AI: API Key.
  • DeepSeek: API Key.
  • Groq: API Key. Fast inference.
  • Cerebras: API Key.
  • Ollama: API Base URL (local). No API key needed.
  • LM Studio: API Base URL (local). No API key needed.
  • Azure: Requires creating a deployment with an API endpoint, credentials, and deployment details.
  • OpenRouter: API Key. Includes provider order configuration.

Common Options

Most providers offer:

  • API Key input (masked).
  • Vision model fallback selection.
  • Disable plugins for all models toggle.
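Masked key input typically hides all but a few leading and trailing characters. The exact display format StationOne uses is not specified here; this is one plausible sketch of such masking.

```python
def mask_api_key(key: str, visible: int = 4) -> str:
    """Mask all but the first and last `visible` characters of an API key.
    (Illustrative only; StationOne's actual masking format is unspecified.)"""
    if len(key) <= visible * 2:
        # Too short to reveal anything safely; mask it entirely.
        return "*" * len(key)
    return key[:visible] + "*" * (len(key) - visible * 2) + key[-visible:]
```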

LLM Proxy (Subscription Feature)

Subscribers with a Pro+ plan can access LLM models through StationOne’s proxy without managing their own API keys:

  • Models are automatically available when your subscription is active.
  • No API key configuration needed for proxied models.
  • Model alias resolution maps your requests to the appropriate provider.
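Alias resolution, as described above, maps the model name you request to a concrete provider and model behind the proxy. The sketch below illustrates the idea only; the alias names, targets, and pass-through behavior are invented for this example, not StationOne's documented mapping.

```python
# Hypothetical alias table; the real proxy's aliases and targets
# are not documented here.
ALIASES = {
    "fast":    ("Groq", "llama-3.1-8b-instant"),
    "default": ("OpenAI", "gpt-4o"),
}

def resolve_alias(requested: str) -> tuple[str, str]:
    """Map a requested alias to (provider, concrete model).
    Unknown names pass through unchanged with no provider override."""
    if requested in ALIASES:
        return ALIASES[requested]
    return ("", requested)
```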

Workspace Model Management

Each workspace can configure which models are available:

  1. Right-click workspace → Settings → Models.
  2. Select online or local models.
  3. Check/uncheck specific models to make them available or unavailable.
  4. Use Prune Models to remove models that are no longer available from providers.
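Step 4 can be thought of as a set intersection: keep only the configured models that providers still report as available. This is a guess at Prune Models' behavior, sketched for illustration.

```python
def prune_models(configured: set[str], available: set[str]) -> set[str]:
    """Drop configured models no longer offered by any provider.
    (Assumed behavior of Prune Models; the real implementation may differ.)"""
    return configured & available
```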
Updated on March 17, 2026