Models
mux supports multiple AI providers through its flexible provider architecture.
Supported Providers
Anthropic (Cloud)
Best supported provider with full feature support:
- anthropic:claude-sonnet-4-5
- anthropic:claude-opus-4-1
Setup:
Anthropic can be configured via ~/.mux/providers.jsonc or environment variables:
{
"anthropic": {
"apiKey": "sk-ant-...",
// Optional: custom base URL (mux auto-appends /v1 if missing)
"baseUrl": "https://api.anthropic.com",
},
}
Or set environment variables:
- ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN — API key (required if not in providers.jsonc)
- ANTHROPIC_BASE_URL — Custom base URL (optional)
Note: Environment variables are read automatically if no config is provided. The /v1 path suffix is normalized automatically—you can omit it from base URLs.
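For example, a shell setup using environment variables might look like this (key values are placeholders):

```shell
# Placeholder key — substitute your real key
export ANTHROPIC_API_KEY="sk-ant-..."
# Optional: custom endpoint; mux appends /v1 automatically if missing
export ANTHROPIC_BASE_URL="https://api.anthropic.com"
```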
OpenAI (Cloud)
GPT-5 family of models:
- openai:gpt-5
- openai:gpt-5-pro
- openai:gpt-5-codex
Google (Cloud)
Access Gemini models directly via Google's generative AI API:
- google:gemini-3-pro-preview
- google:gemini-2.5-pro
- google:gemini-2.5-flash
Setup:
- Get your API key from Google AI Studio
- Add to ~/.mux/providers.jsonc:
{
"google": {
"apiKey": "AIza...",
},
}
Note: Anthropic models are better supported than GPT-5-class models (such as openai:gpt-5-codex) due to an outstanding issue in the Vercel AI SDK.
TODO: add issue link here.
xAI (Grok)
Frontier reasoning models from xAI with built-in search orchestration:
- xai:grok-4-1 — Fast unified model (switches between reasoning and non-reasoning based on the thinking toggle)
- xai:grok-code — Optimized for coding tasks
Setup:
- Create an API key at console.x.ai
- Add to ~/.mux/providers.jsonc:
{
"xai": {
"apiKey": "sk-xai-...",
},
}
Search orchestration:
mux enables Grok's live search by default using mode: "auto" with citations. Add searchParameters to providers.jsonc if you want to customize the defaults (e.g., regional focus, time filters, or disabling search entirely per workspace).
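As a sketch, a customized configuration might look like the following; the searchParameters field names here are illustrative and should be checked against xAI's Live Search documentation:

```jsonc
{
  "xai": {
    "apiKey": "sk-xai-...",
    // Illustrative fields — verify names against xAI's Live Search API
    "searchParameters": {
      "mode": "auto", // "auto", "on", or "off"
    },
  },
}
```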
OpenRouter (Cloud)
Access 300+ models from multiple providers through a single API:
- openrouter:z-ai/glm-4.6
- openrouter:anthropic/claude-3.5-sonnet
- openrouter:google/gemini-2.0-flash-thinking-exp
- openrouter:deepseek/deepseek-chat
- openrouter:openai/gpt-4o
- Any model from OpenRouter Models
Setup:
- Get your API key from openrouter.ai
- Add to ~/.mux/providers.jsonc:
{
"openrouter": {
"apiKey": "sk-or-v1-...",
},
}
Provider Routing (Advanced):
OpenRouter can route requests to specific infrastructure providers (Cerebras, Fireworks, Together, etc.). Configure provider preferences in ~/.mux/providers.jsonc:
{
"openrouter": {
"apiKey": "sk-or-v1-...",
// Use Cerebras for ultra-fast inference
"order": ["Cerebras", "Fireworks"], // Try in order
"allow_fallbacks": true, // Allow other providers if unavailable
},
}
Or require a specific provider (no fallbacks):
{
"openrouter": {
"apiKey": "sk-or-v1-...",
"order": ["Cerebras"], // Only try Cerebras
"allow_fallbacks": false, // Fail if Cerebras unavailable
},
}
Provider Routing Options:
- order: Array of provider names to try in priority order (e.g., ["Cerebras", "Fireworks"])
- allow_fallbacks: Boolean — whether to fall back to other providers (default: true)
- only: Array — restrict to only these providers
- ignore: Array — exclude specific providers
- require_parameters: Boolean — only use providers supporting all your request parameters
- data_collection: "allow" or "deny" — control whether providers can store/train on your data
See OpenRouter Provider Routing docs for details.
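For instance, these options can be combined in a single config (the ignored provider name is illustrative):

```jsonc
{
  "openrouter": {
    "apiKey": "sk-or-v1-...",
    "ignore": ["SomeProvider"],  // never route to this provider (name illustrative)
    "require_parameters": true,  // skip providers that can't honor every request parameter
    "data_collection": "deny",   // exclude providers that store or train on prompts
  },
}
```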
Reasoning Models:
OpenRouter supports reasoning models like Claude Sonnet Thinking. Use the thinking slider to control reasoning effort:
- Off: No extended reasoning
- Low: Quick reasoning for straightforward tasks
- Medium: Standard reasoning for moderate complexity (default)
- High: Deep reasoning for complex problems
The thinking level is passed to OpenRouter as reasoning.effort and works with any reasoning-capable model. See OpenRouter Reasoning docs for details.
Ollama (Local)
Run models locally with Ollama. No API key required:
- ollama:gpt-oss:20b
- ollama:gpt-oss:120b
- ollama:qwen3-coder:30b
- Any model from the Ollama Library
Setup:
- Install Ollama from ollama.com
- Pull a model: ollama pull gpt-oss:20b
- That's it! Ollama works out of the box with no configuration needed.
Custom Configuration (optional):
By default, mux connects to Ollama at http://localhost:11434/api. To use a remote instance or custom port, add to ~/.mux/providers.jsonc:
{
"ollama": {
"baseUrl": "http://your-server:11434/api",
},
}
Amazon Bedrock (Cloud)
Access Anthropic Claude and other models through AWS Bedrock:
- bedrock:us.anthropic.claude-sonnet-4-20250514-v1:0
- bedrock:us.amazon.nova-pro-v1:0
Model IDs follow the Bedrock format: [region.]vendor.model-name-version. mux automatically parses these for display (e.g., us.anthropic.claude-sonnet-4-20250514-v1:0 displays as "Sonnet 4").
Authentication Options:
Bedrock supports multiple authentication methods, tried in order:
- Bearer Token (simplest) — A single API key for Bedrock access
- Explicit Credentials — Access Key ID + Secret Access Key in config
- AWS Credential Chain — Automatic credential resolution (recommended for AWS environments)
Option 1: Bearer Token
The simplest approach if you have a Bedrock API key:
{
"bedrock": {
"region": "us-east-1",
"bearerToken": "your-bedrock-api-key",
},
}
Or set via environment variable:
export AWS_REGION=us-east-1
export AWS_BEARER_TOKEN_BEDROCK=your-bedrock-api-key
Option 2: Explicit AWS Credentials
Use IAM access keys directly:
{
"bedrock": {
"region": "us-east-1",
"accessKeyId": "AKIA...",
"secretAccessKey": "...",
},
}
Option 3: AWS Credential Chain (Recommended for AWS)
If no explicit credentials are provided, mux uses the AWS SDK's fromNodeProviderChain() which automatically resolves credentials from (in order):
- Environment variables — AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
- Shared credentials file — ~/.aws/credentials (supports profiles via AWS_PROFILE)
- SSO credentials — AWS IAM Identity Center (configure with aws sso login)
- EC2 instance profile — Automatic on EC2 instances with IAM roles
- ECS task role — Automatic in ECS containers
- EKS Pod Identity / IRSA — Automatic in Kubernetes with IAM Roles for Service Accounts
For region, mux checks AWS_REGION and AWS_DEFAULT_REGION environment variables, so standard AWS CLI configurations work automatically.
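For example, a standard AWS CLI environment like the following is picked up without any mux-specific configuration (the profile name is hypothetical):

```shell
# Hypothetical profile name — selects a profile from ~/.aws/credentials
export AWS_PROFILE=my-profile
# Region read by mux (AWS_DEFAULT_REGION also works)
export AWS_REGION=us-east-1
```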
This means if you're already authenticated with AWS CLI (aws sso login or configured credentials), mux will automatically use those credentials:
{
"bedrock": {
"region": "us-east-1",
// No credentials needed — uses AWS credential chain
},
}
Required IAM Permissions:
Your AWS credentials need the bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream permissions for the models you want to use.
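A minimal IAM policy sketch granting those permissions might look like this (the Resource ARN pattern is illustrative; in practice, scope it to the specific models you use):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/*"
    }
  ]
}
```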
Provider Configuration
All providers are configured in ~/.mux/providers.jsonc. Example configurations:
{
// Anthropic: config OR env vars (ANTHROPIC_API_KEY, ANTHROPIC_BASE_URL)
"anthropic": {
"apiKey": "sk-ant-...",
},
// Required for OpenAI models
"openai": {
"apiKey": "sk-...",
},
// Required for Google models
"google": {
"apiKey": "AIza...",
},
// Required for Grok models
"xai": {
"apiKey": "sk-xai-...",
},
// Required for OpenRouter models
"openrouter": {
"apiKey": "sk-or-v1-...",
},
// Bedrock (uses AWS credential chain if no explicit credentials)
"bedrock": {
"region": "us-east-1",
},
// Optional for Ollama (only needed for custom URL)
"ollama": {
"baseUrl": "http://your-server:11434/api",
},
}
Model Selection
The quickest way to switch models is with the keyboard shortcut:
- macOS: Cmd+/
- Windows/Linux: Ctrl+/
Alternatively, use the Command Palette (Cmd+Shift+P / Ctrl+Shift+P):
- Type "model"
- Select "Change Model"
- Choose from available models
Models are specified in the format provider:model-name, for example anthropic:claude-sonnet-4-5.