Reference for LLM/model providers (not chat channels like WhatsApp/Telegram). For model selection rules, see Models.
Most provider-specific logic lives in provider plugins (registered via `registerProvider(...)`). The full list of provider-SDK hooks and bundled-plugin examples lives in Provider plugins. A provider that needs a totally custom request executor is a separate, deeper extension surface.
OpenClaw ships with the pi-ai catalog. These providers require no extra `models.providers` configuration.

OpenAI (`openai`):

- Auth env: `OPENAI_API_KEY` (also reads `OPENAI_API_KEYS`, `OPENAI_API_KEY_1`, `OPENAI_API_KEY_2`, `OPENCLAW_LIVE_OPENAI_KEY`).
- Example models: `openai/gpt-5.5`, `openai/gpt-5.4-mini`; list the full catalog with `openclaw models list --provider openai`.
- Onboarding: `openclaw onboard --auth-choice openai-api-key`.
- Transport: `agents.defaults.models["openai/<model>"].params.transport` accepts `"sse"`, `"websocket"`, or `"auto"` (default `auto`); `params.openaiWsWarmup` (`true`/`false`) controls WebSocket warmup.
- Service tier: `agents.defaults.models["openai/<model>"].params.serviceTier`; the `/fast` command toggles `params.fastMode`, which sends `service_tier=priority` for `openai/*` requests to `api.openai.com`.
- Requests to `api.openai.com` carry `originator`, `version`, and `User-Agent` metadata; the `store` flag relates to `openai/gpt-5.3-codex-spark`.

```json5
{
  agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
}
```
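The transport and service-tier knobs above sit under per-model `params`. A minimal sketch combining them (field names as listed above; the chosen values are illustrative, not defaults):

```json5
{
  agents: {
    defaults: {
      model: { primary: "openai/gpt-5.5" },
      models: {
        "openai/gpt-5.5": {
          params: {
            transport: "websocket", // "sse" | "websocket" | "auto"
            openaiWsWarmup: true,   // warm the WebSocket before first use
            serviceTier: "priority", // what /fast toggles via params.fastMode
          },
        },
      },
    },
  },
}
```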
Anthropic (`anthropic`):

- Auth env: `ANTHROPIC_API_KEY` (also reads `ANTHROPIC_API_KEYS`, `ANTHROPIC_API_KEY_1`, `ANTHROPIC_API_KEY_2`, `OPENCLAW_LIVE_ANTHROPIC_KEY`).
- Example model: `anthropic/claude-opus-4-6`.
- Onboarding: `openclaw onboard --auth-choice apiKey`.
- The `/fast` command toggles `params.fastMode`; on `api.anthropic.com` this maps to `service_tier` (`auto` vs `standard_only`).
- To run `anthropic/claude-opus-4-7` through the Claude CLI, set `agents.defaults.agentRuntime.id: "claude-cli"` and use `claude-cli/claude-opus-4-7`.

```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```
OpenAI Codex (`openai-codex`):

- Example model: `openai-codex/gpt-5.5` (the same underlying model as `openai/gpt-5.5`).
- With `agents.defaults.agentRuntime.id: "codex"`, `codex/gpt-*` model ids map onto `openai-codex/*` (also written `codex/*`).
- Onboarding: `openclaw onboard --auth-choice openai-codex`, or `openclaw models auth login --provider openai-codex`.
- Transport: `agents.defaults.models["openai-codex/<model>"].params.transport` accepts `"sse"`, `"websocket"`, or `"auto"` (default `auto`).
- `params.serviceTier` and the `/fast` toggle (`params.fastMode`, `service_tier=priority`) apply to `openai/*`, not to `chatgpt.com/backend-api`; requests to `chatgpt.com/backend-api` carry `originator`, `version`, and `User-Agent` metadata.
- Context: `openai-codex/gpt-5.5` reports `contextWindow = 400000` but is capped at `contextTokens = 272000`; override via `models.providers.openai-codex.models[].contextTokens`.

```json5
{
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.5" } } },
}
```
```json5
{
  models: {
    providers: {
      "openai-codex": {
        models: [{ id: "gpt-5.5", contextTokens: 160000 }],
      },
    },
  },
}
```
- Z.AI: Coding Plan or general API endpoints.
- MiniMax: Coding Plan OAuth or API key access.
- Qwen: Cloud provider surface plus Alibaba DashScope and Coding Plan endpoint mapping.
OpenCode Zen (`opencode`, `opencode-go`):

- Auth env: `OPENCODE_API_KEY` or `OPENCODE_ZEN_API_KEY`.
- Example models: `opencode/claude-opus-4-6`, `opencode-go/kimi-k2.6`.
- Onboarding: `openclaw onboard --auth-choice opencode-zen` or `openclaw onboard --auth-choice opencode-go`.

```json5
{
  agents: { defaults: { model: { primary: "opencode/claude-opus-4-6" } } },
}
```
Google Gemini (`google`):

- Auth env: `GEMINI_API_KEY` (also reads `GEMINI_API_KEYS`, `GEMINI_API_KEY_1`, `GEMINI_API_KEY_2`, `GOOGLE_API_KEY`, `OPENCLAW_LIVE_GEMINI_KEY`).
- Example models: `google/gemini-3.1-pro-preview`, `google/gemini-3-flash-preview`. Aliases map `google/gemini-3.1-flash-preview` to `google/gemini-3-flash-preview` and `google/gemini-3.1-pro` to `google/gemini-3.1-pro-preview`.
- Onboarding: `openclaw onboard --auth-choice gemini-api-key`.
- `/think adaptive` maps to the `thinkingLevel` param (`thinkingBudget: -1`).
- `agents.defaults.models["google/<model>"].params.cachedContent` passes a `cached_content` resource name (`cachedContents/...`); cached usage is accounted as `cacheRead`.
- Related providers: `google-vertex` and `google-gemini-cli`.

Gemini CLI OAuth is shipped as part of the bundled `google` plugin. Install the CLI:

```bash
npm install -g @google/gemini-cli
```

Default model: `google-gemini-cli/gemini-3-flash-preview`. You do **not** paste a client id or secret into `openclaw.json`. The CLI login flow stores tokens in auth profiles on the gateway host.
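The `cachedContent` param mentioned above can be sketched per model like this (a hedged example: the resource name is a placeholder for a cache you would create via the Gemini caching API):

```json5
{
  agents: {
    defaults: {
      models: {
        "google/gemini-3.1-pro-preview": {
          // Placeholder cachedContents resource id; substitute your own.
          // Tokens served from the cache are accounted as cacheRead.
          params: { cachedContent: "cachedContents/your-cache-id" },
        },
      },
    },
  },
}
```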
Gemini CLI JSON replies are parsed from the `response` field; token counts come from `stats`, and `stats.cached` is accounted as `cacheRead`.

Z.AI (`zai`):

- Auth env: `ZAI_API_KEY`.
- Example model: `zai/glm-5.1`.
- Onboarding: `openclaw onboard --auth-choice zai-api-key`. Other auth choices: `zai-coding-global`, `zai-coding-cn`, `zai-global`, `zai-cn`.
- Model prefixes `z.ai/*` and `z-ai/*` are normalized to `zai/*`.

Vercel AI Gateway (`vercel-ai-gateway`):

- Auth env: `AI_GATEWAY_API_KEY`.
- Example models: `vercel-ai-gateway/anthropic/claude-opus-4.6`, `vercel-ai-gateway/moonshotai/kimi-k2.6`.
- Onboarding: `openclaw onboard --auth-choice ai-gateway-api-key`.

Kilo Gateway (`kilocode`):

- Auth env: `KILOCODE_API_KEY`.
- Example model: `kilocode/kilo/auto`.
- Onboarding: `openclaw onboard --auth-choice kilocode-api-key`.
- Requests go to `https://api.kilo.ai/api/gateway/`; the model catalog is fetched from `https://api.kilo.ai/api/gateway/models`.
- See /providers/kilocode for setup details.
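Putting the Z.AI pieces together (a minimal sketch; the key value is illustrative):

```json5
{
  env: { ZAI_API_KEY: "..." },
  agents: { defaults: { model: { primary: "zai/glm-5.1" } } },
}
```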
| Provider | Id | Auth env | Example model |
|---|---|---|---|
| BytePlus | `byteplus`, `byteplus-plan` | `BYTEPLUS_API_KEY` | `byteplus-plan/ark-code-latest` |
| Cerebras | `cerebras` | `CEREBRAS_API_KEY` | `cerebras/zai-glm-4.7` |
| Cloudflare AI Gateway | `cloudflare-ai-gateway` | `CLOUDFLARE_AI_GATEWAY_API_KEY` | — |
| DeepInfra | `deepinfra` | `DEEPINFRA_API_KEY` | `deepinfra/deepseek-ai/DeepSeek-V3.2` |
| DeepSeek | `deepseek` | `DEEPSEEK_API_KEY` | `deepseek/deepseek-v4-flash` |
| GitHub Copilot | `github-copilot` | `COPILOT_GITHUB_TOKEN`, `GH_TOKEN`, `GITHUB_TOKEN` | — |
| Groq | `groq` | `GROQ_API_KEY` | — |
| Hugging Face Inference | `huggingface` | `HUGGINGFACE_HUB_TOKEN`, `HF_TOKEN` | `huggingface/deepseek-ai/DeepSeek-R1` |
| Kilo Gateway | `kilocode` | `KILOCODE_API_KEY` | `kilocode/kilo/auto` |
| Kimi Coding | `kimi` | `KIMI_API_KEY`, `KIMICODE_API_KEY` | `kimi/kimi-code` |
| MiniMax | `minimax`, `minimax-portal` | `MINIMAX_API_KEY`, `MINIMAX_OAUTH_TOKEN` | `minimax/MiniMax-M2.7` |
| Mistral | `mistral` | `MISTRAL_API_KEY` | `mistral/mistral-large-latest` |
| Moonshot | `moonshot` | `MOONSHOT_API_KEY` | `moonshot/kimi-k2.6` |
| NVIDIA | `nvidia` | `NVIDIA_API_KEY` | `nvidia/nvidia/nemotron-3-super-120b-a12b` |
| OpenRouter | `openrouter` | `OPENROUTER_API_KEY` | `openrouter/auto` |
| Qianfan | `qianfan` | `QIANFAN_API_KEY` | `qianfan/deepseek-v3.2` |
| Qwen Cloud | `qwen` | `QWEN_API_KEY`, `MODELSTUDIO_API_KEY`, `DASHSCOPE_API_KEY` | `qwen/qwen3.5-plus` |
| StepFun | `stepfun`, `stepfun-plan` | `STEPFUN_API_KEY` | `stepfun/step-3.5-flash` |
| Together | `together` | `TOGETHER_API_KEY` | `together/moonshotai/Kimi-K2.5` |
| Venice | `venice` | `VENICE_API_KEY` | — |
| Vercel AI Gateway | `vercel-ai-gateway` | `AI_GATEWAY_API_KEY` | `vercel-ai-gateway/anthropic/claude-opus-4.6` |
| Volcano Engine (Doubao) | `volcengine`, `volcengine-plan` | `VOLCANO_ENGINE_API_KEY` | `volcengine-plan/ark-code-latest` |
| xAI | `xai` | `XAI_API_KEY` | `xai/grok-4` |
| Xiaomi | `xiaomi` | `XIAOMI_API_KEY` | `xiaomi/mimo-v2-flash` |
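Each provider in the table follows the same pattern: export the auth env var (or set it under `env`) and select a model. A hedged sketch using OpenRouter (the key value is illustrative):

```json5
{
  env: { OPENROUTER_API_KEY: "sk-or-..." },
  agents: { defaults: { model: { primary: "openrouter/auto" } } },
}
```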
Use `models.providers` to add custom providers or override entries from the generated `models.json`. Many of the bundled provider plugins below already publish a default catalog, so add explicit `models.providers.<id>` entries only when you need overrides. Gateway model capability checks also read explicit `models.providers.<id>.models[]` entries (for example, `input: ["text", "image"]`).

Moonshot ships as a bundled provider plugin. Use the built-in provider by default, and add an explicit `models.providers.moonshot` entry only to override it.

- Provider id: `moonshot`
- Auth env: `MOONSHOT_API_KEY`
- Example model: `moonshot/kimi-k2.6`
- Onboarding: `openclaw onboard --auth-choice moonshot-api-key` (or `openclaw onboard --auth-choice moonshot-api-key-cn`)

Kimi K2 model IDs: `moonshot/kimi-k2.6`, `moonshot/kimi-k2.5`, `moonshot/kimi-k2-thinking`, `moonshot/kimi-k2-thinking-turbo`, `moonshot/kimi-k2-turbo`.

```json5
{
  agents: {
    defaults: { model: { primary: "moonshot/kimi-k2.6" } },
  },
  models: {
    mode: "merge",
    providers: {
      moonshot: {
        baseUrl: "https://api.moonshot.ai/v1",
        apiKey: "${MOONSHOT_API_KEY}",
        api: "openai-completions",
        models: [{ id: "kimi-k2.6", name: "Kimi K2.6" }],
      },
    },
  },
}
```
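If the gateway's capability checks need an explicit catalog entry, modalities can be declared on the override (a hedged sketch: `input` is the field mentioned above, and whether this particular model accepts images is illustrative only):

```json5
{
  models: {
    mode: "merge",
    providers: {
      moonshot: {
        models: [
          // Declare accepted input modalities for capability checks.
          { id: "kimi-k2.6", name: "Kimi K2.6", input: ["text", "image"] },
        ],
      },
    },
  },
}
```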
Kimi Coding uses Moonshot AI's Anthropic-compatible endpoint:

- Provider id: `kimi`
- Auth env: `KIMI_API_KEY`
- Example model: `kimi/kimi-code`

```json5
{
  env: { KIMI_API_KEY: "sk-..." },
  agents: {
    defaults: { model: { primary: "kimi/kimi-code" } },
  },
}
```
Legacy model id: `kimi/k2p5`.

Volcano Engine (火山引擎) provides access to Doubao and other models in China.
- Provider ids: `volcengine`, `volcengine-plan`
- Auth env: `VOLCANO_ENGINE_API_KEY`
- Example model: `volcengine-plan/ark-code-latest`
- Onboarding: `openclaw onboard --auth-choice volcengine-api-key`

```json5
{
  agents: {
    defaults: { model: { primary: "volcengine-plan/ark-code-latest" } },
  },
}
```

Onboarding defaults to the coding surface, but the general `volcengine/*` surface is also available. In onboarding/configure model pickers, the Volcengine auth choice prefers both `volcengine/*` and `volcengine-plan/*` models.

BytePlus ARK provides access to the same models as Volcano Engine for international users.
- Provider ids: `byteplus`, `byteplus-plan`
- Auth env: `BYTEPLUS_API_KEY`
- Example model: `byteplus-plan/ark-code-latest`
- Onboarding: `openclaw onboard --auth-choice byteplus-api-key`

```json5
{
  agents: {
    defaults: { model: { primary: "byteplus-plan/ark-code-latest" } },
  },
}
```

Onboarding defaults to the coding surface, but the general `byteplus/*` surface is also available. In onboarding/configure model pickers, the BytePlus auth choice prefers both `byteplus/*` and `byteplus-plan/*` models.

Synthetic provides Anthropic-compatible models behind the `synthetic` provider:
- Provider id: `synthetic`
- Auth env: `SYNTHETIC_API_KEY`
- Example model: `synthetic/hf:MiniMaxAI/MiniMax-M2.5`
- Onboarding: `openclaw onboard --auth-choice synthetic-api-key`

```json5
{
  agents: {
    defaults: { model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.5" } },
  },
  models: {
    mode: "merge",
    providers: {
      synthetic: {
        baseUrl: "https://api.synthetic.new/anthropic",
        apiKey: "${SYNTHETIC_API_KEY}",
        api: "anthropic-messages",
        models: [{ id: "hf:MiniMaxAI/MiniMax-M2.5", name: "MiniMax M2.5" }],
      },
    },
  },
}
```
MiniMax is configured via `models.providers`.

- Auth choices: `--auth-choice minimax-global-oauth`, `--auth-choice minimax-cn-oauth`, `--auth-choice minimax-global-api`, `--auth-choice minimax-cn-api`.
- Auth env: `MINIMAX_API_KEY` for `minimax`; `MINIMAX_OAUTH_TOKEN` or `MINIMAX_API_KEY` for `minimax-portal`.
- See /providers/minimax for setup details, model options, and config snippets.

Plugin-owned capability split: text via `minimax/MiniMax-M2.7`, image generation via `minimax/image-01` and `minimax-portal/image-01`, and vision via `MiniMax-VL-01` under `minimax`.

LM Studio ships as a bundled provider plugin which uses the native API:
- Provider id: `lmstudio`
- Auth env: `LM_API_TOKEN`
- Default base URL: `http://localhost:1234/v1`

Then set a model (replace with one of the IDs returned by `http://localhost:1234/api/v1/models`):

```json5
{
  agents: {
    defaults: { model: { primary: "lmstudio/openai/gpt-oss-20b" } },
  },
}
```
OpenClaw uses LM Studio's native endpoints: `/api/v1/models`, `/api/v1/models/load`, and `/v1/chat/completions`.

Ollama ships as a bundled provider plugin and uses Ollama's native API:
- Provider id: `ollama`
- Example model: `ollama/llama3.3`

```bash
# Install Ollama, then pull a model:
ollama pull llama3.3
```
```json5
{
  agents: {
    defaults: { model: { primary: "ollama/llama3.3" } },
  },
}
```
Ollama is detected locally at `http://127.0.0.1:11434` (set `OLLAMA_API_KEY` if your setup needs it; `openclaw onboard` picks the server up).

vLLM ships as a bundled provider plugin for local/self-hosted OpenAI-compatible servers:
- Provider id: `vllm`
- Default base URL: `http://127.0.0.1:8000/v1`

To opt in to auto-discovery locally (any value works if your server doesn't enforce auth):

```bash
export VLLM_API_KEY="vllm-local"
```
Then set a model (replace with one of the IDs returned by `/v1/models`):

```json5
{
  agents: {
    defaults: { model: { primary: "vllm/your-model-id" } },
  },
}
```
See /providers/vllm for details.
SGLang ships as a bundled provider plugin for fast self-hosted OpenAI-compatible servers:
- Provider id: `sglang`
- Default base URL: `http://127.0.0.1:30000/v1`

To opt in to auto-discovery locally (any value works if your server does not enforce auth):

```bash
export SGLANG_API_KEY="sglang-local"
```
Then set a model (replace with one of the IDs returned by `/v1/models`):

```json5
{
  agents: {
    defaults: { model: { primary: "sglang/your-model-id" } },
  },
}
```
See /providers/sglang for details.
Example (OpenAI‑compatible):
```json5
{
  agents: {
    defaults: {
      model: { primary: "lmstudio/my-local-model" },
      models: { "lmstudio/my-local-model": { alias: "Local" } },
    },
  },
  models: {
    providers: {
      lmstudio: {
        baseUrl: "http://localhost:1234/v1",
        apiKey: "${LM_API_TOKEN}",
        api: "openai-completions",
        timeoutSeconds: 300,
        models: [
          {
            id: "my-local-model",
            name: "Local Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
```bash
openclaw onboard --auth-choice opencode-zen
openclaw models set opencode/claude-opus-4-6
openclaw models list
```
See also: Configuration for full configuration examples.