    OpenClaw Docs

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions.

    Technical reference for the OpenClaw framework.
    Model providers

    Reference for LLM/model providers (not chat channels like WhatsApp/Telegram). For model selection rules, see Models.

    Quick rules

    Plugin-owned provider behavior

    Most provider-specific logic lives in provider plugins (`registerProvider(...)`) while OpenClaw keeps the generic inference loop. Plugins own onboarding, model catalogs, auth env-var mapping, transport/config normalization, tool-schema cleanup, failover classification, OAuth refresh, usage reporting, thinking/reasoning profiles, and more.

    The full list of provider-SDK hooks and bundled-plugin examples lives in Provider plugins. A provider that needs a totally custom request executor is a separate, deeper extension surface.

    note

    Provider-owned runner behavior lives on explicit provider hooks such as replay policy, tool-schema normalization, stream wrapping, and transport/request helpers. The legacy `ProviderPlugin.capabilities` static bag is compatibility-only and is no longer read by shared runner logic.

    API key rotation
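    The provider sections below list the rotation variables each built-in provider reads (numbered key variants plus a single live-override variable). As a sketch for OpenAI, assuming keys are rotated across the numbered variants (all values here are placeholders):

    ```bash
    # Numbered keys form a rotation pool (placeholder values)
    export OPENAI_API_KEY_1="sk-first"
    export OPENAI_API_KEY_2="sk-second"
    # Single override used instead of the pool
    export OPENCLAW_LIVE_OPENAI_KEY="sk-live-override"
    ```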

    Built-in providers (pi-ai catalog)

    OpenClaw ships with the pi‑ai catalog. These providers require no `models.providers` config; just set auth + pick a model.

    OpenAI

    • Provider: `openai`
    • Auth: `OPENAI_API_KEY`
    • Optional rotation: `OPENAI_API_KEYS`, `OPENAI_API_KEY_1`, `OPENAI_API_KEY_2`, plus `OPENCLAW_LIVE_OPENAI_KEY` (single override)
    • Example models: `openai/gpt-5.5`, `openai/gpt-5.4-mini`
    • Verify account/model availability with `openclaw models list --provider openai` if a specific install or API key behaves differently.
    • CLI: `openclaw onboard --auth-choice openai-api-key`
    • Default transport is `auto` (WebSocket-first, SSE fallback)
    • Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
    • OpenAI Responses WebSocket warm-up defaults to enabled via `params.openaiWsWarmup` (`true`/`false`)
    • OpenAI priority processing can be enabled via `agents.defaults.models["openai/<model>"].params.serviceTier`
    • `/fast` and `params.fastMode` map direct `openai/*` Responses requests to `service_tier=priority` on `api.openai.com`
    • Use `params.serviceTier` when you want an explicit tier instead of the shared `/fast` toggle
    • Hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`) apply only on native OpenAI traffic to `api.openai.com`, not generic OpenAI-compatible proxies
    • Native OpenAI routes also keep Responses `store`, prompt-cache hints, and OpenAI reasoning-compat payload shaping; proxy routes do not
    • `openai/gpt-5.3-codex-spark` is intentionally suppressed in OpenClaw because live OpenAI API requests reject it and the current Codex catalog does not expose it

    ```json5
    { agents: { defaults: { model: { primary: "openai/gpt-5.5" } } }, }
    ```
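    As a sketch combining the per-model keys above (the specific transport and tier values shown are just one possible choice):

    ```json5
    {
      agents: {
        defaults: {
          models: {
            "openai/gpt-5.5": {
              params: { transport: "sse", serviceTier: "priority", openaiWsWarmup: false },
            },
          },
        },
      },
    }
    ```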

    Anthropic

    • Provider: `anthropic`
    • Auth: `ANTHROPIC_API_KEY`
    • Optional rotation: `ANTHROPIC_API_KEYS`, `ANTHROPIC_API_KEY_1`, `ANTHROPIC_API_KEY_2`, plus `OPENCLAW_LIVE_ANTHROPIC_KEY` (single override)
    • Example model: `anthropic/claude-opus-4-6`
    • CLI: `openclaw onboard --auth-choice apiKey`
    • Direct public Anthropic requests support the shared `/fast` toggle and `params.fastMode`, including API-key and OAuth-authenticated traffic sent to `api.anthropic.com`; OpenClaw maps that to Anthropic `service_tier` (`auto` vs `standard_only`)
    • Preferred Claude CLI config keeps the model ref canonical and selects the CLI backend separately: `anthropic/claude-opus-4-7` with `agents.defaults.agentRuntime.id: "claude-cli"`. Legacy `claude-cli/claude-opus-4-7` refs still work for compatibility.

    note

    Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so OpenClaw treats Claude CLI reuse and `claude -p` usage as sanctioned for this integration unless Anthropic publishes a new policy. Anthropic setup-token remains available as a supported OpenClaw token path, but OpenClaw now prefers Claude CLI reuse and `claude -p` when available.
    ```json5
    { agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } }, }
    ```
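    The preferred Claude CLI setup can be sketched by combining the two keys named above (canonical model ref plus the separate runtime selector):

    ```json5
    {
      agents: {
        defaults: {
          model: { primary: "anthropic/claude-opus-4-7" },
          agentRuntime: { id: "claude-cli" },
        },
      },
    }
    ```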

    OpenAI Codex OAuth

    • Provider: `openai-codex`
    • Auth: OAuth (ChatGPT)
    • PI model ref: `openai-codex/gpt-5.5`
    • Native Codex app-server harness ref: `openai/gpt-5.5` with `agents.defaults.agentRuntime.id: "codex"`
    • Native Codex app-server harness docs: Codex harness
    • Legacy model refs: `codex/gpt-*`
    • Plugin boundary: `openai-codex/*` loads the OpenAI plugin; the native Codex app-server plugin is selected only by the Codex harness runtime or legacy `codex/*` refs.
    • CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
    • Default transport is `auto` (WebSocket-first, SSE fallback)
    • Override per PI model via `agents.defaults.models["openai-codex/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
    • `params.serviceTier` is also forwarded on native Codex Responses requests (`chatgpt.com/backend-api`)
    • Hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`) are only attached on native Codex traffic to `chatgpt.com/backend-api`, not generic OpenAI-compatible proxies
    • Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*`; OpenClaw maps that to `service_tier=priority`
    • `openai-codex/gpt-5.5` uses the Codex catalog native `contextWindow = 400000` and default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens`
    • Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
    • Use `openai-codex/gpt-5.5` when you want the Codex OAuth/subscription route; use `openai/gpt-5.5` when your API-key setup and local catalog expose the public API route.
    ```json5
    { agents: { defaults: { model: { primary: "openai-codex/gpt-5.5" } } }, }
    ```

    ```json5
    { models: { providers: { "openai-codex": { models: [{ id: "gpt-5.5", contextTokens: 160000 }], }, }, }, }
    ```

    Other subscription-style hosted options

    GLM models

    Z.AI Coding Plan or general API endpoints.

    MiniMax

    MiniMax Coding Plan OAuth or API key access.

    Qwen Cloud

    Qwen Cloud provider surface plus Alibaba DashScope and Coding Plan endpoint mapping.

    OpenCode

    • Auth: `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`)
    • Zen runtime provider: `opencode`
    • Go runtime provider: `opencode-go`
    • Example models: `opencode/claude-opus-4-6`, `opencode-go/kimi-k2.6`
    • CLI: `openclaw onboard --auth-choice opencode-zen` or `openclaw onboard --auth-choice opencode-go`

    ```json5
    { agents: { defaults: { model: { primary: "opencode/claude-opus-4-6" } } }, }
    ```

    Google Gemini (API key)

    • Provider: `google`
    • Auth: `GEMINI_API_KEY`
    • Optional rotation: `GEMINI_API_KEYS`, `GEMINI_API_KEY_1`, `GEMINI_API_KEY_2`, `GOOGLE_API_KEY` fallback, and `OPENCLAW_LIVE_GEMINI_KEY` (single override)
    • Example models: `google/gemini-3.1-pro-preview`, `google/gemini-3-flash-preview`
    • Compatibility: legacy OpenClaw config using `google/gemini-3.1-flash-preview` is normalized to `google/gemini-3-flash-preview`
    • Alias: `google/gemini-3.1-pro` is accepted and normalized to Google's live Gemini API id, `google/gemini-3.1-pro-preview`
    • CLI: `openclaw onboard --auth-choice gemini-api-key`
    • Thinking: `/think adaptive` uses Google dynamic thinking. Gemini 3/3.1 omit a fixed `thinkingLevel`; Gemini 2.5 sends `thinkingBudget: -1`.
    • Direct Gemini runs also accept `agents.defaults.models["google/<model>"].params.cachedContent` (or legacy `cached_content`) to forward a provider-native `cachedContents/...` handle; Gemini cache hits surface as OpenClaw `cacheRead`
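    A sketch combining the keys above; the `cachedContents/...` handle shown is a placeholder, not a real cache id:

    ```json5
    {
      agents: {
        defaults: {
          model: { primary: "google/gemini-3.1-pro-preview" },
          models: {
            "google/gemini-3.1-pro-preview": {
              params: { cachedContent: "cachedContents/example-handle" },
            },
          },
        },
      },
    }
    ```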

    Google Vertex and Gemini CLI

    • Providers: `google-vertex`, `google-gemini-cli`
    • Auth: Vertex uses gcloud ADC; Gemini CLI uses its OAuth flow

    warning

    Gemini CLI OAuth in OpenClaw is an unofficial integration. Some users have reported Google account restrictions after using third-party clients. Review Google terms and use a non-critical account if you choose to proceed.

    Gemini CLI OAuth is shipped as part of the bundled `google` plugin.

    Install Gemini CLI

    ```bash
    brew install gemini-cli
    ```

    Or with npm:

    ```bash
    npm install -g @google/gemini-cli
    ```

    Enable plugin

    ```bash
    openclaw plugins enable google
    ```

    Login

    ```bash
    openclaw models auth login --provider google-gemini-cli --set-default
    ```

    Default model: `google-gemini-cli/gemini-3-flash-preview`. You do **not** paste a client id or secret into `openclaw.json`. The CLI login flow stores tokens in auth profiles on the gateway host.

    Set project (if needed)

    If requests fail after login, set `GOOGLE_CLOUD_PROJECT` or `GOOGLE_CLOUD_PROJECT_ID` on the gateway host.
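    For example (the project id here is a placeholder):

    ```bash
    # Point Gemini CLI requests at a Google Cloud project on the gateway host
    export GOOGLE_CLOUD_PROJECT="my-project-id"
    ```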

    Gemini CLI JSON replies are parsed from `response`; usage falls back to `stats`, with `stats.cached` normalized into OpenClaw `cacheRead`.

    Z.AI (GLM)

    • Provider: `zai`
    • Auth: `ZAI_API_KEY`
    • Example model: `zai/glm-5.1`
    • CLI: `openclaw onboard --auth-choice zai-api-key`
      • Aliases: `z.ai/*` and `z-ai/*` normalize to `zai/*`
      • `zai-api-key` auto-detects the matching Z.AI endpoint; `zai-coding-global`, `zai-coding-cn`, `zai-global`, and `zai-cn` force a specific surface
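    Following the config pattern used for the other providers on this page, the example model above can be set as primary:

    ```json5
    { agents: { defaults: { model: { primary: "zai/glm-5.1" } } }, }
    ```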

    Vercel AI Gateway

    • Provider: `vercel-ai-gateway`
    • Auth: `AI_GATEWAY_API_KEY`
    • Example models: `vercel-ai-gateway/anthropic/claude-opus-4.6`, `vercel-ai-gateway/moonshotai/kimi-k2.6`
    • CLI: `openclaw onboard --auth-choice ai-gateway-api-key`
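    Following the config pattern used for the other providers on this page, one of the example models above can be set as primary:

    ```json5
    { agents: { defaults: { model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.6" } } }, }
    ```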

    Kilo Gateway

    • Provider: `kilocode`
    • Auth: `KILOCODE_API_KEY`
    • Example model: `kilocode/kilo/auto`
    • CLI: `openclaw onboard --auth-choice kilocode-api-key`
    • Base URL: `https://api.kilo.ai/api/gateway/`
    • Static fallback catalog ships `kilocode/kilo/auto`; live `https://api.kilo.ai/api/gateway/models` discovery can expand the runtime catalog further.
    • Exact upstream routing behind `kilocode/kilo/auto` is owned by Kilo Gateway, not hard-coded in OpenClaw.

    See /providers/kilocode for setup details.
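    Following the config pattern used for the other providers on this page, the auto-routed model above can be set as primary:

    ```json5
    { agents: { defaults: { model: { primary: "kilocode/kilo/auto" } } }, }
    ```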

    Other bundled provider plugins

    | Provider | Id | Auth env | Example model |
    | --- | --- | --- | --- |
    | BytePlus | `byteplus` / `byteplus-plan` | `BYTEPLUS_API_KEY` | `byteplus-plan/ark-code-latest` |
    | Cerebras | `cerebras` | `CEREBRAS_API_KEY` | `cerebras/zai-glm-4.7` |
    | Cloudflare AI Gateway | `cloudflare-ai-gateway` | `CLOUDFLARE_AI_GATEWAY_API_KEY` | — |
    | DeepInfra | `deepinfra` | `DEEPINFRA_API_KEY` | `deepinfra/deepseek-ai/DeepSeek-V3.2` |
    | DeepSeek | `deepseek` | `DEEPSEEK_API_KEY` | `deepseek/deepseek-v4-flash` |
    | GitHub Copilot | `github-copilot` | `COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN` | — |
    | Groq | `groq` | `GROQ_API_KEY` | — |
    | Hugging Face Inference | `huggingface` | `HUGGINGFACE_HUB_TOKEN` or `HF_TOKEN` | `huggingface/deepseek-ai/DeepSeek-R1` |
    | Kilo Gateway | `kilocode` | `KILOCODE_API_KEY` | `kilocode/kilo/auto` |
    | Kimi Coding | `kimi` | `KIMI_API_KEY` or `KIMICODE_API_KEY` | `kimi/kimi-code` |
    | MiniMax | `minimax` / `minimax-portal` | `MINIMAX_API_KEY` / `MINIMAX_OAUTH_TOKEN` | `minimax/MiniMax-M2.7` |
    | Mistral | `mistral` | `MISTRAL_API_KEY` | `mistral/mistral-large-latest` |
    | Moonshot | `moonshot` | `MOONSHOT_API_KEY` | `moonshot/kimi-k2.6` |
    | NVIDIA | `nvidia` | `NVIDIA_API_KEY` | `nvidia/nvidia/nemotron-3-super-120b-a12b` |
    | OpenRouter | `openrouter` | `OPENROUTER_API_KEY` | `openrouter/auto` |
    | Qianfan | `qianfan` | `QIANFAN_API_KEY` | `qianfan/deepseek-v3.2` |
    | Qwen Cloud | `qwen` | `QWEN_API_KEY` / `MODELSTUDIO_API_KEY` / `DASHSCOPE_API_KEY` | `qwen/qwen3.5-plus` |
    | StepFun | `stepfun` / `stepfun-plan` | `STEPFUN_API_KEY` | `stepfun/step-3.5-flash` |
    | Together | `together` | `TOGETHER_API_KEY` | `together/moonshotai/Kimi-K2.5` |
    | Venice | `venice` | `VENICE_API_KEY` | — |
    | Vercel AI Gateway | `vercel-ai-gateway` | `AI_GATEWAY_API_KEY` | `vercel-ai-gateway/anthropic/claude-opus-4.6` |
    | Volcano Engine (Doubao) | `volcengine` / `volcengine-plan` | `VOLCANO_ENGINE_API_KEY` | `volcengine-plan/ark-code-latest` |
    | xAI | `xai` | `XAI_API_KEY` | `xai/grok-4` |
    | Xiaomi | `xiaomi` | `XIAOMI_API_KEY` | `xiaomi/mimo-v2-flash` |

    Quirks worth knowing

    Providers via `models.providers` (custom/base URL)

    Use `models.providers` (or `models.json`) to add custom providers or OpenAI/Anthropic‑compatible proxies.

    Many of the bundled provider plugins below already publish a default catalog. Use explicit `models.providers.<id>` entries only when you want to override the default base URL, headers, or model list.

    Gateway model capability checks also read explicit `models.providers.<id>.models[]` metadata. If a custom or proxy model accepts images, set `input: ["text", "image"]` on that model so WebChat and node-origin attachment paths pass images as native model inputs instead of text-only media refs.
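    A sketch of a custom OpenAI-compatible proxy entry with image input declared; the provider id, model id, and base URL are placeholders:

    ```json5
    {
      models: {
        providers: {
          "my-proxy": {
            baseUrl: "http://localhost:4000/v1",
            api: "openai-completions",
            models: [
              // Declare image input so attachments are passed as native model inputs
              { id: "my-vision-model", input: ["text", "image"] },
            ],
          },
        },
      },
    }
    ```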

    Moonshot AI (Kimi)

    Moonshot ships as a bundled provider plugin. Use the built-in provider by default, and add an explicit `models.providers.moonshot` entry only when you need to override the base URL or model metadata:

    • Provider: `moonshot`
    • Auth: `MOONSHOT_API_KEY`
    • Example model: `moonshot/kimi-k2.6`
    • CLI: `openclaw onboard --auth-choice moonshot-api-key` or `openclaw onboard --auth-choice moonshot-api-key-cn`

    Kimi K2 model IDs:

    • `moonshot/kimi-k2.6`
    • `moonshot/kimi-k2.5`
    • `moonshot/kimi-k2-thinking`
    • `moonshot/kimi-k2-thinking-turbo`
    • `moonshot/kimi-k2-turbo`

    ```json5
    {
      agents: { defaults: { model: { primary: "moonshot/kimi-k2.6" } } },
      models: {
        mode: "merge",
        providers: {
          moonshot: {
            baseUrl: "https://api.moonshot.ai/v1",
            apiKey: "${MOONSHOT_API_KEY}",
            api: "openai-completions",
            models: [{ id: "kimi-k2.6", name: "Kimi K2.6" }],
          },
        },
      },
    }
    ```

    Kimi coding

    Kimi Coding uses Moonshot AI's Anthropic-compatible endpoint:

    • Provider: `kimi`
    • Auth: `KIMI_API_KEY`
    • Example model: `kimi/kimi-code`

    ```json5
    { env: { KIMI_API_KEY: "sk-..." }, agents: { defaults: { model: { primary: "kimi/kimi-code" } }, }, }
    ```

    Legacy `kimi/k2p5` remains accepted as a compatibility model id.

    Volcano Engine (Doubao)

    Volcano Engine (火山引擎) provides access to Doubao and other models in China.

    • Provider: `volcengine` (coding: `volcengine-plan`)
    • Auth: `VOLCANO_ENGINE_API_KEY`
    • Example model: `volcengine-plan/ark-code-latest`
    • CLI: `openclaw onboard --auth-choice volcengine-api-key`

    ```json5
    { agents: { defaults: { model: { primary: "volcengine-plan/ark-code-latest" } }, }, }
    ```

    Onboarding defaults to the coding surface, but the general `volcengine/*` catalog is registered at the same time.

    In onboarding/configure model pickers, the Volcengine auth choice prefers both `volcengine/*` and `volcengine-plan/*` rows. If those models are not loaded yet, OpenClaw falls back to the unfiltered catalog instead of showing an empty provider-scoped picker.

    • `volcengine/doubao-seed-1-8-251228` (Doubao Seed 1.8)
    • `volcengine/doubao-seed-code-preview-251028`
    • `volcengine/kimi-k2-5-260127` (Kimi K2.5)
    • `volcengine/glm-4-7-251222` (GLM 4.7)
    • `volcengine/deepseek-v3-2-251201` (DeepSeek V3.2 128K)
    • `volcengine-plan/ark-code-latest`
    • `volcengine-plan/doubao-seed-code`
    • `volcengine-plan/kimi-k2.5`
    • `volcengine-plan/kimi-k2-thinking`
    • `volcengine-plan/glm-4.7`

    BytePlus (International)

    BytePlus ARK provides access to the same models as Volcano Engine for international users.

    • Provider: `byteplus` (coding: `byteplus-plan`)
    • Auth: `BYTEPLUS_API_KEY`
    • Example model: `byteplus-plan/ark-code-latest`
    • CLI: `openclaw onboard --auth-choice byteplus-api-key`

    ```json5
    { agents: { defaults: { model: { primary: "byteplus-plan/ark-code-latest" } }, }, }
    ```

    Onboarding defaults to the coding surface, but the general `byteplus/*` catalog is registered at the same time.

    In onboarding/configure model pickers, the BytePlus auth choice prefers both `byteplus/*` and `byteplus-plan/*` rows. If those models are not loaded yet, OpenClaw falls back to the unfiltered catalog instead of showing an empty provider-scoped picker.

    • `byteplus/seed-1-8-251228` (Seed 1.8)
    • `byteplus/kimi-k2-5-260127` (Kimi K2.5)
    • `byteplus/glm-4-7-251222` (GLM 4.7)
    • `byteplus-plan/ark-code-latest`
    • `byteplus-plan/doubao-seed-code`
    • `byteplus-plan/kimi-k2.5`
    • `byteplus-plan/kimi-k2-thinking`
    • `byteplus-plan/glm-4.7`

    Synthetic

    Synthetic provides Anthropic-compatible models behind the `synthetic` provider:

    • Provider: `synthetic`
    • Auth: `SYNTHETIC_API_KEY`
    • Example model: `synthetic/hf:MiniMaxAI/MiniMax-M2.5`
    • CLI: `openclaw onboard --auth-choice synthetic-api-key`

    ```json5
    {
      agents: { defaults: { model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.5" } } },
      models: {
        mode: "merge",
        providers: {
          synthetic: {
            baseUrl: "https://api.synthetic.new/anthropic",
            apiKey: "${SYNTHETIC_API_KEY}",
            api: "anthropic-messages",
            models: [{ id: "hf:MiniMaxAI/MiniMax-M2.5", name: "MiniMax M2.5" }],
          },
        },
      },
    }
    ```

    MiniMax

    MiniMax is configured via `models.providers` because it uses custom endpoints:

    • MiniMax OAuth (Global): `--auth-choice minimax-global-oauth`
    • MiniMax OAuth (CN): `--auth-choice minimax-cn-oauth`
    • MiniMax API key (Global): `--auth-choice minimax-global-api`
    • MiniMax API key (CN): `--auth-choice minimax-cn-api`
    • Auth: `MINIMAX_API_KEY` for `minimax`; `MINIMAX_OAUTH_TOKEN` or `MINIMAX_API_KEY` for `minimax-portal`

    See /providers/minimax for setup details, model options, and config snippets.
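    As a sketch of the model selection only (the matching `models.providers` entry this provider needs is documented at /providers/minimax):

    ```json5
    { agents: { defaults: { model: { primary: "minimax/MiniMax-M2.7" } } }, }
    ```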

    note

    On MiniMax's Anthropic-compatible streaming path, OpenClaw disables thinking by default unless you explicitly set it, and `/fast on` rewrites `MiniMax-M2.7` to `MiniMax-M2.7-highspeed`.

    Plugin-owned capability split:

    • Text/chat defaults stay on `minimax/MiniMax-M2.7`
    • Image generation is `minimax/image-01` or `minimax-portal/image-01`
    • Image understanding is plugin-owned `MiniMax-VL-01` on both MiniMax auth paths
    • Web search stays on provider id `minimax`

    LM Studio

    LM Studio ships as a bundled provider plugin which uses the native API:

    • Provider: `lmstudio`
    • Auth: `LM_API_TOKEN`
    • Default inference base URL: `http://localhost:1234/v1`

    Then set a model (replace with one of the IDs returned by `http://localhost:1234/api/v1/models`):

    ```json5
    { agents: { defaults: { model: { primary: "lmstudio/openai/gpt-oss-20b" } }, }, }
    ```

    OpenClaw uses LM Studio's native `/api/v1/models` and `/api/v1/models/load` for discovery + auto-load, with `/v1/chat/completions` for inference by default. See /providers/lmstudio for setup and troubleshooting.

    Ollama

    Ollama ships as a bundled provider plugin and uses Ollama's native API:

    • Provider: `ollama`
    • Auth: None required (local server)
    • Example model: `ollama/llama3.3`
    • Installation: https://ollama.com/download

    ```bash
    # Install Ollama, then pull a model:
    ollama pull llama3.3
    ```

    ```json5
    { agents: { defaults: { model: { primary: "ollama/llama3.3" } }, }, }
    ```

    Ollama is detected locally at `http://127.0.0.1:11434` when you opt in with `OLLAMA_API_KEY`, and the bundled provider plugin adds Ollama directly to `openclaw onboard` and the model picker. See /providers/ollama for onboarding, cloud/local mode, and custom configuration.

    vLLM

    vLLM ships as a bundled provider plugin for local/self-hosted OpenAI-compatible servers:

    • Provider: `vllm`
    • Auth: Optional (depends on your server)
    • Default base URL: `http://127.0.0.1:8000/v1`

    To opt in to auto-discovery locally (any value works if your server doesn't enforce auth):

    ```bash
    export VLLM_API_KEY="vllm-local"
    ```

    Then set a model (replace with one of the IDs returned by `/v1/models`):

    ```json5
    { agents: { defaults: { model: { primary: "vllm/your-model-id" } }, }, }
    ```

    See /providers/vllm for details.

    SGLang

    SGLang ships as a bundled provider plugin for fast self-hosted OpenAI-compatible servers:

    • Provider: `sglang`
    • Auth: Optional (depends on your server)
    • Default base URL: `http://127.0.0.1:30000/v1`

    To opt in to auto-discovery locally (any value works if your server does not enforce auth):

    ```bash
    export SGLANG_API_KEY="sglang-local"
    ```

    Then set a model (replace with one of the IDs returned by `/v1/models`):

    ```json5
    { agents: { defaults: { model: { primary: "sglang/your-model-id" } }, }, }
    ```

    See /providers/sglang for details.

    Local proxies (LM Studio, vLLM, LiteLLM, etc.)

    Example (OpenAI-compatible):

    ```json5
    {
      agents: {
        defaults: {
          model: { primary: "lmstudio/my-local-model" },
          models: { "lmstudio/my-local-model": { alias: "Local" } },
        },
      },
      models: {
        providers: {
          lmstudio: {
            baseUrl: "http://localhost:1234/v1",
            apiKey: "${LM_API_TOKEN}",
            api: "openai-completions",
            timeoutSeconds: 300,
            models: [
              {
                id: "my-local-model",
                name: "Local Model",
                reasoning: false,
                input: ["text"],
                cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
                contextWindow: 200000,
                maxTokens: 8192,
              },
            ],
          },
        },
      },
    }
    ```

    CLI examples

    ```bash
    openclaw onboard --auth-choice opencode-zen
    openclaw models set opencode/claude-opus-4-6
    openclaw models list
    ```

    See also: Configuration for full configuration examples.

    Related

    • Configuration reference — model config keys
    • Model failover — fallback chains and retry behavior
    • Models — model configuration and aliases
    • Providers — per-provider setup guides
