    OpenClaw

    Documentation Mirror

    Documentation Overview

    Docs

    Auth credential semantics
    Scheduled tasks
    Hooks
    Automation & tasks
    Standing orders
    Task flow
    Background tasks
    BlueBubbles
    Broadcast groups
    Channel routing
    Discord
    Feishu
    Google Chat
    Group messages
    Groups
    iMessage
    Chat channels
    IRC
    LINE
    Channel location parsing
    Matrix
    Matrix migration
    Matrix push rules for quiet previews
    Mattermost
    Microsoft Teams
    Nextcloud Talk
    Nostr
    Pairing
    QA channel
    QQ bot
    Signal
    Slack
    Synology Chat
    Telegram
    Tlon
    Channel troubleshooting
    Twitch
    WeChat
    WhatsApp
    Yuanbao
    Zalo
    Zalo personal
    CI pipeline
    ACP
    Agent
    Agents
    Approvals
    Backup
    Browser
    Channels
    Clawbot
    `openclaw commitments`
    Completion
    Config
    Configure
    Cron
    Daemon
    Dashboard
    Devices
    Directory
    DNS
    Docs
    Doctor
    Flows (redirect)
    Gateway
    Health
    Hooks
    CLI reference
    Inference CLI
    Logs
    MCP
    Memory
    Message
    Migrate
    Models
    Node
    Nodes
    Onboard
    Pairing
    Plugins
    Proxy
    QR
    Reset
    Sandbox CLI
    Secrets
    Security
    Sessions
    Setup
    Skills
    Status
    System
    `openclaw tasks`
    TUI
    Uninstall
    Update
    Voicecall
    Webhooks
    Wiki
    Active memory
    Agent runtime
    Agent loop
    Agent runtimes
    Agent workspace
    Gateway architecture
    Channel docking
    Inferred commitments
    Compaction
    Context
    Context engine
    Delegate architecture
    Dreaming
    Experimental features
    Features
    Markdown formatting
    Memory overview
    Builtin memory engine
    Honcho memory
    QMD memory engine
    Memory search
    Messages
    Model failover
    Model providers
    Models CLI
    Multi-agent routing
    OAuth
    OpenClaw App SDK
    Presence
    QA overview
    Matrix QA
    Command queue
    Steering queue
    Retry policy
    Session management
    Session pruning
    Session tools
    SOUL.md personality guide
    Streaming and chunking
    System prompt
    Timezones
    TypeBox
    Typing indicators
    Usage tracking
    Date and time
    Node + tsx crash
    Diagnostics flags
    Authentication
    Background exec and process tool
    Bonjour discovery
    Bridge protocol
    CLI backends
    Configuration — agents
    Configuration — channels
    Configuration — tools and custom providers
    Configuration
    Configuration examples
    Configuration reference
    Diagnostics export
    Discovery and transports
    Doctor
    Gateway lock
    Health checks
    Heartbeat
    Gateway runbook
    Local models
    Gateway logging
    Multiple gateways
    Network model
    OpenAI chat completions
    OpenResponses API
    OpenShell
    OpenTelemetry export
    Gateway-owned pairing
    Prometheus metrics
    Gateway protocol
    Remote access
    Remote gateway setup
    Sandbox vs tool policy vs elevated
    Sandboxing
    Secrets management
    Secrets apply plan contract
    Security audit checks
    Security
    Tailscale
    Tools invoke API
    Troubleshooting
    Trusted proxy auth
    Debugging
    Environment variables
    FAQ
    FAQ: first-run setup
    FAQ: models and auth
    GPT-5.5 / Codex agentic parity
    GPT-5.5 / Codex parity maintainer notes
    Help
    Scripts
    Testing
    Testing: live suites
    General troubleshooting
    OpenClaw
    Ansible
    Azure
    Bun (experimental)
    ClawDock
    Release channels
    DigitalOcean
    Docker
    Docker VM runtime
    exe.dev
    Fly.io
    GCP
    Hetzner
    Hostinger
    Install
    Installer internals
    Kubernetes
    macOS VMs
    Migration guide
    Migrating from Claude
    Migrating from Hermes
    Nix
    Node.js
    Northflank
    Oracle Cloud
    Podman
    Railway
    Raspberry Pi
    Render
    Uninstall
    Updating
    Logging
    Network
    Audio and voice notes
    Camera capture
    Image and media support
    Nodes
    Location command
    Media understanding
    Talk mode
    Node troubleshooting
    Voice wake
    Pi integration architecture
    Pi development workflow
    Android app
    Platforms
    iOS app
    Linux app
    Gateway on macOS
    Canvas
    Gateway lifecycle
    macOS dev setup
    Health checks (macOS)
    Menu bar icon
    macOS logging
    Menu bar
    Peekaboo bridge
    macOS permissions
    Remote control
    macOS signing
    Skills (macOS)
    Voice overlay
    Voice wake (macOS)
    WebChat (macOS)
    macOS IPC
    macOS app
    Windows
    Plugin internals
    Plugin architecture internals
    Building plugins
    Plugin bundles
    Codex Computer Use
    Codex harness
    Community plugins
    Plugin compatibility
    Google Meet plugin
    Plugin hooks
    Plugin manifest
    Memory LanceDB
    Memory wiki
    Message presentation
    Agent harness plugins
    Building channel plugins
    Channel turn kernel
    Plugin entry points
    Plugin SDK migration
    Plugin SDK overview
    Building provider plugins
    Plugin runtime helpers
    Plugin setup and config
    Plugin SDK subpaths
    Plugin testing
    Skill workshop plugin
    Voice call plugin
    Webhooks plugin
    Zalo personal plugin
    OpenProse
    Alibaba Model Studio
    Anthropic
    Arcee AI
    Azure Speech
    Amazon Bedrock
    Amazon Bedrock Mantle
    Chutes
    Claude Max API proxy
    Cloudflare AI gateway
    ComfyUI
    Deepgram
    Deepinfra
    DeepSeek
    ElevenLabs
    Fal
    Fireworks
    GitHub Copilot
    GLM (Zhipu)
    Google (Gemini)
    Gradium
    Groq
    Hugging Face (inference)
    Provider directory
    Inferrs
    Inworld
    Kilocode
    LiteLLM
    LM Studio
    MiniMax
    Mistral
    Model provider quickstart
    Moonshot AI
    NVIDIA
    Ollama
    OpenAI
    OpenCode
    OpenCode Go
    OpenRouter
    Perplexity
    Qianfan
    Qwen
    Runway
    SGLang
    StepFun
    Synthetic
    Tencent Cloud (TokenHub)
    Together AI
    Venice AI
    Vercel AI gateway
    vLLM
    Volcengine (Doubao)
    Vydra
    xAI
    Xiaomi MiMo
    Z.AI
    Default AGENTS.md
    Release policy
    API usage and costs
    Credits
    Device model database
    Full release validation
    Memory configuration reference
    OpenClaw App SDK API design
    Prompt caching
    Rich output protocol
    RPC adapters
    SecretRef credential surface
    Session management deep dive
    AGENTS.md template
    BOOT.md template
    BOOTSTRAP.md template
    HEARTBEAT.md template
    IDENTITY template
    SOUL.md template
    TOOLS.md template
    USER template
    Tests
    Token use and costs
    Transcript hygiene
    Onboarding reference
    Contributing to the threat model
    Threat model (MITRE ATLAS)
    Formal verification (security models)
    Network proxy
    Agent bootstrapping
    Docs directory
    Getting started
    Docs hubs
    OpenClaw lore
    Onboarding (macOS app)
    Onboarding overview
    Personal assistant setup
    Setup
    Showcase
    Onboarding (CLI)
    CLI automation
    CLI setup reference
    ACP agents
    ACP agents — setup
    Agent send
    apply_patch tool
    Brave search
    Browser (OpenClaw-managed)
    Browser control API
    Browser troubleshooting
    Browser login
    WSL2 + Windows + remote Chrome CDP troubleshooting
    BTW side questions
    ClawHub
    Code execution
    Creating skills
    Diffs
    DuckDuckGo search
    Elevated mode
    Exa search
    Exec tool
    Exec approvals
    Exec approvals — advanced
    Firecrawl
    Gemini search
    Grok search
    Image generation
    Tools and plugins
    Kimi search
    LLM task
    Lobster
    Tool-loop detection
    Media overview
    MiniMax search
    Multi-agent sandbox and tools
    Music generation
    Ollama web search
    PDF tool
    Perplexity search
    Plugins
    Reactions
    SearXNG search
    Skills
    Skills config
    Slash commands
    Sub-agents
    Tavily
    Thinking levels
    Tokenjuice
    Trajectory bundles
    Text-to-speech
    Video generation
    Web search
    Web fetch
    Linux server
    Control UI
    Dashboard
    Web
    TUI
    WebChat

    OpenAPI Specs

    openapi

    Real-time Synchronized Documentation

    Last sync: 01/05/2026 07:00:06

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions.

    OpenClaw Docs

    v2.4.0 Production


    Technical reference for the OpenClaw framework. Real-time synchronization with the official documentation engine.

    Use this file to discover all available pages before exploring further.

    Thinking levels

    What it does

    • Inline directive in any inbound body: `/t <level>`, `/think:<level>`, or `/thinking <level>`.
    • Levels (aliases): `off | minimal | low | medium | high | xhigh | adaptive | max`
      • minimal → “think”
      • low → “think hard”
      • medium → “think harder”
      • high → “ultrathink” (max budget)
      • xhigh → “ultrathink+” (GPT-5.2+ and Codex models, plus Anthropic Claude Opus 4.7 effort)
      • adaptive → provider-managed adaptive thinking (supported for Claude 4.6 on Anthropic/Bedrock, Anthropic Claude Opus 4.7, and Google Gemini dynamic thinking)
      • max → provider max reasoning (Anthropic Claude Opus 4.7; Ollama maps this to its highest native `think` effort)
      • `x-high`, `x_high`, `extra-high`, `extra high`, and `extra_high` map to `xhigh`.
      • `highest` maps to `high`.
    • Provider notes:
      • Thinking menus and pickers are provider-profile driven. Provider plugins declare the exact level set for the selected model, including labels such as binary `on`.
      • `adaptive`, `xhigh`, and `max` are only advertised for provider/model profiles that support them. Typed directives for unsupported levels are rejected with that model's valid options.
      • Existing stored unsupported levels are remapped by provider profile rank. `adaptive` falls back to `medium` on non-adaptive models, while `xhigh` and `max` fall back to the largest supported non-off level for the selected model.
      • Anthropic Claude 4.6 models default to `adaptive` when no explicit thinking level is set.
      • Anthropic Claude Opus 4.7 does not default to adaptive thinking. Its API effort default remains provider-owned unless you explicitly set a thinking level.
      • Anthropic Claude Opus 4.7 maps `/think xhigh` to adaptive thinking plus `output_config.effort: "xhigh"`, because `/think` is a thinking directive and `xhigh` is the Opus 4.7 effort setting.
      • Anthropic Claude Opus 4.7 also exposes `/think max`; it maps to the same provider-owned max effort path.
      • DeepSeek V4 models expose `/think xhigh|max`; both map to DeepSeek `reasoning_effort: "max"`, while lower non-off levels map to `high`.
      • Ollama thinking-capable models expose `/think low|medium|high|max`; `max` maps to native `think: "high"` because Ollama's native API accepts `low`, `medium`, and `high` effort strings.
      • OpenAI GPT models map `/think` through model-specific Responses API effort support. `/think off` sends `reasoning.effort: "none"` only when the target model supports it; otherwise OpenClaw omits the disabled reasoning payload instead of sending an unsupported value.
      • Custom OpenAI-compatible catalog entries can opt into `/think xhigh` by setting `models.providers.<provider>.models[].compat.supportedReasoningEfforts` to include `"xhigh"`. This uses the same compat metadata that maps outbound OpenAI reasoning-effort payloads, so menus, session validation, the agent CLI, and `llm-task` agree with transport behavior.
      • Stale configured OpenRouter Hunter Alpha refs skip proxy reasoning injection because that retired route could return final answer text through reasoning fields.
      • Google Gemini maps `/think adaptive` to Gemini's provider-owned dynamic thinking. Gemini 3 requests omit a fixed `thinkingLevel`, while Gemini 2.5 requests send `thinkingBudget: -1`; fixed levels still map to the closest Gemini `thinkingLevel` or budget for that model family.
      • MiniMax (`minimax/*`) on the Anthropic-compatible streaming path defaults to `thinking: { type: "disabled" }` unless you explicitly set thinking in model params or request params. This avoids leaked `reasoning_content` deltas from MiniMax's non-native Anthropic stream format.
      • Z.AI (`zai/*`) only supports binary thinking (`on`/`off`). Any non-`off` level is treated as `on` (mapped to `low`).
      • Moonshot (`moonshot/*`) maps `/think off` to `thinking: { type: "disabled" }` and any non-`off` level to `thinking: { type: "enabled" }`. When thinking is enabled, Moonshot only accepts `tool_choice` `auto|none`; OpenClaw normalizes incompatible values to `auto`.
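    The alias mapping above can be sketched as a small normalization step. This is an illustrative sketch, not OpenClaw source; the real helper is exposed as `api.runtime.agent.normalizeThinkingLevel(...)`, whose exact signature may differ, and `normalizeLevel` here is a hypothetical name.

```typescript
// Hypothetical sketch of the level/alias normalization described above.
const CANONICAL = ["off", "minimal", "low", "medium", "high", "xhigh", "adaptive", "max"] as const;
type ThinkingLevel = (typeof CANONICAL)[number];

const ALIASES: Record<string, ThinkingLevel> = {
  "x-high": "xhigh",
  x_high: "xhigh",
  "extra-high": "xhigh",
  "extra high": "xhigh",
  extra_high: "xhigh",
  highest: "high",
};

function normalizeLevel(raw: string): ThinkingLevel | undefined {
  const key = raw.trim().toLowerCase();
  if ((CANONICAL as readonly string[]).includes(key)) return key as ThinkingLevel;
  // Unknown levels return undefined; the directive layer rejects them with a hint.
  return ALIASES[key];
}

console.log(normalizeLevel("Extra-High")); // → "xhigh"
console.log(normalizeLevel("highest"));    // → "high"
console.log(normalizeLevel("big"));        // → undefined
```

    Unsupported-but-valid levels (e.g. `xhigh` on a non-xhigh model) are a separate, per-profile remapping step, as described in the provider notes.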

    Resolution order

    1. Inline directive on the message (applies only to that message).
    2. Session override (set by sending a directive-only message).
    3. Per-agent default (`agents.list[].thinkingDefault` in config).
    4. Global default (`agents.defaults.thinkingDefault` in config).
    5. Fallback: provider-declared default when available; otherwise reasoning-capable models resolve to `medium` or the nearest supported non-`off` level for that model, and non-reasoning models stay `off`.

    Setting a session default

    • Send a message that is only the directive (whitespace allowed), e.g. `/think:medium` or `/t high`.
    • That sticks for the current session (per-sender by default); it is cleared by `/think:off` or a session idle reset.
    • A confirmation reply is sent (`Thinking level set to high.` / `Thinking disabled.`). If the level is invalid (e.g. `/thinking big`), the command is rejected with a hint and the session state is left unchanged.
    • Send `/think` (or `/think:`) with no argument to see the current thinking level.

    Application by agent

    • Embedded Pi: the resolved level is passed to the in-process Pi agent runtime.

    Fast mode (/fast)

    • Levels: `on|off`.
    • A directive-only message toggles a session fast-mode override and replies `Fast mode enabled.` / `Fast mode disabled.`
    • Send `/fast` (or `/fast status`) with no mode to see the current effective fast-mode state.
    • OpenClaw resolves fast mode in this order:
      1. Inline/directive-only `/fast on|off`
      2. Session override
      3. Per-agent default (`agents.list[].fastModeDefault`)
      4. Per-model config: `agents.defaults.models["<provider>/<model>"].params.fastMode`
      5. Fallback: `off`
    • For `openai/*`, fast mode maps to OpenAI priority processing by sending `service_tier=priority` on supported Responses requests.
    • For `openai-codex/*`, fast mode sends the same `service_tier=priority` flag on Codex Responses. OpenClaw keeps one shared `/fast` toggle across both auth paths.
    • For direct public `anthropic/*` requests, including OAuth-authenticated traffic sent to `api.anthropic.com`, fast mode maps to Anthropic service tiers: `/fast on` sets `service_tier=auto`, and `/fast off` sets `service_tier=standard_only`.
    • For `minimax/*` on the Anthropic-compatible path, `/fast on` (or `params.fastMode: true`) rewrites `MiniMax-M2.7` to `MiniMax-M2.7-highspeed`.
    • Explicit Anthropic `serviceTier` / `service_tier` model params override the fast-mode default when both are set. OpenClaw still skips Anthropic service-tier injection for non-Anthropic proxy base URLs.
    • `/status` shows `Fast` only when fast mode is enabled.
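    The per-provider mapping above can be summarized in a short sketch. This is not OpenClaw source; the function name and return shape are illustrative, and only the tier values themselves come from the text above.

```typescript
// Illustrative sketch of the fast-mode → service-tier mapping described above.
function fastModeParams(provider: string, fastOn: boolean): Record<string, string> {
  if (provider.startsWith("openai")) {
    // openai/* and openai-codex/* share the same priority flag.
    return fastOn ? { service_tier: "priority" } : {};
  }
  if (provider === "anthropic") {
    // Direct public Anthropic requests pick a tier for both on and off.
    return { service_tier: fastOn ? "auto" : "standard_only" };
  }
  // Other providers have their own paths (e.g. MiniMax rewrites the model id).
  return {};
}

console.log(fastModeParams("openai", true));     // { service_tier: "priority" }
console.log(fastModeParams("anthropic", false)); // { service_tier: "standard_only" }
```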

    Verbose directives (/verbose or /v)

    • Levels: `on` (minimal) | `full` | `off` (default).
    • A directive-only message toggles session verbose and replies `Verbose logging enabled.` / `Verbose logging disabled.`; invalid levels return a hint without changing state.
    • `/verbose off` stores an explicit session override; clear it via the Sessions UI by choosing `inherit`.
    • An inline directive affects only that message; session/global defaults apply otherwise.
    • Send `/verbose` (or `/verbose:`) with no argument to see the current verbose level.
    • When verbose is on, agents that emit structured tool results (Pi, other JSON agents) send each tool call back as its own metadata-only message, prefixed with `<emoji> <tool-name>: <arg>` when available (path/command). These tool summaries are sent as soon as each tool starts (separate bubbles), not as streaming deltas.
    • Tool failure summaries remain visible in normal mode, but raw error detail suffixes are hidden unless verbose is `on` or `full`.
    • When verbose is `full`, tool outputs are also forwarded after completion (separate bubble, truncated to a safe length). If you toggle `/verbose on|full|off` while a run is in flight, subsequent tool bubbles honor the new setting.

    Plugin trace directives (/trace)

    • Levels: `on` | `off` (default).
    • A directive-only message toggles session plugin trace output and replies `Plugin trace enabled.` / `Plugin trace disabled.`
    • An inline directive affects only that message; session/global defaults apply otherwise.
    • Send `/trace` (or `/trace:`) with no argument to see the current trace level.
    • `/trace` is narrower than `/verbose`: it only exposes plugin-owned trace/debug lines such as Active Memory debug summaries.
    • Trace lines can appear in `/status` and as a follow-up diagnostic message after the normal assistant reply.

    Reasoning visibility (/reasoning)

    • Levels: `on|off|stream`.
    • A directive-only message toggles whether thinking blocks are shown in replies.
    • When enabled, reasoning is sent as a separate message prefixed with `Reasoning:`.
    • `stream` (Telegram only): streams reasoning into the Telegram draft bubble while the reply is generating, then sends the final answer without reasoning.
    • Alias: `/reason`.
    • Send `/reasoning` (or `/reasoning:`) with no argument to see the current reasoning level.
    • Resolution order: inline directive, then session override, then per-agent default (`agents.list[].reasoningDefault`), then fallback (`off`).

    Malformed local-model reasoning tags are handled conservatively. Closed `<think>...</think>` blocks stay hidden on normal replies, and unclosed reasoning after already-visible text is also hidden. If a reply is fully wrapped in a single unclosed opening tag and would otherwise deliver as empty text, OpenClaw removes the malformed opening tag and delivers the remaining text.
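    That conservative handling can be sketched as follows. This is an illustrative reconstruction of the behavior described above, not OpenClaw's actual sanitizer, and `sanitizeReply` is a hypothetical name.

```typescript
// Illustrative sketch of conservative malformed <think> tag handling.
function sanitizeReply(raw: string): string {
  // Closed <think>...</think> blocks stay hidden on normal replies.
  const text = raw.replace(/<think>[\s\S]*?<\/think>/g, "");
  const open = text.indexOf("<think>");
  if (open > 0) {
    // Unclosed reasoning after already-visible text is hidden too.
    return text.slice(0, open).trim();
  }
  if (open === 0) {
    // Fully wrapped in a single unclosed opening tag: drop the tag so the
    // reply is not delivered as empty text.
    return text.slice("<think>".length).trim();
  }
  return text.trim();
}

console.log(sanitizeReply("<think>plan</think>Hello"));  // → "Hello"
console.log(sanitizeReply("Hello<think>trailing plan")); // → "Hello"
console.log(sanitizeReply("<think>only reasoning"));     // → "only reasoning"
```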

    Related

    • Elevated mode docs live in Elevated mode.

    Heartbeats

    • The heartbeat probe body is the configured heartbeat prompt (default: `Read HEARTBEAT.md if it exists (workspace context). Follow it strictly. Do not infer or repeat old tasks from prior chats. If nothing needs attention, reply HEARTBEAT_OK.`). Inline directives in a heartbeat message apply as usual (but avoid changing session defaults from heartbeats).
    • Heartbeat delivery defaults to the final payload only. To also send the separate `Reasoning:` message (when available), set `agents.defaults.heartbeat.includeReasoning: true` or per-agent `agents.list[].heartbeat.includeReasoning: true`.
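    For example, to include reasoning in heartbeat deliveries globally (an illustrative fragment; only the `heartbeat.includeReasoning` key comes from the text above, the surrounding shape is assumed):

```json5
{
  agents: {
    defaults: {
      heartbeat: {
        includeReasoning: true, // also deliver the separate "Reasoning:" message
      },
    },
  },
}
```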

    Web chat UI

    • The web chat thinking selector mirrors the session's stored level from the inbound session store/config when the page loads.
    • Picking another level writes the session override immediately via `sessions.patch`; it does not wait for the next send, and it is not a one-shot `thinkingOnce` override.
    • The first option is always `Default (<resolved level>)`, where the resolved default comes from the active session model's provider thinking profile plus the same fallback logic that `/status` and `session_status` use.
    • The picker uses `thinkingLevels` returned by the gateway session row/defaults, with `thinkingOptions` kept as a legacy label list. The browser UI does not keep its own provider regex list; plugins own model-specific level sets.
    • `/think:<level>` still works and updates the same stored session level, so chat directives and the picker stay in sync.

    Provider profiles

    • Provider plugins can expose `resolveThinkingProfile(ctx)` to define the model's supported levels and default.
    • Provider plugins that proxy Claude models should reuse `resolveClaudeThinkingProfile(modelId)` from `openclaw/plugin-sdk/provider-model-shared` so direct Anthropic and proxy catalogs stay aligned.
    • Each profile level has a stored canonical `id` (`off`, `minimal`, `low`, `medium`, `high`, `xhigh`, `adaptive`, or `max`) and may include a display `label`. Binary providers use `{ id: "low", label: "on" }`.
    • Tool plugins that need to validate an explicit thinking override should use `api.runtime.agent.resolveThinkingPolicy({ provider, model })` plus `api.runtime.agent.normalizeThinkingLevel(...)`; they should not keep their own provider/model level lists.
    • Tool plugins with access to configured custom model metadata can pass `catalog` into `resolveThinkingPolicy` so `compat.supportedReasoningEfforts` opt-ins are reflected in plugin-side validation.
    • Published legacy hooks (`supportsXHighThinking`, `isBinaryThinking`, and `resolveDefaultThinkingLevel`) remain as compatibility adapters, but new custom level sets should use `resolveThinkingProfile`.
    • Gateway rows/defaults expose `thinkingLevels`, `thinkingOptions`, and `thinkingDefault` so ACP/chat clients render the same profile ids and labels that runtime validation uses.

    © 2024 TaskFlow Mirror

    Powered by TaskFlow Sync Engine