Thinking levels
What it does
- Inline directive in any inbound body: , , or .
- Levels (aliases):
off | minimal | low | medium | high | xhigh | adaptive | max
- minimal → “think”
- low → “think hard”
- medium → “think harder”
- high → “ultrathink” (max budget)
- xhigh → “ultrathink+” (GPT-5.2+ and Codex models, plus Anthropic Claude Opus 4.7 effort)
- adaptive → provider-managed adaptive thinking (supported for Claude 4.6 on Anthropic/Bedrock, Anthropic Claude Opus 4.7, and Google Gemini dynamic thinking)
- max → provider max reasoning (Anthropic Claude Opus 4.7; Ollama maps this to its highest native effort)
- , , , , and map to .
- maps to .
- Provider notes:
- Thinking menus and pickers are provider-profile driven. Provider plugins declare the exact level set for the selected model, including labels such as binary .
- , , and are only advertised for provider/model profiles that support them. Typed directives for unsupported levels are rejected with that model's valid options.
- Existing stored unsupported levels are remapped by provider profile rank. falls back to on non-adaptive models, while and fall back to the largest supported non-off level for the selected model.
- Anthropic Claude 4.6 models default to when no explicit thinking level is set.
- Anthropic Claude Opus 4.7 does not default to adaptive thinking. Its API effort default remains provider-owned unless you explicitly set a thinking level.
- Anthropic Claude Opus 4.7 maps to adaptive thinking plus `output_config.effort: "xhigh"`, because is a thinking directive and is the Opus 4.7 effort setting.
- Anthropic Claude Opus 4.7 also exposes ; it maps to the same provider-owned max effort path.
- DeepSeek V4 models expose ; both map to DeepSeek while lower non-off levels map to .
- Ollama thinking-capable models expose `/think low|medium|high|max`; maps to native because Ollama's native API accepts , , and effort strings.
- OpenAI GPT models map through model-specific Responses API effort support. sends only when the target model supports it; otherwise OpenClaw omits the disabled reasoning payload instead of sending an unsupported value.
- Custom OpenAI-compatible catalog entries can opt into by setting `models.providers.<provider>.models[].compat.supportedReasoningEfforts` to include . This uses the same compat metadata that maps outbound OpenAI reasoning effort payloads, so menus, session validation, agent CLI, and agree with transport behavior.
- Stale configured OpenRouter Hunter Alpha refs skip proxy reasoning injection because that retired route could return final answer text through reasoning fields.
- Google Gemini maps to Gemini's provider-owned dynamic thinking. Gemini 3 requests omit a fixed , while Gemini 2.5 requests send ; fixed levels still map to the closest Gemini or budget for that model family.
- MiniMax () on the Anthropic-compatible streaming path defaults to `thinking: { type: "disabled" }` unless you explicitly set thinking in model params or request params. This avoids leaked deltas from MiniMax's non-native Anthropic stream format.
- Z.AI () only supports binary thinking (/). Any non- level is treated as (mapped to ).
- Moonshot () maps to `thinking: { type: "disabled" }` and any non- level to `thinking: { type: "enabled" }`. When thinking is enabled, Moonshot only accepts ; OpenClaw normalizes incompatible values to .
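The binary providers above all reduce to the same shape: any non-off canonical level turns provider thinking on. A minimal sketch of that mapping for a Moonshot-style payload (illustrative names only, not OpenClaw's actual adapter code):

```typescript
// Hypothetical sketch of a binary thinking mapping, per the Moonshot bullet
// above. Type and function names are illustrative assumptions.
type ThinkingLevel =
  | "off" | "minimal" | "low" | "medium" | "high" | "xhigh" | "adaptive" | "max";

interface MoonshotThinking {
  thinking: { type: "disabled" | "enabled" };
}

function moonshotThinkingPayload(level: ThinkingLevel): MoonshotThinking {
  // Moonshot only distinguishes off vs. on: any non-off level enables thinking.
  return { thinking: { type: level === "off" ? "disabled" : "enabled" } };
}
```

Normalizing the finer-grained effort value that Moonshot accepts when thinking is enabled would happen in a second step, after this on/off decision.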
Resolution order
- Inline directive on the message (applies only to that message).
- Session override (set by sending a directive-only message).
- Per-agent default (`agents.list[].thinkingDefault` in config).
- Global default (`agents.defaults.thinkingDefault` in config).
- Fallback: provider-declared default when available; otherwise reasoning-capable models resolve to or the nearest supported non- level for that model, and non-reasoning models stay .
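The resolution order above is a first-defined-wins chain. An illustrative TypeScript sketch (not the actual implementation; the final fallback level for reasoning-capable models is a placeholder here, since in practice it depends on the selected model's profile):

```typescript
// Sketch of the resolution order: inline directive > session override >
// per-agent default > global default > provider default > model-dependent fallback.
type ThinkingLevel =
  | "off" | "minimal" | "low" | "medium" | "high" | "xhigh" | "adaptive" | "max";

interface ThinkingSources {
  inlineDirective?: ThinkingLevel;  // from the message itself
  sessionOverride?: ThinkingLevel;  // set by a directive-only message
  agentDefault?: ThinkingLevel;     // agents.list[].thinkingDefault
  globalDefault?: ThinkingLevel;    // agents.defaults.thinkingDefault
  providerDefault?: ThinkingLevel;  // provider-declared default, if any
  reasoningCapable: boolean;
}

function resolveThinkingLevel(s: ThinkingSources): ThinkingLevel {
  return (
    s.inlineDirective ??
    s.sessionOverride ??
    s.agentDefault ??
    s.globalDefault ??
    s.providerDefault ??
    // Placeholder fallback: the real value is the nearest supported
    // non-off level for the selected model.
    (s.reasoningCapable ? "medium" : "off")
  );
}
```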
Setting a session default
- Send a message that is only the directive (whitespace allowed), e.g. or .
- That sticks for the current session (per-sender by default); cleared by or session idle reset.
- Confirmation reply is sent (`Thinking level set to high.` / ). If the level is invalid (e.g. ), the command is rejected with a hint and the session state is left unchanged.
- Send (or ) with no argument to see the current thinking level.
Application by agent
- Embedded Pi: the resolved level is passed to the in-process Pi agent runtime.
Fast mode (/fast)
- Levels: .
- Directive-only message toggles a session fast-mode override and replies / .
- Send (or ) with no mode to see the current effective fast-mode state.
- OpenClaw resolves fast mode in this order:
- Inline/directive-only
- Session override
- Per-agent default (`agents.list[].fastModeDefault`)
- Per-model config: `agents.defaults.models["<provider>/<model>"].params.fastMode`
- Fallback:
- For , fast mode maps to OpenAI priority processing by sending on supported Responses requests.
- For , fast mode sends the same flag on Codex Responses. OpenClaw keeps one shared toggle across both auth paths.
- For direct public requests, including OAuth-authenticated traffic sent to , fast mode maps to Anthropic service tiers: sets , sets `service_tier=standard_only`.
- For on the Anthropic-compatible path, (or ) rewrites to .
- Explicit Anthropic / model params override the fast-mode default when both are set. OpenClaw still skips Anthropic service-tier injection for non-Anthropic proxy base URLs.
- shows only when fast mode is enabled.
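Putting the fast-mode defaults together, a config fragment might look like this (hypothetical: the agent id and model key are illustrative examples; only the `fastModeDefault` and `params.fastMode` paths come from the list above):

```json
{
  "agents": {
    "list": [
      { "id": "main", "fastModeDefault": true }
    ],
    "defaults": {
      "models": {
        "openai/gpt-5.2": {
          "params": { "fastMode": false }
        }
      }
    }
  }
}
```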
Verbose directives (/verbose or /v)
- Levels: (minimal) | | (default).
- Directive-only message toggles session verbose and replies / `Verbose logging disabled.`; invalid levels return a hint without changing state.
- stores an explicit session override; clear it via the Sessions UI by choosing .
- Inline directive affects only that message; session/global defaults apply otherwise.
- Send (or ) with no argument to see the current verbose level.
- When verbose is on, agents that emit structured tool results (Pi, other JSON agents) send each tool call back as its own metadata-only message, prefixed with `<emoji> <tool-name>: <arg>` when available (path/command). These tool summaries are sent as soon as each tool starts (separate bubbles), not as streaming deltas.
- Tool failure summaries remain visible in normal mode, but raw error detail suffixes are hidden unless verbose is or .
- When verbose is , tool outputs are also forwarded after completion (separate bubble, truncated to a safe length). If you toggle while a run is in-flight, subsequent tool bubbles honor the new setting.
Plugin trace directives (/trace)
- Levels: | (default).
- Directive-only message toggles session plugin trace output and replies / .
- Inline directive affects only that message; session/global defaults apply otherwise.
- Send (or ) with no argument to see the current trace level.
- is narrower than : it only exposes plugin-owned trace/debug lines such as Active Memory debug summaries.
- Trace lines can appear in and as a follow-up diagnostic message after the normal assistant reply.
Reasoning visibility (/reasoning)
- Levels: .
- Directive-only message toggles whether thinking blocks are shown in replies.
- When enabled, reasoning is sent as a separate message prefixed with .
- (Telegram only): streams reasoning into the Telegram draft bubble while the reply is generating, then sends the final answer without reasoning.
- Alias: .
- Send (or ) with no argument to see the current reasoning level.
- Resolution order: inline directive, then session override, then per-agent default (`agents.list[].reasoningDefault`), then fallback ().
Malformed local-model reasoning tags are handled conservatively. Closed blocks stay hidden on normal replies, and unclosed reasoning after already visible text is also hidden. If a reply is fully wrapped in a single unclosed opening tag and would otherwise deliver as empty text, OpenClaw removes the malformed opening tag and delivers the remaining text.
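Those three rules can be sketched roughly as follows, assuming a local model that wraps reasoning in `<think>...</think>` tags (the tag name is an assumption here, and this is a sketch rather than OpenClaw's actual implementation):

```typescript
// Conservative handling of reasoning tags in a reply:
// 1) closed blocks are always hidden,
// 2) unclosed reasoning after visible text is hidden,
// 3) a reply that is only a single unclosed opening tag plus text keeps the
//    text, so the delivery is not empty.
function stripReasoning(reply: string): string {
  // Rule 1: remove all properly closed reasoning blocks.
  const text = reply.replace(/<think>[\s\S]*?<\/think>/g, "");
  const open = text.indexOf("<think>");
  // Rule 2: visible text followed by an unclosed tag keeps only the prefix.
  if (open > 0) return text.slice(0, open).trim();
  // Rule 3: a fully wrapped, unclosed reply drops the tag and keeps the rest.
  if (open === 0) return text.slice("<think>".length).trim();
  return text.trim();
}
```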
Related
Heartbeats
- Heartbeat probe body is the configured heartbeat prompt (default: `Read HEARTBEAT.md if it exists (workspace context). Follow it strictly. Do not infer or repeat old tasks from prior chats. If nothing needs attention, reply HEARTBEAT_OK.`). Inline directives in a heartbeat message apply as usual (but avoid changing session defaults from heartbeats).
- Heartbeat delivery defaults to the final payload only. To also send the separate message (when available), set `agents.defaults.heartbeat.includeReasoning: true` or per-agent `agents.list[].heartbeat.includeReasoning: true`.
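A hypothetical config fragment showing both placements (the agent id is an illustrative example; the `includeReasoning` keys are the ones named above):

```json
{
  "agents": {
    "defaults": {
      "heartbeat": { "includeReasoning": true }
    },
    "list": [
      { "id": "main", "heartbeat": { "includeReasoning": true } }
    ]
  }
}
```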
Web chat UI
- The web chat thinking selector mirrors the session's stored level from the inbound session store/config when the page loads.
- Picking another level writes the session override immediately via ; it does not wait for the next send and it is not a one-shot override.
- The first option is always `Default (<resolved level>)`, where the resolved default comes from the active session model's provider thinking profile plus the same fallback logic that and use.
- The picker uses returned by the gateway session row/defaults, with kept as a legacy label list. The browser UI does not keep its own provider regex list; plugins own model-specific level sets.
- still works and updates the same stored session level, so chat directives and the picker stay in sync.
Provider profiles
- Provider plugins can expose `resolveThinkingProfile(ctx)` to define the model's supported levels and default.
- Provider plugins that proxy Claude models should reuse `resolveClaudeThinkingProfile(modelId)` from `openclaw/plugin-sdk/provider-model-shared` so direct Anthropic and proxy catalogs stay aligned.
- Each profile level has a stored canonical (, , , , , , , or ) and may include a display . Binary providers use `{ id: "low", label: "on" }`.
- Tool plugins that need to validate an explicit thinking override should use `api.runtime.agent.resolveThinkingPolicy({ provider, model })` plus `api.runtime.agent.normalizeThinkingLevel(...)`; they should not keep their own provider/model level lists.
- Tool plugins with access to configured custom model metadata can pass into so `compat.supportedReasoningEfforts` opt-ins are reflected in plugin-side validation.
- Published legacy hooks (, , and `resolveDefaultThinkingLevel`) remain as compatibility adapters, but new custom level sets should use .
- Gateway rows/defaults expose , , and so ACP/chat clients render the same profile ids and labels that runtime validation uses.
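Based on the `{ id: "low", label: "on" }` example above, a provider thinking profile plausibly has a shape like the following sketch. The type and field names here are assumptions, not the real plugin SDK interface; only the level ids and the binary-label example come from this page:

```typescript
// Hypothetical profile shape: each level has a stored canonical id and an
// optional display label; the profile also declares a default level.
type ThinkingLevelId =
  | "off" | "minimal" | "low" | "medium" | "high" | "xhigh" | "adaptive" | "max";

interface ThinkingProfileLevel {
  id: ThinkingLevelId;  // stored canonical level
  label?: string;       // optional display label, e.g. "on" for binary providers
}

interface ThinkingProfile {
  levels: ThinkingProfileLevel[];
  defaultLevel: ThinkingLevelId;  // assumed field name for the profile default
}

// A binary on/off provider (like the Z.AI case above) might declare:
const binaryProfile: ThinkingProfile = {
  levels: [{ id: "off" }, { id: "low", label: "on" }],
  defaultLevel: "off",
};
```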