
    OpenClaw

    Documentation Mirror

    Documentation Overview

    Docs

    Auth credential semantics
    Scheduled tasks
    Hooks
    Automation & tasks
    Standing orders
    Task flow
    Background tasks
    BlueBubbles
    Broadcast groups
    Channel routing
    Discord
    Feishu
    Google Chat
    Group messages
    Groups
    iMessage
    Chat channels
    IRC
    LINE
    Channel location parsing
    Matrix
    Matrix migration
    Matrix push rules for quiet previews
    Mattermost
    Microsoft Teams
    Nextcloud Talk
    Nostr
    Pairing
    QA channel
    QQ bot
    Signal
    Slack
    Synology Chat
    Telegram
    Tlon
    Channel troubleshooting
    Twitch
    WeChat
    WhatsApp
    Yuanbao
    Zalo
    Zalo personal
    CI pipeline
    ACP
    Agent
    Agents
    Approvals
    Backup
    Browser
    Channels
    Clawbot
    `openclaw commitments`
    Completion
    Config
    Configure
    Cron
    Daemon
    Dashboard
    Devices
    Directory
    DNS
    Docs
    Doctor
    Flows (redirect)
    Gateway
    Health
    Hooks
    CLI reference
    Inference CLI
    Logs
    MCP
    Memory
    Message
    Migrate
    Models
    Node
    Nodes
    Onboard
    Pairing
    Plugins
    Proxy
    QR
    Reset
    Sandbox CLI
    Secrets
    Security
    Sessions
    Setup
    Skills
    Status
    System
    `openclaw tasks`
    TUI
    Uninstall
    Update
    Voicecall
    Webhooks
    Wiki
    Active memory
    Agent runtime
    Agent loop
    Agent runtimes
    Agent workspace
    Gateway architecture
    Channel docking
    Inferred commitments
    Compaction
    Context
    Context engine
    Delegate architecture
    Dreaming
    Experimental features
    Features
    Markdown formatting
    Memory overview
    Builtin memory engine
    Honcho memory
    QMD memory engine
    Memory search
    Messages
    Model failover
    Model providers
    Models CLI
    Multi-agent routing
    OAuth
    OpenClaw App SDK
    Presence
    QA overview
    Matrix QA
    Command queue
    Steering queue
    Retry policy
    Session management
    Session pruning
    Session tools
    SOUL.md personality guide
    Streaming and chunking
    System prompt
    Timezones
    TypeBox
    Typing indicators
    Usage tracking
    Date and time
    Node + tsx crash
    Diagnostics flags
    Authentication
    Background exec and process tool
    Bonjour discovery
    Bridge protocol
    CLI backends
    Configuration — agents
    Configuration — channels
    Configuration — tools and custom providers
    Configuration
    Configuration examples
    Configuration reference
    Diagnostics export
    Discovery and transports
    Doctor
    Gateway lock
    Health checks
    Heartbeat
    Gateway runbook
    Local models
    Gateway logging
    Multiple gateways
    Network model
    OpenAI chat completions
    OpenResponses API
    OpenShell
    OpenTelemetry export
    Gateway-owned pairing
    Prometheus metrics
    Gateway protocol
    Remote access
    Remote gateway setup
    Sandbox vs tool policy vs elevated
    Sandboxing
    Secrets management
    Secrets apply plan contract
    Security audit checks
    Security
    Tailscale
    Tools invoke API
    Troubleshooting
    Trusted proxy auth
    Debugging
    Environment variables
    FAQ
    FAQ: first-run setup
    FAQ: models and auth
    GPT-5.5 / Codex agentic parity
    GPT-5.5 / Codex parity maintainer notes
    Help
    Scripts
    Testing
    Testing: live suites
    General troubleshooting
    OpenClaw
    Ansible
    Azure
    Bun (experimental)
    ClawDock
    Release channels
    DigitalOcean
    Docker
    Docker VM runtime
    exe.dev
    Fly.io
    GCP
    Hetzner
    Hostinger
    Install
    Installer internals
    Kubernetes
    macOS VMs
    Migration guide
    Migrating from Claude
    Migrating from Hermes
    Nix
    Node.js
    Northflank
    Oracle Cloud
    Podman
    Railway
    Raspberry Pi
    Render
    Uninstall
    Updating
    Logging
    Network
    Audio and voice notes
    Camera capture
    Image and media support
    Nodes
    Location command
    Media understanding
    Talk mode
    Node troubleshooting
    Voice wake
    Pi integration architecture
    Pi development workflow
    Android app
    Platforms
    iOS app
    Linux app
    Gateway on macOS
    Canvas
    Gateway lifecycle
    macOS dev setup
    Health checks (macOS)
    Menu bar icon
    macOS logging
    Menu bar
    Peekaboo bridge
    macOS permissions
    Remote control
    macOS signing
    Skills (macOS)
    Voice overlay
    Voice wake (macOS)
    WebChat (macOS)
    macOS IPC
    macOS app
    Windows
    Plugin internals
    Plugin architecture internals
    Building plugins
    Plugin bundles
    Codex Computer Use
    Codex harness
    Community plugins
    Plugin compatibility
    Google Meet plugin
    Plugin hooks
    Plugin manifest
    Memory LanceDB
    Memory wiki
    Message presentation
    Agent harness plugins
    Building channel plugins
    Channel turn kernel
    Plugin entry points
    Plugin SDK migration
    Plugin SDK overview
    Building provider plugins
    Plugin runtime helpers
    Plugin setup and config
    Plugin SDK subpaths
    Plugin testing
    Skill workshop plugin
    Voice call plugin
    Webhooks plugin
    Zalo personal plugin
    OpenProse
    Alibaba Model Studio
    Anthropic
    Arcee AI
    Azure Speech
    Amazon Bedrock
    Amazon Bedrock Mantle
    Chutes
    Claude Max API proxy
    Cloudflare AI gateway
    ComfyUI
    Deepgram
    Deepinfra
    DeepSeek
    ElevenLabs
    Fal
    Fireworks
    GitHub Copilot
    GLM (Zhipu)
    Google (Gemini)
    Gradium
    Groq
    Hugging Face (inference)
    Provider directory
    Inferrs
    Inworld
    Kilocode
    LiteLLM
    LM Studio
    MiniMax
    Mistral
    Model provider quickstart
    Moonshot AI
    NVIDIA
    Ollama
    OpenAI
    OpenCode
    OpenCode Go
    OpenRouter
    Perplexity
    Qianfan
    Qwen
    Runway
    SGLang
    StepFun
    Synthetic
    Tencent Cloud (TokenHub)
    Together AI
    Venice AI
    Vercel AI gateway
    vLLM
    Volcengine (Doubao)
    Vydra
    xAI
    Xiaomi MiMo
    Z.AI
    Default AGENTS.md
    Release policy
    API usage and costs
    Credits
    Device model database
    Full release validation
    Memory configuration reference
    OpenClaw App SDK API design
    Prompt caching
    Rich output protocol
    RPC adapters
    SecretRef credential surface
    Session management deep dive
    AGENTS.md template
    BOOT.md template
    BOOTSTRAP.md template
    HEARTBEAT.md template
    IDENTITY template
    SOUL.md template
    TOOLS.md template
    USER template
    Tests
    Token use and costs
    Transcript hygiene
    Onboarding reference
    Contributing to the threat model
    Threat model (MITRE ATLAS)
    Formal verification (security models)
    Network proxy
    Agent bootstrapping
    Docs directory
    Getting started
    Docs hubs
    OpenClaw lore
    Onboarding (macOS app)
    Onboarding overview
    Personal assistant setup
    Setup
    Showcase
    Onboarding (CLI)
    CLI automation
    CLI setup reference
    ACP agents
    ACP agents — setup
    Agent send
    apply_patch tool
    Brave search
    Browser (OpenClaw-managed)
    Browser control API
    Browser troubleshooting
    Browser login
    WSL2 + Windows + remote Chrome CDP troubleshooting
    BTW side questions
    ClawHub
    Code execution
    Creating skills
    Diffs
    DuckDuckGo search
    Elevated mode
    Exa search
    Exec tool
    Exec approvals
    Exec approvals — advanced
    Firecrawl
    Gemini search
    Grok search
    Image generation
    Tools and plugins
    Kimi search
    LLM task
    Lobster
    Tool-loop detection
    Media overview
    MiniMax search
    Multi-agent sandbox and tools
    Music generation
    Ollama web search
    PDF tool
    Perplexity search
    Plugins
    Reactions
    SearXNG search
    Skills
    Skills config
    Slash commands
    Sub-agents
    Tavily
    Thinking levels
    Tokenjuice
    Trajectory bundles
    Text-to-speech
    Video generation
    Web search
    Web fetch
    Linux server
    Control UI
    Dashboard
    Web
    TUI
    WebChat

    OpenAPI Specs

    openapi

    Real-time Synchronized Documentation

    Last sync: 01/05/2026 07:04:04

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions.

    OpenClaw Docs

    v2.4.0 Production

    Last synced: Today, 22:00

    Technical reference for the OpenClaw framework. Real-time synchronization with the official documentation engine.

    Use this file to discover all available pages before exploring further.

Configuration — agents

Agent-scoped configuration keys under `agents.*`, `multiAgent.*`, `session.*`, `messages.*`, and `talk.*`. For channels, tools, gateway runtime, and other top-level keys, see Configuration reference.

Agent defaults

`agents.defaults.workspace`

Default: `~/.openclaw/workspace`.

```json5
{ agents: { defaults: { workspace: "~/.openclaw/workspace" } } }
```

`agents.defaults.repoRoot`

Optional repository root shown in the system prompt's Runtime line. If unset, OpenClaw auto-detects by walking upward from the workspace.

```json5
{ agents: { defaults: { repoRoot: "~/Projects/openclaw" } } }
```

`agents.defaults.skills`

Optional default skill allowlist for agents that do not set `agents.list[].skills`.

```json5
{
  agents: {
    defaults: { skills: ["github", "weather"] },
    list: [
      { id: "writer" },                        // inherits github, weather
      { id: "docs", skills: ["docs-search"] }, // replaces defaults
      { id: "locked-down", skills: [] },       // no skills
    ],
  },
}
```

• Omit `agents.defaults.skills` for unrestricted skills by default.
• Omit `agents.list[].skills` to inherit the defaults.
• Set `agents.list[].skills: []` for no skills.
• A non-empty `agents.list[].skills` list is the final set for that agent; it does not merge with defaults.

`agents.defaults.skipBootstrap`

Disables automatic creation of workspace bootstrap files (`AGENTS.md`, `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`, `BOOTSTRAP.md`).

```json5
{ agents: { defaults: { skipBootstrap: true } } }
```

`agents.defaults.contextInjection`

Controls when workspace bootstrap files are injected into the system prompt. Default: `"always"`.

• `"continuation-skip"`: safe continuation turns (after a completed assistant response) skip workspace bootstrap re-injection, reducing prompt size. Heartbeat runs and post-compaction retries still rebuild context.
• `"never"`: disable workspace bootstrap and context-file injection on every turn. Use this only for agents that fully own their prompt lifecycle (custom context engines, native runtimes that build their own context, or specialized bootstrap-free workflows). Heartbeat and compaction-recovery turns also skip injection.

```json5
{ agents: { defaults: { contextInjection: "continuation-skip" } } }
```

`agents.defaults.bootstrapMaxChars`

Max characters per workspace bootstrap file before truncation. Default: `12000`.

```json5
{ agents: { defaults: { bootstrapMaxChars: 12000 } } }
```

`agents.defaults.bootstrapTotalMaxChars`

Max total characters injected across all workspace bootstrap files. Default: `60000`.

```json5
{ agents: { defaults: { bootstrapTotalMaxChars: 60000 } } }
```

`agents.defaults.bootstrapPromptTruncationWarning`

Controls agent-visible warning text when bootstrap context is truncated. Default: `"once"`.

• `"off"`: never inject warning text into the system prompt.
• `"once"`: inject the warning once per unique truncation signature (recommended).
• `"always"`: inject the warning on every run when truncation exists.

```json5
{ agents: { defaults: { bootstrapPromptTruncationWarning: "once" } } } // off | once | always
```

Context budget ownership map

OpenClaw has multiple high-volume prompt/context budgets, and they are intentionally split by subsystem instead of all flowing through one generic knob.

• `agents.defaults.bootstrapMaxChars` / `agents.defaults.bootstrapTotalMaxChars`: normal workspace bootstrap injection.
• `agents.defaults.startupContext.*`: one-shot reset/startup model-run prelude, including recent daily `memory/*.md` files. Bare chat `/new` and `/reset` commands are acknowledged without invoking the model.
• `skills.limits.*`: the compact skills list injected into the system prompt.
• `agents.defaults.contextLimits.*`: bounded runtime excerpts and injected runtime-owned blocks.
• `memory.qmd.limits.*`: indexed memory-search snippet and injection sizing.

Use the matching per-agent override only when one agent needs a different budget:

• `agents.list[].skillsLimits.maxSkillsPromptChars`
• `agents.list[].contextLimits.*`

`agents.defaults.startupContext`

Controls the first-turn startup prelude injected on reset/startup model runs. Bare chat `/new` and `/reset` commands acknowledge the reset without invoking the model, so they do not load this prelude.

```json5
{
  agents: {
    defaults: {
      startupContext: {
        enabled: true,
        applyOn: ["new", "reset"],
        dailyMemoryDays: 2,
        maxFileBytes: 16384,
        maxFileChars: 1200,
        maxTotalChars: 2800,
      },
    },
  },
}
```

`agents.defaults.contextLimits`

Shared defaults for bounded runtime context surfaces.

```json5
{
  agents: {
    defaults: {
      contextLimits: {
        memoryGetMaxChars: 12000,
        memoryGetDefaultLines: 120,
        toolResultMaxChars: 16000,
        postCompactionMaxChars: 1800,
      },
    },
  },
}
```

• `memoryGetMaxChars`: default `memory_get` excerpt cap before truncation metadata and a continuation notice are added.
• `memoryGetDefaultLines`: default `memory_get` line window when `lines` is omitted.
• `toolResultMaxChars`: live tool-result cap used for persisted results and overflow recovery.
• `postCompactionMaxChars`: AGENTS.md excerpt cap used during post-compaction refresh injection.

`agents.list[].contextLimits`

Per-agent override for the shared `contextLimits` knobs. Omitted fields inherit from `agents.defaults.contextLimits`.

```json5
{
  agents: {
    defaults: {
      contextLimits: { memoryGetMaxChars: 12000, toolResultMaxChars: 16000 },
    },
    list: [
      {
        id: "tiny-local",
        contextLimits: { memoryGetMaxChars: 6000, toolResultMaxChars: 8000 },
      },
    ],
  },
}
```

`skills.limits.maxSkillsPromptChars`

Global cap for the compact skills list injected into the system prompt. This does not affect reading `SKILL.md` files on demand.

```json5
{ skills: { limits: { maxSkillsPromptChars: 18000 } } }
```

`agents.list[].skillsLimits.maxSkillsPromptChars`

Per-agent override for the skills prompt budget.

```json5
{ agents: { list: [{ id: "tiny-local", skillsLimits: { maxSkillsPromptChars: 6000 } }] } }
```

`agents.defaults.imageMaxDimensionPx`

Max pixel size for the longest image side in transcript/tool image blocks before provider calls. Default: `1200`.

Lower values usually reduce vision-token usage and request payload size for screenshot-heavy runs. Higher values preserve more visual detail.

```json5
{ agents: { defaults: { imageMaxDimensionPx: 1200 } } }
```

`agents.defaults.userTimezone`

Timezone for system prompt context (not message timestamps). Falls back to the host timezone.

```json5
{ agents: { defaults: { userTimezone: "America/Chicago" } } }
```

`agents.defaults.timeFormat`

Time format in the system prompt. Default: `auto` (OS preference).

```json5
{ agents: { defaults: { timeFormat: "auto" } } } // auto | 12 | 24
```

`agents.defaults.model`

```json5
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-6": { alias: "opus" },
        "minimax/MiniMax-M2.7": { alias: "minimax" },
      },
      model: {
        primary: "anthropic/claude-opus-4-6",
        fallbacks: ["minimax/MiniMax-M2.7"],
      },
      imageModel: {
        primary: "openrouter/qwen/qwen-2.5-vl-72b-instruct:free",
        fallbacks: ["openrouter/google/gemini-2.0-flash-vision:free"],
      },
      imageGenerationModel: {
        primary: "openai/gpt-image-2",
        fallbacks: ["google/gemini-3.1-flash-image-preview"],
      },
      videoGenerationModel: {
        primary: "qwen/wan2.6-t2v",
        fallbacks: ["qwen/wan2.6-i2v"],
      },
      pdfModel: {
        primary: "anthropic/claude-opus-4-6",
        fallbacks: ["openai/gpt-5.4-mini"],
      },
      params: { cacheRetention: "long" }, // global default provider params
      agentRuntime: {
        id: "pi",       // pi | auto | registered harness id, e.g. codex
        fallback: "pi", // pi | none
      },
      pdfMaxBytesMb: 10,
      pdfMaxPages: 20,
      thinkingDefault: "low",
      verboseDefault: "off",
      reasoningDefault: "off",
      elevatedDefault: "on",
      timeoutSeconds: 600,
      mediaMaxMb: 5,
      contextTokens: 200000,
      maxConcurrent: 3,
    },
  },
}
```
• `model`: accepts either a string (`"provider/model"`) or an object (`{ primary, fallbacks }`).
  • String form sets only the primary model.
  • Object form sets the primary plus ordered failover models.
• `imageModel`: accepts either a string (`"provider/model"`) or an object (`{ primary, fallbacks }`).
  • Used by the `image` tool path as its vision-model config.
  • Also used as fallback routing when the selected/default model cannot accept image input.
  • Prefer explicit `provider/model` refs. Bare IDs are accepted for compatibility; if a bare ID uniquely matches a configured image-capable entry in `models.providers.*.models`, OpenClaw qualifies it to that provider. Ambiguous configured matches require an explicit provider prefix.
• `imageGenerationModel`: accepts either a string (`"provider/model"`) or an object (`{ primary, fallbacks }`).
  • Used by the shared image-generation capability and any future tool/plugin surface that generates images.
  • Typical values: `google/gemini-3.1-flash-image-preview` for native Gemini image generation, `fal/fal-ai/flux/dev` for fal, `openai/gpt-image-2` for OpenAI Images, or `openai/gpt-image-1.5` for transparent-background OpenAI PNG/WebP output.
  • If you select a provider/model directly, configure matching provider auth too (for example `GEMINI_API_KEY` or `GOOGLE_API_KEY` for `google/*`, `OPENAI_API_KEY` or OpenAI Codex OAuth for `openai/gpt-image-2` / `openai/gpt-image-1.5`, `FAL_KEY` for `fal/*`).
  • If omitted, `image_generate` can still infer an auth-backed provider default. It tries the current default provider first, then the remaining registered image-generation providers in provider-id order.
• `musicGenerationModel`: accepts either a string (`"provider/model"`) or an object (`{ primary, fallbacks }`).
  • Used by the shared music-generation capability and the built-in `music_generate` tool.
  • Typical values: `google/lyria-3-clip-preview`, `google/lyria-3-pro-preview`, or `minimax/music-2.6`.
  • If omitted, `music_generate` can still infer an auth-backed provider default. It tries the current default provider first, then the remaining registered music-generation providers in provider-id order.
  • If you select a provider/model directly, configure the matching provider auth/API key too.
• `videoGenerationModel`: accepts either a string (`"provider/model"`) or an object (`{ primary, fallbacks }`).
  • Used by the shared video-generation capability and the built-in `video_generate` tool.
  • Typical values: `qwen/wan2.6-t2v`, `qwen/wan2.6-i2v`, `qwen/wan2.6-r2v`, `qwen/wan2.6-r2v-flash`, or `qwen/wan2.7-r2v`.
  • If omitted, `video_generate` can still infer an auth-backed provider default. It tries the current default provider first, then the remaining registered video-generation providers in provider-id order.
  • If you select a provider/model directly, configure the matching provider auth/API key too.
  • The bundled Qwen video-generation provider supports up to 1 output video, 1 input image, 4 input videos, 10 seconds of duration, and provider-level `size`, `aspectRatio`, `resolution`, `audio`, and `watermark` options.
• `pdfModel`: accepts either a string (`"provider/model"`) or an object (`{ primary, fallbacks }`).
  • Used by the `pdf` tool for model routing.
  • If omitted, the PDF tool falls back to `imageModel`, then to the resolved session/default model.
• `pdfMaxBytesMb`: default PDF size limit for the `pdf` tool when `maxBytesMb` is not passed at call time.
• `pdfMaxPages`: default maximum pages considered by extraction fallback mode in the `pdf` tool.
• `verboseDefault`: default verbose level for agents. Values: `"off"`, `"on"`, `"full"`. Default: `"off"`.
• `reasoningDefault`: default reasoning visibility for agents. Values: `"off"`, `"on"`, `"stream"`. Per-agent `agents.list[].reasoningDefault` overrides this default. Configured reasoning defaults are only applied for owners, authorized senders, or operator-admin gateway contexts when no per-message or session reasoning override is set.
• `elevatedDefault`: default elevated-output level for agents. Values: `"off"`, `"on"`, `"ask"`, `"full"`. Default: `"on"`.
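
These three output defaults can be pinned together in one place. A minimal sketch using only the values documented above:

```json5
{
  agents: {
    defaults: {
      verboseDefault: "off",   // off | on | full
      reasoningDefault: "off", // off | on | stream
      elevatedDefault: "on",   // off | on | ask | full
    },
  },
}
```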
• `model.primary`: format `provider/model` (e.g. `openai/gpt-5.5` for API-key access or `openai-codex/gpt-5.5` for Codex OAuth). If you omit the provider, OpenClaw tries an alias first, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider (deprecated compatibility behavior, so prefer explicit `provider/model`). If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default.
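
As a sketch, the two accepted shapes for `model` look like this (the model refs are examples from this page; substitute your own configured providers):

```json5
// String form: sets only the primary model.
{ agents: { defaults: { model: "anthropic/claude-opus-4-6" } } }
```

```json5
// Object form: primary plus ordered failover models.
{
  agents: {
    defaults: {
      model: {
        primary: "openai/gpt-5.5",
        fallbacks: ["anthropic/claude-opus-4-6"],
      },
    },
  },
}
```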
• `models`: the configured model catalog and allowlist for `/model`. Each entry can include `alias` (shortcut) and `params` (provider-specific, for example `temperature`, `maxTokens`, `cacheRetention`, `context1m`, `responsesServerCompaction`, `responsesCompactThreshold`, `chat_template_kwargs`, `extra_body` / `extraBody`).
  • Safe edits: use `openclaw config set agents.defaults.models '<json>' --strict-json --merge` to add entries. `config set` refuses replacements that would remove existing allowlist entries unless you pass `--replace`.
  • Provider-scoped configure/onboarding flows merge selected provider models into this map and preserve unrelated providers already configured.
  • For direct OpenAI Responses models, server-side compaction is enabled automatically. Use `params.responsesServerCompaction: false` to stop injecting `context_management`, or `params.responsesCompactThreshold` to override the threshold. See OpenAI server-side compaction.
• `params`: global default provider parameters applied to all models. Set at `agents.defaults.params` (e.g. `{ cacheRetention: "long" }`).
• `params` merge precedence (config): `agents.defaults.params` (global base) is overridden by `agents.defaults.models["provider/model"].params` (per-model), then `agents.list[].params` (matching agent id) overrides by key. See Prompt Caching for details.
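
A sketch of that precedence chain; the `temperature` values here are hypothetical and chosen only to show which layer wins:

```json5
{
  agents: {
    defaults: {
      params: { cacheRetention: "long", temperature: 0.7 }, // global base
      models: {
        "openai/gpt-5.5": { params: { temperature: 0.3 } }, // per-model: beats global
      },
    },
    list: [
      { id: "writer", params: { temperature: 0.9 } }, // per-agent: wins by key for "writer"
    ],
  },
}
```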
• `params.extra_body` / `params.extraBody`: advanced pass-through JSON merged into `api: "openai-completions"` request bodies for OpenAI-compatible proxies. If it collides with generated request keys, the extra body wins; non-native completions routes still strip the OpenAI-only `store` key afterward.
• `params.chat_template_kwargs`: vLLM/OpenAI-compatible chat-template arguments merged into top-level `api: "openai-completions"` request bodies. For `vllm/nemotron-3-*` with thinking off, the bundled vLLM plugin automatically sends `enable_thinking: false` and `force_nonempty_content: true`; explicit `chat_template_kwargs` override generated defaults, and `extra_body.chat_template_kwargs` still has final precedence. For vLLM Qwen thinking controls, set `params.qwenThinkingFormat` to `"chat-template"` or `"top-level"` on that model entry.
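
A hedged sketch of both pass-through knobs on one model entry; the model id and parameter values are illustrative, not shipped defaults:

```json5
{
  agents: {
    defaults: {
      models: {
        "vllm/nemotron-3-8b": {
          params: {
            chat_template_kwargs: { enable_thinking: false }, // overrides generated defaults
            extra_body: { repetition_penalty: 1.05 },         // wins on request-key collisions
          },
        },
      },
    },
  },
}
```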
• `compat.supportedReasoningEfforts`: per-model OpenAI-compatible reasoning-effort list. Include `"xhigh"` for custom endpoints that truly accept it; OpenClaw then exposes `/think xhigh` in command menus, Gateway session rows, session patch validation, agent CLI validation, and `llm-task` validation for that configured provider/model. Use `compat.reasoningEffortMap` when the backend wants a provider-specific value for a canonical level.
• `params.preserveThinking`: Z.AI-only opt-in for preserved thinking. When enabled and thinking is on, OpenClaw sends `thinking.clear_thinking: false` and replays prior `reasoning_content`; see Z.AI thinking and preserved thinking.
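
A minimal opt-in sketch for preserved thinking on a Z.AI model entry (the model id is illustrative; use your configured `zai/<model>`):

```json5
{
  agents: {
    defaults: {
      models: {
        "zai/glm-4.7": { params: { preserveThinking: true } },
      },
    },
  },
}
```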
• `agentRuntime`: default low-level agent runtime policy. An omitted id defaults to OpenClaw Pi. Use `id: "pi"` to force the built-in PI harness, `id: "auto"` to let registered plugin harnesses claim supported models, a registered harness id such as `id: "codex"`, or a supported CLI backend alias such as `id: "claude-cli"`. Set `fallback: "none"` to disable automatic PI fallback. Explicit plugin runtimes such as `codex` fail closed by default unless you set `fallback: "pi"` in the same override scope. Keep model refs canonical as `provider/model`; select Codex, Claude CLI, Gemini CLI, and other execution backends through runtime config instead of legacy runtime provider prefixes. See Agent runtimes for how this differs from provider/model selection.
• Config writers that mutate these fields (for example `/models set`, `/models set-image`, and fallback add/remove commands) save the canonical object form and preserve existing fallback lists when possible.
• `maxConcurrent`: max parallel agent runs across sessions (each session is still serialized). Default: 4.

`agents.defaults.agentRuntime`

`agentRuntime` controls which low-level executor runs agent turns. Most deployments should keep the default OpenClaw Pi runtime. Override it when a trusted plugin provides a native harness, such as the bundled Codex app-server harness, or when you want a supported CLI backend such as Claude CLI. For the mental model, see Agent runtimes.

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
        fallback: "none",
      },
    },
  },
}
```
    • text
      id
      :
      text
      "auto"
      ,
      text
      "pi"
      , a registered plugin harness id, or a supported CLI backend alias. The bundled Codex plugin registers
      text
      codex
      ; the bundled Anthropic plugin provides the
      text
      claude-cli
      CLI backend.
    • text
      fallback
      :
      text
      "pi"
      or
      text
      "none"
      . In
      text
      id: "auto"
      , omitted fallback defaults to
      text
      "pi"
      so old configs can keep using PI when no plugin harness claims a run. In explicit plugin runtime mode, such as
      text
      id: "codex"
      , omitted fallback defaults to
      text
      "none"
      so a missing harness fails instead of silently using PI. Runtime overrides do not inherit fallback from a broader scope; set
      text
      fallback: "pi"
      alongside the explicit runtime when you intentionally want that compatibility fallback. Selected plugin harness failures always surface directly.
    • Environment overrides:
      text
      OPENCLAW_AGENT_RUNTIME=<id|auto|pi>
      overrides
      text
      id
      ;
      text
      OPENCLAW_AGENT_HARNESS_FALLBACK=pi|none
      overrides fallback for that process.
    • For Codex-only deployments, set
      text
      model: "openai/gpt-5.5"
      and
      text
      agentRuntime.id: "codex"
      . You may also set
      text
      agentRuntime.fallback: "none"
      explicitly for readability; it is the default for explicit plugin runtimes.
    • For Claude CLI deployments, prefer `model: "anthropic/claude-opus-4-7"` plus `agentRuntime.id: "claude-cli"`. Legacy `claude-cli/claude-opus-4-7` model refs still work for compatibility, but new config should keep provider/model selection canonical and put the execution backend in `agentRuntime.id`.
    • Older runtime-policy keys are rewritten to `agentRuntime` by `openclaw doctor --fix`.
    • Harness choice is pinned per session id after the first embedded run. Config/env changes affect new or reset sessions, not an existing transcript. Legacy sessions with transcript history but no recorded pin are treated as PI-pinned. `/status` reports the effective runtime, for example `Runtime: OpenClaw Pi Default` or `Runtime: OpenAI Codex`.
    • This only controls text agent-turn execution. Media generation, vision, PDF, music, video, and TTS still use their provider/model settings.
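
    Putting the Claude CLI guidance above together, a minimal sketch of such a deployment might look like the following (model id and runtime alias are the ones named in the bullets; the explicit `fallback` is optional, since `"none"` is already the default for explicit plugin runtimes):

    ```json5
    {
      agents: {
        defaults: {
          // Canonical provider/model selection...
          model: "anthropic/claude-opus-4-7",
          // ...with the execution backend chosen separately.
          agentRuntime: { id: "claude-cli", fallback: "none" },
        },
      },
    }
    ```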

    Built-in alias shorthands (only apply when the model is in `agents.defaults.models`):

    | Alias | Model |
    | --- | --- |
    | `opus` | `anthropic/claude-opus-4-6` |
    | `sonnet` | `anthropic/claude-sonnet-4-6` |
    | `gpt` | `openai/gpt-5.5` or `openai-codex/gpt-5.5` |
    | `gpt-mini` | `openai/gpt-5.4-mini` |
    | `gpt-nano` | `openai/gpt-5.4-nano` |
    | `gemini` | `google/gemini-3.1-pro-preview` |
    | `gemini-flash` | `google/gemini-3-flash-preview` |
    | `gemini-flash-lite` | `google/gemini-3.1-flash-lite-preview` |

    Your configured aliases always win over defaults.

    Z.AI GLM-4.x models automatically enable thinking mode unless you set `--thinking off` or define `agents.defaults.models["zai/<model>"].params.thinking` yourself. Z.AI models also enable `tool_stream` by default for tool-call streaming; set `agents.defaults.models["zai/<model>"].params.tool_stream` to `false` to disable it. Anthropic Claude 4.6 models default to `adaptive` thinking when no explicit thinking level is set.
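
    As a sketch of the Z.AI overrides above: the model id below is a placeholder, and the value shape accepted by `params.thinking` is provider-specific, so treat it as illustrative rather than canonical.

    ```json5
    {
      agents: {
        defaults: {
          models: {
            // "zai/glm-4.7" is a placeholder id; substitute your actual zai/<model> entry.
            "zai/glm-4.7": {
              params: {
                // Defining params.thinking yourself suppresses the automatic default;
                // the accepted value shape depends on the provider.
                thinking: "off",
                tool_stream: false, // opt out of tool-call streaming
              },
            },
          },
        },
      },
    }
    ```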

    `agents.defaults.cliBackends`

    Optional CLI backends for text-only fallback runs (no tool calls). Useful as a backup when API providers fail.

    ```json5
    {
      agents: {
        defaults: {
          cliBackends: {
            "codex-cli": {
              command: "/opt/homebrew/bin/codex",
            },
            "my-cli": {
              command: "my-cli",
              args: ["--json"],
              output: "json",
              modelArg: "--model",
              sessionArg: "--session",
              sessionMode: "existing",
              systemPromptArg: "--system",
              // Or use systemPromptFileArg when the CLI accepts a prompt file flag.
              systemPromptWhen: "first",
              imageArg: "--image",
              imageMode: "repeat",
            },
          },
        },
      },
    }
    ```
    • CLI backends are text-first; tools are always disabled.
    • Sessions are supported when `sessionArg` is set.
    • Image pass-through is supported when `imageArg` accepts file paths.

    `agents.defaults.systemPromptOverride`

    Replace the entire OpenClaw-assembled system prompt with a fixed string. Set at the default level (`agents.defaults.systemPromptOverride`) or per agent (`agents.list[].systemPromptOverride`). Per-agent values take precedence; an empty or whitespace-only value is ignored. Useful for controlled prompt experiments.

    ```json5
    {
      agents: {
        defaults: {
          systemPromptOverride: "You are a helpful assistant.",
        },
      },
    }
    ```

    `agents.defaults.promptOverlays`

    Provider-independent prompt overlays applied by model family. GPT-5-family model ids receive the shared behavior contract across providers; `personality` controls only the friendly interaction-style layer.

    ```json5
    {
      agents: {
        defaults: {
          promptOverlays: {
            gpt5: {
              personality: "friendly", // friendly | on | off
            },
          },
        },
      },
    }
    ```
    • `"friendly"` (default) and `"on"` enable the friendly interaction-style layer.
    • `"off"` disables only the friendly layer; the tagged GPT-5 behavior contract remains enabled.
    • Legacy `plugins.entries.openai.config.personality` is still read when this shared setting is unset.

    `agents.defaults.heartbeat`

    Periodic heartbeat runs.

    ```json5
    {
      agents: {
        defaults: {
          heartbeat: {
            every: "30m", // 0m disables
            model: "openai/gpt-5.4-mini",
            includeReasoning: false,
            includeSystemPromptSection: true, // default: true; false omits the Heartbeat section from the system prompt
            lightContext: false, // default: false; true keeps only HEARTBEAT.md from workspace bootstrap files
            isolatedSession: false, // default: false; true runs each heartbeat in a fresh session (no conversation history)
            skipWhenBusy: false, // default: false; true also waits for subagent/nested lanes
            session: "main",
            to: "+15555550123",
            directPolicy: "allow", // allow (default) | block
            target: "none", // default: none | options: last | whatsapp | telegram | discord | ...
            prompt: "Read HEARTBEAT.md if it exists...",
            ackMaxChars: 300,
            suppressToolErrorWarnings: false,
            timeoutSeconds: 45,
          },
        },
      },
    }
    ```
    • `every`: duration string (ms/s/m/h). Default: `30m` (API-key auth) or `1h` (OAuth auth). Set to `0m` to disable.
    • `includeSystemPromptSection`: when false, omits the Heartbeat section from the system prompt and skips `HEARTBEAT.md` injection into bootstrap context. Default: `true`.
    • `suppressToolErrorWarnings`: when true, suppresses tool error warning payloads during heartbeat runs.
    • `timeoutSeconds`: maximum time in seconds allowed for a heartbeat agent turn before it is aborted. Leave unset to use `agents.defaults.timeoutSeconds`.
    • `directPolicy`: direct/DM delivery policy. `allow` (default) permits direct-target delivery. `block` suppresses direct-target delivery and emits `reason=dm-blocked`.
    • `lightContext`: when true, heartbeat runs use lightweight bootstrap context and keep only `HEARTBEAT.md` from workspace bootstrap files.
    • `isolatedSession`: when true, each heartbeat runs in a fresh session with no prior conversation history. Same isolation pattern as cron `sessionTarget: "isolated"`. Reduces per-heartbeat token cost from ~100K to ~2-5K tokens.
    • `skipWhenBusy`: when true, heartbeat runs also defer on extra busy lanes: subagent or nested command work. Cron lanes always defer heartbeats, even without this flag.
    • Per-agent: set `agents.list[].heartbeat`. When any agent defines `heartbeat`, only those agents run heartbeats.
    • Heartbeats run full agent turns — shorter intervals burn more tokens.
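
    The token-saving options above can be combined. A minimal sketch of a cheap heartbeat, using only keys from the reference block above (interval and model choice are illustrative):

    ```json5
    {
      agents: {
        defaults: {
          heartbeat: {
            every: "1h",
            model: "openai/gpt-5.4-mini", // small model for housekeeping turns
            lightContext: true,           // keep only HEARTBEAT.md from bootstrap files
            isolatedSession: true,        // fresh session: ~2-5K tokens instead of ~100K
            skipWhenBusy: true,           // also defer on subagent/nested lanes
          },
        },
      },
    }
    ```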

    `agents.defaults.compaction`

    ```json5
    {
      agents: {
        defaults: {
          compaction: {
            mode: "safeguard", // default | safeguard
            provider: "my-provider", // id of a registered compaction provider plugin (optional)
            timeoutSeconds: 900,
            reserveTokensFloor: 24000,
            keepRecentTokens: 50000,
            identifierPolicy: "strict", // strict | off | custom
            identifierInstructions: "Preserve deployment IDs, ticket IDs, and host:port pairs exactly.", // used when identifierPolicy=custom
            qualityGuard: { enabled: true, maxRetries: 1 },
            midTurnPrecheck: { enabled: false }, // optional Pi tool-loop pressure check
            postCompactionSections: ["Session Startup", "Red Lines"], // [] disables reinjection
            model: "openrouter/anthropic/claude-sonnet-4-6", // optional compaction-only model override
            truncateAfterCompaction: true, // rotate to a smaller successor JSONL after compaction
            maxActiveTranscriptBytes: "20mb", // optional preflight local compaction trigger
            notifyUser: true, // send brief notices when compaction starts and completes (default: false)
            memoryFlush: {
              enabled: true,
              model: "ollama/qwen3:8b", // optional memory-flush-only model override
              softThresholdTokens: 6000,
              systemPrompt: "Session nearing compaction. Store durable memories now.",
              prompt: "Write any lasting notes to memory/YYYY-MM-DD.md; reply with the exact silent token NO_REPLY if nothing to store.",
            },
          },
        },
      },
    }
    ```
    • `mode`: `default` or `safeguard` (chunked summarization for long histories). See Compaction.
    • `provider`: id of a registered compaction provider plugin. When set, the provider's `summarize()` is called instead of built-in LLM summarization. Falls back to built-in on failure. Setting a provider forces `mode: "safeguard"`. See Compaction.
    • `timeoutSeconds`: maximum seconds allowed for a single compaction operation before OpenClaw aborts it. Default: `900`.
    • `keepRecentTokens`: Pi cut-point budget for keeping the most recent transcript tail verbatim. Manual `/compact` honors this when explicitly set; otherwise manual compaction is a hard checkpoint.
    • `identifierPolicy`: `strict` (default), `off`, or `custom`. `strict` prepends built-in opaque identifier retention guidance during compaction summarization.
    • `identifierInstructions`: optional custom identifier-preservation text used when `identifierPolicy=custom`.
    • `qualityGuard`: retry-on-malformed-output checks for safeguard summaries. Enabled by default in safeguard mode; set `enabled: false` to skip the audit.
    • `midTurnPrecheck`: optional Pi tool-loop pressure check. When `enabled: true`, OpenClaw checks context pressure after tool results are appended and before the next model call. If the context no longer fits, it aborts the current attempt before submitting the prompt and reuses the existing precheck recovery path to truncate tool results or compact and retry. Works with both `default` and `safeguard` compaction modes. Default: disabled.
    • `postCompactionSections`: optional AGENTS.md H2/H3 section names to re-inject after compaction. Defaults to `["Session Startup", "Red Lines"]`; set `[]` to disable reinjection. When unset or explicitly set to that default pair, older `Every Session`/`Safety` headings are also accepted as a legacy fallback.
    • `model`: optional `provider/model-id` override for compaction summarization only. Use this when the main session should keep one model but compaction summaries should run on another; when unset, compaction uses the session's primary model.
    • `maxActiveTranscriptBytes`: optional byte threshold (a `number` or strings like `"20mb"`) that triggers normal local compaction before a run when the active JSONL grows past the threshold. Requires `truncateAfterCompaction` so successful compaction can rotate to a smaller successor transcript. Disabled when unset or `0`.
    • `notifyUser`: when `true`, sends brief notices to the user when compaction starts and when it completes (for example, "Compacting context..." and "Compaction complete"). Disabled by default to keep compaction silent.
    • `memoryFlush`: silent agentic turn before auto-compaction to store durable memories. Set `model` to an exact provider/model such as `ollama/qwen3:8b` when this housekeeping turn should stay on a local model; the override does not inherit the active session fallback chain. Skipped when the workspace is read-only.
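
    For instance, to keep the main session on one model while routing compaction summaries to a cheaper one, the `model` override above can be used on its own. A minimal sketch (both model ids are illustrative):

    ```json5
    {
      agents: {
        defaults: {
          model: "anthropic/claude-opus-4-6", // primary session model
          compaction: {
            model: "openai/gpt-5.4-mini",     // used for compaction summarization only
          },
        },
      },
    }
    ```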

    `agents.defaults.contextPruning`

    Prunes old tool results from in-memory context before sending to the LLM. Does not modify session history on disk.

    ```json5
    {
      agents: {
        defaults: {
          contextPruning: {
            mode: "cache-ttl", // off | cache-ttl
            ttl: "1h", // duration (ms/s/m/h), default unit: minutes
            keepLastAssistants: 3,
            softTrimRatio: 0.3,
            hardClearRatio: 0.5,
            minPrunableToolChars: 50000,
            softTrim: { maxChars: 4000, headChars: 1500, tailChars: 1500 },
            hardClear: { enabled: true, placeholder: "[Old tool result content cleared]" },
            tools: { deny: ["browser", "canvas"] },
          },
        },
      },
    }
    ```
