    OpenClaw Docs

    v2.4.0 Production

    Last sync: 01/05/2026 07:03:01

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions.

    Technical reference for the OpenClaw framework. Real-time synchronization with the official documentation engine.
    CLI backends

    OpenClaw can run local AI CLIs as a text-only fallback when API providers are down, rate-limited, or temporarily misbehaving. This is intentionally conservative:

    • OpenClaw tools are not injected directly, but backends with `bundleMcp: true` can receive gateway tools via a loopback MCP bridge.
    • JSONL streaming is used for CLIs that support it.
    • Sessions are supported (so follow-up turns stay coherent).
    • Images can be passed through if the CLI accepts image paths.

    This is designed as a safety net rather than a primary path. Use it when you want “always works” text responses without relying on external APIs.

    If you want a full harness runtime with ACP session controls, background tasks, thread/conversation binding, and persistent external coding sessions, use ACP Agents instead. CLI backends are not ACP.

    Beginner-friendly quick start

    You can use Codex CLI without any config (the bundled OpenAI plugin registers a default backend):

    ```bash
    openclaw agent --message "hi" --model codex-cli/gpt-5.5
    ```

    If your gateway runs under launchd/systemd and PATH is minimal, add just the command path:

    ```json5
    {
      agents: {
        defaults: {
          cliBackends: {
            "codex-cli": {
              command: "/opt/homebrew/bin/codex",
            },
          },
        },
      },
    }
    ```

    That’s it. No keys, no extra auth config needed beyond the CLI itself.

    If you use a bundled CLI backend as the primary message provider on a gateway host, OpenClaw now auto-loads the owning bundled plugin when your config explicitly references that backend in a model ref or under `agents.defaults.cliBackends`.

    Using it as a fallback

    Add a CLI backend to your fallback list so it only runs when primary models fail:

    ```json5
    {
      agents: {
        defaults: {
          model: {
            primary: "anthropic/claude-opus-4-6",
            fallbacks: ["codex-cli/gpt-5.5"],
          },
          models: {
            "anthropic/claude-opus-4-6": { alias: "Opus" },
            "codex-cli/gpt-5.5": {},
          },
        },
      },
    }
    ```

    Notes:

    • If you use the `agents.defaults.models` allowlist, you must include your CLI backend models there too.
    • If the primary provider fails (auth, rate limits, timeouts), OpenClaw will try the CLI backend next.

    Configuration overview

    All CLI backends live under `agents.defaults.cliBackends`.

    Each entry is keyed by a provider id (e.g. `codex-cli`, `my-cli`). The provider id becomes the left side of your model ref:

    ```text
    <provider>/<model>
    ```
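
    For example, a minimal sketch (the `my-cli` id and command are illustrative, not a bundled backend):

    ```json5
    {
      agents: {
        defaults: {
          cliBackends: {
            // Provider id "my-cli" → model refs look like "my-cli/<model>".
            "my-cli": {
              command: "my-cli",
            },
          },
        },
      },
    }
    ```

    With this entry in place, a model ref such as `my-cli/claude-sonnet-4-6` would route through the `my-cli` backend (the model name here is likewise illustrative).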

    Example configuration

    ```json5
    {
      agents: {
        defaults: {
          cliBackends: {
            "codex-cli": {
              command: "/opt/homebrew/bin/codex",
            },
            "my-cli": {
              command: "my-cli",
              args: ["--json"],
              output: "json",
              input: "arg",
              modelArg: "--model",
              modelAliases: {
                "claude-opus-4-6": "opus",
                "claude-sonnet-4-6": "sonnet",
              },
              sessionArg: "--session",
              sessionMode: "existing",
              sessionIdFields: ["session_id", "conversation_id"],
              systemPromptArg: "--system",
              // For CLIs with a dedicated prompt-file flag:
              // systemPromptFileArg: "--system-file",
              // Codex-style CLIs can point at a prompt file instead:
              // systemPromptFileConfigArg: "-c",
              // systemPromptFileConfigKey: "model_instructions_file",
              systemPromptWhen: "first",
              imageArg: "--image",
              imageMode: "repeat",
              serialize: true,
            },
          },
        },
      },
    }
    ```

    How it works

    1. Selects a backend based on the provider prefix (`codex-cli/...`).
    2. Builds a system prompt using the same OpenClaw prompt + workspace context.
    3. Executes the CLI with a session id (if supported) so history stays consistent. The bundled `claude-cli` backend keeps a Claude stdio process alive per OpenClaw session and sends follow-up turns over stream-json stdin.
    4. Parses output (JSON or plain text) and returns the final text.
    5. Persists session ids per backend, so follow-ups reuse the same CLI session.

    Note

    The bundled Anthropic `claude-cli` backend is supported again. Anthropic staff have confirmed that OpenClaw-style Claude CLI usage is allowed, so OpenClaw treats `claude -p` usage as sanctioned for this integration unless Anthropic publishes a new policy.

    The bundled OpenAI `codex-cli` backend passes OpenClaw's system prompt through Codex's `model_instructions_file` config override (`-c model_instructions_file="..."`). Codex does not expose a Claude-style `--append-system-prompt` flag, so OpenClaw writes the assembled prompt to a temporary file for each fresh Codex CLI session.

    The bundled Anthropic `claude-cli` backend receives the OpenClaw skills snapshot in two ways: the compact OpenClaw skills catalog in the appended system prompt, and a temporary Claude Code plugin passed with `--plugin-dir`. The plugin contains only the eligible skills for that agent/session, so Claude Code's native skill resolver sees the same filtered set that OpenClaw would otherwise advertise in the prompt. Skill env/API key overrides are still applied by OpenClaw to the child process environment for the run.

    Claude CLI also has its own noninteractive permission mode. OpenClaw maps that to the existing exec policy instead of adding Claude-specific config: when the effective requested exec policy is YOLO (`tools.exec.security: "full"` and `tools.exec.ask: "off"`), OpenClaw adds `--permission-mode bypassPermissions`. Per-agent `agents.list[].tools.exec` settings override global `tools.exec` for that agent. To force a different Claude mode, set explicit raw backend args such as `--permission-mode default` or `--permission-mode acceptEdits` under `agents.defaults.cliBackends.claude-cli.args` and matching `resumeArgs`.
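
    As a hedged sketch forcing `acceptEdits`: only the `--permission-mode` flag comes from this page; the `--resume {sessionId}` shape and the rest of the arg lists are assumptions, since the bundled `claude-cli` defaults are not listed here, and replacing `args`/`resumeArgs` wholesale overrides whatever flags the plugin normally sets.

    ```json5
    {
      agents: {
        defaults: {
          cliBackends: {
            "claude-cli": {
              // Raw backend args replace the plugin defaults, so carry over
              // any flags the bundled backend normally sets.
              args: ["--permission-mode", "acceptEdits"],
              resumeArgs: ["--resume", "{sessionId}", "--permission-mode", "acceptEdits"],
            },
          },
        },
      },
    }
    ```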

    Before OpenClaw can use the bundled `claude-cli` backend, Claude Code itself must already be logged in on the same host:

    ```bash
    claude auth login
    claude auth status --text
    openclaw models auth login --provider anthropic --method cli --set-default
    ```

    Use `agents.defaults.cliBackends.claude-cli.command` only when the `claude` binary is not already on `PATH`.

    Sessions

    • If the CLI supports sessions, set `sessionArg` (e.g. `--session-id`) or `sessionArgs` (placeholder `{sessionId}`) when the ID needs to be inserted into multiple flags.
    • If the CLI uses a resume subcommand with different flags, set `resumeArgs` (replaces `args` when resuming) and optionally `resumeOutput` (for non-JSON resumes).
    • `sessionMode`:
      • `always`: always send a session id (a new UUID if none is stored).
      • `existing`: only send a session id if one was stored before.
      • `none`: never send a session id.
    • `claude-cli` defaults to `liveSession: "claude-stdio"`, `output: "jsonl"`, and `input: "stdin"`, so follow-up turns reuse the live Claude process while it is active. Warm stdio is now the default, including for custom configs that omit transport fields. If the Gateway restarts or the idle process exits, OpenClaw resumes from the stored Claude session id. Stored session ids are verified against an existing readable project transcript before resume, so phantom bindings are cleared with `reason=transcript-missing` instead of silently starting a fresh Claude CLI session under `--resume`.
    • Stored CLI sessions are provider-owned continuity. The implicit daily session reset does not cut them; `/reset` and explicit `session.reset` policies still do.
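
    Putting the session options together, a sketch for a hypothetical CLI that resumes via a subcommand (the `my-cli` name, subcommands, and flags are illustrative):

    ```json5
    {
      agents: {
        defaults: {
          cliBackends: {
            "my-cli": {
              command: "my-cli",
              args: ["run", "--json"],
              // resumeArgs replace args when resuming; the stored id is
              // substituted for the {sessionId} placeholder.
              resumeArgs: ["resume", "{sessionId}", "--json"],
              sessionMode: "existing",          // only resume when an id was stored
              sessionIdFields: ["session_id"],  // where to find the id in CLI output
            },
          },
        },
      },
    }
    ```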

    Serialization notes:

    • `serialize: true` keeps same-lane runs ordered.
    • Most CLIs serialize on one provider lane.
    • OpenClaw drops stored CLI session reuse when the selected auth identity changes, including a changed auth profile id, static API key, static token, or OAuth account identity when the CLI exposes one. OAuth access and refresh token rotation does not cut the stored CLI session. If a CLI does not expose a stable OAuth account id, OpenClaw lets that CLI enforce resume permissions.

    Fallback prelude from claude-cli sessions

    When a `claude-cli` attempt fails over to a non-CLI candidate in `agents.defaults.model.fallbacks`, OpenClaw seeds the next attempt with a context prelude harvested from Claude Code's local JSONL transcript at `~/.claude/projects/`. Without this seed, the fallback provider would start cold, because OpenClaw's own session transcript is empty for `claude-cli` runs.

    • The prelude prefers the latest `/compact` summary or `compact_boundary` marker, then appends the most recent post-boundary turns up to a character budget. Pre-boundary turns are dropped because the summary already represents them.
    • Tool blocks are coalesced into compact `(tool call: name)` and `(tool result: …)` hints to keep the prompt budget honest. The summary is labeled `(truncated)` if it overflows.
    • Same-provider `claude-cli` to `claude-cli` fallbacks rely on Claude's own `--resume` and skip the prelude.
    • The seed reuses the existing Claude session-file path validation, so arbitrary paths cannot be read.

    Images (pass-through)

    If your CLI accepts image paths, set `imageArg`:

    ```json5
    imageArg: "--image",
    imageMode: "repeat"
    ```

    OpenClaw will write base64 images to temp files. If `imageArg` is set, those paths are passed as CLI args. If `imageArg` is missing, OpenClaw appends the file paths to the prompt (path injection), which is enough for CLIs that auto-load local files from plain paths.

    Inputs / outputs

    • `output: "json"` (default) tries to parse JSON and extract the text plus a session id.
    • For Gemini CLI JSON output, OpenClaw reads reply text from `response` and usage from `stats` when `usage` is missing or empty.
    • `output: "jsonl"` parses JSONL streams (for example Codex CLI `--json`) and extracts the final agent message plus session identifiers when present.
    • `output: "text"` treats stdout as the final response.

    Input modes:

    • `input: "arg"` (default) passes the prompt as the last CLI arg.
    • `input: "stdin"` sends the prompt via stdin.
    • If the prompt is very long and `maxPromptArgChars` is set, stdin is used instead.
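
    A sketch combining these options for a hypothetical JSONL-streaming CLI (the backend name, flag, and threshold are illustrative):

    ```json5
    {
      agents: {
        defaults: {
          cliBackends: {
            "my-cli": {
              command: "my-cli",
              args: ["--stream-json"],
              output: "jsonl",           // parse the JSONL stream, keep the final message
              input: "arg",              // prompt as the last CLI arg by default
              maxPromptArgChars: 8000,   // switch to stdin when the prompt is longer
            },
          },
        },
      },
    }
    ```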

    Defaults (plugin-owned)

    The bundled OpenAI plugin also registers a default for `codex-cli`:

    • `command: "codex"`
    • `args: ["exec","--json","--color","never","--sandbox","workspace-write","--skip-git-repo-check"]`
    • `resumeArgs: ["exec","resume","{sessionId}","-c","sandbox_mode=\"workspace-write\"","--skip-git-repo-check"]`
    • `output: "jsonl"`
    • `resumeOutput: "text"`
    • `modelArg: "--model"`
    • `imageArg: "--image"`
    • `sessionMode: "existing"`

    The bundled Google plugin also registers a default for `google-gemini-cli`:

    • `command: "gemini"`
    • `args: ["--output-format", "json", "--prompt", "{prompt}"]`
    • `resumeArgs: ["--resume", "{sessionId}", "--output-format", "json", "--prompt", "{prompt}"]`
    • `imageArg: "@"`
    • `imagePathScope: "workspace"`
    • `modelArg: "--model"`
    • `sessionMode: "existing"`
    • `sessionIdFields: ["session_id", "sessionId"]`

    Prerequisite: the local Gemini CLI must be installed and available as `gemini` on `PATH` (`brew install gemini-cli` or `npm install -g @google/gemini-cli`).

    Gemini CLI JSON notes:

    • Reply text is read from the JSON `response` field.
    • Usage falls back to `stats` when `usage` is absent or empty.
    • `stats.cached` is normalized into OpenClaw's `cacheRead`.
    • If `stats.input` is missing, OpenClaw derives input tokens from `stats.input_tokens - stats.cached`.

    Override only if needed (most commonly an absolute `command` path).
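
    For example, to keep the plugin defaults but pin an absolute binary path (the path shown is illustrative):

    ```json5
    {
      agents: {
        defaults: {
          cliBackends: {
            "google-gemini-cli": {
              // Everything else falls back to the plugin-registered defaults.
              command: "/opt/homebrew/bin/gemini",
            },
          },
        },
      },
    }
    ```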

    Plugin-owned defaults

    CLI backend defaults are now part of the plugin surface:

    • Plugins register them with `api.registerCliBackend(...)`.
    • The backend `id` becomes the provider prefix in model refs.
    • User config in `agents.defaults.cliBackends.<id>` still overrides the plugin default.
    • Backend-specific config cleanup stays plugin-owned through the optional `normalizeConfig` hook.

    Plugins that need tiny prompt/message compatibility shims can declare bidirectional text transforms without replacing a provider or CLI backend:

    ```typescript
    api.registerTextTransforms({
      input: [
        { from: /red basket/g, to: "blue basket" },
        { from: /paper ticket/g, to: "digital ticket" },
        { from: /left shelf/g, to: "right shelf" },
      ],
      output: [
        { from: /blue basket/g, to: "red basket" },
        { from: /digital ticket/g, to: "paper ticket" },
        { from: /right shelf/g, to: "left shelf" },
      ],
    });
    ```

    `input` rewrites the system prompt and user prompt passed to the CLI. `output` rewrites streamed assistant deltas and parsed final text before OpenClaw handles its own control markers and channel delivery.

    For CLIs that emit Claude Code stream-json compatible JSONL, set `jsonlDialect: "claude-stream-json"` on that backend's config.
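
    As a config sketch (the backend name is illustrative):

    ```json5
    {
      agents: {
        defaults: {
          cliBackends: {
            "my-cli": {
              output: "jsonl",
              // Parse the stream using Claude Code's stream-json framing.
              jsonlDialect: "claude-stream-json",
            },
          },
        },
      },
    }
    ```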

    Bundle MCP overlays

    CLI backends do not receive OpenClaw tool calls directly, but a backend can opt into a generated MCP config overlay with `bundleMcp: true`.

    Current bundled behavior:

    • `claude-cli`: generated strict MCP config file
    • `codex-cli`: inline config overrides for `mcp_servers`; the generated OpenClaw loopback server is marked with Codex's per-server tool approval mode so MCP calls cannot stall on local approval prompts
    • `google-gemini-cli`: generated Gemini system settings file

    When bundle MCP is enabled, OpenClaw:

    • spawns a loopback HTTP MCP server that exposes gateway tools to the CLI process
    • authenticates the bridge with a per-session token (`OPENCLAW_MCP_TOKEN`)
    • scopes tool access to the current session, account, and channel context
    • loads enabled bundle-MCP servers for the current workspace
    • merges them with any existing backend MCP config/settings shape
    • rewrites the launch config using the backend-owned integration mode from the owning extension

    If no MCP servers are enabled, OpenClaw still injects a strict config when a backend opts into bundle MCP so background runs stay isolated.

    Session-scoped bundled MCP runtimes are cached for reuse within a session, then reaped after `mcp.sessionIdleTtlMs` milliseconds of idle time (default 10 minutes; set `0` to disable). One-shot embedded runs such as auth probes, slug generation, and active-memory recall request cleanup at run end so stdio children and Streamable HTTP/SSE streams do not outlive the run.
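
    A sketch opting a bundled backend into the bridge and shortening the idle reaper; the TTL value is illustrative, and placing `sessionIdleTtlMs` under a top-level `mcp` section is an assumption based on the key path above:

    ```json5
    {
      mcp: {
        sessionIdleTtlMs: 300000,  // reap idle session-scoped MCP runtimes after 5 min (0 disables)
      },
      agents: {
        defaults: {
          cliBackends: {
            "codex-cli": {
              bundleMcp: true,  // expose gateway tools over the loopback MCP bridge
            },
          },
        },
      },
    }
    ```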

    Limitations

    • No direct OpenClaw tool calls. OpenClaw does not inject tool calls into the CLI backend protocol. Backends only see gateway tools when they opt into `bundleMcp: true`.
    • Streaming is backend-specific. Some backends stream JSONL; others buffer until exit.
    • Structured outputs depend on the CLI's JSON format.
    • Codex CLI sessions resume via text output (no JSONL), which is less structured than the initial `--json` run. OpenClaw sessions still work normally.

    Troubleshooting

    • CLI not found: set `command` to a full path.
    • Wrong model name: use `modelAliases` to map `provider/model` → CLI model.
    • No session continuity: ensure `sessionArg` is set and `sessionMode` is not `none` (Codex CLI currently cannot resume with JSON output).
    • Images ignored: set `imageArg` (and verify the CLI supports file paths).

    Related

    • Gateway runbook
    • Local models
