    OpenClaw

    Documentation Mirror

    Documentation Overview

    Docs

    Auth credential semantics
    Scheduled tasks
    Hooks
    Automation & tasks
    Standing orders
    Task flow
    Background tasks
    BlueBubbles
    Broadcast groups
    Channel routing
    Discord
    Feishu
    Google Chat
    Group messages
    Groups
    iMessage
    Chat channels
    IRC
    LINE
    Channel location parsing
    Matrix
    Matrix migration
    Matrix push rules for quiet previews
    Mattermost
    Microsoft Teams
    Nextcloud Talk
    Nostr
    Pairing
    QA channel
    QQ bot
    Signal
    Slack
    Synology Chat
    Telegram
    Tlon
    Channel troubleshooting
    Twitch
    WeChat
    WhatsApp
    Yuanbao
    Zalo
    Zalo personal
    CI pipeline
    ACP
    Agent
    Agents
    Approvals
    Backup
    Browser
    Channels
    Clawbot
    `openclaw commitments`
    Completion
    Config
    Configure
    Cron
    Daemon
    Dashboard
    Devices
    Directory
    DNS
    Docs
    Doctor
    Flows (redirect)
    Gateway
    Health
    Hooks
    CLI reference
    Inference CLI
    Logs
    MCP
    Memory
    Message
    Migrate
    Models
    Node
    Nodes
    Onboard
    Pairing
    Plugins
    Proxy
    QR
    Reset
    Sandbox CLI
    Secrets
    Security
    Sessions
    Setup
    Skills
    Status
    System
    `openclaw tasks`
    TUI
    Uninstall
    Update
    Voicecall
    Webhooks
    Wiki
    Active memory
    Agent runtime
    Agent loop
    Agent runtimes
    Agent workspace
    Gateway architecture
    Channel docking
    Inferred commitments
    Compaction
    Context
    Context engine
    Delegate architecture
    Dreaming
    Experimental features
    Features
    Markdown formatting
    Memory overview
    Builtin memory engine
    Honcho memory
    QMD memory engine
    Memory search
    Messages
    Model failover
    Model providers
    Models CLI
    Multi-agent routing
    OAuth
    OpenClaw App SDK
    Presence
    QA overview
    Matrix QA
    Command queue
    Steering queue
    Retry policy
    Session management
    Session pruning
    Session tools
    SOUL.md personality guide
    Streaming and chunking
    System prompt
    Timezones
    TypeBox
    Typing indicators
    Usage tracking
    Date and time
    Node + tsx crash
    Diagnostics flags
    Authentication
    Background exec and process tool
    Bonjour discovery
    Bridge protocol
    CLI backends
    Configuration — agents
    Configuration — channels
    Configuration — tools and custom providers
    Configuration
    Configuration examples
    Configuration reference
    Diagnostics export
    Discovery and transports
    Doctor
    Gateway lock
    Health checks
    Heartbeat
    Gateway runbook
    Local models
    Gateway logging
    Multiple gateways
    Network model
    OpenAI chat completions
    OpenResponses API
    OpenShell
    OpenTelemetry export
    Gateway-owned pairing
    Prometheus metrics
    Gateway protocol
    Remote access
    Remote gateway setup
    Sandbox vs tool policy vs elevated
    Sandboxing
    Secrets management
    Secrets apply plan contract
    Security audit checks
    Security
    Tailscale
    Tools invoke API
    Troubleshooting
    Trusted proxy auth
    Debugging
    Environment variables
    FAQ
    FAQ: first-run setup
    FAQ: models and auth
    GPT-5.5 / Codex agentic parity
    GPT-5.5 / Codex parity maintainer notes
    Help
    Scripts
    Testing
    Testing: live suites
    General troubleshooting
    OpenClaw
    Ansible
    Azure
    Bun (experimental)
    ClawDock
    Release channels
    DigitalOcean
    Docker
    Docker VM runtime
    exe.dev
    Fly.io
    GCP
    Hetzner
    Hostinger
    Install
    Installer internals
    Kubernetes
    macOS VMs
    Migration guide
    Migrating from Claude
    Migrating from Hermes
    Nix
    Node.js
    Northflank
    Oracle Cloud
    Podman
    Railway
    Raspberry Pi
    Render
    Uninstall
    Updating
    Logging
    Network
    Audio and voice notes
    Camera capture
    Image and media support
    Nodes
    Location command
    Media understanding
    Talk mode
    Node troubleshooting
    Voice wake
    Pi integration architecture
    Pi development workflow
    Android app
    Platforms
    iOS app
    Linux app
    Gateway on macOS
    Canvas
    Gateway lifecycle
    macOS dev setup
    Health checks (macOS)
    Menu bar icon
    macOS logging
    Menu bar
    Peekaboo bridge
    macOS permissions
    Remote control
    macOS signing
    Skills (macOS)
    Voice overlay
    Voice wake (macOS)
    WebChat (macOS)
    macOS IPC
    macOS app
    Windows
    Plugin internals
    Plugin architecture internals
    Building plugins
    Plugin bundles
    Codex Computer Use
    Codex harness
    Community plugins
    Plugin compatibility
    Google Meet plugin
    Plugin hooks
    Plugin manifest
    Memory LanceDB
    Memory wiki
    Message presentation
    Agent harness plugins
    Building channel plugins
    Channel turn kernel
    Plugin entry points
    Plugin SDK migration
    Plugin SDK overview
    Building provider plugins
    Plugin runtime helpers
    Plugin setup and config
    Plugin SDK subpaths
    Plugin testing
    Skill workshop plugin
    Voice call plugin
    Webhooks plugin
    Zalo personal plugin
    OpenProse
    Alibaba Model Studio
    Anthropic
    Arcee AI
    Azure Speech
    Amazon Bedrock
    Amazon Bedrock Mantle
    Chutes
    Claude Max API proxy
    Cloudflare AI gateway
    ComfyUI
    Deepgram
    Deepinfra
    DeepSeek
    ElevenLabs
    Fal
    Fireworks
    GitHub Copilot
    GLM (Zhipu)
    Google (Gemini)
    Gradium
    Groq
    Hugging Face (inference)
    Provider directory
    Inferrs
    Inworld
    Kilocode
    LiteLLM
    LM Studio
    MiniMax
    Mistral
    Model provider quickstart
    Moonshot AI
    NVIDIA
    Ollama
    OpenAI
    OpenCode
    OpenCode Go
    OpenRouter
    Perplexity
    Qianfan
    Qwen
    Runway
    SGLang
    StepFun
    Synthetic
    Tencent Cloud (TokenHub)
    Together AI
    Venice AI
    Vercel AI gateway
    vLLM
    Volcengine (Doubao)
    Vydra
    xAI
    Xiaomi MiMo
    Z.AI
    Default AGENTS.md
    Release policy
    API usage and costs
    Credits
    Device model database
    Full release validation
    Memory configuration reference
    OpenClaw App SDK API design
    Prompt caching
    Rich output protocol
    RPC adapters
    SecretRef credential surface
    Session management deep dive
    AGENTS.md template
    BOOT.md template
    BOOTSTRAP.md template
    HEARTBEAT.md template
    IDENTITY template
    SOUL.md template
    TOOLS.md template
    USER template
    Tests
    Token use and costs
    Transcript hygiene
    Onboarding reference
    Contributing to the threat model
    Threat model (MITRE ATLAS)
    Formal verification (security models)
    Network proxy
    Agent bootstrapping
    Docs directory
    Getting started
    Docs hubs
    OpenClaw lore
    Onboarding (macOS app)
    Onboarding overview
    Personal assistant setup
    Setup
    Showcase
    Onboarding (CLI)
    CLI automation
    CLI setup reference
    ACP agents
    ACP agents — setup
    Agent send
    apply_patch tool
    Brave search
    Browser (OpenClaw-managed)
    Browser control API
    Browser troubleshooting
    Browser login
    WSL2 + Windows + remote Chrome CDP troubleshooting
    BTW side questions
    ClawHub
    Code execution
    Creating skills
    Diffs
    DuckDuckGo search
    Elevated mode
    Exa search
    Exec tool
    Exec approvals
    Exec approvals — advanced
    Firecrawl
    Gemini search
    Grok search
    Image generation
    Tools and plugins
    Kimi search
    LLM task
    Lobster
    Tool-loop detection
    Media overview
    MiniMax search
    Multi-agent sandbox and tools
    Music generation
    Ollama web search
    PDF tool
    Perplexity search
    Plugins
    Reactions
    SearXNG search
    Skills
    Skills config
    Slash commands
    Sub-agents
    Tavily
    Thinking levels
    Tokenjuice
    Trajectory bundles
    Text-to-speech
    Video generation
    Web search
    Web fetch
    Linux server
    Control UI
    Dashboard
    Web
    TUI
    WebChat

    OpenAPI Specs

    openapi

    Real-time Synchronized Documentation

    Last sync: 01/05/2026 07:05:24

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions.

    OpenClaw Docs

    v2.4.0 Production

    Technical reference for the OpenClaw framework. Real-time synchronization with the official documentation engine.

    Use this file to discover all available pages before exploring further.

    Agent runtimes

    An agent runtime is the component that owns one prepared model loop: it receives the prompt, drives model output, handles native tool calls, and returns the finished turn to OpenClaw.

    Runtimes are easy to confuse with providers because both show up near model configuration. They are different layers:

    | Layer | Examples | What it means |
    | --- | --- | --- |
    | Provider | `openai`, `anthropic`, `openai-codex` | How OpenClaw authenticates, discovers models, and names model refs. |
    | Model | `gpt-5.5`, `claude-opus-4-6` | The model selected for the agent turn. |
    | Agent runtime | `pi`, `codex`, `claude-cli` | The low-level loop or backend that executes the prepared turn. |
    | Channel | Telegram, Discord, Slack, WhatsApp | Where messages enter and leave OpenClaw. |
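
    The first three layers in the table above can be read off a single config. A minimal sketch using the example values from the table (channels are configured separately and are only noted in a comment):

    ```json5
    {
      agents: {
        defaults: {
          // Provider + model: one canonical model ref (provider "anthropic").
          model: "anthropic/claude-opus-4-6",
          // Agent runtime: the loop or backend that executes the prepared turn.
          agentRuntime: { id: "pi" },
        },
      },
      // Channel layer (Telegram, Discord, Slack, WhatsApp) lives in separate
      // channel configuration, not under agents.
    }
    ```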

    You will also see the word harness in code. A harness is the implementation that provides an agent runtime. For example, the bundled Codex harness implements the `codex` runtime. Public config uses `agentRuntime.id`; `openclaw doctor --fix` rewrites older runtime-policy keys to that shape.

    There are two runtime families:

    • Embedded harnesses run inside OpenClaw's prepared agent loop. Today this is the built-in `pi` runtime plus registered plugin harnesses such as `codex`.
    • CLI backends run a local CLI process while keeping the model ref canonical. For example, `anthropic/claude-opus-4-7` with `agentRuntime.id: "claude-cli"` means "select the Anthropic model, execute through Claude CLI." `claude-cli` is not an embedded harness id and must not be passed to AgentHarness selection.
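
    As a sketch, the two families differ only in `agentRuntime.id`. The `agents.list[]` entry shape and the agent ids here are illustrative assumptions:

    ```json5
    {
      agents: {
        list: [
          // Embedded harness: OpenClaw's prepared agent loop runs the turn.
          { id: "assistant", model: "openai/gpt-5.5", agentRuntime: { id: "pi" } },
          // CLI backend: canonical Anthropic model ref, executed through Claude CLI.
          { id: "coder", model: "anthropic/claude-opus-4-7", agentRuntime: { id: "claude-cli" } },
        ],
      },
    }
    ```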

    Five things named Codex

    Most confusion comes from five different surfaces sharing the Codex name:

    | Surface | OpenClaw name/config | What it does |
    | --- | --- | --- |
    | Codex OAuth provider route | `openai-codex/*` model refs | Uses ChatGPT/Codex subscription OAuth through the normal OpenClaw PI runner. |
    | Native Codex app-server runtime | `agentRuntime.id: "codex"` | Runs the embedded agent turn through the bundled Codex app-server harness. |
    | Codex ACP adapter | `runtime: "acp"`, `agentId: "codex"` | Runs Codex through the external ACP/acpx control plane. Use only when ACP/acpx is explicitly requested. |
    | Native Codex chat-control command set | `/codex ...` | Binds, resumes, steers, stops, and inspects Codex app-server threads from chat. |
    | OpenAI Platform API route for GPT/Codex-style models | `openai/*` model refs | Uses OpenAI API-key auth unless a runtime override, such as `runtime: "codex"`, runs the turn. |

    Those surfaces are intentionally independent. Enabling the `codex` plugin makes the native app-server features available; it does not rewrite `openai-codex/*` into `openai/*`, does not change existing sessions, and does not make ACP the Codex default. Selecting `openai-codex/*` means "use the Codex OAuth provider route" unless you separately force a runtime.

    The common Codex setup uses the `openai` provider with the `codex` runtime:

    ```json5
    {
      agents: {
        defaults: {
          model: "openai/gpt-5.5",
          agentRuntime: { id: "codex" },
        },
      },
    }
    ```

    That means OpenClaw selects an OpenAI model ref, then asks the Codex app-server runtime to run the embedded agent turn. It does not mean the channel, model provider catalog, or OpenClaw session store becomes Codex.

    When the bundled `codex` plugin is enabled, natural-language Codex control should use the native `/codex` command surface (`/codex bind`, `/codex threads`, `/codex resume`, `/codex steer`, `/codex stop`) instead of ACP. Use ACP for Codex only when the user explicitly asks for ACP/acpx or is testing the ACP adapter path. Claude Code, Gemini CLI, OpenCode, Cursor, and similar external harnesses still use ACP.

    This is the agent-facing decision tree:

    1. If the user asks for Codex bind/control/thread/resume/steer/stop, use the native `/codex` command surface when the bundled `codex` plugin is enabled.
    2. If the user asks for Codex as the embedded runtime, use `openai/<model>` with `agentRuntime.id: "codex"`.
    3. If the user asks for Codex OAuth/subscription auth on the normal OpenClaw runner, use `openai-codex/<model>` and leave the runtime as PI.
    4. If the user explicitly says ACP, acpx, or Codex ACP adapter, use ACP with `runtime: "acp"` and `agentId: "codex"`.
    5. If the request is for Claude Code, Gemini CLI, OpenCode, Cursor, Droid, or another external harness, use ACP/acpx, not the native sub-agent runtime.
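
    Cases 2–4 of the decision tree map to config along these lines. The agent ids and the per-agent field layout are illustrative assumptions, and `<model>` is a placeholder:

    ```json5
    {
      agents: {
        list: [
          // 2. Codex as the embedded runtime:
          { id: "codex-native", model: "openai/gpt-5.5", agentRuntime: { id: "codex" } },
          // 3. Codex OAuth/subscription auth on the normal PI runner:
          { id: "codex-oauth", model: "openai-codex/<model>" },
          // 4. Explicit ACP adapter path (only when the user asks for ACP/acpx):
          { id: "codex-acp", runtime: "acp", agentId: "codex" },
        ],
      },
    }
    ```
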

    | You mean... | Use... |
    | --- | --- |
    | Codex app-server chat/thread control | `/codex ...` from the bundled `codex` plugin |
    | Codex app-server embedded agent runtime | `agentRuntime.id: "codex"` |
    | OpenAI Codex OAuth on the PI runner | `openai-codex/*` model refs |
    | Claude Code or other external harness | ACP/acpx |

    For the OpenAI-family prefix split, see OpenAI and Model providers. For the Codex runtime support contract, see Codex harness.

    Runtime ownership

    Different runtimes own different amounts of the loop.

    | Surface | OpenClaw PI embedded | Codex app-server |
    | --- | --- | --- |
    | Model loop owner | OpenClaw through the PI embedded runner | Codex app-server |
    | Canonical thread state | OpenClaw transcript | Codex thread, plus OpenClaw transcript mirror |
    | OpenClaw dynamic tools | Native OpenClaw tool loop | Bridged through the Codex adapter |
    | Native shell and file tools | PI/OpenClaw path | Codex-native tools, bridged through native hooks where supported |
    | Context engine | Native OpenClaw context assembly | OpenClaw projects assembled context into the Codex turn |
    | Compaction | OpenClaw or selected context engine | Codex-native compaction, with OpenClaw notifications and mirror maintenance |
    | Channel delivery | OpenClaw | OpenClaw |

    This ownership split is the main design rule:

    • If OpenClaw owns the surface, OpenClaw can provide normal plugin hook behavior.
    • If the native runtime owns the surface, OpenClaw needs runtime events or native hooks.
    • If the native runtime owns canonical thread state, OpenClaw should mirror and project context, not rewrite unsupported internals.

    Runtime selection

    OpenClaw chooses an embedded runtime after provider and model resolution:

    1. A session's recorded runtime wins. Config changes do not hot-switch an existing transcript to a different native thread system.
    2. `OPENCLAW_AGENT_RUNTIME=<id>` forces that runtime for new or reset sessions.
    3. `agents.defaults.agentRuntime.id` or `agents.list[].agentRuntime.id` can set `auto`, `pi`, a registered embedded harness id such as `codex`, or a supported CLI backend alias such as `claude-cli`.
    4. In `auto` mode, registered plugin runtimes can claim supported provider/model pairs.
    5. If no runtime claims a turn in `auto` mode and `fallback: "pi"` is set (the default), OpenClaw uses PI as the compatibility fallback. Set `fallback: "none"` to make unmatched `auto`-mode selection fail instead.
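
    The strict variant of step 5 can be written as config. A minimal sketch, assuming `fallback` nests under `agentRuntime` alongside `id`:

    ```json5
    {
      agents: {
        defaults: {
          model: "openai/gpt-5.5",
          agentRuntime: {
            id: "auto",
            // "pi" (the default) falls back to the PI runner when no plugin
            // runtime claims the turn; "none" makes unmatched selection fail.
            fallback: "none",
          },
        },
      },
    }
    ```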

    Explicit plugin runtimes fail closed by default. For example, `runtime: "codex"` means Codex or a clear selection error unless you set `fallback: "pi"` in the same override scope. A runtime override does not inherit a broader fallback setting, so an agent-level `runtime: "codex"` is not silently routed back to PI just because defaults used `fallback: "pi"`.
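
    A minimal sketch of that scoping rule, assuming `fallback` sits next to the runtime id in each override scope (the agent id is hypothetical):

    ```json5
    {
      agents: {
        defaults: {
          // Broad default: auto-select, fall back to PI when nothing claims the turn.
          agentRuntime: { id: "auto", fallback: "pi" },
        },
        list: [
          {
            id: "builder",
            // Fails closed (Codex or a selection error) despite the default above,
            // because fallback is not set in this same override scope:
            agentRuntime: { id: "codex" },
          },
        ],
      },
    }
    ```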

    CLI backend aliases are different from embedded harness ids. The preferred Claude CLI form is:

    ```json5
    {
      agents: {
        defaults: {
          model: "anthropic/claude-opus-4-7",
          agentRuntime: { id: "claude-cli" },
        },
      },
    }
    ```

    Legacy refs such as `claude-cli/claude-opus-4-7` remain supported for compatibility, but new config should keep the provider/model canonical and put the execution backend in `agentRuntime.id`.

    `auto` mode is intentionally conservative. Plugin runtimes can claim provider/model pairs they understand, but the Codex plugin does not claim the `openai-codex` provider in `auto` mode. That keeps `openai-codex/*` as the explicit PI Codex OAuth route and avoids silently moving subscription-auth configs onto the native app-server harness.

    If `openclaw doctor` warns that the `codex` plugin is enabled while `openai-codex/*` still routes through PI, treat that as a diagnosis, not a migration. Keep the config unchanged when PI Codex OAuth is what you want. Switch to `openai/<model>` plus `agentRuntime.id: "codex"` only when you want native Codex app-server execution.

    Compatibility contract

    When a runtime is not PI, it should document what OpenClaw surfaces it supports. Use this shape for runtime docs:

    | Question | Why it matters |
    | --- | --- |
    | Who owns the model loop? | Determines where retries, tool continuation, and final answer decisions happen. |
    | Who owns canonical thread history? | Determines whether OpenClaw can edit history or only mirror it. |
    | Do OpenClaw dynamic tools work? | Messaging, sessions, cron, and OpenClaw-owned tools rely on this. |
    | Do dynamic tool hooks work? | Plugins expect `before_tool_call`, `after_tool_call`, and middleware around OpenClaw-owned tools. |
    | Do native tool hooks work? | Shell, patch, and runtime-owned tools need native hook support for policy and observation. |
    | Does the context engine lifecycle run? | Memory and context plugins depend on the assemble, ingest, after-turn, and compaction lifecycle. |
    | What compaction data is exposed? | Some plugins only need notifications, while others need kept/dropped metadata. |
    | What is intentionally unsupported? | Users should not assume PI equivalence where the native runtime owns more state. |

    The Codex runtime support contract is documented in Codex harness.

    Status labels

    Status output may show both `Execution` and `Runtime` labels. Read them as diagnostics, not as provider names.

    • A model ref such as `openai/gpt-5.5` tells you the selected provider/model.
    • A runtime id such as `codex` tells you which loop is executing the turn.
    • A channel label such as Telegram or Discord tells you where the conversation is happening.

    If a session still shows PI after changing runtime config, start a new session with `/new` or clear the current one with `/reset`. Existing sessions keep their recorded runtime so a transcript is not replayed through two incompatible native session systems.

    Related

    • Codex harness
    • OpenAI
    • Agent harness plugins
    • Agent loop
    • Models
    • Status

    © 2024 TaskFlow Mirror
