    OpenClaw

    Documentation Mirror

    Documentation Overview

    Docs

    Auth credential semantics
    Scheduled tasks
    Hooks
    Automation & tasks
    Standing orders
    Task flow
    Background tasks
    BlueBubbles
    Broadcast groups
    Channel routing
    Discord
    Feishu
    Google Chat
    Group messages
    Groups
    iMessage
    Chat channels
    IRC
    LINE
    Channel location parsing
    Matrix
    Matrix migration
    Matrix push rules for quiet previews
    Mattermost
    Microsoft Teams
    Nextcloud Talk
    Nostr
    Pairing
    QA channel
    QQ bot
    Signal
    Slack
    Synology Chat
    Telegram
    Tlon
    Channel troubleshooting
    Twitch
    WeChat
    WhatsApp
    Yuanbao
    Zalo
    Zalo personal
    CI pipeline
    ACP
    Agent
    Agents
    Approvals
    Backup
    Browser
    Channels
    Clawbot
    `openclaw commitments`
    Completion
    Config
    Configure
    Cron
    Daemon
    Dashboard
    Devices
    Directory
    DNS
    Docs
    Doctor
    Flows (redirect)
    Gateway
    Health
    Hooks
    CLI reference
    Inference CLI
    Logs
    MCP
    Memory
    Message
    Migrate
    Models
    Node
    Nodes
    Onboard
    Pairing
    Plugins
    Proxy
    QR
    Reset
    Sandbox CLI
    Secrets
    Security
    Sessions
    Setup
    Skills
    Status
    System
    `openclaw tasks`
    TUI
    Uninstall
    Update
    Voicecall
    Webhooks
    Wiki
    Active memory
    Agent runtime
    Agent loop
    Agent runtimes
    Agent workspace
    Gateway architecture
    Channel docking
    Inferred commitments
    Compaction
    Context
    Context engine
    Delegate architecture
    Dreaming
    Experimental features
    Features
    Markdown formatting
    Memory overview
    Builtin memory engine
    Honcho memory
    QMD memory engine
    Memory search
    Messages
    Model failover
    Model providers
    Models CLI
    Multi-agent routing
    OAuth
    OpenClaw App SDK
    Presence
    QA overview
    Matrix QA
    Command queue
    Steering queue
    Retry policy
    Session management
    Session pruning
    Session tools
    SOUL.md personality guide
    Streaming and chunking
    System prompt
    Timezones
    TypeBox
    Typing indicators
    Usage tracking
    Date and time
    Node + tsx crash
    Diagnostics flags
    Authentication
    Background exec and process tool
    Bonjour discovery
    Bridge protocol
    CLI backends
    Configuration — agents
    Configuration — channels
    Configuration — tools and custom providers
    Configuration
    Configuration examples
    Configuration reference
    Diagnostics export
    Discovery and transports
    Doctor
    Gateway lock
    Health checks
    Heartbeat
    Gateway runbook
    Local models
    Gateway logging
    Multiple gateways
    Network model
    OpenAI chat completions
    OpenResponses API
    OpenShell
    OpenTelemetry export
    Gateway-owned pairing
    Prometheus metrics
    Gateway protocol
    Remote access
    Remote gateway setup
    Sandbox vs tool policy vs elevated
    Sandboxing
    Secrets management
    Secrets apply plan contract
    Security audit checks
    Security
    Tailscale
    Tools invoke API
    Troubleshooting
    Trusted proxy auth
    Debugging
    Environment variables
    FAQ
    FAQ: first-run setup
    FAQ: models and auth
    GPT-5.5 / Codex agentic parity
    GPT-5.5 / Codex parity maintainer notes
    Help
    Scripts
    Testing
    Testing: live suites
    General troubleshooting
    OpenClaw
    Ansible
    Azure
    Bun (experimental)
    ClawDock
    Release channels
    DigitalOcean
    Docker
    Docker VM runtime
    exe.dev
    Fly.io
    GCP
    Hetzner
    Hostinger
    Install
    Installer internals
    Kubernetes
    macOS VMs
    Migration guide
    Migrating from Claude
    Migrating from Hermes
    Nix
    Node.js
    Northflank
    Oracle Cloud
    Podman
    Railway
    Raspberry Pi
    Render
    Uninstall
    Updating
    Logging
    Network
    Audio and voice notes
    Camera capture
    Image and media support
    Nodes
    Location command
    Media understanding
    Talk mode
    Node troubleshooting
    Voice wake
    Pi integration architecture
    Pi development workflow
    Android app
    Platforms
    iOS app
    Linux app
    Gateway on macOS
    Canvas
    Gateway lifecycle
    macOS dev setup
    Health checks (macOS)
    Menu bar icon
    macOS logging
    Menu bar
    Peekaboo bridge
    macOS permissions
    Remote control
    macOS signing
    Skills (macOS)
    Voice overlay
    Voice wake (macOS)
    WebChat (macOS)
    macOS IPC
    macOS app
    Windows
    Plugin internals
    Plugin architecture internals
    Building plugins
    Plugin bundles
    Codex Computer Use
    Codex harness
    Community plugins
    Plugin compatibility
    Google Meet plugin
    Plugin hooks
    Plugin manifest
    Memory LanceDB
    Memory wiki
    Message presentation
    Agent harness plugins
    Building channel plugins
    Channel turn kernel
    Plugin entry points
    Plugin SDK migration
    Plugin SDK overview
    Building provider plugins
    Plugin runtime helpers
    Plugin setup and config
    Plugin SDK subpaths
    Plugin testing
    Skill workshop plugin
    Voice call plugin
    Webhooks plugin
    Zalo personal plugin
    OpenProse
    Alibaba Model Studio
    Anthropic
    Arcee AI
    Azure Speech
    Amazon Bedrock
    Amazon Bedrock Mantle
    Chutes
    Claude Max API proxy
    Cloudflare AI gateway
    ComfyUI
    Deepgram
    Deepinfra
    DeepSeek
    ElevenLabs
    Fal
    Fireworks
    GitHub Copilot
    GLM (Zhipu)
    Google (Gemini)
    Gradium
    Groq
    Hugging Face (inference)
    Provider directory
    Inferrs
    Inworld
    Kilocode
    LiteLLM
    LM Studio
    MiniMax
    Mistral
    Model provider quickstart
    Moonshot AI
    NVIDIA
    Ollama
    OpenAI
    OpenCode
    OpenCode Go
    OpenRouter
    Perplexity
    Qianfan
    Qwen
    Runway
    SGLang
    StepFun
    Synthetic
    Tencent Cloud (TokenHub)
    Together AI
    Venice AI
    Vercel AI gateway
    vLLM
    Volcengine (Doubao)
    Vydra
    xAI
    Xiaomi MiMo
    Z.AI
    Default AGENTS.md
    Release policy
    API usage and costs
    Credits
    Device model database
    Full release validation
    Memory configuration reference
    OpenClaw App SDK API design
    Prompt caching
    Rich output protocol
    RPC adapters
    SecretRef credential surface
    Session management deep dive
    AGENTS.md template
    BOOT.md template
    BOOTSTRAP.md template
    HEARTBEAT.md template
    IDENTITY template
    SOUL.md template
    TOOLS.md template
    USER template
    Tests
    Token use and costs
    Transcript hygiene
    Onboarding reference
    Contributing to the threat model
    Threat model (MITRE ATLAS)
    Formal verification (security models)
    Network proxy
    Agent bootstrapping
    Docs directory
    Getting started
    Docs hubs
    OpenClaw lore
    Onboarding (macOS app)
    Onboarding overview
    Personal assistant setup
    Setup
    Showcase
    Onboarding (CLI)
    CLI automation
    CLI setup reference
    ACP agents
    ACP agents — setup
    Agent send
    apply_patch tool
    Brave search
    Browser (OpenClaw-managed)
    Browser control API
    Browser troubleshooting
    Browser login
    WSL2 + Windows + remote Chrome CDP troubleshooting
    BTW side questions
    ClawHub
    Code execution
    Creating skills
    Diffs
    DuckDuckGo search
    Elevated mode
    Exa search
    Exec tool
    Exec approvals
    Exec approvals — advanced
    Firecrawl
    Gemini search
    Grok search
    Image generation
    Tools and plugins
    Kimi search
    LLM task
    Lobster
    Tool-loop detection
    Media overview
    MiniMax search
    Multi-agent sandbox and tools
    Music generation
    Ollama web search
    PDF tool
    Perplexity search
    Plugins
    Reactions
    SearXNG search
    Skills
    Skills config
    Slash commands
    Sub-agents
    Tavily
    Thinking levels
    Tokenjuice
    Trajectory bundles
    Text-to-speech
    Video generation
    Web search
    Web fetch
    Linux server
    Control UI
    Dashboard
    Web
    TUI
    WebChat

    OpenAPI Specs

    openapi

    Real-time Synchronized Documentation

    Last sync: 01/05/2026 07:04:25

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions.

    OpenClaw Docs v2.4.0 (Production)

    Technical reference for the OpenClaw framework, synchronized in real time with the official documentation engine.

    Use this file to discover all available pages before exploring further.

    Agent harness plugins

    An agent harness is the low-level executor for one prepared OpenClaw agent turn. It is not a model provider, a channel, or a tool registry. For the user-facing mental model, see Agent runtimes.

    Use this surface only for bundled or trusted native plugins. The contract is still experimental because the parameter types intentionally mirror the current embedded runner.

    When to use a harness

    Register an agent harness when a model family has its own native session runtime and the normal OpenClaw provider transport is the wrong abstraction.

    Examples:

    • a native coding-agent server that owns threads and compaction
    • a local CLI or daemon that must stream native plan/reasoning/tool events
    • a model runtime that needs its own resume id in addition to the OpenClaw session transcript

    Do not register a harness just to add a new LLM API. For normal HTTP or WebSocket model APIs, build a provider plugin.

    What core still owns

    Before a harness is selected, OpenClaw has already resolved:

    • provider and model
    • runtime auth state
    • thinking level and context budget
    • the OpenClaw transcript/session file
    • workspace, sandbox, and tool policy
    • channel reply callbacks and streaming callbacks
    • model fallback and live model switching policy

    That split is intentional. A harness runs a prepared attempt; it does not pick providers, replace channel delivery, or silently switch models.

    The prepared attempt also includes `params.runtimePlan`, an OpenClaw-owned policy bundle for runtime decisions that must stay shared across PI and native harnesses:

    • `runtimePlan.tools.normalize(...)` and `runtimePlan.tools.logDiagnostics(...)` for provider-aware tool schema policy
    • `runtimePlan.transcript.resolvePolicy(...)` for transcript sanitization and tool-call repair policy
    • `runtimePlan.delivery.isSilentPayload(...)` for shared `NO_REPLY` and media delivery suppression
    • `runtimePlan.outcome.classifyRunResult(...)` for model fallback classification
    • `runtimePlan.observability` for resolved provider/model/harness metadata

    Harnesses may use the plan for decisions that need to match PI behavior, but should still treat it as host-owned attempt state. Do not mutate it or use it to switch providers/models inside a turn.
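As a rough illustration of that contract, a harness can route delivery questions through the plan instead of re-implementing policy. The interfaces below are simplified stand-ins invented for this sketch, not the real SDK types:

```typescript
// Simplified stand-in for the host-owned plan surface (invented for this sketch).
interface RuntimePlan {
  delivery: { isSilentPayload(payload: string): boolean };
}

// The harness treats the plan as read-only attempt state: ask, never mutate.
function deliverIfVisible(plan: RuntimePlan, payload: string): string | null {
  // Shared NO_REPLY / silent-payload policy must match PI behavior,
  // so the harness consults the plan instead of hard-coding the rule.
  return plan.delivery.isSilentPayload(payload) ? null : payload;
}

const plan: RuntimePlan = {
  delivery: { isSilentPayload: (p) => p.trim() === "NO_REPLY" },
};

console.log(deliverIfVisible(plan, "NO_REPLY")); // null (suppressed)
console.log(deliverIfVisible(plan, "hello"));    // "hello" (delivered)
```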

    Register a harness

    Import from `openclaw/plugin-sdk/agent-harness`:

    ```typescript
    import type { AgentHarness } from "openclaw/plugin-sdk/agent-harness";
    import { definePluginEntry } from "openclaw/plugin-sdk/plugin-entry";

    const myHarness: AgentHarness = {
      id: "my-harness",
      label: "My native agent harness",
      supports(ctx) {
        return ctx.provider === "my-provider"
          ? { supported: true, priority: 100 }
          : { supported: false };
      },
      async runAttempt(params) {
        // Start or resume your native thread.
        // Use params.prompt, params.tools, params.images, params.onPartialReply,
        // params.onAgentEvent, and the other prepared attempt fields.
        return await runMyNativeTurn(params);
      },
    };

    export default definePluginEntry({
      id: "my-native-agent",
      name: "My Native Agent",
      description: "Runs selected models through a native agent daemon.",
      register(api) {
        api.registerAgentHarness(myHarness);
      },
    });
    ```

    Selection policy

    OpenClaw chooses a harness after provider/model resolution:

    1. An existing session's recorded harness id wins, so config/env changes do not hot-switch that transcript to another runtime.
    2. `OPENCLAW_AGENT_RUNTIME=<id>` forces a registered harness with that id for sessions that are not already pinned.
    3. `OPENCLAW_AGENT_RUNTIME=pi` forces the built-in PI harness.
    4. `OPENCLAW_AGENT_RUNTIME=auto` asks registered harnesses whether they support the resolved provider/model.
    5. If no registered harness matches, OpenClaw uses PI unless PI fallback is disabled.

    Plugin harness failures surface as run failures. In `auto` mode, PI fallback is only used when no registered plugin harness supports the resolved provider/model. Once a plugin harness has claimed a run, OpenClaw does not replay that same turn through PI, because that could change auth/runtime semantics or duplicate side effects.

    The selected harness id is persisted with the session id after an embedded run. Legacy sessions created before harness pins are treated as PI-pinned once they have transcript history. Use a new/reset session when changing between PI and a native plugin harness.
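The selection precedence can be condensed into a small pure function. This is a sketch of the documented order only; the function name, option shapes, and helper types are invented here and do not match OpenClaw core:

```typescript
interface Harness {
  id: string;
  supports(ctx: { provider: string; model: string }): { supported: boolean; priority?: number };
}

function selectHarness(
  registered: Harness[],
  ctx: { provider: string; model: string },
  opts: { sessionPin?: string; envRuntime?: string; piFallback: boolean },
): string | null {
  // 1. A session's recorded harness id always wins.
  if (opts.sessionPin) return opts.sessionPin;
  // 2-3. OPENCLAW_AGENT_RUNTIME forces a specific id ("pi" selects the built-in harness).
  if (opts.envRuntime && opts.envRuntime !== "auto") return opts.envRuntime;
  // 4. "auto": ask each registered harness; the highest-priority claim wins.
  const claims = registered
    .map((h) => ({ id: h.id, result: h.supports(ctx) }))
    .filter((c) => c.result.supported)
    .sort((a, b) => (b.result.priority ?? 0) - (a.result.priority ?? 0));
  if (claims.length > 0) return claims[0].id;
  // 5. Otherwise fall back to PI, unless PI fallback is disabled.
  return opts.piFallback ? "pi" : null;
}

const codex: Harness = {
  id: "codex",
  supports: (ctx) =>
    ctx.provider === "openai" ? { supported: true, priority: 100 } : { supported: false },
};

console.log(selectHarness([codex], { provider: "openai", model: "gpt-5.5" }, { piFallback: true })); // "codex"
console.log(selectHarness([codex], { provider: "mistral", model: "large" }, { piFallback: false })); // null
```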

    `/status` shows non-default harness ids such as `codex` next to `Fast`; PI stays hidden because it is the default compatibility path. If the selected harness is surprising, enable `agents/harness` debug logging and inspect the gateway's structured `agent harness selected` record. It includes the selected harness id, the selection reason, the runtime/fallback policy, and, in `auto` mode, each plugin candidate's support result.

    The bundled Codex plugin registers `codex` as its harness id. Core treats that as an ordinary plugin harness id; Codex-specific aliases belong in the plugin or operator config, not in the shared runtime selector.

    Provider plus harness pairing

    Most harnesses should also register a provider. The provider makes model refs, auth status, model metadata, and `/model` selection visible to the rest of OpenClaw. The harness then claims that provider in `supports(...)`.

    The bundled Codex plugin follows this pattern:

    • preferred user model refs: `openai/gpt-5.5` plus `agentRuntime.id: "codex"`
    • compatibility refs: legacy `codex/gpt-*` refs remain accepted, but new configs should not use them as normal provider/model refs
    • harness id: `codex`
    • auth: synthetic provider availability, because the Codex harness owns the native Codex login/session
    • app-server request: OpenClaw sends the bare model id to Codex and lets the harness talk to the native app-server protocol

    The Codex plugin is additive. Plain `openai/gpt-*` refs continue to use the normal OpenClaw provider path unless you force the Codex harness with `agentRuntime.id: "codex"`. Older `codex/gpt-*` refs still select the Codex provider and harness for compatibility.

    For operator setup, model prefix examples, and Codex-only configs, see Codex Harness.

    OpenClaw requires Codex app-server `0.125.0` or newer. The Codex plugin checks the app-server initialize handshake and blocks older or unversioned servers, so OpenClaw only runs against the protocol surface it has been tested with. The `0.125.0` floor includes the native MCP hook payload support that landed in Codex `0.124.0`, while pinning OpenClaw to the newer tested stable line.

    Tool-result middleware

    Bundled plugins can attach runtime-neutral tool-result middleware through `api.registerAgentToolResultMiddleware(...)` when their manifest declares the targeted runtime ids in `contracts.agentToolResultMiddleware`. This trusted seam is for async tool-result transforms that must run before PI or Codex feeds tool output back into the model.

    Legacy bundled plugins can still use `api.registerCodexAppServerExtensionFactory(...)` for Codex app-server-only middleware, but new result transforms should use the runtime-neutral API. The PI-only `api.registerEmbeddedExtensionFactory(...)` hook has been removed; PI tool-result transforms must use runtime-neutral middleware.
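Conceptually, such middleware is an async transform applied to each tool result before it re-enters the model. The result and middleware shapes below are assumed for illustration; consult the SDK types for the real signature:

```typescript
// Assumed shape of a tool result and its middleware (illustrative, not the SDK's types).
interface ToolResult {
  toolName: string;
  output: string;
}
type ToolResultMiddleware = (result: ToolResult) => Promise<ToolResult>;

// Example transform: cap oversized tool output before the model sees it.
const truncateLargeOutput: ToolResultMiddleware = async (result) => {
  const limit = 2000;
  if (result.output.length <= limit) return result;
  return { ...result, output: result.output.slice(0, limit) + "\n[truncated]" };
};
```

A bundled plugin would then hand such a transform to `api.registerAgentToolResultMiddleware(...)` and declare the targeted runtime ids in its manifest, as described above.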

    Terminal outcome classification

    Native harnesses that own their own protocol projection can use `classifyAgentHarnessTerminalOutcome(...)` from `openclaw/plugin-sdk/agent-harness-runtime` when a completed turn produced no visible assistant text. The helper returns `empty`, `reasoning-only`, or `planning-only`, so OpenClaw's fallback policy can decide whether to retry on a different model. It intentionally leaves prompt errors, in-flight turns, and intentional silent replies such as `NO_REPLY` unclassified.
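Restated as a sketch, the classification contract looks roughly like the function below. The turn shape, field names, and exact precedence are assumptions made for this illustration, not the SDK implementation:

```typescript
// Illustrative re-statement of the classifier's contract (not the SDK implementation).
type TerminalOutcome = "empty" | "reasoning-only" | "planning-only" | null;

interface CompletedTurn {
  assistantText: string;   // user-visible assistant output
  reasoningEvents: number; // native reasoning events emitted
  planEvents: number;      // native plan events emitted
  silentReply: boolean;    // an intentional silent reply such as NO_REPLY
}

function classifyTerminalOutcome(turn: CompletedTurn): TerminalOutcome {
  // Visible text and intentional silence stay unclassified on purpose.
  if (turn.silentReply || turn.assistantText.trim().length > 0) return null;
  if (turn.reasoningEvents > 0) return "reasoning-only";
  if (turn.planEvents > 0) return "planning-only";
  return "empty";
}
```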

    Native Codex harness mode

    The bundled `codex` harness is the native Codex mode for embedded OpenClaw agent turns. Enable the bundled `codex` plugin first, and include `codex` in `plugins.allow` if your config uses a restrictive allowlist. Native app-server configs should use `openai/gpt-*` with `agentRuntime.id: "codex"`. Use `openai-codex/*` for Codex OAuth through PI instead. Legacy `codex/*` model refs remain compatibility aliases for the native harness.

    When this mode runs, Codex owns the native thread id, resume behavior, compaction, and app-server execution. OpenClaw still owns the chat channel, the visible transcript mirror, tool policy, approvals, media delivery, and session selection. Use `agentRuntime.id: "codex"` without a `fallback` override when you need to prove that only the Codex app-server path can claim the run. Explicit plugin runtimes already fail closed by default. Set `fallback: "pi"` only when you intentionally want PI to handle missing harness selection. Codex app-server failures already fail directly instead of retrying through PI.

    Disable PI fallback

    By default, OpenClaw runs embedded agents with `agents.defaults.agentRuntime` set to `{ id: "auto", fallback: "pi" }`. In `auto` mode, registered plugin harnesses can claim a provider/model pair. If none match, OpenClaw falls back to PI.

    In `auto` mode, set `fallback: "none"` when you need missing plugin harness selection to fail instead of using PI. Explicit plugin runtimes such as `runtime: "codex"` already fail closed by default, unless `fallback: "pi"` is set in the same config or environment override scope. Selected plugin harness failures always fail hard. This does not block an explicit `runtime: "pi"` or `OPENCLAW_AGENT_RUNTIME=pi`.

    For Codex-only embedded runs:

    ```json
    {
      "agents": {
        "defaults": {
          "model": "openai/gpt-5.5",
          "agentRuntime": { "id": "codex" }
        }
      }
    }
    ```

    If you want any registered plugin harness to claim matching models, but never want OpenClaw to silently fall back to PI, keep `runtime: "auto"` and disable the fallback:

    ```json
    {
      "agents": {
        "defaults": {
          "agentRuntime": { "id": "auto", "fallback": "none" }
        }
      }
    }
    ```

    Per-agent overrides use the same shape:

    ```json
    {
      "agents": {
        "defaults": {
          "agentRuntime": { "id": "auto", "fallback": "pi" }
        },
        "list": [
          {
            "id": "codex-only",
            "model": "openai/gpt-5.5",
            "agentRuntime": { "id": "codex", "fallback": "none" }
          }
        ]
      }
    }
    ```

    `OPENCLAW_AGENT_RUNTIME` still overrides the configured runtime. Use `OPENCLAW_AGENT_HARNESS_FALLBACK=none` to disable PI fallback from the environment.

    ```bash
    OPENCLAW_AGENT_RUNTIME=codex \
    OPENCLAW_AGENT_HARNESS_FALLBACK=none \
    openclaw gateway run
    ```

    With fallback disabled, a session fails early when the requested harness is not registered, does not support the resolved provider/model, or fails before producing turn side effects. That is intentional for Codex-only deployments and for live tests that must prove the Codex app-server path is actually in use.

    This setting only controls the embedded agent harness. It does not disable image, video, music, TTS, PDF, or other provider-specific model routing.

    Native sessions and transcript mirror

    A harness may keep a native session id, thread id, or daemon-side resume token. Keep that binding explicitly associated with the OpenClaw session, and keep mirroring user-visible assistant/tool output into the OpenClaw transcript.

    The OpenClaw transcript remains the compatibility layer for:

    • channel-visible session history
    • transcript search and indexing
    • switching back to the built-in PI harness on a later turn
    • generic `/new`, `/reset`, and session deletion behavior

    If your harness stores a sidecar binding, implement `reset(...)` so OpenClaw can clear it when the owning OpenClaw session is reset.
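For example, a harness that keeps a per-session native thread binding might clear it like this. The storage, the `bind` helper, and the exact `reset(...)` parameter shape are assumptions made for this sketch:

```typescript
// Local stand-in for sidecar storage: OpenClaw session id -> native thread id.
const nativeThreadBySession = new Map<string, string>();

const sidecarHarness = {
  id: "my-harness",
  // Called after a native run to remember the daemon-side resume token.
  bind(sessionId: string, nativeThreadId: string): void {
    nativeThreadBySession.set(sessionId, nativeThreadId);
  },
  // Called by OpenClaw when the owning session is reset, so a stale
  // native resume token cannot leak into the fresh session.
  async reset(params: { sessionId: string }): Promise<void> {
    nativeThreadBySession.delete(params.sessionId);
  },
};

sidecarHarness.bind("session-1", "thread-abc");
```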

    Tool and media results

    Core constructs the OpenClaw tool list and passes it into the prepared attempt. When a harness executes a dynamic tool call, return the tool result back through the harness result shape instead of sending channel media yourself.

    This keeps text, image, video, music, TTS, approval, and messaging-tool outputs on the same delivery path as PI-backed runs.
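A sketch of that rule, with a result shape invented purely for illustration: rather than calling any channel send API itself, the harness hands media back in its attempt result so core delivers it:

```typescript
// Invented attempt-result shape for illustration; the real type lives in the SDK.
interface HarnessAttemptResult {
  assistantText: string;
  toolResults: Array<{ toolName: string; output: string; mediaPaths?: string[] }>;
}

// Right: return media through the result so delivery matches PI-backed runs.
function finishTurn(renderedImagePath: string): HarnessAttemptResult {
  return {
    assistantText: "Here is the chart you asked for.",
    toolResults: [
      { toolName: "image", output: "rendered chart", mediaPaths: [renderedImagePath] },
    ],
  };
}

console.log(finishTurn("/tmp/chart.png").toolResults[0].mediaPaths); // ["/tmp/chart.png"]
```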

    Current limitations

    • The public import path is generic, but some attempt/result type aliases still carry `Pi` names for compatibility.
    • Third-party harness installation is experimental. Prefer provider plugins until you need a native session runtime.
    • Harness switching is supported across turns. Do not switch harnesses in the middle of a turn after native tools, approvals, assistant text, or message sends have started.

    Related

    • SDK Overview
    • Runtime Helpers
    • Provider Plugins
    • Codex Harness
    • Model Providers

    © 2024 TaskFlow Mirror

    Powered by TaskFlow Sync Engine