    OpenClaw Docs — v2.4.0 Production

    Technical reference for the OpenClaw framework, mirrored in real time from docs.openclaw.ai.

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions. Last sync: 01/05/2026 07:05:24.
    GPT-5.5 / Codex Agentic Parity in OpenClaw

    OpenClaw already worked well with tool-using frontier models, but GPT-5.5 and Codex-style models were still underperforming in a few practical ways:

    • they could stop after planning instead of doing the work
    • they could use strict OpenAI/Codex tool schemas incorrectly
    • they could ask for `/elevated full` even when full access was impossible
    • they could lose long-running task state during replay or compaction
    • parity claims against Claude Opus 4.6 were based on anecdotes instead of repeatable scenarios

    This parity program fixes those gaps in four reviewable slices.

    What changed

    PR A: strict-agentic execution

    This slice adds an opt-in `strict-agentic` execution contract for embedded Pi GPT-5 runs.

    When enabled, OpenClaw stops accepting plan-only turns as “good enough” completion. If the model only says what it intends to do and does not actually use tools or make progress, OpenClaw retries with an act-now steer and then fails closed with an explicit blocked state instead of silently ending the task.
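A rough sketch of that per-turn decision, under stated assumptions: the type and function names below are illustrative stand-ins, not OpenClaw's actual API.

```typescript
// Hypothetical sketch of the strict-agentic turn check described above.
type TurnResult = {
  toolCalls: number;     // tool invocations made this turn
  madeProgress: boolean; // e.g. files changed, commands run
  retriesUsed: number;   // act-now steers already issued
};

type Decision =
  | { kind: "accept" }                   // real work happened
  | { kind: "retry"; steer: string }     // plan-only turn: steer to act now
  | { kind: "blocked"; reason: string }; // fail closed with an explicit state

const MAX_ACT_NOW_RETRIES = 1; // illustrative budget, not a real config value

function evaluateStrictAgenticTurn(turn: TurnResult): Decision {
  if (turn.toolCalls > 0 || turn.madeProgress) {
    return { kind: "accept" };
  }
  if (turn.retriesUsed < MAX_ACT_NOW_RETRIES) {
    return { kind: "retry", steer: "Take the first concrete tool action now." };
  }
  // Fail closed: surface a blocked state instead of silently ending the task.
  return { kind: "blocked", reason: "plan-only turns exhausted act-now retries" };
}
```

The key property is that a commentary-only turn can never terminate the task quietly: it either earns an act-now retry or becomes an explicit blocked state.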

    This improves the GPT-5.5 experience most on:

    • short “ok do it” follow-ups
    • code tasks where the first step is obvious
    • flows where `update_plan` should be progress tracking rather than filler text

    PR B: runtime truthfulness

    This slice makes OpenClaw tell the truth about two things:

    • why the provider/runtime call failed
    • whether `/elevated full` is actually available

    That means GPT-5.5 gets better runtime signals for missing scope, auth refresh failures, HTML 403 auth failures, proxy issues, DNS or timeout failures, and blocked full-access modes. The model is less likely to hallucinate the wrong remediation or keep asking for a permission mode the runtime cannot provide.
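One way to picture this kind of truthful failure surfacing is a small classifier that maps raw provider errors to the explicit reasons listed above. This is a non-authoritative sketch: the names are invented, though the error codes are standard Node.js networking codes.

```typescript
// Hypothetical failure classifier: turn raw provider errors into explicit,
// accurate reasons instead of a generic failure string.
type ProviderFailure = {
  status?: number;       // HTTP status, if any
  bodySnippet?: string;  // first bytes of the response body
  code?: string;         // low-level error code, e.g. "ENOTFOUND"
  missingScope?: string; // OAuth scope the token lacked, if known
};

function classifyFailure(f: ProviderFailure): string {
  if (f.missingScope) return `missing-scope:${f.missingScope}`;
  if (f.status === 403 && f.bodySnippet?.trimStart().startsWith("<")) {
    return "auth-html-403"; // an HTML error page came back on an API endpoint
  }
  if (f.status === 401) return "auth-refresh-failed";
  if (f.code === "ENOTFOUND" || f.code === "EAI_AGAIN") return "dns-failure";
  if (f.code === "ETIMEDOUT") return "timeout";
  if (f.code === "ECONNREFUSED") return "proxy-or-network";
  return "unknown";
}
```

Feeding the model a reason like `auth-html-403` rather than "request failed" is what keeps it from hallucinating the wrong remediation.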

    PR C: execution correctness

    This slice improves two kinds of correctness:

    • provider-owned OpenAI/Codex tool-schema compatibility
    • replay and long-task liveness surfacing

    The tool-compat work reduces schema friction for strict OpenAI/Codex tool registration, especially around parameter-free tools and strict object-root expectations. The replay/liveness work makes long-running tasks more observable, so paused, blocked, and abandoned states are visible instead of disappearing into generic failure text.
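To make the schema friction concrete: strict OpenAI-style tool registration expects an object-rooted parameter schema with every property listed in `required` and `additionalProperties: false`, and a parameter-free tool still needs an explicit empty object schema rather than no schema at all. The sketch below illustrates that normalization; it assumes nothing about OpenClaw's internals and the function name is hypothetical.

```typescript
// Hedged sketch: normalize a tool's JSON-schema parameters for strict
// OpenAI/Codex-style registration.
type JsonSchema = Record<string, unknown>;

function normalizeStrictToolSchema(params?: JsonSchema): JsonSchema {
  const base = params ?? {}; // parameter-free tools arrive with no schema
  const properties = (base.properties as JsonSchema) ?? {};
  return {
    ...base,
    type: "object",                    // strict mode requires an object root
    properties,
    required: Object.keys(properties), // strict mode: every property required
    additionalProperties: false,       // strict mode rejects open objects
  };
}
```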

    PR D: parity harness

    This slice adds the first-wave QA-lab parity pack so GPT-5.5 and Opus 4.6 can be exercised through the same scenarios and compared using shared evidence.

    The parity pack is the proof layer. It does not change runtime behavior by itself.

    After you have two `qa-suite-summary.json` artifacts, generate the release-gate comparison with:

    ```bash
    pnpm openclaw qa parity-report \
      --repo-root . \
      --candidate-summary .artifacts/qa-e2e/gpt55/qa-suite-summary.json \
      --baseline-summary .artifacts/qa-e2e/opus46/qa-suite-summary.json \
      --output-dir .artifacts/qa-e2e/parity
    ```

    That command writes:

    • a human-readable Markdown report
    • a machine-readable JSON verdict
    • an explicit `pass`/`fail` gate result

    Why this improves GPT-5.5 in practice

    Before this work, GPT-5.5 on OpenClaw could feel less agentic than Opus in real coding sessions because the runtime tolerated behaviors that are especially harmful for GPT-5-style models:

    • commentary-only turns
    • schema friction around tools
    • vague permission feedback
    • silent replay or compaction breakage

    The goal is not to make GPT-5.5 imitate Opus. The goal is to give GPT-5.5 a runtime contract that rewards real progress, supplies cleaner tool and permission semantics, and turns failure modes into explicit machine- and human-readable states.

    That changes the user experience from:

    • “the model had a good plan but stopped”

    to:

    • “the model either acted, or OpenClaw surfaced the exact reason it could not”

    Before vs after for GPT-5.5 users

    | Before this program | After PR A-D |
    | --- | --- |
    | GPT-5.5 could stop after a reasonable plan without taking the next tool step | PR A turns “plan only” into “act now or surface a blocked state” |
    | Strict tool schemas could reject parameter-free or OpenAI/Codex-shaped tools in confusing ways | PR C makes provider-owned tool registration and invocation more predictable |
    | `/elevated full` guidance could be vague or wrong in blocked runtimes | PR B gives GPT-5.5 and the user truthful runtime and permission hints |
    | Replay or compaction failures could feel like the task silently disappeared | PR C surfaces paused, blocked, abandoned, and replay-invalid outcomes explicitly |
    | “GPT-5.5 feels worse than Opus” was mostly anecdotal | PR D turns that into the same scenario pack, the same metrics, and a hard pass/fail gate |

    Architecture

    ```mermaid
    flowchart TD
      A["User request"] --> B["Embedded Pi runtime"]
      B --> C["Strict-agentic execution contract"]
      B --> D["Provider-owned tool compatibility"]
      B --> E["Runtime truthfulness"]
      B --> F["Replay and liveness state"]
      C --> G["Tool call or explicit blocked state"]
      D --> G
      E --> G
      F --> G
      G --> H["QA-lab parity pack"]
      H --> I["Scenario report and parity gate"]
    ```

    Release flow

    ```mermaid
    flowchart LR
      A["Merged runtime slices (PR A-C)"] --> B["Run GPT-5.5 parity pack"]
      A --> C["Run Opus 4.6 parity pack"]
      B --> D["qa-suite-summary.json"]
      C --> E["qa-suite-summary.json"]
      D --> F["openclaw qa parity-report"]
      E --> F
      F --> G["qa-agentic-parity-report.md"]
      F --> H["qa-agentic-parity-summary.json"]
      H --> I{"Gate pass?"}
      I -- "yes" --> J["Evidence-backed parity claim"]
      I -- "no" --> K["Keep runtime/review loop open"]
    ```

    Scenario pack

    The first-wave parity pack currently covers five scenarios:

    `approval-turn-tool-followthrough`

    Checks that the model does not stop at “I’ll do that” after a short approval. It should take the first concrete action in the same turn.

    `model-switch-tool-continuity`

    Checks that tool-using work remains coherent across model/runtime switching boundaries instead of resetting into commentary or losing execution context.

    `source-docs-discovery-report`

    Checks that the model can read source and docs, synthesize findings, and continue the task agentically rather than producing a thin summary and stopping early.

    `image-understanding-attachment`

    Checks that mixed-mode tasks involving attachments remain actionable and do not collapse into vague narration.

    `compaction-retry-mutating-tool`

    Checks that a task with a real mutating write keeps replay-unsafety explicit instead of quietly looking replay-safe if the run compacts, retries, or loses reply state under pressure.

    Scenario matrix

    | Scenario | What it tests | Good GPT-5.5 behavior | Failure signal |
    | --- | --- | --- | --- |
    | `approval-turn-tool-followthrough` | Short approval turns after a plan | Starts the first concrete tool action immediately instead of restating intent | Plan-only follow-up, no tool activity, or blocked turn without a real blocker |
    | `model-switch-tool-continuity` | Runtime/model switching under tool use | Preserves task context and continues acting coherently | Resets into commentary, loses tool context, or stops after switch |
    | `source-docs-discovery-report` | Source reading + synthesis + action | Finds sources, uses tools, and produces a useful report without stalling | Thin summary, missing tool work, or incomplete-turn stop |
    | `image-understanding-attachment` | Attachment-driven agentic work | Interprets the attachment, connects it to tools, and continues the task | Vague narration, attachment ignored, or no concrete next action |
    | `compaction-retry-mutating-tool` | Mutating work under compaction pressure | Performs a real write and keeps replay-unsafety explicit after the side effect | Mutating write happens but replay safety is implied, missing, or contradictory |

    Release gate

    GPT-5.5 can only be considered at parity or better when the merged runtime passes the parity pack and the runtime-truthfulness regressions at the same time.

    Required outcomes:

    • no plan-only stall when the next tool action is clear
    • no fake completion without real execution
    • no incorrect `/elevated full` guidance
    • no silent replay or compaction abandonment
    • parity-pack metrics that are at least as strong as the agreed Opus 4.6 baseline

    For the first-wave harness, the gate compares:

    • completion rate
    • unintended-stop rate
    • valid-tool-call rate
    • fake-success count
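A non-authoritative sketch of that comparison follows; the field names are invented stand-ins for whatever `qa-suite-summary.json` actually records, and the gate rules are only what this page states (no regression on the aggregate metrics, zero fake successes).

```typescript
// Hypothetical shape of an aggregated qa-suite-summary.json.
type SuiteSummary = {
  completionRate: number;     // fraction of scenarios completed
  unintendedStopRate: number; // fraction of runs that stalled mid-task
  validToolCallRate: number;  // fraction of tool calls that were valid
  fakeSuccessCount: number;   // completions claimed without real execution
};

// Candidate (GPT-5.5) must match or beat the baseline (Opus 4.6) on every
// metric, and any fake success trips the gate outright.
function parityVerdict(candidate: SuiteSummary, baseline: SuiteSummary): "pass" | "fail" {
  const ok =
    candidate.completionRate >= baseline.completionRate &&
    candidate.unintendedStopRate <= baseline.unintendedStopRate &&
    candidate.validToolCallRate >= baseline.validToolCallRate &&
    candidate.fakeSuccessCount === 0;
  return ok ? "pass" : "fail";
}
```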

    Parity evidence is intentionally split across two layers:

    • PR D proves same-scenario GPT-5.5 vs Opus 4.6 behavior with QA-lab
    • PR B deterministic suites prove auth, proxy, DNS, and `/elevated full` truthfulness outside the harness

    Goal-to-evidence matrix

    | Completion gate item | Owning PR | Evidence source | Pass signal |
    | --- | --- | --- | --- |
    | GPT-5.5 no longer stalls after planning | PR A | `approval-turn-tool-followthrough` plus PR A runtime suites | Approval turns trigger real work or an explicit blocked state |
    | GPT-5.5 no longer fakes progress or fake tool completion | PR A + PR D | Parity report scenario outcomes and fake-success count | No suspicious pass results and no commentary-only completion |
    | GPT-5.5 no longer gives false `/elevated full` guidance | PR B | Deterministic truthfulness suites | Blocked reasons and full-access hints stay runtime-accurate |
    | Replay/liveness failures stay explicit | PR C + PR D | PR C lifecycle/replay suites plus `compaction-retry-mutating-tool` | Mutating work keeps replay-unsafety explicit instead of silently disappearing |
    | GPT-5.5 matches or beats Opus 4.6 on the agreed metrics | PR D | `qa-agentic-parity-report.md` and `qa-agentic-parity-summary.json` | Same scenario coverage and no regression on completion, stop behavior, or valid tool use |

    How to read the parity verdict

    Use the verdict in `qa-agentic-parity-summary.json` as the final machine-readable decision for the first-wave parity pack.

    • `pass` means GPT-5.5 covered the same scenarios as Opus 4.6 and did not regress on the agreed aggregate metrics.
    • `fail` means at least one hard gate tripped: weaker completion, worse unintended stops, weaker valid tool use, any fake-success case, or mismatched scenario coverage.
    • “Shared/base CI issue” is not itself a parity result. If CI noise outside PR D blocks a run, the verdict should wait for a clean merged-runtime execution instead of being inferred from branch-era logs.
    • Auth, proxy, DNS, and `/elevated full` truthfulness still come from PR B’s deterministic suites, so the final release claim needs both: a passing PR D parity verdict and green PR B truthfulness coverage.

    Who should enable `strict-agentic`

    Use `strict-agentic` when:

    • the agent is expected to act immediately when a next step is obvious
    • GPT-5.5 or Codex-family models are the primary runtime
    • you prefer explicit blocked states over “helpful” recap-only replies

    Keep the default contract when:

    • you want the existing looser behavior
    • you are not using GPT-5-family models
    • you are testing prompts rather than runtime enforcement

    Related

    • GPT-5.5 / Codex parity maintainer notes

    © 2024 TaskFlow Mirror
