
    OpenClaw

    Documentation Mirror

    Documentation Overview

    Docs

    Auth credential semantics
    Scheduled tasks
    Hooks
    Automation & tasks
    Standing orders
    Task flow
    Background tasks
    BlueBubbles
    Broadcast groups
    Channel routing
    Discord
    Feishu
    Google Chat
    Group messages
    Groups
    iMessage
    Chat channels
    IRC
    LINE
    Channel location parsing
    Matrix
    Matrix migration
    Matrix push rules for quiet previews
    Mattermost
    Microsoft Teams
    Nextcloud Talk
    Nostr
    Pairing
    QA channel
    QQ bot
    Signal
    Slack
    Synology Chat
    Telegram
    Tlon
    Channel troubleshooting
    Twitch
    WeChat
    WhatsApp
    Yuanbao
    Zalo
    Zalo personal
    CI pipeline
    ACP
    Agent
    Agents
    Approvals
    Backup
    Browser
    Channels
    Clawbot
    `openclaw commitments`
    Completion
    Config
    Configure
    Cron
    Daemon
    Dashboard
    Devices
    Directory
    DNS
    Docs
    Doctor
    Flows (redirect)
    Gateway
    Health
    Hooks
    CLI reference
    Inference CLI
    Logs
    MCP
    Memory
    Message
    Migrate
    Models
    Node
    Nodes
    Onboard
    Pairing
    Plugins
    Proxy
    QR
    Reset
    Sandbox CLI
    Secrets
    Security
    Sessions
    Setup
    Skills
    Status
    System
    `openclaw tasks`
    TUI
    Uninstall
    Update
    Voicecall
    Webhooks
    Wiki
    Active memory
    Agent runtime
    Agent loop
    Agent runtimes
    Agent workspace
    Gateway architecture
    Channel docking
    Inferred commitments
    Compaction
    Context
    Context engine
    Delegate architecture
    Dreaming
    Experimental features
    Features
    Markdown formatting
    Memory overview
    Builtin memory engine
    Honcho memory
    QMD memory engine
    Memory search
    Messages
    Model failover
    Model providers
    Models CLI
    Multi-agent routing
    OAuth
    OpenClaw App SDK
    Presence
    QA overview
    Matrix QA
    Command queue
    Steering queue
    Retry policy
    Session management
    Session pruning
    Session tools
    SOUL.md personality guide
    Streaming and chunking
    System prompt
    Timezones
    TypeBox
    Typing indicators
    Usage tracking
    Date and time
    Node + tsx crash
    Diagnostics flags
    Authentication
    Background exec and process tool
    Bonjour discovery
    Bridge protocol
    CLI backends
    Configuration — agents
    Configuration — channels
    Configuration — tools and custom providers
    Configuration
    Configuration examples
    Configuration reference
    Diagnostics export
    Discovery and transports
    Doctor
    Gateway lock
    Health checks
    Heartbeat
    Gateway runbook
    Local models
    Gateway logging
    Multiple gateways
    Network model
    OpenAI chat completions
    OpenResponses API
    OpenShell
    OpenTelemetry export
    Gateway-owned pairing
    Prometheus metrics
    Gateway protocol
    Remote access
    Remote gateway setup
    Sandbox vs tool policy vs elevated
    Sandboxing
    Secrets management
    Secrets apply plan contract
    Security audit checks
    Security
    Tailscale
    Tools invoke API
    Troubleshooting
    Trusted proxy auth
    Debugging
    Environment variables
    FAQ
    FAQ: first-run setup
    FAQ: models and auth
    GPT-5.5 / Codex agentic parity
    GPT-5.5 / Codex parity maintainer notes
    Help
    Scripts
    Testing
    Testing: live suites
    General troubleshooting
    OpenClaw
    Ansible
    Azure
    Bun (experimental)
    ClawDock
    Release channels
    DigitalOcean
    Docker
    Docker VM runtime
    exe.dev
    Fly.io
    GCP
    Hetzner
    Hostinger
    Install
    Installer internals
    Kubernetes
    macOS VMs
    Migration guide
    Migrating from Claude
    Migrating from Hermes
    Nix
    Node.js
    Northflank
    Oracle Cloud
    Podman
    Railway
    Raspberry Pi
    Render
    Uninstall
    Updating
    Logging
    Network
    Audio and voice notes
    Camera capture
    Image and media support
    Nodes
    Location command
    Media understanding
    Talk mode
    Node troubleshooting
    Voice wake
    Pi integration architecture
    Pi development workflow
    Android app
    Platforms
    iOS app
    Linux app
    Gateway on macOS
    Canvas
    Gateway lifecycle
    macOS dev setup
    Health checks (macOS)
    Menu bar icon
    macOS logging
    Menu bar
    Peekaboo bridge
    macOS permissions
    Remote control
    macOS signing
    Skills (macOS)
    Voice overlay
    Voice wake (macOS)
    WebChat (macOS)
    macOS IPC
    macOS app
    Windows
    Plugin internals
    Plugin architecture internals
    Building plugins
    Plugin bundles
    Codex Computer Use
    Codex harness
    Community plugins
    Plugin compatibility
    Google Meet plugin
    Plugin hooks
    Plugin manifest
    Memory LanceDB
    Memory wiki
    Message presentation
    Agent harness plugins
    Building channel plugins
    Channel turn kernel
    Plugin entry points
    Plugin SDK migration
    Plugin SDK overview
    Building provider plugins
    Plugin runtime helpers
    Plugin setup and config
    Plugin SDK subpaths
    Plugin testing
    Skill workshop plugin
    Voice call plugin
    Webhooks plugin
    Zalo personal plugin
    OpenProse
    Alibaba Model Studio
    Anthropic
    Arcee AI
    Azure Speech
    Amazon Bedrock
    Amazon Bedrock Mantle
    Chutes
    Claude Max API proxy
    Cloudflare AI gateway
    ComfyUI
    Deepgram
    Deepinfra
    DeepSeek
    ElevenLabs
    Fal
    Fireworks
    GitHub Copilot
    GLM (Zhipu)
    Google (Gemini)
    Gradium
    Groq
    Hugging Face (inference)
    Provider directory
    Inferrs
    Inworld
    Kilocode
    LiteLLM
    LM Studio
    MiniMax
    Mistral
    Model provider quickstart
    Moonshot AI
    NVIDIA
    Ollama
    OpenAI
    OpenCode
    OpenCode Go
    OpenRouter
    Perplexity
    Qianfan
    Qwen
    Runway
    SGLang
    StepFun
    Synthetic
    Tencent Cloud (TokenHub)
    Together AI
    Venice AI
    Vercel AI gateway
    vLLM
    Volcengine (Doubao)
    Vydra
    xAI
    Xiaomi MiMo
    Z.AI
    Default AGENTS.md
    Release policy
    API usage and costs
    Credits
    Device model database
    Full release validation
    Memory configuration reference
    OpenClaw App SDK API design
    Prompt caching
    Rich output protocol
    RPC adapters
    SecretRef credential surface
    Session management deep dive
    AGENTS.md template
    BOOT.md template
    BOOTSTRAP.md template
    HEARTBEAT.md template
    IDENTITY template
    SOUL.md template
    TOOLS.md template
    USER template
    Tests
    Token use and costs
    Transcript hygiene
    Onboarding reference
    Contributing to the threat model
    Threat model (MITRE ATLAS)
    Formal verification (security models)
    Network proxy
    Agent bootstrapping
    Docs directory
    Getting started
    Docs hubs
    OpenClaw lore
    Onboarding (macOS app)
    Onboarding overview
    Personal assistant setup
    Setup
    Showcase
    Onboarding (CLI)
    CLI automation
    CLI setup reference
    ACP agents
    ACP agents — setup
    Agent send
    apply_patch tool
    Brave search
    Browser (OpenClaw-managed)
    Browser control API
    Browser troubleshooting
    Browser login
    WSL2 + Windows + remote Chrome CDP troubleshooting
    BTW side questions
    ClawHub
    Code execution
    Creating skills
    Diffs
    DuckDuckGo search
    Elevated mode
    Exa search
    Exec tool
    Exec approvals
    Exec approvals — advanced
    Firecrawl
    Gemini search
    Grok search
    Image generation
    Tools and plugins
    Kimi search
    LLM task
    Lobster
    Tool-loop detection
    Media overview
    MiniMax search
    Multi-agent sandbox and tools
    Music generation
    Ollama web search
    PDF tool
    Perplexity search
    Plugins
    Reactions
    SearXNG search
    Skills
    Skills config
    Slash commands
    Sub-agents
    Tavily
    Thinking levels
    Tokenjuice
    Trajectory bundles
    Text-to-speech
    Video generation
    Web search
    Web fetch
    Linux server
    Control UI
    Dashboard
    Web
    TUI
    WebChat


    Last sync: 01/05/2026 06:59:59

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions.

    OpenClaw Docs

    v2.4.0 Production


    Technical reference for the OpenClaw framework. Real-time synchronization with the official documentation engine.

    Use this file to discover all available pages before exploring further.

    Codex harness

The bundled `codex` plugin lets OpenClaw run embedded agent turns through the Codex app-server instead of the built-in PI harness.

    Use this when you want Codex to own the low-level agent session: model discovery, native thread resume, native compaction, and app-server execution. OpenClaw still owns chat channels, session files, model selection, tools, approvals, media delivery, and the visible transcript mirror.

If you are trying to orient yourself, start with Agent runtimes. The short version: `openai/gpt-5.5` is the model ref, `codex` is the runtime, and Telegram, Discord, Slack, or another channel remains the communication surface.

    Quick config

To use the Codex harness for GPT agent turns, keep the model ref canonical as `openai/gpt-*`, enable the bundled `codex` plugin, and set `agentRuntime.id: "codex"`:

```json5
{
  plugins: {
    entries: {
      codex: { enabled: true },
    },
  },
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: { id: "codex", fallback: "none" },
    },
  },
}
```

If your config uses `plugins.allow`, include `codex` there too:

```json5
{
  plugins: {
    allow: ["codex"],
    entries: {
      codex: { enabled: true },
    },
  },
}
```

Do not use `openai-codex/gpt-*` for this path. That selects Codex OAuth through the normal PI runner unless you separately force a runtime. Config changes apply to new or reset sessions; existing sessions keep their recorded runtime.

    What this plugin changes

The bundled `codex` plugin contributes several separate capabilities:

| Capability | How you use it | What it does |
| --- | --- | --- |
| Native embedded runtime | `agentRuntime.id: "codex"` | Runs OpenClaw embedded agent turns through the Codex app-server. |
| Native chat-control commands | `/codex bind`, `/codex resume`, `/codex steer`, ... | Binds and controls Codex app-server threads from a messaging conversation. |
| Codex app-server provider/catalog | `codex` internals, surfaced through the harness | Lets the runtime discover and validate app-server models. |
| Codex media-understanding path | `codex/*` image-model compatibility paths | Runs bounded Codex app-server turns for supported image understanding models. |
| Native hook relay | Plugin hooks around Codex-native events | Lets OpenClaw observe/block supported Codex-native tool/finalization events. |

    Enabling the plugin makes those capabilities available. It does not:

    • start using Codex for every OpenAI model
    • convert `openai-codex/*` model refs into the native runtime
    • make ACP/acpx the default Codex path
    • hot-switch existing sessions that already recorded a PI runtime
    • replace OpenClaw channel delivery, session files, auth-profile storage, or message routing

The same plugin also owns the native `/codex` chat-control command surface. If the plugin is enabled and the user asks to bind, resume, steer, stop, or inspect Codex threads from chat, agents should prefer `/codex ...` over ACP. ACP remains the explicit fallback when the user asks for ACP/acpx or is testing the ACP Codex adapter.

Native Codex turns keep OpenClaw plugin hooks as the public compatibility layer. These are in-process OpenClaw hooks, not Codex `hooks.json` command hooks:

    • `before_prompt_build`
    • `before_compaction`, `after_compaction`
    • `llm_input`, `llm_output`
    • `before_tool_call`, `after_tool_call`
    • `before_message_write` for mirrored transcript records
    • `before_agent_finalize` through the Codex `Stop` relay
    • `agent_end`

Plugins can also register runtime-neutral tool-result middleware to rewrite OpenClaw dynamic tool results after OpenClaw executes the tool and before the result is returned to Codex. This is separate from the public `tool_result_persist` plugin hook, which transforms OpenClaw-owned transcript tool-result writes.

    For the plugin hook semantics themselves, see Plugin hooks and Plugin guard behavior.

The harness is off by default. New configs should keep OpenAI model refs canonical as `openai/gpt-*` and explicitly force `agentRuntime.id: "codex"` or `OPENCLAW_AGENT_RUNTIME=codex` when they want native app-server execution. Legacy `codex/*` model refs still auto-select the harness for compatibility, but runtime-backed legacy provider prefixes are not shown as normal model/provider choices.

If the `codex` plugin is enabled but the primary model is still `openai-codex/*`, `openclaw doctor` warns instead of changing the route. That is intentional: `openai-codex/*` remains the PI Codex OAuth/subscription path, and native app-server execution stays an explicit runtime choice.

    Route map

    Use this table before changing config:

| Desired behavior | Model ref | Runtime config | Plugin requirement | Expected status label |
| --- | --- | --- | --- | --- |
| OpenAI API through normal OpenClaw runner | `openai/gpt-*` | omitted or `runtime: "pi"` | OpenAI provider | `Runtime: OpenClaw Pi Default` |
| Codex OAuth/subscription through PI | `openai-codex/gpt-*` | omitted or `runtime: "pi"` | OpenAI Codex OAuth provider | `Runtime: OpenClaw Pi Default` |
| Native Codex app-server embedded turns | `openai/gpt-*` | `agentRuntime.id: "codex"` | `codex` plugin | `Runtime: OpenAI Codex` |
| Mixed providers with conservative auto mode | provider-specific refs | `agentRuntime.id: "auto"` | Optional plugin runtimes | Depends on selected runtime |
| Explicit Codex ACP adapter session | ACP prompt/model dependent | `sessions_spawn` with `runtime: "acp"` | healthy `acpx` backend | ACP task/session status |

    The important split is provider versus runtime:

    • `openai-codex/*` answers "which provider/auth route should PI use?"
    • `agentRuntime.id: "codex"` answers "which loop should execute this embedded turn?"
    • `/codex ...` answers "which native Codex conversation should this chat bind or control?"
    • ACP answers "which external harness process should acpx launch?"
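To make the split concrete, here is a minimal sketch of how the route map above could be expressed as a decision function. The names (`pickRoute`, the route labels) are illustrative assumptions, not OpenClaw source:

```typescript
// Hypothetical restatement of the route map — not OpenClaw's implementation.
type Route =
  | "openai-api-via-pi"
  | "codex-oauth-via-pi"
  | "native-codex-app-server"
  | "auto";

function pickRoute(modelRef: string, runtimeId?: "pi" | "codex" | "auto"): Route {
  // agentRuntime.id answers "which loop executes this embedded turn?"
  if (runtimeId === "codex") return "native-codex-app-server";
  if (runtimeId === "auto") return "auto";
  // The model prefix answers "which provider/auth route should PI use?"
  if (modelRef.startsWith("openai-codex/")) return "codex-oauth-via-pi";
  if (modelRef.startsWith("openai/")) return "openai-api-via-pi";
  return "auto";
}

console.log(pickRoute("openai/gpt-5.5", "codex")); // native Codex app-server turns
console.log(pickRoute("openai-codex/gpt-5.5"));    // Codex OAuth through PI
```

The point of the sketch is that the runtime choice is checked before the provider prefix: forcing `agentRuntime.id: "codex"` wins regardless of prefix, which is why the prefix alone never selects the native harness.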

    Pick the right model prefix

OpenAI-family routes are prefix-specific. Use `openai-codex/*` when you want Codex OAuth through PI; use `openai/*` when you want direct OpenAI API access or when you are forcing the native Codex app-server harness:

| Model ref | Runtime path | Use when |
| --- | --- | --- |
| `openai/gpt-5.4` | OpenAI provider through OpenClaw/PI plumbing | You want current direct OpenAI Platform API access with `OPENAI_API_KEY`. |
| `openai-codex/gpt-5.5` | OpenAI Codex OAuth through OpenClaw/PI | You want ChatGPT/Codex subscription auth with the default PI runner. |
| `openai/gpt-5.5` + `agentRuntime.id: "codex"` | Codex app-server harness | You want native Codex app-server execution for the embedded agent turn. |

GPT-5.5 is currently subscription/OAuth-only in OpenClaw. Use `openai-codex/gpt-5.5` for PI OAuth, or `openai/gpt-5.5` with the Codex app-server harness. Direct API-key access for `openai/gpt-5.5` will be supported once OpenAI enables GPT-5.5 on the public API.

Legacy `codex/gpt-*` refs remain accepted as compatibility aliases. Doctor compatibility migration rewrites legacy primary runtime refs to canonical model refs and records the runtime policy separately, while fallback-only legacy refs are left unchanged because runtime is configured for the whole agent container. New PI Codex OAuth configs should use `openai-codex/gpt-*`; new native app-server harness configs should use `openai/gpt-*` plus `agentRuntime.id: "codex"`.

`agents.defaults.imageModel` follows the same prefix split. Use `openai-codex/gpt-*` when image understanding should run through the OpenAI Codex OAuth provider path. Use `codex/gpt-*` when image understanding should run through a bounded Codex app-server turn. The Codex app-server model must advertise image input support; text-only Codex models fail before the media turn starts.

Use `/status` to confirm the effective harness for the current session. If the selection is surprising, enable debug logging for the `agents/harness` subsystem and inspect the gateway's structured `agent harness selected` record. It includes the selected harness id, selection reason, runtime/fallback policy, and, in `auto` mode, each plugin candidate's support result.

    What doctor warnings mean

`openclaw doctor` warns when all of these are true:

    • the bundled `codex` plugin is enabled or allowed
    • an agent's primary model is `openai-codex/*`
    • that agent's effective runtime is not `codex`

That warning exists because users often expect "Codex plugin enabled" to imply "native Codex app-server runtime." OpenClaw does not make that leap. The warning means:

    • No change is required if you intended ChatGPT/Codex OAuth through PI.
    • Change the model to `openai/<model>` and set `agentRuntime.id: "codex"` if you intended native app-server execution.
    • Existing sessions still need `/new` or `/reset` after a runtime change, because session runtime pins are sticky.
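As a sketch, the three warning conditions combine as a single conjunction. The helper and field names below are hypothetical restatements for illustration, not the actual doctor code:

```typescript
// Hypothetical restatement of the doctor warning predicate — not OpenClaw source.
interface AgentView {
  pluginEnabledOrAllowed: boolean; // bundled codex plugin enabled or in plugins.allow
  primaryModel: string;            // e.g. "openai-codex/gpt-5.5"
  effectiveRuntime: string;        // e.g. "pi" or "codex"
}

function shouldWarn(agent: AgentView): boolean {
  return (
    agent.pluginEnabledOrAllowed &&
    agent.primaryModel.startsWith("openai-codex/") &&
    agent.effectiveRuntime !== "codex"
  );
}
```

All three conditions must hold at once; if any one is false, doctor stays quiet, and in no case does the warning change the route itself.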

Harness selection is not a live session control. When an embedded turn runs, OpenClaw records the selected harness id on that session and keeps using it for later turns with the same session id. Change the `agentRuntime` config or `OPENCLAW_AGENT_RUNTIME` when you want future sessions to use another harness; use `/new` or `/reset` to start a fresh session before switching an existing conversation between PI and Codex. This avoids replaying one transcript through two incompatible native session systems.

Legacy sessions created before harness pins are treated as PI-pinned once they have transcript history. Use `/new` or `/reset` to opt that conversation into Codex after changing config.

`/status` shows the effective model runtime. The default PI harness appears as `Runtime: OpenClaw Pi Default`, and the Codex app-server harness appears as `Runtime: OpenAI Codex`.

    Requirements

    • OpenClaw with the bundled `codex` plugin available.
    • Codex app-server `0.125.0` or newer. The bundled plugin manages a compatible Codex app-server binary by default, so local `codex` commands on `PATH` do not affect normal harness startup.
    • Codex auth available to the app-server process or to OpenClaw's Codex auth bridge. Local app-server launches use an OpenClaw-managed Codex home for each agent and an isolated child `HOME`, so they do not read your personal `~/.codex` account, skills, plugins, config, thread state, or native `$HOME/.agents/skills` by default.

    The plugin blocks older or unversioned app-server handshakes. That keeps OpenClaw on the protocol surface it has been tested against.

For live and Docker smoke tests, auth usually comes from the Codex CLI account or an OpenClaw `openai-codex` auth profile. Local stdio app-server launches can also fall back to `CODEX_API_KEY` / `OPENAI_API_KEY` when no account is present.

    Add Codex alongside other models

Do not set `agentRuntime.id: "codex"` globally if the same agent should freely switch between Codex and non-Codex provider models. A forced runtime applies to every embedded turn for that agent or session. If you select an Anthropic model while that runtime is forced, OpenClaw still tries the Codex harness and fails closed instead of silently routing that turn through PI.

Use one of these shapes instead:

    • Put Codex on a dedicated agent with `agentRuntime.id: "codex"`.
    • Keep the default agent on `agentRuntime.id: "auto"` with PI fallback for normal mixed-provider usage.
    • Use legacy `codex/*` refs only for compatibility. New configs should prefer `openai/*` plus an explicit Codex runtime policy.

    For example, this keeps the default agent on normal automatic selection and adds a separate Codex agent:

```json5
{
  plugins: {
    entries: {
      codex: { enabled: true },
    },
  },
  agents: {
    defaults: {
      agentRuntime: { id: "auto", fallback: "pi" },
    },
    list: [
      { id: "main", default: true, model: "anthropic/claude-opus-4-6" },
      {
        id: "codex",
        name: "Codex",
        model: "openai/gpt-5.5",
        agentRuntime: { id: "codex" },
      },
    ],
  },
}
```

    With this shape:

    • The default `main` agent uses the normal provider path and PI compatibility fallback.
    • The `codex` agent uses the Codex app-server harness.
    • If Codex is missing or unsupported for the `codex` agent, the turn fails instead of quietly using PI.

    Agent command routing

    Agents should route user requests by intent, not by the word "Codex" alone:

| User asks for... | Agent should use... |
| --- | --- |
| "Bind this chat to Codex" | `/codex bind` |
| "Resume Codex thread `<id>` here" | `/codex resume <id>` |
| "Show Codex threads" | `/codex threads` |
| "File a support report for a bad Codex run" | `/diagnostics [note]` |
| "Only send Codex feedback for this attached thread" | `/codex diagnostics [note]` |
| "Use Codex as the runtime for this agent" | Config change to `agentRuntime.id` |
| "Use my ChatGPT/Codex subscription with normal OpenClaw" | `openai-codex/*` model refs |
| "Run Codex through ACP/acpx" | ACP `sessions_spawn({ runtime: "acp", ... })` |
| "Start Claude Code/Gemini/OpenCode/Cursor in a thread" | ACP/acpx, not `/codex` and not native sub-agents |

    OpenClaw only advertises ACP spawn guidance to agents when ACP is enabled, dispatchable, and backed by a loaded runtime backend. If ACP is not available, the system prompt and plugin skills should not teach the agent about ACP routing.

    Codex-only deployments

Force the Codex harness when you need to prove that every embedded agent turn uses Codex. Explicit plugin runtimes default to no PI fallback, so `fallback: "none"` is optional but often useful as documentation:

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: { id: "codex", fallback: "none" },
    },
  },
}
```

    Environment override:

```bash
OPENCLAW_AGENT_RUNTIME=codex openclaw gateway run
```

With Codex forced, OpenClaw fails early if the Codex plugin is disabled, the app-server is too old, or the app-server cannot start. Set `OPENCLAW_AGENT_HARNESS_FALLBACK=pi` only if you intentionally want PI to handle missing harness selection.

    Per-agent Codex

    You can make one agent Codex-only while the default agent keeps normal auto-selection:

```json5
{
  agents: {
    defaults: {
      agentRuntime: { id: "auto", fallback: "pi" },
    },
    list: [
      { id: "main", default: true, model: "anthropic/claude-opus-4-6" },
      {
        id: "codex",
        name: "Codex",
        model: "openai/gpt-5.5",
        agentRuntime: { id: "codex", fallback: "none" },
      },
    ],
  },
}
```

Use normal session commands to switch agents and models. `/new` creates a fresh OpenClaw session, and the Codex harness creates or resumes its sidecar app-server thread as needed. `/reset` clears the OpenClaw session binding for that thread and lets the next turn resolve the harness from current config again.

    Model discovery

    By default, the Codex plugin asks the app-server for available models. If discovery fails or times out, it uses a bundled fallback catalog for:

    • GPT-5.5
    • GPT-5.4 mini
    • GPT-5.2

You can tune discovery under `plugins.entries.codex.config.discovery`:

```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          discovery: { enabled: true, timeoutMs: 2500 },
        },
      },
    },
  },
}
```

    Disable discovery when you want startup to avoid probing Codex and stick to the fallback catalog:

```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          discovery: { enabled: false },
        },
      },
    },
  },
}
```
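The discovery behavior described above — probe the app-server when enabled, fall back to the bundled catalog on failure or timeout, and skip the probe entirely when disabled — can be sketched like this. The function and catalog names are illustrative assumptions, not the plugin's real API:

```typescript
// Illustrative sketch of discovery-with-fallback — names are hypothetical.
const FALLBACK_CATALOG = ["gpt-5.5", "gpt-5.4-mini", "gpt-5.2"];

async function discoverModels(
  probe: () => Promise<string[]>,
  opts: { enabled: boolean; timeoutMs: number },
): Promise<string[]> {
  // Discovery disabled: never probe at startup, just use the bundled catalog.
  if (!opts.enabled) return FALLBACK_CATALOG;
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("discovery timeout")), opts.timeoutMs);
  });
  try {
    // First settled result wins; probe errors and timeouts both fall back.
    return await Promise.race([probe(), timeout]);
  } catch {
    return FALLBACK_CATALOG;
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```

Clearing the timer in `finally` matters: without it, a successful probe would leave the timeout promise to reject later as an unhandled rejection.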

    App-server connection and policy

    By default, the plugin starts OpenClaw's managed Codex binary locally with:

```bash
codex app-server --listen stdio://
```

The managed binary is declared as a bundled plugin runtime dependency and staged with the rest of the `codex` plugin dependencies. This keeps the app-server version tied to the bundled plugin instead of whichever separate Codex CLI happens to be installed locally. Set `appServer.command` only when you intentionally want to run a different executable.

By default, OpenClaw starts local Codex harness sessions in YOLO mode: `approvalPolicy: "never"`, `approvalsReviewer: "user"`, and `sandbox: "danger-full-access"`. This is the trusted local operator posture used for autonomous heartbeats: Codex can use shell and network tools without stopping on native approval prompts that nobody is around to answer.

To opt in to Codex guardian-reviewed approvals, set `appServer.mode: "guardian"`:

```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          appServer: { mode: "guardian", serviceTier: "fast" },
        },
      },
    },
  },
}
```

    Guardian mode uses Codex's native auto-review approval path. When Codex asks to leave the sandbox, write outside the workspace, or add permissions like network access, Codex routes that approval request to the native reviewer instead of a human prompt. The reviewer applies Codex's risk framework and approves or denies the specific request. Use Guardian when you want more guardrails than YOLO mode but still need unattended agents to make progress.

The `guardian` preset expands to `approvalPolicy: "on-request"`, `approvalsReviewer: "auto_review"`, and `sandbox: "workspace-write"`. Individual policy fields still override `mode`, so advanced deployments can mix the preset with explicit choices. The older `guardian_subagent` reviewer value is still accepted as a compatibility alias, but new configs should use `auto_review`.

    For an already-running app-server, use WebSocket transport:

```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          appServer: {
            transport: "websocket",
            url: "ws://127.0.0.1:39175",
            authToken: "${CODEX_APP_SERVER_TOKEN}",
            requestTimeoutMs: 60000,
          },
        },
      },
    },
  },
}
```

Stdio app-server launches inherit OpenClaw's process environment by default, but OpenClaw owns the Codex app-server account bridge and sets both `CODEX_HOME` and `HOME` to per-agent directories under that agent's OpenClaw state. Codex's own skill loader reads `$CODEX_HOME/skills` and `$HOME/.agents/skills`, so both values are isolated for local app-server launches. That keeps Codex-native skills, plugins, config, accounts, and thread state scoped to the OpenClaw agent instead of leaking in from the operator's personal Codex CLI home.

    OpenClaw plugins and OpenClaw skill snapshots still flow through OpenClaw's own plugin registry and skill loader. Personal Codex CLI assets do not. If you have useful Codex CLI skills or plugins that should become part of an OpenClaw agent, inventory them explicitly:

```bash
openclaw migrate codex --dry-run
openclaw migrate apply codex --yes
```

    The Codex migration provider copies skills into the current OpenClaw agent workspace. Codex native plugins, hooks, and config files are reported or archived for manual review instead of being activated automatically, because they can execute commands, expose MCP servers, or carry credentials.

Auth is selected in this order:

    1. An explicit OpenClaw Codex auth profile for the agent.
    2. The app-server's existing account in that agent's Codex home.
    3. For local stdio app-server launches only, `CODEX_API_KEY`, then `OPENAI_API_KEY`, when no app-server account is present and OpenAI auth is still required.
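That precedence can be sketched as a simple fall-through, with the env-key fallback gated on stdio transport. The types and field names are hypothetical, chosen only to mirror the three steps above:

```typescript
// Illustrative auth-selection order for the Codex harness — hypothetical types.
interface AuthContext {
  explicitProfile?: string;          // OpenClaw Codex auth profile for the agent
  appServerAccount?: string;         // existing account in the agent's Codex home
  transport: "stdio" | "websocket";
  env: { CODEX_API_KEY?: string; OPENAI_API_KEY?: string };
}

function selectAuth(ctx: AuthContext): string | undefined {
  if (ctx.explicitProfile) return `profile:${ctx.explicitProfile}`;   // 1. explicit profile
  if (ctx.appServerAccount) return `account:${ctx.appServerAccount}`; // 2. app-server account
  if (ctx.transport === "stdio") {                                    // 3. local stdio only
    if (ctx.env.CODEX_API_KEY) return "env:CODEX_API_KEY";
    if (ctx.env.OPENAI_API_KEY) return "env:OPENAI_API_KEY";
  }
  return undefined; // websocket with no profile/account: no Gateway env fallback
}
```

Note the last return: as the surrounding text says, WebSocket connections never receive the Gateway env API-key fallback, so they must bring an explicit profile or a remote account.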

When OpenClaw sees a ChatGPT subscription-style Codex auth profile, it removes `CODEX_API_KEY` and `OPENAI_API_KEY` from the spawned Codex child process. That keeps Gateway-level API keys available for embeddings or direct OpenAI models without making native Codex app-server turns bill through the API by accident. Explicit Codex API-key profiles and local stdio env-key fallback use app-server login instead of inherited child-process env. WebSocket app-server connections do not receive Gateway env API-key fallback; use an explicit auth profile or the remote app-server's own account.

If a deployment needs additional environment isolation, add those variables to `appServer.clearEnv`:

```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          appServer: { clearEnv: ["CODEX_API_KEY", "OPENAI_API_KEY"] },
        },
      },
    },
  },
}
```

`appServer.clearEnv` only affects the spawned Codex app-server child process.

Supported `appServer` fields:

    FieldDefaultMeaning
    text
    transport
    text
    "stdio"
    text
    "stdio"
    spawns Codex;
    text
    "websocket"
    connects to
    text
    url
    .
    text
    command
    managed Codex binaryExecutable for stdio transport. Leave unset to use the managed binary; set it only for an explicit override.
    text
    args
    text
    ["app-server", "--listen", "stdio://"]
    Arguments for stdio transport.
    text
    url
    unsetWebSocket app-server URL.
    text
    authToken
    unsetBearer token for WebSocket transport.
    text
    headers
    text
    {}
    Extra WebSocket headers.
    text
    clearEnv
    text
    []
    Extra environment variable names removed from the spawned stdio app-server process after OpenClaw builds its inherited environment.
    text
    CODEX_HOME
    and
    text
    HOME
    are reserved for OpenClaw's per-agent Codex isolation on local launches.
    text
    requestTimeoutMs
    text
    60000
    Timeout for app-server control-plane calls.
    text
    mode
    text
    "yolo"
    Preset for YOLO or guardian-reviewed execution.
    text
    approvalPolicy
    text
    "never"
    Native Codex approval policy sent to thread start/resume/turn.
    text
    sandbox
    text
    "danger-full-access"
    Native Codex sandbox mode sent to thread start/resume.
    text
    approvalsReviewer
    text
    "user"
    Use
    text
    "auto_review"
    to let Codex review native approval prompts.
    text
    guardian_subagent
    remains a legacy alias.
    text
    serviceTier
    unsetOptional Codex app-server service tier:
    text
    "fast"
    ,
    text
    "flex"
    , or
    text
    null
    . Invalid legacy values are ignored.
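
    As a sketch, several of these fields can be combined in one stdio deployment. The values below are illustrative, not recommendations:

    ```json5
    {
      plugins: {
        entries: {
          codex: {
            enabled: true,
            config: {
              appServer: {
                transport: "stdio",
                requestTimeoutMs: 120000,
                clearEnv: ["CODEX_API_KEY", "OPENAI_API_KEY"],
                serviceTier: "flex",
              },
            },
          },
        },
      },
    }
    ```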

    OpenClaw-owned dynamic tool calls are bounded independently from `appServer.requestTimeoutMs`: each Codex `item/tool/call` request must receive an OpenClaw response within 30 seconds. On timeout, OpenClaw aborts the tool signal where supported and returns a failed dynamic-tool response to Codex so the turn can continue instead of leaving the session in `processing`.

    After OpenClaw responds to a Codex turn-scoped app-server request, the harness also expects Codex to finish the native turn with `turn/completed`. If the app-server goes quiet for 60 seconds after that response, OpenClaw best-effort interrupts the Codex turn, records a diagnostic timeout, and releases the OpenClaw session lane so follow-up chat messages are not queued behind a stale native turn.

    Environment overrides remain available for local testing:

    - `OPENCLAW_CODEX_APP_SERVER_BIN`
    - `OPENCLAW_CODEX_APP_SERVER_ARGS`
    - `OPENCLAW_CODEX_APP_SERVER_MODE=yolo|guardian`
    - `OPENCLAW_CODEX_APP_SERVER_APPROVAL_POLICY`
    - `OPENCLAW_CODEX_APP_SERVER_SANDBOX`

    `OPENCLAW_CODEX_APP_SERVER_BIN` bypasses the managed binary when `appServer.command` is unset.

    `OPENCLAW_CODEX_APP_SERVER_GUARDIAN=1` was removed. Use `plugins.entries.codex.config.appServer.mode: "guardian"` instead, or `OPENCLAW_CODEX_APP_SERVER_MODE=guardian` for one-off local testing. Config is preferred for repeatable deployments because it keeps the plugin behavior in the same reviewed file as the rest of the Codex harness setup.

    Computer use

    Computer Use is covered in its own setup guide: Codex Computer Use.

    The short version: OpenClaw does not vendor the desktop-control app or execute desktop actions itself. It prepares the Codex app-server, verifies that the `computer-use` MCP server is available, and then lets Codex handle the native MCP tool calls during Codex-mode turns.

    For direct TryCua driver access outside the Codex marketplace flow, register `cua-driver mcp` with `openclaw mcp set cua-driver '{"command":"cua-driver","args":["mcp"]}'`. See Codex Computer Use for the distinction between Codex-owned Computer Use and direct MCP registration.

    Minimal config:

    ```json5
    {
      plugins: {
        entries: {
          codex: {
            enabled: true,
            config: {
              computerUse: {
                autoInstall: true,
              },
            },
          },
        },
      },
      agents: {
        defaults: {
          model: "openai/gpt-5.5",
          agentRuntime: {
            id: "codex",
            fallback: "none",
          },
        },
      },
    }
    ```

    The setup can be checked or installed from the command surface:

    - `/codex computer-use status`
    - `/codex computer-use install`
    - `/codex computer-use install --source <marketplace-source>`
    - `/codex computer-use install --marketplace-path <path>`

    Computer Use is macOS-specific and may require local OS permissions before the Codex MCP server can control apps. If `computerUse.enabled` is true and the MCP server is unavailable, Codex-mode turns fail before the thread starts instead of silently running without the native Computer Use tools. See Codex Computer Use for marketplace choices, remote catalog limits, status reasons, and troubleshooting.

    When `computerUse.autoInstall` is true, OpenClaw can register the standard bundled Codex Desktop marketplace from `/Applications/Codex.app/Contents/Resources/plugins/openai-bundled` if Codex has not discovered a local marketplace yet. Use `/new` or `/reset` after changing runtime or Computer Use config so existing sessions do not keep an old PI or Codex thread binding.

    Common recipes

    Local Codex with default stdio transport:

    ```json5
    {
      plugins: {
        entries: {
          codex: {
            enabled: true,
          },
        },
      },
    }
    ```

    Codex-only harness validation:

    ```json5
    {
      agents: {
        defaults: {
          model: "openai/gpt-5.5",
          agentRuntime: {
            id: "codex",
          },
        },
      },
      plugins: {
        entries: {
          codex: {
            enabled: true,
          },
        },
      },
    }
    ```

    Guardian-reviewed Codex approvals:

    ```json5
    {
      plugins: {
        entries: {
          codex: {
            enabled: true,
            config: {
              appServer: {
                mode: "guardian",
                approvalPolicy: "on-request",
                approvalsReviewer: "auto_review",
                sandbox: "workspace-write",
              },
            },
          },
        },
      },
    }
    ```

    Remote app-server with explicit headers:

    ```json5
    {
      plugins: {
        entries: {
          codex: {
            enabled: true,
            config: {
              appServer: {
                transport: "websocket",
                url: "ws://gateway-host:39175",
                headers: {
                  "X-OpenClaw-Agent": "main",
                },
              },
            },
          },
        },
      },
    }
    ```
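
    When the remote app-server requires authentication, the same recipe can add the `authToken` field described earlier. This is a sketch with a placeholder token value:

    ```json5
    {
      plugins: {
        entries: {
          codex: {
            enabled: true,
            config: {
              appServer: {
                transport: "websocket",
                url: "ws://gateway-host:39175",
                // Sent as the Bearer token for the WebSocket transport.
                authToken: "replace-with-your-token",
              },
            },
          },
        },
      },
    }
    ```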

    Model switching stays OpenClaw-controlled. When an OpenClaw session is attached to an existing Codex thread, the next turn sends the currently selected OpenAI model, provider, approval policy, sandbox, and service tier to app-server again. Switching from `openai/gpt-5.5` to `openai/gpt-5.2` keeps the thread binding but asks Codex to continue with the newly selected model.

    Codex command

    The bundled plugin registers `/codex` as an authorized slash command. It is generic and works on any channel that supports OpenClaw text commands.

    Common forms:

    - `/codex status` shows live app-server connectivity, models, account, rate limits, MCP servers, and skills.
    - `/codex models` lists live Codex app-server models.
    - `/codex threads [filter]` lists recent Codex threads.
    - `/codex resume <thread-id>` attaches the current OpenClaw session to an existing Codex thread.
    - `/codex compact` asks Codex app-server to compact the attached thread.
    - `/codex review` starts Codex native review for the attached thread.
    - `/codex diagnostics [note]` asks before sending Codex diagnostics feedback for the attached thread.
    - `/codex computer-use status` checks the configured Computer Use plugin and MCP server.
    - `/codex computer-use install` installs the configured Computer Use plugin and reloads MCP servers.
    - `/codex account` shows account and rate-limit status.
    - `/codex mcp` lists Codex app-server MCP server status.
    - `/codex skills` lists Codex app-server skills.

    Common debugging workflow

    When a Codex-backed agent does something surprising in Telegram, Discord, Slack, or another channel, start with the conversation where the problem happened:

    1. Run `/diagnostics bad tool choice after image upload` or another short note that describes what you saw.
    2. Approve the diagnostics request once. The approval creates the local Gateway diagnostics zip and, because the session is using the Codex harness, also sends the relevant Codex feedback bundle to OpenAI servers.
    3. Copy the completed diagnostics reply into the bug report or support thread. It includes the local bundle path, privacy summary, OpenClaw session ids, Codex thread ids, and an `Inspect locally` line for each Codex thread.
    4. If you want to debug the run yourself, run the printed `Inspect locally` command in a terminal. It looks like `codex resume <thread-id>` and opens the native Codex thread so you can inspect the conversation, continue it locally, or ask Codex why it chose a particular tool or plan.

    Use `/codex diagnostics [note]` only when you specifically want the Codex feedback upload for the currently attached thread without the full OpenClaw Gateway diagnostics bundle. For most support reports, `/diagnostics [note]` is the better starting point because it ties the local Gateway state and Codex thread ids together in one reply. See Diagnostics export for the full privacy model and group-chat behavior.

    Core OpenClaw also exposes owner-only `/diagnostics [note]` as the general Gateway diagnostics command. Its approval prompt shows the sensitive-data preamble, links to Diagnostics Export, and requests `openclaw gateway diagnostics export --json` through explicit exec approval every time. Do not approve diagnostics with an allow-all rule. After approval, OpenClaw sends a pasteable report with the local bundle path and manifest summary. When the active OpenClaw session is using the Codex harness, that same approval also authorizes sending the relevant Codex feedback bundles to OpenAI servers. The approval prompt says that Codex feedback will be sent, but it does not list Codex session or thread ids before approval.

    If `/diagnostics` is invoked by an owner in a group chat, OpenClaw keeps the shared channel clean: the group receives only a short notice, while the diagnostics preamble, approval prompts, and Codex session/thread ids are sent to the owner through the private approval route. If there is no private owner route, OpenClaw refuses the group request and asks the owner to run it from a DM.

    The approved Codex upload calls Codex app-server `feedback/upload` and asks app-server to include logs for each listed thread and spawned Codex subthreads when available. The upload goes through Codex's normal feedback path to OpenAI servers; if Codex feedback is disabled in that app-server, the command returns the app-server error. The completed diagnostics reply lists the channels, OpenClaw session ids, Codex thread ids, and local `codex resume <thread-id>` commands for the threads that were sent. If you deny or ignore the approval, OpenClaw does not print those Codex ids. This upload does not replace the local Gateway diagnostics export.

    `/codex resume` writes the same sidecar binding file that the harness uses for normal turns. On the next message, OpenClaw resumes that Codex thread, passes the currently selected OpenClaw model into app-server, and keeps extended history enabled.

    Inspect a Codex thread from the CLI

    The fastest way to understand a bad Codex run is often to open the native Codex thread directly:

    ```sh
    codex resume <thread-id>
    ```

    Use this when you notice a bug in a channel conversation and want to inspect the problematic Codex session, continue it locally, or ask Codex why it made a particular tool or reasoning choice. The easiest path is usually to run `/diagnostics [note]` first: after you approve it, the completed report lists each Codex thread and prints an `Inspect locally` command, for example `codex resume <thread-id>`. You can copy that command directly into a terminal.

    You can also get a thread id from `/codex binding` for the current chat or `/codex threads [filter]` for recent Codex app-server threads, then run the same `codex resume` command in your shell.

    The command surface requires Codex app-server `0.125.0` or newer. Individual control methods are reported as `unsupported by this Codex app-server` if a future or custom app-server does not expose that JSON-RPC method.

    Hook boundaries

    The Codex harness has three hook layers:

    | Layer | Owner | Purpose |
    | --- | --- | --- |
    | OpenClaw plugin hooks | OpenClaw | Product/plugin compatibility across PI and Codex harnesses. |
    | Codex app-server extension middleware | OpenClaw bundled plugins | Per-turn adapter behavior around OpenClaw dynamic tools. |
    | Codex native hooks | Codex | Low-level Codex lifecycle and native tool policy from Codex config. |

    OpenClaw does not use project or global Codex `hooks.json` files to route OpenClaw plugin behavior. For the supported native tool and permission bridge, OpenClaw injects per-thread Codex config for `PreToolUse`, `PostToolUse`, `PermissionRequest`, and `Stop`. Other Codex hooks such as `SessionStart` and `UserPromptSubmit` remain Codex-level controls; they are not exposed as OpenClaw plugin hooks in the v1 contract.

    For OpenClaw dynamic tools, OpenClaw executes the tool after Codex asks for the call, so OpenClaw fires the plugin and middleware behavior it owns in the harness adapter. For Codex-native tools, Codex owns the canonical tool record. OpenClaw can mirror selected events, but it cannot rewrite the native Codex thread unless Codex exposes that operation through app-server or native hook callbacks.

    Compaction and LLM lifecycle projections come from Codex app-server notifications and OpenClaw adapter state, not native Codex hook commands. OpenClaw's `before_compaction`, `after_compaction`, `llm_input`, and `llm_output` events are adapter-level observations, not byte-for-byte captures of Codex's internal request or compaction payloads.

    Codex native `hook/started` and `hook/completed` app-server notifications are projected as `codex_app_server.hook` agent events for trajectory and debugging. They do not invoke OpenClaw plugin hooks.

    V1 support contract

    Codex mode is not PI with a different model call underneath. Codex owns more of the native model loop, and OpenClaw adapts its plugin and session surfaces around that boundary.

    Supported in Codex runtime v1:

    | Surface | Support | Why |
    | --- | --- | --- |
    | OpenAI model loop through Codex | Supported | Codex app-server owns the OpenAI turn, native thread resume, and native tool continuation. |
    | OpenClaw channel routing and delivery | Supported | Telegram, Discord, Slack, WhatsApp, iMessage, and other channels stay outside the model runtime. |
    | OpenClaw dynamic tools | Supported | Codex asks OpenClaw to execute these tools, so OpenClaw stays in the execution path. |
    | Prompt and context plugins | Supported | OpenClaw builds prompt overlays and projects context into the Codex turn before starting or resuming the thread. |
    | Context engine lifecycle | Supported | Assemble, ingest or after-turn maintenance, and context-engine compaction coordination run for Codex turns. |
    | Dynamic tool hooks | Supported | `before_tool_call`, `after_tool_call`, and tool-result middleware run around OpenClaw-owned dynamic tools. |
    | Lifecycle hooks | Supported as adapter observations | `llm_input`, `llm_output`, `agent_end`, `before_compaction`, and `after_compaction` fire with honest Codex-mode payloads. |
    | Final-answer revision gate | Supported through the native hook relay | Codex `Stop` is relayed to `before_agent_finalize`; `revise` asks Codex for one more model pass before finalization. |
    | Native shell, patch, and MCP block or observe | Supported through the native hook relay | Codex `PreToolUse` and `PostToolUse` are relayed for committed native tool surfaces, including MCP payloads on Codex app-server `0.125.0` or newer. Blocking is supported; argument rewriting is not. |
    | Native permission policy | Supported through the native hook relay | Codex `PermissionRequest` can be routed through OpenClaw policy where the runtime exposes it. If OpenClaw returns no decision, Codex continues through its normal guardian or user approval path. |
    | App-server trajectory capture | Supported | OpenClaw records the request it sent to app-server and the app-server notifications it receives. |

    Not supported in Codex runtime v1:

    | Surface | V1 boundary | Future path |
    | --- | --- | --- |
    | Native tool argument mutation | Codex native pre-tool hooks can block, but OpenClaw does not rewrite Codex-native tool arguments. | Requires Codex hook/schema support for replacement tool input. |
    | Editable Codex-native transcript history | Codex owns canonical native thread history. OpenClaw owns a mirror and can project future context, but should not mutate unsupported internals. | Add explicit Codex app-server APIs if native thread surgery is needed. |
    | `tool_result_persist` for Codex-native tool records | That hook transforms OpenClaw-owned transcript writes, not Codex-native tool records. | Could mirror transformed records, but canonical rewrite needs Codex support. |
    | Rich native compaction metadata | OpenClaw observes compaction start and completion, but does not receive a stable kept/dropped list, token delta, or summary payload. | Needs richer Codex compaction events. |
    | Compaction intervention | Current OpenClaw compaction hooks are notification-level in Codex mode. | Add Codex pre/post compaction hooks if plugins need to veto or rewrite native compaction. |
    | Byte-for-byte model API request capture | OpenClaw can capture app-server requests and notifications, but Codex core builds the final OpenAI API request internally. | Needs a Codex model-request tracing event or debug API. |

    Tools, media, and compaction

    The Codex harness changes the low-level embedded agent executor only.

    OpenClaw still builds the tool list and receives dynamic tool results from the harness. Text, images, video, music, TTS, approvals, and messaging-tool output continue through the normal OpenClaw delivery path.

    The native hook relay is intentionally generic, but the v1 support contract is limited to the Codex-native tool and permission paths that OpenClaw tests. In the Codex runtime, that includes shell, patch, and MCP `PreToolUse`, `PostToolUse`, and `PermissionRequest` payloads. Do not assume every future Codex hook event is an OpenClaw plugin surface until the runtime contract names it.

    For `PermissionRequest`, OpenClaw only returns explicit allow or deny decisions when policy decides. A no-decision result is not an allow: Codex treats it as no hook decision and falls through to its own guardian or user approval path.

    Codex MCP tool approval elicitations are routed through OpenClaw's plugin approval flow when Codex marks `_meta.codex_approval_kind` as `"mcp_tool_call"`. Codex `request_user_input` prompts are sent back to the originating chat, and the next queued follow-up message answers that native server request instead of being steered as extra context. Other MCP elicitation requests still fail closed.

    Active-run queue steering maps onto Codex app-server `turn/steer`. With the default `messages.queue.mode: "steer"`, OpenClaw batches queued chat messages for the configured quiet window and sends them as one `turn/steer` request in arrival order. Legacy `queue` mode sends separate `turn/steer` requests. Codex review and manual compaction turns can reject same-turn steering, in which case OpenClaw uses the followup queue when the selected mode allows fallback. See Steering queue.
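
    As a sketch, opting into the legacy per-message behavior described above would look like:

    ```json5
    {
      messages: {
        queue: {
          // "steer" (default) batches queued messages into one turn/steer request;
          // "queue" sends a separate turn/steer request per message.
          mode: "queue",
        },
      },
    }
    ```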

    When the selected model uses the Codex harness, native thread compaction is delegated to Codex app-server. OpenClaw keeps a transcript mirror for channel history, search, `/new`, `/reset`, and future model or harness switching. The mirror includes the user prompt, final assistant text, and lightweight Codex reasoning or plan records when the app-server emits them. Today, OpenClaw only records native compaction start and completion signals. It does not yet expose a human-readable compaction summary or an auditable list of which entries Codex kept after compaction.

    Because Codex owns the canonical native thread, `tool_result_persist` does not currently rewrite Codex-native tool result records. It only applies when OpenClaw is writing an OpenClaw-owned session transcript tool result.

    Media generation does not require PI. Image, video, music, PDF, TTS, and media understanding continue to use the matching provider/model settings such as `agents.defaults.imageGenerationModel`, `videoGenerationModel`, `pdfModel`, and `messages.tts`.
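
    A sketch of those media settings next to a Codex runtime selection; the media model refs here are placeholders, not recommendations:

    ```json5
    {
      agents: {
        defaults: {
          model: "openai/gpt-5.5",
          agentRuntime: { id: "codex" },
          // Media generation stays on its normal provider path,
          // independent of the Codex harness.
          imageGenerationModel: "replace-with-your-image-model",
          videoGenerationModel: "replace-with-your-video-model",
          pdfModel: "replace-with-your-pdf-model",
        },
      },
    }
    ```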

    Troubleshooting

    Codex does not appear as a normal `/model` provider: that is expected for new configs. Select an `openai/gpt-*` model with `agentRuntime.id: "codex"` (or a legacy `codex/*` ref), enable `plugins.entries.codex.enabled`, and check whether `plugins.allow` excludes `codex`.
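
    Those checks, sketched as one config fragment (the `plugins.allow` list matters only if your config uses an allow-list at all):

    ```json5
    {
      agents: {
        defaults: {
          model: "openai/gpt-5.5",
          agentRuntime: { id: "codex" },
        },
      },
      plugins: {
        // If an allow-list is present, it must include "codex".
        allow: ["codex"],
        entries: {
          codex: { enabled: true },
        },
      },
    }
    ```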

    OpenClaw uses PI instead of Codex: `agentRuntime.id: "auto"` can still use PI as the compatibility backend when no Codex harness claims the run. Set `agentRuntime.id: "codex"` to force Codex selection while testing. A forced Codex runtime now fails instead of falling back to PI unless you explicitly set `agentRuntime.fallback: "pi"`. Once Codex app-server is selected, its failures surface directly without extra fallback config.

    The app-server is rejected: upgrade Codex so the app-server handshake reports version `0.125.0` or newer. Same-version prereleases or build-suffixed versions such as `0.125.0-alpha.2` or `0.125.0+custom` are rejected because the stable `0.125.0` protocol floor is what OpenClaw tests.

    Model discovery is slow: lower `plugins.entries.codex.config.discovery.timeoutMs` or disable discovery.
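
    A sketch of lowering that timeout; the 5000 ms value is illustrative:

    ```json5
    {
      plugins: {
        entries: {
          codex: {
            enabled: true,
            config: {
              discovery: {
                timeoutMs: 5000,
              },
            },
          },
        },
      },
    }
    ```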

    WebSocket transport fails immediately: check `appServer.url`, `authToken`, and that the remote app-server speaks the same Codex app-server protocol version.

    A non-Codex model uses PI: that is expected unless you forced `agentRuntime.id: "codex"` for that agent or selected a legacy `codex/*` ref. Plain `openai/gpt-*` and other provider refs stay on their normal provider path in `auto` mode. If you force `agentRuntime.id: "codex"`, every embedded turn for that agent must be a Codex-supported OpenAI model.

    Computer Use is installed but tools do not run: check `/codex computer-use status` from a fresh session. If a tool reports `Native hook relay unavailable`, use `/new` or `/reset`; if it persists, restart the gateway to clear stale native hook registrations. If `computer-use.list_apps` times out, restart Codex Computer Use or Codex Desktop and retry.

    Related

    - Agent harness plugins
    - Agent runtimes
    - Model providers
    - OpenAI provider
    - Status
    - Plugin hooks
    - Configuration reference
    - Testing
