Technical reference for the OpenClaw framework.
The bundled `codex` plugin. Use this when you want Codex to own the low-level agent session: model discovery, native thread resume, native compaction, and app-server execution. OpenClaw still owns chat channels, session files, model selection, tools, approvals, media delivery, and the visible transcript mirror.
If you are trying to orient yourself, start with Agent runtimes. The short version is:
To use the Codex harness for GPT agent turns (for example `openai/gpt-5.5`), keep the model ref canonical as `openai/gpt-*` and set `agentRuntime.id: "codex"`:

```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
      },
    },
  },
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
        fallback: "none",
      },
    },
  },
}
```
If your config uses `plugins.allow`, include `codex` in the allow list:

```json5
{
  plugins: {
    allow: ["codex"],
    entries: {
      codex: {
        enabled: true,
      },
    },
  },
}
```
Do not use `openai-codex/gpt-*` model refs with the Codex harness. The bundled `codex` plugin provides these capabilities:

| Capability | How you use it | What it does |
|---|---|---|
| Native embedded runtime | `agentRuntime.id: "codex"` | Runs OpenClaw embedded agent turns through Codex app-server. |
| Native chat-control commands | `/codex bind`, `/codex resume`, `/codex steer` | Binds and controls Codex app-server threads from a messaging conversation. |
| Codex app-server provider/catalog | `codex` | Lets the runtime discover and validate app-server models. |
| Codex media-understanding path | `codex/*` | Runs bounded Codex app-server turns for supported image understanding models. |
| Native hook relay | Plugin hooks around Codex-native events | Lets OpenClaw observe/block supported Codex-native tool/finalization events. |
Enabling the plugin makes those capabilities available. It does not change how `openai-codex/*` model refs route.

The same plugin also owns the native `/codex` command surface (`/codex ...`). Native Codex turns keep OpenClaw plugin hooks as the public compatibility layer. These are in-process OpenClaw hooks, not Codex `hooks.json` hooks. Supported hook points include `before_prompt_build`, `before_compaction`, `after_compaction`, `llm_input`, `llm_output`, `before_tool_call`, `after_tool_call`, `before_message_write`, `before_agent_finalize` (mapped from Codex `Stop`), and `agent_end`. Plugins can also register runtime-neutral tool-result middleware to rewrite OpenClaw dynamic tool results after OpenClaw executes the tool and before the result is returned to Codex. This is separate from the public `tool_result_persist` hook.

For the plugin hook semantics themselves, see Plugin hooks and Plugin guard behavior.
The harness is off by default. New configs should keep OpenAI model refs canonical as `openai/gpt-*` and opt in with `agentRuntime.id: "codex"` or `OPENCLAW_AGENT_RUNTIME=codex`; legacy `codex/*` refs are deprecated. If the `codex` plugin is enabled while model refs still use `openai-codex/*`, `openclaw doctor` flags the `openai-codex/*` refs. Use this table before changing config:
| Desired behavior | Model ref | Runtime config | Plugin requirement | Expected status label |
|---|---|---|---|---|
| OpenAI API through normal OpenClaw runner | `openai/gpt-*` | omitted or `runtime: "pi"` | OpenAI provider | `Runtime: OpenClaw Pi Default` |
| Codex OAuth/subscription through PI | `openai-codex/gpt-*` | omitted or `runtime: "pi"` | OpenAI Codex OAuth provider | `Runtime: OpenClaw Pi Default` |
| Native Codex app-server embedded turns | `openai/gpt-*` | `agentRuntime.id: "codex"` | `codex` | `Runtime: OpenAI Codex` |
| Mixed providers with conservative auto mode | provider-specific refs | `agentRuntime.id: "auto"` | Optional plugin runtimes | Depends on selected runtime |
| Explicit Codex ACP adapter session | ACP prompt/model dependent | `sessions_spawn` with `runtime: "acp"` | healthy `acpx` | ACP task/session status |
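As a concrete illustration, the conservative auto-mode row corresponds to a defaults block along these lines (a minimal sketch that uses only fields documented elsewhere on this page):

```json5
{
  agents: {
    defaults: {
      // "auto" picks a runtime per model; "pi" is the conservative fallback
      agentRuntime: {
        id: "auto",
        fallback: "pi",
      },
    },
  },
}
```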
The important split is provider versus runtime:
`openai-codex/*` is a provider choice; `agentRuntime.id: "codex"` and the `/codex ...` commands are runtime choices. OpenAI-family routes are prefix-specific: use `openai-codex/*` for Codex OAuth auth and `openai/*` for the OpenAI provider.

| Model ref | Runtime path | Use when |
|---|---|---|
| `openai/gpt-5.4` | OpenAI provider through OpenClaw/PI plumbing | You want current direct OpenAI Platform API access with `OPENAI_API_KEY`. |
| `openai-codex/gpt-5.5` | OpenAI Codex OAuth through OpenClaw/PI | You want ChatGPT/Codex subscription auth with the default PI runner. |
| `openai/gpt-5.5` + `agentRuntime.id: "codex"` | Codex app-server harness | You want native Codex app-server execution for the embedded agent turn. |
GPT-5.5 is currently subscription/OAuth-only in OpenClaw. Use `openai-codex/gpt-5.5` for the default PI runner, or `openai/gpt-5.5` with the Codex harness; plain API-key access to `openai/gpt-5.5` is not available. Legacy `codex/gpt-*` refs should migrate to `openai-codex/gpt-*` for PI, or to `openai/gpt-*` plus `agentRuntime.id: "codex"` for the harness; for `agents.defaults.imageModel`, use `openai-codex/gpt-*` rather than `codex/gpt-*`.

Use `/status` to see the `agents/harness` line with the agent harness selected, including `auto` resolution. `openclaw doctor` warns when the `codex` plugin is enabled but models still use `openai-codex/*` without the `codex` runtime. That warning exists because users often expect "Codex plugin enabled" to imply "native Codex app-server runtime." OpenClaw does not make that leap. The warning means:
the model ref still routes as `openai/<model>` on the current harness until you set `agentRuntime.id: "codex"` and start a fresh session with `/new` or `/reset`.

Harness selection is not a live session control. When an embedded turn runs, OpenClaw records the selected harness id on that session and keeps using it for later turns in the same session id. Change `agentRuntime` or `OPENCLAW_AGENT_RUNTIME`, then start a new session with `/new` or `/reset`. Legacy sessions created before harness pins are treated as PI-pinned once they have transcript history. Use `/new` or `/reset` there too, and confirm with `/status` that the label moves from `Runtime: OpenClaw Pi Default` to `Runtime: OpenAI Codex`.

The `codex` harness requires Codex app-server `0.125.0` or newer. The `codex` binary must be resolvable on `PATH` or via the managed runtime, and `HOME` must be set so `~/.codex` and `$HOME/.agents/skills` resolve. The plugin blocks older or unversioned app-server handshakes. That keeps OpenClaw on the protocol surface it has been tested against.
For live and Docker smoke tests, auth usually comes from the Codex CLI account or an OpenClaw `openai-codex` auth profile, with `CODEX_API_KEY` or `OPENAI_API_KEY` as fallbacks.

Do not set `agentRuntime.id: "codex"` as a global default when only some agents should use the harness. Use one of these shapes instead: a per-agent `agentRuntime.id: "codex"`, or `agentRuntime.id: "auto"` with canonical `openai/*` refs rather than legacy `codex/*` refs. For example, this keeps the default agent on normal automatic selection and adds a separate Codex agent:
```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
      },
    },
  },
  agents: {
    defaults: {
      agentRuntime: {
        id: "auto",
        fallback: "pi",
      },
    },
    list: [
      {
        id: "main",
        default: true,
        model: "anthropic/claude-opus-4-6",
      },
      {
        id: "codex",
        name: "Codex",
        model: "openai/gpt-5.5",
        agentRuntime: {
          id: "codex",
        },
      },
    ],
  },
}
```
With this shape:
the `main` agent keeps automatic runtime selection, and the `codex` agent always runs through the `codex` harness.

Agents should route user requests by intent, not by the word "Codex" alone:
| User asks for... | Agent should use... |
|---|---|
| "Bind this chat to Codex" | `/codex bind` |
| "Resume Codex thread `<id>`" | `/codex resume <id>` |
| "Show Codex threads" | `/codex threads` |
| "File a support report for a bad Codex run" | `/diagnostics [note]` |
| "Only send Codex feedback for this attached thread" | `/codex diagnostics [note]` |
| "Use Codex as the runtime for this agent" | config change to `agentRuntime.id` |
| "Use my ChatGPT/Codex subscription with normal OpenClaw" | `openai-codex/*` |
| "Run Codex through ACP/acpx" | ACP `sessions_spawn({ runtime: "acp", ... })` |
| "Start Claude Code/Gemini/OpenCode/Cursor in a thread" | ACP/acpx, not `/codex` |
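Where ACP routing applies, the spawn call from the table takes roughly this shape. This is a sketch only: `runtime: "acp"` is the documented field, and everything else depends on the ACP backend in use.

```
// hypothetical sketch; only runtime: "acp" is documented here
sessions_spawn({
  runtime: "acp",
  // prompt/model parameters are backend-specific
})
```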
OpenClaw only advertises ACP spawn guidance to agents when ACP is enabled, dispatchable, and backed by a loaded runtime backend. If ACP is not available, the system prompt and plugin skills should not teach the agent about ACP routing.
Force the Codex harness when you need to prove that every embedded agent turn uses Codex. Explicit plugin runtimes default to no PI fallback, so `fallback: "none"` below is explicit but matches that default:

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
        fallback: "none",
      },
    },
  },
}
```
Environment override:
```bash
OPENCLAW_AGENT_RUNTIME=codex openclaw gateway run
```
With Codex forced, OpenClaw fails early if the Codex plugin is disabled, the app-server is too old, or the app-server cannot start. Set `OPENCLAW_AGENT_HARNESS_FALLBACK=pi` to allow falling back to PI instead.

You can make one agent Codex-only while the default agent keeps normal auto-selection:
```json5
{
  agents: {
    defaults: {
      agentRuntime: {
        id: "auto",
        fallback: "pi",
      },
    },
    list: [
      {
        id: "main",
        default: true,
        model: "anthropic/claude-opus-4-6",
      },
      {
        id: "codex",
        name: "Codex",
        model: "openai/gpt-5.5",
        agentRuntime: {
          id: "codex",
          fallback: "none",
        },
      },
    ],
  },
}
```
Use normal session commands to switch agents and models.
Start a fresh session with `/new` or `/reset` after switching. By default, the Codex plugin asks the app-server for available models. If discovery fails or times out, it uses a bundled fallback catalog of known Codex models.
You can tune discovery under `plugins.entries.codex.config.discovery`:

```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          discovery: {
            enabled: true,
            timeoutMs: 2500,
          },
        },
      },
    },
  },
}
```
Disable discovery when you want startup to avoid probing Codex and stick to the fallback catalog:
```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          discovery: {
            enabled: false,
          },
        },
      },
    },
  },
}
```
By default, the plugin starts OpenClaw's managed Codex binary locally with:
```bash
codex app-server --listen stdio://
```
The managed binary is declared as a bundled plugin runtime dependency and staged with the rest of the `codex` plugin; set `appServer.command` only when you need to override it.

By default, OpenClaw starts local Codex harness sessions in YOLO mode:
`approvalPolicy: "never"`, `approvalsReviewer: "user"`, and `sandbox: "danger-full-access"`. To opt in to Codex guardian-reviewed approvals, set `appServer.mode: "guardian"`:

```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          appServer: {
            mode: "guardian",
            serviceTier: "fast",
          },
        },
      },
    },
  },
}
```
Guardian mode uses Codex's native auto-review approval path. When Codex asks to leave the sandbox, write outside the workspace, or add permissions like network access, Codex routes that approval request to the native reviewer instead of a human prompt. The reviewer applies Codex's risk framework and approves or denies the specific request. Use Guardian when you want more guardrails than YOLO mode but still need unattended agents to make progress.
The `guardian` preset maps to `approvalPolicy: "on-request"`, `approvalsReviewer: "auto_review"`, and `sandbox: "workspace-write"`. With this `mode`, a `guardian_subagent` serves as the `auto_review` reviewer.

For an already-running app-server, use WebSocket transport:
```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          appServer: {
            transport: "websocket",
            url: "ws://127.0.0.1:39175",
            authToken: "${CODEX_APP_SERVER_TOKEN}",
            requestTimeoutMs: 60000,
          },
        },
      },
    },
  },
}
```
Stdio app-server launches inherit OpenClaw's process environment by default, but OpenClaw owns the Codex app-server account bridge and sets both `CODEX_HOME` and `HOME`, so Codex resolves skills from `$CODEX_HOME/skills` and `$HOME/.agents/skills`. OpenClaw plugins and OpenClaw skill snapshots still flow through OpenClaw's own plugin registry and skill loader. Personal Codex CLI assets do not. If you have useful Codex CLI skills or plugins that should become part of an OpenClaw agent, inventory them explicitly:
```bash
openclaw migrate codex --dry-run
openclaw migrate apply codex --yes
```
The Codex migration provider copies skills into the current OpenClaw agent workspace. Codex native plugins, hooks, and config files are reported or archived for manual review instead of being activated automatically, because they can execute commands, expose MCP servers, or carry credentials.
Auth is selected in this order: the Codex CLI/subscription auth profile, then `CODEX_API_KEY`, then `OPENAI_API_KEY`. When OpenClaw sees a ChatGPT subscription-style Codex auth profile, it removes `CODEX_API_KEY` and `OPENAI_API_KEY` from the spawned app-server environment so subscription auth wins. If a deployment needs additional environment isolation, add those variables to `appServer.clearEnv`:

```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          appServer: {
            clearEnv: ["CODEX_API_KEY", "OPENAI_API_KEY"],
          },
        },
      },
    },
  },
}
```
`appServer.clearEnv` is applied after OpenClaw builds the inherited environment. Supported `appServer` fields:

| Field | Default | Meaning |
|---|---|---|
| `transport` | `"stdio"` | `"stdio"` or `"websocket"`; the `"websocket"` transport requires `url`. |
| `command` | managed Codex binary | Executable for stdio transport. Leave unset to use the managed binary; set it only for an explicit override. |
| `args` | `["app-server", "--listen", "stdio://"]` | Arguments for stdio transport. |
| `url` | unset | WebSocket app-server URL. |
| `authToken` | unset | Bearer token for WebSocket transport. |
| `headers` | `{}` | Extra WebSocket headers. |
| `clearEnv` | `[]` | Extra environment variable names removed from the spawned stdio app-server process after OpenClaw builds its inherited environment; `CODEX_HOME` and `HOME` stay managed by OpenClaw. |
| `requestTimeoutMs` | `60000` | Timeout for app-server control-plane calls. |
| `mode` | `"yolo"` | Preset for YOLO or guardian-reviewed execution. |
| `approvalPolicy` | `"never"` | Native Codex approval policy sent to thread start/resume/turn. |
| `sandbox` | `"danger-full-access"` | Native Codex sandbox mode sent to thread start/resume. |
| `approvalsReviewer` | `"user"` | Use `"auto_review"` to route approvals to the `guardian_subagent` reviewer. |
| `serviceTier` | unset | Optional Codex app-server service tier: `"fast"`, `"flex"`, or `null`. |
OpenClaw-owned dynamic tool calls are bounded independently from `appServer.requestTimeoutMs`; each `item/tool/call` request gets its own `processing` bound. After OpenClaw responds to a Codex turn-scoped app-server request, the harness also expects Codex to finish the native turn with `turn/completed`.

Environment overrides remain available for local testing:
- `OPENCLAW_CODEX_APP_SERVER_BIN`
- `OPENCLAW_CODEX_APP_SERVER_ARGS`
- `OPENCLAW_CODEX_APP_SERVER_MODE=yolo|guardian`
- `OPENCLAW_CODEX_APP_SERVER_APPROVAL_POLICY`
- `OPENCLAW_CODEX_APP_SERVER_SANDBOX`

`OPENCLAW_CODEX_APP_SERVER_BIN` overrides `appServer.command`, and `OPENCLAW_CODEX_APP_SERVER_GUARDIAN=1` is equivalent to `plugins.entries.codex.config.appServer.mode: "guardian"` (or `OPENCLAW_CODEX_APP_SERVER_MODE=guardian`).

Computer Use is covered in its own setup guide: Codex Computer Use.
The short version: OpenClaw does not vendor the desktop-control app or execute desktop actions itself. It prepares Codex app-server and verifies that the `computer-use` integration is present.

For direct TryCua driver access outside the Codex marketplace flow, register the `cua-driver mcp` server yourself: `openclaw mcp set cua-driver '{"command":"cua-driver","args":["mcp"]}'`.

Minimal config:
```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          computerUse: {
            autoInstall: true,
          },
        },
      },
    },
  },
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
        fallback: "none",
      },
    },
  },
}
```
The setup can be checked or installed from the command surface:
- `/codex computer-use status`
- `/codex computer-use install`
- `/codex computer-use install --source <marketplace-source>`
- `/codex computer-use install --marketplace-path <path>`

Computer Use is macOS-specific and may require local OS permissions before the Codex MCP server can control apps. If `computerUse.enabled` is off, the Computer Use path stays inactive.

When `computerUse.autoInstall` is set, OpenClaw installs the integration from `/Applications/Codex.app/Contents/Resources/plugins/openai-bundled`; start a fresh session with `/new` or `/reset` afterward.

Local Codex with default stdio transport:
```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
      },
    },
  },
}
```
Codex-only harness validation:
```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
      },
    },
  },
  plugins: {
    entries: {
      codex: {
        enabled: true,
      },
    },
  },
}
```
Guardian-reviewed Codex approvals:
```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          appServer: {
            mode: "guardian",
            approvalPolicy: "on-request",
            approvalsReviewer: "auto_review",
            sandbox: "workspace-write",
          },
        },
      },
    },
  },
}
```
Remote app-server with explicit headers:
```json5
{
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          appServer: {
            transport: "websocket",
            url: "ws://gateway-host:39175",
            headers: {
              "X-OpenClaw-Agent": "main",
            },
          },
        },
      },
    },
  },
}
```
Model switching stays OpenClaw-controlled. When an OpenClaw session is attached to an existing Codex thread, the next turn sends the currently selected OpenAI model, provider, approval policy, sandbox, and service tier to app-server again. Switching from `openai/gpt-5.5` to `openai/gpt-5.2` therefore takes effect on the next turn of the same thread.

The bundled plugin registers `/codex`.
/codexCommon forms:
- `/codex status`
- `/codex models`
- `/codex threads [filter]`
- `/codex resume <thread-id>`
- `/codex compact`
- `/codex review`
- `/codex diagnostics [note]`
- `/codex computer-use status`
- `/codex computer-use install`
- `/codex account`
- `/codex mcp`
- `/codex skills`

When a Codex-backed agent does something surprising in Telegram, Discord, Slack, or another channel, start with the conversation where the problem happened:
Run `/diagnostics bad tool choice after image upload` there. The report includes an "Inspect locally" section, and that "Inspect locally" section carries the matching `codex resume <thread-id>` command. Use `/codex diagnostics [note]` when you only want Codex-scoped feedback, and `/diagnostics [note]` for the full report. Core OpenClaw also exposes owner-only `/diagnostics [note]` in chat and `openclaw gateway diagnostics export --json` on the CLI. If the `/diagnostics` upload is approved, the approved Codex upload calls Codex app-server `feedback/upload`, and the `codex resume <thread-id>` pointer (or `/codex resume`) lets you reopen the thread.

The fastest way to understand a bad Codex run is often to open the native Codex thread directly:
```sh
codex resume <thread-id>
```
Use this when you notice a bug in a channel conversation and want to inspect the problematic Codex session, continue it locally, or ask Codex why it made a particular tool or reasoning choice. The easiest path is usually to run `/diagnostics [note]` in that conversation and copy the `codex resume <thread-id>` command from its "Inspect locally" section. You can also get a thread id from `/codex binding` or `/codex threads [filter]` and pass it to `codex resume`.

The command surface requires Codex app-server `0.125.0` or newer; older servers report subcommands as `unsupported by this Codex app-server`.

The Codex harness has three hook layers:
| Layer | Owner | Purpose |
|---|---|---|
| OpenClaw plugin hooks | OpenClaw | Product/plugin compatibility across PI and Codex harnesses. |
| Codex app-server extension middleware | OpenClaw bundled plugins | Per-turn adapter behavior around OpenClaw dynamic tools. |
| Codex native hooks | Codex | Low-level Codex lifecycle and native tool policy from Codex config. |
OpenClaw does not use project or global Codex `hooks.json` files; native events such as `PreToolUse`, `PostToolUse`, `PermissionRequest`, `Stop`, `SessionStart`, and `UserPromptSubmit` are only observed through the native hook relay.

For OpenClaw dynamic tools, OpenClaw executes the tool after Codex asks for the call, so OpenClaw fires the plugin and middleware behavior it owns in the harness adapter. For Codex-native tools, Codex owns the canonical tool record. OpenClaw can mirror selected events, but it cannot rewrite the native Codex thread unless Codex exposes that operation through app-server or native hook callbacks.
Compaction and LLM lifecycle projections come from Codex app-server notifications and OpenClaw adapter state, not native Codex hook commands. OpenClaw's `before_compaction`, `after_compaction`, `llm_input`, and `llm_output` hooks are projections of that state. Codex native `hook/started` and `hook/completed` events surface through the `codex_app_server.hook` relay.

Codex mode is not PI with a different model call underneath. Codex owns more of the native model loop, and OpenClaw adapts its plugin and session surfaces around that boundary.
Supported in Codex runtime v1:
| Surface | Support | Why |
|---|---|---|
| OpenAI model loop through Codex | Supported | Codex app-server owns the OpenAI turn, native thread resume, and native tool continuation. |
| OpenClaw channel routing and delivery | Supported | Telegram, Discord, Slack, WhatsApp, iMessage, and other channels stay outside the model runtime. |
| OpenClaw dynamic tools | Supported | Codex asks OpenClaw to execute these tools, so OpenClaw stays in the execution path. |
| Prompt and context plugins | Supported | OpenClaw builds prompt overlays and projects context into the Codex turn before starting or resuming the thread. |
| Context engine lifecycle | Supported | Assemble, ingest or after-turn maintenance, and context-engine compaction coordination run for Codex turns. |
| Dynamic tool hooks | Supported | `before_tool_call` and `after_tool_call` run for OpenClaw-executed dynamic tools. |
| Lifecycle hooks | Supported as adapter observations | `llm_input`, `llm_output`, `agent_end`, `before_compaction`, and `after_compaction` fire as projections of app-server events. |
| Final-answer revision gate | Supported through the native hook relay | Codex `Stop` maps to `before_agent_finalize`, which can request a `revise` pass. |
| Native shell, patch, and MCP block or observe | Supported through the native hook relay | Codex `PreToolUse` and `PostToolUse` relays require app-server `0.125.0` or newer. |
| Native permission policy | Supported through the native hook relay | Codex `PermissionRequest` events are relayed to OpenClaw policy. |
| App-server trajectory capture | Supported | OpenClaw records the request it sent to app-server and the app-server notifications it receives. |
Not supported in Codex runtime v1:
| Surface | V1 boundary | Future path |
|---|---|---|
| Native tool argument mutation | Codex native pre-tool hooks can block, but OpenClaw does not rewrite Codex-native tool arguments. | Requires Codex hook/schema support for replacement tool input. |
| Editable Codex-native transcript history | Codex owns canonical native thread history. OpenClaw owns a mirror and can project future context, but should not mutate unsupported internals. | Add explicit Codex app-server APIs if native thread surgery is needed. |
| `tool_result_persist` on Codex-native records | That hook transforms OpenClaw-owned transcript writes, not Codex-native tool records. | Could mirror transformed records, but canonical rewrite needs Codex support. |
| Rich native compaction metadata | OpenClaw observes compaction start and completion, but does not receive a stable kept/dropped list, token delta, or summary payload. | Needs richer Codex compaction events. |
| Compaction intervention | Current OpenClaw compaction hooks are notification-level in Codex mode. | Add Codex pre/post compaction hooks if plugins need to veto or rewrite native compaction. |
| Byte-for-byte model API request capture | OpenClaw can capture app-server requests and notifications, but Codex core builds the final OpenAI API request internally. | Needs a Codex model-request tracing event or debug API. |
The Codex harness changes the low-level embedded agent executor only.
OpenClaw still builds the tool list and receives dynamic tool results from the harness. Text, images, video, music, TTS, approvals, and messaging-tool output continue through the normal OpenClaw delivery path.
The native hook relay is intentionally generic, but the v1 support contract is limited to the Codex-native tool and permission paths that OpenClaw tests. In the Codex runtime, that includes shell, patch, and MCP `PreToolUse`, `PostToolUse`, and `PermissionRequest` events. For `PermissionRequest`, Codex MCP tool approval elicitations are routed through OpenClaw's plugin approval flow when Codex marks `_meta.codex_approval_kind` as `"mcp_tool_call"` on the `request_user_input` call.

Active-run queue steering maps onto Codex app-server `turn/steer`. With `messages.queue.mode: "steer"`, queued messages reach the active turn through `turn/steer`; in `queue` mode they wait for the next turn instead of calling `turn/steer`.

When the selected model uses the Codex harness, native thread compaction is delegated to Codex app-server. OpenClaw keeps a transcript mirror for channel history and search.
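The steering opt-in described above can be sketched as a config fragment (minimal; only `messages.queue.mode` is shown, and any other queue settings are omitted):

```json5
{
  messages: {
    queue: {
      // "steer" injects queued messages into the active Codex turn via turn/steer
      mode: "steer",
    },
  },
}
```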
The mirror also backs `/new` and `/reset` semantics. Because Codex owns the canonical native thread, `tool_result_persist` does not rewrite Codex-native records.

Media generation does not require PI. Image, video, music, PDF, TTS, and media understanding continue to use the matching provider/model settings such as `agents.defaults.imageGenerationModel`, `videoGenerationModel`, `pdfModel`, and `messages.tts`.

Codex does not appear as a normal `/model` entry. Select an `openai/gpt-*` model and set `agentRuntime.id: "codex"` rather than using legacy `codex/*` refs. If `plugins.entries.codex.enabled` is off, or `plugins.allow` does not include `codex`, OpenClaw uses PI instead of Codex:
check whether `agentRuntime.id: "auto"` resolved to another runtime, whether `agentRuntime.id: "codex"` is actually set, and whether `agentRuntime.fallback: "pi"` fell back silently.

The app-server is rejected: upgrade Codex so the app-server handshake reports version `0.125.0` or newer. Pre-release or build-suffixed versions such as `0.125.0-alpha.2` or `0.125.0+custom` may not satisfy the `0.125.0` gate.

Model discovery is slow: lower `plugins.entries.codex.config.discovery.timeoutMs` or disable discovery.

WebSocket transport fails immediately: check `appServer.url` and `authToken`.

A non-Codex model uses PI: that is expected unless you forced `agentRuntime.id: "codex"`. Legacy `codex/*` refs should become `openai/gpt-*`; `auto` alone does not imply `agentRuntime.id: "codex"`.

Computer Use is installed but tools do not run: check `/codex computer-use status`. If it reports `Native hook relay unavailable`, start a fresh session with `/new` or `/reset` and try `computer-use.list_apps` again.