Technical reference for the OpenClaw framework.
An agent runtime is the component that owns one prepared model loop: it receives the prompt, drives model output, handles native tool calls, and returns the finished turn to OpenClaw.
Runtimes are easy to confuse with providers because both show up near model configuration. They are different layers:
| Layer | Examples | What it means |
|---|---|---|
| Provider | `openai`, `anthropic`, `openai-codex` | How OpenClaw authenticates, discovers models, and names model refs. |
| Model | `gpt-5.5`, `claude-opus-4-6` | The model selected for the agent turn. |
| Agent runtime | `pi`, `codex`, `claude-cli` | The low-level loop or backend that executes the prepared turn. |
| Channel | Telegram, Discord, Slack, WhatsApp | Where messages enter and leave OpenClaw. |
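The layers above can be read off a single config. As a hedged sketch, reusing the `agents.defaults` shape shown later on this page (the `channels` key and its shape are an assumption, not confirmed here):

```json5
{
  agents: {
    defaults: {
      // Provider + model: one ref names both layers ("openai" provider, "gpt-5.5" model).
      model: "openai/gpt-5.5",
      // Agent runtime: which loop executes the prepared turn.
      agentRuntime: { id: "codex" },
    },
  },
  // Channel layer: where messages enter and leave OpenClaw (shape assumed).
  channels: { telegram: { enabled: true } },
}
```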
You will also see the word harness in code. A harness is the implementation that provides an agent runtime. For example, the bundled Codex harness implements the `codex` runtime, selected through `agentRuntime.id`; if the runtime wiring looks wrong, `openclaw doctor --fix` is the place to start.

There are two runtime families:

- Embedded runtimes such as `pi` and `codex`, which run the prepared turn in-process.
- CLI-backed runtimes such as `claude-cli`, selected with `agentRuntime.id: "claude-cli"` (for example alongside `anthropic/claude-opus-4-7`).

Most confusion comes from several different surfaces sharing the Codex name:
| Surface | OpenClaw name/config | What it does |
|---|---|---|
| Codex OAuth provider route | `openai-codex/*` | Uses ChatGPT/Codex subscription OAuth through the normal OpenClaw PI runner. |
| Native Codex app-server runtime | `agentRuntime.id: "codex"` | Runs the embedded agent turn through the bundled Codex app-server harness. |
| Codex ACP adapter | `runtime: "acp"` with `agentId: "codex"` | Runs Codex through the external ACP/acpx control plane. Use only when ACP/acpx is explicitly requested. |
| Native Codex chat-control command set | `/codex ...` | Binds, resumes, steers, stops, and inspects Codex app-server threads from chat. |
| OpenAI Platform API route for GPT/Codex-style models | `openai/*` | Uses OpenAI API-key auth unless a runtime override, such as `runtime: "codex"`, is set. |
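For contrast with the native-runtime setup shown later, a minimal sketch of the OAuth provider route, which needs no runtime override (whether `gpt-5.5` is offered under the `openai-codex` prefix is an assumption):

```json5
{
  agents: {
    defaults: {
      // openai-codex/* refs use ChatGPT/Codex subscription OAuth
      // and run on the normal OpenClaw PI runner.
      model: "openai-codex/gpt-5.5",
    },
  },
}
```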
Those surfaces are intentionally independent. Enabling the `codex` runtime does not route auth through `openai-codex/*`, and picking an `openai/*` or `openai-codex/*` model ref does not by itself select a runtime.

The common Codex setup uses the `openai` provider with the `codex` runtime:

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
      },
    },
  },
}
```
That means OpenClaw selects an OpenAI model ref, then asks the Codex app-server runtime to run the embedded agent turn. It does not mean the channel, model provider catalog, or OpenClaw session store becomes Codex.
When the bundled `codex` runtime is active, Codex app-server threads are controlled from chat through the `/codex` command set: `/codex bind`, `/codex threads`, `/codex resume`, `/codex steer`, and `/codex stop`.

This is the agent-facing decision tree:

- Chat/thread control over a Codex app-server: the `/codex` commands, with the `codex` runtime active.
- Embedded Codex agent loop: `openai/<model>` with `agentRuntime.id: "codex"`.
- Codex subscription OAuth on the PI runner: `openai-codex/<model>`.
- External control plane: `runtime: "acp"` with `agentId: "codex"`.

| You mean... | Use... |
|---|---|
| Codex app-server chat/thread control | `/codex ...` with the `codex` runtime |
| Codex app-server embedded agent runtime | `agentRuntime.id: "codex"` |
| OpenAI Codex OAuth on the PI runner | `openai-codex/*` |
| Claude Code or other external harness | ACP/acpx |
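The ACP row could look roughly like the following in config; the exact nesting of `runtime` and `agentId` is assumed, not confirmed by this page:

```json5
{
  agents: {
    defaults: {
      // External ACP/acpx control plane driving Codex.
      // Use only when ACP/acpx is explicitly requested.
      runtime: "acp",
      agentId: "codex",
    },
  },
}
```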
For the OpenAI-family prefix split, see OpenAI and Model providers. For the Codex runtime support contract, see Codex harness.
Different runtimes own different amounts of the loop.
| Surface | OpenClaw PI embedded | Codex app-server |
|---|---|---|
| Model loop owner | OpenClaw through the PI embedded runner | Codex app-server |
| Canonical thread state | OpenClaw transcript | Codex thread, plus OpenClaw transcript mirror |
| OpenClaw dynamic tools | Native OpenClaw tool loop | Bridged through the Codex adapter |
| Native shell and file tools | PI/OpenClaw path | Codex-native tools, bridged through native hooks where supported |
| Context engine | Native OpenClaw context assembly | OpenClaw projects assembled context into the Codex turn |
| Compaction | OpenClaw or selected context engine | Codex-native compaction, with OpenClaw notifications and mirror maintenance |
| Channel delivery | OpenClaw | OpenClaw |
This ownership split is the main design rule.
OpenClaw chooses an embedded runtime after provider and model resolution:
- `OPENCLAW_AGENT_RUNTIME=<id>` (environment override)
- `agents.defaults.agentRuntime.id` (global default)
- `agents.list[].agentRuntime.id` (per-agent override)

The default id is `auto`, which resolves to `pi`, `codex`, or `claude-cli` from the model ref. Under `auto`, a runtime that cannot start behaves as if `fallback: "pi"` were set; use `fallback: "none"` to disable that. Explicit plugin runtimes fail closed by default.
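As a sketch of an explicit runtime with a PI fallback, assuming `fallback` sits beside `id` inside `agentRuntime` (this placement is an assumption, not confirmed here):

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
        // Assumption: fallback is a sibling of id; "none" fails closed.
        fallback: "pi",
      },
    },
  },
}
```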
An explicit `runtime: "codex"` with no fallback stops the turn when the Codex runtime is unavailable; adding `fallback: "pi"` reverts the turn to the embedded PI runner instead.

CLI backend aliases are different from embedded harness ids. The preferred Claude CLI form is:
```json5
{
  agents: {
    defaults: {
      model: "anthropic/claude-opus-4-7",
      agentRuntime: { id: "claude-cli" },
    },
  },
}
```
Legacy refs such as `claude-cli/claude-opus-4-7` treat the prefix as a backend alias rather than a provider; prefer an explicit `agentRuntime.id`. Note that `auto` does not promote `openai-codex` refs to the native runtime: under `auto`, `openai-codex/*` stays on the PI runner.

If in doubt, run `openclaw doctor`. To use the native `codex` runtime, do not reach for `openai-codex/*`; use `openai/<model>` with `agentRuntime.id: "codex"`.

When a runtime is not PI, it should document which OpenClaw surfaces it supports. Use this shape for runtime docs:
| Question | Why it matters |
|---|---|
| Who owns the model loop? | Determines where retries, tool continuation, and final answer decisions happen. |
| Who owns canonical thread history? | Determines whether OpenClaw can edit history or only mirror it. |
| Do OpenClaw dynamic tools work? | Messaging, sessions, cron, and OpenClaw-owned tools rely on this. |
| Do dynamic tool hooks work? | Plugins expect `before_tool_call` and `after_tool_call`. |
| Do native tool hooks work? | Shell, patch, and runtime-owned tools need native hook support for policy and observation. |
| Does the context engine lifecycle run? | Memory and context plugins depend on assemble, ingest, after-turn, and compaction lifecycle. |
| What compaction data is exposed? | Some plugins only need notifications, while others need kept/dropped metadata. |
| What is intentionally unsupported? | Users should not assume PI equivalence where the native runtime owns more state. |
The Codex runtime support contract is documented in Codex harness.
Status output may show both the model ref and the execution runtime, for example `openai/gpt-5.5` running on `codex`. If a session still shows PI after changing runtime config, start a new session with `/new` or `/reset`.