Technical reference for the OpenClaw framework.
An agent harness is the low-level executor for one prepared OpenClaw agent turn. It is not a model provider, not a channel, and not a tool registry. For the user-facing mental model, see Agent runtimes.
Use this surface only for bundled or trusted native plugins. The contract is still experimental because the parameter types intentionally mirror the current embedded runner.
Register an agent harness when a model family has its own native session runtime and the normal OpenClaw provider transport is the wrong abstraction.
Examples:
Do not register a harness just to add a new LLM API. For normal HTTP or WebSocket model APIs, build a provider plugin.
Before a harness is selected, OpenClaw has already resolved:
That split is intentional. A harness runs a prepared attempt; it does not pick providers, replace channel delivery, or silently switch models.
The prepared attempt also includes `params.runtimePlan`, which exposes the host's per-attempt policy helpers:

- `runtimePlan.tools.normalize(...)`
- `runtimePlan.tools.logDiagnostics(...)`
- `runtimePlan.transcript.resolvePolicy(...)`
- `runtimePlan.delivery.isSilentPayload(...)` (the `NO_REPLY` sentinel)
- `runtimePlan.outcome.classifyRunResult(...)`
- `runtimePlan.observability`

Harnesses may use the plan for decisions that need to match PI behavior, but should still treat it as host-owned attempt state. Do not mutate it or use it to switch providers/models inside a turn.
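As an illustration, a harness might consult the plan rather than re-implement delivery policy itself. The types below are stand-ins for this sketch, not the real SDK shapes:

```typescript
// Hypothetical shapes mirroring the helpers listed above; the concrete
// types are assumptions, not the real plugin-sdk definitions.
type RuntimePlanLike = {
  delivery: { isSilentPayload(text: string): boolean };
  outcome: { classifyRunResult(text: string): "reply" | "silent" };
};

// A minimal stand-in plan that treats the NO_REPLY sentinel as silent.
const planStub: RuntimePlanLike = {
  delivery: { isSilentPayload: (text) => text.trim() === "NO_REPLY" },
  outcome: {
    classifyRunResult: (text) =>
      text.trim() === "NO_REPLY" ? "silent" : "reply",
  },
};

// Consult the host-owned plan so mirroring matches PI-backed runs.
// The harness reads the plan but never mutates it.
function mirrorIfVisible(plan: RuntimePlanLike, payload: string): string | null {
  if (plan.delivery.isSilentPayload(payload)) return null; // nothing to mirror
  return payload;
}
```

The point of the sketch: the silence policy lives in the plan, so every harness agrees with the host about what is user-visible.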
Import from `openclaw/plugin-sdk/agent-harness`:

```typescript
import type { AgentHarness } from "openclaw/plugin-sdk/agent-harness";
import { definePluginEntry } from "openclaw/plugin-sdk/plugin-entry";

const myHarness: AgentHarness = {
  id: "my-harness",
  label: "My native agent harness",
  supports(ctx) {
    return ctx.provider === "my-provider"
      ? { supported: true, priority: 100 }
      : { supported: false };
  },
  async runAttempt(params) {
    // Start or resume your native thread.
    // Use params.prompt, params.tools, params.images, params.onPartialReply,
    // params.onAgentEvent, and the other prepared attempt fields.
    return await runMyNativeTurn(params);
  },
};

export default definePluginEntry({
  id: "my-native-agent",
  name: "My Native Agent",
  description: "Runs selected models through a native agent daemon.",
  register(api) {
    api.registerAgentHarness(myHarness);
  },
});
```
OpenClaw chooses a harness after provider/model resolution. The choice can be forced per process with `OPENCLAW_AGENT_RUNTIME=<id>`: `OPENCLAW_AGENT_RUNTIME=pi` pins the embedded PI runtime, while `OPENCLAW_AGENT_RUNTIME=auto` lets registered harnesses claim the attempt. Plugin harness failures surface as run failures.

In `auto`, the selected harness id is persisted with the session id after an embedded run. Legacy sessions created before harness pins are treated as PI-pinned once they have transcript history. Use a new or reset session when changing between PI and a native plugin harness.
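The selection order described above can be sketched as follows; the resolver shape and priority rule are assumptions for illustration, not the real implementation:

```typescript
// Illustrative model of harness selection: env override, then session pin,
// then the highest-priority supporting harness, then the embedded PI runtime.
type SupportResult = { supported: boolean; priority?: number };
type HarnessLike = {
  id: string;
  supports(ctx: { provider: string }): SupportResult;
};

function selectHarness(
  harnesses: HarnessLike[],
  ctx: { provider: string },
  opts: { envOverride?: string; sessionPin?: string },
): string {
  // 1. OPENCLAW_AGENT_RUNTIME=<id> wins outright (unless it is "auto").
  if (opts.envOverride && opts.envOverride !== "auto") return opts.envOverride;
  // 2. A harness id pinned to the session is reused on resume.
  if (opts.sessionPin) return opts.sessionPin;
  // 3. In auto, the highest-priority supporting harness claims the attempt.
  let best: { id: string; priority: number } | null = null;
  for (const h of harnesses) {
    const res = h.supports(ctx);
    if (!res.supported) continue;
    const priority = res.priority ?? 0;
    if (!best || priority > best.priority) best = { id: h.id, priority };
  }
  // 4. Nothing claimed it: fall back to the embedded PI runtime.
  return best?.id ?? "pi";
}

// Sample harness matching the registration example on this page.
const sampleHarness: HarnessLike = {
  id: "my-harness",
  supports: (ctx) =>
    ctx.provider === "my-provider"
      ? { supported: true, priority: 100 }
      : { supported: false },
};
```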
In `auto`, the `agents/harness` log scope records `agent harness selected`, and `/status` shows which harness is active (for example `codex`).

The bundled Codex plugin registers the `codex` harness. Most harnesses should also register a provider: the provider makes model refs, auth status, model metadata, and `/model` work, while the harness claims those models through `supports(...)`. The bundled Codex plugin follows this pattern: a model such as `openai/gpt-5.5` runs through Codex when the agent config sets `agentRuntime.id: "codex"`, and `codex/gpt-*` refs always resolve to the `codex` harness.

The Codex plugin is additive. Plain `openai/gpt-*` refs keep their default runtime unless `agentRuntime.id: "codex"` is set; `codex/gpt-*` refs always use Codex. For operator setup, model prefix examples, and Codex-only configs, see Codex Harness.
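The additive routing described above could be sketched like this; the decision table is an assumption reconstructed from this page, not the real resolver:

```typescript
// Hypothetical sketch of additive model-ref routing. The real resolver lives
// in OpenClaw core; this only illustrates the opt-in behavior described above.
function resolveRuntime(modelRef: string, agentRuntimeId?: string): string {
  const [prefix] = modelRef.split("/");
  // codex/* and openai-codex/* refs always select the Codex harness.
  if (prefix === "codex" || prefix === "openai-codex") return "codex";
  // Plain refs (e.g. openai/*) stay on PI unless the agent opts in explicitly.
  if (agentRuntimeId === "codex") return "codex";
  return "pi";
}
```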
OpenClaw requires Codex app-server `0.125.0`; earlier releases such as `0.124.0` are not supported.

Bundled plugins can attach runtime-neutral tool-result middleware through `api.registerAgentToolResultMiddleware(...)`; the contract is defined at `contracts.agentToolResultMiddleware`. Legacy bundled plugins can still use `api.registerCodexAppServerExtensionFactory(...)` and `api.registerEmbeddedExtensionFactory(...)`.

Native harnesses that own their own protocol projection can use `classifyAgentHarnessTerminalOutcome(...)` from `openclaw/plugin-sdk/agent-harness-runtime` to classify terminal outcomes (`empty`, `reasoning-only`, `planning-only`) into the `NO_REPLY` behavior the host expects.
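A minimal sketch of terminal-outcome classification, using the category names above; the rules below are illustrative assumptions, not the real `classifyAgentHarnessTerminalOutcome(...)` logic:

```typescript
// Assumed categories from this page; the classification rules are a sketch.
type TerminalOutcome = "reply" | "empty" | "reasoning-only" | "planning-only";

function classifyTerminalOutcome(turn: {
  text: string;
  reasoning?: string;
  plan?: string;
}): TerminalOutcome {
  // Strip the NO_REPLY sentinel: only remaining text counts as a reply.
  const visible = turn.text.replace("NO_REPLY", "").trim();
  if (visible.length > 0) return "reply";
  // No visible text: decide which non-reply bucket the turn falls into.
  if (turn.plan && !turn.reasoning) return "planning-only";
  if (turn.reasoning) return "reasoning-only";
  return "empty";
}
```

A harness can use such a mapping to stay silent on the channel exactly when a PI-backed run would.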
The bundled `codex` plugin must be enabled via `plugins.allow`. With the plugin active, `openai/gpt-*` models route through Codex when the agent sets `agentRuntime.id: "codex"`, and `openai-codex/*` or `codex/*` refs select it directly. When this mode runs, Codex owns the native thread id, resume behavior, compaction, and app-server execution. OpenClaw still owns the chat channel, visible transcript mirror, tool policy, approvals, media delivery, and session selection.

Use `agentRuntime.id: "codex"` together with `fallback` (for example `fallback: "pi"`) to control what happens when the harness cannot run. By default, OpenClaw runs embedded agents with `agents.defaults.agentRuntime` set to `{ id: "auto", fallback: "pi" }`. In `auto`, set `fallback: "none"` to fail instead of silently falling back; `runtime: "codex"` with `fallback: "pi"` retries on PI, and `runtime: "pi"` (or `OPENCLAW_AGENT_RUNTIME=pi`) forces the embedded runtime. For Codex-only embedded runs:
```json
{
  "agents": {
    "defaults": {
      "model": "openai/gpt-5.5",
      "agentRuntime": { "id": "codex" }
    }
  }
}
```
If you want any registered plugin harness to claim matching models but never want OpenClaw to silently fall back to PI, keep `runtime: "auto"` and disable the fallback:

```json
{
  "agents": {
    "defaults": {
      "agentRuntime": { "id": "auto", "fallback": "none" }
    }
  }
}
```
Per-agent overrides use the same shape:
```json
{
  "agents": {
    "defaults": {
      "agentRuntime": { "id": "auto", "fallback": "pi" }
    },
    "list": [
      {
        "id": "codex-only",
        "model": "openai/gpt-5.5",
        "agentRuntime": { "id": "codex", "fallback": "none" }
      }
    ]
  }
}
```
The equivalent environment variables are `OPENCLAW_AGENT_RUNTIME` and `OPENCLAW_AGENT_HARNESS_FALLBACK=none`:

```bash
OPENCLAW_AGENT_RUNTIME=codex \
OPENCLAW_AGENT_HARNESS_FALLBACK=none \
openclaw gateway run
```
With fallback disabled, a session fails early when the requested harness is not registered, does not support the resolved provider/model, or fails before producing turn side effects. That is intentional for Codex-only deployments and for live tests that must prove the Codex app-server path is actually in use.
This setting only controls the embedded agent harness. It does not disable image, video, music, TTS, PDF, or other provider-specific model routing.
A harness may keep a native session id, thread id, or daemon-side resume token. Keep that binding explicitly associated with the OpenClaw session, and keep mirroring user-visible assistant/tool output into the OpenClaw transcript.
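One way to keep that binding explicit is a small store keyed by the OpenClaw session id; the shape below is hypothetical, not an SDK type:

```typescript
// Hypothetical sidecar binding store: OpenClaw session id -> native thread id.
// The key point is that the native binding never outlives its session.
class SessionBindings {
  private bindings = new Map<string, string>();

  bind(sessionId: string, nativeThreadId: string): void {
    this.bindings.set(sessionId, nativeThreadId);
  }

  // Returns the daemon-side thread to resume, if one is bound.
  resume(sessionId: string): string | undefined {
    return this.bindings.get(sessionId);
  }

  // Called from the harness reset hook so session resets also drop the
  // daemon-side thread instead of silently resuming it later.
  reset(sessionId: string): void {
    this.bindings.delete(sessionId);
  }
}
```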
The OpenClaw transcript remains the compatibility layer for session commands such as `/new` and `/reset`. If your harness stores a sidecar binding, implement `reset(...)` so those commands also clear the native binding.

Core constructs the OpenClaw tool list and passes it into the prepared attempt. When a harness executes a dynamic tool call, return the tool result back through the harness result shape instead of sending channel media yourself.
This keeps text, image, video, music, TTS, approval, and messaging-tool outputs on the same delivery path as PI-backed runs.
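As a sketch, a harness might package tool output like this; `HarnessToolResult` and its fields are assumed names for illustration, not the real result shape:

```typescript
// Hypothetical harness result fragment for one dynamic tool call. Media rides
// back in the result so core delivers it on the same path as PI-backed runs,
// rather than the harness pushing channel media itself.
type HarnessToolResult = {
  toolCallId: string;
  content: Array<{ type: "text" | "image"; value: string }>;
};

function wrapToolOutput(
  toolCallId: string,
  text: string,
  mediaPath?: string,
): HarnessToolResult {
  const content: HarnessToolResult["content"] = [{ type: "text", value: text }];
  if (mediaPath) content.push({ type: "image", value: mediaPath });
  return { toolCallId, content };
}
```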