OpenAI provides developer APIs for GPT models, and Codex is also available as a ChatGPT-plan coding agent through OpenAI's Codex clients. OpenClaw keeps those surfaces separate so config stays predictable.
OpenClaw supports three OpenAI-family routes. The model prefix selects the provider/auth route; a separate runtime setting selects who executes the embedded agent loop:
- `openai/*`: direct OpenAI Platform API (API-key billing)
- `openai-codex/*`: ChatGPT/Codex subscription OAuth through the default PI runner
- `openai/*` with `agents.defaults.agentRuntime.id: "codex"`: native Codex app-server harness

OpenAI explicitly supports subscription OAuth usage in external tools and workflows like OpenClaw.
Provider, model, runtime, and channel are separate layers. If those labels are getting mixed together, read Agent runtimes before changing config.
| Goal | Use | Notes |
|---|---|---|
| Direct API-key billing | `openai/gpt-5.5` | Set `OPENAI_API_KEY` |
| GPT-5.5 with ChatGPT/Codex subscription auth | `openai-codex/gpt-5.5` | Default PI route for Codex OAuth. Best first choice for subscription setups. |
| GPT-5.5 with native Codex app-server behavior | `openai/gpt-5.5` + `agentRuntime.id: "codex"` | Forces the Codex app-server harness for that model ref; see the config sketch below. |
| Image generation or editing | `openai/gpt-image-2` | Works with either `OPENAI_API_KEY` or Codex OAuth |
| Transparent-background images | `openai/gpt-image-1.5` | Use `outputFormat=png` or `webp` with `openai.background=transparent` |
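A minimal config sketch for the app-server row, using only the key paths referenced on this page (`agents.defaults.model.primary` and `agents.defaults.agentRuntime.id`); adjust to your setup:

```json5
{
  agents: {
    defaults: {
      // Direct OpenAI model ref...
      model: { primary: "openai/gpt-5.5" },
      // ...executed through the native Codex app-server harness
      agentRuntime: { id: "codex" },
    },
  },
}
```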
The names are similar but not interchangeable:
| Name you see | Layer | Meaning |
|---|---|---|
| `openai` | Provider prefix | Direct OpenAI Platform API route. |
| `openai-codex` | Provider prefix | OpenAI Codex OAuth/subscription route through the normal OpenClaw PI runner. |
| `codex` | Plugin | Bundled OpenClaw plugin that provides the native Codex app-server runtime and the `/codex` chat commands. |
| `agentRuntime.id: codex` | Agent runtime | Force the native Codex app-server harness for embedded turns. |
| `/codex ...` | Chat command set | Bind/control Codex app-server threads from a conversation. |
| `runtime: "acp", agentId: "codex"` | ACP session route | Explicit fallback path that runs Codex through ACP/acpx. |
This means a config can intentionally contain both `openai-codex/*` model refs and the bundled `codex` plugin with its `/codex` commands; `openclaw doctor` may flag the combination, but it is a supported setup.

| OpenAI capability | OpenClaw surface | Status |
|---|---|---|
| Chat / Responses | `openai/<model>` | Yes |
| Codex subscription models | `openai-codex/<model>` (provider `openai-codex`) | Yes |
| Codex app-server harness | `openai/<model>` + `agentRuntime.id: codex` | Yes |
| Server-side web search | Native OpenAI Responses tool | Yes, when web search is enabled and no provider pinned |
| Images | `image_generate` | Yes |
| Videos | `video_generate` | Yes |
| Text-to-speech | `messages.tts.provider: "openai"` / `tts` | Yes |
| Batch speech-to-text | `tools.media.audio` | Yes |
| Streaming speech-to-text | Voice Call `streaming.provider: "openai"` | Yes |
| Realtime voice | Voice Call `realtime.provider: "openai"` | Yes |
| Embeddings | memory embedding provider | Yes |
OpenClaw can use OpenAI, or an OpenAI-compatible embedding endpoint, for `memory_search`:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai",
        model: "text-embedding-3-small",
      },
    },
  },
}
```
For OpenAI-compatible endpoints that require asymmetric embedding labels, set `queryInputType` and `documentInputType` under `memorySearch`; OpenClaw sends them as the endpoint's `input_type` for query and document embeddings respectively.

Choose your preferred auth method and follow the setup steps.
## OpenAI API key

<Steps>
  <Step title="Get your API key">
    Create or copy an API key from the [OpenAI Platform dashboard](https://platform.openai.com/api-keys).
  </Step>
  <Step title="Run onboarding">
    ```bash
    openclaw onboard --auth-choice openai-api-key
    ```

    Or pass the key directly:

    ```bash
    openclaw onboard --openai-api-key "$OPENAI_API_KEY"
    ```
  </Step>
  <Step title="Verify the model is available">
    ```bash
    openclaw models list --provider openai
    ```
  </Step>
</Steps>

### Route summary

| Model ref | Runtime config | Route | Auth |
| --- | --- | --- | --- |
| `openai/gpt-5.5` | omitted / `agentRuntime.id: "pi"` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
| `openai/gpt-5.4-mini` | omitted / `agentRuntime.id: "pi"` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
| `openai/gpt-5.5` | `agentRuntime.id: "codex"` | Codex app-server harness | Codex app-server |

<Note>
`openai/*` is the direct OpenAI API-key route unless you explicitly force the Codex app-server harness. Use `openai-codex/*` for Codex OAuth through the default PI runner, or use `openai/gpt-5.5` with `agentRuntime.id: "codex"` for native Codex app-server execution.
</Note>

### Config example

```json5
{
  env: { OPENAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
}
```

<Warning>
OpenClaw does **not** expose `openai/gpt-5.3-codex-spark`. Live OpenAI API requests reject that model, and the current Codex catalog does not expose it either.
</Warning>
## Codex OAuth (ChatGPT subscription)

<Steps>
  <Step title="Run Codex OAuth">
    ```bash
    openclaw onboard --auth-choice openai-codex
    ```

    Or run OAuth directly:

    ```bash
    openclaw models auth login --provider openai-codex
    ```

    For headless or callback-hostile setups, add `--device-code` to sign in with a ChatGPT device-code flow instead of the localhost browser callback:

    ```bash
    openclaw models auth login --provider openai-codex --device-code
    ```
  </Step>
  <Step title="Set the default model">
    ```bash
    openclaw config set agents.defaults.model.primary openai-codex/gpt-5.5
    ```
  </Step>
  <Step title="Verify the model is available">
    ```bash
    openclaw models list --provider openai-codex
    ```
  </Step>
</Steps>

### Route summary

| Model ref | Runtime config | Route | Auth |
| --- | --- | --- | --- |
| `openai-codex/gpt-5.5` | omitted / `runtime: "pi"` | ChatGPT/Codex OAuth through PI | Codex sign-in |
| `openai-codex/gpt-5.4-mini` | omitted / `runtime: "pi"` | ChatGPT/Codex OAuth through PI | Codex sign-in |
| `openai-codex/gpt-5.5` | `runtime: "auto"` | Still PI unless a plugin explicitly claims `openai-codex` | Codex sign-in |
| `openai/gpt-5.5` | `agentRuntime.id: "codex"` | Codex app-server harness | Codex app-server auth |

<Note>
Keep using the `openai-codex` provider id for auth/profile commands. The `openai-codex/*` model prefix is also the explicit PI route for Codex OAuth. It does not select or auto-enable the bundled Codex app-server harness.
</Note>

### Config example

```json5
{
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.5" } } },
}
```

<Note>
Onboarding no longer imports OAuth material from `~/.codex`. Sign in with browser OAuth (default) or the device-code flow above; OpenClaw manages the resulting credentials in its own agent auth store.
</Note>

### Status indicator

Chat `/status` shows which model runtime is active for the current session. The default PI harness appears as `Runtime: OpenClaw Pi Default`. When the bundled Codex app-server harness is selected, `/status` shows `Runtime: OpenAI Codex`. Existing sessions keep their recorded harness id, so use `/new` or `/reset` after changing `agentRuntime` if you want `/status` to reflect a new PI/Codex choice.

### Doctor warning

If the bundled `codex` plugin is enabled while the `openai-codex/*` route is selected, `openclaw doctor` warns that the model still resolves through PI. Keep the config unchanged when that is the intended subscription-auth route. Switch to `openai/<model>` plus `agentRuntime.id: "codex"` only when you want native Codex app-server execution.

### Context window cap

OpenClaw treats model metadata and the runtime context cap as separate values. For `openai-codex/gpt-5.5` through Codex OAuth:

* Native `contextWindow`: `1000000`
* Default runtime `contextTokens` cap: `272000`

The smaller default cap has better latency and quality characteristics in practice. Override it with `contextTokens`:

```json5
{
  models: {
    providers: {
      "openai-codex": {
        models: [{ id: "gpt-5.5", contextTokens: 160000 }],
      },
    },
  },
}
```

<Note>
Use `contextWindow` to declare native model metadata. Use `contextTokens` to limit the runtime context budget.
</Note>

### Catalog recovery

OpenClaw uses upstream Codex catalog metadata for `gpt-5.5` when it is present.
If live Codex discovery omits the `openai-codex/gpt-5.5` row while the account is authenticated, OpenClaw synthesizes that OAuth model row so cron, sub-agent, and configured default-model runs do not fail with `Unknown model`.
The native Codex app-server harness (`openai/*` with `agentRuntime.id: "codex"`) prefers the `openai-codex` subscription sign-in and only falls back to `CODEX_API_KEY` or `OPENAI_API_KEY` when no sign-in exists. That means a local ChatGPT/Codex subscription sign-in is not replaced just because the gateway process also has `OPENAI_API_KEY` or `CODEX_API_KEY` in its environment.

The bundled `openai` plugin provides the `image_generate` tool; the default image model is `openai/gpt-image-2`, and it works with both auth methods:

| Capability | OpenAI API key | Codex OAuth |
|---|---|---|
| Model ref | `openai/gpt-image-2` | `openai/gpt-image-2` |
| Auth | `OPENAI_API_KEY` | OpenAI Codex OAuth sign-in |
| Transport | OpenAI Images API | Codex Responses backend |
| Max images per request | 4 | 4 |
| Edit mode | Enabled (up to 5 reference images) | Enabled (up to 5 reference images) |
| Size overrides | Supported, including 2K/4K sizes | Supported, including 2K/4K sizes |
| Aspect ratio / resolution | Not forwarded to OpenAI Images API | Mapped to a supported size when safe |
```json5
{
  agents: {
    defaults: {
      imageGenerationModel: { primary: "openai/gpt-image-2" },
    },
  },
}
```
The plugin supports `gpt-image-2`, `gpt-image-1.5`, `gpt-image-1`, and `gpt-image-1-mini`. Use `openai/gpt-image-1.5` rather than `gpt-image-2` when you need `background: "transparent"`. For a transparent-background request, agents should call `image_generate` with `model: "openai/gpt-image-1.5"`, `outputFormat: "png"` or `"webp"`, and `background: "transparent"` (also exposed as the `openai.background` override); `openai/gpt-image-2` does not support transparent backgrounds, so keep such requests on `gpt-image-1.5`. The same setting is exposed for headless CLI runs:
```bash
openclaw infer image generate \
  --model openai/gpt-image-1.5 \
  --output-format png \
  --background transparent \
  --prompt "A simple red circle sticker on a transparent background" \
  --json
```
Use the same `--output-format` and `--background` flags with `openclaw infer image edit` (also exposed as `--openai-background`); see the CLI sketch after the chat examples below.

For Codex OAuth installs, keep the same `openai/gpt-image-2` model ref: the `openai-codex` sign-in is used, so no `OPENAI_API_KEY` or `models.providers.openai` entry is needed. Fetching reference images from private-network URLs additionally requires `browser.ssrfPolicy.dangerouslyAllowPrivateNetwork: true`.

Generate:
```
/tool image_generate model=openai/gpt-image-2 prompt="A polished launch poster for OpenClaw on macOS" size=3840x2160 count=1
```
Generate a transparent PNG:
```
/tool image_generate model=openai/gpt-image-1.5 prompt="A simple red circle sticker on a transparent background" outputFormat=png background=transparent
```
Edit:
```
/tool image_generate model=openai/gpt-image-2 prompt="Preserve the object shape, change the material to translucent glass" image=/path/to/reference.png size=1024x1536
```
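The same edit can run headless through `openclaw infer image edit` with the flags described above. A sketch; `--image` is an assumed flag name mirroring the chat example's `image=` parameter, so check `openclaw infer image edit --help` for the exact spelling:

```bash
openclaw infer image edit \
  --model openai/gpt-image-2 \
  --image /path/to/reference.png \
  --output-format png \
  --prompt "Preserve the object shape, change the material to translucent glass" \
  --json
```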
The bundled `openai` plugin also provides the `video_generate` tool:

| Capability | Value |
|---|---|
| Default model | `openai/sora-2` |
| Modes | Text-to-video, image-to-video, single-video edit |
| Reference inputs | 1 image or 1 video |
| Size overrides | Supported |
| Other overrides | `aspectRatio`, `resolution`, `audio`, `watermark` |
```json5
{
  agents: {
    defaults: {
      videoGenerationModel: { primary: "openai/sora-2" },
    },
  },
}
```
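A chat invocation sketch that mirrors the `image_generate` syntax shown earlier; this page does not list the `video_generate` parameters, so treat the names below as assumptions:

```
/tool video_generate model=openai/sora-2 prompt="A slow pan across a foggy harbor at dawn"
```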
OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai-codex/gpt-5.5`, `openai/gpt-5.5`, `openrouter/openai/gpt-5.5`, and `opencode/gpt-5.5` all receive it. The bundled native Codex harness applies the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `openai/gpt-5.x` runs under `agentRuntime.id: "codex"` get the same treatment.

The GPT-5 contribution adds a tagged behavior contract for persona persistence, execution safety, tool discipline, output shape, completion checks, and verification. Channel-specific reply and silent-message behavior stays in the shared OpenClaw system prompt and outbound delivery policy. The GPT-5 guidance is always enabled for matching models. The friendly interaction-style layer is separate and configurable:
| Value | Effect |
|---|---|
| `"friendly"` | Enable the friendly interaction-style layer |
| `"on"` | Alias for `"friendly"` |
| `"off"` | Disable only the friendly style layer |
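As a sketch of where that value might live, assuming it sits under `agents.defaults`; the key name `interactionStyle` below is hypothetical, since this page documents only the accepted values:

```json5
{
  agents: {
    defaults: {
      // Hypothetical key name; the accepted values are listed in the table above
      interactionStyle: "friendly",
    },
  },
}
```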
The bundled `openai` plugin can also be pointed at Azure OpenAI by overriding `models.providers.openai.baseUrl`. Use Azure OpenAI when your deployment or compliance requirements call for Azure-hosted endpoints.
For Azure image generation through the bundled `openai` plugin, set `models.providers.openai.baseUrl` and `apiKey`:

```json5
{
  models: {
    providers: {
      openai: {
        baseUrl: "https://<your-resource>.openai.azure.com",
        apiKey: "<azure-openai-api-key>",
      },
    },
  },
}
```
OpenClaw recognizes these Azure host suffixes for the Azure image-generation route:
- `*.openai.azure.com`
- `*.services.ai.azure.com`
- `*.cognitiveservices.azure.com`

For image-generation requests on a recognized Azure host, OpenClaw:

- sends the key in the `api-key` header instead of `Authorization: Bearer`
- targets the `/openai/deployments/{deployment}/...` path with an `?api-version=...` query parameter
- honors the configured `timeoutMs`

Other base URLs (public OpenAI, OpenAI-compatible proxies) keep the standard OpenAI image request shape.
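As an illustration of that request shape; the `images/generations` resource segment is assumed from Azure's Images API, while the host pattern, deployment path, header, and api-version handling come from this page:

```
POST https://<your-resource>.openai.azure.com/openai/deployments/gpt-image-2-prod/images/generations?api-version=2024-12-01-preview
api-key: <azure-openai-api-key>
```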
Set `AZURE_OPENAI_API_VERSION` to change the Azure API version:

```bash
export AZURE_OPENAI_API_VERSION="2024-12-01-preview"
```

The default is `2024-12-01-preview`.

Azure OpenAI binds models to deployments. For Azure image-generation requests routed through the bundled `openai` plugin, the `model` part of the ref is used as the deployment name. If you create a deployment called `gpt-image-2-prod` for `gpt-image-2`, reference it directly:

```
/tool image_generate model=openai/gpt-image-2-prod prompt="A clean poster" size=1024x1024 count=1
```
The same deployment-name rule applies to headless `openclaw infer image generate` calls routed through the bundled `openai` plugin.

Azure image generation is currently available only in a subset of regions (for example `eastus2`, `swedencentral`, `polandcentral`, `westus3`, and `uaenorth`).

Azure OpenAI and public OpenAI do not always accept the same image parameters. Azure may reject options that public OpenAI allows (for example certain `background` values on `gpt-image-2`).

For chat or Responses traffic on Azure (beyond image generation), use the onboarding flow or a dedicated Azure provider config (`azure-openai-responses/*`) instead of repointing `openai.baseUrl`.