Technical reference for the OpenClaw framework.
OpenClaw’s Gateway can serve a small OpenAI-compatible Chat Completions endpoint.
This endpoint is disabled by default. Enable it in config first.
`POST /v1/chat/completions`

`http://<gateway-host>:<port>/v1/chat/completions`

When the Gateway's OpenAI-compatible HTTP surface is enabled, it also serves:

- `GET /v1/models`
- `GET /v1/models/{id}`
- `POST /v1/embeddings`
- `POST /v1/responses`

Under the hood, requests are executed as a normal Gateway agent run (same codepath as `openclaw agent`). Authentication uses the Gateway auth configuration.
Common HTTP auth paths:
- `gateway.auth.mode="token"` or `"password"`: send `Authorization: Bearer <token-or-password>`
- `gateway.auth.mode="trusted-proxy"`: authentication is delegated to the fronting proxy
- `gateway.auth.mode="none"`: no credentials required

Notes:

- `gateway.auth.mode="token"` reads the token from `gateway.auth.token` or `OPENCLAW_GATEWAY_TOKEN`.
- `gateway.auth.mode="password"` reads the password from `gateway.auth.password` or `OPENCLAW_GATEWAY_PASSWORD`.
- `gateway.auth.mode="trusted-proxy"` accepts direct loopback requests when `gateway.auth.trustedProxy.allowLoopback = true`.
- `gateway.auth.rateLimit` throttling returns `429` with a `Retry-After` header.

Treat this endpoint as a full operator-access surface for the gateway instance.
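For clients, token and password modes look identical on the wire. A minimal sketch (the helper and its error handling are illustrative, not part of OpenClaw; the environment variable names are the ones listed above):

```python
import os

def gateway_auth_headers():
    """Build the Authorization header for token or password auth.
    Env var names follow the notes above; the helper itself is a sketch."""
    secret = os.environ.get("OPENCLAW_GATEWAY_TOKEN") or os.environ.get(
        "OPENCLAW_GATEWAY_PASSWORD"
    )
    if not secret:
        raise RuntimeError("no gateway credential configured")
    # Token and password modes use the same bearer scheme.
    return {"Authorization": f"Bearer {secret}"}
```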
With `token` or `password` auth, the `x-openclaw-scopes` request header selects the operator scopes for the call; with `gateway.auth.mode="none"`, `x-openclaw-scopes` is the only scope control.

Auth matrix:

| Auth mode | Credential | Scopes |
| --- | --- | --- |
| `gateway.auth.mode="token"` or `"password"` | `Authorization: Bearer ...` | `x-openclaw-scopes` header: `operator.admin`, `operator.approvals`, `operator.pairing`, `operator.read`, `operator.talk.secrets`, `operator.write` |
| `gateway.auth.mode="none"` | none | `x-openclaw-scopes` header (defaults to `operator.admin`) |

See Security and Remote access.
OpenClaw treats the OpenAI `model` field as an agent selector:

- `model: "openclaw"`
- `model: "openclaw/default"`
- `model: "openclaw/<agentId>"`

Optional request headers:

- `x-openclaw-model: <provider/model-or-bare-id>`
- `x-openclaw-agent-id: <agentId>`
- `x-openclaw-session-key: <sessionKey>`
- `x-openclaw-message-channel: <channel>`

Compatibility aliases still accepted:
- `model: "openclaw:<agentId>"`
- `model: "agent:<agentId>"`

To enable the endpoint, set `gateway.http.endpoints.chatCompletions.enabled` to `true`:

```json5
{
  gateway: {
    http: {
      endpoints: {
        chatCompletions: { enabled: true },
      },
    },
  },
}
```
To disable it again, set `gateway.http.endpoints.chatCompletions.enabled` to `false`:

```json5
{
  gateway: {
    http: {
      endpoints: {
        chatCompletions: { enabled: false },
      },
    },
  },
}
```
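Putting the selectors together: a client picks the agent through the `model` field and can swap the backing model via a header. A sketch, assuming the local gateway address used in the curl examples in this page (the helper itself is hypothetical):

```python
import json

GATEWAY = "http://127.0.0.1:18789"  # assumed local gateway address/port

def chat_request(agent_id="default", backing_model=None):
    """Assemble URL, headers, and JSON body for a chat completion.
    Header and field names come from the lists above; this is a sketch."""
    headers = {"Content-Type": "application/json"}
    if backing_model:
        # Override the backing provider model without changing the agent.
        headers["x-openclaw-model"] = backing_model
    body = {
        "model": f"openclaw/{agent_id}",
        "messages": [{"role": "user", "content": "hi"}],
    }
    return f"{GATEWAY}/v1/chat/completions", headers, json.dumps(body)
```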
By default the endpoint is stateless per request (a new session key is generated each call).
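To keep state across calls anyway, a client can send the same `x-openclaw-session-key` header (from the header list above) on every request. A trivial sketch, with a made-up key value:

```python
def session_headers(session_key):
    """Pin repeated requests to one Gateway session by reusing the same
    x-openclaw-session-key header (key format here is an arbitrary example)."""
    return {"x-openclaw-session-key": session_key}
```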
If the request includes an OpenAI `user` field, it is used to derive the session key, so repeated calls with the same `user` share a session.

This is the highest-leverage compatibility set for self-hosted frontends and tooling:
- `/v1/models`
- `/v1/embeddings`
- `/v1/chat/completions`
- `/v1/responses`

Set `stream: true` to stream the response as Server-Sent Events (`Content-Type: text/event-stream`): each event is a `data: <json>` line, and the stream terminates with `data: [DONE]`.

For a basic Open WebUI connection:
- Base URL: `http://127.0.0.1:18789/v1` (from inside Docker, `http://host.docker.internal:18789/v1`)
- Model: `openclaw/default`

Expected behavior:
- `GET /v1/models` lists `openclaw/default`.
- Requests for `openclaw/default` run the default agent unless the `x-openclaw-model` header overrides the backing model.

Quick smoke:
```bash
curl -sS http://127.0.0.1:18789/v1/models \
  -H 'Authorization: Bearer YOUR_TOKEN'
```
If that returns `openclaw/default`, the endpoint is reachable and your token works.

Non-streaming:
```bash
curl -sS http://127.0.0.1:18789/v1/chat/completions \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "openclaw/default",
    "messages": [{"role":"user","content":"hi"}]
  }'
```
Streaming:
```bash
curl -N http://127.0.0.1:18789/v1/chat/completions \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -H 'x-openclaw-model: openai/gpt-5.4' \
  -d '{
    "model": "openclaw/research",
    "stream": true,
    "messages": [{"role":"user","content":"hi"}]
  }'
```
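Consuming that stream amounts to standard SSE parsing. A sketch that assumes the `data: <json>` / `data: [DONE]` framing described above (a real client would read lines incrementally from the HTTP response):

```python
import json

def parse_sse_stream(lines):
    """Yield decoded chunk objects from SSE lines: each event is a
    `data: <json>` line; the stream ends at `data: [DONE]`."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)
```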
List models:
```bash
curl -sS http://127.0.0.1:18789/v1/models \
  -H 'Authorization: Bearer YOUR_TOKEN'
```
Fetch one model:
```bash
curl -sS http://127.0.0.1:18789/v1/models/openclaw%2Fdefault \
  -H 'Authorization: Bearer YOUR_TOKEN'
```
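The `%2F` in that URL matters: the model id contains a slash, so it must be percent-encoded when it appears in the `/v1/models/{id}` path. For example:

```python
from urllib.parse import quote

# Percent-encode the slash in the model id before building the path.
model_id = quote("openclaw/default", safe="")
url = f"http://127.0.0.1:18789/v1/models/{model_id}"
```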
Create embeddings:
```bash
curl -sS http://127.0.0.1:18789/v1/embeddings \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -H 'x-openclaw-model: openai/text-embedding-3-small' \
  -d '{
    "model": "openclaw/default",
    "input": ["alpha", "beta"]
  }'
```
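The same request can be assembled programmatically. A sketch mirroring the curl example above (the header name comes from this page; the helper itself is hypothetical):

```python
import json

def embeddings_request(inputs, backing_model=None):
    """Build headers and body for /v1/embeddings. `input` may be a
    single string or a list of strings, as in the curl example."""
    headers = {"Content-Type": "application/json"}
    if backing_model:
        headers["x-openclaw-model"] = backing_model
    return headers, json.dumps({"model": "openclaw/default", "input": inputs})
```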
Notes:
- `/v1/models` advertises `openclaw/default`; pick the backing model with the `x-openclaw-model` header rather than the `model` field.
- `/v1/embeddings` accepts a string or an array of strings as `input`.