# Ollama

OpenClaw integrates with Ollama's native API (`/api/chat`) in three modes: **Cloud + Local**, **Cloud only** (via `https://ollama.com`), and **Local only**. Note that the Ollama provider config uses `baseUrl`, not `baseURL`. Choose your preferred setup method and mode.
text<Steps> <Step title="Run onboarding"> ```bash} openclaw onboard ``` Select **Ollama** from the provider list. </Step> <Step title="Choose your mode"> * **Cloud + Local** — local Ollama host plus cloud models routed through that host * **Cloud only** — hosted Ollama models via `https://ollama.com` * **Local only** — local models only </Step> <Step title="Select a model"> `Cloud only` prompts for `OLLAMA_API_KEY` and suggests hosted cloud defaults. `Cloud + Local` and `Local only` ask for an Ollama base URL, discover available models, and auto-pull the selected local model if it is not available yet. When Ollama reports an installed `:latest` tag such as `gemma4:latest`, setup shows that installed model once instead of showing both `gemma4` and `gemma4:latest` or pulling the bare alias again. `Cloud + Local` also checks whether that Ollama host is signed in for cloud access. </Step> <Step title="Verify the model is available"> ```bash} openclaw models list --provider ollama ``` </Step> </Steps> ### Non-interactive mode ```bash} openclaw onboard --non-interactive \ --auth-choice ollama \ --accept-risk ``` Optionally specify a custom base URL or model: ```bash} openclaw onboard --non-interactive \ --auth-choice ollama \ --custom-base-url "http://ollama-host:11434" \ --custom-model-id "qwen3.5:27b" \ --accept-risk ```
text<Steps> <Step title="Choose cloud or local"> * **Cloud + Local**: install Ollama, sign in with `ollama signin`, and route cloud requests through that host * **Cloud only**: use `https://ollama.com` with an `OLLAMA_API_KEY` * **Local only**: install Ollama from [ollama.com/download](https://ollama.com/download) </Step> <Step title="Pull a local model (local only)"> ```bash} ollama pull gemma4 # or ollama pull gpt-oss:20b # or ollama pull llama3.3 ``` </Step> <Step title="Enable Ollama for OpenClaw"> For `Cloud only`, use your real `OLLAMA_API_KEY`. For host-backed setups, any placeholder value works: ```bash} # Cloud export OLLAMA_API_KEY="your-ollama-api-key" # Local-only export OLLAMA_API_KEY="ollama-local" # Or configure in your config file openclaw config set models.providers.ollama.apiKey "OLLAMA_API_KEY" ``` </Step> <Step title="Inspect and set your model"> ```bash} openclaw models list openclaw models set ollama/gemma4 ``` Or set the default in config: ```json5} { agents: { defaults: { model: { primary: "ollama/gemma4" }, }, }, } ``` </Step> </Steps>
Use **Cloud + Local** during setup. OpenClaw prompts for the Ollama base URL, discovers local models from that host, and checks whether the host is signed in for cloud access with `ollama signin`. When the host is signed in, OpenClaw also suggests hosted cloud defaults such as `kimi-k2.5:cloud`, `minimax-m2.7:cloud`, and `glm-5.1:cloud`. If the host is not signed in yet, OpenClaw keeps the setup local-only until you run `ollama signin`.
Use **Cloud only** during setup. OpenClaw prompts for `OLLAMA_API_KEY`, sets `baseUrl: "https://ollama.com"`, and seeds the hosted cloud model list. This path does **not** require a local Ollama server or `ollama signin`. The cloud model list shown during `openclaw onboard` is populated live from `https://ollama.com/api/tags`, capped at 500 entries, so the picker reflects the current hosted catalog rather than a static seed. If `ollama.com` is unreachable or returns no models at setup time, OpenClaw falls back to the previous hardcoded suggestions so onboarding still completes.
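To preview what that live picker will show, you can query the hosted catalog yourself. A minimal sketch, assuming the hosted endpoint accepts your `OLLAMA_API_KEY` as a Bearer token:

```bash
# List the hosted model catalog that seeds the onboarding picker
curl -s https://ollama.com/api/tags \
  -H "Authorization: Bearer $OLLAMA_API_KEY"
```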
OpenClaw currently suggests `gemma4` as the local default.
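To sanity-check that default outside OpenClaw entirely, you can hit the daemon's native chat endpoint directly. A minimal sketch, assuming a local daemon on the default port with `gemma4` already pulled:

```bash
# One non-streaming turn against Ollama's native /api/chat
curl -s http://127.0.0.1:11434/api/chat -d '{
  "model": "gemma4",
  "messages": [{ "role": "user", "content": "Reply with exactly: pong" }],
  "stream": false
}'
```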
When you set `OLLAMA_API_KEY`, OpenClaw injects a `models.providers.ollama` entry with `api: "ollama"` and the default base URL `http://127.0.0.1:11434`, then discovers models from the daemon:

| Behavior | Detail |
|---|---|
| Catalog query | Queries `/api/tags` for installed models |
| Capability detection | Uses best-effort `/api/show`; derives `contextWindow` from `num_ctx` |
| Vision models | Models reporting a `vision` capability in `/api/show` get `input: ["text", "image"]` |
| Reasoning detection | Uses the `/api/show` `thinking` capability plus name heuristics such as `r1`, `reasoning`, and `think` |
| Token limits | Sets `maxTokens` automatically |
| Costs | Sets all costs to `0` |
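To see the raw data this detection reads, you can call `/api/show` yourself. A sketch against a local daemon (field availability varies by model and Ollama version):

```bash
# Inspect reported capabilities (e.g. "vision", "thinking") for a model
curl -s http://127.0.0.1:11434/api/show \
  -d '{"model": "qwen2.5vl:7b"}' | jq '.capabilities'
```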
This avoids manual model entries while keeping the catalog aligned with the local Ollama instance. You can use a full ref such as `ollama/<pulled-model>:latest` directly with `infer model run` without adding it to `models.json`.

For signed-in Ollama hosts, some `:cloud` models are served through `/api/chat` even when `/api/tags` does not list them, so an `ollama/<model>:cloud` ref can work without an `/api/show` entry.

```bash
# See what models are available
ollama list
openclaw models list
```
For a narrow text-generation smoke test that avoids the full agent tool surface, use local `infer model run`:

```bash
OLLAMA_API_KEY=ollama-local \
openclaw infer model run \
  --local \
  --model ollama/llama3.2:latest \
  --prompt "Reply with exactly: pong" \
  --json
```
That path still uses OpenClaw's configured provider, auth, and native Ollama transport, but it does not start a chat-agent turn or load MCP/tool context. If this succeeds while normal agent replies fail, troubleshoot the model's agent prompt/tool capacity next.
For a narrow vision-model smoke test on the same lean path, add one or more image files to `infer model run`:

```bash
OLLAMA_API_KEY=ollama-local \
openclaw infer model run \
  --local \
  --model ollama/qwen2.5vl:7b \
  --prompt "Describe this image in one sentence." \
  --file ./photo.jpg \
  --json
```
`model run --file` accepts `image/*` attachments only; for audio, use `openclaw infer audio transcribe`. When you switch a conversation with `/model ollama/<model>`, requests go to that provider's configured `baseUrl`.

Isolated cron jobs do one extra local safety check before they start the agent turn. If the selected model resolves to a local, private-network, or `.local` host and its `/api/tags` probe fails, the job is marked `skipped` instead of attempting the `ollama/<model>` turn.

Live-verify the local text path, native stream path, and embeddings against local Ollama with:
```bash
OPENCLAW_LIVE_TEST=1 OPENCLAW_LIVE_OLLAMA=1 OPENCLAW_LIVE_OLLAMA_WEB_SEARCH=0 \
pnpm test:live -- extensions/ollama/ollama.live.test.ts
```
To add a new model, simply pull it with Ollama:
```bash
ollama pull mistral
```
The new model will be automatically discovered and available to use.
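A quick way to confirm both sides of that discovery, assuming the default local daemon:

```bash
# The daemon should now list the model...
curl -s http://127.0.0.1:11434/api/tags | jq '.models[].name'

# ...and OpenClaw's catalog should pick it up
openclaw models list --provider ollama
```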
The bundled Ollama plugin registers Ollama as an image-capable media-understanding provider. This lets OpenClaw route explicit image-description requests and configured image-model defaults through local or hosted Ollama vision models.
For local vision, pull a model that supports images:
```bash
ollama pull qwen2.5vl:7b
export OLLAMA_API_KEY="ollama-local"
```
Then verify with the infer CLI:
```bash
openclaw infer image describe \
  --file ./photo.jpg \
  --model ollama/qwen2.5vl:7b \
  --json
```
`--model` takes a full `<provider/model>` ref; without it, `openclaw infer image describe` uses the configured default. Use `infer image describe` to exercise the same image-understanding path as `agents.defaults.imageModel`, and `infer model run --file` for the leaner transport-only path.

To make Ollama the default image-understanding model for inbound media, configure
`agents.defaults.imageModel`:

```json5
{
  agents: {
    defaults: {
      imageModel: {
        primary: "ollama/qwen2.5vl:7b",
      },
    },
  },
}
```
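For reference, the underlying vision request is an ordinary native chat call carrying base64-encoded images. A minimal sketch, assuming a local daemon and a `./photo.jpg` on disk:

```bash
# Encode the image and send it through Ollama's native /api/chat
IMG=$(base64 < ./photo.jpg | tr -d '\n')
curl -s http://127.0.0.1:11434/api/chat -d "{
  \"model\": \"qwen2.5vl:7b\",
  \"messages\": [{
    \"role\": \"user\",
    \"content\": \"Describe this image in one sentence.\",
    \"images\": [\"$IMG\"]
  }],
  \"stream\": false
}"
```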
Prefer the full `ollama/<model>` ref. If you define `models.providers.ollama.models` explicitly, mark vision models with `input: ["text", "image"]` so `imageModel` accepts them, and reference `qwen2.5vl:7b` as `ollama/qwen2.5vl:7b`.

Slow local vision models can need a longer image-understanding timeout than cloud models. They can also crash or stall when Ollama tries to allocate the full advertised vision context on constrained hardware. Set a capability timeout, and cap `num_ctx`:

```json5
{
  models: {
    providers: {
      ollama: {
        models: [
          {
            id: "qwen2.5vl:7b",
            name: "qwen2.5vl:7b",
            input: ["text", "image"],
            params: { num_ctx: 2048, keep_alive: "1m" },
          },
        ],
      },
    },
  },
  tools: {
    media: {
      image: {
        timeoutSeconds: 180,
        models: [{ provider: "ollama", model: "qwen2.5vl:7b", timeoutSeconds: 300 }],
      },
    },
  },
}
```
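Assuming those `params` are forwarded as native request fields (which is how Ollama expects them), the equivalent raw request looks like this:

```bash
# num_ctx travels in "options"; keep_alive is a top-level field on /api/chat
curl -s http://127.0.0.1:11434/api/chat -d '{
  "model": "qwen2.5vl:7b",
  "messages": [{ "role": "user", "content": "ping" }],
  "options": { "num_ctx": 2048 },
  "keep_alive": "1m",
  "stream": false
}'
```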
This timeout applies to inbound image understanding and to the explicit `image` tool; it is separate from the provider-level `models.providers.ollama.timeoutSeconds`. Live-verify the explicit image tool against local Ollama with:
```bash
OPENCLAW_LIVE_TEST=1 OPENCLAW_LIVE_OLLAMA_IMAGE=1 \
pnpm test:live -- src/agents/tools/image-tool.ollama.live.test.ts
```
If you define `models.providers.ollama.models` explicitly, include the image capability in each vision entry:

```json5
{
  id: "qwen2.5vl:7b",
  name: "qwen2.5vl:7b",
  input: ["text", "image"],
  contextWindow: 128000,
  maxTokens: 8192,
}
```
OpenClaw rejects image-description requests for models that are not marked image-capable. With implicit discovery, OpenClaw reads this from Ollama when `/api/show` reports a `vision` capability.

Set the provider key (a placeholder for local-only setups):

```bash
export OLLAMA_API_KEY="ollama-local"
```

<Tip>
If `OLLAMA_API_KEY` is set, you can omit `apiKey` in the provider entry and OpenClaw will fill it for availability checks.
</Tip>
```json5
{
  models: {
    providers: {
      ollama: {
        baseUrl: "https://ollama.com",
        apiKey: "OLLAMA_API_KEY",
        api: "ollama",
        models: [
          {
            id: "kimi-k2.5:cloud",
            name: "kimi-k2.5:cloud",
            reasoning: false,
            input: ["text", "image"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://ollama-host:11434", // No /v1 - use native Ollama API URL
        api: "ollama", // Set explicitly to guarantee native tool-calling behavior
        timeoutSeconds: 300, // Optional: give cold local models longer to connect and stream
        models: [
          {
            id: "qwen3:32b",
            name: "qwen3:32b",
            params: {
              keep_alive: "15m", // Optional: keep the model loaded between turns
            },
          },
        ],
      },
    },
  },
}
```

<Warning>
Do not add `/v1` to the URL. The `/v1` path uses OpenAI-compatible mode, where tool calling is not reliable. Use the base Ollama URL without a path suffix.
</Warning>
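A quick way to confirm the URL points at the native API, assuming the host above:

```bash
# Should return {"version":"..."}; a 404 here usually means a wrong path suffix
curl -s http://ollama-host:11434/api/version
```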
Use these as starting points and replace model IDs with the exact names from `ollama list` or `openclaw models list --provider ollama`. Once configured, all your Ollama models are available:
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/gpt-oss:20b",
        fallbacks: ["ollama/llama3.3", "ollama/qwen2.5-coder:32b"],
      },
    },
  },
}
```
Custom Ollama provider ids are also supported. When a model ref uses the active provider prefix, such as `ollama-spark/qwen3:32b`, OpenClaw resolves the bare id `qwen3:32b` against that provider.

For slow local models, prefer provider-scoped request tuning before raising the whole agent runtime timeout:
```json5
{
  models: {
    providers: {
      ollama: {
        timeoutSeconds: 300,
        models: [
          {
            id: "gemma4:26b",
            name: "gemma4:26b",
            params: { keep_alive: "15m" },
          },
        ],
      },
    },
  },
}
```
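To check that `keep_alive` is actually holding the model resident between turns, ask the daemon what is loaded:

```bash
# Lists loaded models with memory use and the keep-alive expiry time
ollama ps
```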
`timeoutSeconds` bounds each provider request, and `params.keep_alive` is passed through as the native `keep_alive` field on `/api/chat`. If replies still fail, work up the stack from the daemon:

```bash
# Ollama daemon visible to this machine
curl http://127.0.0.1:11434/api/tags

# OpenClaw catalog and selected model
openclaw models list --provider ollama
openclaw models status

# Direct model smoke
openclaw infer model run \
  --model ollama/gemma4 \
  --prompt "Reply with exactly: ok"
```
For remote hosts, replace `127.0.0.1` in the `curl` check with the host from your configured `baseUrl`.

OpenClaw supports Ollama Web Search as a bundled `web_search` tool provider:

| Property | Detail |
|---|---|
| Host | Uses your configured Ollama host (`models.providers.ollama.baseUrl`), defaulting to `http://127.0.0.1:11434`, or `https://ollama.com` for hosted search |
| Auth | Key-free for signed-in local Ollama hosts; `OLLAMA_API_KEY` required for `https://ollama.com` |
| Requirement | Local/self-hosted hosts must be running and signed in with `ollama signin`, or set `baseUrl: "https://ollama.com"` |
Choose Ollama Web Search during `openclaw onboard`, or enable it later with `openclaw configure --section web`:

```json5
{
  tools: {
    web: {
      search: {
        provider: "ollama",
      },
    },
  },
}
```
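For reference, hosted search boils down to a single authenticated request. A sketch against the hosted endpoint, assuming Bearer auth with your `OLLAMA_API_KEY` and a hypothetical query string:

```bash
curl -s https://ollama.com/api/web_search \
  -H "Authorization: Bearer $OLLAMA_API_KEY" \
  -d '{"query": "openclaw ollama provider"}'
```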
For direct hosted search through Ollama Cloud:
```json5
{
  models: {
    providers: {
      ollama: {
        baseUrl: "https://ollama.com",
        apiKey: "OLLAMA_API_KEY",
        api: "ollama",
        models: [{ id: "kimi-k2.5:cloud", name: "kimi-k2.5:cloud", input: ["text"] }],
      },
    },
  },
  tools: {
    web: {
      search: { provider: "ollama" },
    },
  },
}
```
For a signed-in local daemon, OpenClaw uses the daemon's `/api/experimental/web_search` endpoint; against hosted cloud, it calls `https://ollama.com/api/web_search` directly.
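You can probe the local path the same way. A sketch, assuming the daemon is signed in with `ollama signin` and accepts the same request shape as the hosted endpoint:

```bash
curl -s http://127.0.0.1:11434/api/experimental/web_search \
  -d '{"query": "test"}'
```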