# Ollama

OpenClaw integrates with Ollama's native API (`/api/chat`) for hosted cloud models and local/self-hosted Ollama servers. You can use Ollama in three modes: **Cloud + Local** through a reachable Ollama host, **Cloud only** against `https://ollama.com`, or **Local only** against a reachable Ollama host.

    warning

    **Remote Ollama users**: Do not use the `/v1` OpenAI-compatible URL (`http://host:11434/v1`) with OpenClaw. This breaks tool calling and models may output raw tool JSON as plain text. Use the native Ollama API URL instead: `baseUrl: "http://host:11434"` (no `/v1`).
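
To confirm a host is addressed through the native API rather than the `/v1` shim, probing the native root is a quick check; a sketch, with the host name as a placeholder:

```bash
# The native API serves /api/tags; a baseUrl ending in /v1 will not
curl -s http://host:11434/api/tags
```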

Ollama provider config uses `baseUrl` as the canonical key. OpenClaw also accepts `baseURL` for compatibility with OpenAI SDK-style examples, but new config should prefer `baseUrl`.
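
The same canonical key can be set from the CLI; a sketch, assuming the dotted path mirrors the JSON5 layout used throughout this page:

```bash
# Prefer `baseUrl` (not `baseURL`) when writing new config
openclaw config set models.providers.ollama.baseUrl "http://127.0.0.1:11434"
```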

## Auth rules

## Getting started

    Choose your preferred setup method and mode.

**Best for:** fastest path to a working Ollama cloud or local setup.

1. **Run onboarding**

   ```bash
   openclaw onboard
   ```

   Select **Ollama** from the provider list.

2. **Choose your mode**

   - **Cloud + Local** — local Ollama host plus cloud models routed through that host
   - **Cloud only** — hosted Ollama models via `https://ollama.com`
   - **Local only** — local models only

3. **Select a model**

   `Cloud only` prompts for `OLLAMA_API_KEY` and suggests hosted cloud defaults. `Cloud + Local` and `Local only` ask for an Ollama base URL, discover available models, and auto-pull the selected local model if it is not available yet. When Ollama reports an installed `:latest` tag such as `gemma4:latest`, setup shows that installed model once instead of showing both `gemma4` and `gemma4:latest` or pulling the bare alias again. `Cloud + Local` also checks whether that Ollama host is signed in for cloud access.

4. **Verify the model is available**

   ```bash
   openclaw models list --provider ollama
   ```

### Non-interactive mode

```bash
openclaw onboard --non-interactive \
  --auth-choice ollama \
  --accept-risk
```

Optionally specify a custom base URL or model:

```bash
openclaw onboard --non-interactive \
  --auth-choice ollama \
  --custom-base-url "http://ollama-host:11434" \
  --custom-model-id "qwen3.5:27b" \
  --accept-risk
```
**Best for:** full control over cloud or local setup.

1. **Choose cloud or local**

   - **Cloud + Local**: install Ollama, sign in with `ollama signin`, and route cloud requests through that host
   - **Cloud only**: use `https://ollama.com` with an `OLLAMA_API_KEY`
   - **Local only**: install Ollama from [ollama.com/download](https://ollama.com/download)

2. **Pull a local model (local only)**

   ```bash
   ollama pull gemma4
   # or
   ollama pull gpt-oss:20b
   # or
   ollama pull llama3.3
   ```

3. **Enable Ollama for OpenClaw**

   For `Cloud only`, use your real `OLLAMA_API_KEY`. For host-backed setups, any placeholder value works:

   ```bash
   # Cloud
   export OLLAMA_API_KEY="your-ollama-api-key"

   # Local-only
   export OLLAMA_API_KEY="ollama-local"

   # Or configure in your config file
   openclaw config set models.providers.ollama.apiKey "OLLAMA_API_KEY"
   ```

4. **Inspect and set your model**

   ```bash
   openclaw models list
   openclaw models set ollama/gemma4
   ```

   Or set the default in config:

   ```json5
   {
     agents: {
       defaults: {
         model: { primary: "ollama/gemma4" },
       },
     },
   }
   ```

## Cloud models

`Cloud + Local` uses a reachable Ollama host as the control point for both local and cloud models. This is Ollama's preferred hybrid flow. Use **Cloud + Local** during setup. OpenClaw prompts for the Ollama base URL, discovers local models from that host, and checks whether the host is signed in for cloud access with `ollama signin`. When the host is signed in, OpenClaw also suggests hosted cloud defaults such as `kimi-k2.5:cloud`, `minimax-m2.7:cloud`, and `glm-5.1:cloud`. If the host is not signed in yet, OpenClaw keeps the setup local-only until you run `ollama signin`.

`Cloud only` runs against Ollama's hosted API at `https://ollama.com`. Use **Cloud only** during setup. OpenClaw prompts for `OLLAMA_API_KEY`, sets `baseUrl: "https://ollama.com"`, and seeds the hosted cloud model list. This path does **not** require a local Ollama server or `ollama signin`. The cloud model list shown during `openclaw onboard` is populated live from `https://ollama.com/api/tags`, capped at 500 entries, so the picker reflects the current hosted catalog rather than a static seed. If `ollama.com` is unreachable or returns no models at setup time, OpenClaw falls back to the previous hardcoded suggestions so onboarding still completes.

In local-only mode, OpenClaw discovers models from the configured Ollama instance. This path is for local or self-hosted Ollama servers. OpenClaw currently suggests `gemma4` as the local default.

## Model discovery (implicit provider)

When you set `OLLAMA_API_KEY` (or an auth profile) and do not define `models.providers.ollama` or another custom remote provider with `api: "ollama"`, OpenClaw discovers models from the local Ollama instance at `http://127.0.0.1:11434`.

| Behavior | Detail |
| --- | --- |
| Catalog query | Queries `/api/tags` |
| Capability detection | Uses best-effort `/api/show` lookups to read `contextWindow`, expanded `num_ctx` Modelfile parameters, and capabilities including vision/tools |
| Vision models | Models with a `vision` capability reported by `/api/show` are marked as image-capable (`input: ["text", "image"]`), so OpenClaw auto-injects images into the prompt |
| Reasoning detection | Uses `/api/show` capabilities when available, including `thinking`; falls back to a model-name heuristic (`r1`, `reasoning`, `think`) when Ollama omits capabilities |
| Token limits | Sets `maxTokens` to the default Ollama max-token cap used by OpenClaw |
| Costs | Sets all costs to `0` |
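
The endpoints the discovery pass relies on can also be probed by hand; a sketch against the default local host (the model name is a placeholder, and the exact `/api/show` fields vary by Ollama version):

```bash
# List the local catalog that discovery enumerates
curl -s http://127.0.0.1:11434/api/tags

# Read one model's capabilities and Modelfile parameters (best effort)
curl -s http://127.0.0.1:11434/api/show -d '{"model": "gemma4"}'
```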

This avoids manual model entries while keeping the catalog aligned with the local Ollama instance. You can use a full ref such as `ollama/<pulled-model>:latest` in local `infer model run`; OpenClaw resolves that installed model from Ollama's live catalog without requiring a hand-written `models.json` entry.

For signed-in Ollama hosts, some `:cloud` models may be usable through `/api/chat` and `/api/show` before they appear in `/api/tags`. When you explicitly select a full `ollama/<model>:cloud` ref, OpenClaw validates that exact missing model with `/api/show` and adds it to the runtime catalog only if Ollama confirms model metadata. Typos still fail as unknown models instead of being auto-created.
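
You can check by hand whether a `:cloud` model resolves on a signed-in host before selecting it; a sketch, with the model name as a placeholder:

```bash
# A signed-in host returns model metadata; a typo returns an error instead
curl -s http://127.0.0.1:11434/api/show -d '{"model": "kimi-k2.5:cloud"}'
```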

```bash
# See what models are available
ollama list
openclaw models list
```

For a narrow text-generation smoke test that avoids the full agent tool surface, use local `infer model run` with a full Ollama model ref:

```bash
OLLAMA_API_KEY=ollama-local \
openclaw infer model run \
  --local \
  --model ollama/llama3.2:latest \
  --prompt "Reply with exactly: pong" \
  --json
```

    That path still uses OpenClaw's configured provider, auth, and native Ollama transport, but it does not start a chat-agent turn or load MCP/tool context. If this succeeds while normal agent replies fail, troubleshoot the model's agent prompt/tool capacity next.

For a narrow vision-model smoke test on the same lean path, add one or more image files to `infer model run`. This sends the prompt and image directly to the selected Ollama vision model without loading chat tools, memory, or prior session context:

```bash
OLLAMA_API_KEY=ollama-local \
openclaw infer model run \
  --local \
  --model ollama/qwen2.5vl:7b \
  --prompt "Describe this image in one sentence." \
  --file ./photo.jpg \
  --json
```

`model run --file` accepts files detected as `image/*`, including common PNG, JPEG, and WebP inputs. Non-image files are rejected before Ollama is called. For speech recognition, use `openclaw infer audio transcribe` instead.

When you switch a conversation with `/model ollama/<model>`, OpenClaw treats that as an exact user selection. If the configured Ollama `baseUrl` is unreachable, the next reply fails with the provider error instead of silently answering from another configured fallback model.

Isolated cron jobs do one extra local safety check before they start the agent turn. If the selected model resolves to a local, private-network, or `.local` Ollama provider and `/api/tags` is unreachable, OpenClaw records that cron run as `skipped` with the selected `ollama/<model>` in the error text. The endpoint preflight is cached for 5 minutes, so multiple cron jobs pointed at the same stopped Ollama daemon do not all launch failing model requests.
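
The preflight roughly corresponds to a reachability check you can run yourself; a sketch, with the timeout chosen for illustration:

```bash
# If this fails, isolated cron runs that select a local Ollama model are skipped
curl -s --max-time 5 http://127.0.0.1:11434/api/tags >/dev/null \
  && echo "Ollama reachable" \
  || echo "unreachable: cron runs would be recorded as skipped"
```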

    Live-verify the local text path, native stream path, and embeddings against local Ollama with:

```bash
OPENCLAW_LIVE_TEST=1 OPENCLAW_LIVE_OLLAMA=1 OPENCLAW_LIVE_OLLAMA_WEB_SEARCH=0 \
  pnpm test:live -- extensions/ollama/ollama.live.test.ts
```

    To add a new model, simply pull it with Ollama:

```bash
ollama pull mistral
```

    The new model will be automatically discovered and available to use.

    note

    If you set `models.providers.ollama` explicitly, or configure a custom remote provider such as `models.providers.ollama-cloud` with `api: "ollama"`, auto-discovery is skipped and you must define models manually. Loopback custom providers such as `http://127.0.0.2:11434` are still treated as local. See the explicit config section below.

## Vision and image description

    The bundled Ollama plugin registers Ollama as an image-capable media-understanding provider. This lets OpenClaw route explicit image-description requests and configured image-model defaults through local or hosted Ollama vision models.

    For local vision, pull a model that supports images:

```bash
ollama pull qwen2.5vl:7b
export OLLAMA_API_KEY="ollama-local"
```

    Then verify with the infer CLI:

```bash
openclaw infer image describe \
  --file ./photo.jpg \
  --model ollama/qwen2.5vl:7b \
  --json
```

`--model` must be a full `<provider/model>` ref. When it is set, `openclaw infer image describe` runs that model directly instead of skipping description because the model supports native vision.

Use `infer image describe` when you want OpenClaw's image-understanding provider flow, configured `agents.defaults.imageModel`, and image-description output shape. Use `infer model run --file` when you want a raw multimodal model probe with a custom prompt and one or more images.

To make Ollama the default image-understanding model for inbound media, configure `agents.defaults.imageModel`:

```json5
{
  agents: {
    defaults: {
      imageModel: {
        primary: "ollama/qwen2.5vl:7b",
      },
    },
  },
}
```
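
The same default can likely be set from the CLI; a sketch, assuming the dotted path mirrors the JSON5 structure as in the `apiKey` example earlier:

```bash
# Assumed path form; adjust if your config layout differs
openclaw config set agents.defaults.imageModel.primary "ollama/qwen2.5vl:7b"
```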

Prefer the full `ollama/<model>` ref. If the same model is listed under `models.providers.ollama.models` with `input: ["text", "image"]` and no other configured image provider exposes that bare model ID, OpenClaw also normalizes a bare `imageModel` ref such as `qwen2.5vl:7b` to `ollama/qwen2.5vl:7b`. If more than one configured image provider has the same bare ID, use the provider prefix explicitly.

Slow local vision models can need a longer image-understanding timeout than cloud models. They can also crash or stop when Ollama tries to allocate the full advertised vision context on constrained hardware. Set a capability timeout, and cap `num_ctx` on the model entry when you only need a normal image-description turn:

```json5
{
  models: {
    providers: {
      ollama: {
        models: [
          {
            id: "qwen2.5vl:7b",
            name: "qwen2.5vl:7b",
            input: ["text", "image"],
            params: { num_ctx: 2048, keep_alive: "1m" },
          },
        ],
      },
    },
  },
  tools: {
    media: {
      image: {
        timeoutSeconds: 180,
        models: [{ provider: "ollama", model: "qwen2.5vl:7b", timeoutSeconds: 300 }],
      },
    },
  },
}
```

This timeout applies to inbound image understanding and to the explicit `image` tool the agent can call during a turn. Provider-level `models.providers.ollama.timeoutSeconds` still controls the underlying Ollama HTTP request guard for normal model calls.

    Live-verify the explicit image tool against local Ollama with:

```bash
OPENCLAW_LIVE_TEST=1 OPENCLAW_LIVE_OLLAMA_IMAGE=1 \
  pnpm test:live -- src/agents/tools/image-tool.ollama.live.test.ts
```

If you define `models.providers.ollama.models` manually, mark vision models with image input support:

```json5
{
  id: "qwen2.5vl:7b",
  name: "qwen2.5vl:7b",
  input: ["text", "image"],
  contextWindow: 128000,
  maxTokens: 8192,
}
```

OpenClaw rejects image-description requests for models that are not marked image-capable. With implicit discovery, OpenClaw reads this from Ollama when `/api/show` reports a vision capability.

## Configuration

The simplest local-only enablement path is via environment variable:

```bash
export OLLAMA_API_KEY="ollama-local"
```

tip

If `OLLAMA_API_KEY` is set, you can omit `apiKey` in the provider entry and OpenClaw will fill it for availability checks.

Use explicit config when you want hosted cloud setup, when Ollama runs on another host or port, when you want to force specific context windows or model lists, or when you want fully manual model definitions.

```json5
{
  models: {
    providers: {
      ollama: {
        baseUrl: "https://ollama.com",
        apiKey: "OLLAMA_API_KEY",
        api: "ollama",
        models: [
          {
            id: "kimi-k2.5:cloud",
            name: "kimi-k2.5:cloud",
            reasoning: false,
            input: ["text", "image"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```

If Ollama is running on a different host or port (explicit config disables auto-discovery, so define models manually):

```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://ollama-host:11434", // No /v1 - use native Ollama API URL
        api: "ollama", // Set explicitly to guarantee native tool-calling behavior
        timeoutSeconds: 300, // Optional: give cold local models longer to connect and stream
        models: [
          {
            id: "qwen3:32b",
            name: "qwen3:32b",
            params: {
              keep_alive: "15m", // Optional: keep the model loaded between turns
            },
          },
        ],
      },
    },
  },
}
```

warning

Do not add `/v1` to the URL. The `/v1` path uses OpenAI-compatible mode, where tool calling is not reliable. Use the base Ollama URL without a path suffix.

## Common recipes

Use these as starting points and replace model IDs with the exact names from `ollama list` or `openclaw models list --provider ollama`.

### Model selection

    Once configured, all your Ollama models are available:

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/gpt-oss:20b",
        fallbacks: ["ollama/llama3.3", "ollama/qwen2.5-coder:32b"],
      },
    },
  },
}
```

Custom Ollama provider IDs are also supported. When a model ref uses the active provider prefix, such as `ollama-spark/qwen3:32b`, OpenClaw strips only that prefix before calling Ollama so the server receives `qwen3:32b`.

    For slow local models, prefer provider-scoped request tuning before raising the whole agent runtime timeout:

```json5
{
  models: {
    providers: {
      ollama: {
        timeoutSeconds: 300,
        models: [
          {
            id: "gemma4:26b",
            name: "gemma4:26b",
            params: { keep_alive: "15m" },
          },
        ],
      },
    },
  },
}
```

`timeoutSeconds` applies to the model HTTP request, including connection setup, headers, body streaming, and the total guarded-fetch abort. `params.keep_alive` is forwarded to Ollama as top-level `keep_alive` on native `/api/chat` requests; set it per model when first-turn load time is the bottleneck.
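
For reference, a hand-rolled native request with the forwarded `keep_alive` looks roughly like this (a sketch; the request body OpenClaw actually sends carries more fields):

```bash
# Top-level keep_alive on /api/chat keeps the model loaded after the reply
curl -s http://127.0.0.1:11434/api/chat -d '{
  "model": "gemma4:26b",
  "keep_alive": "15m",
  "messages": [{"role": "user", "content": "ping"}],
  "stream": false
}'
```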

### Quick verification

```bash
# Ollama daemon visible to this machine
curl http://127.0.0.1:11434/api/tags

# OpenClaw catalog and selected model
openclaw models list --provider ollama
openclaw models status

# Direct model smoke
openclaw infer model run \
  --model ollama/gemma4 \
  --prompt "Reply with exactly: ok"
```

For remote hosts, replace `127.0.0.1` with the host used in `baseUrl`. If `curl` works but OpenClaw does not, check whether the Gateway runs on a different machine, container, or service account.
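
When the Gateway runs on another machine, run the reachability probe from that machine rather than your workstation; a sketch with placeholder host names:

```bash
# Probe Ollama from the Gateway host itself (names are placeholders)
ssh gateway-host 'curl -s --max-time 5 http://ollama-host:11434/api/tags'
```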

## Ollama Web Search

OpenClaw supports Ollama Web Search as a bundled `web_search` provider.

| Property | Detail |
| --- | --- |
| Host | Uses your configured Ollama host (`models.providers.ollama.baseUrl` when set, otherwise `http://127.0.0.1:11434`); `https://ollama.com` uses the hosted API directly |
| Auth | Key-free for signed-in local Ollama hosts; `OLLAMA_API_KEY` or configured provider auth for direct `https://ollama.com` search or auth-protected hosts |
| Requirement | Local/self-hosted hosts must be running and signed in with `ollama signin`; direct hosted search requires `baseUrl: "https://ollama.com"` plus a real Ollama API key |

Choose Ollama Web Search during `openclaw onboard` or `openclaw configure --section web`, or set:

```json5
{
  tools: {
    web: {
      search: {
        provider: "ollama",
      },
    },
  },
}
```

    For direct hosted search through Ollama Cloud:

```json5
{
  models: {
    providers: {
      ollama: {
        baseUrl: "https://ollama.com",
        apiKey: "OLLAMA_API_KEY",
        api: "ollama",
        models: [{ id: "kimi-k2.5:cloud", name: "kimi-k2.5:cloud", input: ["text"] }],
      },
    },
  },
  tools: {
    web: {
      search: { provider: "ollama" },
    },
  },
}
```

For a signed-in local daemon, OpenClaw uses the daemon's `/api/experimental/web_search` proxy. For `https://ollama.com`, it calls the hosted `/api/web_search` endpoint directly.
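
The hosted endpoint can be exercised directly to confirm a key works; a sketch that assumes the request body follows Ollama's published web-search API:

```bash
# Direct hosted search; requires a real OLLAMA_API_KEY
curl -s https://ollama.com/api/web_search \
  -H "Authorization: Bearer $OLLAMA_API_KEY" \
  -d '{"query": "openclaw ollama"}'
```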

    note

    For the full setup and behavior details, see [Ollama Web Search](/tools/ollama-search).

## Advanced configuration

## Troubleshooting

    note

    More help: [Troubleshooting](/help/troubleshooting) and [FAQ](/help/faq).

## Related

- **Model providers**: overview of all providers, model refs, and failover behavior.
- **Model selection**: how to choose and configure models.
- **Ollama Web Search**: full setup and behavior details for Ollama-powered web search.
- **Configuration**: full config reference.
