
    Last sync: 01/05/2026 07:00:04

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions.

OpenClaw Docs

v2.4.0 Production

Technical reference for the OpenClaw framework.

    Memory configuration reference

    This page lists every configuration knob for OpenClaw memory search. For conceptual overviews, see:

- Memory overview — how memory works.
- Builtin engine — default SQLite backend.
- QMD engine — local-first sidecar.
- Memory search — search pipeline and tuning.
- Active memory — memory sub-agent for interactive sessions.

All memory search settings live under `agents.defaults.memorySearch` in `openclaw.json` unless noted otherwise.

**Note:** If you are looking for the **active memory** feature toggle and sub-agent config, that lives under `plugins.entries.active-memory` instead of `memorySearch`.

    Active memory uses a two-gate model:

1. The plugin must be enabled and target the current agent ID.
2. The request must come from an eligible interactive, persistent chat session.

    See Active Memory for the activation model, plugin-owned config, transcript persistence, and safe rollout pattern.
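As a rough sketch of where that toggle lives (the `enabled` field and the exact config shape here are assumptions, not confirmed by this page; the Active Memory page owns the authoritative plugin config):

```json5
{
  plugins: {
    entries: {
      // Hypothetical shape: the field names below are illustrative assumptions.
      "active-memory": {
        enabled: true, // gate 1: plugin enabled and targeting this agent
        // plugin-owned config (agent targeting, rollout) lives here
      },
    },
  },
}
```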


    Provider selection

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `provider` | `string` | auto-detected | Embedding adapter ID such as `bedrock`, `deepinfra`, `gemini`, `github-copilot`, `local`, `mistral`, `ollama`, `openai`, or `voyage`; may also be a configured `models.providers.<id>` whose `api` points at one of those adapters |
| `model` | `string` | provider default | Embedding model name |
| `fallback` | `string` | `"none"` | Fallback adapter ID when the primary fails |
| `enabled` | `boolean` | `true` | Enable or disable memory search |
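For example, pinning the provider and model explicitly (the model name is illustrative; any adapter ID from the table works):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        enabled: true,
        provider: "openai",              // one of the adapter IDs above
        model: "text-embedding-3-small", // illustrative embedding model
        fallback: "none",                // no secondary adapter
      },
    },
  },
}
```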

    Auto-detection order

When `provider` is not set, OpenClaw selects the first available of:

1. `local` — selected if `memorySearch.local.modelPath` is configured and the file exists.
2. `github-copilot` — selected if a GitHub Copilot token can be resolved (env var or auth profile).
3. `openai` — selected if an OpenAI key can be resolved.
4. `gemini` — selected if a Gemini key can be resolved.
5. `voyage` — selected if a Voyage key can be resolved.
6. `mistral` — selected if a Mistral key can be resolved.
7. `deepinfra` — selected if a DeepInfra key can be resolved.
8. `bedrock` — selected if the AWS SDK credential chain resolves (instance role, access keys, profile, SSO, web identity, or shared config).

`ollama` is supported but not auto-detected (set it explicitly).
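Since `ollama` is never auto-detected, opting into it is an explicit setting (the model name is illustrative):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "ollama",            // must be set explicitly
        model: "qwen3-embedding:0.6b", // illustrative local embedding model
      },
    },
  },
}
```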

    Custom provider ids

`memorySearch.provider` can point at a custom `models.providers.<id>` entry. OpenClaw resolves that provider's `api` owner for the embedding adapter while preserving the custom provider ID for endpoint, auth, and model-prefix handling. This lets multi-GPU or multi-host setups dedicate memory embeddings to a specific local endpoint:

```json5
{
  models: {
    providers: {
      "ollama-5080": {
        api: "ollama",
        baseUrl: "http://gpu-box.local:11435",
        apiKey: "ollama-local",
        models: [{ id: "qwen3-embedding:0.6b" }],
      },
    },
  },
  agents: {
    defaults: {
      memorySearch: {
        provider: "ollama-5080",
        model: "qwen3-embedding:0.6b",
      },
    },
  },
}
```

    API key resolution

    Remote embeddings require an API key. Bedrock uses the AWS SDK default credential chain instead (instance roles, SSO, access keys).

| Provider | Env var | Config key |
| --- | --- | --- |
| Bedrock | AWS credential chain | No API key needed |
| DeepInfra | `DEEPINFRA_API_KEY` | `models.providers.deepinfra.apiKey` |
| Gemini | `GEMINI_API_KEY` | `models.providers.google.apiKey` |
| GitHub Copilot | `COPILOT_GITHUB_TOKEN`, `GH_TOKEN`, `GITHUB_TOKEN` | Auth profile via device login |
| Mistral | `MISTRAL_API_KEY` | `models.providers.mistral.apiKey` |
| Ollama | `OLLAMA_API_KEY` (placeholder) | -- |
| OpenAI | `OPENAI_API_KEY` | `models.providers.openai.apiKey` |
| Voyage | `VOYAGE_API_KEY` | `models.providers.voyage.apiKey` |

**Note:** Codex OAuth covers chat/completions only and does not satisfy embedding requests.

    Remote endpoint config

    For custom OpenAI-compatible endpoints or overriding provider defaults:

The `remote` block lets you set a custom API base URL, override the API key, and add extra HTTP headers (merged with provider defaults):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai",
        model: "text-embedding-3-small",
        remote: {
          baseUrl: "https://api.example.com/v1/",
          apiKey: "YOUR_KEY",
        },
      },
    },
  },
}
```

    Provider-specific config

    Inline embedding timeout

    Override the timeout for inline embedding batches during memory indexing.

When unset, the provider default applies: 600 seconds for local/self-hosted providers such as `local`, `ollama`, and `lmstudio`, and 120 seconds for hosted providers. Increase this when local CPU-bound embedding batches are healthy but slow.
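A sketch of raising the timeout for a slow local host. The key name is an assumption taken from the Batch indexing section of this page, which names `sync.embeddingBatchTimeoutSeconds` as the inline embedding timeout:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        sync: {
          // Assumption: this is the inline embedding timeout key described
          // under "Batch indexing" below; 900s suits a CPU-bound local host.
          embeddingBatchTimeoutSeconds: 900,
        },
      },
    },
  },
}
```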


    Hybrid search config

All under `memorySearch.query.hybrid`:

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | `boolean` | `true` | Enable hybrid BM25 + vector search |
| `vectorWeight` | `number` | `0.7` | Weight for vector scores (0-1) |
| `textWeight` | `number` | `0.3` | Weight for BM25 scores (0-1) |
| `candidateMultiplier` | `number` | `4` | Candidate pool size multiplier |
| `mmr.enabled` | `boolean` | `false` | Enable MMR re-ranking |
| `mmr.lambda` | `number` | `0.7` | 0 = max diversity, 1 = max relevance |
| `temporalDecay.enabled` | `boolean` | `false` | Enable recency boost |
| `temporalDecay.halfLifeDays` | `number` | `30` | Score halves every N days |

Evergreen files (`MEMORY.md`, non-dated files in `memory/`) are never decayed.

    Full example

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        query: {
          hybrid: {
            vectorWeight: 0.7,
            textWeight: 0.3,
            mmr: { enabled: true, lambda: 0.7 },
            temporalDecay: { enabled: true, halfLifeDays: 30 },
          },
        },
      },
    },
  },
}
```

    Additional memory paths

| Key | Type | Description |
| --- | --- | --- |
| `extraPaths` | `string[]` | Additional directories or files to index |

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        extraPaths: ["../team-docs", "/srv/shared-notes"],
      },
    },
  },
}
```

Paths can be absolute or workspace-relative. Directories are scanned recursively for `.md` files. Symlink handling depends on the active backend: the builtin engine ignores symlinks, while QMD follows the underlying QMD scanner behavior.

For agent-scoped cross-agent transcript search, use `agents.list[].memorySearch.qmd.extraCollections` instead of `memory.qmd.paths`. Those extra collections follow the same `{ path, name, pattern? }` shape, but they are merged per agent and can preserve explicit shared names when the path points outside the current workspace. If the same resolved path appears in both `memory.qmd.paths` and `memorySearch.qmd.extraCollections`, QMD keeps the first entry and skips the duplicate.


    Multimodal memory (Gemini)

    Index images and audio alongside Markdown using Gemini Embedding 2:

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `multimodal.enabled` | `boolean` | `false` | Enable multimodal indexing |
| `multimodal.modalities` | `string[]` | -- | `["image"]`, `["audio"]`, or `["all"]` |
| `multimodal.maxFileBytes` | `number` | `10000000` | Max file size for indexing |

**Note:** Only applies to files in `extraPaths`. Default memory roots stay Markdown-only. Requires `gemini-embedding-2-preview`. `fallback` must be `"none"`.

Supported formats: `.jpg`, `.jpeg`, `.png`, `.webp`, `.gif`, `.heic`, `.heif` (images); `.mp3`, `.wav`, `.ogg`, `.opus`, `.m4a`, `.aac`, `.flac` (audio).
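Putting the knobs above together (the `extraPaths` entry is an illustrative path; provider, model, and `fallback: "none"` follow the requirements in the note above):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "gemini",
        model: "gemini-embedding-2-preview", // required for multimodal
        fallback: "none",                    // must stay "none"
        extraPaths: ["~/screenshots"],       // multimodal only applies to extraPaths
        multimodal: {
          enabled: true,
          modalities: ["image"],
          maxFileBytes: 10000000,
        },
      },
    },
  },
}
```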


    Embedding cache

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `cache.enabled` | `boolean` | `false` | Cache chunk embeddings in SQLite |
| `cache.maxEntries` | `number` | `50000` | Max cached embeddings |

    Prevents re-embedding unchanged text during reindex or transcript updates.
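Enabling the cache is a two-key change:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        cache: {
          enabled: true,
          maxEntries: 50000, // default shown explicitly for clarity
        },
      },
    },
  },
}
```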


    Batch indexing

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `remote.nonBatchConcurrency` | `number` | `4` | Parallel inline embeddings |
| `remote.batch.enabled` | `boolean` | `false` | Enable batch embedding API |
| `remote.batch.concurrency` | `number` | `2` | Parallel batch jobs |
| `remote.batch.wait` | `boolean` | `true` | Wait for batch completion |
| `remote.batch.pollIntervalMs` | `number` | -- | Poll interval |
| `remote.batch.timeoutMinutes` | `number` | -- | Batch timeout |

Available for `openai`, `gemini`, and `voyage`. OpenAI batch is typically fastest and cheapest for large backfills.

`remote.nonBatchConcurrency` controls inline embedding calls used by local/self-hosted providers, and by hosted providers when provider batch APIs are not active. Ollama defaults to `1` for non-batch indexing to avoid overwhelming smaller local hosts; set a higher value on larger machines.

This is separate from `sync.embeddingBatchTimeoutSeconds`, which controls the timeout for inline embedding calls.
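A large-backfill setup using the keys above (OpenAI chosen per the recommendation; the `concurrency` and `wait` values repeat the defaults for visibility):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai",
        remote: {
          batch: {
            enabled: true,   // use the provider batch embedding API
            concurrency: 2,  // parallel batch jobs
            wait: true,      // block until batches complete
          },
        },
      },
    },
  },
}
```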


    Session memory search (experimental)

Index session transcripts and surface them via `memory_search`:

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `experimental.sessionMemory` | `boolean` | `false` | Enable session indexing |
| `sources` | `string[]` | `["memory"]` | Add `"sessions"` to include transcripts |
| `sync.sessions.deltaBytes` | `number` | `100000` | Byte threshold for reindex |
| `sync.sessions.deltaMessages` | `number` | `50` | Message threshold for reindex |

**Warning:** Session indexing is opt-in and runs asynchronously. Results can be slightly stale. Session logs live on disk, so treat filesystem access as the trust boundary.
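Opting in requires both the experimental flag and the extra source:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        experimental: { sessionMemory: true },
        sources: ["memory", "sessions"], // "sessions" adds transcripts
      },
    },
  },
}
```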

    SQLite vector acceleration (sqlite-vec)

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `store.vector.enabled` | `boolean` | `true` | Use sqlite-vec for vector queries |
| `store.vector.extensionPath` | `string` | bundled | Override sqlite-vec path |

    When sqlite-vec is unavailable, OpenClaw falls back to in-process cosine similarity automatically.
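To point at a custom sqlite-vec build instead of the bundled one (the path and file name here are illustrative, not a documented default):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        store: {
          vector: {
            enabled: true,
            extensionPath: "/usr/local/lib/vec0.so", // illustrative path to your build
          },
        },
      },
    },
  },
}
```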


    Index storage

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `store.path` | `string` | `~/.openclaw/memory/{agentId}.sqlite` | Index location (supports `{agentId}` token) |
| `store.fts.tokenizer` | `string` | `unicode61` | FTS5 tokenizer (`unicode61` or `trigram`) |
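For example, relocating the index and switching the FTS5 tokenizer (the target directory is illustrative):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        store: {
          path: "/data/openclaw/{agentId}.sqlite", // {agentId} expands per agent
          fts: { tokenizer: "trigram" },           // enables substring matching
        },
      },
    },
  },
}
```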

    QMD backend config

Set `memory.backend = "qmd"` to enable. All QMD settings live under `memory.qmd`:

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `command` | `string` | `qmd` | QMD executable path; set an absolute path when the service `PATH` differs from your shell |
| `searchMode` | `string` | `search` | Search command: `search`, `vsearch`, `query` |
| `includeDefaultMemory` | `boolean` | `true` | Auto-index `MEMORY.md` + `memory/**/*.md` |
| `paths[]` | `array` | -- | Extra paths: `{ name, path, pattern? }` |
| `sessions.enabled` | `boolean` | `false` | Index session transcripts |
| `sessions.retentionDays` | `number` | -- | Transcript retention |
| `sessions.exportDir` | `string` | -- | Export directory |

`searchMode: "search"` is lexical/BM25-only. OpenClaw does not run semantic vector readiness probes or QMD embedding maintenance for that mode, including during `memory status --deep`; `vsearch` and `query` continue to require QMD vector readiness and embeddings.

    OpenClaw prefers current QMD collection and MCP query shapes, but keeps older QMD releases working by trying compatible collection pattern flags and older MCP tool names when needed. When QMD advertises support for multiple collection filters, same-source collections are searched with one QMD process; older QMD builds keep the per-collection compatibility path. Same-source means durable memory collections are grouped together, while session transcript collections remain a separate group so source diversification still has both inputs.

**Note:** QMD model overrides stay on the QMD side, not in OpenClaw config. If you need to override QMD's models globally, set environment variables such as `QMD_EMBED_MODEL`, `QMD_RERANK_MODEL`, and `QMD_GENERATE_MODEL` in the gateway runtime environment.

    QMD boot refreshes use a one-shot subprocess path during gateway startup. The long-lived QMD manager still owns the regular file watcher and interval timers when memory search is opened for interactive use.

    Full QMD example

```json5
{
  memory: {
    backend: "qmd",
    citations: "auto",
    qmd: {
      includeDefaultMemory: true,
      update: { interval: "5m", debounceMs: 15000 },
      limits: { maxResults: 6, timeoutMs: 4000 },
      scope: {
        default: "deny",
        rules: [{ action: "allow", match: { chatType: "direct" } }],
      },
      paths: [{ name: "docs", path: "~/notes", pattern: "**/*.md" }],
    },
  },
}
```

    Dreaming

Dreaming is configured under `plugins.entries.memory-core.config.dreaming`, not under `agents.defaults.memorySearch`.

    Dreaming runs as one scheduled sweep and uses internal light/deep/REM phases as an implementation detail.

    For conceptual behavior and slash commands, see Dreaming.

    User settings

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | `boolean` | `false` | Enable or disable dreaming entirely |
| `frequency` | `string` | `0 3 * * *` | Optional cron cadence for the full dreaming sweep |
| `model` | `string` | default model | Optional Dream Diary subagent model override |

    Example

```json5
{
  plugins: {
    entries: {
      "memory-core": {
        subagent: {
          allowModelOverride: true,
          allowedModels: ["anthropic/claude-sonnet-4-6"],
        },
        config: {
          dreaming: {
            enabled: true,
            frequency: "0 3 * * *",
            model: "anthropic/claude-sonnet-4-6",
          },
        },
      },
    },
  },
}
```

**Note:**

* Dreaming writes machine state to `memory/.dreams/`.
* Dreaming writes human-readable narrative output to `DREAMS.md` (or an existing `dreams.md`).
* `dreaming.model` uses the existing plugin subagent trust gate; set `plugins.entries.memory-core.subagent.allowModelOverride: true` before enabling it.
* Dream Diary retries once with the session default model when the configured model is unavailable. Trust or allowlist failures are logged and are not silently retried.
* The light/deep/REM phase policy and thresholds are internal behavior, not user-facing config.

    Related

    • Configuration reference
    • Memory overview
    • Memory search

    © 2024 TaskFlow Mirror
