Technical reference for the OpenClaw framework.
If you have a GitHub Copilot subscription or an OpenAI, Gemini, Voyage, or Mistral API key configured, `memory_search` works automatically. To set a provider explicitly:
```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai", // or "gemini", "local", "ollama", etc.
      },
    },
  },
}
```
For multi-endpoint setups, point `provider` at a custom entry under `models.providers.<id>` (for example, an `ollama-5080` provider with `api: "ollama"`).

For local embeddings with no API key, set `provider: "local"`; this runs a GGUF model via `node-llama-cpp` (run `openclaw doctor --fix` if the local runtime is missing).

Some OpenAI-compatible embedding endpoints require asymmetric labels such as `input_type: "query"` for queries and `input_type: "document"` (or `"passage"`) for indexed text; set these via `memorySearch.queryInputType` and `memorySearch.documentInputType`.

| Provider | ID | Needs API key | Notes |
|---|---|---|---|
| Bedrock | `bedrock` | No | Auto-detected when the AWS credential chain resolves |
| Gemini | `gemini` | Yes | Supports image/audio indexing |
| GitHub Copilot | `github-copilot` | No | Auto-detected, uses Copilot subscription |
| Local | `local` | No | GGUF model, ~0.6 GB download |
| Mistral | `mistral` | Yes | Auto-detected |
| Ollama | `ollama` | No | Local, must set explicitly |
| OpenAI | `openai` | Yes | Auto-detected, fast |
| Voyage | `voyage` | Yes | Auto-detected |
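Combining these options, here is a hedged sketch of an explicit multi-endpoint setup. The `baseUrl` field name and the port are illustrative assumptions, not confirmed by this page; check the configuration reference for exact field names:

```json5
{
  models: {
    providers: {
      // Hypothetical self-hosted Ollama endpoint; field names are assumptions.
      "ollama-5080": {
        api: "ollama",
        baseUrl: "http://127.0.0.1:5080",
      },
    },
  },
  agents: {
    defaults: {
      memorySearch: {
        provider: "ollama-5080",
        // Asymmetric labels for endpoints that distinguish query vs. document input.
        queryInputType: "query",
        documentInputType: "document",
      },
    },
  },
}
```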
OpenClaw runs two retrieval paths in parallel and merges the results:
```mermaid
flowchart LR
  Q["Query"] --> E["Embedding"]
  Q --> T["Tokenize"]
  E --> VS["Vector Search"]
  T --> BM["BM25 Search"]
  VS --> M["Weighted Merge"]
  BM --> M
  M --> R["Top Results"]
```
If only one path is available (no embeddings or no FTS), the other runs alone.
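The weighted-merge step can be sketched as follows. This is a minimal illustration, not OpenClaw's actual implementation: the weights, the max-normalization, and the function name are all assumptions.

```python
def hybrid_merge(vector_hits, bm25_hits, w_vec=0.7, w_bm25=0.3, k=5):
    """Merge two ranked (doc, score) lists by weighted, normalized score.

    Illustrative sketch: weights and normalization are assumptions.
    """
    def normalize(hits):
        # Scale scores into [0, 1] so vector and BM25 scores are comparable.
        if not hits:
            return {}
        top = max(score for _, score in hits)
        return {doc: score / top for doc, score in hits} if top else {}

    scores = {}
    for doc, s in normalize(vector_hits).items():
        scores[doc] = scores.get(doc, 0.0) + w_vec * s
    for doc, s in normalize(bm25_hits).items():
        scores[doc] = scores.get(doc, 0.0) + w_bm25 * s
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
```

A document that appears in both lists accumulates weight from each path, which is why hybrid retrieval tends to surface it above single-path matches.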
When embeddings are unavailable, OpenClaw still applies lexical ranking over FTS results instead of falling back to raw exact-match ordering. This degraded mode boosts chunks with stronger query-term coverage and relevant file paths, which keeps recall useful even without `sqlite-vec`.

Two optional features help when you have a large note history:
- **Temporal decay**: Old notes gradually lose ranking weight so recent information surfaces first. With the default half-life of 30 days, a note from last month scores at about 50% of its original weight. Evergreen files like `MEMORY.md` are exempt from decay.
- **MMR (Maximal Marginal Relevance)**: Reduces redundant results. If five notes all mention the same router config, MMR ensures the top results cover different topics instead of repeating.
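The 30-day half-life corresponds to simple exponential decay; a note's weight halves for every half-life that passes. A minimal sketch (the function name is illustrative):

```python
def decayed_weight(score, age_days, half_life_days=30.0):
    # Exponential decay: the score halves every `half_life_days`.
    # At age 30 days (the default half-life), weight is 50% of the original.
    return score * 0.5 ** (age_days / half_life_days)
```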
```json5
{
  agents: {
    defaults: {
      memorySearch: {
        query: {
          hybrid: {
            mmr: { enabled: true },
            temporalDecay: { enabled: true },
          },
        },
      },
    },
  },
}
```
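Greedy MMR re-ranking can be sketched as below. This is not OpenClaw's implementation: the λ=0.7 trade-off, the function name, and the similarity interface are illustrative assumptions.

```python
def mmr_rerank(candidates, similarity, lam=0.7, k=3):
    """Greedy MMR sketch: trade off relevance against redundancy.

    candidates: list of (doc, relevance) pairs, relevance in [0, 1].
    similarity(a, b): pairwise similarity of two docs, in [0, 1].
    """
    selected = []            # chosen (doc, relevance) pairs
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr_score(cand):
            # Penalize candidates similar to anything already selected.
            redundancy = max(
                (similarity(cand[0], doc) for doc, _ in selected), default=0.0
            )
            return lam * cand[1] - (1 - lam) * redundancy
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return [doc for doc, _ in selected]
```

With five near-duplicate router-config notes, the first one selected suppresses the other four, letting a less relevant but distinct note into the top results.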
With Gemini Embedding 2, you can index images and audio files alongside Markdown. Search queries remain text, but they match against visual and audio content. See the Memory configuration reference for setup.
You can optionally index session transcripts so `memory_search` can recall past conversations; enable this with `memorySearch.experimental.sessionMemory`.

- **No results?** Run `openclaw memory status`, then `openclaw memory index --force` to rebuild the index.
- **Only keyword matches?** Your embedding provider may not be configured. Check `openclaw memory status --deep`.
- **Local embeddings time out?** Local backends (`ollama`, `lmstudio`, `local`) can be slow on first load. Raise `agents.defaults.memorySearch.sync.embeddingBatchTimeoutSeconds` and re-run `openclaw memory index --force`.
- **CJK text not found?** Rebuild the FTS index with `openclaw memory index --force`.