# memory-lancedb

Use it when you want a local vector database for memory, need an OpenAI-compatible embedding endpoint, or want to keep a memory database outside the default built-in memory store.
```json5
{
  plugins: {
    slots: {
      memory: "memory-lancedb",
    },
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: {
            provider: "openai",
            model: "text-embedding-3-small",
          },
          autoRecall: true,
          autoCapture: false,
        },
      },
    },
  },
}
```
Restart the Gateway after changing plugin config:
```bash
openclaw gateway restart
```
Then verify the plugin is loaded:
```bash
openclaw plugins list
```
`memory-lancedb` replaces the default `memory-core` store in the memory slot. If the configured `embedding.provider` has no explicit `embedding.apiKey`, credentials are resolved from `models.providers.<provider>.apiKey`, so a config without an embedding key still works:

```json5
{
  plugins: {
    slots: {
      memory: "memory-lancedb",
    },
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: {
            provider: "openai",
            model: "text-embedding-3-small",
          },
          autoRecall: true,
        },
      },
    },
  },
}
```
This path works with provider auth profiles that expose embedding credentials. For example, GitHub Copilot can be used when the Copilot profile/plan supports embeddings:
```json5
{
  plugins: {
    slots: {
      memory: "memory-lancedb",
    },
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: {
            provider: "github-copilot",
            model: "text-embedding-3-small",
          },
        },
      },
    },
  },
}
```
OpenAI Codex / ChatGPT OAuth (`openai-codex`) credentials do not cover the embeddings API, so set `OPENAI_API_KEY` or `models.providers.openai.apiKey` explicitly.

For Ollama embeddings, prefer the bundled Ollama embedding provider. It uses the native Ollama `/api/embed` endpoint:

```json5
{
  plugins: {
    slots: {
      memory: "memory-lancedb",
    },
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: {
            provider: "ollama",
            baseUrl: "http://127.0.0.1:11434",
            model: "mxbai-embed-large",
            dimensions: 1024,
          },
          recallMaxChars: 400,
          autoRecall: true,
          autoCapture: false,
        },
      },
    },
  },
}
```
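As a quick sanity check on the wire format: the native `/api/embed` endpoint returns vectors under an `embeddings` array rather than the OpenAI-style `data[].embedding`. A minimal sketch, using a truncated sample response (a real `mxbai-embed-large` vector has 1024 entries):

```python
import json

# Truncated sample of a native Ollama /api/embed response.
sample = json.loads('{"model": "mxbai-embed-large", "embeddings": [[0.01, -0.02, 0.03]]}')

vector = sample["embeddings"][0]
dims = len(vector)
print(dims)  # 3 for this truncated sample; expect 1024 from the real model
```

In practice you would pipe the output of a `curl http://127.0.0.1:11434/api/embed` call into a check like this instead of the inline sample.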
Set `dimensions` to match the embedding model's output size (`text-embedding-3-small` returns 1536-dimensional vectors, `text-embedding-3-large` 3072). For small local embedding models, lower `recallMaxChars` so recall queries stay within the model's context window.

Some OpenAI-compatible embedding providers reject the `encoding_format` request field and return embeddings only as plain `number[]` arrays; `memory-lancedb` does not depend on `encoding_format`, so these providers still work.

If you have a raw OpenAI-compatible embeddings endpoint that does not have a bundled provider adapter, omit `embedding.provider` (it defaults to the `openai` wire format) and set `embedding.apiKey` and `embedding.baseUrl` directly. Set `embedding.dimensions` to the model's output size (`embedding-3` returns 2048-dimensional vectors):

```json5
{
  plugins: {
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: {
            apiKey: "${ZHIPU_API_KEY}",
            baseUrl: "https://open.bigmodel.cn/api/paas/v4",
            model: "embedding-3",
            dimensions: 2048,
          },
        },
      },
    },
  },
}
```
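For such raw endpoints the plugin speaks the standard OpenAI embeddings wire format, and a response whose vector length disagrees with `embedding.dimensions` is the usual failure mode. A sketch of that check (truncated sample vector; variable names are illustrative, not the plugin's):

```python
import json

dimensions = 2048  # mirrors embedding.dimensions in the config above

# Standard OpenAI-style embeddings response (vector truncated for illustration).
response = json.loads('{"data": [{"embedding": [0.1, 0.2, 0.3]}], "model": "embedding-3"}')

vector = response["data"][0]["embedding"]
mismatch = len(vector) != dimensions
print(mismatch)  # True here: 3-entry sample vs 2048 expected
```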
Input-size limits in `memory-lancedb`:

| Setting | Default | Range | Applies to |
|---|---|---|---|
| `recallMaxChars` | 1000 | 100-10000 | text sent to the embedding API for recall |
| `captureMaxChars` | 500 | 100-10000 | assistant message length eligible for capture |
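Where the two limits act can be sketched as follows; this is illustrative logic, not the plugin's actual implementation:

```python
RECALL_MAX_CHARS = 1000    # default; range 100-10000
CAPTURE_MAX_CHARS = 500    # default; range 100-10000

def recall_query(text: str) -> str:
    """Truncate query text before it is sent to the embedding API."""
    return text[:RECALL_MAX_CHARS]

def eligible_for_capture(message: str) -> bool:
    """Only assistant messages up to the cap are considered for capture."""
    return len(message) <= CAPTURE_MAX_CHARS

truncated = recall_query("x" * 5000)
print(len(truncated))                                   # 1000
print(eligible_for_capture("x" * 5000))                 # False
print(eligible_for_capture("user prefers dark mode"))   # True
```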
`recallMaxChars` caps the query text embedded for `memory_recall`, `memory_forget`, and `openclaw ltm search`. `captureMaxChars` caps which assistant messages are eligible for automatic capture. When `memory-lancedb` is active, the `openclaw ltm` commands operate on its store:

```bash
openclaw ltm list
openclaw ltm search "project preferences"
openclaw ltm stats
```
The plugin also extends `openclaw memory` with a `query` subcommand:

```bash
openclaw memory query --cols id,text,createdAt --limit 20
openclaw memory query --filter "category = 'preference'" --order-by createdAt:desc
```
- `--cols <columns>`: columns to display (`id`, `text`, `importance`, `category`, `createdAt`)
- `--filter <condition>`: SQL-like filter expression
- `--limit <n>`: maximum rows to return (default `10`)
- `--order-by <column>:<asc|desc>`: sort column and direction

Agents also get LanceDB memory tools from the active memory plugin:
`memory_recall`, `memory_store`, and `memory_forget`.

By default, LanceDB data lives under `~/.openclaw/memory/lancedb`; override the location with `dbPath`:

```json5
{
  plugins: {
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          dbPath: "~/.openclaw/memory/lancedb",
          embedding: {
            apiKey: "${OPENAI_API_KEY}",
            model: "text-embedding-3-small",
          },
        },
      },
    },
  },
}
```
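Under the hood these memory tools reduce to embedding the query and ranking stored vectors by similarity. A toy sketch of that retrieval step with hand-made 2-D vectors (not the plugin's implementation, which uses LanceDB's native search):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy table: (vector, text) rows.
store = [
    ([1.0, 0.0], "user prefers dark mode"),
    ([0.0, 1.0], "project uses pnpm"),
]

def recall(query_vec, k=1):
    """Return the texts of the k stored rows most similar to the query."""
    ranked = sorted(store, key=lambda row: cosine(query_vec, row[0]), reverse=True)
    return [text for _vec, text in ranked[:k]]

results = recall([0.9, 0.1])
print(results)  # ['user prefers dark mode']
```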
For remote storage backends such as S3, set `storageOptions`; values support `${ENV_VAR}` expansion:

```json5
{
  plugins: {
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          dbPath: "s3://memory-bucket/openclaw",
          storageOptions: {
            access_key: "${AWS_ACCESS_KEY_ID}",
            secret_key: "${AWS_SECRET_ACCESS_KEY}",
            endpoint: "${AWS_ENDPOINT_URL}",
          },
          embedding: {
            apiKey: "${OPENAI_API_KEY}",
            model: "text-embedding-3-small",
          },
        },
      },
    },
  },
}
```
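The `${ENV_VAR}` placeholders are resolved from the environment when the config is loaded; a minimal sketch of that expansion (illustrative, not OpenClaw's loader):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with values from the environment."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["AWS_ACCESS_KEY_ID"] = "AKIAEXAMPLE"
expanded = expand_env("${AWS_ACCESS_KEY_ID}")
print(expanded)  # AKIAEXAMPLE
```

Keeping credentials in environment variables this way avoids committing secrets to the config file itself.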
`memory-lancedb` bundles the `@lancedb/lancedb` native module. If an older install logs a missing `dist/package.json`, update the plugin so the bundled `@lancedb/lancedb` resolves again. If the plugin logs that LanceDB is unavailable on your platform (for example `darwin-x64`), reinstall `memory-lancedb` so the platform-specific binary is fetched.

If `memory-lancedb` recall fails with a 400 error, this usually means the embedding model rejected the recall query:
```text
memory-lancedb: recall failed: Error: 400 the input length exceeds the context length
```
Set a lower `recallMaxChars`:

```json5
{
  plugins: {
    entries: {
      "memory-lancedb": {
        config: {
          recallMaxChars: 400,
        },
      },
    },
  },
}
```
For Ollama, also verify the embedding server is reachable from the Gateway host:
```bash
curl http://127.0.0.1:11434/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model":"mxbai-embed-large","input":"hello"}'
```
Without `dimensions`, the stored vectors may not match the model's output size, especially after switching embedding models; set `embedding.dimensions` explicitly. Check that `plugins.slots.memory` is set to `memory-lancedb`, then verify:

```bash
openclaw ltm stats
openclaw ltm search "recent preference"
```
If `autoCapture` is disabled, memories are stored only when the agent calls the `memory_store` tool explicitly; enable `autoCapture` to store them automatically.