Technical reference for the OpenClaw framework.
Every model has a context window: the maximum number of tokens it can process. When a conversation approaches that limit, OpenClaw compacts older messages into a summary so the chat can continue.
When OpenClaw splits history into compaction chunks, it keeps assistant tool calls paired with their matching `toolResult` blocks.

The full conversation history stays on disk. Compaction only changes what the model sees on the next turn.
Auto-compaction is on by default. It runs when the session nears the context limit, or when the model returns a context-overflow error (in which case OpenClaw compacts and retries).
You will see:

- `🧹 Auto-compaction complete` in the chat when it finishes
- `🧹 Compactions: <count>` in `/status`

Type `/compact` to compact manually, optionally with instructions, for example `/compact Focus on the API design decisions`.
When compaction runs, the most recent messages are kept verbatim; how much is kept is controlled by `agents.defaults.compaction.keepRecentTokens`.

Configure compaction under `agents.defaults.compaction` in `openclaw.json`. By default, compaction uses the agent's primary model. To use a different one, set `agents.defaults.compaction.model` to a `provider/model-id` string:

```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "model": "openrouter/anthropic/claude-sonnet-4-6"
      }
    }
  }
}
```
This works with local models too, for example a second Ollama model dedicated to summarization:
```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "model": "ollama/llama3.1:8b"
      }
    }
  }
}
```
When unset, compaction uses the agent's primary model.
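Putting these settings together, a minimal compaction block might look like the sketch below. The `keepRecentTokens` value is purely illustrative, not a recommended default:

```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "model": "ollama/llama3.1:8b",
        "keepRecentTokens": 4000
      }
    }
  }
}
```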
Compaction summarization preserves opaque identifiers by default (`identifierPolicy: "strict"`). Set `identifierPolicy: "off"` to disable preservation, or `identifierPolicy: "custom"` together with `identifierInstructions` to supply your own rules.

Two settings bound transcript size: `agents.defaults.compaction.maxActiveTranscriptBytes` caps the active transcript, and `agents.defaults.compaction.truncateAfterCompaction` controls truncation after compaction runs. Pre-compaction checkpoints are retained only while they stay below OpenClaw's checkpoint size cap; oversized active transcripts still compact, but OpenClaw skips the large debug snapshot instead of doubling disk usage.
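A custom identifier policy could be configured as in this sketch (the `identifierInstructions` wording here is illustrative, not a prescribed format):

```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "identifierPolicy": "custom",
        "identifierInstructions": "Preserve ticket IDs and commit hashes verbatim."
      }
    }
  }
}
```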
By default, compaction runs silently. Set `notifyUser` to surface a notice in the chat:

```json5
{
  agents: {
    defaults: {
      compaction: {
        notifyUser: true,
      },
    },
  },
}
```
Before compaction, OpenClaw can run a silent memory flush turn to store durable notes to disk. Set `agents.defaults.compaction.memoryFlush.model` to choose which model performs the flush:

```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "memoryFlush": {
          "model": "ollama/qwen3:8b"
        }
      }
    }
  }
}
```
The memory-flush model override is exact and does not inherit the active session fallback chain. See Memory for details and config.
Plugins can register a custom compaction provider via `registerCompactionProvider()`. To use a registered provider, set its id in your config:

```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "provider": "my-provider"
      }
    }
  }
}
```
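A provider implementation might be sketched as below. The `Message` shape, the `CompactionProvider` interface, and the `registerCompactionProvider()` call site are assumptions for illustration; consult the OpenClaw plugin reference for the real types:

```typescript
// Hypothetical sketch of a compaction provider plugin. The exact API
// surface is an assumption; only the provider id must match your config.
type Message = { role: "user" | "assistant" | "tool"; content: string };

interface CompactionProvider {
  id: string;
  // Receives the older messages chosen for compaction and returns the
  // summary text that replaces them in the model's view of the chat.
  summarize(messages: Message[]): string;
}

const myProvider: CompactionProvider = {
  id: "my-provider",
  summarize(messages) {
    // Keep a one-line outline per message, truncated to 80 characters.
    const lines = messages.map((m) => `${m.role}: ${m.content.slice(0, 80)}`);
    return `Summary of ${messages.length} messages:\n${lines.join("\n")}`;
  },
};

// In a real plugin you would then register it, e.g.:
// registerCompactionProvider(myProvider);
```

Setting `"provider": "my-provider"` in the config above would then route compaction through this summarizer instead of the model-based one.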
Setting a `provider` overrides the built-in summarizer; a `mode: "safeguard"` option is also available (see the Session management deep dive).

| | Compaction | Pruning |
|---|---|---|
| What it does | Summarizes older conversation | Trims old tool results |
| Saved? | Yes (in session transcript) | No (in-memory only, per request) |
| Scope | Entire conversation | Tool results only |
Session pruning is a lighter-weight complement that trims tool output without summarizing.
Compacting too often? The model's context window may be small, or tool outputs may be large. Try enabling session pruning.
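If you do enable pruning, the config might resemble this sketch. The `pruning` key and its fields are hypothetical names for illustration; check the Session management docs for the actual schema:

```json
{
  "agents": {
    "defaults": {
      "pruning": {
        "enabled": true
      }
    }
  }
}
```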
Context feels stale after compaction? Use `/compact Focus on <topic>` to steer the next summary. Need a clean slate? Start a fresh session with `/new`.

For advanced configuration (reserve tokens, identifier preservation, custom context engines, OpenAI server-side compaction), see the Session management deep dive.
Plugins can also observe the `before_compaction` and `after_compaction` events.