Session pruning trims old tool results from the context before each LLM call. It reduces context bloat from accumulated tool outputs (exec results, file reads, search results) without rewriting normal conversation text.
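The trimming step can be sketched as follows. This is a minimal illustration, not OpenClaw's actual implementation: the `Message` shape, the placeholder text, and the keep-last-N policy are all assumptions.

```typescript
// Hypothetical message shape; illustrative only, not OpenClaw's API.
type Message = { role: "user" | "assistant" | "toolResult"; content: string };

const PLACEHOLDER = "[tool result pruned]";
const KEEP_LAST = 3; // keep the most recent N tool results intact

// Replace all but the last few tool results with a short placeholder,
// leaving normal conversation text untouched.
function pruneToolResults(history: Message[]): Message[] {
  const toolIdxs = history
    .map((m, i) => (m.role === "toolResult" ? i : -1))
    .filter((i) => i >= 0);
  const keep = new Set(toolIdxs.slice(-KEEP_LAST));
  return history.map((m, i) =>
    m.role === "toolResult" && !keep.has(i)
      ? { ...m, content: PLACEHOLDER }
      : m
  );
}
```

Note that the function returns a new array and never mutates the stored history, which matches the per-request nature of pruning described below.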
Long sessions accumulate tool output that inflates the context window. This increases cost and can force compaction sooner than necessary.
Pruning is especially valuable for Anthropic prompt caching. After the cache TTL expires, the next request re-caches the full prompt. Pruning reduces the cache-write size, directly lowering cost.
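A back-of-envelope calculation shows why the cache-write size matters. The price below is an illustrative placeholder, not a quoted rate:

```typescript
// Hypothetical cache-write price, in $ per million tokens (assumption).
const CACHE_WRITE_PER_MTOK = 3.75;

function cacheWriteCost(tokens: number): number {
  return (tokens / 1_000_000) * CACHE_WRITE_PER_MTOK;
}

// If a 200k-token prompt must be re-cached after TTL expiry, and pruning
// removes 80k tokens of stale tool output, the re-cache write shrinks
// from 200k to 120k tokens, cutting that write's cost by 40%.
const before = cacheWriteCost(200_000);
const after = cacheWriteCost(120_000);
```

The saving recurs on every TTL expiry, so it compounds over a long session.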
OpenClaw also builds a separate idempotent replay view for sessions that persist raw image blocks or prompt-hydration media markers in history. In that view, raw media in `user` and `toolResult` messages is replaced with placeholder markers such as:

```
[image data removed - already processed by model]
[media attached: ...]
[Image: source: ...]
media://inbound/...
[media reference removed - already processed by model]
```

OpenClaw auto-enables pruning for Anthropic profiles:
| Profile type | Pruning enabled | Heartbeat |
|---|---|---|
| Anthropic OAuth/token auth (including Claude CLI reuse) | Yes | 1 hour |
| API key | Yes | 30 min |
If you set explicit values, OpenClaw does not override them.
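The defaults table above can be expressed as a small selection function. This is a sketch of the documented behavior; the type and field names are assumptions, not OpenClaw internals.

```typescript
// Illustrative encoding of the defaults table; names are assumptions.
interface PruningDefaults {
  enabled: boolean;
  heartbeat: string;
}

function anthropicPruningDefaults(
  auth: "oauth" | "token" | "api-key"
): PruningDefaults {
  // OAuth/token profiles (including Claude CLI reuse) get a 1-hour
  // heartbeat; API-key profiles get 30 minutes. Pruning is on for both.
  return auth === "api-key"
    ? { enabled: true, heartbeat: "30m" }
    : { enabled: true, heartbeat: "1h" };
}
```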
Pruning is off by default for non-Anthropic providers. To enable:

```json5
{
  agents: {
    defaults: {
      contextPruning: { mode: "cache-ttl", ttl: "5m" },
    },
  },
}
```
To disable, set `mode: "off"`.

| | Pruning | Compaction |
|---|---|---|
| What | Trims tool results | Summarizes conversation |
| Saved? | No (per-request) | Yes (in transcript) |
| Scope | Tool results only | Entire conversation |
They complement each other -- pruning keeps tool output lean between compaction cycles.
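The key distinction in the table, per-request pruning versus persistent compaction, can be sketched in a few lines. Both function shapes and the summary format are illustrative assumptions:

```typescript
type Msg = { role: string; content: string };

// Pruning: computed fresh for each request; the stored transcript
// is left untouched.
function buildRequestContext(transcript: Msg[]): Msg[] {
  return transcript.map((m) =>
    m.role === "toolResult" ? { ...m, content: "[pruned]" } : m
  );
}

// Compaction: rewrites the transcript itself, replacing older turns
// with a summary that is saved going forward.
function compact(transcript: Msg[], keepLast: number): Msg[] {
  const head = transcript.slice(0, -keepLast);
  const summary = {
    role: "system",
    content: `[summary of ${head.length} messages]`,
  };
  return [summary, ...transcript.slice(-keepLast)];
}
```

Between compaction cycles, `buildRequestContext` keeps each request lean; when compaction runs, the trimmed tool output never needs summarizing at all.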
Related configuration keys: `contextPruning.*`