Technical reference for the OpenClaw framework.
OpenClaw tracks tokens, not characters. Tokens are model-specific, but most OpenAI-style models average ~4 characters per token for English text.
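As a rough illustration of that heuristic (a sketch, not OpenClaw's actual tokenizer — real counts are always model-specific):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4 chars/token heuristic for English text."""
    return max(1, round(len(text) / chars_per_token)) if text else 0

# A 400-character English paragraph lands near 100 tokens under this heuristic.
```

Treat the result as an order-of-magnitude estimate only; the surfaces described below report the model's real token counts.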
OpenClaw assembles its own system prompt on every run. It includes:
- Skills, capped by `skills.limits.maxSkillsPromptChars` (per-agent: `agents.list[].skillsLimits.maxSkillsPromptChars`).
- Workspace files: `AGENTS.md`, `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`, `BOOTSTRAP.md`, and `MEMORY.md` (or `memory.md`); `openclaw doctor --fix` can repair common layout issues with `MEMORY.md`.
- Bootstrap content, capped per file by `agents.defaults.bootstrapMaxChars` and overall by `agents.defaults.bootstrapTotalMaxChars`.
- Memory files under `memory/*.md`, refreshed on `/new` and `/reset`.
- Startup context from `agents.defaults.startupContext`.

See the full breakdown in System Prompt.
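The character caps above all follow the same pattern: assemble a section, then clip it to its configured budget. A minimal sketch of that pattern (the function and marker are illustrative, not OpenClaw's internals):

```python
def clip_section(parts: list[str], max_chars: int, marker: str = "\n[...truncated]") -> str:
    """Join prompt parts, then clip to a character budget, appending a truncation marker."""
    joined = "\n\n".join(parts)
    if len(joined) <= max_chars:
        return joined
    return joined[: max(0, max_chars - len(marker))] + marker
```

The same shape applies whether the budget is a skills cap, a bootstrap cap, or a tool-result cap.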
Everything the model receives counts toward the context limit: the system prompt, injected files, skills, tool results, and the conversation history.
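Hypothetical numbers, but the accounting is a plain sum — every component competes for the same window:

```python
# Hypothetical per-component token counts against a 200k-token window.
components = {
    "system_prompt": 6_000,
    "injected_files": 12_000,  # AGENTS.md, memory files, etc.
    "skills": 3_500,
    "tool_results": 18_000,
    "conversation": 45_000,
}
context_limit = 200_000
used = sum(components.values())
print(f"used {used} of {context_limit} ({used / context_limit:.1%})")
```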
Some runtime-heavy surfaces have their own explicit caps:
- `agents.defaults.contextLimits.memoryGetMaxChars`
- `agents.defaults.contextLimits.memoryGetDefaultLines`
- `agents.defaults.contextLimits.toolResultMaxChars`
- `agents.defaults.contextLimits.postCompactionMaxChars`

Per-agent overrides live under `agents.list[].contextLimits`.

For images, OpenClaw downscales transcript/tool image payloads before provider calls. Use `agents.defaults.imageMaxDimensionPx` (default `1200`) to control the maximum dimension.

For a practical breakdown (per injected file, tools, skills, and system prompt size), use `/context list` and `/context detail`.

Use these in chat:
- `/status`
- `/usage off|tokens|full` (per-response usage display, i.e. `responseUsage`)
- `/usage cost`

Other surfaces:

- `/status` and `/usage` in chat
- `openclaw status --usage` and `openclaw channels list` on the CLI (shown as `X% left`)

Usage surfaces normalize common provider-native field aliases before display. For OpenAI-family Responses traffic, that includes both
`input_tokens`/`output_tokens` and `prompt_tokens`/`completion_tokens`.

In `/status` and `/usage`, Responses-style cached input is read from `stats.cached` (alias `cacheRead`); uncached input is `stats.input_tokens - stats.cached` (shown as `stats.input`), and a missing `total_tokens` falls back to `0`. `/status` also reads `session_status`, and `usage.cost` backs the `/usage cost` view.

OpenClaw keeps provider usage accounting separate from the current context snapshot. Provider `usage.total` and `promptTokens` describe what a request consumed; `context.used` describes the current context window, and the two need not match.

Costs are estimated from your model pricing config:
```
models.providers.<provider>.models[].cost
```
These are USD per 1M tokens for `input`, `output`, `cacheRead`, and `cacheWrite`.

Gateway startup also performs an optional background pricing bootstrap for configured model refs that do not already have local pricing. That bootstrap fetches the remote OpenRouter and LiteLLM pricing catalogs. Set `models.pricing.enabled: false` to disable it and rely solely on local `models.providers.*.models[].cost` entries.

Provider prompt caching only applies within the cache TTL window. OpenClaw can optionally run cache-ttl pruning: it prunes the session once the cache TTL has expired, then resets the cache window so subsequent requests can re-use the freshly cached context instead of re-caching the full history. This keeps cache write costs lower when a session goes idle past the TTL.
Configure it in Gateway configuration and see the behavior details in Session pruning.
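Tying the accounting above together, a minimal sketch of alias normalization and per-1M cost estimation (the field aliases follow the list above; the rates are hypothetical, not any provider's real pricing):

```python
ALIASES = {
    "prompt_tokens": "input_tokens",
    "completion_tokens": "output_tokens",
    "cacheRead": "cached",
}

def normalize_usage(raw: dict) -> dict:
    """Map provider-native field aliases onto one canonical shape."""
    usage = {ALIASES.get(k, k): v for k, v in raw.items()}
    usage.setdefault("total_tokens", 0)  # missing totals fall back to 0
    return usage

def estimate_cost(usage: dict, cost_per_1m: dict) -> float:
    """USD cost from per-1M-token rates; cached reads are billed separately."""
    cached = usage.get("cached", 0)
    uncached_input = usage.get("input_tokens", 0) - cached
    return (
        uncached_input * cost_per_1m.get("input", 0.0)
        + usage.get("output_tokens", 0) * cost_per_1m.get("output", 0.0)
        + cached * cost_per_1m.get("cacheRead", 0.0)
    ) / 1_000_000

raw = {"prompt_tokens": 120_000, "completion_tokens": 2_000, "cacheRead": 100_000}
rates = {"input": 3.0, "output": 15.0, "cacheRead": 0.3}  # hypothetical USD per 1M
cost = estimate_cost(normalize_usage(raw), rates)
```

Note how the cheap `cacheRead` rate makes the 100k cached tokens cost far less than the 20k uncached ones.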
Heartbeat can keep the cache warm across idle gaps. If your model cache TTL is `1h`, a heartbeat every `55m` refreshes the cache just before it expires.

In multi-agent setups, you can keep one shared model config and tune cache behavior per agent with `agents.list[].params.cacheRetention`. For a full knob-by-knob guide, see Prompt Caching.
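The interval arithmetic is simple but easy to get backwards; a sketch (duration parsing simplified to whole hours/minutes):

```python
import re

def to_minutes(d: str) -> int:
    """Parse simple durations like '1h', '55m', or '1h30m' into minutes."""
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?", d)
    hours, mins = (int(g) if g else 0 for g in match.groups())
    return hours * 60 + mins

# A 55m heartbeat fires before a 1h cache TTL elapses, keeping the cache warm.
keeps_warm = to_minutes("55m") < to_minutes("1h")
```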
For Anthropic API pricing, cache reads are significantly cheaper than input tokens, while cache writes are billed at a higher multiplier. See Anthropic’s prompt caching pricing for the latest rates and TTL multipliers: https://docs.anthropic.com/docs/build-with-claude/prompt-caching
```yaml
agents:
  defaults:
    model:
      primary: "anthropic/claude-opus-4-6"
    models:
      "anthropic/claude-opus-4-6":
        params:
          cacheRetention: "long"
    heartbeat:
      every: "55m"
```
```yaml
agents:
  defaults:
    model:
      primary: "anthropic/claude-opus-4-6"
    models:
      "anthropic/claude-opus-4-6":
        params:
          cacheRetention: "long"  # default baseline for most agents
  list:
    - id: "research"
      default: true
      heartbeat:
        every: "55m"  # keep long cache warm for deep sessions
    - id: "alerts"
      params:
        cacheRetention: "none"  # avoid cache writes for bursty notifications
```
Per-agent `agents.list[].params` entries override the shared `params`, so `cacheRetention` can differ per agent.

Anthropic's 1M context window is currently beta-gated. OpenClaw can inject the required `anthropic-beta` header when you enable `context1m`:

```yaml
agents:
  defaults:
    models:
      "anthropic/claude-opus-4-6":
        params:
          context1m: true
```
This maps to Anthropic's `context-1m-2025-08-07` beta flag. It only applies when `context1m: true` is set for the model.

Requirement: the credential must be eligible for long-context usage. If not, Anthropic responds with a provider-side rate limit error for that request.
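For illustration, the injected header is just an extra request header on Anthropic's Messages endpoint (a sketch of the request shape, not OpenClaw's implementation; the beta value is the one named above):

```python
import json
import urllib.request

def build_request(api_key: str, body: dict, context_1m: bool) -> urllib.request.Request:
    """Build an Anthropic Messages API request, optionally opting into the 1M-context beta."""
    headers = {
        "content-type": "application/json",
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
    }
    if context_1m:
        headers["anthropic-beta"] = "context-1m-2025-08-07"
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps(body).encode(),
        headers=headers,
        method="POST",
    )
```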
If you authenticate Anthropic with OAuth/subscription tokens (`sk-ant-oat-*`), the `context-1m-*` beta may not be honored for those credentials.

Other levers for large sessions are `/compact` and `agents.defaults.imageMaxDimensionPx`. See Skills for the exact skill list overhead formula.