Technical reference for the OpenClaw framework.
Agent-scoped configuration keys live under `agents.*`. Related key groups: `multiAgent.*`, `session.*`, `messages.*`, `talk.*`.

### `agents.defaults.workspace`

Default: `~/.openclaw/workspace`.

```json5
{
  agents: { defaults: { workspace: "~/.openclaw/workspace" } },
}
```
### `agents.defaults.repoRoot`

Optional repository root shown in the system prompt's Runtime line. If unset, OpenClaw auto-detects it by walking upward from the workspace.

```json5
{
  agents: { defaults: { repoRoot: "~/Projects/openclaw" } },
}
```
### `agents.defaults.skills`

Optional default skill allowlist for agents that do not set `agents.list[].skills`.

```json5
{
  agents: {
    defaults: { skills: ["github", "weather"] },
    list: [
      { id: "writer" },                        // inherits github, weather
      { id: "docs", skills: ["docs-search"] }, // replaces defaults
      { id: "locked-down", skills: [] },       // no skills
    ],
  },
}
```

When set, `agents.list[].skills` replaces `agents.defaults.skills` entirely; `agents.list[].skills: []` leaves that agent with no skills.

### `agents.defaults.skipBootstrap`

Disables automatic creation of the workspace bootstrap files (`AGENTS.md`, `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`, `BOOTSTRAP.md`).

```json5
{
  agents: { defaults: { skipBootstrap: true } },
}
```
### `agents.defaults.contextInjection`

Controls when workspace bootstrap files are injected into the system prompt. Default: `"always"`. Options: `"always"`, `"continuation-skip"`, `"never"`.

```json5
{
  agents: { defaults: { contextInjection: "continuation-skip" } },
}
```
### `agents.defaults.bootstrapMaxChars`

Max characters per workspace bootstrap file before truncation. Default: `12000`.

```json5
{
  agents: { defaults: { bootstrapMaxChars: 12000 } },
}
```
### `agents.defaults.bootstrapTotalMaxChars`

Max total characters injected across all workspace bootstrap files. Default: `60000`.

```json5
{
  agents: { defaults: { bootstrapTotalMaxChars: 60000 } },
}
```
### `agents.defaults.bootstrapPromptTruncationWarning`

Controls the agent-visible warning text when bootstrap context is truncated. Default: `"once"`. Options: `"off"`, `"once"`, `"always"`.

```json5
{
  agents: { defaults: { bootstrapPromptTruncationWarning: "once" } }, // off | once | always
}
```
OpenClaw has multiple high-volume prompt/context budgets, and they are intentionally split by subsystem instead of all flowing through one generic knob.
The main budgets are `agents.defaults.bootstrapMaxChars` / `agents.defaults.bootstrapTotalMaxChars` (workspace bootstrap files), `agents.defaults.startupContext.*` (first-turn startup prelude, including recent `memory/*.md` on `/new` and `/reset`), `skills.limits.*` (the skills prompt), `agents.defaults.contextLimits.*` (bounded runtime context surfaces), and `memory.qmd.limits.*`.

Use the matching per-agent override only when one agent needs a different budget: `agents.list[].skillsLimits.maxSkillsPromptChars` and `agents.list[].contextLimits.*`.

### `agents.defaults.startupContext`

Controls the first-turn startup prelude injected on reset/startup model runs (bare chat starts, `/new`, and `/reset`).

```json5
{
  agents: {
    defaults: {
      startupContext: {
        enabled: true,
        applyOn: ["new", "reset"],
        dailyMemoryDays: 2,
        maxFileBytes: 16384,
        maxFileChars: 1200,
        maxTotalChars: 2800,
      },
    },
  },
}
```
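For a small-context deployment these budgets can be tightened together. A sketch; the values below are illustrative, not recommended defaults:

```json5
{
  agents: {
    defaults: {
      bootstrapMaxChars: 6000,       // per bootstrap file
      bootstrapTotalMaxChars: 20000, // across all bootstrap files
      contextLimits: { toolResultMaxChars: 8000 },
    },
  },
  skills: {
    limits: { maxSkillsPromptChars: 6000 }, // skills prompt budget
  },
}
```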
### `agents.defaults.contextLimits`

Shared defaults for bounded runtime context surfaces.

```json5
{
  agents: {
    defaults: {
      contextLimits: {
        memoryGetMaxChars: 12000,
        memoryGetDefaultLines: 120,
        toolResultMaxChars: 16000,
        postCompactionMaxChars: 1800,
      },
    },
  },
}
```
- `memoryGetMaxChars`: character cap on `memory_get` results.
- `memoryGetDefaultLines`: default `lines` value for `memory_get`.
- `toolResultMaxChars`: character cap on individual tool results.
- `postCompactionMaxChars`: character cap on post-compaction context.

### `agents.list[].contextLimits`

Per-agent override for the shared `contextLimits`; fields not set here fall back to `agents.defaults.contextLimits`.

```json5
{
  agents: {
    defaults: {
      contextLimits: {
        memoryGetMaxChars: 12000,
        toolResultMaxChars: 16000,
      },
    },
    list: [
      {
        id: "tiny-local",
        contextLimits: {
          memoryGetMaxChars: 6000,
          toolResultMaxChars: 8000,
        },
      },
    ],
  },
}
```
### `skills.limits.maxSkillsPromptChars`

Global cap for the compact skills list injected into the system prompt. This does not affect reading a skill's `SKILL.md` directly.

```json5
{
  skills: {
    limits: {
      maxSkillsPromptChars: 18000,
    },
  },
}
```
### `agents.list[].skillsLimits.maxSkillsPromptChars`

Per-agent override for the skills prompt budget.

```json5
{
  agents: {
    list: [
      {
        id: "tiny-local",
        skillsLimits: {
          maxSkillsPromptChars: 6000,
        },
      },
    ],
  },
}
```
### `agents.defaults.imageMaxDimensionPx`

Max pixel size for the longest image side in transcript/tool image blocks before provider calls. Default: `1200`. Lower values usually reduce vision-token usage and request payload size for screenshot-heavy runs; higher values preserve more visual detail.

```json5
{
  agents: { defaults: { imageMaxDimensionPx: 1200 } },
}
```
### `agents.defaults.userTimezone`

Timezone used for system prompt context (not message timestamps). Falls back to the host timezone.

```json5
{
  agents: { defaults: { userTimezone: "America/Chicago" } },
}
```
### `agents.defaults.timeFormat`

Time format used in the system prompt. Default: `"auto"`. Options: `"auto"`, `"12"`, `"24"`.

```json5
{
  agents: { defaults: { timeFormat: "auto" } }, // auto | 12 | 24
}
```
### `agents.defaults.model`

```json5
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-6": { alias: "opus" },
        "minimax/MiniMax-M2.7": { alias: "minimax" },
      },
      model: {
        primary: "anthropic/claude-opus-4-6",
        fallbacks: ["minimax/MiniMax-M2.7"],
      },
      imageModel: {
        primary: "openrouter/qwen/qwen-2.5-vl-72b-instruct:free",
        fallbacks: ["openrouter/google/gemini-2.0-flash-vision:free"],
      },
      imageGenerationModel: {
        primary: "openai/gpt-image-2",
        fallbacks: ["google/gemini-3.1-flash-image-preview"],
      },
      videoGenerationModel: {
        primary: "qwen/wan2.6-t2v",
        fallbacks: ["qwen/wan2.6-i2v"],
      },
      pdfModel: {
        primary: "anthropic/claude-opus-4-6",
        fallbacks: ["openai/gpt-5.4-mini"],
      },
      params: { cacheRetention: "long" }, // global default provider params
      agentRuntime: {
        id: "pi",       // pi | auto | registered harness id, e.g. codex
        fallback: "pi", // pi | none
      },
      pdfMaxBytesMb: 10,
      pdfMaxPages: 20,
      thinkingDefault: "low",
      verboseDefault: "off",
      reasoningDefault: "off",
      elevatedDefault: "on",
      timeoutSeconds: 600,
      mediaMaxMb: 5,
      contextTokens: 200000,
      maxConcurrent: 3,
    },
  },
}
```
Key fields:

- `model`: a `"provider/model"` string or `{ primary, fallbacks }`. `model.primary` is a `provider/model` id (e.g. `openai/gpt-5.5`, `openai-codex/gpt-5.5`); ids registered under `models` can carry an `alias` usable with `/model`.
- `imageModel`: a `"provider/model"` string or `{ primary, fallbacks }`; used for image input. The `provider/model` id should be available under `models.providers.*.models`.
- `imageGenerationModel`: a `"provider/model"` string or `{ primary, fallbacks }`; used by the `image_generate` tool. Supported ids include `google/gemini-3.1-flash-image-preview`, `fal/fal-ai/flux/dev`, `openai/gpt-image-2`, and `openai/gpt-image-1.5`. Credentials: `GEMINI_API_KEY` or `GOOGLE_API_KEY` for `google/*`, `OPENAI_API_KEY` for `openai/gpt-image-2` and `openai/gpt-image-1.5`, `FAL_KEY` for `fal/*`.
- `musicGenerationModel`: a `"provider/model"` string or `{ primary, fallbacks }`; used by the `music_generate` tool. Supported ids include `google/lyria-3-clip-preview`, `google/lyria-3-pro-preview`, and `minimax/music-2.6`.
- `videoGenerationModel`: a `"provider/model"` string or `{ primary, fallbacks }`; used by the `video_generate` tool. Supported ids include `qwen/wan2.6-t2v`, `qwen/wan2.6-i2v`, `qwen/wan2.6-r2v`, `qwen/wan2.6-r2v-flash`, and `qwen/wan2.7-r2v`. `video_generate` accepts options such as `size`, `aspectRatio`, `resolution`, `audio`, and `watermark`.
- `pdfModel`: a `"provider/model"` string or `{ primary, fallbacks }`; used for `pdf` input (falls back to `imageModel`). `pdfMaxBytesMb` caps `pdf` size in MB; `pdfMaxPages` caps `pdf` page count.
- `verboseDefault`: `"off"`, `"on"`, or `"full"`; default `"off"`.
- `reasoningDefault`: `"off"`, `"on"`, or `"stream"`; per-agent override: `agents.list[].reasoningDefault`.
- `elevatedDefault`: `"off"`, `"on"`, `"ask"`, or `"full"`; default `"on"`.
- `params`: provider params such as `temperature`, `maxTokens`, `cacheRetention`, `context1m`, `responsesServerCompaction`, `responsesCompactThreshold`, `chat_template_kwargs`, and `extra_body`/`extraBody`. Edit with `openclaw config set agents.defaults.models '<json>' --strict-json --merge`; `config set` also supports `--replace`. `params.responsesServerCompaction: false` disables server-side `context_management`; `params.responsesCompactThreshold` tunes its threshold. `params` can be set globally via `agents.defaults.params` (e.g. `{ cacheRetention: "long" }`), per model via `agents.defaults.models["provider/model"].params`, and per agent via `agents.list[].params`.
- `params.extra_body` / `params.extraBody`: extra request-body fields for `api: "openai-completions"` providers (e.g. `store`).
- `params.chat_template_kwargs`: for `api: "openai-completions"` providers (e.g. `vllm/nemotron-3-*`), e.g. `enable_thinking: false` and `force_nonempty_content: true`; `chat_template_kwargs` is sent as `extra_body.chat_template_kwargs`.
- `params.qwenThinkingFormat`: `"chat-template"` or `"top-level"`.
- `compat.supportedReasoningEfforts`: reasoning efforts the model accepts; include `"xhigh"` to allow `/think xhigh` (also used by `llm-task`). `compat.reasoningEffortMap` remaps effort levels.
- `params.preserveThinking`: keeps thinking across turns (sets `thinking.clear_thinking: false` so `reasoning_content` is preserved).
- `agentRuntime`: selects the execution harness: `id: "pi"` (default), `id: "auto"`, or a registered harness id such as `id: "codex"` or `id: "claude-cli"`. `fallback: "none"` disables falling back to the Pi runtime when e.g. `codex` is unavailable; `fallback: "pi"` enables it. Harness models are still addressed by `provider/model` id (see `/models set` and `/models set-image`), and `maxConcurrent` applies to all runtimes. Agents inherit `agents.defaults.agentRuntime` unless they set their own `agentRuntime`.

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      agentRuntime: {
        id: "codex",
        fallback: "none",
      },
    },
  },
}
```
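As a sketch of the `params` scoping described above (assuming per-model `params` override `agents.defaults.params` and per-agent `agents.list[].params` override both; the values are illustrative):

```json5
{
  agents: {
    defaults: {
      params: { cacheRetention: "long" }, // global default provider params
      models: {
        "openai/gpt-5.5": {
          params: { temperature: 0.2 }, // per-model override
        },
      },
    },
    list: [
      { id: "writer", params: { maxTokens: 2048 } }, // per-agent override
    ],
  },
}
```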
- `id`: `"auto"`, `"pi"`, or a registered harness id (`codex`, `claude-cli`).
- `fallback`: `"pi"` or `"none"`. With `id: "auto"`, unresolved harnesses fall back to `"pi"`; with `id: "codex"`, use `"none"` to fail instead of falling back, or `fallback: "pi"` to retry on the Pi runtime.
- Environment overrides: `OPENCLAW_AGENT_RUNTIME=<id|auto|pi>` overrides `id`; `OPENCLAW_AGENT_HARNESS_FALLBACK=pi|none` overrides `fallback`.

Typical pairings: `model: "openai/gpt-5.5"` with `agentRuntime.id: "codex"` and `agentRuntime.fallback: "none"`, or `model: "anthropic/claude-opus-4-7"` with `agentRuntime.id: "claude-cli"` (`claude-cli` running `claude-opus-4-7`). If `agentRuntime.id` names a missing harness, `openclaw doctor --fix` can repair the `agentRuntime` block. `/status` reports the active runtime, e.g. `Runtime: OpenClaw Pi Default` or `Runtime: OpenAI Codex`.

Built-in alias shorthands (only apply when the model is in `agents.defaults.models`):

| Alias | Model |
|---|---|
| `opus` | `anthropic/claude-opus-4-6` |
| `sonnet` | `anthropic/claude-sonnet-4-6` |
| `gpt` | `openai/gpt-5.5` / `openai-codex/gpt-5.5` |
| `gpt-mini` | `openai/gpt-5.4-mini` |
| `gpt-nano` | `openai/gpt-5.4-nano` |
| `gemini` | `google/gemini-3.1-pro-preview` |
| `gemini-flash` | `google/gemini-3-flash-preview` |
| `gemini-flash-lite` | `google/gemini-3.1-flash-lite-preview` |
Your configured aliases always win over defaults.
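Because configured aliases win over the built-ins, you can remap a shorthand in `agents.defaults.models`. A sketch:

```json5
{
  agents: {
    defaults: {
      models: {
        // Points the built-in "gpt" alias at the mini model instead.
        "openai/gpt-5.4-mini": { alias: "gpt" },
      },
    },
  },
}
```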
Z.AI GLM-4.x models automatically enable thinking mode unless you set `--thinking off` or override `agents.defaults.models["zai/<model>"].params.thinking`. Tool streaming (`tool_stream`) can be disabled per model by setting `agents.defaults.models["zai/<model>"].params.tool_stream` to `false`; the default is adaptive.

### `agents.defaults.cliBackends`

Optional CLI backends for text-only fallback runs (no tool calls). Useful as a backup when API providers fail.
```json5
{
  agents: {
    defaults: {
      cliBackends: {
        "codex-cli": {
          command: "/opt/homebrew/bin/codex",
        },
        "my-cli": {
          command: "my-cli",
          args: ["--json"],
          output: "json",
          modelArg: "--model",
          sessionArg: "--session",
          sessionMode: "existing",
          systemPromptArg: "--system",
          // Or use systemPromptFileArg when the CLI accepts a prompt file flag.
          systemPromptWhen: "first",
          imageArg: "--image",
          imageMode: "repeat",
        },
      },
    },
  },
}
```
`sessionArg` and `imageArg` are optional; omit them when the backend CLI has no session or image flags.

### `agents.defaults.systemPromptOverride`

Replaces the entire OpenClaw-assembled system prompt with a fixed string. Set it at the default level (`agents.defaults.systemPromptOverride`) or per agent (`agents.list[].systemPromptOverride`).

```json5
{
  agents: {
    defaults: {
      systemPromptOverride: "You are a helpful assistant.",
    },
  },
}
```
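A per-agent override uses the same key on an `agents.list[]` entry. A sketch (the agent id and prompt text are illustrative):

```json5
{
  agents: {
    defaults: {
      systemPromptOverride: "You are a helpful assistant.",
    },
    list: [
      {
        id: "support",
        systemPromptOverride: "You are a terse support triage bot.",
      },
    ],
  },
}
```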
### `agents.defaults.promptOverlays`

Provider-independent prompt overlays applied by model family. GPT-5-family model ids receive the shared behavior contract across providers; `personality` controls the overlay's tone.

```json5
{
  agents: {
    defaults: {
      promptOverlays: {
        gpt5: {
          personality: "friendly", // friendly | on | off
        },
      },
    },
  },
}
```

`personality` accepts `"friendly"`, `"on"`, or `"off"`; see also `plugins.entries.openai.config.personality`.

### `agents.defaults.heartbeat`

Periodic heartbeat runs.
```json5
{
  agents: {
    defaults: {
      heartbeat: {
        every: "30m", // 0m disables
        model: "openai/gpt-5.4-mini",
        includeReasoning: false,
        includeSystemPromptSection: true, // default: true; false omits the Heartbeat section from the system prompt
        lightContext: false,    // default: false; true keeps only HEARTBEAT.md from workspace bootstrap files
        isolatedSession: false, // default: false; true runs each heartbeat in a fresh session (no conversation history)
        skipWhenBusy: false,    // default: false; true also waits for subagent/nested lanes
        session: "main",
        to: "+15555550123",
        directPolicy: "allow", // allow (default) | block
        target: "none", // default: none | options: last | whatsapp | telegram | discord | ...
        prompt: "Read HEARTBEAT.md if it exists...",
        ackMaxChars: 300,
        suppressToolErrorWarnings: false,
        timeoutSeconds: 45,
      },
    },
  },
}
```
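Since `every: "0m"` disables the heartbeat, a per-agent `agents.list[].heartbeat` override can turn it off for one agent while the default stays on. A sketch (the agent id is illustrative):

```json5
{
  agents: {
    defaults: {
      heartbeat: { every: "30m" },
    },
    list: [
      { id: "batch-worker", heartbeat: { every: "0m" } }, // heartbeat off for this agent
    ],
  },
}
```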
Notes:

- `every`: interval such as `30m` or `1h`; `0m` disables the heartbeat.
- `includeSystemPromptSection`: default `true`; when `false`, the Heartbeat section is omitted from the system prompt even if `HEARTBEAT.md` exists.
- `suppressToolErrorWarnings`: silences tool-error warnings during heartbeat runs.
- `timeoutSeconds`: falls back to `agents.defaults.timeoutSeconds`.
- `directPolicy`: `allow` (default) or `block`; blocked direct messages are reported with `reason=dm-blocked`.
- `lightContext`: keeps only `HEARTBEAT.md` from the workspace bootstrap files.
- `isolatedSession`: runs each heartbeat in a fresh session (an isolated session target) instead of `session`.
- `skipWhenBusy`: waits until the agent is idle, including subagent/nested lanes.

Per-agent override: `agents.list[].heartbeat` (same shape as `heartbeat`).

### `agents.defaults.compaction`

```json5
{
  agents: {
    defaults: {
      compaction: {
        mode: "safeguard", // default | safeguard
        provider: "my-provider", // id of a registered compaction provider plugin (optional)
        timeoutSeconds: 900,
        reserveTokensFloor: 24000,
        keepRecentTokens: 50000,
        identifierPolicy: "strict", // strict | off | custom
        identifierInstructions: "Preserve deployment IDs, ticket IDs, and host:port pairs exactly.", // used when identifierPolicy=custom
        qualityGuard: { enabled: true, maxRetries: 1 },
        midTurnPrecheck: { enabled: false }, // optional Pi tool-loop pressure check
        postCompactionSections: ["Session Startup", "Red Lines"], // [] disables reinjection
        model: "openrouter/anthropic/claude-sonnet-4-6", // optional compaction-only model override
        truncateAfterCompaction: true, // rotate to a smaller successor JSONL after compaction
        maxActiveTranscriptBytes: "20mb", // optional preflight local compaction trigger
        notifyUser: true, // send brief notices when compaction starts and completes (default: false)
        memoryFlush: {
          enabled: true,
          model: "ollama/qwen3:8b", // optional memory-flush-only model override
          softThresholdTokens: 6000,
          systemPrompt: "Session nearing compaction. Store durable memories now.",
          prompt: "Write any lasting notes to memory/YYYY-MM-DD.md; reply with the exact silent token NO_REPLY if nothing to store.",
        },
      },
    },
  },
}
```
Notes:

- `mode`: `default` or `safeguard`.
- `provider`: a registered compaction provider plugin (its `summarize()` is used); applies under `mode: "safeguard"`.
- `timeoutSeconds`: default `900`.
- `keepRecentTokens`: also applies to manual `/compact` runs.
- `identifierPolicy`: `strict`, `off`, or `custom`; default `strict`. `identifierInstructions` is used when `identifierPolicy=custom`.
- `qualityGuard`: disable with `enabled: false`.
- `midTurnPrecheck`: enable with `enabled: true`; applies to both `default` and `safeguard` modes.
- `postCompactionSections`: system prompt sections reinjected after compaction, e.g. `["Session Startup", "Red Lines"]`; `[]` disables reinjection (other section names include `Every Session` and `Safety`).
- `model`: a `provider/model` id used only for compaction.
- `maxActiveTranscriptBytes`: a number of bytes or a string like `"20mb"`; `0` disables the trigger.
- `truncateAfterCompaction`: rotate to a smaller successor JSONL after compaction.
- `notifyUser`: set `true` to send brief start/completion notices.
- `memoryFlush`: `model` accepts a memory-flush-only override such as `ollama/qwen3:8b`.

### `agents.defaults.contextPruning`

Prunes old tool results from in-memory context before sending to the LLM. Does not modify session history on disk.

```json5
{
  agents: {
    defaults: {
      contextPruning: {
        mode: "cache-ttl", // off | cache-ttl
        ttl: "1h", // duration (ms/s/m/h), default unit: minutes
        keepLastAssistants: 3,
        softTrimRatio: 0.3,
        hardClearRatio: 0.5,
        minPrunableToolChars: 50000,
        softTrim: { maxChars: 4000, headChars: 1500, tailChars: 1500 },
        hardClear: { enabled: true, placeholder: "[Old tool result content cleared]" },
        tools: { deny: ["browser", "canvas"] },
      },
    },
  },
}
```
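To turn pruning off entirely (per the `mode` options above), a minimal sketch:

```json5
{
  agents: {
    defaults: {
      contextPruning: { mode: "off" }, // off | cache-ttl
    },
  },
}
```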