Technical reference for the OpenClaw framework.
## tools.profile

`tools.profile` selects a base tool profile, which `tools.allow` and `tools.deny` can then refine.

| Profile | Includes |
|---|---|
| `minimal` | `session_status` |
| `coding` | `group:fs`, `group:runtime`, `group:web`, `group:sessions`, `group:memory`, `cron`, `image`, `image_generate`, `video_generate` |
| `messaging` | `group:messaging`, `sessions_list`, `sessions_history`, `sessions_send`, `session_status` |
| `full` | No restriction (same as unset) |
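Profiles compose with the global allow/deny policy; a minimal sketch (using only keys shown in this reference) that starts from the `coding` profile but blocks web access:

```json5
{
  tools: {
    profile: "coding",
    deny: ["group:web"], // removes web_search, x_search, web_fetch from the profile
  },
}
```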
| Group | Tools |
|---|---|
| `group:runtime` | `exec`, `process`, `code_execution`, `bash` |
| `group:fs` | `read`, `write`, `edit`, `apply_patch` |
| `group:sessions` | `sessions_list`, `sessions_history`, `sessions_send`, `sessions_spawn`, `sessions_yield`, `subagents`, `session_status` |
| `group:memory` | `memory_search`, `memory_get` |
| `group:web` | `web_search`, `x_search`, `web_fetch` |
| `group:ui` | `browser`, `canvas` |
| `group:automation` | `cron`, `gateway` |
| `group:messaging` | `message` |
| `group:nodes` | `nodes` |
| `group:agents` | `agents_list` |
| `group:media` | `image`, `image_generate`, `video_generate`, `tts` |
| `group:openclaw` | All built-in tools (excludes provider plugins) |
## tools.allow / tools.deny

Global tool allow/deny policy (deny wins). Matching is case-insensitive and supports `*` wildcards:

```json5
{
  tools: {
    deny: ["browser", "canvas"],
  },
}
```
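Because deny wins, a tool stays blocked even when an allow rule also matches it. A sketch of that precedence, using the same keys:

```json5
{
  tools: {
    allow: ["group:ui"], // would include browser and canvas
    deny: ["browser"], // deny wins: browser stays blocked, canvas remains allowed
  },
}
```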
## tools.byProvider

Further restricts tools for specific providers or models. Order: base profile → provider profile → allow/deny.

```json5
{
  tools: {
    profile: "coding",
    byProvider: {
      "google-antigravity": { profile: "minimal" },
      "openai/gpt-5.4": { allow: ["group:fs", "sessions_list"] },
    },
  },
}
```
## tools.elevated

Controls elevated exec access outside the sandbox:

```json5
{
  tools: {
    elevated: {
      enabled: true,
      allowFrom: {
        whatsapp: ["+15555550123"],
        discord: ["1234567890123", "987654321098765432"],
      },
    },
  },
}
```
Per-agent overrides live at `agents.list[].tools.elevated`, and elevation can be switched at runtime with `/elevated on|off|ask|full`. Elevated `exec` runs on the gateway node, outside the sandbox.

## tools.exec

```json5
{
  tools: {
    exec: {
      backgroundMs: 10000,
      timeoutSec: 1800,
      cleanupMs: 1800000,
      notifyOnExit: true,
      notifyOnExitEmptySuccess: false,
      applyPatch: {
        enabled: false,
        allowModels: ["gpt-5.5"],
      },
    },
  },
}
```
## tools.loopDetection

Tool-loop safety checks are disabled by default; set `enabled: true` to opt in. Per-agent override: `agents.list[].tools.loopDetection`.

```json5
{
  tools: {
    loopDetection: {
      enabled: true,
      historySize: 30,
      warningThreshold: 10,
      criticalThreshold: 20,
      globalCircuitBreakerThreshold: 30,
      detectors: {
        genericRepeat: true,
        knownPollNoProgress: true,
        pingPong: true,
      },
    },
  },
}
```
## tools.web

```json5
{
  tools: {
    web: {
      search: {
        enabled: true,
        apiKey: "brave_api_key", // or BRAVE_API_KEY env
        maxResults: 5,
        timeoutSeconds: 30,
        cacheTtlMinutes: 15,
      },
      fetch: {
        enabled: true,
        provider: "firecrawl", // optional; omit for auto-detect
        maxChars: 50000,
        maxCharsCap: 50000,
        maxResponseBytes: 2000000,
        timeoutSeconds: 30,
        cacheTtlMinutes: 15,
        maxRedirects: 3,
        readability: true,
        userAgent: "custom-ua",
      },
    },
  },
}
```
## tools.media

Configures inbound media understanding (image/audio/video):

```json5
{
  tools: {
    media: {
      concurrency: 2,
      asyncCompletion: {
        directSend: false, // opt-in: send finished async video directly to the channel
      },
      audio: {
        enabled: true,
        maxBytes: 20971520,
        scope: {
          default: "deny",
          rules: [{ action: "allow", match: { chatType: "direct" } }],
        },
        models: [
          { provider: "openai", model: "gpt-4o-mini-transcribe" },
          { type: "cli", command: "whisper", args: ["--model", "base", "{{MediaPath}}"] },
        ],
      },
      image: {
        enabled: true,
        timeoutSeconds: 180,
        models: [{ provider: "ollama", model: "gemma4:26b", timeoutSeconds: 300 }],
      },
      video: {
        enabled: true,
        maxBytes: 52428800,
        models: [{ provider: "google", model: "gemini-3-flash-preview" }],
      },
    },
  },
}
```
## tools.agentToAgent

```json5
{
  tools: {
    agentToAgent: {
      enabled: false,
      allow: ["home", "work"],
    },
  },
}
```
## tools.sessions

Controls which sessions can be targeted by the session tools (`sessions_list`, `sessions_history`, `sessions_send`). Default: `tree`.

```json5
{
  tools: {
    sessions: {
      // "self" | "tree" | "agent" | "all"
      visibility: "tree",
    },
  },
}
```
## tools.sessions_spawn

Controls inline attachment support for `sessions_spawn`:

```json5
{
  tools: {
    sessions_spawn: {
      attachments: {
        enabled: false, // opt-in: set true to allow inline file attachments
        maxTotalBytes: 5242880, // 5 MB total across all files
        maxFiles: 50,
        maxFileBytes: 1048576, // 1 MB per file
        retainOnSessionKeep: false, // keep attachments when cleanup="keep"
      },
    },
  },
}
```
## tools.experimental

Experimental built-in tool flags. Default off unless the strict-agentic GPT-5 auto-enable rule applies.

```json5
{
  tools: {
    experimental: {
      planTool: true, // enable experimental update_plan
    },
  },
}
```
`planTool` gates the experimental `update_plan` tool and defaults to `false`. It is auto-enabled when `agents.defaults.embeddedPi.executionContract` is `"strict-agentic"`, and can still be set to `true` or `false` explicitly; plans track step state such as `in_progress`.

## agents.defaults.subagents

```json5
{
  agents: {
    defaults: {
      subagents: {
        allowAgents: ["research"],
        model: "minimax/MiniMax-M2.7",
        maxConcurrent: 8,
        runTimeoutSeconds: 900,
        archiveAfterMinutes: 60,
      },
    },
  },
}
```
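The defaults above can be loosened; a sketch (same keys, values illustrative) that lets any agent be spawned and disables the per-run timeout:

```json5
{
  agents: {
    defaults: {
      subagents: {
        allowAgents: ["*"], // any agent may be targeted by sessions_spawn
        runTimeoutSeconds: 0, // 0 disables the run timeout
      },
    },
  },
}
```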
`model` sets the default model for spawned subagents; `allowAgents` restricts which agents `sessions_spawn` may target (`["*"]` allows any); `runTimeoutSeconds` bounds each `sessions_spawn` run, with `0` disabling the timeout; and `tools.subagents.tools.allow` / `tools.subagents.tools.deny` restrict the tools available to subagents.

## models.providers

OpenClaw uses the built-in model catalog. Add custom providers via `models.providers`, or per agent in `~/.openclaw/agents/<agentId>/agent/models.json`:

```json5
{
  models: {
    mode: "merge", // merge (default) | replace
    providers: {
      "custom-proxy": {
        baseUrl: "http://localhost:4000/v1",
        apiKey: "LITELLM_KEY",
        api: "openai-completions", // openai-completions | openai-responses | anthropic-messages | google-generative-ai
        models: [
          {
            id: "llama-3.1-8b",
            name: "Llama 3.1 8B",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            contextTokens: 96000,
            maxTokens: 32000,
          },
        ],
      },
    },
  },
}
```
Interactive custom-provider onboarding infers image input for common vision model IDs such as GPT-4o, Claude, Gemini, Qwen-VL, LLaVA, Pixtral, InternVL, Mllama, MiniCPM-V, and GLM-4V, and skips the extra question for known text-only families. Unknown model IDs still prompt for image support. Non-interactive onboarding uses the same inference; pass `--custom-image-input` or `--custom-text-input` to set the input modality explicitly.