Debugging helpers for streaming output, especially when a provider mixes reasoning into normal text.
Use `/debug` to inspect and override runtime settings from the chat. It requires `commands.debug: true` in `openclaw.json`. Examples:

```text
/debug show
/debug set messages.responsePrefix="[openclaw]"
/debug unset messages.responsePrefix
/debug reset
```
Overrides last until you clear them with `/debug reset`.

Use `/trace` to toggle trace output for the current session. Examples:

```text
/trace
/trace on
/trace off
```
`/trace` is independent of `/verbose` and `/debug`.

Use `OPENCLAW_PLUGIN_LIFECYCLE_TRACE=1` to print per-phase timing for plugin lifecycle commands. Example:
```bash
OPENCLAW_PLUGIN_LIFECYCLE_TRACE=1 openclaw plugins install tokenjuice --force
```
Example output:
```text
[plugins:lifecycle] phase="config read" ms=6.83 status=ok command="install"
[plugins:lifecycle] phase="slot selection" ms=94.31 status=ok command="install" pluginId="tokenjuice"
[plugins:lifecycle] phase="registry refresh" ms=51.56 status=ok command="install" reason="source-changed"
```
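For longer runs it can help to rank phases by cost. A minimal sketch that scrapes the `phase=`/`ms=` fields shown above from a saved trace (`lifecycle.log` is a placeholder for wherever you redirected the output):

```ts
// rank-phases.ts — throwaway helper (hypothetical file, not part of OpenClaw).
// Reads a captured trace and prints lifecycle phases, slowest first.
import { readFileSync } from "node:fs";

const phases: Array<{ phase: string; ms: number }> = [];
for (const line of readFileSync("lifecycle.log", "utf8").split("\n")) {
  // Field layout taken from the example output above.
  const m = line.match(/\[plugins:lifecycle\] phase="([^"]+)" ms=([\d.]+)/);
  if (m) phases.push({ phase: m[1], ms: Number(m[2]) });
}

phases.sort((a, b) => b.ms - a.ms);
for (const { phase, ms } of phases) {
  console.log(`${ms.toFixed(2).padStart(9)} ms  ${phase}`);
}
```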
Use this for plugin lifecycle investigation before reaching for a CPU profiler. If the command is running from a source checkout, prefer measuring the built runtime with `node dist/entry.js ...` (after `pnpm build`) rather than `pnpm openclaw ...`.

OpenClaw keeps a reusable timing helper in `src/cli/debug-timing.ts`. Use it when a command is slow and you need a quick phase breakdown before deciding whether to use a CPU profiler or fix a specific subsystem.
Add the helper near the code you are investigating. For example, while debugging `openclaw models list` in `src/commands/models/list.list-command.ts`:

```ts
// Temporary debugging only. Remove before landing.
import { createCliDebugTiming } from "../../cli/debug-timing.js";

const timing = createCliDebugTiming({ command: "models list" });

const authStore = timing.time("debug:models:list:auth_store", () =>
  ensureAuthProfileStore(),
);

const loaded = await timing.timeAsync(
  "debug:models:list:registry",
  () => loadListModelRegistry(cfg, { sourceConfig }),
  (result) => ({
    models: result.models.length,
    discoveredKeys: result.discoveredKeys.size,
  }),
);
```
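For orientation, the surface implied by those call sites is small. The following is a rough sketch of that shape, inferred from the usage above rather than copied from `src/cli/debug-timing.ts`:

```ts
// Sketch only — inferred from the call sites above, not the real helper.
type Meta = Record<string, unknown>;

export function createCliDebugTiming(opts: { command: string }) {
  const log = (phase: string, ms: number, meta?: Meta) => {
    const extra = meta
      ? " " + Object.entries(meta).map(([k, v]) => `${k}=${v}`).join(" ")
      : "";
    console.error(`[${opts.command}] ${phase} duration=${Math.round(ms)}ms${extra}`);
  };
  return {
    // Wraps synchronous work and logs how long it took.
    time<T>(phase: string, fn: () => T): T {
      const t0 = performance.now();
      const result = fn();
      log(phase, performance.now() - t0);
      return result;
    },
    // Async variant; the optional callback derives metadata from the result.
    async timeAsync<T>(
      phase: string,
      fn: () => Promise<T>,
      meta?: (result: T) => Meta,
    ): Promise<T> {
      const t0 = performance.now();
      const result = await fn();
      log(phase, performance.now() - t0, meta?.(result));
      return result;
    },
  };
}
```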
Guidelines:

- Keep phase names prefixed with `debug:` so leftover instrumentation is easy to find and remove.
- Name phases after the subsystem being measured (`registry`, `auth_store`) and report counts such as `rows` through the metadata callback rather than the phase name.
- Use `time()` for synchronous work and `timeAsync()` for awaited work.

Readable mode is best for live debugging:
```bash
OPENCLAW_DEBUG_TIMING=1 pnpm openclaw models list --all --provider moonshot
```
Example output from a temporary `models list` instrumentation run:

```text
OpenClaw CLI debug timing: models list
    0ms    +0ms start all=true json=false local=false plain=false provider="moonshot"
    2ms    +2ms debug:models:list:import_runtime duration=2ms
   17ms   +14ms debug:models:list:load_config duration=14ms sourceConfig=true
  20.3s  +20.3s debug:models:list:auth_store duration=20.3s
  20.3s    +0ms debug:models:list:resolve_agent_dir duration=0ms agentDir=true
  20.3s    +0ms debug:models:list:resolve_provider_filter duration=0ms
  25.3s   +5.0s debug:models:list:ensure_models_json duration=5.0s
  31.2s   +5.9s debug:models:list:load_model_registry duration=5.9s models=869 availableKeys=38 discoveredKeys=868 availabilityError=false
  31.2s    +0ms debug:models:list:resolve_configured_entries duration=0ms entries=1
  31.2s    +0ms debug:models:list:build_configured_lookup duration=0ms entries=1
  33.6s   +2.4s debug:models:list:read_registry_models duration=2.4s models=871
  35.2s   +1.5s debug:models:list:append_discovered_rows duration=1.5s seenKeys=0 rows=0
  36.9s   +1.7s debug:models:list:append_catalog_supplement_rows duration=1.7s seenKeys=5 rows=5
Model                            Input       Ctx   Local  Auth  Tags
moonshot/kimi-k2-thinking        text        256k  no     no
moonshot/kimi-k2-thinking-turbo  text        256k  no     no
moonshot/kimi-k2-turbo           text        250k  no     no
moonshot/kimi-k2.5               text+image  256k  no     no
moonshot/kimi-k2.6               text+image  256k  no     no
  36.9s    +0ms debug:models:list:print_model_table duration=0ms rows=5
  36.9s    +0ms complete rows=5
```
Findings from this output:
| Phase | Time | What it means |
|---|---|---|
| `debug:models:list:auth_store` | 20.3s | The auth-profile store load is the largest cost and should be investigated first. |
| `debug:models:list:ensure_models_json` | 5.0s | Syncing `models.json` is expensive in its own right. |
| `debug:models:list:load_model_registry` | 5.9s | Registry construction and provider availability work are also meaningful costs. |
| `debug:models:list:read_registry_models` | 2.4s | Reading all registry models is not free and may matter for `--all`. |
| row append phases | 3.2s total | Building five displayed rows still takes several seconds, so the filtering path deserves a closer look. |
| `debug:models:list:print_model_table` | 0ms | Rendering is not the bottleneck. |
Those findings are enough to guide the next patch without keeping timing code in production paths.
Use JSON mode when you want to save or compare timing data:
```bash
OPENCLAW_DEBUG_TIMING=json pnpm openclaw models list --all --provider moonshot \
  2> .artifacts/models-list-timing.jsonl
```
Each stderr line is one JSON object:
json{ "command": "models list", "phase": "debug:models:list:registry", "elapsedMs": 31200, "deltaMs": 5900, "durationMs": 5900, "models": 869, "discoveredKeys": 868 }
Before opening the final PR:
```bash
rg 'createCliDebugTiming|debug:[a-z0-9_-]+:' src/commands src/cli \
  --glob '!src/cli/debug-timing.*' \
  --glob '!*.test.ts'
```
The command should return no temporary instrumentation call sites unless the PR is explicitly adding a permanent diagnostics surface. For normal performance fixes, keep only the behavior change, tests, and a short note with the timing evidence.
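To make that check repeatable (for example in a pre-push hook), the same `rg` invocation can be wrapped in a small guard. A sketch, assuming `rg` is on `PATH` (the script name is hypothetical):

```ts
// check-debug-timing.ts — throwaway guard around the rg check above.
import { spawnSync } from "node:child_process";

const result = spawnSync(
  "rg",
  [
    "createCliDebugTiming|debug:[a-z0-9_-]+:",
    "src/commands",
    "src/cli",
    "--glob", "!src/cli/debug-timing.*",
    "--glob", "!*.test.ts",
  ],
  { encoding: "utf8" },
);

// rg exits 0 when it finds matches and 1 when it finds none,
// so exit code 1 is the clean state here.
if (result.status === 0) {
  console.error("Leftover debug timing instrumentation:\n" + result.stdout);
  process.exit(1);
}
```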
For deeper CPU hotspots, use Node profiling (`--cpu-prof`).

For fast iteration, run the gateway under the file watcher:
```bash
pnpm gateway:watch
```
By default, this starts or restarts a tmux session named `openclaw-gateway-watch-main` (with the dev profile: `openclaw-gateway-watch-dev-19001`) and attaches to it. To reattach later:

```bash
tmux attach -t openclaw-gateway-watch-main
```
The tmux pane runs the raw watcher:
```bash
node scripts/watch-node.mjs gateway --force
```
Use foreground mode when tmux is not wanted:
```bash
pnpm gateway:watch:raw
# or
OPENCLAW_GATEWAY_WATCH_TMUX=0 pnpm gateway:watch
```
Disable auto-attach while keeping tmux management:
```bash
OPENCLAW_GATEWAY_WATCH_ATTACH=0 pnpm gateway:watch
```
The tmux wrapper carries common non-secret runtime selectors such as `OPENCLAW_PROFILE`, `OPENCLAW_CONFIG_PATH`, `OPENCLAW_STATE_DIR`, `OPENCLAW_GATEWAY_PORT`, and `OPENCLAW_SKIP_CHANNELS`.

The watcher restarts on build-relevant files under `src/`, plus `package.json`, `openclaw.plugin.json`, `tsconfig.json`, and `tsdown.config.ts`; changes trigger a `tsdown` rebuild and a restart from `dist`. Add any gateway CLI flags after `gateway:watch`.

Use the dev profile to isolate state and spin up a safe, disposable setup for debugging. There are two `--dev` pieces: the global `--dev` profile flag, which isolates state under `~/.openclaw-dev` and moves the gateway to port `19001`, and the `gateway --dev` bootstrap. Recommended flow (dev profile + dev bootstrap):
```bash
pnpm gateway:dev
OPENCLAW_PROFILE=dev openclaw tui
```
If you don’t have a global install yet, run the CLI via `pnpm openclaw ...`. What this does:

- Profile isolation (global `--dev`, equivalent to `OPENCLAW_PROFILE=dev`): sets `OPENCLAW_STATE_DIR=~/.openclaw-dev`, `OPENCLAW_CONFIG_PATH=~/.openclaw-dev/openclaw.json`, and `OPENCLAW_GATEWAY_PORT=19001`.
- Dev bootstrap (`gateway --dev`): sets `gateway.mode=local`, points `agent.workspace` at a dev workspace, sets `agent.skipBootstrap=true` so the bootstrap files (`AGENTS.md`, `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`) are skipped, and sets `OPENCLAW_SKIP_CHANNELS=1`.

Reset flow (fresh start):
```bash
pnpm gateway:dev:reset
```

or:

```bash
OPENCLAW_PROFILE=dev openclaw gateway --dev --reset
```
`--reset` moves the old dev state to the trash (it uses `trash` rather than `rm`). To stop the gateway:

```bash
openclaw gateway stop
```
OpenClaw can log the raw assistant stream before any filtering/formatting. This is the best way to see whether reasoning is arriving as plain text deltas (or as separate thinking blocks).
Enable it via CLI:
```bash
pnpm gateway:watch --raw-stream
```
Optional path override:
```bash
pnpm gateway:watch --raw-stream --raw-stream-path ~/.openclaw/logs/raw-stream.jsonl
```
Equivalent env vars:
```bash
OPENCLAW_RAW_STREAM=1
OPENCLAW_RAW_STREAM_PATH=~/.openclaw/logs/raw-stream.jsonl
```
Default file: `~/.openclaw/logs/raw-stream.jsonl`.
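The file is JSON Lines, so a schema-agnostic dump is enough for a first look at what the provider actually sent. A sketch (the path matches the default above; adjust it if you overrode it):

```ts
// dump-raw-stream.ts — throwaway: pretty-print each logged stream event.
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const path = join(homedir(), ".openclaw/logs/raw-stream.jsonl");
for (const line of readFileSync(path, "utf8").split("\n")) {
  if (!line.trim()) continue;
  // No schema assumptions: print every field as stored, which is enough
  // to see whether reasoning arrives as text deltas or separate blocks.
  console.log(JSON.stringify(JSON.parse(line), null, 2));
}
```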
To capture raw OpenAI-compat chunks before they are parsed into blocks, pi-mono exposes a separate logger:
```bash
PI_RAW_STREAM=1
```
Optional path:
```bash
PI_RAW_STREAM_PATH=~/.pi-mono/logs/raw-openai-completions.jsonl
```
Default file: `~/.pi-mono/logs/raw-openai-completions.jsonl`.

Note: this is only emitted by processes using pi-mono’s `openai-completions` client.