
    OpenClaw

    Documentation Mirror

    Documentation Overview

    Docs

    Auth credential semantics
    Scheduled tasks
    Hooks
    Automation & tasks
    Standing orders
    Task flow
    Background tasks
    BlueBubbles
    Broadcast groups
    Channel routing
    Discord
    Feishu
    Google Chat
    Group messages
    Groups
    iMessage
    Chat channels
    IRC
    LINE
    Channel location parsing
    Matrix
    Matrix migration
    Matrix push rules for quiet previews
    Mattermost
    Microsoft Teams
    Nextcloud Talk
    Nostr
    Pairing
    QA channel
    QQ bot
    Signal
    Slack
    Synology Chat
    Telegram
    Tlon
    Channel troubleshooting
    Twitch
    WeChat
    WhatsApp
    Yuanbao
    Zalo
    Zalo personal
    CI pipeline
    ACP
    Agent
    Agents
    Approvals
    Backup
    Browser
    Channels
    Clawbot
    `openclaw commitments`
    Completion
    Config
    Configure
    Cron
    Daemon
    Dashboard
    Devices
    Directory
    DNS
    Docs
    Doctor
    Flows (redirect)
    Gateway
    Health
    Hooks
    CLI reference
    Inference CLI
    Logs
    MCP
    Memory
    Message
    Migrate
    Models
    Node
    Nodes
    Onboard
    Pairing
    Plugins
    Proxy
    QR
    Reset
    Sandbox CLI
    Secrets
    Security
    Sessions
    Setup
    Skills
    Status
    System
    `openclaw tasks`
    TUI
    Uninstall
    Update
    Voicecall
    Webhooks
    Wiki
    Active memory
    Agent runtime
    Agent loop
    Agent runtimes
    Agent workspace
    Gateway architecture
    Channel docking
    Inferred commitments
    Compaction
    Context
    Context engine
    Delegate architecture
    Dreaming
    Experimental features
    Features
    Markdown formatting
    Memory overview
    Builtin memory engine
    Honcho memory
    QMD memory engine
    Memory search
    Messages
    Model failover
    Model providers
    Models CLI
    Multi-agent routing
    OAuth
    OpenClaw App SDK
    Presence
    QA overview
    Matrix QA
    Command queue
    Steering queue
    Retry policy
    Session management
    Session pruning
    Session tools
    SOUL.md personality guide
    Streaming and chunking
    System prompt
    Timezones
    TypeBox
    Typing indicators
    Usage tracking
    Date and time
    Node + tsx crash
    Diagnostics flags
    Authentication
    Background exec and process tool
    Bonjour discovery
    Bridge protocol
    CLI backends
    Configuration — agents
    Configuration — channels
    Configuration — tools and custom providers
    Configuration
    Configuration examples
    Configuration reference
    Diagnostics export
    Discovery and transports
    Doctor
    Gateway lock
    Health checks
    Heartbeat
    Gateway runbook
    Local models
    Gateway logging
    Multiple gateways
    Network model
    OpenAI chat completions
    OpenResponses API
    OpenShell
    OpenTelemetry export
    Gateway-owned pairing
    Prometheus metrics
    Gateway protocol
    Remote access
    Remote gateway setup
    Sandbox vs tool policy vs elevated
    Sandboxing
    Secrets management
    Secrets apply plan contract
    Security audit checks
    Security
    Tailscale
    Tools invoke API
    Troubleshooting
    Trusted proxy auth
    Debugging
    Environment variables
    FAQ
    FAQ: first-run setup
    FAQ: models and auth
    GPT-5.5 / Codex agentic parity
    GPT-5.5 / Codex parity maintainer notes
    Help
    Scripts
    Testing
    Testing: live suites
    General troubleshooting
    OpenClaw
    Ansible
    Azure
    Bun (experimental)
    ClawDock
    Release channels
    DigitalOcean
    Docker
    Docker VM runtime
    exe.dev
    Fly.io
    GCP
    Hetzner
    Hostinger
    Install
    Installer internals
    Kubernetes
    macOS VMs
    Migration guide
    Migrating from Claude
    Migrating from Hermes
    Nix
    Node.js
    Northflank
    Oracle Cloud
    Podman
    Railway
    Raspberry Pi
    Render
    Uninstall
    Updating
    Logging
    Network
    Audio and voice notes
    Camera capture
    Image and media support
    Nodes
    Location command
    Media understanding
    Talk mode
    Node troubleshooting
    Voice wake
    Pi integration architecture
    Pi development workflow
    Android app
    Platforms
    iOS app
    Linux app
    Gateway on macOS
    Canvas
    Gateway lifecycle
    macOS dev setup
    Health checks (macOS)
    Menu bar icon
    macOS logging
    Menu bar
    Peekaboo bridge
    macOS permissions
    Remote control
    macOS signing
    Skills (macOS)
    Voice overlay
    Voice wake (macOS)
    WebChat (macOS)
    macOS IPC
    macOS app
    Windows
    Plugin internals
    Plugin architecture internals
    Building plugins
    Plugin bundles
    Codex Computer Use
    Codex harness
    Community plugins
    Plugin compatibility
    Plugin dependency resolution
    Google Meet plugin
    Plugin hooks
    Plugin manifest
    Memory LanceDB
    Memory wiki
    Message presentation
    Agent harness plugins
    Building channel plugins
    Channel turn kernel
    Plugin entry points
    Plugin SDK migration
    Plugin SDK overview
    Building provider plugins
    Plugin runtime helpers
    Plugin setup and config
    Plugin SDK subpaths
    Plugin testing
    Skill workshop plugin
    Voice call plugin
    Webhooks plugin
    Zalo personal plugin
    OpenProse
    Alibaba Model Studio
    Anthropic
    Arcee AI
    Azure Speech
    Amazon Bedrock
    Amazon Bedrock Mantle
    Chutes
    Claude Max API proxy
    Cloudflare AI gateway
    ComfyUI
    Deepgram
    Deepinfra
    DeepSeek
    ElevenLabs
    Fal
    Fireworks
    GitHub Copilot
    GLM (Zhipu)
    Google (Gemini)
    Gradium
    Groq
    Hugging Face (inference)
    Provider directory
    Inferrs
    Inworld
    Kilocode
    LiteLLM
    LM Studio
    MiniMax
    Mistral
    Model provider quickstart
    Moonshot AI
    NVIDIA
    Ollama
    OpenAI
    OpenCode
    OpenCode Go
    OpenRouter
    Perplexity
    Qianfan
    Qwen
    Runway
    SGLang
    StepFun
    Synthetic
    Tencent Cloud (TokenHub)
    Together AI
    Venice AI
    Vercel AI gateway
    vLLM
    Volcengine (Doubao)
    Vydra
    xAI
    Xiaomi MiMo
    Z.AI
    Default AGENTS.md
    Release policy
    API usage and costs
    Credits
    Device model database
    Full release validation
    Memory configuration reference
    OpenClaw App SDK API design
    Prompt caching
    Rich output protocol
    RPC adapters
    SecretRef credential surface
    Session management deep dive
    AGENTS.md template
    BOOT.md template
    BOOTSTRAP.md template
    HEARTBEAT.md template
    IDENTITY template
    SOUL.md template
    TOOLS.md template
    USER template
    Tests
    Token use and costs
    Transcript hygiene
    Onboarding reference
    Contributing to the threat model
    Threat model (MITRE ATLAS)
    Formal verification (security models)
    Network proxy
    Agent bootstrapping
    Docs directory
    Getting started
    Docs hubs
    OpenClaw lore
    Onboarding (macOS app)
    Onboarding overview
    Personal assistant setup
    Setup
    Showcase
    Onboarding (CLI)
    CLI automation
    CLI setup reference
    ACP agents
    ACP agents — setup
    Agent send
    apply_patch tool
    Brave search
    Browser (OpenClaw-managed)
    Browser control API
    Browser troubleshooting
    Browser login
    WSL2 + Windows + remote Chrome CDP troubleshooting
    BTW side questions
    ClawHub
    Code execution
    Creating skills
    Diffs
    DuckDuckGo search
    Elevated mode
    Exa search
    Exec tool
    Exec approvals
    Exec approvals — advanced
    Firecrawl
    Gemini search
    Grok search
    Image generation
    Tools and plugins
    Kimi search
    LLM task
    Lobster
    Tool-loop detection
    Media overview
    MiniMax search
    Multi-agent sandbox and tools
    Music generation
    Ollama web search
    PDF tool
    Perplexity search
    Plugins
    Reactions
    SearXNG search
    Skills
    Skills config
    Slash commands
    Sub-agents
    Tavily
    Thinking levels
    Tokenjuice
    Trajectory bundles
    Text-to-speech
    Video generation
    Web search
    Web fetch
    Linux server
    Control UI
    Dashboard
    Web
    TUI
    WebChat

    OpenAPI Specs

    openapi

    Real-time Synchronized Documentation

    Last sync: 01/05/2026 08:32:47

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions.

    OpenClaw Docs

    v2.4.0 Production


    Technical reference for the OpenClaw framework. Real-time synchronization with the official documentation engine.

    Use this file to discover all available pages before exploring further.

    Crestodian

    ```text
    openclaw crestodian
    ```

    Crestodian is OpenClaw's local setup, repair, and configuration helper. It is designed to stay reachable when the normal agent path is broken.

    Running

    Running `openclaw` with no command starts Crestodian in an interactive terminal. Running `openclaw crestodian` starts the same helper explicitly.

    What Crestodian shows

    On startup, interactive Crestodian opens the same TUI shell used by `openclaw tui`, with a Crestodian chat backend. The chat log starts with a short greeting:

    • when to start Crestodian
    • the model or deterministic planner path Crestodian is actually using
    • config validity and the default agent
    • Gateway reachability from the first startup probe
    • the next debug action Crestodian can take

    It does not dump secrets or load plugin CLI commands just to start. The TUI still provides the normal header, chat log, status line, footer, autocomplete, and editor controls.

    Use `status` for the detailed inventory with config path, docs/source paths, local CLI probes, API-key presence, agents, model, and Gateway details.

    Crestodian uses the same OpenClaw reference discovery as regular agents. In a Git checkout, it points itself at the local `docs/` tree and the local source tree. In an npm package install, it uses the bundled package docs and links to https://github.com/openclaw/openclaw, with explicit guidance to review source whenever the docs are not enough.

    Examples

    ```bash
    openclaw
    openclaw crestodian
    openclaw crestodian --json
    openclaw crestodian --message "models"
    openclaw crestodian --message "validate config"
    openclaw crestodian --message "setup workspace ~/Projects/work model openai/gpt-5.5" --yes
    openclaw crestodian --message "set default model openai/gpt-5.5" --yes
    openclaw onboard --modern
    ```

    Inside the Crestodian TUI:

    ```text
    status
    health
    doctor
    doctor fix
    validate config
    setup
    setup workspace ~/Projects/work model openai/gpt-5.5
    config set gateway.port 19001
    config set-ref gateway.auth.token env OPENCLAW_GATEWAY_TOKEN
    gateway status
    restart gateway
    agents
    create agent work workspace ~/Projects/work
    models
    set default model openai/gpt-5.5
    talk to work agent
    talk to agent for ~/Projects/work
    audit
    quit
    ```

    Safe startup

    Crestodian's startup path is deliberately small. It can run when:

    • `openclaw.json` is missing
    • `openclaw.json` is invalid
    • the Gateway is down
    • plugin command registration is unavailable
    • no agent has been configured yet

    `openclaw --help` and `openclaw --version` still use the normal fast paths. Noninteractive `openclaw` exits with a short message instead of printing root help, because the no-command product is Crestodian.
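A config probe that reports state instead of raising is one way to keep this "runs even when openclaw.json is missing or invalid" contract. The sketch below is illustrative Python, not OpenClaw's implementation; the function name and return shape are invented.

```python
import json
from pathlib import Path

# Illustrative sketch (not OpenClaw code): a startup probe that never raises,
# so the helper stays reachable when openclaw.json is missing or invalid.
def probe_config(path: Path):
    if not path.exists():
        return None, "missing"
    try:
        return json.loads(path.read_text(encoding="utf-8")), "ok"
    except (json.JSONDecodeError, OSError):
        return None, "invalid"  # report the state, do not crash startup
```

A caller can then surface "missing" or "invalid" in the startup greeting rather than aborting.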

    Operations and approval

    Crestodian uses typed operations instead of editing config ad hoc.

    Read-only operations can run immediately:

    • show overview
    • list agents
    • show model/backend status
    • run status or health checks
    • check Gateway reachability
    • run doctor without interactive fixes
    • validate config
    • show the audit-log path

    Persistent operations require conversational approval in interactive mode unless you pass `--yes` for a direct command:

    • write config
    • run `config set`
    • set supported SecretRef values through `config set-ref`
    • run setup/onboarding bootstrap
    • change the default model
    • start, stop, or restart the Gateway
    • create agents
    • run doctor repairs that rewrite config or state
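The read-only/persistent split can be pictured as a small dispatch gate. This Python sketch is illustrative only; the operation names and function signature are invented, not Crestodian's actual API.

```python
# Illustrative gate (invented names): read-only operations run immediately,
# persistent operations need --yes or interactive approval.
READ_ONLY = {"show overview", "list agents", "status", "validate config"}
PERSISTENT = {"write config", "config set", "setup", "restart gateway", "create agent"}

def dispatch(op, *, interactive=False, yes=False, approve=lambda op: False):
    if op in READ_ONLY:
        return "ran"                      # no approval needed
    if op in PERSISTENT:
        if yes or (interactive and approve(op)):
            return "ran"                  # approved: an applied write is audited
        return "needs approval"
    return "unknown operation"
```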

    Applied writes are recorded in:

    ```text
    ~/.openclaw/audit/crestodian.jsonl
    ```

    Discovery is not audited. Only applied operations and writes are logged.
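Appending applied operations as JSON Lines might look like the following. The field names here are assumptions for illustration; the real audit schema is not specified on this page.

```python
import json
import time
from pathlib import Path

# Hypothetical audit writer: one JSON object per line, append-only.
# Field names ("ts", "op", ...) are illustrative, not the real schema.
def record_applied(audit_path: Path, op: str, detail: dict) -> None:
    audit_path.parent.mkdir(parents=True, exist_ok=True)
    entry = {"ts": time.time(), "op": op, **detail}
    with audit_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Append-only JSON Lines keeps each applied operation an independent record, so a partially written last line cannot corrupt earlier entries.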

    `openclaw onboard --modern` starts Crestodian as the modern onboarding preview. Plain `openclaw onboard` still runs classic onboarding.

    Setup bootstrap

    `setup` is the chat-first onboarding bootstrap. It writes only through typed config operations and asks for approval first.

    ```text
    setup
    setup workspace ~/Projects/work
    setup workspace ~/Projects/work model openai/gpt-5.5
    ```

    When no model is configured, setup selects the first usable backend in this order and tells you what it chose:

    • existing explicit model, if already configured
    • `OPENAI_API_KEY` -> `openai/gpt-5.5`
    • `ANTHROPIC_API_KEY` -> `anthropic/claude-opus-4-7`
    • Claude Code CLI -> `claude-cli/claude-opus-4-7`
    • Codex CLI -> `codex-cli/gpt-5.5`

    If none are available, setup still writes the default workspace and leaves the model unset. Install or log into Codex/Claude Code, or expose `OPENAI_API_KEY`/`ANTHROPIC_API_KEY`, then run setup again.
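The selection order reads as a first-match function. The model IDs below come from the list above; the probing details (environment variables, CLI binaries named `claude` and `codex`) are assumptions for illustration.

```python
import os
import shutil

# Sketch of setup's backend order. Model IDs are from the docs above; the
# detection (env vars, CLI binary names) is assumed, not verified.
def pick_default_model(configured=None, env=None, which=shutil.which):
    env = os.environ if env is None else env
    if configured:
        return configured                  # explicit model wins
    if env.get("OPENAI_API_KEY"):
        return "openai/gpt-5.5"
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic/claude-opus-4-7"
    if which("claude"):                    # Claude Code CLI (assumed binary name)
        return "claude-cli/claude-opus-4-7"
    if which("codex"):                     # Codex CLI (assumed binary name)
        return "codex-cli/gpt-5.5"
    return None                            # leave the model unset
```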

    Model-Assisted Planner

    Crestodian always starts in deterministic mode. For fuzzy commands that the deterministic parser does not understand, local Crestodian can make one bounded planner turn through OpenClaw's normal runtime paths. It first uses the configured OpenClaw model. If no configured model is usable yet, it can fall back to local runtimes already present on the machine:

    • Claude Code CLI: `claude-cli/claude-opus-4-7`
    • Codex app-server harness: `openai/gpt-5.5` with `agentRuntime.id: "codex"`
    • Codex CLI: `codex-cli/gpt-5.5`

    The model-assisted planner cannot mutate config directly. It must translate the request into one of Crestodian's typed commands, then the normal approval and audit rules apply. Crestodian prints the model it used and the interpreted command before it runs anything. Configless fallback planner turns are temporary, tool-disabled where the runtime supports it, and use a temporary workspace/session.

    Message-channel rescue mode does not use the model-assisted planner. Remote rescue stays deterministic so a broken or compromised normal agent path cannot be used as a config editor.
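That contract — deterministic parse first, at most one planner turn, planner output restricted to typed commands, and no planner at all in remote rescue — can be sketched as follows. All names are invented for illustration.

```python
# Invented names; illustrative control flow only, not Crestodian's code.
TYPED_COMMANDS = {"status", "health", "restart gateway", "validate config"}

def interpret(text, plan_once=None, remote_rescue=False):
    if text in TYPED_COMMANDS:
        return ("typed", text)             # deterministic parse wins
    if remote_rescue or plan_once is None:
        return ("rejected", text)          # remote rescue stays deterministic
    suggestion = plan_once(text)           # one bounded planner turn
    if suggestion in TYPED_COMMANDS:
        return ("planned", suggestion)     # approval/audit rules still apply
    return ("rejected", text)              # planner may not mutate config
```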

    Switching to an agent

    Use a natural-language selector to leave Crestodian and open the normal TUI:

    ```text
    talk to agent
    talk to work agent
    switch to main agent
    ```

    `openclaw tui`, `openclaw chat`, and `openclaw terminal` still open the normal agent TUI directly. They do not start Crestodian.

    After switching into the normal TUI, use `/crestodian` to return to Crestodian. You can include a follow-up request:

    ```text
    /crestodian
    /crestodian restart gateway
    ```

    Agent switches inside the TUI leave a breadcrumb noting that `/crestodian` is available.

    Message rescue mode

    Message rescue mode is the message-channel entrypoint for Crestodian. It is for the case where your normal agent is dead, but a trusted channel such as WhatsApp still receives commands.

    Supported text command:

    • `/crestodian <request>`

    Operator flow:

    ```text
    You, in a trusted owner DM: /crestodian status
    OpenClaw: Crestodian rescue mode. Gateway reachable: no. Config valid: no.
    You: /crestodian restart gateway
    OpenClaw: Plan: restart the Gateway. Reply /crestodian yes to apply.
    You: /crestodian yes
    OpenClaw: Applied. Audit entry written.
    ```

    Agent creation can also be queued from the local prompt or rescue mode:

    ```text
    create agent work workspace ~/Projects/work model openai/gpt-5.5
    /crestodian create agent work workspace ~/Projects/work
    ```

    Remote rescue mode is an admin surface. It must be treated like remote config repair, not like normal chat.

    Security contract for remote rescue:

    • Disabled when sandboxing is active. If an agent/session is sandboxed, Crestodian must refuse remote rescue and explain that local CLI repair is required.
    • Default effective state is `auto`: allow remote rescue only in trusted YOLO operation, where the runtime already has unsandboxed local authority.
    • Require an explicit owner identity. Rescue must not accept wildcard sender rules, open group policy, unauthenticated webhooks, or anonymous channels.
    • Owner DMs only by default. Group/channel rescue requires explicit opt-in.
    • Remote rescue cannot open the local TUI or switch into an interactive agent session. Use local `openclaw` for agent handoff.
    • Persistent writes still require approval, even in rescue mode.
    • Audit every applied rescue operation. Message-channel rescue records channel, account, sender, and source-address metadata. Config-mutating operations also record config hashes before and after.
    • Never echo secrets. SecretRef inspection should report availability, not values.
    • If the Gateway is alive, prefer Gateway typed operations. If the Gateway is dead, use only the minimal local repair surface that does not depend on the normal agent loop.

    Config shape:

    ```jsonc
    {
      "crestodian": {
        "rescue": {
          "enabled": "auto",
          "ownerDmOnly": true
        }
      }
    }
    ```

    `enabled` should accept:

    • `"auto"`: default. Allow only when the effective runtime is YOLO and sandboxing is off.
    • `false`: never allow message-channel rescue.
    • `true`: explicitly allow rescue when the owner/channel checks pass. This still must not bypass the sandboxing denial.

    The default `"auto"` YOLO posture is:

    • sandbox mode resolves to `off`
    • `tools.exec.security` resolves to `full`
    • `tools.exec.ask` resolves to `off`
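Putting the three `enabled` values, the sandbox denial, and the YOLO posture together, the effective check might resolve like this sketch (illustrative Python; the signature and parameter names are invented):

```python
# Illustrative resolution of crestodian.rescue.enabled (invented signature).
# Sandboxing always denies, even for enabled=true; "auto" requires the YOLO
# posture described above: sandbox off, exec security "full", exec ask "off".
def rescue_allowed(enabled, *, sandboxed, exec_security="allowlist", exec_ask="on"):
    if sandboxed:
        return False                   # hard denial: local CLI repair required
    if enabled is False:
        return False
    if enabled is True:
        return True                    # owner/channel checks still apply
    # enabled == "auto"
    return exec_security == "full" and exec_ask == "off"
```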

    Remote rescue is covered by the Docker lane:

    ```bash
    pnpm test:docker:crestodian-rescue
    ```

    Configless local planner fallback is covered by:

    ```bash
    pnpm test:docker:crestodian-planner
    ```

    An opt-in live channel command-surface smoke test checks `/crestodian status` plus a persistent approval roundtrip through the rescue handler:

    ```bash
    pnpm test:live:crestodian-rescue-channel
    ```

    Fresh configless setup through Crestodian is covered by:

    ```bash
    pnpm test:docker:crestodian-first-run
    ```

    That lane starts with an empty state dir, routes bare `openclaw` to Crestodian, sets the default model, creates an additional agent, configures Discord through a plugin enablement plus token SecretRef, validates config, and checks the audit log. QA Lab also has a repo-backed scenario for the same Ring 0 flow:

    ```bash
    pnpm openclaw qa suite --scenario crestodian-ring-zero-setup
    ```

    Related

    • CLI reference
    • Doctor
    • TUI
    • Sandbox
    • Security

    © 2024 TaskFlow Mirror

    Powered by TaskFlow Sync Engine