    OpenClaw

    Documentation Mirror

    Documentation Overview

    Docs

    Auth credential semantics
    Scheduled tasks
    Hooks
    Automation & tasks
    Standing orders
    Task flow
    Background tasks
    BlueBubbles
    Broadcast groups
    Channel routing
    Discord
    Feishu
    Google Chat
    Group messages
    Groups
    iMessage
    Chat channels
    IRC
    LINE
    Channel location parsing
    Matrix
    Matrix migration
    Matrix push rules for quiet previews
    Mattermost
    Microsoft Teams
    Nextcloud Talk
    Nostr
    Pairing
    QA channel
    QQ bot
    Signal
    Slack
    Synology Chat
    Telegram
    Tlon
    Channel troubleshooting
    Twitch
    WeChat
    WhatsApp
    Yuanbao
    Zalo
    Zalo personal
    CI pipeline
    ACP
    Agent
    Agents
    Approvals
    Backup
    Browser
    Channels
    Clawbot
    `openclaw commitments`
    Completion
    Config
    Configure
    Cron
    Daemon
    Dashboard
    Devices
    Directory
    DNS
    Docs
    Doctor
    Flows (redirect)
    Gateway
    Health
    Hooks
    CLI reference
    Inference CLI
    Logs
    MCP
    Memory
    Message
    Migrate
    Models
    Node
    Nodes
    Onboard
    Pairing
    Plugins
    Proxy
    QR
    Reset
    Sandbox CLI
    Secrets
    Security
    Sessions
    Setup
    Skills
    Status
    System
    `openclaw tasks`
    TUI
    Uninstall
    Update
    Voicecall
    Webhooks
    Wiki
    Active memory
    Agent runtime
    Agent loop
    Agent runtimes
    Agent workspace
    Gateway architecture
    Channel docking
    Inferred commitments
    Compaction
    Context
    Context engine
    Delegate architecture
    Dreaming
    Experimental features
    Features
    Markdown formatting
    Memory overview
    Builtin memory engine
    Honcho memory
    QMD memory engine
    Memory search
    Messages
    Model failover
    Model providers
    Models CLI
    Multi-agent routing
    OAuth
    OpenClaw App SDK
    Presence
    QA overview
    Matrix QA
    Command queue
    Steering queue
    Retry policy
    Session management
    Session pruning
    Session tools
    SOUL.md personality guide
    Streaming and chunking
    System prompt
    Timezones
    TypeBox
    Typing indicators
    Usage tracking
    Date and time
    Node + tsx crash
    Diagnostics flags
    Authentication
    Background exec and process tool
    Bonjour discovery
    Bridge protocol
    CLI backends
    Configuration — agents
    Configuration — channels
    Configuration — tools and custom providers
    Configuration
    Configuration examples
    Configuration reference
    Diagnostics export
    Discovery and transports
    Doctor
    Gateway lock
    Health checks
    Heartbeat
    Gateway runbook
    Local models
    Gateway logging
    Multiple gateways
    Network model
    OpenAI chat completions
    OpenResponses API
    OpenShell
    OpenTelemetry export
    Gateway-owned pairing
    Prometheus metrics
    Gateway protocol
    Remote access
    Remote gateway setup
    Sandbox vs tool policy vs elevated
    Sandboxing
    Secrets management
    Secrets apply plan contract
    Security audit checks
    Security
    Tailscale
    Tools invoke API
    Troubleshooting
    Trusted proxy auth
    Debugging
    Environment variables
    FAQ
    FAQ: first-run setup
    FAQ: models and auth
    GPT-5.5 / Codex agentic parity
    GPT-5.5 / Codex parity maintainer notes
    Help
    Scripts
    Testing
    Testing: live suites
    General troubleshooting
    OpenClaw
    Ansible
    Azure
    Bun (experimental)
    ClawDock
    Release channels
    DigitalOcean
    Docker
    Docker VM runtime
    exe.dev
    Fly.io
    GCP
    Hetzner
    Hostinger
    Install
    Installer internals
    Kubernetes
    macOS VMs
    Migration guide
    Migrating from Claude
    Migrating from Hermes
    Nix
    Node.js
    Northflank
    Oracle Cloud
    Podman
    Railway
    Raspberry Pi
    Render
    Uninstall
    Updating
    Logging
    Network
    Audio and voice notes
    Camera capture
    Image and media support
    Nodes
    Location command
    Media understanding
    Talk mode
    Node troubleshooting
    Voice wake
    Pi integration architecture
    Pi development workflow
    Android app
    Platforms
    iOS app
    Linux app
    Gateway on macOS
    Canvas
    Gateway lifecycle
    macOS dev setup
    Health checks (macOS)
    Menu bar icon
    macOS logging
    Menu bar
    Peekaboo bridge
    macOS permissions
    Remote control
    macOS signing
    Skills (macOS)
    Voice overlay
    Voice wake (macOS)
    WebChat (macOS)
    macOS IPC
    macOS app
    Windows
    Plugin internals
    Plugin architecture internals
    Building plugins
    Plugin bundles
    Codex Computer Use
    Codex harness
    Community plugins
    Plugin compatibility
    Google Meet plugin
    Plugin hooks
    Plugin manifest
    Memory LanceDB
    Memory wiki
    Message presentation
    Agent harness plugins
    Building channel plugins
    Channel turn kernel
    Plugin entry points
    Plugin SDK migration
    Plugin SDK overview
    Building provider plugins
    Plugin runtime helpers
    Plugin setup and config
    Plugin SDK subpaths
    Plugin testing
    Skill workshop plugin
    Voice call plugin
    Webhooks plugin
    Zalo personal plugin
    OpenProse
    Alibaba Model Studio
    Anthropic
    Arcee AI
    Azure Speech
    Amazon Bedrock
    Amazon Bedrock Mantle
    Chutes
    Claude Max API proxy
    Cloudflare AI gateway
    ComfyUI
    Deepgram
    Deepinfra
    DeepSeek
    ElevenLabs
    Fal
    Fireworks
    GitHub Copilot
    GLM (Zhipu)
    Google (Gemini)
    Gradium
    Groq
    Hugging Face (inference)
    Provider directory
    Inferrs
    Inworld
    Kilocode
    LiteLLM
    LM Studio
    MiniMax
    Mistral
    Model provider quickstart
    Moonshot AI
    NVIDIA
    Ollama
    OpenAI
    OpenCode
    OpenCode Go
    OpenRouter
    Perplexity
    Qianfan
    Qwen
    Runway
    SGLang
    StepFun
    Synthetic
    Tencent Cloud (TokenHub)
    Together AI
    Venice AI
    Vercel AI gateway
    vLLM
    Volcengine (Doubao)
    Vydra
    xAI
    Xiaomi MiMo
    Z.AI
    Default AGENTS.md
    Release policy
    API usage and costs
    Credits
    Device model database
    Full release validation
    Memory configuration reference
    OpenClaw App SDK API design
    Prompt caching
    Rich output protocol
    RPC adapters
    SecretRef credential surface
    Session management deep dive
    AGENTS.md template
    BOOT.md template
    BOOTSTRAP.md template
    HEARTBEAT.md template
    IDENTITY template
    SOUL.md template
    TOOLS.md template
    USER template
    Tests
    Token use and costs
    Transcript hygiene
    Onboarding reference
    Contributing to the threat model
    Threat model (MITRE ATLAS)
    Formal verification (security models)
    Network proxy
    Agent bootstrapping
    Docs directory
    Getting started
    Docs hubs
    OpenClaw lore
    Onboarding (macOS app)
    Onboarding overview
    Personal assistant setup
    Setup
    Showcase
    Onboarding (CLI)
    CLI automation
    CLI setup reference
    ACP agents
    ACP agents — setup
    Agent send
    apply_patch tool
    Brave search
    Browser (OpenClaw-managed)
    Browser control API
    Browser troubleshooting
    Browser login
    WSL2 + Windows + remote Chrome CDP troubleshooting
    BTW side questions
    ClawHub
    Code execution
    Creating skills
    Diffs
    DuckDuckGo search
    Elevated mode
    Exa search
    Exec tool
    Exec approvals
    Exec approvals — advanced
    Firecrawl
    Gemini search
    Grok search
    Image generation
    Tools and plugins
    Kimi search
    LLM task
    Lobster
    Tool-loop detection
    Media overview
    MiniMax search
    Multi-agent sandbox and tools
    Music generation
    Ollama web search
    PDF tool
    Perplexity search
    Plugins
    Reactions
    SearXNG search
    Skills
    Skills config
    Slash commands
    Sub-agents
    Tavily
    Thinking levels
    Tokenjuice
    Trajectory bundles
    Text-to-speech
    Video generation
    Web search
    Web fetch
    Linux server
    Control UI
    Dashboard
    Web
    TUI
    WebChat

    OpenAPI Specs

    openapi

    Real-time Synchronized Documentation

    Last sync: 01/05/2026 07:03:13

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions.

    OpenClaw Docs

    v2.4.0 Production


    Technical reference for the OpenClaw framework. Real-time synchronization with the official documentation engine.

    Use this file to discover all available pages before exploring further.

    Streaming and chunking

    OpenClaw has two separate streaming layers:

    • Block streaming (channels): emit completed blocks as the assistant writes. These are normal channel messages (not token deltas).
    • Preview streaming (Telegram/Discord/Slack): update a temporary preview message while generating.

    There is no true token-delta streaming to channel messages today. Preview streaming is message-based (send + edits/appends).

    Block streaming (channel messages)

    Block streaming sends assistant output in coarse chunks as it becomes available.

```text
Model output
└─ text_delta/events
   ├─ (blockStreamingBreak=text_end)
   │  └─ chunker emits blocks as buffer grows
   └─ (blockStreamingBreak=message_end)
      └─ chunker flushes at message_end
         └─ channel send (block replies)
```

    Legend:

    • `text_delta/events`: model stream events (may be sparse for non-streaming models).
    • `chunker`: `EmbeddedBlockChunker` applying min/max bounds + break preference.
    • `channel send`: actual outbound messages (block replies).

    Controls:

    • `agents.defaults.blockStreamingDefault`: `"on"` / `"off"` (default off).
    • Channel overrides: `*.blockStreaming` (and per-account variants) to force `"on"` / `"off"` per channel.
    • `agents.defaults.blockStreamingBreak`: `"text_end"` or `"message_end"`.
    • `agents.defaults.blockStreamingChunk`: `{ minChars, maxChars, breakPreference? }`.
    • `agents.defaults.blockStreamingCoalesce`: `{ minChars?, maxChars?, idleMs? }` (merge streamed blocks before send).
    • Channel hard cap: `*.textChunkLimit` (e.g., `channels.whatsapp.textChunkLimit`).
    • Channel chunk mode: `*.chunkMode` (`length` is the default; `newline` splits on blank lines (paragraph boundaries) before length chunking).
    • Discord soft cap: `channels.discord.maxLinesPerMessage` (default 17) splits tall replies to avoid UI clipping.
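    Putting the controls above together, a config sketch (key names are from this page; the numeric values and the WhatsApp example are illustrative, not recommendations):

```json
{
  "agents": {
    "defaults": {
      "blockStreamingDefault": "on",
      "blockStreamingBreak": "text_end",
      "blockStreamingChunk": { "minChars": 200, "maxChars": 1200, "breakPreference": "paragraph" }
    }
  },
  "channels": {
    "whatsapp": { "blockStreaming": true, "textChunkLimit": 4000 }
  }
}
```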

    Boundary semantics:

    • `text_end`: stream blocks as soon as the chunker emits; flush on each `text_end`.
    • `message_end`: wait until the assistant message finishes, then flush buffered output.

    `message_end` still uses the chunker if the buffered text exceeds `maxChars`, so it can emit multiple chunks at the end.

    Media delivery with block streaming

    `MEDIA:` directives are normal delivery metadata. When block streaming sends a media block early, OpenClaw remembers that delivery for the turn. If the final assistant payload repeats the same media URL, the final delivery strips the duplicate media instead of sending the attachment again.

    Exact duplicate final payloads are suppressed. If the final payload adds distinct text around media that was already streamed, OpenClaw still sends the new text while keeping the media single-delivery. This prevents duplicate voice notes or files on channels such as Telegram when an agent emits `MEDIA:` during streaming and the provider also includes it in the completed reply.

    Chunking algorithm (low/high bounds)

    Block chunking is implemented by `EmbeddedBlockChunker`:

    • Low bound: don’t emit until buffer >= `minChars` (unless forced).
    • High bound: prefer splits before `maxChars`; if forced, split at `maxChars`.
    • Break preference: `paragraph` → `newline` → `sentence` → `whitespace` → hard break.
    • Code fences: never split inside fences; when forced at `maxChars`, close + reopen the fence to keep Markdown valid.

    `maxChars` is clamped to the channel `textChunkLimit`, so you can’t exceed per-channel caps.

    Coalescing (merge streamed blocks)

    When block streaming is enabled, OpenClaw can merge consecutive block chunks before sending them out. This reduces “single-line spam” while still providing progressive output.

    • Coalescing waits for idle gaps (`idleMs`) before flushing.
    • Buffers are capped by `maxChars` and will flush if they exceed it.
    • `minChars` prevents tiny fragments from sending until enough text accumulates (the final flush always sends remaining text).
    • The joiner is derived from `blockStreamingChunk.breakPreference` (`paragraph` → `\n\n`, `newline` → `\n`, `sentence` → space).
    • Channel overrides are available via `*.blockStreamingCoalesce` (including per-account configs).
    • The default coalesce `minChars` is bumped to 1500 for Signal/Slack/Discord unless overridden.
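    As a sketch, a default plus a per-channel coalesce override might look like this (key names from the list above; the numbers are illustrative):

```json
{
  "agents": {
    "defaults": {
      "blockStreamingCoalesce": { "minChars": 300, "maxChars": 2000, "idleMs": 1200 }
    }
  },
  "channels": {
    "discord": { "blockStreamingCoalesce": { "minChars": 1500 } }
  }
}
```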

    Human-like pacing between blocks

    When block streaming is enabled, you can add a randomized pause between block replies (after the first block). This makes multi-bubble responses feel more natural.

    • Config: `agents.defaults.humanDelay` (override per agent via `agents.list[].humanDelay`).
    • Modes: `off` (default), `natural` (800–2500ms), `custom` (`minMs` / `maxMs`).
    • Applies only to block replies, not final replies or tool summaries.
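    A sketch of a custom delay (the mode names and `minMs`/`maxMs` fields come from this page; the exact object shape is an assumption, so check the configuration reference):

```json
{
  "agents": {
    "defaults": {
      "humanDelay": { "mode": "custom", "minMs": 500, "maxMs": 1500 }
    }
  }
}
```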

    "Stream chunks or everything"

    This maps to:

    • Stream chunks: `blockStreamingDefault: "on"` + `blockStreamingBreak: "text_end"` (emit as you go). Non-Telegram channels also need `*.blockStreaming: true`.
    • Stream everything at end: `blockStreamingBreak: "message_end"` (flush once, possibly multiple chunks if very long).
    • No block streaming: `blockStreamingDefault: "off"` (only final reply).

    Channel note: Block streaming is off unless `*.blockStreaming` is explicitly set to `true`. Channels can stream a live preview (`channels.<channel>.streaming`) without block replies.

    Config location reminder: the `blockStreaming*` defaults live under `agents.defaults`, not the root config.

    Preview streaming modes

    Canonical key:

    text
    channels.<channel>.streaming

    Modes:

    • `off`: disable preview streaming.
    • `partial`: a single preview that is replaced with the latest text.
    • `block`: the preview updates in chunked/appended steps.
    • `progress`: a progress/status preview during generation, with the final answer at completion.

    Channel mapping

    | Channel    | `off` | `partial` | `block` | `progress`        |
    |------------|-------|-----------|---------|-------------------|
    | Telegram   | ✅    | ✅        | ✅      | maps to `partial` |
    | Discord    | ✅    | ✅        | ✅      | maps to `partial` |
    | Slack      | ✅    | ✅        | ✅      | ✅                |
    | Mattermost | ✅    | ✅        | ✅      | ✅                |

    Slack-only:

    • `channels.slack.streaming.nativeTransport` toggles Slack native streaming API calls when `channels.slack.streaming.mode="partial"` (default: `true`).
    • Slack native streaming and Slack assistant thread status require a reply thread target; top-level DMs do not show that thread-style preview.
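    Both keys combined in one sketch (the key paths are from the bullets above; this is illustrative, not a recommended setup):

```json
{
  "channels": {
    "slack": {
      "streaming": { "mode": "partial", "nativeTransport": true }
    }
  }
}
```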

    Legacy key migration:

    • Telegram: legacy `streamMode` and scalar/boolean `streaming` values are detected and migrated by doctor/config compatibility paths to `streaming.mode`.
    • Discord: `streamMode` + boolean `streaming` auto-migrate to the `streaming` enum.
    • Slack: `streamMode` auto-migrates to `streaming.mode`; boolean `streaming` auto-migrates to `streaming.mode` plus `streaming.nativeTransport`; legacy `nativeStreaming` auto-migrates to `streaming.nativeTransport`.

    Runtime behavior

    Telegram:

    • Uses `sendMessage` + `editMessageText` preview updates across DMs and group/topics.
    • Sends a fresh final message instead of editing in place when a preview has been visible for about one minute, then cleans up the preview so Telegram's timestamp reflects reply completion.
    • Preview streaming is skipped when Telegram block streaming is explicitly enabled (to avoid double-streaming).
    • `/reasoning stream` can write reasoning to the preview.

    Discord:

    • Uses send + edit preview messages.
    • `block` mode uses draft chunking (`draftChunk`).
    • Preview streaming is skipped when Discord block streaming is explicitly enabled.
    • Final media, error, and explicit-reply payloads cancel pending previews without flushing a new draft, then use normal delivery.

    Slack:

    • `partial` can use Slack native streaming (`chat.startStream` / `append` / `stop`) when available.
    • `block` uses append-style draft previews.
    • `progress` uses status preview text, then the final answer.
    • Native and draft preview streaming suppress block replies for that turn, so a Slack reply is streamed by one delivery path only.
    • Final media/error payloads and progress finals do not create throwaway draft messages; only text/block finals that can edit the preview flush pending draft text.

    Mattermost:

    • Streams thinking, tool activity, and partial reply text into a single draft preview post that finalizes in place when the final answer is safe to send.
    • Falls back to sending a fresh final post if the preview post was deleted or is otherwise unavailable at finalize time.
    • Final media/error payloads cancel pending preview updates before normal delivery instead of flushing a temporary preview post.

    Matrix:

    • Draft previews finalize in place when the final text can reuse the preview event.
    • Media-only, error, and reply-target-mismatch finals cancel pending preview updates before normal delivery; an already-visible stale preview is redacted.

    Tool-progress preview updates

    Preview streaming can also include tool-progress updates — short status lines like "searching the web", "reading file", or "calling tool" — that appear in the same preview message while tools are running, ahead of the final reply. This keeps multi-step tool turns visually alive rather than silent between the first thinking preview and the final answer.

    Supported surfaces:

    • Discord, Slack, Telegram, and Matrix stream tool-progress into the live preview edit by default when preview streaming is active.
    • Telegram has shipped with tool-progress preview updates enabled since `v2026.4.22`; keeping them enabled preserves that released behavior.
    • Mattermost already folds tool activity into its single draft preview post (see above).
    • Tool-progress edits follow the active preview streaming mode; they are skipped when preview streaming is `off` or when block streaming has taken over the message. On Telegram, `streaming.mode: "off"` is final-only: generic progress chatter is also suppressed instead of being delivered as standalone "Working..." messages, while approval prompts, media payloads, and errors still route normally.
    • To keep preview streaming but hide tool-progress lines, set `streaming.preview.toolProgress` to `false` for that channel. To disable preview edits entirely, set `streaming.mode` to `off`.

    Example:

```json
{
  "channels": {
    "telegram": {
      "streaming": {
        "mode": "partial",
        "preview": { "toolProgress": false }
      }
    }
  }
}
```

    Related

    • Messages — message lifecycle and delivery
    • Retry — retry behavior on delivery failure
    • Channels — per-channel streaming support

    © 2024 TaskFlow Mirror

    Powered by TaskFlow Sync Engine