
    OpenClaw

    Documentation Mirror

    Real-time Synchronized Documentation

    Last sync: 01/05/2026 07:01:50

    Note: This content is mirrored from docs.openclaw.ai and is subject to their terms and conditions.

    OpenClaw Docs v2.4.0 Production

    Technical reference for the OpenClaw framework, synchronized in real time with the official documentation engine.

    Text-to-speech

    OpenClaw can convert outbound replies into audio using any of 14 speech providers. It delivers native voice messages on Feishu, Matrix, Telegram, and WhatsApp; audio attachments everywhere else; and PCM/u-law streams for telephony and Talk.

    Quick start

    Pick a provider

    OpenAI and ElevenLabs are the most reliable hosted options. Microsoft and Local CLI work without an API key. See the [provider matrix](#supported-providers) for the full list.

    Set the API key

    Export the env var for your provider (for example `OPENAI_API_KEY`, `ELEVENLABS_API_KEY`). Microsoft and Local CLI need no key.
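    For example, in a POSIX shell (the key values below are placeholders, not real credentials):

    ```shell
    # Hypothetical key values — substitute your real credentials.
    export OPENAI_API_KEY="sk-example-openai"
    export ELEVENLABS_API_KEY="example-elevenlabs"

    # Confirm the variables are visible to the process that runs OpenClaw:
    env | grep -E 'OPENAI_API_KEY|ELEVENLABS_API_KEY'
    ```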

    Enable in config

    Set `messages.tts.auto: "always"` and `messages.tts.provider`:

    ```json5
    {
      messages: {
        tts: {
          auto: "always",
          provider: "elevenlabs",
        },
      },
    }
    ```

    Try it in chat

    `/tts status` shows the current state. `/tts audio Hello from OpenClaw` sends a one-off audio reply.

    note

    Auto-TTS is **off** by default. When `messages.tts.provider` is unset, OpenClaw picks the first configured provider in registry auto-select order.

    Supported providers

    | Provider | Auth | Notes |
    | --- | --- | --- |
    | Azure Speech | `AZURE_SPEECH_KEY` + `AZURE_SPEECH_REGION` (also `AZURE_SPEECH_API_KEY`, `SPEECH_KEY`, `SPEECH_REGION`) | Native Ogg/Opus voice-note output and telephony. |
    | DeepInfra | `DEEPINFRA_API_KEY` | OpenAI-compatible TTS. Defaults to `hexgrad/Kokoro-82M`. |
    | ElevenLabs | `ELEVENLABS_API_KEY` or `XI_API_KEY` | Voice cloning, multilingual, deterministic via `seed`. |
    | Google Gemini | `GEMINI_API_KEY` or `GOOGLE_API_KEY` | Gemini API TTS; persona-aware via `promptTemplate: "audio-profile-v1"`. |
    | Gradium | `GRADIUM_API_KEY` | Voice-note and telephony output. |
    | Inworld | `INWORLD_API_KEY` | Streaming TTS API. Native Opus voice-note and PCM telephony. |
    | Local CLI | none | Runs a configured local TTS command. |
    | Microsoft | none | Public Edge neural TTS via `node-edge-tts`. Best-effort, no SLA. |
    | MiniMax | `MINIMAX_API_KEY` (or Token Plan: `MINIMAX_OAUTH_TOKEN`, `MINIMAX_CODE_PLAN_KEY`, `MINIMAX_CODING_API_KEY`) | T2A v2 API. Defaults to `speech-2.8-hd`. |
    | OpenAI | `OPENAI_API_KEY` | Also used for auto-summary; supports persona `instructions`. |
    | OpenRouter | `OPENROUTER_API_KEY` (can reuse `models.providers.openrouter.apiKey`) | Default model `hexgrad/kokoro-82m`. |
    | Volcengine | `VOLCENGINE_TTS_API_KEY` or `BYTEPLUS_SEED_SPEECH_API_KEY` (legacy AppID/token: `VOLCENGINE_TTS_APPID`/`_TOKEN`) | BytePlus Seed Speech HTTP API. |
    | Vydra | `VYDRA_API_KEY` | Shared image, video, and speech provider. |
    | xAI | `XAI_API_KEY` | xAI batch TTS. Native Opus voice-note is not supported. |
    | Xiaomi MiMo | `XIAOMI_API_KEY` | MiMo TTS through Xiaomi chat completions. |

    If multiple providers are configured, the selected one is used first and the others are fallback options. Auto-summary uses `summaryModel` (or `agents.defaults.model.primary`), so that provider must also be authenticated if you keep summaries enabled.

    warning

    The bundled **Microsoft** provider uses Microsoft Edge's online neural TTS service via `node-edge-tts`. It is a public web service without a published SLA or quota — treat it as best-effort. The legacy provider id `edge` is normalized to `microsoft` and `openclaw doctor --fix` rewrites persisted config; new configs should always use `microsoft`.

    Configuration

    TTS config lives under `messages.tts` in `~/.openclaw/openclaw.json`. Pick a preset and adapt the provider block. Each preset sets `auto`, `provider`, and one entry under `providers`; the Azure Speech preset in full:

    ```json5
    {
      messages: {
        tts: {
          auto: "always",
          provider: "azure-speech",
          providers: {
            "azure-speech": {
              apiKey: "${AZURE_SPEECH_KEY}",
              region: "eastus",
              voice: "en-US-JennyNeural",
              lang: "en-US",
              outputFormat: "audio-24khz-48kbitrate-mono-mp3",
              voiceNoteOutputFormat: "ogg-24khz-16bit-mono-opus",
            },
          },
        },
      },
    }
    ```

    The other presets use the same wrapper, with `provider` set to the matching id and one of the following `providers` entries:

    ```json5
    // provider: "elevenlabs"
    elevenlabs: {
      apiKey: "${ELEVENLABS_API_KEY}",
      model: "eleven_multilingual_v2",
      voiceId: "EXAVITQu4vr4xnSDxMaL",
    },

    // provider: "google"
    google: {
      apiKey: "${GEMINI_API_KEY}",
      model: "gemini-3.1-flash-tts-preview",
      voiceName: "Kore",
      // Optional natural-language style prompts:
      // audioProfile: "Speak in a calm, podcast-host tone.",
      // speakerName: "Alex",
    },

    // provider: "gradium"
    gradium: { apiKey: "${GRADIUM_API_KEY}", voiceId: "YTpq7expH9539ERJ" },

    // provider: "inworld"
    inworld: {
      apiKey: "${INWORLD_API_KEY}",
      modelId: "inworld-tts-1.5-max",
      voiceId: "Sarah",
      temperature: 0.7,
    },

    // provider: "tts-local-cli"
    "tts-local-cli": {
      command: "say",
      args: ["-o", "{{OutputPath}}", "{{Text}}"],
      outputFormat: "wav",
      timeoutMs: 120000,
    },

    // provider: "microsoft"
    microsoft: {
      enabled: true,
      voice: "en-US-MichelleNeural",
      lang: "en-US",
      outputFormat: "audio-24khz-48kbitrate-mono-mp3",
      rate: "+0%",
      pitch: "+0%",
    },

    // provider: "minimax"
    minimax: {
      apiKey: "${MINIMAX_API_KEY}",
      model: "speech-2.8-hd",
      voiceId: "English_expressive_narrator",
      speed: 1.0,
      vol: 1.0,
      pitch: 0,
    },

    // provider: "openrouter"
    openrouter: {
      apiKey: "${OPENROUTER_API_KEY}",
      model: "hexgrad/kokoro-82m",
      voice: "af_alloy",
      responseFormat: "mp3",
    },

    // provider: "volcengine"
    volcengine: {
      apiKey: "${VOLCENGINE_TTS_API_KEY}",
      resourceId: "seed-tts-1.0",
      voice: "en_female_anna_mars_bigtts",
    },

    // provider: "xai"
    xai: { apiKey: "${XAI_API_KEY}", voiceId: "eve", language: "en", responseFormat: "mp3" },

    // provider: "xiaomi"
    xiaomi: { apiKey: "${XIAOMI_API_KEY}", model: "mimo-v2.5-tts", voice: "mimo_default", format: "mp3" },
    ```

    The OpenAI preset also sets `summaryModel` and `modelOverrides`, and keeps ElevenLabs configured as a fallback:

    ```json5
    {
      messages: {
        tts: {
          auto: "always",
          provider: "openai",
          summaryModel: "openai/gpt-4.1-mini",
          modelOverrides: { enabled: true },
          providers: {
            openai: { apiKey: "${OPENAI_API_KEY}", model: "gpt-4o-mini-tts", voice: "alloy" },
            elevenlabs: {
              apiKey: "${ELEVENLABS_API_KEY}",
              model: "eleven_multilingual_v2",
              voiceId: "EXAVITQu4vr4xnSDxMaL",
              voiceSettings: { stability: 0.5, similarityBoost: 0.75, style: 0.0, useSpeakerBoost: true, speed: 1.0 },
              applyTextNormalization: "auto",
              languageCode: "en",
            },
          },
        },
      },
    }
    ```

    Per-agent voice overrides

    Use `agents.list[].tts` when one agent should speak with a different provider, voice, model, persona, or auto-TTS mode. The agent block deep-merges over `messages.tts`, so provider credentials can stay in the global provider config:

    ```json5
    {
      messages: {
        tts: {
          auto: "always",
          provider: "elevenlabs",
          providers: {
            elevenlabs: { apiKey: "${ELEVENLABS_API_KEY}", model: "eleven_multilingual_v2" },
          },
        },
      },
      agents: {
        list: [
          {
            id: "reader",
            tts: {
              providers: {
                elevenlabs: { voiceId: "EXAVITQu4vr4xnSDxMaL" },
              },
            },
          },
        ],
      },
    }
    ```

    To pin a per-agent persona, set `agents.list[].tts.persona` alongside the provider config; it overrides the global `messages.tts.persona` for that agent only.

    Precedence order for automatic replies, `/tts audio`, `/tts status`, and the `tts` agent tool:

    1. `messages.tts`
    2. active `agents.list[].tts`
    3. channel override, when the channel supports `channels.<channel>.tts`
    4. account override, when the channel passes `channels.<channel>.accounts.<id>.tts`
    5. local `/tts` preferences for this host
    6. inline `[[tts:...]]` directives when model overrides are enabled

    Channel and account overrides use the same shape as `messages.tts` and deep-merge over the earlier layers, so shared provider credentials can stay in `messages.tts` while a channel or bot account changes only voice, model, persona, or auto mode:

    ```json5
    {
      messages: {
        tts: {
          provider: "openai",
          providers: {
            openai: { apiKey: "${OPENAI_API_KEY}", model: "gpt-4o-mini-tts" },
          },
        },
      },
      channels: {
        feishu: {
          accounts: {
            english: {
              tts: {
                providers: {
                  openai: { voice: "shimmer" },
                },
              },
            },
          },
        },
      },
    }
    ```
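    The layering above can be sketched as a deep merge where later layers win. This is an illustration of the merge semantics, not OpenClaw's actual code:

    ```typescript
    type Tts = { [key: string]: unknown };

    // Deep-merge `override` onto `base`: later layers win, nested objects merge recursively.
    function deepMerge(base: Tts, override: Tts): Tts {
      const out: Tts = { ...base };
      for (const [k, v] of Object.entries(override)) {
        const prev = out[k];
        out[k] =
          v && prev && typeof v === "object" && typeof prev === "object" &&
          !Array.isArray(v) && !Array.isArray(prev)
            ? deepMerge(prev as Tts, v as Tts)
            : v;
      }
      return out;
    }

    const globalTts = {
      provider: "openai",
      providers: { openai: { apiKey: "${OPENAI_API_KEY}", model: "gpt-4o-mini-tts" } },
    };
    const accountTts = { providers: { openai: { voice: "shimmer" } } };

    // The account layer changes only the voice; credentials survive from the global layer.
    const effective = deepMerge(globalTts, accountTts);
    ```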

    Personas

    A persona is a stable spoken identity that can be applied deterministically across providers. It can prefer one provider, define provider-neutral prompt intent, and carry provider-specific bindings for voices, models, prompt templates, seeds, and voice settings.

    Minimal persona

    ```json5
    {
      messages: {
        tts: {
          auto: "always",
          persona: "narrator",
          personas: {
            narrator: {
              label: "Narrator",
              provider: "elevenlabs",
              providers: {
                elevenlabs: { voiceId: "EXAVITQu4vr4xnSDxMaL", modelId: "eleven_multilingual_v2" },
              },
            },
          },
        },
      },
    }
    ```

    Full persona (provider-neutral prompt)

    ```json5
    {
      messages: {
        tts: {
          auto: "always",
          persona: "alfred",
          personas: {
            alfred: {
              label: "Alfred",
              description: "Dry, warm British butler narrator.",
              provider: "google",
              fallbackPolicy: "preserve-persona",
              prompt: {
                profile: "A brilliant British butler. Dry, witty, warm, charming, emotionally expressive, never generic.",
                scene: "A quiet late-night study. Close-mic narration for a trusted operator.",
                sampleContext: "The speaker is answering a private technical request with concise confidence and dry warmth.",
                style: "Refined, understated, lightly amused.",
                accent: "British English.",
                pacing: "Measured, with short dramatic pauses.",
                constraints: ["Do not read configuration values aloud.", "Do not explain the persona."],
              },
              providers: {
                google: {
                  model: "gemini-3.1-flash-tts-preview",
                  voiceName: "Algieba",
                  promptTemplate: "audio-profile-v1",
                },
                openai: { model: "gpt-4o-mini-tts", voice: "cedar" },
                elevenlabs: {
                  voiceId: "voice_id",
                  modelId: "eleven_multilingual_v2",
                  seed: 42,
                  voiceSettings: {
                    stability: 0.65,
                    similarityBoost: 0.8,
                    style: 0.25,
                    useSpeakerBoost: true,
                    speed: 0.95,
                  },
                },
              },
            },
          },
        },
      },
    }
    ```

    Persona resolution

    The active persona is selected deterministically:

    1. `/tts persona <id>` local preference, if set.
    2. `messages.tts.persona`, if set.
    3. No persona.

    Provider selection runs explicit-first:

    1. Direct overrides (CLI, gateway, Talk, allowed TTS directives).
    2. `/tts provider <id>` local preference.
    3. Active persona's `provider`.
    4. `messages.tts.provider`.
    5. Registry auto-select.
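    The explicit-first order can be sketched as a simple fall-through. The field names here are illustrative, not OpenClaw's internals:

    ```typescript
    interface SelectionInput {
      directOverride?: string;   // CLI / gateway / Talk / allowed TTS directive
      localPreference?: string;  // `/tts provider <id>`
      personaProvider?: string;  // active persona's `provider`
      configProvider?: string;   // `messages.tts.provider`
      registryOrder: string[];   // registry auto-select order
    }

    // Return the first explicitly requested provider, else fall back to the registry.
    function pickProvider(input: SelectionInput): string | undefined {
      return (
        input.directOverride ??
        input.localPreference ??
        input.personaProvider ??
        input.configProvider ??
        input.registryOrder[0]
      );
    }
    ```

    With no explicit choice anywhere, the first provider in registry auto-select order wins, matching the "registry auto-select" note in the quick start.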

    For each provider attempt, OpenClaw merges configs in this order:

    1. `messages.tts.providers.<id>`
    2. `messages.tts.personas.<persona>.providers.<id>`
    3. Trusted request overrides
    4. Allowed model-emitted TTS directive overrides

    How providers use persona prompts

    Persona prompt fields (`profile`, `scene`, `sampleContext`, `style`, `accent`, `pacing`, `constraints`) are provider-neutral. Each provider decides how to use them.

    Fallback policy

    `fallbackPolicy` controls behavior when a persona has no binding for the attempted provider:

    | Policy | Behavior |
    | --- | --- |
    | `preserve-persona` | Default. Provider-neutral prompt fields stay available; the provider may use them or ignore them. |
    | `provider-defaults` | Persona is omitted from prompt preparation for that attempt; the provider uses its neutral defaults while fallback to other providers continues. |
    | `fail` | Skip that provider attempt with `reasonCode: "not_configured"` and `personaBinding: "missing"`. Fallback providers are still tried. |

    The whole TTS request only fails when every attempted provider is skipped or fails.
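    A sketch of how a single provider attempt might apply `fallbackPolicy` (hypothetical function and field names; the reason codes come from the table above):

    ```typescript
    type FallbackPolicy = "preserve-persona" | "provider-defaults" | "fail";

    interface Attempt {
      usePersonaPrompt: boolean;  // are provider-neutral prompt fields available?
      skipped?: { reasonCode: string; personaBinding: string };
    }

    // Decide how one provider attempt treats a persona with no binding for it.
    // Note "fail" only skips this attempt; later fallback providers are still tried.
    function planAttempt(hasBinding: boolean, policy: FallbackPolicy = "preserve-persona"): Attempt {
      if (hasBinding || policy === "preserve-persona") return { usePersonaPrompt: true };
      if (policy === "provider-defaults") return { usePersonaPrompt: false };
      return {
        usePersonaPrompt: false,
        skipped: { reasonCode: "not_configured", personaBinding: "missing" },
      };
    }
    ```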

    Model-driven directives

    By default, the assistant can emit `[[tts:...]]` directives to override voice, model, or speed for a single reply, plus an optional `[[tts:text]]...[[/tts:text]]` block for expressive cues that should appear in audio only:

    ```
    Here you go. [[tts:voiceId=pMsXgVXv3BLzUgSXRplE model=eleven_v3 speed=1.1]]
    [[tts:text]](laughs) Read the song once more.[[/tts:text]]
    ```

    When `messages.tts.auto` is `"tagged"`, directives are required to trigger audio. Streaming block delivery strips directives from visible text before the channel sees them, even when split across adjacent blocks.

    `provider=...` is ignored unless `modelOverrides.allowProvider: true`. When a reply declares `provider=...`, the other keys in that directive are parsed only by that provider; unsupported keys are stripped and reported as TTS directive warnings.

    Available directive keys:

    • `provider` (registered provider id; requires `allowProvider: true`)
    • `voice` / `voiceName` / `voice_name` / `google_voice` / `voiceId`
    • `model` / `google_model`
    • `stability`, `similarityBoost`, `style`, `speed`, `useSpeakerBoost`
    • `vol` / `volume` (MiniMax volume, 0–10)
    • `pitch` (MiniMax integer pitch, −12 to 12; fractional values are truncated)
    • `emotion` (Volcengine emotion tag)
    • `applyTextNormalization` (`auto|on|off`)
    • `languageCode` (ISO 639-1)
    • `seed`
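    A simplified sketch of directive stripping and key parsing. The real pipeline also handles directives split across streamed blocks; the helper here is illustrative and collapses whitespace after removal:

    ```typescript
    interface ParsedReply {
      visibleText: string;
      overrides: Record<string, string>;
      audioOnlyText?: string;
    }

    // Strip [[tts:...]] directives from the visible reply and collect key=value overrides.
    function parseTtsDirectives(reply: string): ParsedReply {
      const overrides: Record<string, string> = {};
      let audioOnlyText: string | undefined;

      // Audio-only block: [[tts:text]]...[[/tts:text]]
      let text = reply.replace(
        /\[\[tts:text\]\]([\s\S]*?)\[\[\/tts:text\]\]/g,
        (_m: string, body: string) => {
          audioOnlyText = (audioOnlyText ?? "") + body;
          return "";
        },
      );

      // Inline overrides: [[tts:key=value key=value ...]]
      text = text.replace(/\[\[tts:([^\]]+)\]\]/g, (_m: string, body: string) => {
        for (const pair of body.trim().split(/\s+/)) {
          const eq = pair.indexOf("=");
          if (eq > 0) overrides[pair.slice(0, eq)] = pair.slice(eq + 1);
        }
        return "";
      });

      return { visibleText: text.replace(/\s+/g, " ").trim(), overrides, audioOnlyText };
    }
    ```

    Applied to the example above, only "Here you go." remains visible, while the voice, model, and speed overrides and the "(laughs)" cue go to the TTS path.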

    Disable model overrides entirely:

    ```json5
    { messages: { tts: { modelOverrides: { enabled: false } } } }
    ```

    Allow provider switching while keeping other knobs configurable:

    ```json5
    { messages: { tts: { modelOverrides: { enabled: true, allowProvider: true, allowSeed: false } } } }
    ```

    Slash commands

    Single command

    There is a single command, `/tts`. On Discord, OpenClaw also registers `/voice` because `/tts` is a built-in Discord command; the text command `/tts ...` still works.

    ```
    /tts off | on | status
    /tts chat on | off | default
    /tts latest
    /tts provider <id>
    /tts persona <id> | off
    /tts limit <chars>
    /tts summary off
    /tts audio <text>
    ```

    note

    Commands require an authorized sender (allowlist/owner rules apply) and either `commands.text` or native command registration must be enabled.

    Behavior notes:

    • `/tts on` writes the local TTS preference to `always`; `/tts off` writes it to `off`.
    • `/tts chat on|off|default` writes a session-scoped auto-TTS override for the current chat.
    • `/tts persona <id>` writes the local persona preference; `/tts persona off` clears it.
    • `/tts latest` reads the latest assistant reply from the current session transcript and sends it as audio once. It stores only a hash of that reply on the session entry to suppress duplicate voice sends.
    • `/tts audio` generates a one-off audio reply (does not toggle TTS on).
    • `limit` and `summary` are stored in local prefs, not the main config.
    • `/tts status` includes fallback diagnostics for the latest attempt: `Fallback: <primary> -> <used>`, `Attempts: ...`, and per-attempt detail (`provider:outcome(reasonCode) latency`).
    • `/status` shows the active TTS mode plus configured provider, model, voice, and sanitized custom endpoint metadata when TTS is enabled.

    Per-user preferences

    Slash commands write local overrides to `prefsPath`. The default is `~/.openclaw/settings/tts.json`; override with the `OPENCLAW_TTS_PREFS` env var or `messages.tts.prefsPath`.

    | Stored field | Effect |
    | --- | --- |
    | `auto` | Local auto-TTS override (`always`, `off`, …) |
    | `provider` | Local primary provider override |
    | `persona` | Local persona override |
    | `maxLength` | Summary threshold (default `1500` chars) |
    | `summarize` | Summary toggle (default `true`) |

    These override the effective config from `messages.tts` plus the active `agents.list[].tts` block for that host.

    Output formats (fixed)

    TTS voice delivery is channel-capability driven. Channel plugins advertise whether voice-style TTS should ask providers for a native `voice-note` target or keep normal `audio-file` synthesis and only mark compatible output for voice delivery.

    • Voice-note capable channels: voice-note replies prefer Opus (`opus_48000_64` from ElevenLabs, `opus` from OpenAI).
      • 48 kHz / 64 kbps is a good voice message tradeoff.
    • Feishu / WhatsApp: when a voice-note reply is produced as MP3/WebM/WAV/M4A or another likely audio file, the channel plugin transcodes it to 48 kHz Ogg/Opus with `ffmpeg` before sending the native voice message. WhatsApp sends the result through the Baileys `audio` payload with `ptt: true` and `audio/ogg; codecs=opus`. If conversion fails, Feishu receives the original file as an attachment; the WhatsApp send fails rather than posting an incompatible PTT payload.
    • BlueBubbles: keeps provider synthesis on the normal audio-file path; MP3 and CAF outputs are marked for iMessage voice memo delivery.
    • Other channels: MP3 (`mp3_44100_128` from ElevenLabs, `mp3` from OpenAI).
      • 44.1 kHz / 128 kbps is the default balance for speech clarity.
    • MiniMax: MP3 (`speech-2.8-hd` model, 32 kHz sample rate) for normal audio attachments. For channel-advertised voice-note targets, OpenClaw transcodes the MiniMax MP3 to 48 kHz Opus with `ffmpeg` before delivery when the channel advertises transcoding.
    • Xiaomi MiMo: MP3 by default, or WAV when configured. For channel-advertised voice-note targets, OpenClaw transcodes Xiaomi output to 48 kHz Opus with `ffmpeg` before delivery when the channel advertises transcoding.
    • Local CLI: uses the configured `outputFormat`. Voice-note targets are converted to Ogg/Opus and telephony output is converted to raw 16 kHz mono PCM with `ffmpeg`.
    • Google Gemini: the Gemini API TTS returns raw 24 kHz PCM. OpenClaw wraps it as WAV for audio attachments, transcodes it to 48 kHz Opus for voice-note targets, and returns PCM directly for Talk/telephony.
    • Gradium: WAV for audio attachments, Opus for voice-note targets, and `ulaw_8000` at 8 kHz for telephony.
    • Inworld: MP3 for normal audio attachments, native `OGG_OPUS` for voice-note targets, and raw `PCM` at 22050 Hz for Talk/telephony.
    • xAI: MP3 by default; `responseFormat` may be `mp3`, `wav`, `pcm`, `mulaw`, or `alaw`. OpenClaw uses xAI's batch REST TTS endpoint and returns a complete audio attachment; xAI's streaming TTS WebSocket is not used by this provider path. Native Opus voice-note format is not supported by this path.
    • Microsoft: uses `microsoft.outputFormat` (default `audio-24khz-48kbitrate-mono-mp3`).
      • The bundled transport accepts an `outputFormat`, but not all formats are available from the service.
      • Output format values follow Microsoft Speech output formats (including Ogg/WebM Opus).
      • Telegram `sendVoice` accepts OGG/MP3/M4A; use OpenAI/ElevenLabs if you need guaranteed Opus voice messages.
      • If the configured Microsoft output format fails, OpenClaw retries with MP3.

    OpenAI/ElevenLabs output formats are fixed per channel (see above).
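    For ElevenLabs and OpenAI, the fixed mapping above can be sketched as a small lookup. This is illustrative only: the real selection is channel-capability driven, and the `"pcm"` return for the Talk/telephony branch is an assumed simplification of the provider-native PCM / u-law paths described above:

    ```typescript
    type Target = "voice-note" | "audio-file" | "telephony";
    type Provider = "elevenlabs" | "openai";

    // Map a delivery target to the fixed formats listed above for two providers.
    function requestFormat(provider: Provider, target: Target): string {
      if (target === "voice-note") {
        return provider === "elevenlabs" ? "opus_48000_64" : "opus";
      }
      if (target === "audio-file") {
        return provider === "elevenlabs" ? "mp3_44100_128" : "mp3";
      }
      // Talk/telephony: raw PCM (or u-law, depending on the provider path).
      return "pcm";
    }
    ```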

    Auto-TTS behavior

    When `messages.tts.auto` is enabled, OpenClaw:

    • Skips TTS if the reply already contains media or a `MEDIA:` directive.
    • Skips very short replies (under 10 chars).
    • Summarizes long replies when summaries are enabled, using `summaryModel` (or `agents.defaults.model.primary`).
    • Attaches the generated audio to the reply.
    • In `mode: "final"`, still sends audio-only TTS for streamed final replies after the text stream completes; the generated media goes through the same channel media normalization as normal reply attachments.

    If the reply exceeds `maxLength` and summary is off (or no API key is configured for the summary model), audio is skipped and the normal text reply is sent.

    ```
    Reply -> TTS enabled?
      no  -> send text
      yes -> has media / MEDIA: / short?
        yes -> send text
        no  -> length > limit?
          no  -> TTS -> attach audio
          yes -> summary enabled?
            no  -> send text
            yes -> summarize -> TTS -> attach audio
    ```
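    The same decision flow as a sketch, with illustrative field names:

    ```typescript
    interface Reply { text: string; hasMedia: boolean }
    type Decision = "send-text" | "tts" | "summarize-then-tts";

    // Skip audio for media replies, very short replies, and over-limit replies
    // when summarization is unavailable; otherwise synthesize (summarizing first
    // when the reply exceeds the limit).
    function autoTtsDecision(
      reply: Reply,
      opts: { enabled: boolean; maxLength: number; summarize: boolean },
    ): Decision {
      if (!opts.enabled) return "send-text";
      if (reply.hasMedia || reply.text.startsWith("MEDIA:") || reply.text.length < 10) {
        return "send-text";
      }
      if (reply.text.length <= opts.maxLength) return "tts";
      return opts.summarize ? "summarize-then-tts" : "send-text";
    }
    ```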

    Output formats by channel

    | Target | Format |
    | --- | --- |
    | Feishu / Matrix / Telegram / WhatsApp | Voice-note replies prefer Opus (`opus_48000_64` from ElevenLabs, `opus` from OpenAI). 48 kHz / 64 kbps balances clarity and size. |
    | Other channels | MP3 (`mp3_44100_128` from ElevenLabs, `mp3` from OpenAI). 44.1 kHz / 128 kbps default for speech. |
    | Talk / telephony | Provider-native PCM (Inworld 22050 Hz, Google 24 kHz), or `ulaw_8000` from Gradium for telephony. |

    Per-provider notes:

    • Feishu / WhatsApp transcoding: when a voice-note reply lands as MP3/WebM/WAV/M4A, the channel plugin transcodes to 48 kHz Ogg/Opus with `ffmpeg`. WhatsApp sends through Baileys with `ptt: true` and `audio/ogg; codecs=opus`. If conversion fails, Feishu falls back to attaching the original file; the WhatsApp send fails rather than posting an incompatible PTT payload.
    • MiniMax / Xiaomi MiMo: default MP3 (32 kHz for MiniMax `speech-2.8-hd`); transcoded to 48 kHz Opus for voice-note targets via `ffmpeg`.
    • Local CLI: uses the configured `outputFormat`. Voice-note targets are converted to Ogg/Opus and telephony output to raw 16 kHz mono PCM.
    • Google Gemini: returns raw 24 kHz PCM. OpenClaw wraps it as WAV for attachments, transcodes to 48 kHz Opus for voice-note targets, and returns PCM directly for Talk/telephony.
    • Inworld: MP3 attachments, native `OGG_OPUS` voice-note, raw `PCM` at 22050 Hz for Talk/telephony.
    • xAI: MP3 by default; `responseFormat` may be `mp3|wav|pcm|mulaw|alaw`. Uses xAI's batch REST endpoint; streaming WebSocket TTS is not used. Native Opus voice-note format is not supported.
    • Microsoft: uses `microsoft.outputFormat` (default `audio-24khz-48kbitrate-mono-mp3`). Telegram `sendVoice` accepts OGG/MP3/M4A; use OpenAI/ElevenLabs if you need guaranteed Opus voice messages. If the configured Microsoft format fails, OpenClaw retries with MP3.

    OpenAI and ElevenLabs output formats are fixed per channel as listed above.

    Field reference

    Agent tool

    The `tts` tool converts text to speech and returns an audio attachment for reply delivery. On Feishu, Matrix, Telegram, and WhatsApp, the audio is delivered as a voice message rather than a file attachment. Feishu and WhatsApp can transcode non-Opus TTS output on this path when `ffmpeg` is available.

    WhatsApp sends audio through Baileys as a PTT voice note (`audio` with `ptt: true`) and sends visible text separately from PTT audio, because clients do not consistently render captions on voice notes.

    The tool accepts optional `channel` and `timeoutMs` fields; `timeoutMs` is a per-call provider request timeout in milliseconds.

    Gateway RPC

    | Method | Purpose |
    | --- | --- |
    | `tts.status` | Read current TTS state and last attempt. |
    | `tts.enable` | Set local auto preference to `always`. |
    | `tts.disable` | Set local auto preference to `off`. |
    | `tts.convert` | One-off text → audio. |
    | `tts.setProvider` | Set local provider preference. |
    | `tts.setPersona` | Set local persona preference. |
    | `tts.providers` | List configured providers and status. |

    Service links

    • OpenAI text-to-speech guide
    • OpenAI Audio API reference
    • Azure Speech REST text-to-speech
    • Azure Speech provider
    • ElevenLabs Text to Speech
    • ElevenLabs Authentication
    • Gradium
    • Inworld TTS API
    • MiniMax T2A v2 API
    • Volcengine TTS HTTP API
    • Xiaomi MiMo speech synthesis
    • node-edge-tts
    • Microsoft Speech output formats
    • xAI text to speech

    Related

    • Media overview
    • Music generation
    • Video generation
    • Slash commands
    • Voice call plugin

    © 2024 TaskFlow Mirror

    Powered by TaskFlow Sync Engine