Groq provides ultra-fast inference on open-source models (Llama, Gemma, Mistral, and more) using custom LPU hardware. OpenClaw connects to Groq through its OpenAI-compatible API.
| Property | Value |
|---|---|
| Provider | `groq` |
| Auth | `GROQ_API_KEY` |
| API | OpenAI-compatible |
```json5
{
  env: {
    GROQ_API_KEY: "gsk_...",
  },
  agents: {
    defaults: {
      model: { primary: "groq/llama-3.3-70b-versatile" },
    },
  },
}
```
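Because the API is OpenAI-compatible, the call OpenClaw ultimately makes is a standard chat-completions request against Groq's published base URL. A stdlib-only sketch of that request's shape; the helper is illustrative and only builds the request (it is not an OpenClaw internal), and note that at the API level the model ID drops the `groq/` routing prefix:

```python
import os

# Groq's OpenAI-compatible base URL (published by Groq).
GROQ_BASE = "https://api.groq.com/openai/v1"

def build_chat_request(model: str, prompt: str) -> tuple[str, dict, dict]:
    """Build (url, headers, body) for an OpenAI-style chat completion.

    Illustrative helper: it constructs the request, it does not send it.
    """
    url = f"{GROQ_BASE}/chat/completions"
    headers = {
        # Falls back to a placeholder when GROQ_API_KEY is unset.
        "Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', 'gsk_...')}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,  # bare model ID, without the "groq/" prefix
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, body

url, _, body = build_chat_request("llama-3.3-70b-versatile", "Hello!")
print(url)  # → https://api.groq.com/openai/v1/chat/completions
```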
Groq's model catalog changes frequently. Run the following to see what is currently available:

```
openclaw models list | grep groq
```

| Model | Notes |
|---|---|
| Llama 3.3 70B Versatile | General-purpose, large context |
| Llama 3.1 8B Instant | Fast, lightweight |
| Gemma 2 9B | Compact, efficient |
| Mixtral 8x7B | MoE architecture, strong reasoning |
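To trade capability for speed, the `agents.defaults.model` block from the configuration example above can point at the lightweight entry in this table instead. A sketch; only the model ID changes:

```json5
{
  agents: {
    defaults: {
      model: { primary: "groq/llama-3.1-8b-instant" },
    },
  },
}
```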
OpenClaw maps its shared `/think` levels to Groq's `reasoning_effort` parameter on reasoning models such as `qwen/qwen3-32b`. Supported values are `none`, `default`, `low`, `medium`, and `high`.

Groq also provides fast Whisper-based audio transcription. When configured as a media-understanding provider, OpenClaw uses Groq's `whisper-large-v3-turbo` model. Enable it under `tools.media.audio`:

```json5
{
  tools: {
    media: {
      audio: {
        models: [{ provider: "groq" }],
      },
    },
  },
}
```
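On the wire, both features above reduce to plain fields on Groq's OpenAI-style API. A small sketch; the exact request shape is an assumption based on the OpenAI conventions Groq mirrors, and the helper is illustrative rather than an OpenClaw internal:

```python
GROQ_BASE = "https://api.groq.com/openai/v1"

def reasoning_body(prompt: str, effort: str) -> dict:
    """Chat-completions body with Groq's reasoning_effort field set."""
    return {
        "model": "qwen/qwen3-32b",
        "messages": [{"role": "user", "content": prompt}],
        # One of the levels listed above:
        # "none", "default", "low", "medium", "high".
        "reasoning_effort": effort,
    }

# Audio transcription POSTs multipart/form-data ("model" and "file"
# fields) to the matching OpenAI-style endpoint.
TRANSCRIBE_URL = f"{GROQ_BASE}/audio/transcriptions"

print(reasoning_body("2+2?", "low")["reasoning_effort"])  # → low
```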