Hugging Face Inference Providers offer OpenAI-compatible chat completions through a single router API. You get access to many models (DeepSeek, Llama, and more) with one token. OpenClaw uses the OpenAI-compatible endpoint (chat completions only); for text-to-image, embeddings, or speech use the HF inference clients directly.
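Because the router speaks the standard OpenAI chat-completions protocol, a request can be assembled with nothing but the Python standard library. A minimal sketch (the helper name `build_chat_request` is illustrative, not part of OpenClaw or the HF clients):

```python
import json
import os


def build_chat_request(model: str, messages: list[dict]) -> tuple[str, dict, bytes]:
    """Assemble an OpenAI-compatible chat-completions request for the HF router."""
    url = "https://router.huggingface.co/v1/chat/completions"
    headers = {
        # The router expects a standard Bearer token.
        "Authorization": f"Bearer {os.environ.get('HF_TOKEN', '')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages}).encode()
    return url, headers, body


url, headers, body = build_chat_request(
    "deepseek-ai/DeepSeek-R1",
    [{"role": "user", "content": "Hello"}],
)
```

The resulting triple can be sent with `urllib.request` or any HTTP client; any OpenAI-compatible SDK pointed at the same base URL works equally well.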
The provider ID is `huggingface`. The API token is read from the `HUGGINGFACE_HUB_TOKEN` or `HF_TOKEN` environment variable, and requests go to the router base URL `https://router.huggingface.co/v1`.

<Warning>
The token must have the **Make calls to Inference Providers** permission enabled or API requests will be rejected.
</Warning>
To onboard interactively:

```bash
openclaw onboard --auth-choice huggingface-api-key
```
You can also set or change the default model later in config:

```json5
{
  agents: {
    defaults: {
      model: { primary: "huggingface/deepseek-ai/DeepSeek-R1" },
    },
  },
}
```
For non-interactive setup:

```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice huggingface-api-key \
  --huggingface-api-key "$HF_TOKEN"
```
This will set `huggingface/deepseek-ai/DeepSeek-R1` as the default model. Model refs use the form `huggingface/<org>/<model>`; the available models are listed at `https://router.huggingface.co/v1/models`.

| Model | Ref (prefix with `huggingface/`) |
|---|---|
| DeepSeek R1 | `deepseek-ai/DeepSeek-R1` |
| DeepSeek V3.2 | `deepseek-ai/DeepSeek-V3.2` |
| Qwen3 8B | `Qwen/Qwen3-8B` |
| Qwen2.5 7B Instruct | `Qwen/Qwen2.5-7B-Instruct` |
| Qwen3 32B | `Qwen/Qwen3-32B` |
| Llama 3.3 70B Instruct | `meta-llama/Llama-3.3-70B-Instruct` |
| Llama 3.1 8B Instruct | `meta-llama/Llama-3.1-8B-Instruct` |
| GPT-OSS 120B | `openai/gpt-oss-120b` |
| GLM 4.7 | `zai-org/GLM-4.7` |
| Kimi K2.5 | `moonshotai/Kimi-K2.5` |
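The table lists bare Hugging Face model IDs, while OpenClaw expects them prefixed with `huggingface/`. The round trip between the two forms can be sketched like this (helper names are illustrative):

```python
def to_openclaw_ref(model_id: str) -> str:
    """Prefix a bare Hugging Face model ID (org/model) with the provider."""
    return f"huggingface/{model_id}"


def parse_openclaw_ref(ref: str) -> tuple[str, str, str]:
    """Split a huggingface/<org>/<model> ref into (provider, org, model)."""
    provider, org, model = ref.split("/", 2)
    return provider, org, model
```

For example, `to_openclaw_ref("Qwen/Qwen3-8B")` yields the ref to put in the `model.primary` config field.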