vLLM can serve open-source (and some custom) models via an OpenAI-compatible HTTP API. OpenClaw connects to vLLM using the `openai-completions` API. OpenClaw can also auto-discover available models from vLLM when you opt in by setting `VLLM_API_KEY` or configuring `models.providers.vllm`. OpenClaw treats `vllm` as an OpenAI-compatible provider and sets `stream_options.include_usage` so streamed responses include token usage.

| Property | Value |
|---|---|
| Provider ID | `vllm` |
| API | `openai-completions` |
| Auth | `VLLM_API_KEY` |
| Default base URL | `http://127.0.0.1:8000/v1` |
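If vLLM is not already running, its built-in server is the usual way to expose this endpoint. A minimal sketch, assuming the default host and port; `your-model-id` is a placeholder for whatever model your hardware can serve, not something OpenClaw prescribes:

```bash
# Start vLLM's OpenAI-compatible server on the default host/port.
# "your-model-id" is a placeholder; substitute any model you can serve.
vllm serve your-model-id --host 127.0.0.1 --port 8000
```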
Start your vLLM server so its OpenAI-compatible endpoint is reachable at the default base URL:

```
http://127.0.0.1:8000/v1
```

Export a placeholder API key (vLLM does not check the key unless you start it with one):

```bash
export VLLM_API_KEY="vllm-local"
```

Then set your default model to the vLLM provider:

```json5
{
  agents: {
    defaults: {
      model: { primary: "vllm/your-model-id" },
    },
  },
}
```
When `VLLM_API_KEY` is set or `models.providers.vllm` is configured, OpenClaw queries

```
GET http://127.0.0.1:8000/v1/models
```

and converts the returned IDs into model entries.
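To preview what auto-discovery will see, you can query the same endpoint yourself. This assumes the default base URL and the `VLLM_API_KEY` exported above:

```bash
# List the model IDs vLLM is serving (the same endpoint OpenClaw queries).
# The Bearer header is only enforced if vLLM was started with an API key.
curl -s http://127.0.0.1:8000/v1/models \
  -H "Authorization: Bearer $VLLM_API_KEY"
```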
Use explicit config when you need accurate `contextWindow` and `maxTokens` values:

```json5
{
  models: {
    providers: {
      vllm: {
        baseUrl: "http://127.0.0.1:8000/v1",
        apiKey: "${VLLM_API_KEY}",
        api: "openai-completions",
        request: { allowPrivateNetwork: true },
        timeoutSeconds: 300, // Optional: extend connect/header/body/request timeout for slow local models
        models: [
          {
            id: "your-model-id",
            name: "Local vLLM Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
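Before pointing OpenClaw at the provider, it can help to confirm the endpoint answers a basic chat request. A minimal sketch, assuming the default base URL and the placeholder model ID from the config above:

```bash
# Send a one-message chat completion to verify the server and model ID respond.
curl -s http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $VLLM_API_KEY" \
  -d '{"model": "your-model-id", "messages": [{"role": "user", "content": "Say hello."}]}'
```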