Technical reference for the OpenClaw framework.
LM Studio is a friendly yet powerful app for running open-weight models on your own hardware. It runs llama.cpp (GGUF) or MLX models (Apple Silicon) and comes as a GUI app or a headless daemon (`lms`). Install the CLI with:

```bash
curl -fsSL https://lmstudio.ai/install.sh | bash
```
Make sure you either start the desktop app or run the daemon:

```bash
lms daemon up
```

Then start the local API server:

```bash
lms server start --port 1234
```
If you are using the app, make sure you have JIT enabled for a smooth experience. Learn more in the LM Studio JIT and TTL guide.
If your LM Studio server requires authentication, export your key as `LM_API_TOKEN`:

```bash
export LM_API_TOKEN="your-lm-studio-api-token"
```
If LM Studio authentication is disabled, you can leave the API key blank during interactive OpenClaw setup.
For LM Studio auth setup details, see LM Studio Authentication.
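As a quick sketch, here is an authenticated request against the local server. It assumes LM Studio accepts a standard OpenAI-style bearer token (check the LM Studio Authentication docs for the authoritative scheme) and falls back to a placeholder token when none is exported:

```shell
# Assumption: the server accepts an OpenAI-style "Authorization: Bearer" header.
# Falls back to a placeholder token if LM_API_TOKEN is not already exported.
export LM_API_TOKEN="${LM_API_TOKEN:-your-lm-studio-api-token}"
curl -s --max-time 5 -H "Authorization: Bearer $LM_API_TOKEN" \
  http://localhost:1234/api/v1/models || echo "LM Studio not reachable"
```

The trailing `|| echo` keeps provisioning scripts from aborting when the server is simply not up yet.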
Run interactive onboarding and pick LM Studio when prompted:

```bash
openclaw onboard
```
During onboarding you choose a default model. You can also set or change it later:
```bash
openclaw models set lmstudio/qwen/qwen3.5-9b
```
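The model key used above must match one your server actually exposes. As a sketch, keys can be pulled out of the models listing with `jq`; the `data[].key` response shape mocked here is an assumption, so adjust the filter to what your server really returns:

```shell
# Mock of a models listing; in practice pipe in
# `curl http://localhost:1234/api/v1/models` instead.
# The data[].key shape is an assumption about the response format.
response='{"data":[{"key":"qwen/qwen3.5-9b"},{"key":"qwen/qwen3-coder-next"}]}'
printf '%s' "$response" | jq -r '.data[].key'
# -> qwen/qwen3.5-9b
#    qwen/qwen3-coder-next
```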
LM Studio model keys follow an `author/model-name` pattern, for example `qwen/qwen3.5-9b`; OpenClaw addresses them with the provider prefix, as in `lmstudio/qwen/qwen3.5-9b`. To see which keys your server exposes, query `curl http://localhost:1234/api/v1/models` and read each model's `key` field.

Use non-interactive onboarding when you want to script setup (CI, provisioning, remote bootstrap):
```bash
openclaw onboard \
  --non-interactive \
  --accept-risk \
  --auth-choice lmstudio
```
Or specify the base URL, model, and optional API key:
```bash
openclaw onboard \
  --non-interactive \
  --accept-risk \
  --auth-choice lmstudio \
  --custom-base-url http://localhost:1234/v1 \
  --lmstudio-api-key "$LM_API_TOKEN" \
  --custom-model-id qwen/qwen3.5-9b
```
Pass `--custom-model-id` as the bare key (for example `qwen/qwen3.5-9b`), without the `lmstudio/` prefix. For authenticated LM Studio servers, pass the key with `--lmstudio-api-key` (or export `LM_API_TOKEN`); prefer the provider-specific `--lmstudio-api-key` over the generic `--custom-api-key`.

This writes a `models.providers.lmstudio` entry and sets `lmstudio/<custom-model-id>` as the default model (`lmstudio:default`). Interactive setup can prompt for an optional preferred load context length and applies it across the discovered LM Studio models it saves into config.

The LM Studio plugin config trusts the configured LM Studio endpoint for model requests, including loopback, LAN, and tailnet hosts. You can opt out by setting `models.providers.lmstudio.request.allowPrivateNetwork: false`.

LM Studio is streaming-usage compatible: when it does not emit an OpenAI-shaped `usage` object, token counts are derived from the `timings.prompt_n` and `timings.predicted_n` fields. The same streaming-usage behavior applies to other OpenAI-compatible local backends.
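As a sketch of that usage mapping, here is how OpenAI-shaped counts can be derived from llama.cpp-style `timings` fields with `jq`. The exact chunk layout is an assumption for illustration:

```shell
# Hypothetical final stream chunk: llama.cpp-style timings, no OpenAI `usage`.
chunk='{"timings":{"prompt_n":128,"predicted_n":64}}'
printf '%s' "$chunk" | jq '{prompt_tokens: .timings.prompt_n,
  completion_tokens: .timings.predicted_n,
  total_tokens: (.timings.prompt_n + .timings.predicted_n)}'
```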
When LM Studio's `/api/v1/models` listing advertises reasoning control with `allowed_options: ["off", "on"]`, OpenClaw maps its reasoning levels onto those options (for example, `off` maps to no `/think`, while levels such as `low` and `medium` map to `on`). A full provider entry looks like:

```json5
{
  models: {
    providers: {
      lmstudio: {
        baseUrl: "http://localhost:1234/v1",
        apiKey: "${LM_API_TOKEN}",
        api: "openai-completions",
        models: [
          {
            id: "qwen/qwen3-coder-next",
            name: "Qwen 3 Coder Next",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
Make sure LM Studio is running. If authentication is enabled, also set `LM_API_TOKEN`.

```bash
# Start via desktop app, or headless:
lms server start --port 1234
```
Verify the API is accessible:
```bash
curl http://localhost:1234/api/v1/models
```
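For scripts, a tolerant reachability check can replace the bare `curl`. This sketch assumes the default port and is not OpenClaw-specific:

```shell
# Probe the models endpoint with a short timeout; never abort the script.
if curl -fsS --max-time 2 http://localhost:1234/api/v1/models >/dev/null 2>&1; then
  status="reachable"
else
  status="not reachable"
fi
echo "LM Studio API: $status"
```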
If setup reports HTTP 401, verify your API key in `LM_API_TOKEN`.

LM Studio supports just-in-time (JIT) model loading, where models are loaded on first request. Make sure JIT is enabled to avoid "Model not loaded" errors.
Use the LM Studio host's reachable address and keep the `/v1` suffix:

```json5
{
  models: {
    providers: {
      lmstudio: {
        baseUrl: "http://gpu-box.local:1234/v1",
        apiKey: "lmstudio",
        api: "openai-completions",
        models: [{ id: "qwen/qwen3.5-9b" }],
      },
    },
  },
}
```
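Before pointing OpenClaw at a remote host, it can help to confirm the OpenAI-compatible path resolves from the client machine. The hostname and port below are the example's assumptions, and `/v1/models` is assumed to exist as the standard OpenAI-compatible listing route:

```shell
# Check the remote /v1 endpoint from the machine running OpenClaw.
host="http://gpu-box.local:1234"   # assumption: example host from the config above
if curl -fsS --max-time 2 "$host/v1/models" >/dev/null 2>&1; then
  echo "remote LM Studio reachable"
else
  echo "cannot reach $host from this machine"
fi
```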
Unlike generic OpenAI-compatible providers, the `lmstudio` provider is not limited to `localhost`/`127.0.0.1`; for other providers you must set `models.providers.<id>.request.allowPrivateNetwork: true` to allow private-network hosts.