Venice AI provides privacy-focused AI inference with support for uncensored models and access to major proprietary models through their anonymized proxy. All inference is private by default — no training on your data, no logging.
Venice offers two privacy levels, and understanding this is key to choosing your model:
| Mode | Description | Models |
|---|---|---|
| Private | Fully private. Prompts/responses are never stored or logged. Ephemeral. | Llama, Qwen, DeepSeek, Kimi, MiniMax, Venice Uncensored, etc. |
| Anonymized | Proxied through Venice with metadata stripped. The underlying provider (OpenAI, Anthropic, Google, xAI) sees anonymized requests. | Claude, GPT, Gemini, Grok |
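As a sketch, the split above can be expressed in code. The prefix-based grouping and the `privacy_mode` helper below are illustrative only, not part of OpenClaw:

```python
# Illustrative mapping of the two privacy levels from the table above.
# The model IDs are the ones used elsewhere on this page; the grouping
# logic itself is an assumption, not an OpenClaw API.

ANONYMIZED_PREFIXES = ("claude", "gpt", "gemini", "grok")

def privacy_mode(model_id: str) -> str:
    """Return 'anonymized' for proxied proprietary models, else 'private'."""
    name = model_id.removeprefix("venice/")
    if name.startswith(ANONYMIZED_PREFIXES):
        return "anonymized"
    return "private"

print(privacy_mode("venice/kimi-k2-5"))        # private
print(privacy_mode("venice/claude-opus-4-6"))  # anonymized
```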
<Tabs>
  <Tab title="Interactive (recommended)">
    ```bash
    openclaw onboard --auth-choice venice-api-key
    ```
    This will:
    1. Prompt for your API key (or use existing `VENICE_API_KEY`)
    2. Show all available Venice models
    3. Let you pick your default model
    4. Configure the provider automatically
  </Tab>
  <Tab title="Environment variable">
    ```bash
    export VENICE_API_KEY="vapi_xxxxxxxxxxxx"
    ```
  </Tab>
  <Tab title="Non-interactive">
    ```bash
    openclaw onboard --non-interactive \
      --auth-choice venice-api-key \
      --venice-api-key "vapi_xxxxxxxxxxxx"
    ```
  </Tab>
</Tabs>
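Before onboarding, it can help to sanity-check the key format. The `venice_key_ok` helper and its `vapi_` prefix check are assumptions based on the placeholder keys shown in the setup examples, not an official validation rule:

```python
import os

def venice_key_ok(key: str) -> bool:
    # Assumption: keys look like the "vapi_xxxxxxxxxxxx" placeholders above.
    return key.startswith("vapi_") and len(key) > len("vapi_")

if __name__ == "__main__":
    # Prints True when VENICE_API_KEY is set and plausibly formatted.
    print(venice_key_ok(os.environ.get("VENICE_API_KEY", "")))
```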
After setup, OpenClaw shows all available Venice models. Pick based on your needs:
For example: `venice/kimi-k2-5` (private) or `venice/claude-opus-4-6` (anonymized). Change your default model anytime:
```bash
openclaw models set venice/kimi-k2-5
openclaw models set venice/claude-opus-4-6
```
List all available models:
```bash
openclaw models list | grep venice
```
You can also run `openclaw configure`.

| Use Case | Recommended Model | Why |
|---|---|---|
| General chat (default) | `kimi-k2-5` | Strong private reasoning plus vision |
| Best overall quality | `claude-opus-4-6` | Strongest anonymized Venice option |
| Privacy + coding | `qwen3-coder-480b-a35b-instruct` | Private coding model with large context |
| Private vision | `kimi-k2-5` | Vision support without leaving private mode |
| Fast + cheap | `qwen3-4b` | Lightweight reasoning model |
| Complex private tasks | `deepseek-v3.2` | Strong reasoning, but no Venice tool support |
| Uncensored | `venice-uncensored` | No content restrictions |
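The recommendations above can be folded into a small lookup. The `pick_model` helper and its use-case labels are hypothetical illustrations, not an OpenClaw API:

```python
# Hypothetical lookup built from the recommendation table above.
RECOMMENDED = {
    "general":    "venice/kimi-k2-5",
    "quality":    "venice/claude-opus-4-6",
    "coding":     "venice/qwen3-coder-480b-a35b-instruct",
    "vision":     "venice/kimi-k2-5",
    "cheap":      "venice/qwen3-4b",
    "reasoning":  "venice/deepseek-v3.2",
    "uncensored": "venice/venice-uncensored",
}

def pick_model(use_case: str) -> str:
    # Fall back to the general-purpose default for unknown use cases.
    return RECOMMENDED.get(use_case, RECOMMENDED["general"])

print(pick_model("coding"))  # venice/qwen3-coder-480b-a35b-instruct
```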
If Venice exposes DeepSeek V4 models such as `venice/deepseek-v4-pro` or `venice/deepseek-v4-flash`, OpenClaw picks them up as well, surfacing their `reasoning_content` output as `thinking`. OpenClaw automatically discovers models from the Venice API when `VENICE_API_KEY` is set, by querying the `/models` endpoint.

| Feature | Support |
|---|---|
| Streaming | All models |
| Function calling | Most models (check `supportsFunctionCalling`) |
| Vision/Images | Models marked with "Vision" feature |
| JSON mode | Supported via `response_format` |
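For example, a JSON-mode request body might look like the sketch below. The payload shape assumes an OpenAI-compatible chat completions API behind Venice; per-model support for each field should be checked against the feature table above:

```python
import json

# Sketch of a JSON-mode chat request body (OpenAI-compatible shape assumed).
payload = {
    "model": "venice/kimi-k2-5",
    "messages": [
        {"role": "user", "content": "Reply with a JSON object: {\"ok\": true}"}
    ],
    "response_format": {"type": "json_object"},  # JSON mode (see table above)
    "stream": True,                              # streaming: all models
}

body = json.dumps(payload)
print(json.loads(body)["response_format"]["type"])  # json_object
```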
Venice uses a credit-based system; check venice.ai/pricing for current rates. The table below compares anonymized access through Venice with calling the provider's API directly:
| Aspect | Venice (Anonymized) | Direct API |
|---|---|---|
| Privacy | Metadata stripped, anonymized | Your account linked |
| Latency | +10-50ms (proxy) | Direct |
| Features | Most features supported | Full features |
| Billing | Venice credits | Provider billing |
```bash
# Use the default private model
openclaw agent --model venice/kimi-k2-5 --message "Quick health check"

# Use Claude Opus via Venice (anonymized)
openclaw agent --model venice/claude-opus-4-6 --message "Summarize this task"

# Use uncensored model
openclaw agent --model venice/venice-uncensored --message "Draft options"

# Use vision model with image
openclaw agent --model venice/qwen3-vl-235b-a22b --message "Review attached image"

# Use coding model
openclaw agent --model venice/qwen3-coder-480b-a35b-instruct --message "Refactor this function"
```