first commit
This commit is contained in:
159
content/providers/anthropic.md
Normal file
@@ -0,0 +1,159 @@
---
read_when:
  - You want to use Anthropic models in OpenClaw
  - You want to use a setup-token instead of an API key
summary: Use Anthropic Claude in OpenClaw via an API key or a setup-token
title: Anthropic
x-i18n:
  generated_at: "2026-02-03T10:08:33Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: a78ccd855810a93e71d7138af4d3fc7d66e877349815c4a3207cf2214b0150b3
  source_path: providers/anthropic.md
  workflow: 15
---

# Anthropic (Claude)

Anthropic builds the **Claude** model family and offers access via an API.
In OpenClaw you can authenticate with an API key or a **setup-token**.

## Option A: Anthropic API key

**Best for:** standard API access and usage-based billing.
Create your API key in the Anthropic Console.

### CLI setup

```bash
openclaw onboard
# Choose: Anthropic API key

# Or non-interactively
openclaw onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
```

### Config snippet

```json5
{
  env: { ANTHROPIC_API_KEY: "sk-ant-..." },
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
}
```

## Prompt caching (Anthropic API)

OpenClaw supports Anthropic's prompt caching. This is **API-only**; subscription auth does not support cache settings.

### Configuration

Use the `cacheRetention` parameter in the model config:

| Value   | Cache duration | Description                           |
| ------- | -------------- | ------------------------------------- |
| `none`  | No cache       | Disables prompt caching               |
| `short` | 5 minutes      | Default for API-key auth              |
| `long`  | 1 hour         | Extended caching (requires beta flag) |

```json5
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-5": {
          params: { cacheRetention: "long" },
        },
      },
    },
  },
}
```

### Defaults

With Anthropic API-key auth, OpenClaw automatically applies `cacheRetention: "short"` (5-minute cache) to all Anthropic models. You can override this by setting `cacheRetention` explicitly in your config.

### Legacy parameter

For backwards compatibility, the legacy `cacheControlTtl` parameter is still supported:

- `"5m"` maps to `short`
- `"1h"` maps to `long`

We recommend migrating to the new `cacheRetention` parameter.
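
As an illustration, an older config still using the legacy parameter might look like this (a hypothetical snippet, equivalent to `cacheRetention: "short"`):

```json5
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-5": {
          // Legacy form; prefer params: { cacheRetention: "short" }
          params: { cacheControlTtl: "5m" },
        },
      },
    },
  },
}
```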

OpenClaw includes the `extended-cache-ttl-2025-04-11` beta flag on Anthropic API requests;
if you override the provider headers, keep it (see [/gateway/configuration](/gateway/configuration)).

## Option B: Claude setup-token

**Best for:** using your Claude subscription.

### Where to get a setup-token

Setup-tokens are created by the **Claude Code CLI**, not the Anthropic Console. You can run this on **any machine**:

```bash
claude setup-token
```

Paste the token into OpenClaw (wizard: **Anthropic token (paste setup-token)**), or run on the Gateway host:

```bash
openclaw models auth setup-token --provider anthropic
```

If you generated the token on a different machine, paste it:

```bash
openclaw models auth paste-token --provider anthropic
```

### CLI setup

```bash
# Paste a setup-token during onboarding
openclaw onboard --auth-choice setup-token
```

### Config snippet

```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
}
```

## Notes

- Generate a setup-token with `claude setup-token` and paste it, or run `openclaw models auth setup-token` on the Gateway host.
- If you see "OAuth token refresh failed …" on a Claude subscription, re-authenticate with a setup-token. See [/gateway/troubleshooting#oauth-token-refresh-failed-anthropic-claude-subscription](/gateway/troubleshooting#oauth-token-refresh-failed-anthropic-claude-subscription).
- Auth details + reuse rules live at [/concepts/oauth](/concepts/oauth).

## Troubleshooting

**401 errors / token suddenly invalid**

- Claude subscription auth can expire or be revoked. Re-run `claude setup-token`
  and paste the result on the **Gateway host**.
- If the Claude CLI login lives on a different machine, use
  `openclaw models auth paste-token --provider anthropic` on the Gateway host.

**No API key found for provider "anthropic"**

- Auth is **per-agent**. New agents do not inherit the main agent's keys.
- Re-run onboarding for that agent, or paste a setup-token / API key on the Gateway host,
  then verify with `openclaw models status`.

**No credentials found for profile `anthropic:default`**

- Run `openclaw models status` to see which auth profile is active.
- Re-run onboarding, or paste a setup-token / API key for that profile.

**No available auth profile (all in cooldown/unavailable)**

- Check `auth.unusableProfiles` in `openclaw models status --json`.
- Add another Anthropic profile or wait for the cooldown to end.

More: [/gateway/troubleshooting](/gateway/troubleshooting) and [/help/faq](/help/faq).
170
content/providers/bedrock.md
Normal file
@@ -0,0 +1,170 @@
---
read_when:
  - You want to use Amazon Bedrock models in OpenClaw
  - You need to configure AWS credentials/region for model calls
summary: Use Amazon Bedrock (Converse API) models in OpenClaw
title: Amazon Bedrock
x-i18n:
  generated_at: "2026-02-03T10:04:01Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 318f1048451a1910b70522e2f7f9dfc87084de26d9e3938a29d372eed32244a8
  source_path: providers/bedrock.md
  workflow: 15
---

# Amazon Bedrock

OpenClaw can use **Amazon Bedrock** models via pi-ai's **Bedrock Converse** streaming provider. Bedrock auth uses the **AWS SDK default credential chain**, not API keys.

## What pi-ai supports

- Provider: `amazon-bedrock`
- API: `bedrock-converse-stream`
- Auth: AWS credentials (env vars, shared config, or instance role)
- Region: `AWS_REGION` or `AWS_DEFAULT_REGION` (default: `us-east-1`)

## Automatic model discovery

If AWS credentials are detected, OpenClaw can automatically discover Bedrock models that support **streaming** and **text output**. Discovery uses `bedrock:ListFoundationModels` and is cached (default: 1 hour).

Config options live under `models.bedrockDiscovery`:

```json5
{
  models: {
    bedrockDiscovery: {
      enabled: true,
      region: "us-east-1",
      providerFilter: ["anthropic", "amazon"],
      refreshInterval: 3600,
      defaultContextWindow: 32000,
      defaultMaxTokens: 4096,
    },
  },
}
```

Notes:

- `enabled` defaults to `true` when AWS credentials are present.
- `region` defaults to `AWS_REGION` or `AWS_DEFAULT_REGION`, then `us-east-1`.
- `providerFilter` matches Bedrock provider names (e.g. `anthropic`).
- `refreshInterval` is in seconds; set `0` to disable caching.
- `defaultContextWindow` (default: `32000`) and `defaultMaxTokens` (default: `4096`) are used for discovered models (override them if you know the model limits).

## Setup (manual)

1. Make sure AWS credentials are available on the **Gateway host**:

```bash
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
# Optional:
export AWS_SESSION_TOKEN="..."
export AWS_PROFILE="your-profile"
# Optional (Bedrock API key / bearer token):
export AWS_BEARER_TOKEN_BEDROCK="..."
```

2. Add the Bedrock provider and model to your config (no `apiKey` needed):

```json5
{
  models: {
    providers: {
      "amazon-bedrock": {
        baseUrl: "https://bedrock-runtime.us-east-1.amazonaws.com",
        api: "bedrock-converse-stream",
        auth: "aws-sdk",
        models: [
          {
            id: "anthropic.claude-opus-4-5-20251101-v1:0",
            name: "Claude Opus 4.5 (Bedrock)",
            reasoning: true,
            input: ["text", "image"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
  agents: {
    defaults: {
      model: { primary: "amazon-bedrock/anthropic.claude-opus-4-5-20251101-v1:0" },
    },
  },
}
```

## EC2 instance roles

When OpenClaw runs on an EC2 instance with an attached IAM role, the AWS SDK authenticates automatically via the instance metadata service (IMDS). However, OpenClaw's credential detection currently only checks environment variables, not IMDS credentials.

**Workaround:** set `AWS_PROFILE=default` to signal that AWS credentials are available. Actual authentication still uses the instance role via IMDS.

```bash
# Add to ~/.bashrc or your shell profile
export AWS_PROFILE=default
export AWS_REGION=us-east-1
```

**Required IAM permissions** for the EC2 instance role:

- `bedrock:InvokeModel`
- `bedrock:InvokeModelWithResponseStream`
- `bedrock:ListFoundationModels` (for auto-discovery)

Or attach the managed policy `AmazonBedrockFullAccess`.

**Quick setup:**

```bash
# 1. Create the IAM role and instance profile
aws iam create-role --role-name EC2-Bedrock-Access \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

aws iam attach-role-policy --role-name EC2-Bedrock-Access \
  --policy-arn arn:aws:iam::aws:policy/AmazonBedrockFullAccess

aws iam create-instance-profile --instance-profile-name EC2-Bedrock-Access
aws iam add-role-to-instance-profile \
  --instance-profile-name EC2-Bedrock-Access \
  --role-name EC2-Bedrock-Access

# 2. Attach it to your EC2 instance
aws ec2 associate-iam-instance-profile \
  --instance-id i-xxxxx \
  --iam-instance-profile Name=EC2-Bedrock-Access

# 3. Enable discovery on the EC2 instance
openclaw config set models.bedrockDiscovery.enabled true
openclaw config set models.bedrockDiscovery.region us-east-1

# 4. Set the workaround env vars
echo 'export AWS_PROFILE=default' >> ~/.bashrc
echo 'export AWS_REGION=us-east-1' >> ~/.bashrc
source ~/.bashrc

# 5. Verify that models are discovered
openclaw models list
```

## Notes

- Bedrock requires **model access** to be enabled in your AWS account/region.
- Auto-discovery needs the `bedrock:ListFoundationModels` permission.
- If you use profiles, set `AWS_PROFILE` on the Gateway host.
- OpenClaw resolves credential sources in this order: `AWS_BEARER_TOKEN_BEDROCK`, then `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY`, then `AWS_PROFILE`, then the default AWS SDK chain.
- Reasoning support depends on the model; check the Bedrock model cards for current capabilities.
- If you prefer a managed-key flow, you can also put an OpenAI-compatible proxy in front of Bedrock and configure it as an OpenAI provider.
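
The credential resolution order above can be sketched as follows (a simplified illustration of the documented lookup order, not OpenClaw's actual code):

```python
def resolve_bedrock_credential_source(env: dict) -> str:
    """Return which credential source the documented order would pick."""
    if env.get("AWS_BEARER_TOKEN_BEDROCK"):
        return "bearer-token"          # Bedrock API key / bearer token wins
    if env.get("AWS_ACCESS_KEY_ID") and env.get("AWS_SECRET_ACCESS_KEY"):
        return "static-keys"           # explicit key pair
    if env.get("AWS_PROFILE"):
        return "profile"               # shared-config profile (also the IMDS workaround)
    return "aws-sdk-default-chain"     # fall back to the SDK's own chain
```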
155
content/providers/claude-max-api-proxy.md
Normal file
@@ -0,0 +1,155 @@
---
read_when:
  - You want to use a Claude Max subscription with OpenAI-compatible tools
  - You want a local API server that wraps the Claude Code CLI
  - You want to save money by using a subscription instead of API keys
summary: Use a Claude Max/Pro subscription as an OpenAI-compatible API endpoint
title: Claude Max API proxy
x-i18n:
  generated_at: "2026-02-01T21:34:52Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 63b61096b96b720c6d0c317520852db65d72ca8279b3868f35e8387fe3b6ce41
  source_path: providers/claude-max-api-proxy.md
  workflow: 15
---

# Claude Max API proxy

**claude-max-api-proxy** is a community tool that exposes your Claude Max/Pro subscription as an OpenAI-compatible API endpoint. That lets you use the subscription with any tool that speaks the OpenAI API format.

## Why use it?

| Approach                | Cost                                        | Best for                               |
| ----------------------- | ------------------------------------------- | -------------------------------------- |
| Anthropic API           | Per token (Opus ~$15/M input, $75/M output) | Production apps, high traffic          |
| Claude Max subscription | Flat $200/month                             | Personal use, development, heavy usage |

If you have a Claude Max subscription and want to use it with OpenAI-compatible tools, this proxy can save you a lot of money.

## How it works

```
Your app → claude-max-api-proxy → Claude Code CLI → Anthropic (via subscription)
(OpenAI format)  (translates formats)  (uses your login credentials)
```

The proxy:

1. Accepts OpenAI-format requests at `http://localhost:3456/v1/chat/completions`
2. Translates them into Claude Code CLI commands
3. Returns responses in OpenAI format (streaming supported)
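
For illustration, the OpenAI-format body from step 1 can be built like this (a sketch using only the standard library; nothing proxy-specific is assumed beyond the documented endpoint):

```python
import json

def chat_request(model: str, prompt: str) -> str:
    # Minimal OpenAI-format body for POST /v1/chat/completions.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)
```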

## Installation

```bash
# Requires Node.js 20+ and the Claude Code CLI
npm install -g claude-max-api-proxy

# Verify the Claude CLI is authenticated
claude --version
```

## Usage

### Start the server

```bash
claude-max-api
# Server runs at http://localhost:3456
```

### Test it

```bash
# Health check
curl http://localhost:3456/health

# List models
curl http://localhost:3456/v1/models

# Chat completion
curl http://localhost:3456/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-opus-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

### Use with OpenClaw

You can point OpenClaw at the proxy as a custom OpenAI-compatible endpoint:

```json5
{
  env: {
    OPENAI_API_KEY: "not-needed",
    OPENAI_BASE_URL: "http://localhost:3456/v1",
  },
  agents: {
    defaults: {
      model: { primary: "openai/claude-opus-4" },
    },
  },
}
```

## Available models

| Model ID          | Maps to         |
| ----------------- | --------------- |
| `claude-opus-4`   | Claude Opus 4   |
| `claude-sonnet-4` | Claude Sonnet 4 |
| `claude-haiku-4`  | Claude Haiku 4  |

## Auto-start on macOS

Create a LaunchAgent to run the proxy automatically:

```bash
cat > ~/Library/LaunchAgents/com.claude-max-api.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.claude-max-api</string>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/node</string>
    <string>/usr/local/lib/node_modules/claude-max-api-proxy/dist/server/standalone.js</string>
  </array>
  <key>EnvironmentVariables</key>
  <dict>
    <key>PATH</key>
    <string>/usr/local/bin:/opt/homebrew/bin:~/.local/bin:/usr/bin:/bin</string>
  </dict>
</dict>
</plist>
EOF

launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.claude-max-api.plist
```

## Links

- **npm:** https://www.npmjs.com/package/claude-max-api-proxy
- **GitHub:** https://github.com/atalovesyou/claude-max-api-proxy
- **Issues:** https://github.com/atalovesyou/claude-max-api-proxy/issues

## Notes

- This is a **community tool**, not officially supported by Anthropic or OpenClaw
- Requires an active Claude Max/Pro subscription and an authenticated Claude Code CLI
- The proxy runs locally and sends no data to third-party servers
- Streaming responses are fully supported

## See also

- [Anthropic provider](/providers/anthropic) - OpenClaw's native Claude integration via setup-token or API key
- [OpenAI provider](/providers/openai) - for OpenAI/Codex subscriptions
71
content/providers/cloudflare-ai-gateway.md
Normal file
@@ -0,0 +1,71 @@
---
title: "Cloudflare AI Gateway"
summary: "Cloudflare AI Gateway setup (auth + model selection)"
read_when:
  - You want to use Cloudflare AI Gateway with OpenClaw
  - You need the account ID, gateway ID, or API key env var
---

# Cloudflare AI Gateway

Cloudflare AI Gateway sits in front of provider APIs and lets you add analytics, caching, and controls. For Anthropic, OpenClaw uses the Anthropic Messages API through your Gateway endpoint.

- Provider: `cloudflare-ai-gateway`
- Base URL: `https://gateway.ai.cloudflare.com/v1/<account_id>/<gateway_id>/anthropic`
- Default model: `cloudflare-ai-gateway/claude-sonnet-4-5`
- API key: `CLOUDFLARE_AI_GATEWAY_API_KEY` (your provider API key for requests through the Gateway)

For Anthropic models, use your Anthropic API key.
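
As a sketch of how those pieces fit together, and assuming the provider block accepts a `baseUrl` override like other providers, the endpoint could be spelled out explicitly (placeholders, not real IDs):

```json5
{
  models: {
    providers: {
      "cloudflare-ai-gateway": {
        // <account_id> and <gateway_id> come from your Cloudflare dashboard
        baseUrl: "https://gateway.ai.cloudflare.com/v1/<account_id>/<gateway_id>/anthropic",
      },
    },
  },
}
```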

## Quick start

1. Set the provider API key and Gateway details:

```bash
openclaw onboard --auth-choice cloudflare-ai-gateway-api-key
```

2. Set a default model:

```json5
{
  agents: {
    defaults: {
      model: { primary: "cloudflare-ai-gateway/claude-sonnet-4-5" },
    },
  },
}
```

## Non-interactive example

```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice cloudflare-ai-gateway-api-key \
  --cloudflare-ai-gateway-account-id "your-account-id" \
  --cloudflare-ai-gateway-gateway-id "your-gateway-id" \
  --cloudflare-ai-gateway-api-key "$CLOUDFLARE_AI_GATEWAY_API_KEY"
```

## Authenticated gateways

If you enabled Gateway authentication in Cloudflare, add the `cf-aig-authorization` header (this is in addition to your provider API key).

```json5
{
  models: {
    providers: {
      "cloudflare-ai-gateway": {
        headers: {
          "cf-aig-authorization": "Bearer <cloudflare-ai-gateway-token>",
        },
      },
    },
  },
}
```

## Environment note

If the Gateway runs as a daemon (launchd/systemd), make sure `CLOUDFLARE_AI_GATEWAY_API_KEY` is available to that process (for example, in `~/.openclaw/.env` or via `env.shellEnv`).
97
content/providers/deepgram.md
Normal file
@@ -0,0 +1,97 @@
---
read_when:
  - You want to transcribe audio attachments with Deepgram speech-to-text
  - You need a quick Deepgram config example
summary: Deepgram speech transcription for inbound voice messages
title: Deepgram
x-i18n:
  generated_at: "2026-02-01T21:34:47Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 8f19e072f08672116ed1a72578635c0dcebb2b1f0dfcbefa12f80b21a18ad25c
  source_path: providers/deepgram.md
  workflow: 15
---

# Deepgram (audio transcription)

Deepgram is a speech-to-text API. In OpenClaw it is used via `tools.media.audio` to **transcribe inbound audio/voice messages**.

When enabled, OpenClaw uploads audio files to Deepgram and injects the transcript into the reply pipeline (`{{Transcript}}` + `[Audio]` block). This is **not streaming**; it uses the prerecorded transcription endpoint.

Website: https://deepgram.com
Docs: https://developers.deepgram.com

## Quick start

1. Set your API key:

```
DEEPGRAM_API_KEY=dg_...
```

2. Enable the provider:

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "deepgram", model: "nova-3" }],
      },
    },
  },
}
```

## Options

- `model`: Deepgram model ID (default: `nova-3`)
- `language`: language hint (optional)
- `tools.media.audio.providerOptions.deepgram.detect_language`: enable language detection (optional)
- `tools.media.audio.providerOptions.deepgram.punctuate`: enable punctuation (optional)
- `tools.media.audio.providerOptions.deepgram.smart_format`: enable smart formatting (optional)

Example with a language parameter:

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "deepgram", model: "nova-3", language: "en" }],
      },
    },
  },
}
```

Example with Deepgram options:

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        providerOptions: {
          deepgram: {
            detect_language: true,
            punctuate: true,
            smart_format: true,
          },
        },
        models: [{ provider: "deepgram", model: "nova-3" }],
      },
    },
  },
}
```

## Notes

- Auth follows the standard provider auth order; `DEEPGRAM_API_KEY` is the simplest path.
- Behind a proxy, override the endpoint or headers via `tools.media.audio.baseUrl` and `tools.media.audio.headers`.
- Output follows the same audio rules as other providers (size limits, timeouts, transcript injection).
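
For the proxy case, a minimal override might look like this (the `baseUrl` and `headers` keys are from the note above; the proxy URL and header values are hypothetical):

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        // Hypothetical local proxy in front of Deepgram
        baseUrl: "http://localhost:8080",
        headers: { "X-Proxy-Auth": "..." },
        models: [{ provider: "deepgram", model: "nova-3" }],
      },
    },
  },
}
```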
67
content/providers/github-copilot.md
Normal file
@@ -0,0 +1,67 @@
---
read_when:
  - You want to use GitHub Copilot as a model provider
  - You need the `openclaw models auth login-github-copilot` flow
summary: Log in to GitHub Copilot from OpenClaw using the device flow
title: GitHub Copilot
x-i18n:
  generated_at: "2026-02-01T21:34:57Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 503e0496d92c921e2f7111b1b4ba16374f5b781643bfbc6cb69cea97d9395c25
  source_path: providers/github-copilot.md
  workflow: 15
---

# GitHub Copilot

## What is GitHub Copilot?

GitHub Copilot is GitHub's AI coding assistant. It gives your GitHub account access to Copilot models according to your subscription plan. OpenClaw can use Copilot as a model provider in two different ways.

## Two ways to use Copilot in OpenClaw

### 1) Built-in GitHub Copilot provider (`github-copilot`)

Uses the native device-login flow to obtain a GitHub token, then exchanges it for a Copilot API token at OpenClaw runtime. This is the **default** and simplest path because it does not require VS Code.

### 2) Copilot Proxy plugin (`copilot-proxy`)

Uses the **Copilot Proxy** VS Code extension as a local bridge. OpenClaw talks to the proxy's `/v1` endpoint and uses the model list you configure there. Choose this if you already run Copilot Proxy in VS Code or need to route through it. You must enable the plugin and keep the VS Code extension running.

Use GitHub Copilot as a model provider (`github-copilot`). The login command runs the GitHub device flow, saves an auth profile, and updates your config to use it.

## CLI setup

```bash
openclaw models auth login-github-copilot
```

You will be prompted to visit a URL and enter a one-time code. Keep the terminal open until the flow completes.

### Optional flags

```bash
openclaw models auth login-github-copilot --profile-id github-copilot:work
openclaw models auth login-github-copilot --yes
```

## Set a default model

```bash
openclaw models set github-copilot/gpt-4o
```

### Config snippet

```json5
{
  agents: { defaults: { model: { primary: "github-copilot/gpt-4o" } } },
}
```

## Notes

- Requires an interactive TTY; run it directly in a terminal.
- Copilot model availability depends on your subscription plan; if a model is rejected, try another ID (e.g. `github-copilot/gpt-4.1`).
- Login stores a GitHub token in an auth profile and exchanges it for a Copilot API token at OpenClaw runtime.
39
content/providers/glm.md
Normal file
@@ -0,0 +1,39 @@
---
read_when:
  - You want to use GLM models in OpenClaw
  - You need the model naming scheme and setup steps
summary: GLM model family overview + how to use it in OpenClaw
title: GLM models
x-i18n:
  generated_at: "2026-02-01T21:34:53Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 2d7b457f033f26f28c230a9cd2310151f825fc52c3ee4fb814d08fd2d022d041
  source_path: providers/glm.md
  workflow: 15
---

# GLM models

GLM is a **model family** (not a company), available through the Z.AI platform. In OpenClaw, GLM models are accessed through the `zai` provider with model IDs like `zai/glm-4.7`.

## CLI setup

```bash
openclaw onboard --auth-choice zai-api-key
```

## Config snippet

```json5
{
  env: { ZAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "zai/glm-4.7" } } },
}
```

## Notes

- GLM versions and availability change over time; check Z.AI's docs for the latest.
- Example model IDs include `glm-4.7` and `glm-4.6`.
- For provider details, see [/providers/zai](/providers/zai).
209
content/providers/huggingface.md
Normal file
@@ -0,0 +1,209 @@
---
summary: "Hugging Face Inference setup (auth + model selection)"
read_when:
  - You want to use Hugging Face Inference with OpenClaw
  - You need the HF token env var or CLI auth choice
title: "Hugging Face (Inference)"
---

# Hugging Face (Inference)

[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers) offer OpenAI-compatible chat completions through a single router API. You get access to many models (DeepSeek, Llama, and more) with one token. OpenClaw uses the **OpenAI-compatible endpoint** (chat completions only); for text-to-image, embeddings, or speech use the [HF inference clients](https://huggingface.co/docs/api-inference/quicktour) directly.

- Provider: `huggingface`
- Auth: `HUGGINGFACE_HUB_TOKEN` or `HF_TOKEN` (fine-grained token with **Make calls to Inference Providers**)
- API: OpenAI-compatible (`https://router.huggingface.co/v1`)
- Billing: Single HF token; [pricing](https://huggingface.co/docs/inference-providers/pricing) follows provider rates with a free tier.

## Quick start

1. Create a fine-grained token at [Hugging Face → Settings → Tokens](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) with the **Make calls to Inference Providers** permission.
2. Run onboarding and choose **Hugging Face** in the provider dropdown, then enter your API key when prompted:

```bash
openclaw onboard --auth-choice huggingface-api-key
```

3. In the **Default Hugging Face model** dropdown, pick the model you want (the list is loaded from the Inference API when you have a valid token; otherwise a built-in list is shown). Your choice is saved as the default model.
4. You can also set or change the default model later in config:

```json5
{
  agents: {
    defaults: {
      model: { primary: "huggingface/deepseek-ai/DeepSeek-R1" },
    },
  },
}
```

## Non-interactive example

```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice huggingface-api-key \
  --huggingface-api-key "$HF_TOKEN"
```

This will set `huggingface/deepseek-ai/DeepSeek-R1` as the default model.

## Environment note

If the Gateway runs as a daemon (launchd/systemd), make sure `HUGGINGFACE_HUB_TOKEN` or `HF_TOKEN` is available to that process (for example, in `~/.openclaw/.env` or via `env.shellEnv`).

## Model discovery and onboarding dropdown

OpenClaw discovers models by calling the **Inference endpoint directly**:

```bash
GET https://router.huggingface.co/v1/models
```

(Optional: send `Authorization: Bearer $HUGGINGFACE_HUB_TOKEN` or `$HF_TOKEN` for the full list; some endpoints return a subset without auth.) The response is OpenAI-style `{ "object": "list", "data": [ { "id": "Qwen/Qwen3-8B", "owned_by": "Qwen", ... }, ... ] }`.

When you configure a Hugging Face API key (via onboarding, `HUGGINGFACE_HUB_TOKEN`, or `HF_TOKEN`), OpenClaw uses this GET to discover available chat-completion models. During **interactive onboarding**, after you enter your token you see a **Default Hugging Face model** dropdown populated from that list (or the built-in catalog if the request fails). At runtime (e.g. Gateway startup), when a key is present, OpenClaw again calls **GET** `https://router.huggingface.co/v1/models` to refresh the catalog. The list is merged with a built-in catalog (for metadata like context window and cost). If the request fails or no key is set, only the built-in catalog is used.

## Model names and editable options

- **Name from API:** The model display name is **hydrated from GET /v1/models** when the API returns `name`, `title`, or `display_name`; otherwise it is derived from the model id (e.g. `deepseek-ai/DeepSeek-R1` → "DeepSeek R1").
- **Override display name:** You can set a custom label per model in config so it appears the way you want in the CLI and UI:

```json5
{
  agents: {
    defaults: {
      models: {
        "huggingface/deepseek-ai/DeepSeek-R1": { alias: "DeepSeek R1 (fast)" },
        "huggingface/deepseek-ai/DeepSeek-R1:cheapest": { alias: "DeepSeek R1 (cheap)" },
      },
    },
  },
}
```

- **Provider / policy selection:** Append a suffix to the **model id** to choose how the router picks the backend:
  - **`:fastest`** — highest throughput (router picks; provider choice is **locked** — no interactive backend picker).
  - **`:cheapest`** — lowest cost per output token (router picks; provider choice is **locked**).
  - **`:provider`** — force a specific backend (e.g. `:sambanova`, `:together`).

When you select **:cheapest** or **:fastest** (e.g. in the onboarding model dropdown), the provider is locked: the router decides by cost or speed and no optional "prefer specific backend" step is shown. You can add these as separate entries in `models.providers.huggingface.models` or set `model.primary` with the suffix. You can also set your default order in [Inference Provider settings](https://hf.co/settings/inference-providers) (no suffix = use that order).
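
The suffix convention above can be sketched with a small helper (illustrative only, not OpenClaw code; it assumes the policy or provider name is whatever follows the last `:` in the id):

```python
def split_policy(model_id: str):
    """Split an HF router model id into (base_id, suffix-or-None)."""
    base, sep, suffix = model_id.rpartition(":")
    # A real suffix never contains "/", so "org/model" alone has no suffix.
    if sep and "/" not in suffix:
        return base, suffix
    return model_id, None
```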

- **Config merge:** Existing entries in `models.providers.huggingface.models` (e.g. in `models.json`) are kept when config is merged. So any custom `name`, `alias`, or model options you set there are preserved.

## Model IDs and configuration examples

Model refs use the form `huggingface/<org>/<model>` (Hub-style IDs). The list below is from **GET** `https://router.huggingface.co/v1/models`; your catalog may include more.

**Example IDs (from the inference endpoint):**

| Model                  | Ref (prefix with `huggingface/`)    |
| ---------------------- | ----------------------------------- |
| DeepSeek R1            | `deepseek-ai/DeepSeek-R1`           |
| DeepSeek V3.2          | `deepseek-ai/DeepSeek-V3.2`         |
| Qwen3 8B               | `Qwen/Qwen3-8B`                     |
| Qwen2.5 7B Instruct    | `Qwen/Qwen2.5-7B-Instruct`          |
| Qwen3 32B              | `Qwen/Qwen3-32B`                    |
| Llama 3.3 70B Instruct | `meta-llama/Llama-3.3-70B-Instruct` |
| Llama 3.1 8B Instruct  | `meta-llama/Llama-3.1-8B-Instruct`  |
| GPT-OSS 120B           | `openai/gpt-oss-120b`               |
| GLM 4.7                | `zai-org/GLM-4.7`                   |
| Kimi K2.5              | `moonshotai/Kimi-K2.5`              |

You can append `:fastest`, `:cheapest`, or `:provider` (e.g. `:together`, `:sambanova`) to the model id. Set your default order in [Inference Provider settings](https://hf.co/settings/inference-providers); see [Inference Providers](https://huggingface.co/docs/inference-providers) and **GET** `https://router.huggingface.co/v1/models` for the full list.

### Complete configuration examples

**Primary DeepSeek R1 with Qwen fallback:**

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "huggingface/deepseek-ai/DeepSeek-R1",
        fallbacks: ["huggingface/Qwen/Qwen3-8B"],
      },
      models: {
        "huggingface/deepseek-ai/DeepSeek-R1": { alias: "DeepSeek R1" },
        "huggingface/Qwen/Qwen3-8B": { alias: "Qwen3 8B" },
      },
    },
  },
}
```

**Qwen as default, with :cheapest and :fastest variants:**

```json5
{
  agents: {
    defaults: {
      model: { primary: "huggingface/Qwen/Qwen3-8B" },
      models: {
        "huggingface/Qwen/Qwen3-8B": { alias: "Qwen3 8B" },
        "huggingface/Qwen/Qwen3-8B:cheapest": { alias: "Qwen3 8B (cheapest)" },
        "huggingface/Qwen/Qwen3-8B:fastest": { alias: "Qwen3 8B (fastest)" },
      },
    },
  },
}
```

**DeepSeek + Llama + GPT-OSS with aliases:**

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "huggingface/deepseek-ai/DeepSeek-V3.2",
        fallbacks: [
          "huggingface/meta-llama/Llama-3.3-70B-Instruct",
          "huggingface/openai/gpt-oss-120b",
        ],
      },
      models: {
        "huggingface/deepseek-ai/DeepSeek-V3.2": { alias: "DeepSeek V3.2" },
        "huggingface/meta-llama/Llama-3.3-70B-Instruct": { alias: "Llama 3.3 70B" },
        "huggingface/openai/gpt-oss-120b": { alias: "GPT-OSS 120B" },
      },
    },
  },
}
```

**Force a specific backend with :provider:**

```json5
{
  agents: {
    defaults: {
      model: { primary: "huggingface/deepseek-ai/DeepSeek-R1:together" },
      models: {
        "huggingface/deepseek-ai/DeepSeek-R1:together": { alias: "DeepSeek R1 (Together)" },
      },
    },
  },
}
```

**Multiple Qwen and DeepSeek models with policy suffixes:**

```json5
{
  agents: {
    defaults: {
      model: { primary: "huggingface/Qwen/Qwen2.5-7B-Instruct:cheapest" },
      models: {
        "huggingface/Qwen/Qwen2.5-7B-Instruct": { alias: "Qwen2.5 7B" },
        "huggingface/Qwen/Qwen2.5-7B-Instruct:cheapest": { alias: "Qwen2.5 7B (cheap)" },
        "huggingface/deepseek-ai/DeepSeek-R1:fastest": { alias: "DeepSeek R1 (fast)" },
        "huggingface/meta-llama/Llama-3.1-8B-Instruct": { alias: "Llama 3.1 8B" },
      },
    },
  },
}
```
|
||||
68
content/providers/index.md
Normal file
@@ -0,0 +1,68 @@
---
read_when:
- You want to pick a model provider
- You need a quick overview of supported LLM backends
summary: Model providers (LLMs) supported by OpenClaw
title: Model Providers
x-i18n:
  generated_at: "2026-02-03T07:53:32Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: eb4a97438adcf610499253afcf8b2af6624f4be098df389a6c3746f14c4a901b
  source_path: providers/index.md
  workflow: 15
---

# Model Providers

OpenClaw can use many LLM providers. Pick a provider, authenticate, then set the default model as `provider/model`.

Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin), etc.)? See [Channels](/channels).

## Highlight: Venice (Venice AI)

Venice is our recommended Venice AI setup for privacy-first inference, with the option of using Opus for hard tasks.

- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-45` (Opus is still the strongest)

See [Venice AI](/providers/venice).

## Quick start

1. Authenticate with a provider (usually via `openclaw onboard`).
2. Set the default model:

```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
}
```

## Provider docs

- [OpenAI (API + Codex)](/providers/openai)
- [Anthropic (API + Claude Code CLI)](/providers/anthropic)
- [Qwen (OAuth)](/providers/qwen)
- [OpenRouter](/providers/openrouter)
- [Vercel AI Gateway](/providers/vercel-ai-gateway)
- [Moonshot AI (Kimi + Kimi Coding)](/providers/moonshot)
- [OpenCode Zen](/providers/opencode)
- [Amazon Bedrock](/providers/bedrock)
- [Z.AI](/providers/zai)
- [Xiaomi](/providers/xiaomi)
- [GLM models](/providers/glm)
- [MiniMax](/providers/minimax)
- [Venice (Venice AI, privacy-focused)](/providers/venice)
- [Ollama (local models)](/providers/ollama)

## Transcription providers

- [Deepgram (audio transcription)](/providers/deepgram)

## Community tools

- [Claude Max API Proxy](/providers/claude-max-api-proxy) - use a Claude Max/Pro subscription as an OpenAI-compatible API endpoint

For the full provider catalog (xAI, Groq, Mistral, and more) and advanced configuration,
see [Model Providers](/concepts/model-providers).
64
content/providers/kilocode.md
Normal file
@@ -0,0 +1,64 @@
---
summary: "Use Kilo Gateway's unified API to access many models in OpenClaw"
read_when:
- You want a single API key for many LLMs
- You want to run models via Kilo Gateway in OpenClaw
---

# Kilo Gateway

Kilo Gateway provides a **unified API** that routes requests to many models behind a single
endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by switching the base URL.

## Getting an API key

1. Go to [app.kilo.ai](https://app.kilo.ai)
2. Sign in or create an account
3. Navigate to API Keys and generate a new key

## CLI setup

```bash
openclaw onboard --kilocode-api-key <key>
```

Or set the environment variable:

```bash
export KILOCODE_API_KEY="your-api-key"
```

## Config snippet

```json5
{
  env: { KILOCODE_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "kilocode/anthropic/claude-opus-4.6" },
    },
  },
}
```

## Surfaced model refs

The built-in Kilo Gateway catalog currently surfaces these model refs:

- `kilocode/anthropic/claude-opus-4.6` (default)
- `kilocode/z-ai/glm-5:free`
- `kilocode/minimax/minimax-m2.5:free`
- `kilocode/anthropic/claude-sonnet-4.5`
- `kilocode/openai/gpt-5.2`
- `kilocode/google/gemini-3-pro-preview`
- `kilocode/google/gemini-3-flash-preview`
- `kilocode/x-ai/grok-code-fast-1`
- `kilocode/moonshotai/kimi-k2.5`

## Notes

- Model refs are `kilocode/<provider>/<model>` (e.g., `kilocode/anthropic/claude-opus-4.6`).
- Default model: `kilocode/anthropic/claude-opus-4.6`
- Base URL: `https://api.kilo.ai/api/gateway/`
- For more model/provider options, see [/concepts/model-providers](/concepts/model-providers).
- Kilo Gateway uses a Bearer token with your API key under the hood.
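Putting those notes together, a raw request looks roughly like this. This is a sketch: the base URL, Bearer auth, and model id come from this page, but the `/chat/completions` route is an assumption based on the gateway being OpenAI-compatible, so adjust it if Kilo documents a different path:

```shell
# Base URL and auth header per the notes above; the /chat/completions route
# below is assumed (OpenAI-compatible convention), not confirmed by this doc.
BASE_URL="https://api.kilo.ai/api/gateway"
AUTH="Authorization: Bearer $KILOCODE_API_KEY"
PAYLOAD='{"model":"anthropic/claude-opus-4.6","messages":[{"role":"user","content":"hi"}]}'
echo "$PAYLOAD"
# curl -s "$BASE_URL/chat/completions" -H "$AUTH" \
#   -H "Content-Type: application/json" -d "$PAYLOAD"
```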
153
content/providers/litellm.md
Normal file
@@ -0,0 +1,153 @@
---
summary: "Run OpenClaw through LiteLLM Proxy for unified model access and cost tracking"
read_when:
- You want to route OpenClaw through a LiteLLM proxy
- You need cost tracking, logging, or model routing through LiteLLM
---

# LiteLLM

[LiteLLM](https://litellm.ai) is an open-source LLM gateway that provides a unified API to 100+ model providers. Route OpenClaw through LiteLLM to get centralized cost tracking, logging, and the flexibility to switch backends without changing your OpenClaw config.

## Why use LiteLLM with OpenClaw?

- **Cost tracking** — See exactly what OpenClaw spends across all models
- **Model routing** — Switch between Claude, GPT-4, Gemini, Bedrock without config changes
- **Virtual keys** — Create keys with spend limits for OpenClaw
- **Logging** — Full request/response logs for debugging
- **Fallbacks** — Automatic failover if your primary provider is down
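LiteLLM-side failover can also be paired with OpenClaw's own fallback chain. A minimal sketch, using the two model ids defined in the config-file example on this page:

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "litellm/claude-opus-4-6",
        fallbacks: ["litellm/gpt-4o"],
      },
    },
  },
}
```

With this, OpenClaw retries against `litellm/gpt-4o` if the primary fails at the OpenClaw layer, independently of any failover LiteLLM does behind the proxy.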

## Quick start

### Via onboarding

```bash
openclaw onboard --auth-choice litellm-api-key
```

### Manual setup

1. Start LiteLLM Proxy:

```bash
pip install 'litellm[proxy]'
litellm --model claude-opus-4-6
```

2. Point OpenClaw to LiteLLM:

```bash
export LITELLM_API_KEY="your-litellm-key"

openclaw
```

That's it. OpenClaw now routes through LiteLLM.

## Configuration

### Environment variables

```bash
export LITELLM_API_KEY="sk-litellm-key"
```

### Config file

```json5
{
  models: {
    providers: {
      litellm: {
        baseUrl: "http://localhost:4000",
        apiKey: "${LITELLM_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "claude-opus-4-6",
            name: "Claude Opus 4.6",
            reasoning: true,
            input: ["text", "image"],
            contextWindow: 200000,
            maxTokens: 64000,
          },
          {
            id: "gpt-4o",
            name: "GPT-4o",
            reasoning: false,
            input: ["text", "image"],
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
  agents: {
    defaults: {
      model: { primary: "litellm/claude-opus-4-6" },
    },
  },
}
```

## Virtual keys

Create a dedicated key for OpenClaw with spend limits:

```bash
curl -X POST "http://localhost:4000/key/generate" \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "key_alias": "openclaw",
    "max_budget": 50.00,
    "budget_duration": "monthly"
  }'
```

Use the generated key as `LITELLM_API_KEY`.

## Model routing

LiteLLM can route model requests to different backends. Configure in your LiteLLM `config.yaml`:

```yaml
model_list:
  - model_name: claude-opus-4-6
    litellm_params:
      model: claude-opus-4-6
      api_key: os.environ/ANTHROPIC_API_KEY

  - model_name: gpt-4o
    litellm_params:
      model: gpt-4o
      api_key: os.environ/OPENAI_API_KEY
```

OpenClaw keeps requesting `claude-opus-4-6` — LiteLLM handles the routing.

## Viewing usage

Check LiteLLM's dashboard or API:

```bash
# Key info
curl "http://localhost:4000/key/info" \
  -H "Authorization: Bearer sk-litellm-key"

# Spend logs
curl "http://localhost:4000/spend/logs" \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY"
```

## Notes

- LiteLLM runs on `http://localhost:4000` by default
- OpenClaw connects via the OpenAI-compatible `/v1/chat/completions` endpoint
- All OpenClaw features work through LiteLLM — no limitations

## See also

- [LiteLLM Docs](https://docs.litellm.ai)
- [Model Providers](/concepts/model-providers)
206
content/providers/minimax.md
Normal file
@@ -0,0 +1,206 @@
---
read_when:
- You want to use MiniMax models in OpenClaw
- You need a MiniMax setup guide
summary: Use MiniMax M2.1 in OpenClaw
title: MiniMax
x-i18n:
  generated_at: "2026-02-03T10:08:52Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 861e1ddc3c24be88f716bfb72d6015d62875a9087f8e89ea4ba3a35f548c7fae
  source_path: providers/minimax.md
  workflow: 15
---

# MiniMax

MiniMax is an AI company building the **M2/M2.1** model family. The current coding-focused release is **MiniMax M2.1** (December 23, 2025), built for complex real-world tasks.

Source: [MiniMax M2.1 release notes](https://www.minimax.io/news/minimax-m21)

## Model overview (M2.1)

MiniMax highlights these improvements in M2.1:

- Stronger **multilingual coding** (Rust, Java, Go, C++, Kotlin, Objective-C, TS/JS).
- Better **web/app development** and aesthetic output quality (including native mobile).
- Improved handling of **compound instructions** for office-style workflows, built on interleaved thinking and integrated constraint enforcement.
- **More concise responses** with lower token usage and faster iteration loops.
- Stronger compatibility with **tool/agent frameworks** and context management (Claude Code, Droid/Factory AI, Cline, Kilo Code, Roo Code, BlackBox).
- Higher-quality **conversational and technical writing** output.

## MiniMax M2.1 vs MiniMax M2.1 Lightning

- **Speed:** Lightning is the "fast" variant in MiniMax's pricing docs.
- **Cost:** Pricing shows the same input cost, but Lightning's output cost is higher.
- **Coding-plan routing:** The Lightning backend cannot be used directly on the MiniMax coding plan. MiniMax automatically routes most requests to Lightning but falls back to the regular M2.1 backend during traffic spikes.

## Choosing a setup

### MiniMax OAuth (coding plan) — recommended

**Best for:** Quick coding-plan setup via OAuth, no API key needed.

Enable the built-in OAuth plugin and authenticate:

```bash
openclaw plugins enable minimax-portal-auth # skip if already loaded
openclaw gateway restart # restart if the Gateway is already running
openclaw onboard --auth-choice minimax-portal
```

You will be prompted to pick an endpoint:

- **Global** - international users (`api.minimax.io`)
- **CN** - China users (`api.minimaxi.com`)

See the [MiniMax OAuth plugin README](https://github.com/openclaw/openclaw/tree/main/extensions/minimax-portal-auth) for details.

### MiniMax M2.1 (API key)

**Best for:** Hosted MiniMax via the Anthropic-compatible API.

Configure via the CLI:

- Run `openclaw configure`
- Select **Model/auth**
- Select **MiniMax M2.1**

```json5
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "minimax/MiniMax-M2.1" } } },
  models: {
    mode: "merge",
    providers: {
      minimax: {
        baseUrl: "https://api.minimax.io/anthropic",
        apiKey: "${MINIMAX_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "MiniMax-M2.1",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 15, output: 60, cacheRead: 2, cacheWrite: 10 },
            contextWindow: 200000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```

### MiniMax M2.1 as fallback (Opus primary)

**Best for:** Keeping Opus 4.5 as the primary model and switching to MiniMax M2.1 on failure.

```json5
{
  env: { MINIMAX_API_KEY: "sk-..." },
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-5": { alias: "opus" },
        "minimax/MiniMax-M2.1": { alias: "minimax" },
      },
      model: {
        primary: "anthropic/claude-opus-4-5",
        fallbacks: ["minimax/MiniMax-M2.1"],
      },
    },
  },
}
```

### Optional: run locally via LM Studio (manual)

**Best for:** Local inference with LM Studio.
We have seen excellent results running MiniMax M2.1 with LM Studio's local server on strong hardware (e.g., desktops/servers).

Manual configuration via `openclaw.json`:

```json5
{
  agents: {
    defaults: {
      model: { primary: "lmstudio/minimax-m2.1-gs32" },
      models: { "lmstudio/minimax-m2.1-gs32": { alias: "Minimax" } },
    },
  },
  models: {
    mode: "merge",
    providers: {
      lmstudio: {
        baseUrl: "http://127.0.0.1:1234/v1",
        apiKey: "lmstudio",
        api: "openai-responses",
        models: [
          {
            id: "minimax-m2.1-gs32",
            name: "MiniMax M2.1 GS32",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 196608,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```

## Configure via `openclaw configure`

Use the interactive configuration wizard to set up MiniMax without editing JSON:

1. Run `openclaw configure`.
2. Select **Model/auth**.
3. Select **MiniMax M2.1**.
4. Pick your default model when prompted.

## Configuration options

- `models.providers.minimax.baseUrl`: `https://api.minimax.io/anthropic` recommended (Anthropic-compatible); `https://api.minimax.io/v1` optional for OpenAI-compatible payloads.
- `models.providers.minimax.api`: `anthropic-messages` recommended; `openai-completions` optional for OpenAI-compatible payloads.
- `models.providers.minimax.apiKey`: MiniMax API key (`MINIMAX_API_KEY`).
- `models.providers.minimax.models`: define `id`, `name`, `reasoning`, `contextWindow`, `maxTokens`, `cost`.
- `agents.defaults.models`: set aliases for the models you want on the allowlist.
- `models.mode`: keep `merge` if you want to add MiniMax alongside the built-in models.

## Notes

- Model refs use the `minimax/<model>` format.
- Coding-plan usage API: `https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains` (requires a coding-plan key).
- Update the pricing values in `models.json` if you need precise cost tracking.
- MiniMax coding plan referral link (10% off): https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link
- See [/concepts/model-providers](/concepts/model-providers) for provider rules.
- Use `openclaw models list` and `openclaw models set minimax/MiniMax-M2.1` to switch models.
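The coding-plan usage endpoint mentioned in the notes can be queried directly. A sketch: the URL is taken from this page, but the Bearer auth header shape is an assumption (the page does not show which header the endpoint expects), so the live call is left commented out:

```shell
# Coding-plan usage endpoint from the notes above; auth header shape assumed.
REMAINS_URL="https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains"
echo "GET $REMAINS_URL"
# With a coding-plan key:
# curl -s "$REMAINS_URL" -H "Authorization: Bearer $MINIMAX_API_KEY"
```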

## Troubleshooting

### "Unknown model: minimax/MiniMax-M2.1"

This usually means the **MiniMax provider is not configured** (no provider entry, and no MiniMax auth profile or env key found). The fix for this detection is in **2026.1.12** (unreleased at the time of writing). To resolve it:

- Upgrade to **2026.1.12** (or run from source on `main`), then restart the Gateway.
- Run `openclaw configure` and select **MiniMax M2.1**, or
- Add the `models.providers.minimax` block manually, or
- Set `MINIMAX_API_KEY` (or a MiniMax auth profile) so the provider gets injected.

Make sure the model ids are **case-sensitive**:

- `minimax/MiniMax-M2.1`
- `minimax/MiniMax-M2.1-lightning`

Then re-check:

```bash
openclaw models list
```
54
content/providers/mistral.md
Normal file
@@ -0,0 +1,54 @@
---
summary: "Use Mistral models and Voxtral transcription with OpenClaw"
read_when:
- You want to use Mistral models in OpenClaw
- You need Mistral API key onboarding and model refs
title: "Mistral"
---

# Mistral

OpenClaw supports Mistral for both text/image model routing (`mistral/...`) and
audio transcription via Voxtral in media understanding.
Mistral can also be used for memory embeddings (`memorySearch.provider = "mistral"`).
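A minimal sketch of that embedding switch, assuming `memorySearch` sits at the top level of `openclaw.json` (this page only names the `memorySearch.provider` path):

```json5
{
  memorySearch: {
    provider: "mistral", // embeddings go to /v1/embeddings (default model: mistral-embed)
  },
}
```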

## CLI setup

```bash
openclaw onboard --auth-choice mistral-api-key
# or non-interactive
openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
```

## Config snippet (LLM provider)

```json5
{
  env: { MISTRAL_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
}
```

## Config snippet (audio transcription with Voxtral)

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}
```

## Notes

- Mistral auth uses `MISTRAL_API_KEY`.
- Provider base URL defaults to `https://api.mistral.ai/v1`.
- Onboarding default model is `mistral/mistral-large-latest`.
- Media-understanding default audio model for Mistral is `voxtral-mini-latest`.
- Media transcription path uses `/v1/audio/transcriptions`.
- Memory embeddings path uses `/v1/embeddings` (default model: `mistral-embed`).
55
content/providers/models.md
Normal file
@@ -0,0 +1,55 @@
---
read_when:
- You want to pick a model provider
- You want quick setup examples for LLM auth + model selection
summary: Model providers (LLMs) supported by OpenClaw
title: Model Providers Quick Start
x-i18n:
  generated_at: "2026-02-03T07:53:35Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 2f5b99207dc7860e0a7b541b61e984791f5d7ab1953b3e917365a248a09b025b
  source_path: providers/models.md
  workflow: 15
---

# Model Providers

OpenClaw can use many LLM providers. Pick one, authenticate, then set the default model as `provider/model`.

## Recommended: Venice (Venice AI)

Venice is our recommended Venice AI setup for privacy-first inference, with the option of using Opus for the hardest tasks.

- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-45` (Opus is still the strongest)

See [Venice AI](/providers/venice).

## Quick start (two steps)

1. Authenticate with a provider (usually via `openclaw onboard`).
2. Set the default model:

```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } },
}
```

## Supported providers (starter set)

- [OpenAI (API + Codex)](/providers/openai)
- [Anthropic (API + Claude Code CLI)](/providers/anthropic)
- [OpenRouter](/providers/openrouter)
- [Vercel AI Gateway](/providers/vercel-ai-gateway)
- [Moonshot AI (Kimi + Kimi Coding)](/providers/moonshot)
- [Synthetic](/providers/synthetic)
- [OpenCode Zen](/providers/opencode)
- [Z.AI](/providers/zai)
- [GLM models](/providers/glm)
- [MiniMax](/providers/minimax)
- [Venice (Venice AI)](/providers/venice)
- [Amazon Bedrock](/providers/bedrock)

For the full provider catalog (xAI, Groq, Mistral, and more) and advanced configuration, see [Model Providers](/concepts/model-providers).
145
content/providers/moonshot.md
Normal file
@@ -0,0 +1,145 @@
---
read_when:
- You want to configure Moonshot K2 (Moonshot Open Platform) vs Kimi Coding
- You need the separate endpoints, keys, and model refs
- You want copy-paste configs for either provider
summary: Configure Moonshot K2 vs Kimi Coding (separate providers and keys)
title: Moonshot AI
x-i18n:
  generated_at: "2026-02-01T21:35:13Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 2de81b1a37a0e6e61e0e142fcd36760ecd00834e107dc9b5e38bbf971b27e18e
  source_path: providers/moonshot.md
  workflow: 15
---

# Moonshot AI (Kimi)

Moonshot offers the Kimi API with OpenAI-compatible endpoints. Configure the provider and set the default model to `moonshot/kimi-k2.5`, or use `kimi-coding/k2p5` for Kimi Coding.

Current Kimi K2 model IDs:
{/* moonshot-kimi-k2-ids:start */}

- `kimi-k2.5`
- `kimi-k2-0905-preview`
- `kimi-k2-turbo-preview`
- `kimi-k2-thinking`
- `kimi-k2-thinking-turbo`
{/* moonshot-kimi-k2-ids:end */}

```bash
openclaw onboard --auth-choice moonshot-api-key
```

Kimi Coding:

```bash
openclaw onboard --auth-choice kimi-code-api-key
```

Note: Moonshot and Kimi Coding are separate providers. Keys are not interchangeable, the endpoints differ, and so do the model refs (Moonshot uses `moonshot/...`, Kimi Coding uses `kimi-coding/...`).

## Config snippet (Moonshot API)

```json5
{
  env: { MOONSHOT_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "moonshot/kimi-k2.5" },
      models: {
        // moonshot-kimi-k2-aliases:start
        "moonshot/kimi-k2.5": { alias: "Kimi K2.5" },
        "moonshot/kimi-k2-0905-preview": { alias: "Kimi K2" },
        "moonshot/kimi-k2-turbo-preview": { alias: "Kimi K2 Turbo" },
        "moonshot/kimi-k2-thinking": { alias: "Kimi K2 Thinking" },
        "moonshot/kimi-k2-thinking-turbo": { alias: "Kimi K2 Thinking Turbo" },
        // moonshot-kimi-k2-aliases:end
      },
    },
  },
  models: {
    mode: "merge",
    providers: {
      moonshot: {
        baseUrl: "https://api.moonshot.ai/v1",
        apiKey: "${MOONSHOT_API_KEY}",
        api: "openai-completions",
        models: [
          // moonshot-kimi-k2-models:start
          {
            id: "kimi-k2.5",
            name: "Kimi K2.5",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 256000,
            maxTokens: 8192,
          },
          {
            id: "kimi-k2-0905-preview",
            name: "Kimi K2 0905 Preview",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 256000,
            maxTokens: 8192,
          },
          {
            id: "kimi-k2-turbo-preview",
            name: "Kimi K2 Turbo",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 256000,
            maxTokens: 8192,
          },
          {
            id: "kimi-k2-thinking",
            name: "Kimi K2 Thinking",
            reasoning: true,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 256000,
            maxTokens: 8192,
          },
          {
            id: "kimi-k2-thinking-turbo",
            name: "Kimi K2 Thinking Turbo",
            reasoning: true,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 256000,
            maxTokens: 8192,
          },
          // moonshot-kimi-k2-models:end
        ],
      },
    },
  },
}
```

## Kimi Coding

```json5
{
  env: { KIMI_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "kimi-coding/k2p5" },
      models: {
        "kimi-coding/k2p5": { alias: "Kimi K2.5" },
      },
    },
  },
}
```

## Notes

- Moonshot model refs use `moonshot/<modelId>`. Kimi Coding model refs use `kimi-coding/<modelId>`.
- Override pricing and context metadata in `models.providers` if needed.
- If Moonshot publishes a different context limit for a model, adjust `contextWindow` accordingly.
- Use `https://api.moonshot.cn/v1` if you need the China endpoint.
55
content/providers/nvidia.md
Normal file
@@ -0,0 +1,55 @@
---
summary: "Use NVIDIA's OpenAI-compatible API in OpenClaw"
read_when:
- You want to use NVIDIA models in OpenClaw
- You need NVIDIA_API_KEY setup
title: "NVIDIA"
---

# NVIDIA

NVIDIA provides an OpenAI-compatible API at `https://integrate.api.nvidia.com/v1` for Nemotron and NeMo models. Authenticate with an API key from [NVIDIA NGC](https://catalog.ngc.nvidia.com/).

## CLI setup

Export the key once, then run onboarding and set an NVIDIA model:

```bash
export NVIDIA_API_KEY="nvapi-..."
openclaw onboard --auth-choice skip
openclaw models set nvidia/nvidia/llama-3.1-nemotron-70b-instruct
```

If you still pass `--token`, remember it lands in shell history and `ps` output; prefer the env var when possible.

## Config snippet

```json5
{
  env: { NVIDIA_API_KEY: "nvapi-..." },
  models: {
    providers: {
      nvidia: {
        baseUrl: "https://integrate.api.nvidia.com/v1",
        api: "openai-completions",
      },
    },
  },
  agents: {
    defaults: {
      model: { primary: "nvidia/nvidia/llama-3.1-nemotron-70b-instruct" },
    },
  },
}
```

## Model IDs

- `nvidia/llama-3.1-nemotron-70b-instruct` (default)
- `meta/llama-3.3-70b-instruct`
- `nvidia/mistral-nemo-minitron-8b-8k-instruct`

## Notes

- OpenAI-compatible `/v1` endpoint; use an API key from NVIDIA NGC.
- Provider auto-enables when `NVIDIA_API_KEY` is set; uses static defaults (131,072-token context window, 4,096 max tokens).
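Because the endpoint is OpenAI-compatible, a raw request makes a useful smoke test. A sketch: the payload below is the standard OpenAI chat shape against the `/v1` endpoint from this page, and the live call is left commented because it needs a valid `NVIDIA_API_KEY`:

```shell
# Standard OpenAI-style chat payload for the /v1 endpoint documented above.
PAYLOAD='{"model":"nvidia/llama-3.1-nemotron-70b-instruct","messages":[{"role":"user","content":"Hello"}]}'
echo "$PAYLOAD"
# curl -s https://integrate.api.nvidia.com/v1/chat/completions \
#   -H "Authorization: Bearer $NVIDIA_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```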
230
content/providers/ollama.md
Normal file
@@ -0,0 +1,230 @@
---
read_when:
- You want to run OpenClaw with local models via Ollama
- You need Ollama install and config guidance
summary: Run OpenClaw with Ollama (local LLM runtime)
title: Ollama
x-i18n:
  generated_at: "2026-02-01T21:35:22Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 157080ad90f449f622260a5f5bd293f79c15800527d36b15596e8ca232e3c957
  source_path: providers/ollama.md
  workflow: 15
---

# Ollama

Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. OpenClaw integrates through Ollama's OpenAI-compatible API and can **auto-discover tool-calling models** when you enable it via `OLLAMA_API_KEY` (or an auth profile) and no explicit `models.providers.ollama` entry is defined.

## Quick start

1. Install Ollama: https://ollama.ai

2. Pull a model:

```bash
ollama pull llama3.3
# or
ollama pull qwen2.5-coder:32b
# or
ollama pull deepseek-r1:32b
```

3. Enable Ollama for OpenClaw (any value works; Ollama does not need a real key):

```bash
# Set the environment variable
export OLLAMA_API_KEY="ollama-local"

# Or set it in the config file
openclaw config set models.providers.ollama.apiKey "ollama-local"
```

4. Use an Ollama model:

```json5
{
  agents: {
    defaults: {
      model: { primary: "ollama/llama3.3" },
    },
  },
}
```

## Model discovery (implicit provider)

When `OLLAMA_API_KEY` (or an auth profile) is set and `models.providers.ollama` is **not** defined, OpenClaw discovers models from the local Ollama instance at `http://127.0.0.1:11434`:

- Queries `/api/tags` and `/api/show`
- Keeps only models that report the `tools` capability
- Marks models as `reasoning` when they report `thinking`
- Reads `contextWindow` from `model_info["<arch>.context_length"]` when available
- Sets `maxTokens` to 10x the context window
- Sets all costs to `0`

This avoids manual model entries while keeping the catalog aligned with Ollama's capabilities.
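You can replay those discovery probes by hand to see what OpenClaw will find. A sketch, assuming `jq` is installed and that `/api/show` accepts a `model` field in its request body (endpoint paths are taken from the list above):

```shell
# Endpoints taken from the discovery description above.
BASE="http://127.0.0.1:11434"
echo "$BASE/api/tags"
# List installed models:
# curl -s "$BASE/api/tags" | jq -r '.models[].name'
# Inspect one model's reported capabilities (tools/thinking) and metadata:
# curl -s "$BASE/api/show" -d '{"model":"llama3.3"}' | jq '.capabilities'
```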

Check available models:

```bash
ollama list
openclaw models list
```

To add a new model, just pull it via Ollama:

```bash
ollama pull mistral
```

New models are auto-discovered and become available automatically.

If you set `models.providers.ollama` explicitly, auto-discovery is skipped and you must define models manually (see below).

## Configuration

### Basic setup (implicit discovery)

The simplest way to enable Ollama is via the environment variable:

```bash
export OLLAMA_API_KEY="ollama-local"
```

### Explicit setup (manual models)

Use explicit configuration when:

- Ollama runs on another host/port.
- You want to force the context window or model list.
- You want to include models that do not report tool support.

```json5
{
  models: {
    providers: {
      ollama: {
        // Use a host URL that includes /v1 for OpenAI API compatibility
        baseUrl: "http://ollama-host:11434/v1",
        apiKey: "ollama-local",
        api: "openai-completions",
        models: [
          {
            id: "llama3.3",
            name: "Llama 3.3",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 8192,
            maxTokens: 8192 * 10
          }
        ]
      }
    }
  }
}
```

If `OLLAMA_API_KEY` is set, you can omit `apiKey` in the provider entry and OpenClaw fills it in for the availability check.

### Custom base URL (explicit config)

If Ollama runs on a different host or port (explicit config disables auto-discovery, so models must be defined manually):

```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://ollama-host:11434/v1",
      },
    },
  },
}
```

### Model selection

Once configured, all Ollama models are available:

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/llama3.3",
        fallbacks: ["ollama/qwen2.5-coder:32b"],
      },
    },
  },
}
```

## Advanced usage

### Reasoning models

OpenClaw marks a model as reasoning-capable when Ollama reports `thinking` in `/api/show`:

```bash
ollama pull deepseek-r1:32b
```

### Model costs

Ollama is free and runs locally, so all model costs are set to $0.

### Context windows

For auto-discovered models, OpenClaw uses the context window reported by Ollama when available, and defaults to `8192` otherwise. You can override `contextWindow` and `maxTokens` in an explicit provider config.

## Troubleshooting

### Ollama not detected

Make sure Ollama is running, `OLLAMA_API_KEY` (or an auth profile) is set, and **no** explicit `models.providers.ollama` entry is defined:

```bash
ollama serve
```

Also confirm the API is reachable:

```bash
curl http://localhost:11434/api/tags
```

### No models available

OpenClaw only auto-discovers models that report tool support. If your model is not listed, you can:

- Pull a model that supports tool calling, or
- Define the model explicitly in `models.providers.ollama`.

Add models:

```bash
ollama list # see installed models
ollama pull llama3.3 # pull a model
```

### Connection refused

Check that Ollama is running on the right port:

```bash
# Check if Ollama is running
ps aux | grep ollama

# Or restart Ollama
ollama serve
```

## See also

- [Model Providers](/concepts/model-providers) - overview of all providers
- [Model Selection](/concepts/models) - how to choose models
- [Configuration](/gateway/configuration) - full config reference
68
content/providers/openai.md
Normal file
@@ -0,0 +1,68 @@
---
read_when:
- You want to use OpenAI models in OpenClaw
- You want Codex subscription auth instead of an API key
summary: Use OpenAI in OpenClaw via API key or Codex subscription
title: OpenAI
x-i18n:
  generated_at: "2026-02-01T21:35:10Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: f15365d5d616258f6035b986d80fe6acd1be5836a07e5bb68236688ef2952ef7
  source_path: providers/openai.md
  workflow: 15
---

# OpenAI

OpenAI provides a developer API for the GPT models. Codex supports **ChatGPT sign-in** for subscription access, or **API key** sign-in for pay-as-you-go access. Codex cloud requires ChatGPT sign-in.

## Option A: OpenAI API key (OpenAI Platform)

**Best for:** Direct API access and pay-as-you-go billing.
Get your API key from the OpenAI console.

### CLI setup

```bash
openclaw onboard --auth-choice openai-api-key
# or non-interactive
openclaw onboard --openai-api-key "$OPENAI_API_KEY"
```

### Config snippet

```json5
{
  env: { OPENAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "openai/gpt-5.2" } } },
}
```

## Option B: OpenAI Code (Codex) subscription

**Best for:** Using ChatGPT/Codex subscription access instead of an API key.
Codex cloud requires ChatGPT sign-in, while the Codex CLI supports ChatGPT or API key sign-in.

### CLI setup

```bash
# Run the Codex OAuth in the wizard
openclaw onboard --auth-choice openai-codex

# Or run the OAuth directly
openclaw models auth login --provider openai-codex
```

### Config snippet

```json5
{
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.2" } } },
}
```

## Notes

- Model refs always use the `provider/model` format (see [/concepts/models](/concepts/models)).
- See [/concepts/oauth](/concepts/oauth) for auth details and reuse rules.
41
content/providers/opencode.md
Normal file
@@ -0,0 +1,41 @@
---
read_when:
- You want to access models through OpenCode Zen
- You want a curated list of models suited to coding
summary: Use OpenCode Zen (curated models) in OpenClaw
title: OpenCode Zen
x-i18n:
  generated_at: "2026-02-01T21:35:16Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 1390f9803a3cac48cb40694dd69267e3ddccd203a4ce8babda3198b926b5f6a3
  source_path: providers/opencode.md
  workflow: 15
---

# OpenCode Zen

OpenCode Zen is a **curated list of models** recommended by the OpenCode team for coding agents. It is an optional hosted path to model access that uses an API key and the `opencode` provider. Zen is currently in beta.

## CLI setup

```bash
openclaw onboard --auth-choice opencode-zen
# Or non-interactively
openclaw onboard --opencode-zen-api-key "$OPENCODE_API_KEY"
```

## Config snippet

```json5
{
  env: { OPENCODE_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "opencode/claude-opus-4-5" } } },
}
```

## Notes

- `OPENCODE_ZEN_API_KEY` is also supported.
- Sign in to Zen, add billing details, then copy your API key.
- OpenCode Zen bills per request; see the OpenCode console for details.
43
content/providers/openrouter.md
Normal file
@@ -0,0 +1,43 @@
---
read_when:
- You want to access many LLMs with a single API key
- You want to run models through OpenRouter in OpenClaw
summary: Use OpenRouter's unified API to access many models in OpenClaw
title: OpenRouter
x-i18n:
  generated_at: "2026-02-01T21:35:19Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: b7e29fc9c456c64d567dd909a85166e6dea8388ebd22155a31e69c970e081586
  source_path: providers/openrouter.md
  workflow: 15
---

# OpenRouter

OpenRouter offers a **unified API** that routes requests to many models through a single endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work by just switching the base URL.

## CLI setup

```bash
openclaw onboard --auth-choice apiKey --token-provider openrouter --token "$OPENROUTER_API_KEY"
```

## Config snippet

```json5
{
  env: { OPENROUTER_API_KEY: "sk-or-..." },
  agents: {
    defaults: {
      model: { primary: "openrouter/anthropic/claude-sonnet-4-5" },
    },
  },
}
```

## Notes

- Model references use the `openrouter/<provider>/<model>` format.
- For more model/provider options, see [Model providers](/concepts/model-providers).
- Under the hood, OpenRouter authenticates with a Bearer token carrying your API key.
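Because an OpenRouter ref nests the upstream provider inside it, splitting only on the first slash recovers both parts. A minimal sketch (illustration only, not OpenClaw's parsing code):

```python
# Split an OpenRouter model ref of the form openrouter/<provider>/<model>.
# Splitting on the first "/" only keeps the upstream provider/model path intact.
ref = "openrouter/anthropic/claude-sonnet-4-5"
gateway, upstream = ref.split("/", 1)
print(gateway)   # → openrouter
print(upstream)  # → anthropic/claude-sonnet-4-5
```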
8
content/providers/qianfan.md
Normal file
@@ -0,0 +1,8 @@
---
summary: Use Qianfan's unified API to access many models in OpenClaw
title: Qianfan
---

# Qianfan

This page is a Chinese placeholder for the English documentation; see the English version for the full content: [Qianfan](/providers/qianfan).
55
content/providers/qwen.md
Normal file
@@ -0,0 +1,55 @@
---
read_when:
- You want to use Qwen in OpenClaw
- You want free-tier OAuth access to Qwen Coder
summary: Use Qwen OAuth (free tier) in OpenClaw
title: Qwen
x-i18n:
  generated_at: "2026-02-03T07:53:34Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 88b88e224e2fecbb1ca26e24fbccdbe25609be40b38335d0451343a5da53fdd4
  source_path: providers/qwen.md
  workflow: 15
---

# Qwen

Qwen offers a free-tier OAuth flow for the Qwen Coder and Qwen Vision models (2,000 requests per day, subject to Qwen rate limits).

## Enable the plugin

```bash
openclaw plugins enable qwen-portal-auth
```

Restart the Gateway after enabling.

## Authenticate

```bash
openclaw models auth login --provider qwen-portal --set-default
```

This runs the Qwen device-code OAuth flow and writes a provider entry to your `models.json` (plus a `qwen` alias for quick switching).

## Model IDs

- `qwen-portal/coder-model`
- `qwen-portal/vision-model`

Switch models:

```bash
openclaw models set qwen-portal/coder-model
```

## Reusing a Qwen Code CLI login

If you are already signed in with the Qwen Code CLI, OpenClaw syncs credentials from `~/.qwen/oauth_creds.json` when it loads the auth store. You still need a `models.providers.qwen-portal` entry (create one with the login command above).

## Notes

- Tokens refresh automatically; if a refresh fails or access is revoked, re-run the login command.
- Default base URL: `https://portal.qwen.ai/v1` (override with `models.providers.qwen-portal.baseUrl` if Qwen serves a different endpoint).
- See [Model providers](/concepts/model-providers) for provider-level rules.
102
content/providers/synthetic.md
Normal file
@@ -0,0 +1,102 @@
---
read_when:
- You want to use Synthetic as a model provider
- You need to configure a Synthetic API key or base URL
summary: Use Synthetic's Anthropic-compatible API in OpenClaw
title: Synthetic
x-i18n:
  generated_at: "2026-02-01T21:35:34Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: f3f6e3eb864661754cbe2276783c5bc96ae01cb85ee4a19c92bed7863a35a4f7
  source_path: providers/synthetic.md
  workflow: 15
---

# Synthetic

Synthetic offers Anthropic-compatible endpoints. OpenClaw registers it as the `synthetic` provider and uses the Anthropic Messages API.

## Quick setup

1. Set `SYNTHETIC_API_KEY` (or run the wizard below).
2. Run onboarding:

```bash
openclaw onboard --auth-choice synthetic-api-key
```

The default model is set to:

```
synthetic/hf:MiniMaxAI/MiniMax-M2.1
```

## Example config

```json5
{
  env: { SYNTHETIC_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "synthetic/hf:MiniMaxAI/MiniMax-M2.1" },
      models: { "synthetic/hf:MiniMaxAI/MiniMax-M2.1": { alias: "MiniMax M2.1" } },
    },
  },
  models: {
    mode: "merge",
    providers: {
      synthetic: {
        baseUrl: "https://api.synthetic.new/anthropic",
        apiKey: "${SYNTHETIC_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "hf:MiniMaxAI/MiniMax-M2.1",
            name: "MiniMax M2.1",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 192000,
            maxTokens: 65536,
          },
        ],
      },
    },
  },
}
```

Note: OpenClaw's Anthropic client automatically appends `/v1` to the base URL, so use `https://api.synthetic.new/anthropic` (not `/anthropic/v1`). If Synthetic changes its base URL, override `models.providers.synthetic.baseUrl`.
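The base-URL pitfall is easy to see with a quick sketch of the join (an illustration only, not OpenClaw's actual client code):

```python
# Sketch: the Anthropic client appends /v1 to the configured base URL,
# so a base that already ends in /v1 would produce a doubled path.
base_ok = "https://api.synthetic.new/anthropic"
base_bad = "https://api.synthetic.new/anthropic/v1"
print(base_ok.rstrip("/") + "/v1")   # → https://api.synthetic.new/anthropic/v1
print(base_bad.rstrip("/") + "/v1")  # → https://api.synthetic.new/anthropic/v1/v1
```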
## Model catalog

All models below cost `0` (input/output/cache).

| Model ID | Context window | Max tokens | Reasoning | Input |
| --- | --- | --- | --- | --- |
| `hf:MiniMaxAI/MiniMax-M2.1` | 192000 | 65536 | false | text |
| `hf:moonshotai/Kimi-K2-Thinking` | 256000 | 8192 | true | text |
| `hf:zai-org/GLM-4.7` | 198000 | 128000 | false | text |
| `hf:deepseek-ai/DeepSeek-R1-0528` | 128000 | 8192 | false | text |
| `hf:deepseek-ai/DeepSeek-V3-0324` | 128000 | 8192 | false | text |
| `hf:deepseek-ai/DeepSeek-V3.1` | 128000 | 8192 | false | text |
| `hf:deepseek-ai/DeepSeek-V3.1-Terminus` | 128000 | 8192 | false | text |
| `hf:deepseek-ai/DeepSeek-V3.2` | 159000 | 8192 | false | text |
| `hf:meta-llama/Llama-3.3-70B-Instruct` | 128000 | 8192 | false | text |
| `hf:meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | 524000 | 8192 | false | text |
| `hf:moonshotai/Kimi-K2-Instruct-0905` | 256000 | 8192 | false | text |
| `hf:openai/gpt-oss-120b` | 128000 | 8192 | false | text |
| `hf:Qwen/Qwen3-235B-A22B-Instruct-2507` | 256000 | 8192 | false | text |
| `hf:Qwen/Qwen3-Coder-480B-A35B-Instruct` | 256000 | 8192 | false | text |
| `hf:Qwen/Qwen3-VL-235B-A22B-Instruct` | 250000 | 8192 | false | text + image |
| `hf:zai-org/GLM-4.5` | 128000 | 128000 | false | text |
| `hf:zai-org/GLM-4.6` | 198000 | 128000 | false | text |
| `hf:deepseek-ai/DeepSeek-V3` | 128000 | 8192 | false | text |
| `hf:Qwen/Qwen3-235B-A22B-Thinking-2507` | 256000 | 8192 | true | text |

## Notes

- Model references use the `synthetic/<modelId>` format.
- If a model allowlist is enabled (`agents.defaults.models`), add every model you plan to use.
- See [Model providers](/concepts/model-providers) for provider rules.
65
content/providers/together.md
Normal file
@@ -0,0 +1,65 @@
---
summary: "Together AI setup (auth + model selection)"
read_when:
- You want to use Together AI with OpenClaw
- You need the API key env var or CLI auth choice
---

# Together AI

[Together AI](https://together.ai) provides access to leading open-source models, including Llama, DeepSeek, Kimi, and more, through a unified API.

- Provider: `together`
- Auth: `TOGETHER_API_KEY`
- API: OpenAI-compatible

## Quick start

1. Set the API key (recommended: store it for the Gateway):

```bash
openclaw onboard --auth-choice together-api-key
```

2. Set a default model:

```json5
{
  agents: {
    defaults: {
      model: { primary: "together/moonshotai/Kimi-K2.5" },
    },
  },
}
```

## Non-interactive example

```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice together-api-key \
  --together-api-key "$TOGETHER_API_KEY"
```

This sets `together/moonshotai/Kimi-K2.5` as the default model.

## Environment note

If the Gateway runs as a daemon (launchd/systemd), make sure `TOGETHER_API_KEY`
is available to that process (for example, in `~/.openclaw/.env` or via
`env.shellEnv`).
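As a rough sketch, assuming the env file uses plain `KEY=value` lines (check your OpenClaw version for the exact format it loads), populating it could look like this; a temp path stands in for the real `~/.openclaw/.env`:

```shell
# Sketch: create a KEY=value env file a daemonized Gateway could read.
# The real file would live at ~/.openclaw/.env; a temp dir is used here
# so the example is safe to run anywhere.
ENV_FILE="$(mktemp -d)/env"
printf 'TOGETHER_API_KEY=%s\n' "sk-demo" > "$ENV_FILE"
grep '^TOGETHER_API_KEY=' "$ENV_FILE"
```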
## Available models

Together AI provides access to many popular open-source models:

- **GLM 4.7 Fp8** - Default model with a 200K context window
- **Llama 3.3 70B Instruct Turbo** - Fast, efficient instruction following
- **Llama 4 Scout** - Vision model with image understanding
- **Llama 4 Maverick** - Advanced vision and reasoning
- **DeepSeek V3.1** - Powerful coding and reasoning model
- **DeepSeek R1** - Advanced reasoning model
- **Kimi K2 Instruct** - High-performance model with a 262K context window

All models support standard chat completions and are OpenAI API compatible.
274
content/providers/venice.md
Normal file
@@ -0,0 +1,274 @@
---
read_when:
- You want privacy-focused inference in OpenClaw
- You need Venice AI setup guidance
summary: Use Venice AI's privacy-focused models in OpenClaw
title: Venice AI
x-i18n:
  generated_at: "2026-02-01T21:36:03Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 2453a6ec3a715c24c460f902dec1755edcad40328de2ef895e35a614a25624cf
  source_path: providers/venice.md
  workflow: 15
---

# Venice AI (Venice curated)

**Venice** is our curated, privacy-first Venice inference setup, with optional anonymized access to proprietary models.

Venice AI offers privacy-focused AI inference, supports uncensored models, and provides access to major proprietary models through its anonymizing proxy. All inference is private by default: your data is never used for training and never logged.

## Why use Venice in OpenClaw

- **Private inference** for open-source models (no logging).
- **Uncensored models** when you need them.
- **Anonymized access** to proprietary models (Opus/GPT/Gemini) when quality matters.
- OpenAI-compatible `/v1` endpoints.

## Privacy modes

Venice offers two privacy levels; understanding them is key to choosing a model:

| Mode | Description | Models |
| --- | --- | --- |
| **Private** | Fully private. Prompts/responses are **never stored or logged**. Ephemeral processing. | Llama, Qwen, DeepSeek, Venice Uncensored, and more |
| **Anonymized** | Routed through Venice's proxy with metadata stripped. The upstream provider (OpenAI, Anthropic) receives an anonymized request. | Claude, GPT, Gemini, Grok, Kimi, MiniMax |

## Features

- **Privacy-focused**: choose between "Private" (fully private) and "Anonymized" (proxied) modes
- **Uncensored models**: access models without content restrictions
- **Major model access**: Claude, GPT-5.2, Gemini, Grok via Venice's anonymizing proxy
- **OpenAI-compatible API**: standard `/v1` endpoints, easy integration
- **Streaming**: ✅ all models
- **Function calling**: ✅ some models (check model capabilities)
- **Vision**: ✅ on vision-capable models
- **No hard rate limits**: extreme usage may trigger fair-use throttling

## Setup

### 1. Get an API key

1. Sign up at [venice.ai](https://venice.ai)
2. Go to **Settings → API Keys → Create new key**
3. Copy your API key (format: `vapi_xxxxxxxxxxxx`)

### 2. Configure OpenClaw

**Option A: environment variable**

```bash
export VENICE_API_KEY="vapi_xxxxxxxxxxxx"
```

**Option B: interactive setup (recommended)**

```bash
openclaw onboard --auth-choice venice-api-key
```

This will:

1. Prompt for your API key (or use an existing `VENICE_API_KEY`)
2. Show all available Venice models
3. Let you pick a default model
4. Configure the provider automatically

**Option C: non-interactive**

```bash
openclaw onboard --non-interactive \
  --auth-choice venice-api-key \
  --venice-api-key "vapi_xxxxxxxxxxxx"
```

### 3. Verify the setup

```bash
openclaw chat --model venice/llama-3.3-70b "Hello, are you working?"
```

## Model selection

After setup, OpenClaw shows all available Venice models. Pick based on your needs:

- **Default (our recommendation)**: `venice/llama-3.3-70b`, private with balanced performance.
- **Best overall quality**: `venice/claude-opus-45` for complex tasks (Opus is still the strongest).
- **Privacy**: pick a "Private" model for fully private inference.
- **Capability**: pick an "Anonymized" model to reach Claude, GPT, or Gemini through Venice's proxy.

Change the default model at any time:

```bash
openclaw models set venice/claude-opus-45
openclaw models set venice/llama-3.3-70b
```

List all available models:

```bash
openclaw models list | grep venice
```

## Configure via `openclaw configure`

1. Run `openclaw configure`
2. Select **Model/auth**
3. Select **Venice AI**

## Which model should I use?

| Use case | Recommended model | Why |
| --- | --- | --- |
| **General chat** | `llama-3.3-70b` | Strong all-round performance, fully private |
| **Best overall quality** | `claude-opus-45` | Opus is still strongest on complex tasks |
| **Privacy + Claude quality** | `claude-opus-45` | Best reasoning via the anonymizing proxy |
| **Coding** | `qwen3-coder-480b-a35b-instruct` | Code-optimized, 262k context |
| **Vision tasks** | `qwen3-vl-235b-a22b` | Best private vision model |
| **Uncensored** | `venice-uncensored` | No content restrictions |
| **Fast + cheap** | `qwen3-4b` | Lightweight but still capable |
| **Complex reasoning** | `deepseek-v3.2` | Strong reasoning, private |

## Available models (25 total)

### Private models (15) - fully private, no logging

| Model ID | Name | Context (tokens) | Traits |
| --- | --- | --- | --- |
| `llama-3.3-70b` | Llama 3.3 70B | 131k | General |
| `llama-3.2-3b` | Llama 3.2 3B | 131k | Fast, lightweight |
| `hermes-3-llama-3.1-405b` | Hermes 3 Llama 3.1 405B | 131k | Complex tasks |
| `qwen3-235b-a22b-thinking-2507` | Qwen3 235B Thinking | 131k | Reasoning |
| `qwen3-235b-a22b-instruct-2507` | Qwen3 235B Instruct | 131k | General |
| `qwen3-coder-480b-a35b-instruct` | Qwen3 Coder 480B | 262k | Coding |
| `qwen3-next-80b` | Qwen3 Next 80B | 262k | General |
| `qwen3-vl-235b-a22b` | Qwen3 VL 235B | 262k | Vision |
| `qwen3-4b` | Venice Small (Qwen3 4B) | 32k | Fast, reasoning |
| `deepseek-v3.2` | DeepSeek V3.2 | 163k | Reasoning |
| `venice-uncensored` | Venice Uncensored | 32k | Uncensored |
| `mistral-31-24b` | Venice Medium (Mistral) | 131k | Vision |
| `google-gemma-3-27b-it` | Gemma 3 27B Instruct | 202k | Vision |
| `openai-gpt-oss-120b` | OpenAI GPT OSS 120B | 131k | General |
| `zai-org-glm-4.7` | GLM 4.7 | 202k | Reasoning, multilingual |

### Anonymized models (10) - via the Venice proxy

| Model ID | Upstream model | Context (tokens) | Traits |
| --- | --- | --- | --- |
| `claude-opus-45` | Claude Opus 4.5 | 202k | Reasoning, vision |
| `claude-sonnet-45` | Claude Sonnet 4.5 | 202k | Reasoning, vision |
| `openai-gpt-52` | GPT-5.2 | 262k | Reasoning |
| `openai-gpt-52-codex` | GPT-5.2 Codex | 262k | Reasoning, vision |
| `gemini-3-pro-preview` | Gemini 3 Pro | 202k | Reasoning, vision |
| `gemini-3-flash-preview` | Gemini 3 Flash | 262k | Reasoning, vision |
| `grok-41-fast` | Grok 4.1 Fast | 262k | Reasoning, vision |
| `grok-code-fast-1` | Grok Code Fast 1 | 262k | Reasoning, coding |
| `kimi-k2-thinking` | Kimi K2 Thinking | 262k | Reasoning |
| `minimax-m21` | MiniMax M2.1 | 202k | Reasoning |

## Model discovery

When `VENICE_API_KEY` is set, OpenClaw automatically discovers models from the Venice API. If the API is unreachable, it falls back to a static catalog.

The `/models` endpoint is public (listing models requires no auth), but inference requires a valid API key.

## Streaming and tool support

| Feature | Support |
| --- | --- |
| **Streaming** | ✅ all models |
| **Function calling** | ✅ most models (check `supportsFunctionCalling` in the API) |
| **Vision/images** | ✅ models tagged with the "Vision" trait |
| **JSON mode** | ✅ via `response_format` |

## Pricing

Venice uses a credit system. See [venice.ai/pricing](https://venice.ai/pricing) for current rates:

- **Private models**: generally cheaper
- **Anonymized models**: close to direct API pricing, plus a small Venice fee

## Comparison: Venice vs. direct APIs

| Aspect | Venice (anonymized) | Direct API |
| --- | --- | --- |
| **Privacy** | Metadata stripped, anonymized | Tied to your account |
| **Latency** | +10-50ms (proxy) | Direct |
| **Features** | Most features supported | Full feature set |
| **Billing** | Venice credits | Provider billing |

## Usage examples

```bash
# Use the default private model
openclaw chat --model venice/llama-3.3-70b

# Use Claude via Venice (anonymized)
openclaw chat --model venice/claude-opus-45

# Use the uncensored model
openclaw chat --model venice/venice-uncensored

# Use a vision model for images
openclaw chat --model venice/qwen3-vl-235b-a22b

# Use the coding model
openclaw chat --model venice/qwen3-coder-480b-a35b-instruct
```

## Troubleshooting

### API key not recognized

```bash
echo $VENICE_API_KEY
openclaw models list | grep venice
```

Make sure the key starts with `vapi_`.

### Model unavailable

The Venice model catalog updates dynamically. Run `openclaw models list` to see what is currently available. Some models may be temporarily offline.

### Connection issues

The Venice API lives at `https://api.venice.ai/api/v1`. Make sure your network allows HTTPS connections.

## Example config file

```json5
{
  env: { VENICE_API_KEY: "vapi_..." },
  agents: { defaults: { model: { primary: "venice/llama-3.3-70b" } } },
  models: {
    mode: "merge",
    providers: {
      venice: {
        baseUrl: "https://api.venice.ai/api/v1",
        apiKey: "${VENICE_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "llama-3.3-70b",
            name: "Llama 3.3 70B",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 131072,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```

## Links

- [Venice AI](https://venice.ai)
- [API docs](https://docs.venice.ai)
- [Pricing](https://venice.ai/pricing)
- [Status page](https://status.venice.ai)
57
content/providers/vercel-ai-gateway.md
Normal file
@@ -0,0 +1,57 @@
---
read_when:
- You want to use Vercel AI Gateway with OpenClaw
- You need the API key env var or CLI auth choice
summary: Vercel AI Gateway setup (auth + model selection)
title: Vercel AI Gateway
x-i18n:
  generated_at: "2026-02-03T07:53:39Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: c6482f047a31b09c7a691d40babbd1f9fb3aa2042b61cc42956ad9b791da8285
  source_path: providers/vercel-ai-gateway.md
  workflow: 15
---

# Vercel AI Gateway

[Vercel AI Gateway](https://vercel.com/ai-gateway) offers a unified API that reaches hundreds of models through a single endpoint.

- Provider: `vercel-ai-gateway`
- Auth: `AI_GATEWAY_API_KEY`
- API: Anthropic Messages-compatible

## Quick start

1. Set the API key (recommended: store it for the Gateway):

```bash
openclaw onboard --auth-choice ai-gateway-api-key
```

2. Set a default model:

```json5
{
  agents: {
    defaults: {
      model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.5" },
    },
  },
}
```

## Non-interactive example

```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice ai-gateway-api-key \
  --ai-gateway-api-key "$AI_GATEWAY_API_KEY"
```

## Environment note

If the Gateway runs as a daemon (launchd/systemd), make sure `AI_GATEWAY_API_KEY`
is available to that process (for example, in `~/.openclaw/.env` or via
`env.shellEnv`).
92
content/providers/vllm.md
Normal file
@@ -0,0 +1,92 @@
---
summary: "Run OpenClaw with vLLM (OpenAI-compatible local server)"
read_when:
- You want to run OpenClaw against a local vLLM server
- You want OpenAI-compatible /v1 endpoints with your own models
title: "vLLM"
---

# vLLM

vLLM can serve open-source (and some custom) models via an **OpenAI-compatible** HTTP API. OpenClaw can connect to vLLM using the `openai-completions` API.

OpenClaw can also **auto-discover** available models from vLLM when you opt in with `VLLM_API_KEY` (any value works if your server doesn't enforce auth) and you do not define an explicit `models.providers.vllm` entry.

## Quick start

1. Start vLLM with an OpenAI-compatible server.

Your base URL should expose `/v1` endpoints (e.g. `/v1/models`, `/v1/chat/completions`). vLLM commonly runs on:

- `http://127.0.0.1:8000/v1`

2. Opt in (any value works if no auth is configured):

```bash
export VLLM_API_KEY="vllm-local"
```

3. Select a model (replace with one of your vLLM model IDs):

```json5
{
  agents: {
    defaults: {
      model: { primary: "vllm/your-model-id" },
    },
  },
}
```

## Model discovery (implicit provider)

When `VLLM_API_KEY` is set (or an auth profile exists) and you **do not** define `models.providers.vllm`, OpenClaw will query:

- `GET http://127.0.0.1:8000/v1/models`

…and convert the returned IDs into model entries.
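That conversion can be sketched as follows, assuming vLLM returns the standard OpenAI model-list shape (an illustration only, not OpenClaw's actual discovery code; the model ID is the placeholder used above):

```python
# Sketch: map a /v1/models response to provider-prefixed model refs.
sample_response = {
    "object": "list",
    "data": [
        {"id": "your-model-id", "object": "model"},
    ],
}

def to_model_refs(response):
    """Prefix each discovered model ID with the vllm provider name."""
    return ["vllm/" + m["id"] for m in response["data"]]

print(to_model_refs(sample_response))  # → ['vllm/your-model-id']
```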
If you set `models.providers.vllm` explicitly, auto-discovery is skipped and you must define models manually.

## Explicit configuration (manual models)

Use explicit config when:

- vLLM runs on a different host/port.
- You want to pin `contextWindow`/`maxTokens` values.
- Your server requires a real API key (or you want to control headers).

```json5
{
  models: {
    providers: {
      vllm: {
        baseUrl: "http://127.0.0.1:8000/v1",
        apiKey: "${VLLM_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "your-model-id",
            name: "Local vLLM Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```

## Troubleshooting

- Check the server is reachable:

```bash
curl http://127.0.0.1:8000/v1/models
```

- If requests fail with auth errors, set a real `VLLM_API_KEY` that matches your server configuration, or configure the provider explicitly under `models.providers.vllm`.
68
content/providers/xiaomi.md
Normal file
@@ -0,0 +1,68 @@
---
read_when:
- You want to use Xiaomi MiMo models in OpenClaw
- You need to set XIAOMI_API_KEY
summary: Use Xiaomi MiMo (mimo-v2-flash) in OpenClaw
title: Xiaomi MiMo
x-i18n:
  generated_at: "2026-02-01T21:36:15Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 366fd2297b2caf8c5ad944d7f1b6d233b248fe43aedd22a28352ae7f370d2435
  source_path: providers/xiaomi.md
  workflow: 15
---

# Xiaomi MiMo

Xiaomi MiMo is the API platform for the **MiMo** models. It offers REST APIs compatible with the OpenAI and Anthropic formats and authenticates with API keys. Create your API key in the [Xiaomi MiMo console](https://platform.xiaomimimo.com/#/console/api-keys). OpenClaw uses the `xiaomi` provider with a Xiaomi MiMo API key.

## Model overview

- **mimo-v2-flash**: 262144-token context window, Anthropic Messages API-compatible.
- Base URL: `https://api.xiaomimimo.com/anthropic`
- Authorization: `Bearer $XIAOMI_API_KEY`

## CLI setup

```bash
openclaw onboard --auth-choice xiaomi-api-key
# Or non-interactively
openclaw onboard --auth-choice xiaomi-api-key --xiaomi-api-key "$XIAOMI_API_KEY"
```

## Config snippet

```json5
{
  env: { XIAOMI_API_KEY: "your-key" },
  agents: { defaults: { model: { primary: "xiaomi/mimo-v2-flash" } } },
  models: {
    mode: "merge",
    providers: {
      xiaomi: {
        baseUrl: "https://api.xiaomimimo.com/anthropic",
        api: "anthropic-messages",
        apiKey: "XIAOMI_API_KEY",
        models: [
          {
            id: "mimo-v2-flash",
            name: "Xiaomi MiMo V2 Flash",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 262144,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```

## Notes

- Model reference: `xiaomi/mimo-v2-flash`.
- The provider is injected automatically when `XIAOMI_API_KEY` is set (or an auth profile exists).
- See [/concepts/model-providers](/concepts/model-providers) for provider rules.
41
content/providers/zai.md
Normal file
@@ -0,0 +1,41 @@
---
read_when:
- You want to use Z.AI / GLM models in OpenClaw
- You need a simple ZAI_API_KEY setup
summary: Use Zhipu AI (GLM models) in OpenClaw
title: Z.AI
x-i18n:
  generated_at: "2026-02-01T21:36:13Z"
  model: claude-opus-4-5
  provider: pi
  source_hash: 2c24bbad86cf86c38675a58e22f9e1b494f78a18fdc3051c1be80d2d9a800711
  source_path: providers/zai.md
  workflow: 15
---

# Z.AI

Z.AI is the API platform for the **GLM** models. It offers a REST API for GLM and authenticates with API keys. Create your API key in the Z.AI console. OpenClaw uses the `zai` provider with a Z.AI API key.

## CLI setup

```bash
openclaw onboard --auth-choice zai-api-key
# Or non-interactively
openclaw onboard --zai-api-key "$ZAI_API_KEY"
```

## Config snippet

```json5
{
  env: { ZAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "zai/glm-4.7" } } },
}
```

## Notes

- GLM models are available as `zai/<model>` (for example `zai/glm-4.7`).
- See [/providers/glm](/providers/glm) for a model family overview.
- Z.AI uses Bearer auth with your API key.