LLM Council: A Multi-Model Consensus Tool - Openclaw Skills
2026-04-16
What Is the LLM Council Installer?
LLM Council is a multi-model consensus application designed to streamline complex decision-making by querying several LLMs at once. With Openclaw Skills, developers can deploy its React/Vite and FastAPI stack with zero manual configuration. The skill automates the entire lifecycle, from credential resolution to environment setup, so you can focus on synthesizing model critiques rather than on infrastructure.
The application lets users pose one question to multiple models, have the models critique each other's answers, and receive a synthesized "chairman" response. It is OpenRouter- and OpenClaw-native: credentials resolve automatically from your existing configuration, with no manual intervention.
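To make the council flow concrete, here is a minimal sketch of the fan-out step against the OpenRouter chat-completions endpoint. The model IDs and question are illustrative assumptions; the app itself layers the critique and chairman-synthesis stages on top of this:

```bash
# Hedged sketch: fan one question out to several models via OpenRouter.
# Model IDs and the question are illustrative assumptions.
Q="Should this service use Postgres or SQLite?"
for MODEL in "openai/gpt-4o" "anthropic/claude-sonnet-4"; do
  curl -s https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"model\": \"$MODEL\", \"messages\": [{\"role\": \"user\", \"content\": \"$Q\"}]}"
done
```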
Download: https://github.com/openclaw/skills/tree/main/skills/jeadland/install-llm-council
Installation & Download
1. ClawHub CLI
The fastest way to install the skill directly from the source.
npx clawhub@latest install install-llm-council
2. Manual Installation
Copy the skill folder to one of the following locations:
- Global: `~/.openclaw/skills/`
- Workspace: `<workspace>/skills/`

Priority: workspace > local > built-in
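For a manual global install, the copy is a one-liner (assuming the skill folder is checked out locally as `./install-llm-council`):

```bash
# Hedged sketch: manual global install by copying the skill folder.
mkdir -p ~/.openclaw/skills
cp -r ./install-llm-council ~/.openclaw/skills/
```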
3. Prompt-Based Installation
Copy this prompt into OpenClaw to install automatically:
Please install install-llm-council for me using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).
LLM Council Installer Use Cases
- Deep research that needs perspectives from multiple AI models at once.
- Resolving disagreements between LLM outputs through automated cross-model critique.
- Rapid prototyping of multi-agent synthesized responses to complex prompts.
- Quickly deploying a local multi-model dashboard for performance comparison.
How the installation works:
- The agent runs the /install-llm-council command to trigger the automated install script.
- The system resolves API credentials by checking environment variables, workspace files, or the local OpenClaw gateway.
- It clones the source repository and syncs dependencies with uv (Python backend) and npm (React frontend).
- It auto-generates the environment configuration, linking the backend to the preferred API gateway or OpenRouter (see the example after this list).
- It starts the FastAPI backend and Vite frontend as background services, with automatic port-conflict handling and health checks.
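A generated `.env` might look like the following. The key names are inferred from this document's credential-resolution section rather than verified installer output (gateway mode shown, using the local gateway port 18789 mentioned below):

```bash
# Hypothetical example of the generated .env in OpenClaw gateway mode.
# Key names are inferred from this document, not verified installer output.
OPENROUTER_API_KEY=<gateway token>
OPENROUTER_API_URL=http://127.0.0.1:18789/v1/chat/completions
```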
LLM Council Installer Configuration Guide
To deploy LLM Council with Openclaw Skills, run the following slash command in the agent interface:
/install-llm-council
Alternatively, you can trigger the install script manually from the CLI:
bash ~/.openclaw/skills/install-llm-council/install.sh
To manage the services after installation, use the provided utility scripts:
# Check service status
bash ~/.openclaw/skills/install-llm-council/status.sh
# Stop all services
bash ~/.openclaw/skills/install-llm-council/stop.sh
LLM Council Installer Data Architecture & Taxonomy
The skill organizes its runtime data and configuration across the local workspace and the skill directory to keep the deployment environment tidy.
| Component | Location | Purpose |
|---|---|---|
| Source code | `~/workspace/llm-council` | Contains the cloned frontend and backend code. |
| Configuration | `.env` | Stores API keys and local URL routing information. |
| Process management | `pids` | Tracking file for the PIDs of active background services. |
| Control scripts | `~/.openclaw/skills/install-llm-council/` | Contains the install, stop, and status scripts. |
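The `pids` file is plausibly what `stop.sh` and `status.sh` read. A minimal sketch of a stop routine built on it, assuming one PID per line (a format this document does not actually specify):

```bash
# Hedged sketch of a stop routine driven by the pids file.
# Assumes one PID per line; the real stop.sh may differ.
PIDS_FILE=~/.openclaw/skills/install-llm-council/pids
if [ -f "$PIDS_FILE" ]; then
  while read -r pid; do
    kill "$pid" 2>/dev/null && echo "stopped $pid"
  done < "$PIDS_FILE"
  rm -f "$PIDS_FILE"
fi
```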
name: install-llm-council
version: 1.1.6
description: |
LLM Council — multi-model consensus app with one-command setup. Ask one question to many
models, let them critique each other, get a synthesized chairman answer. OpenRouter/OpenClaw-native
backend + React/Vite frontend. Zero config — credentials resolve automatically.
slash_command: /install-llm-council
metadata: {"category":"devtools","tags":["llm","openrouter","openclaw","install","vite","fastapi","consensus","multi-model"],"repo":"https://github.com/jeadland/llm-council"}
LLM Council (with Installer)
LLM Council — ask one question to many models, let them critique each other, get a synthesized chairman answer.
This skill is the fastest way to run it: one command installs dependencies, configures credentials, and launches both backend and frontend. No manual setup, no API key prompts.
OpenClaw-native: Credentials resolve automatically from OpenClaw config or workspace .env. Falls back to the local OpenClaw gateway (port 18789) if no OpenRouter key is found.
Two Ways to Use LLM Council
| Mode | Best For | Command |
|---|---|---|
| Quick answer | Fast decisions, mobile, casual questions | /council "Your question" (requires ask-council skill) |
| Full discussion | Deep research, exploring disagreements, seeing all model responses | /install-llm-council then open browser at :5173 |
Slash Command
/install-llm-council [--mode auto|dev|preview] [--dir PATH]
When the user says /install-llm-council, run:
bash ~/.openclaw/skills/install-llm-council/install.sh
The script will:
- Resolve credentials — env var → workspace `.env` → OpenClaw local gateway (no prompt ever)
- Clone or pull https://github.com/jeadland/llm-council to `~/workspace/llm-council`
- `uv sync` — Python backend dependencies
- `npm ci` — frontend dependencies
- Write `.env` — API key/URL for OpenRouter direct or OpenClaw gateway mode
- Start app — uses hardened `start.sh` with mode-aware startup and health checks
- Auto-handle port conflicts — selects safe fallback ports when defaults are busy (see the sketch after this list)
- Print practical access URLs — Caddy route and common direct fallbacks
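The port-conflict bullet above suggests logic along these lines; the function and the `lsof` probe are illustrative assumptions, not the script's actual code:

```bash
# Hedged sketch: walk upward from a default port until a free one is found.
pick_port() {
  local port=$1
  while lsof -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1; do
    port=$((port + 1))
  done
  echo "$port"
}
FRONTEND_PORT="$(pick_port 5173)"   # e.g. falls back to 5174 if 5173 is busy
```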
Flags
| Flag | Default | Description |
|---|---|---|
| `--mode auto` | `auto` | Detect Caddy on :5173 and prefer preview mode; otherwise dev mode |
| `--mode dev` | — | Run Vite dev server (hot reload, port 5173 default) |
| `--mode preview` | — | Build + run Vite preview (port 4173 default) |
| `--dir PATH` | `~/workspace/llm-council` | Override clone directory |
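For example, to force a preview build into a custom directory (assuming `install.sh` accepts the same flags as the slash command; the target path is hypothetical):

```bash
bash ~/.openclaw/skills/install-llm-council/install.sh --mode preview --dir ~/projects/llm-council
```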
Credential Resolution (OpenClaw-native)
The installer never prompts for API keys. It resolves credentials in this order:
- Environment — `OPENROUTER_API_KEY` already exported
- Workspace `.env` — `~/.openclaw/workspace/.env` contains `OPENROUTER_API_KEY=...`
- OpenClaw gateway — reads `~/.openclaw/openclaw.json` → `gateway.auth.token` + `gateway.port`
  - Sets `OPENROUTER_API_URL=http://127.0.0.1:<gateway.port>/v1/chat/completions` in `.env`
  - Uses the gateway token as the bearer key (OpenAI-compatible endpoint)
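The same fallback chain sketched in shell; the `jq` paths mirror the keys listed above, but this is illustrative rather than `install.sh`'s actual code:

```bash
# Hedged sketch of the credential fallback chain (not install.sh itself).
if [ -n "${OPENROUTER_API_KEY:-}" ]; then
  :  # 1. key already exported in the environment
elif grep -q '^OPENROUTER_API_KEY=' ~/.openclaw/workspace/.env 2>/dev/null; then
  # 2. key present in the workspace .env
  export "$(grep -m1 '^OPENROUTER_API_KEY=' ~/.openclaw/workspace/.env)"
else
  # 3. fall back to the local OpenClaw gateway (key paths per the list above)
  export OPENROUTER_API_KEY="$(jq -r '.gateway.auth.token' ~/.openclaw/openclaw.json)"
  port="$(jq -r '.gateway.port' ~/.openclaw/openclaw.json)"
  export OPENROUTER_API_URL="http://127.0.0.1:${port}/v1/chat/completions"
fi
```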
Ports
| Service | Port | Notes |
|---|---|---|
| Backend (FastAPI) | 8001 | Always |
| Frontend dev | 5173 | --mode dev (default) |
| Frontend preview | 4173 | --mode preview |
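Once running, you can probe the default ports from the table directly; the status codes will vary by route, so this only confirms that something is listening:

```bash
# Probe the default ports; prints the HTTP status code for each service.
curl -s -o /dev/null -w 'backend:  %{http_code}\n' http://127.0.0.1:8001/
curl -s -o /dev/null -w 'frontend: %{http_code}\n' http://127.0.0.1:5173/
```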
Files
| File | Purpose |
|---|---|
| `SKILL.md` | This file — skill documentation |
| `install.sh` | Main one-shot installer/launcher |
| `stop.sh` | Stop background services |
| `status.sh` | Check if services are running |
| `pids` | Saved PIDs for background processes |
Agent Instructions
When the user says /install-llm-council or "install llm-council" or "start llm council":
bash ~/.openclaw/skills/install-llm-council/install.sh
Report back the access URL from the script output (e.g. http://10.0.1.X:5173).
To stop:
bash ~/.openclaw/skills/install-llm-council/stop.sh
To check status:
bash ~/.openclaw/skills/install-llm-council/status.sh
Example Output
LLM Council installed and running!
Mode: dev
API: openrouter
Backend: http://127.0.0.1:8001
Frontend: http://10.0.1.42:5173
Stop: bash ~/.openclaw/skills/install-llm-council/stop.sh
Status: bash ~/.openclaw/skills/install-llm-council/status.sh