LLM Council: Multi-Model Consensus Tool - Openclaw Skills

Author: Internet

2026-04-16

AI Tutorials

What Is the LLM Council Installer?

LLM Council is a multi-model consensus application designed to streamline complex decision-making by querying several LLMs at once. With Openclaw Skills, developers can deploy its React/Vite and FastAPI stack with zero manual configuration. The skill automates the entire lifecycle, from credential resolution to environment setup, so you can focus on synthesizing model critiques rather than on infrastructure.

The application lets users pose one question to multiple models, have the models critique each other's answers, and receive a synthesized "chairman" response. It is OpenRouter- and OpenClaw-native, so your credentials are resolved automatically from existing configuration with no manual intervention.

Download: https://github.com/openclaw/skills/tree/main/skills/jeadland/install-llm-council

Installation & Download

1. ClawHub CLI

The fastest way to install the skill directly from the source.

npx clawhub@latest install install-llm-council

2. Manual Installation

Copy the skill folder into one of the following locations:

  • Global mode: ~/.openclaw/skills/
  • Workspace: /skills/

Priority: workspace > local > built-in
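The priority order above can be illustrated with a short lookup sketch. The workspace and global paths come from this document; the built-in directory and the helper name are illustrative assumptions, not documented paths:

```shell
#!/usr/bin/env sh
# Hypothetical lookup honoring the priority: workspace > local (global) > built-in.
# "/usr/share/openclaw/skills" is a placeholder for wherever built-ins live.
resolve_skill() {
  name="$1"
  for dir in "${WORKSPACE:-$PWD}/skills" \
             "$HOME/.openclaw/skills" \
             "/usr/share/openclaw/skills"; do
    if [ -d "$dir/$name" ]; then
      printf '%s\n' "$dir/$name"
      return 0
    fi
  done
  return 1
}
```

A workspace copy of a skill therefore shadows a global copy of the same name.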

3. Prompt-Based Installation

Copy this prompt into OpenClaw to install automatically.

Please install install-llm-council for me using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).

LLM Council Installer Use Cases

  • Deep research that needs perspectives from several AI models at once.
  • Resolving disagreements between LLM outputs through automated cross-model critique.
  • Rapid prototyping of multi-agent synthesized responses to complex prompts.
  • Quickly deploying a local multi-model dashboard for performance comparison.
How the LLM Council Installer Works

  1. The agent runs the /install-llm-council command to trigger the automated install script.
  2. API credentials are resolved by checking environment variables, workspace files, or the local OpenClaw gateway.
  3. The source repository is cloned and dependencies are synced with uv (Python backend) and npm (React frontend).
  4. An environment configuration is generated automatically, linking the backend to the preferred API gateway or OpenRouter.
  5. The FastAPI backend and Vite frontend are launched as background services, with automatic port-conflict handling and health checks.

LLM Council Installer Configuration Guide

To deploy LLM Council with Openclaw Skills, run the following slash command in the agent interface:

/install-llm-council

Alternatively, you can trigger the install script manually from the CLI:

bash ~/.openclaw/skills/install-llm-council/install.sh

To manage the services after installation, use the provided utility scripts:

# Check service status
bash ~/.openclaw/skills/install-llm-council/status.sh

# Stop all services
bash ~/.openclaw/skills/install-llm-council/stop.sh

LLM Council Installer Data Architecture and Taxonomy

The skill organizes its runtime data and configuration across the local workspace and the skills directory, keeping the deployment environment tidy.

| Component | Location | Purpose |
|---|---|---|
| Source code | ~/workspace/llm-council | The cloned frontend and backend code. |
| Configuration | .env | Stores API keys and local URL routing. |
| Process management | pids | Tracking file for active background service PIDs. |
| Control scripts | ~/.openclaw/skills/install-llm-council/ | Contains the install, stop, and status scripts. |

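The pids tracking file is the glue between the control scripts: status and stop scripts read it to find the background services. A minimal sketch of how such a file is typically consumed, assuming a one-PID-per-line format (the real scripts may differ):

```shell
#!/usr/bin/env sh
# Sketch of a status check over a pids file: one PID per line.
# kill -0 probes whether a process exists without delivering a signal.
report_status() {
  pidfile="$1"
  [ -f "$pidfile" ] || { echo "no services recorded"; return 1; }
  while read -r pid; do
    if kill -0 "$pid" 2>/dev/null; then
      echo "running: $pid"
    else
      echo "stale: $pid"
    fi
  done < "$pidfile"
}
```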
name: install-llm-council
version: 1.1.6
description: |
  LLM Council — multi-model consensus app with one-command setup. Ask one question to many
  models, let them critique each other, get a synthesized chairman answer. OpenRouter/OpenClaw-native
  backend + React/Vite frontend. Zero config — credentials resolve automatically.
slash_command: /install-llm-council
metadata: {"category":"devtools","tags":["llm","openrouter","openclaw","install","vite","fastapi","consensus","multi-model"],"repo":"https://github.com/jeadland/llm-council"}

LLM Council (with Installer)

LLM Council — ask one question to many models, let them critique each other, get a synthesized chairman answer.

This skill is the fastest way to run it: one command installs dependencies, configures credentials, and launches both backend and frontend. No manual setup, no API key prompts.

OpenClaw-native: Credentials resolve automatically from OpenClaw config or workspace .env. Falls back to the local OpenClaw gateway (port 18789) if no OpenRouter key is found.

Two Ways to Use LLM Council

| Mode | Best For | Command |
|---|---|---|
| Quick answer | Fast decisions, mobile, casual questions | /council "Your question" (requires ask-council skill) |
| Full discussion | Deep research, exploring disagreements, seeing all model responses | /install-llm-council then open browser at :5173 |

Slash Command

/install-llm-council [--mode auto|dev|preview] [--dir PATH]

When the user says /install-llm-council, run:

bash ~/.openclaw/skills/install-llm-council/install.sh

The script will:

  1. Resolve credentials — env var → workspace .env → OpenClaw local gateway (no prompt ever)
  2. Clone or pull https://github.com/jeadland/llm-council to ~/workspace/llm-council
  3. uv sync — Python backend dependencies
  4. npm ci — frontend dependencies
  5. Write .env — API key/URL for OpenRouter direct or OpenClaw gateway mode
  6. Start app — uses hardened start.sh with mode-aware startup and health checks
  7. Auto-handle port conflicts — selects safe fallback ports when defaults are busy
  8. Print practical access URLs — Caddy route and common direct fallbacks

Flags

| Flag | Default | Description |
|---|---|---|
| --mode auto | auto | Detect Caddy on :5173 and prefer preview mode; otherwise dev mode |
| --mode dev | | Run Vite dev server (hot reload, port 5173 default) |
| --mode preview | | Build + run Vite preview (port 4173 default) |
| --dir PATH | ~/workspace/llm-council | Override clone directory |

Credential Resolution (OpenClaw-native)

The installer never prompts for API keys. It resolves credentials in this order:

  1. Environment — OPENROUTER_API_KEY already exported
  2. Workspace .env — ~/.openclaw/workspace/.env contains OPENROUTER_API_KEY=...
  3. OpenClaw gateway — reads gateway.auth.token + gateway.port from ~/.openclaw/openclaw.json
    • Sets OPENROUTER_API_URL=http://127.0.0.1:<gateway.port>/v1/chat/completions in .env
    • Uses the gateway token as the bearer key (OpenAI-compatible endpoint)
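The resolution chain can be sketched as a small shell helper. The file locations are taken from this document; the function name and the jq-based parsing of the gateway config are assumptions for illustration, not the installer's actual code:

```shell
#!/usr/bin/env sh
# Illustrative credential resolution: env var, then workspace .env,
# then the local OpenClaw gateway token. Never prompts.
resolve_key() {
  # 1. Exported environment variable wins.
  if [ -n "${OPENROUTER_API_KEY:-}" ]; then
    printf '%s\n' "$OPENROUTER_API_KEY"
    return 0
  fi
  # 2. Workspace .env file.
  env_file="$HOME/.openclaw/workspace/.env"
  if [ -f "$env_file" ]; then
    key=$(sed -n 's/^OPENROUTER_API_KEY=//p' "$env_file" | head -n 1)
    if [ -n "$key" ]; then printf '%s\n' "$key"; return 0; fi
  fi
  # 3. Gateway token from the OpenClaw config (used as the bearer key).
  cfg="$HOME/.openclaw/openclaw.json"
  if [ -f "$cfg" ] && command -v jq >/dev/null 2>&1; then
    jq -r '.gateway.auth.token // empty' "$cfg"
    return 0
  fi
  return 1
}
```

In gateway mode the resolved token is paired with the gateway URL rather than api.openrouter.ai, since the gateway exposes an OpenAI-compatible endpoint.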

Ports

| Service | Port | Notes |
|---|---|---|
| Backend (FastAPI) | 8001 | Always |
| Frontend dev | 5173 | --mode dev (default) |
| Frontend preview | 4173 | --mode preview |
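The automatic port-conflict handling mentioned earlier can be sketched as follows: probe the default port and walk upward until a free one is found. The /dev/tcp probe (a bash feature) and the function names are assumptions; the real start.sh may probe differently:

```shell
#!/usr/bin/env sh
# Succeeds (port busy) if something accepts a TCP connection on the port.
port_busy() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# Return the first free port at or above the requested default.
pick_port() {
  port="$1"
  while port_busy "$port"; do
    port=$((port + 1))
  done
  printf '%s\n' "$port"
}
```

So if 5173 is taken by another Vite instance, the frontend would simply come up on the next free port.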

Files

| File | Purpose |
|---|---|
| SKILL.md | This file — skill documentation |
| install.sh | Main one-shot installer/launcher |
| stop.sh | Stop background services |
| status.sh | Check if services are running |
| pids | Saved PIDs for background processes |

Agent Instructions

When user says /install-llm-council or "install llm-council" or "start llm council":

bash ~/.openclaw/skills/install-llm-council/install.sh

Report back the access URL from the script output (e.g. http://10.0.1.X:5173).

To stop:

bash ~/.openclaw/skills/install-llm-council/stop.sh

To check status:

bash ~/.openclaw/skills/install-llm-council/status.sh

Example Output

✅ LLM Council installed and running!

  Mode:     dev
  API:      openrouter
  Backend:  http://127.0.0.1:8001
  Frontend: http://10.0.1.42:5173

  Stop:     bash ~/.openclaw/skills/install-llm-council/stop.sh
  Status:   bash ~/.openclaw/skills/install-llm-council/status.sh
