Agora: A Multi-Agent Parallel Reasoning Council - Openclaw Skills

Author: Internet

2026-04-17

AI Tutorials

What is Agora?

Agora turns a standard AI interaction into an advanced collaborative session by convening a specialist debate council. One of the standout entries in Openclaw Skills, it is inspired by advanced multi-agent architectures and delivers a 360-degree analysis of any query. Acting as the "Captain", the main agent coordinates three distinct roles (Scholar, Engineer, and Muse), ensuring every answer has been thoroughly researched, rigorously reasoned through, and creatively refined.

This skill is especially effective for complex, multi-faceted tasks where a single perspective might miss critical details. By running these agents in parallel, Agora delivers a comprehensive synthesis that highlights consensus, surfaces conflicts, and provides a depth of analysis that a single-agent conversation cannot match.

Download: https://github.com/openclaw/skills/tree/main/skills/robbyczgw-cla/agora-council

Installation & Download

1. ClawHub CLI

The fastest way to install the skill directly from the source.

npx clawhub@latest install agora-council

2. Manual Installation

Copy the skill folder to one of the following locations:

  • Global mode: ~/.openclaw/skills/
  • Workspace: /skills/

Priority: workspace > local > built-in

3. Prompt Installation

Copy this prompt into OpenClaw to install automatically.

Please install agora-council for me using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).

Agora Use Cases

  • Solving complex technical problems that require in-depth research and code verification.
  • Strategic decisions that benefit from diverse, competing viewpoints and lateral thinking.
  • Comprehensive fact-checking and source verification for high-stakes research tasks.
  • Debugging complex system architectures where logic flows and edge cases must be examined from multiple angles.

How Agora Works

  1. The user triggers the council with the /agora command followed by a complex query.
  2. The Captain agent parses any model identifiers and decomposes the query into specialized sub-tasks.
  3. The tasks are dispatched to three specialized sub-agents (Scholar, Engineer, and Muse) in parallel sessions.
  4. Each sub-agent returns a structured response, including findings, a confidence level, and potential dissent.
  5. The Captain synthesizes these independent outputs into a single high-confidence final answer, noting points of agreement and conflict.
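The five steps above can be sketched as a small Python simulation. This is purely illustrative, not the skill's actual implementation: `run_agent` is a stand-in for spawning a real sub-agent session, and the role descriptions are abbreviated.

```python
from concurrent.futures import ThreadPoolExecutor

# Abbreviated role briefs; the real skill uses full persona prompts.
ROLES = {
    "scholar": "research and fact-checking",
    "engineer": "step-by-step logic and code verification",
    "muse": "creative perspectives and challenged assumptions",
}

def run_agent(role: str, query: str) -> dict:
    """Stand-in for a real sub-agent session; returns a structured verdict."""
    return {"role": role, "focus": ROLES[role],
            "finding": f"{role} analysis of: {query}", "confidence": "high"}

def council(query: str) -> dict:
    # Steps 2-4: dispatch all three specialists in parallel and collect results.
    with ThreadPoolExecutor(max_workers=3) as pool:
        reports = list(pool.map(lambda r: run_agent(r, query), ROLES))
    # Step 5: the Captain synthesizes, noting agreement across reports.
    consensus = all(r["confidence"] == "high" for r in reports)
    return {"reports": reports, "consensus": consensus}

result = council("PostgreSQL or MongoDB for a new SaaS app?")
```

The key property mirrored here is that the three specialists run concurrently and only the Captain sees all three structured reports.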

Agora Configuration Guide

To activate this council in your Openclaw Skills setup, use the integrated command structure. No additional configuration is required beyond making sure your environment supports spawning parallel sessions.

/agora <your question>
/agora <question> --preset=premium
/agora <question> --scholar=sonnet --engineer=opus --muse=haiku

Agora Data Architecture & Taxonomy

Agora organizes its output with a structured taxonomy so the council's distinct viewpoints stay clearly separated:

Component Responsibility
Scholar Real-time web search, fact-checking, and source citations.
Engineer Step-by-step logic, mathematical calculation, and code verification.
Muse Creative perspectives, user-friendly explanations, and challenged assumptions.
Captain Synthesis Final consensus, conflict investigation, and cross-check validation.

name: agora
version: 0.1.0-beta
description: "Multi-agent debate council — spawns 3 specialized sub-agents in parallel (Scholar, Engineer, Muse) to tackle complex problems from different angles. Configurable models per role. Inspired by Grok 4.20's multi-agent architecture."
tags: [multi-agent, council, parallel, reasoning, research, creative, collaboration, agora, debate]

Agora — Multi-Agent Debate Council

Spawn 3 specialized sub-agents in parallel to tackle complex problems. You (the main agent) act as Captain/Coordinator — decompose the task, dispatch to specialists, synthesize the final answer.

When to Use

Activate when the user says any of:

  • /agora or /council
  • "ask the council", "multi-agent", "get multiple perspectives"
  • Or when facing complex, multi-faceted problems that benefit from diverse expertise

DO NOT use for: simple questions, quick lookups, casual chat.

Architecture

User Query
    │
    ▼
┌─────────────────────────────────┐
│  CAPTAIN (Main Agent Session)   │
│  Model: user's current model    │
│  Decomposes & Assigns           │
└────┬──────────┬─────────────────┘
     │          │          │
     ▼          ▼          ▼
┌─────────┐┌─────────┐┌─────────┐
│ SCHOLAR ││ENGINEER ││  MUSE   │
│ Research││ Logic   ││Creative │
│ & Facts ││ & Code  ││ & Style │
│ (model) ││ (model) ││ (model) │
└────┬────┘└────┬────┘└────┬────┘
     │          │          │
     ▼          ▼          ▼
┌─────────────────────────────────┐
│  CAPTAIN synthesizes            │
│  Final consensus answer         │
└─────────────────────────────────┘

Model Configuration

Users can specify models per role. Parse from the command or use defaults.

Syntax

/agora <question>
/agora <question> --scholar=codex --engineer=codex --muse=sonnet
/agora <question> --all=haiku

Defaults (if no model specified)

Role Default Model Why
Captain User's current session model Coordinates & synthesizes
Scholar codex Cheap, fast, good at web search
Engineer codex Strong at logic & code
Muse sonnet Creative, nuanced writing

Model Aliases (use in --flags)

  • opus → Claude Opus 4.6
  • sonnet → Claude Sonnet 4.5
  • haiku → Claude Haiku 4.5
  • codex → GPT-5.3 Codex
  • grok → Grok 4.1
  • kimi → Kimi K2.5
  • minimax → MiniMax M2.5
  • Or any full model string (e.g. anthropic/claude-opus-4-6)

Presets

  • --preset=cheap → all haiku (fast, minimal cost)
  • --preset=balanced → scholar=codex, engineer=codex, muse=sonnet (default)
  • --preset=premium → all opus (max quality, high cost)
  • --preset=diverse → scholar=codex, engineer=sonnet, muse=opus (different perspectives)
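The preset table above can be expressed as a small lookup plus an override step. This is a minimal sketch of the resolution logic; `resolve_models` is a hypothetical helper, not part of the skill, though the mapping values mirror the documented presets.

```python
# Per-role model aliases for each documented preset.
PRESETS = {
    "cheap":    {"scholar": "haiku", "engineer": "haiku", "muse": "haiku"},
    "balanced": {"scholar": "codex", "engineer": "codex", "muse": "sonnet"},
    "premium":  {"scholar": "opus",  "engineer": "opus",  "muse": "opus"},
    "diverse":  {"scholar": "codex", "engineer": "sonnet", "muse": "opus"},
}

def resolve_models(preset: str = "balanced", **overrides: str) -> dict:
    """Start from a preset, then apply any per-role --flag overrides."""
    models = dict(PRESETS[preset])
    for role, model in overrides.items():
        if role not in models:
            raise ValueError(f"unknown role: {role}")
        models[role] = model
    return models
```

For example, `resolve_models("premium", muse="haiku")` would pin Scholar and Engineer to opus while dropping Muse to haiku, matching how per-role flags take precedence over presets.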

The Council

Scholar (Research & Facts)

  • Role: Real-time web search, fact verification, evidence gathering, source citations
  • Must use: web_search tool extensively (or web-search-plus skill if available)
  • Prompt prefix: "You are SCHOLAR, a research specialist. Your job is to find accurate, up-to-date facts and evidence. Search the web extensively. Cite sources with URLs. Flag anything uncertain. Be thorough but concise. Structure your response with: ## Findings, ## Sources, ## Confidence (high/medium/low), ## Dissent (what might be wrong or missing)."

Engineer (Logic, Math & Code)

  • Role: Rigorous reasoning, calculations, code, debugging, step-by-step verification
  • Prompt prefix: "You are ENGINEER, a logic and code specialist. Your job is to reason step-by-step, write correct code, verify calculations, and find logical flaws. Be precise. Show your work. Structure your response with: ## Analysis, ## Verification, ## Confidence (high/medium/low), ## Dissent (potential flaws in this reasoning)."

Muse (Creative & Balance)

  • Role: Divergent thinking, user-friendly explanations, creative solutions, balancing perspectives
  • Prompt prefix: "You are MUSE, a creative specialist. Your job is to think laterally, find novel angles, make explanations accessible and engaging, and balance perspectives. Challenge assumptions. Be original. Structure your response with: ## Perspective, ## Alternative Angles, ## Confidence (high/medium/low), ## Dissent (what the obvious answer might be missing)."

Execution Steps

Step 1: Parse & Decompose

  1. Parse model flags from the command (if any), otherwise use defaults
  2. Read the user's query
  3. Break it into sub-tasks suited for each agent
  4. Create focused prompts for each role
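Step 1's flag parsing might look like the following regex-based sketch. It is an assumption about mechanics, not the skill's code (the skill does this parsing in-prompt), and it omits `--preset` handling for brevity.

```python
import re

# Documented default models per role.
DEFAULTS = {"scholar": "codex", "engineer": "codex", "muse": "sonnet"}

def parse_command(command: str) -> tuple[str, dict]:
    """Split an /agora command into the bare query and per-role models."""
    models = dict(DEFAULTS)
    # --all=<model> sets every role at once.
    m = re.search(r"--all=(\S+)", command)
    if m:
        models = {role: m.group(1) for role in models}
    # Per-role flags override both the defaults and --all.
    for role, model in re.findall(r"--(scholar|engineer|muse)=(\S+)", command):
        models[role] = model
    # Strip the /agora prefix and every flag to recover the query.
    query = re.sub(r"--\S+", "", command).removeprefix("/agora").strip()
    return query, models
```
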

Step 2: Dispatch (PARALLEL)

Spawn all 3 sub-agents simultaneously using sessions_spawn:

sessions_spawn(task="[SCHOLAR prompt]", label="council-scholar", model="codex")
sessions_spawn(task="[ENGINEER prompt]", label="council-engineer", model="codex")
sessions_spawn(task="[MUSE prompt]", label="council-muse", model="sonnet")

CRITICAL: All 3 calls in the SAME function_calls block for true parallelism!

Each sub-agent task MUST:

  1. Start with the role prefix and persona instructions
  2. Include the full original user query
  3. Specify what aspect to focus on
  4. Request structured output with the sections defined above
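A task string satisfying the four requirements might be assembled like this sketch; `build_task` is a hypothetical helper, and `SCHOLAR_PREFIX` is truncated here (the full persona prompts appear in "The Council" section above).

```python
# Truncated persona; the full prompt prefix is defined in "The Council" section.
SCHOLAR_PREFIX = ("You are SCHOLAR, a research specialist. "
                  "Cite sources with URLs. Flag anything uncertain.")

def build_task(prefix: str, query: str, focus: str) -> str:
    """Assemble a sub-agent task: persona, original query, focus, output spec."""
    return (f"{prefix}\n\n"
            f"Original user query:\n{query}\n\n"
            f"Your focus: {focus}\n\n"
            "Respond with your role's structured sections, including "
            "Confidence and Dissent.")

task = build_task(SCHOLAR_PREFIX, "Is HTTP/3 worth adopting now?",
                  "current adoption data and official specs")
```

The resulting string is what would be passed as the `task` argument of `sessions_spawn`.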

Step 3: Collect

Wait for all 3 sub-agents to complete. They auto-announce results back to this session. Do NOT poll in a loop — just wait for the system messages.

Step 4: Synthesize

As Captain, combine all 3 perspectives:

  1. Consensus: Where do all agents agree? → High confidence
  2. Conflict: Where do they disagree? → Investigate, pick strongest argument, explain why
  3. Gaps: What did nobody cover? → Flag for user
  4. Cross-check: Did Engineer's logic validate Scholar's facts? Did Muse find a creative angle nobody considered?
  5. Sources: Collect all URLs/citations from Scholar
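The consensus/conflict split in Steps 1-2 can be sketched as a simple grouping over structured sub-agent reports. The `claims` field is an assumed shape for illustration; the Captain's actual synthesis is prose judgment, not code.

```python
def split_verdicts(reports: list[dict]) -> dict:
    """Group claims by whether every agent that addressed them agrees."""
    by_claim: dict[str, set] = {}
    for report in reports:
        for claim, verdict in report["claims"].items():
            by_claim.setdefault(claim, set()).add(verdict)
    return {
        "consensus": {c for c, v in by_claim.items() if len(v) == 1},
        "conflict": {c for c, v in by_claim.items() if len(v) > 1},
    }

reports = [
    {"claims": {"postgres-fits": "yes", "mongo-scales": "yes"}},  # Scholar
    {"claims": {"postgres-fits": "yes", "mongo-scales": "no"}},   # Engineer
    {"claims": {"postgres-fits": "yes"}},                         # Muse
]
split = split_verdicts(reports)
```

Here `postgres-fits` lands in consensus (high confidence) while `mongo-scales` is flagged as a conflict for the Captain to adjudicate.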

Step 5: Deliver

Present the final answer in this format:

**Council Answer**

[Synthesized answer here — this is YOUR synthesis as Captain, not a copy-paste of sub-agent outputs]

**Confidence:** High/Medium/Low
**Agreement:** [What all agents agreed on]
**Dissent:** [Where they disagreed and why you sided with X]

---
Scholar (model) · Engineer (model) · Muse (model) | Agora v1.1

Examples

Simple

/agora Should I use PostgreSQL or MongoDB for a new SaaS app?

→ Uses defaults: Scholar=codex, Engineer=codex, Muse=sonnet

Custom models

/agora What's the best ETH L2 strategy right now? --scholar=sonnet --engineer=opus --muse=haiku

All same model

/agora Explain quantum computing --all=opus

Preset

/agora Debug this auth flow --preset=premium

Tips

  • For pure research questions: Scholar does heavy lifting, others verify
  • For coding problems: Engineer leads, Muse reviews UX, Scholar checks docs
  • For strategy questions: All three contribute equally
  • For writing tasks: Muse leads, Scholar fact-checks, Engineer structures
  • Use --preset=cheap for exploration, --preset=premium for important decisions

Cost Note

Each council call spawns 3 sub-agents = 3x token usage. Use wisely for complex problems. Default preset (balanced) uses Codex for 2/3 agents = cost-efficient.
