Create Hat Collection: Multi-Agent Workflow Generator - Openclaw Skills

Author: Internet

2026-04-17

AI Tutorials

What Is Create Hat Collection?

This skill provides a guided, conversational interface for building and deploying complex multi-agent systems. As part of the Openclaw Skills ecosystem, it automatically creates Ralph hat collection presets by asking clarifying questions about workflow intent, roles, and handoff logic. It bridges the gap between high-level architectural concepts and the technical YAML configuration needed to run them.

With core value propositions such as integrated architecture validation and design-pattern enforcement, the skill ensures every generated preset follows best practices. Whether you are building a sequential pipeline or a complex supervisor-worker hierarchy, the tool handles the boilerplate and technical constraints so you can focus on your agents' logic and roles.

Download: https://github.com/openclaw/skills/tree/main/skills/paulpete/create-hat-collection

Installation & Download

1. ClawHub CLI

The fastest way to install the skill directly from source.

npx clawhub@latest install create-hat-collection

2. Manual Installation

Copy the skill folder to one of the following locations:

  • Global: ~/.openclaw/skills/
  • Workspace: /skills/

Priority: workspace > local > built-in

3. Prompt-Based Installation

Copy this prompt into OpenClaw to install automatically.

Please install create-hat-collection using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).
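The lookup order above (workspace > local > built-in) can be sketched as a simple resolver. The directory names come from the manual-installation paths; the function and its parameters are illustrative, not part of the OpenClaw API:

```python
from pathlib import Path

# Illustrative sketch of the documented lookup order
# (workspace > global > built-in). resolve_skill is a hypothetical
# helper, not an actual OpenClaw function.
def resolve_skill(name: str, workspace_root: str, home: Path = Path.home()):
    candidates = [
        Path(workspace_root) / "skills" / name,   # workspace copy wins
        home / ".openclaw" / "skills" / name,     # global install
    ]
    for candidate in candidates:
        if candidate.is_dir():
            return candidate
    return None  # fall through to the runtime's built-in skills
```

A workspace copy of a skill therefore shadows a globally installed one with the same name.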

Create Hat Collection Use Cases

  • Design new multi-agent workflows from scratch through structured conversation.
  • Turn abstract workflow ideas into concrete, architecture-compliant YAML configurations.
  • Implement standardized design patterns in Openclaw Skills, such as Critic-Actor or Scientific Method.
  • Rapidly prototype event-driven agent chains with built-in validation of triggers and publishes.

How Create Hat Collection Works

  1. Exploration phase: The skill asks targeted questions to determine the workflow's purpose, the required architecture pattern, and the necessary agent roles.
  2. Event mapping: It designs a logical event chain, ensuring every trigger maps to a specific hat and identifying clear completion signals.
  3. Constraint validation: The system checks the design against Openclaw Skills schema rules, for example ensuring task.start is not misused as a trigger.
  4. Preset generation: It produces a complete YAML file, including the event loop, hat definitions, and role-specific Markdown instructions.
  5. Directory output: The final production-ready file is saved to the presets directory for immediate testing or evaluation.

Create Hat Collection Configuration Guide

To use this skill in your development environment, make sure the Ralph ecosystem is installed. After generating a preset, you can validate and test it with the following commands:

# Dry run to validate config parsing
cargo run --bin ralph -- run -c presets/<name>.yml -p "test prompt" --dry-run

# Run smoke tests for the core runner
cargo test -p ralph-core smoke_runner

Create Hat Collection Data Schema & Taxonomy

The skill organizes data into structured YAML presets that follow a strict metadata taxonomy, ensuring compatibility across Openclaw Skills and agents.

  • event_loop (object): Contains the starting_event needed to kick off the workflow.
  • hats (map): Key-value pairs of agent definitions and their role-specific logic.
  • triggers (array): List of the specific events that activate an agent.
  • publishes (array): List of events an agent is authorized to publish.
  • instructions (Markdown): Detailed role definition, process steps, and event formats.

name: create-hat-collection
description: Generates new Ralph hat collection presets through guided conversation. Asks clarifying questions, validates against schema constraints, and outputs production-ready YAML files.

Create Hat Collection

Overview

This skill generates Ralph hat collection presets through a guided, conversational workflow. It asks clarifying questions about your workflow, validates the configuration against schema constraints, and produces a production-ready YAML preset file.

Output: A complete .yml preset file in the presets/ directory.

When to Use

  • Creating a new multi-agent workflow from scratch
  • Transforming a workflow idea into a structured preset
  • Need guidance on hat design patterns and event routing

Not for: Modifying existing presets (use /creating-hat-collections reference instead)

Workflow

Phase 1: Understand the Workflow

Ask clarifying questions to understand:

  1. Purpose: What problem does this workflow solve?
  2. Pattern: Which architecture pattern fits best?
    • Pipeline: A→B→C linear flow (analyze→summarize)
    • Critic-Actor: One proposes, another critiques (code review)
    • Supervisor-Worker: Coordinator delegates to specialists
    • Scientific: Observe→Hypothesize→Test→Fix (debugging)
  3. Roles: What distinct agent personas are needed?
  4. Handoffs: When should each role hand off to the next?
  5. Completion: What signals the workflow is done?

Phase 2: Design Event Flow

Map the workflow as an event chain:

task.start → [Role A] → event.a → [Role B] → event.b → [Role C] → LOOP_COMPLETE
                                                    ↓
                                         event.rejected → [Role A]

Constraints to validate:

  • Each trigger maps to exactly ONE hat (no ambiguous routing)
  • task.start and task.resume are RESERVED (never use as triggers)
  • Every hat must publish at least one event
  • Chain must eventually reach LOOP_COMPLETE
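These routing constraints are mechanical enough to check in code. A sketch follows, with the preset represented as a plain dict mirroring the YAML structure; validate_routing is a hypothetical helper, not part of Ralph:

```python
# Sketch: check the four routing constraints against a preset dict
# mirroring the YAML structure. Not part of Ralph itself.
RESERVED_TRIGGERS = {"task.start", "task.resume"}

def validate_routing(preset: dict) -> list[str]:
    errors = []
    hats = preset.get("hats", {})

    trigger_owner = {}
    for key, hat in hats.items():
        for trig in hat.get("triggers", []):
            # Reserved events may never appear as triggers.
            if trig in RESERVED_TRIGGERS:
                errors.append(f"hat {key!r} uses reserved trigger {trig!r}")
            # Each trigger must route to exactly one hat.
            if trig in trigger_owner:
                errors.append(f"trigger {trig!r} routes to both "
                              f"{trigger_owner[trig]!r} and {key!r}")
            trigger_owner[trig] = key
        # Every hat must publish at least one event.
        if not hat.get("publishes"):
            errors.append(f"hat {key!r} publishes no events")

    # Walk the event chain from starting_event; it must be able
    # to reach LOOP_COMPLETE.
    frontier = [preset.get("event_loop", {}).get("starting_event")]
    reachable = set()
    while frontier:
        event = frontier.pop()
        if event in reachable:
            continue
        reachable.add(event)
        for hat in hats.values():
            if event in hat.get("triggers", []):
                frontier.extend(hat.get("publishes", []))
    if "LOOP_COMPLETE" not in reachable:
        errors.append("event chain cannot reach LOOP_COMPLETE")
    return errors
```

For a well-formed pipeline preset this returns an empty list; each violation appears as one message.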

Phase 3: Generate Preset

Create the YAML file with these sections:

# <Preset Name>
# Pattern: <pattern>
# <Short description of the workflow>
#
# Usage:
#   ralph run --config presets/<name>.yml --prompt "<task prompt>"

event_loop:
  starting_event: "<starting.event>"  # Ralph publishes this

hats:
  hat_key:
    name: "<emoji> Display Name"
    description: "<One-sentence purpose>"
    triggers: ["event.triggers.this"]
    publishes: ["event.this.publishes", "alternate.event"]
    default_publishes: "event.this.publishes"
    instructions: |
      ## <ROLE> MODE

      <Role definition: who this agent is and what it is responsible for>

      ### Process
      1. <First step>
      2. <Second step>
      3. Publish appropriate event

      ### Event Format
      ```
      <event.name>
      key: value
      ```

      ### DON'T
      - <Anti-pattern to avoid>
      - <Another anti-pattern>

Schema Reference

Required Top-Level Fields

  • event_loop.starting_event: First event Ralph publishes

Hat Definition Fields

  • name (required): Display name with optional emoji (e.g., "🔍 Analyzer")
  • description (required): Short description of the hat's purpose (one sentence)
  • triggers (required): Events this hat responds to (list)
  • publishes (required): Events this hat can emit (list)
  • default_publishes (recommended): Fallback event if the hat forgets to publish
  • instructions (required): Role-specific prompt (use | for multiline)

Fields That DON'T Exist

Never use these—they're not in the schema:

  • emoji (put emoji in name instead)
  • system_prompt (use instructions)
  • subscriptions / publications (use triggers / publishes)

Event Naming Conventions

.ready / .done         # Phase transitions
.approved / .rejected  # Review gates
.found / .missing      # Discovery events
.request / .complete   # Request-response

Examples: analysis.complete, review.approved, build.blocked, spec.rejected
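The convention can be approximated with a regular expression. The pattern below is inferred from the examples and is not a documented schema rule; note that LOOP_COMPLETE is a special terminal event and intentionally does not match:

```python
import re

# Inferred from the examples above: lowercase dotted segments like
# "analysis.complete". Not a documented schema rule.
EVENT_NAME = re.compile(r"[a-z][a-z0-9_]*(\.[a-z][a-z0-9_]*)+")

def follows_naming_convention(event: str) -> bool:
    return EVENT_NAME.fullmatch(event) is not None
```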

Validation Checklist

Before finalizing the preset, verify:

  • Each trigger maps to exactly ONE hat
  • No hat uses task.start or task.resume as triggers
  • Every hat has name, description, triggers, publishes, and instructions
  • default_publishes is set for hats with multiple publish options
  • Event chain can reach LOOP_COMPLETE
  • No orphan events that never get handled (warning only)
  • Instructions include role definition, process steps, event format, and DON'Ts

Output Location

Save generated presets to: presets/<name>.yml

For minimal/embedded presets: presets/minimal/<name>.yml

Testing the Preset

After generation, test with:

# Dry run to validate config parsing
cargo run --bin ralph -- run -c presets/<name>.yml -p "test prompt" --dry-run

# Run smoke tests
cargo test -p ralph-core smoke_runner

Use /evaluate-presets for comprehensive validation.

Example Conversation

User: I want a preset for doing code reviews with a security focus.

Assistant: I'll help create a security-focused code review preset. Let me ask a few questions:

  1. Review depth: Should this be a quick scan or thorough analysis?

  2. Roles needed: I'm thinking:

    • Security Analyst (finds vulnerabilities)
    • Code Reviewer (checks logic/style)
    • Summarizer (creates final report)

    Does this cover your needs, or should we add/remove roles?

  3. Iteration: If issues are found, should the code be fixed, or just reported?

User: Thorough analysis, those roles work, just report issues don't fix.

Assistant: Here's the event flow I'm designing:

task.start → [Security Analyst] → security.complete
                                         ↓
                                  [Code Reviewer] → review.complete
                                         ↓
                                     [Summarizer] → LOOP_COMPLETE

Let me generate the preset...

[Generates YAML with three hats following the schema]
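A sketch of what that generated preset could look like, written as a Python dict mirroring the YAML schema. The starting event, event names, and the abbreviated instructions are illustrative assumptions, not output copied from the skill:

```python
# Illustrative three-hat security-review preset mirroring the YAML
# schema; event names and abbreviated instructions are assumptions.
preset = {
    "event_loop": {"starting_event": "review.requested"},
    "hats": {
        "security_analyst": {
            "name": "Security Analyst",
            "description": "Finds vulnerabilities in the submitted code.",
            "triggers": ["review.requested"],
            "publishes": ["security.complete"],
            "default_publishes": "security.complete",
            "instructions": "## SECURITY ANALYST MODE\n...",
        },
        "code_reviewer": {
            "name": "Code Reviewer",
            "description": "Checks logic and style issues.",
            "triggers": ["security.complete"],
            "publishes": ["review.complete"],
            "default_publishes": "review.complete",
            "instructions": "## CODE REVIEWER MODE\n...",
        },
        "summarizer": {
            "name": "Summarizer",
            "description": "Creates the final report of all findings.",
            "triggers": ["review.complete"],
            "publishes": ["LOOP_COMPLETE"],
            "default_publishes": "LOOP_COMPLETE",
            "instructions": "## SUMMARIZER MODE\n...",
        },
    },
}
```

Each trigger routes to exactly one hat, no reserved event is used as a trigger, and the chain terminates in LOOP_COMPLETE.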

Common Patterns Reference

Pipeline (Sequential)

A → B → C → done

Use for: analysis workflows, document processing

Critic-Actor (Review Loop)

Actor → Critic → approved/rejected
                    ↓
         rejected → Actor (retry)

Use for: code review, quality gates
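In triggers/publishes terms, the retry loop above means the rejection event feeds back into the actor's trigger list. A sketch with illustrative event names (not fixed by the schema):

```python
# Illustrative Critic-Actor routing. The critic's default_publishes
# falls back to rejection, so a forgotten publish re-runs the loop
# instead of silently approving.
critic_actor_hats = {
    "actor": {
        "triggers": ["work.requested", "work.rejected"],  # retry path
        "publishes": ["work.ready"],
    },
    "critic": {
        "triggers": ["work.ready"],
        "publishes": ["LOOP_COMPLETE", "work.rejected"],  # approve/reject
        "default_publishes": "work.rejected",
    },
}
```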

Supervisor-Worker

Supervisor → worker.task → Worker → work.done → Supervisor

Use for: complex task decomposition

Scientific Method

Observe → Hypothesize → Test → confirmed/rejected
                                    ↓
                         rejected → Observe

Use for: debugging, investigation