Hire: Automated AI Agent Onboarding and Role Design - Openclaw Skills

Author: Internet

2026-03-24

AI Tutorials

What is Hire?

The Hire skill is a sophisticated onboarding engine for Openclaw Skills that goes beyond generating a simple config. It runs a structured interview to understand the specific problem you face, the assistant's ideal personality, and its strict operating boundaries.

By synthesizing this information, it builds a complete identity for the new AI agent, including a role definition and behavioral guidelines, so every new team member fits cleanly into your existing ecosystem. It bridges the gap between a vague request for help and a technically ready subagent.

Download: https://github.com/openclaw/skills/tree/main/skills/larsderidder/hire

Installation

1. ClawHub CLI

The fastest way to install the skill directly from the source.

npx clawhub@latest install hire

2. Manual Installation

Copy the skill folder to one of the following locations:

  • Global: ~/.openclaw/skills/
  • Workspace: /skills/

Priority: workspace > local > built-in

3. Prompt Installation

Copy this prompt into OpenClaw to install automatically:

Please install hire using ClawHub. If ClawHub is not installed yet, install it first (npm i -g clawhub).

Hire Use Cases

  • Expand your AI team by adding specialized agents for code review, research, or creative writing.
  • Move from a single general-purpose assistant to a multi-agent workflow.
  • Define strict safety boundaries and personality traits for customer-facing or internal-only agents.
  • Automate the technical setup and gateway configuration as you scale your Openclaw Skills capabilities.

How Hire Works

  1. Start: trigger the wizard with a command like /hire or a natural-language request to add a team member.
  2. Interview phase: answer targeted questions about problem scope, personality traits, and operational red lines.
  3. Smart mapping: the skill dynamically queries available models through the Openclaw gateway and maps them to tiers (reasoning, balanced, fast, or code) to find the best engine for the role.
  4. Artifact generation: a dedicated agent directory is created with customized identity, soul, and tool files.
  5. Automated integration: the skill applies a config patch through the gateway to register the new agent and grant permissions, with no manual intervention.

Hire Configuration Guide

To use the Hire skill in your Openclaw Skills environment, make sure your gateway is running and the model list is populated. The skill uses the gateway API to discover models automatically and apply configuration patches.

# Trigger the hiring flow with natural language
"I need to hire a researcher to help with market analysis."

# Or use the direct command shortcut
/hire

Hire Data Schema and Taxonomy

The skill organizes each new agent under an agents// directory using a standardized file structure to keep Openclaw Skills consistent:

| File | Description | Source |
|---|---|---|
| AGENTS.md | Core responsibilities and operational rules. | Generated |
| IDENTITY.md | Name, vibe, and creature-type classification. | Generated |
| SOUL.md | Behavioral constraints and personality. | Template-based |
| TOOLS.md | Inferred tool access and integration notes. | Context-aware |
| USER.md | Symlink to shared user context. | Shared |
| MEMORY.md | Symlink to the shared team knowledge base. | Shared |
The skill's definition file (frontmatter plus instructions) is reproduced below for reference:

---
name: hire
description: Interactive hiring wizard to set up a new AI team member. Guides the user through role design via conversation, generates agent identity files, and optionally sets up performance reviews. Use when the user wants to hire, add, or set up a new AI agent, team member, or assistant. Triggers on phrases like "hire", "add an agent", "I need help with X" (implying a new role), or "/hire".
---

hire

Set up a new AI team member through a guided conversation. Not a config generator - a hiring process.

When to Use

User says something like:

  • "I want to hire a new agent"
  • "I need help with X" (where X implies a new agent role)
  • "Let's add someone to the team"
  • /hire

The Interview

3 core questions, asked one at a time:

Q1: "What do you need help with?" Let them describe the problem, not a job title. "I'm drowning in code reviews" beats "I need a code reviewer."

  • Listen for: scope, implied autonomy level, implied tools needed

Q2: "What's their personality? Formal, casual, blunt, cautious, creative?" Or frame it as: "If this were a human colleague, what would they be like?"

  • Listen for: communication style, vibe, how they interact

Q3: "What should they never do?" The red lines. This is where trust gets defined.

  • Listen for: boundaries, safety constraints, access limits

Q4: Dynamic (optional)

After Q1-Q3, assess whether anything is ambiguous or needs clarification. If so, ask ONE follow-up question tailored to what's unclear. Examples:

  • "You mentioned monitoring - should they alert you immediately or batch updates?"
  • "They'll need access to your codebase - any repos that are off-limits?"
  • "You said 'casual' - are we talking friendly-professional or meme-level casual?"

If Q1-Q3 were clear enough, skip Q4 entirely.

Summary Card

After the interview, present a summary:

  • Role: [one-line description]
  • Name: [suggested name from naming taxonomy]
  • Model: [selected model] ([tier])
  • Personality: [2-3 word vibe]
  • Tools: [inferred from conversation]
  • Boundaries: [key red lines]
  • Autonomy: [inferred level: high/medium/low]

Then ask: "Want to tweak anything, or are we good?"

Model Selection

Before finalizing, select an appropriate model for the agent.

Step 1: Discover available models

Run openclaw models list or check the gateway config to see what's configured.

Step 2: Categorize by tier

Map discovered models to capability tiers:

| Tier | Models (examples) | Best for |
|---|---|---|
| reasoning | claude-opus-*, gpt-5, gpt-4o, deepseek-r1 | Strategy, advisory, complex analysis, architecture |
| balanced | claude-sonnet-*, gpt-4-turbo, gpt-4o-mini | Research, writing, general tasks |
| fast | claude-haiku-*, gpt-3.5, local/ollama | High volume, simple tasks, drafts |
| code | codex-*, claude-sonnet-*, deepseek-coder | Coding, refactoring, tests |

Use pattern matching on model names - don't hardcode specific versions.
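Such pattern matching can be sketched with a shell `case` statement. The patterns, their ordering, and the fallback tier below are assumptions mirroring the example table, not part of the skill itself:

```shell
# Minimal sketch: map a model name to a capability tier by pattern.
# Order matters - more specific patterns (e.g. gpt-4o-mini) come first.
classify_tier() {
  case "$1" in
    *codex*|*-coder*)                   echo "code" ;;
    gpt-4o-mini*|*sonnet*|gpt-4-turbo*) echo "balanced" ;;
    *opus*|gpt-5*|gpt-4o*|deepseek-r1*) echo "reasoning" ;;
    *haiku*|gpt-3.5*|ollama/*)          echo "fast" ;;
    *)                                  echo "balanced" ;; # safe default
  esac
}

classify_tier "claude-3-opus-latest"   # -> reasoning
```

Glob patterns keep the mapping robust across version bumps (claude-3-opus-20240229 and claude-3-opus-latest both land in the reasoning tier).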

Step 3: Match role to tier

Based on the interview:

  • Heavy reasoning/advisory/strategy → reasoning tier
  • Research/writing/creative → balanced tier
  • Code-focused → code tier (or balanced if not available)
  • High-volume/monitoring → fast tier

Step 4: Select and confirm

Pick the best available model for the role. In the summary card, add:

  • Model: [selected model] ([tier] - [brief reason])

If multiple good options exist or you're unsure, ask: "For a [role type] role, I'd suggest [model] (good balance of capability and cost). Or [alternative] if you want [deeper reasoning / faster responses / lower cost]. Preference?"

Notes

  • Don't assume any specific provider - work with what's available
  • Cheaper is better when capability is sufficient
  • The user's default model isn't always right for every agent
  • If only one model is available, use it and note it in the summary

Optional Extras

After the summary is confirmed, offer:

  1. "Want to set up periodic performance reviews?"

    • If yes: ask preferred frequency (weekly, biweekly, monthly)
    • Create a cron job that triggers a review conversation
    • Review covers: what went well, what's not working, scope/permission adjustments
    • At the end of each review, ask: "Want to keep this schedule, change frequency, or stop reviews?"
  2. Onboarding assignment (if relevant to the role)

    • Suggest a small first task to test the new agent
    • Something real but low-stakes, so the user can see them in action

What to Generate

Create an agent directory at agents// with:

Always unique (generated fresh):

  • AGENTS.md - Role definition, responsibilities, operational rules, what they do freely vs ask first
  • IDENTITY.md - Name, emoji, creature type, vibe, core principles

Start from template, customize based on interview:

  • SOUL.md - Base from workspace SOUL.md template, customize vibe/boundaries sections
  • TOOLS.md - Populated with inferred tools and access notes
  • HEARTBEAT.md - Empty or with initial periodic tasks if relevant to role
  • USER.md → symlink to ../../USER.md (they need to know who they work for)
  • MEMORY.md → symlink to ../../MEMORY.md (shared team context)

Mention to the user: "I've linked USER.md and MEMORY.md so they know who you are and share team context. You can change this later if you want them more isolated."
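The generation step can be sketched in shell. The layout follows the file list above; "scout" is a hypothetical agent name, and the file contents are filled in by the skill:

```shell
# Sketch only: scaffold a new agent directory, run from the workspace root.
AGENT="scout"   # hypothetical example name
mkdir -p "agents/$AGENT"

# Generated / templated files (content written by the skill):
touch "agents/$AGENT/AGENTS.md" "agents/$AGENT/IDENTITY.md" \
      "agents/$AGENT/SOUL.md"   "agents/$AGENT/TOOLS.md" \
      "agents/$AGENT/HEARTBEAT.md"

# Shared context via relative symlinks to the workspace-level files:
ln -s ../../USER.md   "agents/$AGENT/USER.md"
ln -s ../../MEMORY.md "agents/$AGENT/MEMORY.md"
```

Relative symlink targets keep the links valid if the whole workspace is moved.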

Naming

Use craft/role-based names. Check TOOLS.md for the full naming taxonomy:

  • Research: Scout, Observer, Surveyor
  • Writing: Scribe, Editor, Chronicler
  • Code: Smith, Artisan, Engineer
  • Analysis: Analyst, Assessor, Arbiter
  • Creative: Muse, Artisan
  • Oversight: Auditor, Reviewer, Warden

Check existing agents to avoid name conflicts. Suggest a name that fits the role, but let the user override.
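The conflict check amounts to testing for an existing agents/ entry; the helper name below is illustrative:

```shell
# Sketch: a name is only suggested if no agents/<name> entry exists yet.
name_is_free() {
  [ ! -e "agents/$1" ]
}

if name_is_free "scout"; then
  echo "scout is available"
else
  echo "scout is taken - suggest another name"
fi
```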

Team Awareness

Before generating, check agents/ for existing team members. Note:

  • Potential overlaps with existing roles
  • Gaps this new hire fills
  • How they'll interact with existing agents

Mention any relevant observations: "You already have Scout for research - this new role would focus specifically on..."

After Setup

  1. Tell the user what was created and where

  2. Automatically update the OpenClaw config via gateway config.patch (do not ask the user to run a manual command). You must:

    • Add the new agent entry to agents.list using this format:
      {
        "id": "",
        "workspace": "/home/lars/clawd/agents/",
        "model": ""
      }
      
    • Add the new agent ID to the main agent's subagents.allowAgents array
    • Preserve all existing agents and fields (arrays replace on patch)

    Required flow:

    1. Fetch config + hash
      openclaw gateway call config.get --params '{}'
      
    2. Build the updated agents.list array (existing entries + new agent) and update the main agent's subagents.allowAgents (existing list + new id).
    3. Apply with config.patch:
      openclaw gateway call config.patch --params '{
        "raw": "{
       agents: {
        list: [ /* full list with new agent + updated main allowAgents */ ]
       }
      }
      ",
        "baseHash": "",
        "restartDelayMs": 1000
      }'
      
  3. If periodic reviews were requested, confirm the cron schedule

  4. Update any team roster if one exists
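Step 2 of the required flow above (building the updated array) can be sketched with jq. The config shape here is a simplified assumption, and "warden" is a hypothetical new agent; inspect the real config.get response before patching:

```shell
# Illustrative only: append a new agent to agents.list and extend the main
# agent's subagents.allowAgents in one jq pass. Because arrays replace on
# patch, the result must contain every existing entry plus the new one.
CURRENT_LIST='[{"id":"main","subagents":{"allowAgents":["scout"]}}]'
NEW_AGENT='{"id":"warden","workspace":"/home/lars/clawd/agents/warden","model":"claude-sonnet"}'

UPDATED=$(jq -cn --argjson list "$CURRENT_LIST" --argjson new "$NEW_AGENT" '
  ($list | map(if .id == "main"
               then .subagents.allowAgents += [$new.id]
               else . end)) + [$new]')

echo "$UPDATED"
```

The merged array then goes into the `raw` payload of config.patch together with the `baseHash` from config.get.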

Important

  • This is a CONVERSATION, not a form. Be natural.
  • Infer as much as possible from context. Don't ask what you can figure out.
  • The user might not know what they want exactly. Help them figure it out.
  • Keep the whole process under 5 minutes for the simple case.