Apollo Issue Review: An Automated Maintainer Workflow - Openclaw Skills
Author: Internet
2026-03-31
What Is Apollo Issue Review?
Apollo Issue Review is a purpose-built skill that streamlines the lifecycle of GitHub issues in the Apollo configuration-management ecosystem. By leveraging Openclaw Skills, developers can run a "classify-first" workflow that distinguishes technical regressions from consultative support requests. This ensures every issue receives a high-quality, maintainer-grade reply that offers an actionable path forward while respecting support boundaries.
The skill emphasizes technical accuracy: before any reply is drafted, validation is mandatory, either through a local code reproduction or a deep repository evidence scan. This systematic approach reduces the maintainers' manual burden and ensures community contributors receive clear, canonical guidance in their native language.
Download: https://github.com/openclaw/skills/tree/main/skills/nobodyiam/apollo-issue-review
Installation & Download
1. ClawHub CLI
The fastest way to install the skill directly from the source.
npx clawhub@latest install apollo-issue-review
2. Manual Installation
Copy the skill folder to one of the following locations:
Global mode: ~/.openclaw/skills/
Workspace: /skills/
Priority: workspace > local > built-in
3. Prompt Installation
Copy this prompt into OpenClaw to install the skill automatically.
Please install apollo-issue-review for me using Clawhub. If Clawhub is not yet installed, install it first (npm i -g clawhub).
Apollo Issue Review Use Cases
- Triage complex bug reports by automatically generating minimal reproduction scripts.
- Answer user questions about OpenAPI availability with verified code-path evidence.
- Manage community contributions by giving new PRs explicit feasibility boundaries and concrete refinement suggestions.
- Keep issue-communication standards consistent across Chinese and English threads in repositories with Openclaw Skills enabled.
- Reduce duplicate replies by generating concise addenda for existing issue threads.
How Apollo Issue Review Works
- Fact extraction: the skill analyzes the issue title, body, and existing comments to identify the user's primary ask and the issue type.
- Classification: issues are categorized as either behavior/regression problems or consultative/support questions.
- Mandatory validation: a local reproduction is run for bugs, or a repository-wide evidence scan (using ripgrep) is performed for capability questions.
- Draft generation: a maintainer-grade reply is drafted with canonical module names, including reproduction results and practical workarounds.
- Publish gate: the draft is presented to the user for a mandatory manual confirmation step.
- Auto-post: after confirmation, the final comment is posted to the issue thread via the GitHub API or CLI.
Apollo Issue Review Configuration Guide
To use this skill with Openclaw Skills, make sure your environment has the GitHub CLI installed and authenticated, and configure your input variables to point at the target repository.
# Make sure the GitHub CLI is authenticated
gh auth login
# The skill uses ripgrep for evidence scans;
# make sure rg is available on your PATH
rg --version
Define repo and issue_number in the agent context to start the review workflow.
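As a sketch of that setup, assuming the agent context exposes plain shell variables named repo and issue_number (illustrative names and values, not a documented contract), the API endpoints the workflow reads can be composed like this:

```shell
#!/bin/sh
# Illustrative only: compose the GitHub API endpoints the review workflow
# reads, from a target repository and issue number.
repo="apolloconfig/apollo"   # hypothetical target repository
issue_number=5123            # hypothetical issue id

issue_url="https://api.github.com/repos/${repo}/issues/${issue_number}"
comments_url="${issue_url}/comments"

echo "$issue_url"
echo "$comments_url"
```

With the GitHub CLI authenticated, `gh api "repos/${repo}/issues/${issue_number}"` fetches the same resource over the authenticated transport.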
Apollo Issue Review Data Schema & Taxonomy
The skill processes and produces data according to the following structure:

| Attribute | Description |
|---|---|
| issue_context | A compilation of the title, body, and all current comments. |
| validation_path | The logical branch taken (behavior/regression vs consultative/support). |
| canonical_names | Standardized module names derived from pom.xml or the repository layout. |
| publish_mode | Control flag for draft-only vs post-after-confirmation publishing. |
| output_mode | Selects between a human-readable summary and a pipeline YAML block. |
name: apollo-issue-review
description: Review Apollo ecosystem issues with a classify-first workflow (reproduce for behavior issues, evidence-check for consultative asks) and draft maintainer-grade replies that directly answer user asks, clarify support boundaries, and provide actionable next paths.
Apollo Issue Review
Follow this workflow to review an Apollo issue and produce a concise maintainer response.
Core Principles
- Classify first: behavior/regression issue vs consultative/support question.
- For behavior/regression issues: reproduce first, theorize second.
- For consultative/support questions (for example "is there an official script/doc"): do evidence check first and answer directly; do not force "reproduced/not reproduced" wording.
- Solve the user ask, do not debate whether the user is right or wrong.
- If behavior is already reproduced and conclusion is stable, do not ask for extra info.
- Do not default to "version regression" analysis unless the user explicitly asks for version comparison or it changes the recommendation.
- Match the issue language: English issue -> English reply, Chinese issue -> Chinese reply (unless the user explicitly asks for bilingual output).
- Use canonical Apollo module names from repository reality (AGENTS/module layout/root pom.xml), and correct misnamed terms succinctly when needed.
- If an existing comment already answers the same ask (including bot replies), avoid duplicate long replies; prefer a short addendum that only contributes corrections or missing deltas.
- Never wrap GitHub @mention handles in backticks/code spans; use plain @handle so notifications are actually triggered.
- If a community user volunteers to implement ("认领"/"first contribution"), acknowledge and encourage first, then evaluate the proposal with explicit feasibility boundaries and concrete refinement suggestions.
- For OpenAPI-related asks, explicitly separate Portal web APIs (e.g., /users) and OpenAPI endpoints (e.g., /openapi/v1/*); only claim "OpenAPI supports X" when the token-based OpenAPI path is verified.
- Before concluding "capability not available", cross-check code + docs/scripts + module/dependency hints from pom.xml to avoid false negatives caused by path assumptions.
Input Contract
Collect or derive these fields before review:
- repo: <owner>/<repo>
- issue_number: numeric ID
- issue_context: title/body/comments
- publish_mode: draft-only (default) or post-after-confirm
- output_mode: human (default) or pipeline
Optional but recommended:
- known_labels: existing labels on the issue
- desired_outcome: whether user wants only triage or triage + implementation handoff
If issue_number or issue_context is missing, ask one short clarification before continuing.
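A minimal sketch of that guard, using hypothetical shell variable names that mirror the contract fields; publish_mode and output_mode fall back to their documented defaults:

```shell
#!/bin/sh
# Illustrative input guard: required fields must be present; optional modes
# default to draft-only / human as the input contract specifies.
repo="apolloconfig/apollo"   # hypothetical
issue_number="5123"          # hypothetical
publish_mode="${publish_mode:-draft-only}"
output_mode="${output_mode:-human}"

if [ -z "$repo" ] || [ -z "$issue_number" ]; then
  echo "ask one short clarification: repo/issue_number missing"
else
  echo "inputs ok: ${repo}#${issue_number} (${publish_mode}, ${output_mode})"
fi
```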
Workflow
- Collect issue facts and user ask
- Read issue body and comments before concluding.
- Extract: primary ask, symptom, expected behavior, actual behavior, and whether user asks one path or an either-or path.
- Keep user asks explicit (for example "better parsing API OR raw text API": answer both).
- Detect whether the thread includes a contribution-claim ask (for example "can I take this issue?") and treat it as a guidance+boundary response, not only a capability yes/no response.
- Detect main language from issue title/body/recent comments and set reply language before drafting.
- Decide issue type up front:
- behavior/regression (needs reproducibility check)
- consultative/support (needs evidence check)
- Normalize names to canonical module/service terms used by the Apollo repo (e.g., apollo-portal, not invented service names).
- If GitHub API access is unstable, use:
  curl -L -s https://api.github.com/repos/<owner>/<repo>/issues/<issue_number>
  curl -L -s https://api.github.com/repos/<owner>/<repo>/issues/<issue_number>/comments
- Run the right validation path (mandatory)
- For behavior/regression issues:
- Build a minimal, local, runnable reproduction for the reported behavior.
- Prefer repo-native unit tests or a tiny temporary script over speculation.
- Record exact observed output and types, not just interpretation.
- For consultative/support questions:
- Verify by repository evidence scan (docs/scripts/code paths), not by speculative reproduction framing.
- For API availability asks, verify in three places before concluding: 1) actual controller paths, 2) docs/openapi scripts, 3) module/dependency pointers in pom.xml.
- Record exact files/paths searched and what exists vs does not exist.
- Example checks:
  rg -n "<keyword>" -S
  go test ./... -run <TestName>
  # or a minimal go run script under /tmp for one-off validation
  # consultative evidence scan example:
  rg --files | rg -i "<keyword>"
  rg -n "<keyword>" docs scripts apollo-* -S
- Branch by validation result
  - Behavior/regression path:
    - If reproducible:
      - State clearly that behavior is confirmed.
      - Identify whether this is supported behavior, usage mismatch, or current feature gap.
      - Then answer user asks directly (existing API/workaround/unsupported).
    - If not reproducible:
      - Ask for minimal missing evidence only:
        - input sample
        - exact read/access code
        - expected vs actual output
      - Keep this short and concrete.
  - Consultative/support path:
    - If capability/script/doc exists: provide exact path/link and usage entry point.
    - If it does not exist: state "currently not available" directly and give one practical alternative.
    - If an existing comment already covered the same conclusion: post only a concise delta/correction instead of repeating the full answer.
- Draft maintainer reply (focus on action)
  - Start with a one-paragraph summary in the thread language:
    - behavior/regression issue: reproduction summary (复现结论 / Reproduction Result)
    - consultative/support issue: direct conclusion summary (结论 / Conclusion)
  - Then include:
    - 当前能力与边界: what is supported today and what is not.
    - 可行方案: exact API/command/workaround user can run now.
    - 后续路径: either invite PR with concrete files/tests, or state maintainers may plan it later without overpromising timeline.
  - If the thread includes a contribution-claim proposal, structure the main body as: 1) appreciation and encouragement, 2) feasibility judgment, 3) concrete implementation refinements (what to reuse vs what not to reuse directly).
  - If user ask is either-or, answer both explicitly.
  - If already confirmed feature gap, do not request more logs/steps by default.
  - Keep wording factual and concise.
  - Use canonical module names in final wording; if the issue uses a non-canonical name, correct it briefly without derailing the answer.
  - If there is already a correct prior comment, prefer "reference + minimal supplement" format.
  - If you mention users/bots, keep mentions as plain text (e.g., @dosubot), not code-formatted mention strings.
  - Use localized section labels and wording by issue language (for example: Reproduction Result / Current Support Boundary / Practical Path / Next Step in English threads).
- Ask for publish confirmation (mandatory gate)
  - Default behavior: generate draft only; do not post automatically.
  - Present the exact comment body first, then ask for confirmation in the same thread.
  - Use a direct question in the same language as the thread, e.g.:
    - Chinese: 是否直接发布到 issue #<issue_number>?回复“发布”或“先不发”。
    - English: Post this to issue #<issue_number> now? Reply "post" or "hold".
  - Treat no response or ambiguous response as not approved.
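The gate's "ambiguous means not approved" rule can be sketched as a tiny matcher; the approval phrases mirror the examples in this skill, and anything else (including no response) maps to not-approved:

```shell
#!/bin/sh
# Map a free-form user reply onto the publish-gate decision.
decide_publish() {
  case "$1" in
    发布|帮我发|直接回复上去|post) echo "approved" ;;
    *) echo "not-approved" ;;
  esac
}

decide_publish "发布"
decide_publish "maybe later"
decide_publish ""
```

A real agent would follow any not-approved outcome with one short clarification question rather than posting.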
- Post the response only after explicit confirmation
  - Allowed confirmation examples: 发布 / 帮我发 / 直接回复上去.
  - If user intent is unclear, ask one short clarification question before any post command.
  - Preferred:
    gh api repos/<owner>/<repo>/issues/<issue_number>/comments -f body='<comment>'
  - Fallback when gh transport is unstable:
    TOKEN=$(gh auth token)
    curl --http1.1 -sS -X POST \
      -H "Authorization: token $TOKEN" \
      -H "Accept: application/vnd.github+json" \
      -d '{"body":"<comment>"}' \
      https://api.github.com/repos/<owner>/<repo>/issues/<issue_number>/comments
  - After posting, return the comment URL as evidence.
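Extracting the evidence URL from the POST response can be sketched like this; the JSON payload below is a hypothetical stand-in for the live API response:

```shell
#!/bin/sh
# Pull html_url out of a (stand-in) GitHub comment-creation response so the
# workflow can report the posted comment as evidence.
response='{"id": 1, "html_url": "https://github.com/apolloconfig/apollo/issues/5123#issuecomment-99"}'
comment_url=$(printf '%s' "$response" | sed -n 's/.*"html_url": "\([^"]*\)".*/\1/p')
echo "posted comment: $comment_url"
```

In practice, `jq -r .html_url` is a more robust extractor when jq is available; the sed pattern here only handles the flat single-line case.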
Output Contract
Default (output_mode=human) output should be human-friendly:
Issue Summary
- issue type + confidence
- validation result (reproduced / not reproduced / evidence result)
Triage Suggestion
- labels to add
- missing information (if any)
- whether it is ready for implementation handoff
Draft Maintainer Reply
- First sentence must match issue type:
  - behavior/regression: reproducibility status (已复现/暂未复现 or Reproduced/Not yet reproduced)
  - consultative/support: direct availability conclusion
- Include at least one concrete API/code path/file reference.
- If unsupported today: include support boundary + practical workaround + next path.
- If reproducible and conclusion is stable: do not request extra data.
- If not reproducible: request only minimal reproducible inputs.
- If prior comment already solved the ask: provide concise delta only.
- Do not present unverified root cause as fact.
- Keep language matched to issue language unless user asks otherwise.
Publish Gate
- If no explicit publish confirmation exists, end with:
  - Chinese: 是否直接发布到 issue #<issue_number>?回复“发布”或“先不发”。
  - English: Post this to issue #<issue_number> now? Reply "post" or "hold".
If output_mode=pipeline, append one machine-readable block after the human output:
handoff:
issue_classification:
type: "功能咨询|问题排查|技术讨论|Bug 反馈|Feature request"
validation_path: "behavior-regression|consultative-support"
confidence: "high|medium|low"
triage_decision:
labels_to_add: []
missing_info_fields: []
ready_for_issue_to_pr: false
ready_reason: ""
implementation_handoff:
goal: ""
acceptance_criteria: []
suggested_modules: []
risk_hints: []
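As a sketch, downstream automation might sanity-check the handoff block for its required top-level sections before consuming it; the file path and the key list here are assumptions for illustration, not part of the skill:

```shell
#!/bin/sh
# Write a minimal example handoff block, then verify its expected sections.
cat > /tmp/handoff.yaml <<'EOF'
handoff:
  issue_classification:
    validation_path: "consultative-support"
    confidence: "high"
  triage_decision:
    ready_for_issue_to_pr: false
  implementation_handoff:
    goal: ""
EOF

missing=0
for key in issue_classification triage_decision implementation_handoff; do
  grep -q "^  ${key}:" /tmp/handoff.yaml || missing=$((missing + 1))
done
echo "missing sections: $missing"
```

A full consumer would use a real YAML parser; this grep-level check only guards against a truncated or mis-indented block.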
Load References When Needed
- Use references/diagnostic-playbook.md for scenario-specific diagnostics and command snippets.
- Use references/reply-templates.md for reusable Chinese maintainer reply skeletons.