Zettel Brainstormer: Automating Zettelkasten Research - Openclaw Skills
Author: Internet
2026-04-15
What Is Zettel Brainstormer?
Zettel Brainstormer is a powerful addition to the Openclaw Skills library, built for researchers and writers who want to turn rough drafts into deep, well-cited documents. By programmatically exploring your local knowledge base, the skill surfaces conceptual connections that might otherwise stay hidden. A tiered-model approach balances efficiency and depth, delivering high-quality synthesis without unnecessary cost.
The skill acts as a digital thinking partner that understands the structure of your Zettelkasten. Rather than just matching keywords, it follows your thought architecture through wikilinks and tag clusters, making it an essential tool for anyone building a second brain. Openclaw Skills like this one bridge the gap between raw data and structured insight.
Download: https://github.com/openclaw/skills/tree/main/skills/hxy9243/zettel-brainstormer
Installation & Download
1. ClawHub CLI
The fastest way to install the skill directly from the source:
npx clawhub@latest install zettel-brainstormer
2. Manual Installation
Copy the skill folder to one of the following locations:
- Global: `~/.openclaw/skills/`
- Workspace: `/skills/`
Priority: workspace > local > built-in
3. Prompt-Based Installation
Copy this prompt into OpenClaw to install the skill automatically:
Please install zettel-brainstormer using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).
Zettel Brainstormer Use Cases
- Expand a seed idea into a comprehensive research draft.
- Surface hidden connections between notes with semantic discovery.
- Automatically summarize long notes to extract key quotes and points.
- Enrich an existing Obsidian vault with structured metadata and cross-references.
How it works:
- The skill identifies related notes by following wikilinks N levels deep and finding documents with overlapping tags.
- A cost-effective subagent model processes each identified note, extracting a summary, a relevance score, and key quotes relative to the seed idea.
- All preprocessed data is collected and passed to a high-performance pro model.
- The final draft is synthesized with proper Obsidian properties, tags, and links to keep the vault consistent.
- The result is saved as a new, enriched Markdown note in your chosen output directory.
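The link-following step described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the skill's actual `find_links.py`: the wikilink regex, the in-memory `notes` dict (standing in for files on disk), and the breadth-first traversal are all assumptions.

```python
import re
from collections import deque

# Capture the link target before any |alias or #heading suffix.
WIKILINK_RE = re.compile(r"\[\[([^\]|#]+)")

def find_linked(notes: dict[str, str], seed: str,
                depth: int = 2, max_links: int = 10) -> list[str]:
    """Breadth-first walk of [[wikilinks]] starting from a seed note.

    Mirrors the skill's limits: follow links N levels deep (`depth`)
    and stop after M total documents (`max_links`).
    """
    found: list[str] = []
    frontier = deque([(seed, 0)])
    seen = {seed}
    while frontier and len(found) < max_links:
        name, level = frontier.popleft()
        if level >= depth:
            continue  # don't follow links past the depth limit
        for target in WIKILINK_RE.findall(notes.get(name, "")):
            target = target.strip()
            if target in notes and target not in seen:
                seen.add(target)
                found.append(target)
                frontier.append((target, level + 1))
    return found[:max_links]

# Toy vault: seed links two notes, one of which links a third.
vault = {
    "seed": "An idea about [[memory]] and [[habits]].",
    "memory": "See also [[spaced-repetition]].",
    "habits": "Mostly about routines.",
    "spaced-repetition": "Interval review.",
}
find_linked(vault, "seed")  # ['memory', 'habits', 'spaced-repetition']
```

With `depth=1` only the directly linked notes are returned; the real skill additionally merges in tag-similar documents, which this sketch omits.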
Zettel Brainstormer Configuration Guide
To initialize the skill and configure your local environment, run the included setup script:
python scripts/setup.py
The script generates a config/models.json file where you define your Zettelkasten directory, output path, and preferred AI models. Have your Zettelkasten path ready to get the most out of this Openclaw Skill.
Zettel Brainstormer Data Architecture & Taxonomy
The skill manages information through a combination of configuration files and structured Markdown templates:
| Component | Function |
|---|---|
| `config/models.json` | Configuration for model selection, search depth, and directory paths. |
| `scripts/find_links.py` | Core logic for finding file paths via wikilink and tag analysis. |
| `templates/preprocess.md` | Instruction set for the extraction subagent. |
| `templates/draft.md` | Final synthesis instructions for the pro model. |
| `zettel_dir` | Your source vault of Markdown notes. |
name: zettel-brainstormer
description: Reads your local Zettelkasten notes, picks a random idea, finds references by links or tags, then expands the idea with those references.
Zettel Brainstormer
This skill formalizes the process of taking a rough idea or draft and enriching it with deep research, diverse perspectives, and structured brainstorming.
The configuration file is config/models.json, which can be edited manually or using the setup.py script.
New 3-Stage Workflow
This skill now supports a 3-stage pipeline to balance cost and quality:
- Find References (with `find_links.py`)
  - Run `scripts/find_links.py` to identify relevant existing notes:
    - Wikilinked documents (follows [[wikilinks]] N levels deep, up to M total docs)
    - Tag-similar documents (finds notes with overlapping tags)
  - Native Obsidian Integration: uses `obsidian-cli` for high-performance indexing and discovery if available.
  - Semantic Discovery (optional): can leverage the `zettel-link` skill to find "hidden" conceptual connections that don't share explicit tags or links.
  - Output: a JSON list of absolute file paths to relevant notes.
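Because Stage 1 emits plain JSON, later stages need nothing beyond the standard library to consume it. A minimal sketch, with made-up file contents:

```python
import json

# Illustrative Stage 1 output: a JSON list of absolute note paths.
stage1_output = '["/vault/memory.md", "/vault/habits.md"]'

paths = json.loads(stage1_output)
print(len(paths))  # 2
```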
- Subagent: Preprocess contents (with `preprocess_model`)
  - The agent iterates through the list of files found in Stage 1.
  - For each file:
    - Read the file content.
    - Apply `templates/preprocess.md` using the `preprocess_model`, passing the seed note's key points and the file content as context.
    - Extract: relevance score, summary, key points, and quotes.
  - Output: a structured Markdown summary of the note.
- Draft & Humanize (with `pro_model`)
  - Gather all preprocessed Markdown outputs from Stage 2.
  - Apply `templates/draft.md` using the `pro_model`.
  - Synthesize points and add proper Obsidian properties, tags, and links.
  - Uses the `obsidian` skill if available to match style.
Files & Scripts
This skill includes the following resources under the skill folder:
- `scripts/find_links.py` -- finds relevant note paths (linked + tag-similar)
- `scripts/draft_prompt.py` -- (deprecated) generates a prompt for the agent
- `scripts/obsidian_utils.py` -- shared utilities for wikilink extraction
- `templates/preprocess.md` -- instructions for the subagent to extract info from a single note
- `templates/draft.md` -- instructions for final draft generation
- `config/models.example.json` -- example configuration file
Configuration & Setup
First Run Setup: Before using this skill, you must run the setup script to configure models and directories.
python scripts/setup.py
This will create config/models.json with your preferences. You can press ENTER to accept defaults.
Configuration Fields:
- `pro_model`: the model used for drafting (defaults to the agent's current model)
- `preprocess_model`: a cheap model for extraction (defaults to the agent's current model)
- `zettel_dir`: path to your Zettelkasten notes
- `output_dir`: path where drafts should be saved
- `search_skill`: which search skill to use for web/X research (`web_search`, `brave_search`, or `none`)
- `link_depth`: how many levels deep to follow [[wikilinks]] (N levels, default: 2)
- `max_links`: maximum total linked notes to include (M links, default: 10)
- `discovery_mode`: `standard` (default), `cli` (uses obsidian-cli), or `semantic` (uses zettel-link)
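Put together, a filled-in config/models.json might look like the following. The model names and paths are placeholders for illustration, not defaults shipped with the skill:

```json
{
  "pro_model": "your-pro-model",
  "preprocess_model": "your-cheap-model",
  "zettel_dir": "~/notes/zettelkasten",
  "output_dir": "~/notes/drafts",
  "search_skill": "web_search",
  "link_depth": 2,
  "max_links": 10,
  "discovery_mode": "standard"
}
```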
Usage
- Trigger when the user asks: "brainstorm X", "expand this draft", "research and add notes to …".
- Example workflow (pseudo):
- Pick a seed note.
- Find links: `python scripts/find_links.py --input … --output /tmp/paths.json`
- Preprocess subagent loop:
  - Load `/tmp/paths.json`.
  - For each path, read the content.
  - Prompt `preprocess_model` with `templates/preprocess.md` + content.
  - Store the result.
- Final draft:
  - Concatenate the seed note + all preprocess results.
  - Prompt `pro_model` with `templates/draft.md` + concatenated context.
  - Save the result to a note.
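The workflow above can be sketched as a single driver function. In the real skill the agent itself performs these steps, so `call_model(model, prompt)` and `read_note(path)` are injected stand-ins here; this illustrates the data flow, not the shipped implementation.

```python
import json

def run_pipeline(seed_text, paths_json, call_model, read_note,
                 preprocess_tmpl, draft_tmpl):
    """Sketch of Stages 2-3: preprocess each found note, then draft.

    `paths_json` is the Stage 1 output (a JSON list of note paths);
    `preprocess_tmpl` / `draft_tmpl` stand for the contents of
    templates/preprocess.md and templates/draft.md.
    """
    paths = json.loads(paths_json)
    summaries = []
    for p in paths:  # Stage 2: one cheap extraction call per note
        prompt = f"{preprocess_tmpl}\n\nSEED:\n{seed_text}\n\nNOTE:\n{read_note(p)}"
        summaries.append(call_model("preprocess_model", prompt))
    # Stage 3: one pro-model call over the seed plus all summaries
    context = "\n\n".join([seed_text, *summaries])
    return call_model("pro_model", f"{draft_tmpl}\n\n{context}")
```

Wiring in stubs makes the control flow visible: two files yield two `preprocess_model` calls followed by a single `pro_model` call.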
Notes for maintainers
- Keep preprocess outputs small (200-600 tokens) to save cost.
- Ensure all external links are included in the `References` section with full titles and URLs.
- When appending, always include a timestamp and a short provenance line.
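The last maintainer note can be captured in a small helper. The "> Appended ..." line format below is an invented example, not a convention the skill defines:

```python
from datetime import datetime, timezone

def provenance_line(source_note: str) -> str:
    """Build a timestamp + provenance line to append below new content.

    The exact wording is illustrative; any short, dated line works.
    """
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    return f"> Appended {ts} from {source_note} by zettel-brainstormer"
```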