Zettel Brainstormer: Automated Zettelkasten Research - Openclaw Skills

Author: Internet

2026-04-15

AI Tutorials

What Is Zettel Brainstormer?

Zettel Brainstormer is a powerful addition to the Openclaw Skills library, designed for researchers and writers who want to turn rough drafts into deep, well-cited documents. By programmatically exploring your local knowledge base, the skill surfaces conceptual connections that might otherwise stay hidden. Its tiered-model approach balances efficiency and depth, delivering high-quality synthesis without unnecessary cost.

The skill acts as a digital thinking partner that understands the structure of your Zettelkasten. Rather than just matching keywords, it follows your thought architecture through wikilinks and tag clusters, making it an essential tool for anyone building a second brain. Openclaw Skills like this one form a seamless bridge between raw notes and structured insight.

Download: https://github.com/openclaw/skills/tree/main/skills/hxy9243/zettel-brainstormer

Installation

1. ClawHub CLI

The fastest way to install the skill directly from source.

npx clawhub@latest install zettel-brainstormer

2. Manual Installation

Copy the skill folder to one of the following locations:

  • Global: ~/.openclaw/skills/
  • Workspace: /skills/

Priority: workspace > local > built-in

3. Prompt Installation

Copy this prompt into OpenClaw to install the skill automatically.

Please install zettel-brainstormer for me using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).

Zettel Brainstormer Use Cases

  • Expand a seed idea into a comprehensive research draft.
  • Uncover hidden connections between notes using semantic discovery.
  • Automatically summarize long notes to extract key quotes and takeaways.
  • Enrich an existing Obsidian vault with structured metadata and cross-references.

How Zettel Brainstormer Works

  1. The skill identifies related notes by following wikilinks N levels deep and finding documents with overlapping tags.
  2. A cost-effective subagent model processes each identified note, extracting a summary, a relevance score, and key quotes against the seed idea.
  3. All preprocessed data is collected and passed to a high-performance pro model.
  4. The final draft is synthesized with proper Obsidian properties, tags, and links to keep the vault consistent.
  5. The result is saved as a new, enriched Markdown note in your configured output directory.
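The five steps can be sketched end to end. Everything below is a minimal illustration with in-memory notes; the function names and the one-line "summaries" are hypothetical stand-ins for the skill's real scripts, templates, and model calls:

```python
import re

def find_related(notes: dict, seed: str, depth: int = 2, max_links: int = 10) -> list:
    """Step 1 (sketch): follow [[wikilinks]] from the seed up to `depth` levels."""
    found, frontier = [], [seed]
    for _ in range(depth):
        nxt = []
        for name in frontier:
            for link in re.findall(r"\[\[([^\]]+)\]\]", notes.get(name, "")):
                if link != seed and link not in found and link in notes:
                    found.append(link)
                    nxt.append(link)
        frontier = nxt
    return found[:max_links]

def preprocess(text: str) -> str:
    """Step 2 (sketch): a cheap model would summarize; here we take the first line."""
    return text.splitlines()[0]

def draft(seed_text: str, summaries: list) -> str:
    """Steps 3-5 (sketch): the pro model synthesizes the enriched note."""
    refs = "\n".join(f"- {s}" for s in summaries)
    return f"{seed_text}\n\n## References\n{refs}\n"

notes = {
    "seed": "Idea: spaced repetition. See [[memory]] and [[habits]].",
    "memory": "Memory consolidates during sleep. Links to [[sleep]].",
    "habits": "Habits compound over time.",
    "sleep": "Sleep stages and recall.",
}
related = find_related(notes, "seed")  # two-level traversal also reaches "sleep"
result = draft(notes["seed"], [preprocess(notes[n]) for n in related])
```

The `depth` and `max_links` parameters mirror the skill's documented link_depth and max_links traversal bounds.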

Zettel Brainstormer Configuration Guide

To initialize the skill and configure your local environment, run the included setup script:

python scripts/setup.py

This script generates a config/models.json file where you define your Zettelkasten directory, output path, and preferred AI models. Have your Zettelkasten path ready to get the most out of this Openclaw Skill.

Zettel Brainstormer Data Architecture and Taxonomy

The skill manages information through a combination of configuration files and structured Markdown templates:

| Component | Function |
| --- | --- |
| config/models.json | Configuration for model selection, search depth, and directory paths. |
| scripts/find_links.py | Core logic that extracts file paths via wikilink and tag analysis. |
| templates/preprocess.md | Instruction set for the extraction subagent. |
| templates/draft.md | Final synthesis instructions for the pro model. |
| zettel_dir | Your source vault of Markdown notes. |

name: zettel-brainstormer
description: Reads your local Zettelkasten notes, picks a random idea, finds references via links or tags, then expands the idea with those references.

Zettel Brainstormer

This skill formalizes the process of taking a rough idea or draft and enriching it with deep research, diverse perspectives, and structured brainstorming.

The configuration file is config/models.json, which can be edited manually or using the setup.py script.

New 3-Stage Workflow

This skill now supports a 3-stage pipeline to balance cost and quality:

  1. Find References (with find_links.py)
  • Run scripts/find_links.py to identify relevant existing notes.
    • Wikilinked documents (follows [[wikilinks]] N levels deep, up to M total docs)
    • Tag-similar documents (finds notes with overlapping tags)
    • Native Obsidian Integration: Uses obsidian-cli for high-performance indexing and discovery if available.
    • Semantic Discovery (Optional): Can leverage the zettel-link skill for finding "hidden" conceptual connections that don't share explicit tags or links.
  • Output: A JSON list of absolute file paths to relevant notes.
  2. Subagent: Preprocess contents (with preprocess_model)
  • The agent iterates through the list of files found in Stage 1.
  • For each file:
    • Read the file content.
    • Apply templates/preprocess.md using the preprocess_model, passing the seed note's key points and the file content as context.
    • Extract: Relevance score, Summary, Key Points, and Quotes.
    • Output: A structured markdown summary of the note.
  3. Draft & Humanize (with pro_model)
  • Gather all preprocessed markdown outputs from Stage 2.
  • Apply templates/draft.md using the pro_model.
  • Synthesize points, add proper Obsidian properties, tags, and links.
  • Uses the obsidian skill if available to match style.
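The tag-similar branch of Stage 1 can be illustrated with a simple overlap check. This sketch assumes tags live in a `tags: [a, b]` frontmatter line; the real scripts/find_links.py and scripts/obsidian_utils.py may parse vaults differently:

```python
import re

def extract_tags(text: str) -> set:
    """Pull tags from a `tags: [a, b]` frontmatter line (assumed format)."""
    m = re.search(r"^tags:\s*\[([^\]]*)\]", text, re.MULTILINE)
    return {t.strip() for t in m.group(1).split(",") if t.strip()} if m else set()

def tag_similar(seed_text: str, corpus: dict, min_overlap: int = 1) -> list:
    """Return paths of notes whose tags overlap the seed's by at least min_overlap."""
    seed_tags = extract_tags(seed_text)
    return [
        path for path, text in corpus.items()
        if len(seed_tags & extract_tags(text)) >= min_overlap
    ]

seed = "tags: [zettelkasten, memory]\n# Spaced repetition"
corpus = {
    "notes/recall.md": "tags: [memory, sleep]\nNotes on recall.",
    "notes/pasta.md": "tags: [cooking]\nUnrelated.",
}
matches = tag_similar(seed, corpus)  # only the note sharing "memory" matches
```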

Files & Scripts

This skill includes the following resources under the skill folder:

  • scripts/find_links.py -- finds relevant note paths (linked + tag similar)
  • scripts/draft_prompt.py -- (Deprecated) generate prompt for agent
  • scripts/obsidian_utils.py -- shared utilities for wikilink extraction
  • templates/preprocess.md -- Instructions for subagent to extract info from single note
  • templates/draft.md -- Instructions for final draft generation
  • config/models.example.json -- example configuration file

Configuration & Setup

First Run Setup: Before using this skill, you must run the setup script to configure models and directories.

python scripts/setup.py

This will create config/models.json with your preferences. You can press ENTER to accept defaults.

Configuration Fields:

  • pro_model: The model used for drafting (defaults to agent's current model)
  • preprocess_model: Cheap model for extraction (defaults to agent's current model)
  • zettel_dir: Path to your Zettelkasten notes
  • output_dir: Path where drafts should be saved
  • search_skill: Which search skill to use for web/X research (web_search, brave_search, or none)
  • link_depth: How many levels deep to follow [[wikilinks]] (N levels, default: 2)
  • max_links: Maximum total linked notes to include (M links, default: 10)
  • discovery_mode: Options are standard (default), cli (uses obsidian-cli), or semantic (uses zettel-link).
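Putting the fields together, a filled-in config/models.json could look like the dictionary below. The model identifiers and paths are placeholders, not values shipped with the skill; only the field names and the two numeric defaults come from the list above:

```python
import json

# Illustrative config/models.json contents; model ids and paths are placeholders.
config = {
    "pro_model": "provider/pro-model",           # placeholder model id
    "preprocess_model": "provider/cheap-model",  # placeholder model id
    "zettel_dir": "~/notes/zettel",              # placeholder path
    "output_dir": "~/notes/drafts",              # placeholder path
    "search_skill": "web_search",
    "link_depth": 2,    # N: wikilink traversal depth (documented default)
    "max_links": 10,    # M: cap on linked notes (documented default)
    "discovery_mode": "standard",
}
print(json.dumps(config, indent=2))  # what the file on disk would contain
```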

Usage

  • Trigger when user asks: "brainstorm X", "expand this draft", "research and add notes to ".
  • Example workflow (pseudo):
    1. Pick a seed note.
    2. Find Links: python scripts/find_links.py --input --output /tmp/paths.json
    3. Preprocess Subagent Loop:
      • Load /tmp/paths.json.
      • For each path, read content.
      • Prompt preprocess_model with templates/preprocess.md + content.
      • Store result.
    4. Final Draft:
      • Concatenate seed note + all preprocess results.
      • Prompt pro_model with templates/draft.md + concatenated context.
    5. Save result to note.

Notes for maintainers

  • Keep preprocess outputs small (200-600 tokens) to save cost.
  • Ensure all external links are included in the References section with full titles and URLs.
  • When appending, always include a timestamp and short provenance line.
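The last rule can be made concrete with a tiny helper; this function is a sketch and is not part of the skill's scripts:

```python
from datetime import datetime, timezone

def append_with_provenance(note_text: str, addition: str, source: str) -> str:
    """Append content with a UTC timestamp and a short provenance line."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    return (
        f"{note_text.rstrip()}\n\n---\n"
        f"_Appended {stamp} by zettel-brainstormer, from {source}_\n\n"
        f"{addition}\n"
    )

updated = append_with_provenance("Seed note.", "New key point.", "notes/recall.md")
```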
