Excel Weekly Report: Power Query Reporting - Openclaw Skills

Author: Internet

2026-04-16

Excel

What Is the Excel Weekly Dashboard Skill?

The Excel Weekly Dashboard skill is designed to build repeatable reporting pipelines in Microsoft Excel. Using Openclaw Skills, the agent can create robust Power Query workflows that pull data from folders, CSV, or XLSX files and transform raw data into clean, pivot-ready tables. It prioritizes a "no manual work" philosophy, keeping your reports stable even as weekly data accumulates or source columns change.

The skill goes beyond simple spreadsheet creation by applying developer-grade data-integrity practices. It establishes a clear separation between raw ingestion and cleansed data, combined with validation flags and refresh health checks. Using Openclaw Skills for your reporting needs gives you a scalable framework that handles data normalization, type safety, and professional visual layout without the fragility typical of hand-built Excel models.

Download: https://github.com/openclaw/skills/tree/main/skills/kowl64/excel-weekly-dashboard

Installation & Download

1. ClawHub CLI

The fastest way to install the skill directly from source.

npx clawhub@latest install excel-weekly-dashboard

2. Manual installation

Copy the skill folder to one of the following locations:

  • Global: ~/.openclaw/skills/
  • Workspace: /skills/

Priority: workspace > local > built-in

3. Prompt-based installation

Copy this prompt into OpenClaw to install automatically.

Please install excel-weekly-dashboard using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).

Excel Weekly Dashboard Skill: Use Cases

  • Automate weekly KPI updates from a folder of CSV exports to eliminate manual copy-paste.
  • Turn unstructured data dumps into clean tables with strict data-entry and validation rules.
  • Build interactive dashboards with slicers for ISO week and operational dimensions.
  • Refactor brittle Excel models that break when source files change column order or naming.
  • Create a centralized reporting hub that aggregates data from multiple exported PDF or Word tables.

Excel Weekly Dashboard Skill: How It Works

  1. The skill identifies the source file types and locates stable business keys to ensure data continuity.
  2. It defines a canonical table schema specifying the required columns, data types, and rules for handling missing values.
  3. It designs the Power Query ingestion layer, preferably a folder-ingest approach with defensive column logic.
  4. It creates a two-stage query system: a staging layer for raw normalization and a clean layer for validated data.
  5. It builds the reporting layer with pivot tables and interactive slicers driven by the cleansed-data query.
  6. It integrates a refresh-status dashboard for visibility into row counts, timestamps, and potential errors.
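The staging/clean split in steps 3-4 can be sketched in Python. This is an illustrative simulation of the logic, not the skill's actual output; in the real workbook these steps are Power Query M, and the column names (PayNumber, Date) are assumptions taken from the examples later in this page:

```python
from datetime import date

def normalize(name):
    # Canonicalize a header: trim, lowercase, collapse internal whitespace
    return "_".join(name.strip().lower().split())

def stage(rows):
    # Staging layer: normalized column names, values untouched
    return [{normalize(k): v for k, v in row.items()} for row in rows]

def clean(staged):
    # Clean layer: keep every row, but attach visible validation flags
    # instead of silently dropping bad data
    out = []
    for row in staged:
        flagged = dict(row)
        flagged["is_valid_paynumber"] = bool(row.get("paynumber", "").strip())
        try:
            flagged["is_valid_date"] = bool(date.fromisoformat(row.get("date", "")))
        except ValueError:
            flagged["is_valid_date"] = False
        out.append(flagged)
    return out

raw = [{" PayNumber ": "P001", "Date": "2026-01-05"},
       {" PayNumber ": "", "Date": "not-a-date"}]
cleaned = clean(stage(raw))
```

The key design choice mirrored here is that the clean layer never discards rows; bad rows stay visible with an explanation, which is what makes the downstream health checks possible.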

Excel Weekly Dashboard Skill: Configuration Guide

To deploy this dashboard logic in your Openclaw Skills workflow, give the agent a sample of your source files and your KPI requirements.

# A typical interaction to start a build plan
openclaw run excel-weekly-dashboard --files ./data/weekly_exports/

The skill will output a comprehensive workbook plan or specific artifacts (such as power_query_steps.pq or workbook_spec.md) to guide manual or automated setup of the Excel environment.

Excel Weekly Dashboard Skill: Data Architecture & Taxonomy

The skill organizes the Excel workbook into a logical hierarchy to preserve data integrity and performance in an Openclaw Skills implementation:

Sheet name     | Role       | Description
Data_Staging   | Ingestion  | Direct output of the raw-data Power Query.
Data_Clean     | Validation | Normalized data enriched with validation flags (e.g., IsValidDate).
Dashboard      | Reporting  | User-facing layer with pivot tables, charts, and slicers.
Refresh_Status | Monitoring | Log of last refresh, row counts, and query health checks.
name: excel-weekly-dashboard
description: Designs refreshable Excel dashboards (Power Query + structured tables + validation + pivot reporting). Use when you need a repeatable weekly KPI workbook that updates from files with minimal manual work.

Excel weekly dashboards at scale

PURPOSE

Designs refreshable Excel dashboards (Power Query + structured tables + validation + pivot reporting).

WHEN TO USE

  • TRIGGERS:
    • Build me a Power Query pipeline for this file so it refreshes weekly with no manual steps.
    • Turn this into a structured table with validation lists and clean data entry rules.
    • Create a pivot-driven weekly dashboard with slicers for year and ISO week.
    • Fix this Excel model so refresh does not break when new columns appear.
    • Design a reusable KPI pack that updates from a folder of CSVs.
  • DO NOT USE WHEN…
    • You need advanced forecasting/valuation modeling (this skill is for repeatable reporting pipelines).
    • You need a BI tool build (Power BI/Tableau) rather than Excel.
    • You need web scraping as the primary ingestion method.

INPUTS

  • REQUIRED:
    • Source data file(s): CSV, XLSX, DOCX-exported tables, or PDF-exported tables (provided by user).
    • Definition of ‘week’ (ISO week preferred) and the KPI fields required.
  • OPTIONAL:
    • Data dictionary / column definitions.
    • Known “bad data” patterns to validate (e.g., blank PayNumber, invalid dates).
    • Existing workbook to refactor.
  • EXAMPLES:
    • Folder of weekly CSV exports: exports/2026-W02/*.csv
    • Single XLSX dump with changing columns month to month
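Since the skill prefers ISO weeks, the Year/ISOWeek fields the slicers rely on can be derived unambiguously from a date. A short Python illustration of why both fields are needed (many spreadsheet week functions are not ISO-compliant by default, so this derivation is worth pinning down):

```python
from datetime import date

def iso_year_week(d: date) -> tuple[int, int]:
    # ISO year and week number; the ISO year can differ from the
    # calendar year near January 1st, which is why slicers need both.
    iso = d.isocalendar()
    return (iso[0], iso[1])

# 2026 starts on a Thursday, so it has 53 ISO weeks, and
# 2027-01-01 still belongs to ISO week 2026-W53.
boundary = iso_year_week(date(2027, 1, 1))
```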

OUTPUTS

  • If asked for plan only (default): a step-by-step build plan + Power Query steps + sheet layout + validation rules.
  • If explicitly asked to generate artifacts:
    • workbook_spec.md (workbook structure and named tables)
    • power_query_steps.pq (M code template)
    • refresh-checklist.md (from assets/)

  Success = refresh works after adding a new week’s files without manual edits, and validation catches bad rows.

WORKFLOW

  1. Identify source type(s) (CSV/XLSX/DOCX/PDF-export) and the stable business keys (e.g., PayNumber).
  2. Define the canonical table schema:
    • required columns, types, allowed values, and “unknown” handling.
  3. Design ingestion with Power Query:
    • Prefer Folder ingest + combine, with defensive “missing column” handling.
    • Normalize column names (trim, case, collapse spaces).
  4. Design cleansing & validation:
    • Create a Data_Staging query (raw-normalized) and Data_Clean query (validated).
    • Add validation columns (e.g., IsValidPayNumber, IsValidDate, IssueReason).
  5. Build reporting layer:
    • Pivot table(s) off Data_Clean
    • Slicers: Year, ISOWeek; plus operational dimensions
  6. Add a “Refresh Status” sheet:
    • last refresh timestamp, row counts, query error flags, latest week present
  7. STOP AND ASK THE USER if:
    • required KPIs/columns are unspecified,
    • the source files don’t include any stable key,
    • week definition/timezone rules are unclear,
    • PDF/DOCX tables are not reliably extractable without a provided export.
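Step 3's defensive missing-column handling is the piece that keeps refresh from breaking when a source drops, renames, or adds a field. A minimal Python sketch of the idea, under the assumption of a small illustrative schema (the skill itself would express this as M code over the canonical schema defined in step 2):

```python
# Canonical schema: column name -> default used when a source omits it.
# These column names are illustrative, not the skill's actual schema.
SCHEMA = {"paynumber": "", "name": "", "date": None}

def conform(row: dict) -> dict:
    # Project a row onto the canonical schema: fill missing columns with
    # defaults and drop unexpected extras, so downstream steps always see
    # the same shape regardless of what the source file contained.
    return {col: row.get(col, default) for col, default in SCHEMA.items()}

# A file that dropped 'date' and added an extra 'dept' column still conforms:
conformed = conform({"paynumber": "P001", "name": "Ada", "dept": "Ops"})
```

Projecting onto a fixed schema, rather than trusting the file's own column list, is what makes the refresh deterministic across weeks.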

OUTPUT FORMAT

When producing a plan, use this template:

WORKBOOK PLAN
- Sheets:
  - Data_Staging (query output)
  - Data_Clean (query output + validation flags)
  - Dashboard (pivots/charts)
  - Refresh_Status (counts + health checks)
- Canonical Schema:
  - <Column>: <Type> | Required? | Validation
- Power Query:
  - Query 1: Ingest_<Source> (Folder/File)
  - Query 2: Clean_<Table>
  - Key transforms: <list>
- Validation rules:
  - <rule> -> <flag/action>
- Pivot design:
  - Rows/Columns/Values
  - Slicers

If asked for artifacts, also output:

  • assets/power-query-folder-ingest-template.pq (adapted)
  • assets/refresh-checklist.md

SAFETY & EDGE CASES

  • Read-only by default: provide a plan + snippets unless the user explicitly requests file generation.
  • Never delete or overwrite user files; propose new filenames for outputs.
  • Prefer “no silent failure”: include row-count checks and visible error flags.
  • For PDF/DOCX sources, require user-provided exported tables (CSV/XLSX) or clearly mark extraction risk.
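The "no silent failure" principle above is concrete enough to sketch: the Refresh_Status sheet boils down to a handful of aggregates over the clean layer. A hypothetical Python rendering (field names are assumptions; in the workbook these values come from Power Query and worksheet formulas):

```python
from datetime import date, datetime

def refresh_status(clean_rows):
    # Summarize a refresh run: totals, invalid-row count, latest ISO week seen.
    valid = [r for r in clean_rows if r.get("is_valid_date")]
    weeks = sorted(date.fromisoformat(r["date"]).isocalendar()[:2] for r in valid)
    return {
        "last_refresh": datetime.now().isoformat(timespec="seconds"),
        "rows_total": len(clean_rows),
        "rows_invalid": len(clean_rows) - len(valid),
        "latest_week": f"{weeks[-1][0]}-W{weeks[-1][1]:02d}" if weeks else None,
    }

rows = [{"date": "2026-01-05", "is_valid_date": True},
        {"date": "2026-01-12", "is_valid_date": True},
        {"date": "bad", "is_valid_date": False}]
status = refresh_status(rows)
```

A glance at rows_invalid and latest_week after each refresh is what turns a silent data problem into a visible one.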

EXAMPLES

  • Input: “Folder of weekly CSVs with PayNumber/Name/Date.”
    Output: Folder-ingest PQ template + schema + Refresh Status checks + pivot dashboard plan.

  • Input: “Refresh breaks when new columns appear.”
    Output: Defensive missing-column logic + column normalization + typed schema plan.