Stock Picker Orchestrator: Professional AI Stock Analysis - An Openclaw Skill

Author: Internet

2026-03-30

AI Tutorials

What Is the Stock Picker Orchestrator?

The Stock Picker Orchestrator is a central coordination hub designed to manage complex financial research workflows. By leveraging a suite of Openclaw skills, it intelligently routes user intent (whether a deep dive on a single stock or a broad sector screen) across data-fetching, macro-analysis, and valuation layers. It acts as the core brain of an AI investment assistant, ensuring that market research is structured, resource-efficient, and technically grounded.

What sets this skill apart is its ability to enforce strict budget controls and pacing policies, making it well suited to managing API limits while maintaining high-quality output. By synthesizing global macro signals, domestic market narratives, and a rigorous valuation framework, it turns raw financial data into professional-grade reports and offers a transparent view of the logic behind every recommendation.

Download: https://github.com/openclaw/skills/tree/main/skills/ndtchan/stock-picker-orchestrator

Installation and Download

1. ClawHub CLI

The fastest way to install the skill directly from the source.

npx clawhub@latest install stock-picker-orchestrator

2. Manual Installation

Copy the skill folder to one of the following locations:

Global: ~/.openclaw/skills/
Workspace: /skills/

Priority: workspace > local > built-in

3. Prompt-Based Installation

Paste this prompt into OpenClaw to install the skill automatically.

Please help me install stock-picker-orchestrator using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).

Stock Picker Orchestrator Use Cases

  • Run a single-ticker deep dive that combines fundamental data with a professional valuation model and a risk register.
  • Run multi-ticker or sector screens to identify top value-quality candidates from a broad universe such as the VN30.
  • Conduct macro/news-led investigations to determine how global economic shifts or domestic news affect specific sectors.
  • Run a portfolio refresh that re-evaluates existing holdings against current risk triggers and position-sizing discipline.

How the Stock Picker Orchestrator Works

  1. Parse the user prompt and classify the intent into a specific mode such as single-ticker, multi-ticker, macro-led, or portfolio refresh.
  2. Assign a budget preset (Light, Standard, or Deep) that defines hard limits on API calls and news-scraping depth.
  3. Trigger the required upstream Openclaw skills to gather structured market data, macro snapshots, and news narratives.
  4. Validate all intermediate outputs for freshness and consistency, resolving any conflicts between data modules.
  5. Run the valuation layer, applying the appropriate model, from quick multiples to a full DCF analysis.
  6. Aggregate the findings into a standardized output contract, including confidence scores, assumptions, and actionable next steps.

Stock Picker Orchestrator Configuration Guide

To deploy this orchestrator, the dependent Openclaw skills must be available in your workspace. Follow these steps to ensure compatibility:

# Make sure the following dependencies are active in your Openclaw environment:
# - vnstock-free-expert
# - nso-macro-monitor
# - us-macro-news-monitor
# - vn-market-news-monitor
# - equity-valuation-framework
# - portfolio-risk-manager

Stock Picker Orchestrator Data Architecture and Taxonomy

The orchestrator manages data through a structured pipeline to ensure transparency and reproducibility:

  • Intent Mode: classification of the request (e.g., single-ticker vs. portfolio refresh)
  • Budget Preset: defines the maximum API calls and valuation depth (Light, Standard, Deep)
  • Confidence Rubric: unified High/Medium/Low scoring based on data completeness and cross-module consistency
  • Output Contract: mandatory report sections, including assumptions, risk flags, and data gaps

name: stock-picker-orchestrator
description: Acts as a meta-orchestrator that routes stock-analysis requests across data, macro/news, and valuation skills under explicit budget controls; used when users ask to find candidates, compare stocks, or run end-to-end expert analysis.
compatibility: Requires dependent skills (`vnstock-free-expert`, macro/news monitors, and `equity-valuation-framework`) to be available in the same workspace.
metadata: {"openclaw":{"emoji":"??"}}

Stock Picker Orchestrator

Use this skill to coordinate the full analysis system from user intent to final recommendation framing.

Purpose

  • Convert user request into the right analysis pipeline.
  • Control budget: vnstock API calls, breadth of news scraping, depth of valuation work.
  • Produce transparent outputs: what was fetched, assumptions, confidence, gaps.
  • Scope boundary: this skill coordinates other skills and does not replace their domain-specific logic.

Skill graph (preferred dependencies)

  1. vnstock-free-expert for structured market/fundamental data.
  2. nso-macro-monitor for Vietnam macro snapshot.
  3. us-macro-news-monitor for global macro spillover signals.
  4. vn-market-news-monitor for domestic market narrative.
  5. equity-valuation-framework for decision-grade valuation and report standard.
  6. portfolio-risk-manager for IPS mini + position sizing + risk triggers (no-margin).

Trigger conditions

  • "Find best stock(s)"
  • "Screen this sector"
  • "Analyze ticker X deeply"
  • "How do macro/news affect these stocks"
  • "Value this stock like a professional"

First step: intent classification

Classify user request into one of these modes:

  • Single-Ticker Deep Dive
  • Multi-Ticker/Universe Screening
  • Macro/News-Led Investigation
  • Portfolio Refresh

If ambiguous, choose the most conservative high-signal mode and note assumption.
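The classification step can be sketched as a simple keyword router. This is only an illustrative sketch: the real skill leaves intent parsing to the model, and the pattern lists and mode names below are assumptions.

```python
import re

# Hypothetical keyword heuristics for the four routing modes; the
# real skill relies on model-side intent parsing, so patterns and
# mode names here are assumptions for illustration only.
MODE_PATTERNS = {
    "screening": [r"\bscreen\b", r"\bfind\b.*\bstocks?\b", r"\bcompare\b"],
    "portfolio_refresh": [r"\bportfolio\b", r"\bholdings?\b", r"\brefresh\b"],
    "macro_led": [r"\bmacro\b", r"\bnews\b"],
    "single_ticker": [r"\banalyze\b", r"\bvalue this stock\b", r"\bdeep dive\b"],
}

def classify_intent(prompt: str) -> str:
    """Return the first matching mode; otherwise fall back to the most
    conservative high-signal mode and note the assumption."""
    text = prompt.lower()
    for mode, patterns in MODE_PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return mode
    return "single_ticker"  # conservative default; state this assumption

print(classify_intent("Screen this sector"))         # screening
print(classify_intent("Analyze ticker HPG deeply"))  # single_ticker
```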

Execution workflow (ordered)

  1. Parse user intent and select one routing mode.
  2. Set budget preset (Light, Standard, Deep) and hard request limits.
  3. Execute required upstream skills for the chosen route.
  4. Validate intermediate outputs for freshness, completeness, and conflicts.
  5. Run valuation layer only at the required depth.
  6. Aggregate confidence across modules using the shared rubric.
  7. Return output using the mandatory output contract.
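The seven steps above can be sketched as a single driver function. Every helper below is a stub standing in for a dependent skill call; the names and return shapes are assumptions, not the skill's actual API.

```python
# Runnable skeleton of the seven-step workflow; all helpers are stubs.
def run_skill(name: str, budget: str) -> dict:
    # Stub for invoking a dependent Openclaw skill under a budget preset.
    return {"skill": name, "budget": budget, "fresh": True}

def orchestrate(prompt: str) -> dict:
    mode = "screening" if "screen" in prompt.lower() else "single_ticker"  # step 1
    budget = "light"                                                       # step 2
    raw = [run_skill(s, budget) for s in ("vnstock-free-expert",)]         # step 3
    assert all(r["fresh"] for r in raw)                                    # step 4
    valuation = {"depth": "quick"}                                         # step 5
    confidence = "medium"                                                  # step 6
    return {"mode": mode, "results": raw,                                  # step 7
            "valuation": valuation, "confidence": confidence}

print(orchestrate("Screen this sector")["mode"])  # screening
```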

Budget policy (required)

Define and enforce budget at start:

  • API budget: max vnstock calls
  • News budget: max headlines/articles per source
  • Valuation depth: quick multiples vs full DCF

Default safe presets:

  • Light: 20-40 vnstock calls, headlines-only news, quick valuation
  • Standard: 40-120 calls, mixed headlines + selected deep reads, scenario valuation
  • Deep: 120+ calls, full context package, full valuation + sensitivity

Prefer free-tier-safe pacing when using vnstock.
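A minimal sketch of the presets as data, together with the "smallest viable preset" rule from the budget-mapping policy. The upper-bound call counts mirror the documented ranges, except the Deep cap of 300, which is an invented placeholder since the text only says "120+".

```python
from dataclasses import dataclass

# Sketch of the three budget presets; names and field layout are assumptions.
@dataclass(frozen=True)
class BudgetPreset:
    max_vnstock_calls: int
    news_depth: str       # "headlines" | "mixed" | "full"
    valuation_depth: str  # "quick" | "scenario" | "full"

PRESETS = {
    "light": BudgetPreset(40, "headlines", "quick"),
    "standard": BudgetPreset(120, "mixed", "scenario"),
    "deep": BudgetPreset(300, "full", "full"),  # "120+"; 300 is a placeholder cap
}

def pick_preset(estimated_calls: int) -> str:
    """Choose the smallest viable preset for an estimated call count."""
    for name, preset in PRESETS.items():
        if estimated_calls <= preset.max_vnstock_calls:
            return name
    return "deep"

print(pick_preset(35))   # light
print(pick_preset(100))  # standard
```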

Free-tier budget mapping (required)

Use these hard limits for vnstock runs:

  • Guest/no API key: max 20 requests/min (recommended pacing >= 3.2s/request).
  • Community API key: max 60 requests/min (recommended pacing >= 1.1s/request; keep 3.2s/request if unstable).

Policy actions:

  1. Estimate call count before execution and choose the smallest viable preset.
  2. If estimated calls exceed current budget, reduce scope (smaller universe or fewer modules).
  3. Reuse cached artifacts before making new requests.
  4. Stop scope expansion when remaining call budget < 10% and report partial results.
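The pacing numbers above translate into a small throttling helper. This is a sketch under the stated limits; the class and method names are assumptions and are not part of vnstock.

```python
import time

# Recommended per-request pacing from the free-tier limits above:
# guest (no key): >= 3.2 s/request; community key: >= 1.1 s/request.
PACING_SECONDS = {"guest": 3.2, "community": 1.1}

class PacedClient:
    """Hypothetical wrapper that enforces minimum spacing between calls."""

    def __init__(self, tier: str = "guest"):
        self.interval = PACING_SECONDS[tier]
        self._last = 0.0

    def wait_turn(self) -> None:
        """Sleep just long enough to respect the per-request pacing."""
        elapsed = time.monotonic() - self._last
        if elapsed < self.interval:
            time.sleep(self.interval - elapsed)
        self._last = time.monotonic()
```

Calling `wait_turn()` before each vnstock request keeps a run under the per-minute ceiling without tracking a rolling window.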

Routing logic

A) Single ticker request

Priority: depth over breadth. Pipeline:

  1. vnstock-free-expert fetch financials + price behavior.
  2. Optional macro/news context if user asks or risk is macro-sensitive.
  3. equity-valuation-framework full thesis + valuation + risks.

B) Multi-ticker/sector screening

Priority: breadth first, then depth on finalists. Pipeline:

  1. vnstock-free-expert broad screener/ranking.
  2. Select top candidates by objective criteria.
  3. Run quick valuation layer on shortlist.
  4. Deep valuation only for top 1-3 names.

C) Macro/news-led request

Priority: context first, valuation second. Pipeline:

  1. nso-macro-monitor + us-macro-news-monitor + vn-market-news-monitor.
  2. Map exposures to sectors/tickers.
  3. Run quick vnstock validation on impacted names.
  4. If needed, run equity-valuation-framework for decision-critical names.

D) Portfolio refresh

Priority: risk control + monitoring triggers + sizing discipline. Pipeline:

  1. Re-score holdings and benchmark against alternatives.
  2. Macro/news stress overlay.
  3. Run equity-valuation-framework at least quick depth on key holdings/watchlist.
  4. Run portfolio-risk-manager to produce IPS mini + position sizing policy + per-ticker triggers/invalidation.
  5. Flag rebalance candidates with confidence and data gaps.
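The four routes above can be summarized as ordered skill lists. The skill names come from the skill graph; the dictionary layout and mode keys are assumptions.

```python
# Sketch of the routing table; each value is an ordered pipeline of skills.
PIPELINES = {
    "single_ticker": [
        "vnstock-free-expert",         # financials + price behavior
        "equity-valuation-framework",  # full thesis + valuation + risks
    ],
    "screening": [
        "vnstock-free-expert",         # broad screener/ranking
        "equity-valuation-framework",  # quick pass on shortlist, deep on top 1-3
    ],
    "macro_led": [
        "nso-macro-monitor",
        "us-macro-news-monitor",
        "vn-market-news-monitor",
        "vnstock-free-expert",         # quick validation on impacted names
    ],
    "portfolio_refresh": [
        "vnstock-free-expert",
        "equity-valuation-framework",
        "portfolio-risk-manager",      # IPS mini + sizing + triggers
    ],
}

def pipeline_for(mode: str) -> list:
    return PIPELINES[mode]
```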

Mandatory output contract

Always include these sections in final response:

  1. What Was Fetched
  • Data sources used, date/time, and coverage.
  2. Pipeline Chosen
  • Why this route was selected for current user intent.
  3. Assumptions
  • Explicit assumptions on macro, valuation parameters, and data quality.
  4. Results
  • Ranked outputs or thesis summary with concise evidence.
  5. Confidence and Gaps
  • Confidence level + missing data + potential impact.
  6. Risk Flags
  • Top risks and monitoring triggers.
  7. Next-Step Options
  • 2-3 practical follow-up actions (e.g., deepen 1 ticker, expand peer set, update after next macro release).
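The contract can be enforced mechanically with a checklist. The snake_case keys below mirror the seven section headings; the key names and the validator itself are assumptions.

```python
# Sketch: the seven mandatory report sections as an ordered checklist.
OUTPUT_CONTRACT = [
    "what_was_fetched",
    "pipeline_chosen",
    "assumptions",
    "results",
    "confidence_and_gaps",
    "risk_flags",
    "next_step_options",
]

def validate_report(report: dict) -> list:
    """Return the contract sections missing from a draft report."""
    return [s for s in OUTPUT_CONTRACT if s not in report]

print(validate_report({"results": "...", "assumptions": "..."}))
```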

Shared confidence rubric (required)

Use a unified confidence output across pipeline steps:

  • High: all critical modules complete with no material data blockers.
  • Medium: one critical module has partial gaps but overall conclusion remains stable.
  • Low: key module(s) missing or conflicting evidence makes conclusion fragile.

Aggregation rule:

  1. Compute per-module confidence first (vnstock, macro, news, valuation).
  2. Overall confidence = minimum of critical modules used in the chosen pipeline.
  3. If module outputs conflict, cap overall confidence at Medium unless conflict is resolved with stronger evidence.
  4. Always state which module is the bottleneck for confidence.
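The aggregation rule (minimum across critical modules, capped at Medium on unresolved conflict, bottleneck reported) can be sketched as:

```python
# Map the rubric's three levels onto integers so "minimum" is well defined.
LEVELS = {"low": 0, "medium": 1, "high": 2}
NAMES = {v: k for k, v in LEVELS.items()}

def overall_confidence(module_scores: dict, conflict: bool = False) -> tuple:
    """Return (overall level, bottleneck module) per the aggregation rule."""
    bottleneck = min(module_scores, key=lambda m: LEVELS[module_scores[m]])
    level = LEVELS[module_scores[bottleneck]]
    if conflict:
        level = min(level, LEVELS["medium"])  # cap at Medium until resolved
    return NAMES[level], bottleneck

print(overall_confidence({"vnstock": "high", "macro": "medium", "valuation": "high"}))
# ('medium', 'macro')
```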

Governance and quality rules

  • Single source of truth: if user provides ACTIVE_WATCHLIST/holdings, do not self-modify it; only propose drafts requiring user confirmation.
  • Never present uncertain outputs as facts.
  • Separate observed data from inference.
  • Prefer reproducible logic over ad-hoc narratives.
  • When data is insufficient, downgrade confidence and narrow claims.
  • Avoid absolute buy/sell instructions; provide valuation framing and risk-aware interpretation.

Conflict resolution rules

If outputs from different modules disagree:

  1. Trust data quality hierarchy first (freshness/completeness/consistency).
  2. Prefer broad consensus metrics over fragile point estimates.
  3. Keep both interpretations and state decision boundary (what would change the conclusion).

Fallback behavior

  • If macro/news skills are unavailable: continue with vnstock + valuation only and mark missing context.
  • If valuation inputs are weak: provide screening + directional view; defer full valuation.
  • If API budget is near limit: stop expanding scope, summarize partial results, request user confirmation for deeper run.
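The three fallback rules, plus the 10% budget stop from the policy section, can be combined into one decision function. The flag names and return strings are assumptions.

```python
# Sketch of fallback selection; checks are ordered so the budget stop wins.
def choose_fallback(macro_available: bool, valuation_inputs_ok: bool,
                    remaining_budget_pct: float) -> str:
    if remaining_budget_pct < 10:
        return "summarize_partial_and_ask_user"    # budget near limit
    if not macro_available:
        return "vnstock_plus_valuation_only"       # mark missing context
    if not valuation_inputs_ok:
        return "screening_plus_directional_view"   # defer full valuation
    return "full_pipeline"

print(choose_fallback(True, True, 50.0))  # full_pipeline
```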

Example orchestration prompts

  • "Run a single-ticker deep dive for HPG with full valuation and risk register."
  • "Screen VN30 for top value-quality names, then deep value top 3."
  • "Start from macro shock signals, then identify Vietnamese sector winners/losers and value 2 candidates."

Trigger examples

  • "Find the best Vietnam stocks this week with full reasoning."
  • "Compare three candidate tickers and tell me which one is strongest."
  • "Start from macro and news, then shortlist potential winners."