last30days: Real-Time AI Topic Research Assistant - Openclaw Skills
Author: Internet
2026-03-30
What is last30days?
last30days is an advanced research assistant designed for modern developers and content creators within the Openclaw Skills framework. It specializes in aggregating human-sourced data from the last 30 days across high-signal platforms such as Reddit, X (formerly Twitter), YouTube, TikTok, Hacker News, and Polymarket. By focusing on engagement metrics like likes, view counts, and prediction-market odds, it bypasses generic SEO content and surfaces what people are actually building, discussing, and recommending right now.
The skill is more than simple search; it is a synthesis engine. It parses user intent to distinguish news gathering, product recommendations, and prompt engineering. Whether you are looking for the latest Claude Code techniques or tracking geopolitical shifts through prediction markets, this tool turns raw social data into a structured knowledge base and delivers copy-paste-ready prompts tailored to your specific tool.
Download: https://github.com/openclaw/skills/tree/main/skills/mvanhorn/last30days-official
Installation & Download
1. ClawHub CLI
The fastest way to install the skill directly from source.
npx clawhub@latest install last30days-official
2. Manual installation
Copy the skill folder to one of the following locations:
- Global: ~/.openclaw/skills/
- Workspace: <workspace>/skills/
Priority: workspace > local > built-in
3. Prompt-based installation
Copy this prompt into OpenClaw to install automatically:
Please install last30days-official for me using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).
last30days Use Cases
- Generate high-performing AI prompts for image generators or coding assistants based on trending community techniques.
- Run rapid market research by analyzing discussion and sentiment on Reddit and X around a specific product launch.
- Track tech and finance developments in real time using Hacker News comments and Polymarket odds.
- Build comprehensive research briefings for newsletters or technical reports by aggregating YouTube transcripts and TikTok trends.
How it works:
- Intent parsing: the assistant analyzes the user's request to identify the specific topic, any target tool mentioned, and the desired query type (prompting, recommendations, news, or general).
- Source resolution: for entities such as brands or creators, it runs targeted searches to resolve the official X account, ensuring data is retrieved directly.
- Multi-platform execution: it runs a Python-based research engine in the foreground to query social APIs and extract transcripts from video content.
- Web supplementation: the skill performs targeted web searches to fill gaps with technical documentation, blog posts, and official news.
- Judge-agent synthesis: an internal logic layer weights sources by engagement (likes/favorites) and identifies cross-platform patterns to find the strongest signals.
- Expert output: the skill produces a summary of findings with engagement statistics and invites you to create specific prompts or drill into subtopics.
last30days Configuration Guide
To integrate it into your workflow, make sure the required environment variables are configured. This skill is a core part of the Openclaw Skills collection and requires a ScrapeCreators API key to access social data.
# Set your primary API key
export SCRAPECREATORS_API_KEY='your_api_key_here'
# Optional: enhance web search results
export BRAVE_API_KEY='your_api_key_here'
export OPENROUTER_API_KEY='your_api_key_here'
# For X/Twitter deep search (optional)
export AUTH_TOKEN='your_twitter_auth_token'
export CT0='your_twitter_ct0_token'
The skill requires Python 3 and Node.js to run its bundled research scripts.
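Before a first run, it can help to verify that the key and binaries above are actually present. The following is a minimal local sanity check, not part of the skill itself; the variable and binary names mirror the ones documented above.

```python
import os
import shutil

def check_prerequisites():
    """Report missing env vars and binaries needed by last30days."""
    required_env = ["SCRAPECREATORS_API_KEY"]
    optional_env = ["BRAVE_API_KEY", "OPENROUTER_API_KEY", "AUTH_TOKEN", "CT0"]
    required_bins = ["python3", "node"]

    missing_env = [v for v in required_env if not os.environ.get(v)]
    absent_optional = [v for v in optional_env if not os.environ.get(v)]
    missing_bins = [b for b in required_bins if shutil.which(b) is None]
    return {
        "missing_env": missing_env,
        "absent_optional_env": absent_optional,
        "missing_bins": missing_bins,
        "ok": not missing_env and not missing_bins,
    }
```

Running `check_prerequisites()` before invoking the skill gives a quick yes/no on whether the environment is ready, and which optional keys are absent.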
last30days Data Architecture & Taxonomy
The skill maintains a structured data hierarchy so that research is persistent and accessible. All research sessions are saved locally for future expert-mode interactions.
| Data type | Storage location | Format |
|---|---|---|
| Research briefings | ~/Documents/Last30Days/ | Markdown (.md) |
| Engagement metrics | Synthesized internally | JSON / plain text |
| Source metadata | SQLite database | Relational (watchlist schema) |
| Video data | Transcript snippets | Plain text |
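The briefing row of the table can be sketched in a few lines: one Markdown file per session under `~/Documents/Last30Days/`. The filename scheme below (date plus slug) is an assumption for illustration, not the skill's documented naming convention.

```python
import datetime
import pathlib

def briefing_path(topic: str, base: str = "~/Documents/Last30Days") -> pathlib.Path:
    """Hypothetical path for a saved briefing: <date>-<topic-slug>.md."""
    stamp = datetime.date.today().isoformat()
    slug = "-".join(topic.lower().split())
    return pathlib.Path(base).expanduser() / f"{stamp}-{slug}.md"

def save_briefing(topic: str, body: str) -> pathlib.Path:
    """Write a Markdown briefing, creating the folder if needed."""
    path = briefing_path(topic)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"# {topic}\n\n{body}\n", encoding="utf-8")
    return path
```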
---
name: last30days
version: "2.9.2"
description: "Research a topic from the last 30 days. Also triggered by 'last30'. Sources: Reddit, X, YouTube, TikTok, Instagram, Hacker News, Polymarket, web. Become an expert and write copy-paste-ready prompts."
argument-hint: 'last30 AI video tools, last30 best project management tools'
allowed-tools: Bash, Read, Write, AskUserQuestion, WebSearch
homepage: https://github.com/mvanhorn/last30days-skill
repository: https://github.com/mvanhorn/last30days-skill
author: mvanhorn
license: MIT
user-invocable: true
metadata:
  openclaw:
    emoji: "??"
    requires:
      env:
        - SCRAPECREATORS_API_KEY
      optionalEnv:
        - OPENAI_API_KEY
        - XAI_API_KEY
        - OPENROUTER_API_KEY
        - PARALLEL_API_KEY
        - BRAVE_API_KEY
        - APIFY_API_TOKEN
        - AUTH_TOKEN
        - CT0
      bins:
        - node
        - python3
    primaryEnv: SCRAPECREATORS_API_KEY
    files:
      - "scripts/*"
    homepage: https://github.com/mvanhorn/last30days-skill
    tags:
      - research
      - reddit
      - x
      - youtube
      - tiktok
      - instagram
      - hackernews
      - polymarket
      - trends
      - prompts
---
last30days v2.9.4: Research Any Topic from the Last 30 Days
Permissions overview: Reads public web/platform data and optionally saves research briefings to ~/Documents/Last30Days/. X/Twitter search uses optional user-provided tokens (AUTH_TOKEN/CT0 env vars) — no browser session access. All credential usage and data writes are documented in the Security & Permissions section.
Research ANY topic across Reddit, X, YouTube, TikTok, Hacker News, Polymarket, and the web. Surface what people are actually discussing, recommending, betting on, and debating right now.
CRITICAL: Parse User Intent
Before doing anything, parse the user's input for:
- TOPIC: What they want to learn about (e.g., "web app mockups", "Claude Code skills", "image generation")
- TARGET TOOL (if specified): Where they'll use the prompts (e.g., "Nano Banana Pro", "ChatGPT", "Midjourney")
- QUERY TYPE: What kind of research they want:
- PROMPTING - "X prompts", "prompting for X", "X best practices" → User wants to learn techniques and get copy-paste prompts
- RECOMMENDATIONS - "best X", "top X", "what X should I use", "recommended X" → User wants a LIST of specific things
- NEWS - "what's happening with X", "X news", "latest on X" → User wants current events/updates
- GENERAL - anything else → User wants broad understanding of the topic
Common patterns:
- [topic] for [tool] → "web mockups for Nano Banana Pro" → TOOL IS SPECIFIED
- [topic] prompts for [tool] → "UI design prompts for Midjourney" → TOOL IS SPECIFIED
- Just [topic] → "iOS design mockups" → TOOL NOT SPECIFIED, that's OK
- "best [topic]" or "top [topic]" → QUERY_TYPE = RECOMMENDATIONS
- "what are the best [topic]" → QUERY_TYPE = RECOMMENDATIONS
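The QUERY_TYPE decision above can be approximated with simple keyword rules. This sketch is illustrative only; in practice the LLM parses intent itself rather than running fixed regexes, and the keyword lists here are assumptions drawn from the examples in this section.

```python
import re

def classify_query(text: str) -> str:
    """Rough QUERY_TYPE classifier mirroring the patterns above."""
    t = text.lower()
    # "X best practices" is PROMPTING, so check prompting cues first,
    # before the bare "best" keyword triggers RECOMMENDATIONS.
    if re.search(r"\bprompt(s|ing)?\b", t) or "best practices" in t:
        return "PROMPTING"
    if re.search(r"\b(best|top|recommended)\b", t) or "should i use" in t:
        return "RECOMMENDATIONS"
    if re.search(r"\bnews\b|\blatest\b|what's happening", t):
        return "NEWS"
    return "GENERAL"
```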
IMPORTANT: Do NOT ask about target tool before research.
- If tool is specified in the query, use it
- If tool is NOT specified, run research first, then ask AFTER showing results
Store these variables:
- TOPIC = [extracted topic]
- TARGET_TOOL = [extracted tool, or "unknown" if not specified]
- QUERY_TYPE = [PROMPTING | RECOMMENDATIONS | NEWS | GENERAL]
DISPLAY your parsing to the user. Before running any tools, output:
I'll research {TOPIC} across Reddit, X, TikTok, and the web to find what's been discussed in the last 30 days.
Parsed intent:
- TOPIC = {TOPIC}
- TARGET_TOOL = {TARGET_TOOL or "unknown"}
- QUERY_TYPE = {QUERY_TYPE}
Research typically takes 2-8 minutes (niche topics take longer). Starting now.
If TARGET_TOOL is known, mention it in the intro: "...to find {QUERY_TYPE}-style content for use in {TARGET_TOOL}."
This text MUST appear before you call any tools. It confirms to the user that you understood their request.
Step 0.5: Resolve X Handle (if topic could have an X account)
If TOPIC looks like it could have its own X/Twitter account - people, creators, brands, products, tools, companies, communities (e.g., "Dor Brothers", "Jason Calacanis", "Nano Banana Pro", "Seedance", "Midjourney"), do ONE quick WebSearch:
WebSearch("{TOPIC} X twitter handle site:x.com")
From the results, extract their X/Twitter handle. Look for:
- Verified profile URLs like x.com/{handle} or twitter.com/{handle}
- Mentions like "@handle" in bios, articles, or social profiles
- "Follow @handle on X" patterns
Verify the account is real, not a parody/fan account. Check for:
- Verified/blue checkmark in the search results
- Official website linking to the X account
- Consistent naming (e.g., @thedorbrothers for "The Dor Brothers", not @DorBrosFan)
- If results only show fan/parody/news accounts (not the entity's own account), skip - the entity may not have an X presence
If you find a clear, verified handle, pass it as --x-handle={handle} (without @). This searches that account's posts directly - finding content they posted that doesn't mention their own name.
Skip this step if:
- TOPIC is clearly a generic concept, not an entity (e.g., "best rap songs 2026", "how to use Docker", "AI ethics debate")
- TOPIC already contains @ (user provided the handle directly)
- Using --quick depth
- WebSearch shows no official X account exists for this entity
Store: RESOLVED_HANDLE = {handle or empty}
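Extracting candidate handles from WebSearch result text follows the URL and "@handle" patterns listed above. The sketch below only pulls candidates; the parody/verification check still requires judgment. The stop-word list is an invented assumption to filter obvious non-handle path segments.

```python
import re

# Matches x.com/{handle}, twitter.com/{handle}, or a bare @handle.
HANDLE_RE = re.compile(r"(?:(?:x|twitter)\.com/|@)([A-Za-z0-9_]{1,15})\b")

def candidate_handles(search_text: str) -> list[str]:
    """Return deduplicated handle candidates found in search-result text."""
    seen, out = set(), []
    for h in HANDLE_RE.findall(search_text):
        key = h.lower()
        # Skip common URL path segments that are not real handles.
        if key not in seen and key not in {"i", "home", "search", "intent"}:
            seen.add(key)
            out.append(h)
    return out
```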
Agent Mode (--agent flag)
If --agent appears in ARGUMENTS (e.g., /last30days plaud granola --agent):
- Skip the intro display block ("I'll research X across Reddit...")
- Skip any AskUserQuestion calls - use TARGET_TOOL = "unknown" if not specified
- Run the research script and WebSearch exactly as normal
- Skip the "WAIT FOR USER RESPONSE" pause
- Skip the follow-up invitation ("I'm now an expert on X...")
- Output the complete research report and stop - do not wait for further input
Agent mode saves raw research data to ~/Documents/Last30Days/ automatically via --save-dir (handled by the script, no extra tool calls).
Agent mode report format:
## Research Report: {TOPIC}
Generated: {date} | Sources: Reddit, X, YouTube, TikTok, HN, Polymarket, Web
### Key Findings
[3-5 bullet points, highest-signal insights with citations]
### What I learned
{The full "What I learned" synthesis from normal output}
### Stats
{The standard stats block}
Research Execution
Step 1: Run the research script (FOREGROUND — do NOT background this)
CRITICAL: Run this command in the FOREGROUND with a 5-minute timeout. Do NOT use run_in_background. The full output contains Reddit, X, AND YouTube data that you need to read completely.
IMPORTANT: The script handles API key/Codex auth detection automatically. Run it and check the output to determine mode.
```bash
# Find skill root — works in repo checkout, Claude Code, or Codex install
for dir in \
  "." \
  "${CLAUDE_PLUGIN_ROOT:-}" \
  "$HOME/.claude/skills/last30days" \
  "$HOME/.agents/skills/last30days" \
  "$HOME/.codex/skills/last30days"; do
  [ -n "$dir" ] && [ -f "$dir/scripts/last30days.py" ] && SKILL_ROOT="$dir" && break
done
if [ -z "${SKILL_ROOT:-}" ]; then
  echo "ERROR: Could not find scripts/last30days.py" >&2
  exit 1
fi
python3 "${SKILL_ROOT}/scripts/last30days.py" "$ARGUMENTS" --emit=compact --no-native-web --save-dir=~/Documents/Last30Days  # Add --x-handle=HANDLE if RESOLVED_HANDLE is set
```
Use a timeout of 300000 (5 minutes) on the Bash call. The script typically takes 1-3 minutes.
The script will automatically:
- Detect available API keys
- Run Reddit/X/YouTube/TikTok/Instagram/Hacker News/Polymarket searches
- Output ALL results including YouTube transcripts, TikTok captions, Instagram captions, HN comments, and prediction market odds
Read the ENTIRE output. It contains EIGHT data sections in this order: Reddit items, X items, YouTube items, TikTok items, Instagram Reels items, Hacker News items, Polymarket items, and WebSearch items. If you miss sections, you will produce incomplete stats.
YouTube items in the output look like: **{video_id}** (score:N) {channel_name} [N views, N likes] followed by a title, URL, and optional transcript snippet. Count them and include them in your synthesis and stats block.
TikTok items in the output look like: **{TK_id}** (score:N) @{creator} [N views, N likes] followed by a caption, URL, hashtags, and optional caption snippet. Count them and include them in your synthesis and stats block.
Instagram Reels items in the output look like: **{IG_id}** (score:N) @{creator} (date) [N views, N likes] followed by caption text, URL, and optional transcript. Count them and include them in your synthesis and stats block. Instagram provides unique creator/influencer perspective — weight it alongside TikTok.
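The item-line shapes described above share a common skeleton: a bolded ID, a score, and a `[N views, N likes]` bracket. A small parser like the following can count items and sum their view/like totals; the exact layout is taken from this document's description and may drift from the real script's output.

```python
import re

# Matches e.g. "**abc123** (score:42) @creator [12,000 views, 800 likes]"
ITEM_RE = re.compile(
    r"\*\*(?P<id>\S+)\*\*\s+\(score:(?P<score>\d+)\).*?"
    r"\[(?P<views>[\d,]+) views, (?P<likes>[\d,]+) likes\]"
)

def tally_items(lines):
    """Count video items and sum their views and likes."""
    count = views = likes = 0
    for line in lines:
        m = ITEM_RE.search(line)
        if m:
            count += 1
            views += int(m["views"].replace(",", ""))
            likes += int(m["likes"].replace(",", ""))
    return {"items": count, "views": views, "likes": likes}
```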
STEP 2: DO WEBSEARCH AFTER SCRIPT COMPLETES
After the script finishes, do WebSearch to supplement with blogs, tutorials, and news.
For ALL modes, do WebSearch to supplement (or provide all data in web-only mode).
Choose search queries based on QUERY_TYPE:
If RECOMMENDATIONS ("best X", "top X", "what X should I use"):
- Search for: best {TOPIC} recommendations
- Search for: {TOPIC} list examples
- Search for: most popular {TOPIC}
- Goal: Find SPECIFIC NAMES of things, not generic advice
If NEWS ("what's happening with X", "X news"):
- Search for: {TOPIC} news 2026
- Search for: {TOPIC} announcement update
- Goal: Find current events and recent developments
If PROMPTING ("X prompts", "prompting for X"):
- Search for: {TOPIC} prompts examples 2026
- Search for: {TOPIC} techniques tips
- Goal: Find prompting techniques and examples to create copy-paste prompts
If GENERAL (default):
- Search for: {TOPIC} 2026
- Search for: {TOPIC} discussion
- Goal: Find what people are actually saying
For ALL query types:
- USE THE USER'S EXACT TERMINOLOGY - don't substitute or add tech names based on your knowledge
- EXCLUDE reddit.com, x.com, twitter.com (covered by script)
- INCLUDE: blogs, tutorials, docs, news, GitHub repos
- DO NOT output a separate "Sources:" block — instead, include the top 3-5 web source names as inline links on the ?? Web: stats line (see stats format below). The WebSearch tool requires citation; satisfy it there, not as a trailing section.
Options (passed through from user's command):
- --days=N → Look back N days instead of 30 (e.g., --days=7 for weekly roundup)
- --quick → Faster, fewer sources (8-12 each)
- (default) → Balanced (20-30 each)
- --deep → Comprehensive (50-70 Reddit, 40-60 X)
Judge Agent: Synthesize All Sources
After all searches complete, internally synthesize (don't display stats yet):
The Judge Agent must:
- Weight Reddit/X sources HIGHER (they have engagement signals: upvotes, likes)
- Weight YouTube sources HIGH (they have views, likes, and transcript content)
- Weight TikTok sources HIGH (they have views, likes, and caption content — viral signal)
- Weight WebSearch sources LOWER (no engagement data)
- For Reddit: Pay special attention to top comments — they often contain the wittiest, most insightful, or funniest take. When a top comment has high upvotes (shown as ?? Top comment (N upvotes)), quote it directly in your synthesis. Reddit's value is in the comments.
- Identify patterns that appear across ALL sources (strongest signals)
- Note any contradictions between sources
- Extract the top 3-5 actionable insights
- Cross-platform signals are the strongest evidence. When items have [also on: Reddit, HN] or similar tags, it means the same story appears across multiple platforms. Lead with these cross-platform findings - they're the most important signals in the research.
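The weighting rules above can be illustrated as a toy scoring function: engagement-bearing social sources outrank web pages, and cross-platform items get a boost. The numeric weights and the boost factor below are invented for illustration; the skill describes the ordering qualitatively, not these exact numbers.

```python
# Illustrative platform weights: social-with-engagement > web.
PLATFORM_WEIGHT = {
    "reddit": 3.0, "x": 3.0,
    "youtube": 2.5, "tiktok": 2.5, "instagram": 2.5,
    "hn": 2.0, "polymarket": 2.0,
    "web": 1.0,
}

def judge_score(item: dict) -> float:
    """Score one research item: platform weight x engagement x cross-platform boost."""
    base = PLATFORM_WEIGHT.get(item["platform"], 1.0)
    engagement = item.get("engagement", 0)       # likes/upvotes/views proxy
    cross = 1.5 if item.get("also_on") else 1.0  # [also on: ...] boost
    return base * (1 + engagement) * cross
```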
Prediction Markets (Polymarket)
CRITICAL: When Polymarket returns relevant markets, prediction market odds are among the highest-signal data points in your research. Real money on outcomes cuts through opinion. Treat them as strong evidence, not an afterthought.
How to interpret and synthesize Polymarket data:
- Prefer structural/long-term markets over near-term deadlines. Championship odds > regular season title. Regime change > near-term strike deadline. IPO/major milestone > incremental update. Presidency > individual state primary. When multiple markets exist, the bigger question is more interesting to the user.
- When the topic is an outcome in a multi-outcome market, call out that specific outcome's odds and movement. Don't just say "Polymarket has a #1 seed market" - say "Arizona has a 28% chance of being the #1 overall seed, up 10% this month." The user cares about THEIR topic's position in the market.
- Weave odds into the narrative as supporting evidence. Don't isolate Polymarket data in its own paragraph. Instead: "Final Four buzz is building - Polymarket gives Arizona a 12% chance to win the championship (up 3% this week), and 28% to earn a #1 seed."
- Citation format: Always include specific odds AND movement. "Polymarket has Arizona at 28% for a #1 seed (up 10% this month)" - not just "per Polymarket."
- When multiple relevant markets exist, highlight 3-5 of the most interesting ones in your synthesis, ordered by importance (structural > near-term). Don't just pick the highest-volume one.
Domain examples of market importance ranking:
- Sports: Championship/tournament odds > conference title > regular season > weekly matchup
- Geopolitics: Regime change/structural outcomes > near-term strike deadlines > sanctions
- Tech/Business: IPO, major product launch, company milestones > incremental updates
- Elections: Presidency > primary > individual state
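The "structural > near-term" ordering above amounts to a two-level sort: importance bucket first, traded volume second. The category labels and rank numbers below are invented buckets for illustration; real Polymarket metadata would need to be mapped into them first.

```python
# Lower rank = more structural/important, per the domain examples above.
STRUCTURAL_RANK = {
    "championship": 0, "regime_change": 0, "ipo": 0, "presidency": 0,
    "conference_title": 1, "strike_deadline": 1, "product_launch": 1, "primary": 1,
    "regular_season": 2, "sanctions": 2, "incremental_update": 2, "state_race": 2,
    "weekly_matchup": 3,
}

def rank_markets(markets, top_n=5):
    """Order markets by structural importance, then by volume, keep top_n."""
    ordered = sorted(
        markets,
        key=lambda m: (STRUCTURAL_RANK.get(m["category"], 9), -m.get("volume", 0)),
    )
    return ordered[:top_n]
```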
Do NOT display stats here - they come at the end, right before the invitation.
FIRST: Internalize the Research
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge.
Read the research output carefully. Pay attention to:
- Exact product/tool names mentioned (e.g., if research mentions "ClawdBot" or "@clawdbot", that's a DIFFERENT product than "Claude Code" - don't conflate them)
- Specific quotes and insights from the sources - use THESE, not generic knowledge
- What the sources actually say, not what you assume the topic is about
ANTI-PATTERN TO AVOID: If user asks about "clawdbot skills" and research returns ClawdBot content (self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
If QUERY_TYPE = RECOMMENDATIONS
CRITICAL: Extract SPECIFIC NAMES, not generic patterns.
When user asks "best X" or "top X", they want a LIST of specific things:
- Scan research for specific product names, tool names, project names, skill names, etc.
- Count how many times each is mentioned
- Note which sources recommend each (Reddit thread, X post, blog)
- List them by popularity/mention count
BAD synthesis for "best Claude Code skills":
"Skills are powerful. Keep them under 500 lines. Use progressive disclosure."
GOOD synthesis for "best Claude Code skills":
"Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X."
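The mention-counting step behind the GOOD example can be sketched with a simple counter. In the real flow the candidate names come out of the research output; here they are passed in directly, and the example strings are hypothetical.

```python
from collections import Counter

def top_mentions(candidates, documents, n=5):
    """Count how often each candidate name appears across documents."""
    counts = Counter()
    for doc in documents:
        low = doc.lower()
        for name in candidates:
            counts[name] += low.count(name.lower())
    return counts.most_common(n)
```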
For all QUERY_TYPEs
Identify from the ACTUAL RESEARCH OUTPUT:
- PROMPT FORMAT - Does research recommend JSON, structured params, natural language, keywords?
- The top 3-5 patterns/techniques that appeared across multiple sources
- Specific keywords, structures, or approaches mentioned BY THE SOURCES
- Common pitfalls mentioned BY THE SOURCES
THEN: Show Summary + Invite Vision
Display in this EXACT sequence:
FIRST - What I learned (based on QUERY_TYPE):
If RECOMMENDATIONS - Show specific things mentioned with sources:
?? Most mentioned:
[Tool Name] - {n}x mentions
Use Case: [what it does]
Sources: @handle1, @handle2, r/sub, blog.com
[Tool Name] - {n}x mentions
Use Case: [what it does]
Sources: @handle3, r/sub2, Complex
Notable mentions: [other specific things with 1-2 mentions]
CRITICAL for RECOMMENDATIONS:
- Each item MUST have a "Sources:" line with actual @handles from X posts (e.g., @LONGLIVE47, @ByDobson)
- Include subreddit names (r/hiphopheads) and web sources (Complex, Variety)
- Parse @handles from research output and include the highest-engagement ones
- Format naturally - tables work well for wide terminals, stacked cards for narrow
If PROMPTING/NEWS/GENERAL - Show synthesis and patterns:
CITATION RULE: Cite sources sparingly to prove research is real.
- In the "What I learned" intro: cite 1-2 top sources total, not every sentence
- In KEY PATTERNS: cite 1 source per pattern, short format: "per @handle" or "per r/sub"
- Do NOT include engagement metrics in citations (likes, upvotes) - save those for stats box
- Do NOT chain multiple citations: "per @x, @y, @z" is too much. Pick the strongest one.
CITATION PRIORITY (most to least preferred):
- @handles from X — "per @handle" (these prove the tool's unique value)
- r/subreddits from Reddit — "per r/subreddit" (when citing Reddit, prefer quoting top comments over just the thread title)
- YouTube channels — "per [channel name] on YouTube" (transcript-backed insights)
- TikTok creators — "per @creator on TikTok" (viral/trending signal)
- Instagram creators — "per @creator on Instagram" (influencer/creator signal)
- HN discussions — "per HN" or "per hn/username" (developer community signal)
- Polymarket — "Polymarket has X at Y% (up/down Z%)" with specific odds and movement
- Web sources — ONLY when Reddit/X/YouTube/TikTok/Instagram/HN/Polymarket don't cover that specific fact
The tool's value is surfacing what PEOPLE are saying, not what journalists wrote. When both a web article and an X post cover the same fact, cite the X post.
URL FORMATTING: NEVER paste raw URLs anywhere in the output — not in synthesis, not in stats, not in sources.
- BAD: "per https://www.rollingstone.com/music/music-news/kanye-west-bully-1235506094/"
- GOOD: "per Rolling Stone"
- BAD stats line: ?? Web: 10 pages — https://later.com/blog/..., https://buffer.com/...
- GOOD stats line: ?? Web: 10 pages — Later, Buffer, CNN, SocialBee
Use the publication/site name, not the URL. The user doesn't need links — they need clean, readable text.
BAD: "His album is set for March 20 (per Rolling Stone; Billboard; Complex)." GOOD: "His album BULLY drops March 20 — fans on X are split on the tracklist, per @honest30bgfan_" GOOD: "Ye's apology got massive traction on r/hiphopheads" OK (web, only when Reddit/X don't have it): "The Hellwatt Festival runs July 4-18 at RCF Arena, per Billboard"
Lead with people, not publications. Start each topic with what Reddit/X users are saying/feeling, then add web context only if needed. The user came here for the conversation, not the press release.
What I learned:
**{Topic 1}** — [1-2 sentences about what people are saying, per @handle or r/sub]
**{Topic 2}** — [1-2 sentences, per @handle or r/sub]
**{Topic 3}** — [1-2 sentences, per @handle or r/sub]
KEY PATTERNS from the research:
1. [Pattern] — per @handle
2. [Pattern] — per r/sub
3. [Pattern] — per @handle
THEN - Stats (right before invitation):
CRITICAL: Calculate actual totals from the research output.
- Count posts/threads from each section
- Sum engagement: parse [N likes, N rt] from each X post, [N pts, N cmt] from Reddit
- Identify top voices: highest-engagement @handles from X, most active subreddits
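Summing those bracketed pairs is a mechanical step, sketched below. The bracket formats follow this document's description of the script output ("[N likes, N rt]" for X, "[N pts, N cmt]" for Reddit) and are assumptions to that extent.

```python
import re

X_RE = re.compile(r"\[([\d,]+) likes, ([\d,]+) rt\]")
REDDIT_RE = re.compile(r"\[([\d,]+) pts, ([\d,]+) cmt\]")

def _num(s: str) -> int:
    return int(s.replace(",", ""))

def sum_engagement(output: str) -> dict:
    """Total X likes/reposts and Reddit upvotes/comments from raw output."""
    likes = rts = pts = cmts = 0
    for a, b in X_RE.findall(output):
        likes += _num(a)
        rts += _num(b)
    for a, b in REDDIT_RE.findall(output):
        pts += _num(a)
        cmts += _num(b)
    return {"x_likes": likes, "x_reposts": rts,
            "reddit_upvotes": pts, "reddit_comments": cmts}
```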
Copy this EXACTLY, replacing only the {placeholders}:
---
? All agents reported back!
├─ ?? Reddit: {N} threads │ {N} upvotes │ {N} comments
├─ ?? X: {N} posts │ {N} likes │ {N} reposts
├─ ?? YouTube: {N} videos │ {N} views │ {N} with transcripts
├─ ?? TikTok: {N} videos │ {N} views │ {N} likes │ {N} with captions
├─ ?? Instagram: {N} reels │ {N} views │ {N} likes │ {N} with captions
├─ ?? HN: {N} stories │ {N} points │ {N} comments
├─ ?? Polymarket: {N} markets │ {short summary of up to 5 most relevant market odds, e.g. "Championship: 12%, #1 Seed: 28%, Big 12: 64%, vs Kansas: 71%"}
├─ ?? Web: {N} pages — Source Name, Source Name, Source Name
└─ ??? Top voices: @{handle1} ({N} likes), @{handle2} │ r/{sub1}, r/{sub2}
---
?? Web: line — how to extract site names from URLs: Strip the protocol, path, and www. — use the recognizable publication name:
- https://later.com/blog/instagram-reels-trends/ → Later
- https://socialbee.com/blog/instagram-trends/ → SocialBee
- https://buffer.com/resources/instagram-algorithms/ → Buffer
- https://www.cnn.com/2026/02/22/tech/... → CNN
- https://medium.com/the-ai-studio/... → Medium
- https://radicaldatascience.wordpress.com/... → Radical Data Science
List as comma-separated plain names: Later, SocialBee, Buffer, CNN, Medium
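The strip-protocol-path-and-www rule can be approximated with the standard library. This is a rough sketch: multi-word brands like "SocialBee" or "Radical Data Science" would still need a hand-maintained display-name map, which is omitted here. The heuristic of upper-casing short labels (CNN) is an assumption.

```python
from urllib.parse import urlparse

def site_name(url: str) -> str:
    """Reduce a URL to a readable publication name: host minus www/path."""
    host = urlparse(url).netloc.lower()
    host = host.removeprefix("www.")
    label = host.split(".")[0]
    # Short labels are usually acronyms (CNN, BBC); otherwise capitalize.
    return label.upper() if len(label) <= 3 else label.capitalize()
```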
?? WebSearch citation — ALREADY SATISFIED. DO NOT ADD A SOURCES SECTION. The WebSearch tool mandates source citation. That requirement is FULLY satisfied by the source names on the ?? Web: line above. Do NOT append a separate "Sources:" section at the end of your response. Do NOT list URLs anywhere. The ?? Web: line IS your citation. Nothing more is needed.
CRITICAL: Omit any source line that returned 0 results. Do NOT show "0 threads", "0 stories", "0 markets", or "(no results this cycle)". If a source found nothing, DELETE that line entirely - don't include it at all. NEVER use plain text dashes (-) or pipe (|). ALWAYS use ├─ └─ │ and the emoji.
SELF-CHECK before displaying: Re-read your "What I learned" section. Does it match what the research ACTUALLY says? If you catch yourself projecting your own knowledge instead of the research, rewrite it.
LAST - Invitation (adapt to QUERY_TYPE):
CRITICAL: Every invitation MUST include 2-3 specific example suggestions based on what you ACTUALLY learned from the research. Don't be generic — show the user you absorbed the content by referencing real things from the results.
**If QUERY_TYPE = PROMPTING:**
I'm now an expert on {TOPIC} for {TARGET_TOOL}. What do you want to make? For example:
- [specific idea based on popular technique from research]
- [specific idea based on trending style/approach from research]
- [specific idea riffing on what people are actually creating]
Just describe your vision and I'll write a prompt you can paste straight into {TARGET_TOOL}.
**If QUERY_TYPE = RECOMMENDATIONS:**
I'm now an expert on {TOPIC}. Want me to go deeper? For example:
- [Compare specific item A vs item B from the results]
- [Explain why item C is trending right now]
- [Help you get started with item D]
**If QUERY_TYPE = NEWS:**
I'm now an expert on {TOPIC}. Some things you could ask:
- [Specific follow-up question about the biggest story]
- [Question about implications of a key development]
- [Question about what might happen next based on current trajectory]
**If QUERY_TYPE = GENERAL:**
I'm now an expert on {TOPIC}. Some things I can help with:
- [Specific question based on the most discussed aspect]
- [Specific creative/practical application of what you learned]
- [Deeper dive into a pattern or debate from the research]
**Example invitations (to show the quality bar):**
For `/last30days nano banana pro prompts for Gemini`:
> I'm now an expert on Nano Banana Pro for Gemini. What do you want to make? For example:
> - Photorealistic product shots with natural lighting (the most requested style right now)
> - Logo designs with embedded text (Gemini's new strength per the research)
> - Multi-reference style transfer from a mood board
>
> Just describe your vision and I'll write a prompt you can paste straight into Gemini.
For `/last30days kanye west` (GENERAL):
> I'm now an expert on Kanye West. Some things I can help with:
> - What's the real story behind the apology letter — genuine or PR move?
> - Break down the BULLY tracklist reactions and what fans are expecting
> - Compare how Reddit vs X are reacting to the Bianca narrative
For `/last30days war in Iran` (NEWS):
> I'm now an expert on the Iran situation. Some things you could ask:
> - What are the realistic escalation scenarios from here?
> - How is this playing differently in US vs international media?
> - What's the economic impact on oil markets so far?
---
## WAIT FOR USER'S RESPONSE
**STOP and wait** for the user to respond. Do NOT call any tools after displaying the invitation. The research script already saved raw data to `~/Documents/Last30Days/` via `--save-dir`.
---
## WHEN USER RESPONDS
**Read their response and match the intent:**
- If they ask a **QUESTION** about the topic → Answer from your research (no new searches, no prompt)
- If they ask to **GO DEEPER** on a subtopic → Elaborate using your research findings
- If they describe something they want to **CREATE** → Write ONE perfect prompt (see below)
- If they ask for a **PROMPT** explicitly → Write ONE perfect prompt (see below)
**Only write a prompt when the user wants one.** Don't force a prompt on someone who asked "what could happen next with Iran."
### Writing a Prompt
When the user wants a prompt, write a **single, highly-tailored prompt** using your research expertise.
### CRITICAL: Match the FORMAT the research recommends
**If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT.**
**ANTI-PATTERN**: Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
### Quality Checklist (run before delivering):
- [ ] **FORMAT MATCHES RESEARCH** - If research said JSON/structured/etc, prompt IS that format
- [ ] Directly addresses what the user said they want to create
- [ ] Uses specific patterns/keywords discovered in research
- [ ] Ready to paste with zero edits (or minimal [PLACEHOLDERS] clearly marked)
- [ ] Appropriate length and style for TARGET_TOOL
### Output Format:
Here's your prompt for {TARGET_TOOL}:
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS]
This uses [brief 1-line explanation of what research insight you applied].
---
## IF USER ASKS FOR MORE OPTIONS
Only if they ask for alternatives or more prompts, provide 2-3 variations. Don't dump a prompt pack unless requested.
---
## AFTER EACH PROMPT: Stay in Expert Mode
After delivering a prompt, offer to write more:
> Want another prompt? Just tell me what you're creating next.
---
## CONTEXT MEMORY
For the rest of this conversation, remember:
- **TOPIC**: {topic}
- **TARGET_TOOL**: {tool}
- **KEY PATTERNS**: {list the top 3-5 patterns you learned}
- **RESEARCH FINDINGS**: The key facts and insights from the research
**CRITICAL: After research is complete, treat yourself as an EXPERT on this topic.**
When the user asks follow-up questions:
- **DO NOT run new WebSearches** - you already have the research
- **Answer from what you learned** - cite the Reddit threads, X posts, and web sources
- **If they ask a question** - answer it from your research findings
- **If they ask for a prompt** - write one using your expertise
Only do new research if the user explicitly asks about a DIFFERENT topic.
---
## Output Summary Footer (After Each Prompt)
After delivering a prompt, end with:
?? Expert in: {TOPIC} for {TARGET_TOOL} ?? Based on: {n} Reddit threads ({sum} upvotes) + {n} X posts ({sum} likes) + {n} YouTube videos ({sum} views) + {n} TikTok videos ({sum} views) + {n} Instagram reels ({sum} views) + {n} HN stories ({sum} points) + {n} web pages
Want another prompt? Just tell me what you're creating next.
---
## Security & Permissions
**What this skill does:**
- Sends search queries to ScrapeCreators API (`api.scrapecreators.com`) for Reddit search, subreddit discovery, and comment enrichment (requires SCRAPECREATORS_API_KEY — same key as TikTok + Instagram)
- Legacy: Sends search queries to OpenAI's Responses API (`api.openai.com`) for Reddit discovery (fallback if no SCRAPECREATORS_API_KEY)
- Sends search queries to Twitter's GraphQL API (via optional user-provided AUTH_TOKEN/CT0 env vars — no browser session access) or xAI's API (`api.x.ai`) for X search
- Sends search queries to Algolia HN Search API (`hn.algolia.com`) for Hacker News story and comment discovery (free, no auth)
- Sends search queries to Polymarket Gamma API (`gamma-api.polymarket.com`) for prediction market discovery (free, no auth)
- Runs `yt-dlp` locally for YouTube search and transcript extraction (no API key, public data)
- Sends search queries to ScrapeCreators API (`api.scrapecreators.com`) for TikTok and Instagram search, transcript/caption extraction (same SCRAPECREATORS_API_KEY as Reddit, PAYG after 100 free credits)
- Optionally sends search queries to Brave Search API, Parallel AI API, or OpenRouter API for web search
- Fetches public Reddit thread data from `reddit.com` for engagement metrics
- Stores research findings in local SQLite database (watchlist mode only)
- Saves research briefings as .md files to ~/Documents/Last30Days/
**What this skill does NOT do:**
- Does not post, like, or modify content on any platform
- Does not access your Reddit, X, or YouTube accounts
- Does not share API keys between providers (OpenAI key only goes to api.openai.com, etc.)
- Does not log, cache, or write API keys to output files
- Does not send data to any endpoint not listed above
- Hacker News and Polymarket sources are always available (no API key, no binary dependency)
- TikTok and Instagram sources require SCRAPECREATORS_API_KEY (same key covers both; 100 free credits, then PAYG)
- Can be invoked autonomously by agents via the Skill tool (runs inline, not forked); pass `--agent` for non-interactive report output
**Bundled scripts:** `scripts/last30days.py` (main research engine), `scripts/lib/` (search, enrichment, rendering modules), `scripts/lib/vendor/bird-search/` (vendored X search client, MIT licensed)
Review scripts before first use to verify behavior.