Media News Digest: A Hollywood & Entertainment Industry AI Agent - Openclaw Skills

Author: Internet

2026-03-27

AI Tutorials

What is Media News Digest?

This skill provides a comprehensive intelligence layer for the media and entertainment sector, aggregating data from Hollywood trade publications such as Variety and Deadline, social platforms such as X (Twitter) and Reddit, and deep-web search. Built on Openclaw Skills, it processes news across nine key areas, including box office performance, production updates, and awards-season buzz, keeping industry professionals ahead of the curve.

The system uses a sophisticated parallel pipeline for speed and data freshness, making it an essential tool for tracking the pulse of global media. It focuses on high-signal sources and key opinion leaders, delivering a curated industry view rather than raw data alone.

Download: https://github.com/openclaw/skills/tree/main/skills/dinstein/media-news-digest

Installation & Download

1. ClawHub CLI

The fastest way to install the skill directly from the source.

npx clawhub@latest install media-news-digest

2. Manual Installation

Copy the skill folder to one of the following locations:

  • Global: ~/.openclaw/skills/
  • Workspace: /skills/

Priority: workspace > local > built-in
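The lookup order above can be sketched as a simple first-match search. This is an illustrative resolver, not OpenClaw's actual implementation; in particular, the built-in skills path below is an invented placeholder.

```python
import tempfile
from pathlib import Path
from typing import Optional

# Directories searched in priority order (workspace first).
# The built-in path is a hypothetical placeholder for illustration.
SEARCH_ORDER = [
    Path("skills"),                        # workspace: <project>/skills/
    Path.home() / ".openclaw" / "skills",  # global: ~/.openclaw/skills/
    Path("/usr/lib/openclaw/builtin"),     # built-in (illustrative path)
]

def resolve_skill(name: str) -> Optional[Path]:
    """Return the first matching skill directory, honoring priority."""
    for root in SEARCH_ORDER:
        candidate = root / name
        if candidate.is_dir():
            return candidate
    return None
```

With this order, a workspace copy of a skill always shadows a globally installed one of the same name.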

3. Prompt Installation

Copy this prompt into OpenClaw to install automatically.

Please install media-news-digest for me using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).

Media News Digest Use Cases

  • Monitor daily Hollywood trade headlines and industry deals.
  • Track global box office trends and opening-weekend performance.
  • Gather awards-season intelligence on the Oscars, Emmys, and international film festivals.
  • Summarize updates from streaming platforms such as Netflix, Disney+, and HBO.
  • Cover the Chinese film market and mainland production activity.

How Media News Digest Works

  1. The unified pipeline launches parallel fetches from 35 RSS feeds, 18 X (Twitter) influencers, and 11 Reddit communities.
  2. Web search tasks run through the Brave or Tavily API, using freshness filters to capture trending topics and breaking news.
  3. A deduplication engine removes redundant entries while a scoring algorithm prioritizes high-quality content across all sources.
  4. The pipeline merges the data into structured JSON and can optionally enrich top stories with full-text content.
  5. Reports are formatted with Discord or email templates and can be exported as professionally styled PDFs.
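The fetch → merge → dedup → score flow can be sketched in miniature. The fetcher stand-ins, record fields, and URL normalization rule below are illustrative assumptions, not the skill's actual code (which lives in fetch-rss.py, fetch-twitter.py, fetch-reddit.py, fetch-web.py, and merge-sources.py).

```python
import concurrent.futures as cf
from urllib.parse import urlsplit

# Illustrative stand-ins for the four fetch steps; real fetchers hit the
# network and return many items.
def fetch_rss():     return [{"url": "https://variety.com/a", "source": "rss", "score": 0.9}]
def fetch_twitter(): return [{"url": "https://variety.com/a", "source": "x", "score": 0.8}]
def fetch_reddit():  return [{"url": "https://reddit.com/r/movies/1", "source": "reddit", "score": 0.6}]
def fetch_web():     return [{"url": "https://deadline.com/b", "source": "web", "score": 0.7}]

def run_pipeline():
    # 1) Launch all four fetchers in parallel.
    with cf.ThreadPoolExecutor(max_workers=4) as pool:
        batches = list(pool.map(lambda f: f(), [fetch_rss, fetch_twitter, fetch_reddit, fetch_web]))
    # 2-3) Merge and deduplicate by normalized URL, keeping the best-scored copy.
    best = {}
    for item in (i for batch in batches for i in batch):
        key = urlsplit(item["url"])._replace(query="", fragment="").geturl()
        if key not in best or item["score"] > best[key]["score"]:
            best[key] = item
    # 4) Rank by score so report generation sees the strongest items first.
    return sorted(best.values(), key=lambda i: i["score"], reverse=True)
```

Here the Variety story fetched from both RSS and X collapses to a single entry, with the higher-quality copy surviving.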

Media News Digest Configuration Guide

# Enter the skill directory
cd media-news-digest

# Run the unified pipeline to generate a fresh digest
python3 scripts/run-pipeline.py \
  --defaults config/defaults \
  --config workspace/config \
  --hours 48 \
  --freshness pd \
  --archive-dir workspace/archive/media-news-digest/ \
  --output /tmp/md-merged.json --verbose

Media News Digest Data Architecture & Taxonomy

  • Data sources: 65 total (35 RSS, 18 Twitter, 11 Reddit, 9 web topics)
  • Content topics: production, deals, releases, box office, streaming, awards, festivals, reviews
  • Metadata: quality scores, URL dedup markers, and source health tracking
  • Storage: local archive directory for historical digest JSON files and generated PDFs
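A merged article record combining these pieces of metadata might look like the following. The field names are assumptions inferred from the taxonomy above, not the skill's documented schema.

```python
import json

# Hypothetical merged-article record; field names are illustrative
# assumptions, not the skill's actual JSON schema.
article = {
    "title": "Studio closes worldwide rights deal",
    "url": "https://deadline.com/example-deal",
    "source": "rss",
    "feed": "Deadline",
    "topic": "deals",                  # one of the nine topic sections
    "published": "2026-03-26T14:00:00Z",
    "quality_score": 0.87,             # used to rank items across sources
    "duplicate_of": None,              # URL-dedup marker
}
print(json.dumps(article, indent=2))
```

Records like this round-trip cleanly through JSON, which is what lets the pipeline archive digests and feed them to the report templates.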
name: media-news-digest
description: Generate media & entertainment industry news digests. Covers Hollywood trades (THR, Deadline, Variety), box office, streaming, awards season, film festivals, and production news. Four-source data collection from RSS feeds, Twitter/X KOLs, Reddit, and web search. Pipeline-based scripts with retry mechanisms and deduplication. Supports Discord and email output with PDF attachments.
version: "2.1.1"
homepage: https://github.com/draco-agent/media-news-digest
source: https://github.com/draco-agent/media-news-digest
metadata:
  openclaw:
    requires:
      bins: ["python3"]
    optionalBins: ["mail", "msmtp"]
    credentialAccess: >
      This skill does NOT read, store, or manage any platform credentials itself.
      Email delivery uses send-email.py with system mail (msmtp). Twitter and web search
      API keys are passed via environment variables and used only for outbound API calls.
      No credentials are written to disk by this skill.
env:
  - name: X_BEARER_TOKEN
    required: false
    description: Twitter/X API v2 bearer token for KOL monitoring (official backend)
  - name: TWITTERAPI_IO_KEY
    required: false
    description: twitterapi.io API key (alternative Twitter backend)
  - name: TWITTER_API_BACKEND
    required: false
    description: "Twitter backend selection: official, twitterapiio, or auto (default: auto)"
  - name: BRAVE_API_KEY
    required: false
    description: Brave Search API key for web search (single key)
  - name: BRAVE_API_KEYS
    required: false
    description: "Comma-separated Brave API keys for multi-key rotation (preferred over BRAVE_API_KEY)"
  - name: TAVILY_API_KEY
    required: false
    description: Tavily Search API key (alternative web search backend)
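The BRAVE_API_KEYS-over-BRAVE_API_KEY precedence described above could be implemented roughly as follows. The round-robin rotation strategy is an assumption for illustration; the skill's actual rotation logic is not documented here.

```python
import itertools
import os

def load_brave_keys():
    """Prefer BRAVE_API_KEYS (comma-separated) over BRAVE_API_KEY.
    Round-robin rotation via itertools.cycle is an illustrative choice,
    not necessarily what fetch-web.py does."""
    multi = os.environ.get("BRAVE_API_KEYS", "")
    keys = [k.strip() for k in multi.split(",") if k.strip()]
    if not keys and os.environ.get("BRAVE_API_KEY"):
        keys = [os.environ["BRAVE_API_KEY"]]
    return itertools.cycle(keys) if keys else None
```

Usage: call `next(rotation)` before each outbound Brave Search request, so per-key rate limits are spread across the pool.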

Media News Digest

Automated media & entertainment industry news digest system. Covers Hollywood trades, box office, streaming platforms, awards season, film festivals, production news, and industry deals.

Quick Start

  1. Generate Digest (unified pipeline — runs all 4 sources in parallel):

    python3 scripts/run-pipeline.py \
      --defaults /config/defaults \
      --config /config \
      --hours 48 --freshness pd \
      --archive-dir /archive/media-news-digest/ \
      --output /tmp/md-merged.json --verbose --force
    
  2. Use Templates: Apply Discord or email templates to merged output

Data Sources (65 total, 64 enabled)

  • RSS Feeds (36, 35 enabled): THR, Deadline, Variety, IndieWire, The Wrap, Collider, Vulture, Awards Daily, Gold Derby, Screen Rant, Empire, The Playlist, /Film, Entertainment Weekly, Roger Ebert, CinemaBlend, Den of Geek, The Direct, MovieWeb, CBR, What's on Netflix, Decider, Anime News Network, and more
  • Twitter/X KOLs (18): @THR, @DEADLINE, @Variety, @FilmUpdates, @DiscussingFilm, @BoxOfficeMojo, @MattBelloni, @Borys_Kit, @TheAcademy, @letterboxd, @A24, and more
  • Reddit (11): r/movies, r/boxoffice, r/television, r/Oscars, r/TrueFilm, r/entertainment, r/netflix, r/marvelstudios, r/DC_Cinematic, r/anime, r/flicks
  • Web Search (9 topics): Brave Search / Tavily with freshness filters

Topics (9 sections)

  • China / 中国影视 — China mainland box office, Chinese films, Chinese streaming
  • Production / 制作动态 — New projects, casting, filming updates
  • Deals & Business / 行业交易 — M&A, rights, talent deals
  • Upcoming Releases / 北美近期上映 — Theater openings, release dates, trailers
  • Box Office / 票房 — NA/global box office, opening weekends
  • Streaming / 流媒体 — Netflix, Disney+, Apple TV+, HBO, viewership
  • Awards / 颁奖季 — Oscars, Golden Globes, Emmys, BAFTAs
  • Film Festivals / 电影节 — Cannes, Venice, TIFF, Sundance, Berlin
  • Reviews & Buzz / 影评口碑 — Critical reception, RT/Metacritic scores

Scripts Pipeline

Unified Pipeline

python3 scripts/run-pipeline.py \
  --defaults config/defaults --config workspace/config \
  --hours 48 --freshness pd \
  --archive-dir workspace/archive/media-news-digest/ \
  --output /tmp/md-merged.json --verbose --force

  • Features: Runs all 4 fetch steps in parallel, then merges + deduplicates + scores
  • Output: Final merged JSON ready for report generation (~30s total)
  • Flags: --skip rss,twitter to skip steps, --enrich for full-text enrichment

Individual Scripts

  • fetch-rss.py — Parallel RSS fetcher (10 workers, 30s timeout, caching)
  • fetch-twitter.py — Dual backend: official X API v2 + twitterapi.io (auto fallback, 3-worker concurrency)
  • fetch-web.py — Web search via Brave (multi-key rotation) or Tavily
  • fetch-reddit.py — Reddit public JSON API (4 workers, no auth)
  • merge-sources.py — Quality scoring, URL dedup, multi-source merging
  • summarize-merged.py — Structured overview sorted by quality_score
  • enrich-articles.py — Full-text enrichment for top articles
  • generate-pdf.py — PDF generation with Chinese typography + emoji
  • send-email.py — MIME email with HTML body + PDF attachment
  • sanitize-html.py — XSS-safe markdown to HTML conversion
  • validate-config.py — Configuration validator
  • source-health.py — Source health tracking
  • config_loader.py — Config overlay loader (defaults + user overrides)
  • test-pipeline.sh — Pipeline testing with --only/--skip/--twitter-backend filters
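The frontmatter above mentions that the fetch scripts use retry mechanisms. A generic sketch of retry with exponential backoff is shown below; the attempt count, backoff factor, and exception handling are illustrative assumptions, not the scripts' actual parameters.

```python
import time

def with_retry(fn, attempts=3, backoff=1.0):
    """Call fn(), retrying transient OSErrors with exponential backoff.
    Illustrative sketch of a retry mechanism; not the skill's actual code."""
    for attempt in range(attempts):
        try:
            return fn()
        except OSError:
            if attempt == attempts - 1:
                raise  # exhausted: surface the last error
            time.sleep(backoff * (2 ** attempt))  # 1x, 2x, 4x, ...
```

A fetcher would wrap each network call, e.g. `with_retry(lambda: fetch_feed(url))`, so a single flaky feed does not fail the whole parallel run.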

Cron Integration

Reference references/digest-prompt.md in cron prompts.

Daily Digest

MODE = daily, FRESHNESS = pd, RSS_HOURS = 48

Weekly Digest

MODE = weekly, FRESHNESS = pw, RSS_HOURS = 168

Dependencies

All scripts run on the Python 3.8+ standard library alone; feedparser is optional but recommended.
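An optional dependency like feedparser is typically handled with a guarded import and a stdlib fallback. The sketch below shows the pattern; the fallback extraction is deliberately simplified (plain RSS 2.0 `<item><title>` only) and is not fetch-rss.py's actual logic.

```python
# Guarded import: use feedparser when available, otherwise fall back to
# the stdlib XML parser, preserving the "standard library only" guarantee.
try:
    import feedparser
except ImportError:
    feedparser = None

import xml.etree.ElementTree as ET

def parse_feed_titles(xml_text: str):
    """Return entry titles from a feed; simplified illustrative fallback."""
    if feedparser is not None:
        return [e.get("title", "") for e in feedparser.parse(xml_text).entries]
    root = ET.fromstring(xml_text)
    # Plain RSS 2.0 only: first <title> is the channel's, so skip it.
    return [t.text or "" for t in root.iter("title")][1:]
```

This keeps a bare-bones install working while letting feedparser handle the messier real-world feed formats (Atom, malformed XML) when it is installed.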