Redbook: Xiaohongshu Marketing and Command-Line Automation - Openclaw Skills

Author: 互联网

2026-04-05

AI Tutorials

What Is the Redbook Xiaohongshu CLI?

Redbook is a professional-grade CLI utility that lets developers and marketers interact with Xiaohongshu programmatically. As a dedicated tool in the Openclaw Skills collection, it provides deep access to platform data, letting users extract insights normally hidden behind the mobile interface. By reusing an existing browser session, it enables sophisticated research and automation workflows without complex API integration.

The skill is essential for anyone looking to industrialize a content strategy. From computing collect-to-like ratios to identifying high-demand content gaps, Redbook turns raw social data into actionable marketing intelligence. It is especially effective for users building automated content pipelines or competitor-research agents with Openclaw Skills.

Download: https://github.com/openclaw/skills/tree/main/skills/lucasygu/redbook

Installation & Download

1. ClawHub CLI

The fastest way to install the skill directly from the source.

npx clawhub@latest install redbook

2. Manual Installation

Copy the skill folder to one of the following locations:

  • Global: ~/.openclaw/skills/
  • Workspace: /skills/

Priority: workspace > local > built-in

3. Prompt-Based Installation

Copy this prompt into OpenClaw to install automatically:

Please install redbook using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).

Redbook Xiaohongshu CLI Use Cases

  • Conduct deep keyword research to identify high-ceiling niches and underserved topics.
  • Analyze competitor profiles to determine median engagement and the patterns behind viral successes.
  • Automate community management by finding and replying to questions in comment sections.
  • Extract structured templates from viral posts to guide AI-driven content generation.
  • Generate styled social media cards directly from Markdown files for rapid publishing.

How the Redbook Xiaohongshu CLI Works

  1. Authentication: The tool reads local browser cookies on macOS (Chrome, Safari, or Firefox) to authenticate requests as the logged-in user.
  2. Discovery: The user runs a search or fetches the feed to retrieve the fresh xsec_token required to access content.
  3. Analysis: Data is processed through specialized modules, such as the keyword engagement matrix or the opportunity scoring logic.
  4. Processing: For content creation, the tool renders Markdown into high-resolution PNG cards with a local headless browser.
  5. Execution: Actions such as posting comments or collecting notes include built-in safety delays and jitter to mimic human behavior.

Redbook Xiaohongshu CLI Configuration Guide

To install Redbook as one of your Openclaw Skills, run:

clawhub install redbook

Or install via npm:

npm install -g @lucasygu/redbook

Requirements: Node.js >= 22, macOS (for Keychain access), and an active session logged into xiaohongshu.com in Chrome.

Redbook Xiaohongshu CLI Data Schema & Taxonomy

Redbook provides structured JSON output for every command to support programmatic analysis. Key data structures include:

| Object | Description | Key fields |
|---|---|---|
| Note card | Single post data | id, title, user, interact_info (likes, collects, comments) |
| Creator profile | User metadata | nickname, desc, fans, vi_icon |
| Viral analysis | Engagement signals | viralScore, collectToLikeRatio, hookPatterns, themes |
| Opportunity matrix | Market research | demand, competition, tier (S/A/B/C) |
| Card rendering | Visual assets | width, height, dpr, style |
Skill manifest (frontmatter):

---
description: Search, read, analyze, and automate Xiaohongshu (小红书) content via CLI
allowed-tools: Bash, Read, Write, Glob, Grep
# OpenClaw / ClawHub metadata (clawhub install redbook)
name: redbook
version: 0.5.0
metadata:
  openclaw:
    requires:
      bins:
        - redbook
    install:
      - kind: node
        package: "@lucasygu/redbook"
        bins: [redbook]
    os: [macos]
    homepage: https://github.com/lucasygu/redbook
tags:
  - xiaohongshu
  - social-media
  - analytics
  - content-ops
---

Redbook — Xiaohongshu CLI

Use the redbook CLI to search notes, read content, analyze creators, automate engagement, and research topics on Xiaohongshu (小红书/RED).

OpenClaw users: Install via clawhub install redbook or npm install -g @lucasygu/redbook.

Usage

/redbook search "AI编程"         # Search notes
/redbook read <url>              # Read a note
/redbook user <userId>           # Creator profile
/redbook analyze <userId>        # Full creator analysis (profile + posts)

Quick Reference

| Intent | Command |
|---|---|
| Search notes | `redbook search "keyword" --json` |
| Read a note | `redbook read <url> --json` |
| Get comments | `redbook comments <url> --json --all` |
| Creator profile | `redbook user <userId> --json` |
| Creator's posts | `redbook user-posts <userId> --json` |
| Browse feed | `redbook feed --json` |
| Search hashtags | `redbook topics "keyword" --json` |
| Analyze viral note | `redbook analyze-viral <url> --json` |
| Extract content template | `redbook viral-template <url...> --json` |
| Post a comment | `redbook comment <url> --content "text"` |
| Reply to comment | `redbook reply <url> --comment-id <id> --content "text"` |
| Batch reply (preview) | `redbook batch-reply <url> --strategy questions --dry-run` |
| List favorites | `redbook favorites --json` or `redbook favorites <userId> --json` |
| Collect a note | `redbook collect <url>` |
| Remove from collection | `redbook uncollect <url>` |
| Render markdown to cards | `redbook render content.md --style xiaohongshu` |
| Check connection | `redbook whoami` |

Always add --json when parsing output programmatically. Without it, output is human-formatted text.


XHS Platform Signals

XHS is not Twitter or Instagram. These platform-specific engagement ratios reveal content type and audience behavior.

Collect/Like Ratio (collected_count / liked_count)

XHS's "collect" (收藏) is a save-for-later mechanic — users build personal reference libraries. This ratio is the strongest signal of content utility.

| Ratio | Classification | Meaning |
|---|---|---|
| >40% | 工具型 (Reference) | Tutorial, checklist, template — users bookmark for reuse |
| 20–40% | 认知型 (Insight) | Thought-provoking but not saved for later |
| <20% | 娱乐型 (Entertainment) | Consumed and forgotten — engagement is passive |

Comment/Like Ratio (comment_count / liked_count)

Measures how much a note triggers conversation.

| Ratio | Classification | Meaning |
|---|---|---|
| >15% | 讨论型 (Discussion) | Debate, sharing experiences, asking questions |
| 5–15% | 正常互动 (Normal) | Typical engagement pattern |
| <5% | 围观型 (Passive) | Users like but don't engage further |

Share/Like Ratio (share_count / liked_count)

Measures social currency — whether users share to signal identity or help others.

| Ratio | Meaning |
|---|---|
| >10% | 社交货币 — people share to signal taste, identity, or help friends |
| <10% | Content consumed individually, not forwarded |
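The three benchmark tables above can be combined into one classifier. The following is a minimal illustrative sketch, not part of the redbook CLI; it assumes the counts have already been parsed to plain integers.

```python
def classify_note(liked: int, collected: int, comments: int, shares: int) -> dict:
    """Classify a note against the XHS engagement-ratio benchmarks."""
    collect_ratio = collected / liked if liked else 0.0
    comment_ratio = comments / liked if liked else 0.0
    share_ratio = shares / liked if liked else 0.0

    if collect_ratio > 0.40:
        utility = "工具型 (Reference)"
    elif collect_ratio >= 0.20:
        utility = "认知型 (Insight)"
    else:
        utility = "娱乐型 (Entertainment)"

    if comment_ratio > 0.15:
        discussion = "讨论型 (Discussion)"
    elif comment_ratio >= 0.05:
        discussion = "正常互动 (Normal)"
    else:
        discussion = "围观型 (Passive)"

    shareability = "社交货币 (social currency)" if share_ratio > 0.10 else "individual consumption"
    return {"utility": utility, "discussion": discussion, "shareability": shareability}

# A tutorial-style note: high collect rate, normal comment rate
print(classify_note(liked=10_000, collected=4_500, comments=800, shares=300))
```

Exact boundary handling (e.g. a ratio of exactly 20%) is an assumption; the tables only give ranges.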

Search Sort Semantics

| Sort | What It Reveals |
|---|---|
| --sort popular | Proven ceiling — the best a keyword can do |
| --sort latest | Content velocity — how much is being posted now |
| --sort general | Algorithm-weighted blend (default) |

Content Form Dynamics

| Form | Tendency |
|---|---|
| 图文 (image-text, type: "normal") | Higher collect rate — users save reference content |
| 视频 (video, type: "video") | Higher like rate — easier to consume passively |

Analysis Modules

Each module is a composable building block. Combine them for different analysis depths.

Module A: Keyword Engagement Matrix

Answers: Which keywords have the highest engagement ceiling? Which are saturated vs. underserved?

Commands:

redbook search "keyword1" --sort popular --json
redbook search "keyword2" --sort popular --json
# Repeat for each keyword in your list

Fields to extract from each result's items[]:

  • items[].note_card.interact_info.liked_count — likes (may use Chinese numbers: "1.5万" = 15,000)
  • items[].note_card.interact_info.collected_count — collects
  • items[].note_card.interact_info.comment_count — comments
  • items[].note_card.user.nickname — author
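Because liked_count and similar fields may arrive as strings like "1.5万", parse them before averaging. A small illustrative helper (an assumption, not a CLI feature):

```python
def parse_xhs_count(value: str) -> int:
    """Parse XHS abbreviated counts: '1.5万' -> 15000, '1.2亿' -> 120000000."""
    units = {"万": 10_000, "亿": 100_000_000}
    value = str(value).strip()
    if value and value[-1] in units:
        return int(round(float(value[:-1]) * units[value[-1]]))
    return int(value)

print(parse_xhs_count("1.5万"))   # 15000
```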

How to interpret:

  • Top1 ceiling = items[0] likes — the best-performing note for this keyword. This is the proven demand signal.
  • Top10 average = mean likes across items[0..9] — how well an average top note does.
  • A high Top1 but low Top10 avg means one outlier dominates; hard to compete.
  • A high Top10 avg means consistent demand; easier to break in.

Output: Keyword × engagement table ranked by Top1 ceiling.

| Keyword | Top1 Likes | Top10 Avg | Top1 Collects | Collect/Like |
|---|---|---|---|---|
| keyword1 | 12,000 | 3,200 | 5,400 | 45% |
| keyword2 | 8,500 | 4,100 | 1,200 | 14% |
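A hedged sketch of the Top1-ceiling and Top10-average computation, assuming `items` follows the search --json shape described above with counts already converted to integers:

```python
from statistics import mean

def keyword_stats(items: list[dict]) -> dict:
    likes = [item["note_card"]["interact_info"]["liked_count"] for item in items]
    return {
        "top1_ceiling": likes[0],       # best-performing note: proven demand
        "top10_avg": mean(likes[:10]),  # typical top-note performance
    }

# Hypothetical search results, already sorted by popularity
items = [{"note_card": {"interact_info": {"liked_count": n}}}
         for n in (12_000, 5_000, 4_000, 3_000, 2_000)]
print(keyword_stats(items))
```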

Module B: Cross-Topic Heatmap

Answers: Which topic × scene intersections have demand? Where are the content gaps?

Commands:

# Combine base topic with scene/angle keywords
redbook search "base topic + scene1" --sort popular --json
redbook search "base topic + scene2" --sort popular --json
redbook search "base topic + scene3" --sort popular --json

Fields to extract: Same as Module A — Top1 liked_count for each combination.

How to interpret:

  • High Top1 = proven demand for this intersection
  • Zero or very low results = content gap (opportunity or no demand — check if the combination makes sense)
  • Compare across scenes to find which angles resonate most with the base topic

Output: Base × Scene heatmap.

```
             scene1    scene2    scene3    scene4
base topic   ████ 8K   ██ 2K     ████ 12K  ?? 200
```

Module C: Engagement Signal Analysis

Answers: What type of content is each keyword? Reference, insight, or entertainment?

Commands: Use search results from Module A, or for a single note:

redbook analyze-viral "<url>" --json

Fields to extract:

  • From search results: compute ratios from interact_info fields
  • From analyze-viral: use pre-computed engagement.collectToLikeRatio, engagement.commentToLikeRatio, engagement.shareToLikeRatio

How to interpret: Apply the ratio benchmarks from XHS Platform Signals above.

Output: Per-keyword or per-note classification.

| Keyword | Collect/Like | Comment/Like | Type |
|---|---|---|---|
| keyword1 | 45% | 8% | 工具型 + 正常互动 |
| keyword2 | 12% | 22% | 娱乐型 + 讨论型 |

Module D: Creator Discovery & Profiling

Answers: Who are the key creators in this niche? What are their strategies?

Commands:

# 1. Collect unique user_ids from search results across keywords
#    Extract from items[].note_card.user.user_id

# 2. For each creator:
redbook user "<userId>" --json
redbook user-posts "<userId>" --json

Fields to extract:

  • From user: interactions[] where type === "fans" → follower count
  • From user-posts: notes[].interact_info.liked_count for all posts → compute avg, median, max
  • From user-posts: notes[].display_title → content patterns, posting frequency

How to interpret:

  • Avg vs. Median likes: Large gap means viral outliers inflate the average. Median is the "true" baseline.
  • Max / Median ratio: >5× means they've had breakout hits. Study those notes specifically.
  • Post frequency: Count notes to estimate posting cadence. Prolific creators (>3/week) vs. quality-focused (<1/week).

Output: Creator comparison table.

| Creator | Followers | Avg Likes | Median | Max | Posts | Style |
|---|---|---|---|---|---|---|
| @creator1 | 12万 | 3,200 | 1,800 | 45,000 | 89 | Tutorial |
| @creator2 | 5.4万 | 8,100 | 6,500 | 22,000 | 34 | Story |
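The avg/median/max logic above can be sketched as follows. This is an illustrative helper, assuming the `likes` list was extracted from notes[].interact_info.liked_count and already parsed to integers:

```python
from statistics import mean, median

def creator_baseline(likes: list[int]) -> dict:
    med = median(likes)
    top = max(likes)
    return {
        "avg": mean(likes),
        "median": med,        # the "true" baseline
        "max": top,
        "breakout": top / med > 5 if med else False,  # >5x median = has had viral hits
    }

# One viral outlier (45K) inflates the average well above the median
print(creator_baseline([1_800, 2_000, 1_500, 45_000, 1_900]))
```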

Module E: Content Form Breakdown

Answers: Do image-text or video notes perform better for this topic?

Commands:

redbook search "keyword" --type image --sort popular --json
redbook search "keyword" --type video --sort popular --json

Fields to extract:

  • Compare Top1 and Top10 avg liked_count and collected_count between the two result sets
  • Note the type field: "normal" = image-text, "video" = video

Output: Form × engagement table.

| Form | Top1 Likes | Top10 Avg | Collect/Like |
|---|---|---|---|
| 图文 | 8,000 | 2,400 | 42% |
| 视频 | 15,000 | 5,100 | 18% |

Module F: Opportunity Scoring

Answers: Which keywords should I target? Where is the best effort-to-reward ratio?

Input: Keyword matrix from Module A.

Scoring logic:

  • Demand = Top1 likes ceiling (proven audience size)
  • Competition = density of high-engagement results (how many notes in Top10 have >1K likes)
  • Score = Demand × (1 / Competition density)

Tier thresholds (based on Top1 likes):

| Tier | Top1 Likes | Meaning |
|---|---|---|
| S | >100,000 (10万+) | Massive demand — hard to compete but huge upside |
| A | 20,000–100,000 | Strong demand — competitive but winnable |
| B | 5,000–20,000 | Moderate demand — good for growing accounts |
| C | <5,000 | Niche — low competition, low ceiling |

Output: Tiered keyword list.

| Tier | Keyword | Top1 | Competition | Opportunity |
|---|---|---|---|---|
| A | keyword1 | 45K | Medium (6/10 >1K) | High |
| B | keyword3 | 12K | Low (2/10 >1K) | Very High |
| S | keyword2 | 120K | High (10/10 >1K) | Medium |
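A minimal sketch of the tiering and scoring logic above. The exact boundary handling (e.g. whether exactly 20,000 likes is tier A or B) is an assumption, since the source only gives ranges:

```python
def tier(top1_likes: int) -> str:
    """Tier thresholds from the table above; boundaries go to the higher tier."""
    if top1_likes > 100_000:
        return "S"
    if top1_likes >= 20_000:
        return "A"
    if top1_likes >= 5_000:
        return "B"
    return "C"

def opportunity_score(top1_likes: int, top10_likes: list[int]) -> float:
    """Demand x (1 / competition density); density = Top10 notes above 1K likes."""
    density = sum(1 for likes in top10_likes if likes > 1_000)
    return top1_likes / max(density, 1)

print(tier(45_000), opportunity_score(45_000, [45_000, 8_000, 900, 700, 500]))
```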

Module G: Audience Inference

Answers: Who is the audience for this niche? What do they want?

Input: Engagement ratios from Module C + comment themes from analyze-viral + content patterns.

Fields to extract from analyze-viral JSON:

  • comments.themes[] — recurring phrases and keywords from comment section
  • comments.questionRate — % of comments that are questions (learning intent)
  • engagement.collectToLikeRatio — save behavior signals intent
  • hook.hookPatterns[] — what title patterns attract this audience

Inference rules:

  • High collect rate + high question rate → learning-oriented audience (students, professionals)
  • High comment rate + emotional themes → community-oriented audience (sharing experiences)
  • High share rate → aspiration-oriented audience (lifestyle, identity signaling)
  • Comment language patterns → age/education signals (formal = older, slang = younger)

Output: Audience persona summary — demographics, intent, content preferences.


Module H: Content Brainstorm

Answers: What specific content should I create, backed by data?

Input: Opportunity scores (Module F) + audience persona (Module G) + heatmap gaps (Module B).

For each content idea, specify:

  • Target keyword — from opportunity scoring
  • Hook angle — based on hookPatterns that work for this niche
  • Content type — 工具型/认知型/娱乐型 based on what the audience wants
  • Form — 图文 or 视频 based on Module E
  • Engagement target — realistic based on Top10 avg for this keyword
  • Competitive reference — specific note URL that proves this angle works

Output: Ranked content ideas with data backing.

| # | Keyword | Hook Angle | Type | Form | Target Likes | Reference |
|---|---|---|---|---|---|---|
| 1 | keyword3 | "N个方法..." (List) | 工具型 | 图文 | 5K+ | [top note URL] |
| 2 | keyword1 | "为什么..." (Question) | 认知型 | 视频 | 10K+ | [top note URL] |

Module I: Comment Operations

Answers: Which comments deserve a reply? What is the comment quality distribution?

Commands:

# 1. Fetch all comments
redbook comments "<url>" --all --json

# 2. Preview reply candidates (dry run)
redbook batch-reply "<url>" --strategy questions --dry-run --json

# 3. Execute replies with template (5 min delay with ±30% jitter)
redbook batch-reply "<url>" --strategy questions \
  --template "感谢提问!关于{content},..." \
  --max 10

Fields to extract from --dry-run JSON:

  • candidates[].commentId — target comments
  • candidates[].isQuestion — boolean, detected question
  • candidates[].likes — engagement signal
  • candidates[].hasSubReplies — whether already answered
  • skipped — how many comments were filtered out
  • totalComments — total fetched

Strategies:

  • questions — replies to comments that end with a question mark (learning-oriented audience)
  • top-engaged — replies to highest-liked comments (maximum visibility)
  • all-unanswered — replies to comments with no existing sub-replies (fill gaps)

How to interpret:

  • High question rate (>15%) = audience is learning-oriented → reply to build authority
  • High top-engaged comments (>100 likes) = reply to visible comments for maximum reach
  • Many unanswered comments = engagement gap, opportunity to increase reply rate
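One possible way to filter the --dry-run JSON before approving a batch. This mirrors the documented strategies but is not the CLI's internal implementation; the hasSubReplies check follows the "never reply twice" safety rule:

```python
def pick_candidates(candidates: list[dict], strategy: str = "questions",
                    max_replies: int = 10) -> list[dict]:
    if strategy == "questions":
        pool = [c for c in candidates if c["isQuestion"] and not c["hasSubReplies"]]
    elif strategy == "top-engaged":
        pool = sorted(candidates, key=lambda c: c["likes"], reverse=True)
    else:  # "all-unanswered"
        pool = [c for c in candidates if not c["hasSubReplies"]]
    return pool[:min(max_replies, 30)]  # mirror the hard cap of 30

candidates = [
    {"commentId": "c1", "isQuestion": True,  "likes": 5,   "hasSubReplies": False},
    {"commentId": "c2", "isQuestion": True,  "likes": 2,   "hasSubReplies": True},
    {"commentId": "c3", "isQuestion": False, "likes": 120, "hasSubReplies": False},
]
print([c["commentId"] for c in pick_candidates(candidates)])
```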

Safety: Hard cap 30 replies per batch, minimum 3-minute delay with ±30% jitter (default 5 min), --dry-run by default (no template = preview only), immediate stop on captcha. See Rate Limits & Safety for details.

Output: Reply plan table with candidate comments, strategy match reason, and status.


Module J: Viral Replication

Answers: What structural template can I extract from successful notes to guide new content creation?

Commands:

# 1. Find top notes for a keyword
redbook search "keyword" --sort popular --json

# 2. Extract structural template from 2-3 top performers
redbook viral-template "" "" "" --json

Fields to extract from viral-template JSON:

  • dominantHookPatterns[] — hook types appearing in majority of notes
  • titleStructure.commonPatterns[] — specific title formula
  • titleStructure.avgLength — target title length
  • bodyStructure.lengthRange — target word count [min, max]
  • bodyStructure.paragraphRange — target paragraph count
  • engagementProfile.type — reference/insight/entertainment
  • audienceSignals.commonThemes[] — what the audience talks about

How to interpret:

  • Consistent hook patterns across notes = proven formula for this niche
  • Narrow body length range = audience has clear content length preference
  • High collect/like in profile = audience saves content → create reference material
  • Common comment themes = topics to address in new content

Composition with other modules:

  • Uses Module A results to identify top URLs for template extraction
  • Feeds into Module H (Content Brainstorm) as structural constraint
  • Uses Module C classification to validate engagement profile

Output: Content template spec — the structural skeleton for content creation. An LLM (via the composed workflow) uses this template to generate actual title, body, hashtags, and cover image prompt.


Module K: Engagement Automation

Answers: How should I manage ongoing engagement with my audience?

This module is a workflow that composes Modules I and J with human oversight.

Workflow:

  1. Monitor — redbook comments "<url>" --all --json to fetch recent comments
  2. Filter — redbook batch-reply "<url>" --strategy questions --dry-run to identify reply candidates
  3. Review — Human reviews dry-run output (or LLM reviews with persona guidelines)
  4. Execute — redbook batch-reply "<url>" --strategy questions --template "..." --max 10
  5. Report — Summary of replies sent, errors encountered, rate limit status

Safety rules:

  • Always --dry-run first, human approval before execution
  • Maximum 30 replies per session (hard cap)
  • Minimum 3-minute delay between replies, default 5 minutes, with ±30% random jitter
  • Never reply to the same comment twice (check hasSubReplies)
  • Stop immediately on captcha — do not retry
  • See Rate Limits & Safety for XHS risk control thresholds

Anti-spam guidelines:

  • Vary reply templates across batches
  • Limit to 1-2 batch runs per note per day
  • Prioritize quality (targeted strategy) over quantity
  • Uniform timing patterns trigger bot detection — jitter is applied automatically
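The delay policy described above (3-minute floor, 5-minute default, ±30% jitter) reduces to a few lines. An illustrative sketch; the CLI applies this internally:

```python
import random

def batch_delay_ms(base_ms: int = 300_000) -> int:
    """Reply delay in ms: 3-minute floor, then +/-30% uniform jitter."""
    base_ms = max(base_ms, 180_000)   # enforce the 3-minute floor
    return int(base_ms * random.uniform(0.70, 1.30))
```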

Module L: Card Rendering

Answers: How do I turn markdown content into Xiaohongshu-ready image cards?

Commands:

# Render markdown to styled PNG cards
redbook render content.md --style xiaohongshu

# Custom style and output directory
redbook render content.md --style dark --output-dir ./cards

# JSON output (for programmatic use)
redbook render content.md --json

Input: Markdown file with YAML frontmatter:

```markdown
---
emoji: "??"
title: "5个AI效率技巧"
subtitle: "Claude Code 实战"
---

技巧一:...

Content here...

---

技巧二:...

More content...
```
**Output:** `cover.png` + `card_1.png`, `card_2.png`, ... in the same directory.

**Card specs:**
- **Size:** 1080×1440 (3:4 ratio, standard XHS image)
- **DPR:** 2 (retina quality, actual output 2160×2880)
- **Styles:** purple, xiaohongshu, mint, sunset, ocean, elegant, dark

**Pagination modes:**
- `auto` (default) — smart split on heading/paragraph boundaries using character-count heuristic
- `separator` — manual split on `---` in markdown

**How to interpret:**
- Uses the user's existing Chrome for rendering (via `puppeteer-core`) — no browser download needed
- Purely offline — no XHS API or cookies required
- Output images are ready for `redbook post --images cover.png card_1.png ...`

**Dependencies:** Requires `puppeteer-core` and `marked` (optional deps, only needed for rendering; install with `npm install -g puppeteer-core marked`).

**Composition with other modules:**
- Pairs with Module H (Content Brainstorm) — generate content ideas, write markdown, render to cards
- Pairs with Module J (Viral Replication) — extract template, write content matching the template, render
- Output feeds into `redbook post --images` for publishing

---

## Composed Workflows

Combine modules for different analysis depths.

### Quick Topic Scan (~5 min)
**Modules:** A → C → F

Search 3–5 keywords, classify engagement type, rank opportunities. Good for quickly validating whether a niche is worth deeper research.

### Content Planning
**Modules:** A → B → E → F → H

Build keyword matrix, map topic × scene intersections, check content form performance, score opportunities, brainstorm specific content ideas.

### Creator Competitive Analysis
**Modules:** A → D

Find who dominates a niche and study their content strategy, posting frequency, and engagement patterns.

### Full Niche Analysis
**Modules:** A → B → C → D → E → F → G → H

The comprehensive playbook — keyword landscape, cross-topic heatmap, engagement signals, creator profiles, content form analysis, opportunity scoring, audience personas, and data-backed content ideas.

### Single Note Deep-Dive
**Command:** `redbook analyze-viral "<url>" --json`

No module composition needed — `analyze-viral` returns hook analysis, engagement ratios, comment themes, author baseline comparison, and a 0-100 viral score in one call.

### Viral Pattern Research → Content Template
```bash
# 1. Find top notes
redbook search "keyword" --sort popular --json

# 2. Extract template from top 3 notes (replaces manual synthesis)
redbook viral-template "<url1>" "<url2>" "<url3>" --json
```

`viral-template` automates what previously required manual synthesis across `analyze-viral` results. It outputs a ContentTemplate JSON that captures dominant hooks, body structure ranges, engagement profile, and audience signals.

### Reply Management

**Modules:** I

Single-module workflow for managing comment engagement on your notes. Use batch-reply --dry-run to audit, then execute with a template.

### Content Replication

**Modules:** A → J → H → L

Keyword research → viral template extraction → data-backed content brainstorm → render to image cards. The template provides structural constraints that guide Module H's content ideas. Module L renders the final markdown to XHS-ready PNGs.

### Content Creation End-to-End

**Modules:** A → J → H → L → post

The full pipeline: research keywords → extract viral template → brainstorm content → write markdown → render to styled image cards → publish via redbook post --images cover.png card_1.png ...

### Full Operations

**Modules:** A → C → I → J → K

Comprehensive automation playbook — keyword analysis, engagement classification, comment operations, viral replication templates, and engagement automation workflow.


Rate Limits & Safety

XHS enforces aggressive anti-spam (风控) that detects automated behavior through device fingerprinting, activity ratio monitoring, and timing pattern analysis. The CLI applies safe defaults based on platform research.

Safe Thresholds

| Action | Safe Interval | CLI Default | Hard Cap |
|---|---|---|---|
| Post a note | 3-4 hours (2-3 notes/day max) | N/A (manual) | |
| Comment | ≥3 minutes | N/A (manual) | |
| Reply | ≥3 minutes | N/A (manual) | |
| Batch reply delay | ≥3 minutes | 5 min ±30% jitter | |
| Batch reply count | | 10 | 30 |

Anti-Detection Measures

  • Timing jitter: ±30% random variation on all batch delays. Uniform intervals are a bot signature.
  • Hard caps: Maximum 30 replies per batch (down from 50). No override.
  • Rate limit warnings: post, comment, and reply commands display safe interval reminders after each action.
  • Captcha circuit breaker: Batch operations stop immediately on captcha (NeedVerify).

What Triggers Risk Control

  • Uniform timing — replying at exact 3-second intervals flags bot detection
  • High frequency — >50 interactions/minute across any action type
  • Activity ratio anomaly — more comments than post views signals inauthentic behavior
  • Device fingerprint mismatch — XHS fingerprints 21 hardware parameters

Best Practices for Agents

  1. Always --dry-run first, review candidates, then execute
  2. Use the default 5-minute delay — do not override --delay below 180000 (3 min)
  3. Limit batch runs to 1-2 per note per day
  4. Vary reply templates between batches
  5. Space post commands 3-4 hours apart (2-3 notes/day maximum)

API vs Browser Limitations

The following operations work reliably via API:

  • Reading: search, notes, comments, user profiles, feed, favorites
  • Writing: top-level comments, comment replies, collect/uncollect notes
  • Analysis: viral scoring, template extraction, batch reply planning

The following operations are unreliable via API (frequently trigger captcha):

  • Publishing notes (use --private for higher success rate)
  • Bulk operations at very high frequency

The following operations require browser automation (not supported by this CLI):

  • Captcha solving, real-time notifications
  • Like/follow (heavy anti-automation enforcement)
  • DM/private messaging
  • Cover image generation (use external tools like Gemini/DALL-E)

Command Details

redbook search

Search for notes by keyword. Returns note titles, URLs, likes, author info.

redbook search "Claude Code教程" --json
redbook search "AI编程" --sort popular --json        # Sort: general, popular, latest
redbook search "Cursor" --type image --json           # Type: all, video, image
redbook search "MCP Server" --page 2 --json           # Pagination

Options:

  • --sort <sort>: general (default), popular, latest
  • --type <type>: all (default), video, image
  • --page <page>: Page number (default: 1)

redbook read

Read a note's full content — title, body text, images, likes, comments count.

redbook read "https://www.xiaohongshu.com/explore/abc123" --json

Accepts full URLs or short note IDs. Falls back to HTML scraping if API returns captcha.

redbook comments

Get comments on a note. Use --all to fetch all pages.

redbook comments "https://www.xiaohongshu.com/explore/abc123" --json
redbook comments "https://www.xiaohongshu.com/explore/abc123" --all --json

redbook user

Get a creator's profile — nickname, bio, follower count, note count, likes received.

redbook user "5a1234567890abcdef012345" --json

The userId is the hex string from the creator's profile URL.

redbook user-posts

List all notes posted by a creator. Returns titles, URLs, likes, timestamps.

redbook user-posts "5a1234567890abcdef012345" --json

redbook feed

Browse the recommendation feed.

redbook feed --json

redbook topics

Search for topic hashtags. Useful for finding trending topics to attach to posts.

redbook topics "Claude Code" --json

redbook favorites [userId]

List a user's collected (bookmarked) notes. Defaults to the current logged-in user when no userId is provided.

redbook favorites --json                        # Your own favorites
redbook favorites "5a1234567890abcdef" --json   # Another user's favorites
redbook favorites --all --json                  # Fetch all pages

Options:

  • --all: Fetch all pages of favorites (default: first page only)

Note: Other users' favorites are only visible if they haven't set their collection to private.

redbook collect

Collect (bookmark) a note to your favorites.

redbook collect "https://www.xiaohongshu.com/explore/abc123"

redbook uncollect

Remove a note from your collection.

redbook uncollect "https://www.xiaohongshu.com/explore/abc123"

redbook analyze-viral

Analyze why a viral note works. Returns a deterministic viral score (0–100).

redbook analyze-viral "https://www.xiaohongshu.com/explore/abc123" --json
redbook analyze-viral "https://www.xiaohongshu.com/explore/abc123" --comment-pages 5

Options:

  • --comment-pages <n>: Comment pages to fetch (default: 3, max: 10)

JSON output structure: Returns { note, score, hook, content, visual, engagement, comments, relative, fetchedAt }.

  • score.overall (0–100) — composite of hook (20) + engagement (20) + relative (20) + content (20) + comments (20)
  • hook.hookPatterns[] — detected title patterns (Identity Hook, Emotion Word, Number Hook, Question, etc.)
  • engagement — likes, comments, collects, shares + ratios (collectToLikeRatio, commentToLikeRatio, shareToLikeRatio)
  • relative.viralMultiplier — this note's likes / author's median likes
  • relative.isOutlier — true if viralMultiplier > 3
  • comments.themes[] — top recurring keyword phrases from comments
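The relative metrics reduce to a one-liner. This sketch assumes the author's median likes were already computed from user-posts data:

```python
def viral_multiplier(note_likes: int, author_median_likes: int) -> tuple[float, bool]:
    """relative.viralMultiplier and relative.isOutlier (outlier when > 3x)."""
    multiplier = note_likes / author_median_likes if author_median_likes else 0.0
    return multiplier, multiplier > 3

print(viral_multiplier(9_000, 1_500))   # (6.0, True)
```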

redbook viral-template <url1> [url2] [url3]

Extract a reusable content template from 1-3 viral notes. Analyzes each note (same pipeline as analyze-viral) and synthesizes common structural patterns.

redbook viral-template "<url1>" "<url2>" "<url3>" --json
redbook viral-template "<url>" --comment-pages 5 --json

Options:

  • --comment-pages <n>: Comment pages to fetch per note (default: 3, max: 10)

JSON output structure: Returns { dominantHookPatterns, titleStructure, bodyStructure, engagementProfile, audienceSignals, sourceNotes, generatedAt }.

  • dominantHookPatterns[] — hook types appearing in majority of input notes
  • titleStructure.avgLength — average title length across notes
  • bodyStructure.lengthRange — [min, max] body length
  • engagementProfile.type — "reference" / "insight" / "entertainment"
  • audienceSignals.commonThemes[] — merged comment themes across notes

redbook comment

Post a top-level comment on a note.

redbook comment "<url>" --content "Great post!" --json

Options:

  • --content <text> (required): Comment text

redbook reply

Reply to a specific comment on a note.

redbook reply "<url>" --comment-id "<commentId>" --content "Thanks for asking!" --json

Options:

  • --comment-id <id> (required): Comment ID to reply to (from comments --json output)
  • --content <text> (required): Reply text

redbook batch-reply

Reply to multiple comments using a filtering strategy. Always preview with --dry-run first.

# Preview which comments match the strategy
redbook batch-reply "<url>" --strategy questions --dry-run --json

# Execute replies with a template (default 5 min delay with jitter)
redbook batch-reply "<url>" --strategy questions \
  --template "感谢提问!{content}" --max 10

Options:

  • --strategy <strategy>: questions (default), top-engaged, all-unanswered
  • --template <text>: Reply template with {author}, {content} placeholders
  • --max <n>: Max replies (default: 10, hard cap: 30)
  • --delay <ms>: Delay between replies in ms (default: 300000 / 5 min, min: 180000 / 3 min). ±30% random jitter applied automatically.
  • --dry-run: Preview candidates without posting (default when no template)

Safety: Stops immediately on captcha. No template = dry-run only. Delays include random jitter to avoid uniform timing patterns that trigger XHS bot detection.

redbook render

Render a markdown file with YAML frontmatter into styled PNG image cards. Uses the user's existing Chrome installation — no browser download needed.

redbook render content.md --style xiaohongshu
redbook render content.md --style dark --output-dir ./cards
redbook render content.md --pagination separator --json

Options:

  • --style <style>: purple, xiaohongshu (default), mint, sunset, ocean, elegant, dark
  • --pagination <mode>: auto (default), separator (split on ---)
  • --output-dir <dir>: Output directory (default: same as input file)
  • --width <px>: Card width in px (default: 1080)
  • --height <px>: Card height in px (default: 1440)
  • --dpr <n>: Device pixel ratio (default: 2)

Requires: puppeteer-core and marked (npm install -g puppeteer-core marked). Does NOT require XHS cookies — purely offline rendering.

Override Chrome path: Set CHROME_PATH environment variable if Chrome is not in the standard location.

redbook whoami

Check connection status. Verifies cookies are valid and shows the logged-in user.

redbook whoami

redbook post (Limited)

Publish an image note. Frequently triggers captcha (type=124) on the creator API. Image upload works, but the publish step is unreliable. For posting, consider using browser automation instead.

redbook post --title "标题" --body "正文" --images cover.png --json
redbook post --title "测试" --body "..." --images img.png --private --json

Options:

  • --title </CODE>: Note title (required)</LI> <LI><CODE>--body <body></CODE>: Note body text (required)</LI> <LI><CODE>--images <paths...></CODE>: Image file paths (required, at least one)</LI> <LI><CODE>--topic <keyword></CODE>: Search and attach a topic hashtag</LI> <LI><CODE>--private</CODE>: Publish as private note</LI></UL> <H3 id=global-options>Global Options</H3> <P>All commands accept:</P> <UL> <LI><CODE>--cookie-source <browser></CODE>: <CODE>chrome</CODE> (default), <CODE>safari</CODE>, <CODE>firefox</CODE></LI> <LI><CODE>--chrome-profile <name></CODE>: Chrome profile directory name (e.g., "Profile 1"). Auto-discovered if omitted.</LI> <LI><CODE>--json</CODE>: Output as JSON</LI></UL> <HR> <H2 id=technical-reference>Technical Reference</H2> <H3 id=xsec_token--required-for-reading-notes>xsec_token — Required for Reading Notes</H3> <P>The XHS API requires a valid <CODE>xsec_token</CODE> to fetch note content. Without it, <CODE>read</CODE>, <CODE>comments</CODE>, and <CODE>analyze-viral</CODE> return <CODE>{}</CODE>.</P> <P><STRONG>Key rules:</STRONG></P> <OL> <LI><STRONG>Tokens expire.</STRONG> A URL with <CODE>?xsec_token=...</CODE> from a previous session will return <CODE>{}</CODE>. 
Never cache or reuse old URLs.</LI> <LI><STRONG><CODE>search</CODE> always returns fresh tokens.</STRONG> Every item in search results includes a valid <CODE>xsec_token</CODE> for that note.</LI> <LI><STRONG>noteId alone returns <CODE>{}</CODE>.</STRONG> Running <CODE>redbook read <noteId></CODE> without a token almost always fails.</LI></OL> <P><STRONG>The correct workflow — always search first:</STRONG></P><PRE><CODE class=language-bash># WRONG — stale URL or bare noteId, will likely return {} redbook read "689da7b0000000001b0372c6" --json redbook read "https://www.xiaohongshu.com/explore/689da7b0?xsec_token=OLD_TOKEN" --json # RIGHT — search first, then use the fresh URL with token redbook search "AI编程" --sort popular --json # Extract the noteId + xsec_token from search results, then: redbook read "https://www.xiaohongshu.com/explore/<noteId>?xsec_token=<freshToken>" --json </CODE></PRE> <P><STRONG>For agents:</STRONG> When the user gives a bare XHS note URL (no <CODE>xsec_token</CODE> param), extract the noteId from the URL path, search for the note title or noteId to get a fresh token, then use the full URL with the fresh token.</P> <P><STRONG>How to extract fresh URLs from search results (JSON):</STRONG></P><PRE><CODE class=language-bash># Each search result item has: { id: "noteId", xsec_token: "...", note_card: { ... 
} } # Build the URL: https://www.xiaohongshu.com/explore/{id}?xsec_token={xsec_token} </CODE></PRE> <P><STRONG>Commands that need xsec_token:</STRONG> <CODE>read</CODE>, <CODE>comments</CODE>, <CODE>analyze-viral</CODE></P> <P><STRONG>Commands that do NOT need xsec_token:</STRONG> <CODE>search</CODE>, <CODE>user</CODE>, <CODE>user-posts</CODE>, <CODE>feed</CODE>, <CODE>whoami</CODE>, <CODE>topics</CODE></P> <H3 id=chinese-number-formats-in-api-responses>Chinese Number Formats in API Responses</H3> <P>The XHS API returns abbreviated numbers with Chinese unit suffixes:</P> <DIV class=table-scroll-wrapper> <TABLE> <THEAD> <TR> <TH>API value</TH> <TH>Actual number</TH></TR></THEAD> <TBODY> <TR> <TD><CODE>"1.5万"</CODE></TD> <TD>15,000</TD></TR> <TR> <TD><CODE>"2.4万"</CODE></TD> <TD>24,000</TD></TR> <TR> <TD><CODE>"1.2亿"</CODE></TD> <TD>120,000,000</TD></TR> <TR> <TD><CODE>"115"</CODE></TD> <TD>115</TD></TR></TBODY></TABLE></DIV> <P><CODE>万</CODE> = ×10,000. <CODE>亿</CODE> = ×100,000,000. Numbers under 10,000 are plain integers as strings.</P> <P>The <CODE>analyze-viral</CODE> command handles this automatically.
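<P>If you parse the JSON yourself, the suffix conversion can be sketched as a small TypeScript helper (a hypothetical <CODE>parseXhsCount</CODE>, not an export of <CODE>@lucasygu/redbook</CODE>):</P>

```typescript
// Hypothetical helper (not part of @lucasygu/redbook): convert XHS
// abbreviated counts such as "1.5万" or "1.2亿" into plain numbers.
const UNITS: Record<string, number> = { "万": 10_000, "亿": 100_000_000 };

function parseXhsCount(raw: string): number {
  // Counts are digits, an optional decimal point, and an optional unit suffix.
  const match = raw.trim().match(/^([\d.]+)([万亿])?$/);
  if (!match) return NaN; // unrecognized format
  const [, digits, unit] = match;
  return Number(digits) * (unit ? UNITS[unit] : 1);
}
```

<P>For example, <CODE>parseXhsCount("2.4万")</CODE> yields <CODE>24000</CODE>, matching the table above.</P>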
When parsing <CODE>--json</CODE> output manually, watch for these suffixes in <CODE>interact_info</CODE> fields (<CODE>liked_count</CODE>, <CODE>collected_count</CODE>, etc.).</P> <H3 id=error-handling>Error Handling</H3> <DIV class=table-scroll-wrapper> <TABLE> <THEAD> <TR> <TH>Error</TH> <TH>Meaning</TH> <TH>Fix</TH></TR></THEAD> <TBODY> <TR> <TD><CODE>{}</CODE> empty response</TD> <TD>Missing or expired xsec_token</TD> <TD>Search first to get a fresh token</TD></TR> <TR> <TD>"No 'a1' cookie"</TD> <TD>Not logged into XHS in browser</TD> <TD>Log into xiaohongshu.com in Chrome</TD></TR> <TR> <TD>"Session expired"</TD> <TD>Cookie too old</TD> <TD>Re-login in Chrome</TD></TR> <TR> <TD>"NeedVerify" / captcha</TD> <TD>Anti-bot triggered</TD> <TD>Wait and retry, or reduce request frequency</TD></TR> <TR> <TD>"IP blocked" (300012)</TD> <TD>Rate limited</TD> <TD>Wait or switch network</TD></TR></TBODY></TABLE></DIV> <HR> <H2 id=output-format-guidance>Output Format Guidance</H2> <P>When producing analysis reports, use these formats:</P> <P><STRONG>Data tables:</STRONG> Markdown tables with exact field mappings. Always include the metric unit.</P> <P><STRONG>Heatmaps:</STRONG> ASCII bar charts for cross-topic comparison:</P><PRE><CODE> 职场 生活 教育 创业 AI编程 ████ 8K ██ 2K ████ 12K ?? 200 Claude Code ██ 3K ?? 
100 ██ 4K █ 1K </CODE></PRE> <P><STRONG>Creator comparison:</STRONG> Structured table with both quantitative metrics and qualitative style assessment.</P> <P><STRONG>Final reports:</STRONG> Use this section order:</P> <OL> <LI>Market Overview (demand signals, content velocity)</LI> <LI>Keyword Landscape (engagement matrix, opportunity tiers)</LI> <LI>Cross-Topic Heatmap (topic × scene intersections)</LI> <LI>Audience Persona (demographics, intent, preferences)</LI> <LI>Competitive Landscape (creator profiles, strategy patterns)</LI> <LI>Content Opportunities (tiered recommendations with data backing)</LI> <LI>Content Ideas (specific hooks, angles, targets)</LI></OL> <H2 id=programmatic-api>Programmatic API</H2><PRE><CODE class=language-typescript>import { XhsClient } from "@lucasygu/redbook"; import { loadCookies } from "@lucasygu/redbook/cookies"; const cookies = await loadCookies("chrome"); const client = new XhsClient(cookies); const results = await client.searchNotes("AI编程", 1, 20, "popular"); const topics = await client.searchTopics("Claude Code"); </CODE></PRE> <H2 id=requirements>Requirements</H2> <UL> <LI>Node.js >= 22</LI> <LI>Logged into xiaohongshu.com in Chrome (or Safari/Firefox with <CODE>--cookie-source</CODE>)</LI> <LI>macOS (cookie extraction uses native keychain access)</LI> <LI><STRONG>For card rendering only:</STRONG> <CODE>puppeteer-core</CODE> and <CODE>marked</CODE> (<CODE>npm install -g puppeteer-core marked</CODE>). 
Uses your existing Chrome — no additional browser download.</LI></UL> </div>