Solo SEO Audit: Automated Technical SEO Health Checks - Openclaw Skills
Author: Internet
2026-04-15
What is Solo SEO Audit?
Solo SEO Audit is a technical diagnostic skill designed for AI agents that performs a deep SEO health check on any URL or project. It carefully evaluates page architecture, including meta tags, Open Graph data, JSON-LD structured data, and essential crawl directives such as sitemap.xml and robots.txt. With Openclaw Skills, developers can automatically validate their web assets and ensure they comply with search-engine best practices and social-sharing standards.
The skill goes beyond basic scanning: it computes a weighted score from 0 to 100 and provides actionable insights into critical issues that could block indexing or reduce search visibility. For any developer or marketer maintaining a high-performance web application with Openclaw Skills, it is an essential component.
Download: https://github.com/openclaw/skills/tree/main/skills/fortunto2/solo-seo-audit
Installation & Download
1. ClawHub CLI
The fastest way to install the skill directly from the source.
npx clawhub@latest install solo-seo-audit
2. Manual Installation
Copy the skill folder to one of the following locations:
- Global: ~/.openclaw/skills/
- Workspace: /skills/
Priority: workspace > local > built-in
3. Prompt Installation
Copy this prompt into OpenClaw to install the skill automatically.
Please install solo-seo-audit using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).
Solo SEO Audit Use Cases
- Validate landing pages before launch, ensuring all meta tags and social headers are configured correctly.
- Benchmark a project against top competitors by tracking SERP rankings for specific target keywords.
- Audit technical infrastructure files such as sitemap.xml and robots.txt for compliance and accessibility.
- Generate automated SEO reports in Markdown and store them directly in the project repository.
- The agent identifies the target URL from user input, or looks it up in project documents such as CLAUDE.md or the PRD.
- Fetches page content and extracts key elements, including title, description, canonical link, and heading hierarchy.
- Verifies the presence and validity of core infrastructure files such as sitemap.xml and robots.txt.
- Evaluates which elements are present through a forced reasoning step before computing the final score.
- Runs SERP ranking checks for the top 3-5 keywords to see how the site ranks against competitors.
- Computes a final score against a 100-point rubric and writes a detailed report to the console or a Markdown file.
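The fetch-and-extract step above can be sketched with only the Python standard library. This is an illustrative sketch, not the skill's actual implementation: the `MetaExtractor` class name, its return shape, and the sample HTML are assumptions.

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collects <title>, <meta> tags, and the H1 count from an HTML page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}        # e.g. {"og:title": "...", "description": "..."}
        self.h1_count = 0
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            # OG tags use "property"; plain meta tags use "name"
            key = attrs.get("property") or attrs.get("name")
            if key and "content" in attrs:
                self.meta[key] = attrs["content"]
        elif tag == "h1":
            self.h1_count += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

page = ('<html><head><title>Acme</title>'
        '<meta property="og:title" content="Acme"></head>'
        '<body><h1>Hi</h1></body></html>')
p = MetaExtractor()
p.feed(page)
print(p.title, p.meta.get("og:title"), p.h1_count)  # Acme Acme 1
```

A real audit would feed the fetched page source into the parser and then apply the length checks (50-60 chars for the title, 150-160 for the description) to the extracted values.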
Solo SEO Audit Configuration Guide
To use this skill, make sure your AI agent has web-search and page-fetching capabilities. The skill is designed to run seamlessly in environments that support Openclaw Skills.
# Trigger an audit directly with a URL
/seo-audit https://example.com
# Or trigger an audit for a local project
/seo-audit my-project-name
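The distinction between the two trigger forms comes down to how the argument is interpreted. A minimal sketch, assuming a simple prefix check (the function name and return shape are hypothetical):

```python
def parse_target(argument: str):
    """Classify the /seo-audit argument as a URL, a project name, or empty."""
    arg = argument.strip()
    if not arg:
        return ("ask", None)       # no target: prompt the user for one
    if arg.startswith("http"):
        return ("url", arg)        # audit the URL directly
    return ("project", arg)        # look up the URL in project docs

print(parse_target("https://example.com"))  # ('url', 'https://example.com')
print(parse_target("my-project-name"))      # ('project', 'my-project-name')
```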
Solo SEO Audit Data Schema & Taxonomy
The skill produces a structured report, typically saved to docs/seo-audit.md, organized into the following metadata categories:
| Component | Tracked Metrics |
|---|---|
| Metadata | Title length, description length, OG tags, Twitter Cards |
| Technical | JSON-LD validity, canonical URL, HTTPS status, language tags |
| Infrastructure | Sitemap XML validation, robots.txt Disallow rules, favicon presence |
| Performance | SERP ranking positions, top 3 competitors identified per keyword |
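The infrastructure checks in the table amount to parsing two small files. A standard-library sketch of that parsing, under the assumption that the files have already been fetched as text (helper names are illustrative):

```python
import xml.etree.ElementTree as ET

def sitemap_url_count(xml_text: str) -> int:
    """Count <url> entries in a sitemap.xml document."""
    root = ET.fromstring(xml_text)
    # Sitemap entries live in the sitemaps.org namespace
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    return len(root.findall("sm:url", ns))

def robots_summary(text: str) -> dict:
    """Extract Disallow rules and check for a Sitemap reference."""
    lines = text.splitlines()
    disallows = [line.split(":", 1)[1].strip()
                 for line in lines if line.lower().startswith("disallow:")]
    has_sitemap = any(line.lower().startswith("sitemap:") for line in lines)
    return {"disallow": disallows, "sitemap_ref": has_sitemap}

sitemap = '''<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>'''
robots = "User-agent: *\nDisallow: /admin\nSitemap: https://example.com/sitemap.xml"

print(sitemap_url_count(sitemap))   # 2
print(robots_summary(robots))       # {'disallow': ['/admin'], 'sitemap_ref': True}
```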
name: solo-seo-audit
description: SEO health check for any URL — analyzes meta tags, OG, JSON-LD, sitemap, robots.txt, SERP positions, and scores 0-100. Use when user says "check SEO", "audit this page", "SEO score", "check meta tags", or "SERP position". Do NOT use for generating landing content (use /landing-gen) or social media posts (use /content-gen).
license: MIT
metadata:
author: fortunto2
version: "1.1.1"
openclaw:
emoji: "??"
allowed-tools: Read, Grep, Bash, Glob, Write, WebSearch, WebFetch, AskUserQuestion, mcp__solograph__web_search, mcp__solograph__project_info
argument-hint: ""
/seo-audit
SEO health check for any URL or project landing page. Fetches the page, analyzes meta tags, OG, JSON-LD, sitemap, robots.txt, checks SERP positions for target keywords, and outputs a scored report.
MCP Tools (use if available)
- `web_search(query, engines, include_raw_content)` — SERP position check, competitor analysis
- `project_info(name)` — get project URL if auditing by project name
If MCP tools are not available, use Claude WebSearch/WebFetch as fallback.
Steps

1. Parse target from `$ARGUMENTS`.
   - If URL (starts with `http`): use directly.
   - If project name: look up URL from project README, CLAUDE.md, or `docs/prd.md`.
   - If empty: ask via AskUserQuestion — "Which URL or project to audit?"
2. Fetch the page via WebFetch. Extract:
   - Title tag (length check: 50-60 chars ideal)
   - Meta description (length check: 150-160 chars ideal)
   - Open Graph tags: `og:title`, `og:description`, `og:image`, `og:url`, `og:type`
   - Twitter Card tags: `twitter:card`, `twitter:title`, `twitter:image`
   - JSON-LD structured data
   - Canonical URL
   - Language tag
   - i18n tags
   - Heading structure: H1 count (should be exactly 1), H2-H3 hierarchy
3. Check infrastructure files:
   - Fetch `{origin}/sitemap.xml` — exists? Valid XML? Page count?
   - Fetch `{origin}/robots.txt` — exists? Disallow rules? Sitemap reference?
   - Fetch `{origin}/favicon.ico` — exists?
4. Forced reasoning — assess before scoring. Write out before proceeding:
   - What's present: [list of found elements]
   - What's missing: [list of absent elements]
   - Critical issues: [anything that blocks indexing or sharing]
5. SERP position check — for 3-5 keywords:
   - Extract keywords from page title + meta description + H1.
   - For each keyword, search via MCP `web_search(query="{keyword}")` or WebSearch.
   - Record: position of target URL in results (1-10, or "not found").
   - Record: top 3 competitors for each keyword.
6. Score calculation (0-100):

   | Check | Max Points | Criteria |
   |---|---|---|
   | Title tag | 10 | Exists, 50-60 chars, contains primary keyword |
   | Meta description | 10 | Exists, 150-160 chars, compelling |
   | OG tags | 10 | og:title, og:description, og:image all present |
   | JSON-LD | 10 | Valid structured data present |
   | Canonical | 5 | Present and correct |
   | Sitemap | 10 | Exists, valid, referenced in robots.txt |
   | Robots.txt | 5 | Exists, no overly broad Disallow |
   | H1 structure | 5 | Exactly one H1, descriptive |
   | HTTPS | 5 | Site uses HTTPS |
   | Mobile meta | 5 | Viewport tag present |
   | Language | 5 | `lang` attribute present |
   | Favicon | 5 | Exists |
   | SERP presence | 15 | Found in top 10 for target keywords |

7. Write report to `docs/seo-audit.md` (in project context) or print to console:

        # SEO Audit: {URL}

        **Date:** {YYYY-MM-DD}
        **Score:** {N}/100

        ## Summary
        {2-3 sentence overview of SEO health}

        ## Checks
        | Check | Status | Score | Details |
        |-------|--------|-------|---------|
        | Title | pass/fail | X/10 | "{actual title}" (N chars) |
        | ... | ... | ... | ... |

        ## SERP Positions
        | Keyword | Position | Top Competitors |
        |---------|----------|----------------|
        | {kw} | #N or N/A | competitor1, competitor2, competitor3 |

        ## Critical Issues
        - {issue with fix recommendation}

        ## Recommendations (Top 3)
        1. {highest impact fix}
        2. {second priority}
        3. {third priority}

8. Output summary — print score and top 3 recommendations.
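The scoring rubric in step 6 reduces to a fixed weight table. A minimal sketch of the computation, assuming each check is reported as a fraction earned (the weights mirror the rubric; the input format is an assumption):

```python
# Weights taken from the step-6 rubric; they sum to exactly 100.
WEIGHTS = {
    "title": 10, "meta_description": 10, "og_tags": 10, "json_ld": 10,
    "canonical": 5, "sitemap": 10, "robots_txt": 5, "h1": 5,
    "https": 5, "viewport": 5, "lang": 5, "favicon": 5, "serp": 15,
}

def seo_score(results: dict) -> int:
    """results maps check name -> fraction earned (0.0 to 1.0)."""
    return round(sum(WEIGHTS[k] * results.get(k, 0.0) for k in WEIGHTS))

perfect = {k: 1.0 for k in WEIGHTS}
print(seo_score(perfect))                   # 100
print(seo_score({**perfect, "serp": 0.0}))  # 85 (not found in top 10)
```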
Notes
- Score is relative — 80+ is good for a landing page, 90+ is excellent
- SERP checks are approximations (not real-time ranking data)
- Run periodically after content changes or before launch
Common Issues
Page fetch fails
Cause: URL is behind authentication, CORS, or returns non-HTML. Fix: Ensure the URL is publicly accessible. For SPAs, check if content is server-rendered.
SERP positions show "not found"
Cause: Site is new or not indexed by search engines. Fix: This is expected for new sites. Submit sitemap to Google Search Console and re-audit in 2-4 weeks.
Low score despite good content
Cause: Missing infrastructure files (sitemap.xml, robots.txt, JSON-LD). Fix: These are the highest-impact fixes. Generate sitemap, add robots.txt with sitemap reference, and add JSON-LD structured data.
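For the JSON-LD fix mentioned above, a minimal schema.org snippet is often enough to pass the check. A sketch of building one with the standard library, using placeholder values (the helper name and fields are illustrative, not output of the skill):

```python
import json

def website_json_ld(name: str, url: str) -> str:
    """Serialize minimal schema.org WebSite data for a JSON-LD script tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "WebSite",
        "name": name,
        "url": url,
    }
    return json.dumps(data, indent=2)

snippet = website_json_ld("Example", "https://example.com")
print(snippet)
```

The resulting string goes inside a script tag with type `application/ld+json` in the page head.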