Glin Profanity MCP: Advanced AI Content Moderation - Openclaw Skills

Author: Internet

2026-03-24

AI News

What Is the Glin Profanity MCP Server?

The Glin Profanity MCP server is a high-performance tool designed to integrate seamlessly with AI coding agents and assistants. Using the Model Context Protocol, this skill lets AI models run deep analysis on text to identify offensive language, providing the safety and professionalism modern applications require. For developers who need reliable moderation workflows, it is a standout entry in the Openclaw Skills library.

The skill goes beyond simple keyword blocking, offering context-aware detection that understands differences across domains such as medical or gaming contexts. It lets an AI assistant act as an intelligent moderator: explaining why a given text was flagged, suggesting clean alternatives, and even tracking the history of repeat offenders to uphold community standards.

Download: https://github.com/openclaw/skills/tree/main/skills/thegdsks/glin-profanity-mcp

Installation & Download

1. ClawHub CLI

The fastest way to install the skill directly from the source.

npx clawhub@latest install glin-profanity-mcp

2. Manual Installation

Copy the skill folder to one of the following locations:

  • Global: ~/.openclaw/skills/
  • Workspace: /skills/

Priority: workspace > local > built-in

3. Prompt-Based Installation

Copy this prompt into OpenClaw to install automatically.

Please help me install glin-profanity-mcp using ClawHub. If ClawHub is not installed yet, install it first (npm i -g clawhub).

Glin Profanity MCP Server Use Cases

  • Batch-moderate user-generated comments or forum posts.
  • Validate professional blog posts and marketing copy before publishing.
  • Detect sophisticated leetspeak or Unicode tricks used to evade standard filters.
  • Generate automated safety audit reports for compliance and community management.
  • Protect brand reputation by keeping AI-generated and user-submitted content clean.

How the Glin Profanity MCP Server Works

  1. The AI assistant receives a prompt involving text analysis or content review.
  2. The assistant invokes the Glin Profanity MCP server's tools through the standardized protocol used across Openclaw Skills.
  3. The server processes the text against its detection algorithms and an extensive database covering 24 languages.
  4. It returns detailed results, including a safety score (0-100), flagged terms, and recommended actions.
  5. The AI assistant uses this data to review content, give feedback to users, or write moderation reports.
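The cycle above can be sketched in a few lines: the assistant builds an MCP tools/call request and then acts on the returned score. The response shape and the thresholds below are assumptions for illustration, not the server's documented defaults.

```javascript
// Sketch of the moderation flow: build an MCP tools/call request,
// then map the (assumed) 0-100 safety score to an action.
function buildValidateRequest(text) {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: { name: "validate_content", arguments: { text } },
  };
}

// Hypothetical thresholds; the real server returns its own recommendation.
function decideAction(result) {
  if (result.safety_score >= 80) return "allow";
  if (result.safety_score >= 50) return "review";
  return "block";
}

const req = buildValidateRequest("hello world");
console.log(req.params.name);                    // validate_content
console.log(decideAction({ safety_score: 95 })); // allow
console.log(decideAction({ safety_score: 20 })); // block
```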

Glin Profanity MCP Server Configuration Guide

To install this skill in your AI environment, use the following configuration:

Claude Desktop: add this snippet to your claude_desktop_config.json:

{
  "mcpServers": {
    "glin-profanity": {
      "command": "npx",
      "args": ["-y", "glin-profanity-mcp"]
    }
  }
}

Cursor: add this to your .cursor/mcp.json configuration:

{
  "mcpServers": {
    "glin-profanity": {
      "command": "npx",
      "args": ["-y", "glin-profanity-mcp"]
    }
  }
}

Glin Profanity MCP Server Data Schema and Taxonomy

The skill organizes moderation data into structured responses to ensure high accuracy. The list below describes the main data points produced:

  • is_profane: Boolean indicating whether the text contains prohibited language.
  • safety_score: Number between 0 and 100 representing content safety.
  • matches: List of detected banned terms and their classifications.
  • obfuscation: Details about detected leetspeak or Unicode character tricks.
  • user_history: Metadata about previous violations for a given user ID.
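A consumer of these fields might sanity-check a response before acting on it. The validator below follows the field names listed above; the sample object itself is illustrative.

```javascript
// Minimal shape check for a moderation response, using the documented
// field names (is_profane, safety_score, matches).
function isModerationResult(obj) {
  return (
    typeof obj.is_profane === "boolean" &&
    typeof obj.safety_score === "number" &&
    obj.safety_score >= 0 &&
    obj.safety_score <= 100 &&
    Array.isArray(obj.matches)
  );
}

const sample = {
  is_profane: false,
  safety_score: 98,
  matches: [],
  obfuscation: null,
  user_history: null,
};
console.log(isModerationResult(sample)); // true
```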

name: glin-profanity-mcp
description: MCP server providing profanity detection tools for AI assistants. Use when reviewing batches of user content, auditing comments for moderation reports, analyzing text for profanity before publishing, or when AI needs content moderation capabilities during workflows.

Glin Profanity MCP Server

MCP (Model Context Protocol) server that provides profanity detection as tools for AI assistants like Claude Desktop, Cursor, and Windsurf.

Best for: AI-assisted content review workflows, batch moderation, audit reports, and content validation before publishing.

Installation

Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "glin-profanity": {
      "command": "npx",
      "args": ["-y", "glin-profanity-mcp"]
    }
  }
}

Cursor

Add to .cursor/mcp.json:

{
  "mcpServers": {
    "glin-profanity": {
      "command": "npx",
      "args": ["-y", "glin-profanity-mcp"]
    }
  }
}

Available Tools

Core Detection

  • check_profanity: Check text for profanity with detailed results
  • censor_text: Censor profanity with a configurable replacement
  • batch_check: Check multiple texts at once (up to 100)
  • validate_content: Get a safety score (0-100) with an action recommendation

Analysis

  • analyze_context: Context-aware analysis (medical, gaming, etc.)
  • detect_obfuscation: Detect leetspeak and Unicode tricks
  • explain_match: Explain why text was flagged
  • compare_strictness: Compare detection across strictness levels
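A toy version of the normalization step behind obfuscation detection might look like the sketch below. The substitution map is illustrative; the actual server covers far more substitutions, including Unicode confusables.

```javascript
// Illustrative leetspeak normalizer: map common digit/symbol
// substitutions back to letters before dictionary matching.
const LEET_MAP = {
  "4": "a", "3": "e", "1": "i", "0": "o",
  "5": "s", "7": "t", "@": "a", "$": "s",
};

function normalizeLeet(text) {
  return text
    .toLowerCase()
    .split("")
    .map((ch) => LEET_MAP[ch] ?? ch)
    .join("");
}

console.log(normalizeLeet("l33t sp34k")); // "leet speak"
console.log(normalizeLeet("H3LL0"));      // "hello"
```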

Utilities

  • suggest_alternatives: Suggest clean replacements
  • analyze_corpus: Analyze up to 500 texts for stats
  • create_regex_pattern: Generate a regex for custom detection
  • get_supported_languages: List all 24 supported languages
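To give a feel for what a create_regex_pattern-style helper produces, here is a hedged sketch that builds a pattern tolerant of common leetspeak substitutions. The substitution table and word-boundary choice are assumptions, not the tool's actual output format.

```javascript
// Build a regex for a word that also matches common leetspeak variants,
// e.g. "bad" also matches "b4d". Per-letter character classes are illustrative.
const SUBS = {
  a: "[a4@]", e: "[e3]", i: "[i1!]",
  o: "[o0]", s: "[s5$]", t: "[t7]",
};

function leetPattern(word) {
  const body = word
    .toLowerCase()
    .split("")
    .map((ch) => SUBS[ch] ?? ch)
    .join("");
  return new RegExp(`\\b${body}\\b`, "i");
}

const re = leetPattern("bad");
console.log(re.test("that was b4d")); // true
console.log(re.test("badge"));        // false (word boundary blocks substrings)
```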

User Tracking

  • track_user_message: Track messages for repeat offenders
  • get_user_profile: Get a user's moderation history
  • get_high_risk_users: List users with high violation rates
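Conceptually, the tracking tools above amount to per-user violation statistics. The in-memory tracker below is a sketch of that idea; the server's actual storage and risk formula are not documented here, so the 50% rate threshold is an assumption.

```javascript
// Toy in-memory tracker: count messages and violations per user,
// then surface users whose violation rate exceeds a threshold.
class UserTracker {
  constructor() {
    this.stats = new Map();
  }

  track(userId, wasProfane) {
    const s = this.stats.get(userId) ?? { total: 0, violations: 0 };
    s.total += 1;
    if (wasProfane) s.violations += 1;
    this.stats.set(userId, s);
  }

  highRiskUsers(minRate = 0.5) {
    return [...this.stats.entries()]
      .filter(([, s]) => s.total > 0 && s.violations / s.total >= minRate)
      .map(([id]) => id);
  }
}

const tracker = new UserTracker();
tracker.track("alice", false);
tracker.track("bob", true);
tracker.track("bob", true);
console.log(tracker.highRiskUsers()); // ["bob"]
```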

Example Prompts

Content Review

"Check these 50 user comments and tell me which ones need moderation"
"Validate this blog post before publishing - use high strictness"
"Analyze this medical article with medical domain context"

Batch Operations

"Batch check all messages in this array and return only flagged ones"
"Generate a moderation audit report for these comments"

Understanding Flags

"Explain why 'f4ck' was detected as profanity"
"Compare strictness levels for this gaming chat message"

Content Cleanup

"Suggest professional alternatives for this flagged text"
"Censor the profanity but preserve first letters"

When to Use

Use the MCP server when:

  • AI assists with content review workflows
  • Batch checking user submissions
  • Generating moderation reports
  • Content validation before publishing
  • Human-in-the-loop moderation

Use the core library instead when:

  • Automated real-time filtering (hooks/middleware)
  • Every message needs checking without AI involvement
  • Performance-critical applications (< 1ms response)

Resources

  • npm: https://www.npmjs.com/package/glin-profanity-mcp
  • GitHub: https://github.com/GLINCKER/glin-profanity/tree/release/packages/mcp
  • Core library: https://www.npmjs.com/package/glin-profanity