OpenClaw Memory Enhancer: Edge RAG Optimized for Agents - OpenClaw Skills
Author: Internet
2026-04-05
What is the OpenClaw Memory Enhancer?
The OpenClaw Memory Enhancer is a sophisticated memory layer designed for the OpenClaw skills ecosystem that lets AI agents retain and retrieve information across multiple sessions. It implements a retrieval-augmented generation (RAG) system that automatically processes files in a memory directory, giving agents the context they need to stay consistent across long-running projects and personalized user interactions.
What sets this skill apart is its heavy optimization for edge computing. It ships a dedicated Edge version that runs with zero external dependencies and under 10MB of memory, making it well suited to devices such as the NVIDIA Jetson or Raspberry Pi. By relying on local storage, it keeps all memory data private and secure on the user's own hardware, with no cloud vector database required.
Download: https://github.com/openclaw/skills/tree/main/skills/henryfcb/openclaw-memory-enhancer
Installation & Download
1. ClawHub CLI
The fastest way to install the skill directly from the source.
npx clawhub@latest install openclaw-memory-enhancer
2. Manual Installation
Copy the skill folder to one of the following locations:
- Global: ~/.openclaw/skills/
- Workspace: /skills/
Priority: workspace > local > built-in
3. Prompt Installation
Copy this prompt into OpenClaw to install automatically.
Please install openclaw-memory-enhancer for me using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).
OpenClaw Memory Enhancer Use Cases
- Lets AI agents remember user preferences and past decisions across chat sessions.
- Automatically builds a local knowledge base from historical conversation logs.
- Provides deep context for long-term technical projects where the agent must track progress.
- Automatically generates FAQs or solution guides from past problem-solving interactions.
- Deploys agents on resource-constrained hardware where a traditional RAG stack is too heavy.
OpenClaw Memory Enhancer: How It Works
- Memory encoding: the system extracts keywords from text and converts them into normalized hash vectors.
- Auto loading: scans the configured memory directory and ingests existing Markdown or JSON files into the local vector store.
- Semantic search: on receiving a query, the system runs a fast keyword pre-filter followed by cosine-similarity scoring.
- Ranking & retrieval: the most relevant memory chunks are ranked and retrieved against a configurable similarity threshold.
- Prompt augmentation: relevant context is injected into the LLM prompt so the agent can give informed answers grounded in historical data.
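The encoding and similarity steps above can be sketched in plain Python. This is an illustrative approximation only: the tokenizer, hash function, and handling of short words here are assumptions, not the skill's exact implementation.

```python
import hashlib
import math
import re

def encode(text, dim=128):
    """Sketch of hash-vector encoding: extract keywords, map each to a
    vector slot via a stable hash, then L2-normalize the result."""
    vec = [0.0] * dim
    # Keyword extraction: lowercase alphanumeric tokens of length >= 2
    for word in re.findall(r"[a-z0-9]+", text.lower()):
        if len(word) < 2:
            continue
        slot = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[slot] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Vectors are already unit-length, so cosine similarity is just the dot product.
    return sum(x * y for x, y in zip(a, b))
```

With this scheme, texts sharing keywords land on shared vector slots, so `cosine(encode("dark mode preference"), encode("user prefers dark mode"))` scores higher than a comparison against unrelated text.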
OpenClaw Memory Enhancer Configuration Guide
To install via ClawHub, use the following command:
clawhub install openclaw-memory-enhancer
Manual installation via Git:
git clone https://github.com/henryfcb/openclaw-memory-enhancer.git ~/.openclaw/skills/openclaw-memory-enhancer
To initialize the edge-optimized version and load memories:
cd ~/.openclaw/skills/openclaw-memory-enhancer
python3 memory_enhancer_edge.py --load
OpenClaw Memory Enhancer Data Architecture & Taxonomy
The skill organizes data under ~/.openclaw/workspace/knowledge-base/ using a structured taxonomy to improve retrieval accuracy for OpenClaw skills:
| Memory Type | Description | File Context |
|---|---|---|
| daily_log | Chronological activity | memory/YYYY-MM-DD.md |
| core_memory | Foundational agent rules | System instructions |
| qa | Question-answer pairs | FAQ generation |
| preference | User-specific settings | Personalization data |
| solution | Technical walkthroughs | Implementation guides |
Storage uses a lightweight JSON format in the Edge version and NumPy-backed arrays in the Standard version.
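For illustration, a single record in the Edge version's JSON store might round-trip like this. The field names mirror the `add_memory` parameters but are otherwise assumptions about the actual on-disk schema, which is defined in memory_enhancer_edge.py.

```python
import json
import os
import tempfile

# Hypothetical record shape for the Edge store (field names are assumptions).
record = {
    "content": "User prefers dark mode",
    "source": "chat",
    "memory_type": "preference",
    "vector": [0.0] * 128,  # 128-dim hash vector, L2-normalized
}

path = os.path.join(tempfile.gettempdir(), "memories.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump([record], f)  # the whole store is just a JSON array of records

with open(path, encoding="utf-8") as f:
    loaded = json.load(f)
```

Because the store is plain JSON, it can be inspected, versioned, or edited with any text tool, which is part of what keeps the Edge version dependency-free.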
name: openclaw-memory-enhancer
description: "Edge-optimized RAG memory system for OpenClaw with semantic search. Automatically loads memory files, provides intelligent recall, and enhances conversations with relevant context. Perfect for Jetson and edge devices (<10MB memory)."
homepage: https://github.com/henryfcb/openclaw-memory-enhancer
metadata:
  openclaw:
    emoji: "??"
    requires:
      bins: ["python3"]
      env: []
    install: []
OpenClaw Memory Enhancer
Give OpenClaw long-term memory - remember important information across sessions and automatically recall relevant context for conversations.
Core Capabilities
| Capability | Description |
|---|---|
| Semantic Search | Vector similarity search, understanding intent, not just keywords |
| Auto Load | Automatically reads all files from the memory/ directory |
| Smart Recall | Finds relevant historical memory during conversations |
| Memory Graph | Builds connections between related memories |
| Local Storage | 100% local, no cloud, complete privacy |
| Edge Optimized | <10MB memory, runs on Jetson/Raspberry Pi |
Quick Reference
| Task | Command (Edge Version) | Command (Standard Version) |
|---|---|---|
| Load memories | python3 memory_enhancer_edge.py --load | python3 memory_enhancer.py --load |
| Search | --search "query" | --search "query" |
| Add memory | --add "content" | --add "content" |
| Export | --export | --export |
| Stats | --stats | --stats |
When to Use
Use this skill when:
- You want OpenClaw to remember things across sessions
- You need to build a knowledge base from chat history
- You're working on long-term projects that need context
- You want automatic FAQ generation from conversations
- You're running on edge devices with limited memory
Don't use when:
- Simple note-taking apps are sufficient
- You don't need cross-session memory
- You have plenty of memory and want maximum accuracy (use standard version)
Versions
Edge Version (Recommended)
Best for: Jetson, Raspberry Pi, embedded devices
python3 memory_enhancer_edge.py --load
Features:
- Zero dependencies (Python stdlib only)
- Memory usage < 10MB
- Lightweight keyword + vector matching
- Perfect for resource-constrained devices
Standard Version
Best for: Desktop/server, maximum accuracy
pip install sentence-transformers numpy
python3 memory_enhancer.py --load
Features:
- Uses sentence-transformers for high-quality embeddings
- Better semantic understanding
- Memory usage 50-100MB
- Requires model download (~50MB)
Installation
Via ClawHub (Recommended)
clawhub install openclaw-memory-enhancer
Via Git
git clone https://github.com/henryfcb/openclaw-memory-enhancer.git ~/.openclaw/skills/openclaw-memory-enhancer
Usage Examples
Command Line
# Load existing OpenClaw memories
cd ~/.openclaw/skills/openclaw-memory-enhancer
python3 memory_enhancer_edge.py --load
# Search for memories
python3 memory_enhancer_edge.py --search "voice-call plugin setup"
# Add a new memory
python3 memory_enhancer_edge.py --add "User prefers dark mode"
# Show statistics
python3 memory_enhancer_edge.py --stats
# Export to Markdown
python3 memory_enhancer_edge.py --export
Python API
from memory_enhancer_edge import MemoryEnhancerEdge

# Initialize
memory = MemoryEnhancerEdge()

# Load existing memories
memory.load_openclaw_memory()

# Search for relevant memories
results = memory.search_memory("AI trends report", top_k=3)
for r in results:
    print(f"[{r['similarity']:.2f}] {r['content'][:100]}...")

# Recall context for a conversation
context = memory.recall_for_prompt("Help me check billing")
# Returns formatted memory context

# Add new memory
memory.add_memory(
    content="User prefers direct results",
    source="chat",
    memory_type="preference"
)
OpenClaw Integration
# In your OpenClaw agent
from skills.openclaw_memory_enhancer.memory_enhancer_edge import MemoryEnhancerEdge

class EnhancedAgent:
    def __init__(self):
        self.memory = MemoryEnhancerEdge()
        self.memory.load_openclaw_memory()

    def process(self, user_input: str) -> str:
        # 1. Recall relevant memories
        memory_context = self.memory.recall_for_prompt(user_input)

        # 2. Enhance prompt with context
        enhanced_prompt = f"""
{memory_context}

User: {user_input}
"""

        # 3. Call LLM with enhanced context
        response = call_llm(enhanced_prompt)
        return response
Memory Types
| Type | Description | Example |
|---|---|---|
| daily_log | Daily memory files | memory/2026-02-22.md |
| capability | Capability records | Skills, tools |
| core_memory | Core conventions | Important rules |
| qa | Question & answer | Q: How to... A: You should... |
| instruction | Direct instructions | "Remember: always do X" |
| solution | Technical solutions | Step-by-step guides |
| preference | User preferences | "User likes dark mode" |
How It Works
Memory Encoding (Edge Version)
- Keyword Extraction: Extract important words from text
- Hash Vector: Map keywords to vector positions
- Normalization: L2 normalize the vector
- Storage: Save to local JSON file
Memory Retrieval
- Query Encoding: Convert query to same vector format
- Keyword Pre-filter: Fast filter by common keywords
- Similarity Calculation: Cosine similarity between vectors
- Ranking: Return top-k most similar memories
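The four retrieval steps above can be sketched as a self-contained example. Here bag-of-words counts stand in for the real 128-dim hash vectors, and the function names and store layout are assumptions made for illustration:

```python
import re
from collections import Counter
from math import sqrt

def bow(text):
    # Bag-of-words counts stand in for the skill's hash vectors;
    # tokens shorter than 2 chars are dropped (cf. min_keyword_len).
    return Counter(w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) >= 2)

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, memories, top_k=5, threshold=0.3):
    q = bow(query)  # 1. Query encoding
    # 2. Keyword pre-filter: skip memories sharing no token with the query.
    candidates = [m for m in memories if q.keys() & bow(m).keys()]
    # 3. Cosine similarity, with thresholding.
    scored = [(cosine(q, bow(m)), m) for m in candidates if cosine(q, bow(m)) >= threshold]
    # 4. Ranking: return the top-k most similar memories.
    return sorted(scored, key=lambda p: p[0], reverse=True)[:top_k]
```

The pre-filter is what keeps query latency low: the relatively expensive similarity computation only runs on memories that share at least one keyword with the query.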
Privacy Protection
- All data stored locally in ~/.openclaw/workspace/knowledge-base/
- No network requests
- No external API calls
- No data leaves your device
Technical Specifications
Edge Version
Vector Dimensions: 128
Memory Usage: < 10MB
Dependencies: None (Python stdlib)
Storage Format: JSON
Max Memories: 1000 (configurable)
Query Latency: < 100ms
Standard Version
Vector Dimensions: 384
Memory Usage: 50-100MB
Dependencies: sentence-transformers, numpy
Storage Format: NumPy + JSON
Model Size: ~50MB download
Query Latency: < 50ms
Configuration
Edit these parameters in the code:
self.config = {
    "vector_dim": 128,        # Vector dimensions
    "max_memory_size": 1000,  # Max number of memories
    "chunk_size": 500,        # Content chunk size
    "min_keyword_len": 2,     # Minimum keyword length
}
Troubleshooting
No results found
# Lower the threshold
results = memory.search_memory(query, threshold=0.2) # Default 0.3
# Increase top_k
results = memory.search_memory(query, top_k=10) # Default 5
Memory limit reached
The system automatically removes oldest memories when limit is reached.
To increase limit:
self.config["max_memory_size"] = 5000 # Increase from 1000
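The "remove oldest when full" behavior can be mimicked with a bounded deque. This is a hypothetical sketch, not the skill's actual code; the class and field names are made up for illustration:

```python
from collections import deque

class BoundedMemoryStore:
    """FIFO eviction sketch: oldest memories are dropped once the cap is hit."""

    def __init__(self, max_memory_size=1000):
        # deque(maxlen=...) silently discards the oldest item on overflow
        self.items = deque(maxlen=max_memory_size)

    def add(self, content):
        self.items.append(content)

store = BoundedMemoryStore(max_memory_size=3)
for i in range(5):
    store.add(f"memory-{i}")
# Only the three most recent memories survive.
```

Raising max_memory_size trades memory footprint for a longer retention window, which is why the limit is left configurable.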
Slow performance
- Use Edge version instead of Standard
- Reduce max_memory_size
- Use keyword pre-filtering (automatic)
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a Pull Request
License
MIT License - See LICENSE file for details.
Acknowledgments
- Built for the OpenClaw ecosystem
- Optimized for edge computing devices
- Inspired by long-term memory systems in AI
Not an official OpenClaw or Moonshot AI product.
Users must provide their own OpenClaw workspace and API keys.