TimescaleDB: High-Performance Time-Series SQL - Openclaw Skills
Author: Internet
2026-03-30
What Is TimescaleDB?
TimescaleDB is a powerful PostgreSQL extension designed for processing time-series data at scale. By introducing "hypertables", it automatically partitions data by time, sustaining high performance even as a dataset grows to billions of rows. This integration in the Openclaw Skills library lets developers keep familiar SQL syntax while benefiting from specialized time-series functions and automated data management.
The core value of this skill is bridging the gap between relational databases and dedicated time-series stores. It lets time-series metrics be joined seamlessly with relational metadata, providing a unified platform for monitoring, IoT, and financial applications without the operational overhead of running multiple database systems.
Download: https://github.com/openclaw/skills/tree/main/skills/ivangdavila/timescaledb
Installation & Download
1. ClawHub CLI
The fastest way to install the skill directly from the source:
npx clawhub@latest install timescaledb
2. Manual Installation
Copy the skill folder to one of the following locations:
- Global: ~/.openclaw/skills/
- Workspace: /skills/
Priority: workspace > local > built-in
3. Prompt-Based Installation
Copy this prompt into OpenClaw to install automatically:
Please install timescaledb with Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).
TimescaleDB Use Cases
- Monitoring and alerting on high-throughput system metrics and infrastructure logs.
- Capturing and analyzing real-time IoT sensor data across distributed networks.
- Tracking financial market movements and running high-frequency trading analysis.
- Managing long-term retention of user activity logs and behavioral analytics.
- Creating a standard PostgreSQL table with the required time column and converting it into a hypertable to enable automatic partitioning.
- Tuning the chunk time interval so that each data partition fits comfortably in available system memory.
- Using bulk-load methods or multi-value INSERT commands to maximize ingest throughput.
- Applying the time_bucket function to aggregate raw data into granular intervals for dashboards and reports.
- Implementing continuous aggregates and compression policies to automate background maintenance and optimize storage.
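The workflow in the last five items can be sketched end to end in SQL. This is a minimal sketch, not the skill's canonical schema: the `metrics` table and its `time`, `device_id`, and `value` columns are illustrative assumptions.

```sql
-- Illustrative schema; adjust names and types to your workload
CREATE TABLE metrics (
  time      TIMESTAMPTZ NOT NULL,       -- required time column
  device_id TEXT        NOT NULL,
  value     DOUBLE PRECISION
);

-- Enable automatic time partitioning
SELECT create_hypertable('metrics', 'time');

-- Multi-value INSERT for better ingest throughput
INSERT INTO metrics VALUES
  (now(), 'dev-1', 21.5),
  (now(), 'dev-2', 19.8);

-- Aggregate raw rows into 5-minute buckets for a dashboard
SELECT time_bucket('5 minutes', time) AS bucket,
       device_id,
       avg(value) AS avg_value
FROM metrics
GROUP BY bucket, device_id
ORDER BY bucket;
```

Continuous aggregates and compression policies, covered in the sections below, would then automate the aggregation and storage steps in the background.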
TimescaleDB Configuration Guide
To use this skill, make sure psql is installed on your system, then configure the environment as follows:
- Install the TimescaleDB extension on your PostgreSQL server.
- Connect to your database instance:
psql -h localhost -U postgres
- Enable the extension in the target database:
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
- Define your schema and initialize a hypertable:
SELECT create_hypertable('your_table_name', 'time_column');
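As a concrete instance of the last step, a minimal schema might look like the following; the `sensor_data` table and its columns are placeholders, not names required by the skill.

```sql
-- Placeholder schema for the "define and initialize" step
CREATE TABLE sensor_data (
  time      TIMESTAMPTZ NOT NULL,  -- the required time column
  sensor_id INTEGER,
  reading   DOUBLE PRECISION
);

-- Convert the plain table into a hypertable partitioned on time
SELECT create_hypertable('sensor_data', 'time');
```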
TimescaleDB Data Architecture and Taxonomy
TimescaleDB organizes information into a hierarchical structure that balances write speed with query efficiency. The table below describes the main data components managed by Openclaw Skills:
| Component | Description |
|---|---|
| Hypertable | The virtual table users interact with; the parent of all chunks. |
| Chunk | A physical child table holding data for a specific time range. |
| Continuous aggregate | A specialized materialized view that maintains pre-computed results to speed up queries. |
| Compression policy | Metadata defining when and how historical chunks are converted to columnar format to save storage. |
name: TimescaleDB
description: Store and query time-series data with hypertables, compression, and continuous aggregates.
metadata: {"clawdbot":{"emoji":"??","requires":{"anyBins":["psql"]},"os":["linux","darwin","win32"]}}
Hypertables
- Convert table to hypertable:
SELECT create_hypertable('metrics', 'time')
- Must have a time column (TIMESTAMPTZ recommended)—partition key for chunks
- Call BEFORE inserting data—converting large tables is expensive
- Can't undo easily—plan schema before converting
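When a table already holds rows, `create_hypertable` can still convert it via the `migrate_data` option, which moves existing rows into chunks. A hedged sketch, reusing the `metrics` table from above:

```sql
-- Convert an already-populated table; migrate_data moves existing rows
-- into chunks (this is the expensive path noted above, so prefer
-- converting before loading data when possible)
SELECT create_hypertable('metrics', 'time',
  chunk_time_interval => INTERVAL '7 days',
  migrate_data        => true);
```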
Chunk Interval
- Default 7 days per chunk—tune based on data volume
- Shorten the interval for high-volume data:
SELECT set_chunk_time_interval('metrics', INTERVAL '1 day')
- Chunks should be ~25% of memory—too small = overhead, too large = slow queries
- Check chunk sizes:
SELECT * FROM chunks_detailed_size('metrics')
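To validate the memory guideline in practice, chunk sizes can be inspected directly; this sketch assumes the `metrics` hypertable from earlier sections.

```sql
-- New chunks (not existing ones) pick up the new interval
SELECT set_chunk_time_interval('metrics', INTERVAL '1 day');

-- Per-chunk disk usage, largest first, to check against available memory
SELECT chunk_name,
       pg_size_pretty(total_bytes) AS total_size
FROM chunks_detailed_size('metrics')
ORDER BY total_bytes DESC;
```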
time_bucket
- time_bucket('1 hour', time) groups timestamps—like date_trunc but with arbitrary intervals
- Use in GROUP BY for aggregation:
GROUP BY time_bucket('5 minutes', time)
- Origin parameter for offset:
time_bucket('1 day', time, '2024-01-01'::timestamptz)
- Beats date_trunc for non-standard intervals—15min, 4h, etc.
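Putting these pieces together, a typical dashboard query buckets a bounded time range; the `metrics` schema is the same assumed one used throughout.

```sql
-- 15-minute buckets over the last day; the WHERE clause keeps the
-- scan to recent chunks, and the bucket alias is reused in GROUP BY
SELECT time_bucket('15 minutes', time) AS bucket,
       avg(value) AS avg_value,
       max(value) AS max_value
FROM metrics
WHERE time > now() - INTERVAL '1 day'
GROUP BY bucket
ORDER BY bucket;
```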
Continuous Aggregates
- Materialized views that auto-refresh—pre-compute expensive aggregations
CREATE MATERIALIZED VIEW hourly_stats WITH (timescaledb.continuous) AS SELECT ...
- Add refresh policy:
SELECT add_continuous_aggregate_policy('hourly_stats', ...)
- Query the aggregate view instead of the raw hypertable—orders of magnitude faster
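Filling in the elided parts, a complete continuous aggregate with a refresh policy might look like this; the view body and policy offsets are illustrative choices, not values mandated by the skill.

```sql
-- Hourly roll-up over the assumed metrics table
CREATE MATERIALIZED VIEW hourly_stats
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(value) AS avg_value,
       count(*)   AS samples
FROM metrics
GROUP BY bucket, device_id;

-- Refresh the window from 3 hours ago up to 1 hour ago, every hour
SELECT add_continuous_aggregate_policy('hourly_stats',
  start_offset      => INTERVAL '3 hours',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');
```

Dashboards then query `hourly_stats` rather than `metrics`, hitting pre-computed rows.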
Real-Time Aggregates
- Continuous aggregates include recent data automatically—no stale reads
- For real-time results:
WITH (timescaledb.continuous, timescaledb.materialized_only = false)
- Combines materialized historical + live recent—transparent to queries
- Small performance cost for real-time—disable if batch-only is acceptable
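A sketch of both modes, assuming the same illustrative `metrics` table:

```sql
-- Real-time aggregate: materialized history plus live recent rows
CREATE MATERIALIZED VIEW hourly_stats_rt
WITH (timescaledb.continuous, timescaledb.materialized_only = false) AS
SELECT time_bucket('1 hour', time) AS bucket,
       avg(value) AS avg_value
FROM metrics
GROUP BY bucket;

-- Later, switch to batch-only results if the real-time cost is unwanted
ALTER MATERIALIZED VIEW hourly_stats_rt
  SET (timescaledb.materialized_only = true);
```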
Compression
- Compress old chunks to save 90%+ storage:
ALTER TABLE metrics SET (timescaledb.compress)
- Add compression policy:
SELECT add_compression_policy('metrics', INTERVAL '7 days')
- Compressed chunks are read-only—can't update/delete individual rows
- Decompress for modifications:
SELECT decompress_chunk('chunk_name')
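Compression usually benefits from segment and order settings; the `compress_segmentby` and `compress_orderby` choices below are illustrative for the assumed `metrics` schema, not requirements.

```sql
-- Enable compression, segmenting by device and ordering within
-- segments by time descending (good for per-device range scans)
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby   = 'time DESC'
);

-- Compress chunks once they are older than 7 days
SELECT add_compression_policy('metrics', INTERVAL '7 days');

-- To modify old rows, decompress the affected chunks first
-- (if_compressed => true skips chunks that are not compressed)
SELECT decompress_chunk(c, true)
FROM show_chunks('metrics', older_than => INTERVAL '7 days') AS c;
```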
Retention
- Auto-delete old data:
SELECT add_retention_policy('metrics', INTERVAL '90 days')
- Drops entire chunks—efficient, no row-by-row delete
- Retention runs on scheduler—data persists slightly past interval
- Combine with compression: compress at 7d, drop at 90d
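The compress-at-7d, drop-at-90d combination from the last bullet can be sketched directly, along with a check on the background jobs the policies create:

```sql
-- Compress chunks after 7 days, delete them entirely after 90 days
SELECT add_compression_policy('metrics', INTERVAL '7 days');
SELECT add_retention_policy('metrics', INTERVAL '90 days');

-- Inspect the scheduled background jobs (policies run on this scheduler,
-- which is why data can persist slightly past the retention interval)
SELECT job_id, proc_name, schedule_interval
FROM timescaledb_information.jobs;
```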
Indexing
- Time column auto-indexed in hypertable—don't add redundant index
- Add indexes on filter columns:
CREATE INDEX ON metrics (device_id, time DESC)
- Composite indexes with time last—enables chunk exclusion
- Skip indexes on rarely-filtered columns—each index slows writes
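A quick way to confirm the index and time predicate are paying off is to inspect the plan; `device_id` is from the assumed schema.

```sql
-- Composite index: filter column first, time last
CREATE INDEX ON metrics (device_id, time DESC);

-- The plan should show only recent chunks being scanned
EXPLAIN
SELECT * FROM metrics
WHERE device_id = 'dev-1'
  AND time > now() - INTERVAL '1 day';
```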
Insert Performance
- Batch inserts critical—single-row inserts are slow
- Use COPY or multi-value INSERT:
INSERT INTO metrics VALUES (...), (...), ...
- Parallel COPY with the timescaledb-parallel-copy tool—saturates I/O
- Out-of-order inserts work but slower—prefer time-ordered writes
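Both batching styles can be sketched as follows; column names follow the assumed `metrics` schema, and the CSV path is a placeholder.

```sql
-- Multi-value INSERT: many rows per statement, far faster than
-- one statement per row
INSERT INTO metrics (time, device_id, value) VALUES
  ('2024-01-01 00:00:00+00', 'dev-1', 1.0),
  ('2024-01-01 00:00:10+00', 'dev-1', 1.2),
  ('2024-01-01 00:00:20+00', 'dev-2', 0.9);

-- COPY is faster still for bulk file loads
-- (\copy is a psql client-side meta-command; file path is illustrative)
\copy metrics FROM 'metrics.csv' WITH (FORMAT csv)
```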
Query Patterns
- Always include a time range in WHERE—enables chunk exclusion
- WHERE time > now() - INTERVAL '1 day' skips old chunks entirely
- ORDER BY time DESC with LIMIT for "latest N"—index scan, fast
- Avoid SELECT * on wide tables—fetch only needed columns
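Combining the three bullets, a "latest N" query might look like this (assumed `metrics` schema; note the explicit column list instead of SELECT *):

```sql
-- Latest 100 readings for one device; the time predicate lets the
-- planner exclude old chunks, and the DESC order matches the index
SELECT time, value
FROM metrics
WHERE device_id = 'dev-1'
  AND time > now() - INTERVAL '1 day'
ORDER BY time DESC
LIMIT 100;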
Distributed Hypertables
- Multi-node for horizontal scale—data sharded across nodes
- Create access node + data nodes—access node coordinates queries
- More operational complexity—start single-node, distribute when needed
- Not needed for most workloads—single node handles millions of rows/sec
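For completeness, the access-node setup can be sketched; note this is the TimescaleDB 2.x multi-node API, which has been deprecated in recent releases, and the node names and hosts are placeholders.

```sql
-- Run on the access node; register the data nodes first
SELECT add_data_node('dn1', host => 'dn1.example.com');
SELECT add_data_node('dn2', host => 'dn2.example.com');

-- Shard the hypertable across nodes by time and device
SELECT create_distributed_hypertable('metrics', 'time',
  partitioning_column => 'device_id');
```

As the bullets above note, most workloads should stay single-node.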
