Prometheus Monitoring Guide: Patterns and Best Practices - Openclaw Skills

Author: Internet

2026-03-27


What Is Prometheus Monitoring and Observability?

This resource from Openclaw Skills takes a deep dive into professional Prometheus monitoring patterns, with a focus on maintaining system stability and data accuracy. It addresses the critical challenge of cardinality explosions, where high-cardinality labels such as user IDs degrade performance, and provides effective strategies for label management and relabeling.

Beyond basic metrics, the skill also explores the nuances of histograms versus summaries, PromQL functions such as rate and increase, and advanced scrape configuration. By following these patterns, developers can build a more resilient observability stack that delivers clear insight without the noise of misconfigured alerts or inefficient queries.

Download: https://github.com/openclaw/skills/tree/main/skills/ivangdavila/prom

Installation and Download

1. ClawHub CLI

The fastest way to install the skill directly from the source.

npx clawhub@latest install prom

2. Manual Installation

Copy the skill folder to one of the following locations:

  • Global mode: ~/.openclaw/skills/
  • Workspace: /skills/

Priority: workspace > local > built-in

3. Prompt-Based Installation

Copy this prompt into OpenClaw to install automatically.

Please help me install prom using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).

Prometheus Monitoring and Observability: Use Cases

  • Prevent TSDB performance problems by identifying and dropping high-cardinality labels such as request IDs.
  • Implement service level objectives (SLOs) using histograms with properly defined bucket boundaries.
  • Optimize monitoring resource usage by pre-computing expensive PromQL queries with recording rules.
  • Reduce alert fatigue by switching from cause-based to symptom-based alerting.

Prometheus Monitoring and Observability: How It Works

  1. Analyze metric cardinality to keep the number of unique time series within a range the Prometheus TSDB can manage.
  2. Configure scrape jobs with appropriate intervals and timeouts, making sure scrape_timeout always stays below the interval.
  3. Apply PromQL functions accurately: use rate() on counters and make range selectors at least four times the scrape interval.
  4. Deploy alerting rules that include a for clause (to prevent flapping) and a runbook_url (for actionable on-call response).
  5. Use recording rules to aggregate high-cardinality data into low-cardinality series for long-term storage and faster dashboards.

Prometheus Monitoring and Observability: Configuration Guide

To start using these patterns in Openclaw Skills, make sure Prometheus is installed and its configuration is valid. You can validate alerting and recording rules with:

promtool check rules /path/to/your/rules.yml

For dynamic environments, replace static_configs with file_sd_configs or service discovery so that targets are managed automatically without restarting the service.
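As a sketch, a file_sd_configs target file might look like the following (the path, addresses, and labels are all hypothetical). Prometheus re-reads the file on a schedule, so edits take effect without a restart:

```yaml
# /etc/prometheus/targets/prod.yml (hypothetical path, referenced
# from file_sd_configs in the scrape job)
- targets: ['10.0.0.1:9100', '10.0.0.2:9100']
  labels:
    env: prod        # low-cardinality labels attached to every target
    service: node
```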

Prometheus Monitoring and Observability: Data Architecture and Taxonomy

Prometheus organizes data into time series identified by a metric name and a set of key-value pairs (labels). Effective data organization follows this taxonomy:

  • Metric names: follow snake_case and include the base unit (e.g. http_requests_total).
  • Labels: low-cardinality dimensions used for filtering (e.g. env, service, instance).
  • Histograms: identified by the _bucket, _sum, and _count suffixes; used for latency analysis.
  • Recording rules: named per the level:metric:operations convention for clarity.

name: Prometheus
description: Prometheus monitoring patterns, cardinality management, alerting best practices, and PromQL traps.
metadata:
  category: infrastructure
  skills: ["prometheus", "monitoring", "alerting", "metrics", "observability"]

Cardinality Explosions

  • Every unique label combination creates a new time series — user_id as label kills Prometheus
  • Avoid high-cardinality labels: user IDs, email addresses, request IDs, timestamps, UUIDs
  • Check cardinality: prometheus_tsdb_head_series metric — above 1M series needs attention
  • Use histograms for latency, not per-request labels — buckets are fixed cardinality
  • Relabeling can drop dangerous labels before ingestion: labeldrop in scrape config
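A minimal sketch of dropping a dangerous label at scrape time (the job name, target, and label are illustrative):

```yaml
scrape_configs:
  - job_name: myapp                 # hypothetical job
    static_configs:
      - targets: ['localhost:8080']
    metric_relabel_configs:
      # Strip the high-cardinality user_id label before samples are ingested;
      # metric_relabel_configs runs after the scrape, before storage.
      - action: labeldrop
        regex: user_id
```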

Histogram vs Summary

  • Histograms: use for SLOs, aggregatable across instances, buckets defined upfront
  • Summaries: use when you need exact percentiles, cannot aggregate across instances
  • Histogram bucket boundaries must be defined before data arrives — wrong buckets = wrong percentiles
  • Default buckets (.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10) assume HTTP latency — adjust for your use case
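Because histogram buckets are fixed-cardinality counters, they can be summed across instances before computing a percentile. A sketch assuming a conventional http_request_duration_seconds histogram:

```promql
# Fleet-wide 95th percentile latency: sum the per-instance bucket
# rates by `le`, then compute the quantile over the merged buckets.
histogram_quantile(0.95,
  sum by (le) (rate(http_request_duration_seconds_bucket[5m])))
```

This aggregation is exactly what summaries cannot do: averaging pre-computed per-instance percentiles gives a statistically meaningless number.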

Rate and Increase

  • rate() requires range selector at least 4x scrape interval — rate(metric[1m]) with 30s scrape misses data
  • rate() is per-second, increase() is total over range — don't confuse them
  • Counter resets on restart — rate() handles this, raw delta doesn't
  • irate() uses only last two samples — too spiky for alerting, use rate() for alerts
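The distinction can be sketched with two queries against a counter (the metric name is assumed):

```promql
# Per-second average rate over the last 5m; with a 30s scrape
# interval this window holds ~10 samples, comfortably above the 4x rule.
rate(http_requests_total[5m])

# Approximate total increase over the last hour
# (extrapolated from samples in range; counter resets are handled).
increase(http_requests_total[1h])
```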

Alerting Mistakes

  • Alert on symptoms, not causes — "high latency" not "high CPU"
  • for clause prevents flapping: for: 5m means condition must hold 5 minutes before firing
  • Missing for clause = fires immediately on first match = noisy
  • Alerts need runbook_url label — on-call needs to know what to do, not just that something's wrong
  • Test alerts with promtool check rules — syntax errors discovered at 3am are bad
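Putting these points together, a symptom-based alert might look like the following sketch (metric name, threshold, and runbook URL are all placeholders; the runbook link is shown here as an annotation):

```yaml
groups:
  - name: latency-slo
    rules:
      - alert: HighRequestLatency
        # Symptom (what users experience), not cause (CPU, GC, etc.)
        expr: |
          histogram_quantile(0.95,
            sum by (le) (rate(http_request_duration_seconds_bucket[5m]))) > 0.5
        for: 5m                      # must hold 5 minutes before firing
        labels:
          severity: page
        annotations:
          summary: "p95 latency above 500ms for 5 minutes"
          runbook_url: "https://wiki.example.com/runbooks/high-latency"  # placeholder
```

Run promtool check rules against the file before deploying, per the point above.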

PromQL Traps

  • and is intersection by labels, not boolean AND — results must have matching label sets
  • or fills in missing series, doesn't do boolean OR on values
  • Selectors without a metric name (e.g. {job="api"}) are expensive — they scan across all metrics
  • offset goes back in time: metric offset 1h is value from 1 hour ago
  • Comparison operators filter series: http_requests > 100 drops series below 100, doesn't return boolean

Scrape Configuration

  • honor_labels: true trusts source labels — use only when source is authoritative (e.g., Pushgateway)
  • scrape_timeout must be less than scrape_interval — otherwise overlapping scrapes
  • Static configs don't reload without restart — use file_sd or service discovery for dynamic targets
  • TLS verification disabled (insecure_skip_verify) should be temporary, never permanent
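A scrape job sketch illustrating the interval/timeout relationship and file-based discovery (job name and paths are assumptions):

```yaml
scrape_configs:
  - job_name: myapp              # hypothetical job
    scrape_interval: 30s
    scrape_timeout: 10s          # must stay below scrape_interval
    file_sd_configs:             # targets change without a Prometheus restart
      - files:
          - /etc/prometheus/targets/*.yml
        refresh_interval: 5m     # how often the files are re-read
```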

Pushgateway Pitfalls

  • Pushgateway is for batch jobs, not services — services should expose /metrics
  • Metrics persist until deleted — stale metrics from dead jobs confuse dashboards
  • Add job and instance labels to distinguish sources — default grouping hides failures
  • Delete metrics when job completes: curl -X DELETE http://pushgateway/metrics/job/myjob

Recording Rules

  • Pre-compute expensive queries: record: job:request_duration_seconds:rate5m
  • Naming convention: level:metric:operations — helps identify what rules produce
  • Recording rules update every evaluation interval — not instant, plan for slight delay
  • Reduce cardinality with recording rules: aggregate away labels you don't need for alerting
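A minimal recording-rule sketch following the level:metric:operations convention (the underlying metric name is assumed):

```yaml
groups:
  - name: request-rates
    interval: 1m                 # evaluation interval; results lag by up to this much
    rules:
      # job-level series: the instance label is aggregated away,
      # reducing cardinality for dashboards and long-term storage.
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```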

Federation and Remote Write

  • Federation for pulling from other Prometheus — use sparingly, adds latency
  • Remote write for long-term storage — Prometheus local storage is not durable
  • Remote write can buffer during outages — but buffer is finite, data loss on extended outages
  • Prometheus is not highly available by default — run two instances scraping same targets
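A remote_write sketch with an explicit queue configuration (the endpoint is a placeholder and the buffer sizes are illustrative; once the queue overflows during an extended outage, samples are still dropped):

```yaml
remote_write:
  - url: https://metrics-store.example.com/api/v1/write   # placeholder endpoint
    queue_config:
      capacity: 10000            # samples buffered per shard during outages
      max_shards: 50             # upper bound on parallel senders
      max_samples_per_send: 2000 # batch size per request
```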

Common Operational Issues

  • TSDB corruption on unclean shutdown — use --storage.tsdb.wal-compression and monitor disk space
  • Memory grows with series count — each series costs ~3KB RAM
  • Compaction pauses during high load — leave 40% disk headroom
  • Scrape targets stuck "Unknown" — check network, firewall, target actually exposing /metrics

Label Best Practices

  • Use labels for dimensions you'll filter/aggregate by — environment, service, instance
  • Keep label values low-cardinality — tens or hundreds, not thousands
  • Consistent naming: snake_case, prefix with domain: http_requests_total, node_cpu_seconds_total
  • le label is reserved for histogram buckets — don't use for other purposes