A/B Test Setup: Optimizing Experiments with Openclaw Skills

Author: Internet

2026-04-17

AI Tutorials

What Is A/B Test Setup?

This skill gives developers and growth engineers a structured approach to experimentation, so they can stop relying on guesswork. With Openclaw Skills, you design tests that produce actionable insights through clear hypotheses, precise sample-size calculations, and robust metric selection. It bridges the gap between technical implementation and statistical rigor, ensuring that every variant you test contributes meaningfully to business growth.

Whether you are optimizing a homepage CTA or testing complex backend logic, this skill provides the templates and guardrails needed to keep your results valid. It covers everything from the initial assessment and traffic allocation to the final analysis, making Openclaw Skills an indispensable part of any data-driven development workflow.

Download: https://github.com/openclaw/skills/tree/main/skills/rdewolff/ab-test-setup

Installation & Download

1. ClawHub CLI

The fastest way to install the skill directly from the source.

npx clawhub@latest install ab-test-setup

2. Manual Installation

Copy the skill folder to one of the following locations:

  • Global mode: ~/.openclaw/skills/
  • Workspace: /skills/

Priority: workspace > local > built-in

3. Prompt-Based Installation

Copy this prompt into OpenClaw to install the skill automatically:

Please install ab-test-setup using Clawhub. If Clawhub is not installed yet, install it first (npm i -g clawhub).

A/B Test Setup Use Cases

  • Plan split tests to improve homepage click-through rates.
  • Design multivariate tests (MVT) for complex pricing-page experiments.
  • Calculate the required sample size and test duration before launching a variant.
  • Establish primary, secondary, and guardrail metrics for a new feature release.
  • Document experiment hypotheses using a structured, data-driven framework.

How A/B Test Setup Works

  1. Run an initial assessment to define the test context, current baseline conversion rate, and technical constraints.
  2. Formulate a strong hypothesis using the Observation-Change-Effect-Audience-Metric framework.
  3. Choose the experiment type (A/B, A/B/n, MVT, or Split URL) and calculate the required sample size and duration from your traffic.
  4. Select a primary success metric, supporting secondary metrics, and protective guardrail metrics that prevent business harm.
  5. Design the control and variant experiences, keeping changes to a single variable for clean interpretation of the data.
  6. Implement the test client-side or server-side and monitor for technical issues during launch.
  7. Analyze the results for statistical and practical significance before recording the learnings in a central repository.

A/B Test Setup Configuration Guide

To integrate this experimentation framework into your workflow with Openclaw Skills, follow these steps:

  1. Make sure your tracking environment is ready and integrated with your analytics provider.
  2. Define your baseline metrics in your preferred analytics tool (such as PostHog, Optimizely, or VWO).
  3. Initialize a new test plan by providing your current conversion data and traffic volume.

# Example of how to structure a test-plan request
openclaw run ab-test-setup --goal "increase-signup-rate"

  4. Follow the interactive prompts to generate the hypothesis and variant documentation.

A/B Test Setup Data Schema & Taxonomy

The skill organizes experiment data into a structured format for easy documentation and replication. Below is the metadata taxonomy used by Openclaw Skills:

Field            Description                                                    Type
hypothesis       Structured prediction based on observation and data            string
test_type        A/B, A/B/n, MVT, or Split URL                                  enum
sample_size      Computed users required per variant for statistical power      integer
variants         List of changes with descriptions and mockups                  array
primary_metric   Core success metric tied to business value                     string
significance     p-value threshold for declaring success (typically 0.05)       float
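As an illustration, a populated record following this schema might look like the sketch below (all field values are made up):

```python
import json

test_plan = {
    "hypothesis": ("Because heatmaps show new visitors miss the CTA, "
                   "a larger high-contrast button will lift CTA clicks by 15%+."),
    "test_type": "A/B",            # one of: A/B, A/B/n, MVT, Split URL
    "sample_size": 8000,           # required users per variant
    "primary_metric": "cta_click_through_rate",
    "variants": [
        {"name": "control", "description": "Current blue button"},
        {"name": "variant_b", "description": "Larger orange button"},
    ],
    "significance": 0.05,          # p-value threshold
}

print(json.dumps(test_plan, indent=2))
```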
The skill itself ships as a definition file, beginning with its manifest:

name: ab-test-setup
description: When the user wants to plan, design, or implement an A/B test or experiment. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," or "hypothesis." For tracking implementation, see analytics-tracking.

A/B Test Setup

You are an expert in experimentation and A/B testing. Your goal is to help design tests that produce statistically valid, actionable results.

Initial Assessment

Before designing a test, understand:

  1. Test Context

    • What are you trying to improve?
    • What change are you considering?
    • What made you want to test this?
  2. Current State

    • Baseline conversion rate?
    • Current traffic volume?
    • Any historical test data?
  3. Constraints

    • Technical implementation complexity?
    • Timeline requirements?
    • Tools available?

Core Principles

1. Start with a Hypothesis

  • Not just "let's see what happens"
  • Specific prediction of outcome
  • Based on reasoning or data

2. Test One Thing

  • Single variable per test
  • Otherwise you don't know what worked
  • Save MVT for later

3. Statistical Rigor

  • Pre-determine sample size
  • Don't peek and stop early
  • Commit to the methodology

4. Measure What Matters

  • Primary metric tied to business value
  • Secondary metrics for context
  • Guardrail metrics to prevent harm

Hypothesis Framework

Structure

Because [observation/data],
we believe [change]
will cause [expected outcome]
for [audience].
We'll know this is true when [metrics].

Examples

Weak hypothesis: "Changing the button color might increase clicks."

Strong hypothesis: "Because users report difficulty finding the CTA (per heatmaps and feedback), we believe making the button larger and using contrasting color will increase CTA clicks by 15%+ for new visitors. We'll measure click-through rate from page view to signup start."

Good Hypotheses Include

  • Observation: What prompted this idea
  • Change: Specific modification
  • Effect: Expected outcome and direction
  • Audience: Who this applies to
  • Metric: How you'll measure success

Test Types

A/B Test (Split Test)

  • Two versions: Control (A) vs. Variant (B)
  • Single change between versions
  • Most common, easiest to analyze

A/B/n Test

  • Multiple variants (A vs. B vs. C...)
  • Requires more traffic
  • Good for testing several options

Multivariate Test (MVT)

  • Multiple changes in combinations
  • Tests interactions between changes
  • Requires significantly more traffic
  • Complex analysis

Split URL Test

  • Different URLs for variants
  • Good for major page changes
  • Easier implementation sometimes

Sample Size Calculation

Inputs Needed

  1. Baseline conversion rate: Your current rate
  2. Minimum detectable effect (MDE): Smallest change worth detecting
  3. Statistical significance level: Usually 95%
  4. Statistical power: Usually 80%
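These four inputs feed the standard two-proportion sample-size formula. A minimal sketch in Python (function name is illustrative; online calculators may differ slightly in their variance assumptions):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline: float, relative_mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per variant for a two-sided two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 at 95% significance
    z_power = NormalDist().inv_cdf(power)          # ~0.84 at 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Example: 5% baseline, detect a 20% relative lift (5% -> 6%)
print(sample_size_per_variant(0.05, 0.20))  # on the order of 8k users per variant
```

The smaller the baseline rate or the lift you want to detect, the more users you need, which is exactly the pattern in the quick-reference table below.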

Quick Reference

Baseline Rate   10% Lift       20% Lift      50% Lift
1%              150k/variant   39k/variant   6k/variant
3%              47k/variant    12k/variant   2k/variant
5%              27k/variant    7k/variant    1.2k/variant
10%             12k/variant    3k/variant    550/variant

Formula Resources

  • Evan Miller's calculator: https://www.evanmiller.org/ab-testing/sample-size.html
  • Optimizely's calculator: https://www.optimizely.com/sample-size-calculator/

Test Duration

Duration (days) = (Sample size needed per variant × Number of variants)
                  ÷ Daily traffic to test page

Minimum: 1-2 business cycles (usually 1-2 weeks)
Maximum: avoid running too long (novelty effects, external factors)


Metrics Selection

Primary Metric

  • Single metric that matters most
  • Directly tied to hypothesis
  • What you'll use to call the test

Secondary Metrics

  • Support primary metric interpretation
  • Explain why/how the change worked
  • Help understand user behavior

Guardrail Metrics

  • Things that shouldn't get worse
  • Revenue, retention, satisfaction
  • Stop test if significantly negative

Metric Examples by Test Type

Homepage CTA test:

  • Primary: CTA click-through rate
  • Secondary: Time to click, scroll depth
  • Guardrail: Bounce rate, downstream conversion

Pricing page test:

  • Primary: Plan selection rate
  • Secondary: Time on page, plan distribution
  • Guardrail: Support tickets, refund rate

Signup flow test:

  • Primary: Signup completion rate
  • Secondary: Field-level completion, time to complete
  • Guardrail: User activation rate (post-signup quality)

Designing Variants

Control (A)

  • Current experience, unchanged
  • Don't modify during test

Variant (B+)

Best practices:

  • Single, meaningful change
  • Bold enough to make a difference
  • True to the hypothesis

What to vary:

Headlines/Copy:

  • Message angle
  • Value proposition
  • Specificity level
  • Tone/voice

Visual Design:

  • Layout structure
  • Color and contrast
  • Image selection
  • Visual hierarchy

CTA:

  • Button copy
  • Size/prominence
  • Placement
  • Number of CTAs

Content:

  • Information included
  • Order of information
  • Amount of content
  • Social proof type

Documenting Variants

Control (A):
- Screenshot
- Description of current state

Variant (B):
- Screenshot or mockup
- Specific changes made
- Hypothesis for why this will win

Traffic Allocation

Standard Split

  • 50/50 for A/B test
  • Equal split for multiple variants

Conservative Rollout

  • 90/10 or 80/20 initially
  • Limits risk of bad variant
  • Longer to reach significance

Ramping

  • Start small, increase over time
  • Good for technical risk mitigation
  • Most tools support this

Considerations

  • Consistency: Users see same variant on return
  • Segment sizes: Ensure segments are large enough
  • Time of day/week: Balanced exposure
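The consistency requirement is usually met with deterministic bucketing: hash the user ID together with the experiment name, so a returning user always lands in the same variant. A minimal sketch (function and experiment names are illustrative):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, weights: dict) -> str:
    """Deterministically bucket a user: the same user + experiment pair
    always maps to the same variant across visits."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    cumulative = 0.0
    for variant, weight in weights.items():
        cumulative += weight
        if point <= cumulative:
            return variant
    return variant  # guard against floating-point rounding

print(assign_variant("user-42", "homepage-cta", {"control": 0.5, "variant_b": 0.5}))
```

Changing the weights (e.g. `{"control": 0.9, "variant_b": 0.1}`) gives the conservative rollout described above without reshuffling existing users between the remaining buckets' boundaries.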

Implementation Approaches

Client-Side Testing

Tools: PostHog, Optimizely, VWO, custom

How it works:

  • JavaScript modifies page after load
  • Quick to implement
  • Can cause flicker

Best for:

  • Marketing pages
  • Copy/visual changes
  • Quick iteration

Server-Side Testing

Tools: PostHog, LaunchDarkly, Split, custom

How it works:

  • Variant determined before page renders
  • No flicker
  • Requires development work

Best for:

  • Product features
  • Complex changes
  • Performance-sensitive pages

Feature Flags

  • Binary on/off (not true A/B)
  • Good for rollouts
  • Can convert to A/B with percentage split

Running the Test

Pre-Launch Checklist

  • Hypothesis documented
  • Primary metric defined
  • Sample size calculated
  • Test duration estimated
  • Variants implemented correctly
  • Tracking verified
  • QA completed on all variants
  • Stakeholders informed

During the Test

DO:

  • Monitor for technical issues
  • Check segment quality
  • Document any external factors

DON'T:

  • Peek at results and stop early
  • Make changes to variants
  • Add traffic from new sources
  • End early because you "know" the answer

Peeking Problem

Looking at results before reaching sample size and stopping when you see significance leads to:

  • False positives
  • Inflated effect sizes
  • Wrong decisions

Solutions:

  • Pre-commit to sample size and stick to it
  • Use sequential testing if you must peek
  • Trust the process

Analyzing Results

Statistical Significance

  • 95% confidence = p-value < 0.05
  • Means: <5% chance of seeing a result this extreme if there were no real difference
  • Not a guarantee—just a threshold
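As a sketch, the p-value for a difference in conversion rates can be computed with a two-proportion z-test (one common choice; the counts below are invented):

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates (z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example: 500/10,000 conversions in control vs. 590/10,000 in the variant
p = two_proportion_p_value(500, 10_000, 590, 10_000)
print(f"p = {p:.4f}")  # p ≈ 0.005, comfortably below the 0.05 threshold
```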

Practical Significance

Statistical ≠ Practical

  • Is the effect size meaningful for business?
  • Is it worth the implementation cost?
  • Is it sustainable over time?
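A back-of-the-envelope projection helps answer the "is it worth it" question (all numbers invented):

```python
monthly_traffic = 100_000
baseline_rate = 0.050   # control conversion rate
variant_rate = 0.059    # rate observed in the winning variant

extra_conversions = round(monthly_traffic * (variant_rate - baseline_rate))
print(extra_conversions)  # → 900 extra conversions per month
```

If those 900 conversions don't cover the cost of building and maintaining the variant, the result is statistically significant but not practically significant.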

What to Look At

  1. Did you reach sample size?

    • If not, result is preliminary
  2. Is it statistically significant?

    • Check confidence intervals
    • Check p-value
  3. Is the effect size meaningful?

    • Compare to your MDE
    • Project business impact
  4. Are secondary metrics consistent?

    • Do they support the primary?
    • Any unexpected effects?
  5. Any guardrail concerns?

    • Did anything get worse?
    • Long-term risks?
  6. Segment differences?

    • Mobile vs. desktop?
    • New vs. returning?
    • Traffic source?

Interpreting Results

Result                      Conclusion
Significant winner          Implement variant
Significant loser           Keep control, learn why
No significant difference   Need more traffic or bolder test
Mixed signals               Dig deeper, maybe segment

Documenting and Learning

Test Documentation

Test Name: [Name]
Test ID: [ID in testing tool]
Dates: [Start] - [End]
Owner: [Name]

Hypothesis:
[Full hypothesis statement]

Variants:
- Control: [Description + screenshot]
- Variant: [Description + screenshot]

Results:
- Sample size: [achieved vs. target]
- Primary metric: [control] vs. [variant] ([% change], [confidence])
- Secondary metrics: [summary]
- Segment insights: [notable differences]

Decision: [Winner/Loser/Inconclusive]
Action: [What we're doing]

Learnings:
[What we learned, what to test next]

Building a Learning Repository

  • Central location for all tests
  • Searchable by page, element, outcome
  • Prevents re-running failed tests
  • Builds institutional knowledge

Output Format

Test Plan Document

# A/B Test: [Name]

## Hypothesis
[Full hypothesis using framework]

## Test Design
- Type: A/B / A/B/n / MVT
- Duration: X weeks
- Sample size: X per variant
- Traffic allocation: 50/50

## Variants
[Control and variant descriptions with visuals]

## Metrics
- Primary: [metric and definition]
- Secondary: [list]
- Guardrails: [list]

## Implementation
- Method: Client-side / Server-side
- Tool: [Tool name]
- Dev requirements: [If any]

## Analysis Plan
- Success criteria: [What constitutes a win]
- Segment analysis: [Planned segments]

Results Summary

Delivered when the test is complete.

Recommendations

Next steps based on the results.


Common Mistakes

Test Design

  • Testing too small a change (undetectable)
  • Testing too many things (can't isolate)
  • No clear hypothesis
  • Wrong audience

Execution

  • Stopping early
  • Changing things mid-test
  • Not checking implementation
  • Uneven traffic allocation

Analysis

  • Ignoring confidence intervals
  • Cherry-picking segments
  • Over-interpreting inconclusive results
  • Not considering practical significance

Questions to Ask

If you need more context:

  1. What's your current conversion rate?
  2. How much traffic does this page get?
  3. What change are you considering and why?
  4. What's the smallest improvement worth detecting?
  5. What tools do you have for testing?
  6. Have you tested this area before?

Related Skills

  • page-cro: For generating test ideas based on CRO principles
  • analytics-tracking: For setting up test measurement
  • copywriting: For creating variant copy
