The 100 Conversions Rule: Where It Came From and Why You Should Ignore It

If you’ve spent any time in the A/B testing world, you’ve probably heard the magic number: 100 conversions per variation. It’s repeated in blog posts, testing tools, and marketing courses like gospel. But here’s the uncomfortable truth—this rule is complete rubbish.

The “100 conversions per variation” guideline has become one of the most persistent myths in conversion rate optimisation, and it’s leading marketers down the wrong path. It’s time we called it out for what it is: an oversimplified metric that can seriously damage your testing programme.

Where Did This Rule Actually Come From?

The origins of this rule aren’t rooted in statistical science; they’re rooted in convenience. A round, memorable number is far easier to repeat in blog posts, testing tools, and marketing courses than a proper sample size calculation, and so it spread.

Why 100 Conversions Per Variation Doesn’t Work

Let’s break down why this rule is fundamentally flawed:

It Ignores Your Baseline Conversion Rate

A test with a 1% conversion rate needs vastly different sample sizes compared to one with a 10% conversion rate. The 100-conversion rule treats them identically, which makes no statistical sense.

Consider these two scenarios:

- Scenario A: a 1% baseline conversion rate. Hitting 100 conversions takes roughly 10,000 visitors per variation.
- Scenario B: a 10% baseline conversion rate. Hitting 100 conversions takes roughly 1,000 visitors per variation.

Both might hit 100 conversions, but they require completely different sample sizes to reach statistical significance.

It Doesn’t Account for Effect Size

The minimum detectable effect (MDE) is crucial for determining sample size. If you’re looking for a 5% improvement versus a 50% improvement, you need different amounts of data. The 100-conversion rule completely ignores this.
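
To make this concrete, here’s a minimal sketch of the sample size calculation for a two-sided, two-proportion z-test, using only Python’s standard library. The helper name `required_sample_size` and the example figures are ours for illustration, but the formula is a textbook version of the calculation behind most sample size calculators:

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

def required_sample_size(baseline, relative_mde, alpha=0.05, power=0.80):
    """Visitors needed per variation for a two-sided two-proportion z-test.

    baseline     -- control conversion rate, e.g. 0.05 for 5%
    relative_mde -- smallest relative lift worth detecting, e.g. 0.10 for +10%
    alpha        -- significance level (0.05 = 95% confidence)
    power        -- probability of detecting a true effect of that size
    """
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 at 95% significance
    z_power = NormalDist().inv_cdf(power)          # 0.84 at 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# The MDE dominates the answer: at a 5% baseline, detecting a 5% relative
# lift needs roughly 80x the traffic of detecting a 50% relative lift.
print(required_sample_size(0.05, 0.05))  # ~122,000 visitors per variation
print(required_sample_size(0.05, 0.50))  # ~1,500 visitors per variation
```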

Statistical Power Gets Thrown Out the Window

Proper A/B testing requires understanding statistical power—typically set at 80%. This means you have an 80% chance of detecting a true effect when it exists. The 100-conversion rule doesn’t consider power at all, leaving you flying blind.
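
Power feeds into the same calculation. Using the `required_sample_size` sketch above (our illustrative helper, not a library function), raising power from 80% to 90% at a 5% baseline and a 10% relative MDE looks like this:

```python
# Same baseline and MDE, different power targets.
print(required_sample_size(0.05, 0.10, power=0.80))  # ~31,000 visitors per variation
print(required_sample_size(0.05, 0.10, power=0.90))  # ~42,000 visitors per variation
```

Roughly a third more traffic buys a markedly lower chance of missing a real winner, a trade-off the 100-conversion rule cannot express at all.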

What the Experts Actually Recommend

Real conversion optimisation experts have moved far beyond this simplistic approach.

Peep Laja from ConversionXL suggests that 300-400 conversions per variation is more realistic for most marketing teams, but even this should be calculated based on your specific test parameters.

The key insight from ConversionXL’s research is clear: there’s no magic number of conversions that guarantees statistical significance. Instead, you need to focus on:

- Your baseline conversion rate
- The minimum detectable effect you care about
- Statistical power (typically 80%)
- Your significance level (typically 95%)

The Real Cost of Bad Sample Sizing

Following the 100-conversion rule isn’t just academically wrong—it’s expensive. Here’s what happens when you get sample sizing wrong:

False Positives (Type I Errors)

You think you’ve found a winner when you haven’t. You implement the “winning” variation across your site, only to see performance drop. I’ve seen companies lose thousands in revenue because they jumped on false positives.

False Negatives (Type II Errors)

You miss real improvements because your test didn’t have enough power to detect them. That 15% conversion rate improvement? It was real, but your undersized test called it inconclusive.

Wasted Resources

Running tests with improper sample sizes means you’re either running them too long (opportunity cost) or stopping too early (unreliable results). Both waste time and money.

How to Actually Determine Sample Size

Instead of relying on arbitrary rules, here’s how to properly calculate sample size:

1. Define Your Parameters

Before anything else, pin down the four inputs every sample size calculation needs: your baseline conversion rate, the minimum detectable effect you care about, your significance level (typically 95%), and your statistical power (typically 80%).

2. Use Proper Sample Size Calculators

Tools like Optimizely’s sample size calculator or Evan Miller’s calculator actually consider these parameters. Don’t just guess—calculate.
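
If you’d rather verify a calculator’s output than trust it blindly, the `required_sample_size` sketch above takes the same inputs these tools ask for (the figures here are again illustrative):

```python
# Baseline 3%, minimum detectable effect +20% relative, default 95%/80%.
print(required_sample_size(0.03, 0.20))  # ~13,900 visitors per variation
```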

3. Plan Before You Test

Determine your required sample size before launching the test. This prevents the temptation to stop early when you see “encouraging” results.

A Better Framework for A/B Testing

Here’s a more robust approach to A/B testing sample sizes:

| Step | Action | Why It Matters |
|------|--------|----------------|
| 1 | Calculate baseline conversion rate | Foundation for all sample size calculations |
| 2 | Define minimum detectable effect | Determines the sensitivity of your test |
| 3 | Set statistical power (80%) and significance (95%) | Standard practice for reliable results |
| 4 | Calculate required sample size | Tells you exactly how much data you need |
| 5 | Estimate test duration | Helps with planning and resource allocation |
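
Step 5 is simple arithmetic once step 4 is done. A sketch with made-up planning numbers (substitute your own traffic data):

```python
# Illustrative planning figures, not benchmarks.
visitors_per_variation = 31_234  # e.g. required_sample_size(0.05, 0.10)
variations = 2                   # control plus one challenger
daily_test_traffic = 4_000       # visitors entering the experiment each day

days = visitors_per_variation * variations / daily_test_traffic
print(f"Estimated duration: {days:.0f} days")  # roughly 16 days
```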

Common Scenarios and Real Sample Size Requirements

Let’s look at some realistic examples. The parameters below are illustrative, all calculated at 95% significance and 80% power:

E-commerce Checkout Optimisation

A 5% baseline checkout conversion rate with a minimum detectable effect of a 10% relative lift (5% → 5.5%) requires roughly 31,000 visitors per variation: over 1,500 conversions per variation at baseline.

Landing Page CTA Test

A 20% baseline click-through rate with a minimum detectable effect of a 15% relative lift (20% → 23%) requires roughly 2,900 visitors per variation, or close to 600 conversions per variation.

Notice how different these are from the arbitrary 100-conversion rule?
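
Both figures come straight from the `required_sample_size` sketch earlier in this post, so you can reproduce them yourself:

```python
print(required_sample_size(0.05, 0.10))  # ~31,000 visitors -- checkout example
print(required_sample_size(0.20, 0.15))  # ~2,900 visitors -- CTA example
```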

When You Can’t Reach Ideal Sample Sizes

Sometimes you simply don’t have enough traffic to reach statistically robust sample sizes. In that case, change the test rather than the statistics: test bigger, bolder changes (a larger minimum detectable effect needs far less data), run fewer tests for longer, or concentrate your testing on your highest-traffic pages.

What you shouldn’t do is pretend that 100 conversions will give you reliable results when the maths says otherwise.

Moving Beyond Oversimplified Rules

The marketing industry loves simple rules because they’re easy to remember and implement. But A/B testing isn’t simple—it’s a sophisticated statistical process that deserves proper treatment.

The 100-conversion rule persists because it feels actionable. It gives teams a target to hit. But hitting the wrong target is worse than having no target at all.

Instead of memorising arbitrary numbers, invest time in understanding the statistical principles behind sample size determination. Your tests will be more reliable, your insights more actionable, and your optimisation programme more successful.

The “100 conversions per variation” rule needs to be retired. It’s time for the marketing industry to embrace statistical rigour over convenient shortcuts. Your conversion rates—and your bottom line—will thank you for it.

Growth Method is the only AI-native project management tool built specifically for marketing and growth teams. Book a call to speak with Stuart, our founder, at https://cal.com/stuartb/30min.

