
Minimum Detectable Effect

What minimum detectable effect (MDE) means, how to choose the right MDE for your experiments, and why it's a business decision disguised as a statistical one.

Minimum detectable effect (MDE) is the smallest improvement your experiment is designed to reliably detect. It’s the threshold between “this change is big enough to matter” and “even if it’s real, we don’t care.” Setting the right MDE is one of the most important — and most overlooked — decisions in experiment design.

Why MDE is a business decision

MDE directly determines your required sample size and test duration. Sample size scales with the inverse square of the MDE, so halving your MDE roughly quadruples the traffic you need. Choosing between a 5% and 20% relative MDE can mean the difference between a 2-week test and a 3-month test.
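The inverse-square relationship can be sketched with the standard sample-size formula for a two-sided z-test on a conversion rate. This is a simplified normal-approximation estimate, not the exact calculation any particular testing tool uses; the 5% baseline rate is an illustrative assumption.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, relative_mde, alpha=0.05, power=0.8):
    """Approximate visitors per variant for a two-sided z-test on a
    conversion rate (normal approximation, equal group sizes)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for significance
    z_beta = z.inv_cdf(power)            # critical value for power
    p = baseline_rate
    delta = p * relative_mde             # absolute lift to detect
    variance = 2 * p * (1 - p)           # variance of the rate difference
    return ((z_alpha + z_beta) ** 2) * variance / delta ** 2

# Halving the MDE quadruples the required sample:
n_10 = sample_size_per_variant(0.05, 0.10)   # ~30K per variant at 10% MDE
n_05 = sample_size_per_variant(0.05, 0.05)   # ~4x that at 5% MDE
```

Because the MDE appears squared in the denominator, tightening it from 10% to 5% relative multiplies the required traffic by four, which is where the "2-week test vs. 3-month test" gap comes from.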

The question to ask isn’t “what’s the smallest effect we can detect?” It’s “what’s the smallest effect worth implementing?” If a variant wins by 1% relative but requires 3 months of engineering work to productionize, it’s not worth detecting. Set your MDE to the lift that justifies the cost of implementation.

How to choose your MDE

Start with the economics:

  1. Calculate the revenue impact. If your funnel generates $1M/month, a 5% relative lift is $50K/month. A 1% lift is $10K/month. What’s your implementation cost?

  2. Consider the opportunity cost. A test running for 8 weeks to detect a 3% lift means 8 weeks of not testing something else. Sometimes a higher MDE with faster iteration is the better strategy.

  3. Factor in traffic constraints. Low-traffic pages or segments force higher MDEs. If you can only get 5,000 visitors per variant in a reasonable timeframe, you’re limited to detecting 20%+ relative effects.
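The economics in step 1 can be turned into a floor for your MDE: the smallest lift whose revenue over a payback window covers the implementation cost. The function name, the $100K/month funnel, the $30K cost, and the 12-month payback are all hypothetical numbers for illustration.

```python
def breakeven_relative_mde(monthly_revenue, implementation_cost, payback_months=12):
    """Smallest relative lift whose incremental revenue over the payback
    window covers the implementation cost."""
    return implementation_cost / (monthly_revenue * payback_months)

# $100K/month funnel, $30K build cost, 12-month payback:
floor = breakeven_relative_mde(100_000, 30_000)   # 0.025, i.e. 2.5% relative
```

Anything below this breakeven lift isn't worth shipping even if you detect it, so your MDE should sit at or above it; opportunity cost and traffic constraints (steps 2 and 3) then push it higher still.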

Common mistakes

Setting MDE too low. Teams power tests to detect 2-3% relative lifts, then wait months for results. Meanwhile, they could have run five tests at 15% MDE in the same period and learned far more.

Confusing MDE with expected effect. MDE isn’t a prediction of how much you’ll win. It’s the smallest effect you’d want to detect. Your hypothesis might predict a 15% lift, but you should power the test for the smallest lift that’s still worth shipping.

Not adjusting by metric type. Revenue-per-visitor tests need higher MDEs than conversion rate tests because revenue has higher variance. A 5% MDE on revenue might require 10x the traffic of a 5% MDE on conversion.
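The variance penalty can be made concrete by comparing the traffic needed for the same relative MDE on a conversion metric versus a revenue metric. The numbers here are illustrative assumptions: a 20% baseline conversion rate, and revenue per visitor averaging $4.50 with a hypothetical standard deviation of $15 (revenue distributions are typically long-tailed, so std far exceeds the mean).

```python
from statistics import NormalDist

def n_continuous(mean, std, relative_mde, alpha=0.05, power=0.8):
    """Visitors per variant to detect a relative lift in a metric,
    treated as a two-sided z-test with equal variances."""
    z = NormalDist()
    z_total = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    delta = mean * relative_mde          # absolute lift to detect
    return z_total ** 2 * 2 * std ** 2 / delta ** 2

# Conversion at 20%: a Bernoulli metric with std = sqrt(p * (1 - p)).
n_conv = n_continuous(0.20, (0.20 * 0.80) ** 0.5, 0.05)
# Revenue per visitor: mean $4.50, assumed std $15.
n_rev = n_continuous(4.50, 15.0, 0.05)
```

With these inputs the revenue test needs several times the traffic of the conversion test at the same 5% relative MDE; the multiplier depends entirely on how spread out your revenue distribution is, which is why revenue tests usually get a higher MDE.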

Practical example

You’re testing a redesigned product page. Implementation takes one developer two weeks. Your current revenue per visitor is $4.50 across 8,000 daily visitors. A 10% relative lift ($0.45/visitor) would generate an additional $3,600/day — easily worth the dev time. A 2% relative lift ($0.09/visitor) generates $720/day — marginal. You set MDE at 8% relative, which requires ~12,000 visitors per variant, giving you a 3-day test. Fast, decisive, and economically sound.
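The arithmetic above can be checked in a few lines; only the figures given in the example are used.

```python
rev_per_visitor = 4.50
daily_visitors = 8_000

def daily_gain(relative_lift):
    """Incremental revenue per day at a given relative lift."""
    return rev_per_visitor * relative_lift * daily_visitors

gain_10 = daily_gain(0.10)   # $3,600/day at a 10% lift
gain_02 = daily_gain(0.02)   # $720/day at a 2% lift

# At an 8% MDE the test needs ~12,000 visitors per variant, so with
# two variants splitting 8,000 daily visitors:
days = (2 * 12_000) / daily_visitors   # 3-day test
```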
