Definition
Minimum Detectable Effect (MDE) refers to the smallest effect size in a test (such as an A/B test or controlled experiment) that can be statistically detected given a specified sample size, statistical power, and confidence level. It quantifies the threshold at which a difference between two variants (e.g., control vs. treatment) becomes distinguishable from random variation.
How it relates to marketing
In marketing, MDE is critical for designing experiments that test changes to campaigns, website elements, messaging, pricing, or other variables. It determines whether a test is capable of detecting meaningful changes in key performance indicators (KPIs), such as click-through rates, conversion rates, or revenue per user. MDE helps marketers avoid wasting time and resources on experiments that are too small or underpowered to yield actionable insights.
How to calculate Minimum Detectable Effect
MDE is influenced by:
- Sample size (n): Number of observations in each group (control and test).
- Statistical power (1 – β): The probability of detecting an effect when it exists (commonly set at 80% or 90%).
- Significance level (α): The probability of a Type I error (commonly set at 0.05).
- Baseline conversion rate or metric mean (p): The expected performance of the control group.
The general formula for calculating MDE in the context of proportions is:
MDE = (Z_(1−α/2) + Z_(1−β)) * √[2p(1−p)/n]
Where:
- Z_(1−α/2) is the z-score corresponding to the desired confidence level.
- Z_(1−β) is the z-score corresponding to the desired power.
MDE falls as sample size grows: the more observations you collect, the smaller the effect you can reliably detect.
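As a concrete check on these relationships, here is a minimal Python sketch of the formula above (assuming SciPy is available; the function name `mde` is illustrative, not a standard API):

```python
from math import sqrt
from scipy.stats import norm

def mde(n, p, alpha=0.05, power=0.80):
    """Absolute MDE for a two-group proportion test with n users per group."""
    z_alpha = norm.ppf(1 - alpha / 2)  # z-score for the confidence level
    z_beta = norm.ppf(power)           # z-score for the desired power
    return (z_alpha + z_beta) * sqrt(2 * p * (1 - p) / n)

# MDE shrinks as the sample grows:
print(mde(n=10_000, p=0.05))  # ~0.0086, i.e. 0.86 percentage points
print(mde(n=40_000, p=0.05))  # ~0.0043
```

Note that quadrupling the sample only halves the MDE, because the MDE scales with 1/√n.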
How to utilize Minimum Detectable Effect
Use cases include:
- A/B testing: Define the smallest business-relevant uplift in metrics that the test must be capable of detecting.
- Budget allocation: Determine whether a test is feasible given traffic or user volume constraints (see the sample-size sketch after this list).
- Experiment design: Choose between higher-powered tests (larger samples) and faster tests (smaller samples, higher MDE) depending on business goals.
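For the budget-allocation and experiment-design cases, the same formula can be inverted to ask how many users each variant needs before any traffic is committed. A minimal sketch under the same assumptions as above (the helper name `required_n_per_group` is ours):

```python
import math
from scipy.stats import norm

def required_n_per_group(target_mde, p, alpha=0.05, power=0.80):
    """Users needed per variant to detect an absolute lift of target_mde."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * p * (1 - p) * (z / target_mde) ** 2)

# Detecting a 1-point lift on a 5% baseline takes ~7,457 users per group;
# at 1,000 eligible users per day (500 per variant), that is a ~15-day test.
print(required_n_per_group(target_mde=0.01, p=0.05))
```

Dividing the result by the traffic each variant receives per day gives a rough test duration, which is often the deciding feasibility number.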
Comparison to similar concepts
| Concept | Description | Key Difference from MDE |
|---|---|---|
| Effect Size | The actual observed difference between test and control groups | MDE is the smallest detectable effect, not the observed one |
| Statistical Power | Probability of detecting an effect if one exists | Power is an input used to set the MDE |
| Confidence Interval | Range in which the true effect size likely falls | A confidence interval may or may not include the MDE |
| Sample Size | Number of subjects needed for a test | Sample size directly determines the MDE |
Best practices
- Define business goals first: Choose an MDE that reflects a meaningful and actionable business impact.
- Avoid underpowered tests: If the MDE is larger than the effect you realistically expect to see, the test cannot detect that effect and won’t be useful.
- Run pre-test calculations: Use online calculators or statistical tools to simulate sample size and MDE trade-offs, as in the sketch after this list.
- Be realistic about traffic: Adjust test duration or scope based on how much traffic is available to reach the MDE.
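One practical way to run those pre-test calculations is to tabulate the trade-off between test duration and MDE directly. A self-contained sketch (the weekly traffic figure and 5% baseline are illustrative):

```python
from math import sqrt
from scipy.stats import norm

def mde(n, p, alpha=0.05, power=0.80):
    """Same helper as in the earlier sketch."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sqrt(2 * p * (1 - p) / n)

weekly_visitors = 8_000  # illustrative traffic figure

for weeks in range(1, 9):
    n_per_group = weeks * weekly_visitors // 2  # 50/50 split
    print(f"{weeks} week(s): n={n_per_group:,} per group, MDE = {mde(n_per_group, p=0.05):.2%}")
```

If the smallest MDE the calendar allows is still larger than the lift the business cares about, the realistic options are to extend the test, widen its audience, or not run it at all.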
Future trends
As marketing experiments increasingly rely on real-time data, platforms are beginning to automate MDE estimation using machine learning models that account for observed variances, adaptive sampling, and multivariate test dynamics. In addition, tighter data privacy regulations are pushing marketers to work with smaller datasets, making precise MDE planning more important than ever.
Related Terms
- A/B Testing
- Statistical Power
- Sample Size
- Confidence Level
- Type I Error (False Positive)
- Type II Error (False Negative)
- Effect Size
- Conversion Rate
- Hypothesis Testing
- Experiment Design