Minimum Detectable Effect (MDE)

Definition

Minimum Detectable Effect (MDE) refers to the smallest effect size in a test (such as an A/B test or controlled experiment) that can be statistically detected given a specified sample size, statistical power, and confidence level. It quantifies the threshold at which a difference between two variants (e.g., control vs. treatment) becomes distinguishable from random variation.

How it relates to marketing

In marketing, MDE is critical for designing experiments that test changes to campaigns, website elements, messaging, pricing, or other variables. It determines whether a test is capable of detecting meaningful changes in key performance indicators (KPIs), such as click-through rates, conversion rates, or revenue per user. MDE helps marketers avoid wasting time and resources on experiments that are too small or underpowered to yield actionable insights.

How to calculate Minimum Detectable Effect

MDE is influenced by:

  • Sample size (n): Number of observations in each group (control and test).
  • Statistical power (1 – β): The probability of detecting an effect when it exists (commonly set at 80% or 90%).
  • Significance level (α): The probability of a Type I error (commonly set at 0.05).
  • Baseline conversion rate or metric mean (p): The expected performance of the control group.

The general formula for calculating MDE in the context of proportions is:

MDE = (Z_(1−α/2) + Z_(1−β)) * √[2p(1−p)/n]

Where:

  • Z_(1−α/2) is the z-score corresponding to the desired confidence level.
  • Z_(1−β) is the z-score corresponding to the desired power.
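As a sketch, the formula above can be computed with Python's standard library alone. The baseline rate, sample size, and thresholds below are illustrative assumptions, not recommendations:

```python
from math import sqrt
from statistics import NormalDist

def minimum_detectable_effect(p, n, alpha=0.05, power=0.80):
    """Absolute MDE for a two-sample test of proportions.

    p: baseline conversion rate of the control group
    n: sample size per group (control and test assumed equal)
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # z-score for the confidence level
    z_power = NormalDist().inv_cdf(power)          # z-score for the desired power
    return (z_alpha + z_power) * sqrt(2 * p * (1 - p) / n)

# Example: 10% baseline conversion rate, 10,000 users per group
mde = minimum_detectable_effect(p=0.10, n=10_000)
print(f"Absolute MDE: {mde:.4f} ({mde / 0.10:.1%} relative lift)")
```

With these inputs the test can only reliably detect an absolute lift of roughly 1.2 percentage points (about a 12% relative change); smaller true effects would likely go undetected.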

All else being equal, MDE and sample size move in opposite directions: larger samples can distinguish smaller effects from random variation, while smaller samples can only detect larger effects.
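One way to see this inverse relationship is to rearrange the formula and solve for the per-group sample size needed to hit a target MDE. A minimal sketch, with illustrative numbers:

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(p, mde, alpha=0.05, power=0.80):
    """Per-group sample size needed to detect an absolute effect of `mde`
    at baseline rate p, given the significance level and power."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * p * (1 - p) * (z / mde) ** 2)

# Halving the target MDE roughly quadruples the required sample size
for target in (0.02, 0.01, 0.005):
    print(f"MDE {target:.3f} -> n = {required_sample_size(0.10, target):,} per group")
```

The quadratic relationship is the key planning constraint: detecting an effect half as large costs about four times the traffic.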

How to utilize Minimum Detectable Effect

Use cases include:

  • A/B testing: Define the smallest business-relevant uplift in metrics that the test must be capable of detecting.
  • Budget allocation: Determine whether a test is feasible given traffic or user volume constraints.
  • Experiment design: Select between higher-powered tests (larger samples, smaller MDE) and faster tests (smaller samples, larger MDE) depending on business goals.

Comparison to similar concepts

Concept | Description | Key Difference from MDE
Effect Size | The actual observed difference between test and control groups | MDE is the minimum detectable effect, not the actual observed effect
Statistical Power | Probability of detecting an effect if one exists | Power is an input that helps define the MDE
Confidence Interval | Range in which the true effect size likely falls | A confidence interval may or may not include the MDE
Sample Size | Number of subjects needed for a test | Sample size directly affects the MDE

Best practices

  • Define business goals first: Choose an MDE that reflects a meaningful and actionable business impact.
  • Avoid underpowered tests: If the MDE is larger than the smallest effect that matters to the business, the test cannot reliably detect that effect and won’t be useful.
  • Run pre-test calculations: Use online calculators or statistical tools to simulate sample size and MDE trade-offs.
  • Be realistic about traffic: Adjust test duration or scope based on how much traffic is available to reach the MDE.
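The traffic check in the last bullet is simple arithmetic: given the per-group sample size a test requires and the daily traffic available, estimate how long the test must run. A small sketch, where the traffic figure and helper function are illustrative assumptions:

```python
from math import ceil

def estimated_test_duration_days(n_per_group, daily_visitors, groups=2):
    """Days needed to collect n_per_group observations in each group,
    assuming traffic is split evenly across the groups."""
    visitors_per_group_per_day = daily_visitors / groups
    return ceil(n_per_group / visitors_per_group_per_day)

# Example: ~14,128 users needed per group, 2,000 visitors/day split 50/50
days = estimated_test_duration_days(14_128, 2_000)
print(f"Estimated duration: {days} days")  # -> Estimated duration: 15 days
```

If the resulting duration is impractical, the realistic options are to accept a larger MDE, raise the baseline traffic, or relax the power or significance requirements.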

Future trends

As marketing experiments increasingly rely on real-time data, platforms are beginning to automate MDE estimation using machine learning models that account for observed variances, adaptive sampling, and multivariate test dynamics. In addition, tighter data privacy regulations are pushing marketers to work with smaller datasets, making precise MDE planning more important than ever.


Related Terms

  1. A/B Testing
  2. Statistical Power
  3. Sample Size
  4. Confidence Level
  5. Type I Error (False Positive)
  6. Type II Error (False Negative)
  7. Effect Size
  8. Conversion Rate
  9. Hypothesis Testing
  10. Experiment Design