Glossary · Business Analytics
Peeking Problem
also: optional stopping · sequential testing
Definition
The peeking problem is the inflation of false-positive rates that occurs when a frequentist A/B test is repeatedly evaluated before reaching its pre-registered sample size. A nominal 5% false-positive rate can grow to 20–30% under daily peeking. Sequential-analysis methods and Bayesian testing are designed to accommodate interim looks.
Frequentist null-hypothesis testing guarantees its error rates only at the pre-specified sample size. Each interim look at the data re-rolls the dice: with 10 looks at alpha = 0.05, the probability of at least one false positive rises to roughly 20%. Bonferroni correction or alpha-spending functions control the overall error rate, at a cost in statistical power. Bayesian A/B testing sidesteps the issue by reframing the question: the posterior probability that one variant beats the other remains a meaningful summary of the evidence no matter how many times the test is inspected.
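The inflation is easy to demonstrate by simulation. The sketch below runs an A/A test (both arms share the same true conversion rate, so every "significant" result is a false positive) and compares three policies: test once at the final sample size, test at every interim look, and test at every look with a Bonferroni-adjusted threshold. The trial counts, batch size, and base rate are arbitrary illustrative choices, not values from this glossary.

```python
import math
import random

def rejects(conv_a, conv_b, n, z_crit=1.96):
    """Two-sided two-proportion z-test; True if |z| exceeds z_crit."""
    p_a, p_b = conv_a / n, conv_b / n
    p_pool = (conv_a + conv_b) / (2 * n)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    if se == 0:
        return False
    return abs(p_a - p_b) / se > z_crit

def simulate(trials=2000, looks=10, batch=100, rate=0.10, seed=42):
    """A/A simulation: both arms have identical true rates, so any
    rejection is a false positive. Returns false-positive rates for
    (fixed-horizon, naive peeking, Bonferroni-corrected peeking)."""
    rng = random.Random(seed)
    fp_fixed = fp_peek = fp_bonf = 0
    z_bonf = 2.807  # two-sided critical value for alpha/looks = 0.005
    for _ in range(trials):
        conv_a = conv_b = 0
        hit_peek = hit_bonf = False
        for look in range(1, looks + 1):
            conv_a += sum(rng.random() < rate for _ in range(batch))
            conv_b += sum(rng.random() < rate for _ in range(batch))
            n = look * batch
            hit_peek = hit_peek or rejects(conv_a, conv_b, n)
            hit_bonf = hit_bonf or rejects(conv_a, conv_b, n, z_bonf)
        fp_fixed += rejects(conv_a, conv_b, looks * batch)
        fp_peek += hit_peek
        fp_bonf += hit_bonf
    return fp_fixed / trials, fp_peek / trials, fp_bonf / trials

fixed, peek, bonf = simulate()
print(f"fixed-horizon: {fixed:.3f}  peeking: {peek:.3f}  bonferroni: {bonf:.3f}")
```

The fixed-horizon rate lands near the nominal 5%, the naive-peeking rate roughly quadruples it, and the Bonferroni-corrected rate is pulled back below nominal, which is the conservatism (power cost) the paragraph above refers to.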
Essays on this concept
- Business Analytics
Bayesian A/B Testing in Practice: When to Stop Experiments and How to Communicate Results to Non-Technical Stakeholders
Frequentist A/B testing answers a question nobody asked: 'If the null hypothesis were true, how surprising is this data?' Bayesian testing answers the question that matters: 'Given this data, what's the probability that B is actually better?'
- Behavioral Economics
The Decoy Effect Reimagined: Dynamic Price Anchoring with Real-Time Behavioral Segmentation
A dominated third option can shift 22% more users to your premium plan. But the static decoy is dead — here's how real-time behavioral data makes asymmetric dominance adaptive.