
Novelty Effect

The temporary increase in engagement or conversions that occurs simply because something is new and unfamiliar — not because it's actually better.

The novelty effect is one of the most common reasons A/B test results don't hold up after implementation. A new design, feature, or layout gets attention because it's different — not because it's better. Once the novelty wears off, performance regresses toward the original baseline.

How to Detect Novelty Effects

The telltale sign: strong results in the first week of a test that gradually diminish over time. If your variant shows a 15% lift in week 1, 8% in week 2, and 3% in week 3, you're likely seeing a novelty effect. Segment your results by time period to check.
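The week-by-week check above can be sketched in a few lines of Python. The weekly counts here are hypothetical illustration data, not real test results:

```python
# Sketch: segment A/B test results by week to spot novelty decay.
# (week, control_conversions, control_visitors, variant_conversions, variant_visitors)
weekly = [
    (1, 500, 10_000, 575, 10_000),
    (2, 510, 10_000, 550, 10_000),
    (3, 495, 10_000, 510, 10_000),
]

def lift(c_conv, c_n, v_conv, v_n):
    """Relative lift of the variant's conversion rate over control's."""
    return (v_conv / v_n) / (c_conv / c_n) - 1

lifts = [lift(cc, cn, vc, vn) for _, cc, cn, vc, vn in weekly]
for (week, *_), l in zip(weekly, lifts):
    print(f"week {week}: lift = {l:+.1%}")

# A lift that shrinks every week is the telltale novelty signature.
decaying = all(a > b for a, b in zip(lifts, lifts[1:]))
print("lift decays week over week:", decaying)
```

With these numbers the lifts come out near 15%, 8%, and 3%, matching the pattern described above.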

The most vulnerable metrics are engagement metrics — clicks, page views, time on site. Conversion and revenue metrics are less susceptible because they require a genuine decision, not just curiosity.

Returning vs. New Visitors

The novelty effect primarily impacts returning visitors, who notice the change. New visitors have no baseline, so "novelty" doesn't apply to them. Segmenting results by new vs. returning visitors is a quick diagnostic: if the variant wins only among returning visitors, novelty is the likely explanation.
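This diagnostic is a straightforward split of the same lift calculation by visitor segment. The segment counts below are hypothetical:

```python
# Sketch: diagnose novelty by comparing lift for new vs. returning visitors.
# segment: (control_conversions, control_visitors, variant_conversions, variant_visitors)
segments = {
    "new":       (300, 6_000, 306, 6_000),
    "returning": (200, 4_000, 250, 4_000),
}

lifts = {}
for name, (cc, cn, vc, vn) in segments.items():
    lifts[name] = (vc / vn) / (cc / cn) - 1
    print(f"{name:>9}: lift = {lifts[name]:+.1%}")

# A lift concentrated among returning visitors (the ones who notice the
# change), with roughly no lift among new visitors, points to novelty.
novelty_suspected = lifts["returning"] > 0.05 and lifts["new"] < 0.05
print("novelty suspected:", novelty_suspected)
```

The 5% cutoff is an arbitrary illustration threshold; in practice you would judge the gap between segments against each segment's confidence interval.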

Practical Application

Run tests for at least 2-3 full business cycles (typically 2-4 weeks) to let novelty effects fade. Segment results by week and by new vs. returning visitors. If a test shows strong early results that decay over time, extend the test duration before declaring a winner. The true treatment effect is what remains after novelty fades.
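One way to operationalize "extend the test until novelty fades" is to fit a simple trend line to the weekly lifts and keep running while the lift is still falling. This is a sketch with a hypothetical cutoff, not a substitute for a proper significance test:

```python
# Sketch: extend-the-test heuristic based on the weekly lift trend.
weekly_lifts = [0.15, 0.08, 0.03]  # hypothetical lifts in weeks 1, 2, 3

# Ordinary least-squares slope of lift vs. week number.
n = len(weekly_lifts)
xs = range(1, n + 1)
x_mean = sum(xs) / n
y_mean = sum(weekly_lifts) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, weekly_lifts)) \
        / sum((x - x_mean) ** 2 for x in xs)

# Assumed cutoff: keep running while lift drops more than 1 point per week.
STABLE_SLOPE = -0.01
decision = "extend test" if slope < STABLE_SLOPE else "novelty has faded; evaluate"
print(f"weekly trend: {slope:+.3f}/week -> {decision}")
```

Here the lift is falling about 6 points per week, so the heuristic says to keep the test running rather than declare a winner.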