Heavy User Effect in A/B Testing: Identification and Estimation

02/06/2019
by   Yu Wang, et al.

Online experimentation (also known as A/B testing) has become an integral part of software development. To incorporate user feedback in a timely manner and continuously improve products, many software companies have adopted a culture of agile deployment, which requires online experiments to be conducted and concluded on a limited set of users over a short period. While conceptually efficient, this practice means the results observed during the experiment can deviate from those seen after the feature is fully deployed, making the A/B test results highly biased. While such bias can have multiple sources, we provide theoretical analysis as well as empirical evidence to show that the heavy user effect can contribute significantly to it. To address this issue, we propose a jackknife-resampling estimator. Simulated and real-life examples show that the jackknife estimator can reduce the bias and bring A/B testing results closer to the long-term estimate.
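The abstract does not spell out the estimator's exact form, but the standard jackknife bias correction it builds on is well known: recompute the statistic with each user left out, then combine the leave-one-out replicates with the full-sample estimate. A minimal sketch, assuming a generic per-user statistic (the function name and inputs here are illustrative, not the paper's notation):

```python
import numpy as np

def jackknife_estimate(per_user_values, stat=np.mean):
    """Standard bias-corrected jackknife estimate of `stat` over users.

    Illustrative only -- a generic jackknife correction, not the paper's
    exact heavy-user-effect estimator.
    """
    x = np.asarray(per_user_values, dtype=float)
    n = len(x)
    theta_full = stat(x)
    # Leave-one-user-out replicates of the statistic.
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    # Jackknife bias correction: n * theta_hat - (n - 1) * mean(replicates).
    return n * theta_full - (n - 1) * loo.mean()
```

For a linear statistic such as the mean the correction is exact and leaves the estimate unchanged; its value lies in reducing the first-order bias of nonlinear statistics, which is the kind of bias a short experiment over a skewed (heavy-user-dominated) population can induce.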
