A Discrepancy-Based Design for A/B Testing Experiments

Time

-

Locations

Rettaliata Engineering Center, Room 036

Host

Department of Applied Mathematics

Speaker

Yiou Li
Department of Mathematical Sciences, DePaul University
https://csh.depaul.edu/faculty-staff/faculty-a-z/Pages/mathematical-sciences/yiou-li.aspx

Description

A/B tests (or "A/B/n tests") refer to experiments, and the corresponding inference, on the treatment effect(s) of a two-level or multi-level controllable experimental factor. Common practice is to use a randomized design and perform hypothesis tests on the resulting estimates. However, such estimates and inferences are not always accurate for a single experiment. In this work, we study how the discrepancy between the empirical distribution of the experiment and the population distribution influences the accuracy of the treatment effect estimates. Guided by the theoretical results, we introduce a new design-of-experiment method for A/B tests that ensures an accurate estimate of the treatment effects and is robust to model assumptions. We propose a discrepancy-based criterion and show that the design minimizing this criterion significantly improves the accuracy of the treatment effect estimates. Furthermore, the discrepancy-based criterion is model-free, which makes the estimation of the treatment effect(s) robust to model assumptions. More importantly, the proposed design is applicable to both continuous and categorical response measurements. We develop two efficient algorithms that construct the designs by optimizing the criterion for both offline and online A/B tests. Through a simulation study and a real example, we show that the proposed design approach achieves accurate estimation even when the model assumption is incorrect.
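To give a rough sense of what a discrepancy-based design might look like, the sketch below is an illustrative stand-in, not the criterion or the algorithms from the talk: it measures the "discrepancy" of each arm by the energy distance between that arm's covariate sample and the pooled sample (used here as a proxy for the population distribution), and assigns incoming units greedily so that arm sizes stay balanced while the chosen arm's discrepancy is minimized. The function names (`energy_distance`, `greedy_online_design`) are hypothetical.

```python
# Illustrative sketch only: the speaker's actual discrepancy criterion and
# offline/online algorithms are not specified here.  Energy distance is used
# as a stand-in discrepancy measure, with a greedy, size-balanced online rule.
import numpy as np


def energy_distance(x, y):
    """Energy distance between samples x (n,d) and y (m,d); smaller = closer."""
    x, y = np.atleast_2d(x), np.atleast_2d(y)

    def mean_cross(a, b):
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).mean()

    return 2 * mean_cross(x, y) - mean_cross(x, x) - mean_cross(y, y)


def greedy_online_design(covariates, n_arms=2):
    """Assign units to arms sequentially.  Each new unit goes to the arm
    (among those with the fewest units, to keep sizes balanced) whose
    covariate sample would be closest to the pooled sample seen so far."""
    arms = [[] for _ in range(n_arms)]
    labels = []
    for t, x in enumerate(covariates):
        pooled = covariates[: t + 1]
        min_size = min(len(a) for a in arms)
        candidates = [a for a in range(n_arms) if len(arms[a]) == min_size]
        best = min(
            candidates,
            key=lambda a: energy_distance(np.array(arms[a] + [x]), pooled),
        )
        arms[best].append(x)
        labels.append(best)
    return np.array(labels)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))          # covariates for 200 units
    arm = greedy_online_design(X, n_arms=2)
    # Per-arm covariate means should be close, indicating a balanced design.
    print(X[arm == 0].mean(axis=0), X[arm == 1].mean(axis=0))
```

Because the assignment rule depends only on the covariate distributions and not on any outcome model, a criterion of this general flavor is model-free, which is the property the abstract highlights for robustness to model misspecification.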

Event Topic

Computational Mathematics & Statistics
