Presented By: Department of Mathematics
Student AIM Seminar
Optimizing Scalable Adaptive Choice Experiments with Multi-Armed Bandit Algorithms
Multi-armed bandit methods have been successful for A/B testing; they are the main algorithm behind Google Analytics content experiments. I will apply these methods to the scenario where we have a large list of 100+ items and want to find out which are the best ones (most desirable to consumers). My focus will be on large MaxDiff studies whose main purpose is identifying the top few items for the sample. I will present a new adaptive approach, called Bandit MaxDiff, that may increase efficiency fourfold over standard non-adaptive MaxDiff. I show simulated results based on data from a large consumer packaged goods manufacturer. Speaker(s): Alexander Zaitzeff (University of Michigan)
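The abstract does not spell out the algorithm, but the general idea of an adaptive, bandit-driven choice study can be sketched with Thompson sampling: keep a Beta posterior over each item's "win" rate, show the few currently most promising items in each question, and update the posteriors from the respondent's pick. The item counts, choice model, and update rule below are illustrative assumptions, not the speaker's actual Bandit MaxDiff method.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, k, n_rounds = 20, 4, 3000        # illustrative sizes, not from the talk
true_util = rng.normal(size=n_items)      # hidden item utilities (simulated respondent)

alpha = np.ones(n_items)                  # Beta posterior: wins + 1
beta = np.ones(n_items)                   # Beta posterior: losses + 1

for _ in range(n_rounds):
    theta = rng.beta(alpha, beta)         # Thompson sample of each item's win rate
    shown = np.argsort(theta)[-k:]        # show the k most promising items
    # Respondent picks a "best" item via a multinomial-logit choice model
    p = np.exp(true_util[shown])
    winner = rng.choice(shown, p=p / p.sum())
    alpha[winner] += 1                    # winner gains a win
    beta[shown[shown != winner]] += 1     # the other shown items gain a loss

ranking = np.argsort(alpha / (alpha + beta))[::-1]
print("estimated top items:", ranking[:3])
```

Because sampling from the posteriors injects randomness into which items are shown, the design explores early and concentrates questions on the leading items later, which is where the claimed efficiency gain over a fixed, non-adaptive MaxDiff design would come from.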