All occurrences of this event have passed; this listing is displayed for historical purposes.

Presented By: Department of Mathematics

Student AIM Seminar

Optimizing Scalable Adaptive Choice Experiments with Multi-Armed Bandit Algorithms

Multi-armed bandit methods have been successful for A/B testing; they are, for example, the main algorithm behind Google Analytics. I will apply these methods to the setting where we have a large list of 100+ items and want to find out which ones are the best (most desirable to consumers). My focus will be on large MaxDiff studies whose main purpose is identifying the top few items for the sample. I will present a new adaptive approach, called Bandit MaxDiff, that may increase efficiency fourfold over standard non-adaptive MaxDiff. I will show simulation results based on data from a large consumer packaged goods manufacturer.

Speaker(s): Alexander Zaitzeff (University of Michigan)
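
As a rough illustration of how a bandit-style adaptive MaxDiff might allocate choice tasks, here is a minimal Thompson-sampling sketch. This is an illustrative assumption, not the Bandit MaxDiff method presented in the talk: each item keeps a Beta posterior over how often it is picked "best", and each choice set is built from the items with the highest posterior draws, so respondent effort concentrates on the apparent top items. The class name, set_size, and simulated utilities are all hypothetical.

import random

class BanditMaxDiffSketch:
    """Illustrative Thompson-sampling allocation for MaxDiff-style choice tasks."""

    def __init__(self, n_items, set_size=4):
        self.n_items = n_items          # e.g. a large study with 100+ items
        self.set_size = set_size        # items shown per choice task
        self.wins = [0] * n_items       # times item was picked "best"
        self.losses = [0] * n_items     # times item was picked "worst"

    def next_choice_set(self):
        # Draw one value per item from its Beta(wins + 1, losses + 1) posterior
        # and show the set_size items with the largest draws.
        draws = [
            (random.betavariate(self.wins[i] + 1, self.losses[i] + 1), i)
            for i in range(self.n_items)
        ]
        draws.sort(reverse=True)
        return [i for _, i in draws[: self.set_size]]

    def record(self, best_item, worst_item):
        # Update the posteriors with the respondent's best/worst picks.
        self.wins[best_item] += 1
        self.losses[worst_item] += 1


if __name__ == "__main__":
    # Simulated respondents with fixed, hypothetical item utilities.
    true_utility = [random.random() for _ in range(100)]
    bandit = BanditMaxDiffSketch(n_items=100)
    for _ in range(2000):                       # simulated choice tasks
        shown = bandit.next_choice_set()
        best = max(shown, key=lambda i: true_utility[i])
        worst = min(shown, key=lambda i: true_utility[i])
        bandit.record(best, worst)
    top5 = sorted(range(100), key=lambda i: bandit.wins[i], reverse=True)[:5]
    print("estimated top items:", top5)

In this kind of adaptive scheme the choice sets shift toward the leading items as evidence accumulates, which is the intuition behind the efficiency gains over a fixed, non-adaptive MaxDiff design.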
