Presented By: Department of Statistics
Statistics Department Seminar Series: Lester Mackey, Machine Learning Researcher, Microsoft Research New England, Adjunct Professor, Department of Statistics, Stanford University
"Kernel Thinning and Stein Thinning"
Abstract: This talk will introduce two new tools for summarizing a probability distribution more effectively than independent sampling or standard Markov chain Monte Carlo thinning:
1) Given an initial n point summary (for example, from independent sampling or a Markov chain), kernel thinning finds a subset of only square-root n points with comparable worst-case integration error across a reproducing kernel Hilbert space.
2) If the initial summary suffers from biases due to off-target sampling, tempering, or burn-in, Stein thinning simultaneously compresses the summary and improves the accuracy by correcting for these biases.
These tools are especially well suited to tasks that incur substantial downstream computation per summary point, such as organ and tissue modeling, in which each simulation consumes thousands of CPU hours.
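As a toy illustration of the compression goal described above, the sketch below greedily selects a square-root-n subset of n sample points so that the subset's maximum mean discrepancy (MMD) to the full empirical distribution is small under a Gaussian kernel. This is a kernel-herding-style heuristic for illustration only, not the kernel thinning or Stein thinning algorithms from the talk; all function names, parameters, and the bandwidth choice here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(X, Y, bandwidth=1.0):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 bw^2))
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def greedy_thin(X, m, bandwidth=1.0):
    """Greedily pick m of n points so the equal-weighted subset has small
    MMD to the full empirical distribution (illustrative heuristic only)."""
    n = len(X)
    K = gaussian_kernel(X, X, bandwidth)
    mean_embed = K.mean(axis=1)   # similarity of each candidate to the full sample
    chosen = []
    sum_cols = np.zeros(n)        # running sum of kernel columns of chosen points
    for t in range(m):
        # Adding point j changes MMD^2 by terms that are minimized when
        # mean_embed[j] - (sum_cols[j] + 0.5 K[j,j]) / (t + 1) is maximized.
        score = mean_embed - (sum_cols + 0.5 * np.diag(K)) / (t + 1)
        score[chosen] = -np.inf   # select without replacement
        j = int(np.argmax(score))
        chosen.append(j)
        sum_cols += K[:, j]
    return np.array(chosen)

n = 400
X = rng.normal(size=(n, 2))       # stand-in for n MCMC or i.i.d. samples
m = int(np.sqrt(n))               # square-root-n summary, as in the abstract
idx = greedy_thin(X, m)
```

On simple targets like this 2-D Gaussian, the greedy subset typically achieves noticeably lower MMD than a uniformly random subset of the same size, which is the qualitative gap that motivates going beyond standard thinning.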
Lester Mackey is a statistical machine learning researcher at Microsoft Research New England and an adjunct professor at Stanford University. His current research interests include statistical machine learning, scalable algorithms, high-dimensional statistics, approximate inference, and probability. Lately, he has been developing and analyzing scalable learning algorithms for healthcare, climate forecasting, approximate posterior inference, high-energy physics, recommender systems, and the social good.
https://web.stanford.edu/~lmackey/