Presented By: Department of Statistics
Statistics Department Seminar Series: Wenlong Mou, PhD Candidate, Department of EECS, University of California, Berkeley
" Instance-dependent optimality in statistical decision-making: what they mean and how to achieve them?"
Abstract: Data-driven methodology is a pillar of real-world decision-making. When applying statistical learning methods, puzzling phenomena arise in choosing estimators, tuning their parameters, and characterizing bias-variance trade-offs. In many settings, asymptotic and/or worst-case theory fails to provide the relevant guidance, and a more refined approach, both non-asymptotic and instance-optimal, is required.
In this talk, I present some recent advances in optimal procedures for statistical decision-making. I first discuss function approximation methods for policy evaluation in reinforcement learning, describing a novel class of optimal, instance-dependent oracle inequalities for projected Bellman equations. In contrast to classical statistical learning, the optimal approximation factor depends on the geometry of the problem and can be much larger than unity. Drawing on this perspective, I then discuss instance-dependent optimal methods for estimating linear functionals from observational data. At practical sample sizes, the optimal risks exhibit a rich spectrum of behavior beyond the asymptotic semi-parametric efficiency bound. Our non-asymptotic, instance-dependent results identify the fundamental roles of certain novel quantities and provide concrete guidance on practical choices.
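For readers less familiar with the setup, a standard textbook-level formulation (background only, not a statement of the speaker's specific results) is as follows. Given a policy \pi with value function V^{\pi}, Bellman operator \mathcal{T}^{\pi}, and a linear approximation subspace \mathcal{S} with projection \Pi_{\mathcal{S}}, the projected Bellman equation defines the fixed point

\bar{V} = \Pi_{\mathcal{S}} \, \mathcal{T}^{\pi}(\bar{V}),

and an oracle inequality for an estimate \hat{V} based on n samples takes the schematic form

\| \hat{V} - V^{\pi} \|^{2} \;\lesssim\; \alpha \cdot \inf_{V \in \mathcal{S}} \| V - V^{\pi} \|^{2} \;+\; e_{n},

where e_{n} is a statistical error term that shrinks with n, and the approximation factor \alpha depends on the problem instance and, unlike the constant factors familiar from regression, can be much larger than 1.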
Bio: Wenlong Mou is a Ph.D. student in the Department of EECS at UC Berkeley, advised by Martin Wainwright and Peter Bartlett. Prior to Berkeley, he received his B.Sc. degree in Computer Science from Peking University. Wenlong's research interests include statistics, machine learning theory, dynamic programming and optimization, and applied probability. He is particularly interested in designing optimal statistical methods that enable data-driven decision-making, powered by efficient computational algorithms.
https://people.eecs.berkeley.edu/~wmou/