Presented By: Industrial & Operations Engineering
IOE 899 Seminar: Paul Grigas, University of California, Berkeley
Smart "Predict, then Optimize"
Abstract:
Many real-world analytics problems involve two significant challenges: prediction and optimization. Due to the typically complex nature of each challenge, the standard paradigm is to predict, then optimize. By and large, machine learning tools are intended to minimize prediction error and do not account for how the predictions will be used in a downstream optimization problem. In contrast, we propose a new and very general framework, called Smart “Predict, then Optimize” (SPO), which directly leverages the optimization problem structure, i.e., its objective and constraints, for designing successful predictive models. A key component of our framework is the SPO loss function, which measures the quality of a prediction by comparing the objective values of the solutions generated using the predicted and observed parameters, respectively. Training a model with respect to the SPO loss is computationally challenging, and therefore we also develop a surrogate loss function, called the SPO+ loss, which upper bounds the SPO loss, has desirable convexity properties, and is statistically consistent under mild conditions. We also propose a stochastic gradient descent algorithm which allows for situations in which the number of training samples is large, model regularization is desired, and/or the optimization problem of interest is nonlinear or integer. Finally, we perform computational experiments to empirically verify the success of our SPO framework in comparison to the standard predict-then-optimize approach. This is joint work with Adam Elmachtoub.
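For reference, here is a brief sketch of the two losses named in the abstract, as defined (to the best of my reading) in the paper linked below; the nominal downstream problem is assumed to have a linear objective over a feasible region S:

```latex
% Nominal problem: optimal value and an optimal solution for cost vector c.
z^*(c) = \min_{w \in S} c^\top w, \qquad
w^*(c) \in \operatorname*{arg\,min}_{w \in S} c^\top w

% SPO loss: the excess cost incurred by acting on the prediction \hat{c}
% when the realized cost vector is c.
\ell_{\mathrm{SPO}}(\hat{c}, c) = c^\top w^*(\hat{c}) - z^*(c)

% SPO+ surrogate: convex in \hat{c} and an upper bound on the SPO loss.
\ell_{\mathrm{SPO}+}(\hat{c}, c) =
\max_{w \in S}\bigl\{\, c^\top w - 2\hat{c}^\top w \,\bigr\}
+ 2\,\hat{c}^\top w^*(c) - z^*(c)
```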
This talk is based on the following paper: https://arxiv.org/abs/1710.08005
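To make the stochastic gradient idea concrete, the following is a minimal Python sketch, not the authors' implementation. It assumes the feasible region is the unit simplex, so the optimization oracle reduces to picking the cheapest coordinate, and it uses the SPO+ subgradient 2(w*(c) - w*(2c_hat - c)) with a linear model c_hat = Bx; all function names and hyperparameters here are illustrative.

```python
import numpy as np

def w_star(c):
    """Oracle for the nominal problem min_{w in S} c^T w.
    Here S is the unit simplex, so an optimal solution is one-hot
    on the cheapest coordinate; swap in an LP/IP solver for other S."""
    w = np.zeros_like(c)
    w[np.argmin(c)] = 1.0
    return w

def spo_loss(c_hat, c):
    """SPO loss: extra cost of acting on c_hat when the true cost is c."""
    return float(c @ w_star(c_hat) - c @ w_star(c))

def spo_plus_sgd(X, C, lr=0.01, epochs=50, seed=0):
    """Fit a linear model c_hat = B @ x by SGD on the SPO+ loss, using
    the subgradient 2 * (w*(c) - w*(2*c_hat - c)) times x^T."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    d = C.shape[1]
    B = np.zeros((d, p))
    for _ in range(epochs):
        for i in rng.permutation(n):
            x, c = X[i], C[i]
            c_hat = B @ x
            g = 2.0 * (w_star(c) - w_star(2.0 * c_hat - c))
            B -= lr * np.outer(g, x)  # subgradient step on the model matrix B
    return B

# Tiny synthetic check: costs depend linearly on features, plus noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
true_B = rng.normal(size=(3, 5))
C = X @ true_B.T + 0.1 * rng.normal(size=(200, 3))
B = spo_plus_sgd(X, C)
avg_loss = np.mean([spo_loss(B @ x, c) for x, c in zip(X, C)])
```

Because the SPO+ surrogate is convex in the prediction (a maximum of affine functions plus an affine term), each stochastic step needs only one call to the optimization oracle, which is what keeps this practical when the oracle is a nontrivial LP or integer program.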
Bio:
Paul Grigas is an assistant professor of Industrial Engineering and Operations Research at the University of California, Berkeley. Paul’s research interests are in large-scale convex optimization, statistical machine learning, and data-driven decision making. He is also broadly interested in the applications of data analytics, and he has worked on applications in online advertising. Paul was awarded an NSF CRII Award, the 2015 INFORMS Optimization Society Student Paper Prize, and an NSF Graduate Research Fellowship. Paul received his PhD in Operations Research from MIT in 2016. Previously, he earned a B.S. in Operations Research and Information Engineering from Cornell University.