Presented By: Frontiers in Scientific Machine Learning (FSML)
FSML Seminar 11: Derivative-Informed Operator Learning with Applications to Cost-Efficient Bayesian Inversion
Lianghao Cao

Zoom Link:
https://umich.zoom.us/j/97823527756?pwd=H01BbvtuG5q02Wzb8LJvhUnvijlAIe.1
Abstract:
This talk focuses on a derivative-informed supervised learning method for efficiently building machine learning surrogates of high-fidelity computational models, particularly those governed by parametric partial differential equations. Unlike the conventional supervised learning method that treats the model as a black box, our approach leverages additional model sensitivity information, extracted via solving forward or adjoint sensitivity equations. This sensitivity information is integrated into the surrogate’s architecture and training process based on rigorous error analysis. We refer to such a surrogate construction as DINO (derivative-informed neural operator).
DINO offers two key advantages over conventional surrogate construction. First, it significantly improves the cost-accuracy trade-off for a wide range of models, often by one to two orders of magnitude. Second, it directly controls the surrogate Jacobian (Fréchet derivative) errors, thus enhancing performance in surrogate-driven outer-loop problems that use gradient- and Hessian-based optimization algorithms. We demonstrate DINO’s capability to accelerate infinite-dimensional Bayesian inversion. First, we show that geometric MCMC driven by DINO achieves a 2–9x speedup in asymptotically exact posterior sampling. Second, we introduce LazyDINO, a DINO-driven measure transport method for amortized Bayesian inversion, which is one to two orders of magnitude more cost-efficient than competing methods.
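To give a flavor of the derivative-informed training described above, here is a minimal sketch (not the speaker's DINO code) of a loss that penalizes both the surrogate's output error and its Jacobian error against a model. The toy model, the linear surrogate, and all sizes and step counts are illustrative assumptions; in DINO the model is a parametric PDE solve and the Jacobians come from forward/adjoint sensitivity equations.

```python
# Hypothetical sketch of derivative-informed supervised learning:
# the loss matches both outputs and Jacobians of a reference model.
import jax
import jax.numpy as jnp

# Toy stand-in for an expensive high-fidelity model (illustrative only).
W_true = jax.random.normal(jax.random.PRNGKey(0), (3, 2))

def model(m):
    return jnp.tanh(m @ W_true)

# Deliberately simple linear surrogate; DINO uses neural operators.
def surrogate(params, m):
    return m @ params

def loss(params, M):
    # Output (L2) misfit over the training samples M.
    out_err = jnp.mean((jax.vmap(lambda m: surrogate(params, m))(M)
                        - jax.vmap(model)(M)) ** 2)
    # Jacobian misfit: compare Frechet derivatives of surrogate and model.
    jac_s = jax.vmap(jax.jacfwd(lambda m: surrogate(params, m)))(M)
    jac_m = jax.vmap(jax.jacfwd(model))(M)
    jac_err = jnp.mean((jac_s - jac_m) ** 2)
    return out_err + jac_err

M = jax.random.normal(jax.random.PRNGKey(1), (16, 3))
params = jnp.zeros((3, 2))
grad_fn = jax.grad(loss)
for _ in range(300):
    params = params - 0.1 * grad_fn(params, M)  # plain gradient descent
```

The Jacobian term is what distinguishes this from conventional black-box regression: it directly controls the derivative error the abstract refers to, which matters when the surrogate is later differentiated inside gradient- or Hessian-based outer-loop algorithms.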
This talk is based on joint work with Michael Brennan, Joshua Chen, Omar Ghattas, Youssef Marzouk, and Thomas O’Leary-Roseberry.
Short bio: Dr. Lianghao Cao is a Postdoctoral Scholar Research Associate in the Department of Computing and Mathematical Sciences at the California Institute of Technology. He obtained a B.S. in Engineering Mechanics from the University of Illinois at Urbana-Champaign, and a Ph.D. in Computational Science, Engineering, and Mathematics from The University of Texas at Austin. His research blends mechanistic modeling, uncertainty quantification, and scientific machine learning to understand, enhance, and control the quality, validity, and reliability of simulation-based predictions of complex physical systems.