
Presented By: Department of Mathematics

AIM seminar: Accelerating Convergence of Stochastic Gradient MCMC: algorithm and theory

Qi Feng, University of Michigan

Stochastic Gradient Langevin Dynamics (SGLD) offers advantages in multi-modal sampling and non-convex optimization, with broad applications in machine learning, e.g., uncertainty quantification for AI safety and the training of neural networks. A core issue in this field is the acceleration of the SGLD algorithm and the convergence rate of the continuous-time (mean-field) Langevin diffusion process to its invariant distribution. In this talk, I will present the general idea of entropy dissipation and the convergence rate analysis for linear and non-linear Fokker-Planck equations, and I will show some applications using various Langevin dynamics-based algorithms.
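As context for the abstract above, a minimal sketch of an SGLD-style sampler on a double-well potential illustrates the multi-modal sampling the talk refers to. This is an illustrative toy, not the speaker's algorithm: the potential U(x) = (x² − 1)², the injected gradient noise (standing in for a minibatch gradient), and all parameters are assumptions chosen for demonstration.

```python
import numpy as np

def grad_U(x, rng, noise_scale=0.1):
    # Gradient of the double-well potential U(x) = (x^2 - 1)^2,
    # perturbed with Gaussian noise to mimic a stochastic (minibatch) gradient.
    return 4.0 * x * (x**2 - 1.0) + noise_scale * rng.standard_normal()

def sgld(n_steps=20000, step=0.01, x0=0.0, seed=0):
    # Discretized Langevin update:
    #   x <- x - (step/2) * grad_U(x) + sqrt(step) * N(0, 1)
    # whose continuous-time limit has invariant density proportional to exp(-U).
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for t in range(n_steps):
        x = x - 0.5 * step * grad_U(x, rng) + np.sqrt(step) * rng.standard_normal()
        samples[t] = x
    return samples

samples = sgld()
# With this low energy barrier, the chain should visit both wells near x = -1 and x = +1.
```

The rate at which the law of such a chain approaches exp(-U) is exactly the kind of convergence question, studied via entropy dissipation for the associated Fokker-Planck equation, that the talk addresses.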
