Presented By: Industrial & Operations Engineering

IOE 899: Generative diffusion models: optimization, generalization, and fine-tuning

Renyuan Xu

About the speaker: Renyuan Xu is an assistant professor in the Department of Finance and Risk Engineering at New York University. Previously, she was an assistant professor at the University of Southern California and a Hooke Research Fellow at the University of Oxford. She completed her Ph.D. at UC Berkeley in 2019.
Her research interests include stochastic analysis, stochastic controls and games, machine learning theory, and mathematical finance. She received an NSF CAREER Award in 2024, the SIAM Financial Mathematics and Engineering Early Career Award in 2023, and a JP Morgan AI Faculty Research Award in 2022.

Abstract: Generative diffusion models, which transform noise into new data instances by reversing a Markov diffusion process in time, have become a cornerstone of modern generative models. Notable successful applications include image generation (e.g., DALL-E 2 by OpenAI), audio synthesis (DiffWave and Grad-TTS), and financial time series forecasting (TimeGrad).
A key component of these models is learning the associated Stein's score function. While the practical success of diffusion models is widely recognized, the theoretical underpinnings are still in development. In particular, it remains unclear whether gradient-based algorithms can learn the score function with provable accuracy. In this talk, I will present a suite of non-asymptotic theory aimed at understanding the data generation process in diffusion models and the accuracy of score estimation. Our analysis addresses both the optimization and generalization aspects of the learning process, establishing a novel connection to supervised learning and neural tangent kernels. If time permits, I will also share recent results on fine-tuning diffusion models from a stochastic control perspective.
This talk is based on joint work with Yinbin Han (NYU) and Meisam Razaviyayn (USC).
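Background (not part of the talk materials): a minimal sketch of the standard score-based formulation, assuming an Ornstein-Uhlenbeck forward process; the notation below is illustrative rather than taken from the speaker's results.

\[
\text{Forward (noising):}\qquad dX_t = -X_t\,dt + \sqrt{2}\,dW_t,\qquad X_0 \sim p_{\mathrm{data}},\quad t\in[0,T],
\]
\[
\text{Reverse (generation):}\qquad d\widetilde{X}_t = \bigl[\widetilde{X}_t + 2\,\nabla_x \log p_{T-t}(\widetilde{X}_t)\bigr]\,dt + \sqrt{2}\,d\overline{W}_t,\qquad \widetilde{X}_0 \sim \mathcal{N}(0, I),
\]
where \(p_t\) denotes the marginal law of \(X_t\). The score \(\nabla_x \log p_t\) is unknown and is approximated by a network \(s_\theta(x,t)\) trained with a denoising score-matching objective such as
\[
\min_\theta \; \mathbb{E}_{t,\,X_0,\,X_t}\bigl\|\,s_\theta(X_t, t) - \nabla_{x}\log p_t(X_t \mid X_0)\,\bigr\|^2 ,
\]
which is the score-estimation problem whose optimization and generalization properties the talk analyzes.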
