Presented By: Financial/Actuarial Mathematics Seminar - Department of Mathematics
Unified continuous-time q-learning for mean-field game and mean-field control problems
Fengyi Yuan, UM
We study continuous-time q-learning in mean-field jump-diffusion models from the representative agent's perspective. We introduce the integrated q-function in decoupled form (the decoupled Iq-function) and establish its martingale characterization together with the value function, which yields a unified policy evaluation rule for both mean-field game (MFG) and mean-field control (MFC) problems. Depending on whether the task is to solve the MFG or the MFC problem, the decoupled Iq-function can then be used in different ways to learn the mean-field equilibrium policy or the mean-field optimal policy, respectively. As a result, we devise a unified q-learning algorithm for both MFG and MFC problems by utilizing all test policies stemming from the mean-field interactions. For several examples in the jump-diffusion setting, within and beyond the linear-quadratic (LQ) framework, we obtain exact parameterizations of the decoupled Iq-functions and the value functions, and illustrate our algorithm from the representative agent's perspective with satisfactory performance. Joint work with Xiang Yu and Xiaoli Wei.
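For orientation, the following is a schematic sketch of the kind of martingale characterization referred to above, written for a plain single-agent diffusion model in the spirit of continuous-time q-learning (Jia and Zhou), not for the talk's mean-field jump-diffusion setting; the symbols J, q, beta, r and the test-action process below are illustrative assumptions, and the decoupled Iq-function of the talk additionally carries the population distribution.

% Schematic (J, q) martingale characterization in a single-agent model,
% assuming a discount rate beta > 0, a running reward r, and a state
% process X generated by actions a_s from an arbitrary admissible test policy.
\[
  M_t \;=\; e^{-\beta t}\, J(t, X_t)
  \;+\; \int_0^t e^{-\beta s}\,\bigl[\, r(s, X_s, a_s) - q(s, X_s, a_s) \,\bigr]\, ds .
\]
% Policy evaluation rule: a candidate pair (J, q) is associated with the given
% policy if and only if M_t is a martingale for every such test-action process;
% the talk extends this rule so that a single characterization covers both the
% MFG and the MFC problems via the decoupled Iq-function.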