Presented By: Financial/Actuarial Mathematics Seminar - Department of Mathematics

Utilizing game theory and deep learning to find optimal policies for a large number of agents

Gokce Dayanikli, UIUC

In many real-life policy-making applications, a principal (e.g., a government or regulator) wants to design optimal policies for a large population of interacting agents, each of whom optimizes their own objective in a game-theoretic framework. With this motivation, we begin by introducing a continuous-time Stackelberg mean field game between a principal and a large number of agents. In this model, the agents play a non-cooperative game, choosing their controls to optimize their individual objectives while interacting with the principal and with the rest of the population through the population distribution. The principal, in turn, can steer the resulting mean field game Nash equilibrium through incentives in order to optimize her own objective. Stackelberg mean field games are therefore inherently bi-level problems: an optimal control problem at the principal's level sits on top of a Nash equilibrium problem at the population level. This bi-level structure creates significant efficiency challenges for numerical methods. For this reason, we analyze how to rewrite the bi-level problem as a single-level problem and propose a deep learning approach to solve it. We then briefly discuss the convergence of the numerical solution of this single-level problem to the solution of the original problem. We conclude with applications, including a systemic risk model with a regulator and many banks, and an optimal contract problem between a project manager and a large number of employees.
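As a toy illustration of the single-level idea (a sketch, not the speaker's actual method), consider a made-up one-period linear-quadratic Stackelberg problem with identical agents: the inner Nash equilibrium is replaced by the representative agent's first-order optimality condition together with the mean-field consistency condition, both conditions are penalized, and the resulting single objective is minimized jointly over the principal's incentive and the agents' control. All model choices (costs, target, penalty weight) are hypothetical; in the talk's setting, controls would be parameterized by deep neural networks rather than scalars.

```python
# Toy single-level reformulation of a one-period Stackelberg mean field game.
# Hypothetical model, for illustration only:
#   representative agent:  min_a  0.5*a**2 + 0.5*(a - m)**2 - lam*a
#   mean-field consistency:  m = a   (identical agents starting at 0)
#   principal:  min_lam  (m - 1)**2 + 0.5*lam**2   (steer the mean toward 1)
# Bi-level -> single-level: penalize the agent's stationarity condition
# (2a - m - lam = 0) and the consistency condition (m - a = 0), then run
# plain gradient descent over (lam, a, m) jointly.

def solve(rho=50.0, lr=1e-3, steps=50_000):
    lam = a = m = 0.0
    for _ in range(steps):
        g = 2 * a - m - lam          # agent stationarity residual
        h = m - a                    # mean-field consistency residual
        d_lam = lam - 2 * rho * g    # gradient of the penalized objective
        d_a = 4 * rho * g - 2 * rho * h
        d_m = 2 * (m - 1.0) - 2 * rho * g + 2 * rho * h
        lam -= lr * d_lam
        a -= lr * d_a
        m -= lr * d_m
    return lam, m

lam, m = solve()
# Exact Stackelberg solution of this toy model: lam = m = 2/3; the penalized
# single-level solution approaches it as rho grows.
print(f"lam ≈ {lam:.3f}, mean state ≈ {m:.3f}")  # → lam ≈ 0.645, mean state ≈ 0.677
```

The penalty weight trades off constraint violation against conditioning, which hints at the efficiency challenges the abstract mentions; the convergence of such single-level approximations to the original bi-level solution is exactly the kind of question the talk addresses.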
