When / Where
All occurrences of this event have passed.
This listing is displayed for historical purposes.

Presented By: Financial/Actuarial Mathematics Seminar - Department of Mathematics

Time-inconsistent mean-field stopping problems: A regularized equilibrium approach

Fengyi Yuan (The Hong Kong Polytechnic University)

ABSTRACT: We study the mean-field Markov decision process (MDP) with centralized stopping under non-exponential discounting. The problem differs fundamentally from most existing studies due to its time-inconsistent nature and its continuous state space. Unlike many previous works on time-inconsistent stopping, we consider general discount functions without imposing structural conditions such as “decreasing impatience”. As a result, studying relaxed equilibria becomes necessary, since pure-strategy equilibria may not exist in general. We use a regularization method to prove the existence of relaxed equilibria and, at the same time, provide approximation results for them. We also establish connections between the mean-field MDP and the N-agent MDP. As a third advantage of the regularization method, we prove that the regularized equilibrium is an $\epsilon$-equilibrium of the N-agent problem when $N$ is sufficiently large and the regularization constant $\lambda$ is sufficiently small.
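
For context, a minimal sketch of the objective behind the time inconsistency (the notation $\delta$, $g$, $X$, $\tau$ is chosen here for illustration and is not taken from the abstract): with a general discount function $\delta$ satisfying $\delta(0)=1$, an agent evaluating a stopping time $\tau$ at time $s$ considers a reward of the form
$$ J(s, x; \tau) \;=\; \mathbb{E}\!\left[\, \delta(\tau - s)\, g(X_\tau) \,\middle|\, X_s = x \right]. $$
Unless $\delta$ is exponential, $\delta(t+h) \neq \delta(t)\,\delta(h)$ in general, so preferences assessed at different times $s$ disagree; this is the time inconsistency that motivates an equilibrium (rather than optimal) notion of stopping.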
