To answer the question posed in the title, we first need to define what we mean by "better." In this talk, we will look at the tournament designs of football and table tennis from an optimal stopping perspective. In particular, we consider the problem of finding the optimal scheme for a knock-out tournament with 2^n players, aiming to determine the top player. In each game of the tournament, we observe a real-time score, modeled by a Brownian motion with drift, where the drift reflects the players' relative abilities. We can stop observing a game when its outcome seems clear and decide who advances; however, the longer a match is played, the higher the observation cost. We formulate and solve a stopping problem to minimise the probability of eliminating the best player while keeping the cost of observation low. The result tells us how to distribute the time cost smartly across tournament games and thus reveals which sport has the superior design. Additionally, we discuss a few variants of the problem and some possible generalisations.
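As an informal illustration of the match model described above, the following Python sketch simulates a single game's score as a Brownian motion with drift and applies a naive threshold stopping rule (stop observing once the score crosses a fixed band and declare the leader the winner). The drift, threshold, time step, and sample size are illustrative assumptions, not values from the talk.

```python
import random

def simulate_match(mu=0.5, threshold=2.0, dt=0.01, max_time=50.0, rng=None):
    """Simulate one match score as Brownian motion with drift mu
    (positive mu means player A is stronger).  Observation stops once
    |score| crosses the threshold or max_time elapses; the current
    leader advances.  All parameters are illustrative choices."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_time:
        x += mu * dt + rng.gauss(0.0, dt ** 0.5)  # Euler step of the score
        t += dt
    winner = "A" if x >= 0 else "B"
    return winner, t

# Estimate how often the stronger player advances under this rule.
rng = random.Random(42)
n = 2000
wins = sum(simulate_match(rng=rng)[0] == "A" for _ in range(n))
print(f"better player advances in {wins / n:.2%} of matches")
```

Raising the threshold lowers the chance of eliminating the better player but increases the expected observation time, which is exactly the trade-off the stopping problem in the talk optimises.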

This paper investigates the asymptotic behavior of linear-quadratic stochastic optimal control problems. By establishing a connection between the ergodic cost problem and the so-called cell problem in the homogenization of Hamilton-Jacobi equations, we reveal the turnpike properties of linear-quadratic stochastic optimal control problems from various perspectives.
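The turnpike property can be seen already in a deterministic, discrete-time LQ toy problem: the optimal trajectory quickly approaches the steady state and stays near it for most of the horizon, regardless of the initial condition or horizon length. The following sketch illustrates this via the standard backward Riccati recursion; all constants are illustrative and not taken from the paper.

```python
# Toy finite-horizon LQ problem: minimise sum_k (Q x_k^2 + R u_k^2)
# subject to x_{k+1} = a x_k + u_k.  Illustrative constants only.
a, Q, R, N, x0 = 1.1, 1.0, 1.0, 50, 5.0

# Backward Riccati recursion for the finite-horizon value function.
P = [0.0] * (N + 1)
P[N] = Q
for k in range(N - 1, -1, -1):
    P[k] = Q + a * a * P[k + 1] - (a * P[k + 1]) ** 2 / (R + P[k + 1])

# Forward closed-loop simulation with the optimal feedback u_k = -K_k x_k.
x = [x0]
for k in range(N):
    K = a * P[k + 1] / (R + P[k + 1])
    x.append((a - K) * x[-1])

# Turnpike behaviour: the state decays to the steady state (here 0)
# and remains there over the bulk of the horizon.
print(f"start {x[0]:.1f}, midpoint {abs(x[N // 2]):.2e}, end {abs(x[N]):.2e}")
```

The stochastic problems studied in the paper exhibit the same mechanism, with the cell problem of homogenization playing the role of the steady-state (ergodic) problem above.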

We examine the stationary relaxed singular control problem within a multi-dimensional framework for a single agent, as well as its Mean Field Game (MFG) equivalent. We demonstrate that optimal relaxed controls exist for both maximization and minimization cases. These relaxed controls are defined by random measures across the state and control spaces, with the state process described as a solution to the associated martingale problem. By leveraging findings from \cite{kur-sto}, we establish the equivalence between the martingale problem and the stationary forward equation. This allows us to reformulate the relaxed control problem into a linear programming problem within the measure space. We prove the sequential compactness of these measures, thereby confirming the feasibility of achieving an optimal solution. Subsequently, our focus shifts to Mean Field Games. Drawing on insights from the single-agent problem and employing the Kakutani--Glicksberg--Fan fixed point theorem, we derive the existence of a mean field game equilibrium.

In this talk I will present a principal-agent problem in continuous time with multiple lump-sum payments (contracts) paid at different deterministic times. Based on the approach introduced in Cvitanić-Possamai-Touzi, we reduce the non-zero-sum Stackelberg game between the principal and agent to a standard stochastic optimal control problem. We apply our result to a benchmark model for which we investigate how different inputs (payment frequencies, payment distribution, discount factors, agent's reservation utility, renegotiation) affect the principal's value. This is joint work with Erhan Bayraktar, Ibrahim Ekren, and Liwei Huang.

Mean field games model the strategic interaction among a large number of players by reducing the problem to two entities: the statistical distribution of all players on the one hand and a representative player on the other. The master equation, introduced by Lions, models this interaction in a single equation, whose independent variables are time, state, and distribution. It can be viewed as a nonlinear transport equation on an infinite-dimensional space. Solving this transport equation by the method of characteristics is essentially equivalent to finding the unique Nash equilibrium. When the equilibrium is not unique, we seek selection principles, i.e., how to determine which equilibrium players should follow in practice. A natural question, from the mathematical point of view, is whether entropy solutions can be used as a selection principle. We will examine certain classes of mean field games to show that the question is rather subtle and yields both positive and negative results.

We first consider the asymptotic behavior of the solution of a mean-field system of Backward Stochastic Differential Equations with Jumps (BSDEs): as the number of equations in the system grows to infinity, the solutions converge to independent and identically distributed (IID) solutions of McKean–Vlasov BSDEs. This property is known in the literature as backward propagation of chaos. We then provide a suitable framework under which this property is stable. In other words, given a sequence of mean-field systems of BSDEs that propagate chaos, their solutions, as the number of equations in the system grows to infinity, approximate an IID sequence of solutions of the limiting McKean–Vlasov BSDE. The generality of the framework allows us to incorporate either discrete-time or continuous-time approximating mean-field BSDE systems.
