We study three stochastic differential games. In each game, two players control a process X = {Xt, 0 ≤ t < ∞} which takes values in the interval I = (0,1), is absorbed at the endpoints of I, and satisfies a stochastic differential equation
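The equation itself is not reproduced in this excerpt. As a hedged illustration only, a generic controlled-diffusion form consistent with the description, with placeholder coefficients f and σ and with the control processes α(·), β(·) introduced in the next sentence, would read
\[
\mathrm{d}X_t \;=\; f\bigl(X_t,\alpha_t,\beta_t\bigr)\,\mathrm{d}t \;+\; \sigma\bigl(X_t,\alpha_t,\beta_t\bigr)\,\mathrm{d}W_t,
\qquad X_0 = x \in (0,1),
\]
where W is a standard Brownian motion.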
The control functions α(·) and β(·) are chosen by players 𝔄 and 𝔅, respectively. In the first of our games, which is zero-sum, player 𝔄 has a continuous reward function u : [0,1] → ℝ. In addition to α(·), player 𝔄 chooses a stopping rule τ and seeks to maximize the expectation of u(Xτ); whereas player 𝔅 chooses β(·) and aims to minimize this expectation.
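In this zero-sum setting the standard notions, sketched here without the paper's precise admissibility conditions, are as follows: the game has a value if
\[
\sup_{\tau,\;\alpha(\cdot)}\;\inf_{\beta(\cdot)}\;\mathbb{E}\,u(X_\tau)
\;=\;
\inf_{\beta(\cdot)}\;\sup_{\tau,\;\alpha(\cdot)}\;\mathbb{E}\,u(X_\tau),
\]
and a triple (τ*, α*(·), β*(·)), notation introduced here only for illustration, is a saddle point if
\[
\mathbb{E}^{\alpha,\beta^*} u(X_\tau)\;\le\;\mathbb{E}^{\alpha^*,\beta^*} u(X_{\tau^*})\;\le\;\mathbb{E}^{\alpha^*,\beta} u(X_{\tau^*})
\qquad\text{for all admissible } (\tau,\alpha(\cdot)) \text{ and } \beta(\cdot),
\]
where the superscripts indicate the controls under which the state process and the expectation are computed.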
In the second game, players 𝔄 and 𝔅 each have continuous reward functions u(·) and v(·), choose stopping rules τ and ρ, and seek to maximize the expectations of u(Xτ) and v(Xρ), respectively.
In the third game the two players again have continuous reward functions u(·) and v(·), now assumed to be unimodal, and choose stopping rules τ and ρ. This game terminates at the minimum τ∧ρ of the two stopping rules, and players 𝔄 and 𝔅 seek to maximize the expectations of u(Xτ∧ρ) and v(Xτ∧ρ), respectively.
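For the two non-zero-sum games the appropriate solution concept is a Nash equilibrium: neither player can increase his own expected reward by a unilateral change of control and stopping rule. In the third game, for instance, with (τ*, α*(·)) and (ρ*, β*(·)) denoting candidate strategies of the two players (notation introduced here only for illustration), the equilibrium conditions, sketched here, read
\[
\mathbb{E}^{\alpha^*,\beta^*} u\bigl(X_{\tau^*\wedge\rho^*}\bigr)\;\ge\;\mathbb{E}^{\alpha,\beta^*} u\bigl(X_{\tau\wedge\rho^*}\bigr)
\quad\text{and}\quad
\mathbb{E}^{\alpha^*,\beta^*} v\bigl(X_{\tau^*\wedge\rho^*}\bigr)\;\ge\;\mathbb{E}^{\alpha^*,\beta} v\bigl(X_{\tau^*\wedge\rho}\bigr)
\]
for all admissible deviations (τ, α(·)) of player 𝔄 and (ρ, β(·)) of player 𝔅, the superscripts again indicating the controls driving the state process.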
Under mild technical assumptions we show that the first game has a value, and find a saddle point of optimal strategies for the players. The other two games are not zero-sum, in general, and for them we construct Nash equilibria.