Continuous game

{{Short description|Generalization of games used in game theory}}

{{Refimprove|date=March 2009}}

{{Use dmy dates|date=August 2024}}

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe (noughts and crosses) or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The continuous-game concept allows games to include more general sets of pure strategies, which may be uncountably infinite.

In general, a game with uncountably infinite strategy sets will not necessarily have a Nash equilibrium solution. If, however, the strategy sets are required to be compact and the utility functions continuous, then a Nash equilibrium is guaranteed; this follows from Glicksberg's generalization of the Kakutani fixed point theorem. The class of continuous games is for this reason usually defined and studied as a subset of the larger class of infinite games (i.e. games with infinite strategy sets) in which the strategy sets are compact and the utility functions continuous.

==Formal definition==

Define the n-player continuous game G = (P, \mathbf{C}, \mathbf{U}) where

::P = {1, 2, 3,\ldots, n} is the set of n\, players,

:: \mathbf{C}= (C_1, C_2, \ldots, C_n), where each C_i is a compact set in a metric space corresponding to the ith player's set of pure strategies,

:: \mathbf{U}= (u_1, u_2, \ldots, u_n) where u_i:\mathbf{C}\to \R is the utility function of player i\,

: We define \Delta_i to be the set of Borel probability measures on C_i, giving us the mixed strategy space of player i.

: Define the strategy profile \boldsymbol{\sigma} = (\sigma_1, \sigma_2, \ldots, \sigma_n) where \sigma_i \in \Delta_i\,

Let \boldsymbol{\sigma}_{-i} be a strategy profile of all players except for player i. As with discrete games, we can define a best response correspondence for player i, denoted b_i. b_i is a relation from the set of all probability distributions over opponent player profiles to a set of player i's strategies, such that each element of

:b_i(\sigma_{-i})\,

is a best response to \sigma_{-i}. Define

:\mathbf{b}(\boldsymbol{\sigma}) = b_1(\sigma_{-1}) \times b_2(\sigma_{-2}) \times \cdots \times b_n(\sigma_{-n}).

A strategy profile \boldsymbol{\sigma}* is a Nash equilibrium if and only if

:\boldsymbol{\sigma}* \in \mathbf{b}(\boldsymbol{\sigma}*).

The existence of a Nash equilibrium for any continuous game with continuous utility functions can be proved using Irving Glicksberg's generalization of the Kakutani fixed point theorem.<ref>I. L. Glicksberg, "A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points", Proceedings of the American Mathematical Society, 3(1):170–174, February 1952.</ref> In general, there may not be a solution if we allow strategy spaces C_i that are not compact, or if we allow discontinuous utility functions.
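The following numerical sketch illustrates these definitions on a concrete continuous game, namely the two-player zero-sum game on [0,1] \times [0,1] with H(x,y) = (x-y)^2 that is worked out in the Examples section below. It is illustrative only: the discretization of the strategy sets, the grid resolution and the use of NumPy are choices made here, and the code merely approximates the pure-strategy best response correspondences b_X and b_Y.

<syntaxhighlight lang="python">
# Illustrative only: approximate pure-strategy best responses of a continuous
# game by discretizing the compact strategy sets C_X = C_Y = [0, 1].
import numpy as np

grid = np.linspace(0.0, 1.0, 201)        # discretized pure strategies

def u_X(x, y):                           # player X's utility, continuous on C_X x C_Y
    return (x - y) ** 2

def u_Y(x, y):                           # player Y's utility (zero-sum: u_Y = -u_X)
    return -(x - y) ** 2

def best_response_X(y):
    """Approximate b_X(y): the grid strategies maximizing u_X(., y)."""
    values = u_X(grid, y)
    return grid[np.isclose(values, values.max())]

def best_response_Y(x):
    """Approximate b_Y(x): the grid strategies maximizing u_Y(x, .)."""
    values = u_Y(x, grid)
    return grid[np.isclose(values, values.max())]

print(best_response_X(0.5))   # [0. 1.]: both 0 and 1 are best responses at y = 1/2
print(best_response_Y(0.3))   # ~[0.3]: Y matches X's pure strategy
</syntaxhighlight>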

===Separable games===

A separable game is a continuous game where, for any i, the utility function u_i:\mathbf{C}\to \R can be expressed in the sum-of-products form:

: u_i(\mathbf{s}) = \sum_{k_1=1}^{m_1} \ldots \sum_{k_n=1}^{m_n} a_{i,\, k_1 \ldots k_n} f_{1,k_1}(s_1) \cdots f_{n,k_n}(s_n), where \mathbf{s} \in \mathbf{C}, s_i \in C_i, a_{i,\, k_1 \ldots k_n} \in \R, and the functions f_{i,k}:C_i \to \R are continuous.

A polynomial game is a separable game where each C_i\, is a compact interval on \R\, and each utility function can be written as a multivariate polynomial.

In general, mixed Nash equilibria of separable games are easier to compute than those of non-separable games, as implied by the following theorem:

:For any separable game there exists at least one Nash equilibrium where player i mixes at most m_i + 1 pure strategies.<ref>N. Stein, A. Ozdaglar and P. A. Parrilo, "Separable and Low-Rank Continuous Games", International Journal of Game Theory, 37(4):475–504, December 2008. https://arxiv.org/abs/0707.3462</ref>

Whereas an equilibrium strategy for a non-separable game may require an uncountably infinite support, a separable game is guaranteed to have at least one Nash equilibrium with finitely supported mixed strategies.
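As an illustration (a sketch under assumptions made here, not code from the cited paper), the polynomial game H(x,y) = (x-y)^2 used in the Examples section below can be written in the sum-of-products form as f(x)^T A f(y) with the monomial basis f(t) = (1, t, t^2) and a 3 \times 3 coefficient matrix A. The expected payoff under mixed strategies then depends only on each player's first two moments, which is the mechanism behind the finite-support result above.

<syntaxhighlight lang="python">
# Illustrative only: the polynomial game H(x, y) = (x - y)^2 in separable
# sum-of-products form H(x, y) = f(x)^T A f(y), with f(t) = (1, t, t^2).
import numpy as np

A = np.array([[0.0,  0.0, 1.0],   # A[k1, k2] is the coefficient of x^k1 * y^k2,
              [0.0, -2.0, 0.0],   # so H = y^2 - 2xy + x^2 = (x - y)^2
              [1.0,  0.0, 0.0]])

def monomials(t):
    return np.array([1.0, t, t ** 2])

def H(x, y):
    return monomials(x) @ A @ monomials(y)

def expected_payoff(moments_X, moments_Y):
    """E[H] for independent mixed strategies, given moment vectors (1, E[t], E[t^2])."""
    return moments_X @ A @ moments_Y

print(H(0.2, 0.9))                                   # 0.49 = (0.2 - 0.9)^2
# X mixing 0 and 1 equally has moments (1, 1/2, 1/2); Y playing 1/2 has (1, 1/2, 1/4)
print(expected_payoff(np.array([1.0, 0.5, 0.5]),
                      np.array([1.0, 0.5, 0.25])))   # 0.25, the value found below
</syntaxhighlight>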

==Examples==

===Separable games===

====A polynomial game====

Consider a zero-sum 2-player game between players X and Y, with C_X = C_Y = \left [0,1 \right ] . Denote elements of C_X\, and C_Y\, as x\, and y\, respectively. Define the utility functions H(x,y) = u_x(x,y) = -u_y(x,y)\, where

:H(x,y)=(x-y)^2\, .

The pure strategy best response relations are:

:b_X(y) = \begin{cases}
1, & \text{if } y \in [0, 1/2) \\
0 \text{ or } 1, & \text{if } y = 1/2 \\
0, & \text{if } y \in (1/2, 1]
\end{cases}

:b_Y(x) = x\,

The graphs of b_X(y) and b_Y(x) do not intersect, so there is no pure strategy Nash equilibrium.

However, since H is continuous and the strategy sets are compact, Glicksberg's theorem guarantees a mixed strategy equilibrium. To find it, express the expected value v = \mathbb{E}[H(x,y)] as a linear combination of the first and second moments of the probability distributions of X and Y:

: v = \mu_{X2} - 2\mu_{X1} \mu_{Y1} + \mu_{Y2}\,

(where \mu_{XN} = \mathbb{E}[x^N] and similarly for Y; the cross term uses the independence of the players' randomizations, so that \mathbb{E}[xy] = \mu_{X1}\mu_{Y1}).

The constraints on \mu_{X1} and \mu_{X2} (with analogous constraints on the moments of Y) are given by the Hausdorff moment conditions:

:\begin{align}
\mu_{X1} \ge \mu_{X2}, &\qquad \mu_{Y1} \ge \mu_{Y2}, \\
\mu_{X1}^2 \le \mu_{X2}, &\qquad \mu_{Y1}^2 \le \mu_{Y2}.
\end{align}

Each pair of constraints defines a compact convex subset in the plane. Since v is linear in each player's moments, any extremum with respect to a player's first two moments will lie on the boundary of this subset. Player i's equilibrium strategy will therefore lie on

: \mu_{i1} = \mu_{i2} \text{ or } \mu_{i1}^2 = \mu_{i2}

Note that the first equation only permits mixtures of 0 and 1, whereas the second equation only permits pure strategies: \mu_{i1} = \mu_{i2} forces \mathbb{E}[s_i(1-s_i)] = 0, so the support is contained in \{0, 1\}, while \mu_{i1}^2 = \mu_{i2} forces the variance to zero. Moreover, if player i's best response at a certain point lies on the line \mu_{i1} = \mu_{i2}, it will lie on the whole line, so that both 0 and 1 are best responses. b_Y(\mu_{X1},\mu_{X2}) simply gives the pure strategy y = \mu_{X1}, so b_Y will never give both 0 and 1.

However, b_X gives both 0 and 1 when y = 1/2.

A Nash equilibrium exists when:

: (\mu_{X1}*, \mu_{X2}*, \mu_{Y1}*, \mu_{Y2}*) = (1/2, 1/2, 1/2, 1/4)\,

This determines one unique equilibrium, in which player X plays 0 and 1 each with probability 1/2 and player Y plays the pure strategy 1/2. The value of the game is 1/4.
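This equilibrium can be checked numerically; the following sketch (an illustrative verification, with an arbitrary grid resolution) confirms that the value is 1/4 and that neither player can gain by deviating to any pure strategy on a fine grid.

<syntaxhighlight lang="python">
# Illustrative check of the equilibrium: X mixes 0 and 1 with probability 1/2
# each, Y plays the pure strategy 1/2, and no pure deviation on a fine grid helps.
import numpy as np

H = lambda x, y: (x - y) ** 2
grid = np.linspace(0.0, 1.0, 1001)

value = 0.5 * H(0.0, 0.5) + 0.5 * H(1.0, 0.5)            # expected payoff at equilibrium
x_deviations = H(grid, 0.5)                              # X deviating against y = 1/2
y_deviations = 0.5 * H(0.0, grid) + 0.5 * H(1.0, grid)   # Y deviating against X's mixture

print(value)                                 # 0.25, the value of the game
print(x_deviations.max() <= value + 1e-12)   # True: X (the maximizer) cannot gain
print(y_deviations.min() >= value - 1e-12)   # True: Y (the minimizer) cannot gain
</syntaxhighlight>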

===Non-separable games===

====A rational payoff function====

Consider a zero-sum 2-player game between players X and Y, with C_X = C_Y = \left [0,1 \right ] . Denote elements of C_X\, and C_Y\, as x\, and y\, respectively. Define the utility functions H(x,y) = u_x(x,y) = -u_y(x,y)\, where

:H(x,y)=\frac{(1+x)(1+y)(1-xy)}{(1+xy)^2}.

This game has no pure strategy Nash equilibrium. It can be shown<ref>Irving Leonard Glicksberg & Oliver Alfred Gross (1950). "Notes on Games over the Square". In H. W. Kuhn & A. W. Tucker (eds.), Contributions to the Theory of Games: Volume II. Annals of Mathematics Studies 28, pp. 173–183. Princeton University Press.</ref> that a unique mixed strategy Nash equilibrium exists with the following pair of cumulative distribution functions:

: F^*(x) = \frac{4}{\pi} \arctan{\sqrt{x}} \qquad G^*(y) = \frac{4}{\pi} \arctan{\sqrt{y}}.

Or, equivalently, the following pair of probability density functions:

: f^*(x) = \frac{2}{\pi \sqrt{x} (1+x)} \qquad g^*(y) = \frac{2}{\pi \sqrt{y} (1+y)}.

The value of the game is 4/\pi.
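The stated value can be checked numerically. The sketch below (illustrative only; the inverse-CDF change of variables x = \tan^2(\pi u/4) and the grid size are choices made here) integrates H against the equilibrium distributions F^* and G^* and recovers approximately 4/\pi \approx 1.2732.

<syntaxhighlight lang="python">
# Illustrative check of the value 4/pi: integrate H against F* and G* using the
# inverse-CDF substitution x = tan(pi*u/4)^2, which maps u ~ Uniform(0, 1) to F*.
import numpy as np

def H(x, y):
    return (1 + x) * (1 + y) * (1 - x * y) / (1 + x * y) ** 2

n = 2000
u = (np.arange(n) + 0.5) / n                 # midpoint rule on (0, 1)
x = np.tan(np.pi * u / 4) ** 2               # quantiles of F* on a uniform grid
y = x                                        # G* is the same distribution

value = H(x[:, None], y[None, :]).mean()     # E[H(X, Y)] under the equilibrium
print(value, 4 / np.pi)                      # both approximately 1.2732
</syntaxhighlight>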

====Requiring a Cantor distribution====

Consider a zero-sum 2-player game between players X and Y, with C_X = C_Y = \left [0,1 \right ] . Denote elements of C_X\, and C_Y\, as x\, and y\, respectively. Define the utility functions H(x,y) = u_x(x,y) = -u_y(x,y)\, where

:H(x,y)=\sum_{n=0}^\infty \frac{1}{2^n}\left(2x^n-\left (\left(1-\frac{x}{3} \right )^n-\left (\frac{x}{3}\right)^n \right ) \right ) \left(2y^n - \left (\left(1-\frac{y}{3} \right )^n-\left (\frac{y}{3}\right)^n \right ) \right ).

This game has a unique mixed strategy equilibrium where each player plays a mixed strategy with the Cantor singular function as the cumulative distribution function.<ref>Gross, O. (1952). "A rational payoff characterization of the Cantor distribution". Technical Report D-1349, The RAND Corporation.</ref>
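The claimed equilibrium can be explored numerically. The sketch below (illustrative only; the truncation depths and sample size are arbitrary choices) draws both players' strategies from the Cantor distribution by generating random base-3 digits restricted to \{0, 2\}, and Monte Carlo estimates the expected payoff, i.e. the value of the game, under a truncation of the series defining H.

<syntaxhighlight lang="python">
# Illustrative Monte Carlo estimate of the expected payoff when both players use
# the Cantor distribution, with the series for H truncated after 60 terms.
import numpy as np

rng = np.random.default_rng(0)

def cantor_samples(size, digits=40):
    """Samples whose CDF is the Cantor singular function (ternary digits in {0, 2})."""
    d = 2 * rng.integers(0, 2, size=(size, digits))
    return d @ (3.0 ** -np.arange(1, digits + 1))

def H(x, y, terms=60):
    total = np.zeros(np.broadcast(x, y).shape)
    for n in range(terms):
        fx = 2 * x ** n - ((1 - x / 3) ** n - (x / 3) ** n)
        fy = 2 * y ** n - ((1 - y / 3) ** n - (y / 3) ** n)
        total += fx * fy / 2 ** n
    return total

x = cantor_samples(200_000)
y = cantor_samples(200_000)
print(H(x, y).mean())     # Monte Carlo estimate of the game's value E[H(X, Y)]
</syntaxhighlight>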

==Further reading==

* H. W. Kuhn and A. W. Tucker, eds. (1950). Contributions to the Theory of Games: Vol. II. Annals of Mathematics Studies 28. Princeton University Press. {{ISBN|0-691-07935-8}}.


==References==