
Continuous game

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe (noughts and crosses) or checkers (draughts). In other words, it extends the notion of a discrete game, in which the players choose from a finite set of pure strategies, by allowing more general sets of pure strategies, which may be uncountably infinite.

In general, a game with uncountably infinite strategy sets will not necessarily have a Nash equilibrium solution. If, however, the strategy sets are required to be compact and the utility functions continuous, then a Nash equilibrium is guaranteed; this follows from Glicksberg's generalization of the Kakutani fixed point theorem. The class of continuous games is for this reason usually defined and studied as a subset of the larger class of infinite games (i.e. games with infinite strategy sets) in which the strategy sets are compact and the utility functions continuous.

Formal definition

Define the $n$-player continuous game $G = (P, \mathbf{C}, \mathbf{U})$ where

$P = \{1, 2, 3, \ldots, n\}$ is the set of $n$ players,
$\mathbf{C} = (C_1, C_2, \ldots, C_n)$, where each $C_i$ is a compact set, in a metric space, corresponding to the $i$th player's set of pure strategies,
$\mathbf{U} = (u_1, u_2, \ldots, u_n)$, where $u_i : \mathbf{C} \to \mathbb{R}$ is the utility function of player $i$.

We define $\Delta_i$ to be the set of Borel probability measures on $C_i$, giving us the mixed strategy space of player $i$. Define the strategy profile $\boldsymbol{\sigma} = (\sigma_1, \sigma_2, \ldots, \sigma_n)$, where $\sigma_i \in \Delta_i$.

Let $\boldsymbol{\sigma}_{-i}$ be a strategy profile of all players except player $i$. As with discrete games, we can define a best response correspondence for player $i$, $b_i$: a relation from the set of all probability distributions over opponent player profiles to player $i$'s set of strategies, such that each element of $b_i(\sigma_{-i})$ is a best response to $\sigma_{-i}$. Define

$\mathbf{b}(\boldsymbol{\sigma}) = b_1(\sigma_{-1}) \times b_2(\sigma_{-2}) \times \cdots \times b_n(\sigma_{-n})$.

A strategy profile $\boldsymbol{\sigma}^*$ is a Nash equilibrium if and only if $\boldsymbol{\sigma}^* \in \mathbf{b}(\boldsymbol{\sigma}^*)$. The existence of a Nash equilibrium for any continuous game with continuous utility functions can be proved using Irving Glicksberg's generalization of the Kakutani fixed point theorem.[1] In general, there may not be a solution if we allow strategy spaces $C_i$ that are not compact, or utility functions that are not continuous.

Separable games

A separable game is a continuous game where, for any $i$, the utility function $u_i : \mathbf{C} \to \mathbb{R}$ can be expressed in the sum-of-products form:

$u_i(\mathbf{s}) = \sum_{k_1=1}^{m_1} \cdots \sum_{k_n=1}^{m_n} a_{i,\,k_1 \ldots k_n}\, f_{1,k_1}(s_1) \cdots f_{n,k_n}(s_n)$,

where $\mathbf{s} \in \mathbf{C}$, $s_i \in C_i$, $a_{i,\,k_1 \ldots k_n} \in \mathbb{R}$, and the functions $f_{i,k} : C_i \to \mathbb{R}$ are continuous.
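As a concrete illustration (a Python sketch, not from the source), the sum-of-products form means a two-player separable utility can be evaluated as a bilinear form $f(x)^{\mathsf T} A f(y)$ in the basis functions. Here the polynomial utility $(x-y)^2$ is expanded in the monomial basis $\{1, s, s^2\}$; the coefficient matrix and helper names are illustrative choices:

```python
import numpy as np

# A two-player separable (here polynomial) utility
#   u(x, y) = sum_{j,k} a_{jk} * x^j * y^k
# can be evaluated as f(x)^T A f(y) with monomial basis f_k(s) = s^k.
# Example: H(x, y) = (x - y)^2 = x^2 - 2xy + y^2, so m_1 = m_2 = 3
# basis functions {1, s, s^2} and the coefficient matrix A below.
A = np.array([[0.0,  0.0, 1.0],   # constant-in-x row: picks up y^2
              [0.0, -2.0, 0.0],   # x row: picks up -2xy
              [1.0,  0.0, 0.0]])  # x^2 row: picks up x^2

def basis(s, m=3):
    """Monomial basis f_k(s) = s^k for k = 0..m-1."""
    return np.array([s**k for k in range(m)])

def u(x, y):
    """Separable utility evaluated as a bilinear form in the bases."""
    return basis(x) @ A @ basis(y)

# Sanity check against the closed form (x - y)^2:
for x, y in [(0.0, 0.0), (1.0, 0.0), (0.3, 0.7)]:
    assert abs(u(x, y) - (x - y)**2) < 1e-12
```

The same contraction generalizes to $n$ players with an order-$n$ coefficient tensor.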

A polynomial game is a separable game where each $C_i$ is a compact interval in $\mathbb{R}$ and each utility function can be written as a multivariate polynomial.

In general, mixed Nash equilibria of separable games are easier to compute than those of non-separable games, as implied by the following theorem:

For any separable game there exists at least one Nash equilibrium where player $i$ mixes at most $m_i + 1$ pure strategies.[2]

Whereas an equilibrium strategy for a non-separable game may require an uncountably infinite support, a separable game is guaranteed to have at least one Nash equilibrium with finitely supported mixed strategies.

Examples

Separable games

A polynomial game

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$H(x, y) = (x - y)^2$.

The pure strategy best response relations are:

$b_X(y) = \begin{cases} 1, & \text{if } y \in [0, 1/2) \\ 0 \text{ or } 1, & \text{if } y = 1/2 \\ 0, & \text{if } y \in (1/2, 1] \end{cases}$

$b_Y(x) = x$
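These best responses can be checked numerically. The following Python sketch (grid resolution and helper names are illustrative choices, not from the source) recovers the piecewise form of $b_X$ and the identity $b_Y(x) = x$ by brute-force search over a grid:

```python
import numpy as np

# Numerical sketch of the pure-strategy best responses for
# H(x, y) = (x - y)^2 on a grid over [0, 1].  X maximizes H, so
# b_X(y) is the grid point(s) farthest from y; Y minimizes H,
# so b_Y(x) is the grid point closest to x.
H = lambda x, y: (x - y) ** 2
xs = np.linspace(0.0, 1.0, 101)

def b_X(y):
    vals = H(xs, y)
    return xs[np.isclose(vals, vals.max())]   # may be {0}, {1}, or {0, 1}

def b_Y(x):
    vals = H(x, xs)
    return xs[np.isclose(vals, vals.min())]

print(b_X(0.25))  # x = 1 is farthest from y < 1/2
print(b_X(0.5))   # tie: both endpoints 0 and 1 are best
print(b_Y(0.7))   # Y matches x exactly
```

Plotting both relations over $[0,1]$ makes the lack of an intersection visible.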

$b_X(y)$ and $b_Y(x)$ do not intersect, so there is no pure strategy Nash equilibrium. However, Glicksberg's theorem guarantees a mixed strategy equilibrium. To find it, express the expected value $v = \mathbb{E}[H(x, y)]$ as a linear combination of the first and second moments of the probability distributions of X and Y:

$v = \mu_{X2} - 2\mu_{X1}\mu_{Y1} + \mu_{Y2}$

(where $\mu_{XN} = \mathbb{E}[x^N]$, and similarly for Y).
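The moment expansion can be sanity-checked by Monte Carlo. The sketch below (the test distributions are arbitrary choices made for illustration) verifies that $\mathbb{E}[(x-y)^2] = \mu_{X2} - 2\mu_{X1}\mu_{Y1} + \mu_{Y2}$ for independent $x$ and $y$:

```python
import numpy as np

# Check E[(x - y)^2] = mu_X2 - 2*mu_X1*mu_Y1 + mu_Y2 for independent
# x, y drawn from two arbitrary distributions on [0, 1].
rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=200_000)     # arbitrary test distribution for X
y = rng.uniform(0.0, 1.0, size=200_000)  # arbitrary test distribution for Y

lhs = np.mean((x - y) ** 2)
rhs = np.mean(x**2) - 2 * np.mean(x) * np.mean(y) + np.mean(y**2)
assert abs(lhs - rhs) < 2e-3  # agree up to Monte Carlo error
```

The identity holds because independence makes the cross term factor as $\mathbb{E}[xy] = \mathbb{E}[x]\,\mathbb{E}[y]$.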

The constraints on $\mu_{X1}$ and $\mu_{X2}$ (with similar constraints for Y) are given by Hausdorff as:

$\mu_{X1} \geq \mu_{X2}, \quad \mu_{X1}^2 \leq \mu_{X2}; \qquad \mu_{Y1} \geq \mu_{Y2}, \quad \mu_{Y1}^2 \leq \mu_{Y2}$

Each pair of constraints defines a compact convex subset in the plane. Since $v$ is linear, any extrema with respect to a player's first two moments will lie on the boundary of this subset. Player $i$'s equilibrium strategy will lie on

$\mu_{i1} = \mu_{i2} \quad \text{or} \quad \mu_{i1}^2 = \mu_{i2}$.

Note that the first equation only permits mixtures of 0 and 1, whereas the second equation only permits pure strategies. Moreover, if the best response at a certain point for player $i$ lies on $\mu_{i1} = \mu_{i2}$, it will lie on the whole line, so that both 0 and 1 are best responses. $b_Y(\mu_{X1}, \mu_{X2})$ simply gives the pure strategy $y = \mu_{X1}$, so $b_Y$ will never give both 0 and 1. However, $b_X$ gives both 0 and 1 when $y = 1/2$. A Nash equilibrium exists when:

$(\mu_{X1}^*, \mu_{X2}^*, \mu_{Y1}^*, \mu_{Y2}^*) = (1/2, 1/2, 1/2, 1/4)$

This determines one unique equilibrium, where Player X plays 0 half of the time and 1 the other half of the time, while Player Y plays the pure strategy 1/2. The value of the game is 1/4.
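This equilibrium can be verified directly, illustrating the finite-support theorem for separable games. The Python sketch below (grid-based, helper names are my own) checks that neither player can improve on the value 1/4:

```python
import numpy as np

# Verify the claimed equilibrium of H(x, y) = (x - y)^2:
# X mixes 0 and 1 with probability 1/2 each; Y plays 1/2 (pure).
H = lambda x, y: (x - y) ** 2
grid = np.linspace(0.0, 1.0, 1001)

# Against X's mixture, Y (the minimizer) faces 0.5*H(0,y) + 0.5*H(1,y).
payoff_vs_y = 0.5 * H(0.0, grid) + 0.5 * H(1.0, grid)
assert np.isclose(payoff_vs_y.min(), 0.25)        # Y can do no better than 1/4
assert np.isclose(grid[payoff_vs_y.argmin()], 0.5)  # ...and y = 1/2 attains it

# Against y = 1/2, X (the maximizer) faces (x - 1/2)^2 <= 1/4.
payoff_vs_x = H(grid, 0.5)
assert np.isclose(payoff_vs_x.max(), 0.25)        # X can do no better than 1/4
# Both x = 0 and x = 1 attain this maximum, so mixing them is a best response.
```

Since X's equilibrium strategy has support of size 2 and Y's of size 1, both respect the $m_i + 1$ bound from the theorem above.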

Non-separable games

A rational pay-off function

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$H(x, y) = \dfrac{(1 + x)(1 + y)(1 - xy)}{(1 + xy)^2}$.

This game has no pure strategy Nash equilibrium. It can be shown[3] that a unique mixed strategy Nash equilibrium exists with the following pair of probability density functions:

$f^*(x) = \dfrac{2}{\pi \sqrt{x}\,(1 + x)} \qquad g^*(y) = \dfrac{2}{\pi \sqrt{y}\,(1 + y)}$.

The value of the game is $4/\pi$.
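Both the density and the stated value can be checked numerically. In the sketch below, the substitution $x = u^2$ (an implementation choice of this sketch, used only to remove the integrable $1/\sqrt{x}$ singularity) turns $f^*$ into the smooth density $4/(\pi(1+u^2))$ on $[0,1]$, after which Gauss-Legendre quadrature recovers the expected payoff under the equilibrium densities:

```python
import numpy as np

# H(x, y) for the rational pay-off game, and the equilibrium density
# after the substitution x = u^2 (dx = 2u du):
#   f*(x) dx = 2/(pi*sqrt(x)*(1+x)) dx  ->  4/(pi*(1+u^2)) du on [0, 1].
H = lambda x, y: (1 + x) * (1 + y) * (1 - x * y) / (1 + x * y) ** 2
density = lambda u: 4.0 / (np.pi * (1.0 + u**2))

# Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, 1].
nodes, weights = np.polynomial.legendre.leggauss(60)
u = 0.5 * (nodes + 1.0)
w = 0.5 * weights

# The substituted density should integrate to 1 (it is 4/pi * arctan(1)).
mass = np.sum(w * density(u))
assert abs(mass - 1.0) < 1e-10

# Expected payoff under the equilibrium densities = value of the game.
U, V = np.meshgrid(u, u)
W = np.outer(w, w)
value = np.sum(W * H(U**2, V**2) * density(U) * density(V))
assert abs(value - 4.0 / np.pi) < 1e-3
```

The integrand is smooth after the substitution, so a 60-point rule is more than sufficient here.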

Requiring a Cantor distribution

Consider a zero-sum 2-player game between players X and Y, with $C_X = C_Y = [0, 1]$. Denote elements of $C_X$ and $C_Y$ as $x$ and $y$ respectively. Define the utility functions $H(x, y) = u_x(x, y) = -u_y(x, y)$ where

$H(x, y) = \displaystyle\sum_{n=0}^{\infty} \frac{1}{2^n} \left( 2x^n - \left( \left( 1 - \frac{x}{3} \right)^n - \left( \frac{x}{3} \right)^n \right) \right) \left( 2y^n - \left( \left( 1 - \frac{y}{3} \right)^n - \left( \frac{y}{3} \right)^n \right) \right)$.

This game has a unique mixed strategy equilibrium where each player plays a mixed strategy with the Cantor singular function as the cumulative distribution function.[4]
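A Cantor-distributed strategy can be sampled digit-by-digit. The sketch below (digit count and sample size are arbitrary choices) draws each ternary digit as 0 or 2 with equal probability, which is the standard construction of the Cantor distribution, and checks its first two moments (mean 1/2 and second moment 3/8, which follow from the independent-digit construction):

```python
import numpy as np

# Sample the Cantor distribution: x = sum_n c_n / 3^n with each ternary
# digit c_n drawn independently as 0 or 2 (digit 1 never occurs).
rng = np.random.default_rng(42)

def sample_cantor(n_samples, n_digits=30):
    digits = 2 * rng.integers(0, 2, size=(n_samples, n_digits))  # 0 or 2
    powers = 3.0 ** -np.arange(1, n_digits + 1)
    return digits @ powers

xs = sample_cantor(200_000)
# E[x] = sum 1/3^n = 1/2;  Var(x) = sum 1/9^n = 1/8, so E[x^2] = 3/8.
assert abs(xs.mean() - 0.5) < 0.01
assert abs(np.mean(xs**2) - 0.375) < 0.01
```

Truncating at 30 ternary digits introduces error below $3^{-30}$, far smaller than the Monte Carlo noise.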

Further reading

  • H. W. Kuhn and A. W. Tucker, eds. (1950). Contributions to the Theory of Games: Vol. II. Annals of Mathematics Studies 28. Princeton University Press. ISBN 0-691-07935-8.

See also

  • Graph continuous

References

  1. ^ I.L. Glicksberg. A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points. Proceedings of the American Mathematical Society, 3(1):170–174, February 1952.
  2. ^ N. Stein, A. Ozdaglar and P.A. Parrilo. "Separable and Low-Rank Continuous Games". International Journal of Game Theory, 37(4):475–504, December 2008. https://arxiv.org/abs/0707.3462
  3. ^ Irving Leonard Glicksberg & Oliver Alfred Gross (1950). "Notes on Games over the Square." Kuhn, H.W. & Tucker, A.W. eds. Contributions to the Theory of Games: Volume II. Annals of Mathematics Studies 28, p.173–183. Princeton University Press.
  4. ^ Gross, O. (1952). "A rational payoff characterization of the Cantor distribution." Technical Report D-1349, The RAND Corporation.


Source: https://en.wikipedia.org/wiki/Continuous_game