{{Infobox equilibrium
|name = Risk dominance<br />Payoff dominance
|subsetof = [[Nash equilibrium]]
|supersetof =
|discoverer = [[John Harsanyi]], [[Reinhard Selten]]
|usedfor = [[Non-cooperative game]]s
|example = [[Stag hunt]]
}}
'''Risk dominance''' and '''payoff dominance''' are two related refinements of the [[Nash equilibrium]] (NE) [[solution concept]] in [[game theory]], defined by [[John Harsanyi]] and [[Reinhard Selten]]. A Nash equilibrium is considered '''payoff dominant''' if it is [[Pareto efficiency|Pareto superior]] to all other Nash equilibria in the game.{{ref|1|1}} When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered '''risk dominant''' if it has the largest [[basin of attraction]] (i.e. is less risky). This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

The [[payoff matrix]] in Figure 1 provides a simple two-player, two-strategy example of a game with two pure Nash equilibria. The strategy pair (Hunt, Hunt) is payoff dominant since payoffs are higher for both players compared to the other pure NE, (Gather, Gather). On the other hand, (Gather, Gather) risk dominates (Hunt, Hunt) since if uncertainty exists about the other player's action, gathering will provide a higher expected payoff. The game in Figure 1 is a well-known game-theoretic dilemma called [[stag hunt]]. The rationale behind it is that communal action (hunting) yields a higher return if all players combine their skills, but if it is unknown whether the other player helps in hunting, gathering might turn out to be the better individual strategy for food provision, since it does not depend on [[coordination game|coordinating]] with the other player. In addition, gathering alone is preferred to gathering in competition with others. Like the [[Prisoner's dilemma]], it provides a reason why [[collective action]] might fail in the absence of [[credible commitment]]s.
<center>
{|
|-
|
{{Payoff matrix | Name = Fig. 1: [[Stag hunt]] example
| 2L = Hunt | 2R = Gather |
1U = Hunt | UL = 5, 5 | UR = 0, 4 |
1D = Gather | DL = 4, 0 | DR = 2, 2 |
float = none}}
|
{{Payoff matrix | Name = Fig. 2: Generic [[coordination game]]
| 2L = H | 2R = G |
1U = H | UL = A, a | UR = C, b |
1D = G | DL = B, c | DR = D, d |
float = none}}
|}
</center>
== Formal definition ==
The game given in Figure 2 is a [[coordination game]] if the following payoff inequalities hold for player 1 (rows): A > B, D > C, and for player 2 (columns): a > b, d > c. The strategy pairs (H, H) and (G, G) are then the only [[pure strategy|pure]] Nash equilibria. In addition there is a [[mixed strategy|mixed]] Nash equilibrium where player 1 plays H with probability p = (d-c)/(a-b-c+d) and G with probability 1–p; player 2 plays H with probability q = (D-C)/(A-B-C+D) and G with probability 1–q.
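For example, in the stag hunt of Figure 1 (where A = a = 5, B = b = 4, C = c = 0, D = d = 2), each player hunts with probability p = q = (2 − 0)/(5 − 4 − 0 + 2) = 2/3 in the mixed equilibrium; at that probability the opponent is exactly indifferent between hunting (expected payoff 5p) and gathering (expected payoff 4p + 2(1 − p)).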
Strategy pair (H, H) '''payoff dominates''' (G, G) if A ≥ D, a ≥ d, and at least one of the two is a strict inequality: A > D or a > d.
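In Figure 1, for instance, (Hunt, Hunt) payoff dominates (Gather, Gather) since A = a = 5 > 2 = D = d.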
Strategy pair (G, G) '''risk dominates''' (H, H) if the product of the deviation losses is highest for (G, G) (Harsanyi and Selten, 1988, Lemma 5.4.4). In other words, if the following inequality holds: {{nowrap|(C – D)(c – d) ≥ (B – A)(b – a)}}. If the inequality is strict then (G, G) strictly risk dominates (H, H).{{ref|2|2}} (That is, players have more incentive to deviate from (H, H) than from (G, G).)
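In Figure 1, (Gather, Gather) strictly risk dominates (Hunt, Hunt): the deviation-loss product (C – D)(c – d) = (0 – 2)(0 – 2) = 4 exceeds (B – A)(b – a) = (4 – 5)(4 – 5) = 1.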
If the game is symmetric, that is, if A = a, B = b, etc., the inequality allows for a simple interpretation: we assume the players are unsure about which strategy the opponent will pick and assign probabilities to each strategy. If each player assigns probability ½ to each of H and G, then (G, G) risk dominates (H, H) if the expected payoff from playing G is at least as great as the expected payoff from playing H: {{nowrap|½ B + ½ D ≥ ½ A + ½ C}}, or simply {{nowrap|B + D ≥ A + C}}.
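For the symmetric stag hunt of Figure 1 this reads B + D = 4 + 2 = 6 ≥ 5 + 0 = A + C, confirming that (Gather, Gather) risk dominates.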
Another way to find the risk dominant equilibrium is to calculate the risk factor for all equilibria and select the equilibrium with the smallest risk factor. To calculate the risk factor in our 2×2 game, consider the expected payoff to a player if they play H: <math>E[\pi_H]=p A + (1-p) C</math> (where ''p'' is the probability that the other player will play H), and compare it to the expected payoff if they play G: <math>E[\pi_G]=p B + (1-p) D</math>. The value of ''p'' which makes these two expected payoffs equal is the risk factor for the equilibrium (H, H), and <math>1-p</math> is the risk factor for (G, G). Equivalently, the risk factor for (G, G) can be computed directly by repeating the calculation with ''p'' taken as the probability that the other player plays G. The risk factor of an equilibrium is thus the smallest probability with which the opponent must play the corresponding strategy for copying that strategy to give a player at least as high a payoff as playing the other strategy.
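In Figure 1, the indifference condition 5p = 4p + 2(1 − p) gives p = 2/3, so the risk factor of (Hunt, Hunt) is 2/3 and that of (Gather, Gather) is 1/3; (Gather, Gather) has the smaller risk factor and is therefore risk dominant, in agreement with the deviation-loss criterion above.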
== Equilibrium selection ==
A number of evolutionary approaches have established that when played in a large population, players might fail to play the payoff dominant equilibrium strategy and instead end up in the payoff dominated, risk dominant equilibrium. Two separate evolutionary models both support the idea that the risk dominant equilibrium is more likely to occur. The first model, based on [[replicator dynamics]], predicts that a population is more likely to adopt the risk dominant equilibrium than the payoff dominant equilibrium. The second model, based on [[Best response#Best response dynamics|best response]] [[strategy revision]] and [[mutation]], predicts that the risk dominant state is the only [[stochastically stable]] equilibrium. Both models assume that multiple two-player games are played in a population of N players. The players are matched randomly with opponents, with each of the N−1 other players equally likely to be drawn. The players start with a pure strategy, G or H, and play this strategy against their opponent. In replicator dynamics, the population game is repeated in sequential generations where subpopulations change based on the success of their chosen strategies. In best response dynamics, players update their strategies to improve expected payoffs in the subsequent generations. The insight of Kandori, Mailath & Rob (1993) and Young (1993) was that if the rule to update one's strategy allows for mutation{{ref|4|4}}, and the probability of mutation vanishes, i.e. asymptotically approaches zero over time, the likelihood that the risk dominant equilibrium is reached goes to one, even if it is payoff dominated.{{ref|3|3}}
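The logic of the second model can be illustrated with a small simulation of best response dynamics with mutation in the stag hunt of Figure 1. The sketch below is not taken from the cited papers; the population size, mutation rate, run length and tie-breaking rule are illustrative assumptions. Starting from the payoff dominant all-Hunt state, occasional mutations eventually push enough players towards Gather that best responders follow, and the population then stays near the risk dominant all-Gather state for most of the run.

<syntaxhighlight lang="python">
import random

N = 10             # population size (illustrative assumption)
EPSILON = 0.1      # mutation probability per revision (illustrative assumption)
PERIODS = 100_000  # number of revision opportunities to simulate

# Stag hunt payoffs from Fig. 1: (own strategy, opponent strategy) -> payoff
PAYOFF = {('H', 'H'): 5, ('H', 'G'): 0, ('G', 'H'): 4, ('G', 'G'): 2}


def best_response(hunters, opponents):
    """Best response against one opponent drawn uniformly at random from a
    pool containing `hunters` H-players out of `opponents` players in total.
    Ties are broken in favour of G (an assumption of this sketch)."""
    p = hunters / opponents
    payoff_h = p * PAYOFF[('H', 'H')] + (1 - p) * PAYOFF[('H', 'G')]
    payoff_g = p * PAYOFF[('G', 'H')] + (1 - p) * PAYOFF[('G', 'G')]
    return 'H' if payoff_h > payoff_g else 'G'


def simulate(seed=0):
    rng = random.Random(seed)
    pop = ['H'] * N            # start at the payoff dominant all-Hunt state
    periods_mostly_g = 0
    for _ in range(PERIODS):
        i = rng.randrange(N)            # one player gets a revision opportunity
        if rng.random() < EPSILON:      # mutation: adopt a random strategy
            pop[i] = rng.choice(['H', 'G'])
        else:                           # otherwise play a myopic best response
            hunters_among_others = pop.count('H') - (1 if pop[i] == 'H' else 0)
            pop[i] = best_response(hunters_among_others, N - 1)
        if pop.count('G') > N // 2:     # track time with a Gather majority
            periods_mostly_g += 1
    return periods_mostly_g / PERIODS


if __name__ == '__main__':
    # Typically prints a value well above one half: the population escapes
    # the all-Hunt state and then spends most of its time near all-Gather.
    print(f"Share of periods with a Gather majority: {simulate():.2f}")
</syntaxhighlight>

As the mutation rate is driven towards zero, the share of time spent away from the all-Gather state shrinks further, which is the finite-population intuition behind the stochastic stability results of Kandori, Mailath & Rob (1993) and Young (1993).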
== Notes ==
* {{note|1|1}} A single Nash equilibrium is trivially payoff and risk dominant if it is the only NE in the game.
* {{note|2|2}} Similar distinctions between strict and weak exist for most definitions here, but are not denoted explicitly unless necessary.
* {{note|3|3}} Harsanyi and Selten (1988) propose that the payoff dominant equilibrium is the rational choice in the stag hunt game; however, Harsanyi (1995) retracted this conclusion and instead took risk dominance as the relevant selection criterion.
== References ==
* Samuel Bowles: ''Microeconomics: Behavior, Institutions, and Evolution'', Princeton University Press, pp. 45–46 (2004) ISBN 0-691-09163-3
* Drew Fudenberg and David K. Levine: ''The Theory of Learning in Games'', MIT Press, p. 27 (1999) ISBN 0-262-06194-5
* John C. Harsanyi: "A New Theory of Equilibrium Selection for Games with Complete Information", ''Games and Economic Behavior'' 8, pp. 91–122 (1995)
* John C. Harsanyi and Reinhard Selten: ''A General Theory of Equilibrium Selection in Games'', MIT Press (1988) ISBN 0-262-08173-3
* Michihiro Kandori, George J. Mailath & [[Rafael Rob]]: "Learning, Mutation, and Long-run Equilibria in Games", ''Econometrica'' 61, pp. 29–56 (1993) [http://econpapers.repec.org/article/ecmemetrp/v_3A61_3Ay_3A1993_3Ai_3A1_3Ap_3A29-56.htm Abstract]
* Roger B. Myerson: ''Game Theory, Analysis of Conflict'', Harvard University Press, pp. 118–119 (1991) ISBN 0-674-34115-5
* Larry Samuelson: ''Evolutionary Games and Equilibrium Selection'', MIT Press (1997) ISBN 0-262-19382-5
* H. Peyton Young: "The Evolution of Conventions", ''Econometrica'' 61, pp. 57–84 (1993) [http://econpapers.repec.org/article/ecmemetrp/v_3a61_3ay_3a1993_3ai_3a1_3ap_3a57-84.htm Abstract]
* H. Peyton Young: ''Individual Strategy and Social Structure'', Princeton University Press (1998) ISBN 0-691-08687-7
{{Game theory}}

[[Category:Game theory]]
[[Category:Evolutionary game theory]]