Iterative removal of strictly dominated strategies, minimax strategies and the minimax theorem for zero-sum games, correlated equilibria. Perfect-information games: trees, players assigned to nodes, payoffs, backward induction, subgame perfect equilibrium, introduction to imperfect-information games, mixed versus behavioral strategies. Repeated prisoner's dilemma, finite and infinite repeated games, limited-average versus future-discounted reward, folk theorems, stochastic games and learning.



Commonly (though not necessarily), extensive form games represent dynamic games, where players choose their actions in a determined temporal order.

Strategic form games represent static games, where players choose their actions simultaneously. As an example, figure 1 is a possible representation of the stag-hunt scenario described in the introduction. The 2-by-2 matrix of figure 1 specifies two players, Row and Col, who each have two pure strategies: R1 and C1 (go deer hunting) and R2 and C2 (go hare hunting). Each player evaluates the consequences of each strategy profile. In the stag-hunt scenario, each player ranks a successful joint deer hunt highest and a failed solo deer hunt lowest, with hare hunting in between. This ranking can be quite simply represented by a numerical function u, according to two principles: u assigns a higher number to one consequence than to another if and only if the player prefers the first to the second, and it assigns the same number to two consequences if and only if the player is indifferent between them.

A function that meets these two principles (and some further requirements that are not relevant here) is called an ordinal utility function. It is now easy to see that the numbers of the game in figure 1 represent the ranking of figure 2. Note, however, that the matrix of figure 1 is not the only way to represent the stag-hunt game. Because the utilities only represent rankings, there are many ways in which one can represent the ranking of figure 2.

For example, the games in figures 3a-c are identical to the game in figure 1. In figure 3a, all numbers are negative, but they retain the same ranking of consequences. In figure 3c, although the numbers are very different for the two players, they retain the same ranking as in figure 1. Note that in the stag-hunt game, agents do not gain if others lose. Everybody is better off hunting deer, and losses arise from lack of coordination. Games with this property are therefore called coordination games.
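Since ordinal utilities carry only ranking information, the equivalence of the matrices in figures 1 and 3a–c can be checked mechanically. A minimal sketch in Python, using hypothetical stag-hunt numbers (the article's actual figures are not reproduced here, so the values below are illustrative assumptions):

```python
# Check that two payoff assignments induce the same ordinal ranking of
# outcomes for a player. Both the figure-1 and the figure-3a numbers below
# are hypothetical stand-ins for the article's (unreproduced) matrices.

def same_ranking(u, v):
    """True if utility lists u and v rank the outcomes identically,
    including ties."""
    n = len(u)
    return all((u[i] > u[j]) == (v[i] > v[j])
               for i in range(n) for j in range(n))

# Row's payoffs over the four profiles (R1C1, R1C2, R2C1, R2C2):
fig1_row  = [2, 0, 1, 1]      # hypothetical figure-1 values
fig3a_row = [-1, -5, -3, -3]  # all negative, same ranking (cf. figure 3a)

print(same_ranking(fig1_row, fig3a_row))  # True: identical ordinal ranking
```

Any strictly increasing transformation of the utilities passes this check, which is exactly why figures 3a–c represent the same ordinal game as figure 1.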

In other games, by contrast, the players' interests are strictly opposed: one player's gain is the other player's loss. Most social games are of this sort: in chess, for example, the idea of coordination is wholly misplaced. Such games are called zero-sum games. Today, many of the games discussed are of a third kind: they combine coordination aspects with conflictual aspects, so that players may at times gain from coordinating, but at other times from competing with the other players.

Players can create further strategies by randomizing over pure strategies. They can choose a randomization device (like a die) and determine, for each chance result, which of their pure strategies they will play. For example, Row could create a new strategy that goes as follows: toss a fair coin; play R1 if it lands heads, and R2 if tails. As there are no limits to the number of possible randomization devices, each player can create an infinite number of mixed strategies for herself.

The payoff of a mixed strategy is an expected value: it is computed as the weighted average of the pure-strategy payoffs, where the weights are the probabilities with which each pure strategy is played. For the payoffs of mixed strategies to be computable, the utility function has to carry cardinal information. It now also matters how much a player prefers a consequence X to a consequence Y, in comparison to another pair of consequences, say X and Z.
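The weighted-average computation can be sketched as follows; the payoff numbers are again hypothetical stand-ins for the article's figures:

```python
# Expected payoff of a mixed-strategy profile in a 2x2 game, computed as the
# probability-weighted average of the pure-strategy payoffs.

def expected_payoff(payoffs, p_row, p_col):
    """payoffs[i][j] is Row's payoff when Row plays strategy i and Col plays
    strategy j; p_row and p_col are probability distributions over the
    players' pure strategies."""
    return sum(p_row[i] * p_col[j] * payoffs[i][j]
               for i in range(len(p_row)) for j in range(len(p_col)))

row_payoffs = [[2, 0],
               [1, 1]]    # hypothetical stag-hunt payoffs for Row
coin = [0.5, 0.5]         # Row's fair-coin mixture over R1 and R2

print(expected_payoff(row_payoffs, coin, [1.0, 0.0]))  # 1.5 against pure C1
```

Note that the result depends on the actual magnitudes of the payoffs, not just their order, which is why cardinal utilities are needed here.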

Because mixed strategies are a very important concept in game theory, it is generally assumed that the utility functions characterizing the payoffs are cardinal. However, it is important to note that even cardinal utilities do not allow interpersonal comparisons; in fact, such interpersonal comparisons play no role in standard game theory at all. Given such a highly abstract representation of an interactive situation, the objective of game theory is to determine the outcome or possible outcomes of each game, given certain assumptions about the players.

To do this is to solve a game. Various solution concepts have been proposed. The conceptually most straightforward solution concept is the elimination of dominated strategies. Take the game of figure 4, which, note, differs from the stag-hunt game in its payoffs.

In this game, no matter what Col chooses, playing R2 gives Row a higher payoff. If Col plays C1, Row is better off playing R2, because she can obtain 3 utils instead of 2. If Col plays C2, Row is also better off playing R2, because she can obtain 1 util instead of none. Similarly for Col: no matter what Row chooses, playing C2 gives her a higher payoff.

This is what is meant by saying that R1 and C1 are strictly dominated strategies. Solving a game by eliminating all dominated strategies is based on the assumption that players do and should choose those strategies that are best for them, in this very straightforward sense. In cases like figure 4, where each player has only one non-dominated strategy, the elimination of dominated strategies is a straightforward and plausible solution concept. However, there are many games that do not have any dominated strategies, for example the stag-hunt game or the zero-sum game of figure 5.
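The elimination procedure can be sketched directly from the payoffs the text reports for figure 4 (against C1, Row gets 2 from R1 or 3 from R2; against C2, 0 or 1; and symmetrically for Col):

```python
# Iterated elimination of strictly dominated pure strategies. A strategy is
# strictly dominated if some other strategy yields a strictly higher payoff
# against every surviving strategy of the opponent.

def eliminate_dominated(row_u, col_u):
    """Return the surviving (row, col) strategy indices after iteratively
    removing strictly dominated pure strategies."""
    rows = list(range(len(row_u)))
    cols = list(range(len(row_u[0])))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            if any(all(row_u[r2][c] > row_u[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:
            if any(all(col_u[r][c2] > col_u[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

row_u = [[2, 0], [3, 1]]   # Row's payoffs as reported for figure 4
col_u = [[2, 3], [0, 1]]   # Col's payoffs (figure 4, by symmetry)

print(eliminate_dominated(row_u, col_u))  # ([1], [1]): only R2 and C2 survive
```

Run on the stag-hunt payoffs, the same routine removes nothing, illustrating why this solution concept is silent on games without dominated strategies.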


Von Neumann and Morgenstern argued for the Minimax Rule as the solution concept for zero-sum games. The reasoning runs roughly as follows: "Her gain is my loss, so I had better look at how much I minimally get out of each option and try to make this amount as large as possible. If this is reasonable, then my adversary will do the same." The minimax solution therefore recommends that Row choose the strategy with the highest minimum, while Col chooses the strategy with the lowest maximum. Thus, in figure 5, Row chooses R2, as it has the highest minimal payoff for her, and Col chooses C2, as it has the lowest maximal payoff for Row, and hence the highest minimal payoff for Col.
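This reasoning can be sketched in a few lines; since figure 5 is not reproduced in the text, the matrix below is a hypothetical stand-in chosen so that, as in the text, Row's maximin strategy is R2 and Col's minimax strategy is C2:

```python
# Maximin reasoning in a zero-sum game: Row maximizes her row minimum, and
# Col (whose payoff is the negative of Row's) minimizes the column maximum.
# The payoff matrix is a hypothetical stand-in for figure 5.

row_payoffs = [[2, 1],
               [3, 2]]   # Row's payoffs; Col receives the negatives

row_mins = [min(row) for row in row_payoffs]
maximin_row = row_mins.index(max(row_mins))    # Row's maximin strategy

col_maxes = [max(row_payoffs[r][c] for r in range(2)) for c in range(2)]
minimax_col = col_maxes.index(min(col_maxes))  # Col's minimax strategy

print(maximin_row, minimax_col)                # 1 1 -> Row plays R2, Col C2
print(max(row_mins) == min(col_maxes))         # True: a saddle point exists
```

In this example the highest row minimum equals the lowest column maximum, so the pure-strategy profile (R2, C2) is a saddle point; by the minimax theorem, equality always holds once mixed strategies are allowed.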

Unfortunately, there are many non-zero-sum games without dominated strategies, for example the game of figure 6. For these kinds of games, the Nash equilibrium solution concept offers greater versatility than dominance or maximin (as it turns out, all maximin solutions are also Nash equilibria).

In contrast to the elimination of dominated strategies, the Nash equilibrium applies to strategy profiles, not to individual strategies. Roughly, a strategy profile is in Nash equilibrium if none of the players can do better by unilaterally changing her strategy. Take the example of matrix 6. Consider the strategy profile (R1, C1): at least one player (in this case, both) can obtain a higher payoff by unilaterally deviating from it, so (R1, C1) is not in equilibrium. Only (R1, C2) is a pure-strategy Nash equilibrium: neither player is better off by unilaterally deviating from it. There are also games without any pure-strategy Nash equilibrium, as matrix 7 shows.
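The equilibrium check can be sketched as a brute-force search over profiles. The payoffs below are hypothetical numbers chosen so that, as in matrix 6, (R1, C2) is the unique pure-strategy equilibrium; the actual matrix 6 is not reproduced in the text:

```python
# Brute-force search for pure-strategy Nash equilibria: a profile (r, c) is
# an equilibrium if neither player gains by unilaterally deviating from it.

def pure_nash_equilibria(row_u, col_u):
    """row_u[r][c] / col_u[r][c] are Row's and Col's payoffs at (r, c)."""
    n, m = len(row_u), len(row_u[0])
    eqs = []
    for r in range(n):
        for c in range(m):
            row_ok = all(row_u[r2][c] <= row_u[r][c] for r2 in range(n))
            col_ok = all(col_u[r][c2] <= col_u[r][c] for c2 in range(m))
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

row_u = [[0, 2], [1, 0]]   # hypothetical Row payoffs for matrix 6
col_u = [[0, 2], [1, 2]]   # hypothetical Col payoffs for matrix 6

print(pure_nash_equilibria(row_u, col_u))  # [(0, 1)] -> only (R1, C2)
```

Running the same function on matching-pennies payoffs (Row wins on a match, Col on a mismatch) returns an empty list, mirroring the point that some games, like matrix 7, have no pure-strategy Nash equilibrium.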