In this chapter, we investigate how parties may develop strategies and orient the dynamics of a conflict to obtain their goals. In order to do so, we will make use of the technical framework of Game Theory. Our purpose is not only to understand how parties may interact in a conflictual situation but also to identify the conditions necessary to render this interaction positive and cooperative.

Interaction and Strategies

Generally speaking, a strategy is a plan to cope with a certain situation. In a conflict, an actor’s strategy dictates their choices as to what actions to undertake. Lulofs and Cahn, for instance, distinguish strategies from tactics, where a tactic is “a specific observable action that moves a conflict in a particular direction in line with the strategy” (Lulofs & Cahn, 2000, p. 100). Thinking of actions in terms of tactics encourages seeing them as part of the strategy that parties adopt inside a conflictual situation.

To understand the notion of a strategy in the dynamics of a conflict, it is important to distinguish between two sorts of strategic reasoning. Consider the following situation first. You are a skipper, and you are planning to sail across the Atlantic Ocean from Europe to the US. There are two main routes for the crossing: a northern, stormier one from Ireland to Canada, and a more southern, relaxed one from Portugal to Florida. Which one would you choose? It obviously depends on whether you prefer speed or safety, and you must seriously evaluate your boat’s condition as well as your own abilities; besides, you must thoroughly examine the weather forecasts and consider what possible accidents might occur: storms might damage your sails, light winds might force you to turn on the engine, and so on. You must be prepared to act and deal with these eventualities: bring aboard a spare set of sails, store enough fuel, and so on.

Consider now a different situation. You are playing chess against someone else, and you must plan your moves to checkmate your opponent. In order to do that, you must take into account several different scenarios depending on the moves your opponent might decide to make. Without question, your opponent’s moves are not random: they react to what you do; therefore, their moves depend on yours. Among the moves you are allowed to play at every turn, you will choose the one you think would produce the best outcome for you depending on what your opponent might choose to do; and your opponent will act in kind to obtain the best outcome for themselves.

What is the difference between these two situations and the strategies that allow one to cope with them? In both cases, an agent is confronted with the task of determining a course of action on the basis of the likelihood of certain events and of their preferences with respect to the outcomes of those actions in those circumstances. In the former situation, the events are natural incidents that could happen independently of what the agent decides to do. In the latter situation, instead, the events are the actions of a second agent who has their own strategy and takes into account what the other might decide to do. The sort of reasoning to be adopted in the two cases, in fact, is quite different. In the former, the strategy depends on the probability of uncertain incidents; in this sense, this is a type of parametric reasoning. In the latter, the strategy depends on the outcomes of the interaction of the choices of the two agents; this is, instead, strategic reasoning. Usually, the parties to a conflict oppose one another in a context in which several events may occur, and they must therefore adopt both parametric and strategic reasoning. In order to focus on the interaction of the agents’ strategies, we will however allow ourselves some idealization: we will bracket any parametric reasoning the parties involved in a conflict may have to make and consider their strategies as exclusively of the latter sort. Specifically, we will focus on the fact that in making their decisions the agents must consider that their actions are interdependent.

But what is the strategic reasoning that allows the parties to determine their course of action? When we dealt with the notion of mutual knowledge in Chap. 4, we discussed a situation that requires strategic reasoning: we called it “the two generals’ paradox”. Two generals, A and B, of two allied armies have to coordinate an attack against an enemy army that is bigger than either of theirs separately. If the allied armies attack together, they may suffer some losses but they will win; however, if only one of them attacks, in all probability it will be crushed. Now consider, for instance, what general A’s reasoning could be. If A believes that B will attack, A should attack as well in order to ensure victory. But B might prefer not to put themselves at risk and let A engage the enemy alone instead; thus, A might think that if B thinks that A will attack, then B will not attack, in which case A should not attack either. What is the most rational choice for A, then? Is mutual knowledge the problem again? Would communication between the two generals make any difference? These are just the sort of questions investigated in Game Theory.

Game Theory

Game Theory is the study of decision-making in multi-agent interaction situations; in other words, it studies the way in which agents take rational decisions on the basis of their preferences in situations in which the outcomes of their choices are influenced by the choices of other agents. Game Theory was originally defined as a formal theory by John von Neumann and Oskar Morgenstern to model economic behavior (von Neumann & Morgenstern, 1944), but it has been developed since then and applied in innumerable fields: from warfare to gambling, from psychology to biology, from logic to computer science and artificial intelligence. According to the economist and Nobel laureate Roger Myerson, “Game Theory can be defined as the study of mathematical models of conflict and cooperation between intelligent rational decision-makers” (Myerson, 1991, p. 1). In this sense, Game Theory may help us understand the reasoning that determines the strategies of the agents in a conflictual situation (Rapoport, 1974). The general lines of Game Theory will be introduced in this section for illustrative purposes only.Footnote 1

In Game Theory, a conflictual situation is modeled as a game, where obviously a game is a technical concept. In order to define a game, a list of players must be specified along with their preferences, the information available to them, and the actions they can perform; the possible outcomes of the game must also be determined. Many different sorts of games can be classified by varying these components, and we will consider some of them below.

In basically every game, however, the possibility of modeling strategic reasoning in Game Theory is grounded in the assumption that players are perfectly rational. This assumption requires some qualification, since rationality is clearly an extremely delicate and complex concept, which can be analyzed in countless ways. Game Theory employs a very specific “economic” notion of rationality: in Game Theory, a rational agent is an agent who chooses a strategy to maximize their utility. An agent’s utility is a measure of their preferences for the possible outcomes of a game. To illustrate this idea, consider someone who buys a ticket at a charity raffle: there are several prizes, and they have preferences about what they would like to win. Their preferences can be represented as the utility that the possible outcomes of the raffle have for them: from the minimal utility of the case in which they do not win anything, to the maximal utility of the case in which they win, say, a bike.Footnote 2 The utility of an outcome for an agent can simply be formalized by assigning to it a payoff value that represents how desirable the outcome is for the agent. Notice that in this sense utility is a subjective measure.

In Game Theory, a strategy is a plan of the actions that an agent might decide to undertake at all the points at which the game requires them to choose what to do. If the game consists of only one decision point, a strategy is just an action. More often, games require players to make decisions at several points. In a chess game, for instance, a player must choose a move at every turn: their strategy will be the list of all the actions they decide to make. Games in which players must choose their strategies individually are called non-cooperative games, whereas games where players are allowed or forced to coordinate their strategies are called cooperative games. We will begin by focusing on the former. Let us review some examples in order to illustrate these concepts.

Consider, again, the conflict mentioned in Chap. 9 between a husband and wife arguing about their vacation destination. Let us try to model it as a game. There are obviously two players in this game: the husband and the wife. Each player has only one decision to make, and they must make it simultaneously; therefore, each has only two strategies: going to the mountains and going to the sea. Combinatorics tells us that there are four possible outcomes for this game: the husband goes to the mountains and the wife goes to the sea, the husband goes to the sea and the wife goes to the mountains, they both go to the mountains, and they both go to the sea. Now suppose that the husband wants to go to the mountains and the wife to the sea. These are the preferences of the players, and they define their respective payoffs for the possible outcomes of the game. Let us do this by assigning an ordinal number to each outcome, starting with the least desirable one; in fact, this amounts to defining a utility function for each of the players. Suppose the utility function for the wife can be defined as follows:

  • Wife:

    Wife goes to the mountains and husband goes to the sea: 0

    Wife goes to the sea and husband goes to the mountains: 1

    Both wife and husband go to the mountains: 2

    Both wife and husband go to the sea: 3

The utility function for the husband, instead, is the following:

  • Husband:

    Wife goes to the mountains and husband goes to the sea: 0

    Wife goes to the sea and husband goes to the mountains: 1

    Both wife and husband go to the sea: 2

    Both wife and husband go to the mountains: 3

The payoffs can be represented in a matrix like the one depicted in Fig. 10.1.

Fig. 10.1

                        Husband: Sea    Husband: Mountains
    Wife: Sea               3, 2               1, 1
    Wife: Mountains         0, 0               2, 3

Battle of sexes

Each cell of the matrix represents a different possible outcome of the game, resulting from the combination of the strategies of the husband (represented in the columns) and those of the wife (represented in the rows). The two values in each cell of the matrix represent the payoffs of the outcome for the players: the value on the left is the payoff for the wife, and the value on the right is the payoff for the husband.
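To make these objects concrete, here is a minimal sketch of how the matrix in Fig. 10.1 might be encoded. The code examples in this chapter are Python sketches added for illustration; the names and the dictionary representation are our own assumptions, not part of any standard library:

    # A hypothetical encoding of the battle of the sexes (Fig. 10.1).
    # Keys are (wife_strategy, husband_strategy) pairs;
    # values are (wife_payoff, husband_payoff) pairs.
    BATTLE_OF_SEXES = {
        ("sea", "sea"): (3, 2),
        ("sea", "mountains"): (1, 1),
        ("mountains", "sea"): (0, 0),
        ("mountains", "mountains"): (2, 3),
    }

    # Reading off one outcome: each spouse goes to their preferred place alone.
    wife_payoff, husband_payoff = BATTLE_OF_SEXES[("sea", "mountains")]
    print(wife_payoff, husband_payoff)  # 1 1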

The situation described in this game is often used for illustrative purposes in the literature on Game Theory and is known as “the battle of the sexes”. Notice that the rankings of the payoffs represent the fact that the choices of one actor are influenced by the choices of the other: for instance, the wife would like to go to the sea, but she prefers not to go to the sea if the husband decides to go to the mountains. Compare now the matrix in Fig. 10.1 with the matrix in Fig. 10.2, which represents another distribution of the payoffs for the husband and wife. In this case, the utility functions of the players make explicit that their choices have no influence on one another: the wife will go to the sea, and the husband will go to the mountains, regardless of what the spouse chooses to do; in fact, we do not really need Game Theory to model this situation.

Fig. 10.2

                        Husband: Sea    Husband: Mountains
    Wife: Sea               1, 0               1, 1
    Wife: Mountains         0, 0               0, 1

Divergence between the sexes

Notice also that the utility functions of the battle of the sexes represented in Fig. 10.1 determine a distribution of the payoffs in such a way that certain outcomes are preferable to both players; for instance, it is preferable for the couple to go together either to the sea (top left cell) or to the mountains (bottom right cell), rather than separately to different destinations (either the top right or the bottom left cell). A game in which the sum of the payoffs is different for different outcomes is called a variable-sum game. By contrast, a game in which the sum of the payoffs is the same for all outcomes is called a constant-sum game or, more often, a zero-sum game. In every outcome of a zero-sum game, what is gained by one player is lost by the others; therefore, if losses are represented by a negative payoff, then their sum is constantly zero. Zero-sum games model conflictual situations of pure competition. A paradigmatic example of a zero-sum game is rock-paper-scissors, whose matrix is depicted in Fig. 10.3.

Fig. 10.3

                        B: Rock    B: Paper    B: Scissors
    A: Rock              0, 0      -1, 1        1, -1
    A: Paper             1, -1      0, 0       -1, 1
    A: Scissors         -1, 1       1, -1       0, 0

Rock-paper-scissors
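Encoded in the same illustrative style as above, the defining property of a zero-sum game, namely that the two payoffs in every cell cancel out, can be checked mechanically:

    # Rock-paper-scissors (Fig. 10.3), with player A as the row player.
    RPS = {
        ("rock", "rock"): (0, 0),       ("rock", "paper"): (-1, 1),     ("rock", "scissors"): (1, -1),
        ("paper", "rock"): (1, -1),     ("paper", "paper"): (0, 0),     ("paper", "scissors"): (-1, 1),
        ("scissors", "rock"): (-1, 1),  ("scissors", "paper"): (1, -1), ("scissors", "scissors"): (0, 0),
    }

    # In a zero-sum game, one player's gain is exactly the other's loss.
    assert all(a + b == 0 for a, b in RPS.values())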

Let us now ask what strategy a rational player of the battle of the sexes should choose. According to the distribution of the payoffs, the husband should decide to go to the mountains if the wife decides to go to the mountains, and he should decide to go to the sea if the wife decides to go to the sea. Similarly, the wife should decide to go to the mountains if the husband decides to go to the mountains, and she should decide to go to the sea if the husband decides to go to the sea. In Game Theory, a strategy that leads to a higher payoff than another is said to dominate the other. It might be useful to represent the relations of dominance inside the matrix of a game by means of arrows connecting the cells. In Fig. 10.4, for instance, horizontal arrows represent the wife’s dominant strategies, while vertical arrows represent those of the husband.

Fig. 10.4

[The payoff matrix of Fig. 10.1, with arrows connecting each cell to the cell a player would prefer to move to given the other’s choice.]

Battle of sexes

There is clearly no strictly dominant strategy for any of the players in this game, that is, no strategy worth adopting for a player independently of what the others decide to do. In games such as this, each player must try to “read the mind” of the others to understand what their strategies could be and make their own choices accordingly.

We have said that Game Theory is based on a rationality assumption according to which players make their choices in order to maximize their utilities; hence, Game Theory invites us to think of a game as a utility maximization problem that each single player must solve. In this sense, the strategies that allow all players to obtain the best possible outcomes for themselves could be considered the solutions of the game. What counts as a solution, naturally, depends on what is to be considered the best possible outcome for all the players. The main contribution to Game Theory by John Nash, which extended and generalized the original work of von Neumann and Morgenstern, was a proposal for the formal analysis of this idea (Nash, 1950a, 1950b, 1951, 1953). According to Nash, the maximal utility for the players could be obtained in terms of the maximal consistency or “equilibrium” between their strategies. An analogy might be useful to shed some light on this concept. In physics, equilibrium is the balance of opposing forces: a system is said to be in equilibrium when it keeps its motion and internal energy constant over time unless some external force intervenes. Likewise, in Game Theory a game is in equilibrium when the strategies adopted by the players balance one another, meaning they are the best set of strategies that each player can adopt given the strategies of the others, so that none of them would change their strategy unless some other player changes theirs. More explicitly, we can say that a set of strategies is a Nash Equilibrium if and only if no player could improve their payoff by changing their strategy, given the strategies of all the other players in the game.

As is easy to demonstrate, the battle of the sexes game has two Nash Equilibria: both husband and wife decide to go to the mountains, or they both decide to go to the sea. In this sense, it has two solutions: two sets of strategies that lead to outcomes that would satisfy each player given the choices of the others. There are games which do not have any pure Nash Equilibrium. For instance, the rock-paper-scissors game seems to have no solution because for every outcome, the losing player always has a better strategy to choose: if they played rock and the opponent won by playing paper, they could have had a better payoff by playing scissors, and so on. Nash (1950a) proved, however, that a Nash Equilibrium always exists in games with a finite number of players and a finite number of pure strategies, provided mixed strategies are allowed, where a mixed strategy is an assignment of probabilities to the different pure strategies.Footnote 3
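The definition of a pure-strategy Nash Equilibrium translates almost literally into code. The following sketch, reusing the illustrative encodings above, checks every cell for unilateral improvements:

    def pure_nash_equilibria(game):
        """Return the pure-strategy Nash Equilibria of a two-player game.

        `game` maps (row_strategy, column_strategy) to (row_payoff, column_payoff).
        A cell is an equilibrium if neither player gains by deviating unilaterally.
        """
        rows = {r for r, _ in game}
        cols = {c for _, c in game}
        equilibria = []
        for (r, c) in game:
            row_is_best = all(game[(r, c)][0] >= game[(r2, c)][0] for r2 in rows)
            col_is_best = all(game[(r, c)][1] >= game[(r, c2)][1] for c2 in cols)
            if row_is_best and col_is_best:
                equilibria.append((r, c))
        return equilibria

    print(pure_nash_equilibria(BATTLE_OF_SEXES))  # [('sea', 'sea'), ('mountains', 'mountains')]
    print(pure_nash_equilibria(RPS))              # []: no pure equilibrium exists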

Now, the concept of a Nash Equilibrium as the solution of a game provides an answer to the question regarding what sort of reasoning allows the parties in a conflict to determine their strategies: the parties should define their respective utility functions and look for a Nash Equilibrium. Let us try to apply this idea to the problem of coordination of the two allied generals. In this situation, generals A and B have two choices: attack or retreat. The game therefore has four possible outcomes. The utility functions of the two generals mirror each other, ranking these outcomes from the least to the most preferred: both prefer to attack when the ally chooses to attack, but if the ally retreats, they prefer to pull back as well. A matrix for the game of the two generals is depicted in Fig. 10.5.

Fig. 10.5

                            General B: Attack    General B: Retreat
    General A: Attack            3, 3                  0, 2
    General A: Retreat           2, 0                  1, 1

The two generals

It is easy to see that neither general has a strictly dominant strategy in this case. Here again, there are two Nash Equilibria, corresponding to the expected sets of strategies for the generals: attack if the other attacks, retreat if the other retreats.

The two generals’ conflictual situation is in fact a well-known case study in Game Theory. The paradigmatic illustration of such a situation is traced back to an example considered by Rousseau in the second part of the Discourse on the Origin of Inequality:

If it was a matter of hunting a deer, everyone well realized that he must remain faithful to his post; but if a hare happened to pass within reach of one of them, we cannot doubt that he would have gone off in pursuit of it without scruple. (Pl., III, pp. 165–167)

Suppose that each hunter has only two options: either hunting the stag or hunting the hare. On the one hand, the stag is a valuable prey, but the hunters can get the stag only if they hunt it together. The hare, on the other hand, is less valuable but each hunter could get it by themselves. This is the structure of a Stag Hunt game (Fig. 10.6).

Fig. 10.6

                           Hunter B: Stag    Hunter B: Hare
    Hunter A: Stag              3, 3              0, 2
    Hunter A: Hare              2, 0              1, 1

The Stag Hunt

Since there are two solutions to this sort of game, the players must understand what the others will do in order to establish what it is most convenient for them to do.

Game Theory makes explicit the strategic reasoning of the parties of a conflict in terms of a formal model. Notice how the game-theoretic analysis has confirmed the intuition we had regarding what agents should do in the different games considered in this section. This, however, is not always the case.

The Prisoner’s Dilemma: Rationality vs. Cooperation

Let us now discuss another strategic problem considered a paradigmatic illustration of how Game Theory applies to the study of conflicts (cf. Rapoport & Chammah, 1965). The conflictual situation in question is often described along the lines of the following anecdote attributed to the mathematician Albert W. Tucker.

Two bank robbers have been arrested by the police for a minor crime, which carries a penalty of two years in prison. The police know that the two have also robbed a bank but have enough evidence to prosecute them for the minor crime only. The officers therefore make an offer to each prisoner separately, asking each of them to confess to the robbery and turn the other one in. If one confesses and the other does not, the first goes free and the other receives the full sentence for the bank robbery, that is, ten years in prison. If they both plead guilty, they both receive a reduced sentence of five years; if neither confesses, each serves the two years for the minor crime.

This is the so-called Prisoner’s Dilemma. It is a decision problem that can be modeled as a variable-sum game for two players: prisoner A and prisoner B. They have only two strategies available: either cooperate with one another and refuse to confess, or not cooperate and confess. The utilities for the two players seem clear: the less time spent in prison, the better the outcome. As always, their payoffs can conveniently be represented in a matrix such as the one depicted in Fig. 10.7, where the payoffs are the years in prison counted negatively.

Fig. 10.7

                             Prisoner B: Refuse    Prisoner B: Confess
    Prisoner A: Refuse            -2, -2                -10, 0
    Prisoner A: Confess            0, -10                -5, -5

The Prisoner’s Dilemma

How should a prisoner reason? If the other prisoner confesses, then they should confess to obtain the reduced sentence. But if the other prisoner does not confess, they have even more reason to confess, because, in that case, they could go free. Given that the distribution of the payoffs is symmetric, the same reasoning works for both prisoners; hence, confession is the strictly dominant strategy for both players in the Prisoner’s Dilemma. Regardless of what the other decides to do, a prisoner’s best choice to maximize their utility is to confess; in fact, as is easy to demonstrate, the case where both confess is the only Nash Equilibrium of the game, and in this sense, as we have seen, it can be considered the only rational solution of the conflict. But is that actually the case? If we look at the matrix in Fig. 10.7, there clearly seems to be an outcome in which both players would be better off. In fact, if they both refuse to confess, both will be convicted for only two years, rather than five. Would this not be a better option than the one recommended by the Nash Equilibrium?
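Both claims, that confessing strictly dominates refusing and that mutual confession is the only pure equilibrium, can be checked with the sketches introduced earlier (payoffs as in Fig. 10.7, with years in prison counted negatively):

    # The Prisoner's Dilemma of Fig. 10.7.
    PRISONERS_DILEMMA = {
        ("refuse", "refuse"): (-2, -2),
        ("refuse", "confess"): (-10, 0),
        ("confess", "refuse"): (0, -10),
        ("confess", "confess"): (-5, -5),
    }

    # "Confess" strictly dominates "refuse": whatever the other prisoner does,
    # confessing yields prisoner A a strictly higher payoff.
    for other in ("refuse", "confess"):
        assert PRISONERS_DILEMMA[("confess", other)][0] > PRISONERS_DILEMMA[("refuse", other)][0]

    print(pure_nash_equilibria(PRISONERS_DILEMMA))  # [('confess', 'confess')]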

To shed some light on this puzzle, let us begin by seeking an alternative sense in which such an option could be considered a solution of a game. The concept of Pareto Efficiency may come in handy here. The concept is used in economics to identify the best way in which certain resources can be allocated: a given allocation is Pareto efficient if there is no other allocation that improves the payoffs of at least one individual without making the others’ worse. The principle grounding Pareto Efficiency can be expressed as follows:

  • Optimality: A distribution is optimal if under no alternative distribution all recipients would be better off.

If the concept of Pareto Efficiency is applied in the context of Game Theory, it may provide another interpretation of what the solution of a game should be. In this alternative sense, the rational choice for an agent who aims to maximize their utility would be to adopt the set of strategies that guarantees Pareto Efficiency. For instance, in the Prisoner’s Dilemma defined by the above matrix, the outcome (-2, -2) is clearly more efficient than (-5, -5), because both players are better off in the former. And undeniably (-2, -2) is more efficient than (-10, 0) and (0, -10), because in each of the latter one of the two players obtains a worse payoff.
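Here is a sketch of this comparison. It implements one standard reading of a Pareto improvement (no player worse off, at least one strictly better off); the Optimality principle stated above is a slightly weaker condition, so this is an illustration rather than a literal transcription:

    def pareto_improvements(game, outcome):
        """Return the outcomes in which no player is worse off than in `outcome`
        and at least one player is strictly better off."""
        a, b = game[outcome]
        return [o for o, (pa, pb) in game.items()
                if pa >= a and pb >= b and (pa > a or pb > b)]

    # Nothing improves on mutual refusal...
    print(pareto_improvements(PRISONERS_DILEMMA, ("refuse", "refuse")))    # []
    # ...whereas mutual confession is improved on by mutual refusal.
    print(pareto_improvements(PRISONERS_DILEMMA, ("confess", "confess")))  # [('refuse', 'refuse')]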

Pareto Efficiency alone, however, is not necessarily what to look for as a solution in a conflictual situation. This is because an optimal distribution could simply be unfair: in fact, it might well be the case that the optimal distribution, given the alternatives, allocates very different quantities of resources to the recipients. At least another principle is therefore required to ensure that the distribution is fair:

  • Equity: If the position of the recipients is symmetric, then the distribution should be symmetric, that is to say, it does not vary when we switch the recipients.

Indeed, Skyrms (2004, p. 18) suggests that the principles of Optimality and Equity are decisive for the identification of the outcomes of a game that abide by distributive justice.

Now, let us consider these two notions of the solution of a game: Nash Equilibrium on the one hand and optimal and fair distribution on the other. In the case of the Prisoner’s Dilemma, it seems that Game Theory leads to a paradoxical conclusion: the most rational choice for the players is not what their best option seems to be. It is important to realize that the structure of the Prisoner’s Dilemma, which generates this paradox, is not peculiar at all; in fact, it is shared by all games that model conflictual situations in which the cooperation between players is collectively convenient but individually risky. The two prisoners would thus obtain the best option for them collectively by cooperating with one another and refusing to confess to the police; however, each prisoner individually is not sure that the other will cooperate, and clearly if the other does cooperate, it is still more convenient for each of them not to cooperate and turn the other in. What seems to be at issue in conflictual situations such as the Prisoner’s Dilemma is whether cooperative behavior is rational, at least if rationality is construed in terms of the maximization of individual utility.

Recall that Thomas Hobbes describes similar behavioral dynamics in the state of nature. The natural condition of human beings is characterized precisely by the fact that everyone has the right to decide how to act by themselves, on the basis of their own individual preferences. It is important to realize that, according to Hobbes, the state of nature is a state of war not just because people may compete for limited resources, but rather because they do not trust one another. All things considered, Hobbes argues, people are created equal by nature and they think of themselves as equal, on average, in terms of physical and mental faculties. Thus, since no one is naturally designed to prevail in a possible conflict, people might always think that it is worth competing against each other for some goal. This is an ineradicable eventuality. It follows that when someone has obtained something, they are afraid that someone else might try to take it away from them. To prevent this possibility, one is better off refusing any cooperation and fighting instead to destroy any power that might endanger them. It is for this reason that in the state of nature people embark upon a war of all against all. Hobbes’ diagnosis is that if human beings are free to make choices regarding their self-interest, then they will find it more rational not to trust each other and not to cooperate, rather than come together and prosper. Hobbes saw the strategy of alienating everyone’s right to make choices and submitting to the absolute power of the sovereign as the only way out of this conflictual situation. Under the social contract, cooperation is not an equilibrium; it is imposed by the sovereign through the threat of the use of force.

But is cooperation really irrational? When confronted with the Prisoner’s Dilemma, one might adopt several different approaches toward this question. On the one hand, one could point out that there is an irreducible contradiction between self-interest and moral behavior and that moral behavior does not show itself to be rational in Game Theory simply because Game Theory assumes that rational choices maximize individual utility. On the other hand, one could point out that the sort of conflictual situations described by the Prisoner’s Dilemma are just too simplistic: if more adequate models were provided for real-life conflictual situations, then game-theoretic results would not seem so puzzling. Clearly, both the revision of the definition of rationality and the development of the mathematics of Game Theory may be enterprises worth pursuing, and both have in fact been explored extensively in economic, sociological, and philosophical research. However, since our main focus here is the role of communication in the analysis and transformation of conflicts, we will adopt a more conservative approach. On the one hand, we will accept the economic characterization of rationality as a working hypothesis in order to be able to exploit the game-theoretic framework for the analysis of the problem of cooperation in conflicts. Notice that by doing so, we are not committing to the view that self-interest actually plays any fundamental role in the explanation of human rationality: we are only treating Game Theory as a model representing how the beliefs and desires of rational agents determine their behavior.Footnote 4 On the other hand, we will try to keep the technicalities to a minimum, since we do not aim to solve the problem of cooperation in Game Theory. In fact, the most interesting question for us to ask regarding the Prisoner’s Dilemma is whether communication has anything to do with the puzzle.

It is sometimes noticed, for instance, that the reason why the Prisoner’s Dilemma is to be considered a non-cooperative game is because the players are separated and forbidden from communicating with one another: it is suggested that, if they could communicate, they would reach an agreement and cooperate. This suggestion, however, is misleading. The reason why games such as the Prisoner’s Dilemma are classified as non-cooperative is not because players are prevented from using means that would allow them to cooperate, for example communication and negotiation. They are non-cooperative games because players must make individual choices based on their own individual preferences.Footnote 5 Suppose, for instance, that the two prisoners are in fact allowed to communicate and agree not to confess. Now, suppose one of the prisoners is questioned and holds to the agreement, refusing to confess. It is then the other prisoner’s turn, and suppose that they actually know that their partner did not confess. What is more rational for them to do then? If the payoffs are those presented in the Prisoner’s Dilemma, then if the prisoner is economically rational, they should break the deal, confess, and go free. But then of course the first prisoner could foresee this and, again, if economically rational, should choose to confess in the first place.

If communication is conceived as a means of transferring information from one player to another to secure an agreement between them, it makes no difference to the puzzle of the Prisoner’s Dilemma. As we have just seen, the reason for this is that rational players choose their strategies only on the basis of their expected utility. The distribution of the payoffs is the only thing that truly makes a difference in Game Theory. But if this is the case, then there is actually a role for communication to play in the analysis of the puzzle. Recall that the utility functions that assign payoffs to the possible outcomes are intended to provide a formal representation of the preferences of the different players. In fact, when we assign a certain payoff to a certain outcome for a player, we mean that the player has an instrumental belief that by adopting a certain strategy they will obtain a certain result that will satisfy their desires to a certain degree. In this sense, then, utility functions model the aspects of the cognitive environment of a rational agent involved in the explanation of their actions.

Now, when we look at the conflictual situations modeled by non-cooperative games such as the Prisoner’s Dilemma from the point of view of communication, it is easy to realize that these games simply do not take place, so to speak, in a void. On the contrary, they describe problems of rational choice that are grounded in the beliefs and desires that, in the basic framework of Game Theory, are only implicitly represented by the utility functions of the players. As we have seen, Sperber and Wilson suggest thinking of this context in terms of the cognitive environments of the agents. How would an account of this context change the results of game-theoretic analysis?

Reiterated Games

As a matter of fact, cognitive environments are rather conservative; therefore, the process required to modify them is not straightforward. Most of all, it requires time. In the framework of Game Theory, time has been investigated in relation to iterated games, that is, games that can be repeated by players several times. To illustrate the impact communication may have on the process of modification of the utility functions of the players of noncooperative games, let us consider another example of a conflictual situation with the structure of the Prisoner’s Dilemma. In this situation, several firms compete in a free market in which prices are governed by the law of supply and demand. The strategy that will collectively maximize the utilities of all the firms is to reduce the supply to keep prices high. If the firms choose to cooperate and form a “cartel”, they will stick to a certain agreed quota of production; yet, obviously, if that were the strategy of the other players, every single firm would maximize its utility by increasing its production and benefiting from the high prices, while the others’ strategy in turn would suffer from the resulting fall in prices.

In jargon, such a player is called a “free rider”. In its original use, the expression refers to those who take a ride on public transportation without paying for a ticket. In economics, a free rider is an individual who benefits from some collective good but refuses to contribute to its maintenance. The problem of free riders was famously pointed out by the ecologist Garrett Hardin in relation to the phenomenon he called “the tragedy of the commons” (Hardin, 1968). Hardin’s reasoning has been used to argue for the privatization of public resources. He illustrated the concept with the example of common pastures. Where common pastures are available, every single herdsman has two possible courses of action: either keeping his cattle on the common pastures or keeping them on private ones. If a herdsman keeps his cattle on common pastures, they grow bigger and faster than if he just lets them pasture on his private fields; thus, keeping as many cattle as possible on the commons is in the herdsman’s best individual interest. Unfortunately, if there are too many cattle on the common pastures, these will soon be ruined, and they will no longer be usable by anyone. What should the herdsman do? If he is the only one not to exploit the commons, he will lose his profits and the commons will be ruined regardless. He will therefore keep his cattle on the common pastures, and all the other herdsmen will act in kind, if they are allowed to, until the commons is ruined for everyone.

Similarly, if we model the cartel situation in Game Theory, the economically rational solution proves to be for every firm to increase its production, regardless of the others’ actions, and to engage in a price war until their profit margins are zero. Just like in the Prisoner’s Dilemma, this sounds paradoxical. Figure 10.8 represents this conflictual situation as a non-cooperative game for two players.

Fig. 10.8

                            Firm B: Cooperate    Firm B: Defect
    Firm A: Cooperate            1, 1                -1, 2
    Firm A: Defect               2, -1                0, 0

The cartel’s Prisoner’s Dilemma

But, now, consider what happens in common real-life situations when players have to play this game multiple times. Every time they play, the firms need to decide whether to cooperate and respect their agreed-upon production quota or defect and increase production. In this scenario, players might also determine their preferences by taking into account their knowledge about previous interactions and their expectations with respect to future interactions. Thus, in this situation, suppose that the first time the game is played, one of the firms decides to break the cartel and increase production. If there is only one non-cooperative player, this choice will provide them with high payoffs on that round; however, the next time the game is played, the other firms will likely react against the defector by adopting the noncooperative strategy that consists in raising their production as well. By doing so, they will cause a fall in prices that will lower the profits of any firm on the market. If the firms maintain the price war by repeatedly playing the noncooperative strategy, they will also eventually erase the advantage originally gained by the defector.

Obviously, the repetition of the games does not engender the convenience of cooperative behavior by itself. On the contrary, rational players adopt cooperative strategies only insofar as they can maximize their utilities by doing so. In order to see this, simply consider what happens if players know exactly how many times they will have to interact. Experiments have shown that in repeated noncooperative games, the number of cooperative choices decreases to a minimum by the end of the game (Andreoni, 1988). The closer the last interaction, the less convenient cooperation will be, because the consequences of retaliation will be less severe: this is the so-called end-game effect. The gradual character of these empirical results simply depends on the fact that players do not always make perfectly rational decisions. In fact, if we look at the problem from a mathematical point of view, the impact of the end-game effect is even more dramatic. Suppose that perfectly rational players know that they will play exactly 20 rounds of the Prisoner’s Dilemma: How does that affect their strategies? Let us reason backward from the very last interaction. In the 20th round, players know (a) that they cannot change the score they have obtained in previous rounds and (b) that they will not have to make any other decision that can be influenced by what they choose in this round, since there will be no further interactions. Therefore, in the 20th round, they will try to maximize their utility by choosing the Nash Equilibrium, and they will defect. Now, let us go backward to the 19th round. In their penultimate interaction, the players again know (a) that they cannot change the score that they have obtained in the previous rounds and (b) that they will not have to make any other decision that can be influenced by what they choose in this round, because they already know what they will play in the last round. Therefore, in the 19th round they will try to maximize their utility by choosing the Nash Equilibrium, and they will defect. What about the 18th round? In this round too, obviously, players will know (a) and (b); therefore, they will stick to the Nash Equilibrium. It is easy to realize that the situation does not change all the way back to the first round. In the first round, rational players will foresee what they will do in the subsequent rounds; hence, in light of (a) and (b) again, they will choose to defect.

  • 20th round. ⟨Defect, Defect⟩

    19th round. ⟨Defect, Defect⟩

    …

    1st round. ⟨Defect, Defect⟩
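The unraveling argument can also be encoded as a sketch. The following code does not derive the result; it merely makes explicit the inductive step that (a) and (b) license at every round:

    def backward_induction_plan(total_rounds):
        """Sketch of the backward-induction argument for a finitely repeated
        Prisoner's Dilemma with a commonly known number of rounds."""
        plan = {}
        for current_round in range(total_rounds, 0, -1):  # reason from the last round back
            # By (a) past scores are fixed, and by (b) future play is already
            # settled as defection, so each round reduces to a one-shot game
            # whose only Nash Equilibrium is mutual defection.
            plan[current_round] = ("Defect", "Defect")
        return plan

    print(backward_induction_plan(20)[1])  # ('Defect', 'Defect'): defection from round 1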

This very simple reasoning demonstrates that if players know how many times they will have to play, it can be established in the framework of Game Theory that they should never cooperate; indeed, a dismaying result. Let us see if there is any chance to do better.

What is the best strategy to adopt in iterated noncooperative games such as the Prisoner’s Dilemma? There is a certain general consensus that this question can be satisfactorily answered in a unique way (see, e.g., Jurišić et al., 2012; see, however, Rapoport et al., 2015 for criticism). The evidence upon which this consensus is based was originally provided by the experiments conducted in the early 1980s by Robert Axelrod (1980a, 1980b). These experiments consisted of single-stage round-robin tournaments among computer programs, each of which simulated a different strategy for playing the Prisoner’s Dilemma. The payoff matrix of the game awarded both players 3 points for mutual cooperation and 1 point for mutual defection. If one player defected while the other cooperated, the defecting player received 5 points and the cooperating player received 0 points (Fig. 10.9).Footnote 6

Fig. 10.9

                            B: Cooperate    B: Defect
    A: Cooperate                3, 3           0, 5
    A: Defect                   5, 0           1, 1

The Prisoner’s Dilemma in Axelrod’s tournaments

In the first tournament, each program played against every other participant 200 times. The computer programs were developed by 14 experts in Game Theory who had previously studied the Prisoner’s Dilemma at length. In the second tournament, the number of repetitions of the game for each pair of programs was determined probabilistically to avoid the end-game effect. This time, 62 candidate programs were submitted, and their developers were aware of the results of the first experiment. In both experiments, the strategy that scored the most points was the one originally proposed by Anatol Rapoport, who nicknamed it “TIT-FOR-TAT” because it essentially consists in mimicking the tactics of the opponent (Rapoport & Chammah, 1965). This strategy is fairly simple:

  (a) If it is the first round, cooperate.

  (b) Otherwise, do what the opponent did in the previous round.
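Expressed as code, the strategy takes only a few lines. The following is an illustrative sketch, not Axelrod’s original programs: it implements rules (a) and (b) and adds a hypothetical helper, play_match, that plays two strategies against each other under the payoffs of Fig. 10.9:

    def tit_for_tat(own_history, opponent_history):
        """Rule (a): cooperate in the first round; rule (b): copy the opponent."""
        if not opponent_history:
            return "C"
        return opponent_history[-1]

    # One-round payoffs from Axelrod's tournaments: (my points, their points).
    AXELROD_PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                       ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play_match(strategy_a, strategy_b, rounds):
        """Play two strategies against each other; return their total scores."""
        history_a, history_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_a, history_b)
            move_b = strategy_b(history_b, history_a)
            points_a, points_b = AXELROD_PAYOFFS[(move_a, move_b)]
            score_a += points_a
            score_b += points_b
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    print(play_match(tit_for_tat, tit_for_tat, 200))  # (600, 600): unbroken cooperation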

It is also easy to comprehend why TIT-FOR-TAT is effective in round-robin tournaments. First, it is clear, meaning it is easily recognizable by the other strategies and its behavior is predictable and reliable, since it considers only the previous round of the game and regularly acts according to it. This makes it a good partner to identify and cooperate with.

Second, it is retaliatory: if the opponent defects, it punishes them by defecting in the subsequent round. This minimizes the losses against uncooperative players: every time the opponent stops cooperating, the player obtains the worst payoff one time only. Besides, it discourages defections (Fig. 10.10).

Fig. 10.10

    TIT-FOR-TAT:  C  D  …
    Opponent:     D  …

Retaliation

Third, it is forgiving: if the opponent begins to cooperate once again, the player cooperates back. This favors cooperation with faithful opponents (Fig. 10.11).

Fig. 10.11

    TIT-FOR-TAT:  C  D  C  …
    Opponent:     D  C  …

Forgiveness

Fourth, it is nice, meaning it looks for cooperation the first time and never tries to fool the opponent.

This ensures good payoffs with other cooperative opponents (e.g., other TIT-FOR-TATs) (Fig. 10.12).

Fig. 10.12

    TIT-FOR-TAT:  C  C  C  …
    Opponent:     C  C  C  …

Niceness

Thanks to these four features, TIT-FOR-TAT gains a lot of points. In fact, in a population where every player adopts TIT-FOR-TAT, the most rational thing to do is to adopt it as well: ⟨TIT-FOR-TAT, TIT-FOR-TAT⟩ is a Nash Equilibrium of Axelrod’s iterated Prisoner’s Dilemma game.

It is possible to demonstrate, however, that there is nothing particularly special about TIT-FOR-TAT and that its success is simply an instance of a far more general phenomenon. To see why this is the case, it is important to keep in mind the conditions under which the success of TIT-FOR-TAT is obtained. Specifically, recall that to avoid end-game effects, players must know that at every round there will always be a certain probability of future interactions.Footnote 7 Besides, the application of TIT-FOR-TAT presupposes that the player always has enough information about the game to tell whether the opponent cooperated or defected in the previous round.Footnote 8 Now, it can be proven that under these conditions equilibria arise that are not the Nash Equilibria of a single round of the game and that provide the players with a greater utility. There is an entire series of results of this kind that are known as Folk Theorems, because they are part of the folklore of Game Theory, in the sense that they were widely known by game theorists even before they were formally stated and proven.Footnote 9 As far as our purposes are concerned, the Folk Theorems establish that if players have a large enough time horizon, there are always solutions to an iterated Prisoner’s Dilemma in which both players obtain payoffs that exceed those of the Nash Equilibrium of the game’s single round. In particular, if the utility gained by deviating from cooperation is not too high, there may be optimal and fair solutions. The mathematics behind the Folk Theorems is not extremely difficult to understand, but it is still too complicated for this presentation. The basic idea they confirm, however, is fairly intuitive: if players have a reasonable expectation that they will have to interact again in the future, they might find it convenient to cooperate rather than trying to fool one another. It is also intuitively clear that the Folk Theorems only apply to ideal situations, whereas in real-life conflicts the conditions that they require are seldom satisfied; nonetheless, they are relevant to our investigation, because they establish that in principle cooperative behavior could be a solution to a conflict.

The Evolutionary Perspective on Reiterated Games

Are nice strategies such as TIT-FOR-TAT the way out of the Prisoner’s Dilemma? Do Axelrod’s experiments show that cooperation can be the most economically rational behavior, at least in the long run? Notice, to begin with, that TIT-FOR-TAT does not guarantee that players will cooperate: TIT-FOR-TAT copies the opponent; therefore, it will cooperate if the opponent cooperates and will defect if the opponent defects. In effect, the Folk Theorems do not guarantee that in infinitely repeated games players will converge on optimal and fair equilibria; they only establish that in those games there are other, more efficient equilibria. In a way, the Folk Theorems only show that if players expect to interact again in the future for a long enough time, a Prisoner’s Dilemma game can be transformed into a Stag Hunt game. There are several ways in which this transformation can be construed. The difference between them depends on how the context in which the game is played is construed.

By examining the results of his first experiments, Axelrod noticed that the success of certain nice strategies such as TIT-FOR-TAT in the tournaments was significantly boosted by the large number of points that they scored against certain specific strategies. The latter did not perform very well in the tournaments, but they determined the success of other strategies. Axelrod used the expression “kingmaker” to refer to the strategies that played this subsidiary role in the tournament. But what if there are no kingmakers among the opponents? Axelrod raised this worry explicitly: “[D]oes TIT FOR TAT do well in a wide variety of environments? That is to say, is it robust?” (Axelrod, 1984, p. 48).

In order to test the robustness of TIT-FOR-TAT, he considered the repetition of games from an evolutionary perspective (Axelrod & Hamilton, 1981). He therefore organized a tournament where the different strategies played the Prisoner’s Dilemma against each other in several successive stages. The number of programs playing a certain strategy in the next stage was determined as a function of the number of points that the strategy had obtained in the previous stage. This was supposed to represent the fact that the strategies that score fewer points would be abandoned in favor of more successful ones by “future generations” of players. The idea was to test the success of the strategies in adapting to different environments. The results of the experiment showed that the less successful strategies become increasingly less common and eventually drop out, while the more successful ones proliferate. Again, the prevailing strategy in this evolutionary experiment was TIT-FOR-TAT. In Axelrod’s view, this result demonstrated that the success of TIT-FOR-TAT does not in effect depend on the context in which it is put into play.

This presentation, however, is not completely accurate. Surely, Axelrod’s evolutionary experiment showed that TIT-FOR-TAT is robust enough to succeed in an environment without any of its kingmakers, but it does not show that TIT-FOR-TAT can succeed in any environment whatsoever. Notice, in fact, that since TIT-FOR-TAT always copies the opponent, it can never win by obtaining more points in a series of interactions: it can either lose or tie. For instance, if it is put into play in an environment where there are only strategies against which it cannot tie, eventually it will not gain enough points to reproduce itself in the next generation. Now, one of the strategies that are particularly effective against TIT-FOR-TAT is one that always defects: such a strategy would fool TIT-FOR-TAT in the first round and then force the Nash Equilibrium of any single round of the Prisoner’s Dilemma until the end of the series of interactions. Besides, defectors would score better against each other than TIT-FOR-TAT does. Consequently, TIT-FOR-TAT would be unable to invade an environment in which all players do nothing other than defect (Fig. 10.13).

Fig. 10.13

    TIT-FOR-TAT:  C  D  D  …   (total: n)
    DEFECTOR:     D  D  D  …   (total: n + 5)

TIT-FOR-TAT vs. DEFECTOR

As a matter of fact, TIT-FOR-TAT only fares well together with other nice strategies. In fact, if an environment contains enough nice strategies, these will proliferate, because they will be able to play enough times against one another, and the payoffs of the Prisoner’s Dilemma reward cooperation over defection. Once TIT-FOR-TAT has occupied an environment, it is considerably difficult for defectors to invade it, because TIT-FOR-TAT is retaliating: the defectors would therefore score fewer points when they play against TIT-FOR-TATs or against themselves than TIT-FOR-TATs score when they play against one another.
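Reusing the play_match sketch introduced earlier, the point can be checked numerically: a pure defector edges out TIT-FOR-TAT by exactly the 5 points of the first round, as in Fig. 10.13, while pairs of TIT-FOR-TATs far outscore pairs of defectors:

    def defector(own_history, opponent_history):
        """A strategy that defects unconditionally."""
        return "D"

    print(play_match(tit_for_tat, defector, 200))     # (199, 204): defector wins by 5
    print(play_match(defector, defector, 200))        # (200, 200): mutual defection
    print(play_match(tit_for_tat, tit_for_tat, 200))  # (600, 600): mutual cooperation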

There are two crucial things to notice about the evolutionary perspective on reiterated games. The first one is that actors who must choose a strategy in every round of every stage of the game are not really the players anymore: the actual players are the strategies themselves which strive for evolutionary success. These players do not strive for the maximization of their utilities in a single round of the game. The utility of these players is the expected number of offspring in the next stage of the game. Clearly, the dynamics with which strategies replicate themselves might vary, but if they depend on the point they score in subsequent rounds of the Prisoner’s Dilemma, then there is at least something that we already know from Axelrod’s experiments: in iterated stages of the Prisoner’s Dilemma, strategies fare well when they cooperate with cooperators and defect with defectors, but they will score more points in the first case. This means that, in the long run, a strategy will have more chances of reproducing itself (i.e., obtain more utility) from cooperation than from defection if the environment is favorable to cooperation. In fact, the Folk Theorems guarantee that mutual cooperation might be another Nash Equilibrium.

In order to see the point more clearly, consider a simple scenario. It is an evolutionary tournament such as the one described above, in which strategies play the Prisoner’s Dilemma with the payoff matrix represented in Fig. 10.9. There are only two sorts of strategies, COOPERATOR and DEFECTOR, whose behavior can easily be inferred from their names: the former always cooperates and the latter always defects. They play only two stages, each consisting of a single round. After each stage, the next generation of strategies will have as many individuals as the points scored by the previous generation (for the sake of argument, just ignore the complication of pairing an odd number of players). Combinatorially, it is easy to see that there are three possible environments:

  1. DEFECTOR vs. DEFECTOR. In the first round, each defector plays against a defector and therefore scores 1 point. In the second round, there are two defectors. They play against each other, and again each of them scores 1 point. At the end of the game, each DEFECTOR has 1 heir (Fig. 10.14).

Fig. 10.14

    Stage 1: A (D) vs. B (D): 1 point each
    Stage 2: A1 (D) vs. B1 (D): 1 point each

DEFECTOR vs. DEFECTOR

  2. DEFECTOR vs. COOPERATOR. In the first round, a defector plays against a cooperator: the former scores 5 points and the latter 0. In the second round, there are five defectors. They play against each other, and each of them scores 1 point. At the end of the game, DEFECTOR has 5 heirs and COOPERATOR has 0 heirs (Fig. 10.15).

Fig. 10.15

    Stage 1: Defector (D) vs. Cooperator (C): 5 points vs. 0 points
    Stage 2: the five heirs of DEFECTOR play one another: 1 point each

DEFECTOR vs. COOPERATOR

  3. COOPERATOR vs. COOPERATOR. In the first round, a cooperator plays against a cooperator: they both score 3 points. In the second round, there are six cooperators. They play against each other, and each of them scores 3 points. At the end of the game, each COOPERATOR has 9 heirs (Fig. 10.16).

Fig. 10.16

    Stage 1: A (C) vs. B (C): 3 points each
    Stage 2: the six heirs A1, A2, A3, B1, B2, B3 (all C) play one another: 3 points each

COOPERATOR vs. COOPERATOR

But then, if the players of this game are strategies which aim to reproduce themselves, their utilities are to be measured in terms of their offspring. Thus, the actual payoff matrix for this game is better represented in Fig. 10.17. As is easy to demonstrate, this game has two Nash Equilibria: ⟨Cooperate, Cooperate⟩ and ⟨Defect, Defect⟩. In fact, this game is no longer a Prisoner’s Dilemma: it is a Stag Hunt game. In this sense, the evolutionary perspective illustrates the result that the Folk Theorems establish.

Fig. 10.17

                            B: Cooperate    B: Defect
    A: Cooperate                9, 9           0, 5
    A: Defect                   5, 0           1, 1

Offspring matrix
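As a check on the three scenarios above, the offspring matrix of Fig. 10.17 can be recomputed from the one-round payoffs. This is a sketch under the same simplifications as the text (the pairing complication is ignored):

    # One-round points (Fig. 10.9): what the first strategy scores against the second.
    ROUND_POINTS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def two_stage_offspring(move_a, move_b):
        """Heirs left by each initial player after the two-stage scenario."""
        heirs_a = ROUND_POINTS[(move_a, move_b)]  # stage 1 determines the heirs
        heirs_b = ROUND_POINTS[(move_b, move_a)]
        # In stage 2, each heir plays another copy of its own strategy,
        # so every heir scores the same-type payoff once more.
        return heirs_a * ROUND_POINTS[(move_a, move_a)], heirs_b * ROUND_POINTS[(move_b, move_b)]

    print(two_stage_offspring("D", "D"))  # (1, 1): each DEFECTOR leaves 1 heir
    print(two_stage_offspring("D", "C"))  # (5, 0): DEFECTOR leaves 5 heirs, COOPERATOR none
    print(two_stage_offspring("C", "C"))  # (9, 9): each COOPERATOR leaves 9 heirs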

The second important thing to notice about the evolutionary perspective on reiterated games, however, is that it fails to provide an answer about how to progress from the least to the most efficient equilibrium. As we have seen, in fact, it only tells us that the progression depends on the population of players. If the environment contains enough cooperative players, generation after generation, they can replace the defectors and resist further invasions on their part. It is possible to establish exactly how many cooperators are required to invade a population of defectors based on the dynamics of replication of the players (which of course involve the payoffs of the game for the single rounds), but it is not possible to intervene in the process once the population is given.

Is the population really given anyway? Brian Skyrms (2004) famously proposed that the dynamics of group formation should be considered in order to explain the evolution of cooperation. Rather than being given, he argues, populations are themselves determined by the interaction of agents; in fact, agents select their peers according to the feedback they obtain in terms of the success of their actions when they interact. Cooperators team up with cooperators and keep out defectors and free riders. Skyrms’ analysis of the feedback-governed processes of social grouping offers interesting prospects for the evolutionary explanation of social structures. Still, these processes are beyond the intervention of rational agents, and the dynamics of many actual conflictual situations cannot in fact be suitably represented in these terms: not every conflict transformation depends on a change in the population of the parties.

What is the alternative? An interesting suggestion comes from Michael Tomasello’s (2009) interpretation of Skyrms’ work. Tomasello distinguishes between two ways of engaging in cooperative activities: the “I-mode”, in which one exclusively pursues one’s own self-interest, and the “we-mode”, characterized instead by joint intentionality. While apes are only able to cooperate in “I-mode”, humans are mostly engaged in “we-mode” group activities. According to Tomasello, Skyrms shows how human cooperation can properly be characterized as a Stag Hunt game rather than as a Prisoner’s Dilemma. Tomasello’s question, then, is how humans come to be able to do that. His answer involves a story about how humans developed the skills to engage in activities in communicational, social, and normative contexts. The possibility of this cultural evolutionary process is grounded in characteristically human metarepresentational abilities, the very same abilities that, as seen in Chap. 1, Tomasello believes enable humans to communicate: “[s]kills and motivations for cooperative communication co-evolved with these cooperative activities because such communication both depended on these activities and contributed to them by facilitating the coordination needed to co-construct a joint goal and differentiated roles” (ibid., p. 74).

Conclusions

In this chapter, the game-theoretic approach to the analysis of conflictual situations has been introduced and discussed in order to model the dynamics of the interaction between the strategic choices of parties. These choices depend on the practical reasoning of rational agents who act upon their beliefs and desires to achieve their goals. In Game Theory, the cognitive environment of the agents that motivates their actions is represented in terms of the payoff values assigned to the possible outcomes of the conflict, and the agents’ goals are represented as the maximization of their utilities. Game Theory also shows that in the long run cooperation is the most efficacious style to adopt in coping with a conflictual situation. It also shows, however, that cooperation is not always the most convenient one, because it does not always guarantee the maximization of the utilities of the agents.

Our purpose here was not to solve the problem of cooperation in social systems; therefore, we did not discuss any amelioration of game-theoretic analysis. We noticed, however, that communication plays no role in the standard framework of Game Theory. The absence of communication from this picture is remarkable, given that the determination of the best strategies to maximize the utilities of the agents depends primarily on the utility functions themselves, which implicitly represent the agents’ cognitive environment. In effect, on the one hand, the dynamics that determine the cognitive environment of the agents are not really an object of investigation in the theory, because the definition of the players’ utility functions is a prerequisite for the game-theoretic analysis. On the other hand, it seems that in iterated games the only way in which the strategies of rational players could converge on the riskier solutions that emerge as equilibria in the long run is by backing them up with the appropriate common knowledge (Skyrms, 2004, p. 51): player A must know that player B is cooperating, player B must know that player A is cooperating, player A must know that player B knows that player A is cooperating, and so on. But as we have seen, common knowledge is not attainable through communication if communication is conceived, in terms of the code model, as the transmission of information. In these terms, communication has no role to play in Game Theory. This is why Skyrms proposes looking elsewhere to explain the evolution of cooperative behavior, namely to the game-theoretic analysis of group formation.

The analysis of repeated games, however, establishes that a modification of the players’ cognitive environment may eventually result in a corresponding modification of their utility functions, which might in turn modify their strategies. We have learned from Sperber and Wilson that communication is not so much about transferring information as it is about enlarging mutual cognitive environments; hence, even if the game-theoretic analysis of conflicts does not assign any role to communication, it still clearly indicates what this role should be and what could be achieved by means of it. Communication might contribute to the emergence of more efficient solutions to conflictual situations and drive parties toward them, thus fostering cooperation. Cooperative strategies are often more efficient, in the sense that they realize equilibria that allow parties to obtain the highest payoffs for themselves and for the other parties. In Rousseau’s sense, cooperative strategies allow parties to pursue their general will. As Rousseau himself made clear, however, it is crucial to realize that such a general will is not simply given as a function of the individual beliefs and desires of the parties; rather, it must be constructed through communication processes.

Chapter Summary

In this chapter, we used the technical framework of Game Theory to model conflicts as strategic games in which the outcomes of the parties’ decisions depend on one another. In this framework, a game is formalized as a list of players, the actions available to each of them, and their payoffs in all the possible outcomes. Players are assumed to be “economically” rational and to try only to maximize their payoffs. A game’s solution can be defined as a Nash Equilibrium: an outcome in which no player can improve their payoff by changing their choice, given the choices of all the other players.
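In standard notation (a sketch of the usual definitions, consistent with this summary but not spelled out in the chapter), a finite game is a structure

    $G = \langle N, (A_i)_{i \in N}, (u_i)_{i \in N} \rangle$

with a set of players $N$, action sets $A_i$, and payoff functions $u_i$. A profile $a^* = (a_1^*, \dots, a_n^*)$ is a Nash Equilibrium if and only if, for every player $i \in N$ and every action $a_i \in A_i$,

    $u_i(a_i^*, a_{-i}^*) \geq u_i(a_i, a_{-i}^*),$

where $a_{-i}^*$ stands for the choices of all players other than $i$.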

The game-theoretic approach immediately highlights a problem with conflict management: games such as the Prisoner’s Dilemma seem to show that a conflict might not have cooperative solutions, even if the parties would obtain higher payoffs by cooperating. We saw, however, that this conclusion is too hasty, especially because the basic formulations of the Prisoner’s Dilemma miss some crucial aspects of real-life situations. One of these is the fact that parties in a conflict must usually decide whether to cooperate multiple times. The dynamics of iterated noncooperative games was studied statistically by Robert Axelrod, who had different strategies, simulated by computer programs, compete against one another in round-robin tournaments. In order to avoid end-game effects, the number of game iterations was determined probabilistically. The winning strategy was the so-called TIT-FOR-TAT. Originally proposed by Anatol Rapoport, this strategy is very simple: it cooperates in the first round and then does what the opponent did in the previous round. The success of TIT-FOR-TAT depends on the fact that it gains high payoffs against other cooperative strategies while not losing much against noncooperative ones. The outcome of a reiterated Prisoner’s Dilemma in which all parties adopt TIT-FOR-TAT is a cooperative Nash Equilibrium. This and other similar results, known in the literature as “Folk Theorems”, guarantee that cooperation need not be considered irrational in Game Theory when players have a large enough time horizon. Axelrod also demonstrated that TIT-FOR-TAT is robust, meaning that it can do well even in environments where there are only a few other cooperative players to compete with. He tested the success of TIT-FOR-TAT in evolutionary games in which the number of parties playing a certain strategy at a certain round depends on the points scored by that strategy in the previous round.
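The mechanics of such a tournament are easy to reproduce. The following is a minimal sketch, not Axelrod’s actual code: the roster of opponents and the stopping probability are illustrative assumptions, while the payoffs (5, 3, 1, 0) are the chapter’s:

    import random

    # Prisoner's Dilemma payoffs used in this chapter:
    # (my points, opponent's points) for each pair of moves.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(mine, theirs):
        """Cooperate first, then copy the opponent's previous move."""
        return theirs[-1] if theirs else "C"

    def always_defect(mine, theirs):
        return "D"

    def always_cooperate(mine, theirs):
        return "C"

    def match(s1, s2, stop_prob=0.01):
        """Iterate with a probabilistic stop to avoid end-game effects."""
        h1, h2, t1, t2 = [], [], 0, 0
        while True:
            m1, m2 = s1(h1, h2), s2(h2, h1)
            p1, p2 = PAYOFF[(m1, m2)]
            t1, t2 = t1 + p1, t2 + p2
            h1.append(m1); h2.append(m2)
            if random.random() < stop_prob:
                return t1, t2

    strategies = [tit_for_tat, always_defect, always_cooperate]
    totals = {s.__name__: 0 for s in strategies}
    for i, s1 in enumerate(strategies):        # round-robin: every pair meets
        for s2 in strategies[i + 1:]:
            a, b = match(s1, s2)
            totals[s1.__name__] += a
            totals[s2.__name__] += b
    print(totals)

Note that in this tiny roster ALWAYS-DEFECT can come out on top, since it fully exploits ALWAYS-COOPERATE; TIT-FOR-TAT’s victory in Axelrod’s tournaments depended on a larger field in which retaliatory strategies limited such exploitation. The sketch makes it easy to vary the roster and observe exactly this dependence on the environment.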

The evolutionary perspective on reiterated games proves that a strategy has a better chance of reproducing itself by cooperating if there are enough cooperative players in its environment. In light of this, Brian Skyrms suggested that the feasibility of cooperative solutions depends in fact on the relative distribution of cooperative players in a population. There seems to be another possibility, however: given that the payoffs only represent the attitudes of the players toward certain outcomes, it seems that the structure of a game could be modified by communication.

Focus Points

  • What is to be understood as a solution to a conflict?

  • Is cooperation always rational?

  • How does iteration change the structure of noncooperative games?

Further Introductory Reading

Game Theory is a branch of mathematics. As such, it cannot be presented without its technical apparatus and the difficulties that come with it. A very accessible introduction is Binmore (2007b). See also Binmore (2007a) and Watson (2013).

Further Advanced Reading

For an extensive introduction to Game Theory, see Osborne and Rubinstein (1994), Hargreaves-Heap and Varoufakis (2004), Rasmusen (2010), or Tadelis (2013). A more advanced and technical presentation is offered in Myerson (1991). Axelrod’s tournaments and their results are very clearly presented and discussed in Axelrod (1984); see, however, Rapoport et al. (2015) for a more recent reevaluation of those results. For the debate between Skyrms and Tomasello, see Skyrms (2004) and Tomasello (2009). There are countless applications of Game Theory to conflict management in the scientific literature; in this regard, Schelling (1960) was a very influential book that remains important, and not only for historical reasons. See also Rapoport (1974).

Study Questions

  1. Define the notion of a game in Game Theory.

  2. What is a Nash Equilibrium?

  3. Define Pareto efficiency.

  4. Illustrate the Prisoner’s Dilemma.

  5. Illustrate the “tragedy of the commons”.

  6. What is the end-game effect?

  7. Describe the strategy TIT-FOR-TAT and the characteristics that make it successful.

  8. What are Folk Theorems in Game Theory?

  9. Under what conditions can TIT-FOR-TAT be successful in evolutionary games?

Sample Essay Questions

  1. Discuss the rationality assumptions of Game Theory.

  2. Discuss the case of the Prisoner’s Dilemma from the point of view of conflict management.

  3. Compare the different views of Skyrms and Tomasello on the evolution of cooperation.