Prisoner's dilemma


The prisoner's dilemma is a game theory thought experiment involving two rational agents, each of whom can either cooperate for mutual benefit or betray their partner ("defect") for individual gain. The dilemma arises from the fact that while defecting is rational for each agent, cooperation yields a higher payoff for each. The puzzle was designed by Merrill Flood and Melvin Dresher in 1950 during their work at the RAND Corporation. [1] They invited economist Armen Alchian and mathematician John Williams to play a hundred rounds of the game, observing that Alchian and Williams often chose to cooperate. When asked about the results, John Nash remarked that rational behavior in the iterated version of the game can differ from that in a single-round version. This insight anticipated a key result in game theory: cooperation can emerge in repeated interactions, even in situations where it is not rational in a one-off interaction.

Albert W. Tucker later named the game the "prisoner's dilemma" by framing the rewards in terms of prison sentences. [2] The prisoner's dilemma models many real-world situations involving strategic behavior. In casual usage, the label "prisoner's dilemma" is applied to any situation in which two entities can gain important benefits by cooperating or suffer by failing to do so, but find it difficult or expensive to coordinate their choices.

Premise

An example prisoner's dilemma payoff matrix

William Poundstone described this "typical contemporary version" of the game in his 1993 book Prisoner's Dilemma:

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail. The prisoners are given a little time to think this over, but in no case may either learn what the other has decided until he has irrevocably made his decision. Each is informed that the other prisoner is being offered the very same deal. Each prisoner is concerned only with his own welfare—with minimizing his own prison sentence. [3]

This leads to four different possible outcomes for prisoners A and B:

  1. If A and B both remain silent, they will each serve one year in prison.
  2. If A testifies against B but B remains silent, A will be set free while B serves three years in prison.
  3. If A remains silent but B testifies against A, A will serve three years in prison and B will be set free.
  4. If A and B testify against each other, they will each serve two years.

Strategy for the prisoner's dilemma

Two prisoners are separated into individual rooms and cannot communicate with each other. It is assumed that both prisoners understand the nature of the game, have no loyalty to each other, and will have no opportunity for retribution or reward outside of the game. The normal game is shown below: [4]

                                   Prisoner B stays silent        Prisoner B testifies
                                   (cooperates)                   (defects)
Prisoner A stays silent            Each serves 1 year             Prisoner A: 3 years
(cooperates)                                                      Prisoner B: goes free
Prisoner A testifies               Prisoner A: goes free          Each serves 2 years
(defects)                          Prisoner B: 3 years

Regardless of what the other decides, each prisoner gets a higher reward by betraying the other ("defecting"). The reasoning involves analyzing both players' best responses: B will either cooperate or defect. If B cooperates, A should defect, because going free is better than serving 1 year. If B defects, A should also defect, because serving 2 years is better than serving 3. So, either way, A should defect since defecting is A's best response regardless of B's strategy. Parallel reasoning will show that B should defect.

Defection always results in a better payoff than cooperation, so it is a strictly dominant strategy for both players. Mutual defection is the only strong Nash equilibrium in the game. Because mutual cooperation would give both players a higher payoff than mutual defection, this Nash equilibrium is not Pareto efficient: the individually rational outcome is collectively worse.
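
This best-response reasoning can be checked mechanically. The following sketch (an illustration in Python, not part of the original account) encodes the sentences above as negative payoffs and confirms that defection strictly dominates cooperation and that mutual defection is the only pure-strategy Nash equilibrium.

    from itertools import product

    # Prison sentences from the story, written as negative payoffs (years lost).
    # Key: (A's move, B's move) -> (A's payoff, B's payoff)
    payoff = {
        ("cooperate", "cooperate"): (-1, -1),   # both stay silent: 1 year each
        ("cooperate", "defect"):    (-3,  0),   # A silent, B testifies
        ("defect",    "cooperate"): ( 0, -3),   # A testifies, B silent
        ("defect",    "defect"):    (-2, -2),   # both testify: 2 years each
    }
    moves = ("cooperate", "defect")

    # Defection strictly dominates cooperation for A, whatever B does.
    for b in moves:
        assert payoff[("defect", b)][0] > payoff[("cooperate", b)][0]

    # Pure-strategy Nash equilibria: profiles with no profitable unilateral deviation.
    equilibria = [
        (a, b) for a, b in product(moves, moves)
        if payoff[(a, b)][0] >= max(payoff[(x, b)][0] for x in moves)
        and payoff[(a, b)][1] >= max(payoff[(a, y)][1] for y in moves)
    ]
    print(equilibria)   # [('defect', 'defect')]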

Generalized form

The structure of the traditional prisoner's dilemma can be generalized from its original prisoner setting. Suppose that the two players are represented by the colors red and blue and that each player chooses to either "cooperate" or "defect".

If both players cooperate, they both receive the reward R for cooperating. If both players defect, they both receive the punishment payoff P. If Blue defects while Red cooperates, then Blue receives the temptation payoff T, while Red receives the "sucker's" payoff, S. Similarly, if Blue cooperates while Red defects, then Blue receives the sucker's payoff S, while Red receives the temptation payoff T.

This can be expressed in normal form:

Canonical prisoner's dilemma payoff matrix
(each cell lists Blue's payoff first, then Red's)

                     Red cooperates       Red defects
Blue cooperates      R, R                 S, T
Blue defects         T, S                 P, P

and to be a prisoner's dilemma game in the strong sense, the following condition must hold for the payoffs: T > R > P > S.

The payoff relationship R > P implies that mutual cooperation is superior to mutual defection, while the payoff relationships T > R and P > S imply that defection is the dominant strategy for both agents.

The iterated prisoner's dilemma

If two players play the prisoner's dilemma more than once in succession, remember their opponent's previous actions, and are allowed to change their strategy accordingly, the game is called the iterated prisoner's dilemma.

In addition to the general form above, the iterative version also requires that 2R > T + S, to prevent alternating cooperation and defection giving a greater reward than mutual cooperation.
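
Both inequalities are easy to verify for the payoff values T = 5, R = 3, P = 1, S = 0 that are conventional in the literature (these particular numbers are an assumption for the check below, not values given above).

    # Conventional example payoffs (assumed here for illustration).
    T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff

    # One-shot prisoner's dilemma in the strong sense:
    assert T > R > P > S

    # Additional condition for the iterated game: alternating cooperation and
    # defection must not pay more than sustained mutual cooperation.
    assert 2 * R > T + S

    print("Prisoner's dilemma conditions hold for (T, R, P, S) =", (T, R, P, S))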

The iterated prisoner's dilemma is fundamental to some theories of human cooperation and trust. Assuming that the game effectively models transactions between two people that require trust, cooperative behavior in populations can be modeled by a multi-player iterated version of the game. In 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over 2,000. The iterated prisoner's dilemma is also called the "peace-war game". [5] [6]

General strategy

If the iterated prisoner's dilemma is played a finite number of times and both players know this, then the dominant strategy and Nash equilibrium is to defect in all rounds. The proof is inductive: one might as well defect on the last turn, since the opponent will not have a chance to later retaliate. Therefore, both will defect on the last turn. Thus, the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on. The same applies if the game length is unknown but has a known upper limit.[ citation needed ]
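
The induction can be sketched numerically. In the snippet below (an illustration using the sentence payoffs from the premise, expressed as negative utilities), the value of equilibrium play in the remaining rounds is the same whichever move is chosen now, so the stage-game dominance of defection survives in every round.

    # Backward induction over a finitely repeated prisoner's dilemma (a sketch).
    R, S, T, P = -1.0, -3.0, 0.0, -2.0   # payoffs as (negative) years in prison
    n_rounds = 10

    cont = 0.0   # value of equilibrium (all-defect) play in the rounds after this one
    for k in range(n_rounds, 0, -1):
        # The continuation value is added to every stage payoff regardless of the
        # current move, so it cannot overturn the dominance of defection.
        assert T + cont > R + cont   # defect beats cooperate if the other cooperates
        assert P + cont > S + cont   # defect beats cooperate if the other defects
        cont += P                    # equilibrium outcome of round k: mutual defection

    print(f"Subgame-perfect play: defect in all {n_rounds} rounds; "
          f"each prisoner serves {-cont:.0f} years in total.")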

For cooperation to emerge between rational players, the number of rounds must be unknown or infinite. In that case, "always defect" may no longer be a dominant strategy. As shown by Robert Aumann in a 1959 paper, [7] rational players repeatedly interacting for indefinitely long games can sustain cooperation. Specifically, a player may be less willing to cooperate if their counterpart did not cooperate many times, which causes disappointment. Conversely, as time elapses, the likelihood of cooperation tends to rise, owing to the establishment of a "tacit agreement" among participating players. In experimental situations, cooperation can occur even when both participants know how many iterations will be played. [8]

According to a 2019 experimental study in the American Economic Review that tested what strategies real-life subjects used in iterated prisoner's dilemma situations with perfect monitoring, the majority of chosen strategies were always to defect, tit-for-tat, and grim trigger. Which strategy the subjects chose depended on the parameters of the game. [9]

Axelrod's tournament and successful strategy conditions

Interest in the iterated prisoner's dilemma was kindled by Robert Axelrod in his 1984 book The Evolution of Cooperation , in which he reports on a tournament that he organized of the N-step prisoner's dilemma (with N fixed) in which participants have to choose their strategy repeatedly and remember their previous encounters. Axelrod invited academic colleagues from around the world to devise computer strategies to compete in an iterated prisoner's dilemma tournament. The programs that were entered varied widely in algorithmic complexity, initial hostility, capacity for forgiveness, and so forth.

Axelrod discovered that when these encounters were repeated over a long period of time with many players, each with different strategies, greedy strategies tended to do very poorly in the long run while more altruistic strategies did better, as judged purely by self-interest. He used this to show a possible mechanism for the evolution of altruistic behavior from mechanisms that are initially purely selfish, by natural selection.

The winning deterministic strategy was tit for tat, developed and entered into the tournament by Anatol Rapoport. It was the simplest of any program entered, containing only four lines of BASIC, [10] and won the contest. The strategy is simply to cooperate on the first iteration of the game; after that, the player does what his or her opponent did on the previous move. [11] Depending on the situation, a slightly better strategy can be "tit for tat with forgiveness": when the opponent defects, on the next move, the player sometimes cooperates anyway, with a small probability (around 1–5%, depending on the lineup of opponents). This allows for occasional recovery from getting trapped in a cycle of defections.
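
A minimal sketch of both strategies follows (an illustration in Python, not Rapoport's original four-line program); the 1% forgiveness probability is one arbitrary choice from the 1–5% range mentioned above.

    import random

    def tit_for_tat(my_history, opp_history):
        """Cooperate first, then copy the opponent's previous move."""
        return "C" if not opp_history else opp_history[-1]

    def tit_for_tat_with_forgiveness(my_history, opp_history, p_forgive=0.01):
        """Like tit for tat, but occasionally cooperate after an opponent defection."""
        if not opp_history:
            return "C"
        if opp_history[-1] == "D" and random.random() < p_forgive:
            return "C"   # forgive, which can break a cycle of mutual defections
        return opp_history[-1]

    def play(strategy_a, strategy_b, rounds=10):
        """Play two strategy functions against each other and return both histories."""
        hist_a, hist_b = [], []
        for _ in range(rounds):
            move_a = strategy_a(hist_a, hist_b)
            move_b = strategy_b(hist_b, hist_a)
            hist_a.append(move_a)
            hist_b.append(move_b)
        return hist_a, hist_b

    print(play(tit_for_tat, tit_for_tat_with_forgiveness))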

After analyzing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to succeed: [12]

  1. Nice: The strategy will not defect before its opponent does; it is never the first to defect. Almost all of the top-scoring strategies were nice. [note 1]
  2. Retaliating: The strategy must sometimes retaliate rather than blindly cooperate, otherwise "nasty" strategies will ruthlessly exploit it.
  3. Forgiving: After retaliating, the strategy must be willing to cooperate again if the opponent stops defecting, which prevents long runs of revenge and counter-revenge. [note 2]
  4. Non-envious: The strategy must not strive to score more than its opponent.

In contrast to the one-time prisoner's dilemma game, the optimal strategy in the iterated prisoner's dilemma depends upon the strategies of likely opponents, and how they will react to defections and cooperation. For example, if a population consists entirely of players who always defect, except for one who follows the tit-for-tat strategy, that person is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy is to defect every time. More generally, given a population with a certain percentage of always-defectors with the rest being tit-for-tat players, the optimal strategy depends on the percentage and number of iterations played.[ citation needed ]
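
How the answer depends on the mix and on the match length can be made explicit with a short expected-payoff calculation (a sketch assuming the conventional payoffs T = 5, R = 3, P = 1, S = 0 and matches against a randomly drawn member of the population).

    # Expected per-match payoffs in a population that is a fraction f always-defect
    # and (1 - f) tit-for-tat, with n rounds per match.
    T, R, P, S = 5, 3, 1, 0

    def expected_payoffs(f, n):
        tft_vs_alld = S + (n - 1) * P    # loses the first round, then mutual punishment
        alld_vs_tft = T + (n - 1) * P
        tft  = f * tft_vs_alld + (1 - f) * (n * R)
        alld = f * (n * P)     + (1 - f) * alld_vs_tft
        return tft, alld

    for f, n in [(0.99, 10), (0.5, 10), (0.5, 2)]:
        tft, alld = expected_payoffs(f, n)
        better = "tit-for-tat" if tft > alld else "always defect"
        print(f"defector share {f:.0%}, {n} rounds: TFT {tft:.2f}, AllD {alld:.2f} -> {better}")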

Other strategies

Deriving the optimal strategy is generally done in two ways:

  1. Bayesian Nash equilibrium: if the statistical distribution of opposing strategies can be determined (for example, 50% tit for tat, 50% always cooperate), an optimal counter-strategy can be derived analytically. [note 3]
  2. Monte Carlo simulations of populations, in which individuals with low scores die off and those with high scores reproduce (a genetic algorithm for finding an optimal strategy); the mix of algorithms in the final population generally depends on the mix in the initial population. [16]

In the strategy called win-stay, lose-switch (also known as Pavlov), faced with a failure to cooperate, the player switches strategy the next turn. [17] In certain circumstances,[ specify ] Pavlov beats all other strategies by giving preferential treatment to co-players using a similar strategy.
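
A minimal sketch of the rule as a strategy function (an illustration; here a "win" is read as the opponent having cooperated, i.e. a payoff of R or T, and a "loss" as the opponent having defected):

    def win_stay_lose_switch(my_history, opp_history):
        """Pavlov: repeat the previous move after a win (R or T), switch after a loss (S or P)."""
        if not my_history:
            return "C"                      # cooperate on the first move
        my_last, opp_last = my_history[-1], opp_history[-1]
        won = (opp_last == "C")             # opponent cooperated -> payoff was R or T
        return my_last if won else ("D" if my_last == "C" else "C")

    # Example: against an unconditional defector the rule alternates C, D, C, D, ...
    history_me, history_opp = [], []
    for _ in range(6):
        history_me.append(win_stay_lose_switch(history_me, history_opp))
        history_opp.append("D")
    print(history_me)   # ['C', 'D', 'C', 'D', 'C', 'D']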

Although tit-for-tat is considered the most robust basic strategy, a team from Southampton University in England introduced a more successful strategy at the 20th-anniversary iterated prisoner's dilemma competition. It relied on collusion between programs to achieve the highest number of points for a single program. The university submitted 60 programs to the competition, which were designed to recognize each other through a series of five to ten moves at the start. [18] Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of points for the defector. If the program realized that it was playing a non-Southampton player, it would continuously defect in an attempt to minimize the competing program's score. As a result, the 2004 Prisoners' Dilemma Tournament results show University of Southampton's strategies in the first three places (and a number of positions towards the bottom), despite having fewer wins and many more losses than the GRIM strategy. The Southampton strategy takes advantage of the fact that multiple entries were allowed in this particular competition and that a team's performance was measured by that of the highest-scoring player (meaning that the use of self-sacrificing players was a form of minmaxing).

Because of this new rule, this competition also has little theoretical significance when analyzing single-agent strategies as compared to Axelrod's seminal tournament. But it provided a basis for analyzing how to achieve cooperative strategies in multi-agent frameworks, especially in the presence of noise.

Long before this new-rules tournament was played, Dawkins, in his book The Selfish Gene , pointed out the possibility of such strategies winning if multiple entries were allowed, but remarked that Axelrod would most likely not have allowed them if they had been submitted. It also relies on circumventing the rule that no communication is allowed between players, which the Southampton programs arguably did with their preprogrammed "ten-move dance" to recognize one another, reinforcing how valuable communication can be in shifting the balance of the game.

Even without implicit collusion between software strategies, tit-for-tat is not always the absolute winner of any given tournament; more precisely, its long-run results over a series of tournaments outperform its rivals, but this does not mean it is the most successful in the short term. The same applies to tit-for-tat with forgiveness and other optimal strategies.

This can also be illustrated using the Darwinian ESS simulation. In such a simulation, tit-for-tat will almost always come to dominate, though nasty strategies will drift in and out of the population because a tit-for-tat population is penetrable by non-retaliating nice strategies, which in turn are easy prey for the nasty strategies. Dawkins showed that here, no static mix of strategies forms a stable equilibrium, and the system will always oscillate between bounds.[ citation needed ]

Stochastic iterated prisoner's dilemma

In a stochastic iterated prisoner's dilemma game, strategies are specified in terms of "cooperation probabilities". [19] In an encounter between player X and player Y, X's strategy is specified by a set of probabilities P of cooperating with Y. P is a function of the outcomes of their previous encounters or some subset thereof. If P is a function of only their most recent n encounters, it is called a "memory-n" strategy. A memory-1 strategy is then specified by four cooperation probabilities: P = {Pcc, Pcd, Pdc, Pdd}, where Pcd is the probability that X will cooperate in the present encounter given that the previous encounter was characterized by X cooperating and Y defecting. If each of the probabilities is either 1 or 0, the strategy is called deterministic. An example of a deterministic strategy is the tit-for-tat strategy written as P = {1, 0, 1, 0}, in which X responds as Y did in the previous encounter. Another is the win-stay, lose-switch strategy written as P = {1, 0, 0, 1}. It has been shown that for any memory-n strategy there is a corresponding memory-1 strategy that gives the same statistical results, so that only memory-1 strategies need be considered. [19]

If P = {Pcc, Pcd, Pdc, Pdd} is defined as the above 4-element strategy vector of X and Q = {Qcc, Qcd, Qdc, Qdd} as the 4-element strategy vector of Y (where the indices are from Y's point of view), a transition matrix M may be defined for X whose ij-th entry is the probability that the outcome of a particular encounter between X and Y will be j given that the previous encounter was i, where i and j are one of the four outcome indices: cc, cd, dc, or dd. For example, from X's point of view, the probability that the outcome of the present encounter is cd given that the previous encounter was cd is equal to Pcd·(1 − Qdc); the index on Q is dc rather than cd because an outcome that X sees as cd is seen by Y as dc. Under these definitions, the iterated prisoner's dilemma qualifies as a stochastic process and M is a stochastic matrix, allowing all of the theory of stochastic processes to be applied. [19]

One result of stochastic theory is that there exists a stationary vector v for the matrix M such that v·M = v. Without loss of generality, it may be specified that v is normalized so that the sum of its four components is unity. The ij-th entry in M^n will give the probability that the outcome of an encounter between X and Y will be j given that the encounter n steps previous was i. In the limit as n approaches infinity, M^n will converge to a matrix M^∞ with fixed values, giving the long-term probabilities of an encounter producing j independent of i. In other words, the rows of M^∞ will be identical, giving the long-term equilibrium result probabilities of the iterated prisoner's dilemma without the need to explicitly evaluate a large number of interactions. It can be seen that v is a stationary vector for M^n and particularly M^∞, so that each row of M^∞ will be equal to v. Thus, the stationary vector specifies the equilibrium outcome probabilities for X. Defining Sx = {R, S, T, P} and Sy = {R, T, S, P} as the short-term payoff vectors for the {cc, cd, dc, dd} outcomes (from X's point of view), the equilibrium payoffs for X and Y can now be specified as sx = v·Sx and sy = v·Sy, allowing the two strategies P and Q to be compared for their long-term payoffs.
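
A numerical sketch of this machinery (assuming the conventional payoffs T = 5, R = 3, P = 1, S = 0, which are not given in the text above): build the transition matrix for two memory-1 strategies, solve v·M = v with the components of v summing to one, and read off the long-term payoffs. Small "trembling-hand" noise is added to the deterministic strategies so that the chain has a unique stationary vector.

    import numpy as np

    R, S, T, P = 3, 0, 5, 1                      # conventional payoffs (an assumption)
    Sx = np.array([R, S, T, P], float)           # X's payoffs for outcomes cc, cd, dc, dd
    Sy = np.array([R, T, S, P], float)           # Y's payoffs for the same outcomes

    def transition_matrix(p, q):
        """Markov matrix over the outcomes (cc, cd, dc, dd), seen from X's side."""
        px = np.asarray(p, float)
        qx = np.asarray(q, float)[[0, 2, 1, 3]]  # re-index Y's strategy into X's state order
        M = np.empty((4, 4))
        for i in range(4):
            M[i] = [px[i] * qx[i],       px[i] * (1 - qx[i]),
                    (1 - px[i]) * qx[i], (1 - px[i]) * (1 - qx[i])]
        return M

    def long_term_payoffs(p, q):
        """Stationary vector v (v.M = v, components summing to 1) and the payoffs v.Sx, v.Sy."""
        M = transition_matrix(p, q)
        A = np.vstack([M.T - np.eye(4), np.ones(4)])
        v, *_ = np.linalg.lstsq(A, np.array([0, 0, 0, 0, 1.0]), rcond=None)
        return v @ Sx, v @ Sy

    def noisy(strategy, eps=0.05):
        """Mix a deterministic strategy with a little noise so the chain is ergodic."""
        return [min(1 - eps, max(eps, x)) for x in strategy]

    tft  = [1, 0, 1, 0]                          # tit for tat
    wsls = [1, 0, 0, 1]                          # win-stay, lose-switch
    print(long_term_payoffs(noisy(wsls), noisy(tft)))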

Zero-determinant strategies

The relationship between zero-determinant (ZD), cooperating, and defecting strategies in the iterated prisoner's dilemma

In 2012, William H. Press and Freeman Dyson published a new class of strategies for the stochastic iterated prisoner's dilemma called "zero-determinant" (ZD) strategies. [19] The long-term payoffs for encounters between X and Y can be expressed as the determinant of a matrix which is a function of the two strategies and the short-term payoff vectors: sx = D(P, Q, Sx) and sy = D(P, Q, Sy), which do not involve the stationary vector v. Since the determinant function sy = D(P, Q, f) is linear in f, it follows that α·sx + β·sy + γ = D(P, Q, α·Sx + β·Sy + γ·U), where U = {1, 1, 1, 1}. Any strategies for which D(P, Q, α·Sx + β·Sy + γ·U) = 0 are by definition ZD strategies, and the long-term payoffs obey the relation α·sx + β·sy + γ = 0.

Tit-for-tat is a ZD strategy which is "fair", in the sense of not gaining advantage over the other player. But the ZD space also contains strategies that, in the case of two players, can allow one player to unilaterally set the other player's score or alternatively force an evolutionary player to achieve a payoff some percentage lower than his own. The extorted player could defect, but would thereby hurt himself by getting a lower payoff. Thus, extortion solutions turn the iterated prisoner's dilemma into a sort of ultimatum game. Specifically, X is able to choose a strategy for which D(P, Q, β·Sy + γ·U) = 0, unilaterally setting sy to a specific value within a particular range of values, independent of Y's strategy, offering an opportunity for X to "extort" player Y (and vice versa). But if X tries to set sx to a particular value, the range of possibilities is much smaller, consisting only of complete cooperation or complete defection. [19]
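
The claim that X can pin Y's payoff can be checked numerically. Assuming the conventional payoffs T = 5, R = 3, P = 1, S = 0, one equalizer-type ZD strategy is P = {2/3, 0, 2/3, 1/3} (this particular strategy is an illustrative choice derived from the ZD condition, not quoted from the text); it holds Y's long-term payoff at 2 whatever memory-1 strategy Y plays.

    import numpy as np

    R, S, T, P = 3, 0, 5, 1
    Sy = np.array([R, T, S, P], float)           # Y's payoffs for outcomes cc, cd, dc, dd (X's view)

    def stationary(p, q):
        """Stationary outcome distribution of the memory-1 match between strategies p and q."""
        px = np.asarray(p, float)
        qx = np.asarray(q, float)[[0, 2, 1, 3]]  # Y's cooperation probabilities in X's state order
        M = np.empty((4, 4))
        for i in range(4):
            M[i] = [px[i] * qx[i],       px[i] * (1 - qx[i]),
                    (1 - px[i]) * qx[i], (1 - px[i]) * (1 - qx[i])]
        A = np.vstack([M.T - np.eye(4), np.ones(4)])
        v, *_ = np.linalg.lstsq(A, np.array([0, 0, 0, 0, 1.0]), rcond=None)
        return v

    # Candidate equalizer ZD strategy for X: p - (1, 1, 0, 0) is a linear combination
    # of Sy and the all-ones vector (beta = -1/3, gamma = 2/3), forcing sy = -gamma/beta = 2.
    p_zd = [2 / 3, 0.0, 2 / 3, 1 / 3]

    rng = np.random.default_rng(0)
    for _ in range(5):
        q = rng.uniform(0.05, 0.95, size=4)      # an arbitrary interior opponent strategy
        s_y = stationary(p_zd, q) @ Sy
        print(f"Y's long-term payoff: {s_y:.6f}")   # ~2.0 every time, independent of q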

An extension of the iterated prisoner's dilemma is an evolutionary stochastic iterated prisoner's dilemma, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly because they reduce each other's surplus). [20]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is larger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents. [21]

While extortionary ZD strategies are not stable in large populations, another ZD class called "generous" strategies is both stable and robust. When the population is not too small, these strategies can supplant any other ZD strategy and even perform well against a broad array of generic strategies for iterated prisoner's dilemma, including win–stay, lose–switch. This was proven specifically for the donation game by Alexander Stewart and Joshua Plotkin in 2013. [22] Generous strategies will cooperate with other cooperative players, and in the face of defection, the generous player loses more utility than its rival. Generous strategies are the intersection of ZD strategies and so-called "good" strategies, which were defined by Ethan Akin to be those for which the player responds to past mutual cooperation with future cooperation and splits expected payoffs equally if he receives at least the cooperative expected payoff. [23] Among good strategies, the generous (ZD) subset performs well when the population is not too small. If the population is very small, defection strategies tend to dominate. [22]

Continuous iterated prisoner's dilemma

Most work on the iterated prisoner's dilemma has focused on the discrete case, in which players either cooperate or defect, because this model is relatively simple to analyze. However, some researchers have looked at models of the continuous iterated prisoner's dilemma, in which players are able to make a variable contribution to the other player. Le and Boyd [24] found that in such situations, cooperation is much harder to evolve than in the discrete iterated prisoner's dilemma. In a continuous prisoner's dilemma, if a population starts off in a non-cooperative equilibrium, players who are only marginally more cooperative than non-cooperators get little benefit from assorting with one another. By contrast, in a discrete prisoner's dilemma, tit-for-tat cooperators get a big payoff boost from assorting with one another in a non-cooperative equilibrium, relative to non-cooperators. Since nature arguably offers more opportunities for variable cooperation rather than a strict dichotomy of cooperation or defection, the continuous prisoner's dilemma may help explain why real-life examples of tit-for-tat-like cooperation are extremely rare [25] even though tit-for-tat seems robust in theoretical models.

Real-life examples

Many instances of human interaction and natural processes have payoff matrices like the prisoner's dilemma's. It is therefore of interest to the social sciences, such as economics, politics, and sociology, as well as to the biological sciences, such as ethology and evolutionary biology. Many natural processes have been abstracted into models in which living beings are engaged in endless games of prisoner's dilemma.

Environmental studies

In environmental studies, the dilemma is evident in crises such as global climate change. It is argued that all countries will benefit from a stable climate, but any single country is often hesitant to curb CO2 emissions. The immediate benefit to any one country from maintaining current behavior is perceived to be greater than the eventual benefit to that country if all countries' behavior were changed, which explains the impasse concerning climate change as of 2007. [26]

An important difference between climate-change politics and the prisoner's dilemma is uncertainty; the extent and pace at which pollution can change climate is not known. The dilemma faced by governments is therefore different from the prisoner's dilemma in that the payoffs of cooperation are unknown. This difference suggests that states will cooperate much less than in a real iterated prisoner's dilemma, so that the probability of avoiding a possible climate catastrophe is much smaller than that suggested by a game-theoretical analysis of the situation using a real iterated prisoner's dilemma. [27]

Thomas Osang and Arundhati Nandy provide a theoretical explanation with proofs for a regulation-driven win-win situation along the lines of Michael Porter's hypothesis, in which government regulation of competing firms is substantial. [28]

Animals

Cooperative behavior of many animals can be understood as an example of the iterated prisoner's dilemma. Often animals engage in long-term partnerships; for example, guppies inspect predators cooperatively in groups, and they are thought to punish non-cooperative inspectors. [29]

Vampire bats are social animals that engage in reciprocal food exchange. Applying the payoffs from the prisoner's dilemma can help explain this behavior. [30]

Psychology

In addiction research and behavioral economics, George Ainslie points out that addiction can be cast as an intertemporal prisoner's dilemma problem between the present and future selves of the addict. In this case, "defecting" means relapsing, where not relapsing both today and in the future is by far the best outcome. The case where one abstains today but relapses in the future is the worst outcome: in some sense, the discipline and self-sacrifice involved in abstaining today have been "wasted" because the future relapse means that the addict is right back where they started and will have to start over. Relapsing today and tomorrow is a slightly "better" outcome, because while the addict is still addicted, they have not put in the effort of trying to stop. The final case, where one engages in the addictive behavior today while abstaining tomorrow, has the problem that (as in other prisoner's dilemmas) there is an obvious benefit to defecting "today", but tomorrow one will face the same prisoner's dilemma, and the same obvious benefit will be present then, ultimately leading to an endless string of defections. [31]

In The Science of Trust, John Gottman defines good relationships as those where partners know not to enter into mutual defection behavior, or at least not to get dynamically stuck there in a loop. In cognitive neuroscience, fast brain signaling associated with processing different rounds may indicate choices at the next round. Mutual cooperation outcomes entail brain activity changes predictive of how quickly a person will cooperate in kind at the next opportunity; [32] this activity may be linked to basic homeostatic and motivational processes, possibly increasing the likelihood of short-cutting into mutual cooperation.

Economics

The prisoner's dilemma has been called the E. coli of social psychology, and it has been used widely to research various topics such as oligopolistic competition and collective action to produce a collective good. [33]

Advertising is sometimes cited as a real example of the prisoner's dilemma. When cigarette advertising was legal in the United States, competing cigarette manufacturers had to decide how much money to spend on advertising. The effectiveness of Firm A's advertising is partially determined by the advertising conducted by Firm B, and likewise the profit Firm B derives from advertising is affected by the advertising conducted by Firm A. If both Firm A and Firm B chose to advertise during a given period, then the advertising from each firm negates the other's, receipts remain constant, and expenses increase due to the cost of advertising. Both firms would benefit from a reduction in advertising. However, should Firm B choose not to advertise, Firm A could benefit greatly by advertising. Nevertheless, the optimal amount of advertising by one firm depends on how much advertising the other undertakes. As the best strategy depends on what the other firm chooses, there is no dominant strategy, which makes this slightly different from a prisoner's dilemma. The outcome is similar, though, in that both firms would be better off were they to advertise less than in the equilibrium.

Sometimes cooperative behaviors do emerge in business situations. For instance, cigarette manufacturers endorsed the making of laws banning cigarette advertising, understanding that this would reduce costs and increase profits across the industry. [34] [note 4]

Without enforceable agreements, members of a cartel are also involved in a (multi-player) prisoner's dilemma. [35] "Cooperating" typically means agreeing to a price floor, while "defecting" means selling under this minimum level, instantly taking business from other cartel members. Anti-trust authorities want potential cartel members to mutually defect, ensuring the lowest possible prices for consumers.

Sport

Doping in sport has been cited as an example of a prisoner's dilemma. Two competing athletes have the option to use an illegal and/or dangerous drug to boost their performance. If neither athlete takes the drug, then neither gains an advantage. If only one does, then that athlete gains a significant advantage over the competitor, reduced by the legal and/or medical dangers of having taken the drug. But if both athletes take the drug, the benefits cancel out and only the dangers remain, putting them both in a worse position than if neither had doped. [36]

International politics

In international relations theory, the prisoner's dilemma is often used to demonstrate why cooperation fails in situations when cooperation between states is collectively optimal but individually suboptimal. [37] [38] A classic example is the security dilemma, whereby an increase in one state's security (such as increasing its military strength) leads other states to fear for their own security and anticipate possible offensive action. [39] Consequently, security-increasing measures can lead to tensions, escalation or conflict with one or more other parties, producing an outcome which no party truly desires. [40] [39] [41] [42] [43] The security dilemma is particularly intense in situations when it is hard to distinguish offensive weapons from defensive weapons, and offense has the advantage over defense in any conflict. [39]

The prisoner's dilemma has frequently been used by realist international relations theorists to demonstrate why all states (regardless of their internal policies or professed ideology) under international anarchy will struggle to cooperate with one another even when all benefit from such cooperation.

Critics of realism argue that iteration and extending the shadow of the future are solutions to the prisoner's dilemma. When actors play the prisoner's dilemma once, they have incentives to defect, but when they expect to play it repeatedly, they have greater incentives to cooperate. [44]

Multiplayer dilemmas

Many real-life dilemmas involve multiple players. [45] Although metaphorical, Garrett Hardin's tragedy of the commons may be viewed as an example of a multi-player generalization of the prisoner's dilemma: each villager makes a choice for personal gain or restraint. The collective reward for unanimous or frequent defection is very low payoffs and the destruction of the commons.

The commons are not always exploited: William Poundstone, in a book about the prisoner's dilemma, describes a situation in New Zealand where newspaper boxes are left unlocked. It is possible for people to take a paper without paying (defecting), but very few do, feeling that if they do not pay then neither will others, destroying the system. [46] Subsequent research by Elinor Ostrom, winner of the 2009 Nobel Memorial Prize in Economic Sciences, hypothesized that the tragedy of the commons is oversimplified, with the negative outcome driven by outside pressures. Without complicating pressures, groups communicate and manage the commons among themselves for their mutual benefit, enforcing social norms to preserve the resource and achieve the maximum good for the group, an example of effecting the best-case outcome for the prisoner's dilemma. [47] [48]

Academic settings

The prisoner's dilemma has been used in various academic settings to illustrate the complexities of cooperation and competition. One notable example is the classroom experiment conducted by sociology professor Dan Chambliss at Hamilton College in the 1980s. Starting in 1981, Chambliss proposed that if no student took the final exam, everyone would receive an A, but if even one student took it, those who didn't would receive a zero. In 1988, John Werner, a first-year student, successfully organized his classmates to boycott the exam, demonstrating a practical application of game theory and the prisoner's dilemma concept. [49]

Nearly 25 years later, a similar incident occurred at Johns Hopkins University in 2013. Professor Peter Fröhlich's grading policy scaled final exams according to the highest score, meaning that if everyone received the same score, they would all get an A. Students in Fröhlich's classes organized a boycott of the final exam, ensuring that no one took it. As a result, every student received an A, successfully solving the prisoner's dilemma in a mutually optimal way without iteration. [50] [51] These examples highlight how the prisoner's dilemma can be used to explore cooperative behavior and strategic decision-making in educational contexts.

Closed-bag exchange

The prisoner's dilemma as a briefcase exchange

Douglas Hofstadter [52] suggested that people often find a problem such as the prisoner's dilemma easier to understand when it is illustrated in the form of a simple game or trade-off. One of several examples he used was "closed bag exchange":

Two people meet and exchange closed bags, with the understanding that one of them contains money, and the other contains a purchase. Either player can choose to honor the deal by putting into his or her bag what he or she agreed, or he or she can defect by handing over an empty bag.

Friend or Foe?

Friend or Foe? is a game show that aired from 2002 to 2003 on the Game Show Network in the US. On the game show, three pairs of people compete. When a pair is eliminated, they play a game similar to the prisoner's dilemma to determine how the winnings are split. If they both cooperate (Friend), they share the winnings 50–50. If one cooperates and the other defects (Foe), the defector gets all the winnings, and the cooperator gets nothing. If both defect, both leave with nothing. Notice that the reward matrix is slightly different from the standard one given above, as the rewards for the "both defect" and the "cooperate while the opponent defects" cases are identical. This makes the "both defect" case a weak equilibrium, compared with being a strict equilibrium in the standard prisoner's dilemma. If a contestant knows that their opponent is going to vote "Foe", then their own choice does not affect their own winnings. In a specific sense, Friend or Foe has a rewards model between prisoner's dilemma and the game of Chicken.

This is the rewards matrix:

(each cell lists Pair 1's share first, then Pair 2's)

                                Pair 2: "Friend" (cooperate)    Pair 2: "Foe" (defect)
Pair 1: "Friend" (cooperate)    1, 1                            0, 2
Pair 1: "Foe" (defect)          2, 0                            0, 0
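
The weak-equilibrium point can be checked directly against the matrix above (a small sketch; payoffs are written as Pair 1's share, then Pair 2's).

    from itertools import product

    # Friend or Foe payoffs: (Pair 1's share, Pair 2's share) of the jackpot.
    payoff = {
        ("friend", "friend"): (1, 1),
        ("friend", "foe"):    (0, 2),
        ("foe",    "friend"): (2, 0),
        ("foe",    "foe"):    (0, 0),
    }
    moves = ("friend", "foe")

    for a, b in product(moves, moves):
        # Nash: no profitable unilateral deviation. Strict: every deviation is strictly worse.
        nash = (all(payoff[(a, b)][0] >= payoff[(x, b)][0] for x in moves)
                and all(payoff[(a, b)][1] >= payoff[(a, y)][1] for y in moves))
        strict = (all(payoff[(a, b)][0] > payoff[(x, b)][0] for x in moves if x != a)
                  and all(payoff[(a, b)][1] > payoff[(a, y)][1] for y in moves if y != b))
        print((a, b), "Nash" if nash else "not Nash", "(strict)" if strict else "")

    # ('foe', 'foe') is a Nash equilibrium but not a strict one: switching to "friend"
    # against a "foe" opponent leaves the deviator's winnings unchanged at 0.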

This payoff matrix has also been used on the British television programs Trust Me, Shafted, The Bank Job and Golden Balls, and on the American game show Take It All, as well as for the winning couple on the reality shows Bachelor Pad and Love Island. Game data from the Golden Balls series has been analyzed by a team of economists, who found that cooperation was "surprisingly high" for amounts of money that would seem consequential in the real world but were comparatively low in the context of the game. [53]

Iterated snowdrift

Researchers from the University of Lausanne and the University of Edinburgh have suggested that the "Iterated Snowdrift Game" may more closely reflect real-world social situations, although this model is actually a chicken game. In this model, the risk of being exploited through defection is lower, and individuals always gain from taking the cooperative choice. The snowdrift game imagines two drivers who are stuck on opposite sides of a snowdrift, each of whom is given the option of shoveling snow to clear a path or remaining in their car. A player's highest payoff comes from leaving the opponent to clear all the snow by themselves, but the opponent is still nominally rewarded for their work.

This may better reflect real-world scenarios, the researchers giving the example of two scientists collaborating on a report, both of whom would benefit if the other worked harder. "But when your collaborator doesn't do any work, it's probably better for you to do all the work yourself. You'll still end up with a completed project." [54] [55]

Example snowdrift payouts (A, B)

               B cooperates    B defects
A cooperates   500, 500        200, 800
A defects      800, 200        0, 0

Example prisoner's dilemma payouts (A, B)

               B cooperates    B defects
A cooperates   500, 500        −200, 1200
A defects      1200, −200      0, 0
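
The structural difference shows up directly in these payouts: in the prisoner's dilemma matrix, defection is a best reply no matter what the other driver does, whereas in the snowdrift matrix the best reply depends on the opponent's choice (a quick check using the numbers above).

    # Payoffs to A as (when B cooperates, when B defects), from the tables above.
    snowdrift = {"cooperate": (500, 200), "defect": (800, 0)}
    dilemma   = {"cooperate": (500, -200), "defect": (1200, 0)}

    def dominant_move(game):
        """Return the move that is a best reply to both opponent choices, if one exists."""
        for move, other in (("cooperate", "defect"), ("defect", "cooperate")):
            if all(game[move][i] >= game[other][i] for i in range(2)):
                return move
        return None

    print("prisoner's dilemma:", dominant_move(dilemma))   # 'defect'
    print("snowdrift:", dominant_move(snowdrift))          # None -> no dominant strategy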

Coordination games

In coordination games, players must coordinate their strategies for a good outcome. An example is two cars that abruptly meet in a blizzard; each must choose whether to swerve left or right. If both swerve left, or both right, the cars do not collide. The local left- and right-hand traffic convention helps to co-ordinate their actions.

Symmetrical co-ordination games include Stag hunt and Bach or Stravinsky.

Asymmetric prisoner's dilemmas

A more general set of games is asymmetric. As in the prisoner's dilemma, the best outcome is cooperation, and there are motives for defection. Unlike the symmetric prisoner's dilemma, though, one player has more to lose and/or more to gain than the other. Some such games have been described as a prisoner's dilemma in which one prisoner has an alibi, hence the term "alibi game". [56]

In experiments, players getting unequal payoffs in repeated games may seek to maximize profits, but only under the condition that both players receive equal payoffs; this may lead to a stable equilibrium strategy in which the disadvantaged player defects every X game, while the other always co-operates. Such behavior may depend on the experiment's social norms around fairness. [57]

Software

Several software packages have been created to run simulations and tournaments of the prisoner's dilemma, some of which have their source code available.

In fiction

Hannu Rajaniemi set the opening scene of his The Quantum Thief trilogy in a "dilemma prison". The main theme of the series has been described as the "inadequacy of a binary universe" and the ultimate antagonist is a character called the All-Defector. The first book in the series was published in 2010, with the two sequels, The Fractal Prince and The Causal Angel , published in 2012 and 2014, respectively.

A game modeled after the iterated prisoner's dilemma is a central focus of the 2012 video game Zero Escape: Virtue's Last Reward and a minor part in its 2016 sequel Zero Escape: Zero Time Dilemma .

In The Mysterious Benedict Society and the Prisoner's Dilemma by Trenton Lee Stewart, the main characters start by playing a version of the game and escaping from the "prison" altogether. Later, they become actual prisoners and escape once again.

In The Adventure Zone: Balance during The Suffering Game subarc, the player characters are twice presented with the prisoner's dilemma during their time in two liches' domain, once cooperating and once defecting.

In Tiamat's Wrath, the eighth novel by James S. A. Corey, Winston Duarte explains the prisoner's dilemma to his 14-year-old daughter, Teresa, to train her in strategic thinking.[ citation needed ]

The 2008 film The Dark Knight includes a scene loosely based on the problem in which the Joker rigs two ferries, one containing prisoners and the other containing civilians, arming both groups with the means to detonate the bomb on each other's ferries, threatening to detonate them both if they hesitate. [62] [63]

In moral philosophy

The prisoner's dilemma is commonly used as a thinking tool in moral philosophy as an illustration of the potential tension between the benefit of the individual and the benefit of the community.

Both the one-shot and the iterated prisoner's dilemma have applications in moral philosophy. Indeed, many of the moral situations, such as genocide, are not easily repeated more than once. Moreover, in many situations, the previous rounds' outcomes are unknown to the players, since they are not necessarily the same (e.g. interaction with a panhandler on the street). [64]

The philosopher David Gauthier uses the prisoner's dilemma to show how morality and rationality can conflict. [65]

Some game theorists have criticized the use of the prisoner's dilemma as a thinking tool in moral philosophy. [65] Kenneth Binmore argued that the prisoner's dilemma does not accurately describe the game played by humanity, which he argues is closer to a coordination game. Brian Skyrms shares this perspective.

Steven Kuhn suggests that these views may be reconciled by considering that moral behavior can modify the payoff matrix of a game, transforming it from a prisoner's dilemma into other games. [65]

Pure and impure prisoner's dilemma

A prisoner's dilemma is considered "impure" if a mixed strategy may give better expected payoffs than a pure strategy. This creates the interesting possibility that the moral action from a utilitarian perspective (i.e., aiming at maximizing the good of an action) may require randomization of one's strategy, such as cooperating with 80% chance and defecting with 20% chance. [66]

See also

Notes

  1. The tournament had two rounds. In the first round, each of the top eight strategies was nice, and not one of the bottom seven was. In the second round (whose strategy designers could take into account the results of the first round), all but one of the top fifteen strategies were nice (and that one ranked eighth), while of the bottom fifteen strategies, all but one were not nice. [13]
  2. In contrast to strategies like grim trigger (also called Friedman), which is never the first to defect but, once the other player defects even once, defects from then on. [14]
  3. For example see the 2003 study [15] for discussion of the concept and whether it can apply in real economic or strategic situations.
  4. This argument for the development of cooperation through trust is given in The Wisdom of Crowds , where it is argued that long-distance capitalism was able to form around a nucleus of Quakers, who always dealt honourably with their business partners (rather than defecting and reneging on promises – a phenomenon that had discouraged earlier long-term unenforceable overseas contracts). It is argued that dealings with reliable merchants allowed the meme for cooperation to spread to other traders, who spread it further until a high degree of cooperation became a profitable strategy in general commerce.

Related Research Articles

An evolutionarily stable strategy (ESS) is a strategy that is impermeable when adopted by a population in adaptation to a specific environment, that is to say it cannot be displaced by an alternative strategy which may be novel or initially rare. Introduced by John Maynard Smith and George R. Price in 1972/3, it is an important concept in behavioural ecology, evolutionary psychology, mathematical game theory and economics, with applications in other fields such as anthropology, philosophy and political science.

The Evolution of Cooperation (1984 book by Robert Axelrod)

The Evolution of Cooperation is a 1984 book written by political scientist Robert Axelrod that expands upon a paper of the same name written by Axelrod and evolutionary biologist W.D. Hamilton. The article's summary addresses the issue in terms of "cooperation in organisms, whether bacteria or primates".

In game theory, the Nash equilibrium is the most commonly-used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by changing their own strategy. The idea of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to his model of competition in an oligopoly.

<span class="mw-page-title-main">Tit for tat</span> English saying meaning "equivalent retaliation"

Tit for tat is an English saying meaning "equivalent retaliation". It is an alteration of tip for tap "blow for blow", first recorded in 1558.

In economics and game theory, a participant is considered to have superrationality if they have perfect rationality but assume that all other players are superrational too and that a superrational individual will always come up with the same strategy as any other superrational thinker when facing the same problem. Applying this definition, a superrational player playing against a superrational opponent in a prisoner's dilemma will cooperate while a rationally self-interested player would defect.

The game of chicken, also known as the hawk-dove game or snowdrift game, is a model of conflict for two players in game theory. The principle of the game is that while the ideal outcome is for one player to yield, individuals try to avoid it out of pride, not wanting to look like "chickens." Each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game essentially ends.

Evolutionary game theory (EGT) is the application of game theory to evolving populations in biology. It defines a framework of contests, strategies, and analytics into which Darwinian competition can be modelled. It originated in 1973 with John Maynard Smith and George R. Price's formalisation of contests, analysed as strategies, and the mathematical criteria that can be used to predict the results of competing strategies.

In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round, but after an additional switch the potential payoff will be higher. Therefore, although at each round a player has an incentive to take the pot, it would be better for them to wait. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.

In game theory, grim trigger is a trigger strategy for a repeated game.

In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma or common interest game, describes a conflict between safety and social cooperation. The stag hunt problem originated with philosopher Jean-Jacques Rousseau in his Discourse on Inequality. In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare. However, both hunters know the only way to successfully hunt a stag is with the other's help. One hunter can catch a hare alone with less effort and less time, but it is worth far less than a stag and has much less meat. But both hunters would be better off if both choose the more ambitious and more rewarding goal of getting the stag, giving up some autonomy in exchange for the other hunter's cooperation and added might. This situation is often seen as a useful analogy for many kinds of social cooperation, such as international agreements on climate change.

Regime theory is a theory within international relations derived from the liberal tradition which argues that international institutions or regimes affect the behavior of states or other international actors. It assumes that cooperation is possible in the anarchic system of states, as regimes are, by definition, instances of international cooperation.

In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of their current action on the future actions of other players; this impact is sometimes called their reputation. Single stage game or single shot game are names for non-repeated games.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game, no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium. Perfect recall is a term introduced by Harold W. Kuhn in 1953 and "equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves".

<span class="mw-page-title-main">Peace war game</span>

Peace war game is an iterated game originally played in academic groups and by computer simulation for years to study possible strategies of cooperation and aggression. As peace makers became richer over time it became clear that making war had greater costs than initially anticipated. The only strategy that acquired wealth more rapidly was a "Genghis Khan", a constant aggressor making war continually to gain resources. This led to the development of the "provokable nice guy" strategy, a peace-maker until attacked. Multiple players continue to gain wealth cooperating with each other while bleeding the constant aggressor.

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

In game theory, the traveler's dilemma is a non-zero-sum game in which each player proposes a payoff. The lower of the two proposals wins; the lowball player receives the lowball payoff plus a small bonus, and the highball player receives the same lowball payoff, minus a small penalty. Surprisingly, the Nash equilibrium is for both players to aggressively lowball. The traveler's dilemma is notable in that naive play appears to outperform the Nash equilibrium; this apparent paradox also appears in the centipede game and the finitely-iterated prisoner's dilemma.

Program equilibrium is a game-theoretic solution concept for a scenario in which players submit computer programs to play the game on their behalf and the programs can read each other's source code. The term was introduced by Moshe Tennenholtz in 2004. The same setting had previously been studied by R. Preston McAfee, J. V. Howard and Ariel Rubinstein.

Subjective expected relative similarity (SERS) is a normative and descriptive theory that predicts and explains cooperation levels in a family of games termed Similarity Sensitive Games (SSG), among them the well-known Prisoner's Dilemma game (PD). SERS was originally developed in order to (i) provide a new rational solution to the PD game and (ii) to predict human behavior in single-step PD games. It was further developed to account for: (i) repeated PD games, (ii) evolutionary perspectives and, as mentioned above, (iii) the SSG subgroup of 2×2 games. SERS predicts that individuals cooperate whenever their subjectively perceived similarity with their opponent exceeds a situational index derived from the game's payoffs, termed the similarity threshold of the game. SERS proposes a solution to the rational paradox associated with the single step PD and provides accurate behavioral predictions. The theory was developed by Prof. Ilan Fischer at the University of Haifa.

Reciprocal altruism in humans refers to an individual behavior that gives benefit conditionally upon receiving a returned benefit, drawing on the economic concept of "gains in trade". Human reciprocal altruism would include behaviors such as helping patients, the wounded, and others when they are in crisis, and sharing food, tools, and knowledge.

The Berge equilibrium is a game theory solution concept named after the mathematician Claude Berge. It is similar to the standard Nash equilibrium, except that it aims to capture a type of altruism rather than purely non-cooperative play. Whereas a Nash equilibrium is a situation in which each player of a strategic game ensures that they personally will receive the highest payoff given other players' strategies, in a Berge equilibrium every player ensures that all other players will receive the highest payoff possible. Although Berge introduced the intuition for this equilibrium notion in 1957, it was only formally defined by Vladislav Iosifovich Zhukovskii in 1985, and it was not in widespread use until half a century after Berge originally developed it.

References

  1. "Prisoner's Dilemma". Stanford Encyclopedia of Philosophy . Retrieved 10 March 2024.
  2. Poundstone 1993, pp. 8, 117.
  3. Poundstone 1993 , p. 118: "A typical contemporary version of the story goes like this: Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail. The prisoners are given a little time to think this over, but in no case may either learn what the other has decided until he has irrevocably made his decision. Each is informed that the other prisoner is being offered the very same deal. Each prisoner is concerned only with his own welfare—with minimizing his own prison sentence."
  4. Poundstone 1993, p. 118.
  5. Grofman, Bernard; Pool, Jonathan (January 1977). "How to make cooperation the optimizing strategy in a two-person game". The Journal of Mathematical Sociology. 5 (2): 173–186. doi:10.1080/0022250x.1977.9989871. ISSN   0022-250X.
  6. Shy, Oz (1995). Industrial Organization: Theory and Applications. Massachusetts Institute of Technology Press. ISBN   978-0262193665 . Retrieved February 27, 2013.
  7. Aumann, Robert J. (2016-03-02), "16. Acceptable Points in General Cooperative n-Person Games", Contributions to the Theory of Games (AM-40), Volume IV, Princeton University Press, pp. 287–324, doi:10.1515/9781400882168-018, ISBN   978-1-4008-8216-8 , retrieved 2024-05-14
  8. Cooper, Russell; DeJong, Douglas V.; Forsythe, Robert; Ross, Thomas W. (1996). "Cooperation without Reputation: Experimental Evidence from Prisoner's Dilemma Games". Games and Economic Behavior. 12 (2): 187–218. doi:10.1006/game.1996.0013.
  9. Dal Bó, Pedro; Fréchette, Guillaume R. (2019). "Strategy Choice in the Infinitely Repeated Prisoner's Dilemma". American Economic Review. 109 (11): 3929–3952. doi:10.1257/aer.20181480. ISSN   0002-8282. S2CID   216726890.
  10. Axelrod (2006) , p. 193
  11. Axelrod (2006) , p. 31
  12. Axelrod (2006) , chpt. 6
  13. Axelrod (2006), pp. 113–114
  14. Axelrod (2006), p. 36
  15. Landsberger, Michael; Tsirelson, Boris (2003). "Bayesian Nash equilibrium; a statistical test of the hypothesis" (PDF). Tel Aviv University. Archived from the original (PDF) on 2005-10-02.
  16. Wu, Jiadong; Zhao, Chengye (2019), Sun, Xiaoming; He, Kun; Chen, Xiaoyun (eds.), "Cooperation on the Monte Carlo Rule: Prisoner's Dilemma Game on the Grid", Theoretical Computer Science, Communications in Computer and Information Science, vol. 1069, Springer Singapore, pp. 3–15, doi:10.1007/978-981-15-0105-0_1, ISBN   978-981-15-0104-3, S2CID   118687103
  17. Wedekind, C.; Milinski, M. (2 April 1996). "Human cooperation in the simultaneous and the alternating Prisoner's Dilemma: Pavlov versus Generous Tit-for-Tat". Proceedings of the National Academy of Sciences. 93 (7): 2686–2689. Bibcode:1996PNAS...93.2686W. doi:10.1073/pnas.93.7.2686. PMC 39691. PMID 11607644.
  18. "University of Southampton team wins Prisoner's Dilemma competition" (Press release). University of Southampton. 7 October 2004. Archived from the original on 2014-04-21.
  19. Press, WH; Dyson, FJ (26 June 2012). "Iterated Prisoner's Dilemma contains strategies that dominate any evolutionary opponent". Proceedings of the National Academy of Sciences of the United States of America. 109 (26): 10409–13. Bibcode:2012PNAS..10910409P. doi:10.1073/pnas.1206569109. PMC 3387070. PMID 22615375.
  20. Adami, Christoph; Arend Hintze (2013). "Evolutionary instability of Zero Determinant strategies demonstrates that winning isn't everything". Nature Communications. 4: 3. arXiv:1208.2666. Bibcode:2013NatCo...4.2193A. doi:10.1038/ncomms3193. PMC 3741637. PMID 23903782.
  21. Hilbe, Christian; Martin A. Nowak; Karl Sigmund (April 2013). "Evolution of extortion in Iterated Prisoner's Dilemma games". PNAS. 110 (17): 6913–18. arXiv:1212.1067. Bibcode:2013PNAS..110.6913H. doi:10.1073/pnas.1214834110. PMC 3637695. PMID 23572576.
  22. Stewart, Alexander J.; Joshua B. Plotkin (2013). "From extortion to generosity, evolution in the Iterated Prisoner's Dilemma". Proceedings of the National Academy of Sciences of the United States of America. 110 (38): 15348–53. Bibcode:2013PNAS..11015348S. doi:10.1073/pnas.1306246110. PMC 3780848. PMID 24003115.
  23. Akin, Ethan (2013). "Stable Cooperative Solutions for the Iterated Prisoner's Dilemma". p. 9. arXiv:1211.0969 [math.DS]. Bibcode:2012arXiv1211.0969A.
  24. Le, S.; Boyd, R. (2007). "Evolutionary Dynamics of the Continuous Iterated Prisoner's Dilemma". Journal of Theoretical Biology. 245 (2): 258–67. Bibcode:2007JThBi.245..258L. doi:10.1016/j.jtbi.2006.09.016. PMID 17125798.
  25. Hammerstein, P. (2003). Why is reciprocity so rare in social animals? A protestant appeal. In: P. Hammerstein, Editor, Genetic and Cultural Evolution of Cooperation, MIT Press. pp. 83–94.
  26. "Markets & Data". The Economist . 2007-09-27.
  27. Rehmeyer, Julie (2012-10-29). "Game theory suggests current climate negotiations won't avert catastrophe". Science News. Society for Science & the Public.
  28. Osang, Thomas; Nandyyz, Arundhati (August 2003). Environmental Regulation of Polluting Firms: Porter's Hypothesis Revisited (PDF) (paper). Archived (PDF) from the original on 2010-07-02.
  29. Brosnan, Sarah F.; Earley, Ryan L.; Dugatkin, Lee A. (October 2003). "Observational Learning and Predator Inspection in Guppies (Poecilia reticulata): Social Learning in Guppies". Ethology. 109 (10): 823–833. doi:10.1046/j.0179-1613.2003.00928.x.
  30. Dawkins, Richard (1976). The Selfish Gene. Oxford University Press.
  31. Ainslie, George (2001). Breakdown of Will. Cambridge University Press. ISBN   978-0-521-59694-7.
  32. Cervantes Constantino, Garat, Nicolaisen, Paz, Martínez-Montes, Kessel, Cabana, and Gradin (2020). "Neural processing of iterated prisoner's dilemma outcomes indicates next-round choice and speed to reciprocate cooperation". Social Neuroscience. 16 (2): 103–120. doi:10.1080/17470919.2020.1859410. PMID 33297873. S2CID 228087900.
  33. Axelrod, Robert (1980). "Effective Choice in the Prisoner's Dilemma". The Journal of Conflict Resolution. 24 (1): 3–25. doi:10.1177/002200278002400101. ISSN   0022-0027. JSTOR   173932. S2CID   143112198.
  34. Henriksen, Lisa (March 2012). "Comprehensive tobacco marketing restrictions: promotion, packaging, price and place". Tobacco Control. 21 (2): 147–153. doi:10.1136/tobaccocontrol-2011-050416. PMC 4256379. PMID 22345238.
  35. Nicholson, Walter (2000). Intermediate microeconomics and its application (8th ed.). Fort Worth, TX: Dryden Press: Harcourt College Publishers. ISBN 978-0-030-25916-6.
  36. Schneier, Bruce (2012-10-26). "Lance Armstrong and the Prisoners' Dilemma of Doping in Professional Sports". Wired. Wired.com. Retrieved 2012-10-29.
  37. Snyder, Glenn H. (1971). ""Prisoner's Dilemma" and "Chicken" Models in International Politics". International Studies Quarterly. 15 (1): 66–103. doi:10.2307/3013593. ISSN   0020-8833. JSTOR   3013593.
  38. Jervis, Robert (1978). "Cooperation under the Security Dilemma". World Politics. 30 (2): 167–214. doi:10.2307/2009958. hdl:2027/uc1.31158011478350. ISSN 1086-3338. JSTOR 2009958. S2CID 154923423.
  39. Jervis, Robert (1978). "Cooperation Under the Security Dilemma". World Politics. 30 (2): 167–214. doi:10.2307/2009958. hdl:2027/uc1.31158011478350. ISSN 0043-8871. JSTOR 2009958. S2CID 154923423.
  40. Herz, John H. (1950). Idealist Internationalism and the Security Dilemma. pp. 157–180.
  41. Snyder, Glenn H. (1984). "The Security Dilemma in Alliance Politics". World Politics. 36 (4): 461–495. doi:10.2307/2010183. ISSN   0043-8871. JSTOR   2010183. S2CID   154759602.
  42. Jervis, Robert (1976). Perception and Misperception in International Politics. Princeton University Press. pp. 58–113. ISBN   978-0-691-10049-4.
  43. Glaser, Charles L. (2010). Rational Theory of International Politics. Princeton University Press. ISBN   9780691143729.
  44. Axelrod, Robert; Hamilton, William D. (1981). "The Evolution of Cooperation". Science. 211 (4489): 1390–1396. Bibcode:1981Sci...211.1390A. doi:10.1126/science.7466396. ISSN   0036-8075. PMID   7466396.
  45. Gokhale, C. S.; Traulsen, A. (23 March 2010). "Evolutionary games in the multiverse". Proceedings of the National Academy of Sciences. 107 (12): 5500–04.
  46. Poundstone 1993, pp. 126–127.
  47. "The Volokh Conspiracy " Elinor Ostrom and the Tragedy of the Commons". Volokh.com. 2009-10-12. Retrieved 2011-12-17.
  48. Ostrom, Elinor (2015) [1990]. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press. doi:10.1017/CBO9781316423936. ISBN   978-1-107-56978-2.
  49. Rivard, Ry (2013-02-21). "A look back at another successful final exam boycott". Inside Higher Ed. Retrieved 2024-07-12.
  50. Wolfers, Justin (2013-02-14). "Gaming the System". The New York Times. Retrieved 2024-07-12.
  51. "Johns Hopkins Students Boycott Final Exam - So Everyone Gets an A". Baltimore Fishbowl. 2013-02-25. Retrieved 2024-07-12.
  52. Hofstadter, Douglas R. (1985). "Ch.29 The Prisoner's Dilemma Computer Tournaments and the Evolution of Cooperation.". Metamagical Themas: questing for the essence of mind and pattern. Bantam Dell Pub Group. ISBN   978-0-465-04566-2.
  53. Van den Assem, Martijn J. (January 2012). "Split or Steal? Cooperative Behavior When the Stakes Are Large". Management Science. 58 (1): 2–20. doi:10.1287/mnsc.1110.1413. hdl:1765/31292. S2CID 1371739. SSRN 1592456.
  54. Zyga, Lisa (2007-10-09). "'Snowdrift' game tops 'Prisoner's Dilemma' in explaining cooperation". Phys.org. Archived from the original on 2024-04-11.
  55. Kümmerli, Rolf; Colliard, Caroline; Fiechter, Nicolas; Petitpierre, Blaise; Russier, Flavien; Keller, Laurent (2007-09-25). "Human cooperation in social dilemmas: comparing the Snowdrift game with the Prisoner's Dilemma". Proceedings of the Royal Society B: Biological Sciences. 274 (1628). Royal Society: 2965–2970. doi:10.1098/rspb.2007.0793. ISSN 1471-2954. PMC 2291152. PMID 17895227.
  56. Robinson, D.R.; Goforth, D.J. (May 5, 2004). Alibi games: the Asymmetric Prisoner's Dilemmas (PDF). Meetings of the Canadian Economics Association, Toronto, June 4–6, 2004. Archived (PDF) from the original on 2004-12-06.
  57. Beckenkamp, Martin; Hennig-Schmidt, Heike; Maier-Rigaud, Frank P. (March 4, 2007). "Cooperation in Symmetric and Asymmetric Prisoner's Dilemma Games" (PDF). Max Planck Institute for Research on Collective Goods. Archived (PDF) from the original on 2019-09-02.
  58. Available online at http://www-personal.umich.edu/~axe/research/Software/CC/CC2.html
  59. https://web.archive.org/web/19991010053242/http://www.lifl.fr/IPD/ipd.frame.html
  60. https://github.com/Axelrod-Python/Axelrod
  61. https://evoplex.org/
  62. Romain, Lindsey (2018-07-18). "The Dark Knight's only redeemable character is the criminal who saves the ferries". Polygon. Retrieved 2024-01-06.
  63. "The Dark Knight: Game Theory : Networks Course blog for INFO 2040/CS 2850/Econ 2040/SOC 2090" . Retrieved 2024-01-06.
  64. Kuhn, Steven T. (2004-07-01). "Reflections on Ethics and Game Theory". Synthese. 141 (1): 1–44. doi:10.1023/B:SYNT.0000035846.91195.cb. ISSN   1573-0964.
  65. Kuhn, Steven (December 2016). "Gauthier and the Prisoner's Dilemma". Dialogue. 55 (4): 659–676. doi:10.1017/S0012217316000603. ISSN 0012-2173.
  66. Kuhn, Steven T.; Moresi, Serge (October 1995). "Pure and Utilitarian Prisoner's Dilemmas". Economics and Philosophy. 11 (2): 333–343. doi:10.1017/S0266267100003424. ISSN   0266-2671.

Bibliography

Further reading