Cooperation Against All Odds

by Marie Collison, December 18, 2022

Would you ever throw your best friend under the bus? Probably not. What if the reward was to have your entire education paid for? What if you were being threatened with indefinite jail time if you did not do so? These questions address a fascinating concept often studied in game theory and sociology: the Prisoner’s Dilemma.

Here is an example of the Prisoner’s Dilemma: pretend you and a friend just robbed a bank. Not a close friend, but someone you may have shared a class with at some point. You got caught and are now waiting in separate interrogation rooms. You are unable to communicate with one another, and you never agreed on a plan in case you were caught. After some time, an officer walks into the room holding a sheet of paper. The officer tells you that if you sign the paper, which blames the entire incident on your friend, you will be set free and won’t have to serve any jail time. In turn, your friend will be condemned to 10 years in prison. Alternatively, if you don’t sign the paper and your friend does, you will serve 10 years in prison and they won’t serve any time. If neither of you signs the paper, you will each serve 2 years. If you BOTH sign the paper, you are each sentenced to 6 years (see the table below). What would you do?
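
Your sentence depends on both choices. Written as (your years in prison, your friend’s years in prison), the four outcomes are:

                          Friend stays silent     Friend signs
      You stay silent           (2, 2)               (10, 0)
      You sign                  (0, 10)              (6, 6)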

The logical collective answer would be for neither of you to sign the paper, right? You would each still serve 2 years in jail, but the total time spent in jail between the two of you is only 4 years, as opposed to 10 years (if only one of you signs) or 12 years (if you both sign). However, on an individual level, the choice to sign the paper is an obvious one. Whatever your friend does, signing leaves you better off: if they stay silent, signing gets you 0 years instead of 2; if they sign, it gets you 6 years instead of 10. The payoff of signing (at best 0 years, at worst 6) beats the payoff of staying silent (at best 2 years, at worst 10), and the same logic applies to your friend. This means the most likely outcome is that you both sign and each serve 6 years, even though the collectively optimal choice would be for both of you to refuse and serve only 2 years each. In a one-time play, individual incentives work directly against the collective good, which makes cooperation extremely difficult to achieve.
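
To make that dominance argument concrete, here is a minimal Python sketch. The move labels and dictionary layout are illustrative choices of mine, but the sentence numbers are the ones from the story:

```python
# Sentences in years, indexed by (your move, your friend's move).
SENTENCES = {
    ("stay silent", "stay silent"): (2, 2),
    ("stay silent", "sign"):        (10, 0),
    ("sign",        "stay silent"): (0, 10),
    ("sign",        "sign"):        (6, 6),
}

for friend_move in ("stay silent", "sign"):
    # Your best response minimizes *your* sentence, whatever your friend does.
    best = min(("stay silent", "sign"),
               key=lambda me: SENTENCES[(me, friend_move)][0])
    print(f"If your friend chooses to {friend_move}, your best move is to {best}.")

# Both lines print "sign": signing dominates, even though mutual silence
# (4 total years) is far better for the pair than mutual signing (12).
```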

At the heart of this problem lies the human capacity for both altruism and selfishness. If you play the game only once, the outcome of 6 years is unfortunate but better than 10. However, if you begin to play the game over and over against the same person, your shared history starts to shape your future choices. Therein lies the problem: how do you optimize your strategy to “win” against any other person you face? This is where a person’s leaning toward altruism or selfishness matters and affects future interactions.

In the 1980s, Robert Axelrod, a professor of political science at the University of Michigan, sent out an invitation to a special tournament. The invitation went to a group of very prominent game theorists, people whose careers were dedicated to studying puzzles like the Prisoner’s Dilemma. Axelrod’s only instruction: submit a computer program that would win at the iterated Prisoner’s Dilemma game. To clarify, winning meant coming out of the tournament with the fewest total years of prison. Each strategy would play every other strategy, and the winner would be the strategy that accumulated the fewest years overall.

There were numerous strategies of varying complexity. Simple ones included always defecting (betraying your friend every round) and always cooperating (never giving the other up). Another submission was random (cooperating 50% of the time and defecting 50% of the time). All strategies were fixed at the time of submission, so no changes could be made mid-tournament to adapt to particular opponents. In the end, one strategy reigned supreme: tit-for-tat. It even won again when Axelrod repeated the tournament with newly submitted strategies.

The tit-for-tat strategy is fairly simple. It consists of two components:

  1. Begin by cooperating.
  2. Match the decision your opponent made in the previous round until the match is finished.

For example, if the match starts with your opponent cooperating, you would in turn cooperate in the next round. If your opponent then defects in round two when you cooperate, you would then defect in round three. 
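
As a rough sketch, the whole strategy fits in a few lines of Python. The function name and the "cooperate"/"defect" labels here are my own, not the format of Axelrod’s actual tournament submissions:

```python
def tit_for_tat(my_history, opponent_history):
    """Cooperate on the first round, then copy the opponent's previous move."""
    if not opponent_history:
        return "cooperate"           # rule 1: begin by cooperating
    return opponent_history[-1]      # rule 2: mirror their last decision

# The round-three defection from the example above:
print(tit_for_tat(["cooperate", "cooperate"], ["cooperate", "defect"]))  # "defect"
```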

Against simple strategies, it is fairly easy to analyze how tit-for-tat holds up. Against an “always cooperate” strategy, the entire match is rainbows and smiles as the two happily cooperate the whole time. Against the random strategy, tit-for-tat simply mirrors its opponent, cooperating and defecting about 50/50. Against the “always defect” strategy, tit-for-tat loses only the first round before both strategies settle into mutual defection for the rest of the match.
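
Here is a small simulation of those three matchups. Scoring each round with the sentences from the story is an illustrative assumption of mine (Axelrod’s actual tournament used a different point scale); lower totals are better:

```python
import random

def tit_for_tat(my_history, opp_history):
    return "cooperate" if not opp_history else opp_history[-1]

def always_cooperate(my_history, opp_history):
    return "cooperate"

def always_defect(my_history, opp_history):
    return "defect"

def random_choice(my_history, opp_history):
    return random.choice(["cooperate", "defect"])

# Years added per round, indexed by (my move, opponent's move).
YEARS = {("cooperate", "cooperate"): (2, 2),
         ("cooperate", "defect"):    (10, 0),
         ("defect",    "cooperate"): (0, 10),
         ("defect",    "defect"):    (6, 6)}

def play_match(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        years_a, years_b = YEARS[(move_a, move_b)]
        total_a += years_a
        total_b += years_b
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

for opponent in (always_cooperate, random_choice, always_defect):
    tft, opp = play_match(tit_for_tat, opponent)
    print(f"tit-for-tat vs {opponent.__name__}: {tft} vs {opp} years")
```

Against always_defect, the two totals differ by exactly the 10 years tit-for-tat gives up in round one; no matter how long the match runs, it never falls further behind than that.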

So why does this strategy work, and what does it mean in the grand scheme of the world? The strategy works because it can never be taken advantage of round after round the way “always cooperate” can, yet it never misses out on the benefits of cooperation either. What the tournament crowned may not be the “best” strategy in every matchup, since tit-for-tat will stoop to the level of a strategy such as “always defect”; however, it is arguably the most reliable strategy for coming out on top overall. It also suggests some tips on how to promote cooperation:

  1. Teach reciprocity: when there are more tit-for-tat strategies in play, the success of other strategies diminishes.
  2. Insist on no more than equity: the tit-for-tat strategy doesn’t expect more than equal action and does not perform more than equal action.
  3. Respond quickly to provocation, but be forgiving: when the opponent defects, the tit-for-tat strategy immediately defects in the next turn, but it never escalates. Do no more than match your opponent’s last action, even after multiple defections, and return to cooperating as soon as they do.
  4. Don’t be envious: do not try to “beat” your opponent, simply match their previous decision. 
  5. Begin as open as possible: like in the tit-for-tat strategy, begin by opening yourself up to cooperation, making it possible to have the most ideal outcome rather than beginning on a sour note.

The Prisoner’s Dilemma goes beyond a simple mind game: it teaches us that cooperation can be difficult to achieve even in situations where it is clearly the optimal outcome. It is a guide, not a perfect one but a well-tested one, to how individual rationality can lead to collective irrationality. And although it may look like one giant philosophical puzzle with no direct relevance, the Prisoner’s Dilemma extends well beyond theory and into the reality of human interaction.


Works Cited

Axelrod’s Tournament. cs.stanford.edu/people/eroberts/courses/soco/projects/1998-99/game-theory/axelrod.html. Accessed 26 Oct. 2022.

Shah, Rina. “Robert Axelrod: The Prisoner’s Dilemma Simulation.” Shortform Books, 6 Jan. 2021, http://www.shortform.com/blog/robert-axelrod/. Accessed 26 Oct. 2022.

“Tit for Tat.” Radiolab, 17 Sept. 2019, http://www.radiolab.org/episodes/104010-one-good-deed-deserves-another.