November 26th, 2008

Is there a rational strategy in Finite Iterated Prisoners Dilemma?

(UPDATE: I think there’s a mistake in the argument here – see Bob Stalnaker’s comment 11 below.)

Row and Column are going to play 100 rounds of Prisoners Dilemma. At each round they can either play Co-op or Defect, with standard rules. (So the payoffs are symmetric, and on each round Defect dominates Co-op for each player, but each playing Co-op is Pareto superior to each playing Defect.) The following is true at the start of the game.

  1. Each player is rational.
  2. No player loses a belief that they have if they receive evidence that is consistent with that belief.
  3. For any r, if it is true that if a player were to play Co-op on round r, the other player would play Defect on every subsequent round, then it is also true that if the first player were to play Defect on round r, then the other player would still play Defect on every subsequent round.
  4. The first three premises are matters of common belief.
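
For concreteness, here is a rough sketch of the basic game in code. The payoff numbers and function names are my own illustrative choices (nothing in the setup above fixes them); any values on which Defect dominates and mutual Co-op is Pareto superior to mutual Defect would do just as well.

```python
# Finite iterated Prisoners Dilemma, roughly as described above. The payoff
# numbers are illustrative only; any values with this ordering would do.
PAYOFF = {
    ('C', 'C'): 3,  # reward for mutual co-operation
    ('C', 'D'): 0,  # sucker's payoff
    ('D', 'C'): 5,  # temptation to defect
    ('D', 'D'): 1,  # punishment for mutual defection
}

def play_game(row_strategy, col_strategy, rounds=100):
    """Play the given number of rounds. A strategy is a function from
    (own past moves, other player's past moves) to 'C' or 'D'."""
    row_hist, col_hist = [], []
    row_total = col_total = 0
    for _ in range(rounds):
        r_move = row_strategy(row_hist, col_hist)
        c_move = col_strategy(col_hist, row_hist)
        row_total += PAYOFF[(r_move, c_move)]
        col_total += PAYOFF[(c_move, r_move)]
        row_hist.append(r_move)
        col_hist.append(c_move)
    return row_total, col_total
```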

Call a strategy that a player can play consistent with those four assumptions an approved strategy. (Note that one of the assumptions is that the player is rational, so these will all be rational strategies.) Assume for reductio that there are approved strategies S1 and S2 such that if Column plays S2, then Row can play S1, and this will involve sometimes playing Co-op. I will try to derive a contradiction from that assumption.

Let r be the largest number such that there are approved strategies S1 and S2 where, if Column plays S2 and Row plays S1, Row plays Co-op on round r. I will now argue that it is irrational for Row to play Co-op on round r, contradicting the assumption that S1 is an approved strategy.

Since both players are playing approved strategies, they are both acting consistently with the initial assumptions. So by premise 2, the initial assumptions still hold, and this is a matter of common belief. So it is still a matter of common belief that each player is playing an approved strategy.

If Row plays Co-op on round r, that is still, by hypothesis, an approved strategy, so Column would react by sticking to her approved strategy, by another application of premise 2. Since r is the last round under which anyone playing an approved strategy against an approved strategy co-operates, and Column is playing an approved strategy, Row believes that if she were to play Co-op, Column would play Defect on every subsequent round. By premise 3 (or, more precisely, by her belief that premise 3 still holds), Row can infer that Column will also play Defect on every subsequent round if she plays Defect on this round.

Putting these two facts together, Row believes prior to playing this round that whatever she were to do, Column would react by playing Defect on every subsequent round. If that’s the case, then she would get a higher return by playing Defect this round, since the only reason to ever play Co-op is that it has an effect on play in later rounds. But it will have no such effect. So it is uniquely rational for Row to play Defect at this round. But this contradicts our assumption that S1 is a rational strategy, and according to it Row plays Co-op on round r.
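
Here is the arithmetic behind that last step, using the same illustrative payoff numbers as the earlier sketch: if Column will play Defect on every round after r whichever move Row makes now, then Row’s best continuation is the same in both branches, the futures cancel, and the choice at round r comes down to a one-shot comparison that Defect wins.

```python
# Same illustrative payoff table as in the earlier sketch.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def row_total_from_round_r(row_move, col_move, remaining_rounds):
    # By hypothesis Column defects on every round after r, whatever Row does now,
    # so Row's best continuation is also to defect and the futures are identical.
    future = remaining_rounds * PAYOFF[('D', 'D')]
    return PAYOFF[(row_move, col_move)] + future

for col_move in ('C', 'D'):          # whatever Column happens to do at round r itself
    for remaining in (0, 1, 50):
        assert (row_total_from_round_r('D', col_move, remaining)
                > row_total_from_round_r('C', col_move, remaining))
```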

If our assumption is true, then there can be no approved strategy that ever co-operates before observing the other player co-operate. If there were such a strategy, call it S3, then we can imagine a game where both players play S3. By hypothesis there is a round r where the player playing S3 co-operates before the other player co-operates. So if both players play S3, which is approved, then they will both play Defect up to round r, then play Co-op on that round. But that’s to say that they will play Co-op while (a) playing an approved strategy and (b) believing that the other player will play an approved strategy. And this contradicts our earlier result.

This does not mean that a rational player can never co-operate, but it does mean that they can never co-operate while the initial assumptions are in place. A rational player might, for instance, co-operate on seeing that her co-player is playing tit-for-tat, and hence that the initial assumptions are not operational.
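
For reference, here is tit-for-tat written against the hypothetical interface sketched earlier (the function names are again mine):

```python
def tit_for_tat(my_history, their_history):
    """Co-operate on the first round, then copy the other player's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

# With the play_game sketch above, play_game(tit_for_tat, always_defect) has
# tit-for-tat exploited once on round one, after which both players defect.
```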

Nor does it mean, as I think some theorists have been too quick to conclude, that playing Defect all the time is an approved, or even a rational, strategy. Assume that there are approved strategies, and that (as we’ve shown so far) they all involve playing Defect on the first round. Now the familiar objections to backward induction reasoning, tracing back at least to Philip Pettit and Robert Sugden’s “The Backward Induction Paradox”, become salient.

If Row holds all the initial assumptions, she may also believe that if she were to play Co-op on the first round, then Column would infer that she is an irrational agent, and that as such she’ll play Tit-for-Tat. (This isn’t built into the original assumptions, but it is consistent with them.) And if Row believes that is how Column would react, then Row is rational to play Co-op, or at least more rational on this occasion than playing Defect. Indeed, even if Row thinks there is a small chance that if she plays Co-op, Column will conclude that she is irrationally playing Tit-for-Tat, then the expected return of playing Co-op will be higher, and hence it will be rational. I conclude that, given any kind of plausible assumptions Row might have about Column’s beliefs, playing Co-op on the first round is rational.
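
To put rough numbers on that point: suppose (and every detail of this branch is a stipulation for illustration, not something in the argument above) that if Row plays Co-op on round 1 then, with probability p, Column treats her as a Tit-for-Tat player and co-operates from round 2 until defecting on the final round, while with probability 1-p nothing changes and both simply defect from round 2 on. On the illustrative payoffs used earlier, any p a little over half a percent already makes Co-op on round 1 the better gamble.

```python
# All numbers here are stipulations for illustration (payoffs 5/3/1/0, 100 rounds).
ROUNDS = 100

# Option 1: Row defects throughout, and (suppose) so does Column.
defect_throughout = ROUNDS * 1  # 100 rounds of mutual defection

# Option 2: Row plays Co-op on round one (and suppose Column plays Defect then).
def coop_on_round_one(p):
    # With probability p, Column now treats Row as a Tit-for-Tat player:
    # rounds 2-99 are mutual co-operation, round 100 is mutual defection.
    good_branch = 0 + 98 * 3 + 1
    # With probability 1 - p, nothing changes: mutual defection from round 2 on.
    bad_branch = 0 + 99 * 1
    return p * good_branch + (1 - p) * bad_branch

# On these numbers the break-even point is p = 1/196, a little over half a percent.
assert coop_on_round_one(0.01) > defect_throughout
assert coop_on_round_one(0.001) < defect_throughout
```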

In their paper, Pettit and Sugden try to make two arguments. The first I’ve very quickly sketched here – namely that the assumption that always Defect is uniquely rational leads to contradiction given minimal assumptions about Row’s beliefs about how Column would react. The second, if I’m reading them correctly, is that rational players may play some strategy other than always Defect. The argument for the second conclusion involves rejecting premise 2 of my model. They rely on cases where players react to rational strategies by inferring the other player is irrational, or believes they are irrational, or believes they believe that they are irrational, etc. Such cases are not altogether implausible, but it is interesting to think about what happens without allowing for such a possibility.

And I conclude that, given my initial assumptions, there is no approved strategy. And I’m tempted to think that’s because there is no rational strategy to follow. Just as in Death in Damascus, any strategy a player might follow, they have reason to believe is irrational while they are playing it. This is a somewhat depressing conclusion, I think, but causal decision theory sometimes doesn’t give us straightforward advice, and I suspect finite iterated Prisoners Dilemma, at least given assumptions like my premise 2, is a case where causal decision theory doesn’t give us any advice at all.

Posted by Brian Weatherson in Uncategorized



12 Responses to “Is there a rational strategy in Finite Iterated Prisoners Dilemma?”

  1. Michael Kremer says:

    I don’t find the result very surprising. Nor do I find it depressing.

    What happens if you vary what is known, so that what is known is only that there will be a finite number of iterations, but not how many? (Suppose there is a randomizing device that will determine when the game ends.) Your argument does not go through, since one cannot know that there is a largest number r such that there are approved strategies S1 and S2 where, if Column plays S2 and Row plays S1, Row plays Co-op on round r.

  2. Brian Weatherson says:

    That’s right – if you don’t know when the game ends, the argument against the rationality of co-operation doesn’t go through. I was trying to reconstruct something like the backward induction argument against co-operation, but do it in a way that doesn’t rely on faulty assumptions about belief revision. And just like the original argument, my argument relies on a known end point.

  3. Bob Stalnaker says:

    Brian says, “If our assumption is true, then there can be no approved strategy that ever co-operates before observing the other player co-operate.” I think this is correct, but misleading, since the definition of an “approved strategy” has consequences that are stronger than I think are intended, and unreasonable.

    He also says that his result does not imply that “defect unconditionally” is an approved or even a rational strategy. This is also, I think, right but misleading. Whether a strategy is approved, and whether it is rational, depend on the prior beliefs of the players. The strategy “defect unconditionally” is both rational and approved, given certain prior beliefs.

    The problem is with assumption 2, which is not really a constraint on the strategy that is played, but rather a constraint on the belief revision policies (and in effect, a constraint on the prior beliefs of the players). This is not a reasonable condition to impose, since it puts very severe constraints on belief revision, in effect requiring that players never learn anything incompatible with their prior beliefs. Specifically, if we assume that the agent is not omniscient (so there is at least one proposition Q, such that both Q and ~Q are compatible with the agent’s prior beliefs), then assumption 2 implies that the agent cannot receive any evidence that is incompatible with her beliefs, while remaining consistent. Here is the argument: Suppose the agent receives evidence E that is incompatible with her prior beliefs. (That is, the agent initially believes ~E). Then she also believes both (~E v Q) and (~E v ~Q), since they are obvious consequences of ~E. But both of these disjunctive propositions are logically compatible with evidence E, so if the agent conforms to assumption 2, she must retain them both when she learns E, in which case she will have inconsistent beliefs.
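
    The derivation can be checked mechanically. In the toy encoding below (which is only an illustration), each retained disjunction is individually consistent with the evidence E, but the two of them together with E cannot all be true.

    ```python
    from itertools import product

    # Brute-force check over the four truth assignments to E and Q.
    def consistent(*formulas):
        """True iff some assignment to (E, Q) satisfies every formula."""
        return any(all(f(E, Q) for f in formulas)
                   for E, Q in product([True, False], repeat=2))

    retained_1 = lambda E, Q: (not E) or Q          # ~E v Q
    retained_2 = lambda E, Q: (not E) or (not Q)    # ~E v ~Q
    evidence = lambda E, Q: E                       # the surprising evidence E

    assert consistent(retained_1, evidence)       # each disjunction alone fits with E...
    assert consistent(retained_2, evidence)
    assert not consistent(retained_1, retained_2, evidence)  # ...but together they do not
    ```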

    The implicit assumption that with “approved” strategies, there can be no surprises is what is doing the work in Brian’s argument. It is an established result in epistemic game theory that the backward induction argument for the iterated prisoners’ dilemma works on the assumption that it is common knowledge that neither player will do anything that will surprise the other player (no prior probability one belief for either player will be overturned by the actions of the other player). So on this assumption (along with the assumption that it is common knowledge that both players are rational), both players will defect every time, and they will expect the other player to defect every time, and so no one will be surprised. If they have the right prior beliefs, both players will act rationally in following a strategy with this result, and it may be an “approved” strategy, in Brian’s sense. But of course the players might be fully rational, while not having the prior beliefs that make this strategy rational, and in such a case, it might be rationally required for a player to act in a way that will surprise the other player. In this case, a non-equilibrium solution may be reached, with everyone acting in a way that is fully rational. The strategies will not be “approved”, but that is only because the conditions of approval are unreasonable.

  4. Michael Kremer says:

    This time, some more serious comments.

    First, on your main argument: You write “Since r is the last round under which anyone playing an approved strategy against an approved strategy co-operates, and Column is playing an approved strategy, Row believes that if she were to play Co-op, Column would play Defect on every subsequent round.” But doesn’t this require more than that r is the last round under which anyone playing an approved strategy against an approved strategy co-operates — namely doesn’t it require that Row knows or at least believes that r is the last round under which anyone playing an approved strategy against an approved strategy co-operates? Do you take this to follow (whatever r is) from Row’s being rational?

    Second, your premise 2 does not seem to come into your main argument at all. It only comes in in your response to worries derived from Pettit and Sugden. I’m a bit confused by this.

    More generally I’m confused by what you say about the relevance of premise 2. Consider a case in which a player reacts to a rational strategy by inferring that the other player is irrational. How could this lead to a belief being retracted on presentation of evidence consistent with that belief? Could you explain?

  5. Michael Kremer says:

    I wrote mine while Bob Stalnaker posted his — I agree with his argument against 2. Also, I now see the place where premise 2 occurs in the original argument, so please ignore my comments about premise 2. I need to read more carefully.

  6. Brian Weatherson says:

    Bob is clearly right that my premise 2 is too strong. A belief revision policy that does not allow any change of belief is not acceptable.

    But I think I could save the argument with a slightly weaker premise in place. (And this new premise will be slightly weaker than the assumption in the result Bob refers to – that when the players know there are no surprises, then they will always defect.)

    The weaker premise is that a player never loses any of their beliefs unless they receive evidence that is inconsistent with their prior beliefs.

    This doesn’t say that the evidence has to be inconsistent with the belief that is lost, as the original premise 2 did. It just says that only surprises lead to removals of beliefs. And I think this is all that I appeal to in the argument.

    I don’t think this is an especially plausible premise, but I do think it is a common and somewhat useful idealisation. It’s essentially equivalent to the idea that when evidence E is consistent with a belief set B, the agent with belief set B who receives evidence E should have as their new belief set the closure of E union B. But this says nothing about what happens when E is inconsistent with B (i.e. surprising) so I don’t think it rules out agents who are agnostic about some questions from being surprised.
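
    To make the idealisation concrete, here is a toy version of that update rule (purely illustrative, with beliefs modelled as a finite set of propositional formulas and the deductive closure left implicit): expansion when the evidence is consistent with the current beliefs, and no verdict at all when it is a surprise.

    ```python
    from itertools import product

    # Toy version of the weakened premise (illustrative only). Beliefs are a finite
    # set of formulas over two atoms, each represented as a Python predicate.
    def consistent(formulas):
        """True iff some truth assignment to the two atoms satisfies every formula."""
        return any(all(f(p, q) for f in formulas)
                   for p, q in product([True, False], repeat=2))

    def update(beliefs, evidence):
        """If the evidence is consistent with the belief set, just add it (standing
        in for the closure of B union {E}). If it is a surprise, say nothing."""
        if consistent(beliefs | {evidence}):
            return beliefs | {evidence}
        raise NotImplementedError("surprising evidence: revision left unspecified")

    # An agent agnostic about q can learn q without giving anything up:
    believes_p = lambda p, q: p
    learns_q = lambda p, q: q
    assert consistent(update({believes_p}, learns_q))
    ```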

    That’s a weaker premise than no known surprises, because I’m not assuming at the start that any of their beliefs are true. But it’s also a weaker conclusion, since I’m merely arguing that co-operating is never rational, not that defecting always is rational.

  7. Brian Weatherson says:

    Michael, I’m assuming that part of rationality is that the agents can figure out which strategies are approved. That may be too much of an idealisation. But if the players know which strategies are approved, and r is the last round on which any approved strategy co-operates, then naturally the players will know this.

  8. Bob Stalnaker says:

    I think that with the weakening of premise 2, Brian’s argument is fallacious. The problem is with this claim:

    “Since r is the last round under which anyone playing an approved strategy against an approved strategy co-operates, and Column is playing an approved strategy, Row believes that if she were to play Co-op, Column would play Defect on every subsequent round. By premise 3 (or, more precisely, by her belief that premise 3 still holds), Row can infer that Column will also play Defect on every subsequent round if she plays Defect on this round.”

    The assumption, at this point in the argument, is not that r is the last round on which cooperation occurs with any pair of approved strategies, but only that r is the last round on which there is cooperation on the particular approved strategies, S1 and S2, that the players are playing. But there may be many approved strategies, and it is consistent with the premises that Row believes, on round r, that Column will or might be playing a different approved strategy, one that cooperates on a further round, even though she is in fact wrong about that.

    Suppose r is round one. Row cooperates, on the expectation that Column will cooperate in response. But Column surprises her by defecting, and so it is defect for both from then on. Round one turns out to be the last one on which either cooperates, but Row didn’t expect that, and so her cooperation on round one was not irrational. Nothing in Brian’s argument shows that this might not happen, but to be sure that it can happen, given the premises, we must be sure that there is an approved strategy for Column in which he responds to cooperation on round one with cooperation on round two. One can show that there is such a strategy, but it turns on the fact that (with the weakened premise 2), by round two, it may no longer be common belief that the players are playing approved strategies, or even that they are rational. Column might have expected Row to defect on round one, and might have concluded that she was irrational when she cooperated. Once one is surprised by something, all bets are off as to which prior beliefs survive.

  9. Brian Weatherson says:

    I’m not sure I see where the misstep was supposed to be. I suspect that I haven’t expressed the assumptions I was intending to make clearly enough.

    Assume, for reductio, that there are pairs of approved strategies S′ and S′′ such that the player playing S′ co-operates at some round while the other player plays S′′.

    If any such strategies exist, there must be a maximal round r such that a player playing an approved strategy with a partner also playing an approved strategy co-operates on that round. Use ‘r’ to pick out that round.

    Let S1 and S2 be the names of one such pair of strategies, where the player playing S1 co-operates on this maximal round r while the other player plays S2.

    Now imagine we have a pair of players playing S1 and S2, and imagine we’ve just completed round r-1. The above argument is meant to be an argument that the person playing S1 can’t be rational to co-operate on round r.

    And it’s crucial to that argument that r isn’t just the highest round on which they co-operate while playing those particular strategies; it is the highest round on which any pair of approved strategies leads to co-operation. But since the game is finite, there must be a highest such round, and some pair of possible players must play the strategies that take them to that highest round. And that’s what leads to the worries I have.

  10. Brian Weatherson says:

    One other quick comment about the belief update policies I’m thinking of here. What I really wanted was the conjunction of the following two ideas.

    (A) “Believes” should be interpreted as “Assigns probability 1 to”; and
    (B) Anything that has probability 1 keeps having probability 1 unless something with probability 0 happens (and after that all bets are off).

    I think in general that (A) is a bad idea, and (B) is probably not a lot better. But I think both these ideas are fairly standard. In any case, it’s easy enough to come up with cases where agents who assign probability, say, 0.99 to the rationality of the other player co-operate in finite iterated Prisoners Dilemma, so the most interesting cases must be where the player’s probability of the other player’s rationality is 1.

  11. Bob Stalnaker says:

    Thanks, Brian, that is very clear, and I see (from your comment 9) that I misread the argument. If there is a problem with it, it is subtler than I suggested. But I still see a problem with the argument as you now spell it out, which connects with the last part of my previous post (the remark that all bets are off about what beliefs are given up in response to a surprise). It is, as you emphasize, essential to your argument that the players not only play an approved strategy, but that each believes that an approved strategy is being played by the other player. But the premises say only that they have this belief at the start of the game. Strategies themselves don’t change in the course of a game, but beliefs about what strategy is being played by the other may change. If, for example, Col surprises Row with a move (even a move that might be part of an approved strategy), then it could happen (consistent with the premises) that Row responds by giving up her belief that Col is playing an approved strategy. That means that an approved strategy might involve a move that, at the time it is made, is rational because the player will then believe that the other player is not playing an approved strategy.

    We are supposing (as I now understand the argument) that r is the largest number such that for some approved strategies S1 and S2, C is played on round r. It still might be rational for Row to play C on that round, since she might at that point believe that he is playing an unapproved strategy that plays C (under some condition) on round r+1.

    Here is a toy model to make the point more concrete. Let the game have just three iterations, and to describe strategies, suppose C is co-op, D is defect, and T is the conditional tit-for-tat move (C iff C from the other player on the previous round). So strategy CTD, for example, is C on round one, conditional move T on round 2, and D on round 3. I claim that the strategies CDD and DTD are both approved. It is clear that DDD is approved, and DTD is a best response to DDD. So suppose Col believes (with probability one) that Row will play DDD. In choosing between DDD and DTD (which are equally good responses to DDD), Col needs to consider what to believe on the belief-contravening hypothesis that Row plays C on round one. Suppose Col would, in that case, revise his beliefs by concluding (falsely, it turns out) that Row is playing the irrational strategy CTT. The best response to CTT (between the equal best responses to DDD) is DTD, so that is Col’s best strategy, given his beliefs and conditional beliefs. So DTD is approved, and the best response to DTD is CDD, so it is approved. So r, in the three-iteration game, is 2 (since no approved strategy plays C on the last round). But Col is still rational to play C on round 2. (Please don’t quote my approval of DTD out of context.) I may be missing a reason why DTD or CDD should not be approved, but each seems okay to me.
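
    For anyone who wants to check the arithmetic, here is a small script that plays out the three-round game under this strategy notation and verifies the payoff comparisons above. The payoff numbers (5 for unilateral defection, 3 for mutual co-operation, 1 for mutual defection, 0 for being exploited) are chosen only for illustration.

    ```python
    from itertools import product

    # Payoffs chosen only for illustration.
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def resolve(letter, other_prev):
        """Turn a strategy letter into a move. 'T' copies the other player's
        previous move (treated as 'C' on round one, a case the strategies
        discussed above never need)."""
        if letter == 'T':
            return other_prev if other_prev is not None else 'C'
        return letter

    def score(strat_a, strat_b):
        """Total payoff to each player over the three rounds."""
        a_prev = b_prev = None
        a_total = b_total = 0
        for la, lb in zip(strat_a, strat_b):
            a_move, b_move = resolve(la, b_prev), resolve(lb, a_prev)
            a_total += PAYOFF[(a_move, b_move)]
            b_total += PAYOFF[(b_move, a_move)]
            a_prev, b_prev = a_move, b_move
        return a_total, b_total

    ALL_STRATEGIES = [''.join(s) for s in product('CDT', repeat=3)]

    def best_payoff_against(opponent):
        return max(score(s, opponent)[0] for s in ALL_STRATEGIES)

    # DTD and DDD are equally good, and best possible, replies to DDD:
    assert score('DTD', 'DDD')[0] == score('DDD', 'DDD')[0] == best_payoff_against('DDD')
    # Between those two, DTD does better if the opponent turns out to be playing CTT:
    assert score('DTD', 'CTT')[0] > score('DDD', 'CTT')[0]
    # And CDD is a best reply to DTD:
    assert score('CDD', 'DTD')[0] == best_payoff_against('DTD')
    ```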

  12. Brian Weatherson says:

    I think this is an error in the proof. I think for the argument to go through you need (at least) a much stronger assumption than I was making. (Or, for that matter, than seems reasonable to make.)

    What I think I was assuming was that the only beliefs that the agents had about each other were the beliefs specified in the three conditions stated here. But of course that’s not part of the setup of the problem. The problem is consistent with Col believing (with probability 1) that Row is playing some approved strategy other than the one Row is playing. And it’s consistent with Row believing that Col believes Row is playing a rational strategy other than the one Row is actually playing, etc.

    So I think this is a real mistake in the proof.
