How Not to Resolve the Backward Induction Paradox

Robert Bass

The backward induction paradox emerges from the consideration of certain non-cooperative, non-zero-sum games. For present purposes, we will focus on the Prisoners' Dilemma.

If two players confront one another in a single-shot Prisoners' Dilemma, with no expectation of future interaction, there is a dominance argument that, if each is rational, each will defect. They are faced with a structure of incentives in which each receives what he regards as his best outcome if he defects while the other cooperates. Each will receive his second-best outcome if both cooperate. The third-best outcome for each will occur if both defect, and the worst outcome for each will be realized if he cooperates while his opponent defects. Thus, each will, if rational, realize that he will do better by defecting than by cooperating, whatever his partner does. Specifically, if one player considers the possibility of his partner cooperating, he will realize that by cooperating also, he will achieve his second-best outcome while by defecting, he will achieve his best outcome. If, on the other hand, he contemplates the possibility that his partner will defect, he will realize that cooperation will lead to his worst outcome, while defection will only lead to his second-worst (or third-best) outcome.1 A standard way of representing this situation is as follows:2

 

 

                             Player B
                          C          D
      Player A    C     3, 3       1, 4
                  D     4, 1       2, 2

 

Since both players can reason in this way, both, if rational, will choose to defect, and each will thereby receive his second-worst outcome.
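The dominance reasoning above can be checked mechanically. The following sketch (my own illustration, not part of the original text) encodes the cardinal payoffs from the matrix and verifies that defection strictly dominates cooperation for a player, whatever the partner does:

```python
# Payoffs from the matrix above: keys are (A's move, B's move),
# values are (A's payoff, B's payoff).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (1, 4),
    ("D", "C"): (4, 1),
    ("D", "D"): (2, 2),
}

def strictly_dominates(mine, other):
    """True if move `mine` pays A strictly more than move `other`,
    whatever move B makes."""
    return all(
        PAYOFF[(mine, b)][0] > PAYOFF[(other, b)][0]
        for b in ("C", "D")
    )

print(strictly_dominates("D", "C"))  # True: 4 > 3 against C, and 2 > 1 against D
```

By the symmetry of the matrix, the same check run from B's side yields the same result, which is why both players, reasoning in this way, defect.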

By contrast, if two players will confront one another repeatedly in a Prisoners' Dilemma, where there is no expectation that their interaction will terminate after any specific number of rounds or iterations,3 conditional cooperation strategies may be rational if the players care sufficiently about the outcomes of future interactions.4 One player may cooperate in the expectation that, if he is defected against, he can "punish" the other player by defection in one or more future rounds. Similarly, a partner's cooperation may be "rewarded" by cooperation in subsequent rounds. One strategy that does well under a wide variety of conditions and when pitted against a wide array of other possible strategies is Tit-for-Tat which always cooperates with a partner on their first interaction and thereafter plays whatever the partner played in the previous round.5
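The point about conditional cooperation can be made concrete by simulating repeated play. This is my own sketch (the strategy functions are illustrative, not from the text); Tit-for-Tat is defined exactly as described, cooperating first and thereafter echoing the partner's previous move:

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
          ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

def tit_for_tat(partner_history):
    # Cooperate on the first round; afterward, copy the partner's last move.
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    return "D"

def play(strat_a, strat_b, rounds):
    """Run an iterated game; each strategy sees only the other's past moves."""
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        total_a += pa
        total_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat, 100))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect, 100))  # (200, 200): constant defection
```

Over 100 rounds, two Tit-for-Tat players earn 300 each, while two unconditional defectors earn only 200 each -- the gain that makes conditional cooperation attractive when future rounds matter enough.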

What is to be said, though, about a finite iterated Prisoners' Dilemma -- in which both players can count on their interactions terminating after some definite number of rounds? Suppose two players will confront one another for 100 iterations or rounds of a Prisoners' Dilemma. Both intuition and empirical results suggest that they will find it rational to pursue a conditional cooperation strategy. Yet, given the received view that rational players will defect in a single-shot Prisoners' Dilemma, there is a familiar backward induction argument that two rational players in a finite iterated dilemma will each defect in every round.
Very briefly, that argument can be expressed as follows: Rational players will defect in the last round because, in the last round, with no prospect of future rewards for cooperation or of future penalties for defection, the structure of incentives they face is the same as in the single-shot Prisoners' Dilemma. Given the expectation of last-round defection, each will defect in the next to last round. And, given the expectation of next-to-last-round defection, each rational player will defect in the next-to-next-to-last round. Since some form of this argument can be presented with respect to each round, rational players will defect in every round, including the first.
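The structure of that argument can be mirrored in a short computation -- a sketch under my own assumptions, not part of the paper's apparatus. Fixing play from the last round backward, each earlier round reduces to the one-shot game against a partner who reasons the same way, so defection is selected at every step:

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
          ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

def backward_induct(rounds):
    """Work from round n back to round 1. Once later play is fixed,
    the current move cannot affect it, so each round is a one-shot
    choice against a partner who (by the same reasoning) defects."""
    plan = []
    continuation = 0  # already-fixed value of the rounds after the current one
    for r in range(rounds, 0, -1):
        value_c = PAYOFF[("C", "D")][0] + continuation
        value_d = PAYOFF[("D", "D")][0] + continuation
        move = "D" if value_d > value_c else "C"
        plan.append((r, move))
        continuation += PAYOFF[(move, move)][0]
    return list(reversed(plan))

print(all(move == "D" for _, move in backward_induct(100)))  # True
```

Since the continuation value is the same whichever move is made now, the comparison in every round collapses to the one-shot dominance comparison, and "D" wins each time -- including round 1.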

This is the backward induction paradox: In a finite iterated Prisoners' Dilemma, it is plain (a) that a pair of conditional cooperators can do better than they would under constant mutual defection, and (b) that people do not in general behave in the way that the game theorists recommend -- and thereby actually do better than constant mutual defection. Yet there seems to be a powerful argument that rational players should mutually defect in every round.

There have been different reactions to the backward induction paradox. Some have accepted it while suggesting that "reasonable" players will not be game-theoretically "rational." Others have been willing to call into question whether rational players will really (and always) defect in the single-shot Prisoners' Dilemma.6

An Attempted Resolution of the Backward Induction Paradox


Enter Sugden and Pettit. In an interesting article, Robert Sugden and Philip Pettit maintain "that the argument for permanent defection is unsound and that the backward induction paradox is soluble."7 That is, they claim to resolve it while accepting the irrationality of cooperation in the single-shot Prisoners' Dilemma (which I shall also assume). If they are correct, intuition and game-theoretic rationality coincide. There is no paradox and no need to surrender or modify either.

On their view, once the backward induction argument is set out carefully, it will turn out that neither player is in a position to rationally believe everything she must in order for the paradox to follow. As they put it:

On careful exposition, the induction depends on the capacity of each of the players, before making his first move, to run the following sequence of arguments:

argument 1. In round n [where "n" is the number of the last round], my partner will act rationally; therefore, he will defect.
argument 2. In round n-1, my partner will act rationally. In addition, he will then believe that I will act rationally in round n; therefore, he will believe that I will defect in round n. Given this belief, he will defect in round n-1.
argument 3. In round n-2, my partner will act rationally. In addition, he will then believe, not just what he believes at round n-1, but also that in round n-1 (i) I will act rationally and (ii) I will believe that he will act rationally in round n. Therefore, in round n-2, my partner will believe that I will defect in round n-1. Given these beliefs, he will defect in round n-2.

And so on, up to argument n. Taken together, these arguments yield the conclusion that the partner will defect in every round; thus, the only rational response is to defect in every round too.8

Sugden and Pettit continue that

... it is mistaken to think that a player is in a position to run these arguments before making his first move. At this point, he believes that his opponent is rational ... that his opponent believes he is rational, and so on .... But that does not entitle him to believe that in subsequent rounds his partner will still believe he is rational (etc.), irrespective of how he, the first player has acted in the interim. Thus, for example, argument 2 is unavailable to him, because from the premise that my partner now believes that I am rational, I cannot derive the premise required, that in round n-1 my partner will believe that I am rational. Similarly, argument 3 is unavailable ....
The backward induction argument is supposed to show each player, at the start of the game, that rationality requires him to defect in every round -- and in particular in the first. It would indeed be irrational for me to cooperate in the first round if, regardless of whether I cooperate in that round, my partner will defect in rounds 2, ..., n. But does the backward induction paradox show this? Only if arguments 1, .... n-1 all hold even on the supposition that I cooperate in round 1.9

They argue further that if either player cooperates on any move -- and a fortiori on the first -- then the other player's belief in her partner's rationality will be upset:

Suppose ... A cooperates in the final round. This is an irrational act, since defection dominates cooperation there, and so would cause B, if rational, to believe that A is irrational. In that case, the common belief in rationality breaks down at the first level.

Suppose instead that A cooperates in round n-1. B may again be led to believe that A is irrational, in which case we also get a first-level breakdown of the common belief. But may B hold that A is rational? Perhaps, but only if [he] believes that A believes that B can be induced by A's cooperation in round n-1 to cooperate in round n, which would be an irrational act. So in that case the common belief breaks down too, though now at the second level up: B does not believe that A is irrational but ... that A believes that he, B, is irrational.

Now suppose that A cooperates in round n-2. B may be led to believe that A is irrational. Alternatively, B may believe that A believes that by cooperating in round n-2 he can induce B to cooperate in at least one of rounds n-1 or n. But cooperating in round n is irrational, so this would involve a breakdown of the common belief at level 2. And cooperating in round n-1 is either irrational -- in which case we also get a breakdown at level 2 -- or it springs from the belief that such cooperation would induce A irrationally to cooperate in round n, in which case we get a breakdown of the common belief at level 3.

The generalization should be obvious. For any act of cooperation by one player in round n-j where 0 [is less than or equal to] j [is less than or equal to] n-1, the partner, if rational, must respond with a belief that causes the common belief in rationality to break down at level j+1 or at some lower level. If we can see this, so can the players (who, by assumption, are rational). So, necessarily, neither of the players can believe that the common belief in rationality would survive whatever moves the players make.10

We have now solved the backward induction paradox, for we have shown that the players are ... necessarily not in a position to run the backward induction. One of the beliefs required ... -- the belief that the common belief in rationality would survive even if cooperative moves were played -- is a belief that the players could not have.11

Having resolved the backward induction paradox to their satisfaction, Pettit and Sugden examine the strategies that might be pursued by rational players who are not backward inducers. These include, they say, variations on Tit-for-Tat which make allowance for defection in or near the last round.12 Since I think that their discussion of these strategies is peripheral to their criticism of the backward induction argument, I will not further discuss them.

The Resolution Examined

One of the more conspicuous features of Pettit's and Sugden's argument is that the premise that they claim is necessary for the backward induction to work does not seem to appear in their own presentation.13

It is necessary, they claim, that each player believe "that in subsequent rounds his partner will still believe that he is rational (etc.), irrespective of how he, the first player, has acted in the interim."14 But the argument they present (quoted above) -- which is supposed to be available to "each of the players, before making his first move"15 -- makes no mention of a requirement that the players will each preserve a belief in the rationality of the other without regard to how the other plays. All that is said there is that each believes, before the first move, that the other is rational (and each believes of his partner that his partner believes in his own rationality, etc.). This is just a statement of the common belief in mutual rationality with which the players, ex hypothesi, begin. Call this the Weak Premise.

Pettit and Sugden, however, wish to claim that the backward induction argument requires the Strong Premise that the players will continue, at late stages of the finite iterated Prisoners' Dilemma, to believe in their mutual rationality regardless of how either of them has played. There is some difficulty in interpreting their argument at this point since the statement of this requirement (curiously, for something so central) seems careless. In one case they express the requirement as a belief held (by a potentially backward-inducing player) "that in subsequent rounds his partner will still believe he is rational (etc.), irrespective of how he, the first player has acted in the interim."16 Surely, this is too strong a requirement, and Pettit and Sugden cannot mean to be committing themselves to the claim that, for the backward induction argument to work, the backward inducer's belief in his partner's rationality must be immune to upset even by evidence of the onset of insanity. If, for example, I observed a partner trying to hammer nails through her hands and ignite her hair, my belief in her rationality might well be upset. Arguably, I would then be ill-advised to continue to play as if I thought my partner was rational. But it is not obvious that, for the backward induction argument to work, my belief in my partner's rationality must be unrevisable in the face of such evidence.

In other places, Pettit and Sugden seem to wish to restrict the claim to refer only to rationality (or irrationality) as it is exhibited in moves played. For example, they say that "neither of the players can believe that the common belief in rationality would survive whatever moves the players make."17 This also seems too strong a requirement for their purposes, for there are unquestionably some patterns of moves that my partner could make that would lead me to conclude that he was irrational. He might, for example, cooperate on every round even though I defected on every round. Again, it appears that I might be a backward inducer who begins with a belief in my partner's rationality (etc.), but who later concludes on the basis of evidence provided by the course of play that my partner is not rational. If I expected such evidence to be forthcoming, I couldn't run the backward induction argument. But, if I did not expect it, it is not clear why I couldn't employ the backward induction argument based on what I do believe and expect (before any moves have been played).

Their most circumspect statement of the requirement just says that one of "the beliefs required" is "that the common belief in rationality would survive even if cooperative moves were played."18 This, I think, comes closest to what Pettit and Sugden should have been claiming on behalf of their Strong Premise: that the backward induction argument doesn't work unless mutual belief in rationality will be preserved even if one of the players should make an "unprovoked" cooperative move. Whether this is what they actually intended is unclear, given their various claims on behalf of stronger theses. But it is what I will take them to have intended, since (a) the stronger interpretations are independently implausible, and (b) if the necessity of the Strong Premise, under this interpretation, can be sustained, it seems sufficient to support their criticism of the backward induction argument.

To return from interpretive issues to their argument, Pettit and Sugden claim, referring to their earlier presentation of the backward induction argument, that "argument 2 is unavailable ... because from the premise that my partner now believes that I am rational, I cannot derive the premise required, that in round n-1 my partner will believe that I am rational."19 Well, perhaps I cannot -- from this premise alone. But, then, I am not quite so epistemically limited as this statement would suggest. I am at least allowed the additional beliefs that my partner is rational and that she believes that I am rational (since these are both included in the common belief in mutual rationality with which the backward induction argument begins). Because I hold these beliefs, I will expect her to alter her beliefs about my rationality only if such alterations are warranted by my play. Given that, why can I not then derive the premise required? That is, if I am rational and believe my partner to be rational and that my partner believes that I am rational (etc.), then I am in a position to infer that my partner's beliefs about my rationality (etc.) will not be altered unless I play irrationally. Thus, I may infer that my partner will continue to believe in my rationality (etc.) unless I make an irrational move or sequence of moves. It seems that, given the initial assumption that my partner is rational, and given the fact that I am, by hypothesis, rational (and will therefore not play any irrational moves), I have everything necessary to conclude that, at late stages of the game, my partner will still believe that I am rational (etc.).20 Do I need more than this to be entitled to believe that, at n-1, my partner will believe me to be rational, etc.? If so, why? 
Why, in other words, do I, at the beginning of the finite iterated Prisoners' Dilemma, need anything other than the Weak Premise (i.e., the common belief in mutual rationality) and what can be inferred from it in order to run the backward induction argument?

What Pettit and Sugden argue here is, to say the least, highly compressed. An unsympathetic interpreter might say that they have simply provided no argument that the Strong Premise is necessary.21 What follows is an attempt to interpret some of their scattered remarks on the subject as parts of an argument for the Strong Premise.

On this interpretation, I am unentitled to the required premise (that my partner will believe I am rational at n-122) because I, by a first-move cooperation or a pattern of early cooperative moves, could bring it about that my partner will not, at the n-1th round, be able to rationally believe in our mutual rationality. No doubt I could. As rehearsed above, they argue in fact that any cooperative move will lead to breakdowns in the beliefs of the other player about mutual rationality. My partner will rationally conclude from an early cooperative move either that I am irrational or that, on some level, I believe she is irrational. So, since I can upset my partner's belief in mutual rationality, I am not entitled to the premise that in round n-1 my partner will believe that I am rational (or that in round n-2 I will believe that my partner will believe in round n-1 that I am rational, etc.).23

What has this to do with the backward induction argument? The answer is that it has precisely nothing to do with the backward induction argument unless it can be rational for me to engage in first-move cooperation. If my partner correctly concludes that I am irrational, then the backward induction conclusion will not follow. She may find a better way to play against an irrational player than constant defection. But, of course, the backward induction argument never claimed to prescribe how rational players must play against irrational players.

So, the possibility that Pettit and Sugden have highlighted is only relevant if it can be rational for me to engage in first-move cooperation. It can, they say, if I can rationally believe that my first-move cooperation will upset the common belief in mutual rationality. If the common belief is upset, my partner may be induced to play in ways that she could not rationally play against a rational partner while accepting it. First-move cooperation on my part will, they have argued, so upset my partner's beliefs. So, I can cooperate on the first move even if I am rational, believe my partner to be rational, believe that she believes I am rational, etc.

But here, I think, it is evident that Pettit and Sugden are trying to eat their cake and have it, too. They are claiming both (a) that I can be rational in first-move cooperation while believing in my partner's rationality and in her belief in my rationality and (b) that my partner cannot rationally believe that my first-move cooperation is rational if I also believe in her rationality, etc. But why can she not? (Couldn't she have read the article?) We are dealing here with publicly available arguments. If Pettit and Sugden are correct, there is an argument that shows that, at the beginning of a finite iterated Prisoners' Dilemma, with the common belief in mutual rationality in place, a player may rationally choose to cooperate on the first move. If this argument is available to one player, there is no reason (that Pettit and Sugden have given) that the other player can't also be aware of it and understand it. Hence, if I can be rational in first-move cooperation (without already having given up the common belief), then my partner can rationally believe that I could be rational, that I believe she is rational, etc. Alternatively, suppose that something is wrong with the argument that Pettit and Sugden present on behalf of the possibility of rational first-move cooperation (and that there are no better arguments for that conclusion). Then, my partner will be unable to rationally believe that I have rationally cooperated on the first move (without having already given up the common belief). But, under that supposition, that will be because first-move cooperation with the common belief in place is not rational.24

If the foregoing is correct, Pettit's and Sugden's case25 that the Strong Premise is necessary to the backward induction argument fails. Lacking a satisfactory argument that the Strong Premise is necessary, it appears that Pettit and Sugden have given no reason that the Weak Premise does not suffice for the backward induction argument; it appears to be enough by itself to support the inference on the part of either player that, provided he plays rationally, the other party will, in later rounds, still believe in his rationality, in his belief in the other's rationality, etc.

This, I think, is sufficient to dispose of Pettit's and Sugden's central argument. However, an important question remains: Where exactly do Pettit and Sugden go wrong? The answer -- or part of the answer -- can be found in a further consideration of their argument that a rational player must always regard an "unprovoked" cooperative move as a reason to think that either her partner is irrational or that her partner thinks that she is irrational.

As they introduce it,

[w]e can show that any act of cooperation would cause the common belief in rationality to break down by starting with the nth round and seeing that cooperation would cause a breakdown there; then going back to round n-1 and seeing that cooperation would have a similar effect in that round; and so on, making our way back to round 1.26

But this, in slightly altered dress, is just the backward induction argument itself. Instead of saying that rational players (with the common belief) always defect, it appears to make the equivalent statement that a player who does not defect is either not rational or does not hold the common belief.

There are three apparent differences. First, in this passage, Pettit and Sugden do not quite say what I attributed to them. They say that cooperation would cause a breakdown in the common belief, not that it shows the common belief to be false. However, their argument that such a breakdown would be caused is an argument that the common belief cannot, in fact, be true.

Second, in contrast with standard formulations, it allows for the possibility that cooperation may not (immediately) be taken as evidence of irrationality, but as evidence of a player's belief in the irrationality of his partner.27 If, however, this is a difference, it is one that makes no difference. Suppose I cooperate on the first move. Then my partner, if rational, will conclude either that I am irrational or that I think that she is. If either of these is correct, though, then the conditions for the application of the backward induction argument are unsatisfied.

The third difference may seem to leave a loophole to avoid the backward induction conclusion. There is the suggestion, upon which Pettit and Sugden rely in constructing their resolution to the backward induction paradox, that a player may be rationally required to believe something that isn't so -- in this case, to believe either that a partner is irrational or that the partner believes her to be irrational.28 But this isn't really intelligible. As argued above, it assumes in effect that publicly available arguments aren't publicly available.

The point in raising this is not to reiterate earlier arguments but to illustrate that the "resolution" of the backward induction paradox proposed by Pettit and Sugden itself depends on the backward induction argument. This has got to be self-defeating. If the premises are right, the conclusion must be wrong while if the conclusion is right, something must be wrong with (at least one of) the premises. So, one way or another, the backward induction paradox survives. Ultimately, I suspect that there is only one way the paradox is likely to be disposed of: Game theorists will have to bite the bullet and concede that it may be rational to cooperate (at least sometimes) even in the single-shot Prisoners' Dilemma.


 


 

 




1 This is what is meant by saying that a choice, for a given player, is strictly dominant. A strictly dominant choice is one that is better for the player making it, whatever else happens. (A non-strictly dominant choice is one that, given some conditioning event or events, is better than any other, and, for any other conditioning event or events, is at least as good as that other.)

2 "C" and "D" stand, of course, for the options to cooperate or defect. The numbers should be taken to represent cardinal pay-offs associated with the outcomes, ranging from the highest ranked, 4, to the lowest ranked, 1 with the first number being A's pay-off and the second number being B's pay-off. Strictly, in dealing with the single-shot Prisoners' Dilemma, it is only necessary to refer to ordinal preference rankings. However, I will shortly be discussing iterated games, where there are several rounds of interactions between the same players and having the same strategic structure. In those cases, no determinate results can be achieved unless the pay-offs are interpreted cardinally, so it's simpler to represent matters in that way throughout.

3 This does not imply that they will face an infinite series of rounds. All that is necessary is that there be a low probability that any particular round will be the last.

4 There will be no need to further refer to this condition, but it is perhaps worth mentioning that it is possible to formally represent the extent to which the players care about the outcomes of future interactions by introducing a discount rate so that a pay-off of, say, 4, envisioned by one player as occurring in some subsequent round is discounted as compared with the same pay-off in the present round (and is discounted more the further in the future it is envisioned as occurring).
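As an illustration of this note only (the formula is my assumption of the standard exponential scheme, which the text does not spell out), a pay-off expected k rounds in the future can be weighted by delta**k for a per-round discount factor delta between 0 and 1:

```python
def discounted(payoff, delta, k):
    """Present value of `payoff` expected k rounds from now,
    with per-round discount factor delta (0 < delta < 1)."""
    return payoff * delta ** k

print(discounted(4, 0.9, 1))  # a pay-off of 4 one round ahead is worth about 3.6 now
print(discounted(4, 0.9, 5))  # and about 2.36 five rounds ahead
```

The further in the future the pay-off, the smaller its present value, which is the sense in which a player may or may not "care sufficiently" about future interactions.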

5 For much interesting discussion, see Robert Axelrod's The Evolution of Cooperation (New York: Basic Books, 1984). Also very interesting is Robert Sugden's The Economics of Rights, Cooperation and Welfare (Oxford: Basil Blackwell, 1986), especially chapters 1, 2 and 6.

6 Perhaps the best-known argument to this effect is in David Gauthier's Morals by Agreement (Oxford: Clarendon Press, 1986), especially chapters V and VI. Also see Peter Danielson's Artificial Morality (London: Routledge, 1992) and E. F. McClennen's "The Theory of Rationality for Ideal Games," Philosophical Studies 65 (1992): 193-215, and "Rationality and Rules" (Draft, 1994).

7 "The Backward Induction Paradox," Journal of Philosophy 86, No. 4 (April, 1989), p. 169.

8 Pettit and Sugden, p. 171.

9 Ibid., pp. 171-172.

10 This argument, I think, can be presented in a more intuitively persuasive form. Because of the dominance argument for defection in a single-shot Prisoners' Dilemma, cooperation, in an iterated Prisoners' Dilemma, is either irrational or it is justified by the expectation of future cooperative gains, i.e., gains to be derived from cooperation on subsequent rounds. But, in the case of the nth round, there are no future cooperative gains to be had, so nth-round cooperation is always irrational. Since we are here discussing a finite iterated Prisoners' Dilemma, any cooperative move at an earlier stage must be justified (if it is to be justified) by the expectation of a later cooperative move. Since nth-round cooperation is irrational, there will have to be, if the players are rational and if any cooperative move occurs, a last cooperative move no later than the n-1th round. That move, however, will not be justified unless there was an expectation that there would be subsequent cooperative gains -- which will not be possible if both players are rational (etc.). Some version of this argument will be applicable at any round prior to the nth. Cooperation will be either irrational or rational only contingent upon the expectation of cooperation by one's partner in some later round. Eventually, there will be no place other than the nth round (where cooperation is always irrational) for such later-round cooperation to occur. Therefore, any cooperative move in any round will lead a rational partner to conclude that the other either is irrational or believes that she is irrational.

11 Ibid., pp. 173-174. In passing, we may note a minor confusion embodied in the claim that a cooperative move by one player, say, in the first round, will necessarily undermine mutual belief in rationality. It need not -- unless the other rules out "trembling-hand" explanations for an early cooperation. To put this differently, an early cooperative move may constrain the other player to think that that move was irrational -- which only means that the move was not strategically justified -- but will not necessarily lead the other player to conclude that the one who made the move is irrational. However, these complications can be avoided by stipulating that the conditions of play either rule out trembling-hand cooperation or make it sufficiently unlikely that it would be irrational to believe in it. Where relevant, I shall assume that some such stipulation is in place.

12 Pettit and Sugden are not, it should be noted, arguing that first-move cooperation is rational in the sense of being rationally required. They only mean to claim that, given certain other assumptions, it is not rationally forbidden.

13 The reason, of course, may be their intention to expose the inadequacy of standard presentations of the argument which omit it.

14 Ibid., p. 172.

15 Ibid., p. 171.

16 Ibid., p. 172.

17 Ibid., p. 174.

18 Ibid., p. 174.

19 Ibid., p. 172.

20 Of course, this would not involve, for either partner, believing that her beliefs about the rationality of the other would not be upset if certain things were to happen -- so long as she does not expect any of those things to happen.

21 For example, they say, "We know that they [the players] cannot run the backward induction unless they believe that that common belief [in mutual rationality] will survive, regardless of what either does." Pettit and Sugden, p. 173.

The first clause of this seems unexceptionable: In order to run the backward induction, the players have to believe that, at late stages of the iterated Prisoners' Dilemma, they will still believe in the rationality of the other. But what is the justification for adding the second clause (equivalent to one version of what I called the Strong Premise) to the effect that the belief will survive whatever either player does?

22 To be more precise, there is more than one "required premise" for the backward induction argument as Pettit and Sugden present it. See their discussion, pp. 171-172, of arguments 1,..., n. Most of this is quoted above. This is required just for what they call "argument 2."

23 There is something suspicious about this argument. It seems to move from a premise that it is possible that the common belief in mutual rationality will be upset to a conclusion that a player cannot make use of a premise expressing that common belief (the Weak Premise) or anything derivable from it. But normally, something more than just the possibility of a mistake is needed to disallow the use of a premise. Since this is a reconstruction of Pettit's and Sugden's argument, I am reluctant to claim that they were actually relying upon a move of this kind. In any case, if they were not, it would be helpful if they had set out more clearly the argument that they intended to be making.

24 More broadly, it appears that the resolution to the backward induction paradox that Pettit and Sugden propose is vulnerable on one or the other of two fronts. On one hand, they need first-move cooperation to be rational for, if it is not, there is not even the appearance of a solution to the backward induction paradox. On the other, first-move cooperation must be irrational in order to upset a partner's belief in the mutual rationality of the players. It seems true that an early cooperative move may upset a partner's continued acceptance of the common belief in mutual rationality. (Moreover, Pettit and Sugden have presented an argument that such upset must ensue if there is an unprovoked cooperative move. See above, note 10 and accompanying text.) What is not true -- or, at least, what Sugden and Pettit have not shown to be true -- is that a rational player could make such an early cooperative move.

25 It should be remembered that the argument I am criticizing is a reconstruction based on what is included in the article. I do not know if Pettit or Sugden would endorse it.

26 Pettit and Sugden, p. 173.

27 This is probably not a real difference. The standard formulations simply took it for granted that the common belief in mutual rationality (or, frequently, common knowledge of mutual rationality) is in place. Therefore, they argued that rational players could not cooperate. The difference is more in the choice of ways to express the point than in the point expressed.

28 There is an interesting further question as to whether the players' beliefs may be under-determined by the evidence. If so, it might be permissible for one player to believe that the other will interpret a cooperative first move as evidence that the common belief in mutual rationality does not apply while it is also permissible for the other to so interpret it without it being the case that either is rationally required or forbidden so to believe. Then, Pettit's and Sugden's argument may go through. However, they aren't in a position to accept the suggestion. The backward induction argument that they endorse -- and which, given their assumptions, I accept -- rules it out. See also note 10.