On Rationality

I want to expand on what I wrote previously in “A Simple But Challenging Game: Part II”, this time focusing on Rosenthal’s Centipede Game. To remind you of the rules: in that game there are two players, named Mutt and Jeff, who start out with $2 each and alternate rounds. On the first round, Mutt can defect by stealing $2 from Jeff, and the game is over. Otherwise, Mutt cooperates by not stealing, and Nature gives Mutt $1. Then Jeff can defect and steal $2 from Mutt, ending the game, or he can cooperate, and Nature gives Jeff $1. This continues until one or the other defects, or each player has $100.
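To make the rules concrete, here is a minimal Python sketch of the game. Reducing a whole strategy to the single move on which a player first defects is a simplification for illustration, not part of the original game description:

```python
def play_centipede(mutt_defects_on, jeff_defects_on, start=2, goal=100):
    """Simulate Rosenthal's Centipede Game as described above.

    A strategy is reduced to the player's own move number on which he
    first defects (None = always cooperate).  Returns the final
    (mutt, jeff) dollar totals.
    """
    mutt, jeff = start, start
    turn = 0  # even: Mutt to move; odd: Jeff to move
    while not (mutt == goal and jeff == goal):
        move = turn // 2 + 1  # the mover's own move count
        if turn % 2 == 0:  # Mutt's turn
            if mutt_defects_on is not None and move >= mutt_defects_on:
                return mutt + 2, jeff - 2  # Mutt steals $2; game over
            mutt += 1  # Mutt cooperates; Nature pays him $1
        else:  # Jeff's turn
            if jeff_defects_on is not None and move >= jeff_defects_on:
                return mutt - 2, jeff + 2  # Jeff steals $2; game over
            jeff += 1  # Jeff cooperates; Nature pays him $1
        turn += 1
    return mutt, jeff

print(play_centipede(1, None))     # (4, 0): Mutt defects at once
print(play_centipede(None, None))  # (100, 100): both cooperate to the end
```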

As I previously wrote, in this game, the Nash equilibrium is that Mutt should immediately defect on his first turn. This result is obtained by backward induction. When both players have $99, it is clearly in Mutt’s interest to steal from Jeff, so that he will end with $101, and Jeff will end with $97. But that means that when Jeff has $98 and Mutt has $99, Jeff knows what Mutt will do if he cooperates, and can see that he should steal from Mutt, so that he will end with $100 and Mutt will end with $97. But of course that means that when both players have $98, Mutt can see that he should steal from Jeff, and so on, until one reaches the conclusion that Mutt should start the game by stealing from Jeff.
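The induction is easy to check mechanically. Here is a small backward-induction solver over the game’s states (a sketch; it assumes a player defects whenever defecting is at least as good as cooperating):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def solve(mutt, jeff, mutts_turn, goal=100):
    """Backward induction: final (mutt, jeff) payoffs when both players
    maximize their own final dollars at every decision node."""
    if mutt == goal and jeff == goal:
        return mutt, jeff
    if mutts_turn:
        defect = (mutt + 2, jeff - 2)
        cooperate = solve(mutt + 1, jeff, False, goal)
        return defect if defect[0] >= cooperate[0] else cooperate
    defect = (mutt - 2, jeff + 2)
    cooperate = solve(mutt, jeff + 1, True, goal)
    return defect if defect[1] >= cooperate[1] else cooperate

print(solve(2, 2, True))  # (4, 0): Mutt steals on the very first move
```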

Of course, this Nash equilibrium behavior doesn’t really seem very wise (not to mention ethical), and experiments show that humans will not follow it. Instead they usually will cooperate until the end or near the end of the game, and thus obtain much more money than would “Nashists” who rigorously follow the conclusions of theoretical game theory.

Game theorists often like to characterize the behavior of Nashists as “rational,” which means that they need to explain the “irrational” behavior of humans in the Rosenthal Centipede Game. See, for example, this economics web-page, which gives the following “possible explanations of ‘irrational’ behavior”:

There are two types of explanation to account for the divergence. The first assumes that the subject pool contains a certain proportion of altruists who place a positive weight in their utility function on the payoff of their opponent. Also to the extent that selfish players believe that there is some probability that other players are altruists, they have an incentive to mimic altruistic behaviour by passing.

The second explanation considers the possibility of action errors. Errors in action, or ‘noisy’ play, may result from subjects experimenting with different strategies, or simply from subjects pressing the wrong key.

Let’s step back for a second and consider what “rational” behavior should mean. A standard definition from economics is that a rational agent will act so as to maximize his expected utility. Let’s accept this definition of “rational.”

The first thing we should note is that “utility” is not usually the same as “pay-off” in a game. As noted in the first explanation above, many people get utility from helping other people get a pay-off. But there are many other differences between pay-offs and utility. You might lose utility from performing actions that seem unethical or unjust, and gain utility from performing actions that seem virtuous or just. You might want to minimize the risk in your pay-off as well as maximize the expected pay-off. You might value pay-offs in a non-linear way, so that the difference between $101 and $100 is very small in terms of utility.
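As a quick numerical illustration of that last point (the logarithmic utility here is just a common stand-in for diminishing marginal value, not anything the argument depends on):

```python
import math

# With u(x) = log(x), the extra dollar at the top of the game is worth
# almost nothing, while an early dollar matters a great deal:
print(math.log(101) - math.log(100))  # ~0.00995 utils
print(math.log(3) - math.log(2))      # ~0.405 utils for one early dollar
```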

Of course, this difference between pay-off and utility is very annoying theoretically. We’d really like the pay-offs to strictly represent utilities, but unfortunately for experiments, it is only possible to hand out dollars, not some abstract “utils.”

But suppose that the pay-offs in the Rosenthal Centipede Game really did represent utils. Would the game theory result really be “rational” even in that case? Would the only remaining explanation of cooperating behavior be that the players just don’t understand the situation and are making an error?

No. Remember that to be “rational,” an agent should maximize his expected utility. But he can only do that conditioned on some belief about the nature of the person he is playing with. That belief should take the form of a probability distribution for the possible strategies of his opponent. A Nashist rigidly reasons by backward induction that his opponent must always defect at the first opportunity. He even believes this if he plays second, and his opponent cooperates on the first turn! But is this the most accurate belief possible, or the one that will serve to maximize utility? Probably not.

A much more accurate belief could be based on the understanding that even people who understand the backward induction argument can reason beyond it and see that many of their opponents are nevertheless likely to cooperate for a long time, and therefore it pays to cooperate. If you believe that your opponent is likely to cooperate, it is completely “rational” to cooperate. And if this belief that other players are likely to cooperate is backed by solid evidence such as the fact that they started the game by cooperating, then the behavior of the Nashist, based on inaccurate beliefs that cannot be updated, is in fact quite “irrational,” because it does not maximize his utility.
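Using the play_centipede sketch from above, this is easy to make quantitative: fix a belief, i.e., a probability distribution over the move on which Jeff will first defect, and pick the strategy that maximizes Mutt’s expected dollars. The numbers in the belief below are purely illustrative:

```python
def best_response_for_mutt(belief):
    """belief: dict mapping Jeff's first-defection move (None = never)
    to its probability.  Returns Mutt's expected-dollar-maximizing
    strategy, using play_centipede from the sketch above."""
    candidates = list(range(1, 99)) + [None]
    def expected(mutt_strategy):
        return sum(p * play_centipede(mutt_strategy, jeff_strategy)[0]
                   for jeff_strategy, p in belief.items())
    return max(candidates, key=expected)

# If Mutt thinks Jeff will very probably cooperate deep into the game,
# Mutt's best response is to cooperate for a long time too:
belief = {97: 0.9, 1: 0.1}  # illustrative numbers only
print(best_response_for_mutt(belief))  # 97: cooperate for 96 moves
```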

Sophisticated game theorists do in fact understand these points very well, but they muddy the waters by unnecessarily overloading the term “rational” with a second meaning beyond the definition above; they in essence say that “rational” beliefs are those of the Nashist. For example, take a look at this 1995 paper about the centipede game by Nobel Laureate Robert Aumann. Aumann proves that “Common Knowledge of Rationality” (by which he essentially means the certain knowledge that all players must always behave as Nashists) will imply backward induction. He specifically adds the following disclaimer at the end of his paper:

We have shown that common knowledge of rationality (CKR) implies backward induction. Does that mean that in perfect information games, only the inductive choices are appropriate or wise? Would we always recommend the inductive choice?

Certainly not. CKR is an ideal (this is not a value judgement; “ideal” is meant as in “ideal gas”) condition that is rarely met in practice; when it is not met, the inductive choice may be not only unreasonable and unwise, but quite simply irrational. In Rosenthal’s (1982) centipede games, for example, even minute departures from CKR may make it incumbent on rational players to “stay in” until quite late in the game (Aumann, 1992); the resulting outcome is very far from that of backward induction. What we have shown is that if there is CKR, then one gets the backward induction outcome; we do not claim that CKR obtains or “should” obtain, and we make no recommendations.

This is all well and good, but why use the horribly misleading name “Common Knowledge of Rationality” for something that would be more properly called “Universal Insistence on Idiocy?”

I hope it is obvious by now why I am skeptical of explanations of various types of human behavior that are based on assuming that all humans are always Nashists, and even more skeptical of recommendations about how we should behave that are based on those same assumptions.

[Acknowledgement: I thank my son Adam for discussions about these issues.]

8 Responses to “On Rationality”

  1. David MacKay Says:

    Another argument for not being Nashist in games like centipede and prisoners’ dilemma is an idea that I think Douglas Hofstadter called super-rationality. Hofstadter discusses these games at length in Metamagical Themas and carried out a few experiments on his Scientific American readers to see if they would ‘get it’. (He was disappointed with them.)
    The idea of super-rationality is that you take into account the fact that your game-partner is like you and can think like you. Indeed, if you believe he is _just_ like you, you can deduce that he will play in the same way as you. So now you can decide whether to cooperate or defect by saying ‘he’ll do the same as me; so do I want us both to cooperate, both defect, or both use a mixed strategy?’ The answer in many games is ‘I prefer us both to cooperate’, which makes it “rational”, in my view, to cooperate. As you note, the Nash morons have stolen and abused the word rational, so we have to go one-up on them and call this behaviour “super-rational”.

  2. Jonathan Yedidia Says:

    Thanks David for the comment. I was thinking about the concept of “super-rationality,” and since I’m not an expert on the field, I was wondering whether it had already been explored. If one insists on reducing the probability function for the belief about how your opponent will play to a delta function that has only one possible strategy (perhaps so that computations can easily be made), super-rationality seems very reasonable. I wonder to what extent the concept can be formalized so that it works for games with less symmetry.

    In fact, my son Adam has pointed out to me a “super-rational” solution to an asymmetric game (“Real Men Don’t Eat Quiche” in Gintis’ book.) Maybe I’ll post more about it in the future.

  3. David MacKay Says:

    Here’s a shorthand definition of Superrational: “Use the procedure that you would like everyone to use (from your own selfish point of view), given that everyone uses the same procedure”.
    I can’t remember exactly how Hofstadter deals with asymmetric games, and I agree it’s not obvious. My current take on how to play super-rationally in an asymmetric game is to pretend that both players don’t yet know which half they will play, pretend they are forced to choose a strategy in advance of the game, and invoke the fact that they will of course pick identical strategies (see the sketch below). This feels like a satisfactory generalization, though it has some defects – e.g. it assumes that both players have the same views about the utility function of the game. If in fact part of the asymmetry comes from the two players having different utility functions, it gets awkward – you have to put yourself in their shoes.
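    A minimal sketch of this coin-flip reading: score each “procedure” (a move for each role) by its average payoff over a fair coin flip for the roles, assuming the opponent follows the identical procedure. The payoff matrices in the sanity check are a generic prisoners’ dilemma, not from any game discussed above:

    ```python
    import itertools

    def superrational_procedure(payoff1, payoff2):
        """Pick the procedure (a move for each role) maximizing your
        expected payoff when roles are assigned by a fair coin and the
        opponent is assumed to use the identical procedure.

        payoff1[a][b] / payoff2[a][b]: the two players' payoffs when
        player 1 plays move a and player 2 plays move b."""
        procedures = itertools.product(range(len(payoff1)),
                                       range(len(payoff1[0])))
        def my_expected_payoff(proc):
            a, b = proc  # my move as player 1, my move as player 2
            return 0.5 * payoff1[a][b] + 0.5 * payoff2[a][b]
        return max(procedures, key=my_expected_payoff)

    # Sanity check on a symmetric prisoners' dilemma (move 0 = cooperate):
    p1 = [[3, 0], [5, 1]]
    p2 = [[3, 5], [0, 1]]
    print(superrational_procedure(p1, p2))  # (0, 0): cooperate in both roles
    ```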
    Anyways, I highly recommend Hofstadter’s book, Metamagical Themas.

  4. Yoav Freund Says:

    Hi David and Jonathan,

    Jonathan, this is a great blog! I blame you for preventing me from getting to what I needed to get done today!

    I think that the attempt to characterize rationality through game theory has two flaws. The first is the notion that the natural goal for the player is to maximize a single quantity called a utility. The second is the notion that the natural thing to look for is equilibrium.

    As for the first notion, I would suggest two alternatives:
    1) The goal of the player is to ensure that a utility vector lands inside a desired set. This is a notion studied by Blackwell in the 60s. My interpretation of this idea is that for an organism to survive, various conditions have to be met: temperature, water, and nutrients should all stay within required ranges (on average). I think this is a more reasonable goal than maximizing something.
    2) Minimizing regret: instead of maximizing utility, the goal of the player is to minimize the difference between the utility gained and the utility that could have been gained had the player made different choices in the past (a sketch of one such algorithm follows this list).
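    A minimal sketch of one standard regret-minimizing procedure, the multiplicative-weights (“Hedge”) rule; the interface (per-round loss vectors in [0, 1]) is an illustrative assumption:

    ```python
    import math
    import random

    def hedge(n_actions, loss_rounds, eta=0.1):
        """Multiplicative-weights ('Hedge') learner.

        loss_rounds yields, for each round, a list with one loss in
        [0, 1] per action.  Returns the actions the learner sampled;
        its average regret against the best fixed action goes to zero.
        """
        weights = [1.0] * n_actions
        chosen = []
        for losses in loss_rounds:
            total = sum(weights)
            probs = [w / total for w in weights]
            chosen.append(random.choices(range(n_actions), probs)[0])
            # Exponentially down-weight each action by the loss it suffered.
            weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
        return chosen

    # Illustrative use: two actions, action 1 is consistently better.
    rounds = [[0.9, 0.1] for _ in range(200)]
    print(hedge(2, rounds)[-10:])  # mostly 1s by the end
    ```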

    As for equilibria: physics has made great progress using equilibria as a central concept for analyzing real-world systems. However, I think that humans very rarely think about equilibria when they make decisions. Similarly, financial markets do not operate close to equilibrium. The rate of change of the world economy is so high that I believe it precludes the assumption of being close to equilibrium.

    For me the notion of “evolutionarily stable strategies” is much more convincing than Nash equilibrium, and even more so than Aumann’s “correlated equilibrium”. (BTW, unlike the computation of a Nash equilibrium, which is NP-hard, there are simple learning strategies that are guaranteed to converge to correlated equilibrium, as was shown in a recent paper by Blum and Mansour.)

    I highly recommend the book “Evolutionary Dynamics” by Martin Nowak, which describes these ideas very eloquently.

  5. Jonathan Yedidia Says:

    Hi Yoav,
    Thanks for your comments. I just wanted to mention that I recommended “Evolutionary Dynamics,” and gave a link to a video of a talk by Nowak, in a previous post.

  6. Claus Says:

    This is why I prefer to use the label “economic behaviour” for the economic model of behaviour, instead of “rational behaviour”.

    This makes problems such as this one much easier. Most of the time, it makes them nonexistent. They vanish.

    This way, there’s more time left for the important problems.

  7. Richard K Says:

    > Here’s a shorthand definition of Superrational “Use the procedure that you would like everyone to use (from your own selfish point of view), given that everyone uses the same procedure”.

    You know what else it’s a definition of? Morality. It mimics almost word for word one of the commoner descriptions of Kant’s Categorical Imperative.

  8. Emin Martinian Says:

    While I agree that the Nashist view may not be “rational” or the best way to play, I think the Nash equilibrium still highlights a very important concept in these games: no arbitrage. Imagine that someone asked you how many dollars (or how many utils) you would pay to play the game. You might be willing to pay a lot depending on how you believe the other player will act and whether or not he will cooperate. But it would be irrational for anyone to ever sell you the game for less than $2.

    Regardless of what the other player does, you can always earn at least $2. So if the game were ever offered to you at less than $2, you would have an opportunity to earn money with no risk (i.e., you would have an arbitrage opportunity).

    Note, this does not mean that if the game were offered to you at $1 you would play the Nashist strategy, or that if it were offered at $3 you would refuse to play it. It just means that the minimum fair price for the game must be at least $2. I believe that if you think of Nash equilibria as a way to establish the minimum price for the game, then Nash equilibria make much more sense.
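    One way to reconcile the $2 floor with the rules (reading it as an expected guarantee when the two roles are assigned by a fair coin is an added assumption; play_centipede is the sketch from the top of the post):

    ```python
    def security_values():
        """Worst-case guaranteed payoffs for a player who defects at
        his first opportunity, over all opponent strategies."""
        opponent = list(range(1, 99)) + [None]
        mutt_worst = min(play_centipede(1, j)[0] for j in opponent)
        jeff_worst = min(play_centipede(m, 1)[1] for m in opponent)
        return mutt_worst, jeff_worst

    print(security_values())  # (4, 0): $4 guaranteed as Mutt, $0 as Jeff
    # Over a fair coin flip for the roles: (4 + 0) / 2 = $2 in expectation.
    ```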
