Newcomb and Mixed Strategies

In a nice paper in a recent _Philosophical Review_, Alan Hajek argued that Pascal’s argument in the Wager fails because Pascal doesn’t take account of mixed strategies. I’ve been spending too much of today wondering whether the same thing is true in other fields. (Not that I’m entirely convinced by Hajek’s argument, but the response would take another post, and historical research, and that’s for another week.)

For a while I thought mixed strategies could solve some of the problems “Andy Egan discusses”:http://www.geocities.com/eganamit/NoCDT.pdf in his paper on causal decision theory. Maybe they can, but I’m not so sure. For now I just want to discuss what they do to Nick Bostrom’s “Meta-Newcomb Problem”:http://www.nickbostrom.com/papers/newcomb.html.

The first thing to say is that it’s hard to know what mixed strategies would do here, because Bostrom doesn’t say what his predictors do if they predict you’ll use one. I’ll follow Nozick and say that predicting a mixed strategy is the same as predicting a 2-box choice. Importantly, I make this assumption both for Bostrom’s Predictor and his Meta-Predictor. But if the “Predictor” is not predicting, but is in fact reacting to your choice (as is a possibility in Bostrom’s game), then I’ll assume that what matters is which choice you make, not how you make it. So choosing 1 box by a mixed strategy will be the same as choosing 1 box by a pure strategy for the purposes of its causal consequences.
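Here is a minimal sketch of those two assumptions in code, purely for concreteness. The function names and the representation of a strategy (just the probability of 2-boxing) are my own illustrative choices, not anything in Bostrom’s paper.

```python
import random

def predicted_choice(p_two_box):
    """Nozickian rule: any genuinely mixed strategy is predicted as a 2-box choice."""
    if p_two_box == 0.0:
        return "1-box"
    # Pure 2-boxing and every mixed strategy get the same prediction.
    return "2-box"

def realized_choice(p_two_box):
    """If the Predictor is merely reacting, only the output of the randomiser matters."""
    return "2-box" if random.random() < p_two_box else "1-box"

print(predicted_choice(0.001))   # '2-box': predicted as a 2-boxer
print(realized_choice(0.001))    # almost always '1-box'
```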

Given those assumptions, it sort of seems that the “best” thing to do in Bostrom’s case is to adopt a mixed strategy with probability _e_ of choosing 2 boxes, for vanishingly small _e_. Since the Meta-Predictor treats a mixed strategy as a 2-box choice, if the Meta-Predictor is “right” your choice will cause the Predictor to wait until you’ve made your decision; and since the randomiser tells you to take just 1 box with probability 1 minus _e_, that is how often the reacting Predictor will put the million in. So with probability 1 minus a vanishingly small amount, you’ll get the million. (Scare quotes because I’ve had to put an odd interpretation on the Meta-Predictor’s prediction to make it make sense as a prediction. But this is just in keeping with the Nozickian assumptions with which I started.)
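For concreteness, here is how the expected payoff works out, assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 or nothing in the opaque box) and that the waiting Predictor fills the opaque box just in case you end up taking only it. The numbers and function name are illustrative assumptions, not anything quoted from Bostrom’s paper.

```python
def expected_payoff(eps):
    """Expected dollars from the mixed strategy that 2-boxes with probability eps,
    given that the Predictor waits and reacts to the choice you actually make."""
    one_box_payoff = 1_000_000   # take only the opaque box; the reacting Predictor fills it
    two_box_payoff = 1_000       # take both boxes; the reacting Predictor leaves the opaque box empty
    return (1 - eps) * one_box_payoff + eps * two_box_payoff

for eps in (0.1, 0.01, 0.001):
    print(f"e = {eps}: expected payoff = ${expected_payoff(eps):,.0f}")
# As e shrinks, the expectation approaches the full million.
```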

Problem solved, at least under one set of assumptions.

Now I had to set up the assumptions about how to deal with mixed strategies in just the right way for this to work. Presumably there are other ways of setting them up that would be interesting. I’m not interested in games where predictors are assumed to know the outputs of the randomising devices used in mixed strategies; that seems too much like backwards causation. But there could be many other assumptions that lead to interesting puzzles.

UPDATE: Be sure to read the many interesting comments below, especially Bob Stalnaker’s very helpful remarks.