Newcomb’s Centipede

The following puzzle is a cross between the “Newcomb puzzle”:http://en.wikipedia.org/wiki/Newcomb%27s_paradox and the “centipede game”:http://en.wikipedia.org/wiki/Centipede_game.

You have to pick a number between 1 and 50; call that number _u_. A demon, who is exceptionally good at predictions, will try to predict what you pick, and will pick a number _d_, between 1 and 50, that is 1 less than her prediction of _u_. If she predicts that _u_ is 1, she can’t do this, so she’ll pick 1 as well. The demon’s choice is made before your choice is made, but only revealed after your choice is made. (If the demon predicts that you’ll use a mixed strategy to choose _u_, she’ll set _d_ to 1 less than the lowest number that you have some probability of choosing.)

Depending on what numbers the two of you pick, you’ll get a reward determined by the formulas below.

If _u_ is less than or equal to _d_, your reward will be 2u.
If _u_ is greater than _d_, your reward will be 2d – 1.
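
In symbols, writing r(u, d) for the reward (the function name is just my label for it):

```latex
r(u, d) =
\begin{cases}
  2u      & \text{if } u \le d \\
  2d - 1  & \text{if } u > d
\end{cases}
```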

For an evidential decision theorist, it’s clear enough what you should do. Almost certainly, your payout will be 2u – 3, so you should maximise _u_: pick 50, and get a payout of 97.
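
Spelling out the arithmetic behind that figure: almost certainly the demon predicts correctly, so for any pick _u_ > 1,

```latex
d = u - 1 \;\Rightarrow\; u > d \;\Rightarrow\; \text{payout} = 2d - 1 = 2(u - 1) - 1 = 2u - 3
```

which increases with _u_, and equals 2 × 50 – 3 = 97 at _u_ = 50.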

For a causal decision theorist, it’s clear enough what you should not do. We know that the demon won’t pick 50. If the demon won’t pick 50, then picking 49 has a return that’s better than picking 50 if _d_ = 49, and as good as picking 50 in all other circumstances. So picking 49 dominates picking 50, so 50 shouldn’t be picked.
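
That dominance claim is easy to check by brute force. A minimal sketch in Python, where the reward function just transcribes the formulas above:

```python
def reward(u, d):
    """Payout from the puzzle: 2u if u <= d, else 2d - 1."""
    return 2 * u if u <= d else 2 * d - 1

# The demon never picks 50, so compare picking 49 with picking 50
# against every pick d the demon might actually make.
for d in range(1, 50):
    assert reward(49, d) >= reward(50, d)   # 49 is never worse
assert reward(49, 49) > reward(50, 49)      # and strictly better when d = 49
print("Picking 49 weakly dominates picking 50.")
```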

Now the interesting question: what *should* you pick if you’re a causal decision theorist? I know of three arguments that you should pick 1, but none of them sounds completely convincing.

_Backwards Induction_
The demon knows you’re a causal decision theorist. So the demon knows that you won’t pick 50. So the demon won’t pick 49; she’ll pick at most 48. If it is given that the demon will pick at most 48, then picking 48 dominates picking 49. So you should pick at most 48. But the demon knows this, so she’ll pick at most 47, and given that, picking 47 dominates picking 48. Repeating this pattern all the way down to 1 gives us an argument for picking 1.
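
The whole induction can be run mechanically. A sketch of the elimination loop, under the assumptions above (at each step the demon’s pick is capped at one below your highest surviving option, and the newly dominated top option is deleted):

```python
def reward(u, d):
    return 2 * u if u <= d else 2 * d - 1

live = list(range(1, 51))        # options not yet eliminated
while len(live) > 1:
    top, runner_up = live[-1], live[-2]
    demon_max = max(top - 1, 1)  # the demon picks 1 below her prediction
    picks = range(1, demon_max + 1)
    dominates = (all(reward(runner_up, d) >= reward(top, d) for d in picks)
                 and any(reward(runner_up, d) > reward(top, d) for d in picks))
    if not dominates:
        break
    live.pop()                   # the demon learns of this elimination too
print(live)                      # -> [1]
```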

I’m suspicious of this because it’s similar to the bad backwards induction arguments that have been criticised effectively by Stalnaker, and by Pettit & Sugden. But it’s not quite the same as the arguments that they criticised, and perhaps it is successful.

_Two Kinds of Conditionals_
In his very interesting “The Ethics of Morphing”:http://web.mit.edu/~casparh/www/Papers/CJHareMorphing.pdf, Caspar Hare appears to suggest that causal decision theorists should be sympathetic to something like the following principle. (Caspar stays neutral between evidential and causal decision theory, so it isn’t his principle. And the principle might be slightly stronger than even what he attributes to the causal decision theorist, since I’m not sure the translation from his lingo to mine is entirely accurate. Be that as it may, this idea was inspired by what he said, so I wanted to note the credit.)

Say an option is _unhappy_ if, supposing you take it, there is another option that would have been better to take, and _happy_ if, supposing you take it, it would have been worse to have taken any other option. Then if one option is happy, and all the others are unhappy, you should take the happy option.

Every option but picking 1 is unhappy. Supposing you pick _n_, for _n_ greater than 1, the demon will pick _n_ – 1, and given that, you would have been better off picking _n_ – 1. But picking 1 is happy. Supposing you pick 1, the demon will pick 1 too, and you would have been worse off picking anything else.
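
Given the assumption that the demon predicts perfectly, so that her pick is _n_ – 1 (or 1 when you pick 1), the classification can be checked directly. A sketch:

```python
def reward(u, d):
    return 2 * u if u <= d else 2 * d - 1

def happy(n):
    """Happy: supposing you pick n, every other pick would have done worse."""
    d = max(n - 1, 1)   # the demon's pick, given a correct prediction
    return all(reward(m, d) < reward(n, d) for m in range(1, 51) if m != n)

print([n for n in range(1, 51) if happy(n)])   # -> [1]
```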

There’s something to the _pick happy options_ principle, so this argument is somewhat attractive. But this does seem like a bad consequence of the principle.

_Stable Probability_
In Lewis’s version of causal decision theory, we have to look at the probability of various counterfactuals of the form _If I were to pick n, I would get k dollars_. But we aren’t really told where those probabilities come from. In the Newcomb problem that doesn’t matter; whatever probabilities we assign, two-boxing comes out best. But the probabilities matter a lot here.

Now it isn’t clear what constrains the probabilities in question, but I think the following sounds like a sensible constraint. If you pick _n_, the probability that the demon picks _n_ – 1 (or _n_ if _n_ = 1) should be very high. That’s relevant because the counterfactuals in question (what would I have got had I picked something else) are determined by what the demon picks.

Here’s a constraint that seems plausible. Say an option is Lewis-stable if, conditional on your picking it, it has the highest “causally expected utility”. (“Causally expected utility” is my term for the value that Lewis thinks we should try to maximise.) Then the constraint is that if there’s exactly one Lewis-stable option, you should pick it.

Again, it isn’t too hard to see that only 1 is Lewis-stable. Conditional on picking _n_ > 1, the demon almost certainly picked _n_ – 1, and picking _n_ – 1 would have returned 2n – 2 rather than the 2n – 3 you actually get, so _n_ isn’t stable; conditional on picking 1, nothing beats your actual payout of 2. So you should pick 1.
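
Here too the claim can be checked directly, if we idealise the very high probability to certainty, so that conditional on your picking _n_ the demon picked _n_ – 1 (or 1). A sketch:

```python
def reward(u, d):
    return 2 * u if u <= d else 2 * d - 1

def lewis_stable(n):
    """Is n a best response to the pick the demon (almost certainly)
    made, conditional on your picking n?"""
    d = max(n - 1, 1)
    return all(reward(n, d) >= reward(m, d) for m in range(1, 51))

print([n for n in range(1, 51) if lewis_stable(n)])   # -> [1]
```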

_Summary_
It seems intuitively wrong to me to pick 1. It doesn’t dominate the other options. Indeed, unless the demon picks 1, it is the worst option of all. And I like causal decision theory. So I’d like a good argument that the causal decision theorist should pick something other than 1. But I’m worried (a) that causal decision theory recommends taking 1, and (b) that if that isn’t true, it makes no recommendation at all. I’m not sure either is a particularly happy result.