Here is the ‘Death in Damascus’ case from Allan Gibbard and William Harper’s classic paper on causal decision theory.
bq. Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said, ‘I have come for you’.
bq. ‘But I thought you would be looking for me in Damascus’, said the man.
bq. ‘Not at all’, said Death, ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.
bq. Now suppose the man knows the following. Death works from an appointment book which states time and place; a person dies if and only if the book correctly states in what city he will be at the stated time. The book is made up weeks in advance on the basis of highly reliable predictions. An appointment on the next day has been inscribed for him. Suppose, on this basis, the man would take his being in Damascus the next day as strong evidence that his appointment with Death is in Damascus, and would take his being in Aleppo the next day as strong evidence that his appointment is in Aleppo…
bq. If… he decides to go to Aleppo, he then has strong grounds for expecting that Aleppo is where Death already expects him to be, and hence it is rational for him to prefer staying in Damascus. Similarly, deciding to stay in Damascus would give him strong grounds for thinking that he ought to go to Aleppo.
Causal decision theorists often say that in these cases, there is no rational thing to do. Whatever the man does, he will (when he does it) have really good evidence that he would have been better off if he had done something else. Evidential decision theorists often say that this is a terrible consequence of causal decision theory, but it seems plausible enough to me. It’s bad to make choices that bring about your untimely death, or that you have reason to believe will bring about your untimely death, and that’s what the man here does. So far I’m a happy causal decision theorist.
But let’s change the original case a little. (The changes are similar to the changes in Andy Egan’s various “counterexamples to causal decision theory”:www.geocities.com/eganamit/NoCDT.pdf.) The man wants to avoid death, and he believes that Death will predict where he will go tomorrow, and go there tomorrow, and that he’ll die iff he is where Death is. But he has other preferences too. Let’s say that his live options are to spend the next 24 hours somewhat enjoyably in Las Vegas, or exceedingly unpleasantly in Death Valley. Then you might think he’s got a reason to go to Vegas; he’ll die either way, but it will be a better end in Vegas than in Death Valley.
Let’s make this a little more precise with some demons and boxes. There is a demon who is, as usual, very good at predicting what you’ll do. The demon has put two boxes, A and B, on the table in front of you, and has put money in them by the following rules.
- If the demon has predicted that you’ll choose and take A, then the demon put $1400 in B, and $100 in A.
- If the demon has predicted that you’ll choose and take B, then the demon put $800 in A, and $700 in B.
- If the demon has predicted that you’ll play some kind of mixed strategy, then the demon has put no money in either box, because the demon doesn’t stand for that kind of thing.
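To keep the numbers straight, here’s a minimal sketch of the setup in Python. The table and variable names are mine, not part of the puzzle, and the sketch assumes, as stipulated, that the demon predicts accurately, so whichever box you take is the box that was predicted.

bc. # Hypothetical sketch of the two-box puzzle; table and names made up for this post.
# contents[prediction] = (dollars in box A, dollars in box B)
contents = {"A": (100, 1400), "B": (800, 700), "mixed": (0, 0)}
# Assuming an accurate demon, your choice settles which row applies.
take_A = contents["A"][0]  # you get $100, while box B sits there holding $1400
take_B = contents["B"][1]  # you get $700, while box A sits there holding $800
print(take_A, take_B)      # 100 700

Either way, the box you leave behind holds more than the box you take.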
What should you do? Three possible answers come to mind.
*Answer 1*: If you take box A, you’ll probably get $100. If you take box B, you’ll probably get $700. You prefer $700 to $100, so you should take box B.
_Verdict_: *WRONG!* This is exactly the reasoning that leads to taking one box in Newcomb’s problem, and one-boxing is wrong. (If you don’t agree, then you’re not going to be in the target audience for this post I’m afraid.)
*Answer 2*: There’s nothing you can rationally do. If you choose A, you would have been better off choosing B, and you’ll know this. If you choose B, you would have been better off choosing A, and you’ll know this. If you walk away, or mentally flip a coin, you’ll get nothing, which seems terrible.
_Verdict_: I think this is correct, but I have three worries.
First, the argument that the mixed strategy is irrational goes by a little quickly. If you are sure you are going to play a mixed strategy, then you couldn’t do any better than by playing it, so it isn’t obviously irrational. So perhaps what’s really true is that if you know that you aren’t going to play a mixed strategy, then playing a mixed strategy would have a lower payoff than playing some pure strategy. For instance, if you are playing B, then if you had played the mixed strategy (Choose B with probability 0.5, Choose A with probability 0.5), your expected return would have been $750, which is less than the $800 that you would have got if you’d chosen A. And this generalises: whichever pure strategy you actually play, and whichever mixed strategy you consider as an alternative, there is some pure strategy that would have done better than that mixture. So for anyone who’s not playing a mixed strategy, it would be irrational to play a mixed strategy. And I suspect that condition covers all readers.
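For what it’s worth, here’s that arithmetic spelled out, using the same made-up table as the sketch above and holding fixed a prediction of B:

bc. # Box contents when the demon predicted B (from the hypothetical table above).
in_A, in_B = 800, 700
mixed_ev = 0.5 * in_A + 0.5 * in_B   # expected return of the 50/50 mixture
print(mixed_ev)                      # 750.0
print(max(in_A, in_B))               # 800, what taking A outright would have returned
# More generally, p*in_A + (1-p)*in_B never exceeds max(in_A, in_B) for 0 <= p <= 1,
# so against any fixed prediction some pure strategy matches or beats any mixture.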
Second, this case seems like a pretty strong argument against Richard Jeffrey’s preferred view, which uses evidential decision theory but restricts attention to the ratifiable strategies. Only mixed strategies are ratifiable in this puzzle, but mixed strategies seem absolutely crazy here. So don’t restrict yourself to ratifiable strategies.
Third, it seems odd to give up on the puzzle like this. Here’s one way to express our dissatisfaction with Answer 2. The puzzle is asymmetric: box B’s outcome profile is quite different from box A’s. But our answer is symmetric: either pure strategy is irrational from the perspective of someone who is planning to play it. Perhaps we can put that dissatisfaction to work.
*Answer 3*: If you choose A, you could have done much much better choosing B. If you choose B, you could have done a little better choosing A. So B doesn’t look as bad as A by this measure. So you should choose B.
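Spelling that comparison out, with the same hypothetical table as before and the demon’s accurate prediction held fixed:

bc. # How much more the untaken box would have paid, given an accurate prediction.
regret_if_A = 1400 - 100   # took A for $100 while box B held $1400
regret_if_B = 800 - 700    # took B for $700 while box A held $800
print(regret_if_A, regret_if_B)   # 1300 100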
_Verdict_: Tempting, but ultimately I think inconsistent.
I think the intuitions that Andy pumps with his examples are really driven by something like this reasoning. But I don’t think the reasoning really works. Here is a less charitable, but I think more revealing, way of putting the reasoning.
bq. Choosing A is really irrational. Choosing B is only a bit irrational. Since as rational agents we want to minimise irrationality, we should choose B, since that is minimally irrational.
But it should be clear why that can’t work. If choosing B is what rational agents do, i.e. is rational, then one of the premises of our reasoning is mistaken. B is not a little bit irrational; rather, it is not irrational at all. If choosing B is irrational, as the premises state, then we can’t conclude that it is rational.
The only alternative is to deny that B is even a little irrational. But that seems quite odd, since choosing B involves doing something that you know, when you do it, is less rewarding than something else you could just as easily have done.
So I conclude Answer 2 is correct. Either choice is less than fully rational. There isn’t anything that we can, simply and without qualification, say that you should do. That’s a problem for those who think decision theory should aim for completeness, but cases like this suggest that completeness was an implausible aim all along.