November 26th, 2008

Asymmetric Death in Damascus

Here is the ‘Death in Damascus’ case from Allan Gibbard and William Harper’s classic paper on causal decision theory.

Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said, ‘I have come for you’.

‘But I thought you would be looking for me in Damascus’, said the man.

‘Not at all’, said Death, ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.

Now suppose the man knows the following. Death works from an appointment book which states time and place; a person dies if and only if the book correctly states in what city he will be at the stated time. The book is made up weeks in advance on the basis of highly reliable predictions. An appointment on the next day has been inscribed for him. Suppose, on this basis, the man would take his being in Damascus the next day as strong evidence that his appointment with Death is in Damascus, and would take his being in Aleppo the next day as strong evidence that his appointment is in Aleppo…

If… he decides to go to Aleppo, he then has strong grounds for expecting that Aleppo is where Death already expects him to be, and hence it is rational for him to prefer staying in Damascus. Similarly, deciding to stay in Damascus would give him strong grounds for thinking that he ought to go to Aleppo.

Causal decision theorists often say that in these cases, there is no rational thing to do. Whatever the man does, he will (when he does it) have really good evidence that he would have been better off if he had done something else. Evidential decision theorists often say that this is a terrible consequence of causal decision theory, but it seems plausible enough to me. It’s bad to make choices that bring about your untimely death, or that you have reason to believe will bring about your untimely death, and that’s what the man here does. So far I’m a happy causal decision theorist.

But let’s change the original case a little. (The changes are similar to the changes in Andy Egan’s various counterexamples to causal decision theory.) The man wants to avoid death, and he believes that Death will predict where he will go tomorrow, and go there tomorrow, and that he’ll die iff he is where Death is. But he has other preferences too. Let’s say that his live options are to spend the next 24 hours somewhat enjoyably in Las Vegas, or exceedingly unpleasantly in Death Valley. Then you might think he’s got a reason to go to Vegas; he’ll die either way, but it will be a better end in Vegas than in Death Valley.

Let’s make this a little more precise with some demons and boxes. There is a demon who is, as usual, very good at predicting what you’ll do. The demon has put two boxes, A and B, on the table in front of you, and has put money in them by the following rules.

- If she predicted that you would choose box A, she put $100 in box A and $1400 in box B.
- If she predicted that you would choose box B, she put $800 in box A and $700 in box B.
- If she predicted that you would play a mixed strategy, she put nothing in either box.

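To fix ideas, here is a minimal sketch (mine, not the post’s; the dictionary layout and the accuracy parameter are my own framing, and it ignores the chance of a mixed prediction when you play a pure strategy) that encodes these rules and computes the evidential expected value of each pure choice, the quantity Answer 1 below reasons from:

```python
# Payoff rules as stated above: PAYOFF[prediction][choice] is what
# your chosen box contains, given the demon's prediction.
PAYOFF = {
    "A":     {"A": 100, "B": 1400},  # demon predicted you'd choose A
    "B":     {"A": 800, "B": 700},   # demon predicted you'd choose B
    "mixed": {"A": 0,   "B": 0},     # demon predicted a mixed strategy
}

def evidential_value(choice, accuracy=0.99):
    """Expected payoff of a pure choice, treating the choice itself
    as strong evidence (with the given accuracy) that the demon
    predicted that very choice."""
    other = "B" if choice == "A" else "A"
    return (accuracy * PAYOFF[choice][choice]
            + (1 - accuracy) * PAYOFF[other][choice])

for c in ("A", "B"):
    print(c, evidential_value(c))  # A: 107.0, B: 707.0
```
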
What should you do? Three possible answers come to mind.

Answer 1: If you take box A, you’ll probably get $100. If you take box B, you’ll probably get $700. You prefer $700 to $100, so you should take box B.

Verdict: WRONG! This is exactly the reasoning that leads to taking one box in Newcomb’s problem, and one-boxing is wrong. (If you don’t agree, then you’re not going to be in the target audience for this post, I’m afraid.)

Answer 2: There’s nothing you can rationally do. If you choose A, you would have been better off choosing B, and you’ll know this. If you choose B, you would have been better off choosing A, and you’ll know this. If you walk away, or mentally flip a coin, you’ll get nothing, which seems terrible.

Verdict: Correct, I think, but with three worries.

First, the argument that the mixed strategy is irrational goes by a little quickly. If you are sure you are going to play a mixed strategy, then you couldn’t do any better than by playing it, so it isn’t obviously irrational. So perhaps what’s really true is that if you know that you aren’t going to play a mixed strategy, then playing a mixed strategy would have a lower payoff than playing some pure strategy. For instance, if you are playing B, then had you played the mixed strategy (choose B with probability 0.5, choose A with probability 0.5), your expected return would have been $750, which is less than the $800 that you would have got if you’d chosen A. And this generalises: whichever pure strategy you are playing, and whichever mixed strategy you consider as an alternative, there is a pure strategy that would have done better than that mixed strategy. So for anyone who’s not playing a mixed strategy, it would be irrational to play a mixed strategy. And I suspect that condition covers all readers.
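
That generalisation can be checked by brute force. Here is a small sketch (mine; the payoffs are the ones from the rules above) which holds the demon’s prediction fixed at the pure strategy you actually play, and confirms that every genuinely mixed alternative does strictly worse than the best pure alternative:

```python
# With the prediction held fixed at the pure strategy you play, a
# mixed strategy's payoff is a weighted average of the two pure
# payoffs, so it falls strictly below the better pure option.
PAYOFF = {"A": {"A": 100, "B": 1400}, "B": {"A": 800, "B": 700}}

for played in ("A", "B"):        # the demon predicted `played`
    row = PAYOFF[played]
    best_pure = max(row.values())
    for i in range(1, 100):      # p = chance the mixture chooses A
        p = i / 100
        mixed = p * row["A"] + (1 - p) * row["B"]
        assert mixed < best_pure
print("every genuinely mixed strategy loses to some pure strategy")

# E.g. against a prediction of B, the 50/50 mixture returns
# 0.5 * 800 + 0.5 * 700 = $750, versus $800 from choosing A outright.
```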

Second, this case seems like a pretty strong argument against Richard Jeffrey’s preferred view of using evidential decision theory, but restricting attention to the ratifiable strategies. Only mixed strategies are ratifiable in this puzzle, but mixed strategies seem absolutely crazy here. So don’t restrict yourself to ratifiable strategies.

Third, it seems odd to give up on the puzzle like this. Here’s one way to express our dissatisfaction with Answer 2. The puzzle is quite asymmetric; box B is quite different to box A in terms of its outcome profile. But our answer is symmetric; either pure strategy is irrational from the perspective of someone who is planning to play it. Perhaps we can put that dissatisfaction to work.

Answer 3: If you choose A, you could have done much much better choosing B. If you choose B, you could have done a little better choosing A. So B doesn’t look as bad as A by this measure. So you should choose B.
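
Answer 3 in effect ranks the options by how much better you could have done, given the correct prediction that your choice brings with it. Here is a quick sketch of that comparison (the regret framing and the function name are mine, not the post’s):

```python
# Regret of a pure choice: the gap between the best payoff available
# under the demon's (correct) prediction and what you actually get.
PAYOFF = {"A": {"A": 100, "B": 1400}, "B": {"A": 800, "B": 700}}

def regret(choice):
    row = PAYOFF[choice]  # the demon correctly predicted `choice`
    return max(row.values()) - row[choice]

print(regret("A"))  # 1400 - 100 = 1300: much, much better to have taken B
print(regret("B"))  # 800 - 700 = 100: only a little better to have taken A
```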

Verdict: Tempting, but ultimately I think inconsistent.

I think the intuitions that Andy pumps with his examples are really driven by something like this reasoning. But I don’t think the reasoning really works. Here is a less charitable, but I think more revealing, way of putting the reasoning.

Choosing A is really irrational. Choosing B is only a bit irrational. Since as rational agents we want to minimise irrationality, we should choose B, since that is minimally irrational.

But it should be clear why that can’t work. If choosing B is what rational agents do, i.e. is rational, then one of the premises of our reasoning is mistaken: B is not a little bit irrational; rather, it is not irrational at all. And if choosing B is irrational, as the premises state, then we can’t conclude that it is rational.

The only alternative is to deny that B is even a little irrational. But that seems quite odd, since choosing B involves doing something that you know, when you do it, is less rewarding than something else you could just as easily have done.

So I conclude Answer 2 is correct. Either choice is less than fully rational. There isn’t anything that we can, simply and without qualification, say that you should do. This is a problem for those who think decision theory should aim for completeness, but cases like this suggest that completeness was an implausible aim.

Posted by Brian Weatherson in Uncategorized



10 Responses to “Asymmetric Death in Damascus”

  1. wo says:

    Maybe in this case the rational thing would be to toss a die to decide whether to choose A or B or a mixed strategy?

    I’m also not sure that the mixed strategy would be “absolutely crazy”. If you opt for either of the pure strategies, you know that a mixed strategy would have higher expected payoff, while if you opt for a mixed strategy, you know that you’d fare no better if instead you had chosen a pure strategy. Yes, you can be certain that you’ll get nothing if you choose the mixed strategy. But you didn’t have a choice: the boxes were empty all along!

    BTW, there was a brief discussion of asymmetric Death in Damascus cases between Reed Richter and William Harper in the mid-1980s, starting (I think) with Richter’s “Rationality Revisited”.

  2. Brian Weatherson says:

    I think a meta-mixed strategy like you describe just is a mixed strategy, and I think the demon would agree. (Since it’s my demon, I can specify that she would agree!)

    I’m not convinced by the defence of a mixed strategy, but let me note that the argument against Jeffrey can be made with weaker assumptions than I have so far. On Jeffrey’s view, mixed strategies are uniquely rational. But there’s no credal state which, combined with either evidential or causal decision theory, makes the mixed strategy do better than both pure strategies. So it seems very odd to say that it would be uniquely rational.

    Thanks for the references – I’ll chase them up.

  3. wo says:

    I see. I always thought of ratifiability as an additional constraint: we should rule out unratifiable options on the grounds that they are unratifiable, no matter their expected utility; then we choose the best among the remaining options. It’s true that the mixed strategies never do better in terms of expected utility, but they always do better in terms of ratifiability.

  4. Kenny Easwaran says:

    A further problem with Answer 3 – it looks like it only depends on the differences between the amounts in the two boxes given a prediction, and not the comparison between the payoffs if the demon has made the correct prediction. For instance, if you added a million dollars to both payoffs if the demon predicted you would choose A, Answer 3 would still argue for choosing B. But it seems that this addition would at least give some more weight towards choosing A. (Though maybe I’m wrong – this might just be my one-boxer sympathies coming through here.)

    Also, in your mention of Answer 1, I think you switched “700” and “100”.

  5. Avrom says:

    The thing that really gets to me about this, and what might be the source of your intuitive “ugh” about answer 2, is this: Unlike with the “one-boxing” answer in Newcomb’s Paradox, there’s no direct counterargument to the “Box B” argument here. In Newcomb’s paradox, you really do have these two arguments that both look good prima facie, and which pull in opposite directions, and you have to decide which one you find more convincing. But here, there’s no reason not to buy the argument for just choosing B, except that you know one-boxing in general doesn’t work because of Newcomb’s paradox.

    In other words, it’s just hard to come up with a general theory of rational decision that supports the intuitively pretty appealing notion of picking Box B. If we didn’t know about Newcomb’s Paradox itself, there would be no puzzle here—we’d take Box B and be happy.

  6. Avrom says:

    So, I think I have a potential solution to this problem:

    http://avromandina.net/avrom/2008/12/dealing-with-asymmetric-death-in-damascus/

    Thoughts?

  7. Ralph Wedgwood says:

    I’ve been working on an answer to exactly this problem, in a paper that I’m now calling “Gandalf’s Solution to the Newcomb Problem”:

    http://users.ox.ac.uk/~mert1230/gandalf.ltr.pdf

    Basically my approach is a version of Brian’s Answer 3.

    I deny that picking Box B in Brian’s example is the slightest bit irrational. Of course, it’s quite true that “choosing B involves doing something that you know, when you do it, is less rewarding than something else you could just as easily have done”. But the crucial point is that choosing A would involve doing something that you know, when you do it, would be atrociously suboptimal. So B is to be preferred to A, and choosing the option that is to be preferred is rational.

    Kenny’s objection seems to me the crucial one that I have to answer. I think that what I have to do is to argue that only a one-boxer can really view this objection as having any force.

  8. Ralph Wedgwood says:

    P.S. Obviously, the kind of answer that I want to give about these asymmetric “Death in Damascus” cases will constrain what I can consistently say about the original Death in Damascus case.

    So I say that in the original Death in Damascus case, both of the available options are rational. It’s a ghastly situation to be in, but that is just because of the virtual certainty of imminent death, not because rational choice is impossible!

  9. growthmetal says:

    Can you link to your arguments against one-boxing?

  10. says:

    I’ve got to admit, I don’t understand how the rational move isn’t to just pick box B. It seems intuitively obvious to me.

    If I choose box A, it contains $100.
    If I choose box B, it contains $700.

    Whatever the demon put in the other box based on my predicted decision seems irrelevant if it’s that good of a predictor. Even if the demon somehow gets it wrong, and predicts my action incorrectly, choosing box B still gets me $1400 instead of $800.

    This really seems straightforward to me; picking B gets me more money regardless of the demon’s prediction. How is B not the rational answer?
