April 19th, 2005

How Causal is Causal Decision Theory?

Suzy has a favourite bottle. She values it at $100.

Billy has thrown a rock at Suzy’s favourite bottle. It will soon hit and shatter the bottle.

Suzy cannot intercept Billy’s rock or save the bottle, but she can throw her own rock at the bottle so that it hits at the same time as Billy’s, and jointly causes the shattering.

The bottle fairy gives Suzy $1 for every bottle she shatters with a rock, including those she co-shatters.

What should Suzy do?

Standard versions of “causal decision theory” say that Suzy should throw the rock. She will lose the bottle either way, and this way she gets $1 from the bottle fairy.

A more purely causal theory, one that says you should do what has the best causal consequences, would say that she shouldn't throw. Throwing causes a net $99 loss for Suzy: it destroys her $100 bottle and gets her back only $1 from the bottle fairy. Not throwing has no salient causal consequences. Since causing nothing beats causing a $99 loss, she shouldn't throw.
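One quick way to see where the two verdicts come apart is just to tabulate the numbers. Here is a minimal sketch in Python, assuming only the dollar values stipulated above ($100 bottle, $1 from the fairy) and that Billy's rock shatters the bottle whatever Suzy does; the function names are illustrative, not part of any standard decision-theoretic formalism.

    BOTTLE_VALUE = 100
    FAIRY_PAYMENT = 1

    def counterfactual_value(suzy_throws):
        """Value of the world that would obtain if Suzy acted this way."""
        # The bottle shatters either way, so Suzy is down $100 regardless;
        # throwing adds the fairy's $1.
        return -BOTTLE_VALUE + (FAIRY_PAYMENT if suzy_throws else 0)

    def purely_causal_value(suzy_throws):
        """Net value of the outcomes Suzy's own act would (co-)cause."""
        # Throwing co-causes the shattering (a -$100 outcome) and earns $1;
        # not throwing causes nothing of note.
        return (-BOTTLE_VALUE + FAIRY_PAYMENT) if suzy_throws else 0

    for throws in (True, False):
        print(f"throw={throws}: counterfactual={counterfactual_value(throws):+d}, "
              f"purely causal={purely_causal_value(throws):+d}")

    # Counterfactual comparison: throwing (-99) beats not throwing (-100), so throw.
    # Purely causal comparison: not throwing (0) beats throwing (-99), so don't throw.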

What are usually called causal decision theories are really counterfactual decision theories. Suzy should throw because she would be better off if she threw than if she didn’t throw. That her throwing would cause a net loss, and holding her arm would not, is irrelevant. I side with the counterfactual theories here over the purely causal theories, but the main point I want to make is that what is standardly called causal decision theory does not just say “Do whatever has the best causal consequences.”

In his book on David Lewis, Daniel Nolan wonders why Lewis doesn't link his ethical theory more closely to his causal decision theory. I think it is cases like this that show why we might want decision theory and ethics to come apart. What I've been calling a purely causal decision theory is more appropriate for ethical decision making. (Or at least it seems to be according to Lewis.) We can see this by changing my example a little.

Change the example so the bottle is Sally's, not Suzy's. Sally values it at $100. Suzy assigns no value to the bottle, but does value the $1 she will get from the bottle fairy for breaking it. It would be wrong in standard cases (i.e. when the bottle is safe) for Suzy to break Sally's $100 bottle for the $1 from the bottle fairy. Lewis's view, I think, is that the same is true even when Billy's rock is bound to break the bottle anyway. The world would not be worse off if Suzy threw her rock and co-broke the bottle. But it would be vicious of Suzy to do this: even if X is going to occur anyway, it is wrong to cause X if X is a bad outcome.

Here is a less charitable way of putting Lewis’s position. The sunk costs fallacy is a fallacy for prudential decision making, but it is not always a fallacy for ethical decision making.

Coincidentally, as I was writing this the iPod played Bob Dylan singing “Unless you have made no mistakes in your life, be careful of the stones that you throw.”

Posted by Brian Weatherson in Uncategorized


6 Responses to “How Causal is Causal Decision Theory?”

  1. Daniel Nolan says:

    For anyone who is worried, the book is available at Amazon.com for $22.95 as well as for the price you see if you click through – Brian has linked to the hardback, which is why it looks so pricey. (Unfortunately the US Amazon mistakenly seems to call the hardback a “paperback” too, but it has two listings, one for the hardback price and one for the paperback price, so I assume that’s what’s going on.)

    [Fixed now. I hadn’t realised that Amazon misspelled hardback as ‘paperback’ – BW]

  2. gunge says:

    cool.

    Also: note that your comment system is broken (on Firefox). If you enter your name/email address and then select 'no' for "remember personal information", it resets the name/email address fields to empty.

  3. Campbell says:

    In the second case (where the bottle is Sally’s), it might be said that the world would be worse if Suzy threw her rock, because then the world would contain one more vicious act than it would have otherwise, and the fewer vicious acts there are the better. What would you say to such a view?

  4. Reza says:

    I don’t see how a counterfactual CDT parts ways with a pure CDT in this case.

    Let S be the bottle’s shattering.

    S is not counterfactually dependent on Suzy's throw (ST) given Billy's throw (BT), though S is counterfactually dependent on the truth of the disjunctive event STvBT.

    Suzy knows BT prior to deciding whether or not to throw, and therefore knows STvBT. So her degree of belief in the proposition ~ST&S is 1 prior to deciding; and hence her degree of belief that she has lost $100 is likewise 1 prior to her deciding whether or not to throw.

    So might we not argue that she would not consider her throwing a cause of S from her epistemic perspective at the time of choice? If that's right, then both a counterfactual CDT and pure CDT enjoin her to throw, as throwing has the salient causal consequence of earning her $1.

    Perhaps we do not want to allow for disjunctive events?

  5. Reza says:

    Or rather, what I meant to conclude: since Suzy's throw isn't a cause on a counterfactual dependence analysis of causation (in which case the fairy won't give her $1), it follows that both a counterfactual and a pure CDT enjoin her not to throw, as throwing has no salient causal consequences.

    It's not clear to me how we get "joint causation" on the counterfactual dependence analysis. Is this something we want an analysis of causation to account for? Are there genuine joint causes?

  6. Brian Weatherson says:

    Campbell,

    I'm perfectly happy with that kind of response; indeed, it's just the sort of thing I take note of in the kind of consequentialism I think is correct for most everyday cases. (Not quite all cases, because of worries about amusing but wrong pranks, but most cases.)

    Reza,

    That’s right if we consider a purely counterfactual analysis of causation. I think (not at all originally) that these are cases that show a purely counterfactual analysis of causation to be mistaken. If a purely counterfactual analysis is right, then causal decision theory and counterfactual decision theory can’t come apart.