Suzy has a favourite bottle. She values it at $100.
Billy has thrown a rock at Suzy’s favourite bottle. It will soon hit and shatter the bottle.
Suzy cannot intercept Billy’s rock or save the bottle, but she can throw her own rock at the bottle so that it hits at the same time as Billy’s, and jointly causes the shattering.
The bottle fairy gives Suzy $1 for every bottle she shatters with a rock, including those she co-shatters.
What should Suzy do?
Standard versions of “causal decision theory” say that Suzy should throw the rock. She will lose the bottle either way, and this way she gets $1 from the bottle fairy.
A more purely *causal* theory, one that says you should do what has the best causal consequences, would say that she shouldn’t throw. Throwing causes a net $99 loss for Suzy – destroying her $100 bottle and getting back $1 from the bottle fairy. Not throwing has no salient causal consequences. Since causing nothing beats causing a net $99 loss, she shouldn’t throw.
What are usually called causal decision theories are really counterfactual decision theories. Suzy should throw because she would be better off if she threw than if she didn’t throw. That her throwing would cause a net loss, and holding her arm would not, is irrelevant. I side with the counterfactual theories here over the purely causal theories, but the main point I want to make is that what is standardly called causal decision theory does not just say “Do whatever has the best causal consequences.”
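The contrast between the two evaluations can be made concrete with a toy calculation. The dollar figures are from the example above; the table names and the framing of each rule are mine, a minimal sketch rather than anything from the decision-theory literature:

```python
# Suzy's case: the bottle shatters whatever she does.
BOTTLE_VALUE = 100   # what Suzy's bottle is worth to her
FAIRY_PAYMENT = 1    # bottle fairy's fee per (co-)shattering

# Counterfactual evaluation: how good the whole world would be
# under each act. The bottle is lost either way, so its loss
# appears in both entries.
counterfactual = {
    "throw": -BOTTLE_VALUE + FAIRY_PAYMENT,   # -99
    "hold":  -BOTTLE_VALUE,                   # -100
}

# Purely causal evaluation: only what the act itself causes.
# Throwing (co-)causes the shattering, so the bottle's loss is
# charged to it; holding her arm causes nothing of note.
purely_causal = {
    "throw": -BOTTLE_VALUE + FAIRY_PAYMENT,   # -99
    "hold":  0,
}

def best(table):
    """Pick the act with the highest value under a given evaluation."""
    return max(table, key=table.get)

print(best(counterfactual))   # throw
print(best(purely_causal))    # hold
```

Same payoffs, opposite verdicts: the only difference is whether the inevitable loss of the bottle is charged to the act or held fixed across both options.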
In "his book on David Lewis":http://www.amazon.com/exec/obidos/redirect?tag=caoineorg-20&camp=14573&creative=327641&link_code=am1&path=http%3A//www.amazon.com/gp/product/offer-listing/0773529306%3Fcondition%3Dall/ASIN/0773529306, Daniel Nolan wonders why Lewis doesn’t link his ethical theory more closely to his causal decision theory. I think cases like this show why we might want decision theory and ethics to come apart. What I’ve been calling a purely causal decision theory is more appropriate for ethical decision making. (Or at least that seems to be Lewis’s view.) We can see this by changing my example a little.
Change the example so the bottle is Sally’s, not Suzy’s. Sally values it at $100. Suzy assigns no value to the bottle, but does value the $1 she will get from the bottle fairy for breaking it. In standard cases (i.e. when the bottle is safe), it would be wrong for Suzy to break Sally’s $100 bottle for the $1 from the bottle fairy. Lewis’s view, I think, is that the same is true even when Billy’s rock is bound to break the bottle anyway. The world would be no worse off if Suzy threw her rock and co-broke the bottle. But it would be vicious of Suzy to do this – even if X is going to occur anyway, it is wrong to _cause_ X when X is a bad outcome.
Here is a less charitable way of putting Lewis’s position. The sunk cost fallacy is a fallacy for prudential decision making, but it is not always a fallacy for ethical decision making.
Coincidentally, as I was writing this the iPod played Bob Dylan singing “Unless you have made no mistakes in your life, be careful of the stones that you throw.”