A question or two for non-consequentialists

A question or two for non-consequentialists. (–Doesn’t that include me now? –Err, guess so.)

I’ve always worried that non-consequentialist views are in some hard-to-pin-down way somewhat selfish. The agent who lives by the rules of virtue theory does not act to maximise the amount of virtue in the world, but to maximise her own virtue. (This hopefully isn’t her motivation as such, just a description of her motivation.) It is as if the primary concern of this agent is not courage, or honesty, or whatever, but her courage or her honesty or whatever. Such an agent may well sacrifice her material interests for the sake of others (especially if she really is courageous), but she won’t sacrifice her virtue for the sake of others, or even for the sake of other people’s virtue. This kind of consideration (usually expressed a wee bit more carefully) for a long time made me hopeful that a kind of sophisticated consequentialism could be worked out.

In particular, I take the following kind of consequentialist position very seriously. Define a ranking on societies roughly the way Rawls did, but by determining which societies we would prefer to be in from behind the veil of ignorance. Ideally we could even get a cardinal measure out of this ordinal measure, perhaps by the kinds of techniques Ramsey et al. developed. Then the moral worth of an action is determined by the difference in quality thus measured between the world as it is with the action performed and the world as it would be had the action not been performed. (Or, if you care especially about moral luck, let the expected value of this quantity, rather than its actual value, guide the decision.) This gives you a consequentialist theory (or an expected-consequentialist theory, which is often taken to be the same kind of theory) that has two nice features. First, it avoids all of the obvious counterexamples to simpler versions of consequentialism. If we would prefer, behind the veil of ignorance, that doctors not cut up patients for body parts, that parental love be a significant feature of the world, and so on (and I think we would), then we can account for the divergence between what we ordinarily take to be moral behaviour and behaviour that maximises hedons or whatever other simple thing utilitarians want to maximise. Secondly, we don’t seem to fetishise our own virtue in the way we worried the virtue theorist’s agent does.
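Just to make the shape of the proposal explicit, here is a rough gloss of the measure. (The symbols V, a, and w are my own labels for what the previous paragraph describes in prose, not anything to be found in Rawls or Ramsey.)

\[
\mathrm{worth}(a) \;=\; V(w_a) - V(w_{\lnot a}),
\qquad \text{or, on the moral-luck-sensitive variant,} \qquad
\mathbb{E}\!\left[\, V(w_a) - V(w_{\lnot a}) \,\right],
\]

where \(w_a\) is the world as it is with the action \(a\) performed, \(w_{\lnot a}\) is the world as it would have been had \(a\) not been performed, and \(V\) is the cardinal quality measure extracted from the behind-the-veil ranking on societies.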

Anyway, I take it this isn’t a wildly popular position, despite its manifest advantages, and indeed even I retract it in the pranks paper. Though, to be sure, I still think it is at least extensionally correct for all important moral decisions. (Just in case you particularly care, I still think that kind of position is the fallback or tiebreaker whenever more fundamental moral principles clash, and in practice every important moral decision involves a clash of moral principles, so in practice I’m still a consequentialist. Just not in theory.) But if it isn’t right, then two interesting questions come up. In each case I’ll assume an agent faces a choice between an action M that is morally right and an action C that produces the best consequences in the above sense.

First question. What should we advise the agent to do? It is almost a tautology that one should do the right thing. But it is not as obvious that one should always advise others to do the right thing.

Second question. If the agent chooses to do C, knowing full well that M is morally preferable but that C will produce a better world, what should our attitude towards her be? Should we thank her for making the world a better place? Criticise her for being immoral? Both?

While I was a happy consequentialist I never had to face these questions, and indeed I sort of thought that the difficulties with either answer (to either question) were some evidence in favour of consequentialism. But now I face these difficulties, and it isn’t particularly fun. Maybe I’ll just go back to philosophy of language, or probability or something where there are easier questions.

(Both these questions arose out of conversations with Andy Egan, so if somehow you thought that just raising these questions was a morally worthy act, or even somehow that it made for a better world, praise or thank him I guess. Not that I know anything about praise or thanks any more.)