December 20th, 2003

Ethics

I got so annoyed with the CD drive on my office computer that I had to go to Radio Shack and buy a new one before I could get back to work. You might think this is just me procrastinating again, but the old drive was really really bad. And the new one is quite good, even if I did have to change the jumper settings before I could get it to work. (Who knew that drives still had jumper settings?)

But this isn’t meant to be a technology post. I wanted to mention a couple of things about my favourite ethical theory. For the reasons Andy and I lay out here I no longer think it is the one true ethical theory, but it’s nevertheless my favourite theory.

It’s a form of consequentialism, so in general it says the better actions are those that make for better worlds. (I fudge the question of whether we should maximise actual goodness in the world, or expected goodness according to our actual beliefs, or expected goodness according to rational beliefs given our evidence. I lean towards the last, but it’s a tricky question.) What’s distinctive is how we say which worlds are better: w1 is better than w2 iff, behind the veil of ignorance, we’d prefer being in w1 to being in w2.

What I like about the theory is that it avoids so many of the standard counterexamples to consequentialism. We would prefer to live in a world where a doctor doesn’t kill a patient to harvest her organs, even if that means we’re at risk of being one of the people who are not saved. Or at least I think we would prefer that; I could be wrong. But I think our intuition that the doctor’s action is wrong is only as strong as our preference for not being in that world.

We even get something like agent-centred obligations out of the theory. Behind the veil of ignorance, I think I’d prefer to be in a world where parents love their children (and vice versa) and pay special attention to their needs, rather than in a world where everyone is a Benthamite maximiser. This implies it is morally permissible (perhaps even obligatory) to pay special attention to one’s nearest and dearest. And we get that conclusion without having to make some bold claims, as Frank Jackson does in his paper on the ‘nearest and dearest objection’, about the moral efficiency of everyone looking after their own friends and family. (Jackson’s paper is in Ethics 1991.)

So in practice, we might make the following judgment. Imagine that two children, a and b, are at (very mild) risk of drowning, and their parents A and B are standing on the shore. I think there’s something to be said for a world where A goes and rescues her child a, and B rescues her child b, at least if other things are entirely equal. (I assume that A and B didn’t make some prior arrangement to look after each other’s children, because the prior obligation might affect who they should rescue.)

But what if other things are not equal? (I owe this question to Jamie Dreier.) Imagine there are 100 parents on the beach, and 100 children to be rescued. If everyone goes for their own child, 98 will be rescued. If everyone goes for the child most in danger, 99 will be rescued. Could the value of paying special attention to your own loved ones make up for the disvalue of having one more drown? The tricky thing, as Jamie pointed out, is that we might ideally want the following situation: everyone is disposed to give preference to their own children, but they act against their underlying dispositions in this case so the extra child gets rescued. From behind the veil of ignorance, after all, we’d be really impressed by the possibility that we would be the drowned child, or one of her parents.
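
To make the trade-off concrete, here is the back-of-the-envelope arithmetic behind the veil of ignorance for this case, on the (obviously too crude) simplifying assumption that the chooser cares only about whether her own child survives:

    P(your child drowns | every parent rescues her own child) = 2/100
    P(your child drowns | every parent rescues the child most in danger) = 1/100

So a chooser who cared about nothing but survival would prefer the second world, and the case for the agent-centred policy has to rest on the value of parents looking after their own children outweighing that one-in-a-hundred difference.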

It’s not clear this is a counterexample to the theory. It might be that the right thing is for every parent to rescue the nearest child, and that this is what we would choose behind the veil of ignorance. But it does make the theory look less like one with agent-centred obligations than I thought it was.

This leads to a tricky taxonomic question. Is the theory I’ve sketched one in which there are only neutral values (in Parfit’s sense), or are there relative values? Is it, that is, a form of ‘Big-C Consequentialism’? Of course in one sense there are relative values, because what is right is relative to what people would choose from behind the veil of ignorance, and different people might reasonably differ on that. But setting that aside, within a community with common interests, do we still have relative values or neutral values? This probably just reflects my ignorance, but I’m not really sure. On the one hand we have a neutrally stated principle that applies to everyone. On the other, we get the outcome that it is perfectly acceptable (perhaps even obligatory) to pay special attention to your friends and family because they are your friends and family. So I’m not sure whether this is an existence proof that Big-C Consequentialist theories can allow this kind of favouritism, or a proof that we don’t really have a Big-C Consequentialist theory at all.

Posted by Brian Weatherson in Uncategorized



11 Responses to “Ethics”

  1. pekka says:

    Very interesting. I’m wondering what you want to say about the implications of your biconditional about betterness regarding the issue of so-called deontological restrictions/constraints. This bears on whether your view is a form of Big-C Consequentialism.

    Suppose that commonsense intuitions are deontological in the way that most of the literature says they are. Then it seems that in Thomson’s “Bystander at the Switch” variant of the trolley case we’d prefer behind the veil of ignorance to be in a world where the switch is thrown to divert the trolley, the one is killed, and the five are saved, but that in the “Fat Man” variant we’d prefer to be in a world where the fat man is not thrown off the footbridge to stop the trolley with his bulk and thereby prevent the five from being killed.

    Assuming the central consequentialist claim that rightness is a straightforward function of betterness (say, the right action is one which is no worse than any of the available alternatives), your biconditional captures these intuitions. But it does so only through the claim that killing one innocent person to save five innocents is sometimes better and sometimes worse than letting five innocents die, depending on how those outcomes are brought about — not through what strikes me as the more intuitive claim that five innocents alive and one innocent dead is always (in this sort of case, anyway) better than five innocents dead and one innocent alive. The latter claim enjoys widespread acceptance among non-consequentialists as well, many of whom therefore grant that deontological constraints cannot be justified by appeal to the goodness of outcomes. That’s why they deny that rightness is a straightforward function of the goodness of outcomes.

    I see a few puzzles arising from all this.

    One, are people just mistaken to think that constraints cannot be justified by appeal to the goodness of outcomes? If your view is a form of Big-C Consequentialism but grounds deontological constraints, then the issue of constraints cannot be a central dividing line between consequentialists and deontologists. This would be rather surprising.

    Two, can the issue of deontological constraints even be raised in your framework, if you accept that rightness is a straightforward function of the goodness of outcomes? Constraints are usually characterized as moral limits that make certain ways of producing the best outcomes impermissible/wrong.

    Three, do the commonsense intuitions reveal an intuitive “misfit” between the notion of goodness of outcomes and the notion of what we’d prefer behind the veil of ignorance? I’m kind of tempted to say Yes to try and explain the first two puzzles. The worry is that your biconditional accommodates deontological intuitions only by smuggling them in on the RHS and then construing betterness of outcomes accordingly — but in a way that makes hash of the issue of constraints.

    Perhaps I’m just missing something, or else misinterpreting your view. For example, I think I’m assuming that preferences regarding a pair of worlds at time t can take into account how the worlds came to be the way they are at t. Perhaps that’s illegitimate. And you may be tacitly operating with certain decision-theoretic background assumptions about the relationship between preference and betterness, such that the notion of what world we’d prefer, behind the veil of ignorance, to be in doesn’t capture commonsense intuitions about the Fat Man case after all. But that’s not clear from your post.

    Since labels for philosophical intuitions don’t really matter, I’m less curious about whether your view is or is not a form of Big-C Consequentialism than about which of the assumptions that generate these worries you’re most inclined to reject.

  2. pekka says:

    Oops, a typo in the last paragraph. I meant to say “philosophical positions”, not “intuitions”. Sorry.

  3. Brian Weatherson says:

    Good questions! Here’s my first pass at answers.

    First, I don’t think we’ll get constraints here in the sense of absolute bars. For all the theory says, it might be acceptable to push the fat man in front of the trolley to save 50, even if it isn’t OK to do so to save 5. Everything’s a trade-off, but the trade-offs include what we might previously have thought of as intentional descriptions of the action.

    The second point is right – this is a consequentialist theory so it doesn’t have deontological constraints built in. (Again, the fat man who could save 50 becomes relevant here.)

    And I’m certainly allowing ‘backwards-looking’ preferences to play a role. I take it the evaluations on worlds can be as holistic as one likes, and that can include cross-time considerations, backwards-looking considerations, etc.

  4. pekka says:

    Brian, the way I framed my questions allows that constraints are non-absolute, in that they may fail to hold above some threshold of lives saved. Many who see themselves as non-consequentialists allow that anyway.

    Is the point of the last sentence in your first reply that one factor relevant to the goodness of a world is the intentions with which people act in it, so long as we’d prefer to be in a world where people act with certain intentions and not with others? Then I can see how a world where we end up with five innocents alive and one innocent dead can sometimes be better but sometimes worse than a world where we end up with five innocents dead and one innocent alive — provided that our preferences behind the veil of ignorance are sufficiently Kantian to weight the avoidance of certain means to the best outcomes more heavily than the achievement of the best outcomes. I still wonder whether this is a fruitful way to construe the betterness relation.

  5. pekka says:

    … where “the best outcomes” is a careless shorthand for “the outcomes in which the greater number of lives is saved”.

  6. Heath White says:

    There’s something odd about this theory. If I prefer, from behind the veil of ignorance, to be in a world in which the Fat Man is not pushed into the path of the trolley, this is because I have certain moral qualms about living in such a world. In fact, think of the situation this way: suppose that five innocents will die unless the fat man is pushed in front of the trolley, and from behind the VI I have to choose between a world where he is pushed and a world where he isn’t; and furthermore, what is “veiled” is which of the six individuals in the situation I am. From that perspective, there is a pretty strong temptation to choose the world in which the fat man is pushed, since statistically I am more likely to be one of the five innocents than the innocent fat man. My point is that if I prefer a world in which the fat man is not pushed, this is not a self-interested preference but some kind of moral one.

    But now isn’t there something strange about a moral theory that is relative to our other moral intuitions? For instance, if we asked some of Homer’s heroes whether they’d prefer to be in a world in which the aristocrats got all the honor and died young in battle, or to live in a peaceful democracy, I think they’d choose the former world, because it seems more morally attractive to them. (Let’s suppose.) If you ask me, I’ll go for the peaceful democracy. So what does the theory say about this? Are right/wrong and better/worse relative to the existing moral intuitions of individuals? (societies?) If so, why not simply track those intuitions, and be a straight relativist without the detour through consequentialism? If not, we owe an account of whose preferences from behind the VI we’re talking about.

  7. Ralph Wedgwood says:

    I’m not completely clear whether the theory that Brian is sketching is meant to be a sort of “act consequentialism” (aka “direct consequentialism”), or some sort of “indirect consequentialism” instead.

    (Admittedly, Brian’s formulation “the better actions are those that make for better worlds” does sound fairly act-consequentialist, but perhaps there are other interpretations.)

    At all events, I’d like to insist on the pretty familiar point that no act-consequentialist theory can capture the agent-centered obligations that Brian seems to think that his theory can.

    E.g. suppose that behind the veil of ignorance, we prefer being in a world where n innocent people are brutally killed over a world in which n + 1 people are brutally killed. But now suppose it turns out that the only way for you, in your current situation, to prevent two innocents from being brutally killed is by brutally killing one innocent person yourself. Then if Brian’s theory were act-consequentialist, it would say that the best action for you is brutally to kill this innocent person. (This is an old point of Nozick’s.) But this obviously flies in the face of the deontological intuitions that non-consequentialists typically care about.

    No doubt Brian’s theory could capture these agent-centred obligations if it is a version of indirect consequentialism. (Of course, indirect consequentialist theories face familiar problems of their own…)

    But if Brian’s theory is a form of indirect consequentialism, we would need to know exactly what bearing the (agent-neutral) facts about the comparative goodness of worlds have on the rightness of acts — if it isn’t the simple act-consequentialist idea that an act is right iff, were it performed, the world would be at least as good as it would be if any alternative act were performed. Is the theory a form of “rule consequentialism”? Or “motive consequentialism”? Or what?

  8. Brian Weatherson says:

    On Heath’s point, this is certainly meant to be a form of dispositionalism about value. As such it goes dangerously close to relativism – too close for real comfort I’d say. But the other options are similarly uncomfortable, so I’m not sure this is disastrous.

    I should have been more careful with the point about constraints. I do think this is one of the nicer features of the theory though, so it is worth saying a little about it. On a plausible deontological theory, we’ll have to have constraints on actions, and qualifications on the constraints, and qualifications on the qualifications, and so on potentially forever. Now one might at this stage start to wonder if anything unifies all this complexity. One position of course is that nothing does – there’s just a giant mess. But I think my pet theory has as good a shot as most at trying to see what’s behind all this mess. It doesn’t quite work, but it goes close.

    It is meant to be a form of act consequentialism. But I’m inclined to think the theory says the right thing about Nozick’s point. Here’s the progression of thoughts I have.

    (1) The evaluations here are ridiculously fine-grained. It’s possible to prefer n people being killed to n+1 being killed, but not prefer the world where n are killed to save the n+1 to the one where the n+1 are killed. So the fact that it’s consequentialist doesn’t entail that we’ll have no odd restrictions of this type.

    (2) But that’s a very strange preference ranking on worlds to have, so maybe in practice the theory does have that consequence.

    (3) But reflecting on that fact makes me wonder about how strong the intuitions were in the first place that said it’s wrong to kill n to save n+1. If they all are ‘in the same boat’, then maybe it’s morally OK.

    The important issue is whether our evaluation of the acts varies with our evaluations of the worlds (in all their detail). My inclination is still to say that it does in life-and-death cases, but it does not in the mundane cases Andy and I describe.

  9. Ralph says:

    I share Brian’s worry about unity. But I’m inclined to think that the best approach will probably be some form of indirect consequentialism, although to avoid the problems that famously beset (e.g.) rule-utilitarianism, it would have to be a form of indirect consequentialism radically unlike any that have so far been devised (I’m still light years away from working out the details, alas).

    On Brian’s response to Nozick’s distinction between respecting rights and minimizing the occurrence of rights violations, here are three comments:

    (1) I’m all in favour of super-fine-grained evaluations. And actually, I don’t think that the preference ranking on worlds that Brian mentions is at all strange — although I sympathize with Heath for complaining that we surely have this preference ranking on worlds for distinctively moral reasons.

    (2) Even if we incorporate these fine-grained evaluations of worlds, the central feature of act-consequentialism remains: in judging the rightness (or bestness) of acts, we need only look at agent-neutral facts about the worlds that would be actual were each of the available acts performed; and so, in particular, nothing about the specific role that the particular agent plays in the world is relevant.

    E.g. suppose that I face two alternatives, A and B: If I do A, I will be respecting people’s rights; if I do B, I will be grossly violating people’s rights. However, philosophers have arranged that if I do B, someone else who would otherwise grossly violate people’s rights (just as I do in doing B) will instead respect them (just as I would do if I did A), so that from behind the veil of ignorance, the world that would be actual if I do A and the world that would be actual if I do B are completely on a par. Then act-consequentialists will say that A and B are on a par. I find this intuitively unacceptable.

    (3) Like many other Aussie and NZ philosophers (Jack Smart, Jonathan Bennett, Frank Jackson, Peter Singer, et al.), Brian obviously doesn’t find this intuitively unacceptable. I used to think (uncharitably) that philosophers who defended act-consequentialism were just wilfully ignoring the clarion call of intuition because they were seduced by the siren song of an excessively simple moral theory. But now I’m more inclined to think that these philosophers really have different pre-theoretical intuitions. (On my own epistemological views, this makes both sides justified in their moral beliefs — and also justified in regarding the intuitions of the other side as unreliable!)

  10. Daniel Elstein says:

    I think it was R. M. Hare who first pointed out that when you’re dealing with a two-level consequentialist theory like this you can see it either as act- or rule-consequentialist. To get the act-consequentialist interpretation, all you have to say is that the relevant action is that of inculcating a certain disposition (what Hare calls ‘prima-facie principles’). Basically to evaluate the consequences of particular acts of self-training you have to look at the consequences of the relevant dispositions being universalized (e.g. possessed by the people who are relevantly similar to you). This perhaps leaves a little more leeway than you intended, because it might be that ineradicable differences in character require different dispositions, though you can close the gap if you want by specifying that everyone is relevantly similar to you. So the short answer is that Consequentialism can allow for favouritism.

  11. James Balfour says:

    I know this doesn’t have much to do with anything, but if someone looks at the consequences of responding one way or another in a moral dilemma, and responds in a way that will result in what that person thinks are the best consequences, and this is called the consequentialist view, what is the opposite?
    James