Ethics

I got so annoyed with the CD drive on my office computer that I had to go to Radio Shack and buy a new one before I could get back to work. You might think this is just me procrastinating again, but the old drive was really really bad. And the new one is quite good, even if I did have to change the jumper settings before I could get it to work. (Who knew that drives still had jumper settings?)

But this isn’t meant to be a technology post. I wanted to mention a couple of things about my favourite ethical theory. For the reasons Andy and I lay out here, I no longer think it is the one true ethical theory, but it’s nevertheless my favourite theory.

It’s a form of consequentialism, so in general it says the better actions are those that make for better worlds. (I fudge the question of whether we should maximise actual goodness in the world, or expected goodness according to our actual beliefs, or expected goodness according to rational beliefs given our evidence. I lean towards the last, but it’s a tricky question.) What’s distinctive is how we say which worlds are better: w1 is better than w2 iff, behind the veil of ignorance, we’d prefer to be in w1 rather than in w2.
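
To make the two clauses explicit (this is just my own gloss, and the value function V is a label I’m introducing for the veil-of-ignorance preferences, not anything built into the theory’s official statement):

```latex
% A gloss on the view (my notation, not the post's official statement).
% V is a hypothetical value function representing the preferences we would
% have behind the veil of ignorance.
\[ w_1 \text{ is better than } w_2 \iff V(w_1) > V(w_2) \]

% One of the three readings fudged above: maximise expected V, with the
% expectation taken over rational credences given the agent's evidence.
\[ A \text{ is right} \iff \mathbb{E}[V(w_A)] \ge \mathbb{E}[V(w_{A'})] \text{ for every alternative } A' \]
```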

What I like about the theory is that it avoids so many of the standard counterexamples to consequentialism. We would prefer to live in a world where a doctor doesn’t kill a patient to harvest her organs, even if that means we’re at risk of being one of the people who are not saved. Or at least I think we would prefer that; I could be wrong. But I think our intuition that the doctor’s action is wrong is only as strong as our preference for not being in that world.

We even get something like agent-centred obligations out of the theory. Behind the veil of ignorance, I think I’d prefer to be in a world where parents love their children (and vice versa) and pay special attention to their needs, rather than in a world where everyone is a Benthamite maximiser. This implies it is morally permissible (perhaps even obligatory) to pay special attention to one’s nearest and dearest. And we get that conclusion without having to make some bold claims, as Frank Jackson does in his paper on the ‘nearest and dearest objection’, about the moral efficiency of everyone looking after their own friends and family. (Jackson’s paper is in Ethics 1991.)

So in practice, we might make the following judgment. Imagine that two children, a and b, are at (very mild) risk of drowning, and their parents A and B are standing on the shore. I think there’s something to be said for a world where A goes and rescues her child a, and B rescues her child b, at least if other things are entirely equal. (I assume that A and B didn’t make some prior arrangement to look after each other’s children, because such a prior obligation might affect who they should rescue.)

But what if other things are not equal? (I owe this question to Jamie Dreier.) Imagine there are 100 parents on the beach, and 100 children to be rescued. If everyone goes for their own child, 98 will be rescued. If everyone goes for the child most in danger, 99 will be rescued. Could the value of paying special attention to your own loved ones make up for the disvalue of having one more child drown? The tricky thing, as Jamie pointed out, is that we might ideally want the following situation: everyone is disposed to give preference to their own children, but they act against their underlying dispositions in this case, so that the extra child gets rescued. From behind the veil of ignorance, after all, we’d be really struck by the possibility that we might be the drowned child, or one of her parents.
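
Just to make the veil-of-ignorance arithmetic explicit (assuming, purely for illustration, that each child is equally likely to be among the unrescued):

```latex
% Risk to an arbitrary child under each policy, if everyone follows it.
\[ P(\text{drowns} \mid \text{everyone rescues their own child}) = \tfrac{2}{100} = 0.02 \]
\[ P(\text{drowns} \mid \text{everyone rescues the child most in danger}) = \tfrac{1}{100} = 0.01 \]
```

So from the position of a possible child (or a possible parent) behind the veil, the second policy halves the risk; the question is whether the value of parental attachment in the first world can outweigh that.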

It’s not clear this is a counterexample to the theory. It might be that the right thing is for every parent to rescue the nearest child, and that this is what we would choose behind the veil of ignorance. But it does make the theory look less like one with agent-centred obligations than I had thought.

This leads to a tricky taxonomic question. Is the theory I’ve sketched one in which there are only neutral values (in Parfit’s sense), or are there relative values too? Is it, that is, a form of ‘Big-C Consequentialism’? Of course in one sense there are relative values, because what is right is relative to what people would choose from behind the veil of ignorance, and different people might reasonably differ on that. But setting that aside, and considering a community with common interests, do we still have relative values or neutral values? This probably just reflects my ignorance, but I’m not really sure. On the one hand we have a neutrally stated principle that applies to everyone. On the other, we get the outcome that it is perfectly acceptable (perhaps even obligatory) to pay special attention to your friends and family because they are your friends and family. So I’m not sure whether this is an existence proof that Big-C Consequentialist theories can allow this kind of favouritism, or a proof that we don’t really have a Big-C Consequentialist theory at all.