Absolutism and Uncertainty

Frank Jackson and Michael Smith have a very nice new paper on a puzzle for absolutist ethical theories. An absolutist ethical theory is a theory that says actions of a certain kind (call it K) cannot be done, no matter how good the consequences that would result from doing such an action. So an absolutist might, for instance, say that it is always impermissible to kill an innocent person, no matter how many lives we might save that way.

Frank and Michael (hereafter FM) point out that it will always be uncertain whether a particular action is or is not of kind K. And an ethical theory that tells us we cannot do things when they are of kind K, should tell us what to do when they are probably, or perhaps, of kind K. That question, they argue, absolutists cannot give a satisfactory answer to. I don’t want to defend absolutism, which I think is generally absurd to be frank, but I’m not sure FM have quite put their finger on exactly where the problem is.

Two disclaimers before we continue.

The first thing to note here is that being of kind K and being probably of kind K are very different things, at least logically speaking. An absolutist may say that you ought never do things that are K, whether or not you know they are, or indeed whether or not you have any reason to think they are. This kind of moral theory would parallel a kind of extreme externalism in epistemology. Of course it isn't a very appealing view, and it ends up giving an enormous role to moral luck, but I can imagine someone holding it. Still, we'll set it aside. We'll imagine that our absolutist agrees that doing something of kind K when you believe, in good faith and for good reasons, that it is not of that kind could be morally acceptable. (So far I'm just following FM.)

We’ll also set aside problems concerning agents who have irrational beliefs. FM only talk about the agent’s probability that the action is of kind K. But presumably the agent’s reasons matter as well. If all the evidence points towards the action being of that kind, but the agent firmly believes anyway that it isn’t, it isn’t altogether obvious that the absolutist should excuse her actions. I think I’m disagreeing with something FM say at the bottom of page 7. But this case is tricky: the agent is at fault, but is it an epistemic or a moral fault? Or both? These complications are, to say the least, complicated, so I’ll ignore them by assuming we are dealing with agents who are perfect at calculating the effect of evidence. (I’ll also ignore complications to do with agents who don’t know what their evidence is.)

Having set those things aside, FM survey a range of things that the absolutist might say. The view they spend the most time on is what Andy Egan called the “Big Bad Number” approach. This view has the following features.

  • A moral agent aims to maximise expected moral value. (This makes it sound consequentialist, but only in a trivial sense of consequentialism. See Campbell Brown, "Consequentialise This", for more discussion.)
  • Actions of kind K (the prohibited kind) have a disvalue of B, where B is the big bad number.
  • The most (absolute) value (or disvalue) that can attach to any action not of a prohibited kind is C, where C < B.
  • An action whose probability p of being of kind K exceeds C/B is itself prohibited. For if p > C/B, then pB > C, so the expected disvalue outweighs the maximum expected value.
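The threshold logic in that last bullet can be sketched as a tiny calculation. This is only an illustration of the arithmetic, not anything from FM's paper; B is normalised to 1 and C = B/2 (the ratio discussed below) as my assumed values.

```python
# Illustrative sketch of the Big Bad Number threshold rule.
# B, C, and the probabilities are made-up numbers, not from FM's paper.
B = 1.0          # disvalue of an action of the prohibited kind K
C = 0.5 * B      # maximum value attaching to any non-prohibited action

def permissible(p):
    """An action with probability p of being of kind K is permissible
    only if its expected disvalue p*B doesn't exceed C, i.e. p <= C/B."""
    return p * B <= C

print(permissible(0.3))   # True: below the C/B = 0.5 threshold
print(permissible(0.6))   # False: above it
```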

So far so good. FM ask two questions.

First, what should C/B be? FM suggest any value is arbitrary, but I think there’s a natural answer to this, namely one half. If someone is probably innocent, then it’s wrong to kill them. At least that seems as intuitive to me as the original absolutist position. And I’ll assume it in what follows, though little actually turns on this. (FM suggest that C/B should be closer to .95. I should note that if we think C should equal B, which is consistent with absolutism at least strictly as stated, some of the problems below don’t seem to arise. I’ll set that option aside for this post.)

The second question concerns what to do in cases where there are two actions to be done each of which has some chance of being of kind K. Here’s the kind of case they have in mind.

Two skiers (X and Y) are skiing down different parts of the mountain. Each of them is headed towards a point that you know for sure will trigger a snow slip that will kill ten people. You aren't sure whether this is inadvertent, or whether they intend to kill these people. You have a gun and could shoot and kill one or the other, preventing the relevant snow slip, but there's no other way to stop the snow slips and stop the ten (or twenty) people dying.

In each case, you assign probability 0.3 to the skier being innocent, with the probability of innocence in the two cases being independent. So the probability that at least one is innocent is 1 − 0.7 × 0.7 = 0.51.
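As a quick sanity check on that figure, here is the arithmetic, using the 0.3 innocence probability from the example:

```python
# P(at least one skier is innocent), with independent 0.3 probabilities:
# 1 - P(both guilty) = 1 - 0.7 * 0.7
p_innocent = 0.3
p_at_least_one = 1 - (1 - p_innocent) ** 2
print(round(p_at_least_one, 2))  # 0.51
```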

Let’s assume that the value of saving 10 people is greater than the disvalue of killing a person whose probability of innocence is 0.3. (Or, in what seems like equivalent language, we might say that the disvalue of allowing 10 people to die in the snow slip is greater than the disvalue of killing a person who has a 30% chance of being innocent.) Killing both would mean doing something that has a probability greater than C/B, i.e. one half, of being of kind K. What should you do?

FM argue, on pages 16 to 19, that none of the four options available (shoot both, shoot X only, shoot Y only, shoot neither) is morally acceptable. And, they argue, it is implausible that this is a genuine moral dilemma. So absolutists don’t have a general solution to the problem of what to do in cases of uncertainty about whether an action is K.

I think there are a couple of moves left open to the absolutist here. We should expect this, since dilemmas can’t arise on numerical maximising theories, and the theory we’ve presented so far is a numerical maximising theory. I don’t think any of the options are particularly attractive, but they are options.

First, we might look at things the following way. There is a value to saving lives. The more lives saved the better. But the value of life saving is never above C. So the function v, such that v(N) = x iff x is the value of saving N lives, asymptotically approaches C. (Or perhaps it reaches a limit less than C, but I think the asymptotic approach is preferable.) Without much loss of generality, I’ll assume that v(10) = 0.35B, and v(20) = 0.45B.

Given all these assumptions, what should someone do? Answer: they should either shoot X, or shoot Y, but not both. Why should they shoot one of them? Because the value of saving 10 people is 0.35B, and the expected cost of shooting someone whose probability of innocence is 0.3 is 0.3B. So shooting one is good to do. Why not shoot the second then? Because the value of saving an extra 10 people is the difference between v(20) and v(10), i.e. 0.1B, and that's less than the 0.3B expected disvalue of shooting the person who may well be innocent.
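The comparison just described can be laid out as a small calculation. The numbers (v(10) = 0.35B, v(20) = 0.45B, an expected killing cost of 0.3B) are the post's assumptions, with B normalised to 1; the dictionary of options is just my bookkeeping.

```python
# Expected value of each option under the value framing, B normalised to 1.
B = 1.0
v = {0: 0.0, 10: 0.35 * B, 20: 0.45 * B}  # assumed value of saving N lives
kill_cost = 0.3 * B  # expected disvalue of shooting a possibly-innocent skier

options = {
    "shoot neither": v[0],
    "shoot one":     v[10] - kill_cost,      # 0.35B - 0.3B = 0.05B
    "shoot both":    v[20] - 2 * kill_cost,  # 0.45B - 0.6B = -0.15B
}
print(max(options, key=options.get))  # shoot one
```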

Now I think it’s more than a little odd that the agent has to shoot one or the other, but not both, despite the symmetrical situation. But saying it’s a Buridan’s Ass situation is at least more plausible than saying it’s a Sophie’s Choice situation.

This isn’t the only way of looking at things though. Perhaps instead of looking at the value of saving people, we should look at the disvalue of letting people die. Again, I’ll assume there’s a function l, such that l(N) measures the loss involved in letting N people die. And I’ll assume that l(N) asymptotically approaches C. It seems reasonable in the circumstances to say that l(10) = 0.35B, and l(20) = 0.45B, and more generally that l(x) = v(x) for all x.

What should the agent looking at things this way do? Answer: shoot neither. If they shoot one of the skiers, they will reduce the expected loss from allowing people to die from 0.45B to 0.35B, i.e. by 0.1B. And that loss reduction is less than the moral cost of shooting someone who has a probability 0.3 of being innocent.
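Run with the same bookkeeping, the loss framing flips the recommendation. Again this is only a sketch of the arithmetic in the paragraph above, with B normalised to 1 and l(N) = v(N) as assumed.

```python
# Expected loss of each option under the loss framing, B normalised to 1.
B = 1.0
l = {0: 0.0, 10: 0.35 * B, 20: 0.45 * B}  # assumed loss of letting N die
kill_cost = 0.3 * B  # expected disvalue of shooting a possibly-innocent skier

options = {
    "shoot neither": l[20],                  # 0.45B
    "shoot one":     l[10] + kill_cost,      # 0.35B + 0.3B = 0.65B
    "shoot both":    l[0] + 2 * kill_cost,   # 0.6B
}
print(min(options, key=options.get))  # shoot neither
```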

Now this is a passing strange way to look at things. An agent with this approach who sees the first skier should be getting ready to pull the trigger, until she sees the second skier, at which point she should put the rifle away. Well, I know that moral value is not generally intrinsic to a situation, and as Moore taught us, the values of wholes may not be simply related to the values of parts. But this seems a very strange kind of extrinsic moral value.

Still, FM said that there is no approach that the absolutist could take here. But I’ve argued that there are two things they could say. An embarrassment of riches! Quite literally, because the presence of two options should be genuinely embarrassing. It seems that what the absolutist should recommend depends on whether we consider the agent to be doing something good (i.e. saving people from the snow slip), or merely preventing a bad thing (i.e. the death of the people in the snow slip). The problem is similar to one that Kahneman and Tversky showed infects a lot of our reasoning: where we set the ‘zero-point’ or status quo makes a big difference for how we act.

Now I don’t want to commit myself to the view that there is no difference between actions and omissions. But in this case I think what we have is a merely terminological difference, which nevertheless leads to a difference in moral advice. Take the status quo to be that the 20 people die, and it is good, indeed obligatory, to save half of them. Take the status quo to be that the 20 people are alive (though in danger), and it is obligatory not to kill the people endangering them.

And, of course, either way the advice is rather odd, either oddly non-conglomerative in the first case or oddly extrinsic in the second. So I think there are problems for the absolutist to be sure. But perhaps not quite the same problem as FM have in mind.

2 Replies to “Absolutism and Uncertainty”

  1. Brian, I think your points and overall conclusion are right on. A small addition to the analysis of available options:

    In the first case, where you conclude that it is best to shoot one but not both of the skiers, consider what happens after you shoot the first skier.

    You specifically chose not to shoot the second skier because the difference between v(20) and v(10) was smaller than the cost of taking a life. But this decision was made in terms of what to do in an instant. In an instant, do you kill one, both, or neither? You chose to kill one.

    Now the first one is dead, and you have a new scenario on your hands. The past is gone, and your decision is exactly whether you want to kill the (remaining) skier to save 10, or not. Assuming your old values still apply, you would conclude that you should shoot the second skier.

    So the conclusion, even stranger, is not that you should shoot both, but that you should first shoot one and then reevaluate and decide to shoot the other.

    This, of course, doesn’t necessarily follow if you allow the reasonable assumption that v(10) and the cost of killing a probably-guilty person can change.

  2. I don’t think there is really an ‘embarrassment of riches’. What happened was that you picked two utility functions, and though each of them is at least superficially plausible, they are inconsistent (with each other). That’s somewhat interesting, but doesn’t seem to me to have anything to do with the plausibility of the move for the absolutist.

    Look, when you save twenty, maybe that’s more than twice as good as saving ten, because the fewer people there are in the world, the more important it is to save a life. Then the second way of looking at things is right. The loss of twenty is more than double (in value) the loss of ten. You made the diminishment implausibly large, but maybe for larger N it would be plausible.

    I’m not an absolutist, but if I were I don’t think this puzzle would embarrass me much. My hunch is that the embarrassment for absolutists will show up starkly in situations involving probability and cooperative action.
