September 22nd, 2005

A Puzzle about Moral Uncertainty, and its solution

Here’s an interesting asymmetry between reasoning under moral uncertainty and reasoning under factual uncertainty. Or, at least, an interesting prima facie asymmetry, since there might be a simple explanation once we set everything out clearly.

The following situation is reasonably common in reasoning under uncertainty. We have three choices, A, B and C. Which of these is best to do depends on whether p or q is true, and we’re certain that exactly one of them is true. If p is true, the best outcome will arise from doing A. If q is true, the best outcome will arise from doing C. Yet despite this, the thing to do is B.

Here’s an example. I’m in Vegas, thinking about betting on a (playoff) football game. The teams seem fairly even, and there is no point spread. As usual, to bet on a team I have to risk $55 to win $50. Fortunately, I have $55 in my pocket. Let A = I bet on the home team, C = I bet on the away team, and B = I keep my money in my pocket. Let p = the home team wins, and q = the away team wins. (Given it’s a playoff game, we can be practically certain that one of these is true.) So if p, I’ll be best off if A, and if q, I’ll be best off if C. Still, the thing to do is B, since both A and C have negative expected value.
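
A quick sketch of the expected value arithmetic, with the 50/50 win probability an assumption read off the “fairly even” description:

    # Expected value of each option, assuming a 50/50 game and the
    # standard Vegas vig: risk $55 to win $50.
    p_home = 0.5  # assumed probability that the home team wins

    ev_A = p_home * 50 + (1 - p_home) * (-55)   # A: bet on the home team
    ev_C = (1 - p_home) * 50 + p_home * (-55)   # C: bet on the away team
    ev_B = 0.0                                  # B: keep the $55

    print(ev_A, ev_C, ev_B)  # -2.5 -2.5 0.0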

Now the puzzle is that this kind of situation doesn’t seem to arise for moral uncertainty.

Let p and q be fundamental moral theories such that we are certain exactly one of them is true. For example, let p be the proposition that a particular kind of consequentialism is true, and q be the proposition that a particular deontological theory is true. And imagine we are certain that one of these is true. (If this is implausible, use more than two moral theories, so that we have enough cases to be confident in practice that one of them is the true one.)

In this case it is much harder to find examples where the thing to do is to select the option that is sub-optimal according to all the live theories. Let A be an act that maximises utility while violating lots and lots of people’s rights, so it is best according to consequentialism. And let C be an act that performs all your duties and violates no rights, but produces very little utility. In the middle, let B be an action that only violates a few rights, and produces almost as much utility as A. It doesn’t seem that B should be done, even if it seems that B is close to being the best action by both theories.

Here’s what I think the explanation of the asymmetry is. (This is actually as much Ishani’s explanation as mine.) In the football case, what is true is that if the home team will win, the best outcome will come from betting on it. But a ‘narrow scope’ normative claim, like “If the home team will win, I should bet on it”, is not true, and doesn’t follow from this fact. We can prove this conditional is false because in the (nearby!) possible world/epistemic possibility where the antecedent is true, the consequent is still false. I shouldn’t bet on the home team because that has negative expected utility, and I shouldn’t do things with negative expected utility.

In the moral case, the ‘narrow scope’ normative claims are true. If consequentialism is true, then I should do A, where the ‘should’ here has narrow scope. (That is, it is an unconditional normative claim, made conditionally on the truth of some moral theory.) And if the deontological theory is true, then I should do C. Since (by hypothesis) one of these two is true, I should do A or C. We can’t make this kind of inference in the factual uncertainty case because the narrow scope conditionals are not available. And this explains the asymmetry.
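
One way to make the scope distinction explicit (the O notation here is a gloss, not anything in the post): write O for the relevant ‘should’ operator. Then:

    % Factual case: the plain conditional is true, but the
    % narrow-scope normative conditional is false.
    p \rightarrow \mathrm{Best}(A) \quad \text{(true)}
    p \rightarrow O(A) \quad \text{(false)}

    % Moral case: the narrow-scope normative conditional is true.
    \mathrm{Cons} \rightarrow O(A) \quad \text{(true)}

Since one of the moral antecedents holds by hypothesis, reasoning by cases delivers O(A) or O(C); the betting case offers no true narrow-scope conditionals to feed into the same inference.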

Posted by Brian Weatherson in Workbench



7 Responses to “A Puzzle about Moral Uncertainty, and its solution”

  1. Andrew Sepielli says:

    Brian,

    You say that the narrow scope claim “If the home team will win, I should bet on it” is false. Rather, I should do the act that will maximize expected utility.

    But then, you say that the narrow scope claim “If consequentialism is true, then I should do [the act that consequentialism recommends]” is true. I should not, by implication, do the act that will maximize expected moral value (assuming it is a different act from the one that consequentialism recommends). But why is maximizing expected moral value so clearly the wrong tack when maximizing expected utility is the right one?

  2. Ralph Wedgwood says:

    I actually doubt that there is any asymmetry here at all. There only appears to be one because you’ve focused on a rather unconvincing example. (In the moral case that you focus on, it just isn’t plausible to say that we know for certain that either p or q is true.)

    Admittedly, we do have slightly more resistance to probabilistic thinking on moral questions than on other questions. (I think that this can be explained along the lines of Bob Adams’s idea of “Moral Faith”.) But I can’t see any reason for doubting that sometimes all the morally rational choices are ones that are known to be morally suboptimal.

    The explanation that you and Ishani suggest doesn’t work, in my view, because it just equivocates on ‘should’. There is extensive linguistic evidence that ‘should’ is context-sensitive, and expresses different concepts in different contexts. In particular, it can be both an information-relative ‘should’, and a more objective ‘should’ (which is in a way relative to all the relevant facts, regardless of whether they’re knowable or not).

    E.g., sometimes we might say ‘It turned out that I shouldn’t have done that, although I couldn’t have known it at the time’ (objective ‘should’), and sometimes we say ‘Since we know so little about the situation, we should be very cautious’ (information-relative ‘should’). (Above, I used the term ‘rational’ to express a concept that is effectively the dual of an information-relative ‘should’, and the term ‘optimal’ to express an objective concept that is closely related to an objective ‘should’.)

    It seems to me that in the example that you use to illustrate your explanation of this alleged asymmetry, you focus on the information-relative ‘should’ in the non-moral case, but then shift to an objective ‘should’ in the moral case. This is why I suspect that you commit the fallacy of equivocation!

  3. James Dreier says:

    I think Ralph is right about ‘should’, but I’m sure there’s an asymmetry.
    Look, if you think there is any chance that Anarchy, State, and Utopia is correct, then your expected moral utility for taxation is negative infinity. But unless you actually are a Nozickian libertarian, you won’t give taxation the position in your preferences demanded by its expected moral utility.

  4. Pablo Stafforini says:

    Imagine two people, Rich and Poor, and assume that your available acts would realize states of affairs A, B or C, such that

                 A     B     C
        Rich    20    50   100
        Poor    20    25    10

    Suppose you know there are exactly two possible moral principles, the Principle of Equality and the Principle of Utility; and suppose, furthermore, that you know one of them is true. What ought you to do? Although A is probably preferred by Egalitarians and C clearly preferred by Utilitarians, you ought, arguably, to prefer B. For B’s expected moral value, as assessed by each principle in turn, appears to be greater than the expected moral value of either A or C, similarly assessed. (See the sketch following this comment.)

    If this choice of principles represents a plausible one for agents under conditions of moral uncertainty (i.e., if we can plausibly claim to know that either Utility or Equality is the true moral principle), then it is not the case that, as compared with similar cases of factual uncertainty, such cases are, as Brian claims, “much harder” to come by. We can think of many situations in which an act that realizes a suboptimal state of affairs for both Utilitarians and Egalitarians is considered the morally right thing to do by each of them.
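
    A minimal sketch of the computation Pablo gestures at, with the contestable parts made explicit: equal credence in the two principles, utilitarian value as total welfare, and egalitarian value as the negative of the Rich/Poor gap (so intertheoretic comparability is simply assumed):

        # Welfare levels for (Rich, Poor) under each act, from Pablo's table.
        acts = {"A": (20, 20), "B": (50, 25), "C": (100, 10)}

        def utility(rich, poor):
            return rich + poor          # Principle of Utility: total welfare

        def equality(rich, poor):
            return -abs(rich - poor)    # Principle of Equality: penalise the gap

        credence = 0.5  # assumed equal credence in each principle

        for act, (rich, poor) in acts.items():
            emv = credence * utility(rich, poor) + credence * equality(rich, poor)
            print(act, emv)  # A: 20.0, B: 25.0, C: 10.0

    On these assumptions the Utilitarian ranks C first, the Egalitarian ranks A first, and yet B maximises expected moral value, as Pablo says.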

  5. Brian Weatherson says:

    I agree with Ralph that there are ambiguities in ‘should’ claims, but I’m pretty sure that I’m not exploiting them here. Let’s insist that we mean the ‘ex ante’ disambiguation of ‘should’, and we mean the normative operator to have narrow scope. Then I think that (1) will be true and (2) false.

    (1) If consequentialism is true, you should maximise expected utility.
    (2) If the home team will win, you should bet on the home team.

    And that’s what explains the asymmetry I think. I can’t quite see why the ex ante/ex post distinction matters here. Is it the case that (1) is meant to be false on a narrow scope/ex ante interpretation?

    I think in Pablo’s case it is very hard to put oneself in a frame of mind where we know that either equality or utility should be maximised, and in particular know that we shouldn’t be maximising some mix of the two, but not know which. Maybe it’s just me, but I do find this a little easier to imagine when we have moral theories that are so different.

    I don’t like the expected moral value theory for a couple of reasons. First, a couple of my colleagues in Canberra have convinced me (in a paper that will hopefully be posted soon) that we can’t compute these expected values. Second, it misses another asymmetry that I probably should have emphasised in the post.

    The case I mentioned in the post goes like this. We have moral theories T1 and T2, with the following facts.

    If T1 is true, then A is acceptable, B is unacceptable and C is extremely unacceptable.
    If T2 is true, then C is acceptable, B is unacceptable and A is extremely unacceptable.

    In that case I think it is improper to do B. But compare it to the following.

    If T1 is true, then A is supererogatory, B is acceptable and C is unacceptable.
    If T2 is true, then C is supererogatory, B is acceptable and A is unacceptable.

    In that case it seems plausible that B is proper to do. (Some may argue that B is mandatory because it maximises the probability of doing an acceptable thing, but this seems improper to me.) I’m not sure how an expected moral value approach can explain why B is OK (if not mandatory) in the second case but not OK in the first case. The explanation in terms of (1) being true (but (2) false) does just that, I think.

  6. Jamie says:


    “I’m not sure how an expected moral value approach can explain why B is OK (if not mandatory) in the second case but not OK in the first case.”

    Because:

    u(acceptable) – u(unacceptable) >> u(super) – u(acceptable)?
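
    A small numerical check of Jamie’s suggestion, with the utilities chosen (arbitrarily) so that the gap below ‘acceptable’ dwarfs the gap above it, and equal credence in T1 and T2:

        # Assumed utilities: u(acceptable) - u(unacceptable) >> u(super) - u(acceptable).
        u = {"super": 1, "acceptable": 0, "unacceptable": -10, "extremely": -15}
        credence = 0.5  # assumed equal credence in T1 and T2

        # Each act maps to its status under (T1, T2) in Brian's two cases.
        case1 = {"A": ("acceptable", "extremely"),
                 "B": ("unacceptable", "unacceptable"),
                 "C": ("extremely", "acceptable")}
        case2 = {"A": ("super", "unacceptable"),
                 "B": ("acceptable", "acceptable"),
                 "C": ("unacceptable", "super")}

        for name, case in (("case 1", case1), ("case 2", case2)):
            for act, (t1, t2) in case.items():
                print(name, act, credence * u[t1] + credence * u[t2])
        # case 1: A -7.5, B -10.0, C -7.5  (B worst)
        # case 2: A -4.5, B 0.0, C -4.5    (B best, by a wide margin)

    On these numbers B comes out worst in the first case and best by a wide margin in the second, which is just the pattern Brian’s reply below presses on.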

  7. Brian Weatherson says:

    That would get the asymmetry between the two cases, but wouldn’t it have the implication that in the second case, B was the uniquely permissible thing to do? After all, if each of A and C has a serious probability of being impermissible, and the difference in moral utility between the permissible and the impermissible is huge relative to the difference in utility between permissible options, it will be hard to get the numbers to work out so that B isn’t the highest utility option by a distance.

    Now maybe I’m wrong and B is the only permissible choice here, but I think it’s plausible that whichever of A and C is licensed by the true moral theory is also permissible.