January 29th, 2008

Ought and Context

As Joe Salerno reports, John MacFarlane gave a really nice paper on “ought” (co-written with Niko Kolodny) at the recent Arizona Ontology Conference. In the question period Adam Elga raised a very nice case that I think deserves some thinking about. At the very least it makes me worried about my preferred theories of “ought”.

NB: John’s paper isn’t online yet, so I’m not linking to it or directly discussing it. The puzzle raised here is a puzzle for his particular view, but it’s also a puzzle for the kind of view I like, which is quite unlike John’s.

The big issue is the truth conditions of sentences of the form (1), as uttered by U in context C.

(1) S ought to do X.

I think a certain kind of contextualism is needed here. In fact, I think we need to be doubly (or possibly triply!) contextualist here. In particular, we need the tools described below.

What’s driving all this formalism is the idea that what should be done is relative to what you know. If a patient comes into the hospital with some odd symptoms, and the doctors don’t know what he has, they should run some tests to find out. If they know what he has, they shouldn’t run tests, they should treat him. (I’m using “should” and “ought” interchangeably here. I hope that doesn’t lead quickly to problems.) I think the knowledge-relativity of what ought to be done is not too problematic. The big issue is, whose knowledge? This is something I think context answers.

More precisely, I think context supplies a function f that takes S and U as inputs, and returns a knowledge base as output. There are two values that f commonly takes, although it could take many other values besides these.

The first value is agent-centered: f returns S’s own knowledge, so what S ought to do depends on what S knows. The second is broader: f returns a knowledge base that also includes U’s knowledge, or perhaps all the relevant facts. These two values of f correspond to the widely-discussed ‘subjective’ and ‘objective’ oughts, but I don’t see much reason to think these are the only two values it takes.
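
To fix ideas, here is a toy sketch in Python. This is just an illustration of the machinery, not anything from John’s paper, and nothing in the argument hangs on the details: knowledge bases are modeled as sets of propositions, and the two common values of f are two ways of building such a set from S and U.

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    knowledge: frozenset  # the propositions this person knows

# Two values f commonly takes; context could supply many others.

def f_subjective(subject: Person, speaker: Person) -> frozenset:
    # Agent-centered value: the knowledge base is just S's own knowledge.
    return subject.knowledge

def f_objective(subject: Person, speaker: Person) -> frozenset:
    # 'Objective' value: the knowledge base also pools in U's knowledge.
    return subject.knowledge | speaker.knowledge
```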

Let’s see how this applies to a puzzle case we get from Parfit. A huge quantity of water is sloshing down the side of a hill. As it stands the water will end up in two caves, largely filling each of them. There are 10 miners in one of the caves, but no one at the site knows which. If both caves are flooded, one of the miners will drown. (Ignore for now how we could know this, and assume we do.) The people at the site have some sandbags, which they could use to block one of the caves. If they block the cave with the miners, all the miners will be saved. If they block the other cave, the cave with the miners will be totally filled with water, and they’ll all drown. What should be done?

The options are:

1. Block the north cave.
2. Block the south cave.
3. Block neither cave.

As it turns out, you and I, standing far away from the action, know that the miners are in the north cave. (We can’t communicate this to the site workers.) So it seems that one of us could say to the other that they should do option 1, block the north cave. In this case, we’ll be assigning f its ‘objective’ value, where the knowledge base includes our knowledge.

But we could also, considering things from the perspective of the workers on the site, conclude that blocking the north cave would be an insane gamble. So if we said, with the dilemma facing these workers in mind, that they should do option 3, it seems we could say something true. And the theory says that is possible, because we now give f its agent-centered (or subjective) interpretation.

So far, so good. There are a couple of claims that sound plausibly true, and we can interpret each of them. Moreover, the interpretations don’t look particularly ad hoc from a pragmatic perspective. The more we focus on the dilemma of the site workers, the more it is their knowledge that is relevant.
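
One way to see why both readings come out true is to model ‘ought’ as expected-value maximisation relative to the contextually supplied knowledge base. This is only a toy model (the assumption that ‘ought’ tracks expected miners saved is doing real work), but it reproduces both judgments:

```python
# Toy model: 'ought' relative to a knowledge base = maximise expected miners
# saved, where the knowledge base fixes the probability that the miners are
# in the north cave.

def saved(option, miners_in_north):
    if option == "block north":
        return 10 if miners_in_north else 0
    if option == "block south":
        return 0 if miners_in_north else 10
    return 9  # block neither: both caves partly flood and one miner drowns

def best_option(p_north, options):
    def ev(opt):
        return p_north * saved(opt, True) + (1 - p_north) * saved(opt, False)
    return max(options, key=ev)

options = ["block north", "block south", "block neither"]
print(best_option(1.0, options))  # our knowledge: 'block north' (option 1)
print(best_option(0.5, options))  # workers' knowledge: 'block neither' (option 3)
```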

Now here’s Adam’s variant on the puzzle. Assume that the site workers have a sensor they can send down the caves to see where the miners are. The sensor can only be used once, and costs $10 to replace. Now we add a fourth option: send the sensor down, and then block the cave with the miners in it.

Clearly there is now a true interpretation of “The site workers should take option 4”. At the very least, that’s the best option relative to the site workers’ knowledge, so it should have a true reading. Here’s the puzzle. To my ear, the following sentence has no true reading.

(2) The site workers should take option 1 rather than option 4.

But the contextualist theory arguably predicts that if we focus on the facts about the miners, i.e. if we let our knowledge into the knowledge base, we should get a true reading for (2). After all, from our perspective the best outcome (all miners saved with no lost sensor!) comes from option 1. Why is this not an available reading?
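
Continuing the toy model above (and reusing saved() and options from that sketch), with the $10 treated as a tiny penalty on the assumption that lives swamp money:

```python
# Elga's variant. Option 4 is 'send the sensor down, then block the indicated
# cave'. The $10 is modelled as a tiny penalty EPSILON.
EPSILON = 0.001

def value(option, miners_in_north):
    if option == "use sensor, then block":
        return 10 - EPSILON  # all miners saved, sensor spent
    return saved(option, miners_in_north)

def best_option_4(p_north, opts):
    def ev(opt):
        return p_north * value(opt, True) + (1 - p_north) * value(opt, False)
    return max(opts, key=ev)

opts = options + ["use sensor, then block"]
print(best_option_4(0.5, opts))  # workers' knowledge: option 4, as expected
print(best_option_4(1.0, opts))  # our knowledge: option 1 -- the model happily
                                 # delivers the reading my ear says isn't there
```

So the formal machinery generates the objective reading without any trouble; the puzzle is why English doesn’t.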

I can think of two answers to this question, neither particularly satisfactory.

First, perhaps we could say that even from our perspective, they should take option 4. It’s true that option 1 has better consequences (no lost $10, etc.), but consequences aren’t all that matter. It would be wrong not to be careful and check when there are 10 lives in the balance.

Second, we could say that the presence of a ‘procedural’ option in the relevant sense forces our attention onto the procedures that the site workers should follow. That is to say, in general when one of the options is “learn more and make a decision based on what is learned”, f takes the agent-centered value. And that’s because it would be stupid to include such an option when we are considering what to do from a perspective other than that of the agent performing the action.

The first is plausible on the surface, but it seems to overgeneralise. If it would be wrong not to be careful and use the sensor, wouldn’t it be equally wrong to block one cave when the miners might be in the other? The second just looks ad hoc to me, and I’m worried that the sweeping generalisation will be, well, too sweeping.

So this is a puzzle without an obvious (to me) solution.

Posted by Brian Weatherson in Uncategorized


5 Responses to “Ought and Context”

  1. finlay says:

    My initial instinct (as a defender of a contextualist view of ‘ought’) was to defend the second answer against the charge of being ad hoc. On further thought, I have a slightly different answer. In Elga’s case, there is a particular reason why we are forced to the agent-centered value.

    To see that this isn’t the case in general, consider the following, superficially similar variant on a 3-envelope problem:

    S is offered 3 envelopes. She knows C holds $1500, and that of A and B one holds $2000 and one holds $0. Suppose that A does, in fact, hold the $2000. She is offered the following options: (1) Take A, (2) Take B, (3) Take C, (4) Pay $200 to see which envelope holds the $2000, and then choose one.

    This is (I believe) structurally identical to the Elga case, but it seems to me that both readings of ‘ought’ are ok. There is the (subjective) reading on which it is true that S ought to choose (4). But isn’t there also the (objective) reading on which it is true that S ought to choose (1)? (1) gives her $2000, after all, while (4) gives her $1800.
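
    For what it’s worth, the same sort of toy expected-value model as in the post’s sketches delivers exactly these verdicts (dollar payoffs matching the numbers in the comment):

    ```python
    # finlay's envelope case. A in fact holds the $2000; B holds $0;
    # C holds $1500; looking costs $200.

    def payoff(option, a_holds_2000):
        if option == "take A":
            return 2000 if a_holds_2000 else 0
        if option == "take B":
            return 0 if a_holds_2000 else 2000
        if option == "take C":
            return 1500
        return 2000 - 200  # pay $200 to look, then take the $2000 envelope

    def ev(option, p_a):  # p_a: probability that A holds the $2000
        return p_a * payoff(option, True) + (1 - p_a) * payoff(option, False)

    opts = ["take A", "take B", "take C", "pay $200 to look"]
    print(max(opts, key=lambda o: ev(o, 0.5)))  # S's knowledge: 'pay $200 to look' ($1800)
    print(max(opts, key=lambda o: ev(o, 1.0)))  # full knowledge: 'take A' ($2000)
    ```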

    What is different between this case and Elga’s? I suggest it’s that Elga’s case involves values that are, in some sense, incommensurable. When it comes to saving lives, ‘money is no object’ (within limits, of course). My first reaction on reading the set-up was to be puzzled about why the cost of the sensor was even mentioned.

    If the cost of the sensor is normatively irrelevant, then in effect the only difference between option 1 and option 4 is that the latter includes knowledge of where the miners are. If we are presented with this as a relevant alternative, this forces us to the subjective reading of ‘ought’.

    It may be objected, against the claim that monetary costs are irrelevant in the miner case, that given two options for saving the miners which differ only in cost, we clearly ought to choose the cheaper. But this is explained in the same way: if courses of action are presented as relevant alternatives purely in respect of cost, this forces us to a monetary evaluation of the options.

    This solution strongly suggests (to me) that ‘ought’ takes a further argument above those you’ve mentioned: an END argument (as I’ve argued elsewhere).

  2. Andrew Macdonald says:

    I think the key consideration is that agents are looking to make the rational choice given their knowledge. Interpreted this way, the statement “the workers should do option 1” is false. The workers would be reckless to do option 1 (i.e., not justified or rational), given what they know.

    But if the statement, “they should do option 1, block the north cave” is false, then why would we say it? The reason is that we are assuming a different context when we make our claim. We are not saying what they should do given their knowledge, we are saying what they should do given our knowledge. On this interpretation, our claim is true. What’s going on is that for a single sentence, there are really two different claims being made.

    I think then, given our knowledge, “the site workers should take option 1 rather than option 4” is true. The only reason to want option 4 is if we weren’t sufficiently sure about the miners’ location. But, at least as the hypothetical is stated, that’s not the case.

    The same idea follows with Finlay’s parallel example. Given S’s knowledge, S should choose 4. Given our knowledge (which includes complete information about the options), S should choose 1.

  3. Mark van Roojen says:

    I kind of like a version of option 2. I think the problem may partly lie with ‘rather than’. Even in a context where option 4 is in the conversational mix somewhere, it seems to me OK to say that they ought to choose 1, as in “We know they ought to choose 1,” or “Of course they ought to choose 1.” So it doesn’t seem to me that the mere presence of option 4 costs us the ability to talk about the objective ought.

    I think that connecting the options with the ‘rather than’ makes it hard to think of the choice as one from a situation in which 4 has nothing at all going for it, as it would be from the objective perspective of full information. So the agent’s actual context of less than full information is made salient by the way the options are linked as alternatives. (Why else put only 4 after the ‘rather than’, rather than 2 or 3 or all of them?)

    Speaking of the objective perspective, I have a question about your use of U (the speaker) to generate at least one variant of the “objective approach”. Why not just go with something close to all relevant truths in principle knowable by the agent (bracketing for the moment issues about reasons whose grounds depend on lack of knowledge, as some “conditional fallacy” type examples highlight)? My thought is that this is the right sort of thing to define the truth conditions of such objective oughts, even if U’s actual judgements using that objective ought will depend on what U thinks is true. But that shouldn’t change the truth conditions, or so it seems to me.

    And while I’m rambling, let me suggest that there is another kind of relativity that contextual factors might shift — relativity to the agent’s capacity to act on reasons she has because of weak will and the like. In one good sense we ought to do what we would do if we were completely rational. But in another good sense what I ought to do is what makes sense given that I know I am unlikely to be completely rational. I think we often freely move between judgements of both sorts depending on whether the context makes it natural to treat the less than full rationality as one of the conditions on the choice or whether it is more natural to treat it as up to the agent in the context of choice. There is some discussion of these sorts of cases in the “conditional fallacy” literature, though most of the emphasis there is not on context dependence.

    FWIW.

  4. arpruss says:

    What if we take a fine-grained view of action, and say: You ought to both block precisely the cave with the miners, and block precisely the cave that the indicator shows to have miners in it? If you do option 4, you do both of these obligatory actions. If you do option 1, you do only one of these obligatory actions.

    Complication: What if the indicator malfunctions and shows that the miners are in the south cave? That’s fine. What you ought to do is to block precisely the cave with the miners and block precisely the cave that the indicator shows to have miners in it. Of course, there is no way of doing both actions in the malfunctioning-indicator case. So you’re going to go against your duty whatever you do. But if you choose option 1, you do so culpably (knowingly, for example), while if you choose option 4, you do so inculpably (you think you’re fulfilling both).

  5. arpruss says:

    Two footnotes to my post:
    1. If one thinks that both doing A and refraining from A can be obligatory, one doesn’t need fine-grained actions.
    2. What I said is basically an application (perhaps flawed) of Mark Murphy’s story about mistaken conscience.
