As “Joe Salerno”:http://knowability.blogspot.com/2008/01/both-kinds-of-philosophy-country-and.html reports, John MacFarlane gave a really nice paper on “ought” (co-written with Niko Kolodny) at the recent Arizona Ontology Conference. In the question period Adam Elga raised a really nice case that I think deserves some thought. At the very least it makes me worried about my preferred theories of “ought”.
NB: John’s paper isn’t online yet, so I’m not linking to it, nor directly discussing it. The puzzle raised here is a puzzle for his particular view, but it’s also a puzzle for the kind of view I like, which is quite unlike John’s.
The big issue is the truth conditions of sentences of the form (1), as uttered by U in context C.
(1) S ought to do X.
I think a certain kind of contextualism is needed here. In fact, I think we need to be doubly (or possibly triply!) contextualist. In particular, we need the following tools.
- A two-place function O, which takes as inputs a set of possible actions A and a knowledge base K, and returns (if possible) the action that ought to be done out of those actions relative to that knowledge base.
- A function from contexts to sets of possible actions. We won’t worry about this here.
- A function from contexts to knowledge bases. We’ll have much more to say about this below.
- It’s possible we’re not going to need just one O, but a function from contexts to O’s. Worrying about this would take us too far afield I think, so I’ll ignore it.
What’s driving all this formalism is the idea that what should be done is relative to what you know. If a patient comes into the hospital with some odd symptoms, and the doctors don’t know what he has, they should run some tests to find out. If they know what he has, they shouldn’t run tests, they should treat him. (I’m using “should” and “ought” interchangeably here. I hope that doesn’t lead quickly to problems.) I think the knowledge-relativity of what ought to be done is not too problematic. The big issue is: whose knowledge? This is something I think context answers.
More precisely, I think context supplies a function f that takes S and U as inputs, and returns a knowledge base as output. There are two values that f commonly takes, although it could take many other values besides these.
- An agent-centered use where K = whatever knowledge is available to S at the time she has to make the decision between these alternatives. (‘Available’ might be context-sensitive here. Note the increasing number of moving parts!)
- An ‘objective’ approach where K = whatever knowledge is available either to S at the time she has to make the decision, or to U at the time she makes the utterance.
These two values of f correspond to the widely-discussed ‘subjective’ and ‘objective’ oughts, but I don’t see much reason to think these are the only two values it takes.
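To make these moving parts concrete, here is a toy sketch in Python. To be clear, the representational choices are all mine and none are forced by the contextualist picture: I treat a knowledge base as a probability distribution over states, an action as an assignment of values to states, and I have O simply maximise expected value, which is one way of filling it in but certainly not the only one.

```python
# A toy sketch, not the official theory. Assumptions: a knowledge base K is a
# probability distribution over states (a dict from state to probability); an
# action is a dict from state to how well things go in that state; and O picks
# the action with the highest expected value relative to K.

def O(actions, K):
    """Return the action (a key of `actions`) that ought to be done
    relative to knowledge base K -- here, the one maximising expected value."""
    def ev(name):
        return sum(p * actions[name][state] for state, p in K.items())
    return max(actions, key=ev)

def f(reading, S_probs, U_live_states):
    """Map a contextual 'reading' to a knowledge base.

    'subjective': just the subject S's own probabilities.
    'objective' : S's probabilities conditionalised on what the utterer U
                  knows (represented, crudely, as the set of states U's
                  knowledge leaves open)."""
    if reading == 'subjective':
        return S_probs
    if reading == 'objective':
        live = {s: p for s, p in S_probs.items() if s in U_live_states}
        total = sum(live.values())
        return {s: p / total for s, p in live.items()}
    raise ValueError('unknown reading: ' + reading)
```

The expected-value rule inside O is doing real work in this sketch, and nothing in the contextualist machinery itself mandates it; it is just the simplest thing to write down.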
Let’s see how this applies to a puzzle case we get from Parfit. A huge quantity of water is sloshing down the side of a hill. As it stands the water will end up in two caves, largely filling each of them. There are 10 miners in one of the caves, but no one at the site knows which. If both caves are flooded, one of the miners will drown. (Ignore for now how we could know this, and assume we do.) The people at the site have some sandbags, which they could use to block one of the caves. If they block the cave with the miners, all the miners will be saved. If they block the other cave, the cave with the miners will be totally filled with water, and they’ll all drown. What should be done?
The options are
- Block the north cave.
- Block the south cave.
- Block neither cave.
As it turns out, you and I, standing far away from the action, know that the miners are in the north cave. (We can’t communicate this to the site workers.) So it seems that one of us could say to the other that they should take option 1 and block the north cave. In this case, we’ll be assigning f its ‘objective’ value, where the knowledge base includes our knowledge.
But we could also, considering things from the perspective of the workers on the site, conclude that blocking the north cave would be an insane gamble. So if we said, with the dilemma facing these workers in mind, that they should take option 3, it seems we could say something true. And the theory says that is possible, because we now give f its agent-centered (or subjective) interpretation.
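In case it helps, here is the arithmetic behind both readings, using the same expected-value stand-in as the sketch above. (Scoring outcomes purely by miners saved is, again, my simplification.)

```python
# The Parfit case in the toy model. States: where the miners actually are.
# Values: number of miners saved.

actions = {
    'block_north':   {'north': 10, 'south': 0},
    'block_south':   {'north': 0,  'south': 10},
    'block_neither': {'north': 9,  'south': 9},   # flooding both caves drowns one miner
}

workers_K = {'north': 0.5, 'south': 0.5}   # the site workers have no idea which cave
our_K     = {'north': 1.0, 'south': 0.0}   # you and I know they're in the north cave

def ev(name, K):
    return sum(p * actions[name][state] for state, p in K.items())

# Relative to the workers' knowledge: blocking either cave has an expected 5
# miners saved, blocking neither a guaranteed 9 -- so option 3 wins.
# Relative to our knowledge: blocking the north cave saves all 10 -- option 1 wins.
print(max(actions, key=lambda a: ev(a, workers_K)))   # block_neither
print(max(actions, key=lambda a: ev(a, our_K)))       # block_north
```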
So far, so good. There are a couple of claims that sound plausibly true, and the theory can interpret each of them as true. Moreover, the interpretations don’t look particularly ad hoc from a pragmatic perspective. The more we focus on the dilemma of the site workers, the more it is their knowledge that is relevant.
Now here’s Adam’s variant on the puzzle. Assume that the site workers have a sensor they can send down the caves to see where the miners are. The sensor can only be used once, and costs $10 to replace. Now we add a fourth option: Send the sensor down, and then block the cave with the miners in it.
Clearly there is now a true interpretation of “The site workers should take option 4”. At the very least, option 4 is the best option relative to the site workers’ knowledge, so the sentence should have a true reading. Here’s the puzzle. To my ear, the following sentence has no true reading.
- The site workers should take option 1 rather than option 4.
But the contextualist theory arguably predicts that if we focus on the facts about the miners, i.e. if we let our knowledge into the knowledge base, we should get a true reading for that claim. After all, from our perspective the best outcome (all miners saved, with no lost sensor!) comes from option 1. Why is this not an available reading?
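Here is how the toy model generates that unwanted ranking. The scoring is again made up: miners saved, minus a token penalty standing in for the $10 sensor.

```python
# Adam's variant in the toy model. SENSOR is a token penalty standing in for
# the $10 replacement cost -- tiny next to a life, but enough to separate
# option 1 from option 4 once we already know where the miners are.

SENSOR = 0.001

actions = {
    'block_north':      {'north': 10, 'south': 0},                        # option 1
    'block_south':      {'north': 0,  'south': 10},                       # option 2
    'block_neither':    {'north': 9,  'south': 9},                        # option 3
    'sense_then_block': {'north': 10 - SENSOR, 'south': 10 - SENSOR},     # option 4
}

workers_K = {'north': 0.5, 'south': 0.5}
our_K     = {'north': 1.0, 'south': 0.0}

def ev(name, K):
    return sum(p * actions[name][state] for state, p in K.items())

# Workers' knowledge: option 4 wins (expected 9.999 vs a guaranteed 9 for option 3).
# Our knowledge: option 1 edges out option 4 (10 vs 9.999) -- which is exactly the
# unwanted prediction that the 'option 1 rather than option 4' claim has a true reading.
print(max(actions, key=lambda a: ev(a, workers_K)))   # sense_then_block
print(max(actions, key=lambda a: ev(a, our_K)))       # block_north
```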
I can think of two answers to this question, neither particularly satisfactory.
First, perhaps we could say that even from our perspective, they should take option 4. It’s true that option 1 has better consequences (no lost $10, etc.), but consequences aren’t all that matter. It would be wrong not to be careful and check when there are 10 lives in the balance.
Second, we could say that the presence of a ‘procedural’ option of the relevant sort forces our attention onto the decision problem as the site workers face it. That’s to say, in general when one of the options is “learn more and make a decision based on what is learned”, f takes the agent-centered value. And that’s because it would be stupid to include such an option when we are considering what to do from a perspective other than that of the agent performing the action.
The first is plausible on the surface, but it seems to overgeneralise. If it would be wrong not to be careful and use the sensor, wouldn’t it be equally wrong to block one cave when the miners might be in the other? The second just looks ad hoc to me, and I’m worried that the sweeping generalisation will be, well, too sweeping.
So this is a puzzle without an obvious (to me) solution.