This is a very badly worked out laundry list of ideas for my paper on intuitions. Most of it falls under the category of responses to the Weinberg, Nichols, and Stich experiments, but some of it is probably just repetition.
One response to WNS is that the experiments just show that different groups pick out different concepts by the term 'knowledge'. See the last few pages of "this paper":http://homepage.mac.com/ernestsosa/.cv/ernestsosa/Public/ADefenseofIntuitions.pdf-link.pdf by Ernie Sosa for an example of someone saying just that. Frank Jackson says something similar in From Metaphysics to Ethics. WNS reply to such moves here, but they don’t specifically reject the idea that there are different concepts floating around. Rather, they just say that there being all these different concepts is itself bad news for analytic epistemology as (they say) it is practiced.
I think they should have a more direct response. I don’t think that the evidence they provide is even of the right kind to show that the different groups have different concepts. More generally, I want to defend the line that having different concepts requires that the (token) concepts play different roles, not that they intuitively apply to different things.
The main argument for this is an argument by analogy. In a wide range of cases, what seems to count for having different concepts is that the concepts play different functional roles, not that the people with the concepts apply them differently.
The most famous example of this goes back to R. M. Hare in The Language of Morals. Hare imagines a missionary who arrives on an island where the natives are cannibals, and hears the cannibals frequently say things like “Eating your enemies is the right thing to do.” Now it would, intuitively(!), be wrong for the missionary to interpret the cannibals as meaning something different by 'right', e.g. meaning wrong. As long as their talk involving 'right' and 'good' matches up with their evaluations and their actions in the right kinds of ways, they mean the same thing we do.
Hare also mentions that the converse is true. Someone who actually says “Murder is wrong” but doesn’t disapprove of murder, and doesn’t take that fact to be a reason not to murder, arguably doesn’t mean by ‘wrong’ what we do. The point is arguable, and David Brink for one has argued against it, but I think Hare is right. Imagine that on the cannibals’ island there is a cannibal who acts like all the other tribespeople, but agrees with us about the truth of (what appear to be) first-order moral claims, like “Eating people is wrong.” She is the one who is using the word 'wrong' with a different meaning, not the people who disagree with us about the sentences.
We see this pattern repeated across the board when it comes to philosophically contentious terms. A, B and C are in a causation seminar, discussing tricky cases of alleged causation by double prevention of would-be pre-empters and the like. A and C utter the same sentences in the seminar, while B often dissents from those sentences, and indeed asserts similar sentences with 'not' inserted at a key place. This is little or no evidence that A and B mean different things by 'cause'. But as they are leaving the room, A and C have the following exchange.
A: Could you turn out the light?
C: That’s a good idea. Let me go get a ladder.
A: Why are you getting a ladder?
C: To remove the light.
A: Why not just flick the light switch?
C: Why do that?
A: Because flicking the light switch causes the light to turn off.
C: I agree that flicking the light switch causes the light to turn off, and I am trying to turn the light off, but I don’t see that’s a reason to flick the light switch.
It seems to me that A has good reason, almost conclusive reason, to think that she and C mean something different by ‘cause’, even though they agree about all the particular applications of the word. Again, it’s the person who agrees about the role of causation in its connection to action, i.e. B, who has the common concept, not the one who agrees about the cases.
The last case to look at is belief. Consider the false belief experiments that Alan Leslie and colleagues have run to test for the presence or absence of a theory of mind in autistic children. In the original experiment, the authors described the question “Where will Sally look for her marble?” as The Belief Question. This is quite striking, I think. [Note to self: check whether the following questions have actually been asked in an experiment.] I think that’s right: it is a way to work out what the child believes about what the doll believes. But let’s imagine the experiment was done slightly differently. In particular, imagine the Belief Question was broken into two stages, and here are D and E’s answers.
Q: Where does Sally believe the marble is?
D: In the basket.
E: In the box.
Q: Where will Sally look for the marble?
D: In the box.
E: In the box.
In this case, D answers the question that involves the word ‘believe’ in the same way that we would, and E does not. But the two questions taken together indicate that it’s E, not D, who shares our concept of belief. She has a false theory of belief, to be sure, but she means the same thing we do by ‘belief’.
At this point in the full paper I’ll include something that purports to be an argument for why the knowledge case should be taken to be analogous. In particular, what determines whether people have the same concept of knowledge is whether they agree that knowledge is the stuff that can be used as premises in practical reasoning, not how they apply the term to particular cases.
So I don’t think the right response is to say that WNS’s subjects mean different things by their words.
Having said that, there is a possible linguistic difference that might account for the differences between the replies. In WNS’s experiments, there was a strong contrastive focus on ‘knows’. For one thing, ‘knows’ was not a possible answer, only ‘really knows’. For another, ‘knows’ was contrasted with ‘only believes’. Consider the following case.
Fred and George are both presented with bowls of water of equal temperature and asked to put their hands in the water to see what temperature it is. The bowls are both rather warm, though not unbearably hot. Fred is asked, “Is the water hot?” and says yes. George is asked, “Is the water hot, or only warm?” and says warm. This certainly isn’t evidence that Fred and George mean different things by 'hot'. It isn’t even evidence that they disagree about which things are hot. For the water has to be hotter for hot to be the right answer to George’s question than it has to be for yes to be the right answer to Fred’s. Or at least that’s the case in my dialect.
Now imagine Hermione is presented with a bowl of water of the same temperature, is asked George’s question, and responds hot. Is this a sign that she disagrees with George? Well, not necessarily. At least, it isn’t a sign that she disagrees about the non-linguistic facts. It might just be that Hermione doesn’t adjust for focal effects as much as George does. That is, it might be that, like Fred and George, if she were asked straight up “Is the water hot?” she would say yes, and the presence of the contrast option warm doesn’t change her answer the way it changes George’s.
This is not to say that this is what is going on in the WNS experiments; it’s just a hypothesis. If we had independent evidence of cross-linguistic differences in responses to contrasting options, it might even look like a plausible hypothesis. But there are some avenues to explore here that haven’t been adequately addressed.
One reason to take this hypothesis seriously is that much analytic epistemology seems to rely on very odd stress patterns in knowledge ascriptions. Imagine a case where Jill has pretty good, but not conclusive, evidence to support her (true) belief about where her car is parked, but Katie has completely forgotten where her car is parked. It isn’t obvious that the intuitively correct answers to the following two questions are the same.
Does JILL know where her car is parked? (Yes)
Does Jill KNOW where her car is parked? (No?)
Take away the odd stress on 'know' (odd because that’s not how we talk in real-life settings, where the stress is almost always on the subject or on part of the argument) and sceptical intuitions fade further into the background.
Back to WNS. The main point I want to defend is that intuitions about particular cases are not of that much evidential value in doing conceptual analysis. Intuitions about borderline particular cases are of even less value. Moreover, most of the cases epistemologists look at, including the cases that WNS investigate, are somewhat borderline. So even if everyone’s intuitions lined up with these cases, we should be suspicious of their evidential value.
OK, more to say about this later. But the payoff is that conceptual analysis shouldn’t rest on hard cases, but on considerations about conceptual role, i.e. on what we want knowledge for. There’s some reason to think this pushes us towards a weaker conception of knowledge than is standard, but I’m not sure how far I want to run that line.
Posted by Brian Weatherson in Workbench