August 30th, 2004

Intuitions

This is a very badly worked out laundry list of ideas for my paper on intuitions. Most of it falls under the category of responses to the Weinberg, Nichols, and Stich experiments, but some of it is probably just repetition.

One response to WNS is that the experiments just show that different groups pick out different concepts by the term 'knowledge'. See the last few pages of "this paper" (http://homepage.mac.com/ernestsosa/.cv/ernestsosa/Public/ADefenseofIntuitions.pdf-link.pdf) by Ernie Sosa for an example of someone saying just that. Frank Jackson says something similar in From Metaphysics to Ethics. WNS reply to such moves here, but they don’t specifically reject the idea that there are different concepts floating around. Rather, they just say that there being all these different concepts is itself bad news for analytic epistemology as (they say) it is practiced.

I think they should have a more direct response. I don’t think that the evidence they provide is even of the right kind to show that the different groups have different concepts. More generally, I want to defend the line that having different concepts requires that the (token) concepts play different roles, not that they intuitively apply to different things.

The main argument for this is an argument by analogy. In a wide range of cases, what seems to count for having different concepts is that the concepts play different functional roles, not that the people with the concepts apply them differently.

The most famous example of this goes back to R. M. Hare in The Language of Morals. Hare imagines a missionary who arrives on an island where the natives are cannibals, and hears the cannibals frequently say things like “Eating your enemies is the right thing to do.” Now it would, intuitively(!), be the wrong thing for the missionary to interpret the cannibals as meaning something different by 'right', e.g. meaning 'wrong'. As long as their talk involving 'right' and 'good' matches up with their evaluation and their action in the right kinds of ways, they mean the same thing we do.

Hare also mentions that the converse is true. Someone who actually says “Murder is wrong” but doesn’t disapprove of murder, and doesn’t take that fact to be a reason not to murder, arguably doesn’t mean by ‘wrong’ what we do. The point is arguable, and David Brink has argued against it, but I think Hare is right. Imagine that on the cannibals’ island there is a cannibal who acts like all the other tribespeople, but agrees with us about the truth of (what appear to be) first-order moral claims, like “Eating people is wrong.” She is the one who is using the word ‘wrong’ with a different meaning, not the people who disagree with us about the sentences.

We see this pattern repeat across the board when it comes to philosophically contentious terms. A, B and C are in a causation seminar, discussing tricky cases of alleged causation by double prevention of would-be pre-empters and the like. A and C utter the same sentences in the seminar, while B often dissents from those sentences, and indeed asserts similar sentences with ‘not’ inserted at a key place. This is little or no evidence that A and B mean different things by ‘cause’. But as they are leaving the room, A and C have the following exchange.

A: Could you turn out the light?
C: That’s a good idea. Let me go get a ladder.
A: Why are you getting a ladder?
C: To remove the light.
A: Why not just flick the light switch?
C: Why do that?
A: Because flicking the light switch causes the light to turn off.
C: I agree that flicking the light switch causes the light to turn off, and I am trying to turn the light off, but I don’t see that’s a reason to flick the light switch.

It seems to me that A has good reason, almost conclusive reason, to think that she and C mean something different by ‘cause’, even though they agree about all the particular applications of the word. Again, it’s the person who agrees about the role of causation in its connection to action, i.e. B, who has the common concept, not the one who agrees about the cases.

The last case to look at is belief. Consider the false belief experiments that Alan Leslie and colleagues have run to test for the presence or absence of a theory of mind in autistic children. In the original experiment, the authors described the question “Where will Sally look for her marble?” as The Belief Question. This is quite striking, I think. [Note to self: check whether the following questions have actually been asked in an experiment.] I think it’s right that this is a way to work out what the child believes about what the doll believes. But let’s imagine the experiment was done slightly differently. In particular, imagine the Belief Question was broken into two stages, and here are D and E’s answers.

Q: Where does Sally believe the marble is?
D: In the basket.
E: In the box.

Q: Where will Sally look for the marble?
D: In the box.
E: In the box.

In this case, D answers the question that involves the word ‘believe’ in the same way that we would, and E does not. But the two questions taken together indicate that it’s E, not D, who shares our concept of belief. She has a false theory of belief, to be sure, but she means the same thing we do by ‘belief’.

At this point in the full paper I’ll include something that purports to be an argument for why the knowledge case should be taken to be analogous. In particular, what determines whether people have the same concept of knowledge is whether they agree that knowledge is the stuff that can be used as premises in practical reasoning, not how they apply the term to particular cases.

So I don’t think the right response is to say that WNS’s subjects mean different things by their words.

Having said that, there is a possible linguistic difference that might account for the differences between the replies. In WNS’s experiments, there was a strong contrastive focus on ‘knows’. For one thing, ‘knows’ was not a possible answer, only ‘really knows’. For another, ‘knows’ was contrasted with ‘only believes’. Consider the following case.

Fred and George are both presented with bowls of water of equal temperature and asked to put their hands in the water to see what temperature the water is. The bowls are both rather warm, though not unbearably hot. Fred is asked, “Is the water hot?” and says yes. George is asked, “Is the water hot, or only warm?” and says warm. This certainly isn’t evidence that Fred and George mean different things by ‘hot’. It isn’t even evidence that they disagree about which things are hot. For the water has to be hotter for ‘hot’ to be the right answer to George’s question than it has to be for the answer to Fred’s question to be yes. Or at least that’s the case in my dialect.

Now imagine Hermione is presented with a bowl of water of the same temperature, and asked George’s question, and responds hot. Is this a sign that she disagrees with George? Well, not necessarily. At least, it isn’t a sign she disagrees about the non-linguistic facts. It might just be that Hermione doesn’t adjust for focal effects as much as George does. That is, it might be the case that, like Fred and George, if she were asked straight up “Is the water hot?”, she would say yes, and the presence of the contrast option ‘warm’ doesn’t change her answer the way it changes George’s.

This is not to say that this is what is going on in the WNS experiments; it’s just a hypothesis. If we had independent evidence of cross-linguistic differences in responses to contrasting options, it might even look like a plausible hypothesis. But there are some avenues to explore here that haven’t been adequately addressed.

One reason to take this hypothesis seriously is that much analytic epistemology seems to rely on very odd stress patterns in knowledge ascriptions. Imagine a case where Jill has pretty good, but not conclusive, evidence to support her (true) belief about where her car is parked, but Katie has completely forgotten where her car is parked. It isn’t obvious that the intuitively correct answers to the following two questions are the same.

Does JILL know where her car is parked? (Yes)
Does Jill KNOW where her car is parked? (No?)

Take away the odd stress on ‘know’ (odd because that’s not how we talk in real-life settings, where the stress is almost always on the subject or some other part of the argument), and sceptical intuitions are further from the foreground.

Back to WNS. The main point I want to defend is that intuitions about particular cases are not of that much evidential value in doing conceptual analysis. Intuitions about borderline particular cases are of even less value. Moreover, most of the cases epistemologists look at, including the cases that WNS investigate, are somewhat borderline. So even if everyone’s intuitions lined up with these cases, we should be suspicious of their evidential value.

OK, more to say about this later. But the payoff is that conceptual analysis shouldn’t rest on hard cases, but on considerations about conceptual role, i.e. on what we want knowledge for. There’s some reason to think this pushes us towards a weaker conception of knowledge than is standard, but I’m not sure how far I want to run that line.

Posted by Brian Weatherson in Workbench

8 Responses to “Intuitions”

  1. jon kvanvig says:

    One quick thought here, Brian. The arguments you give for concluding different concepts all involve defeasible reasoning. So even though you conclude on the basis of various response patterns that different concepts are involved, that assessment might have to be revisited in light of learning more. Take the light switch case. It’s compatible with the story as told that the guy wants to use the ladder for some other purpose, e.g., flipping light switches is taboo.

    This doesn’t undermine your argumentative strategy, but makes it much harder to ever conclude on the basis of conceptual role factors that different concepts are involved.

  2. wo says:

    I don’t see a big difference between two concepts playing different roles and two concepts intuitively applying to different things. When people disagree about Gettier cases, some of them accept an inference from (a description of) such and such circumstances to “x knows”, while the others don’t. Isn’t that a difference in conceptual role? It seems that you mean something special by “role” (“what we want knowledge for”), but it is unclear to me what that is.

  3. Brian Weatherson says:

    Jon,

I agree this won’t be a conclusive reason. It’s always going to be hard to find indefeasible reasons to think someone has a different concept. But if we’re doing the metaphysics of concepts as well as the epistemology, I think we can say that this kind of thing might be what makes it the case that the speakers have different concepts.

By the way, the move to talk of reasons was meant to provide some cover against taboo worries. Someone who thinks flicking light switches is taboo might still think there’s a reason to do so, but the reason is blocked or overridden by the taboo. This won’t cover all cases, but it’s a start.

    Wo,

I don’t have a precise definition of the distinction I’m after, because I don’t think it’s a sharp distinction. But I think we can understand a continuum of cases, with thought experiments at one end and, at the other, principles that connect the relevant concept to either (a) other concepts or (b) action. What I want to stress is that the latter end of the continuum is where we should focus our efforts in conceptual analysis.

  4. Andrew says:

    Brian,

    I agree that it’s often very profitable to think about why we care about the nature of a given concept – what interests and purposes employment of that very concept serves. And it’s particularly useful when the issue has been previously under-explored, giving us another tree to bark up.

    But I’m not sure that that project is usefully contrasted with an interest in thought experiments/borderline cases/hard cases. Understanding the role of a concept is, as you say in your reply to Wo, likely to involve formulation of certain principles linking that concept to other concepts, and to action, and to e.g. non-conceptual sensitivities and capacities. But aren’t we going to want to evaluate the scope and plausibility of such principles? And isn’t such evaluation going to bring back in the very test cases and thought experiments – cases where a putative conceptual connection is stress-tested – that the move to conceptual role was designed to avoid?

Perhaps all you’re claiming is that the appeal to intuitions about borderline cases is at least staved off for the moment. But if, as you suggest, such intuitions aren’t of much evidential value, and the epistemic good standing of the principles you appeal to is ultimately grounded on just such intuitions, then it’s difficult to see what the advance is. It would be different if we had reason to think that philosophical debate about such principles wouldn’t quickly come to mirror existing debates in epistemology, philosophy of action, metaphysics, etc., where appeal to intuitions about hard cases is ubiquitous and seemingly methodologically indispensable. Similarly, it would be different if we thought that such principles were likely to command broad agreement, so that they could be secured evidentially without appeal to what we would say in hard cases. But I’m not sure what such reason is supposed to amount to.

  5. Heath White says:

    The notion of “conceptual role” would seem to include, at minimum, inferential role. That is, what is taken to be a reason for a knowledge-statement, and what k-stmts are taken to be reasons for (either theoretical or practical). If there are widely different applications of “knowledge” among two groups, you can expect that they have different inferential standards for k-stmts. To that extent I agree with Wo.

    I think that Brian, however, has in mind a particular sort of inferential role, perhaps one focused on practical reasoning. That would be a kind of role-essentialism: this sort of inference counts for difference in concepts, that sort of inference doesn’t. That’s ok, I’m just curious to see it spelled out.

  6. Jonathan Weinberg says:

    Hi Brian — I was going to put a comment in here, but (a) it got way too long, and (b) this seemed an excellent opportunity to promote the experimental philosophy blog. So I’ve got a longwinded reply up here.

  7. Alexander Crawford says:

    Brian,

    I think you should clarify your “Fred and George water” example. Temperature is determinable, conventionally defined, and you have stipulated that each bowl of water has the same temperature. The problem arises in that there is no clear definition as to what constitutes “rather warm” or “not unbearably hot”, and although it’s clear you intend the terms to represent levels of degree, the qualifications “rather” and “not unbearably” confuse this test.

    Fred is asked if the water feels “hot”, period. He’s not given any definition or qualification, and has to guess at what is meant by “hot” (not cold? above room temperature? drinkable without discomfort? Baby bottle warm?).

George is asked to make a subjective and personal determination as to what distinguishes “hot” from “warm”. Because “rather warm” could easily be equally “not unbearably hot”, you cannot assume George’s opinion is a test of the accuracy of Fred’s answer, nor that Fred’s answer defines the difference between “warm” and “hot”.

    A. Some rather warm water is also hot water.
    B. All “not unbearably hot” water is hot.
    C. Some “not unbearably hot” water is “rather warm” water.
    D. Not all “not unbearably hot” water has a higher temperature than “rather warm” water.
