August 15th, 2008

Intuition isn’t Unreliable

At least since Robert Cummins’s paper Reflections on Reflective Equilibrium in Rethinking Intuition, a lot of people have worried that intuition, that old staple of philosophical argument, is unreliable. This is fairly important to the epistemology of philosophy, especially to intuition-based epistemologies of philosophy, so I think it’s worth considering.

(Worries about intuition obviously didn’t start 10 years ago, but the particular worry about reliability does become pronounced in Cummins. I suspect, though I don’t have the relevant papers in front of me, that there are related worries in earlier work by Stich. Note that this post is strictly about reliability, not a general defence of intuition in philosophy.)

The happy news is that there’s a simple argument that intuition isn’t unreliable. I think it isn’t clear whether intuition simply is reliable, or whether there’s no fact of the matter about how reliable it is. (Or, perhaps, that there is no such thing as intuition.) But we can be sure that it is not unreliable.

Start with a fact that may point towards the unreliability of intuitions. Some truths are counter-intuitive. That’s to say, intuition suggests the opposite of the truth. I’m told it’s true that eating celery takes more calories than there are in the celery, so you can’t gain weight by eating it. If true, that’s pretty counterintuitive. And just about everything about counter-steering strikes me as counterintuitive. So those are some marks against intuition.

But now think of all the falsehoods that would be even more counterintuitive if true. If you couldn’t gain weight by eating steak, that would be really counterintuitive. Intuitively, steak eating is bad for your waistline. And that’s true! Intuitively, you have less control of a motorbike at very high speeds than at moderate speeds. And that’s true too! It would be really counterintuitive if remains from older civilisations were generally closer to the surface and easier to find than remains from more recent civilisations. And that’s false – the counterintuitive claim is false here.

In fact almost everywhere you look, from archeology to zoology, you can find falsehoods that would be very counterintuitive if true. That’s to say, intuition strongly supports the falsehood of these actual falsehoods. That’s to say, intuition gets these right.

To be sure, most of these cases are boring. That’s because, to repeat a familiar point, we’re less interested in cases where common sense is correct. And here intuition overlaps common sense. But that doesn’t mean intuition is unreliable; it’s just that we don’t care about its great successes.

There are so many of these successes, so many falsehoods that would be extremely counterintuitive if true, that intuition can hardly be unreliable. But maybe it’s not actually reliable either. I can think of two reasons why we might think that.

First, there may be no fact of the matter about how reliable intuition is.

It’s counterintuitive that there can be proper subsets of a set that are equinumerous with that set. And that’s true, so bad news for intuition. It would be really counterintuitive if there could be proper subsets of a set of cardinality 7 that are also of cardinality 7. But there can’t be, so good news for intuition. And the same for cardinality 8, 9, etc. So there are infinitely many successes for intuition! A similar trick can probably be used to find infinitely many failures. So there’s no such thing as the ratio of successes to failures, so no such thing as how reliable intuition is.
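For small cases, the finite-cardinality claim is easy to check by brute force. Here is a minimal, purely illustrative Python sketch (nothing the argument depends on) that enumerates every subset of a seven-element set and confirms that no proper subset is equinumerous with it:

    # Purely illustrative check of the claim above: no proper subset of an
    # n-element set (here n up to 7) has the same cardinality as the set itself.
    from itertools import combinations

    def equinumerous_proper_subset_exists(n):
        """True iff some proper subset of {0, ..., n-1} has exactly n elements."""
        full = frozenset(range(n))
        subsets = (frozenset(c) for size in range(n + 1)
                   for c in combinations(full, size))
        return any(s != full and len(s) == len(full) for s in subsets)

    for n in range(1, 8):
        print(n, equinumerous_proper_subset_exists(n))  # prints False for every n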

On the other hand, perhaps we’re counting wrongly. Perhaps there is one intuition that covers all of these cases. Perhaps, though it isn’t clear. It isn’t clear, that is, how to individuate intuitions. Arguably our concept of an intuition isn’t precise enough to give clean rules about individuation. But if that’s right, there again won’t be any fact of the matter about how reliable intuition is.

This isn’t, I think, bad news for using intuition in philosophy. Similar arguments can be used to suggest there is no fact of the matter about how reliable vision, or memory, is. But it would be absurd on this ground to say that vision, or memory, is epistemologically suspect. So this doesn’t make intuition epistemologically suspect.

Second, there might be no single such thing as intuition. (I’m indebted here to conversations with Jonathan Schaffer, though I’m not sure he’d endorse anything as simple-minded as any of the sides presented below.)

It would be counterintuitive if steak eating didn’t lead to weight gain. It would be counterintuitive if Gettiered subjects had knowledge. In both cases intuition seems to be correct. But perhaps this is just a play on words. Perhaps there is no psychologically or epistemologically interesting state that is common to this view about steak and this view about knowledge.

If that’s so, then perhaps, just perhaps, one of the states in question will be unreliable.

I doubt that will turn out to be the case though. Even if there are distinct states, it will still turn out that each of them gets a lot of easy successes. Let’s just restrict our attention to philosophical intuition. We’ll still get the same results as above.

It would be counterintuitive if torturing babies for fun and profit was morally required. And, as it turns out, torturing babies for fun and profit is not morally required. Score one for intuition! It would be counterintuitive if I knew a lot about civilisations on causally isolated planets. And I don’t know a lot about civilisations on causally isolated planets. Score two for intuition! It would be counterintuitive if it were metaphysically impossible for me to put off serious work by writing blog posts. And it is metaphysically possible for me to put off serious work by writing blog posts. 3-0, intuition! I think we can keep running up the score this way quite easily, even if we restrict our attention to philosophy.

The real worry, and this might be a worry for the epistemological significance of intuition, is that the individuation of state types here is too fuzzy to ground any epistemological theory. For, once any kind of intuition (philosophical, epistemological, moral, etc.) is isolated, it should be clear that it has too many successes to possibly be unreliable.

Posted by Brian Weatherson in Uncategorized



24 Responses to “Intuition isn’t Unreliable”

  1. petemandik says:

    I’m not sure if this is related to your remarks about the individuation of state types in your last paragraph, but consider the following.

    I’m not sure how much credit intuition really deserves with respect to the sorts of known truths you present. Take something that I know by perception. I know that my cat’s in my office with me by perceiving, not by intuiting, that my cat’s in my office with me. I’m happy to grant that it would be pretty unintuitive if it turned out to be false that, despite perceptual appearances to the contrary, my cat were actually absent. But it seems a bit odd, a bit unintuitive, to say that intuition, instead of perception, gets the credit here.

  2. Brad says:

    It seems to me that this kind of argument is only going to work for restricted domains, even if the worries about counting and individuation can be deflected. If we restrict ourselves to certain of the sciences, we get enough counter-intuitive facts, or more often facts on which intuition is simply silent, to dent the reliability of intuition there. If this is right, then a defender of the reliability of philosophical intuitions along the lines you suggest needs an argument that philosophy is relevantly similar to the domains in which intuition is reliable.

  3. Joachim Horvath says:

    Even if intuition in general, or philosophical intuition in general, is not unreliable, it still isn’t really settled why the specific use of intuitions to resolve fundamental philosophical disagreements by appeal to thought experiments should be deemed reliable as well. In analogy with perception, most intuitions about thought experiments may be more like looking at small objects from a great distance than like looking at middle-sized objects right in front of oneself in good lighting conditions. So, just like the reliability of the latter kind of perception can’t simply be extrapolated to the former kind, the reliability of intuitions about close and not very complicated possibilities maybe cannot simply be extrapolated to distant and rather “weird” possibilities (I take it that Weinberg and other “restrictionist” experimental philosophers argue for something like that analogy, e.g. in “Intuition and Calibration” – and they support the claim that philosophical intuitions of the latter kind are in fact unreliable with their experimental data about cross-cultural variation, instability, etc.).

  4. eschwitz says:

    Another comment in the spirit of Brad and Joachim: Intuition will clearly be reliable for some things, but the extension to matters of the sort philosophers dispute is questionable.

    Here’s an example of the kind of thing I have in mind: Is there any reason to think our intuitions about Searle’s Chinese Room or Block’s Chinese Nation would be a trustworthy guide to the genuine presence or absence of consciousness in such cases?

  5. Brian Weatherson says:

    Thanks everyone for the great contributions!

    @Pete,

    I don’t think intuition should be given credit for these simple successes. I’m not a simple reliabilist, so I don’t think mere accuracy, as it were, is credit-worthy. But I do think they should be counted as accurate predictions.

    Perhaps it’s easiest to use an example. Imagine I’m predicting horse races using a magic 8-ball. This is a terrible method, and it doesn’t deserve any epistemological credit. But still if we’re measuring how accurate the 8-ball is, we count wins as successes, even if we know they are lucky guesses. For all we’ve said here, intuition might be like a successful magic 8-ball, but one thing you can’t say against the successful magic 8-ball is that it is unreliable.

    @Brad,

    I’m very sceptical that intuition does that badly even when we restrict our attention to the sciences. Sure there are lots of counterintuitive results. But there are even more ways in which the results could have been counterintuitive. I think it’s just attention bias that makes us think more about the counterintuitive results.

    For example, it’s counterintuitive that solids are largely empty space. (That is, that solids have roughly the internal structure that, 200 years ago, we thought gases had.) It would have been even more counterintuitive if gases had the internal structure that, 200 years ago, we thought solids had. I think we can find examples like this basically anywhere we look.

    @Joachim,

    I think we’re again seeing attention bias. Let’s do a cross-national survey on whether it is morally required to torture babies for fun. I bet the results won’t be too far from the correct answer. Or a survey on whether an action like my writing this post caused the Normans to win the Battle of Hastings. We’ll get the right answer again. Perhaps, if surveys are well designed, we can get some evidence against some particular use of intuition. (Although most arguments I’ve seen to that effect require some contentious, and frankly implausible, theories about the epistemological significance of disagreement.)

    But I don’t see any grounds here for drawing any conclusions about kinds of intuition.

    @Eric,

    Perhaps there isn’t a positive reason – I’m really just playing defence here. I don’t think there is any reason to think that intuition is in general unreliable, and I don’t see any particular reason to think that ‘philosophical’ intuitions fall into a special class. It would be nice to have a positive reason to trust intuition, but I don’t see a reason to distrust it on general grounds of (un)reliability.

  6. Joachim Horvath says:

    Brian,

    it is interesting that none of the cases you cite in your response, about the moral wrongness of torturing babies or about what caused the outcome of the Battle of Hastings, are really that outlandish or removed from our everyday capacities for moral and causal judgment. In fact, babies have actually been tortured, and it is part of our ordinary conception of causality that there is no backward causation. Compare these cases with swampman or phenomenal zombies, which are not even nomologically possible (at least the zombies aren’t). Thus, even if intuition in general is reliable, there is still a pressing question why it should be reliable in cases for which it has not plausibly been ‘designed’ or especially trained. Analogously, there is a real problem for reliability when we try to apply visual perception in conditions to which it is not at all adapted. So, I don’t think that this worry can simply be shrugged off as an instance of attention bias. Furthermore, I don’t think that arguments from experimental philosophy rest so much on the fact that people disagree on certain cases. Rather, the troublesome datum is that philosophical intuitions seem to vary (in a priori unpredictable ways) as a result of seemingly irrelevant factors, like cultural background or order of presentation.

  7. Brian Weatherson says:

    But I bet there are a ton of outlandish cases where intuition is clearly on the right side as well. In all sorts of sci-fi stories there are clear cases of right and wrong.

    Is it morally required to torture E.T. for fun? Intuition says no, and intuition is right.

    Imagine it were possible to cause every person in Berlin intense pain by whistling any tune from Nirvana’s “Nevermind” while holding a bottle of whiskey. Would doing so be morally permissible? Intuition says no, and intuition is right.

    The second case certainly is, and the first case might well be, nomically impossible. But intuition is pretty reliable with respect to them. And we can multiply such cases endlessly. If there are reasons to discount intuition in cases of philosophical significance, it can’t be a general unreliability in far-fetched cases.

  8. Joachim Horvath says:

    Assume there is something like a unified human capacity for intuition. Then, it seems, every healthy adult human being should possess a well-functioning instance of that capacity. And if that capacity is a reliable one, as you argue, then it should track the truth about its subject matter in most cases. As a consequence, there should not be too much variation in the output of that faculty, and it should be relatively easy to reach universal agreement on what the right and what the wrong intuitions are. Now, in most of the very few cases when philosophical intuitions have been studied empirically, there has been considerable variation and surprisingly little agreement on the right intuitive response to those cases. Furthermore, our philosophical intuitions seem to be unstable over time and to vary systematically as a result of irrelevant factors, like affective attitude or socio-economic background. I find all of this hard to reconcile with your claim that intuition in general is reliable. Why not take it as a reductio of this assumption instead? Also, even though most of your examples of reliable intuitions seem a priori convincing, it does not seem unreasonable to expect, on the basis of the experimental results that have been collected so far, that they might be subject to ordering effects and cultural variation as well. Finally, when you claim about the Berlin-Nirvana case that “Intuition says no, and intuition is right”, isn’t that a terribly circular argument? Because, how else than on the basis of intuition do you know that causing the citizens of Berlin pain in this way is morally wrong?

  9. Brian Weatherson says:

    “And if that capacity is a reliable one, as you argue, then it should track the truth about its subject matter in most cases. As a consequence, there should not be too much variation in the output of that faculty, and it should be relatively easy to reach universal agreement on what the right and what the wrong intuitions are.”

    I don’t see why that even comes close to following. If a faculty has outputs in 1,000,000 cases, and it is right 99% of the time, then any person could be wrong about 10,000 cases, and any two people could disagree about 20,000 cases.

    Now if you ask me, 99% reliable is highly reliable. And 20,000 cases is a lot to disagree about. So I don’t see how you’re getting from premises to conclusion.
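    To spell the arithmetic out, here is a minimal illustrative sketch using just the figures above (1,000,000 cases, 99% accuracy, and the worst case where two people’s errors don’t overlap):

        # A minimal sketch of the arithmetic above, using the purely illustrative
        # figures of 1,000,000 cases and 99% accuracy.
        cases = 1_000_000
        reliability = 0.99

        errors_per_person = round(cases * (1 - reliability))  # 10,000 cases each person gets wrong
        max_disagreements = 2 * errors_per_person              # 20,000 if their error sets are disjoint

        print(errors_per_person, max_disagreements)  # 10000 20000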

    And if the argument here is going to turn on scepticism about the morality of torturing people for fun, then I don’t understand the rules of the game we’re playing. I’m much more confident that it’s wrong to torture people for fun than I am about the epistemological significance of disagreement, reliability etc etc.

    In fact I’m more confident that it’s wrong to torture people for fun than I am that people are telling the truth about the results of experiments they run, so if we’re getting to that level of sceptical doubt I think I don’t believe there’s literally any evidence from experimental philosophy at all.

  10. Joachim Horvath says:

    Ok, maybe I’m overstating the case of the intuition-skeptic a bit here – on the other hand, I just try out arguments that actual intuition skeptics use as well, so if they aren’t any good, it seems very important to find out where they go astray…

    So, let’s say you are right that we cannot expect little variation and disagreement in intuitions even if intuition has a high overall degree of reliability. Still, isn’t it kind of troubling that intuitions seem to vary in systematic and predictable ways as a consequence of factors that should not influence them, like the order in which one considers certain hypothetical cases? Further, isn’t it quite troublesome that we seem to have no ability to anticipate such influences from the armchair? So, even if armchair intuition is in fact highly reliable, we might nevertheless lose trust in it because we are pretty much in the dark about when and why it tends to fail. Here is a disanalogy with perception that may be illuminating: Perception can tell us, at least in principle, how perception works, when and why it fails and what its limits are – intuition, on the other hand, does not tell us similar facts about intuition, which seems to weaken its trustworthiness as a basic source of evidence or justification.

    Also, I am extremely confident as well that torturing for fun is wrong, but maybe moral cases are not such a good example here, given how emotion-laden they are. So, let’s consider another intuition-classic. I am quite confident that Gettier subjects do not know, but it seems a bit crazy to me to discard all scientific evidence against the trustworthiness of epistemic intuitions because I am so much more confident about my own epistemic intuitions than I am about (any kind of) scientific evidence. Rather, I would regard such a consequence as a good reason against the form of epistemic individualism that licenses it…

  11. eschwitz says:

    One UCR student (Felipe Leon) has been working on that very question in his dissertation. He distinguishes between “high-flying” and “low-flying” modal claims. As I think of it, high-flying claims are claims about weird and remote possibilities; low-flying claims are either not weird or not remote. (Obviously, some substance needs to be put on “weird” and “remote” here.) The case for the reliability of our modal intuitions for low-flying modal claims is good. It’s hard to see, though, why our intuitions about high-flying modal claims would be any good. Unfortunately, many philosophical thought experiments rely on high-flying modal claims (Twin Earth, Chinese Room, super blindsight, person-duplicating transporters, etc.)

  12. jonathanweinberg says:

    I pretty much agree with Brian’s main claim here — though I think that it tells us more about the vagueness of the category of intuitions, and about a kind of theoretical limpness of the notion of reliability, than it does about the merits of our philosophical methods (which Brian does make clear that he’s not really defending here per se). For reasons not too dissimilar from those offered above, esp. along the lines in comment #9, I focused on what I called “hopelessness” rather than unreliability in my “How to Challenge Intuitions Empirically Without Risking Skepticism” paper.

    I also think that it’s better to take particular intuition-deploying practices as targets for epistemological analyzing, rather than intuition tout court.

  13. dtlocke says:

    Hi Brian,

    I think maybe you misunderstood Pete’s point at comment #1 in your response at comment #5. I don’t think Pete was saying anything about intuition getting “credit” in your sense. Rather, he was just saying that the case he presented should not be counted as a case where intuition was reliable – since, presumably, the relevant belief was formed on the basis of perception, not intuition. I think that he was suggesting that something similar is going on in your cases. For example, did I form my belief that steak eating is bad for my waistline by intuition? It doesn’t seem so. In this case, the belief was probably formed by testimony. (It is, I think, false. But I’ll play along.)

    Perhaps you’ll counter that it doesn’t matter whether the belief was formed on the basis of intuition: still, it is intuitive and true, and hence counts as a positive case towards intuition’s reliability. I agree. But in philosophy I think we are interested in the reliability of intuition used as a belief-forming method. In other words, what we want to know is what percentage of the time a belief-that-is-formed-on-the-basis-of-intuition is true.

  14. Joachim Horvath says:

    Jonathan,

    you say that “it’s better to take particular intuition-deploying practices as targets for epistemological analyzing”, but it seems to me that Brian could – and in fact does (see comment #7 above) – run just the same kind of argument with regard to the specific philosophical practice of using intuitions about hypothetical cases as evidence, on which you focus in your paper. So, if his argument really shows that intuitions as used in philosophy are reliable, then I think that to nevertheless criticize them as “hopeless” seems to lose a lot of its epistemological bite. Because, consider what it means for a practice to be reliable: it means that most of the output intuitions/beliefs of that practice are true. So, although we may of course always strive for more, say “hope” or “certainty”, its being a reliable practice lifts the-philosophical-practice-of-appeal-to-intuitions well above many, many other human practices, inside and outside of science.

  15. jonathanweinberg says:

    Eric, I’ll look forward to hearing more about your student’s work down the line!

    Joachim, I don’t think that Brian’s argument can be run so easily against the “practices” version, because that version would deprive him of two of the main resources he has drawn on. First, and most centrally (and along the lines of your comment #6), it deprives him of the appeal to the vast range of ordinary cases that serve to make intuition’s track-record look so good. Such cases just do not play much role in philosophical debate; claims like “torturing kittens for fun is wrong” tend to show up in the literature not to help make the case for one theory of normative ethics over another, but rather as examples of obvious moral truths, or of the sorts of claims that one has to make some sense of in one’s metaethics, etc.

    Second, it also deprives him of the kind of move he makes in his comment #7 — because those sorts of claims don’t seem to play much of a role in philosophical practice, either. (Maybe the ET case could in the right circumstances, say if one is looking to attack a species-centric account of ethics, perhaps in a debate over eating meat.) Yes, such cases could be multiplied endlessly, but as a matter of fact, doing so doesn’t seem to be something that can do any philosophical work for us. Once one gets the recipe for making variations of the “torturing X for fun” cases, they don’t seem to count any more for or against a philosophical theory than any one version of them could. Brian’s cases are also different from, say, the cases that Eric mentions in #4, in that it seems like they just collapse down to the quotidian versions anyway — one can divide through by the far-fetchedness, and see them just as gussied-up versions of ordinary cases, substituting “ET” for “kittens”. But there are no equivalent ordinary cases for Block or Searle.

    In general, really obvious & commonsensical cases (whether quotidian or far-fetched) can’t do much work for us in our practices, because mostly we’re mooting among the highly restricted & elite set of theories that philosophers find worthy of consideration and defense. And these theories will most often be ones that one cannot choose between on obvious & commonsensical grounds. Most theories of normative ethics get all these cases right, or have really good resources for accommodating the cases of that sort that they don’t seem to get right. So all the cases that Brian needs to swell the ranks of the highly probable intuitions are also cases which are unlikely to actually ever get used by anyone in our practices.

    (Maybe Gettier is a counterexample (and thus a meta-counterexample!) here, if it has the status of a not-previously-noticed obvious case. I am not inclined to grant it that status, but even so, it seems to me unusual in this regard. Contrast it with, say, the Gypsy Lawyer case, or the various Grabits, or the cases in Jennifer Lackey’s learning from words arguments…)

    I also have to disagree with your claim that “if his argument really shows that intuitions as used in philosophy are reliable, then I think that to nevertheless criticize them as “hopeless” seems to lose a lot of its epistemological bite.” I go to some length to argue that hopefulness is not just a bit of epistemological lagniappe, but really a very central characteristic of trustworthy evidential practices. It should definitely not be put in the same pile with certainty, which I agree is something that’s merely nice to get when you can, but not an epistemic necessity.

    Part of the problem is that it’s just not right to say that mere reliability does very much on its own to lift philosophical intuitionizing above most human epistemic practices. I suspect that most such practices will count as reliable too, along much the same lines as those offered here by Brian. Most astrological predictions come true, after all (because they are sufficiently open-ended that they are unlikely to come out false). I suspect that most uses of things like the sortes Virgilianae were reliable, too, since much like intuition, they were in part a way to channel one’s own common sense. Most epistemic practices in the history of the species have been carried out in a manner continuous with common sense, and thus will get the benefit of the same kind of track-record argument. Reliability really is rather cheap, which is part of why we should look to other notions to do our epistemological heavy lifting.

  16. samliao says:

    Joachim,

    You say, “Because, consider what it means for a practice to be reliable: it means that most of the output intuitions/beliefs of that practice are true.”

    I think this picks out one, descriptive, sense of “reliability”, but there is another, normative, sense. Consider when I say that baseball umpires’ judgments of foul balls / home runs are not reliable enough, so MLB should employ instant replay. I don’t mean that they’re not getting the call right most of the time. I’m saying that they’re not getting the call right often enough, especially in borderline cases. That is, they’re not meeting some standard I believe to be appropriate. When we talk about a practice being reliable in everyday usage, I believe, we often actually have the normative sense in mind.

    So this is the normative sense of “reliability”: as meeting some standard. I agree with Jonathan that descriptive reliability really is rather cheap. But I don’t think normative reliability is so cheap. Perhaps this is because there are other important notions underlying the normativity that are doing the epistemological heavy lifting.

    (I must confess to not having read Jonathan’s paper. So perhaps what I call normative reliability is the same as his “hopefulness”. Nevertheless, I think the normative notion is equally deserving of the term, as evidenced by our everyday usage.)

  17. Joachim Horvath says:

    Jonathan,

    as to your second point, you seem to be committed, now, to the claim that it is really just a sub-class of intuitions about hypothetical cases that is shown to be unreliable (and hopeless as well) by the results of experimental philosophy, namely the sub-class of intuitions about, as we may call them, “irreducibly far-fetched” cases. I agree that, intuitively, there seems to be a difference between the irreducibly far-fetched cases and the reducibly far-fetched ones, but I’m not sure what this intuition of mine really counts for. For, the distinction seems to be drawn with the sole purpose of isolating and criticizing the specific use of intuitions in philosophy – at least, I can’t see any other motivation for introducing such a distinction…

    Actually, I have a similar worry about hope, for the distinction between “hopeful” and “hopeless” practices is new and formerly unrecognized in epistemology as well (which you also emphasize in your paper), and you introduce it with the main motivation of “challenging intuitions empirically without risking skepticism”, as already the title of your paper indicates. I have to admit, however, that you do a great job in trying to convince us that “hope” is not just an ad hoc epistemic category. Nevertheless, you shoulder a pretty heavy argumentative burden (which is, of course, highly admirable), if you build your empirical challenge to the philosophical use of intuitions on (at least) two formerly unrecognized yet absolutely crucial theoretical distinctions…

  18. Joachim Horvath says:

    Shen-yi,

    I’m not sure that the distinction you highlight is really a distinction between a descriptive and a normative sense of “reliable”. Rather, I think that it is the distinction between a basic sense, which may just be something like “more often true than false”, and a more demanding sense, which requires a given practice/method to meet a requirement that goes (far) beyond “more likely to be true than false”. After all, I can also use the basic sense to criticize someone as being unreliable, namely if s/he gets something more often wrong than right. And I can use the more demanding sense in a purely descriptive way, once you give me a precise specification of what the relevant standard actually demands; used in this way, the statement “The umpires’ judgments are unreliable” seems to be a purely descriptive one.

  19. jonathanweinberg says:

    “you seem to be committed, now, to the claim that it is really just a sub-class of intuitions about hypothetical cases that is shown to be unreliable (and hopeless as well) by the results of experimental philosophy, namely the sub-class of intuitions about, as we may call them, “irreducibly far-fetched” cases”

    I’m not sure that I need to be positively committed to all that. First, I’m really not committed to any sort of unreliability claim at all, though I do think that, with regard to Brian’s argument here, it wouldn’t work so well were it offered as a defense of the reliability of the philosophical practice. With the common-sense-type cases included in the mix, reliability comes along pretty easily. Once we subtract off all the common-sense-type cases, then I think it probably just becomes fairly hard to guesstimate the reliability.

    As for the quotidian/far-fetched distinction, I agree that we don’t want to have to rest things on our rough sense of it. (Maybe Eric’s student will be able to help us here, though it sounds from Eric’s brief description like they might be using a distinction between types of possibilities, and I have in mind something more psychological than modal.) There’s no reason to think the distinction won’t be amenable to scientific study, though. Here’s one working hypothesis: where our concepts have a prototype structure, “far-fetched” cases are ones that involve a significant tension between different features. One prediction of that hypothesis is that such cases will be more amenable to framing, order effects, etc., because subtle shifts in the weighting of the different features could cause a flip in the prototype’s determination. But that is all speculation on my part way in advance of the necessary empirical work!

  20. jonathanweinberg says:

    I should note that this
    “Once we subtract off all the common-sense-type cases, then I think it probably just becomes fairly hard to guesstimate the reliability.”
    is consistent with Brian’s main argument, in its maybe-reliability-of-intuition-is-ill-defined branch. It’s also consistent with a denial of Brian’s argument, in which the reliability of the practice is taken to be hard to determine but rather substantially lower than that of intuition tout court, and thus suspect. Just to be clear, I’m not trying to attack Brian’s conclusion, even on a “philosophical practices” version — I suspect it’s right, at least for some readings of “reliable” (if Sam is right, then Brian might be wrong for other readings). My point is just that the argument can’t just be run mutatis mutandis on the “philosophical practices” version.

  21. petemandik says:

    For what it’s worth, I think dtlocke understands my point correctly, though I’m agnostic about whether Brian misunderstood it in his response.

  22. Esbenpetersen says:

    Petemandik and dtlocke,

    I think that you are right in pointing out that perception rather than intuition might deserve the credit for the belief about the cat. Moreover, the case might also invite the worry that the supposed intuition is actually caused by the belief, and not by an independent process of intuiting. If the belief is there because of perception, then the intuition might be there because of the belief. In that case it seems that, roughly speaking, the intuition will simply inherit whatever reliability we ascribe to the belief. Obviously, this does not make the belief unreliable, but it certainly seems to imply that checking one’s belief against one’s intuition would not really be worthwhile in such cases. So I think our interest should be in the reliability of intuitions that could not be taken to reflect already held belief in this way. The problem, of course, lies in determining when this is the case.

    And to dtlocke: you say “I think we are interested in the reliability of intuition used as a belief-forming method. In other words, what we want to know is what percentage of the time a belief-that-is-formed-on-the-basis-of-intuition is true.” I am not sure that this is right. In particular, I do not see why we shouldn’t simply be interested in the reliability of intuitions as such. If an intuition that p is something like “an intellectual seeming that p”, as Bealer suggests, then it seems that intuitions may be considered reliable independent of their impact on our beliefs. For instance, contradictory intuitions about some subject matter, as in the case of the sceptical puzzle, might lead one to suspend belief. Arguably, this should make one interested in the reliability of the intuitions themselves, since this would seem to be a case with no good candidate for the role of “intuition used as a belief-forming method”. However, I am not sure that this is more than a verbal disagreement.

  23. dtlocke says:

    “However, I am not sure that this is more than a verbal disagreement.”

    It is more than verbal disagreement and it is, I think, more accurate than what I said! Thanks!

  24. StinkyKoala says:

    Brian, I don’t see how the specific points you make demonstrate that intuition isn’t unreliable.

    I have a friend, and often we agree to hang out. One out of every 10 times we make plans, he doesn’t show up, and when I call him it turns out he forgot completely, or remembered but didn’t feel like going.

    Despite the fact that this friend is reliable 9/10 of the time, overall he is very unreliable.

    If I have a car that won’t run 1 out of every 20 days I try to use it, it is highly unreliable, despite the fact that its successes vastly outnumber its failures.

    And since you correctly point out that we can’t compare the number of intuitive successes with intuitive failures — they are assuredly both infinite, and since we can only make countably many sentences in English, this suggests that they are not only both infinite but both of Aleph-Nought size — it seems that your argument states that intuition doesn’t always fail, but nothing more.

    However, I think the issue of the role of intuition in philosophy goes slightly deeper than whether or not it is reliable. For decades now, there has been a movement in philosophy to adopt concepts or arguments based (at least in part, if not in whole) on intuition. Why is X morally wrong? Let’s appeal to intuition. How can I argue for my stance of moral absolutism? Hey, intuition!

    Any scientist can tell you that this is a horrendous practice. Intuition is a very valuable, very human tool. It is a wonderful mechanism for generating conjectures. However, it is completely unreliable as a guide towards truth, and completely unreliable as a means of proof. That is to say, more precisely, that if our goal is to arrive at the truth of a situation, and to demonstrate that that is the truth as well as we can, intuition is nigh worthless. Imagine a mathematician saying he had proven the Riemann Hypothesis because it was intuitively obvious, and then trying to claim the $1 million prize. The reaction he would get should be the reaction of those who hear a philosopher claim that something — anything — is intuitively obvious.

    This isn’t to say that intuition is worthless, of course — just that it’s worthless as a means of argument. It’s very worthwhile as a guide to the conjectures we wish to argue. But to fail to make that distinction I think is intellectually dishonest, and only promotes poor reasoning skills.
