Lackey on Testimonial Knowledge

Ishani was just talking with me about Jennifer Lackey’s prize-winning paper and we noticed something odd about one of the core examples. Here’s the example, with the discussion after the fold.

COMPULSIVELY TRUSTING: Bill is a compulsively trusting person with respect to the testimony of his neighbor, Jill, in whom he has an obsessive romantic interest. Not only does he always trust Jill when he has very good reason to believe her, he is incapable of distrusting her when he has very good reason to not believe her. For instance, even when he has available to him overwhelming evidence for believing that she is deliberately lying or being deceitful, Bill cannot come to believe this about Jill. Indeed, Bill is such that there is no amount of evidence that would convince him to not trust Jill. Yesterday, while taking his afternoon walk, Bill ran into Jill, and she told him that she had seen an orca whale while boating earlier that day. Bill, of course, readily accepted Jill's testimony. It turns out that Jill did in fact see an orca whale on the boat trip in question, that she is very reliable with respect to her epistemic practices, both in general and in this particular instance, and that Bill has no reason to doubt the proffered testimony. Given his compulsively trusting nature with respect to Jill, however, even if he had had massive amounts of evidence available to him indicating, for instance, that Jill did not see an orca whale, that she is an unreliable epistemic agent, that she is an unreliable testifier, that orca whales do not live in this part of the country, and so on, Bill would have just as readily accepted Jill's testimony.

Lackey says that Bill does not know that Jill saw an orca. Here’s her reason for saying that he does not.

To see this, notice that because of his compulsively trusting nature with respect to Jill's testimony, Bill is simply incapable of being sensitive to the presence of defeaters regarding her reports. In this respect, he is no better epistemically than a subject who has been brainwashed or programmed to accept any report that is made by Jill. For were Bill to be inundated with massive amounts of counterevidence, he would have accepted Jill's testimony just as readily as he did in the complete absence of such counterevidence. Indeed, Bill is such that he would have accepted Jill's testimony under any circumstances. Because of this, Bill's belief that there was an orca whale in the relevant body of water is evidentially insensitive in a way that is clearly incompatible with warrant, justification, and knowledge. Therefore, while Jill's belief possesses all of the epistemic properties in question, the belief that Bill forms on the basis of her testimony possesses none of them.

I think Bill does know that Jill saw an orca. I have a positive argument for this and a criticism of the principle Lackey uses to conclude Bill does not know.

Change the example in the following four respects, all of which should be irrelevant to the question of whether Bill gets knowledge from Jill’s testimony.

  • Bill is a company director of Acme Inc.
  • Jill didn’t tell Bill about orcas; rather, she told him that Bill’s sister had bought 10,000 shares of Acme Inc.
  • In Bill’s state there is a law saying that whenever a company director comes to know that a family member has bought shares in their company, they have to notify the SEC.
  • Despite Bill believing Jill, he does not notify the SEC.

Well all of this comes out in court, and it doesn’t look good for Bill. Until his lawyer, Professor Lackey, stands up and says that he can’t be guilty of violating the law because he was epistemically insensitive to (merely) possible defeaters and this is incompatible with his knowing that his sister bought 10,000 shares of Acme Inc. Actually, it still doesn’t look good, because the judge laughs this defence out of court and Bill goes to jail.

I think the kind of test I’m sketching here, the “Would you go to jail?” test, is quite generally valid. Real world laws do talk about obligations various people have in cases where they come to know certain kinds of things. So we don’t have to imagine far-flung possibilities to try and work out when agents would be vulnerable under those laws. And I think it’s pretty clear in Lackey’s case that Bill would indeed go to jail.

Three quick asides.

First, my gut feeling is that as long as Bill truly believes his sister bought shares in the company and doesn’t report that, he goes to jail. So this is a strongly anti-sceptical test. That doesn’t mean it isn’t correct, however.

Second, contextualism is of no help here. Presumably the courtroom is a high-stakes setting if anything is. (Bill could go to jail.) If Bill knows in the courtroom setting, he knows in everyday settings.

Third, I’d be more than interested to know what the case law is on standards for knowledge when we’re interpreting these kinds of laws. Is it just truth plus subjective certainty (as my bush-lawyer’s gut tells me) or something stronger? It would be cool if it varied between jurisdictions. There is a lot of case law on the connection between causation, counterfactual dependence, and responsibility, so lawyers are used to dealing with tricky cases of philosophically interesting terms, and they may have useful things to say here.

Now the negative comment. The example below shows that the test Lackey uses to show Bill doesn’t know is either much too strong or leads to violations of closure. Tests that are too strong are bad, and tests that violate closure are bad, so this is bad. First, let’s state Lackey’s test.

INSENSITIVE TO DEFEAT. If X would believe p even if X possessed a defeater for believing that p, then X does not know that p.

The argument against INSENSITIVE TO DEFEAT is the following case.

Brian is a (dashing, witty, handsome) young philosopher who has become thoroughly bored with debates about external world scepticism. He thinks it is a Moorean fact that we know the external world exists. He is so bored with this debate that even if there started to be evidence that he really was in a computer simulation (e.g. buildings suddenly going missing, gaps in his visual field etc) he would totally discount it. In short, he is insensitive to external world defeaters. Fortunately, he has no such defeaters. But in all other respects he is the model of a modern epistemic agent. He is sensitive to defeaters to his other beliefs, and is careful which particular propositions about the external world he believes. For instance, if he had a defeater for his belief that the Red Sox won last year’s World Series, he would (in a slightly panicked state) stop believing they did. But with no such defeaters he can hold onto his beliefs. Moreover, he only ever believes truths, so he is perfectly reliable.

If we accept INSENSITIVE TO DEFEAT, then Brian does not know that the external world exists. Now we face a dilemma, for one of the following two propositions is true.

  1. Brian knows that the Red Sox won, but does not know that the external world exists, so (known, single-premise) closure fails.
  2. Brian does not know any proposition that entails the existence of the external world, so his boredom with a (sometimes boring) philosophical debate has cost him all of his external world knowledge.

Neither of these strikes me as being at all plausible, so I conclude INSENSITIVE TO DEFEAT is false. And with its fall, the obstacle to claiming that Bill knows falls too.

Despite the length of this post it really is only about a small part of the paper. And I’m sympathetic to at least Lackey’s negative point, that the Belief View of Testimony is mistaken. But I didn’t think this example helped her case.

6 Replies to “Lackey on Testimonial Knowledge”

  1. I think you’re overlooking how extreme Lackey’s case is. As I read Lackey’s example, Bill is disposed to believe Jill even if what Jill says is: ‘You don’t exist’, or ’2 + 2 = 5’, or ‘What I am saying right now is false’, or the like. I put it to you that if this is true of Bill, Bill is more or less insane, in a way that to my mind certainly does prevent him from knowing what he believes on the basis of Jill’s testimony.

    I agree with you that the legal questions are very interesting. But I very much doubt that the legal standard of “knowledge” includes the beliefs of insane people that happen to be true. My guess is that to count as “knowing” something for legal purposes, one must believe the truth in a more or less ordinarily sane way, as a result of an at least ordinarily reliable belief-forming process.

    I also don’t think that your example shows that a contextualist account of ‘knowledge’ can’t help us understand why the legal standard of knowledge is set as it is. In legal contexts it’s not just the stakes for the defendant that are at issue. There are also the stakes for the whole community, if the law makes it too easy for people to escape criminal liability by claiming that their belief-forming habits are insensitive to defeating evidence, and thereby creates a positive incentive in favour of epistemic carelessness. The sort of contextualism that I would favour would allow stakes of this kind to help to determine how good Bill’s epistemic position has to be for it to be true to say in the context that he “knows” about his sister’s purchase of the 10,000 Acme shares.

    Now your example of Brian the brilliant young philosopher who is bored with external world scepticism is a very interesting case (probably more interesting than the much more extreme case that Lackey focuses on). My own view is that the right principle is not INSENSITIVE TO DEFEAT as you formulate it, but something just slightly different.

    Actually, I think that the right thing to say about cases of this kind hinges on what is the right solution to the generality problem (which in my view is a problem for all epistemologists whatsoever, not just for reliabilists). This is because, in my view, to be forming one’s beliefs in a rational manner, one must be adequately sensitive to defeaters to the method that one is using: this doesn’t require sensitivity to all possible defeaters — just a reasonable sensitivity to a large enough range of defeaters that one might easily come across (including any defeaters that one actually comes across of course).

    In the case of Brian, we might suppose that the method that Brian is using involves two steps: (1) from his sensory experience to beliefs about his environment, by the general method of “taking sensory experience at face value” or something like that; and (2) deducing from his beliefs about his environment that the external world exists, by some relatively general deductive method. As you’ve told the story, I think that Brian is still adequately sensitive to defeaters to both of these kinds of methods. (1) If he found out that he was in a perceptual psychology lab full of perceptual illusions, or that he had ingested a powerful hallucinogen, he wouldn’t form environmental beliefs on the basis of his sensory experiences with anything like the same level of confidence, etc. (2) For almost all inferences that fall under this deductive method, he would respond rationally to defeating evidence against the conclusion of the inference; the fact that he would fail to do so in the case of inferences where the conclusion is ‘The external world exists’ doesn’t prevent him from being adequately sensitive to defeaters for this method in general.

    In fact, however, I’m inclined to think that we will normally have to individuate methods a bit more finely, so that they are tied to the particular concepts or kinds of concepts that figure in the particular beliefs that are being formed. Then it becomes plausible to say that Brian is not forming beliefs in a rational way at the second stage (2) of his belief-forming method. If that is so, then I’ll happily deny that Brian actually knows that the external world exists.

    But I deny that we have a counterexample to any plausible closure principle. There are two main kinds of plausible closure principle. The first plausible sort of closure principle says that if you know p and q obviously follows from p then you are in a position to know q. But Brian is obviously still in a position to know that the external world exists, because he is in a position to make the inference in a more rational way: as a matter of fact, he has not encountered any defeaters to this inference.

    The second plausible sort of closure principle says that if you know p and you competently deduce q from p then you know q. But in this case, I deny that Brian has competently deduced q from p. For Brian’s deduction to have been really competent, he would have had to have been more sensitive to potential defeaters than he actually was.

    It would obviously not be a plausible form of closure to say that if you know p, and q follows from p and you believe q as a result of any belief-forming method whatsoever, you know q. You might believe q for some utterly idiotic reason, or infer it from p in some ridiculously fallacious manner. This surely doesn’t count as a way of coming to know q. I don’t know what other closure principle you had in mind, but I don’t see any problem with embracing the first horn of your dilemma.

  2. On point 2, I don’t think that your argument shows that contextualism won’t work—or at least not that sensitive invariantism won’t work.

    Sensitive theories will (or, I think, should) have it that the standard for knowledge that p varies with the costs of wrongly believing that p vs. the importance of correctly believing that p. In the courtroom it’s very important that the jury not wrongly believe that Bill engaged in insider trading—it would be very bad to send him to jail if he’s innocent. But that doesn’t show that the question “Did Bill know that his sister bought the shares?” has to be answered according to a high-stakes standard. For Bill, the costs of failing to correctly believe that his sister bought the shares are very high—he’s obliged to keep on top of that sort of thing—and the costs of falsely believing that are not very high.* So this is a case in which we should require low standards of evidence to say “Bill knows that his sister bought shares.” As we do—we accept that whenever he has a true belief.

    *I assume he could independently check this, or something. If Bill goes to jail if he inaccurately tells the SEC that his sister bought shares, and if he has no way of checking whether his sister bought shares other than relying on unsubstantiated tips, then he’s really over a barrel regardless of his trust in Jill.

  3. Thinking it over, I retract my agreement with this:

    as long as Bill truly believes his sister bought shares in the company and doesn’t report that

    In the example I’m about to present, Jill doesn’t exist—Bill’s only reason for belief is as presented.

    Bill’s diary is produced in court. It reads, “My sister has been acting shifty lately. Good lord, perhaps she bought shares of the stock without telling me.” And on and on, as Bill becomes gradually convinced that his sister has bought shares, with no concrete reason to believe that this is her reason for acting shifty. The prosecutor says, “Your Honor, Bill knew that his sister had bought shares, and he failed to report it.”

    Counsel for the defense: “Your honor, the evidence shows that Bill had no good reason at all to think that his sister had bought shares. He thought so, but this was a paranoid delusion. It was only coincidence that she actually had bought them. Are we to require CEOs to report every fleeting suspicion on the off-chance that it’s true?”

    It’s not clear to me that Bill goes to jail in this case. (This is connected with what Ralph Wedgwood says in the comments to the next post, that crazy true beliefs shouldn’t count as knowledge.)

  4. Er, what I retract my agreement with is this:
    as long as Bill truly believes his sister bought shares in the company and doesn’t report that, he goes to jail


  5. I guess I overlooked some kinds of contextualism that allow for standards to be community-sensitive in just this way. It doesn’t seem to mesh well with either Stew’s or Keith’s form of contextualism, neither of which says much about how specific facts of the situation can lower the stakes. (In a bank case, can I come to know the bank is open on Saturday because it’s hard to get a parking spot on Friday evening, or because I promised a friend I’d meet them for dinner in a few minutes? Perhaps, but that’s not the spirit of Keith’s example I think.)

    On the closure point, I was imagining Brian has noticed, among other things, that his belief in the external world can be supported by his other beliefs and deduction. He deliberately tries to make it hard for people to solve the generality problem by not picking one method to support his belief in the external world. (Deep down he thinks it is a priori, but without a proof of that he’s happy for deductive support.) And I’ll add to the case that he is sensitive to defeaters for deduction – he keeps up to date on arguments about whether double negation elimination, or antecedent strengthening, are admissible inference rules. It’s just in the merely possible situation where he has a defeater for his belief in the external world he ignores it. Note that this would also be a defeater for his belief that the Red Sox won, though it isn’t the nearest defeater, so it doesn’t fall afoul of Lackey’s principle.

    Which is all to say, I don’t see how Brian’s insensitivity to potential defeaters shows he isn’t competently deducing. Make him as good a logician/theorist of modality as you like, he’ll still be confident that the existence of the external world is entailed by particular facts about, e.g. the Red Sox.

    On Matt’s point, this came up on the other thread too, but we have to be very careful about when issues of competency would start to be salient around here. If Bill is as delusional as Matt describes, he won’t be competent to stand trial, or to be criminally liable. I’m inclined to think that’s the only exception to knowledge = true belief, and it isn’t really an exception because the case couldn’t arise: the question of what Bill knew would never come up if Bill were insane.

    There is a more general issue (also from the other thread) on whether there needs to be a causal connection between the share buying and Bill’s belief for it to count as knowledge. When there is no causal connection it is a little easier to imagine Bill getting away with a philosophical defence. Not a lot easier, but a little.

  6. Perhaps I overplayed my hand, but I didn’t intend Bill to be delusional. Bill just formed one belief in a way that was demonstrably irrational—though it happened to be true this time. Perhaps it could be reframed like this—Bill reads in his horoscope, “Be careful about business dealings with someone near you.” He decides that this can only mean that his sister bought shares—which in fact she has, but he has no way of knowing that. (All this is revealed in his diary, introduced into evidence.)

    It’s not clear to me that Bill gets off, but it’s certainly not clear to me that he goes to jail. (It had better not be clear to me, since I’m not a lawyer.) But I don’t think this makes Bill incompetent to stand trial. That would empty the jails right quick.
