Evidence and Knowledge

I’ve been thinking again about Timothy Williamson’s idea that our evidence is what we know, or as he puts it, E=K. This never struck me as particularly persuasive. At first I thought it was wrong in every possible way. E=K implies that evidence is always propositional, that it is always true, and that it is usually massively redundant. The last point refers to the fact that when I know p I usually also know lots of things that imply p, such as ((p or q) and ~q) for any q whose negation I know, as well as many things that inductively support p. All these things will count as evidence by Williamson’s lights, while I think evidence is much sparser. I’ve mellowed a bit in my old age – I now think that the first of these claims is right: evidence is propositional. But I’m sceptical (at best) that evidence has to be true, and rather confident that evidence (at least in the most natural sense) is not so redundant.
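To see the redundancy worry concretely: whenever I know p and also know ~q, a couple of standard closure steps deliver a further known proposition that entails p. Here is the derivation, set out just for illustration:

```latex
\begin{align*}
1.\;& p && \text{known} \\
2.\;& \neg q && \text{known} \\
3.\;& p \lor q && \text{from 1, by } \lor\text{-introduction} \\
4.\;& (p \lor q) \land \neg q && \text{from 2 and 3, by } \land\text{-introduction} \\
5.\;& p && \text{from 4, by disjunctive syllogism}
\end{align*}
```

If knowledge is closed under such steps, line 4 is known whenever lines 1 and 2 are, and on E=K it thereby counts as evidence for p.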

Here is what persuaded me that evidence should be propositional. Evidence is the input to thought. Thought is content manipulation. Hence evidence must be contentful, else it couldn’t be manipulated by a content-manipulating machine. But to say it is contentful just is to say it is propositional. I’m not sure how close this is to the arguments Williamson provides. It is similar in spirit to the arguments from the role of evidence in inference to the best explanation, but very different from the syntactically motivated arguments concerning ‘because’ clauses.

Here’s what I think evidence is, at least for creatures like us. (I think E=K is intended to be a conceptual and/or analytic truth. What I say about evidence is contingently true if I’m lucky.) It’s the output of our reliable modules. By ‘module’ here I mean just what Fodor means by ‘module’ – it’s an informationally encapsulated processor that has a proprietary input (which could in principle be anything but is usually tightly constrained for particular modules) and delivers propositions as output. Ideally I’d like to think modules are neurologically local – I’d be a touch suspicious of any claim that there is a module whose processing work was distributed over most of the brain, some of the spinal cord and the sensory receptors in my left leg. I think locality is an important part of ‘module realism’, but that’s a story for another day.

There’s only one other thing about Fodor’s conception of modules that will be important here. Fodor, unlike some contemporary modularity theorists, doesn’t think that there is a ‘thought module’. What modules do is deliver propositions to a central processor, but that processor is not informationally encapsulated and hence is not a module. (I haven’t gone back to check the details, but my impression was that Fodor thought information only got processed once before hitting the central processor. Whether that’s true or not isn’t, I think, central to the picture I’m going to sketch, though it would make the picture cleaner if it were true.)
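To fix ideas, here is a toy sketch of that architecture in code. Everything in it – the class names, the representation of propositions, indeed the whole idea of coding the picture up – is my illustrative gloss, not anything Fodor is committed to.

```python
# A toy sketch of the Fodorian picture: encapsulated modules deliver
# propositions to a central processor that is not itself a module.
# All names here are illustrative.

class Module:
    """An informationally encapsulated processor. It consults only its
    proprietary input, never the rest of the agent's information."""

    def __init__(self, name, process):
        self.name = name
        self._process = process  # maps proprietary input to a proposition

    def output(self, proprietary_input):
        # Encapsulation: nothing beyond proprietary_input is consulted.
        return self._process(proprietary_input)


class CentralProcessor:
    """Not a module: it is not informationally encapsulated, and it can
    draw on everything the modules deliver (plus background belief)."""

    def __init__(self):
        self.delivered = []  # propositions handed up by the modules

    def receive(self, proposition):
        self.delivered.append(proposition)
```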

A proposition is part of our evidence iff it is an output of a reliable module. The notion of reliability here is purely frequentist – a module is reliable iff it usually outputs truths. This theory allows that we have mistaken evidence. I could change that by saying that evidence is the true output of reliable modules, but I’m inclined to think that would be a mistake.
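Continuing the toy sketch, the frequentist reliability condition and the resulting evidence set might look like this. The 0.5 threshold is my stand-in for ‘usually’; nothing turns on the exact number.

```python
def is_reliable(track_record, threshold=0.5):
    """Frequentist reliability: a module is reliable iff it usually
    outputs truths. track_record is a list of booleans recording
    whether each past output was true."""
    return sum(track_record) / len(track_record) > threshold


def evidence(modules, track_records, current_outputs):
    """A proposition is evidence iff it is the output of a reliable
    module. Note there is no truth filter: a reliable module's
    occasional false output still counts as evidence."""
    return [current_outputs[module.name]
            for module in modules
            if is_reliable(track_records[module.name])]
```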

The reason for that is a reason that Williamson discusses (on pages 198–200), though not, I think, satisfactorily. Consider a regular case of illusion, such as the Checkershadow illusion. In the Checkershadow, my intuition is that it is part of my evidence that square A is darker than square B. (This sentence is ambiguous. If we take the picture to represent a real situation, then it’s true that square A is represented as being darker than square B. What I mean is that my evidence includes the false claim that the representation of square A is darker than the representation of square B.) If evidence is always true, this cannot be the case.

Williamson’s way of saving the intuition here is to say that our evidence is really that square A looks darker than square B. There are three problems with that.

One, which Williamson acknowledges, is that it implies that a primitive creature that lacks the concept LOOKS doesn’t get any evidence at all to the effect that square A is darker than square B. It cannot get the evidence that A is darker, because there is no such evidence, nor the evidence that square A looks darker, because it cannot process LOOKS. But surely it gets something like evidence to this effect.

A second problem is that this gets the phenomenology all wrong. We don’t get some evidence about visual appearance and then do something with it, such as using it to conclude that A is darker than B. Rather, our evidence just is that A is darker than B.

A third problem, one that connects to the other odd feature of Williamson’s account of evidence, is that it seems ad hoc to say that in the Checkershadow illusion our evidence is just about looks, but in a normal case, like looking at a book and coming to know there’s a book there, our evidence is about the external world. One way out of this would be to say that our evidence is always about looks. This potentially leads to scepticism, or at least to blocking the way out of scepticism that externalists about perception (like Williamson) want to endorse. Williamson doesn’t say the ad hoc thing, but nor does he fall back into scepticism. Rather, he thinks that in both cases we get evidence about looks, but in the good case we also get evidence that there is a book in front of us. This looks like too much evidence. If our evidence is that it looks like there is a book in front of us, then presumably the belief that there is a book in front of us is one of our conclusions, not part of our evidence.

(Is the dichotomy here justified? Some might think that in chain arguments a proposition can be first a conclusion and later a premise, so a belief might be both a conclusion and a piece of evidence. I think matters are tricky here. On the one hand, we do talk as if intermediate conclusions are part of our evidence for later inferences. On the other, it’s possible to construe all references to intermediate conclusions in arguments as shorthand for references to the real grounds on which they rest. I’m not sure that reflection at this level of generality can tell us much, so I’d rather move on to particular examples. Just how many roles a proposition can play in inference – whether it can be both something we conclude and part of our evidence, for example – is a hard question, one that in my view probably can’t be answered by purely a priori reflection.)

This connects to the concern about evidence being massively redundant on Williamson’s view. Williamson uses the following example to argue against the view that all justified true beliefs are part of our evidence.

Suppose that balls are drawn from a bag, with replacement. In order to avoid issues about the present truth-values of statements about the future, assume that someone else has already made the draws; I watch them on film. For a suitable number n, the following situation can arise. I have seen draws 1 to n; each was red (produced a red ball). I have not yet seen draw n+1. I reason probabilistically, and form a justified belief that draw n+1 was red too. My belief is in fact true. _But I do not know that draw n+1 was red._ Consider two false hypotheses:

h: Draws 1 to n were red; draw n+1 was black.
h′: Draw 1 was black; draws 2 to n+1 were red.

It is natural to say that h is consistent with my evidence and that h′ is not. (pp. 200–1, my emphasis, notation slightly altered.)

This is indeed natural, but look how much work is done by the line I have highlighted. If Williamson conceded that he knew that draw n+1 was red, then he couldn’t say the ‘natural’ thing he says at the end. But surely in some very similar cases we do know the equivalent proposition. Take any case where, on the basis of evidence p (where p is something that all parties would consider evidence), we inductively infer q and thereby come to know q. Unless inductive scepticism is true, such a situation is surely possible. But in such a situation it would be very strange to say that ~q is inconsistent with my evidence.
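As an aside, Williamson doesn’t spell out the probabilistic reasoning in the quoted example. One standard way to model it – my assumption, not his – is Laplace’s rule of succession, with a uniform prior on the unknown chance r of drawing red and the draws independent given r:

```latex
\[
P(\text{draw } n{+}1 \text{ red} \mid \text{draws } 1,\dots,n \text{ red})
  = \frac{\int_0^1 r^{\,n+1}\,dr}{\int_0^1 r^{\,n}\,dr}
  = \frac{n+1}{n+2}
\]
```

For large n this is close to 1, which is why the belief is justified; what is at issue is whether it amounts to knowledge.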

For a concrete case, consider my favourite piece of inductive reasoning: inferring how a movie will end at its Friday night screening from how it ended at its Thursday night screening. (It’s my favourite inductive inference because it’s such a nice case of inductive reasoning from a single data point. And I do so wish inductive reasoning from a single data point were more often epistemically sound.) I watch the Thursday night showing of the movie, and see that the hero dies in the final scene. I conclude, fallibly but pretty reliably, that when my friend watches the Friday night showing of the movie, the hero will die in the final scene. This is fallible, since some movies have multiple endings. (For example, 28 Days Later.) But it’s very reasonable, and I think it constitutes knowledge.

Just as we get a disanalogy between h and h′ in Williamson’s example, we get a similar disanalogy here.

h: The hero died in the final scene on Thursday night but not Friday night.
h′: The hero died in the final scene on Friday night but not Thursday night.

I know that both _h_ and _h′_ are false, but it’s very natural to say that _h_ is consistent with my evidence while _h′_ is not. I conclude that Williamson is (in one sense) too generous in what he counts as evidence.
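The disanalogy can be put in a few lines of purely illustrative code. If my evidence is just what I saw on Thursday, h is consistent with it and h′ is not; if, as E=K has it, my inductive knowledge of the Friday ending also counts as evidence, then h comes out inconsistent too, which is the verdict I find unnatural.

```python
# Toy consistency check: a hypothesis is consistent with a body of
# evidence iff it contradicts no item of evidence. Names are made up.

def consistent(hypothesis, evidence):
    return all(hypothesis.get(claim, value) == value
               for claim, value in evidence.items())

h       = {"hero_dies_thursday": True,  "hero_dies_friday": False}
h_prime = {"hero_dies_thursday": False, "hero_dies_friday": True}

# The sparse view: my evidence is only what I observed on Thursday.
sparse = {"hero_dies_thursday": True}
print(consistent(h, sparse))        # True: the natural verdict
print(consistent(h_prime, sparse))  # False: contradicts what I saw

# E=K: my inductive knowledge of the Friday ending is evidence too,
# so h is inconsistent with my evidence as well.
e_equals_k = {"hero_dies_thursday": True, "hero_dies_friday": True}
print(consistent(h, e_equals_k))    # False: the unnatural verdict
```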

(Rest of post eaten by WordPress. See this paper for more details.)