I’ve been trying to think through how various puzzles in formal epistemology look from certain externalist perspectives. This is a little harder than I think it should be, because there’s so little written on what formal externalist epistemology might look like. (And, I admit, I haven’t exactly done a great job of chasing down what has been written.) Tim Williamson has written quite a bit, though most of it doesn’t seem to have been absorbed into the formal mainstream.
So what I’m trying to figure out at the first stage is a very simple question. Bayesian epistemology is based around the idea of updating by conditionalising on evidence. So what should count as evidence, in the salient sense, for various kinds of externalists?
Williamson is, as always, pretty clear on his answer to this question. Evidence is knowledge, so what goes in the E spot of Pr(C|E) is what the agent knows. But not everyone, and not even every epistemological externalist, accepts Williamson’s identification of evidence and knowledge. What should we say about E, for instance, if we’re a process reliabilist?
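To fix ideas about the machinery in play, here is a minimal Python sketch of conditionalisation over a finite set of worlds, with made-up numbers loosely based on the ferry case:

```python
# Minimal sketch of Bayesian conditionalisation over a finite set of worlds.
# A world is a pair (is there a ferry?, does it look like there's a ferry?).
# The prior numbers are made up for illustration.

def conditionalise(prior, evidence):
    """Return Pr(. | evidence): discard worlds where evidence fails, renormalise."""
    kept = {w: p for w, p in prior.items() if evidence(w)}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

prior = {
    ("ferry", "looks-ferry"): 0.45,
    ("ferry", "no-look"): 0.05,
    ("no-ferry", "looks-ferry"): 0.05,
    ("no-ferry", "no-look"): 0.45,
}

# Conditionalise on E = "it looks like there's a ferry".
post = conditionalise(prior, lambda w: w[1] == "looks-ferry")
pr_ferry = sum(p for w, p in post.items() if w[0] == "ferry")
print(round(pr_ferry, 2))  # 0.9
```

The point is just that whatever goes in the E spot gets credence 1 after the update; the dispute is over what E should be.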
To make this concrete, assume that P is actually a reliable process, and it indicates that q is true. So P might be the process of looking out the window and seeing a ferry, and q is that there is a ferry there. Or P might be the process of looking up the weather forecast on my phone and seeing that the forecast high for tomorrow is 39, and q is that the high tomorrow will be (approximately) 39.
(There are hard questions about what happens when P is unreliable, but I’m setting those aside for now. I think the right thing to say is that unreliable processes yield no evidence. Or at least they yield nothing that should go in the E spot of Pr(C|E). But maybe that’s a deep mistake.)
What, then, should E be? I can see a few options.
1. The reliable process P indicates that q.
2. The process P indicates that q.
3. Some reliable process indicates that q.
4. Some process indicates that q.
The last option would bring the process reliabilist closest to Williamson’s picture. And in cases like seeing a ferry, it seems reasonable. But it hardly seems like the right thing to say about the weather forecast case. Even if the process in question is basically reliable, taking the outcome of the process to be something we can conditionalise on seems excessive. After all, the agent would not be justified in betting on q even at moderate odds, and may be justified in betting against q at long enough odds.
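The betting point can be made vivid with a toy expected-value calculation (all numbers made up for illustration):

```python
# Why treating the forecast content q as something to conditionalise on seems
# excessive: with credence 1 in q, a bet on q at *any* odds has non-negative
# expected value. All numbers are made up for illustration.

def expected_value(credence, win, lose):
    """Expected value of a bet that pays `win` if q, and costs `lose` if not-q."""
    return credence * win - (1 - credence) * lose

# After conditionalising on q, credence in q is 1, so even a terrible bet
# (win $1, lose $1000 if the forecast is off) looks acceptable:
print(expected_value(1.0, 1, 1000))  # 1.0

# With a more modest credence in a day-ahead forecast being right:
print(expected_value(0.8, 1, 1000))  # about -199.2
```

With credence 1 in q, no odds are too long, which seems like the wrong verdict for a day-ahead forecast.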
But none of the other options seems significantly better. If we build the reliability of the process into E, as in option 1, then we seem to give the subject too much knowledge. After all, the whole point of this kind of reliabilism is that I can gain justification by using a process that is reliable, even if I don’t know, and even if I’m not in a position to know, that the process is reliable. But if I’m allowed to conditionalise on option 1, I can quickly conclude that P is reliable. (I think this argument tells against both options 1 and 3, though I’m less sure about 3.)
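Here is the worry about option 1 in miniature, again with made-up priors: since option 1’s E entails that P is reliable, conditionalising on it leaves no credence at all in worlds where P is unreliable.

```python
# Option 1 in miniature: E = "the *reliable* process P indicates that q".
# Since that E entails P's reliability, conditionalising on it leaves zero
# credence in worlds where P is unreliable. Made-up priors below.

prior = {
    # (is P reliable?, does P indicate q?)
    ("reliable", "indicates"): 0.40,
    ("reliable", "silent"): 0.10,
    ("unreliable", "indicates"): 0.40,
    ("unreliable", "silent"): 0.10,
}

# Option 1's E is true only where P is both reliable and indicating q.
kept = {w: p for w, p in prior.items() if w == ("reliable", "indicates")}
total = sum(kept.values())
posterior = {w: p / total for w, p in kept.items()}

pr_reliable = sum(p for w, p in posterior.items() if w[0] == "reliable")
print(pr_reliable)  # 1.0
```

So the agent ends up certain of P’s reliability just by updating, which is exactly what the reliabilist wanted to avoid requiring.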
And option 4 is ruled out for the following reason. Assume I know that process P* is unreliable, and that it indicates that q. That is my only evidence for q, if indeed it is evidence at all. I then conclude that q using process P, which is reliable. Now I can justifiably believe q. So E must have changed. But on option 4, E has not changed.
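The structure of that argument can be put in a toy sketch (the process names are just labels):

```python
# Option 4 in miniature: E = "some process indicates that q" is true both before
# and after I run the reliable process P, so conditionalising on it cannot
# register the change in my epistemic position.

def option4_evidence(indicating_processes):
    # Option 4's E: "some process indicates that q"
    return len(indicating_processes) > 0

before = {"P*"}        # only the known-unreliable process has indicated q
after = {"P*", "P"}    # now the reliable process indicates q as well

print(option4_evidence(before), option4_evidence(after))  # True True
# Same E in both situations, hence the same posterior -- but my justification
# for believing q has clearly changed.
```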
Perhaps then the best thing to say is that option 2 is correct. I should conditionalise on the fact that process P indicates q. And when I do that, I should come to have a high credence in q. But does this mean that my priors should already reflect the fact that P is a justification-conferring process? This seems like a troubling version of the easy knowledge problem.
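A toy calculation shows how much work the priors are doing on option 2 (numbers made up): conditionalising on “P indicates q” raises credence in q only if the prior already treats P’s indications as tracking the truth.

```python
# Option 2: conditionalise on "process P indicates that q". Whether that yields
# high credence in q depends entirely on the prior linking P's indications to
# the truth -- which is where the easy knowledge worry bites. Made-up priors.

def pr_q_given_indication(prior):
    """Pr(q | P indicates q) for a prior over worlds (q true?, P indicates q?)."""
    kept = {w: p for w, p in prior.items() if w[1]}
    total = sum(kept.values())
    return sum(p / total for w, p in kept.items() if w[0])

# A prior that already treats P's indications as tracking the truth:
trusting = {
    (True, True): 0.45, (True, False): 0.05,
    (False, True): 0.05, (False, False): 0.45,
}
# A prior that is completely neutral about P's reliability:
neutral = {
    (True, True): 0.25, (True, False): 0.25,
    (False, True): 0.25, (False, False): 0.25,
}

print(round(pr_q_given_indication(trusting), 2))  # 0.9
print(round(pr_q_given_indication(neutral), 2))   # 0.5
```

The high posterior comes entirely from the trusting prior, not from anything the agent learned about P.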
None of the options seems particularly happy here. Am I missing something obvious?