Externalism and Updating Credences

I’ve been trying to think through how various puzzles in formal epistemology look from certain externalist perspectives. This is a little harder than I think it should be, because there’s so little written on what formal externalist epistemology might look like. (And I haven’t exactly done a great job of chasing down what has been written, I guess.) Tim Williamson has quite a bit, though most of it doesn’t seem to have been absorbed into the formal mainstream.

So what I’m trying to figure out at the first stage is a very simple question. Bayesian epistemology is based around the idea of updating by conditionalising on evidence. So what should count as evidence, in the salient sense, for various kinds of externalists?

Williamson is, as always, pretty clear on his answer to this question. Evidence is knowledge, so what goes in the E spot of Pr(C|E) is what the agent knows. But not everyone, and not even every epistemological externalist, accepts Williamson’s identification of evidence and knowledge. What should we say about E, for instance, if we’re a process reliabilist?

To make this concrete, assume that P is actually a reliable process, and it indicates that q is true. So P might be the process of looking out the window and seeing a ferry, and q is that there is a ferry there. Or P might be the process of looking up the weather forecast on my phone and seeing that the forecast high for tomorrow is 39, and q is that tomorrow’s high will be (approximately) 39.

(There are hard questions about what happens when P is unreliable, but I’m setting those aside for now. I think the right thing to say is that unreliable processes yield no evidence. Or at least they yield nothing that should go in the E spot of Pr(C|E). But maybe that’s a deep mistake.)

What then should be E? I can see a few options.

  1. The reliable process P indicates that q.
  2. The process P indicates that q.
  3. Some reliable process indicates that q.
  4. Some process indicates that q.
  5. q

The last option would bring the process reliabilist closest to Williamson’s picture. And in cases like seeing a ferry, it seems reasonable. But it hardly seems like the right thing to say about the weather forecast case. Even if the process in question is basically reliable, taking the outcome of the process to be something we can conditionalise on seems excessive. After all, an agent would not be justified in betting on q even at moderate odds, and may be justified in betting against q at long enough odds.
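To see the betting point in numbers, here is a minimal sketch (my own illustration, with made-up stakes and credences, not anything from the post): if conditionalising on q itself drives the credence in q to 1, then betting on q looks good at arbitrarily long odds, whereas any realistic credence that tomorrow's high is exactly as forecast makes the same bet a bad one.

```python
def expected_value(credence_in_q, win, lose):
    """Expected value of a bet that pays `win` if q is true and costs `lose` if not."""
    return credence_in_q * win - (1 - credence_in_q) * lose

# If conditionalising on q pushes the credence to 1, every bet on q looks
# acceptable, even winning 1 against a possible loss of 100:
certain = expected_value(1.0, 1, 100)    # positive

# With a more plausible credence that the high is exactly 39, the same
# long-odds bet is clearly bad:
realistic = expected_value(0.3, 1, 100)  # negative
```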

But none of the other options seem significantly better. If we build into E the claim that the process is reliable, as in 1, then we seem to credit the subject with too much knowledge. After all, the whole point of this kind of reliabilism is that I can gain justification by using a process that is reliable, even if I don’t know, and even if I’m not in a position to know, that the process is reliable. But if I’m allowed to conditionalise on 1, I can quickly conclude that P is reliable. (I think this argument tells against both options 1 and 3, though I’m less sure about 3.)

And option 4 is ruled out for the following reason. Assume I know that process P* is unreliable, and that it indicates that q. That is my only evidence for q, if indeed it is evidence at all. I then conclude that q using process P, which is reliable. Now I can justifiably believe q. So E must have changed. But on option 4, E has not changed.

Perhaps then the best thing to say is that option 2 is correct. I should conditionalise on the fact that process P indicates q. And when I do that, I should come to have a high credence in q. Does this mean that my priors should reflect the fact that P is a justification-attributing process? This seems like a troubling version of the easy knowledge problem.

Maybe I’m missing something, but none of the options feel particularly happy here. Am I missing something obvious?

6 Replies to “Externalism and Updating Credences”

  1. Why not let E be, in your example, its looking to you as if there’s a ferry, and let the externalism come in by way of insisting that you are justified in believing p on the basis of e iff the objective conditional probability of p given e is high enough?

  2. Because I’m worried about the following implication.

    Let E = looks like ferry, and C = ferry. Since the objective probability of C given E is high, I’m justified in having a high credence in C now that I have evidence E. That is, after conditionalising on E, Cr(C) is high. The only way that happens is if Cr(C | E) was high before I got the evidence.

    But what could justify that conditional credence? It does correspond to a fact, namely the fact that E is reliably connected to C. But that’s not a fact that I have much access to. It isn’t even a fact that I deduced by a reliable process.

    Moreover, we can prove generally that Pr(A -> B) is at least as high as Pr(B | A). (I’m using -> here for material implication.) So if Cr(C | E) is high, and Cr is a probability function, then Cr(E -> C) is high. But that may well not be justified, because I didn’t infer that conditional by a reliable process. That is, I didn’t infer that conditional by a reliable process before getting E. After I got E, I could infer C by a reliable process, and then infer E -> C by another reliable process, namely deduction; but before I got E I couldn’t use that method.
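The inequality the comment appeals to can be checked numerically. The sketch below (my own illustration) samples random joint distributions over A and B and confirms that Pr(A -> B), read as the probability of (not-A or B), is never below Pr(B | A):

```python
import random

def check_material_implication_bound(trials=10_000, seed=0):
    """Verify Pr(A -> B) >= Pr(B | A) on random four-world distributions."""
    rng = random.Random(seed)
    for _ in range(trials):
        # Weights for the four joint outcomes: (A,B), (A,~B), (~A,B), (~A,~B).
        w = [rng.random() for _ in range(4)]
        total = sum(w)
        p_ab, p_anb, p_nab, p_nanb = (x / total for x in w)
        p_a = p_ab + p_anb
        if p_a == 0:
            continue  # Pr(B | A) is undefined
        p_b_given_a = p_ab / p_a
        # Material implication is false only in the (A, ~B) world.
        p_a_implies_b = 1 - p_anb
        assert p_a_implies_b >= p_b_given_a - 1e-12
    return True
```

Algebraically, the gap works out to (1 - Pr(A)) * (Pr(A) - Pr(A & B)) / Pr(A), which is never negative.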

    Having said all of that, I think this is still probably the best way to mix reliabilism and Bayesianism, but it does have some odd consequences.

  3. Perhaps the externalist can say this:

    Option 5 is appropriate, if P is perfectly reliable. If P is less than perfect, you don’t have any categorical evidence. You ought to change your credences in q (and any alternatives) to the objective probability given the outcome of the process. Then use Jeffrey Conditionalization.
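For concreteness, here is a minimal sketch of the Jeffrey update this comment describes, on the partition {q, not-q}. (The joint prior and the 0.9 reliability figure are made-up numbers for illustration.)

```python
def jeffrey_update(joint, new_cr_q):
    """Jeffrey conditionalization on the partition {q, not-q}.

    joint: prior credences over (C, q) truth-value pairs, summing to 1.
    new_cr_q: the new credence in q, e.g. the objective probability of q
              given the outcome of the process.
    Returns the new credence in C.
    """
    cr_q = joint[(True, True)] + joint[(False, True)]
    cr_c_given_q = joint[(True, True)] / cr_q
    cr_c_given_not_q = joint[(True, False)] / (1 - cr_q)
    # Conditional credences are held fixed; only the weights on the
    # partition cells change.
    return cr_c_given_q * new_cr_q + cr_c_given_not_q * (1 - new_cr_q)

# Prior on which C and q are correlated; the process is 90% reliable.
prior = {(True, True): 0.4, (True, False): 0.1,
         (False, True): 0.1, (False, False): 0.4}
new_cr_c = jeffrey_update(prior, 0.9)  # 0.8 * 0.9 + 0.2 * 0.1 = 0.74
```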

  4. That would be one option. I still think it is a little odd to have future facts as inputs, but it does avoid my worry about crazy betting practices. (Of course, Jeffrey conditionalisation has its own problems, e.g. its order-dependence, but that might be for another day.)

  5. On Williamson’s account however, isn’t Pr some sort of “evidential probability” function, rather than the agent’s degrees of belief, or degrees of outright belief, or rational degree of belief, or anything else? The main reason I have for thinking this is that it really looks like there are plenty of cases where I know that E, even though my degree of belief in E is (rationally) lower than 1. As long as I lend any degree of belief at all to some skeptical scenario, or to the possibility of some error in reasoning (that I didn’t actually make), I oughtn’t have degree of belief 1 in a proposition, even if I know it.

    So it’s not exactly clear what role Pr(C|E) plays in updating credences on Williamson’s account. For everyone else, I think Pr is supposed to be the agent’s degrees of belief, so Pr(C|?) is the right function for updating, but the question is what goes into the ? spot.

  6. I agree with Kenny here: I reckon the problems for various of the update proposals when you interpret Pr as credence would extend to Williamson’s account. So it’s important for him that we don’t interpret Pr in that way.

    (By the way, I’m a rank amateur in this stuff, so apologies if anything that follows is obvious or wrongheaded.)

    Another difference from a standard Bayesian setting, IIRC, is that Williamson likes a setup where you have a hypothetical prior Pr*, such that the probability of Q at t is given by Pr*(Q|E), where E is the total knowledge you have at t. He doesn’t have what I guess is the more usual setup, where Pr(t) is the result of conditionalizing the previous moment’s Pr on whatever new evidence has become available.

    Of course, there’s just a general and familiar issue motivating this: trouble with lost evidence (e.g. in Williamson’s setting, perhaps you lose knowledge you previously had due to new misleading evidence).
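Here is a toy model (my own, with an arbitrary four-world space) of why the hypothetical-prior setup handles lost evidence where moment-to-moment conditionalization can't: once E drops out of total knowledge, conditioning the fixed prior Pr* on the remaining knowledge simply reverts the credence, whereas sequential conditionalization can only ever narrow the live worlds.

```python
from fractions import Fraction

# Four equiprobable worlds; propositions are sets of worlds.
WORLDS = frozenset({1, 2, 3, 4})
E = frozenset({1, 2})   # an evidence proposition
Q = frozenset({1})      # the hypothesis

def pr_star(prop, given=WORLDS):
    """The hypothetical prior Pr*, conditioned on a body of total knowledge."""
    return Fraction(len(prop & given), len(given))

# Williamson-style updating: credence at t is Pr*(Q | total knowledge at t).
before = pr_star(Q)           # E not yet known
while_known = pr_star(Q, E)   # E is part of total knowledge
after_loss = pr_star(Q)       # E has dropped out: back to the prior value

# A sequential conditionalizer who had conditionalized on E has no
# analogous way to "un-learn" it.
```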

    I’m tempted to think that this is particularly nasty for externalists. If you had a super-strong, Cartesian kind of internalism about evidence, maybe just biting the bullet over moment-to-moment conditionalization would be easier (since evidence really would carry with it some kind of rational certainty).
