First and Second Order Epistemic Probability

Here is an interesting way of making explicit some of the tensions within an externalist account of evidence. I’m drawing here on some points Roger White made at the Arché Scepticism Conference, though I’m not sure Roger is perfectly happy with this way of putting things.

In what follows I’ll use ‘Pr’ for the evidential probability function (and hence assume that one exists), E(a) for a’s evidence, Cr(p, a) for a’s credence in p, and Exp(X, a) for the expected value of the random variable X according to Pr conditioned on E(a). Then the following three statements are inconsistent.

  1. If a is perfectly rational, then Cr(p, a) = Pr(p | E(a)).
  2. If a is perfectly rational, then Cr(p, a) = Exp(Pr(p | E(a)), a).
  3. It is possible for a to be perfectly rational, and for Pr(p | E(a)) not to equal Exp(Pr(p | E(a)), a).

The intuition behind 1 is that for a rational agent, credence is responsive to the evidence.

The intuition behind 2 is that a rational agent’s credences match what they think their credences ought to be. If 2 fails, then rational agents will find themselves taking bets that they (rationally!) judge they should not take, as in the sketch below. Roger’s paper at the conference did a really good job of bringing out how odd this option is.
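
To make the oddity concrete, here is a minimal sketch in Python with made-up numbers; the values 1 and 5/6 are purely illustrative (a toy model that generates them appears below), and nothing here is from Roger’s paper.

```python
from fractions import Fraction

# Hypothetical numbers, purely for illustration: suppose Pr(p | E(a)) = 1
# while the agent's best estimate, Exp(Pr(p | E(a)), a), is only 5/6.
pr_p_given_E = Fraction(1)
exp_pr_p = Fraction(5, 6)

# A bet that costs 9/10 and pays 1 if p, 0 otherwise.
stake = Fraction(9, 10)

# By 1, Cr(p, a) = Pr(p | E(a)) = 1, so the agent takes the bet...
print(pr_p_given_E - stake)  # 1/10: a gain by the lights of the evidence

# ...while judging, by their best estimate of the evidential
# probability, that the very same bet is a loser.
print(exp_pr_p - stake)      # -1/15: a loss by the agent's own estimate
```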

The intuition behind 3 is that not all perfectly rational agents know what their evidence is. So suppose p is part of a’s evidence, but a does not know that p is part of their evidence. Then Pr(p | E(a)) will be 1, since E(a) entails p. But Exp(Pr(p | E(a)), a) will be less than 1, since a gives positive weight to possibilities in which p is not part of the evidence. I believe Williamson has some more dramatic violations of the identity in intuitive models, but all we need is one violation to get the example going. A toy model follows.
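
Here is a minimal sketch of such a model, loosely in the style of Williamson’s unluminous-evidence cases; the three worlds, the evidence sets, and the choice of p are my own illustrative assumptions, not anything from Williamson or from the conference.

```python
from fractions import Fraction

# Three worlds with a uniform evidential prior. E[w] is the agent's
# evidence in world w, as a set of worlds. Evidence is factive (w is
# always in E[w]) but not luminous: in world 1 the evidence rules out
# world 3, yet the agent cannot tell whether they are in world 1 or 2.
worlds = [1, 2, 3]
prior = {w: Fraction(1, 3) for w in worlds}
E = {1: {1, 2}, 2: {1, 2, 3}, 3: {2, 3}}

p = {1, 2}  # the proposition p, as a set of worlds

def pr(prop, given):
    """Pr(prop | given) under the uniform prior."""
    return sum(prior[w] for w in prop & given) / sum(prior[w] for w in given)

actual = 1  # suppose the actual world is world 1, where the evidence entails p

# First order: p is part of the evidence at world 1, so Pr(p | E(a)) = 1.
first_order = pr(p, E[actual])

# Second order: average Pr(p | E(w)) over worlds w, weighted by
# Pr(w | E(actual)), i.e. Exp(Pr(p | E(a)), a) as defined above.
second_order = sum(pr({w}, E[actual]) * pr(p, E[w]) for w in E[actual])

print(first_order)   # 1
print(second_order)  # 5/6
```

At world 1 the evidence entails p, so Pr(p | E(a)) = 1; but conditional on that very evidence there is a half chance of being at world 2, where the evidence leaves p open, which drags the expectation down to 5/6. These are the numbers used in the betting sketch above.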

The three are inconsistent because 1 and 2 jointly entail that Pr(p | E(a)) = Exp(Pr(p | E(a)), a) whenever a is perfectly rational, which is just what 3 denies. So we have an interesting paradox on our hands. I think, despite its plausibility, that the thing to give up is 2. Credences should be responsive to evidence. If you don’t know what your evidence is, you can’t know that you’re being responsive to evidence, i.e. that you’re being rational. And it might be that all of the possible errors lie on one side of the correct position, as in the model above, where the evidential probability is 1 and so any misestimate can only fall below it. In that case, your best estimate of what you should do will diverge from what you should do. So anyone who thinks evidence isn’t always luminous will think we face oddities like those used to motivate 2, and I think we have to learn to live with its failures.

Everyone at the conference seemed to assume that Williamson would agree, and say that 2 is what should be given up. I’m not actually sure, as a matter of Williamson interpretation, that that’s correct. Williamson denies that we can interpret evidential probabilities in terms of the credences of a hypothetically rational agent. It might be that he would give up both 1 and 2, and deny that there is any simple relationship between rational credence and evidential probability. Or he might accept that Exp(Pr(p | E(a)), a) is a better guide to rational credence than Pr(p | E(a)).

Whichever way we look at it, though, I think this is an interesting little paradox, and one of several reasons I liked the conference at the weekend was that I realised it existed.