# First and Second Order Epistemic Probability

Here is an interesting way of making explicit some of the tensions within an externalist account of evidence. I’m drawing here on some points Roger White made at the Arché Scepticism Conference, though I’m not sure Roger is perfectly happy with this way of putting things.

In what follows I’ll use ‘Pr’ for the evidential probability function (and hence assume that one exists), ‘E(a)’ for a’s evidence, ‘Cr(p, a)’ for a’s credence in p, and ‘Exp(X, a)’ for the expected value of random variable X according to Pr conditioned on E(a). Then the following three statements are inconsistent. [There used to be a typo in the previous sentence, which Clayton noted in comments.]

1. If a is perfectly rational, then Cr(p, a) = Pr(p | E(a)).
2. If a is perfectly rational, then Cr(p, a) = Exp(Pr(p | E(a)), a).
3. It is possible for a to be perfectly rational, and for Pr(p | E(a)) to not equal Exp(Pr(p | E(a)), a).

The intuition behind 1 is that for a rational agent, credence is responsive to the evidence.

The intuition behind 2 is that for a rational agent, their credences match up with what they think their credences ought to be. If 2 fails, then rational agents will find themselves taking bets that they (rationally!) judge that they should not take. Roger’s paper at the conference did a really good job of bringing out how odd this option is.

The intuition behind 3 is that not all perfectly rational agents know what their evidence is. So if p is part of a’s evidence, but a does not know that p is part of their evidence, then Pr(p | E(a)) will be 1, although Exp(Pr(p | E(a)), a) will be less than 1. I believe Williamson has some more dramatic cases in intuitive models where these two values come apart, but one violation is all we need to get the paradox going.
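To make the violation concrete, here is a minimal numerical sketch. The two-world model and the 0.7/0.8 figures are illustrative assumptions of mine, not from the post:

```python
# Toy two-world model. In w_good, p is part of a's evidence, so
# Pr(p | E(a)) = 1. In w_bad it isn't; suppose Pr(p | E(a)) = 0.8 there.
# The agent can't tell which world she's in, and her evidence gives
# credence 0.7 to w_good.

pr_p_given_E = {"w_good": 1.0, "w_bad": 0.8}  # first-order evidential probability
cr_world = {"w_good": 0.7, "w_bad": 0.3}      # credence over the two worlds

# Exp(Pr(p | E(a)), a): the agent's expectation of her own evidential probability
exp_pr = sum(cr_world[w] * pr_p_given_E[w] for w in cr_world)

# If the agent is actually in w_good, then Pr(p | E(a)) = 1 while
# Exp(Pr(p | E(a)), a) is below 1 -- exactly the possibility claim 3 asserts.
print(pr_p_given_E["w_good"], exp_pr)
```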

Given that the three are inconsistent, we have an interesting paradox on our hands. I think, despite its plausibility, that the thing to give up is 2. Credences should be responsive to evidence. If you don’t know what your evidence is, you can’t know that you’re being responsive to evidence, i.e. being rational. It might be that all of the possible errors are on one side of the correct position, in which case your best estimate of what you should do will diverge from what you should do. So anyone who thinks evidence isn’t always luminous should expect oddities like those used to motivate 2, and I think we have to learn to live with its failures.

Everyone at the conference seemed to assume that Williamson would agree, and say that 2 is what should be given up. I’m not actually sure, as a matter of Williamson interpretation, that that’s correct. Williamson denies that we can interpret evidential probabilities in terms of the credences of a hypothetically rational agent. It might be that he would give up both 1 and 2, and deny that there is any simple relationship between rational credence and evidential probability. Or he might accept that Exp(Pr(p | E(a)), a) is a better guide to rational credence than Pr(p | E(a)).

Whatever way we look at it though, I think that this is an interesting little paradox, and one of several reasons I liked the conference at the weekend was that I realised it existed.

## 6 Replies to “First and Second Order Epistemic Probability”

1. Jonathan Ichikawa says:

I think I was the one person who suggested that Williamson might do well to reject (1) and accept (2).

Maybe the real thing to say is that our notion of ‘rational’ isn’t quite precise enough as it stands to come down one way or another; there’s the rational-in-the-(1)-way and the rational-in-the-(2)-way; one could, if one wanted, just leave it at that.

For the record, though, (2) sounds more like rationality to me; I’d reject (1).

2. I guess I’m with Jonathan here re (1). If you know a bunch of stuff that all has a very high chance (but not 1) of being true, the conjunction C of what you know may well have an extremely low chance. But still, for Williamson, it has evidential probability 1 (given E=K). Suppose the person is aware that C has a low chance. It seems very harsh to accuse someone of irrationality for adjusting their credences to the known chances rather than to the evidential probability (which they don’t know is their evidential probability).
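A toy calculation brings this out (the numbers and the independence assumption are illustrative, not from the comment):

```python
# Suppose a knows each of n propositions, each with chance 0.99 of truth.
# Assuming independence (an illustrative simplification), the chance of
# their conjunction C collapses, even though on E = K each conjunct --
# and hence C -- has evidential probability 1.

n = 500
chance_each = 0.99
chance_C = chance_each ** n   # the known chance of the conjunction

evidential_prob_C = 1.0       # Pr(C | E(a)) = 1, since C is entailed by the evidence

print(chance_C)               # well under 0.01: the known chance of C is tiny
```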

Williamson’s recent reply to Hawthorne and Lasonen-Aarnio on chance and safety might be a decent place to get some traction on Williamson-exegesis on this front.

3. clayton says:

“Then the following four statements are inconsistent.”

I take myself to have imperfect access to my evidence, but I think my evidence suggests that there are three statements at issue.

I’ll be the third to say that we ought to be sceptical of (1). It’s similar to the view that rationality requires correctly identifying and responding to reasons. That’s still a live view, but it’s controversial and I think Parfit and Broome have done a nice job showing that the view isn’t one that’s forced upon us.

Two claims about evidence strike me as rather plausible (assuming a propositional conception of evidence). The first is that non-inferential knowledge that p is true suffices for p’s inclusion in a subject’s evidence. The second is that evidence consists of truths. Combined, if you had two subjects perfectly alike in terms of their non-factive mental states, one who hallucinates and one who has a veridical perceptual experience in light of which they know p non-inferentially, the second subject has evidence the first doesn’t.

Those who accept this view can accommodate (1) by saying either that the first subject counts as less than perfectly rational simply because she’s hallucinating, or that if she were perfectly rational she’d be in a position to determine that she’s hallucinating. Better, I think, to say that the failure to discriminate between hallucination and veridical perceptual experience is a failure to identify what reasons bear on whether to believe, but the failure to identify reasons is not a failure of rationality. There’s a division of labor. Perception is supposed to set the reasons out before the mind, and reason reasons from there. Failures of rationality have more to do with failures to respond in the way you should on the hypothesis that your views about the reasons you have are correct (or something like that).

I guess that those who like (1) will either say that non-inferential K isn’t enough to get a proposition into your evidence or will say that there can be false propositions that constitute evidence. Myself, I think that these claims are plausible enough that if (1) can’t be squared with them it is (1) that we should reject.

4. Martin Smith says:

Option (2) strikes me as a bit unstable.

If E=K then Exp(Pr(p | E(a)), a) and Exp(Exp(Pr(p | E(a)), a), a) can come apart.

Suppose there are only two open possibilities – one in which I veridically perceive that p and one in which I hallucinate that p and p is in fact false.

If E=K then, in the good case, p will be part of my evidence and thus have an evidential probability of 1 for me. In the bad case, it will still appear to me that p, and this will be part of my evidence. Suppose that the probability of p conditional upon this evidence is 0.9.

Suppose I’m in the bad, hallucinating case. Then Pr(p | E(a)) = 0.9.
Exp(Pr(p | E(a)), a) = 0.9 × 1 + 0.1 × 0.9 = 0.99
Exp(Exp(Pr(p | E(a)), a), a) = 0.9 × 1 + 0.1 × 0.99 = 0.999

Why should I be rationally obliged to align my credence with expected evidential probability rather than expected expected evidential probability or expected expected expected evidential probability etc.?
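The regress can be checked numerically. A minimal sketch using the 0.9/0.1 weights above; the function name is mine:

```python
# Iterating Martin's expectation with his numbers: credence 0.9 in the
# good case (where the value at every level is taken to be 1) and 0.1 in
# the bad case (where it is the previous level's value).

def iterate_exp(v, cr_good=0.9, levels=3):
    """Return successive levels: Exp, Exp(Exp), Exp(Exp(Exp)), ..."""
    out = []
    for _ in range(levels):
        v = cr_good * 1.0 + (1 - cr_good) * v
        out.append(v)
    return out

# Starting from Pr(p | E(a)) = 0.9, the levels come out near 0.99, 0.999,
# 0.9999: each further iteration yields a different candidate for rational
# credence, which is the instability worry.
print(iterate_exp(0.9))
```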

(1) has its own problems though – like the one Robbie raised above and those raised in Roger White’s paper. I think rejecting both 1 and 2 may be quite an attractive option for a proponent of the knowledge account.

A lot of people at the conference did seem to think that Williamson would accept something like 1. But, like Brian, I’m a bit suspicious of this.

5. I’m with Martin that (2) is unstable. I think if (1) is to go, it’s because the link between evidential probability and rational credence is quite complicated, not because they are linked at the second-order. Martin’s example is a really nice demonstration of the problems.

Despite the hostility here, I think (1) is arguably correct. Contra Clayton, I don’t think it involves giving up either the claim that non-inferential knowledge is evidence, or the factivity of evidence. It just requires saying that in some cases it is hard to know what the rational thing to do is. But we knew that all along – try figuring out what the rational thing to do was when trading mortgage insurance products in 2006.