# Infinite Probabilities

There is an odd paper by Jeanne Peijnenburg in the latest *Mind*. (It’s subscription only, so no link.) There’s a formal point and a philosophical point.

The formal point concerns the following question. Are there values of a1, b1, a2, b2, … such that, given that P(Ei|Ei+1) = ai and P(Ei|~Ei+1) = bi for all i, we can compute the value of P(E1)? This is answered in the affirmative, at least in some complicated cases where we have to compute some tricky infinite sequences.
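By the law of total probability, each link of the chain satisfies P(Ei) = ai·P(Ei+1) + bi·(1 − P(Ei+1)). A minimal Python sketch (the constant values of ai and bi below are my own illustration, not Peijnenburg's) shows how folding that recursion back along a long enough chain pins down P(E1) regardless of what we assume about the tail:

```python
def p_first(a, b, p_tail):
    """Fold P(E_i) = a_i*P(E_{i+1}) + b_i*(1 - P(E_{i+1})) back from an
    assumed tail probability p_tail = P(E_{n+1}) to get P(E_1)."""
    p = p_tail
    for a_i, b_i in zip(reversed(a), reversed(b)):
        p = a_i * p + b_i * (1 - p)
    return p

# Constant chain a_i = 0.9, b_i = 0.2 (illustrative values only).
n = 60
a, b = [0.9] * n, [0.2] * n
# The two extreme tail assumptions agree: the chain pins down P(E_1).
print(abs(p_first(a, b, 0.0) - p_first(a, b, 1.0)) < 1e-8)  # True
```

Each step multiplies the influence of the tail by a factor of ai − bi, so whenever those differences stay bounded away from 1 in absolute value, the infinite chain determines P(E1) uniquely.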

The philosophical point is that this is meant to be a defence of infinitism, à la Peter Klein. The idea, if I’ve understood it, is that (contra Klein’s critics) we can deduce unconditional probabilities from an infinite string of conditional probabilities. So probabilities don’t have to be ‘grounded’ in unconditional probabilities, as Klein suggests.

But there’s a much simpler way to prove the formal point. If a1 = b1 = x, then P(E1) = x, whatever the other values are. That’s a way to get from conditional probabilities to unconditional probabilities, and we don’t even need an infinite chain. So I don’t see how this is meant to give any support to infinitism. Maybe I’m just missing something here. At the very least, I’m certainly missing how these computations of particular probabilities support the idea that infinite chains can justify old-fashioned, non-probabilistic belief.
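The point is immediate from the law of total probability, P(E1) = a1·P(E2) + b1·(1 − P(E2)): when a1 = b1 = x the P(E2) terms cancel. A quick numeric check (x and the trial values of P(E2) are arbitrary):

```python
# If a1 = b1 = x, then P(E1) = x*P(E2) + x*(1 - P(E2)) = x for any P(E2).
x = 0.37
for p_e2 in (0.0, 0.25, 0.9, 1.0):
    p_e1 = x * p_e2 + x * (1 - p_e2)
    assert abs(p_e1 - x) < 1e-12
print("P(E1) =", x, "whatever P(E2) is")
```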

## 2 Replies to “Infinite Probabilities”

1. I was going to say that the non-probabilistic case could follow from the special case where the ai all equal 1. But if the ai all equal 1 and the bi all equal 0, then we get infinitely many solutions – as long as P(Ei) is constant for all i, the equations are satisfied. I suspect that something like this indeterminacy is the intuition against infinitism in the first place (if each belief is justified by the previous in the chain, then everything is fine if they’re all true, but nothing stops them from all being false).
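That indeterminacy is visible directly in the recursion P(Ei) = ai·P(Ei+1) + bi·(1 − P(Ei+1)): with ai = 1 and bi = 0 it reduces to P(Ei) = P(Ei+1), so any constant assignment satisfies every equation. A minimal check (the trial values are arbitrary):

```python
def step(a_i, b_i, p_next):
    # One link of the chain: P(E_i) = a_i*P(E_{i+1}) + b_i*(1 - P(E_{i+1}))
    return a_i * p_next + b_i * (1 - p_next)

# With a_i = 1 and b_i = 0 every constant value is a solution: nothing in
# the equations privileges truth over falsehood.
for p in (0.0, 0.3, 0.7, 1.0):
    assert step(1, 0, p) == p
print("any constant P satisfies the a_i = 1, b_i = 0 chain")
```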

It seems difficult to me to phrase justification in these terms if the probability function is subjective – if a1 = .1 and b1 = .9, and P(E1) = .1, then P(E0) = .82 – does this mean that E0 is justified by E1? Or by ~E1? And given any probability function, and any two (unrelated) propositions E0 and E1, there are in fact values of a1 and b1 that make the equation come out right; so if E0 has high probability, it looks like it’s justified by every proposition on this sort of measure.
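The .82 here is just one application of the law of total probability; a quick check in Python:

```python
# P(E0) = P(E0|E1)*P(E1) + P(E0|~E1)*(1 - P(E1)) = .1*.1 + .9*.9
a1, b1, p_e1 = 0.1, 0.9, 0.1
p_e0 = a1 * p_e1 + b1 * (1 - p_e1)
assert abs(p_e0 - 0.82) < 1e-9
print(round(p_e0, 2))  # 0.82
```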

However, if the probability function is some sort of objective one, and the agent knows the values of the conditional probabilities, then the calculation given in the paper will allow the agent to calculate objective unconditional probabilities, which might make for some sort of infinitary justification. In the case she describes, the unconditional probability value really does depend on all infinitely many ai and bi, so it really is a case where infinitely many premises are needed to justify a certain degree of belief. The case you describe, where a1 = b1, is one where finitely many of the values suffice for the justification. So there really is a difference here. But on my understanding, knowledge of these conditional probabilities will have to be prior to belief in any proposition in the chain. So the structure of justification involves not just the infinite backwards chain but some steps prior to that as well, which is different from the standard infinitist picture, I would think.

2. Actually, if all the ai = 1 and all the bi = epsilon, then P(E0) = 1, despite there being no obvious justifier for E0. That looks more like an infinitism case.
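Unwinding the recursion n steps gives P(E0) = 1 − (1 − ε)^n · (1 − P(En+1)), which tends to 1 whatever the tail value is. A numerical sketch (the particular ε and chain length are arbitrary choices of mine):

```python
eps = 0.01
p = 0.0  # most pessimistic assumption about the tail of the chain
for _ in range(5000):
    p = 1 * p + eps * (1 - p)  # a_i = 1, b_i = eps
# 1 - 0.99**5000 is within about 1.5e-22 of 1
print(p > 0.999999)  # True
```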

But one does wonder where the conditional probabilities come from. That looks like a foundational, rather than an infinitist, move.