I’ve been thinking over the last few days about my colleague Gordon Belot’s forthcoming paper Bayesian Orgulity. In it he poses a series of very difficult challenges to Bayesianism. I’ve been trying to think about how the imprecise Bayesian can respond to these challenges. (I’m thinking of what response an imprecise Bayesian who thinks all updating goes by conditionalisation could make to Gordon’s arguments. This isn’t my view about updating.)

Here’s one example that Gordon uses. The agent, call her *A*, is going to receive data in the form of a series of 0s and 1s. She is investigating the hypothesis that the data is **periodic**. Say that she **succeeds** iff one of the following two conditions holds.

- The data is periodic, and eventually her credence that it is periodic goes above 0.5 and stays there.
- The data is not periodic, and eventually her credence that it is not periodic goes above 0.5 and stays there.

Call the set of data sequences for which a prior succeeds its **success set**, and its complement its **failure set**.
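Spelled out formally (the notation $c_n$ is mine, not the paper's): write $c_n$ for the agent's credence, after seeing the first $n$ bits, that the data is periodic. Since credence above 0.5 that the data is *not* periodic is the same as $c_n < 0.5$, the success condition is:

```latex
\text{succeed} \iff
\begin{cases}
\exists N\, \forall n \ge N:\; c_n > 0.5, & \text{if the data is periodic},\\[2pt]
\exists N\, \forall n \ge N:\; c_n < 0.5, & \text{if the data is not periodic}.
\end{cases}
```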

Gordon suggests the following two constraints on a prior:

- For any initial data sequence x, there are further data sequences y and z such that (a) the agent will have credence greater than 0.5 that the sequence is periodic after getting x + y, and (b) the agent will have credence less than 0.5 that the sequence is periodic after getting x + z. Call any prior with this property *open-minded*.
- The probability that the agent using this prior will succeed (in the sense described above) is not 1.
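To get a feel for how such a prior behaves, here is a toy construction of my own (not Gordon's): a mixture of "the stream is periodic, with a smallish period and a uniformly random repeating pattern" and "the bits are i.i.d. fair-coin flips". Conditionalising this prior on a finite prefix gives a credence that the stream is periodic:

```python
def credence_periodic(x, max_period=5, prior_periodic=0.5):
    """Posterior credence that the stream is periodic, in a toy model.

    Toy prior (my construction, not the paper's): with probability
    `prior_periodic` the stream is periodic, with period p drawn uniformly
    from 1..max_period and a uniformly random repeating pattern of length p;
    otherwise the bits are i.i.d. fair-coin flips.
    """
    n = len(x)
    # Marginal likelihood of the observed prefix under "periodic".
    m_periodic = 0.0
    for p in range(1, max_period + 1):
        if all(x[i] == x[i % p] for i in range(n)):
            # The prefix pins down min(n, p) bits of the length-p pattern;
            # the remaining max(p - n, 0) pattern bits are unconstrained.
            m_periodic += 2.0 ** (max(p - n, 0) - p)
    m_periodic /= max_period
    # Marginal likelihood under the i.i.d. fair-coin hypothesis.
    m_iid = 2.0 ** (-n)
    num = prior_periodic * m_periodic
    return num / (num + (1 - prior_periodic) * m_iid)
```

On a prefix like `"0101010101"` this prior's credence in periodicity is pushed well above 0.5, while on an aperiodic-looking prefix like `"0110100110"` it falls below 0.5. Note the hard cap `max_period` is only there to keep the sketch finite: because of it this prior is *not* open-minded in the sense above (its credence can crash to exactly 0 once every small period is ruled out); a genuinely open-minded prior has to keep every period in play.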

Much of the paper is an argument for the second condition. The argument, if I’ve understood it correctly, is that for any open-minded prior, the data sequences for which it succeeds are highly atypical. Its success set is measure 0 and meagre, while its failure set is dense (and, obviously, the complement of a meagre measure-0 set).

And, as you might have guessed by now, it is impossible to meet these two conditions as a Bayesian agent. Any open-minded prior gives probability 0 to its own failure set, and hence probability 1 to its own success, in violation of the second condition. Gordon argues this is a very bad result for Bayesians, and I’m inclined to agree.

This post has gone on long enough, so I’ll leave how the imprecise Bayesian could respond to another post. I think this is a real problem, and indicative of deeper problems that Bayesians (especially precise Bayesians) have with countable infinities.