I change my mind on philosophical matters about once a decade, so even considering that something I have hitherto believed is wrong is quite a rare experience. It’s a pretty esoteric little point to change my mind on though.
For a long time, at least 7 or 8 years I think, I’ve thought it best to model the doxastic states of a rational but uncertain agent not by a single probability function, but by sets of such functions. I’m hardly alone in such a view. Modern adherents have (at various times) included Isaac Levi, Bas van Fraassen and Richard Jeffrey. Like Jeffrey and (I believe) van Fraassen, and unlike Levi, I thought this didn’t make any difference to decision theory. Indeed, I’ve long held that a sequence of decisions is rationally permissible for an agent characterised by a set S iff there is some particular probability function P in S such that no action in the sequence is sub-optimal according to P. I’m thinking of changing that view.
The reason is similar to one given by Peter Walley. He argues that the position I just sketched is too restrictive. The important question for Walley concerns countable additivity. He thinks (as I do) that the arguments from conglomerability show that any agent represented by a single probability function should be represented by a countably additive function. But he notes that there are sets of merely finitely additive functions such that any agent represented by such a set who follows his decision-theoretic principles will not be Dutch Booked. He argued that such an agent would be rational, so rationality cannot be equivalent to representability by a set of acceptable (that is, countably additive) probability functions.
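A stock illustration of the kind of merely finitely additive function at issue (a textbook example, not necessarily the one Walley uses): in a lottery with countably many tickets, let Pr assign probability 0 to each proposition of the form “ticket n wins” while assigning probability 1 to the proposition that some ticket or other wins. Every finite disjunction of the “ticket n wins” propositions gets probability 0, so finite additivity is respected, but the probability of the countably infinite disjunction (namely 1) is not the sum of the probabilities of the disjuncts (namely 0), so countable additivity fails.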
I never liked this argument, for three reasons. First, I didn’t accept his decision principles, which seemed oddly conservative. (From memory it was basically: act only if all the probability functions in your representor tell you to act.) Second, I don’t think Dutch Book arguments are that important. I’d rather have purely epistemological arguments for epistemological conclusions. Third, the argument rested on an odd restriction to agents with bounded utility functions, and I don’t really see any reason to restrict ourselves to such agents. So I’d basically ignored the argument until now. But I’m starting to appreciate it anew.
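To see how that conservative rule comes apart from the rule I sketched above, here is a toy case of my own (not one of Walley’s). Suppose the representor is S = {P1, P2}, with P1(p) = 0.3 and P2(p) = 0.7, and the available act is a bet that wins 1 util if p and loses 1 util if not-p. The expected value of the bet is 0.3 - 0.7 = -0.4 according to P1, and 0.7 - 0.3 = 0.4 according to P2. On my rule, taking the bet is permissible, since it is optimal according to P2 (and declining is also permissible, since declining is optimal according to P1). On the conservative rule, as I’m remembering it, the bet is not to be taken, because P1 advises against it; only acts that every function in the representor endorses get the green light.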
I would like to defend as strong a conglomerability principle as possible. In particular I would like to defend the view that if Pr(p | p or q) < x for every q in some partition of the ways p could fail to be true, then Pr(p) < x, no matter how large that partition is. Here is the kind of case where this starts to matter. Consider a chancy process that starts at time 0 and stops at some random time t, where the chance that the process is still running at time t is e^(-t/h) for some fixed constant h, and let g(t) = 1 - e^(-t/h).
If I’ve done the maths right, for any interval of length l, the objective chance that g(t) falls into l is l. So prior to the process starting up, I’d better assign probability l to g(t) falling in that interval. The question now is whether I can extend that to a complete (conditional) probability function in anything like a plausible way, remembering that I want to respect conglomerability. I’m told by people who know a lot more about this stuff than I do that it will be tricky. Let’s leave the heavy mathematical lifting for another day, because here is where I’m starting to come around to Walley’s view.
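(In case it helps, here is the little bit of maths behind that chance claim, taking the set-up just as described above. The chance that g(t) <= x equals the chance that e^(-t/h) >= 1 - x, that is, the chance that t <= -h ln(1 - x), and given the survival chance e^(-t/h) that is 1 - e^(ln(1 - x)) = x, for any x in [0, 1). So g(t) is uniformly distributed over [0, 1), and any subinterval of length l gets chance l.)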
Consider the set of all probability functions such that for any interval of length l,
(1) Pr(g(t) is in l) = l.
Some of these will not be conglomerable. Consider, for instance, the function that, as well as obeying (1), is such that Pr(g(t) = x | g(t) = x or y) = 1/2 for any reals x and y. That won’t be conglomerable, since Pr(g(t) is in [0, 1/3)) must be 1/3 by (1), and yet we can partition the possible values of g(t) into pairs with exactly one member below 1/3 (pair each x below 1/3 with 2x + 1/3, say), and conditional on each such pair the probability that g(t) is below 1/3 is 1/2. Conglomerating over that partition would give 1/2, not 1/3. Still, functions like that look like perfectly natural members of the set, and it is not obvious that I can do everything I want to do with the conglomerable members alone. If I can’t, then the set that best represents my state of mind contains functions that would not, taken one by one, be acceptable credence functions, and that is just the structure of Walley’s position, with conglomerability in place of countable additivity.

(In other news, I’ve been enjoying Barkley Rosser’s papers, especially this one on the Holmes-Moriarty problem. Rosser’s work is philosophical enough, I think, that I should probably track him on the papers blog. I’m very grateful to Daniel Davies for pointing out Rosser’s site to me.)