I was reading David Christensen’s 2001 paper “Preference-Based Arguments for Probabilism” (Philosophy of Science) and I was struck by what seemed to be a gap in the argument.
Christensen is interested in arguments for probabilism based on representation theorems. The argument Patrick Maher gives in _Betting on Theories_ is as good as any, so we’ll work with it. Maher notes that anyone whose preferences satisfy certain constraints can be represented by a probability function and a utility function such that they prefer A to B just in case the expected utility of A (according to those functions) is higher than that of B. The intended conclusion is that our degrees of belief _should_ form probability functions.
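Put in expected-utility form (this is a gloss on the shape of the claim, not Maher’s own formulation, which differs in its details), the representation result says that for some probability function P and utility function U recovered from the agent’s preferences:

bc. % A gloss on the representation claim: preference goes by expected utility
% computed from the representing pair (P, U), summing over states s.
A \succ B \iff \sum_{s} P(s)\,U(A,s) > \sum_{s} P(s)\,U(B,s)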
Now as Christensen notes, there’s a gap between premise and conclusion here. He takes Maher to be filling it by _analysing_ degrees of belief as dispositions to bet, and hence making an analytic/metaphysical connection between the preferences (which by hypothesis satisfy the axioms) and the degrees of belief that we can uniquely represent the agent as having. (There’s a complication to do with the fact that on Maher’s theory the probability function is not defined uniquely. We’ll bracket that issue here.) Hence the requirement to have preferences that satisfy the axioms just is the requirement to have degrees of belief that form probability functions.
If that’s what Maher is doing (and I don’t want to get into exegetical debates about this, so this isn’t directed at Maher so much as at Maher under a (mis)interpretation) then it seems bad. Functionalism is a good theory of mental content, but not so simple a functionalism as is displayed here. Christensen has made this point in a few places, and it’s a very good point, one that I believe should receive more attention than it does.
Christensen thinks we can bridge the gap normatively. He endorses the following principle:
bq. *Informed Preference*: An _ideally rational_ agent prefers the option of getting a desirable prize if _B_ obtains to the option of getting the same prize if _A_ obtains, just in case _B_ is more probable for that agent than _A_.
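Written out a little more formally (with G standing in for the desirable prize and P for the agent’s degrees of belief), the principle amounts to something like this:

bc. % Informed Preference, roughly formalised: for an ideally rational agent,
% a bet of prize G on B is preferred to a bet of G on A just in case
% B is more probable for the agent than A.
(G \text{ if } B) \succ (G \text{ if } A) \iff P(B) > P(A)

Only a comparison of probabilities appears on the right-hand side; the principle says nothing about what the values themselves must be, which is worth keeping in mind for the example below.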
He doesn’t say much more, but I think from the context that’s meant to fill in the gap between the representation theorem and probabilism. But it doesn’t, not by a long way, as the following example shows.
Jack’s preferences are only defined over bets to do with the truth or falsity of p, and Boolean constructions from p. He is logically omniscient, so he knows which things are equivalent to p, which to ~p, which to p v ~p and which to p & ~p. His betting dispositions are entirely as if he were maximising expected utility according to a probability function that assigned 1/2 to p (and its equivalents) and 1/2 to ~p (and its equivalents). So he satisfies all of Maher’s axioms. But in fact his degree of belief in p is 1/3, and his degree of belief in ~p is 1/3. (Christensen is committed to this being _possible_, so I’m not begging any questions yet by raising it.)
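To see the structure of the example concretely, here is a minimal sketch. Jack’s degrees of belief in the tautology and the contradiction aren’t specified above, so the values 2/3 and 0 used for them below are purely illustrative assumptions; nothing turns on them.

bc. from itertools import combinations
# Jack's betting dispositions are *as if* he had this probability function.
as_if = {"p": 1/2, "~p": 1/2, "p v ~p": 1.0, "p & ~p": 0.0}
# His actual degrees of belief: 1/3 in p and in ~p, as in the example;
# the values for the tautology and contradiction are assumed for illustration.
actual = {"p": 1/3, "~p": 1/3, "p v ~p": 2/3, "p & ~p": 0.0}
def comparative_order(credence):
    # For each pair of propositions, record which one a desirable prize is
    # better bet on -- the only thing Informed Preference constrains.
    order = {}
    for a, b in combinations(credence, 2):
        if credence[a] > credence[b]:
            order[(a, b)] = a
        elif credence[b] > credence[a]:
            order[(a, b)] = b
        else:
            order[(a, b)] = "indifferent"
    return order
# The two credence functions induce the same comparative ordering, so Jack's
# betting preferences are consistent with Informed Preference ...
print(comparative_order(as_if) == comparative_order(actual))  # True
# ... even though his actual degrees of belief violate probabilism.
print(actual["p"] + actual["~p"])  # 0.666..., not 1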
Note that Jack satisfies *Informed Preference*. But he is irrational by the lights of Probabilism. So an argument for Probabilism of the kind Christensen wants will have to rest on a much stronger principle than that. One such principle would be a *Maximise Expected Utility* principle, but Christensen rightly notes that such a principle would have little force.
I think we can rescue Christensen’s argument by an additional premise.
For any rational agent there is some subset S of [0, 1] such that
bq. For any x in [0, 1] and any c > 0 there exists a member s of S such that |x – s| < c.

UPDATE: As David notes in the comments, this argument only works against him if it works against Maher. And since Maher has a formal proof that his theory is immune to this kind of counterexample, one might reasonably suspect that it doesn’t work. I _think_ what happens here is that one of Maher’s axioms to which I’d paid little attention rules out this kind of example. But that axiom seems independently questionable, a topic on which I may say more soon. But for now I should just note that since the paper I was commenting on explicitly adopted all of Maher’s axioms, I shouldn’t have proposed a counterexample that didn’t satisfy all those axioms.