I already wrote “one post”:http://tar.weatherson.org/archives/002888.html attempting to poke holes in David Christensen’s paper “Preference-Based Arguments for Probabilism” (Philosophy of Science) and after a little back-and-forth in the comments it seems my argument didn’t work. (It didn’t work for an interesting reason, namely that some of the constraints Christensen had imported from Patrick Maher are “less than persuasive”:http://brian.weatherson.org/totmoc.pdf, but not working for an interesting reason is still a way of not working.) So you’d have every right to be suspicious of another attempt to hole-poke. But I’ll try anyway.
Let’s recap the bits I was agreeing with. Christensen interpreted Patrick Maher as offering the following argument.
P1. Everyone should have preferences that satisfy coherence constraints X.
P2. Anyone whose preferences satisfy coherence constraints X has credences that conform to the probability calculus.
C. Everyone should have credences that conform to the probability calculus.
Constraints X here are the long list of constraints that Maher provides. I won’t list them here because I’m bracketing concerns about P1 for the sake of argument.
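To fix ideas, here is how I would formalise that argument, with $O$ as a 'should' operator, $Sx$ for '$x$'s preferences satisfy constraints X', and $Px$ for '$x$'s credences conform to the probability calculus'. (The notation is mine, not Maher's or Christensen's.)

$\forall x\, O(Sx)$ (P1)
$\forall x\, (Sx \to Px)$ (P2)
$\therefore\ \forall x\, O(Px)$ (C)

As I read it, this inference goes through only if P2 holds of something like conceptual necessity, so that the 'should' transmits from $Sx$ to $Px$; and that, I take it, is where the functionalism discussed below is meant to come in.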
Christensen argues persuasively that P2 here is false, and that the only way it could be justified is by an implausibly strong reductionist functionalism about credences. But he thinks the argument can be revived in the following way. (Note this is my way of putting things, not Christensen’s, and I may be interpreting things uncharitably.)
P1. Everyone should have preferences that satisfy coherence constraints X.
P2′. Everyone whose preferences satisfy coherence constraints X should have credences that conform to the probability calculus.
C. Everyone should have credences that conform to the probability calculus.
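In the same notation, on what strikes me as the natural reading of P2′, the revived argument looks like this:

$\forall x\, O(Sx)$ (P1)
$\forall x\, (Sx \to O(Px))$ (P2′)
$\therefore\ \forall x\, O(Px)$ (C)

Note where the 'should' operator now sits in the second premise: inside the consequent of the conditional.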
I think Christensen’s arguments for P2′ are persuasive, like his arguments against P2. I don’t agree with P1, but as mentioned I’m bracketing that concern. The big problem is that the argument now doesn’t look to be valid. To illustrate, consider the case of an agent who is flirting with having intransitive preferences. (This was sort of me earlier this year.) Assume our agent, call him Brian, prefers A to B, and prefers B to C, but can’t decide whether he prefers A to C or not. Let’s also assume he has good reasons to prefer A to B and B to C. Then the following premises seem true.
P3. Brian should prefer A to C.
P4. If Brian prefers A to C, he should believe that he prefers A to C.
C2. Brian should believe that he prefers A to C.
It looks like this argument has true premises and a false conclusion, so it is not valid. P3 is supported by all the arguments for transitivity, and the fact that Brian has reasons to prefer A to B and B to C. P4 is supported by the general fact that Brian should have accurate beliefs about his own preferences. But the conclusion is false. If Brian is undecided between A and C, as he was, he should believe he is undecided between A and C, not that he prefers A to C.
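To see the failure schematically, let $p$ be 'Brian prefers A to C' and $q$ be 'Brian believes that he prefers A to C'. Then the argument is:

$O(p)$ (P3)
$p \to O(q)$ (P4)
$\therefore\ O(q)$ (C2)

Since $p$ is actually false, P4 is vacuously true; P3 is true for the reasons given; but $O(q)$ is false, since what Brian should believe is that he is undecided.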
So arguments of this form are invalid. But it seems my little argument has exactly the same form as Christensen’s. So I think Christensen’s argument is invalid. Note that something similar to Christensen’s argument is valid, and may even be sound. (Well, bracketing concerns about P1.)
P1′. All rational agents have preferences that satisfy coherence constraints X.
P2″. All rational agents whose preferences satisfy coherence constraints X have credences that conform to the probability calculus.
C′. All rational agents have credences that conform to the probability calculus.
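In the same notation, with $Rx$ for '$x$ is rational', this argument is:

$\forall x\, (Rx \to Sx)$ (P1′)
$\forall x\, ((Rx \wedge Sx) \to Px)$ (P2″)
$\therefore\ \forall x\, (Rx \to Px)$ (C′)

This is valid by first-order logic alone; since no 'should' operator appears, no scope problem can arise.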
But C′ is not equivalent to C, and the differences are relevant here. After all, it is plausible that all agents who are always rational have credences that conform to, say, Reflection. But as Christensen (and others) have argued, that is no reason to think that _we_ should be Reflective. If the best argument for probabilism is no better than the argument for Reflection, i.e. if it shows that ideal agents are probabilists but not that we should be probabilists, that would be a blow to probabilism.