A Counterexample About Disagreement

S and T regard themselves, antecedently, as epistemic peers when it comes to judging horse races. They are both trying to figure out who will win this afternoon’s race. There are three horses that are salient.

  • Superfast, who is super fast.
  • Three-Legs, who only has three good legs.
  • Magoo, who can’t see well enough to run straight.

They discover the same evidence, and that evidence includes the existence of a genie. The genie will make it the case that if S believes at 3 o’clock that Three-Legs will win, then Three-Legs will win. And the genie will make it the case that if T believes at 3 o’clock that Magoo will win, then Magoo will win. (If both S and T form these beliefs, the genie will cause Three-Legs and Magoo to dead-heat. Otherwise the genie will ensure that there is at most one winner.) The non-supernatural evidence all points in favour of Superfast winning. S and T both have evidence that neither of them is the kind of person who usually forms beliefs in response to what meddling genies do, so each has compelling reason to discount the possibility that the other will cause the genie to affect the race.

S and T consider the evidence, then get together at 3:01 to compare notes. S has formed the belief that Superfast will win. T has, uncharacteristically, formed the belief that Magoo will win. At this point it is clear what S should do. Her evidence, plus what she has learned about T, entails that Magoo will win. (We’re assuming that S knows that the genie is good at what he does.) So S should believe that Magoo will win.
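To make the entailment explicit (a schematic rendering, using shorthand of my own that is not in the original): let $m$ be the proposition that Magoo will win, and let $B_T(m)$ be the proposition that T believed $m$ at 3 o’clock. At 3:01 S learns $B_T(m)$, and her evidence about the genie gives her the conditional $B_T(m) \rightarrow m$. One step of modus ponens then delivers $m$:

$$
\frac{B_T(m) \qquad B_T(m) \rightarrow m}{m}
$$

And since S knows that if Magoo wins, Superfast does not, $m$ gives her that Superfast will lose.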

This is a problem for several theories of disagreement.

The Equal Weight View says that in cases of peer disagreement, the disagreers should split the difference (or something close to it). But that’s not true. S should defer entirely to T’s view.

The Right Reasons View says that in a case of peer disagreement, the rational agent should stick to her judgment, and the irrational agent should defer. But in this case precisely the opposite should happen.

Someone might deny this by arguing that T’s belief is not irrational. After all, given what T knows, her belief is guaranteed to be true. You might think that this is enough to make it justified. But I don’t think that’s right. When T forms the belief that Magoo will win, she has no evidence that Magoo will win, and compelling evidence that Magoo will lose. It is irrational to form a belief for which you have no evidence, let alone one against which you have compelling evidence. So T’s belief is irrational.

To back this up, imagine a chemist a few hundred years ago who has little evidence in favour of the oxygen theory, and a lot of evidence in favour of the phlogiston theory. The chemist decides nonetheless to believe the oxygen theory, i.e., to believe that oxygen exists. Now there’s a good sense in which that belief is self-verifying. The holding of the belief guarantees that it is true, since the chemist could not have beliefs if there were no oxygen. But this does not make the belief rational, since it is not justified by the evidence.

Even if you doubt all this, I think the Right Reasons View is still false in this case. If both parties are rational, then the Right Reasons View implies that a rational agent can either stick with her belief or adopt her peer’s belief. (Or, if some in-between belief is rational, adopt that. But such an option won’t always be available.) That’s not true in this case. It is irrational for S to hold on to her rational belief in the face of T’s disagreement.

My preferred ‘screening’ view of disagreement gets the right answer here. I think every disagreement puzzle is best approached by starting with the following kind of table. Here p is the proposition that Superfast will win, and E is the background evidence that S and T possess.

| Evidence that p | Evidence that ¬p |
| --- | --- |
| S’s judgment that p | T’s judgment that ¬p |
| E | |

I think that the evidential force of *rational* judgments is screened off by their underlying evidence. So this table is a little misleading. Really it should look like this.

| Evidence that p | Evidence that ¬p |
| --- | --- |
| | T’s judgment that ¬p |
| E | |
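For readers who want the screening claim in probabilistic dress, here is one standard way to cash it out (my gloss, using the familiar Reichenbach-style notion of screening off; none of this notation is in the original). Let $J_S$ be S’s rational judgment that p, formed on the basis of E. To say that E screens off $J_S$ is to say that

$$
\Pr(p \mid E \wedge J_S) = \Pr(p \mid E).
$$

Once E is in hand, learning that S rationally judged that p on the basis of E adds no further support for p, which is why S’s judgment drops out of the table.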

Except now E is misclassified. Although E is generally evidence for p, in the presence of T’s judgment that Magoo will win, it is evidence that ¬p. (This is just a familiar instance of evidential holism.) So the table in fact looks like this.

| Evidence that p | Evidence that ¬p |
| --- | --- |
| | T’s judgment that ¬p |
| | E |

And clearly this supports S’s judging that ¬p, and indeed that Magoo will win.
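In the same probabilistic idiom (again my gloss, with $J_T$ for T’s 3 o’clock judgment that Magoo will win, and assuming S is certain the genie will do as advertised): E on its own strongly supports p, but conjoined with $J_T$ it rules p out.

$$
\Pr(p \mid E) \text{ is high}, \qquad \Pr(p \mid E \wedge J_T) = 0.
$$

Whether E counts for or against p depends on what it is combined with, which is the holist point.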

Before thinking about cases like this one, I had thought that the screening view entailed the Right Reasons View about disagreement. But that isn’t true. In some cases, it implies that the person who makes the rational judgment should defer to the person who makes the irrational judgment. Fortunately, it does that just in cases where intuition agrees!