July 19th, 2010

A Counterexample About Disagreement

S and T regard themselves, antecedently, as epistemic peers when it comes to judging horse races. They are both trying to figure out who will win this afternoon’s race. There are three horses that are salient.

They discover the same evidence, and that evidence includes the existence of a genie. The genie will make it the case that if S believes at 3 o’clock that Three-Legs will win, then Three-Legs will win. And the genie will make it the case that if T believes at 3 o’clock that Magoo will win, then Magoo will win. (If both S and T form these beliefs, the genie will cause Three-Legs and Magoo to dead-heat. Otherwise the genie will ensure that there is at most one winner.) The non-supernatural evidence all points in favour of Superfast winning. S and T both have evidence that neither of them is the kind of person who usually forms beliefs in response to what meddling genies do, so both of them have compelling reason to discount the possibility that the other will cause the genie to affect the race.

S and T consider the evidence, then get together at 3:01 to compare notes. S has formed the belief that Superfast will win. T has, uncharacteristically, formed the belief that Magoo will win. At this point it is clear what S should do. Her evidence, plus what she has learned about T, entails that Magoo will win. (We’re assuming that S knows that the genie is good at what he does.) So S should believe that Magoo will win.
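To spell out that entailment (the notation here is mine, not part of the original setup): write B_T(m) for ‘T believes at 3 o’clock that Magoo will win’ and m for ‘Magoo will win’. At 3:01, S’s evidence includes:

B_T(m) → m    (the genie’s guarantee, part of the shared evidence)
B_T(m)        (learned when comparing notes)
∴ m           (by modus ponens)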

This is a problem for several theories of disagreement.

The Equal Weight View says that in cases of peer disagreement, the disagreers should split the difference (or something close to it). But that’s not true. S should defer entirely to T’s view.

The Right Reasons View says that in a case of peer disagreement, the rational agent should stick to her judgment, and the irrational agent should defer. But in this case precisely the opposite should happen.

Someone might deny this by arguing that T’s belief is not irrational. After all, given what T knows, her belief is guaranteed to be true. You might think that this is enough to make it justified. But I don’t think that’s right. When T forms the belief that Magoo will win, she has no evidence that Magoo will win, and compelling evidence that Magoo will lose. It’s irrational to form beliefs like this for which you have no evidence. So T’s belief is irrational.

To back this up, imagine a chemist a few hundred years ago who has little evidence in favour of the oxygen theory, and a lot of evidence in favour of the phlogiston theory. The chemist decides nonetheless to believe the oxygen theory, i.e., to believe that oxygen exists. Now there’s a good sense in which that belief is self-verifying. The holding of the belief guarantees that it is true, since the chemist could not have beliefs if there were no oxygen. But this does not make the belief rational, since it is not justified by the evidence.

Even if you doubt all this, the Right Reasons View is still, I think, false in this case. If both parties are rational, then the Right Reasons View implies that a rational agent can either stick with her belief or adopt her peer’s belief. (Or, if some in-between belief is rational, adopt that; but there won’t always be such a belief.) That’s not true in this case. It is irrational for S to hold on to her rational belief in the face of T’s disagreement.

My preferred ‘screening’ view of disagreement gets the right answer here. I think every disagreement puzzle is best approached by starting with the following kind of table. Here p is the proposition that Superfast will win, and E is the background evidence that S and T possess.

| Evidence that p | Evidence that ¬p |
| --- | --- |
| S’s judgment that p | T’s judgment that ¬p |
| E | |

I think that the evidential force of rational judgments is screened off by their underlying evidence. So this table is a little misleading. Really it should look like this.

| Evidence that p | Evidence that ¬p |
| --- | --- |
| | T’s judgment that ¬p |
| E | |

Except now E is misclassified. Although E is generally evidence for p, in the presence of T’s judgment that Magoo will win, it is evidence that ¬p. (This is just a familiar instance of evidential holism.) So the table in fact looks like this.

| Evidence that p | Evidence that ¬p |
| --- | --- |
| | T’s judgment that ¬p |
| | E |

And clearly this supports S judging that ¬p, and in fact that Magoo will win.
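For those who prefer the point in probabilistic dress (this gloss is mine, on a broadly Bayesian reading of ‘screening off’): E screens off S’s judgment from p when

P(p | E, S judges that p) = P(p | E)

whereas, given the genie’s guarantee, T’s judgment is not screened off at all; rather

P(p | E, T judges that ¬p) ≈ 0

since T’s judgment that Magoo will win, together with the genie’s reliability, guarantees that Magoo rather than Superfast will win.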

Before thinking about cases like this one, I had thought that the screening view entailed the Right Reasons View about disagreement. But that isn’t true. In some cases, it implies that the person who makes the rational judgment should defer to the person who makes the irrational judgment. Fortunately, it does that just in cases where intuition agrees!

Posted by Brian Weatherson in Uncategorized



5 Responses to “A Counterexample About Disagreement”

  1. P.D. Magnus says:

    Perhaps I’m missing something. If we treat T as rational, then screening off entails that we should remove T’s judgment from the first table just as we remove S’s judgment. We only get the second or third table if we treat T as irrational but S as rational. This seems like a dodge. Theories of disagreement are about what to do when one rationally disagrees with one’s epistemic peers, but in this case the resolution comes from denying that S and T are peers – S’s judgment is in play, but T’s judgment is a brute datum.

  2. Brian Weatherson says:

    We have to distinguish whether we treat T as rational, and whether T’s particular judgment is rational. Rational people make mistakes, so a generally rational person might make an irrational judgment. That’s what has happened in this case.

    This doesn’t involve denying that S and T are peers in the generic sense. But if there’s a disagreement, then (in most cases) one person will be making a mistake. And it’s a good idea for the theorist to take into account the fact that one party is, and one party is not, making a mistake.

  3. Leon Leontyev says:

    I don’t find the analogy between the oxygen case and the horse race very plausible. The chemist has no idea that he could not hold the oxygen theory unless it were true, but T knows that if he believes that Magoo will win then his belief will be true.

    I wonder if voluntarism has anything to do with our judgement of the case. If voluntarism is true, then I think T does have good evidence for thinking that Magoo will win. Since, during the moment of making up his mind about who will win, he intends to believe that Magoo will win (and can tell that he so intends), he has good evidence that Magoo will win (in virtue of his future belief and the genie’s meddling).

    However, I think something considerably weaker than voluntarism about belief needs to be true for this argument to go through. Even if one doesn’t form an intention about what belief to adopt, it is not unreasonable that ‘making up our minds’ is accompanied by some kind of phenomenology of inclination. As soon as T has an inclination to believe that Magoo will win, he has good evidence to believe that Magoo will win.

    Regarding the right reasons view, I think that if this is a counterexample to the view, then it is a counterexample to the view in letter, not in spirit. The view is not simply that:

    If an agent A rationally forms the belief that P at t1, and encounters a peer at t2 who disagrees despite forming their belief on the same evidence that A had at t1, then A should stick to her belief.

    If, for example, A encounters further evidence between t1 and t2 that counts against P, then the view above surely gets the case wrong. We need to add that A does not get any other evidence between t1 and t2. Now even with this restriction, the view gives the wrong verdict on Brian’s horse race case. But that’s just to say we need a further restriction / modification of the view. I don’t think this is ad hoccery. It is completely in the spirit of the view to be restricted in certain ways, like the one I suggest, and no less so in the horse race case.

    Part of the problem is that all these views of disagreement give epistemic recommendations regarding a specific kind of evidence. They’re not fully general. As a result, it is possible to cook up cases involving other epistemic phenomena to which these views simply don’t speak, and which can thus make the views appear to get the cases wrong. We need to evaluate them against a restricted set of cases. As long as the different views give different verdicts on particular cases, this does not sap the views of their interest. So I recommend that, just as we ought to restrict the right reasons view to cases where A doesn’t receive any other evidence between t1 and t2, we should also restrict it to cases where the claim at issue is not one that is true in virtue of some agent believing that it’s true. This of course excludes Brian’s case.

    Perhaps the screening view gives the right verdict about these cases, and so it does better than a restricted version of the Right Reasons View, because the former says something where the latter says nothing. But that’s just to say that the screening view is a particularly good articulation of the right reasons view as opposed to a subtly different contender.

  4. lwalters says:

    Brian,

    Interesting case.

    I don’t know this literature at all, but when S and T meet up, S acquires new information that he didn’t have before, and it is this that should trigger his change of mind. But presumably for the equal weight view, and perhaps others, there is some caveat about both parties being equally well-informed. If not, we don’t need such other-worldly cases to generate counterexamples.

    The idea that we should split the difference is not even prima facie plausible if the relevant parties have a severe disparity in the evidence at their disposal, and your case just highlights this.

  5. stoweteti says:

    What stands out to me in this exercise are the manifold assumptions about rationality. The example assumes that rationality and irrationality are polar states. Perhaps rationality admits of degrees: the stipulation that the genie is “good” at what she does surely lends T’s judgment that ~P some degree of rationality. At the very least, learning of a genie playing the ponies would be a phenomenological addition to T’s (and S’s) thinking. Perhaps T’s conclusion is not uncharacteristic; we don’t have enough information to know whether it was indeed a rational decision for T, based on T’s experience.

    Further, I don’t think anyone is going to argue that P->Q iff ((R->G) & (G=“good”)) isn’t determined by G=“good”, rather than “okay”, or “largely unsuccessful genie”. Can we assume that only a specific part of T’s decision is relevant to the question of whether or not T is irrational? Who among us is qualified to demarcate such things?

    Slicing up rationality in this way is precisely what T. Kuhn warned against 50 years ago, enraging so many in the philosophy of science. Feyerabend’s work specifically deals with these issues of “evidence”, and the “verification” which allows it to be denoted as such. Evidence and its verification fall prey to a number of weaknesses.

    Does observation trump internal consistency? What about theoretical elegance? Simplicity? And what should we do with recalcitrant data? Higher mathematics certainly isn’t observed; is it an accident that engineers always build significant safety margins into just about everything?

    The most well-known case may be Galileo. Would you say he was being irrational when he insisted he was correct about heliocentrism, in spite of his calculations always being off and very little observational data to support his belief? Galileo had no facts on his side. He was a bad scientist. But he was also right. So if Galileo was irrational, then rationality is irrevocably separated from correctness, in the sense of truth. Whether or not rightness trumps method is essentially what this is all about.

    A wise person used to tell me two things about facts: “truth often depends on who you ask”, and my personal favorite, “when the only tool you have is a hammer, everything begins to look like a nail.”
