What is the Equal Weight View of Disagreement?

Here are three quotes from Adam Elga’s paper Reflection and Disagreement, which I think are broadly indicative of how Adam intends to understand the Equal Weight View of disagreement.

When you count an advisor as an epistemic peer, you should give her conclusions the same weight as your own.

[T]he equal-weight view entails that one should weigh equally the opinions of those one counts as peers, even if there are many such people.

It [i.e., the Equal Weight View] says that one should defer to an advisor in proportion to one’s prior conditional probability that the advisor would be correct.

Let’s focus on the last of these, though I think you can make the same point about all of the quotes. Consider the following situation.

Prior to thinking about a question, S thinks it is just as likely that she and T, her peer, will come to the right answer. S gets evidence E, and considers whether p. She concludes that p is indeed true. Her friend T reaches the same conclusion, on the same evidence. This is a horrible mistake on both their parts. The evidence in fact strongly supports ¬p, and p is indeed false. Given the Equal Weight View, what should S do?

A literal reading of the last quote says that she should believe p. After all, there are two people, S and T, and her prior judgment was that each of them was equally likely to be right. So she should ‘defer’ to the average position between the two of them. But since they agree, that means she should do what they both say, i.e. believe p.

But this seems crazy. It was, by hypothesis, irrational for S to believe p on the basis of E in the first place. A literal-minded reading of the Equal Weight View suggests that she can ‘launder’ her irrational beliefs, and have them come out as something she should believe, by simply considering herself an advisor.

Let’s note an even stranger consequence of this way of taking the Equal Weight View. Assume S finds out that T did not in fact make this judgment. That’s because T simply hasn’t considered the question of whether p is true. The only one of her ‘peers’ who has considered that question, on the basis of E, is S herself. Again, a literal-minded reading of the Equal Weight View suggests that she now should believe what she actually believes. But that’s wrong; her belief is both false and irrational, and she shouldn’t hold it.

I actually don’t think this is a deep problem for the Equal Weight View. As my repeated references to ‘a literal-minded reading’ of the view have suggested, it seems that the objection here is based on a misinterpretation of what was intended. But I think it’s interesting to note for two reasons. One is that the misinterpretation isn’t so bizarre that it shouldn’t be expressly addressed by proponents of the Equal Weight View. The other is that it isn’t obvious what the right interpretation is. I can think of two very different ways out of the problem here, the second of which can be developed in two distinct ways.

One way out, the one I suggest for proponents of the Equal Weight View in Do Judgments Screen Evidence, is to restrict the principle to agents who are making rational decisions. The Equal Weight View then doesn’t have anything to say about agents who start making an irrational decision themselves.

The other way out is to stress an analogy with other modals in consequents of conditionals. So Humeans sometimes say things like “If you desire an end, you should desire the means to it.” That sounds false in some cases. If I desire to rob a bank, I shouldn’t desire the means to rob a bank – I should change my desires. But there presumably is a true reading of the means-end conditional.

One way to make that conditional true is to take the ‘should’ to have wide scope, and read the conditional as “You should make this conditional true: if you desire the end, you desire the means.” Perhaps the Equal Weight View is best framed the following way. You should make this conditional true: “If the average of your peers’ judgment is J, your judgment is J.” If you don’t have any peers, this conditional is trivial, so the Equal Weight View doesn’t rule anything out, or ratify any choice.

Another way to make the means-end conditional true is to take the modal in the consequent to be somehow or other restricted by the antecedent. (Similar moves are suggested by Thony Gillies in papers like these two.) I don’t quite know how to fill out the details of this, so I’ll leave it for another day.

So I think there are three things that Equal Weight View theorists could do to avoid the problem I started with. I don’t know which of them is best though.

2 Replies to “What is the Equal Weight View of Disagreement?”

  1. He does seem to mean something like the wide-scope thing.

    I’ve heard Adam make the following speech. Suppose you, for no good reason, became certain of some obviously false chance theory. No one should say, “Well, the Principal Principle is obviously false. If it were true, it would follow that in the described case you ought to have credences matching the chances of the stupid theory. But you shouldn’t. So the Principal Principle is false.”

    The thought is that however PP avoids that problem, so does the Equal Weight View. Presumably the wide-scope move is the right one.

  2. I discuss both of these potential problems (the two person case and the single person case) for The Equal Weight View in a paper that I wrote in 2007, “Peer Disagreement and Higher Order Evidence” the official version of which is here:


    (For the first problem, see Section 3.2, “Implausibly easy bootstrapping”. As it says there, the problem is due to Aaron Bronfman, then a graduate student at Michigan, now an assistant professor at Nebraska-Lincoln. The second problem is discussed in Section 3.3, “Even easier, and more implausible bootstrapping: single person cases”.)
    In my experience, the most common response to these problem(s) on the part of those sympathetic to the EWV is to say, as Brian suggests on their behalf, that the objections depend on an interpretation that isn’t actually what they have in mind. I’ve heard versions of all three alternative interpretations that Brian mentions, and a couple of others in addition. I think that there are problems with each. For example, consider the response that Brian offers in his paper:
    > One way out, the one I suggest for proponents of the Equal Weight View in Do Judgments Screen Evidence, is to restrict the principle to agents who are making rational decisions. The Equal Weight View then doesn’t have anything to say about agents who start making an irrational decision themselves.
    This is a possible view, of course, but it strikes me as quite bizarre and unmotivated. Suppose, for example, that I believe in God and subsequently encounter a peer who disbelieves in God with equal confidence (to make things simple, suppose that we’re in a two person universe). Suppose that I also believe in the EWV. I then ask myself “Given that my peer disagrees with me about whether God exists, how should I adjust my view?” One would certainly have thought that the EWV would tell me to become an agnostic, or at least to become significantly less confident of my view (etc.). But on the interpretation on offer, that’s only true if my original belief was rational; if it was irrational, then the view goes silent. That is, so long as I’m an irrational believer in God, it’s perfectly appropriate for me, as far as the EWV goes, to completely dismiss or ignore the opinion of my peer; I’m off the normative hook, as it were. But that strikes me as completely unmotivated and against the spirit of the view: it’s not as though the disagreement of people we take to be equally reliable (etc.) might cease to be evidence for us so long as we’re irrational, but is (extremely strong!) evidence for us in those cases in which we’re rational. If the fact that all of my peers think that p is true is strong evidence against my (initially) rational belief that not-p, then it had better also be true that the fact that all of my peers think that q is true is similarly strong evidence against my initially irrational belief that not-q (and for the same reasons/principles).
    As I said, I think that there are also problems for other alternatives to what Brian calls the ‘literal-minded’ interpretation. But rather than go into that here, I want to suggest that giving up on the literal-minded interpretation is a bigger deal for a proponent of the EWV than Brian’s post might lead one to suspect, even before we get to problems with alternative interpretations. Here’s what I have in mind. The EWV is frequently motivated by appeal to analogies with inanimate measuring devices, and how it would be rational to adjust one’s credences in response to their deliverances. (In fact, in my experience this is probably the most common intuitive motivation.) For example, suppose that you and I both have thermometers that are equally reliable as far as we know, and that we lack any independent access to the current temperature. I look at my thermometer and come to believe that the current temperature is as it says; you do the same with your thermometer. We then compare notes and discover that our thermometers disagree. Here, it’s natural to think (and I think that this is in fact the correct thing to say) that we should go agnostic, divide our credences more or less evenly, and so on (even though at least one of the two thermometers must be malfunctioning on this particular occasion). Certainly, it would be irrational to favor my thermometer just because it’s mine (etc.). Proponents of The EWV will then try to say that a case of peer disagreement is basically the same thing: you should think of yourself and your peer as thermometers that are generally equally reliable, so you should go agnostic on the disputed proposition, even if (from the God’s eye point of view) you’re the one who has in fact responded to the original evidence correctly, i.e., it’s your peer who is “undetectably” malfunctioning on this particular occasion.
    The point I would like to make is the following: the thermometer analogy [and others like it] only supports Brian’s ‘literal-minded’ interpretation. After all, in the thermometer case, the consequences that Brian rightly calls “crazy” and “even stranger [than crazy]” in the peer disagreement case really are the correct things to say. That is, suppose that you and I are in a completely insulated room, and that our only access to the temperature outside is via two thermometers that we can view through a window, thermometers that we know are usually reliable, and equally so. In fact, on this occasion, the two thermometers are malfunctioning terribly, albeit in exactly the same way: the readings that they both give are way off the actual temperature. Still, in the circumstances, what it’s reasonable for us to believe is that the actual temperature is what the thermometers are indicating. And the same holds in a case in which it’s my own thermometer that is badly malfunctioning. So a proponent of the EWV who actually took the thermometer model seriously should, I think, say that the two badly irrational peers who hold the same crazy view really can bootstrap their way into perfect rationality simply by comparing notes, and (even worse) that it works in the one-person case as well. But, as Brian says, that’s crazy.
    The upshot, I think, is this: to the extent that reflection on equally reliable thermometers (etc.) provides intuitive motivation for some view in this area, that view is the ‘literal-minded’ interpretation of the EWV. But we have compelling reasons to think that the EWV is false when interpreted in this way. To the extent that more plausible versions of the EWV claim support from such examples, that support is spurious and illicit. And for what it’s worth, in my experience people sympathetic to The EWV often want to appeal to thermometers (etc.) for a while, but when pushed, aren’t willing to embrace the consequences that come with really taking the thermometer model seriously.
