June 22nd, 2006

Don’t Defer!

Philip Pettit has a very good paper in the latest Analysis on the issue of when you should defer to majority wisdom. (Philip Pettit, “When to Defer to Majority Testimony – And When Not,” Analysis 66.3, July 2006, pp. 179–87.) Those of you who have Analysis should go read it, and come back here when you’re done. But I’m a blogger, so I’m going to pick on a relatively minor technical point; see the bottom of the post for more substantive points of agreement with the paper.

Now Philip’s overall position is relatively subtle, but one of the key points he makes is that deferring to the majority view on every question is liable to leave you with inconsistent beliefs, so you shouldn’t do it. This should be a fairly well-known point, but it is worth rehearsing the proof.

Imagine that there are two atomic propositions under discussion, p and q, and three people in your group, A, B and C. (It won’t matter which you are.) A and B believe that p, but C thinks it is false. B and C believe that q, but A thinks it is false. Since A rejects q and C rejects p, each of them (being consistent) believes not-(p&q). So the majority believes that p, the majority believes that q, and the majority believes not-(p&q). If you follow majority ‘wisdom’ on all counts, you’ll end up with inconsistent views.
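
Here’s a minimal sketch in Python that tabulates this profile and checks the three majority verdicts; the dictionary layout and the majority helper are just my illustration:

    # Each agent's verdicts on the atomic propositions p and q.
    agents = {
        "A": {"p": True,  "q": False},
        "B": {"p": True,  "q": True},
        "C": {"p": False, "q": True},
    }

    def majority(votes):
        """True iff a strict majority of the votes are True."""
        return sum(votes) > len(votes) / 2

    # Majority verdicts on p, on q, and on not-(p&q); a consistent agent
    # accepts not-(p&q) exactly when she rejects p or rejects q.
    print(majority([a["p"] for a in agents.values()]))                   # True (2/3)
    print(majority([a["q"] for a in agents.values()]))                   # True (2/3)
    print(majority([not (a["p"] and a["q"]) for a in agents.values()]))  # True (2/3)

All three verdicts come out True, and {p, q, not-(p&q)} is an inconsistent set.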

Surprisingly, Philip suggests that a supermajoritarian position works better. (He doesn’t say this is a good idea to follow in all cases, but he does seem to think it avoids this problem.)

There is another sort of approach that will do better, however. This is not to allow just any majoritarian challenge to reverse a belief but to allow only a certain sort of supermajoritarian challenge to do so. This would amount to a policy of global supermajoritarian revision. In order to see why this approach need not involve the same problems as the majoritarian counterpart, imagine that you are prepared to defer only to a supermajority of 70%, and to every such supermajority. And suppose that, as before, you are confronted by a situation in which a majority of others hold that p, that q, and that not-(p&q), where you hold that not-p, not-q and not-(p&q). Suppose in particular that 70% hold by p and that 70% hold by q, giving you a reason to defer to the group on those issues. Will such supermajoritarian deference raise a problem? No, as it happens, it won’t. You will be led to adopt the majority view that p, and the majority view that q, since each proposition commands the requisite supermajority of 70%. But you will also be allowed to revise your belief that not-(p&q), thereby ensuring that your beliefs are consistent. You will not be forced, inconsistently, to hold by the majority view that not-(p&q), since this will not be supported by a majority of 70%. (184-5)
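
To make the quoted scenario concrete, here is one profile that fits it; the ten-agent distribution is my own illustration, not Pettit’s. Seventy per cent accept p, seventy per cent accept q, but only sixty per cent accept not-(p&q), so a 70% policy never asks you to adopt the full inconsistent triad:

    # Ten consistent agents: 4 accept both p and q, 3 accept only p,
    # 3 accept only q. (An illustrative profile, not Pettit's own.)
    agents = (
        [{"p": True,  "q": True }] * 4 +
        [{"p": True,  "q": False}] * 3 +
        [{"p": False, "q": True }] * 3
    )

    def supermajority(votes, threshold=0.70):
        """True iff at least `threshold` of the votes are True."""
        return sum(votes) / len(votes) >= threshold

    print(supermajority([a["p"] for a in agents]))                   # True  (7/10)
    print(supermajority([a["q"] for a in agents]))                   # True  (7/10)
    print(supermajority([not (a["p"] and a["q"]) for a in agents]))  # False (6/10)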

Philip has an argument for why the supermajoritarian position avoids this particular problem, but it seems to me it runs into a very similar problem. Change the case so that there are four people in the group, A, B, C and D, and three propositions under discussion, p, q and r. Their opinions are:

         p    q    r
    A    No   Yes  Yes
    B    Yes  No   Yes
    C    Yes  Yes  No
    D    Yes  Yes  Yes

Now a 75% supermajority believes each of p, q, r and not-(p&q&r): the first three each command three votes out of four, and A, B and C each reject one conjunct, so each of them believes not-(p&q&r). So the supermajoritarian gets led into inconsistency too, and supermajoritarian deference also seems like a bad idea.
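
Running the same check with Philip’s 70% threshold on this profile (again a sketch; the labels are mine):

    # Four agents: A, B and C each reject exactly one atomic proposition,
    # and D accepts all three.
    agents = {
        "A": {"p": False, "q": True,  "r": True},
        "B": {"p": True,  "q": False, "r": True},
        "C": {"p": True,  "q": True,  "r": False},
        "D": {"p": True,  "q": True,  "r": True},
    }

    def supermajority(votes, threshold=0.70):
        """True iff at least `threshold` of the votes are True."""
        return sum(votes) / len(votes) >= threshold

    for atom in ("p", "q", "r"):
        print(atom, supermajority([a[atom] for a in agents.values()]))  # all True (3/4)

    # not-(p&q&r) is accepted by A, B and C -- also 3/4 = 75%.
    print(supermajority([not (a["p"] and a["q"] and a["r"])
                         for a in agents.values()]))                    # True

All four propositions clear the 70% bar, and together they are inconsistent.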

In the context of this paper, this isn’t a very big deal, I think, since what I’m objecting to could be excised without damaging the main theme. What’s really important is the point Philip makes in leading off, about the key reason you shouldn’t defer on matters of extreme epistemic importance.

It would be objectionably self-abasing to revise your belief on matters like intelligent design or the wrongness of abortion just in virtue of finding that others whom you respect take a different view. Or so most of us think. To migrate towards the views of others, even under the sorts of assumptions given, would seem to be an abdication of epistemic responsibility: a failure to take seriously the evidence as it presents itself to your own mind. (181)

That seems exactly right to me. The view Philip is arguing against here is a kind of epistemic collectivism, the view that we should let our beliefs be guided by the majority opinion amongst suitably informed, suitably intelligent, suitably impartial judges. The view he is arguing for is a personal responsibility view. At the risk of coming off all Ronald Reagan on y’all, at some level people just have to take responsibility for their own beliefs, not regard themselves as merely jury members with a single vote on the most important issues they consider. The collectivist view tends to run into technical problems, even paradoxes, when we try to develop it. But apart from that, it suggests a deeply unappealing picture of our epistemic agency, one that we should firmly reject.

Hopefully I’ll post more on this over the coming weeks, but for now there’s a football team to cheer for…

Posted by Brian Weatherson in Uncategorized


4 Responses to “Don’t Defer!”

  1. Neil says:

Haven’t got the time or the energy (after the match) to read the paper right now. Anyway, I’ll hear it at the AAP. But it seems to me exactly right that we ought to be “self-abasing” in this manner. First, when a majority of better placed observers believe that p, I ought to be epistemically humble enough to accept that p (though I might not be able to bring myself, all at once, to believe that p). The relevant case here is ID. If A, a non-biologist who has not invested a very significant amount of time in evolutionary theory and therefore has not come to appreciate how complexity and order can arise from selection processes, sets herself to evaluate the evidence for ID in the ‘responsible’ way you advocate, I think she will likely come to believe that ID is true. She would do better to defer to the community of better placed observers.

The trickier case is when you have good reason to think yourself as well-informed (and so on) as the majority, but are disposed to a contrary belief. I’m less sure what to say about these cases. Here’s a bizarre argument for the view that you ought to defer (did I mention my lack of sleep?); bizarre because it turns on cases of people deferring and thereby getting it wrong. I’m thinking of Asch’s classic experiments on length judgments. As you recall, the subject is presented with lines and asked to judge their lengths, with confederates preceding them and giving the wrong answers. About a third of subjects conform to the erroneous majority position. Now the bizarre argument: why do we have this tendency to conform even in clear cases? Because it’s adaptive, and it’s adaptive because human perceptual and judgment faculties are so fallible that cross-referencing is needed as a check on them. Were our faculties more reliable, we would not find this tendency adaptive.

A slightly less bizarre (and shorter) argument: We ought to defer when others are better placed than we are. We also ought to defer when we judge that others are as well placed as us, because we have a well-attested tendency to exaggerate our own competence (and therefore, in situations in which we judge ourselves to be as well placed as the majority, we will frequently be worse placed).

  2. Ralph Wedgwood says:

    I haven’t had time to read Philip’s Analysis paper. But presumably, what he had in mind must have been something like this.

    The precise size of the supermajority that one should defer to must depend on the number of groups with different views about the atomic propositions in question: if it’s three (as in Philip’s example), then the supermajority must be greater than 2/3 (e.g. 70% as Philip suggests); if it’s four (as in your example), then it must be greater than 3/4 (e.g. 80% or something like that); and so on. Wouldn’t that get round the problem?
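
    Concretely: in the four-person example above, p, q, r and not-(p&q&r) each command exactly 3/4 support, so any threshold strictly above 3/4 triggers no revision at all. A quick check (the 80% figure is just one illustrative choice):

        # Four-person profile from the post; with a threshold of 0.80 (> 3/4),
        # no proposition clears the bar, so no revision is triggered.
        agents = {
            "A": {"p": False, "q": True,  "r": True},
            "B": {"p": True,  "q": False, "r": True},
            "C": {"p": True,  "q": True,  "r": False},
            "D": {"p": True,  "q": True,  "r": True},
        }

        threshold = 0.80
        checks = {
            "p": [a["p"] for a in agents.values()],
            "q": [a["q"] for a in agents.values()],
            "r": [a["r"] for a in agents.values()],
            "not-(p&q&r)": [not (a["p"] and a["q"] and a["r"])
                            for a in agents.values()],
        }
        for label, votes in checks.items():
            print(label, sum(votes) / len(votes) >= threshold)  # all False

    More generally, if k propositions are jointly inconsistent and every individual takes a consistent stand on each of them, they cannot all enjoy support above (k-1)/k: otherwise the rejections would sum to less than one person’s worth, and some individual would have to accept all k. So a threshold above (k-1)/k blocks this kind of collapse.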

  3. Kenny Easwaran says:

    For the point you were picking on, there’s some discussion of this sort of result in a forthcoming paper by Marc Pauly, Josh Snyder, and Fabrizio Cariani. There’s also discussion of this stuff in some papers I’ve heard them cite (but haven’t yet read myself) by Christian List and Franz Dietrich.

    As for the overall point of the paper, I haven’t read it yet myself, but it sounds like it’s arguing against a position that Adam Elga supported in his talk at FEW this year. So it would be interesting to see whether they address the same points, and how.

  4. David Christensen says:

My main concern about resisting “deference” to the beliefs of others is this: The reason one has for changing one’s beliefs when one finds out that other people disagree is simply that their beliefs are evidence about the subject at hand. The smarter, saner, better-informed, etc. they are, the better the evidence their beliefs provide (ceteris paribus). And (insofar as their beliefs are independent of one another), the more of these people there are, the better the evidence their beliefs provide (again, ceteris paribus). I would argue that taking responsibility for one’s own beliefs in the right way would include taking this sort of evidence into account. There’s nothing self-abasing about this. In fact, it is just a special case of taking seriously the evidence (including that provided by both human and non-human indicators) as it presents itself to your mind.
