May 26th, 2010

Five More Thoughts on Disagreement

These are very atheoretical thoughts about where the disagreement debate currently stands.

Local vs global evaluation of agents

At the lunch referred to in the earlier post, we were talking about what kinds of people are drawn to the equal weight view of disagreement, as opposed to views that give peer disagreement less weight. One thought was that it was people who are more confident in their own opinions who dislike the equal weight view.

On reflection, I don’t think that’s right. What really motivates me is that I prefer to use very localised judgments about the reliability of a person. I know in my own case that I have any number of intellectual blindspots, some of them extremely narrowly drawn. (I’m pretty good at evaluating baseball players, for instance, unless they happen to play for the Red Sox or Yankees.) When I see someone making an odd judgment on p, I don’t think that they’re in any sense ‘intellectually inferior’; I just think they have odd views about p. And whether they have odd views about p is exactly the kind of question I would answer by looking at their views on p, not at any independent views they may have.

Who is being more dogmatic?

Relatedly, I’ve heard a few people describe the equal weight view as a more conciliatory view, and alternative views as less conciliatory. I think this is a mistake twice over.

For one thing, think about the case where you think E is not strong enough evidence for p, because there is an alternative explanation of E that is just realistic enough to take seriously, but your (apparent) peer is simply dismissive of these alternative explanations. He (and it’s easiest to imagine this is a ‘he’) says that only a crazy sceptic would worry about these alternatives. The equal weight view now says that you should firmly believe p, and agree that worries about the alternative, although coherent, are inappropriate. That doesn’t seem particularly conciliatory to me. (Nor does it seem rational, which might be why we never see much discussion of the equal weight view’s use in dismissing seemingly legitimate doubts.)

For another, think about things from the perspective of the irrational agent. For example, consider a case where a rational agent’s credence in p is 0.8, and an irrational agent’s credence is 0.2, and antecedently they regarded each other as peers. I say that both of them should move to a credence of around 0.8 – or maybe a touch less depending on how strong a defeater the irrational agent’s judgment is. The equal weight view says that the rational agent’s credence should move down to 0.5. That is, if I’m the irrational agent, I can accuse the other person of a rational error unless they come half-way to my view. That’s despite the fact that my view is objectively crazy. A view that says that when you’re wrong, you should concede ground to the other person seems more conciliatory than a view that says that you should demand that everyone meet you halfway, even people with a more accurate take on the situation.
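To make the arithmetic explicit, here is a minimal sketch of the two recommendations (the function names and the size of the discount are illustrative assumptions; the 0.05 discount just stands in for “a touch less”):

```python
# A sketch of the two updates described above, not anyone's official formulation.

def equal_weight(c_self, c_peer):
    # Equal weight view: each party splits the difference.
    return (c_self + c_peer) / 2

def concede_to_rational(c_rational, discount=0.05):
    # My suggestion: both parties move to roughly the rational agent's
    # credence, less a small discount for the defeater provided by the
    # irrational agent's judgment.
    return c_rational - discount

rational, irrational = 0.8, 0.2
print(equal_weight(rational, irrational))   # 0.5 for both parties
print(concede_to_rational(rational))        # ~0.75 for both parties
```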

Me on political philosophy vs me on epistemology

In an old Analysis paper on land disputes and political philosophy, I was rather hostile to a view on land disputes that purported to resolve any conflict in a way that was fair to both parties. Partly my hostility was because I didn’t think the resolution was particularly fair. But in part it was because the appropriateness of the resolution relied on this being a genuine conflict in the first place. It seemed to me then, as it seems to me now, that identifying situations where two parties have an equal claim to something (in that case land, in this case perhaps truth or rationality) is much harder than figuring out what to do in such a case.

Somewhat paradoxically, I have a weak preference for us not having too nice a mechanism for solving disputes where parties have a genuinely equal claim. That’s because if we had such a mechanism, we’d be over-inclined to use it. And that would mean we’d end up treating as equals, or more exactly as equal claimants, parties who really weren’t equal in this respect. I think in practice, the way to resolve most disputes is to figure out who is right, and award the prize to them.

Independence

One of the motivations behind some versions of the equal weight view is that we should only use evidence that is ‘independent’ of the dispute in question to decide whether someone is a peer or not. (Nick Beckstead correctly notes this in the comments on the earlier post.) I think this is all a mistake. And as evidence for that, I present the case of Richard Lindzen.

Lindzen is an atmospheric physicist at MIT, and was involved in writing the 2001 IPCC assessment on climate change. That doesn’t make him the world’s foremost expert on climatology, but it does suggest he’d know more about it than me. Surprisingly, he turns out to be a climate change denier. (I’m not sure whether ‘denier’ or ‘delusionist’ is the correct current term; I have trouble keeping up.) I think that’s crazy, and I think the objective evidence, plus the overwhelming scientific consensus, supports this view.

Now what should an equal weight theorist say about the case? They can’t say that I can use the craziness of Lindzen’s views on climate as reasons to say he’s not a peer (or indeed a superior), because that would be giving up their view.

They could try saying that I could appeal to the views of other experts, but I think that misses the point. After all, the other experts are just more evidence, and Lindzen has that evidence just as much as I do. And he dismisses it. (I think he thinks it’s a giant conspiracy, but I’m not sure.) So even if I’m going to believe in global warming because of my reliance on other experts, I have to say that I’m going to trust my judgment of the testimonial evidence over someone else’s judgment of that very same evidence, even though I thought antecedently he would know better than me what to do here.

We could try saying that his dismissal of all the experts proves he is irrational. After all, he’s not an equal weight theorist! (That won’t bear much weight with me, but it might with the equal weight theorists.) But this is just to concede the point about independence. After all, we are judging his ability to make judgments about p not on independent grounds, but on grounds of how well he does on p. That seems like a violation of independence.

The debate, at this point, seems to resemble the complaint I made in Disagreeing about Disagreement. The equal weight theorist needs to treat the status of their theory of disagreement very differently to other epistemological theories. If Lindzen refuses to infer to the best explanation in this case, say, then we can’t dismiss his views unless we can criticise him on independent grounds. But if he refuses to take his peer’s judgments as strong evidence, we don’t need independent grounds to criticise that. This double standard seems objectionable.

Disagreement about evidence vs disagreement about conclusions

I’ve been trying to think about the cases in which I’m actually disposed to change my views in the face of peer disagreement. I think they are largely cases where there is a legitimate dispute about just what the evidence is. So think of some cases, common in sports, where we are trying to judge a simple factual question on the basis of visual evidence. The most common of these, in my experience, are so-called ‘bang-bang’ plays at first base, where we have to decide whether the ball or the runner reached the base first. Even with the situation right in front of us, this can be surprisingly hard.

Here are two salient facts about that case.

First, it is very hard to say, in a theoretically satisfying way, just what the evidence is in the case. I’m not a phenomenalist about evidence, so I don’t really want to say that the evidence is just how the case seems to us. If it is possible to just see that, let’s say, the ball arrived first, then that the ball arrived first is plausibly part of my evidence. Perhaps I don’t know it is part of my evidence, and perhaps I don’t even believe it is true, but it is evidence for me all the same.

Second, in a case like this, deferral to peers seems like a very natural thing to do. If there are six people watching TV, and my opinion about what happened differs from the other five’s, then I’ll usually conclude I was wrong. Let’s assume, at least for the argument, that this is rational.

Here’s a hypothesis. It’s rational to defer to peers when it is unclear what your evidence is. It is less rational to defer to peers when it is unclear what the right response to the evidence is, at least when the peers have the wrong response. To return to the sports case above, I shouldn’t be so willing to defer to peers in disagreements about who will win the game, when we all have the same evidence about that.

The strongest cases for the equal weight view in the peer disagreement literature are, I think, cases where the evidence is not entirely clear. (At least on an externalist view of evidence.) Perhaps those are the cases where the equal weight view is correct.

Posted by Brian Weatherson


26 Responses to “Five More Thoughts on Disagreement”

  1. Hilary Kornblith says:

    Brian,
    I don’t see why you think the equal weight folks have a problem in explaining what we should think in the Lindzen case. Like you, I’m very far from an expert on climate science. And like you, I believe, and I think I’m quite rational in believing, that Lindzen is hopelessly mistaken on this issue. I’m familiar with a bunch of the evidence on climate change, in the sort of way that someone who reads the New York Times might be familiar with the evidence. So I could explain to someone who is completely uninformed why there’s reason to believe in climate change, but my reasons for having the degree of confidence I do about climate change have nothing to do with that sort of evidence. More than this, I think it would be unreasonable for me, given my level of understanding of the issues, to believe anything about climate change on the basis of my own assessment of that evidence. The evidence is, after all, complex; I don’t have anything like a thorough acquaintance with the evidence that’s available; and I have no training in this area at all. My assessment of the strength of the direct evidence here is, in my opinion, nearly worthless.
So why am I so confident that Lindzen, who knows far more about these issues than I, is hopelessly mistaken? Because I know that he is an outlier in the field. The scientific study of climate has the kind of history that is characteristic of scientific fields in the last couple of hundred years. (Here, obviously, I’m oversimplifying.) Such fields show greater accuracy in predictions over time; they show greater comprehensiveness in their explanations over time; they show technological applications of theory which are successful in their interventions with the world. In short, they have a history of convergence to the (approximate) truth. And while individual scientists certainly make mistakes, and even entire fields of scientists can, at times, make mistakes, one would be extremely unwise (within those areas of science which show the kinds of history which I just described) to bet against a near consensus. So my reason for believing what I do about climate change is not one largely ignorant philosopher’s assessment of the direct evidence (namely, my own); it relies instead on the near consensus within the field. And this is just what the equal weight view says I should do. Roughly: proportion my degree of confidence to the distribution of expert opinion.
    So where’s the problem?

    Hilary

  2. Brian Weatherson says:

    The problem is that everything you know about the experts, Lindzen knows as well. And he thinks it doesn’t support the climate change hypothesis. (Or at least the serious climate change hypothesis; I think he thinks there is change, but it’s largely cyclical variation.) So you’re making a judgment that your evaluation of all the evidence, both direct and testimonial, is better than his.

    Now I think that’s fine. I think we can often know that someone is making a mess of a particular decision problem, and we can know it by looking at how they decide this problem. But I don’t know how the equal weight view theorist can coherently say that.

    I think the position you’re defending here is something like:

    • Equal weight view as applied to direct evidence; plus
    • Right reasons view as applied to testimonial evidence

    The reason I think the second bullet point is important is that if one were an equal weight theorist through and through, one should think “Well I think the mass of expert opinion is a strong reason to believe in climate change, but Lindzen doesn’t, so maybe I should split the difference.” But that’s not the way to a sane viewpoint.

    This combination of positions, first-order equal weight, higher-order right reasons, is I think pretty widely held. But I don’t think it’s really a version of equal weight. And I don’t think it’s ultimately very plausible. If I’m allowed to dismiss Lindzen’s views because he’s irrationally responding to the distribution of expert opinion, I should be allowed to dismiss his views because he’s irrationally responding to the distribution of direct evidence.

  3. nbeckstead says:

    Why is the EWVer supposed to agree that Lindzen is a peer at evaluating the evidential importance of the scientific opinion on climate change? Sure, he might be good at evaluating first-order evidence, but why think he’s good at this further task?

  4. Brian Weatherson says:

    Well obviously we can see now that he isn’t a peer at this. But was there any antecedent reason to think so? I would have thought his being a leading expert in a scientific field was a good prior reason to think him good at evaluating the evidential importance of expert opinion.

    If our prior knowledge of Lindzen was enough to say he wasn’t a peer, then I think the equal weight view will be useless in most empirical investigations.

  5. Dan Greco says:

    A strong version of the equal weight view would say that you can only give somebody else’s opinion less than equal weight when you have antecedent reason to think she’s not your peer—this version of the equal weight view would be threatened by Brian’s response to nbeckstead.

    A weaker version, and the one I think is actually defended by Elga and Christensen, would just say that if you have antecedent reason to think somebody is your peer, then you must give her opinion equal weight.

    This weak version would be silent on what to do in cases where you don’t have much antecedent evidence (or any opinion) one way or the other as to whether somebody is your peer with respect to some type of question, which is what the above case seems to be. (That is, setting aside your disagreement about climate change, you don’t have any evidence, or any opinion, bearing on whether or not Lindzen is a peer with respect to rationally evaluating testimonial evidence).

    My reason for thinking this is Elga’s position comes from the reply starting on p. 25 of “Reflection and Disagreement” (page number from this version: http://philsci-archive.pitt.edu/archive/00002940/01/refdis.pdf). My reason for thinking this is Christensen’s position comes from section 6 of his “Disagreement, Question Begging, and Epistemic Self-Criticism,” entitled “Does Independence Lead to Wholesale Skepticism?”

    Maybe these are more watered down versions of the equal weight view than the one you had in mind, but watered down or not, they don’t seem to have the consequences you’re worried about.

  6. Brian Weatherson says:

    I think the fact that someone has risen to the top of a scientific field, in an era where good science necessarily involves processing information from a wide variety of sources, is a pretty strong antecedent reason to think the person is good at evaluating peer disagreements. So I don’t think this is merely a case where we have no reason to think Lindzen isn’t a peer – we had an antecedent reason to think that he is. (Indeed, an antecedent reason to think he’s an expert.)

  7. Hilary Kornblith says:

    Brian suggests that I’m applying the equal weight view to direct evidence, but the right reasons view to testimonial evidence. In support of this claim he says,

    “The reason I think the second bullet point [right reasons view as applied to testimonial evidence] is important is that if one were an equal weight theorist through and through, one should think “well I think the mass of expert opinion is a strong reason to believe in climate change, but Lindzen doesn’t, so maybe I should split the difference.” But that’s not the way to a sane viewpoint.”

    But equal weight doesn’t require splitting the difference with Lindzen; it requires giving the opinions of each of the experts equal weight. And since Lindzen is part of a very small minority, this will result in very high confidence in global warming, contrary to Lindzen. This doesn’t apply different standards to direct evidence and testimonial evidence, using equal weight for the former and right reasons for the latter. Instead, it applies a single standard—equal weight—to total evidence.

  8. Brian Weatherson says:

    I think what I’m finding hard to see here is how and why we can factor out prior testimonial evidence. Here’s how I see the case.

    At t1, before I’ve heard of Lindzen, I have a bunch of evidence in favour of global warming, and a bunch of evidence that Lindzen is an epistemic peer. Some of the first chunk of evidence is testimonial, some is direct. Lindzen has all that evidence too.

    At t2, I learn that Lindzen doesn’t believe in global warming, on the basis of the same evidence I have at t1.

    What should I do at t2? I’d have thought the EWV would say that I should treat equally my view and Lindzen’s. So I should now view global warming as 50/50. Indeed, I’m pretty sure a literal reading of some of the EWVers says exactly that.

    The alternative seems to be that I do something rather odd. I go back to t1 and factor out my evidence into testimonial and non-testimonial. Then I keep my judgment of the testimonial evidence fixed, and I partially defer to Lindzen on the force of the direct evidence.

    But this move seems like a non-starter to me for all sorts of reasons.

    For one thing, it isn’t obvious I’ll even be able to factor my t1 evidence into testimonial and non-testimonial. It isn’t, for instance, a requirement of rationality that I remember the source of each of my beliefs, so perhaps I’ll have forgotten which of my judgments about global warming come from experts, and which come directly.

    For another, I don’t see why I should defer to Lindzen on the force of direct evidence, but not on the force of testimonial evidence. Perhaps he’s right that the testimonial evidence is much weaker than I thought. If I can engage him directly on that, and decide that I’m right and he’s wrong about the force of the other expert judgments, I should in principle be able to engage him on the direct evidence, and decide that I’m right and he’s wrong, even though it seemed antecedently we’d be equally likely to be right. And that’s just the denial of EWV.

    Put another way, the view Hilary is suggesting (I don’t know if it’s his own view, or he’s just trying to show where I’m going wrong) seems to suggest that I can rebut Lindzen with prior evidence iff that prior evidence is testimonial. That seems implausible to me, except perhaps on views where testimony doesn’t provide evidence, it provides some special non-evidential warrant.

  9. Hilary Kornblith says:

    Brian says that after encountering Lindzen’s views, the equal weight advocate should view global warming as a 50/50 proposition. This would be right if Lindzen and the equal weight character are the only ones whose views are known here. But they aren’t. The reason why I have such confidence in global warming is precisely the fact that Lindzen’s view is such a minority position. So before I encounter Lindzen, I know that there are n experts (for some large n) who believe that the total evidence supports global warming. After encountering Lindzen, I know that there are n experts who believe the total evidence supports global warming and one who does not. I now proportion my confidence in global warming to the distribution of expert opinion. That’s what equal weight requires. It doesn’t require that I give equal weight to, on the one hand, the n experts who believe in global warming, and, on the other, Lindzen, who does not. That wouldn’t accord equal weight to each individual’s opinion.
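    In code, the policy I have in mind looks something like this (a minimal sketch; identifying one’s credence with the proportion of believing experts is a deliberate oversimplification, and the expert counts are invented for illustration):

    ```python
    # A sketch of proportioning confidence to the distribution of expert opinion.
    def credence_from_experts(n_for, n_against):
        return n_for / (n_for + n_against)

    print(credence_from_experts(n_for=1000, n_against=0))  # 1.0 before Lindzen
    print(credence_from_experts(n_for=1000, n_against=1))  # ~0.999 after
    ```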

  10. nbeckstead says:

    Brian, I accept your response to my first question.

    As for the issue of being EWV folks for first-order evidence, right reasons folks for testimonial evidence, I don’t think that’s necessary to avoid splitting the difference with Lindzen. Just as the vast majority of experts think that the first-order evidence supports climate change, the vast majority of experts think that the distribution of expert judgment supports climate change. So, on both fronts, the vast majority of expert judgment supports climate change. So the equal weight view does suggest that I shouldn’t split the difference with Lindzen.

    There does not appear to be any double epistemic standard in that argument, and the argument does not appear to rely on any dubious distinction between first-order and higher-order evidence.

    —Nick Beckstead

  11. Brian Weatherson says:

    I know there are all these other experts – I don’t know how or why that could possibly matter.

    The EWV is usually stated as a response to a very simply stated question. You and a peer have the same evidence about a problem, and you come to different conclusions. What should you do? The Lindzen example fits that description.

    It seems to me that the response Hilary and Nick are running just has to be a qualification of the EWV. There are some cases where you have exactly the same evidence as a peer, and no independent reason, independent of that evidence that is, to think he is wrong, but you rationally think he is wrong. How is that not simply a violation of equal weight?

    If you want to say the equal weight view is something like “When you and a peer have the same non-testimonial evidence, then you should average out your view, his view, and the views of other people with the same non-testimonial evidence”, then I’d have two challenges.

    First, I’d like to see some evidence that anyone in print has written the principle that way. I think the principle is usually stated as answering the kind of question I asked in the second paragraph.

    Second, once we’ve got a restriction to non-testimonial evidence in the definition of the principle, why not have other restrictions, e.g. restrictions to non-deductive, or non-perceptual, evidence? I don’t believe there is any good answer to that, apart perhaps from the Moran-Hinchman line that denies that testimony is a kind of evidence.

  12. Brian Weatherson says:

    Note that we can sharpen up the Lindzen example in another way. Imagine that the only two people in the world who know about his judgment are you and him. So the only two experts in the world who have exactly the same evidence as you are you and him. It would still be crazy to defer even in large part to him. I don’t see how one can say that without seriously qualifying the equal weight view, at least as originally stated. And I suspect that when one makes the qualifications explicit, they won’t be defensible.

  13. nbeckstead says:

    Maybe I’m changing the view. I don’t know. If 100 of us go out to dinner and calculate our shares, and 99 people say $43 while Sam says $45, do I have to split the difference with Sam, on your interpretation of EWV? You might say, “Well, look, you and Sam (i) are peers, (ii) looked at the same evidence, and (iii) disagree. So according to EWV, you have to split the difference with him and be 50/50 about the appropriate share. After all, that view says that if two people are in that condition, you have to split the difference.”

    I think the thing for the EWV guy to say here is, “Oh. Well, I meant to be talking about situations where only 2 people are disagreeing. In cases where n people disagree, I’ll need a more general theory.”

    I’ve been assuming, and I think Hilary has as well, that the appropriate generalization is something like:

    Generalization: If n people have weighed some evidence E on a particular proposition p, and you take them all to be equally good at evaluating this evidence, and your total p-relevant evidence just consists of E and the aforementioned facts, then your credence in p should be determined by a function that gives each person’s judgment equal weight (so that you have to “split the difference” with the group, whatever that means).

    In the big dinner case, this generalization fits nicely with the suggestion that we should be highly confident that each person’s share is $43, despite the fact that a peer disagrees. You can change the case if you like, so that everyone talks about their disagreement, and Sam continues to believe that the appropriate share is $45. Still, the generalized EWV won’t demand that you split the difference with Sam, since when everyone weighed the relevance of the distribution of peer opinion, they agreed with you that $43 is the appropriate share.
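    In code, Generalization handles the dinner case something like this (a minimal sketch; averaging point estimates is one illustrative way of giving each judgment equal weight, not the only one):

    ```python
    # Each of the 100 judgments gets equal weight; there is no pairwise
    # difference-splitting with Sam.
    def equal_weight_aggregate(judgments):
        return sum(judgments) / len(judgments)

    shares = [43.0] * 99 + [45.0]          # 99 peers say $43, Sam says $45
    print(equal_weight_aggregate(shares))  # 43.02, nowhere near halfway to $45
    ```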

    Likewise, in the case you described, loads of people have reviewed (1) some scientific evidence and (2) some evidence about the distribution of expert judgment. And the vast majority think (1) and (2) support climate change. True, some experts, like Lindzen, disagree. But just as the fact of one disagreeing peer doesn’t call for splitting the difference with Sam in the big dinner case, it doesn’t call for splitting the difference with Lindzen in this case. That just isn’t demanded by the generalization.

    So, did EWV people have the generalized view in mind all along? I suspect they did. If they didn’t, then I guess I changed the view, but I don’t think the change should be regarded as a fundamental shift of viewpoint. I don’t think it’s the kind of change that, once accepted, demands you give up anything like EWV altogether.

  14. nbeckstead says:

    So, in your new case, the scientific community has interacted with lots of climate change skeptics and been unswayed, but hasn’t heard about Lindzen in particular. (You didn’t say this, but I think that’s the most natural way to fill in the story.)

    I take it I can be nearly certain that if the scientific community did know about Lindzen, then they’d still think more or less what they think now. Being nearly certain of this is about as good as conditionalizing on their response. If I were to conditionalize on their response, EWV would say I should do some kind of equally weighted aggregation of judgments again. Knowing all this, I should more or less ignore Lindzen now.

    If I find it quite likely that the scientific community would retract their judgments upon learning about Lindzen, then that’s a different story. But if I really ought to think this, it isn’t implausible that I should be worried when I hear what he thinks.

  15. Brian Weatherson says:

    Nick, isn’t that view inconsistent? I mean, when there are n people with the same evidence, there will also be n-1 people with that evidence. And for that matter 2 people. So we’ll have to have a number of different credences, which is inconsistent.

    I’m also not sure how exactly this principle should be applied to the (extremely common) case where I believe something, but can’t remember how much of the support for it was testimonial, or how many experts there were, etc.

    And I do think it’s a version of the “testimonial evidence is magical” theory. The view now is that any evidence whatsoever is subject to defeat by peer disagreement … psssst … except for testimonial evidence. That just seems wildly implausible to me.

  16. Brian Weatherson says:

    Actually, there’s one other problem with this version of EWV, especially as developed in comment #14.

    Let’s say I believe p on the basis of e. I think e is great evidence for p, so I think that all my other rational peers would believe p given evidence e. Now I learn that one actual peer with e believes ¬p. But I think, well, I have millions of peers who would believe p with evidence e, and one who believes ¬p, so I should still be extremely confident that p.

    If I’m allowed to use what I believe about people’s conditional reactions, and those beliefs aren’t affected by the peer disagreement (as Nick is assuming in the Lindzen case), I think this move should always be available, and it would imply that the EWV couldn’t have any practical force.

  17. nbeckstead says:

    Disclaimer: I reject the equal weight view. I just don’t accept the objection that Brian is raising.

    1. When I said “n people”, I meant “exactly n people”. Sorry if that wasn’t clear from the context. That should handle the inconsistency objection, yes?

    2. I don’t see that the view makes a magical testimony non-testimony distinction. (But I’m happy if I answered your Lindzen objection, since this is a separate issue.) In my example, I applied the very same rules to two cases where testimony was at stake, treating them no differently from non-testimony cases. In one of them, I said that because 99% of the people at dinner thought that the whole of the previous evidence, including everyone’s judgments about what the right share was, supported $43, we should go with the vast majority of opinion. I said the same thing about the scientific community evaluating the relevance of the distribution of scientific opinion. I did not treat these situations in any kind of magical or different way from situations involving only “first-order” evidence. If the dinner party got together and concluded that all of the previous evidence, including the fact that 99 thought $43 and 1 thought $45, warranted a 25/75 attitude about the bill, then if I thought they were peers at this, I’d have to more or less agree with them. That’s what Generalization says, anyway.

    3. Since the view I proposed doesn’t rely on a testimony/non-testimony distinction, I don’t see why forgetting how you learned something is a problem.

  18. nbeckstead says:

    As for your revised Lindzen case, I’m just using reflection in a totally normal way. Why shouldn’t EWV folks be able to do that?

    For the case you describe, where you think e supports p and meet someone who disagrees, you can’t just say “well, my peers and other smart folks will agree with me anyway”. The fact that this guy disagrees might be evidence that your peers and other smart folks won’t agree (it is only weak evidence in the climate change case, since you already know a lot about what scientists think). (This kind of thing happens a lot. When I was an undergrad, I thought incompatibilism was obvious. I thought others would think the same. When I met smart folks who disagreed, I had to change my expectations about what smart folks will think about incompatibilism.) So this isn’t a recipe for always sticking to your guns.

    So: I don’t see how my move makes EWV inconsequential. If someone disagrees with me in a way that is surprising, I have to change my credences in a radical way. If this person is a peer, it might also change my views about what other peers are likely to think. What more do you want?

  19. Hilary Kornblith says:

    I agree entirely with what Nick Beckstead says in 13 (subject to what I take to be an obvious clarification in 17). Is this position in the literature? When Christensen discusses the restaurant case, he begins with the two person case, and then moves to the 17 person case. As I remember—I don’t have the text in front of me—he makes the point about the 17 person case that, if it is 16-1 against me, then even if, as a matter of fact, I’m right about the division of the bill, it would be outrageous for me to believe that I’m right and everyone else is wrong (assuming the usual background that we’re each highly reliable and there’s no special information about any of us here). This suggests the principle which Nick articulates. I don’t remember whether Christensen explicitly cites such a principle, but it does seem a natural interpretation of what David has in mind. I defend a principle very much like Nick’s (and like the one I articulate in 1, 7, and 9 above) in a paper forthcoming in the Feldman and Warfield volume.

    As far as what to say about the case where the only two people whose opinions I’m aware of are Lindzen and myself, where Lindzen is an expert and I’m not, I think it would be crazy for me not to defer to Lindzen, for the reasons I mention in 1. Roughly, he’s an expert on these matters and I’m not. Second-guessing of experts is not a practice which has an enviable track-record. Smart money is on the experts. So I’d bet that I had misunderstood something or was simply ignorant of relevant evidence or just failed to appreciate the relevance of something or other. “Equal weight” views do not require giving equal weight to the opinions of any two randomly chosen individuals, regardless of their background reliability. So if the problem here is now supposed to be the two person case, where one of those is Brian or me (assuming that he has much the same training in climate science that I do—namely, none) and the other is Lindzen, then I really don’t see the problem for equal weight views.

  20. Brian Weatherson says:

    This all still seems like the magical power of testimony view to me, because of how it answers these abstract questions.

    Background: My evidence for p is e. Someone I thought was a peer also has evidence e. He concludes ¬p on the basis of this evidence. What should I do?

    Answer, according to this version of EWV: If e includes no testimonial evidence, I should modify my beliefs a lot. If e consists of testimony from a lot of people, I should modify my beliefs at most a little.

    Is this right? If not, what is the answer? If so, why think testimony is so special?

  21. Dan Greco says:

    In response to comment 20:

    Background: My evidence for p is e. A large, responsibly conducted scientific study finds that ¬p. What should I do?

    Answer, according to pretty much any plausible view: If e includes no other large well conducted scientific studies, I should (at least ceteris paribus) modify my beliefs a lot. If e includes lots of other large well conducted scientific studies, I should (again, ceteris paribus) modify my beliefs at most a little.

    It doesn’t seem like a bad result for the equal weight view that it entails that the marginal force of testimonial evidence diminishes as you get more of it. This doesn’t amount to treating testimonial evidence in a magical or special way, since that will be true for other types of evidence as well (e.g., evidence from scientific studies).

  22. easwaran says:

    Dan Greco makes an interesting point, but I think the equal weight view is saying something slightly different. Somehow my opinion as an individual counts equally to the pieces of testimonial evidence (at least, when the testimony is from peers who have the same evidence). But whatever one says about large, responsibly conducted scientific studies, they aren’t the sorts of things that can “have the same evidence as me”, and they don’t have opinions to count equally to mine.

    Surely any reasonable competitor to the equal weight view will also allow for decreasing marginal force for testimonial evidence as you get more of it. But the equal weight view says that there’s something special about testimonial evidence, in that you put it specifically at equal weight with your own opinions (at least, when it comes from a peer with the same evidence). Any other sort of evidence comes in with a different sort of weight.

    I suppose the alternative is that the equal weight view says that testimony isn’t really evidence at all, or at least not in the same way. And I think this really does feel like the intuition that is being pumped in the examples for the equal weight view. And this is what Brian is calling the “magical power of testimony” I think.

  23. Brian Weatherson says:

    As Kenny said, the problem isn’t that the EWV says that the marginal force of testimony is decreasing as the amount of testimony increases. Dan is right to say that’s plausibly true.

    The problem is that the EWV says that the marginal force of testimony is non-decreasing as the amount of non-testimonial evidence increases. That seems implausible to me. It seems especially implausible given the first claim.

  24. nbeckstead says:

    Thought experiment: A certain inanimate object, The Measuring Device, has a certain tendency. It was built specifically so that if E is true, it points to p with a certain probability Pr. Otherwise, it points to ~p or “suspend”. You have good evidence that The Measuring Device is exactly as reliable as you, in the following sense. This is the only evidence-proposition pair for which The Measuring Device was built. Both you and The Measuring Device have the same probability profile, so that the probability that you believe p, given E and p, is the same as the probability that The Measuring Device points to p. Likewise for E, ~p, and pointing to p; E, ~p, and pointing to ~p; and so on. (You could have the measuring device point to number-proposition pairs if you wanted to be more complete.) Now suppose that on a particular occasion, you and the measuring device “disagree”. (If this isn’t what it takes to be a peer, then the measuring device has exactly the probability profile it takes to count as a peer.) What should you do? What should you do according to EWV?
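    Here is a minimal sketch of how a straightforwardly Bayesian agent would treat The Measuring Device (the 0.9 reliability figure and the symmetric error profile are illustrative assumptions, not part of the case as stated):

    ```python
    def posterior_p(prior_p, reliability, points_to_p):
        # P(p | reading), where the device points to p with probability
        # `reliability` when p is true, and points to ~p with that same
        # probability when p is false (given the shared evidence E).
        like_p = reliability if points_to_p else 1 - reliability
        like_not_p = 1 - reliability if points_to_p else reliability
        joint_p = prior_p * like_p
        return joint_p / (joint_p + (1 - prior_p) * like_not_p)

    # You believe p to degree 0.9; a device exactly as reliable as you
    # 'disagrees'. The Bayesian answer is the split-the-difference one.
    print(posterior_p(0.9, reliability=0.9, points_to_p=False))  # ~0.5
    ```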

    So, if EWV folks treated The Measuring Device differently from a peer, I would have to agree that they were relying on a magical testimony/non-testimony distinction. But what if they didn’t treat it differently? What if they said that any object that has the same probability profile as a peer counts as a peer, even if it isn’t an agent?

    My suspicion is that they might face a dilemma here. If they say The Measuring Device is a peer, then by extension, it will turn out that everything is a more or less reliable measuring device and will have to be treated in analogous ways, depending on how reliable it is. After all, EWV types should defer a ton to experts and still defer a little to people below themselves. (You’d have to fill in the gaps, but that’s how I suspect it would work out.) Then, I suspect, the view will not be distinctive. This is my suspicion because of the following kind of case. If two physicists work in a lab and are given some data E from a computer printout, E will be counted as peer-like to some extent. So whatever E actually supports will be given some extra weight, even if a physicist peer disagrees (that’s at least forced if you go for Generalization).

    If they say The Measuring Device isn’t a peer, it will be magic. But it’s hard to say what would happen without filling in the details.

  25. Brian Weatherson says:

    I think once we allow in measuring devices as peers, everything will start to fall apart.

    Let’s say I’m trying to predict an election. I have four models, each of which I think is worthwhile, and my credences are an average over the four. I also have a friend, who I thought was a peer before this conversation.

    I talk to the friend about the election, and he says (a) that his credence is very different to mine, and (b) he doesn’t think much of my models, though the reasons for this aren’t clear. What should I do?

    One option is to treat the friend as a fifth model.

    Another option is to average the friend and my prior credence, i.e., the average of the four models.

    Is either of these defensible in principle? The motivations behind EWV seem to point equally towards each. If I treat the friend as a fifth model, then I’m ignoring the fact that he doesn’t think the models are very good. If I average out the friend and my credences based on all four models, I’m insisting that the models can’t overcome peer disagreement. Both options here seem pretty bad.
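    To see that the two options genuinely come apart, here is a minimal numerical sketch (all of the credences are invented purely for illustration):

    ```python
    models = [0.60, 0.65, 0.55, 0.60]        # credences from my four models
    my_credence = sum(models) / len(models)  # 0.60, my prior view
    friend = 0.20                            # the friend's credence

    # Option 1: treat the friend as a fifth model.
    option1 = sum(models + [friend]) / 5     # 0.52

    # Option 2: split the difference between my prior view and the friend's.
    option2 = (my_credence + friend) / 2     # 0.40

    print(option1, option2)
    ```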

  26. jonathan weinberg says:

    It seems to me that all the stuff about testimony is a red herring here. The important thing is that I have found out somehow or other that an epistemic peer has judged that a different set of credences than mine are licensed by the exact same evidence. Whether I find this out by testimony, or eavesdropping, or inference from observing their behavior, or mind-reading, or whatever, doesn’t seem like it should make any difference. Most commonly, of course, we find this sort of thing out via testimony, but that’s an inessential feature of the peer-disagreement situation. (Not that there mightn’t be interesting issues about testimony in these sorts of situations as well — but I don’t think that the EWV insight should be framed in terms of it.)

    A more relevant distinction here than that between testimonial and non-testimonial evidence is the distinction between some set of evidence and what follows from that set of evidence. These are distinct, more-or-less independent aspects of judgment where things can go wrong, and so in general either or both can play a role in explaining what has gone wrong in a case of disagreement. In an epistemic peer case, though, it is stipulated that the first of those factors is constant, so it falls to the latter factor to do all the explaining. So what explanatory inference is available in such a situation? That you are epistemically superior to the other party cannot be automatically privileged — since, in the absence of any further relevant information, if the cetera really are paria, then you have no reason to take it to be more likely than the opposite hypothesis, or for that matter, than the hypothesis that you are peers, but at least one of you is simply mistaken (and you can’t tell yet which it is). That, I think, is at the heart of the EWV idea.

    But the view need not exclude the possibility that there may at times be other information in play that breaks that symmetry, and lets us legitimately infer to one explanation over the others. Your background theory of the world is going to include some ideas about what sorts of mistakes are more-or-less plausible for a basically competent reasoner to make regarding the latter sort of issue. This can, where appropriate, give you a reason to prefer one party’s judgment over another’s, when they otherwise are epistemic peers; the extreme restaurant check mis-arithmetic case would be like that. (Of course, it might be your own judgment that you decide not to trust, in some cases.)
