Disagreeing about Disagreement

Recently several epistemologists, such as David Christensen, Adam Elga and Richard Feldman, have endorsed a fairly strong view about disagreement. Roughly, the idea is that if you believe p, and someone as smart and as well informed as you believes ~p, then you should replace your belief in p either with a suspension of judgment (in Feldman’s view) or with a probability for p between their probability and your old probability (in Elga’s view).

I’m glossing over a lot of details here because I think there is a way to see that no view anything like this can be accepted. Many other epistemologists (e.g. Tom Kelly and Ralph Wedgwood) do not hold the Christensen-Elga-Feldman view. So by their own lights, Christensen et al. should not believe their own view: according to them, one shouldn’t believe a proposition on which there is disagreement among peers, and their epistemological theory is itself a proposition on which there is disagreement among peers.

I think no one should accept a view that will be unacceptable to them if they come to accept it. So I think no one should accept the Christensen-Elga-Feldman view. Indeed, I think the probabilistic version of it is incoherent because of a variant of the above argument. I’ve written up a short paper saying why.


18 Replies to “Disagreeing about Disagreement”

  1. Won’t Elga be unhappy with the stipulation that he has to regard non-believers in EW as his peers wrt EW (in the discussion of premise 1 on p2)? He might agree that these guys would look just as good as him on paper, perform just as well on a relevantly-styled interview, etc., but I took it that one of his main points was that when you’re making a first-person judgment about whether someone is your peer wrt some hotly contested claim x, these kinds of factors are not the only relevant ones (if they are relevant at all). Rather, we need to take into account whether, given what might be a background of vast disagreement about related matters, one really thinks that, should there be disagreement over x, it’s just as likely that they’ll be right as it is that you will. As Elga and Brian both stress, it’s vital that this is a judgment based on your weighing of the situation prior to the actual disagreement over x, and not simply made on the basis of your disagreement over x itself. But Elga’s point looks like it’s sensitive to that.

    (I should note, I’m not really concerned to defend EW. I just want to get clearer on the dialectical situation before I suspend judgment on the matter.)

  2. Fair enough – I could perhaps have been a bit less flippant in this section.

    But I think as a matter of fact that Elga should regard some non-believers in EW as his peers. Perhaps he shouldn’t regard Tom Kelly that way – there’s a body of evidence that he and Tom disagree about enough fundamental matters in epistemology that maybe Adam should have a prior suspicion that Tom will get the wrong answer here. But I don’t think the same goes for Ralph Wedgwood. I don’t think there’s the same vast body of disagreement here that could be used in a prior weighing of the situation.

    But Aidan is certainly right that Elga is considerably more careful here than I’ve probably made him appear in the paper. I probably should revise this. (On the other hand, I was hoping to keep this short enough for Analysis, and it’s 3950 words as is. Some tight editing may be required!)

  3. I wrote up an argument that this:

    I think no one should accept a view that will be unacceptable to them if they come to accept it

    shouldn’t be our only reason for rejecting a view, but it was too long and off-topic so I published it back home. Here’s a related on-topic question though: What should Adam do if, when he attends only to the arguments concerning disagreement, he’s inclined to give EW a credence of 1, but his peer gives EW a credence of 0? It seems that the answer shouldn’t be 0.5; his only reason for suspending judgment on EW would be EW itself, but if he only has 0.5 credence in EW he shouldn’t take that as a reason to suspend judgment on EW rather than trust his own views concerning the arguments.

    In the above-linked entry I give a calculation that indicates that under these circumstances Adam’s credence in EW should be 0.586, but I confess that I find that very weird.

  4. Isn’t there room here for an argument that we shouldn’t read off anyone’s beliefs from their academic commitments? It seems reasonable to defend things in print even if one isn’t sure whether one believes in them. And it definitely feels hard to characterize my attitude to these things as being the same as my attitude to various ordinary facts (or even theoretical claims like basic ones about gravity and evolution). So perhaps Adam really is suspending belief but supporting the view, or perhaps he could even argue that Tom Kelly and Ralph Wedgwood don’t really disbelieve it!

  5. Does the Elga-Feldman-Christensen view give unfair advantage to people who are stubborn or slow to change their views? E.g. suppose I believe p and you believe ~p. You, seeing the disagreement, revise your opinions so as to have no positive view on whether or not p. Now no one disagrees with me; do I still need to revise my view, even granting EW?

  6. Matt W.,

    You say,

    It seems that the answer shouldn’t be 0.5; his only reason for suspending judgment on EW would be EW itself, but if he only has 0.5 credence in EW he shouldn’t take that as a reason to suspend judgment on EW rather than trust his own views concerning the arguments.

    Suppose you believe EW is true (for simplicity, you give it credence 1). Your peer believes that EW is false (for simplicity, she assigns it credence 0). Since you think EW is true, you should move your credence for EW to .5. But can you thereafter coherently use EW? Suppose you and another peer disagree over another proposition P. Peer believes Pr(P) = .9; you believe Pr(P) = .5. If EW were true (on one version of it) you should move your credence for P to .7. But now you have only a .5 credence that EW is true and that you should move your credence for P to .7. You also have a .5 credence that EW is false and that you should keep your credence for P at .5. Since both assignments are equally credible, you should move your credence for P to .6 (i.e. midway between .7 and .5). That’s what the partial credence in EW would have you do. It might generalize in the obvious way when your credence for EW is greater (or less) than .5. You take the weighted sum of the opposing Peer-credences (i.e. weighted by your credence for EW).
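
    A minimal numerical sketch of that weighted-sum rule (Python; the function name and the specific numbers are purely illustrative, not anything from the papers under discussion):

    ```python
    def revised_credence(c_ew, mine, peer):
        """Weighted-sum rule sketched above (illustrative names).

        c_ew -- your credence that the Equal Weight view (EW) is true
        mine -- your own credence in the disputed proposition P
        peer -- your peer's credence in P
        """
        if_ew_true = (mine + peer) / 2   # EW counsels splitting the difference
        if_ew_false = mine               # not-EW counsels sticking with your own view
        return c_ew * if_ew_true + (1 - c_ew) * if_ew_false

    # The numbers above: you at .5, peer at .9, and a .5 credence in EW itself
    print(revised_credence(0.5, 0.5, 0.9))  # 0.6, midway between .7 and .5
    ```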

  7. You take the weighted sum of the opposing Peer-credences (i.e. weighted by your credence for EW).

    Mike, that’s pretty much what I’m suggesting you should do. In this case, if you were disregarding your peer (as not-EW counsels), you would give EW a credence of 1. If you give equal weight to your own view and your peer’s (as EW counsels), then you would give EW a credence of half what you actually give it. If we weight these two by your credence in not-EW and EW, we get that the credence in EW (call it c) should satisfy c = (1 – c) * 1 + c * (c/2). (Each term is the credence in not-EW/EW times the credence that not-EW/EW counsels that you give to EW.) The only solution less than 1 is 2 – √2 ≈ 0.586.
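
    For what it’s worth, here is a quick numerical check of that claim (a sketch only): rearranging c = (1 – c) * 1 + c * (c/2) gives the quadratic c² – 4c + 2 = 0, whose roots are 2 ± √2; only the smaller root is below 1.

    ```python
    import math

    # Fixed-point condition from the comment: c = (1 - c) * 1 + c * (c / 2),
    # i.e. c**2 - 4*c + 2 = 0, so c = 2 +/- sqrt(2).
    roots = [2 - math.sqrt(2), 2 + math.sqrt(2)]
    print([round(r, 3) for r in roots if r < 1])  # [0.586]
    ```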

    If you give equal weight to your own view and your peer’s (as EW counsels), then you would give EW a credence of half what you actually give it. If we weight these two by your credence in not-EW and EW, we get that the credence in EW (call it c) should satisfy c = (1 – c) * 1 + c * (c/2).

    Matt,

    Suppose I find it equally credible that my credence for EW is correct and Your credence for EW is correct. You place no credence in EW. I place complete credence in EW. From my point of view (though not yours), I should weight the truth of my credence for EW at .5 and I should weight the truth of Your credence for EW at .5. So, shouldn’t it look like .5(1) + .5(0) = .5? That is, shouldn’t I weight these opposing credences for EW on the assumption alone that EW is correct, since that is what I believe prior to our disagreement over EW? From your point of view, there is a .5 credence that EW is correct, and so a .5 credence that you should move to the midpoint between 0 and 1. There is also a .5 credence that you are right and that EW has no credibility (in that case you should not move from 0). It should then look like .5(.5) + .5(0) = .25. No?
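
    Restating those two calculations numerically (a sketch, using nothing beyond the numbers already given):

    ```python
    # From my point of view: I still fully trust EW, so I give equal weight to
    # our pre-disagreement credences for EW (mine 1, yours 0).
    my_new_credence = 0.5 * 1 + 0.5 * 0        # 0.5

    # From your point of view: a .5 credence that EW is right (move to the
    # midpoint, .5) and a .5 credence that it is wrong (stay at 0).
    your_new_credence = 0.5 * 0.5 + 0.5 * 0    # 0.25

    print(my_new_credence, your_new_credence)  # 0.5 0.25
    ```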

  9. Mike, I’ll take first the case of the person who, when he evaluates it for himself, finds the argument for EW entirely convincing. The argument from EW that your credence in EW should be 0.5 seems like only the first step. For your reason for giving EW a credence of 0.5 was based on EW, and you don’t give full credence to EW anymore.

    So now you should think, “Well, there’s a 0.5 chance that EW is right and I should just average my credence and his credence — yielding 0.25 — and there’s a 0.5 chance EW is wrong and I should just trust myself — yielding 1. So my total credence in EW should be 0.625.” Then you think, “Now there’s a 0.625 chance that I should just average my credence and his credence — yielding 0.3125 — and there’s a 0.375 chance that I should just trust myself — yielding 1. So my total credence in EW should be 0.625 * 0.3125 + 0.375 * 1 = 0.570.” Etc. This sequence converges to a fixed point at 0.586.
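
    A small sketch of that iteration (Python, illustrative only): repeatedly update c to c * (c/2) + (1 – c) * 1, starting from the 0.5 reached after the first averaging step.

    ```python
    c = 0.5  # credence in EW after the first averaging step
    for step in range(1, 11):
        # With credence c in EW: EW says average with the peer (giving c/2, since
        # the peer's credence in EW is 0); not-EW says trust your own evaluation (1).
        c = c * (c / 2) + (1 - c) * 1
        print(step, round(c, 4))
    # 1 0.625, 2 0.5703, 3 0.5923, ... converging to 2 - sqrt(2), about 0.5858
    ```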

    As for my point of view, where I’m the person who doesn’t find the arguments for EW convincing at all, there doesn’t seem to be a problem. I have a credence of 0 in EW, so I should do what non-EW tells me to do: evaluate the arguments over EW for myself, ignoring what you think. Again, this yields a credence of 0 in EW. So my credence of 0 in EW is stable.

  10. Hi Matt, a quick question on what you say here,

    So now you should think, “Well, there’s a 0.5 chance that EW is right and I should just average my credence and his credence — yielding 0.25 — and there’s a 0.5 chance EW is wrong and I should just trust myself — yielding 1.

    Am I misreading this? If I think that EW is wrong (as you seem to say in the last sentence above), why would I trust myself and return to my initial credence for EW, which was 1? Why would I put any credence in EW, if I think it’s wrong, other than 0?

  11. Brian, I was just wondering whether the MJ principle is rather strong, and whether it might suffice for sceptical possibilities. It seems to me that I might be justified in believing something, but, because I lack the concepts, I might not be able to believe that I am justified in so believing. Perhaps I’m a savant four-year-old reading Euclid and I follow the proofs. I could argue the proofs with you, but I couldn’t explain that I was justified, and in fact, because of a peculiar intellectual disability which prevents me from having thoughts about thoughts, I cannot acquire the concept of a justified belief.

  12. If you reject the extreme view that, in choosing your beliefs, you should always give all your friends’ argument-based opinions equal weight with yours, and if you also reject the other extreme, that you should always base your beliefs on the arguments you understand, otherwise ignoring the opinions of others, then the interesting question is: how much weight should you give to whose opinion, and when? The answer could be a lot closer to the equal-weight extreme than to the always-ignore-everyone extreme.

  13. Mike, thanks for the questions. I’m not sure the calculation that I’m doing here actually does make sense, but this is what I’m thinking: If you have credence c in EW, then your credence in any proposition should be a weighted sum of what EW tells you your credence should be and what not-EW tells you your credence should be. That is, if using EW methods tells you that your credence in P should be x, and using non-EW methods tells you that your credence in P should be y, then your credence in P should be cx + (1-c)y.

    Then the idea is that this applies to EW itself; we want c to be equal to the weighted sum of what the methods of EW tell you your credence in EW should be and what non-EW methods tell you your credence in EW should be. That yields the equation I gave. (Actually, on second thought, I think it yields a different equation: if EW’s recommendation about EW is the straight average 1/2 of our original credences rather than c/2, then c = c * 1/2 + (1 – c) * 1, which makes c 2/3.)
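
    A small sketch that finds both fixed points numerically (Python; purely illustrative, and it assumes, as in the earlier calculation, that non-EW methods recommend credence 1 in EW):

    ```python
    def fixed_point(ew_recommends, steps=100):
        """Iterate c -> c * ew_recommends(c) + (1 - c) * 1 until it settles.

        ew_recommends(c) is what EW-style averaging says your credence in EW
        should be, given current credence c; non-EW methods recommend 1 throughout.
        """
        c = 1.0
        for _ in range(steps):
            c = c * ew_recommends(c) + (1 - c) * 1
        return c

    print(round(fixed_point(lambda c: c / 2), 3))  # 0.586, i.e. 2 - sqrt(2)
    print(round(fixed_point(lambda c: 1 / 2), 3))  # 0.667, i.e. 2/3
    ```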

    But this depends on the assumption that, on non-EW methods, the credence I get for EW is 1. You ask, fairly enough, “If I think EW is wrong why shouldn’t my credence in EW be 0?” The answer is that my reasons for thinking EW is wrong may not have conformed to the methods that I ought to use if I think EW is wrong. If EW is wrong then I should just evaluate arguments for myself, without regard to what my peers think. But if I do that, then ex hypothesi I get a credence of 1 in EW, since I find the arguments for it entirely convincing.

    Put another way, we could ask “If I think EW is right why shouldn’t my credence in EW be 1?” If that were right then Brian’s argument wouldn’t go through. The problem is that the methods you used to decide that EW was right weren’t the methods that EW itself prescribes.

    (Actually, this assumes that the only alternatives are that we give equal weight to peers and that we give no weight to peers. If we consider a position on which we should give unequal weight to peers, maybe that would change the argument.)

  14. Matt, wouldn’t the right way to calculate this be to work out the fixed point of the iterative process implicit in the passage at the end of Aumann’s ‘Agreeing to Disagree’, where you have two Bayesians reporting to each other their credences in p, updating on the basis of their common knowledge of what each other’s credence is, then reporting again, and so on?

  15. Nicholas, I’m not sure that Aumann’s result applies. Aumann discusses a process where each person has private information, and each reports their credences to the other, until they converge. But in this case we’re supposing that the two epistemic peers have exactly the same information, but draw different conclusions about the proper credence from it.

    If we were thinking in a purely Bayesian framework, we would have to say that the two parties have different priors for the probability of the CEF thesis conditional on whatever evidence they cite, since they each look at that evidence and wind up with different posteriors. So Aumann’s agreement result wouldn’t hold.
