May 6th, 2009

Trusting Experts

I imagine the following point is well known, but it might be news enough to some people to be worth posting here. The following principle is inconsistent.

Trust Experts. If you know there is someone who is (a) perfectly rational and (b) strictly better informed than you, i.e. they know everything you know and know more as well, whose credence in p is x, then your credence in p should be x.

The reason this is inconsistent is that there can be multiple experts. Here’s one way to generate a problem for Trust Experts.

There are two coins, 1 and 2, to be flipped. The coin flipping procedure is known to be fair, so it is known that for each coin, the chance of it coming up heads is 1/2. And the coins are independent. Let H1 be that the first coin lands heads, and H2 be that the second coin lands heads.

The coins are now flipped, but you can’t see how they are flipped. There are sixteen people, called witnesses, in an adjacent room, plus an experimenter, who knows how the coins land. Using a randomising device, the experimenter assigns each of the sixteen different propositions that are truth-functions of H1 and H2 to a different witness, and tells them what their assigned proposition is, and what its truth value is. The witnesses know that the experimenter is assigning propositions at random, and that the experimenter always tells the truth about the truth value of propositions. So the witnesses simply conditionalise on the truth of the information that they receive.

Consider first the witnesses who are told the truth values of (H1 v H2) and (H1 v ~H2). At least one of these will be told that their assigned proposition is true. Any such witness, call them W1, will have credence 2/3 in H1.

Consider next the witnesses who are told the truth values of (~H1 v H2) and (~H1 v ~H2). Again, at least one of these will be told that their assigned proposition is true. Any such witness, call them W2, will have credence 1/3 in H1.

So if you were following Trust Experts, you’d have to have credence 2/3 in H1, because of the existence of W1, and have credence 1/3 in H1, because of the existence of W2. That’s inconsistent, so Trust Experts is inconsistent.
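The witnesses’ credences can be checked directly by enumerating the four equally likely ways the coins can land. This is a minimal sketch, assuming only a uniform prior over the four outcomes and conditionalisation on the reported truth value (the function names are mine, not part of the example):

```python
from fractions import Fraction

# The four equally likely ways two fair, independent coins can land.
worlds = [(h1, h2) for h1 in (True, False) for h2 in (True, False)]

def credence(hypothesis, evidence):
    """Credence in `hypothesis` after conditionalising on the truth of
    `evidence`, starting from a uniform prior over the four worlds."""
    live = [w for w in worlds if evidence(*w)]
    return Fraction(sum(1 for w in live if hypothesis(*w)), len(live))

h1 = lambda a, b: a  # the proposition H1: the first coin lands heads

# W1 has conditionalised on the truth of (H1 v H2) or of (H1 v ~H2);
# either way, the resulting credence in H1 is 2/3.
print(credence(h1, lambda a, b: a or b))        # 2/3
print(credence(h1, lambda a, b: a or not b))    # 2/3

# W2 has conditionalised on the truth of (~H1 v H2) or of (~H1 v ~H2);
# either way, the resulting credence in H1 is 1/3.
print(credence(h1, lambda a, b: not a or b))      # 1/3
print(credence(h1, lambda a, b: not a or not b))  # 1/3
```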

Posted by Brian Weatherson in Uncategorized



17 Responses to “Trusting Experts”

  1. whitew says:

    Isn’t there a difference between having credence 1/3 in H1, and having credence 1 in “there is a 1/3 probability that H1”?

    The larger point, about duelling experts, seems correct.

    Heath

  2. Zachary Miller says:

    Brian,

    I think there’s a negation missing in the first sentence of the second to last paragraph. (BJW: I’ve corrected this now, thanks!)

    Also, it’s not obvious to me that W1 and W2 are perfectly rational. Given that they’re both aware of the experimental set-up, don’t they both know that some epistemic peer disagrees with them about H1? Maybe perfect rationality requires some sort of convergence to take place?

  3. Mike Titelbaum says:

    Brian,

    Trust Experts may be inconsistent, but I’m not sure it’s a principle anyone really believes. It sounds plausible as a kind of interpersonal version of the Reflection Principle. But the Reflection Principle is about conditional credences, and the corresponding interpersonal principle should be about conditional credences as well. People often report the interpersonal principle as something like Trust Experts, but that’s usually because the details don’t matter for the case they’re working with. (This parallels the way people often report the Principal Principle as “If you know that the chance of an outcome is…” when the Principal Principle is really a principle about conditional credences.)

    Here’s why this makes a difference in your case: In the case you’re describing, the fact that W1 has credence 2/3 in H1 is part of my evidence, but is not my total evidence. (Nor is the fact that W2 has credence 1/3 in H1.) So if we look at the conditional interpersonal principle, no inconsistency is generated. The inconsistency is generated for Trust Experts because Trust Experts says only “If you know that…” instead of something like “If you know that… and that’s your only relevant information” or “If you know that… and don’t also know that….”

    Mike

  4. Michael Kremer says:

    I think this is possibly a less technical version of Mike Titelbaum’s point.

    The blog post is titled “Trusting Experts” and it discusses a principle labeled “Trust Experts”. This might lead one to believe that the post has something to do with the topic of, well, trusting experts.

    But is the principle called here “Trust Experts” anything like a plausible version of an informal principle such as “(ceteris paribus) you should trust the experts”? Can we equate trusting someone with adjusting your degrees of credence to match theirs? And what does the principle have to do with expertise? Is someone whom I know to know more than me thereby an “expert”? I don’t know much — perhaps they don’t know much either, and why should I then go along with them on something that they don’t know, but merely have a (< 1) degree of belief in? Surely I can hold “you should trust the experts” without holding anything like “Trust Experts.”

    The point here isn’t that Brian’s argument is wrong but that the labeling of a principle in this way can be highly misleading as to the philosophical consequences of the sort of argument made in this post.

  5. Brian Weatherson says:

    Zach,

    I don’t think W1 and W2 are doing anything irrational. Remember that they have no idea that they are extreme cases. We could work out what they should think the average credence of the other 15 witnesses is – I suspect it will be their actual credence.

    Michael,

    I’m certainly using ‘expert’ in something other than its traditional English sense here. But it is the sense that it’s frequently used in (some parts of) the formal epistemology literature. It’s a good point though that it’s misleading labelling, and I shouldn’t have so blithely gone along with it.

    Mike,

    I agree that the principles people have usually endorsed are in the form of conditional probabilities not updating principles. But I think the kind of example I’m discussing here (and the related examples in section 10.6 of Knowledge and Its Limits – which I should have acknowledged in the post) raise challenges even for some of those principles about conditional probabilities. For instance, we can’t endorse the following principle.

    Let BIM(e) mean that e is better informed than me, i.e. e has all the knowledge I do, and some more, and e advanced from my information state to theirs by conditionalising on their new evidence. And let Cr(A, e) = x mean that e’s credence in A is x. Then

    Pr(A | Ee: (BIM(e) & Cr(A, e) = x)) = x

    That’s inconsistent too, and I don’t think it is universally acknowledged that it is inconsistent.

  6. Michael Kremer says:

    Brian,

    Sorry about any tone of snarkiness in my last comment, first off.

    But… to be honest I was probably more concerned about the rendering of “trust” than of “experts.” Perhaps that label has also been used in this way in formal epistemology. OK. But I think it is a matter for serious philosophical consideration whether words in ordinary usage can be picked up and attached to technical formulations without consequence.

  7. Mike Titelbaum says:

    Brian,

    How exactly does your example (or one like it) show that the conditional credences principle you’ve offered above is inconsistent? Could you spell it out for me? (And what’s the “Ee:” doing there?)

    Thanks,

    Mike

  8. Brian Weatherson says:

    Michael,

    I don’t think it was snarky – I actually think it’s an important point. It’s very bad philosophical practice to use terms with a meaning close to, but not identical with, their ordinary usage. The probability that one will slip between the technical and ordinary meaning is too high. There are probably deeper problems too, but the risk of simple slips alone is reason enough to avoid similar-but-not-identical meanings.

    And I think I did do that both with “Experts” and with “Trust”. Really it would have been better to use “Defer”, or, to use a term which already has some technical connotations, “Mirror”. So maybe the principle I was attacking should have been called “Mirror those who know more”, not “Trust Experts”.

  9. Brian Weatherson says:

    Mike,

    The Ee was meant to be an existential quantifier (that’s the ‘E’) with variable ‘e’.

    The thought was the agent in question knows, i.e. has credence 1 in, each of these claims

    Ee(BIM(e) & Cr(A, e) = 2/3).
    Ee(BIM(e) & Cr(A, e) = 1/3).

    So her (subjective) probability in A equals her probability in A given Ee(BIM(e) & Cr(A, e) = 2/3), and also equals her probability in A given Ee(BIM(e) & Cr(A, e) = 1/3). In symbols

    Pr(A) = Pr(A | Ee(BIM(e) & Cr(A, e) = 2/3)) = Pr(A | Ee(BIM(e) & Cr(A, e) = 1/3)).

    But the principle says that the second of those terms should be 2/3, and the third should be 1/3, contradicting the fact that they aren’t equal.
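This derivation can be checked by brute force. In the coin example, every one of the four possible worlds contains some witness whose credence in H1 is 2/3 and some witness whose credence in H1 is 1/3, so both existential claims are probability-1 events for the agent, and conditioning on either leaves her credence in H1 at 1/2. A sketch of that check, assuming the uniform prior and conditionalisation from the original setup (with A = H1):

```python
from fractions import Fraction
from itertools import product

# The four equally likely ways the two coins can land: (H1, H2).
worlds = list(product((True, False), repeat=2))

# The sixteen truth-functions of H1 and H2, one per witness.
truth_functions = [lambda a, b, tv=tv: tv[(a, b)]
                   for tv in (dict(zip(worlds, vals))
                              for vals in product((True, False), repeat=4))]

def witness_credence_in_h1(f, world):
    """Credence in H1 of the witness assigned f, after being told
    f's actual truth value in `world`."""
    value = f(*world)
    live = [w for w in worlds if f(*w) == value]
    return Fraction(sum(1 for (a, _) in live if a), len(live))

# In every world there is a witness with credence 2/3 in H1 and a
# witness with credence 1/3 in H1, so each existential claim has
# probability 1 for the agent ...
for world in worlds:
    creds = {witness_credence_in_h1(f, world) for f in truth_functions}
    assert Fraction(2, 3) in creds and Fraction(1, 3) in creds

# ... and conditioning on a probability-1 event leaves the agent's
# credence in H1 at its prior value.
print(Fraction(sum(1 for (a, _) in worlds if a), len(worlds)))  # 1/2
```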

  10. Brian Weatherson says:

    Ugh – for some reason the blog is making the string formed by concatenating ‘(’, ‘e’ and ‘)’ disappear. Everywhere above where it prints ‘BIM’, it should print ‘BIM’ followed by that concatenation. Bad blog…

  11. bayesianfool says:

    I think the principle should have the de re reading. Not “If you know there is someone who is ...”, but “If there is someone known by you to be (a) ..., (b) ..., etc.” Read this way, the principle does not apply in your example, because you don’t know W1 to have the credence 2/3. You don’t know who W1 is. “W1” is just a description for whoever has that credence.

    If you read the principle like that, there can’t be any expert conflict. If A and B are two known experts whose credences I know, they must know each other’s credences, being experts and knowing everything I know. Furthermore, they know they know each other’s credences, because I know them to be experts, etc. By Aumann’s agreement theorem they have the same credence, and so must I.

  12. Mike Titelbaum says:

    Brian,

    Bayesianfool has an interesting point about the de re issue, but I think that even if we set that aside and go with a principle with an existential quantifier like the one you offered there should be no inconsistency.

    When you write Pr(A | Ee: (BIM(e) & Cr(A, e) = x)) = x, there’s a question about what the “Pr” represents. It could represent values of an actual agent’s subjective credence function, or it could represent values of some sort of “ideal prior” (what Lewis would’ve called a “reasonable initial credence function”). If it’s the former, then the conditional credence expert principle has to have an admissibility clause saying that it doesn’t apply to agents whose background evidence includes inadmissible material (which in this case would include other material about the opinions of experts). If it’s the latter, then no admissibility clause is required but no inconsistency is generated in your case, because I would set my credence by conditioning the ideal prior on my total evidence, which includes what I know about both W1 and W2.

    Either way, the key point is that when you spell out one of these rough-and-ready expert principles precisely, there always needs to be something in there that has the effect of saying “If you know that… and that’s your only relevant information…”.

    Mike

  13. Brian Weatherson says:

    Bayesian Fool,

    I don’t see what could possibly motivate a de re version of the principle that wouldn’t also motivate a de dicto version. Even if it were possible to show that such a principle is consistent (and I’m far from sure it is), I don’t see why it would be philosophically defensible.

    Mike,

    I meant Pr to be credences of a rational agent. What I don’t get is why the agent in the situation I described should have any evidence that’s inadmissible. I would bet that any account of admissible/inadmissible evidence that said this agent had inadmissible evidence would also imply that pretty much every application of the principle one would want in philosophy also violated the admissibility clause.

  14. Mike Titelbaum says:

    Brian,

    Let’s say the principle is: Pr(A | Ee: (BIM(e) & Cr(A, e) = x)) = x for any agent who doesn’t have inadmissible evidence other than “Ee: (BIM(e) & Cr(A, e) = x)”, where inadmissible evidence is evidence about the existence of experts who set particular credences in A. This inadmissibility clause is surely narrower than it ought to be, but it’s broad enough to suit my purposes and certainly isn’t broad enough to quash most applications of the principle. (Most of the time when we know about the existence of experts with credences on a particular proposition we don’t know about other such experts.)

    We can instantiate this principle as:

    Pr(A | Ee: (BIM(e) & Cr(A, e) = 1/3)) = 1/3

    The trouble with this instantiation is that it doesn’t apply to me, since I have evidence about another expert besides the one who sets Cr(A, e) = 1/3. That is, I know about W1 as well as W2.

    We can also instantiate it as:

    Pr(A | Ee: (BIM(e) & Cr(A, e) = 2/3)) = 2/3

    but now the trouble is that I know about W2 as well as W1.

    Even if we ignore all the other experts and focus on just my knowledge about W1 and W2, the only instantiation that will apply to me is:

    Pr(A | [Ee: (BIM(e) & Cr(A, e) = 1/3)] & [Ee: (BIM(e) & Cr(A, e) = 2/3)]) = ???

    but then the principle doesn’t dictate what goes on the right-hand side of the equation. So no inconsistency is generated.

    By the way, I thought of a way of thinking about all this in terms of the Principal Principle: Suppose we had a version of the Principal Principle that said if you know there exists a time at which the chance of A is x, you ought to set credence x in A. We could show this principle to be inconsistent via a case in which I know there exists a time when the chance that I’ll get to the center of the labyrinth before noon is 1/3, and another time when that chance is 1/2. So we need to fix up that version of the Principal Principle to avoid those cases, without losing the useful feature of being able to make use of information about chances even when I don’t know precisely when those chances obtain. My guess is whatever fix will make a viable version of the Principal Principle here will also apply to your expert principle.

    Mike

  15. bayesianfool says:

    Brian Weatherson said:
    “Bayesian Fool,
    I don’t see what could possibly motivate a de re version of the principle that wouldn’t also motivate a de dicto version. Even if it were possible to show that such a principle is consistent (and I’m far from sure it is), I don’t see why it would be philosophically defensible.”

    A Dutch book argument exists for the de re version but not for the existentially quantified version. Let John be an expert whose credence in a proposition I know, and whose extra information I know I will be told at some point in the future. If my credences before and after being told aren’t the same, I am opening myself to a Dutch book. And since my credence after being made to know everything and only what John knows must be the same as his, we must have the same credence before.

    Why can’t the same argument be made with something like W1 in place of John? Even if W1 is John (say John is the (H1 v H2) witness), I cannot be made to know everything and only what he knows. If both propositions are true, the bookmaker must choose one of the witnesses as W1 and tell me that choice. That is extra information that John, i.e. W1, does not have. I know John is W1 if I am told that (H1 v H2) is the extra information W1 has. John’s credence in being W1 is 2/3. As we don’t have the same information, we can disagree in our credences in H1. Mine is now Pr(H1 | John is W1) = (½ × ½)/½ = ½, the same as before. If John is told that he is W1, he will lower his credence to ½ too. In Soviet Russia the expert trusts in you.
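The arithmetic in this comment can be verified by enumeration. The sketch below makes two assumptions not fixed by the comment itself: it reads John’s proposition as (H1 v H2), and it models the bookmaker’s choice of W1, when both propositions in the pair are true, as a fair tie-breaking coin:

```python
from fractions import Fraction

# Eight equiprobable atoms: (H1, H2, tie), where `tie` records who the
# bookmaker would pick as W1 if both propositions in the pair are true.
atoms = [(h1, h2, tie) for h1 in (True, False)
                       for h2 in (True, False)
                       for tie in ("john", "other")]

john_prop = lambda a, b: a or b       # John is the (H1 v H2) witness (assumption)
other_prop = lambda a, b: a or not b  # his counterpart holds (H1 v ~H2)

def john_is_w1(a, b, tie):
    if john_prop(a, b) and other_prop(a, b):
        return tie == "john"          # both true: the bookmaker's coin decides
    return john_prop(a, b)            # otherwise W1 is whoever's proposition is true

def prob(event):
    return Fraction(sum(1 for atom in atoms if event(*atom)), len(atoms))

def conditional(event, given):
    return prob(lambda a, b, t: event(a, b, t) and given(a, b, t)) / prob(given)

# My credence in H1 after learning that John is W1 stays at 1/2 ...
print(conditional(lambda a, b, t: a, john_is_w1))                # 1/2
# ... while John's credence that he is W1, given his proposition is true, is 2/3.
print(conditional(john_is_w1, lambda a, b, t: john_prop(a, b)))  # 2/3
```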

  16. Brian Weatherson says:

    But thinking of this in terms of betting behaviour just makes it more bizarre.

    Me: What’s your credence in p?
    You: 1/2.
    Me: There’s someone better informed than you whose credence in p is 2/3.
    You: So what?
    Me: I’m about to tell you his name.
    You: So what?
    Me: His name is Inigo Montoya.
    You: Oh, now I’m moving my credence to 2/3.
    Me: Because you trust Inigo Montoya?
    You: No, I’ve barely heard of him. But now I know who you were talking about before.

    I think that’s way more absurd than anything a Dutch Book argument could show.

  17. bayesianfool says:

    As soon as you told me Inigo Montoya’s name two things happened. (1) My credence in Inigo Montoya having credence 2/3 rose to 1 (2) My credence in any other agent except Inigo having that credence fell because you didn’t tell me any of their names. In terms of the effect on my credence in p, those two cancel each other out, so my credence stays at 1/2. My credence doesn’t match Inigo’s because I now have information about credences of other agents that he doesn’t have. He is no longer better informed than me.
