There is such a thing as being too cautious…

In his very interesting paper at the Rutgers Epistemology Conference, Higher-Order Evidence (PDF), David Christensen discusses a lot of cases where, in the process of investigating whether p, we learn something about our ability to detect whether p. In the primary cases Christensen discusses, we first come to believe p, then come to believe that our capacities are impaired in some way. And one of the epistemologically interesting questions is what we should do at this stage. I wanted to consider a slightly different question.

S is investigating a murder. She gets evidence E, and on the basis of that quite reasonably concludes that it is quite likely the butler did it (her credence in that is 2/3), that it is a serious possibility the gardener did it (her credence in that is 1/4), and that there is very little chance that neither did it (her credence in that is 1/12).

S then is told, by a usually reliable source, that she has taken some drug that leads to people systematically underestimating how strongly their evidence supports various propositions. So if someone’s taken this drug, and believes p to degree 2/3, then p is usually something that’s more or less guaranteed to be true by their evidence.

What should S do?

I think that if there is an answer, it is that S should do nothing, and that’s true of any such case where S comes to a rational conclusion, then gets evidence that she has been irrational. She can’t coherently raise her credences in all of these propositions, after all, since they already sum to 1. And if anything were justified in response to learning about the drug, it would be raising her credence in all the propositions. So no reaction to learning about the drug is justified.
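The arithmetic behind the coherence point can be made explicit with a quick check (a sketch using exact fractions; the uniform +1/12 bump is just one illustrative way of raising every credence):

```python
from fractions import Fraction

# S's credences over the three mutually exclusive, exhaustive hypotheses.
credences = {
    "butler": Fraction(2, 3),
    "gardener": Fraction(1, 4),
    "neither": Fraction(1, 12),
}

total = sum(credences.values())
print(total)  # 1 -- the credences already exhaust the probability space

# "Correcting" for across-the-board under-confidence would mean raising
# every credence, but any uniform upward bump pushes the total above 1,
# violating probabilistic coherence.
raised = {h: c + Fraction(1, 12) for h, c in credences.items()}
print(sum(raised.values()))  # 5/4
```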

My impression is that my reaction to these cases is not the most popular one, to say the least. But I wondered what everyone else thought about the case.

8 Replies to “There is such a thing as being too cautious…”

  1. Brian, doesn’t that result perhaps indicate that the hypothetical is itself incoherent? That is, maybe for any agent who is not already somewhat skeptically inclined, there’s no such thing as being insufficiently confident across the board in their beliefs. For precisely the reasons you indicate, if the person has reasonably strong attitudes about a range of evidentially related propositions, raising credences in some will simply have to involve decreasing their credences in others.

    Given your gloss on the drug’s impact (“if someone’s taken this drug, and believes p to degree 2/3, then p is usually something that’s more or less guaranteed to be true by their evidence”), maybe it would need to be a drug that only flattens the peaks of one’s credences; or perhaps one that flattens those peaks by means of raising the troughs. (That would be one way to get some of a person’s higher credences lowered: a kind of paranoia-inducing drug, in which things with really trivial likelihoods get seen as presenting live possibilities, which thereby makes things that had seemed near-certain suddenly less so.)

  2. I have the same impression as Jonathan. If the drug really makes people systematically give lower credences than they ought to, then people under its influence will have credences that don’t add up to 1. Since, by hypothesis, the person in this scenario had credences that added up to 1, it seems that she couldn’t be under the influence of this drug unless there was something else causing her to systematically increase her credences as well. And in that case, it’s not obvious which of her credences she should lower and which she should raise, so Brian’s view seems right.

    The alternative is what Jonathan said, that the drug makes people assign 2/3 as their credence only in cases where much higher values are rationally appropriate – but also makes people assign 1/3 as their credence only in cases where much lower values are rationally appropriate. If this is the description of the drug, then the intuition that Brian didn’t share seems right.

    Underspecified thought experiments are annoying!

  3. Building on Kenny’s comment, I wonder if this is a useful way to think about it: in general, the rational thing to do when one learns that one is under the influence of some credence-altering drug, is to characterize the function that the drug applies to one’s credences, and then try to apply the inverse of that function. If the function isn’t one-to-one, then it might be that one can only specify a range of credences, but that’s still better than nothing. But if, for one’s particular set of credences, the inverse function comes up with a null result — if there just is no initial set of credences such that the drug could have gotten from them to your current set — then the rational thing to conclude is that you just are not in fact under the influence of a drug with that credence-shifting function. No?
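To see how this inversion procedure could run, here is a minimal sketch. The halving function `drug_effect` is a made-up example of a credence-shifting drug, not anything from the original case; the point is just how a null result from the inversion falls out:

```python
from typing import Optional

def drug_effect(x: float) -> float:
    """Hypothetical distortion the drug applies to a rational credence
    (an invented example for illustration: it halves every credence)."""
    return x / 2

def undo_drug(observed: float) -> Optional[float]:
    """Apply the inverse of drug_effect; return None if no legitimate
    credence (a value in [0, 1]) could have produced `observed`."""
    candidate = observed * 2  # inverse of halving
    if 0 <= candidate <= 1:
        return candidate
    return None  # null result: no initial credence maps to this value

# S's credence of 2/3 has no preimage under this drug's function, so by
# the commenter's reasoning she should conclude she isn't drugged.
print(undo_drug(2 / 3))  # None
print(undo_drug(0.25))   # 0.5
```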

  4. I think you guys are assuming that S can know things that should be at issue. S could, in principle, get evidence against probabilism. It’s not completely obvious that one should retain a belief in probabilism come what may. I think that if S has the belief that her credences are probabilities, and evidence that she’s generally too cautious, she has evidence against probabilism.

    This is hardly over the top. In Christensen’s examples, people get evidence that their actually correct logical and mathematical reasoning is mistaken. I think that’s possible, and I think the same is true for my case.

    Put another way, I don’t agree with Jonathan’s last claim that S shouldn’t think the function is one-one. That’s only true if she assumes probabilism, which is part of what’s at issue.

  5. I didn’t say that S shouldn’t think the function is one-one — if it is, that’s great, and if it isn’t, you can still usefully constrain what your credences should be. The problem is if you find yourself (as the hypothetical seems to suggest) with a set of credences that do not fall within the image of the function at all.

    Was the original post meant as a kind of reductio against probabilism? I didn’t read it that way, but this is all a bit outside my bailiwick.

  6. Sorry, but if S’s being told by a “usually reliable source” that she has taken some strange drug ordinarily makes it seem to her more or less guaranteed to be true that she has done so, are we further supposed to imagine that her credence in the drug proposition is itself going to be 2/3 in this scenario? That adds to the fun.

  7. Jonathan,

    I wasn’t thinking it was an argument against probabilism. It was presupposing that a rational agent can get compelling evidence against probabilism. Maybe a perfectly rational agent can’t get that, but I think some rational agents can.

    (Compare: A rational logic undergrad taught by Tim Williamson will get very good reasons to believe excluded middle, while a rational logic undergrad taught by Crispin Wright will get very good reasons to disbelieve it. Maybe a perfectly rational logic undergrad will only accept whichever of them is speaking the truth, but a perfectly rational undergrad really is a mythical beast. If S is rational but not perfectly rational, she might be diverted from the probabilist path by certain evidence.)

    So I think S’s actual credences are in the image of the function. The input to the function has to be non-probabilistic. But unless S has undefeated reason to accept probabilism, that’s not really relevant to her predicament. And it’s far from obvious that she has such an undefeated reason.

  8. Jennifer,

    To avoid regress, I was thinking the way the drug worked was that it affected most people the following way. If you took the drug, but didn’t take into account the fact that you’d taken the drug, you would be under-confident in p, for more or less arbitrary p. But once you’d taken into account the effect of the drug, you wouldn’t be systematically under-confident.
