March 30th, 2014

Higher-Order Thought Experiments

I’m currently reading about Higher-Order Evidence, starting with David Christensen’s important paper on the subject. The literature includes a lot of cases, of which this one from David is fairly indicative.

I’m a medical resident who diagnoses patients and prescribes appropriate treatment. After diagnosing a particular patient’s condition and prescribing certain medications, I’m informed by a nurse that I’ve been awake for 36 hours. Knowing what I do about people’s propensities to make cognitive errors when sleep-deprived (or perhaps even knowing my own poor diagnostic track-record under such circumstances), I reduce my confidence in my diagnosis and prescription, pending a careful recheck of my thinking.

The higher-order evidence (HOE) here is that the narrator (let’s call him DC, to avoid confusion with the philosopher) knows he has been awake for 36 hours, and that people in that state tend to make mistakes. Here are three interesting features of this case.

1. The natural way to take the HOE into account is to lower one’s confidence in the target proposition.
2. The natural way to take the HOE into account is to take actions that are less decisive.
3. The HOE suggests that the agent is less good at reasoning about the target field than he thought he was.

If one counts the peer disagreement literature as giving us cases of HOE (as David does), then there are a lot of case studies, thought experiments, intuition pumps, and the like.

To the best of my knowledge, all the published cases have these three features. Does anyone know of any exceptions? If so, could you leave a comment, or email me about them? I’d be particularly interested in hearing from people who have presented cases that don’t have these features – I’d like to credit you!

To give you a sense of how we might have examples of HOE without these features, consider these three cases. In all cases, I want to stipulate that the agent initially makes the optimal judgment on her evidence, so the HOE is misleading.

A is a hospital resident, with a patient in extreme pain. She is fairly confident that the patient has disease X, but thinks an alternative diagnosis of Y is also plausible. The treatment for X would relieve the pain quickly, but would be disastrous if the patient actually has Y. Her judgment is that, although this will involve more suffering for the patient, they should run one more test to rule out Y before starting treatment. A is then told that she has been on duty for 14 hours, and that a recent study showed that residents on duty for between 12 and 16 hours are quite systematically too cautious in their diagnoses. What should A believe/do?

B is a member of a group that has to make a decision. The correct decision turns on whether p is true. The other members of the group are sure it is true; B is sure it is not. B believes, on the basis of a long history with the group, that the others are just as good at getting to the truth as she is, and that they have no salient evidence she lacks. The group’s norm is that if all but one member are sure of something and the remaining member is merely uncertain, they will act as if it is true; but if the remaining member is sure it is false, they will keep discussing. B is firmly committed to the norm of telling the group the truth about her beliefs, so if she reacts to the peer disagreement by becoming uncertain about p, she will say so, and the group will act as if p, while if she remains steadfast, the group will continue deliberating. What should B believe?

C has just read a book putting forward a surprising new theory about a much-studied historical event. (This was inspired by a book suggesting JFK was killed by a shot fired by a Secret Service agent, though the rest of the example relies on stipulations that go beyond that case.) The author’s evidence is stronger than C suspected, and she finds it surprisingly compelling. But she also knows the author will have left out facts that undermine her case, and that it would be surprising if no one else had developed this theory earlier. So her overall credence in the author’s theory is about 0.1, though she acknowledges that the case feels more compelling than this. C then gets evidence that she may have been injected with a drug that makes people much more sensitive than usual to the strengths and weaknesses of evidence. (This isn’t true; C wasn’t injected, though she has good grounds to believe she was.) If that’s right, her initial positive reaction to the book, before she qualified it by thinking about all the experts who don’t hold this view, may have been more accurate. What should C believe?

For what it’s worth, I wouldn’t want to rest an argument for my preferred view on HOE on intuitions about these cases. But I would be interested to know of any discussion of them, or anything like them, in the literature.

Posted by Brian Weatherson in Uncategorized



2 Responses to “Higher-Order Thought Experiments”

  1. Jeremy Goodman says:

    I’ve thought a bit about some cases that seem to me to have a similar flavor to the ones in the HOE literature, but in which there is no suggestion “that the agent is less good at reasoning about the target field than he thought he was”. More generally, they are cases that aren’t naturally thought of as involving any misleading evidence at all. Suppose an omniscient oracle, whom I know to speak truly, randomly picks some proposition t that I know and a bunch of unrelated false propositions f1,…,fn that I believe with the same degree of reasonableness with which I believe t. The oracle then informs me that only one of these propositions is true. It is very tempting to think that this defeats my knowledge that t, and that I should now have very low credence in each of t, f1,…,fn of approximately 1/(n+1). (Roger White mentioned in conversation that he’d thought about similar cases in which you get back your score on a multiple-choice test without being told which answers you got wrong.) Maybe this is closer to more standard cases of undercutting defeat than the cases Christensen discusses, but I’m not so sure. Suppose we modify the case so that the oracle chooses n initially known propositions t1,…,tn together with n falsehoods f1,…,fn that you believe just as reasonably. I think the case then starts to look a lot like Christensen’s tip-calculation example, where you and your friend have an even track record in cases of disagreement. Anyway, I think these are interesting cases to think about, especially for thinking about defeat in a knowledge-first setting. (FWIW, I like the really radical view that such cases are impossible, since the oracle’s testimony would prevent your belief that t from having ever amounted to knowledge in the first place.)

  2. Greg Frost-Arnold says:

    I don’t know whether you would count this as a HOE case, but I wonder whether someone who accepts the Pessimistic (Meta-)Induction would count? (Sherrilyn Roush’s “The Rationality of Science in Relation to its History” argues in detail why the conclusion of the pessimistic induction has to be understood as a second-order belief.)

    Typical defenders of the PI, I’m guessing, would accept your #1 (since that’s just being an anti-realist), but may not accept your #2. The reason I think they might not accept your #2 is that many anti-realists think we should still go on predicting and intervening exactly as we would if we believed our current best-supported theories are true (Kyle Stanford says this in the last chapter of his Exceeding Our Grasp, for example).

    I’m honestly not sure what a pessimistic inductor would say about your #3. The pessimistic inductor might reject #3, on the grounds that the pessimistic induction need not impugn our reasoning. (Perhaps scientists reason perfectly, but history shows that scientists often have misleading evidence, for example.)
