I’m currently reading about Higher Order Evidence, starting with David Christensen’s important paper on the subject. The literature includes a lot of cases, of which this one from David is fairly indicative.
I’m a medical resident who diagnoses patients and prescribes appropriate treatment. After diagnosing a particular patient’s condition and prescribing certain medications, I’m informed by a nurse that I’ve been awake for 36 hours. Knowing what I do about people’s propensities to make cognitive errors when sleep-deprived (or perhaps even knowing my own poor diagnostic track-record under such circumstances), I reduce my confidence in my diagnosis and prescription, pending a careful recheck of my thinking.
The higher-order evidence (HOE) here is that the narrator (let’s call him DC, to avoid confusion with the philosopher) knows he has been awake 36 hours, and people in that state tend to make mistakes. Here are three interesting features of this case.
1. The natural way to take the HOE into account is to lower one’s confidence in the target proposition.
2. The natural way to take the HOE into account is to take actions that are less decisive.
3. The HOE suggests that the agent is less good at reasoning about the target field than he thought he was.
If one includes the peer disagreement literature as giving us cases of HOE (as David does), then the literature includes a lot of case studies, thought experiments, intuition pumps and the like.
To the best of my knowledge, all the published cases have these three features. Does anyone know of any exceptions? If so, could you leave a comment, or email me about them? I’d be particularly interested in hearing from people who have presented cases that don’t have these features – I’d like to credit you!
To give you a sense of how we might have examples of HOE without these features, consider these three cases. In all cases, I want to stipulate that the agent initially makes the optimal judgment on her evidence, so the HOE is misleading.
A is a hospital resident, with a patient in extreme pain. She is fairly confident that the patient has disease X, but thinks an alternative diagnosis of Y is also plausible. The treatment for X would relieve the pain quickly, but would be disastrous if the patient actually has Y. Her judgment is that, although this will involve more suffering for the patient, they should run one more test to rule out Y before starting treatment. A is then told that she has been on duty for 14 hours, and a recent study showed that residents on duty for between 12 and 16 hours are quite systematically too cautious in their diagnoses. What should A believe/do?
B is a member of a group that has to make a decision. The correct decision turns on whether p is true. The other members of the group are sure it is true; B is sure it is not true. B believes, on the basis of a long history with the group, that they are just as good at getting to the truth as she is, and they have no salient evidence she lacks. The norms of the group are that if all but one person in the group is sure of something, and the remaining person is uncertain, they will act as if it is true, but if that person is sure it is false, they will keep on discussing things. B is very committed to the norm that she should tell the group the truth about her beliefs. So if she reacts to the peer disagreement by becoming uncertain about p, she will say so, and the group will act as if p; if instead she remains steadfast, the group will continue deliberating. What should B believe?
C has just read a book putting forward a surprising new theory about a much studied historical event. (This was inspired by a book suggesting JFK was killed by a shot fired by a Secret Service agent, though the rest of the example relies on stipulations that go beyond the case.) The author’s evidence is stronger than C suspected, and she finds it surprisingly compelling. But she also knows the author will have left out facts that undermine her case, and that it would be surprising if no one else had developed this theory earlier. So her overall credence in the author’s theory is about 0.1, though she acknowledges that the case feels more compelling than this. C then gets evidence that she may have been given a drug that makes people much more sensitive to the strengths and weaknesses of evidence than usual. (This isn’t true; C was given no such drug, though she has good grounds to believe she was.) If that’s right, her initial positive reaction to the book, before she qualified it by thinking about all the experts who don’t hold this view, may have been more accurate. What should C believe?
For what it’s worth, I wouldn’t want to rest an argument for my preferred view on HOE on intuitions about these cases. But I would be interested in knowing any discussion of them, or anything like them, in the literature.
Posted by Brian Weatherson in Uncategorized