Does Judgment Screen Evidence?

Suppose a rational agent S has some evidence E that bears on p, and makes a judgment J about how E bears on p. The agent is aware of this judgment, so she could in principle use its existence in her reasoning. Here’s an informal version of the question I’ll discuss in this post: How many pieces of evidence does the agent have that bear on p? Three options present themselves.

  1. Two – Both J and E.
  2. One – E subsumes whatever evidential force J has.
  3. One – J subsumes whatever evidential force E has.

This post is about option 3. I’ll call this option JSE, short for Judgments Screen Evidence. I’m first going to say what I mean by screening here, and then say why JSE is interesting. Ultimately I want to defend three claims about JSE.

  1. JSE is sufficient to derive a number of claims that are distinctive of internalist epistemology of recent years (meaning approximately 2004 to the present day).
  2. JSE is necessary to motivate at least some of these claims.
  3. JSE is false.

This post will largely be about saying what JSE is, then some arguments for 1 and 2. I’ll leave 3 for a later post!

Screening

The idea of screening I’m using here comes from Reichenbach’s The Direction of Time, and in particular from his work on deriving a principle that lets us infer events have a common cause. The notion was originally introduced in probabilistic terms. We say that C screens off the positive correlation between B and A if the following two conditions are met.

  1. A and B are positively correlated probabilistically, i.e. Pr(A | B) > Pr(A).
  2. Given C, A and B are probabilistically independent, i.e. Pr(A | B ∧ C) = Pr(A | C).
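
To see these two conditions in action, here is a minimal numerical sketch in Python. The joint distribution, the numbers and the little helper functions are all invented purely for illustration; C is set up as a common cause of A and B, which is just the sort of structure Reichenbach had in mind, and any numbers with that structure would do as well.

    # A toy joint distribution in which C is a common cause of A and B.
    # The numbers are invented; they are chosen only so that both conditions hold.
    from itertools import product

    pr_C = 0.5
    pr_B_given_C = {True: 0.8, False: 0.2}  # Pr(B | C) and Pr(B | not-C)
    pr_A_given_C = {True: 0.7, False: 0.3}  # Pr(A | C) and Pr(A | not-C);
                                            # A is independent of B once C is fixed

    joint = {}
    for c, b, a in product([True, False], repeat=3):
        joint[(a, b, c)] = ((pr_C if c else 1 - pr_C)
                            * (pr_B_given_C[c] if b else 1 - pr_B_given_C[c])
                            * (pr_A_given_C[c] if a else 1 - pr_A_given_C[c]))

    def pr(event):
        return sum(p for world, p in joint.items() if event(world))

    def pr_given(event, given):
        return pr(lambda w: event(w) and given(w)) / pr(given)

    A = lambda w: w[0]
    B = lambda w: w[1]
    C = lambda w: w[2]

    # Condition 1: A and B are positively correlated.
    print(pr_given(A, B), pr(A))                                 # ~0.62 > 0.5
    # Condition 2: given C, B makes no further difference to A.
    print(pr_given(A, lambda w: B(w) and C(w)), pr_given(A, C))  # both ~0.7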

I’m interested in an evidential version of screening. If we have a probabilistic analysis of evidential support, the version of screening I’m going to offer here is identical to the Reichenbachian version just provided. But I want to stay neutral on whether we should think of evidence probabilistically. In general I’m somewhat sceptical of probabilistic treatments of evidence for reasons Jim Pryor goes through in his Uncertainty and Undermining (PDF). I mention some of these in my The Bayesian and the Dogmatist (PDF). But I won’t lean on those points in this note.

When I say that C screens off the evidential support that B provides to A, I mean the following. (Both these clauses, as well as the statement that C screens off B from A, are made relative to an evidential background. I’ll leave that as tacit in what follows.)

  1. B is evidence that A.
  2. B ∧ C is no better evidence that A than C is, and ¬B ∧ C is no worse evidence for A than C is.

Here is one stylised example, and one real-world example.

Detective Det is trying to figure out whether suspect Sus committed a certain crime. Let A be that Sus is guilty, B be that Sus’s fingerprints were found at the crime scene, and C be that Sus was at the crime scene when the crime was committed. Then both clauses are satisfied. B is evidence for A; that’s why we dust for fingerprints. But given the further evidence C, then B is neither here nor there with respect to A. We’re only interested in finding fingerprints because they are evidence that Sus was there. If we know Sus was there, then the fingerprint evidence isn’t useful one way or the other. So both clauses of the definition of screening are satisfied.

The real world example is fairly interesting. Imagine that we know Vot is an American voter in last year’s US Presidential election, and we know Vot is either from Alabama or Massachusetts, but don’t know which. Let A be that Vot voted for Barack Obama, let B be that Vot is from Massachusetts, and let C be that Vot is pro-choice. Then, somewhat surprisingly, both conditions are met. Since voters in Massachusetts were much more likely to vote for Obama than voters in Alabama, B is good evidence for A. But, at least according to the polls linked to the state names above, pro-choice voters in the two states voted for Obama at roughly the same rate. (In both cases, a little under two to one.) So C screens off B as evidence for A, and both clauses are satisfied.
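
Here is the same kind of check run on the voter case, again as a rough sketch: the figures below are invented stand-ins for the actual poll numbers, constrained only so that pro-choice voters in both states back Obama at the same rate, a little under two to one.

    # Invented stand-ins for the poll figures; only the structure matters.
    p_state = {"MA": 0.5, "AL": 0.5}            # prior over which state Vot is from
    p_prochoice = {"MA": 0.60, "AL": 0.35}      # Pr(pro-choice | state)
    p_obama_prochoice = 0.64                    # Pr(Obama | pro-choice), same in both states
    p_obama_prolife = {"MA": 0.40, "AL": 0.15}  # Pr(Obama | pro-life, state)

    def p_obama_given_state(s):
        return (p_prochoice[s] * p_obama_prochoice
                + (1 - p_prochoice[s]) * p_obama_prolife[s])

    # B (being from Massachusetts) is evidence for A (voting for Obama) ...
    p_A_given_B = p_obama_given_state("MA")
    p_A = sum(p_state[s] * p_obama_given_state(s) for s in p_state)
    print(p_A_given_B, p_A)                     # ~0.54 > ~0.43

    # ... but C (being pro-choice) screens B off: Pr(A | B and C) = Pr(A | C).
    p_C = sum(p_state[s] * p_prochoice[s] for s in p_state)
    p_A_and_C = sum(p_state[s] * p_prochoice[s] * p_obama_prochoice for s in p_state)
    p_A_given_C = p_A_and_C / p_C
    p_A_given_B_and_C = p_obama_prochoice
    print(p_A_given_B_and_C, p_A_given_C)       # both 0.64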

The Idea Behind JSE

When we think about the relation between J and E, there are three conflicting pressures we immediately face. First it seems J could be evidence for p. To see this, note that if someone else comes to know that S has judged that p, then that could be a good reason for them to believe that p. Or, at the very least, it could be evidence for them to take p to be a little more likely than they previously thought. Second, it seems like ‘double counting’ for S to take both E and J to be evidence. After all, she only formed judgment J because of E. Yet third, it seems wrong for S to simply ignore E, since by stipulation, she has E, and it is in general wrong to ignore evidence that one has.

The simplest argument for JSE is that it lets us accommodate all three of these ideas. S can treat J just like everyone else does, i.e. as some evidence for p without either double counting or ignoring E. She can do that because she can take E to be screened off by J. That’s a rather nice feature of JSE.

To be sure, it is a feature that JSE shares with a view we might call ESJ, or Evidence Screens Judgments. That view says that S shouldn’t take J to be extra evidence for p, for while it is indeed some evidence for p, its evidential force is screened off by E. This view also allows for S to acknowledge that J has the same evidential force for her as it has for others, while also avoiding double counting. So we need some reason to prefer JSE to ESJ.

One reason (and I don’t think anyone would suggest it is the strongest reason) comes from an analogy with the fingerprint example. In that case we look for one kind of evidence, fingerprints, because it is evidence for something that is very good evidence of guilt, namely presence at the crime scene. But the thing we are collecting fingerprint evidence for screens off the fingerprint evidence. Similarly, we might hold that we collect evidence like E because it leads to judgments like J. So if this analogy holds up, the later item, J, should screen off E.

JSE and Disagreement

My main concern here isn’t with any particular argument for JSE, but with the role that JSE might play in defending contemporary epistemological theories. The primary case I’ll be interested in concerns disagreement. Here is Adam Elga’s version of the Equal Weight View of peer disagreement, from his Reflection and Disagreement.

Upon finding out that an advisor disagrees, your probability that you are right should equal your prior conditional probability that you would be right. Prior to what? Prior to your thinking through the disputed issue, and finding out what the advisor thinks of it. Conditional on what? On whatever you have learned about the circumstances of the disagreement.

It is easy to see how JSE could lead to some kind of equal weight view. If your evidence that p is summed up in your judgment that p, and another person who you regard as equally likely to be right has judged that ¬p, then you have exactly the same kind of evidence for p as against it. So you should suspend judgment about whether p is true or not.

But the distinctive role that JSE can play is in the clause about priority. Here is one kind of situation that Elga wants to rule out. S has some evidence E that she takes to be good evidence for p. She thinks T is an epistemic peer. She then learns that T, whose evidence is also E, has concluded ¬p. She decides, simply on that basis, that T must not be an epistemic peer, because T has got this case wrong.

Now at first it might seem that S isn’t doing anything wrong here. If she knows how to apply E properly, and can see that T is misapplying it, then she has good reason to think that T isn’t really an epistemic peer after all. She may have thought previously that T was a peer, indeed she may have had good reason to think that. But she now has excellent evidence, gained from thinking through this very case, to think that T is not a peer, and so not worthy of deference.

Since Elga thinks that there is something wrong with this line of reasoning, there must be some way to block it. I think by far the best option for blocking it comes from ruling that E is no longer available evidence for S once she has formed the judgment J. That is, the best block available seems to me to come from JSE. For once we have JSE in place, we can say very simply what is wrong with S here. She is like the detective who says that we have lots of evidence that Sus is guilty: not only was she at the crime scene, but her fingerprints were there. To make the case more analogous, we might imagine that there are detectives with competing theories about who is guilty in this case. If we don’t know who was at the crime scene, then fingerprint evidence may favour one detective’s theory over the other. If we know that both suspects were at the crime scene, then fingerprint evidence isn’t much help to either.

So I think that if JSE is true, we have an argument for Elga’s strong version of the Equal Weight View, one which holds agents are not allowed to use the dispute at issue as evidence for or against the peerhood of another. And if JSE is not true, then there is a kind of reasoning which undermines Elga’s Equal Weight View, and which seems, to me at least, unimpeachable. So I think Elga’s influential version of the Equal Weight View stands and falls with JSE.

White on Permissiveness

In his 2005 Philosophical Perspectives paper, Epistemic Permissiveness (PDF), Roger White argues that there cannot be a case where it could be epistemically rational, on evidence E, to believe p, and also rational, on the same evidence, to believe ¬p. One of the central arguments in that paper is an analogy between two cases.

Random Belief: S is given a pill which will lead to her forming a belief about p. There is a ½ chance it will lead to the true belief, and a ½ chance it will lead to the false belief. S takes the pill, forms the belief, a belief that p as it turns out, and then, on reflecting on how she formed the belief, maintains that belief.

Competing Rationalities: S is told, before she looks at E, that some rational people form the belief that p on the basis of E, and others form the belief that ¬p on the basis of E. S then looks at E and, on that basis, forms the belief that p.

White claims that S is no better off in the second case than in the first. As he says,

Supposing this is so, is there any advantage, from the point of view of pursuing the truth, in carefully weighing the evidence to draw a conclusion, rather than just taking a belief-inducing pill? Surely I have no better chance of forming a true belief either way.

But it seems to me that there is all the advantage in the world. In the second case, S has evidence that tells on p, and in the first she does not. Indeed, I long found it hard to see how the cases could even be thought analogous. But I now think JSE holds the key to the argument.

Assume that JSE is true. Then after S evaluates E, she forms a judgment J. Now it might be true that E itself is good evidence for p. (The target of White’s critique says that E is also good evidence for ¬p, but that’s not yet relevant.) But given JSE, that fact isn’t relevant to S’s current state. For her evidence is, in its entirety, J. And she knows that, as a rational agent, she could just as easily have formed some other judgment than J. Indeed, she could have formed the opposite judgment. So J is no evidence at all, and she is just like the person who forms a random belief, contradicting the assumption that, in this case, believing p could be rational and believing ¬p could be rational.

Without JSE, I don’t see how White’s analogy holds up. There seems to be a world of difference between forming a belief via a pill, and forming a belief on the basis of the evidence, even if you know that other rational agents take the evidence to support a different conclusion. In the former case, you have violated every epistemic rule we know of. In the latter, you have reasons for your belief, you can defend it against challenges, you know how it fits with other views, you know when and why you would give it up, and so on. The analogy seems worse than useless by any of those measures.

I think this analogy is crucial to White’s paper. Indeed, much of the rest of the paper consists of responses to objections to the argument from analogy made here. So I think if the analogy stands or falls with JSE, then the fortunes of White’s view on permissiveness are tied to those of JSE.

Christensen on Higher-Order Evidence

Finally, I’ll look at some of the arguments David Christensen brings up in his Higher Order Evidence. Christensen imagines a case in which we are asked to do a simple logic puzzle, and are then told that we have been given a drug which decreases logical acumen in the majority of people who take it. He thinks that we have evidence against the conclusions we have drawn.

Let’s consider a particular version of that, modelled on Christensen’s example of Ferdinand the bull. S knows that ∀x(Fx → Gx), and knows that ¬(Fa ∧ Ga). S then infers deductively that ¬Fa. S is then told that she’s been given a drug that dramatically impairs the ability to draw deductive conclusions. Christensen’s view is that this testimony is evidence against ¬Fa, which I assume implies that it is evidence that Fa.
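
For concreteness, the entailment can be spelled out: assume Fa; the universal premise gives Ga; so Fa ∧ Ga, contradicting the second premise; hence ¬Fa. The little Lean snippet below is just one way of displaying that derivation (it is my gloss, not anything in Christensen’s paper; F, G and a are the schematic letters from the example).

    -- From ∀x (Fx → Gx) and ¬(Fa ∧ Ga), it follows that ¬Fa:
    -- assume Fa, get Ga from the universal premise, and contradict ¬(Fa ∧ Ga).
    example {α : Type} (F G : α → Prop) (a : α)
        (h1 : ∀ x, F x → G x) (h2 : ¬ (F a ∧ G a)) : ¬ F a :=
      fun hFa => h2 ⟨hFa, h1 a hFa⟩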

This looks quite surprising. S has evidence which entails that ¬Fa, and the new evidence about the drug doesn’t rebut that evidence. It does, says Christensen, undermine her evidence for ¬Fa. But not because it undermines the entailment; it isn’t as if the testimony gives her reason to believe that some non-classical logic, in which this entailment does not go through, is correct. So how could it be an underminer?

Again, JSE seems to provide an answer. If S’s evidence that ¬Fa is ultimately just her judgment that it is entailed by her other evidence, and that judgment is revealed to be unreliable because of her recent medication, then S does lose evidence that ¬Fa. But if we thought the original evidence, i.e., ∀x(Fx → Gx) and ¬(Fa ∧ Ga), was still available to S, then there is good reason to say that her evidence conclusively establishes that ¬Fa.

I’m not saying that Christensen argues from JSE to his conclusion. Rather, I’m arguing that JSE delivers the conclusion Christensen wants, and without JSE there seems to be a fatal flaw in his argument. So Christensen’s view needs JSE as well.

Conclusion

I’ve argued that Elga’s version of the Equal Weight View of disagreement, White’s view of permissiveness, and David Christensen’s view of higher-order evidence, all stand or fall with JSE. Not surprisingly, Christensen also has a version of the Equal Weight View of evidence, and, as Tom Kelly notes in his Peer Disagreement and Higher-Order Evidence (DOC), there is a strong correlation between holding the Equal Weight View, and rejecting epistemic permissiveness. Note that Rich Feldman, for instance, agrees broadly with Elga on disagreement, White on permissiveness and Christensen on higher-order evidence. Indeed, his work has been highly influential in all three of those fields. So these are not arbitrary selections from work of contemporary internalists.

I don’t, therefore, think it is a coincidence that these views stand or fall with JSE. Rather, I think JSE is a common thread to the important work done by various internalists on disagreement, permissiveness and higher-order evidence.

I also think that JSE is false, and is false for some fairly systematic reasons. But that is something that will have to wait for another post.

9 Replies to “Does Judgment Screen Evidence?”

  1. I have a quick clarificatory question, Brian. You say that “If we have a probabilistic analysis of evidential support, the version of screening I’m going to offer here is identical to the Reichenbachian version just provided.” I see one direction of this “identity”, but not the other. The two conditions are:

    (R) Pr(A | B & C) = Pr(A | C).

    and

    (W) c(A,B&C) ≤ c(A,C) and c(A,~B&C) ≥ c(A,C)

    where c(H,E) is “the degree to which E supports H”. It is true that (R) entails (W), for just about any probabilistic measure c(H,E) [that’s a cool fact, closely related to one that Jim Hawthorne and I have appealed to in another context: http://fitelson.org/ic_2.pdf]. But, (W) does not entail (R), right? I’m not sure anything trades on this “identity” claim, but I was just curious about it.

  2. Ah, good point.

    When I was doing things probabilistically, I said there was a strict equality, Pr(A | B & C) = Pr(A | C). But when I did things non-probabilistically, then I just said things like “provides no better evidence that”.

    If I make the second clause of the screening definition:

    B ∧ C is exactly as good evidence for A as C is, and so is ¬B ∧ C.

    then does that make it clearer?

  3. Yes, thanks, this is better. So long as you don’t use support measures like this one:

    s(H,E) = Pr(H | E) – Pr(H | ~E)

    This measure (and some others as well) will (strangely) satisfy the (W) => (R) direction, but violate the (R) => (W) direction of the equivalence. For most measures, though, the equivalence does hold between the equality version of (W) and (R).

  4. That’s a really interesting unification of some apparently distinct positions that have baffled me. The only thing I would take issue with is the original argument motivating JSE – it seems to me that it does better to motivate ESJ. After all, it seems plausible that the only reason S having judged that p could be evidence for anyone else is that it provides some evidence that S has some direct evidence, like E. Thus, the judgment plays the role of the fingerprints, while E plays the role of actually having been at the crime scene.

    Maybe the only thing evidence is good for is producing the judgments, but it seems more plausible to me that, evidentially speaking, the only reason judgments of others matter is because they indicate evidence, while one’s own judgments play some sort of constitutive role rather than an evidential one in your beliefs.

  5. Thanks Kenny!

    I agree that the motivation for JSE isn’t really compelling. I’m somewhat tempted to argue, using something like this argument, for ESJ.

    ESJ helps explain something that’s rather puzzling. If I know you’re confident in p, and don’t know anything else about p, that gives me a reason to be somewhat confident in p. After all, you’re a smart guy, you wouldn’t be confident in p unless there was a good reason and so on.

    But the following is bad reasoning I think. If I simply notice that I’m confident in p, that doesn’t seem like nearly as good a reason to be confident in p. If I’ve never had any evidence that p (apart from my current confidence), and I know I have no other evidence, then I can’t reason “Brian’s a smart guy, if he’s confident in p, that’s a good reason to be confident in p,” even though I can run that reasoning on your confidence. There’s an asymmetry there which needs explaining, and ESJ offers a nice explanation of it.

  6. Brian—

    Your post raises some interesting questions. One of them is about whether the undermining that Elga and I see in certain cases is produced by a judgment, or by more evidence. I’ve always seen it in the latter way. Here’s why I think it might matter in the higher-order evidence case (and I of course agree that similar points apply to the position on disagreement Elga and I have defended).

    In the drug case, I have two bits of evidence:

    E1: the original evidence provided by the logic puzzle
    E2: the evidence about my being drugged

    We suppose that, in fact, E1 entails P, the answer to the logic puzzle.

    Now we can consider various judgments I could make about E1’s relation to P. We might take these as different attitudes I can take toward something like the following claim:

    C: E1 supports P.

    After taking in E1 and E2, I need to form credences (make judgments) about both P and C. On my view, I should not do so in a way that involves having great confidence in P, but having low confidence in C. After all, my only reason for confidence in P comes from E1. But this leaves open a couple of possibilities. I could be confident of both P and C, or neither. I defend the view that, in my version of the drug case, the rational response is the second. But it’s not clear to me that this amounts to a judgment screening off evidence. Of course, probabilistic understandings of screening off will be problematic here, given that E1 entails P. But I’m not sure that even this case is at root similar to the sort of subsumption that occurs in ordinary screening off.

    For example, I don’t think that in drug-undermining cases in general, it’s right to say that the original evidence gets subsumed by the evidence or judgment about the drug. The evidence about the drug might, for example, indicate that 50% of people reach random answers under the drug, while the other 50% are unaffected. This alone says nothing about which answer to the problem is correct. So the original evidence will have a role to play in determining one’s final judgment about P.

    I also don’t think the real idea behind my verdicts on the drug or disagreement cases depends on the idea that J is evidence for other people. In the drug case, the fact that I’m drugged doesn’t seem to tell you anything about whether P is true. This seems to me to be another way of drawing a contrast between what the drug evidence does, and ordinary evidential subsumption.

    So I don’t think my position depends on JSE. It’s really motivated by what I take to be compelling judgments about particular cases. (I’m also not sure I see the fatal flaw in my argument, unless by “fatal flaw” you mean the cool consequence that even conclusive reasons can be disabled by higher-order evidence…)

  7. Kenny-

    I think that others’ judgments often matter epistemically in a way that has nothing to do with indicating evidence that I don’t have. They matter because I know I make mistakes in thinking. Suppose, for example, that I know that 100 excellent thinkers have exactly the same ordinary evidence that I have concerning P. Perhaps P is some scientific hypothesis, and we’ve examined all the same data; we may even believe the same background theories. Still, if I believe P and find out that they all reached the opposite conclusion, that should lower my confidence in P. This is not because it indicates that there’s some other data I don’t have, but because it indicates that I mistook the import of the evidence I do have. I can’t check for this kind of mistake purely by thinking about the data again, for I know I’m liable to mistakes there, too. So insofar as I’m rationally required to take into account my own epistemic fallibility, I’m required to use the beliefs of other people as checks on my own reasoning.

    This is the way the disagreement cases resemble the drug cases, in which it’s clear that information about my being drugged is no ordinary evidence at all about which answer to the puzzle is correct. I think it’s also related to the asymmetry Brian mentions in his response to your comment: my (equally-informed) friend’s belief can serve for me as a check on my own reasoning, in a way that my own belief clearly cannot.

  8. Hi David,

    I might have been sloppy about what I meant by “evidence”. I didn’t mean to presuppose that the ‘higher-order evidence’ view was wrong. (I meant to eventually argue that it was wrong, but not to presuppose it.) So that was probably sloppy on my part.

    And I agree that the views I’m mentioning here aren’t motivated by JSE. As you say, they are motivated by cases. But I think without JSE there is a pretty good response to those cases.

    In fact even before that I think the cases aren’t as compelling as they might at first appear. I think in general the intuitions you offer in “Higher-Order Evidence” go away when we consider cases where we have (misleading) evidence that we’ve been overly cautious. Consider this case, for instance.

    A doctor has been on duty for 12 hours. In the world of the story, at that stage in a shift, doctors are typically excessively cautious about their diagnosis. The initial feelings of drowsiness cause them to second-guess themselves even though they are capable of making reliable confident judgments. Helen, a doctor, knows these facts, and has been on duty for 12 hours. Helen is in fact immune to this general tendency of over-caution, though she does not have any prior reason to believe this. She looks at the symptoms of a patient who is in some discomfort, and concludes that probably he should be given 100mg of drug X, although more tests would confirm whether this is really the best action. That’s the right reaction to the medical evidence; there are realistic explanations of the symptoms according to which 100mg of X would be harmful, and the tests Helen considers would rule out these explanations. Had she only just come on duty, she would order the tests, because the risk of harming the patient if the probably correct diagnosis is wrong is too great. But Helen now has reason to worry that if she does this, she is being excessively cautious, and is making the patient suffer unnecessarily. What should Helen do?

    I think in that case she should run the extra tests. It would be reckless not to. But I don’t see how to get that result given your views about higher-order evidence, since by hypothesis the doctor has higher-order evidence that drug X is exactly what should be prescribed.

    I also worry that having “E1 supports P” as an extra step is going to lead to Lewis Carroll style worries. Could you allow an agent who knew E1, and who knew that E1 supported P, but who wasn’t in a position to be sure about P, because they had reason to doubt that E1 and “E1 supports P” together support P? I find that quite hard to imagine. And once I think about the impossibility of that case, I start to worry a lot about cases like the Ferdinand case.

  9. Hi Brian—

    The Helen case is interesting, but seems to me a bit underspecified at this point. (For example: How good is Helen’s evidence for the over-caution effect? How severe is the effect? Does Helen have evidence that, in her situation, all the tests that doctors order are unneeded?) But maybe there are cases of this general sort that would put pressure on my sort of view.

    I do think, though, that we will all have to make room for good reasoning to be undermined by misleading evidence that we’ve reasoned badly. How this will work out theoretically is certainly not yet settled. (I don’t think the sort of position I favor requires using “E1 supports P” as an extra step, but I do see how that would be problematic.) But to hold that the rationality of beliefs based on good reasoning was never undermined by the agent getting strong evidence that she had reasoned badly would seem to me to forbid agents from taking seriously the possibility of their own epistemic errors.
