A Puzzle for Subject-Sensitive Invariantism

When I was at Rutgers the weekend before last I was talking to Sam Cumming about, among other things, subject-sensitive invariantism. Sam mentioned that there seem to be some interesting difficulties in generalising SSI so that it is a theory of group knowledge as well as individual knowledge. These all seemed like excellent concerns, and I didn’t have much to say about them. (On my theory the problem of explaining what group knowledge is ‘reduces’ to the problem of explaining what group preferences are, which may not be progress.) I’ll leave Sam to say what the problems he’s noticed are, but I thought I’d note here that one of them seems to be a complication even for people who merely care about individual knowledge. Here’s the problem.

S has a lot of evidence that p, and p is in fact true. She doesn’t think much turns on p, so she accepts p. We might imagine that were p of little importance to her, she’d actually know that p. But it turns out that p is really important to her, so by SSI standards she doesn’t know that p.

S knows that p entails q, and she infers q from p. All her evidence for p is evidence that q. And q really isn’t important to her, or at least not that important. (Presumably q is evidence that p, so q is of some importance, but not that important.) Could she thereby come to know that q?

Intuitions are pretty blurry here, I think. On the one hand, she has reason to believe q to a high degree, she infers it (indirectly) from that evidence, and q isn’t so important that she should need more evidence than that to know that q. Note that she hasn’t inferred q from a _false_ premise, so there isn’t a defeater of that kind. On the other hand, she has inferred q from a premise that she didn’t know, and you might think that that is a defeater. If so, the definition of SSI will have to somehow rule that out. (Perhaps by adding a stipulation that we can’t acquire knowledge by inference from what we don’t know, but we’d better be careful that this doesn’t rule out all sorts of scientific reasoning from approximate theories when we aren’t aware they are merely approximate.)

So one question is whether my preferred version of SSI is vulnerable to this objection. I think that the interest relativity comes into the definition of *belief*, not into the degree of justification (or warrant) that is needed to turn a belief into knowledge. The way I think this works is as follows.

bq. S believes that p = For any live, salient possible actions A and B, and any salient q, S prefers A to B given q iff S prefers A to B given p and q.

‘Live’ and ‘salient’ are meant to be technical terms, and to some extent so is ‘action’. (For the purposes of this theory, believing that p is an action, and one that the agent prefers to do rather than not do iff her credence in p is above 0.5.) The ‘interest-relativity’ comes in because with different interests, conditionalising on p will have different effects, and may have no effect at all.
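To make the definition concrete, here is a toy numerical sketch of the preference test. The worlds, credences, and payoffs are all made-up illustrations, not part of the theory, and I ignore the doxastic ‘actions’ of believing and not believing for simplicity. The point is just that with only a small-stakes bet salient, conditionalising on p changes no preferences, while adding a high-stakes bet to the salient actions does.

```python
from itertools import product

# Toy sketch of the preference-based belief test. Worlds, credences,
# and payoffs are made-up illustrations; the doxastic 'actions' of
# believing/not believing p are ignored here for simplicity.

credence = {"w_p": 0.98, "w_not_p": 0.02}   # p is true at w_p only
p = frozenset({"w_p"})
tautology = frozenset(credence)             # the trivially salient q

def cond_ev(payoff, prop):
    """Expected payoff of an action, conditional on a set of worlds."""
    mass = sum(credence[w] for w in prop)
    return sum(credence[w] * payoff[w] for w in prop) / mass

def believes(prop, actions, salient):
    """S believes prop iff for all salient actions A, B and salient q,
    A is preferred to B given q exactly when it is given prop-and-q."""
    for A, B in product(actions, repeat=2):
        for q in salient:
            pq = prop & q
            if sum(credence[w] for w in pq) == 0:
                continue  # conditional preference undefined
            if (cond_ev(A, q) > cond_ev(B, q)) != (cond_ev(A, pq) > cond_ev(B, pq)):
                return False
    return True

decline = {"w_p": 0.0, "w_not_p": 0.0}
small_bet = {"w_p": 1.0, "w_not_p": -1.0}         # low stakes
long_odds_bet = {"w_p": 1.0, "w_not_p": -1000.0}  # stake 1000 to win 1

believes(p, [decline, small_bet], [tautology])                 # True
believes(p, [decline, small_bet, long_odds_bet], [tautology])  # False
```

The very same credence of 0.98 grounds belief in p in one context and not in the other; that is where the interest-relativity lives on this picture.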

Question: Could it be the case that in the above case, p is believed to degree x, q is believed to degree y>x, but with the bulk of this credence coming from the evidence for p, and q is known but p is not?

In such a case, there will have to be a salient r, and salient actions A and B such that the agent prefers A to B given p and r, but doesn’t prefer A to B given r. (Or vice versa, but let’s ignore that case hopefully without loss of generality.) Now if they believe q (and hence are even in a position to know that q) it has to be the case that they don’t prefer A to B given q and r. If they have totally ruled out q & ~p, i.e. assigned it credence 0, then this won’t be possible, because then the preferences conditional on q will be identical to preferences conditional on p. So the most extreme case, where the credence for q is *equal* to the credence for p, cannot be a case where q is known but not p. Still, the credence the agent must have in ~p & q need not be more than minimal, as in the following case.
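The conditionalisation step here can be checked numerically. A minimal sketch (the worlds, credences, and payoffs are made-up illustrations): if the q-and-not-p world carries zero credence, then conditionalising on q and conditionalising on p assign the same expected value to any action, and hence generate the same preferences.

```python
# Checking the claim that zero credence in (q & ~p) makes preferences
# conditional on q identical to preferences conditional on p.
# The worlds, credences, and payoffs are made-up illustrations.

credence = {"w1": 0.6, "w2": 0.4, "w3": 0.0}  # w3 is the (q & ~p) world
p = {"w1", "w2"}
q = {"w1", "w2", "w3"}  # p entails q; credence in (q & ~p) is 0

def cond_ev(payoff, prop):
    """Expected payoff of an action, conditional on a set of worlds."""
    mass = sum(credence[w] for w in prop)
    return sum(credence[w] * payoff[w] for w in prop) / mass

payoff_A = {"w1": 5.0, "w2": -2.0, "w3": -100.0}

# w3 carries no credal weight, so the two conditional values coincide:
cond_ev(payoff_A, p) == cond_ev(payoff_A, q)  # True
```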

A is the action of betting on p at very very long odds, B is the action of declining that bet, and r is a known tautology, so preferences given r are preferences simpliciter. There are no other salient actions that depend on p or q in any way. (That is, no other normal actions; there are still the ‘actions’ of believing p, not believing p, etc.) Then the agent prefers A to B given p (it’s a bet on p after all), but does not prefer A to B simpliciter, or even given q.

So I don’t *really* get out of this difficulty. Let’s work through an example to see how bad this feels. There is a test match between England and South Africa being played. Neither Jack nor Jill has a personal stake in the match. In fact South Africa has won, though this news is only slowly filtering through to the betting sites. Both Jack and Jill have testimonial evidence, from a moderately reliable source, that South Africa won. Both of them come to (reasonably) believe to degree 0.98 that South Africa won and to degree 0.99 that a team won, since they assign equal credence to an England win and to a draw. There is no option of betting on there being a result, but there is the option of betting on South Africa at 1000 to 1. Jack knows all this, so knows that he isn’t in a position to believe that South Africa won. But still he’s in a position to believe that there was a result, and he does believe this. Jill doesn’t know that this option exists, though in fact it is totally salient (she should have been looking at the betting options but wasn’t), so she unreasonably takes herself to believe that p (that South Africa won). She too is in a position, I say, to justifiably believe that q (that there was a result).
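To spell out the arithmetic behind the case, reading the 1000 to 1 as odds-on, i.e. staking 1000 units to win 1 if South Africa won (that reading is my gloss, but it is what makes the bet one Jack would decline at credence 0.98):

```python
# Expected-value arithmetic for the Jack and Jill case, reading the
# 1000 to 1 as odds-on: stake 1000 units to win 1 if South Africa won.

cr_p = 0.98   # credence that South Africa won (p)
cr_q = 0.99   # credence that there was a result (q); p entails q

def ev_bet(prob_p, win=1.0, stake=1000.0):
    """Expected value of taking the bet on p; declining is worth 0."""
    return prob_p * win - (1 - prob_p) * stake

ev_simpliciter = ev_bet(cr_p)        # about -19, so decline the bet
ev_given_p = ev_bet(1.0)             # +1, so take the bet given p
ev_given_q = ev_bet(cr_p / cr_q)     # about -9.1, still decline given q
```

Conditionalising on p flips the preference from declining to taking, so by the preference test belief in p fails; conditionalising on q leaves the preference where it was, so belief in q survives.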

If there are no defeaters in the neighbourhood, then both of them know that there was a result. I don’t know whether that is the right result, or if it isn’t whether Jack, Jill or both should be said to not know that there was a result. Any thoughts?