Following up a little on yesterday’s post, I think that many people on both sides of the IRI debate have misconstrued the force of IRI examples. For instance, here’s what Jason Stanley says is the take-home message of the examples.
The advocate of IRI simply proposes that, in addition to whatever one’s favored theory of knowledge says about when x knows at time t that p, there is a further condition on knowledge that has to do with practical facts about the subject’s environment. (Knowledge and Practical Interests, pg. 85)
I think that’s wrong, or at least misleading, in a couple of respects. (And I think it’s trivial to say “when x knows at time t that p“; it’s at t, isn’t it?!)
- It suggests that interests are a “further condition” on knowledge, rather than integrated into the other conditions.
- It suggests, perhaps indirectly, that the ‘low stakes’ condition is the analytically basic condition, and there are extra conditions in ‘high stakes’ cases.
I’m not sure Jason commits to either of these claims, but I think critics of IRI have often taken those claims as being part of the theory, and I don’t think those critics are being entirely uncharitable when they do that. Be that as it may, I think both claims are false, and certainly neither claim is supported by examples motivating IRI. (Or, if you’re like me, by theoretical arguments motivating IRI; I don’t think the examples show a great deal.)
Here’s an alternative way to capture the motivations behind IRI that doesn’t endorse a “further conditions” view, and takes the ‘high stakes’ case to be analytically basic.
There are coherence constraints on knowledge. Violations of them amount to doxastic defeaters. Some of these constraints are simple. I think, for instance, the following constraint is plausibly a universal truth.
- If x believes ¬p, then x does not know that p.
It doesn’t matter whether x‘s belief that p is true, justified, safe, sensitive, not derived from falsehoods, caused by the truth of p, robust with respect to the addition of further true beliefs, or whatever you like. If x believes both p and ¬p, there is too much incoherence in that part of her belief space for there to be knowledge. The belief that ¬p is a doxastic defeater of the (putative) knowledge that p.
There’s a motivation for this. Knowledge that p should mean that adding p to the cognitive state, and making the subsequent alterations that a good theory of belief revision suggests, would make no changes whatsoever. If x knows p, then p is, in effect, already part of the state, so adding it changes nothing. This suggests a further constraint.
- If x prefers φ to ψ, but prefers ψ ∧ p to φ ∧ p, she doesn’t know p.
In this case, however, the principle needs to be qualified, for unqualified it seems to rule out any rational agent knowing p unless p is absolutely certain. (Proof sketch: Let ψ be the act of taking a bet on p at crazy long odds, and φ be the act of declining that bet.) So we qualify the principle. How? IRI, or at least one version of it, says that the qualification is interest-relative. So the real rule is something like this:
- If x prefers φ to ψ, but prefers ψ ∧ p to φ ∧ p, she doesn’t know p, unless one of φ and ψ is too irrelevant to x‘s current interests.
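The proof sketch in the parenthesis above can be made concrete with a toy expected-utility calculation. This is a minimal sketch under assumed, illustrative numbers of my own (credence 0.99 in p, a bet that pays $1 on p and loses $1000 on ¬p), not anything from the post:

```python
# Toy expected-utility illustration of the proof sketch.
# Assumed (illustrative) numbers: x's credence in p is 0.99, short of certainty;
# psi = take a bet that pays $1 if p and -$1000 if not-p;
# phi = decline the bet (payoff $0 either way).

credence_p = 0.99

def expected_utility(payoff_if_p, payoff_if_not_p, cr=credence_p):
    """Expected payoff of an act, given credence cr in p."""
    return cr * payoff_if_p + (1 - cr) * payoff_if_not_p

eu_take = expected_utility(1, -1000)    # 0.99*1 + 0.01*(-1000) = about -9.01
eu_decline = expected_utility(0, 0)     # 0.0

# x prefers phi (decline) to psi (take) ...
prefers_decline = eu_decline > eu_take

# ... but conditional on p the bet simply wins, so psi-and-p
# beats phi-and-p ($1 versus $0).
prefers_take_given_p = 1 > 0

print(prefers_decline, prefers_take_given_p)  # True True
```

Since a long-odds bet like this exists for any credence short of 1, the unqualified principle would deny knowledge of p to every rational agent who is less than certain of p, which is why some qualification is needed.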
The general picture is this. Knowledge that p requires that one’s belief that p sufficiently cohere with the rest of one’s cognitive state. Ideally, we’d like it to perfectly cohere. But requiring perfect coherence seems to lead to scepticism. So there is an exception clause. And it says that when you’re in a ‘low stakes’ context, certain restrictions on knowledge don’t apply. So a belief that doesn’t fully cohere can still be knowledge.
I think that makes better sense of how interests fit into knowledge. It isn’t that knowledge merely requires a belief with high utility. Or that changing one’s interests can be the basis for knowledge. It’s rather that certain barriers to knowledge get lowered in certain (relatively familiar) situations.
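The picture above can be sketched as a toy predicate; this encoding is my own illustration, not a formalism from the post. The unqualified rule counts any preference reversal under p as a defeater; the interest-relative qualification lets the reversal count only when both acts are relevant to the agent’s current interests:

```python
# Toy sketch (my own encoding) of the qualified rule: a preference
# reversal under p defeats knowledge of p only when both acts are
# relevant to x's current interests.

def reversal_defeats_knowledge(prefers_phi: bool,
                               prefers_psi_given_p: bool,
                               phi_relevant: bool,
                               psi_relevant: bool) -> bool:
    reversal = prefers_phi and prefers_psi_given_p
    both_relevant = phi_relevant and psi_relevant
    return reversal and both_relevant

# 'High stakes': the long-odds bet is a live option, so the reversal
# defeats knowledge.
print(reversal_defeats_knowledge(True, True, True, True))   # True

# 'Low stakes': the bet is irrelevant to x's interests, so the same
# reversal does not defeat knowledge.
print(reversal_defeats_knowledge(True, True, True, False))  # False
```

On this encoding the defeater condition is the default and the low-stakes case is the exception, matching the claim that the ‘high stakes’ case is analytically basic.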
I’m in general sympathetic to approaches to knowledge that say that the sceptical scenario is the basic one, and the reason we have a lot of knowledge is because ‘exceptions’ to the rather tight restrictions on knowledge are frequently triggered. That is one way to explain the appeal of scepticism; abstract away too much from real-life situations and you lose the triggers for these exceptions, and then scepticism is true. The kind of lottery situations that IRI people worry about aren’t cases where strange new conditions on knowledge are triggered. Rather, they are cases where abstract Cartesian standards for knowledge are restored, just like they are in the simplest models for knowledge.
Posted by Brian Weatherson in Uncategorized