What is IRI?

Following up a little on yesterday’s post, I think that many people on both sides of the debate over IRI (interest-relative invariantism) have misconstrued the force of IRI examples. For instance, here’s what Jason Stanley says is the take-home message of the examples.

bq. The advocate of IRI simply proposes that, in addition to whatever one’s favored theory of knowledge says about when _x_ knows at time _t_ that _p_, there is a further condition on knowledge that has to do with practical facts about the subject’s environment. (Knowledge and Practical Interests, pg. 85)

I think that’s wrong, or at least misleading, in a couple of respects. (And I think it’s trivial to say “when _x_ knows at time _t_ that _p_”; it’s at _t_, isn’t it?!)

  • It suggests that interests are a “further condition” on knowledge, rather than integrated into the other conditions.
  • It suggests, perhaps indirectly, that the ‘low stakes’ condition is the analytically basic condition, and there are extra conditions in ‘high stakes’ cases.

I’m not sure Jason commits to either of these claims, but critics of IRI have often taken them to be part of the theory, and I don’t think those critics are being entirely uncharitable in doing so. Be that as it may, I think both claims are false, and certainly neither is supported by the examples motivating IRI. (Or, if you’re like me and don’t think the examples show a great deal, by the theoretical arguments motivating IRI.)

Here’s an alternative way to capture the motivations behind IRI, one that doesn’t endorse a “further conditions” view and that takes the ‘high stakes’ case to be analytically basic.

There are coherence constraints on knowledge. Violations of them amount to doxastic defeaters. Some of these constraints are simple. I think, for instance, the following constraint is plausibly a universal truth.

  • If _x_ believes ¬_p_, then _x_ does not know that _p_.

It doesn’t matter whether _x_’s belief that _p_ is true, justified, safe, sensitive, not derived from falsehoods, caused by the truth of _p_, robust with respect to the addition of further true beliefs, or whatever you like. If _x_ believes both _p_ and ¬_p_, there is too much incoherence in that part of her belief space for there to be knowledge. The belief that ¬_p_ is a doxastic defeater of the (putative) knowledge that _p_.

There’s a motivation for this. Knowledge that _p_ should mean that adding _p_ to one’s cognitive state, and making whatever subsequent alterations a good theory of belief revision recommends, would make no changes whatsoever. If _x_ knows _p_, then _p_ is already part of that state, so revising by it should change nothing.
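To put that thought in AGM-style notation (a gloss of mine, not anything Stanley commits to): if _p_ is already in a consistent belief set K, then revising by _p_ is vacuous, i.e. K * _p_ = K. Since knowledge entails belief, knowing _p_ guarantees that the revision is idle. This suggests a further constraint.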

  • If _x_ prefers φ to ψ, but prefers ψ ∧ _p_ to φ ∧ _p_, she doesn’t know _p_.

As it stands, however, the principle needs to be qualified, because it seems to rule out any (rational) agent knowing _p_ unless _p_ is absolutely certain. (Proof sketch: Let ψ be the act of taking a bet on _p_ at crazy long odds, and φ be the act of declining that bet.)
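To make the sketch concrete, here is an illustrative calculation (the numbers are mine). Suppose _x_’s credence in _p_ is 0.99, and the bet pays $1 if _p_ and costs $10,000 if ¬_p_. Taking the bet has expected value 0.99 × $1 − 0.01 × $10,000 = −$99.01, so _x_ prefers declining (φ) to taking (ψ). But given _p_, the bet is a free dollar, so she prefers ψ ∧ _p_ to φ ∧ _p_. By the principle, she doesn’t know _p_. And since the odds can be lengthened to overwhelm any credence short of 1, only absolute certainty would survive. So we qualify the principle. How? IRI, or at least one version of it, says that the qualification is interest-relative. So the real rule is something like this: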

  • If _x_ prefers φ to ψ, but prefers ψ ∧ _p_ to φ ∧ _p_, she doesn’t know _p_, unless one of φ and ψ is too irrelevant to _x_’s current interests.
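On this rule, the sceptical calculation above is blocked in ordinary cases: when no long-odds bet is actually on offer and nothing much hangs on _p_ for _x_, acts like ψ are irrelevant to her current interests, so the preference reversal they would induce is no bar to knowledge. It is only when such acts become relevant, as in ‘high stakes’ cases, that the full force of the principle applies.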

The general picture is this. Knowledge that _p_ requires that one’s belief that _p_ sufficiently cohere with the rest of one’s cognitive state. Ideally, we’d like it to perfectly cohere. But requiring perfect coherence seems to lead to scepticism. So there is an exception clause. And it says that when you’re in a ‘low stakes’ context, certain restrictions on knowledge don’t apply. So a belief that doesn’t fully cohere can still be knowledge.

I think that makes better sense of how interests fit into knowledge. It isn’t that knowledge merely requires a belief with high utility. Or that changing one’s interests can be the basis for knowledge. It’s rather that certain barriers to knowledge get lowered in certain (relatively familiar) situations.

I’m in general sympathetic to approaches to knowledge on which the sceptical scenario is the basic one, and the reason we have a lot of knowledge is that ‘exceptions’ to the rather tight restrictions on knowledge are frequently triggered. That is one way to explain the appeal of scepticism: abstract away too much from real-life situations and you lose the triggers for these exceptions, and then scepticism is true. The kinds of lottery situations that IRI people worry about aren’t cases where strange new conditions on knowledge are triggered. Rather, they are cases where abstract Cartesian standards for knowledge are restored, just as they are in the simplest models of knowledge.