I’ve been thinking again about the issues about knowledge, justified belief, and practical interests that I explored a bit in this old paper. In that paper I used a rather complicated example that’s meant to show that a principle Jeremy Fantl and Matthew McGrath endorse, namely (PC), is false. Here is the principle.
(PC) S is justified in believing that p only if S is rational to prefer as if p.
The rough outline of why (PC) is wrong is that whether one is rational to prefer as if p might depend not only on whether one has justified attitudes towards p, but also on whether one’s other attitudes are justified. Here is an example in which that distinction matters.
S justifiably has credence 0.99 in p. She unjustifiably has credence 0.9999 in q. (She properly regards p and q as probabilistically independent.) In fact, given her evidence, her credence in q should be 0.5.
S is offered a bet that pays $1 if _p_ ∨ _q_ is true, and loses $1000 otherwise. Assume S has a constant marginal utility for money. It is irrational for S to prefer to take the bet. Given her evidence, the bet has a negative expected value. Given her (irrational) credences, it has a positive expected value, but if she properly judged the evidence for q, she would not take it.
Of course, conditional on p, the bet is just a free grant of $1, so given p she should take it.
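To make the arithmetic behind these claims explicit, here is a small Python sketch. The numbers and the independence assumption come from the example above; the function name and the use of dollars as utilities are just for illustration.

```python
# Expected value of the bet described above: it pays $1 if p-or-q is
# true and loses $1000 otherwise. p and q are treated as independent,
# as stipulated in the example.

def expected_value(pr_p, pr_q, win=1.0, loss=-1000.0):
    """EV of the bet on p-or-q, with p and q probabilistically independent."""
    pr_p_or_q = 1 - (1 - pr_p) * (1 - pr_q)
    return pr_p_or_q * win + (1 - pr_p_or_q) * loss

# With S's actual (unjustified) credence 0.9999 in q, the bet looks good:
print(expected_value(0.99, 0.9999))  # positive, roughly +$0.999

# With the credence 0.5 her evidence supports, the bet is clearly bad:
print(expected_value(0.99, 0.5))     # negative, roughly -$4.005

# Conditional on p, the disjunction is certain, so the bet is a free $1:
print(expected_value(1.0, 0.5))      # exactly +$1.00
```

So the bet has positive expected value by her lights, negative expected value by the lights of her evidence, and is a sure $1 gain conditional on p, which is just what the case requires.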
So this is a case where it is not rational to prefer as if p. She should prefer to decline the bet outright, but prefer to accept it conditional on p.
If we accept (PC), it follows that S is not justified in believing p. But this conclusion seems wrong. S’s credence in p is perfectly justified. And on any theory of belief that seems viable around here, S’s credence in p counts as a belief. (On my preferred view, S believes p iff she prefers as if p. And she does. The main rival to this view is the “threshold view”, on which belief requires a credence above some threshold. And the usual values proposed for the threshold are lower than 0.99.)
So this is a counterexample to (PC). In a recent paper, Fantl and McGrath defend a weaker principle, namely (KA).
(KA) S knows that p only if S is rational to act as if p.
Is this case a counterexample to (KA) as well? (Assume that p is true, so the agent could possibly know it.) I don’t believe that it is. I think the things that an agent knows are the things she can use to frame a decision problem. If the agent knows p, then the choice between taking or declining the bet just is the choice between taking a dollar and refusing it, and so she should take the bet. But taking the bet is irrational, so that must be the wrong way to frame the decision. Hence she doesn’t know that p.
The upshot of this is that these practical cases give us a new kind of counterexample to K = JTB. In the case I’ve described, the agent has a justified true belief that p, but does not know p.