In “a forthcoming paper”:http://brian.weatherson.org/cwdwpe.pdf I argue that the reasons adduced by various authors (e.g. Jeremy Fantl and Matthew McGrath, Jason Stanley, and John Hawthorne) don’t give us reason to think that there is a need for a pragmatic component in our theory of _justified belief_. My view was that the cases they developed showed we needed a pragmatic component to our analysis of _belief_, as functionalists have been saying for a few years now, but the _justified_ part of _justified_ belief could be left entirely free of pragmatic concerns. When I wrote the paper I thought that the same would be true for our theory of knowledge, though I was a little worried about whether the right account of defeaters would be pragmatically neutral. I think the paper is cautious enough to not _say_ that the same is true of knowledge, though it probably does implicate that.
Anyway, now that I’m trying to write up the extension of my theory of justified belief to a theory of knowledge, it seems I should have been more worried. The impact of practical considerations seems to be very different on knowledge and on justification in a couple of cases, both of which I was aware of when I wrote the earlier paper. The cases are one that Jason calls ‘Ignorant High Stakes’ and a case I discussed of a gamble where the agent has unreasonable beliefs about the cost of losing the bet.
Ignorant High Stakes is a version of Keith DeRose’s Bank Case. Hannah has a paycheck to deposit, but she doesn’t feel like going to the bank on Friday evening when she’d rather be out drinking. Surprisingly, she finds standing in bank queues on Saturday morning a good hangover cure, so she likes depositing her paycheck on Saturday. She last did this a couple of weeks ago, and has been doing it frequently, so she has pretty good evidence that the bank will be open Saturday. But of course banks do change their hours, go on strike, declare bank holidays, etc., so there is a sorta kinda live possibility that the bank will be closed tomorrow. Now it turns out, totally unbeknownst to Hannah, that it is crucial to her financial wellbeing that her check be deposited by the end of the weekend; otherwise her account will go dangerously into overdraft. Hannah doesn’t know this, but she does know that banks occasionally do weird things, so she quite reasonably assigns a high credence, slightly less than 1, to the proposition that the bank will be open Saturday. As it turns out the bank will be open on Saturday, as this isn’t one of those weird cases. Three questions.
# Does Hannah believe the bank will be open Saturday?
# Does Hannah justifiably believe the bank will be open Saturday?
# Does Hannah know the bank will be open Saturday?
My theory says that 1 and 2 go together when Hannah has reasonable credences, and in this case the answer is ‘yes’ to both 1 and 2, but intuition says ‘no’ to 3. What should we do? I’m tempted to accept both the theory and the intuition here, and say that we have a case where justified true belief is not knowledge.
One then has to say what knowledge _is_. Here’s a first pass theory, simplified in several ways. The third clause has to be understood in the somewhat technical senses of section 3 of “the earlier paper”:http://brian.weatherson.org/cwdwpe.pdf.
# p is true
# S’s credence in p is reasonable
# For any false proposition f that S believes and is relevant to the action-guiding inferences she makes using p, for any live and salient options A and B, and any active proposition q, the agent prefers A to B given q & ~f iff she prefers A to B given p & q & ~f
In slightly less technical terms, if we conditionalise on ~f (where f is the relevant false belief) then further conditionalising on p doesn’t change anything important about the agent’s (conditional) preference ordering. By this criterion Hannah doesn’t know that the bank is open, because the following two claims are true.
* Conditional on it being really important that she banks her paycheck by the weekend, she prefers going to the bank on Friday
* Conditional on it being really important that she banks her paycheck by the weekend and the bank being open on Saturday, she prefers going to the bank on Saturday
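To see how the third clause delivers this verdict, here is a minimal numerical sketch. Every utility and credence below is an assumption invented for illustration; the case description specifies none of them.

```python
# Toy model of Hannah's choice; all numbers are illustrative assumptions,
# not taken from the case description.
U_FRIDAY = 0.0          # deposit Friday: safe, but she misses drinking
U_SAT_OPEN = 5.0        # deposit Saturday: queue-as-hangover-cure bonus
U_SAT_CLOSED = -1000.0  # check not banked by the weekend: overdraft disaster
PR_OPEN = 0.99          # Hannah's reasonable credence the bank opens Saturday

def eu_saturday(pr_open):
    """Expected utility of waiting until Saturday, given a credence
    that the bank will be open then."""
    return pr_open * U_SAT_OPEN + (1 - pr_open) * U_SAT_CLOSED

# Conditional on ~f (it really is crucial to bank the check this weekend),
# Saturday looks bad:
print(round(eu_saturday(PR_OPEN), 2))  # -5.05, below U_FRIDAY, so Friday wins

# Further conditionalising on p (the bank is open Saturday) flips this:
print(round(eu_saturday(1.0), 2))      # 5.0, above U_FRIDAY, so Saturday wins
```

Since conditionalising on p changes the preference ordering, the third clause fails; this is just the formal analogue of the two bullet points above.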
So we get the right result, but the clause I had to add to get this looks rather ambiguous, and I suspect the disambiguations will look a lot like a pragmatic addition to the theory of knowledge. Or, more precisely, a pragmatic addition to the theory of knowledge that isn’t mirrored by a pragmatic addition to the theory of belief. So that looks like the kind of pragmatic encroachment I was trying to be rid of.
(Moreover, I’m not sure that ~f is exactly what should be added. In cases where the false belief is a close but flawed scientific theory, the right thing to conditionalise on seems to be not the negation of that theory, but the truth of the true theory. But then we have to ask how to make sense of subjective probabilities conditional on a theory the agent does not know and possibly cannot even conceptualise. This seems like a major project to work on.)
The second case is a problem for roughly everyone, I think. Let p and q be two propositions about history to which the agent assigns super high credence. In the case of p this is reasonable: she has lots of independent evidence for it. But in the case of q her belief is based solely on reading a work of historical fiction that, as she should have recognised, played fast and loose with the historical context in other respects. And let r be the proposition that a particular fair coin about to be tossed will land heads. The agent has to decide whether to take or decline a bet with the following characteristics.
* If p & r, win $3
* If p & ~r, lose $1
* If ~p & q, lose $1
* If ~p & ~q, lose $1,000,000
Assume for the sake of argument that the agent could (just) afford the loss of $1,000,000, and that a reasonable credence assignment would imply that this bet has negative expected utility, though according to the agent’s actual credences it has positive expected utility. (If you don’t think this is plausible, adjust the costs in the final line so the numbers make sense. I think it’s about right, though maybe the $1,000,000 should be a little lower.) If this all looks a little complicated, maybe a flow chart will help.
!http://brian.weatherson.org/gamble.jpg!
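For concreteness, here is a back-of-the-envelope expected value calculation. The specific credences are my own stand-ins (the post only says they are super high), and I treat p, q and r as probabilistically independent, which the case leaves open.

```python
# Expected dollar value of taking the bet. All credences below are
# illustrative stand-ins; the case only says they are 'super high'.
def expected_value(pr_p, pr_q, pr_r=0.5):
    """Treats p, q and r as probabilistically independent (an assumption)."""
    return (pr_p * pr_r * 3                          # p & r:   win $3
            + pr_p * (1 - pr_r) * -1                 # p & ~r:  lose $1
            + (1 - pr_p) * pr_q * -1                 # ~p & q:  lose $1
            + (1 - pr_p) * (1 - pr_q) * -1_000_000)  # ~p & ~q: lose $1,000,000

# The agent's actual credences: super high in both p and q.
print(round(expected_value(pr_p=0.999, pr_q=0.9999), 2))  # about 0.9

# Reasonable credences: still high in p, but much lower in q, since the
# only evidence for q is an unreliable historical novel.
print(round(expected_value(pr_p=0.999, pr_q=0.9), 2))     # about -99.0
```

The dominant term is the tiny ~p & ~q probability multiplied by the huge loss, which is why the agent’s unreasonably high credence in q is what makes the bet look good to her.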
In the earlier paper I discuss a case like this and argue that it is a case where the agent reasonably believes each of the premises of the following argument, but is not in a position to reasonably believe the conclusion.
bq. p
Given p, taking the bet is preferable to not taking it
Taking the bet is preferable to not taking it
This commits me, in what I think should be the most controversial part of my theory, to the following principle: Even though the agent reasonably believes that p, she can’t ignore worlds in which ~p in working out what to do. I think this principle sounds implausible at first, but ultimately defensible in these cases. The question now is whether I should accept the following principle.
bq. If the agent knows that p, then she can reasonably ignore worlds in which ~p in working out what to do. That is, she can reasonably infer from the fact that A is preferable to B given p to the preferability of A to B.
And although I want to reject that principle for reasonable belief, it seems more compelling as a principle about knowledge. If I accept it, then I’m committed to saying that the agent in this case does not know p. For she knows that taking the bet is preferable to not taking it given p, but does not know that taking it is preferable to not taking it, so by the principle she doesn’t know p. So this would be a very unusual case of justified true belief without knowledge.
To make things more complicated, let’s imagine that both p and q are actually true. As far as I can tell, in this case the agent does know that p on Jason Stanley’s theory, because now the agent is actually (assuming there are no other salient bets) in a low-stakes situation with respect to p. She has a bet on p with an expected return of $1 if p, and a loss of $1 if ~p. That looks like a paradigmatic low-stakes situation! So I’m a little worried that Jason’s view isn’t interest-relative *enough*. But maybe I’m wrong and he’ll write in to tell me how I’m misinterpreting him.
Note that if we do want to say that the agent doesn’t know p in this situation (even when q is true) we have to adjust the analysis of knowledge above. Now the quantifier has to range not only over false propositions the agent believes, but also over propositions to which the agent assigns an unreasonably high credence. At this stage the theory is starting to look a little disjunctive, but perhaps not so disjunctive as to be worth abandoning.
One last point. Given that I’m contemplating a gap between justified true belief and knowledge here, does that mean I’m taking back the claims in my earlier Phil Studies paper about Gettier cases? No. The argument there was merely that the standard _arguments_ against the JTB analysis weren’t very good, because those arguments consisted either of appeals to cases that we had reason to be suspicious of, or appeals to principles that seem false. And I still think both of those things are correct. (If anything, I’m _more_ persuaded of them now than I was when I wrote the paper.) But nothing I said there should have implied that we’d never find a good argument against the JTB theory, and perhaps there is such an argument to be found by digging among these cases involving interactions between practical and theoretical rationality.