It’s often claimed that there is a close connection between the tacit reasoning we use in Gettier cases and the safety constraint. Here, for instance, is John Hawthorne from page 54 of *Knowledge and Lotteries*.

Insofar as we withhold knowledge in Gettier cases, it seems likely that ‘ease of mistake’ reasoning is at work, since there is a very natural sense in such cases, in which the true believer forms a belief in a way that could very easily have delivered error.

I suspect that’s not true. I don’t mean to pick on John here; I think it’s a widespread view in epistemology. But it’s false.

Here’s what I’ll argue. (Step One) There are some Gettier cases where the resultant belief satisfies every safety condition we could want. In those cases we don’t tend to assign knowledge to the agent. (Step Two) The reasoning we use in those cases is the same as the reasoning we use in all Gettier cases. So, the reasoning we use in Gettier cases is not safety reasoning.

Here’s the example of a true, safe Gettier belief.

Bob and Bill are talking about a particular mathematical conjecture. Bob says, “I predict that either Fred will prove it, or it is unprovable.” A few days later, Bill is told by Frank, a trusted and generally reliable friend, that Fred has proved the conjecture. This is false – Fred merely proved a lemma that he thought would help with the proof, and in fact the conjecture is unprovable. Bill concludes, “So Bob was right, either Fred will prove it or it is unprovable.”

Now if we take the Gettier intuitions at face value, we should conclude that this is not a case of knowledge, because it is a case where the agent has concluded that a disjunction is true on the strength of gaining good evidence that the false disjunct is true. But the belief is by any measure safe.

First, the belief is necessarily true, so it satisfies Williamson’s safety requirement. Given the tight connection between Bill’s belief and Bob’s statement, there is no nearby world in which Bill’s belief has a different content, hence no nearby world in which it is false, so it satisfies my preferred safety requirement. And since the method Bill uses, namely coming to believe those predictions of Bob’s that are entailed by reports of Frank’s, yields true results in all nearby cases, it satisfies the principle *only use safe methods*. There’s really no sense here in which Bill’s belief is unsafe.

But it is hard to see how this case differs from other Gettier cases. If safety were crucial to the Gettier reasoning, then the intuitions in this case should be much weaker than in standard Gettier cases where safety is violated. But we see no such thing – the intuition here is exactly as strong as in the standard cases. So I think that what is central to the Gettier cases is not safety, but the fact that Bill’s final belief is supported by a false proposition.

Nice case!

Any randomly-formed belief in a true mathematical proposition is “safe” when construed counterfactual-style (as roughly ‘Bp @-> p’), right? And I’m inclined to say those beliefs aren’t knowledge either.
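To make that point explicit (a sketch, assuming a standard Lewis–Stalnaker reading of the would-counterfactual, written here as a boxed arrow): if p is necessarily true, the counterfactual construal of safety holds trivially, however the belief was formed.

```latex
% If p is necessary, p holds at every world, so in particular it
% holds at every nearby world in which the belief Bp is held:
\Box p \;\Rightarrow\; \forall w\,(w \Vdash p)
       \;\Rightarrow\; \forall w\,(w \Vdash Bp \rightarrow p)
       \;\Rightarrow\; (Bp \mathrel{\Box\!\!\to} p)
```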

But there does still seem to me a wider, intuitive sense in which Bill’s reasoning – and the reasoning of a random math guesser – could “very easily have delivered error”. (I’m not sure how to capture that modally, though!)

I think in lots of mathematical cases wild guesses could easily lead to error, even if what is guessed is necessarily true. But I don’t think that is what is happening here. After all, Bill isn’t coming to believe any old thing, he’s coming to believe Bob’s prediction. That’s why I think this belief satisfies lots of safety principles, not just the simplest one.

Might it be true that there is a nearby possible world in which Bob makes a similar prediction about a different conjecture, and in this possible world the prediction Bob makes is false? This seems to capture a sense in which Bill ‘could have been wrong’, i.e., to suggest a safety constraint under which Bill’s belief would be unsafe.

Another way to go is to allow for epistemically possible, but metaphysically impossible, worlds. (We need them anyway for antecedents of counterfactuals of the following kind: if 2 plus 2 were 5, you and I would have enough money to buy a pizza.) Such is the world W* in which the conjecture is false, and it is epistemically (relative to what Bob knows about math) very close to the actual world.

I guess I’m not totally clear on the notion of safety that’s being invoked here, but I think that Bill’s method isn’t terribly safe. The fact that Frank misreported Fred’s progress suggests that Frank could have easily misreported Fred as having proved conjecture B instead of conjecture A, especially if B and A are related conjectures, both thought to be related to the same lemma. Now, as you’ve stated Bill’s method, he wouldn’t come to believe that either Fred has proved B or it is unprovable – he would only come to believe it if Bob had predicted this before Frank reported it.

But as “Wrong” points out above, there seems to be a nearby possible world in which Bob predicts in addition that either Fred will prove B or B is unprovable. In a world with both this second prediction and Frank’s second misreporting, Bill will come to believe that either Fred will prove B or B is unprovable. There is some sense in which this world is not terribly distant (though maybe it’s not terribly close either), and in this world Bill uses the method to achieve a false belief, so the method isn’t terribly safe.

This seems to me to be the relevant test for safety – consider nearby worlds where different beliefs are achieved by the same method, and see how likely those are to be true. I guess I’m not sure if this is what you mean by “yields true results in all nearby cases”. This case isn’t terribly nearby, but that’s just because the method is such a constrained one, so we have to look somewhat far afield for any nearby cases.

Brian, the belief is this, right:

“… either Fred will prove it, or it is unprovable.”

Now you suggest that the belief is necessarily true, but that depends on whether you take the demonstrative ‘it’ to designate rigidly the actual theorem or to designate the mathematical conjecture that Fred is trying to prove. In the latter case it seems like the belief is not necessarily true, since in at least some worlds (perhaps in some nearby worlds) the conjecture that Fred is working on is not a theorem. But then perhaps it is not safe.

I agree with people who say that it is logically possible for Bill to come to a false belief this way. But it isn’t very likely. Two independent things would have to have gone wrong, and they would have had to go wrong in the same way, and at least one of them (Frank’s testimony) is specified to not frequently go wrong. That seems like safety to me.

And I think it isn’t too hard to make the example clearly one where ‘it’ is a rigid designator. Maybe my specification didn’t do that, but I only need one example, and I think we should be able to specify that that is the case, as long as it is ever possible to directly refer to mathematical hypotheses.

I got thinking about whether the play with mathematical hypotheses and rigid designation is necessary to construct the example, and got side-tracked, but I hope interestingly so.

The recipe for generating Brian-style Gettier cases seems to just require that the subject forms a justified belief, P, by a safe method. They then form a belief in the disjunction of P and some necessary truth X (doesn’t matter what X is). P is false, but (P or X) is true, justified, and plausibly safe (assuming, as seems plausible, that disjoining a belief with a necessary truth is also a safe method).
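Put schematically (a sketch – ‘safe’ here can be read as any of the characterisations mentioned above):

```latex
% 1. S forms a justified belief P by a safe method; P is in fact false.
% 2. S disjoins P with some necessary truth X, coming to believe P \lor X.
% 3. P \lor X is true (via X), justified (via P), and plausibly safe,
%    since the necessity of X guarantees the disjunction at every world:
\Box X \;\Rightarrow\; \Box (P \lor X)
```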

Two brief comments. Firstly, that these cases of safe Gettier beliefs are quite so easy to construct should perhaps ring alarm bells – it may be that what these cases show is that safety, as characterised in the ways mentioned above, is just too easily had. If that’s the case, then there’s a worry Brian’s conclusion will be robbed of its interest; if safety really is that cheap, we shouldn’t be surprised to learn via Brian’s Gettier case that knowledge is significantly more expensive.

The second comment is a qualification of the first. It’s only true that we can choose any necessary truth to construct the example if we restrict our attention to Williamson’s and the safe-methods characterisations of safety. Brian’s own favoured version doesn’t seem to generate safe Gettier beliefs so easily. This seems to me to be something of a point in its favour; content safety (plus JTB) doesn’t suffice for knowledge, if Brian’s original case hits its mark, but this result seems of real interest because content-safe Gettier beliefs are a rarer breed than (say) Williamson-safe Gettier beliefs.

My head’s not clear enough for me to tell if I’m making some horrific error here, but I’ll post and hope I’m not…

For some discussion on cases like these, you may wish to check out “Safety and Epistemic Luck”, forthcoming in Synthese by Ram Neta and myself. It can be found on Ram’s website, http://www.unc.edu/depts/phildept/neta.html

For example, we discuss a sample case where someone has a justified safe true belief in “Jones owns a Ford or the cube root of 1728 is 12”. We also discuss other cases where the proposition believed is not necessary but is still safe, etc.