Fantl and McGrath on Fallibilism

I’ve been reading through Jeremy Fantl and Matthew McGrath’s excellent _Knowledge in an Uncertain World_. So there will be a few posts about it to come. I’ll start with a question about their definition of fallibilism. They offer up three definitions, and endorse the third.

*Logical Fallibilism* – You can know something on the basis of non-entailing evidence.

*Weak Epistemic Fallibilism* (hereafter, WeakEF) – You can know something even though it is not maximally justified for you.

*Strong Epistemic Fallibilism* (hereafter, StrongEF) – You can know that _p_ even though there is a non-zero epistemic chance that not-p.

They frequently restate StrongEF as the doctrine that you can know things with an epistemic chance of less than 1. That's equivalent only if the following is true: the epistemic chance of _p_ is less than 1 iff the epistemic chance of not-p is non-zero. And that's true, I guess, if epistemic chance is a probability function. (That's not the only way it could be true, but I can't see any other good motivation for the equivalence.) And I really don't see any reason whatsoever to believe that epistemic chance is a probability function.
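For concreteness, here is the one-step derivation that would underwrite the restatement; the chance function Ch is my notation, not theirs.

```latex
% A sketch of why the two formulations coincide *if* epistemic
% chance (written Ch, my notation) obeys the probability axioms.
% The crucial axiom is additivity over the partition {p, not-p}:
\[
  \mathrm{Ch}(p) + \mathrm{Ch}(\neg p) = 1
\]
% from which it follows that
\[
  \mathrm{Ch}(p) < 1 \iff \mathrm{Ch}(\neg p) = 1 - \mathrm{Ch}(p) > 0
\]
% Drop additivity and nothing ties the value of Ch(p) to the
% value of Ch(not-p), so the restatement needs independent support.
```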

We never get a full definition of ‘epistemic chance’. It’s partially introduced through its natural language meaning. We talk about there being a chance that Oswald didn’t shoot Kennedy, or that the Red Sox will win the pennant this year. But that intuitive notion clearly isn’t a probability function. After all, in that sense of chance there’s some chance that the twin prime conjecture is true, and some chance that it is false. Yet a probability function assigns probability 1 to every necessary truth, and the conjecture is either necessarily true or necessarily false, so one of those two things has probability zero.

The other way that ‘epistemic chance’ is introduced is in terms of rational gambles. I assume the idea is something like this. The epistemic chance of _p_ is _x_ iff it would be rational to regard as fair a bet that costs _x_ utils and returns 1 util iff _p_. Fantl and McGrath never say anything that precise, but that seems to be the idea.
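In symbols, the proposal would be something like the following. The formalization, and the bet B, are mine, not the book's.

```latex
% A formalization of the betting gloss (mine, not Fantl and
% McGrath's). Let B(x, p) be the bet that costs x utils and
% pays 1 util if p is true, and 0 utils otherwise. Then:
\[
  \mathrm{Ch}(p) = x \iff \text{it is rational to regard } B(x, p) \text{ as fair}
\]
% On an expected-utility calculation the bet's net value is
% Ch(p) - x, so it comes out fair exactly when x = Ch(p). Note,
% though, that this calculation already treats Ch as a
% probability, which is the very point in dispute.
```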

Now the same objection can be raised. It is rational to regard various bets at non-zero prices on the truth, or on the falsity, of the twin prime conjecture as fair. So epistemic chance so defined can’t be a probability function.

More seriously, I don’t think there’s a reason to think it is *anything like* a probability function. It doesn’t, as far as I can tell, have anything like the *structure* of a probability function.

For one thing, I don’t see any reason to think that it’s linear. That is, I don’t see why we should think epistemic chance defined in terms of gambles produces anything more than a very partial order over propositions. If you believe in totally ordered utilities, you might think the definition I gave two paragraphs back can produce a total ordering over propositions. But I don’t really believe that utilities are totally ordered.
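Here is one way, my construction rather than anything in the book, to see what the betting test delivers when utilities are only partially ordered.

```latex
% A sketch of the ordering the betting test induces, reusing the
% hypothetical bet B(x, p) from above. Say q is at least as
% epistemically likely as p when every price worth paying for a
% bet on p is also worth paying for the matching bet on q:
\[
  p \preceq q \iff \forall x \,\big(B(x, p) \text{ is fair or better}
    \Rightarrow B(x, q) \text{ is fair or better}\big)
\]
% This relation is reflexive and transitive, i.e. a preorder. But
% nothing in the definition forces totality: if the utilities
% staked in the two bets are themselves incomparable, neither
% p <= q nor q <= p need hold, and no single real number can
% then serve as the chance of either proposition.
```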

For another, I don’t see any reason to think that it’s got an upper and lower limit. Maybe “I exist” is at the top. But couldn’t we get even more confident in it, that is, even more willing to accept outrageous bets, by thinking through some philosophy, reading the _Meditations_, etc.? I think that’s a reason to think that the chance of “I exist” can go up, at least if ‘chance’ is defined in terms of rational gambles.

Even if I were wrong about both these things, I’d think epistemic chances were more likely to be Dempster-Shafer functions than probability functions. And on that view it wouldn’t be equivalent to say that _p_ has a chance of 1 and to say that not-p has a chance of zero.
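The relevant Dempster-Shafer fact is standard, and not something the book discusses: belief functions are only superadditive over a proposition and its negation, so the two claims can come apart.

```latex
% Why the equivalence fails for Dempster-Shafer belief functions
% (a textbook fact, not anything in Fantl and McGrath). A belief
% function Bel satisfies only:
\[
  \mathrm{Bel}(p) + \mathrm{Bel}(\neg p) \le 1
\]
% Consider the vacuous belief function, which puts all the mass
% m on the whole frame Omega, i.e. total ignorance about p:
\[
  m(\Omega) = 1 \quad\Rightarrow\quad
  \mathrm{Bel}(p) = 0 \ \text{ and } \ \mathrm{Bel}(\neg p) = 0
\]
% Here not-p has a chance of zero while p's chance is 0, not 1;
% equivalently, p's chance is below 1 even though not-p's chance
% is zero. Either way, the equivalence breaks down.
```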

I think one of the more pernicious influences of Bayesianism on epistemology is that theorists just assume that various functions are probability functions. This isn’t a mistake Bayesians make; they have long _arguments_ that probability theory is applicable where they apply it. (I don’t think those are typically good arguments, but that’s another story.) But in mainstream epistemology, we see probability theory brought in, either explicitly or tacitly, when it seems far from clear that it is appropriate.