Pragmatics, Belief and Knowledge

I’ve been thinking a bit about Jeremy Fantl and Matthew McGrath’s argument for ‘pragmatic encroachment’ into epistemology. Unless I’m missing some important distinctions, their argument is an argument for a position called ‘value-based epistemology’ in the feminist epistemology literature. There is a “long discussion of their paper at Certain Doubts”:http://bengal-ng.missouri.edu/~kvanvigj/certain_doubts/index.php?p=273#more-273. They end up arguing for the following principle:

bq. Two subjects can have the same evidential (or, more generally, purely epistemic) standing to a proposition, but one can be justified and the other not, simply because, for one, the stakes are higher.

(The quote is filched from a comment of Fantl’s on the CD thread.) I want to set out a position that isn’t yet occupied in this debate: this principle may be true, and yet there be no interesting sense in which pragmatics encroaches into _epistemology_. The position is that what it is to believe a proposition can be affected by pragmatic matters, but once we’ve fixed what belief amounts to in a given practical situation, what it takes to be justified in having that attitude does not vary with practical considerations.

There’s a big project that’s at the back of this – a Keynesian “Probability First” approach to epistemology. The position I’m taking here is that there is no pragmatics in probabilistic epistemology, and hence no pragmatics in epistemology proper, but plenty of pragmatics in the relationship between probabilistic and non-probabilistic doxastic states, and hence pragmatics in non-probabilistic epistemology. I don’t have convincing arguments for this position; for instance, I don’t have responses to the feminist arguments for value-based epistemology I alluded to above. But I’m going to set out the position anyway.

We start with a familiar puzzle, the puzzle of explaining the connection between believing a proposition to degree _x_ and believing the proposition _tout court_. I’m going to run with a fully functionalist solution here. To believe _p_ is to treat _p_ as true for the purposes of practical decision making. That is, the following principle, which should look very similar to a principle Fantl and McGrath endorse, is (to a first approximation!) true.

bq. X believes that _p_ iff for all actions A, B, X prefers A to B iff she prefers A & _p_ to B & _p_.

There are some complications to this, to deal with inconsistent agents and to deal with cases where _p_ is of no practical importance, so it isn’t perfect as it stands. But I think it’s close enough to the truth to run with, at least properly understood.

The proper understanding involves getting the domain of the quantifier right. If we take the quantifier to range over all possible actions, then the analysis reduces to the familiar account that X believes that _p_ iff X believes _p_ to degree 1. Proof: Let A be betting on _p_ at extremely long odds, and B be declining that very bet. X prefers A & _p_ to B & _p_, but will prefer A to B only if her degree of belief in _p_ is sufficiently close to 1. By varying A, we can show that for any _x_ less than 1, if X believes _p_ to that degree, X does not believe that _p_. But we need not take that to be the right quantifier scope. Let us say instead, in keeping with the functionalism driving the project, that the quantifier ranges over A and B that are live practical options for X. If no one is offering X a bet on _p_ at long odds, such an option will not be in the quantifier domain.
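The arithmetic behind the long-odds proof can be made concrete. As a toy illustration (the stakes here are hypothetical, not anything from Fantl and McGrath): suppose the bet wins 1 unit if _p_ and loses a stake S if not-_p_, while declining pays 0 either way. Accepting then has higher expected utility than declining only when x · 1 − (1 − x) · S > 0, i.e. when x > S/(S + 1), and that threshold approaches 1 as S grows.

```python
# Hypothetical long-odds bet: win 1 unit if p, lose stake S if not-p;
# declining the bet pays 0 either way.
# Accepting has higher expected utility than declining only when
#   x * 1 - (1 - x) * S > 0,  i.e.  x > S / (S + 1).

def acceptance_threshold(stake):
    """Minimum credence in p at which accepting the bet beats declining."""
    return stake / (stake + 1)

for stake in (1, 10, 1000, 10**6):
    print(stake, acceptance_threshold(stake))

# As the stake grows without bound the threshold approaches 1, so only
# an agent whose credence is arbitrarily close to 1 prefers the bet
# at every odds -- which is why the unrestricted quantifier collapses
# belief into degree-1 belief.
```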

Let us apply this to something like Fantl and McGrath’s train case. X and Y are in Ithaca, and both of them need to get access to a certain book that is not in the Cornell library. They both know the book is in the Rochester library, which is normally about a 2 hour drive from Ithaca. X would prefer to get the book sooner rather than later, but time is not of the essence. Y also prefers this, but it is absolutely vital that she get access to the book within the next 2 hours. They check whether the Syracuse library (which is about an hour drive away, and over an hour from Rochester) has the book, but sadly the Syracuse computers aren’t working. So they (simultaneously) ask a colleague whether the book is at Syracuse. The colleague says that the book was in that library last week, and since it is not on a popular subject it probably wouldn’t have been borrowed since then. So they each form a high degree of belief (short of one) that the book is in Syracuse.

Now consider these actions and propositions:

A = Go to Syracuse to look for the book
B = Go to Rochester to look for the book
_p_ = The book is in Syracuse

X prefers A to B, since the book is probably there, and if not it isn’t a huge cost to turn around and head out to Rochester. But Y prefers B to A, since if she goes to Syracuse and the book is not there, she wouldn’t be able to make it to Rochester within the 2 hour deadline. Obviously both of them prefer A & _p_ to B & _p_. Assuming there are no other relevant practical considerations, my theory says that X believes that _p_ and Y does not, even though they have the very same evidence, and the very same degree of belief in _p_.
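A quick expected-utility check bears this out. The payoff numbers below are entirely made up for illustration (the case itself supplies none): retrieving the book promptly is worth 9, X’s longer drive after a wrong guess is worth 4, the sure Rochester trip is worth 8, and Y missing her hard deadline is worth −91. At a shared credence of 0.9 in _p_, these numbers deliver exactly the preferences just described.

```python
# Toy expected-utility check of the preferences in the train case.
# All payoffs are hypothetical: book retrieved promptly = 9,
# X's longer drive after a wrong guess = 4, sure Rochester trip = 8,
# Y missing her hard deadline = -91.

credence = 0.9  # shared degree of belief that the book is in Syracuse

def eu(payoff_if_p, payoff_if_not_p, x=credence):
    return x * payoff_if_p + (1 - x) * payoff_if_not_p

# X (no deadline): trying Syracuse first beats driving straight to Rochester.
x_prefers_A = eu(9, 4) > eu(8, 8)      # roughly 8.5 > 8.0
# Y (hard deadline): Rochester direct beats risking Syracuse.
y_prefers_B = eu(8, 8) > eu(9, -91)    # roughly 8.0 > -1.0

print(x_prefers_A, y_prefers_B)  # True True
```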

Fantl and McGrath say that in a case like this, it is rational for X to believe that _p_, and hence I guess that X can know that _p_, although this is not rational for Y. I agree. But not because I have any pragmatic considerations in my theory of epistemic justification. Rather, in this circumstance, believing that _p_ is a very different state for Y than it is for X. To use somewhat arbitrary numbers, X need only believe _p_ to degree 0.8 to believe that _p_, while Y needs to believe it to degree 0.99 or even higher. And both of them are justified in believing _p_ to some degree between these two numbers.
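One hedged way to see where such numbers might come from: with hypothetical payoffs chosen purely for illustration (a promptly retrieved book worth 9, X’s detour-if-wrong worth 4, Y’s missed deadline worth −91, and the sure Rochester trip worth 8), the credence at which A and B break even comes out at exactly 0.8 for X and 0.99 for Y.

```python
# Break-even credence: the degree of belief x at which going to
# Syracuse (A) and going straight to Rochester (B) are equally good:
#   x * a_if_p + (1 - x) * a_if_not_p = b
# All payoffs below are hypothetical, chosen purely for illustration.

def break_even(a_if_p, a_if_not_p, b):
    return (b - a_if_not_p) / (a_if_p - a_if_not_p)

# X: a wrong guess just means a longer drive (payoff 4 instead of 9).
x_threshold = break_even(9, 4, 8)
# Y: a wrong guess means missing a hard deadline (payoff -91).
y_threshold = break_even(9, -91, 8)

print(x_threshold, y_threshold)  # 0.8 0.99
```

The point of the toy calculation is just that raising the downside of acting on _p_ when _p_ is false pushes the threshold toward 1, without any change in evidence.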

I need to say a lot more about my analysis of belief, and I will over upcoming posts. As alluded to above, I think it has plausible things to say about the lottery and preface paradoxes. I also think it captures something very close to our ordinary notion of belief, though I’m sure there will be some counterintuitive consequences. I’m a little worried about what to say about the practically irrelevant propositions, and about the inconsistent agents, but I think I’ve got things to say about them as well. I’ll also say some stuff about how this compares to other analyses in the literature, though I’ll do that after I flesh out the positive view more. (I hope it doesn’t compare to an existing view by being identical to it, but I don’t do lit searches before blog posts, so I could have been beaten to the punch here.)