This is a short post where I jot down my initial impressions of Jake Ross and Mark Schroeder’s interesting paper “Belief, Credence, and Pragmatic Encroachment”, and in particular how it compares to my views on belief and credence. I’m not going to summarise the paper, so this post won’t make a lot of sense unless you’ve read their paper too.

I think there are three main criticisms of my view.

- The second version of the Renzo case (on page 11).
- The ‘correctness’ argument on page 18.
- The arguments from irrelevant propositions towards the end of the paper.

I think the Renzo argument is a good objection to the way I developed my view in “Can we do without Pragmatic Encroachment?”. But I’m pretty sure the modifications I make in “Knowledge, Belief and Interests” can deal with the case. In the latter paper, I say that if the agent is interested in the expected value of a random variable *X*, and their interest in it goes to such a precise level that they care about the difference between the expected value of *X* and the expected value of *X* given *p*, then even when those values are very close, if they are not identical for the (theoretical) purposes of the agent, the agent doesn’t believe *p*. And that’s what is going on in Renzo’s case. He clearly cares, at least for now, about the difference between a variable being -3, and it being -3.0003. And whether the ticket scanners work is relevant to that difference. So he doesn’t simply believe the ticket scanners will work.

Of course, “Knowledge, Belief and Interests” isn’t finished, let alone published. So I don’t think Jake and Mark should be worrying about whether I have a reply to them in that paper. All I want to note is that I agree with them that this is the kind of case a theory should get right, and their theory gets the right answer, but I think the most recent version of my theory does as well.

I’m actually not sure whether the correctness argument is meant to be an objection to my theory, but I think it’s written as an argument against a class of theories of which mine is a member. And I think I can avoid it. Here’s what they say:

Whatever it is that constitutes, or makes it the case, that an agent is wrong about whether p when p is false, it can’t be an attitude that involves, or commits the agent to, acknowledging the possibility that p is false.

But on my theory, belief that *p* is inconsistent with the agent *acknowledging the possibility that p is false*, at least on the natural ways of understanding the italicised phrase. If the agent is acknowledging that ¬*p* has a non-zero probability, then the agent cares about the difference between the probability of ¬*p*, and the probability of ¬*p* given *p*. So they don’t really believe *p*.

I hope that on my theory, at least in the “Knowledge, Belief and Interests” form, belief really does require ruling out the possibility of error, at least in occurrent deliberation. And that’s enough, I think, to avoid this objection.

The irrelevant proposition objections are trickier, but I think these are tricky for everyone. So I’m inclined to make a *tu quoque* objection. Here’s how Jake and Mark set up their positive theory.

Since we must treat uncertain propositions as true, and since we must, at least sometimes, do so without first reasoning about whether to do so, it seems we must have automatic dispositions to treat some uncertain propositions as true in our reasoning. It would not make sense, however, for us to have indefeasible dispositions to treat these propositions as true in our reasoning. For if an agent had an indefeasible disposition to treat a proposition p as true, then she would act as if p even in a choice situation such as High, in which she has an enormous amount to lose in acting as if p if p is false, and little to gain in acting as if p if p is true. Thus, having an indefeasible disposition to treat p as true would make one vulnerable to acting in dangerously irrational ways.

What we should expect, therefore, is that for some propositions we would have a defeasible or default disposition to treat them as true in our reasoning—a disposition that can be overridden under circumstances where the cost of mistakenly acting as if these propositions are true is particularly salient. And this expectation is confirmed by our experience. We do indeed seem to treat some uncertain propositions as true in our reasoning; we do indeed seem to treat them as true automatically, without first weighing the costs and benefits of so treating them; and yet in contexts such as High where the costs of mistakenly treating them as true is salient, our natural tendency to treat these propositions as true often seems to be overridden, and instead we treat them as merely probable.

I don’t think this will really do as it stands. What does it mean to have a *default* disposition to treat something as true in reasoning? Actually, let’s step back from that. What does it mean to treat something as true in reasoning?

One answer is that it means using that thing as a premise in our reasoning. But that can’t be necessary. I haven’t used the fact that Hawthorn won the 1988 Grand Final as a premise in any reasoning in years, but I have believed it this whole time. Perhaps it is to use it as a premise if it is relevant. But that can’t be sufficient. I have used the proposition that Melbourne won the 1988 Grand Final as a premise in every situation where it would have been relevant in the last few years, i.e., in all none of those situations. But I don’t believe Melbourne won; as I just said, I believe Hawthorn won.

Maybe we can steer between these two extremes the following way.

- To treat something as true in a circumstance is to use it as a premise in reasoning if it is relevant. (Note that the ‘if’ here is material implication, so this clause is easy to satisfy.)
- To be disposed (as a default) to treat something as true is to be disposed to use it as a premise in reasoning in any normal situation where it is relevant.

Perhaps there is some way of tidying this up, but I suspect the objection I’m about to make will apply to any such ‘tidying up’. The worry is that there might be propositions which aren’t relevant to any reasoning we might undertake at all in normal situations. (If Jake and Mark want to reject this, I’ll simultaneously reject the idea that there are propositions which are “strongly practically irrelevant” in the sense that they need for the objections to me at the end of the paper to work.) And those propositions – and their negations! – will be automatically believed on the reasoning disposition theory of belief.

I can see a clever way out of this, one that seems like it would be attractive to the reasoning disposition account. I’ve presupposed that there is such a thing as ‘normal circumstances’. Perhaps that is false. Perhaps what we should say is that when we’re considering whether an agent believes *p*, we should look at their dispositions over some range of reasoning situations which include some situations in which *p* is relevant. More formally, we’d like something like the following function,

f: C × P → D

where *c* ∈ C is a circumstance, *p* ∈ P is a proposition, and f(*c*, *p*) ∈ D is a set of problems, or decisions, the agent must make. And then we’ll say that S believes *p* in *c* iff S is disposed to use *p* as a premise for any decision in f(*c*, *p*). This might well work.

But what I really want to conclude is that the chances of saving the reasoning disposition theory this way are roughly as good as the chances of saving a theory like mine. My theory says that there is some function g from circumstances and propositions to decisions, and that S believes *p* in *c* iff her answer to any problem in g(*c*, *p*) is the same as her answer to any such problem conditional on *p*.

Now I don’t just say that, I also say something about what g might be. Basically, g(*c*, *p*) is the set of all problems relevant in *c*, plus the problem of whether the probability of *p* is more or less than ½. Perhaps that’s not right; I agree that Jake and Mark raise some serious puzzles for this way of understanding what g is. Indeed, I feel I should tinker with g given what they say.
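To make the shape of this proposal concrete, here is a toy sketch of the matching-answers test: S believes *p* in *c* iff, for every decision problem in g(*c*, *p*), S’s preferred option is the same whether or not S conditionalises on *p*. Everything here – the function names, the numbers, and the two little decision problems – is my own invented illustration, not anything from the papers under discussion.

```python
def expected_utility(option, prob, utility):
    """Expected utility of an option, given a probability function over worlds."""
    return sum(prob[w] * utility[option][w] for w in prob)

def best_option(options, prob, utility):
    """The option that maximises expected utility."""
    return max(options, key=lambda o: expected_utility(o, prob, utility))

def conditionalise(prob, p_worlds):
    """Bayesian conditionalisation on the worlds where p is true."""
    total = sum(prob[w] for w in p_worlds)
    return {w: (prob[w] / total if w in p_worlds else 0.0) for w in prob}

def believes(p_worlds, prob, problems):
    """The matching-answers test. `problems` plays the role of g(c, p):
    the decision problems relevant in c. The final check is the extra
    problem of whether the probability of p exceeds 1/2."""
    cond = conditionalise(prob, p_worlds)
    for options, utility in problems:
        # Belief fails if conditionalising on p would change the answer.
        if best_option(options, prob, utility) != best_option(options, cond, utility):
            return False
    return sum(prob[w] for w in p_worlds) > 0.5
```

On this sketch, adding a high-stakes problem (say, a bet that loses badly if *p* is false) to g(*c*, *p*) can flip the verdict from belief to non-belief, which is how the first argument of g makes belief sensitive to the practical situation.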

But ultimately I think the big picture here is not about whether I’ve got g right or wrong. What I really hope is that the following two claims are true.

- There is some way of defining g so that the theory of belief in terms of matching answers to conditional and unconditional problems is correct; and
- The function is sensitive to its first argument, so that belief turns out to be sensitive to practical situations in roughly the way several of us have argued over the past 10 years or so.

OK, so this has got a bit longer than the short post I hoped for at the top. But here’s the tentative conclusion I wanted to draw. My theory of belief in terms of credences etc., needs a function like g in order to work. I think the reasoning disposition theory needs a function like f in order to work. I think the odds of finding f are about as good as the odds of finding g. Indeed, I reckon any plausible candidate for g will be a plausible candidate for f, and vice versa.

So I think that once we start spelling out the vague notion of a default, defeasible disposition, we’ll have to deal with the same problems that led to the quirky consequences of my theory. If the reasoning disposition theory avoids having odd consequences merely by being vague at a key point like this, that isn’t a virtue of the reasoning disposition theory.

Many thanks to Weatherson for his comments on “Belief, Credence, and Pragmatic Encroachment.” I’ll make a stab at some replies.

As I see it, Mark Schroeder and I raise five objections to the kind of view of belief that Weatherson (along with some others, including Fantl and McGrath) defends:

(1) This kind of view implies that one can believe a proposition even if one has no disposition to treat it as true in reasoning.

(2) This kind of view fails to predict that someone who believes that p is mistaken if p is false.

(3) This kind of view has implausible implications concerning changes in belief.

(4) This kind of view implausibly implies that, for many propositions (what we call strongly practically irrelevant propositions), any credence in them above .5 is sufficient for outright belief.

(5) This kind of view implausibly implies that one can rationally believe each proposition in an inconsistent triad, so long as these propositions are strongly practically irrelevant.

Weatherson does not discuss objection (3), but he does discuss the other four. He claims that, while the older version of his view to which we were responding in our paper may be subject to objection (1), his new view gets around it. And he claims that his new view likewise gets around objection (2). Concerning objections (4) and (5), Weatherson claims that, while both versions of his view may be subject to these objections, our view is subject to the same objections, and that if there’s any way we could modify our view to avoid these objections, similar modifications would allow his view to get around these objections.

I’m not convinced that Weatherson’s new view avoids objections (1) and (2), and I am quite confident that Weatherson is mistaken in claiming that the view I defend is subject to objections (4) and (5). Let’s look at Weatherson’s responses in turn.

Concerning (1), Weatherson holds that, on his new view, someone who never treats p as true in reasoning, but instead always takes into account the possibility that ~p, won’t count as believing that p. He argues this in relation to the example we discuss in our paper—Renzo, who, when deciding how to get to his destination, always takes into account the possibility that ~q, where q is the proposition that the subway scanner will accept his ticket. We considered a case where, conditional on q, the expected cost of taking a given train is $3, whereas conditional on ~q the expected cost is $3.0003. Weatherson replies:

[Renzo] clearly cares, at least for now, about the difference between a variable being -3, and it being -3.0003. And whether the ticket scanners work is relevant to that difference. So he doesn’t simply believe the ticket scanners will work.

But I don’t think this is right. Renzo might take into account the possibility of ~q in his reasoning without caring about the difference between $3 and $3.0003. For he might not realize, before he does the calculation, that the difference is so trivial—he might take into account the possibility of ~q precisely because he thinks this possibility might make a significant difference to the expected values of his alternatives. Moreover, far from caring about the difference between $3 and $3.0003, Renzo needn’t even pay any attention to this difference. Even if he takes into account the possibility of ~q in his calculations, he might round off the results to the nearest dollar, and treat the expected cost of the option as $3 both conditional on q and conditional on ~q.

Concerning (2), Weatherson points out that our objection relies on the following premise:

Whatever it is that constitutes, or makes it the case, that an agent is wrong about whether p when p is false, it can’t be an attitude that involves, or commits the agent to, acknowledging the possibility that p is false.

Weatherson replies:

But on my theory, belief that p is inconsistent with the agent acknowledging the possibility that p is false, at least on the natural ways of understanding the italicised phrase. If the agent is acknowledging that ¬p has a non-zero probability, then the agent cares about the difference between the probability of ¬p, and the probability of ¬p given p. So they don’t really believe p.

Now I agree that, on Weatherson’s new view, someone who believes that p would not actively or occurrently acknowledge the possibility that ~p, e.g., by uttering or occurrently thinking “maybe p is false.” Hence, on Weatherson’s new view, believing that p is incompatible with occurrently acknowledging the possibility of ~p, since anyone who does the latter would have to care about the possibility that ~p. But one can be committed to doing something without actually doing it. Thus, if I hold that all men are mortal and that Socrates is a man, then I’m committed to holding that Socrates is mortal, even if I have no interest in whether Socrates is mortal, and even if I would never bother forming any beliefs on the subject. Similarly, if I have credence .01 in the possibility that ~p, then it seems there’s a sense in which I’m committed to acknowledging the possibility that ~p. After all, if someone were to ask me how likely it is that ~p, and I said “the probability is .01,” then I’d be responding sincerely. And if I said “the probability is less than .01,” then I’d be responding insincerely.

Concerning (4) and (5), Weatherson claims that our view is subject to analogous objections. He says the following:

There might be propositions which aren’t relevant to any reasoning we might undertake at all in normal situations … And those propositions – and their negations! – will be automatically believed on the reasoning disposition theory of belief.

I believe that Weatherson’s response relies on one assumption that is definitely false, and two more assumptions that are questionable at best.

(1) The definitely false assumption on which Weatherson’s response relies is that, on the view of belief that Mark and I defend, being disposed to treat p as true in reasoning is a sufficient condition for believing that p. But that isn’t our view. Rather, our view is that such a disposition is a necessary condition for believing that p. Hence, even if we grant Weatherson’s claim that, in the case of strongly practically irrelevant propositions, agents are always trivially disposed to treat them as true, it wouldn’t follow that, on our view, agents always count as believing such propositions.

(2) Weatherson’s second questionable assumption is that, on the view of belief Mark and I defend, the disposition involved in believing that p is a disposition to treat p as true in any normal circumstance in which it is relevant. But again, that isn’t our view. Our view is that believing that p involves a disposition to treat p as true in any circumstance in which it is relevant. We claim, however, that in circumstances where the costs of mistakenly treating p as true are salient, this disposition will typically be masked or overridden. Consequently, the problem Weatherson points to does not arise. For every proposition, no matter how practically irrelevant, there will be some possible circumstances in which it is relevant (e.g., circumstances in which one is offered a bet on the proposition in question). And so, even in the case of strongly practically irrelevant propositions, our view does not imply that the disposition to treat them as true in the pertinent circumstances is a trivial one.

(3) The third questionable assumption of Weatherson’s argument is that, whenever it is impossible for a circumstance C to obtain, one will trivially have the disposition to phi on the condition that C obtains. This assumption underlies his claim that, since there are no normal circumstances where strongly practically irrelevant propositions are relevant, we trivially count as being disposed to treat them as true in any such circumstance. But this assumption hardly seems plausible. To give a counterexample inspired by Dan Nolan: Fermat had the disposition to be alarmed in any circumstance in which he is shown a valid proof of the falsity of Fermat’s Last Theorem, but starving children in Africa have no such disposition. It follows from Weatherson’s assumption, however, that starving children in Africa are disposed to be alarmed upon being shown such a valid proof, since it is impossible for such a circumstance to obtain.