Ross and Schroeder on Belief

This is a short post where I jot down my initial impressions of Jake Ross and Mark Schroeder’s interesting paper “Belief, Credence, and Pragmatic Encroachment”:http://www-bcf.usc.edu/~jacobmro/ppr/Belief_Credence_and_Pragmatic_Encroachment.pdf, and in particular how it compares to my views on belief and credence. I’m not going to summarise the paper, so this post won’t make a lot of sense unless you’ve read their paper too.

I think there are three main criticisms of my view.

  1. The second version of the Renzo case (on page 11).
  2. The ‘correctness’ argument on page 18.
  3. The arguments from irrelevant propositions towards the end of the paper.

I think the Renzo argument is a good objection to the way I developed my view in “Can we do without Pragmatic Encroachment?”:http://brian.weatherson.org/cwdwpe.pdf. But I’m pretty sure the modifications I make in “Knowledge, Belief and Interests”:http://brian.weatherson.org/KBI.pdf can deal with the case. In the latter paper, I say that if the agent is interested in the expected value of a random variable _X_, and their interest is precise enough that they care about the difference between the expected value of _X_ and the expected value of _X_ given _p_, then the agent doesn’t believe _p_ whenever those two values are not identical for the (theoretical) purposes of the agent, even if they are very close. And that’s what is going on in Renzo’s case. He clearly cares, at least for now, about the difference between a variable being -3 and it being -3.0003. And whether the ticket scanners work is relevant to that difference. So he doesn’t simply believe the ticket scanners will work.
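To make that condition concrete, here’s a toy numerical sketch in Python. It isn’t from either paper: the 0.9999 credence and the -6 payoff are invented, chosen only so that the two expectations come out at the -3 and -3.0003 mentioned above.

bc. # Toy sketch of the belief test described above; all inputs are invented.
# p = "the ticket scanners will work"
def expected_value(outcomes):
    # outcomes: list of (probability, value) pairs
    return sum(pr * v for pr, v in outcomes)
pr_p = 0.9999             # assumed credence that the scanners work
value_if_p = -3.0         # value of the variable Renzo cares about, if p
value_if_not_p = -6.0     # hypothetical value if not-p
e_x = expected_value([(pr_p, value_if_p), (1 - pr_p, value_if_not_p)])
e_x_given_p = value_if_p  # conditional on p, the variable is just -3
# Renzo cares about differences at this precision, so the mismatch below
# means he does not count as believing p on the view sketched above.
print(round(e_x, 4), e_x_given_p, round(e_x, 4) == e_x_given_p)  # -3.0003 -3.0 False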

Of course, “Knowledge, Belief and Interests” isn’t finished, let alone published. So I don’t think Jake and Mark should be worrying about whether I have a reply to them in that paper. All I want to note is that I agree with them that this is the kind of case a theory should get right, and their theory gets the right answer, but I think the most recent version of my theory does as well.

I’m actually not sure whether the correctness argument is meant to be an objection to my theory, but I think it’s written as an argument against a class of theories of which mine is a member. And I think I can avoid it. Here’s what they say:

bq. Whatever it is that constitutes, or makes it the case, that an agent is wrong about whether _p_ when _p_ is false, it can’t be an attitude that involves, or commits the agent to, acknowledging the possibility that _p_ is false.

But on my theory, belief that _p_ is inconsistent with the agent _acknowledging the possibility that p is false_, at least on the natural ways of understanding the italicised phrase. If the agent is acknowledging that ¬p has a non-zero probability, then the agent cares about the difference between the probability of ¬p, and the probability of ¬p given _p_. So they don’t really believe _p_.
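To put the point in symbols (this gloss is mine, not anything in their paper): if the agent acknowledges that ¬p is a live possibility, then

bq. Pr(¬p) > 0, while Pr(¬p | p) = 0

so conditionalising on _p_ changes the answer to a question the agent is, by hypothesis, attending to, and on my view that is just what belief rules out.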

I hope that on my theory, at least in the “Knowledge, Belief and Interests” form, belief really does require ruling out the possibility of error, at least in occurrent deliberation. And that’s enough, I think, to avoid this objection.

The irrelevant proposition objections are trickier, but I think these are tricky for everyone. So I’m inclined to make a _tu quoque_ objection. Here’s how Jake and Mark set up their positive theory.

bq. Since we must treat uncertain propositions as true, and since we must, at least sometimes, do so without first reasoning about whether to do so, it seems we must have automatic dispositions to treat some uncertain propositions as true in our reasoning. It would not make sense, however, for us to have _indefeasible_ dispositions to treat these propositions as true in our reasoning. For if an agent had an indefeasible disposition to treat a proposition _p_ as true, then she would act as if _p_ even in a choice situation such as High, in which she has an enormous amount to lose in acting as if _p_ if _p_ is false, and little to gain in acting as if _p_ if _p_ is true. Thus, having an indefeasible disposition to treat _p_ as true would make one vulnerable to acting in dangerously irrational ways.

bq. What we should expect, therefore, is that for some propositions we would have a _defeasible_ or _default_ disposition to treat them as true in our reasoning—a disposition that can be overridden under circumstances where the cost of mistakenly acting as if these propositions are true is particularly salient. And this expectation is confirmed by our experience. We do indeed seem to treat some uncertain propositions as true in our reasoning; we do indeed seem to treat them as true automatically, without first weighing the costs and benefits of so treating them; and yet in contexts such as High where the cost of mistakenly treating them as true is salient, our natural tendency to treat these propositions as true often seems to be overridden, and instead we treat them as merely probable.

I don’t think this will really do as it stands. What does it mean to have a _default_ disposition to treat something as true in reasoning? Actually, let’s step back from that. What does it mean to treat something as true in reasoning?

One answer is that it means using that thing as a premise in our reasoning. But that can’t be necessary. I haven’t used the fact that Hawthorn won the “1988 Grand Final”:http://en.wikipedia.org/wiki/1988_VFL_Grand_Final as a premise in any reasoning in years, but I have believed it this whole time. Perhaps to treat something as true is to use it as a premise if it is relevant. But that can’t be sufficient. I have used the proposition that Melbourne won the 1988 Grand Final as a premise in every situation where it would have been relevant in the last few years, i.e., in all none of those situations. But I don’t believe Melbourne won; as I just said, I believe Hawthorn won.

Maybe we can steer between these two extremes the following way.

  • To treat something as true in a circumstance is to use it as a premise in reasoning if it is relevant. (Note that the ‘if’ here is material implication, so this clause is easy to satisfy.)
  • To be disposed (as a default) to treat something as true is to be disposed to use it as a premise in reasoning in any normal situation where it is relevant.

Perhaps there is some way of tidying this up, but I suspect the objection I’m about to make will work against any such ‘tidying up’. The worry is that there might be propositions which aren’t relevant to any reasoning we might undertake at all in normal situations. (If Jake and Mark want to reject this, I’ll simultaneously reject the idea that there are propositions which are “strongly practically irrelevant” in the sense that they need for the objections to me at the end of the paper to work.) And those propositions – and their negations! – will be automatically believed on the reasoning disposition theory of belief.

I can see a clever way out of this, one that seems like it would be attractive to defenders of the reasoning disposition account. I’ve presupposed that there is such a thing as ‘normal circumstances’. Perhaps that is false. Perhaps what we should say is that when we’re considering whether an agent believes _p_, we should look at their dispositions over some range of reasoning situations which includes some situations in which _p_ is relevant. More formally, we’d like something like the following function,

bq. f: C × P → D

where C is the set of circumstances, P is the set of propositions, and f(c, p) is a set of problems, or decisions, the agent must make. And then we’ll say that S believes _p_ in _c_ iff S is disposed to use _p_ as a premise for any decision in f(c, p). This might well work.
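Just to fix ideas, here is a minimal Python sketch of the shape such a theory would have. It is only a schema, and none of it is in Jake and Mark’s paper: both f and the ‘would use it as a premise’ test are placeholders that a full theory would have to supply.

bc. # Schematic reasoning-disposition test for belief, parameterised by f.
# f(c, p) returns the decision problems against which the disposition is
# tested; uses_as_premise(s, p, d) is a placeholder for the (so far
# undefined) test of whether agent s would reason from p in problem d.
def believes_rd(s, p, c, f, uses_as_premise):
    # S believes p in c iff S is disposed to use p as a premise
    # in every decision problem in f(c, p)
    return all(uses_as_premise(s, p, d) for d in f(c, p))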

But what I really want to conclude is that the chances of saving the reasoning disposition theory this way are roughly as good as the chances of saving a theory like mine. My theory says that there is some function g from circumstances and propositions to sets of decisions, and that S believes _p_ in _c_ iff her answer to any problem in g(c, p) is the same as her answer to that problem conditional on _p_.

Now I don’t just say that; I also say something about what g might be. Basically, g(c, p) is the set of all problems relevant in _c_, plus the problem of whether the probability of _p_ is more or less than ½. Perhaps that’s not right; I agree that Jake and Mark raise some serious puzzles for this way of understanding what g is. Indeed, I feel I should tinker with g given what they say.
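For comparison, here is the same sort of schematic sketch for g, with decision problems modelled crudely as choices among options whose payoffs depend on whether _p_ is true, and answers modelled as expected-payoff maximisation. The modelling choices and the example payoffs are mine, not anything in either paper; the one fixed point is that g adds the more-or-less-than-½ question to the problems relevant in _c_.

bc. # Crude model: a decision problem maps each option to a pair of payoffs
# (payoff if p, payoff if not-p); the agent answers it by maximising
# expected payoff given her credence in p.
def best_option(problem, pr_p):
    return max(problem, key=lambda o: pr_p * problem[o][0] + (1 - pr_p) * problem[o][1])
def g(problems_relevant_in_c):
    # the problems relevant in c, plus: is p more or less likely than not?
    half_question = {"more likely than not": (1, 0), "not more likely than not": (0, 1)}
    return list(problems_relevant_in_c) + [half_question]
def believes_g(pr_p, problems_relevant_in_c):
    # believe p iff every problem in g(c, p) gets the same answer with
    # credence pr_p as it does conditional on p (i.e. with credence 1)
    return all(best_option(d, pr_p) == best_option(d, 1.0) for d in g(problems_relevant_in_c))
# A low-stakes bet does not block belief at credence 0.99; a high-stakes
# problem whose best option flips under conditionalisation does.
low = {"bet": (1, -1), "pass": (0, 0)}
high = {"bet": (1, -1000), "pass": (0, 0)}
print(believes_g(0.99, [low]), believes_g(0.99, [high]))  # True False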

But ultimately I think the big picture here is not about whether I’ve got g right or wrong. What I really hope is that the following two claims are true.

  • There is some way of defining g so that the theory of belief in terms of matching conditional and unconditional answers to problems is correct; and
  • The function is sensitive to its first argument, so that belief turns out to be sensitive to practical situations in roughly the way several of us have argued over the past 10 years or so.

OK, so this has got a bit longer than the short post I hoped for at the top. But here’s the tentative conclusion I wanted to draw. My theory of belief in terms of credences etc. needs a function like g in order to work. I think the reasoning disposition theory needs a function like f in order to work. I think the odds of finding f are about as good as the odds of finding g. Indeed, I reckon any plausible candidate for g will be a plausible candidate for f, and vice versa.

So I think that once we start spelling out the vague notion of a default, defeasible disposition, we’ll have to deal with the same problems that led to the quirky consequences of my theory. If the reasoning disposition theory avoids having odd consequences merely by being vague at a key point like this, that isn’t a virtue of the reasoning disposition theory.