In this paper, I offered the following analysis of belief.

S believes that p iff for any* A, B, S prefers A to B simpliciter iff S prefers A to B conditional on p.

The * on ‘any’ indicates that the quantifier is restricted in all sorts of ways. One of the restrictions is sensitive to S’s interests, so this becomes a version of interest-relative invariantism about belief. And if we assume that belief is required for knowledge, we get (with some other not-too-controversial premises) interest-relative invariantism about knowledge.

I now think this wasn’t quite the right analysis. But I don’t (yet!) want to take back any of the claims about the restrictions on ‘any’. Rather, I think I made a mistake in forcing everything into the mold of preference. What I should have said is something like the following.

S believes that p iff for any* issue, S’s attitudes simpliciter and her attitudes conditional on p match.

Here are some issues, in the relevant sense of issue. (They may be the only kind, though I’m not quite ready to commit to that.)

- Whether to prefer A to B
- Whether to believe q
- What the probability of q is
Previously I’d tried to force the second issue into a question about preferences. But I couldn’t find a way to force in the third issue as well, so I decided to retreat and try framing everything in terms of issues.

Adding questions about probability to the list of issues allows me to solve a bunch of tricky problems. It is a widely acknowledged point that if we have purely probabilistic grounds for being confident that p, we do not take ourselves to (unconditionally) believe that p, or know that p. On the other hand, it hardly seems plausible that we have to assign p probability 1 before we can believe or know it. Here is how I’d slide between the issues.

If I come to be confident in p for purely probabilistic reasons (e.g. p is the proposition that a particular lottery ticket will lose, and I know how low the probability is that that ticket will win), then the issue of p’s probability is live. Since the probability of p conditional on p is 1, but the probability of p is not 1, I don’t believe that p. More generally, when the probability of p is a salient issue for me, I only believe p if I assign p probability 1.
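The lottery case can be put in miniature code. The following is my own toy illustration, not anything from the original post: the names (`credence`, `believes`, `live_issues`) and the 1000-ticket setup are hypothetical, and “issues” are modeled crudely as events whose unconditional and conditional probabilities must match. The point is just that conditionalizing on p forces p’s probability to 1, so when p’s probability is itself a live issue, belief requires credence 1.

```python
from fractions import Fraction

# A tiny probability space: 1000 lottery tickets, exactly one wins.
worlds = range(1000)
prob = {w: Fraction(1, 1000) for w in worlds}

# p: "my ticket (ticket 0) loses"
p = lambda w: w != 0

def credence(event, given=None):
    """Probability of `event`, optionally conditional on `given` (ratio formula)."""
    if given is None:
        return sum(prob[w] for w in worlds if event(w))
    num = sum(prob[w] for w in worlds if event(w) and given(w))
    return num / sum(prob[w] for w in worlds if given(w))

def believes(p, live_issues):
    """Crude stand-in for the analysis: attitudes simpliciter must match
    attitudes conditional on p, for every live issue."""
    return all(credence(q) == credence(q, given=p) for q in live_issues)

# Unconditionally, I'm very confident my ticket loses...
print(credence(p))           # 999/1000
# ...but conditional on p, p's probability is exactly 1.
print(credence(p, given=p))  # 1

# When p's own probability is a live issue, the attitudes diverge,
# so on the analysis I don't count as believing p.
print(believes(p, live_issues=[p]))  # False
print(believes(p, live_issues=[]))   # True
```

Read with the next paragraph: when p’s probability is not among the live issues, the mismatch never gets checked, and high-but-imperfect credence is no bar to belief.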

However, when p’s probability is not a live issue, I can believe that p is true even though I (tacitly) know that its probability is less than 1. That’s how I can know where my car is, even though there is some non-zero probability that it has been stolen or turned into a statue of Pegasus by weird quantum effects. Similarly, I can know that the addicted gambler will end up impoverished, though if pushed I would also confess to knowing there is some (vanishingly small) chance of his winning it big.

Interesting suggestions! A couple of thoughts:

(1.) A probability’s being a “live issue” sounds like it will vary over time for the same subject and content. At one moment the probability of my son’s being at school is live, at another it’s not. I assume, then, that you’ll accept that I move between not believing it and believing it? How will this work for a purely dispositional belief? Is there some dispositional sense in which we can assess whether the probability of my son’s being at school is live for me, even when all relevant thoughts are far from my mind?

(2.) What about cases (if there are any) where you believe not-Q but also believe P and P->Q? The paradox of the preface might be an iterated version of this. But consider just a case where you haven’t thought things through. Maybe you believe p(Q)=0, p(P)=1, and p(Q|P)=1. Irrational but not impossible? Do you have room for this?

Hi Eric,

On (1), I certainly think that what’s a live issue varies for a subject over time. So I think that what we believe varies quite a bit depending on what we’re thinking about and, more importantly, what decisions we have to make.

I think the main issues that are live are decisions. So the main way that we can stop believing that p (without becoming less confident that p) is by confronting a decision between A and B, where you prefer one choice, but you’d prefer a different choice if p were given.
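A decision of that shape can be sketched in expected-utility terms. This is my own hypothetical illustration (a stock bank-case setup with made-up numbers, not anything from the post): I prefer one act simpliciter, but the other act conditional on p, so the decision makes belief in p lapse without any drop in confidence.

```python
# Hypothetical numbers: p = "the bank is open Saturday", credence 0.95.
cred_p = 0.95

def eu(utilities, given_p=False):
    """Expected utility of an act; utilities = (payoff if p, payoff if not-p)."""
    u_p, u_not_p = utilities
    if given_p:
        return u_p  # conditional on p, only the p-payoff matters
    return cred_p * u_p + (1 - cred_p) * u_not_p

wait = (10, -100)  # waiting is fine if the bank is open, disastrous if not
go_now = (5, 5)    # depositing today is safe either way

# Simpliciter: prefer going now (EU of waiting is 4.5, vs 5)...
prefers_wait = eu(wait) > eu(go_now)
# ...but conditional on p: prefer waiting (10 vs 5).
prefers_wait_given_p = eu(wait, given_p=True) > eu(go_now, given_p=True)

print(prefers_wait, prefers_wait_given_p)  # False True
```

With low stakes (say, a not-p payoff of 8 for waiting), both comparisons favor waiting, the preferences match, and belief survives; that is the interest-relativity.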

I’m fairly hesitant about this, but I suspect that the issue of what p’s probability is rarely becomes live (in the sense needed for the theory) unless we consciously think about it. I can imagine all sorts of decisions that might be affected by the fact of your son’s being at school or not. But it’s hard to think of a case where the probability of your son’s being at school is relevant to any real-world decision. Of course, you might think about that probability, just because you (a) like thinking about probability or (b) like creating stressful thoughts for yourself! But otherwise I’d say it isn’t a live issue. (Maybe when you’re trying to decide whether to buy life insurance the probability of your early death becomes live. But that’s a very special decision situation; I suspect it’s practically impossible to stay sane while constantly thinking about the probability of imminent death. Which is partly why airline travel can be so stressful sometimes.)

On the second question, I’d in the first instance want to sidestep it. I follow Ramsey, Savage and others in taking (conditional) preferences as primitives, and personal probabilities largely as constructs out of these. And a person might have all sorts of crazy preferences. Now while there has been a lot done on how to model agents whose preferences satisfy various nice constraints, there isn’t much that I know of on how to model agents that don’t satisfy these constraints. That’s what I’d need to see before knowing quite how to answer this question.

So the short version of the non-answer is that in the first instance I mostly care about the preferences, and I’m happy to allow that these can go all over the place. Just what to say about incoherent beliefs and probabilities themselves is a bit trickier, and I don’t have anything useful to say on that right now.

Thanks, Brian, that helps! I’ve a hunch that a lot of our actions would be a little different if we depended on things p=1-ishly rather than p=.97-ishly, even when we’re not thinking about p consciously. I have to admit, though, that I can’t think of a really compelling example right now. There are various ways in which we’re nonconsciously slightly cautious — for example in trusting people implicitly but not quite absolutely….

Hi Brian,

Mike Titelbaum has made me wonder whether even an assignment of probability 1 suffices for belief. Throwing an infinitely fine dart at random at the [0, 1] interval, you assign probability 1 to its landing on an irrational number. But do you BELIEVE that it will land on an irrational number? Perhaps not – the possibility of error is still too salient.

Your addicted gambler example is interesting, because it’s an intermediate case of salience (or lack thereof) of the relevant probabilities. After all, if I reflect for a moment on why I believe that he will be ruined, my reasons are explicitly probabilistic.

Al