I’ve been reading a little about the preface paradox, so what I say in the following might be unoriginal. I doubt it is false, however.

The standard way of setting up the preface paradox is something like the following. A historian writes a book. It includes, let’s say, 4000 sentences, each of them (we’ll assume for the sake of argument!) expressing a proposition. She is careful in writing the book, and it is natural enough to say she believes each of the propositions in it. Call these P1, P2, …, P4000. In the preface she writes something like the following.

Despite my best efforts, I’m sure that this book, like all books, contains some mistakes.

The thought is that she’s now contradicted herself, because she has said each of the following.

P1, P2, …, P4000, ~(P1 & P2 & … & P4000)

But it is really unclear that she has asserted these things, or believes them, which is what’s really at issue. What she said was that there is a mistake in the book. Now it is true that the book is (among other things) the conjunction of P1 through P4000. (“Among other things” because the book also contains claims about evidential relationships between the claims.) But from that it doesn’t follow that she believes that one of P1 through P4000 is false unless she *believes* that P1 through P4000 are the propositions in the book.

(Actually even that isn’t enough – she also needs to infer, from the falsity of something in the book and the fact that what’s in the book is P1 through P4000, that one of P1 through P4000 is false. One of the standard ways to resolve the preface paradox is to deny that beliefs have to be closed under conjunction. It is noticeable that even deniers of closure assume closure in setting up the puzzle.)
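The gap the last two paragraphs point to can be laid out schematically (my reconstruction; the numbering and the existential reading of the preface sentence are glosses, not anything in the original setup):

```latex
% What the preface licenses vs. what the paradox needs (a reconstruction):
\begin{align*}
&\text{(1) } B\bigl(\exists q\,(q \text{ is in the book} \wedge \neg q)\bigr)
  && \text{the prefatory admission}\\
&\text{(2) } B\bigl(\text{the propositions in the book are exactly } P_1,\dots,P_{4000}\bigr)
  && \text{extra premise about memory}\\
&\text{(3) } B\bigl(\neg(P_1 \wedge \dots \wedge P_{4000})\bigr)
  && \text{from (1) and (2) by a closure step on } B
\end{align*}
```

The contradiction with believing each of P1 through P4000 only arrives at step (3), and both the memory premise (2) and the closure inference are substantive, deniable assumptions.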

To be sure, the author did write the book, so in some sense she knows what is in it. But if the book is long enough to get a prefatory warning of falsity, it isn’t clear that the author needs to remember everything that is in the book. At best, what she could remember is what she *intended* to write. She can hardly remember her own typos that went uncorrected, or misprints. But in reality she probably can’t remember all the intentions either. (I hardly remember the start of this post, let alone the start of a 300 page book.)

What is unclear to me is how far this goes to solving the preface paradox. I’m half inclined to say that it *entirely* solves it. A rational author who knew exactly what they said, and believed every claim in the book, would not take any of it back in the preface. Real authors are not like this – they are forgetful.

UPDATE: I should research first, write second. The main point I’m making here has already been made – in a paper by Simon Evnine “Believing Conjunctions”, *Synthese* 118: 201–227, 1999. This isn’t to say I agree with everything Evnine says, but he does make this point first, or at least before me!

“A rational author who knew exactly what they said, and believed every claim in the book, would not take any of it back in the preface.”

Really? Surely the preface paradox is just a special case of the paradox of fallibility: ‘I believe some falsehood’ is not only not an irrational thing to believe, it’s arguably an irrational thing *not* to believe. Likewise, authors know they make mistakes of fact, not simply mistakes of expression. It’s an interesting epicycle, but I don’t see how your consideration solves the preface paradox at all.

Well it might be rational to believe that you believe some falsehood. (In fact, since that’s guaranteed to be true if it has content, it can hardly be irrational.) But what’s not clear is that it is rational to simultaneously believe P1, P2, …, Pn and believe ~(P1 & P2 & … & Pn). If you do that you’ve got explicitly contradictory beliefs, and that’s bad. Fortunately you can tell a story about how writers who express modesty in their prefaces say no such thing.

This might be a cheat. Suppose the author finds some of the propositions she believes less credible than others. Most she believes have a probability of 1, but there are about nine propositions (scattered throughout) that she thinks have a probability of about .9. Assuming these are roughly independent, the probability that some proposition is false is about .61 (that is, 1 − .9^9). So she rightly believes that some proposition in the book is false. But of course it is also true that she should believe each individual proposition.
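The arithmetic here can be checked directly (a sketch; it assumes the nine less-credible propositions are probabilistically independent):

```python
# Probability that at least one of nine independent propositions is false,
# when each is believed to degree 0.9.
p_each_true = 0.9
n_shaky = 9

p_all_true = p_each_true ** n_shaky   # ~0.387
p_some_false = 1 - p_all_true         # ~0.613

print(round(p_all_true, 3), round(p_some_false, 3))
```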

Brian: this is exactly right. (This is what I have been saying to my students about the preface paradox for years.)

In one sense, the belief that one would express by saying ‘Some of my beliefs are not true’ guarantees that there is something defective about one’s beliefs — either one believes something that is not true, or the belief that one expresses by saying this is paradoxical in some way.

But it does *not* follow from this that adding this belief to a set of beliefs introduces any logical inconsistency in the strict sense. That is, it doesn’t follow that the set of propositions that one believes {P1, …, P4000} entails a contradiction, or that these propositions can’t all be true.

Of course, given that {P1, …, P4000} includes the proposition ‘Some of my beliefs are not true’, a situation in which all these propositions {P1, …, P4000} are true would have to be a situation in which one believes some *other* (false) propositions *in addition to* {P1, …, P4000}. But at least if one is an ordinary sort of believer, there is no problem with that: for any ordinary believer S, the set of propositions that S believes {P1, …, P4000} will not itself contain the proposition Pn that the propositions that S believes are exactly {P1, …, P4000}. (Indeed, if such a proposition Pn were true, it would have to be a self-involving proposition of a sort that some philosophers would find rather troubling, although, following Bealer and Harman, I’m not convinced that there’s any problem with such self-involving propositions myself.)

For this reason, it seems to me that there is nothing obviously irrational about the preface belief at all. On the contrary, this belief seems an entirely rational response to the obvious fact that one is not an infallibly reliable arbiter of the truth. Forming this belief does indeed guarantee that there is something defective about one’s belief-set as a whole; but it doesn’t show that the defect is located in this particular belief, or that the *contents* of one’s beliefs are in any way logically inconsistent with each other.

It can’t be rational to believe an explicit contradiction, I think: an instance of p & ~p. It also can’t be rational to believe p and rational to believe ~p (in the same way, under the same guise). But it’s not that hard to construct the Preface and Fallibility cases so that they involve a set of rational beliefs that entails a contradiction. The author believes that the book contains some mistakes, believes each of the claims in the book, and believes that claims 1…n are all the claims there are in the book. And in Fallibility, I believe 1…m, believe these are all the beliefs I have, and one of these beliefs is that I’m fallible (which, with a bit of a stretch, we say is the belief that some of my beliefs are false). Neither of these descriptions involves rational contradictory beliefs, but the beliefs can’t all be true. Moreover, one can alter the fallibility or preface belief to be either self-involving or not: talk about one’s first-order beliefs (claims in the chapters of the book) and a meta-belief to the effect that some of my first-order beliefs are false.

And I don’t see how it helps to say that ordinary believers don’t have these sorts of beliefs. If that’s true, idealize to believers who do; you can do so without presuming that they’ve left fallibility behind. Moreover, it’s a stunning fact about rationality that inconsistent sets of beliefs can all be rational. As Russell would say, just abstract away from merely medical limitations. Doing so tells us something important about rationality.

Prima facie, the preface comment could still reasonably be made by an author who knows exactly what they wrote. So one still needs a diagnosis of that case. Here the situation is analogous to the judge who asserts (and believes), of all the defendants they have found guilty: “D1 is guilty”, …, “D100 is guilty”, “I’m fallible, so at least one of the previous judgments is wrong, so one of D1–D100 is innocent.” Prima facie, the judge accepts P1, …, P100, and ~(P1 & … & P100). Now, maybe the prima facie description is incorrect, or maybe the judge is being irrational, but neither of these claims is at all obvious, so this version of the preface paradox still presents a puzzle. Maybe one way to look at it is that there are two versions of the preface paradox.

The probabilistic reply is pretty satisfying: if believing p is a matter of P(p) > 1 − delta, then there is no problem; errors compound. Nothing so simple is right about the nature of belief, but plausibly this story will be adaptable to a more accurate theory of how believing relates to credence.
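A minimal sketch of the threshold picture just described, assuming independence and using an illustrative delta of .05 (both choices are mine, not the commenter’s):

```python
# Threshold ("Lockean") sketch: believe p iff P(p) > 1 - delta.
# Each of 4000 propositions at P = 0.999 clears the threshold, yet
# their conjunction (assuming independence) falls far below it, so
# the preface claim "something here is false" clears it too.
delta = 0.05
threshold = 1 - delta
p_each = 0.999
n = 4000

p_conjunction = p_each ** n       # ~0.018
p_preface = 1 - p_conjunction     # ~0.982

believe_each = p_each > threshold        # True
believe_preface = p_preface > threshold  # True

print(believe_each, believe_preface)
```

So every individual claim is believed while the conjunction is not, which is the sense in which errors compound without any single belief being irrational.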

But it works only if the author’s credences for some of the propositions in the book are less than 1.

So consider the certainty version: I am certain of P1, …, P100; I assign them probability one. But I assign high credence to their negated conjunction. That makes my credences incoherent. But it seems like a natural way to describe one’s fallibility about even what one takes to be certain. How to handle? Take it to be evidence that judgments of “certainty” don’t really indicate P=1?

I’m astonished at the level of response to this post. It seems trivial.

People believe defeasible things.

Saying that you believe some of the propositions to be false is *no more* than that. The author is simply saying that the propositions represent a reasonable view, not incontestable knowledge. I see no major philosophical issue here.

Cheers,

-T

Brian,

David Christensen’s new book gives a good extended discussion of the relationship between (various formulations of) the preface paradox and rationality. I suggest the two middle chapters from that book.

Foley can accommodate both the intuition that it’s rational to make the preface assertion and the intuition that, as Ralph said, “there’s something defective about those beliefs”. He says it is possible to have justified inconsistent beliefs (TER, pp. 96–102), but those beliefs cannot be used as evidence (WWN, pp. 192–197 for some details); they must be quarantined, so to speak, which is an acknowledgement of a deficiency.

Note also in connection with Jon’s comments that the humble prefacer will never actually believe a proposition of the form (P & ~P), rather he will just have beliefs such that the right logical and psychological moves would end up with him believing it. So he’s subject to that blatant inconsistency only if one invokes the appropriate form of closure, the rejection of which is a common response to this family of paradoxes. Being a probabilist is a nice way to deny closure.

To speak in terms of epistemic praise and blame: I certainly don’t get credit for all my beliefs entail, so a presumption of symmetry between praise and blame indicates that I shouldn’t be deemed irrational by taking into account all that my beliefs entail.

Hey there, Trent, remember me?

(to all:) It strikes me that a more true-to-spirit analysis of the *de dicto* assertion “there are some mistakes in this book” would be ~(P1 v P2 v … v P4000). That seems a far more natural interpretation of what the author surely means by her claim. The comparative weakness of the author’s claim, as opposed to the explicit statement of “the conjunction of every proposition expressed in this book is false” (which is surely not equivalent to what the author *means*), is brought out by the disjunction, which allows for distribution of the denial through the constituent propositions.

Of course, there is still something of a contradiction in P1 & P2 & … & P4000 & ~(P1 v P2 v … v P4000), but it doesn’t seem as violent a one as in the other version: the fact that it’s indeterminate from the disjunctive form of the statement *which* proposition(s) are false brings out the important *de dicto* character of the author’s original statement.

And, of course, it means taking the entire book as an inclusive disjunction rather than a conjunction, and that seems really weak, but it does seem to be entailed by the author’s disclaimer. So probably historians shouldn’t be saying that about their works if they want to be taken seriously!

Wait a sec, if you interpret the entire book sans preface as (P1 v P2 v … v P4000), then since ~(P1 v P2 v … v P4000) distributes to (~P1 v ~P2 v … v ~P4000), there’s no contradiction in {(P1 v P2 v … v P4000) & (~P1 v ~P2 v … v ~P4000)}, is there? (Or I may just be totally confused; I’m something of an amateur in the logic-chopping business, if you couldn’t tell.)

Okay, never mind. I didn’t have my *Elementary Logic* in front of me. :~~) Thanks to Trent for deigning to correct some “crank.” ;~~)