May 17th, 2005

Christensen on the Preface Paradox

I’m reading through David Christensen’s interesting Putting Logic in Its Place. Christensen is on the side of those who think that the Preface Paradox shows that deductive cogency is not a constraint on rational belief. He responds to several arguments to the contrary, of which I think the most interesting is what he calls the Argument Argument. This is, roughly speaking, the view that deductive cogency has to be a constraint because otherwise we couldn’t explain the force of deductive argumentation. (This is rough because as stated the argument, or at least the arguer, looks poised to confuse inference with implication. I think a version of the argument that doesn’t make this confusion can be given, but that’s for a later post.) After some back-and-forth that I won’t repeat, Christensen gets to the following worry.

Suppose, for example, the author of a history book were to discover that the claims in the body of her book formed an inconsistent set. Intuitively, wouldn’t this be very disturbing? … It is hard to see why an author should be more concerned by an inconsistency within the body of the book than with preface-style inconsistency … But wouldn’t discovering inconsistency among the individual historical claims always actually be highly disturbing?

This looks like a fairly serious concern to me. But Christensen dismisses it in a rather odd way.

What the defender of cogency needs to make his point is a case involving an inconsistency that necessarily involves a great number of the huge and diverse set of historical claims making up the body of a book, and for my part I know of no case in which we’ve had experience of this sort of discovery in actual inquiry … Until persuasive specific examples are found, then, it seems to me that we’ve been given no good reason to think that deductive cogency requirements play an important part in epistemic rationality.

There are two interpretations of this, and both of them seem odd to me.

First, Christensen might be requiring that we really find a large set of inconsistent claims in a history book (which has only consistent proper subsets) before we can run this argument. But that is a bizarre restriction. For one thing, it is really really hard (I’d imagine impossible) to find an actual instance of the preface paradox of the form that supports the anti-cogency view. An actual preface paradox would have to satisfy the following constraints. (I’m somewhat repeating myself from a couple of days ago here, but the point being made is a little different.)

I think it is very rare to find an actual genuine work of scholarship that satisfies either of these constraints. Now this might not matter to philosophy; we can always idealise away from these facts about the real world. (Though once we idealise we can’t rely on intuitions about actuality, a fact that wielders of the preface paradox are not always careful to respect.) But of course if the preface-paradox-mongers can go to hypothetical cases, so can the defenders of cogency. So this can’t be the right interpretation.

Maybe Christensen means that there hasn’t been so much as a fully specified philosophical example. That was true, but we can easily enough make it false. Imagine the following kind of, perhaps not particularly exciting, history book. The background (this part is fictional) is that there is some huge debate in the House of Commons about, say, whether to join the Euro. This involves all sorts of speeches on the floor, and votes.

Professor X writes a book about this debate, going into much too much detail about what each member said, about how some key members voted on various subsidiary issues (amendments and the like), and about how this related to their views of their party whips and their own place in their party. At the start X says, matter-of-factly, that Labour has 356 members in the Commons. The number of members each party has is usually one of the best-known features of the Parliament, so we can imagine X had many sources for this. Nevertheless, when one reads the text closely one finds 357 different MPs such that X either says they are Labour or says something that entails (given other things X says) that they are Labour. (For instance, X might say that MP Z broke with his party to vote for the amendment, and say Labour were the only party opposing the amendment.)

This way we’ll have a contradiction involving 400 or so things that X says, even though no proper subset of those claims is itself contradictory. We could coherently give probability 0.997 to each of the things that X says, which is a perfectly high probability by anyone’s lights. But the inconsistency should, I think, worry us. This kind of mistake seems different in kind from X saying in the preface, “I’m sure there are mistakes in here, etc.” And Christensen’s theory doesn’t, at least thus far, have a way of reflecting this difference.
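To see that the individual credences really can cohere, here is one simple illustrative model (just a back-of-the-envelope check, not anything drawn from Christensen’s book): suppose exactly one of the roughly 400 relevant statements is false, and that each is equally likely to be the culprit. Writing $s_1, \dots, s_{400}$ for those statements, we get

\[
\Pr(s_i) = \frac{399}{400} = 0.9975 \quad \text{for each } i, \qquad \Pr(s_1 \wedge \dots \wedge s_{400}) = 0.
\]

These assignments are jointly coherent, so probabilistic coherence alone registers nothing amiss about the individual claims; if the inconsistency is worrying, the worry has to come from somewhere other than the credences.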

(Thanks to Barry Lam in comments on the last preface thread for suggesting Christensen’s book. It was on my ‘to-be-read-someday’ list but Barry’s comment moved it to my ‘to-be-read-now’ list.)

Posted by Brian Weatherson in Uncategorized



4 Responses to “Christensen on the Preface Paradox”

  1. David Christensen says:

    Brian, you make an interesting point. Let me set the stage a bit before responding directly.

    The passage you quote is in a bit examining the question of why, and to what extent, we should be bothered when some set of our beliefs is inconsistent. One obvious reason for being bothered is that inconsistency guarantees that at least one of the relevant beliefs is false, and having false beliefs is bad. But if that’s the problem with a set of inconsistent beliefs, it would seem that we should be almost equally bothered by the knowledge that it is almost certain that at least one of the relevant beliefs is false. I argue that this is right—that we should be bothered just about as strongly in the latter case—and that it’s not inconsistency per se that’s epistemically troubling.

    The quoted passage responds to an objection to this line, based on the claim that we actually would be more bothered by inconsistency in a book we’d written than I seem to think we should be. The worry is that there is something in our attitudes toward inconsistent beliefs that I have yet to explain. I point out that our feelings of discomfort with inconsistent belief sets may in many cases be explained without invoking a consistency requirement. For example, sometimes, inconsistency can reveal a problem with the methods used in arriving at the beliefs (and hence undermine our confidence in the believed claims). Or if an inconsistency occurs in a small set of claims where one is highly confident of each, discovering the inconsistency can show that one’s initial high confidence levels are not all rational. In these sorts of cases, one’s epistemic consternation may be explained by one’s desire to have true beliefs and avoid false ones, rather than by one’s seeing inconsistency itself as a problem. This was the reason for my saying that the defender of a deductive cogency requirement should provide an example in which discovering inconsistency in one’s book would elicit feelings of epistemic distress, but wouldn’t require a significant decrease in one’s confidence about any of the relevant claims.

    You are certainly right to say that the example need not involve a real published book. So what about the imagined House of Commons example?

    First, insofar as I have clear intuitions on this example, I think that, before finding out that there was an inconsistency in my book’s claims, I might well be highly confident that I had not misidentified (directly or by implication) the party of any of the MPs. In that sort of case, discovering the inconsistency would significantly raise my confidence in having made such a mistake. This would explain my epistemic discomfort in a way that need not make inconsistency per se a bad thing.

    However, let’s suppose the contrary case. Suppose that, before discovering the inconsistency, I’m almost sure that I’ve misidentified the party of at least one of the MPs. Suppose I’m pretty sanguine about this (I’m, like, “Hey, history may be preeminent among the humanities, but it ain’t rocket science. Mistakes happen, dude, even in the official records of Parliament.”) If I start out quite confident that some of my party-identifications are false, I really don’t think I would be bothered much by the news that not all my party-identifications could possibly be true together.

    However, I should make a couple of concessions here. First, the example brings up a defect in the way I stated my point. Consider a version of the MP example in which I begin quite confident that I haven’t misidentified any MPs, and then discover the inconsistency. As you point out, I still needn’t reduce my confidence in any one of the single claims very far. So the case fits the requirements I set out in the quoted passage. Nevertheless, I think that it does not yet give much intuitive support to a cogency requirement. For this is a case where I would be upset even by the information that it was extremely probable that one of my MP-identifications was wrong. So the formulation in my book (in terms of reducing the probability of individual claims in the relevant set) should have been more general, to cover different ways in which factors other than inconsistency could explain our epistemic upset.

    The second concession is that I would not be incredibly surprised if examples could be produced where some (especially, I would guess, those trained in analytic philosophy) would have the intuition that the inconsistency in a book was worrisome—and where the worrisomeness could not directly be explained as flowing from consistency-independent factors. If such examples can be constructed, they will give some intuitive support to deductive cogency requirements. But I would also note that such intuitive support would hardly be decisive—particularly in the absence of an account of what the point of having deductively cogent beliefs is supposed to be.

  2. Barry Lam says:

    David,

    I’m worried that, in your response to Brian, you are stacking the deck too much against anyone who might think there is a pretheoretic reason to demand a deductive consistency constraint on rational belief, or that such a constraint can receive any support from an examination of cases.

    On the one hand, you are demanding an explanation of why we would want consistent beliefs. On the other hand, once there is a case that seems to support such a constraint, you want to resist such support by claiming that it actually supports some other consideration, and not “consistency per se.” That seems to make it close to impossible to argue against you. As soon as I give some kind of explanation (call it E) that might be satisfactory as to why we should want a consistency constraint on rational belief, any cases that can support a consistency constraint can be claimed to support this other thing E as the REAL constraint on rational belief. Hence, it isn’t “consistency per se” that is relevant to rational belief, but E. So someone can only succeed against one of your demands by failing in another: they either say that consistency is an unexplainable, basic requirement, or they say that there is an explanation of the requirement, in which case all support for the requirement is not support for the requirement “per se”, but for the explanation.

    That just gives the game to you a little too easily, doesn’t it?

  3. Peter Gerdes says:

    Barry,

    While your response indicates that David’s point needs clarifying, I think this is a burden he can meet. In particular, there are two ways one can interpret the norm of deductive cogency.

    1) The heuristic of trying to be deductively cogent often gives the right result, i.e., usually it is appropriate to try to be deductively cogent.

    2) One should never hold a set of views one knows not to be deductively cogent.

    I take his response here to be showing that, because we only value non-contradiction as a heuristic for correct beliefs, there may be times when we are justified in believing things we know to be in contradiction. That is, he is arguing against 2, not 1. In particular, in the MP example above, there may be no way to achieve deductive cogency without also radically reducing the number of true beliefs one has.

    In other words, he isn’t objecting to any extrinsic motivation for cogency. If such a motivation were infallible (i.e., there never was a case where this motivation and cogency disagreed), then I think this would suffice to refute the point. However, since the preface paradox illustrates that cogency and true-belief norms can diverge, we shouldn’t accept cogency as an independent norm.

    As for a quick example where we are bothered by lack of consistency in and of itself, consider a mathematical text claiming to offer a new axiomatic theory. In particular, say, Frege’s book on set theory, which Russell showed to be inconsistent. Even if you think there might be issues of truth involved in set theory, consider someone who is expressly axiomatizing a mathematical theory with no intended interpretation.
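    (To spell out, at least roughly, the inconsistency Russell found: Frege’s comprehension principle lets one form the set of all sets that are not members of themselves, and asking whether that set belongs to itself yields an immediate contradiction:

    \[
    R = \{x : x \notin x\} \quad\Longrightarrow\quad R \in R \leftrightarrow R \notin R.
    \]

    So a single instance of comprehension already delivers the contradiction.)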

    Admittedly, I find the example suspicious, and it may just illustrate the practices of the mathematical community, in the same way that many people would have problems with ‘fuck’ appearing in a children’s book even though this doesn’t really have import for belief.

  4. David Christensen says:

    Barry,

    You raise an interesting point about methodology: that it would be unfair to demand that deductive consistency requirements be both (1) requirements for “consistency per se”, and (2) explained by something deeper. Most of my post was devoted to pressing (1), but my last sentence suggests I want to press (2) as well. And I think you’re right that I shouldn’t demand that (1) and (2) be satisfied simultaneously. First, if “per se” means something like “basic” or “unexplainable”, that would rig the game too transparently for even my taste. Second, it seems clear that if thinking about cases provided strong intuitive support for taking consistency as a rational requirement, that would count for a lot, even absent any deeper explanation for why consistency was important. And if a convincing deeper explanation were offered, that would count for a lot, even if our intuitions on cases were pretty mixed.

    I think my use of “consistency per se” was, well….somewhat short of maximally clear. In arguing that cases should exhibit a demand for consistency per se, I did not mean to rule out accounts on which a demand for consistency was explained by something more basic. I only meant to describe a certain problem that can crop up when one supports consistency requirements by adducing certain examples where discovering inconsistency would cause epistemic distress. Schematically, the problem is this: it’s possible that there’s something else (call it X) that, in these particular cases, flows from the inconsistency, and that X is what’s really responsible for the epistemic distress. Evidence for this possibility may be provided by similar examples where similar distress is caused by discoveries involving X, but not involving inconsistency. It may also be provided by examples where inconsistency doesn’t lead to X, and where we don’t feel epistemic distress. Thus my suspicion that, in the Parliament case, our epistemic distress is not due to “inconsistency per se” is not the suspicion that the demand for consistency is explained by something more basic. It’s that there is a consistency-independent factor X which is distressing us.

    To put it another way: Suppose someone defended and explained a consistency requirement by saying that the basic point of belief was to furnish a rich story of the world that had a good chance of being true. There is a sense in which such an account wouldn’t support requiring “consistency per se”—just because the consistency requirement would be explained by the basic purpose of having a story which had a chance of being true. But I’d have no objection at all to this sort of argument. And if it were supported by our intuitions in the predominant balance of cases we considered, the two strands could provide complementary pillars of support for a consistency requirement.

    (In the process of posting this, I notice that Peter Gerdes makes a similar point in a different way. So I’m in sympathy with what Peter says above the line in his post. I’ll have to think more about contradictions in math books and ‘fuck’ in children’s books. My initial reaction is that insofar as the point of giving an axiomatization for a mathematical theory is to provide a way of formulating the whole theory, and insofar as a contradiction in the axioms would allow everything to be derived as part of the theory, an inconsistent axiomatization would be useless. So initially, this does seem like a special case.)
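    (A footnote on the step about everything being derivable, in case it helps: this is just the familiar explosion argument of classical logic. Given any contradiction among the axioms, an arbitrary claim B follows:

    \[
    A,\ \neg A \;\vdash\; A \vee B \ \ (\vee\text{-introduction}), \qquad A \vee B,\ \neg A \;\vdash\; B \ \ (\text{disjunctive syllogism}).
    \]

    Since B was arbitrary, an inconsistent axiomatization proves every sentence of its language, which is why it would be useless for formulating one particular theory.)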