I’m reading through David Christensen’s interesting _Putting Logic in Its Place_. Christensen is on the side of those who think that the Preface Paradox shows that deductive cogency is not a constraint on rational belief. He responds to several arguments to the contrary, of which I think the most interesting is what he calls the Argument Argument. This is, roughly speaking, the view that deductive cogency has to be a constraint because otherwise we couldn’t explain the force of deductive argumentation. (This is rough because as stated the argument, or at least the arguer, looks poised to confuse inference with implication. I think a version of the argument that doesn’t make this confusion can be given, but that’s for a later post.) After some back-and-forth that I won’t repeat, Christensen gets to the following worry.
bq. Suppose, for example, the author of a history book were to discover that the claims _in the body of her book_ formed an inconsistent set. Intuitively, wouldn’t this be very disturbing? … It is hard to see why an author _should_ be more concerned by an inconsistency within the body of the book than with preface-style inconsistency … But wouldn’t discovering inconsistency among the individual historical claims always _actually be_ highly disturbing?
This looks like a fairly serious concern to me. But Christensen dismisses it in a rather odd way.
bq. What the defender of cogency needs to make his point is a case involving an inconsistency that necessarily involves a great number of the huge and diverse set of historical claims making up the body of a book, and for my part I know of no case in which we’ve had experience of this sort of discovery in actual inquiry … Until persuasive specific examples are found, then, it seems to me that we’ve been given no good reason to think that deductive cogency requirements play an important part in epistemic rationality.
There are two interpretations of this, and both of them seem odd to me.
First, Christensen might be requiring that we really find a large inconsistent set of claims in a history book (a set whose proper subsets are all consistent) before we can run this argument. But that is a bizarre restriction. For one thing, it is really, really hard (I’d imagine impossible) to find an actual instance of the preface paradox of the form that supports the anti-cogency view. An actual preface paradox would have to satisfy the following constraints. (I’m somewhat repeating myself from a couple of days ago here, but the point being made is a little different.)
* The author believes each of the claims in the book – none of the claims are put forward because they seem interesting, or controversial, or career-promoting, or for any of the other reasons authors might put forward claims short of actually believing them. (Christensen assumes that belief is a norm of assertion, but various arguments by/with “Ishani”:http://philosophy.syr.edu/maitra.html have convinced me that’s not right – we put up with much more for the sake of theory.)
* The author believes that one of the claims in the book is false, where this is understood _de re_ not _de dicto_.
I think it is _very rare_ to find an actual genuine work of scholarship that satisfies _either_ of these constraints. Now this might not matter to philosophy; we can always idealise away from these facts about the real world. (Though once we idealise we can’t rely on intuitions about actuality, a fact that wielders of the preface paradox are not always careful to respect.) But of course if the preface-paradox-mongers can go to hypothetical cases, so can the defenders of cogency. So this can’t be the right interpretation.
Maybe Christensen means there hasn’t been so much as a fully specified philosophical example. That was true, but we can easily enough make it false. Imagine the following kind of, perhaps not particularly exciting, history book. The background (this part is fictional) is that there is some huge debate in the House of Commons about, say, whether to join the Euro. This involves all sorts of speeches on the floor, and votes.
Professor X writes a book about this debate, going into much too much detail about what each member said, how some key members voted on various subsidiary issues (amendments and the like), and how this related to their views of their party whips and their own place in their party. At the start X says, matter-of-factly, that Labour has 356 members in the Commons. The number of members each party has is usually one of the best-known features of the Parliament, so we can imagine X had many sources for this. Nevertheless, when one reads the text closely one finds 357 different MPs such that X either says they are Labour or says something that entails (given other things X says) that they are Labour. (For instance, X might say that M.P. Z broke with his party to vote for the amendment, and say Labour were the only party opposing the amendment.)
This way we’ll have a contradiction involving 400 or so things that X says, even though no proper subset of them is contradictory. We could coherently give probability 0.997 to each of the things that X says, which is a perfectly high probability by anyone’s lights. But the inconsistency should, I think, worry us. This mistake seems different in kind from X saying in the preface, “I’m sure there are mistakes in here, etc.” And Christensen’s theory doesn’t, at least thus far, have a way of reflecting this difference.
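(Where does 0.997 come from? Here is a back-of-the-envelope check; the count of 358 minimally inconsistent claims – the membership total plus the 357 Labour attributions – is my own reckoning from the example, not Christensen’s. If n claims are jointly inconsistent, then for any probability function Pr,

bc. \Pr(\neg p_1 \lor \cdots \lor \neg p_n) = 1
\;\Rightarrow\; \sum_{i=1}^{n} \Pr(\neg p_i) \ge 1
\;\Rightarrow\; \frac{1}{n}\sum_{i=1}^{n} \Pr(p_i) \le 1 - \frac{1}{n} \approx 0.9972 \quad (n = 358).

And that ceiling is achievable, e.g. by spreading the one guaranteed error evenly so that each claim gets probability exactly 1 − 1/n. Counting all 400 or so claims involved in the entailments only raises the ceiling, to about 0.9975. So 0.997 across the board is coherent.)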
(Thanks to Barry Lam in comments on the last preface thread for suggesting Christensen’s book. It was on my ‘to-be-read-someday’ list but Barry’s comment moved it to my ‘to-be-read-now’ list.)