Also on NDPR, William Talbott reviews “David Christensen’s book on the place of logic”:http://ndpr.nd.edu/review.cfm?id=4181. Talbott approves of Christensen’s use of the preface paradox in anti-closure arguments.
bq. In what is surely the most entertaining example in the book (40-53, 101-105), Christensen asks us to imagine three historians, X, Y, and Z. X regards Y and Z as having a somewhat neurotic obsession with detail-checking, which makes their books more reliable than his on details, though none of their books has ever been completely error-free. X, Y, and Z have each published a new book recently. Here is Christensen’s summary of the case: “Professor X has expressed his firm beliefs that (1) every previous book in the field (including his own) has contained multiple errors; (2) he’s not as careful a scholar as Y or Z; and (3) the new books by Y and Z will be found to contain errors.” (45)
bq. What about his own book? Does X expect reviewers to find any errors in it? Given X’s opinions about Y and Z’s superior fact-checking and his confidence that even their books contain errors, even if X currently believes every statement in his book, Christensen thinks it is intuitively quite absurd to think that it could be rational for X to believe that his new book is error-free. But, of course, that belief is required by deductive cogency.
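To see the structure Christensen is exploiting, here is the preface paradox laid out schematically (my notation, not Christensen’s). Let _B_ be a belief operator and _p1_, …, _pn_ the claims in X’s book.

bc. \begin{align*}
  &B(p_1),\; B(p_2),\; \ldots,\; B(p_n) && \text{X believes each claim in the book} \\
  &B\bigl(\lnot(p_1 \land \cdots \land p_n)\bigr) && \text{X believes the book contains an error} \\
  &B(p_1 \land \cdots \land p_n) && \text{what deductive cogency demands}
\end{align*}

The first two attitudes look individually rational; the third contradicts the second, and that clash is the whole paradox.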
This all seems mistaken to me twice over.
First, it isn’t at all absurd for people to insist on the accuracy of what they write. Travel guides do it all the time. The other day I was watching a BBC show on travelling to Iran that started with something like “Everything in the show was accurate at the time of filming, but things may have changed in the interim.” It is a fact about academic books that they include modest prefaces. But that’s because you don’t have to believe what you say in academia – you just have to be able to defend it. My attitude towards my philosophical theories is a bit like my attitude towards my footy picks: they’re the best I can do, but I’m not going to stake very much on any of them. Deductive cogency only requires X to believe the conjunction of everything in his book if he believes each of the things in it, and if he’s a smart historian he shouldn’t.
Second, there’s a crucial scope ambiguity here that is distracting. Deductive cogency doesn’t require that I believe that every proposition with the property _is believed by Brian_ is also true. That only follows from deductive cogency _plus_ perfect knowledge of my own beliefs. Similarly, even if I believe every proposition in the book, I don’t have to believe it is mistake-free unless I know _exactly_ what is in it. And if the example is at all realistic, that won’t be the case. What deductive cogency does require is that for every set of propositions I believe, I also believe their conjunction. I don’t really see an example here that tells against this.
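To make the scope point explicit, here is a rough rendering (again my notation, simplified to the two-proposition case):

bc. \begin{align*}
  &\text{Required by cogency:} && \bigl(B(p) \land B(q)\bigr) \rightarrow B(p \land q) \\
  &\text{Not required:} && B\bigl(\forall p\,(\mathit{Bel}(p) \rightarrow p)\bigr)
\end{align*}

The second line follows from the first only given an exhaustive grip on which propositions one believes; in X’s case, only if he knows exactly which claims his book contains.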
But the real problem is the following.
bq. All of us find ourselves in fallibility paradoxes, when, for example, we believe that at least one of our memory beliefs is false, or when we simply believe that at least one of our beliefs is false. Think of how insufferable a person would be if, when there was a conflict of memories, she always insisted that other people’s memories were mistaken, never her own.
Deductive cogency is a constraint on _beliefs_, not _memories_. And it’s close to analytic that _everyone_ insists, in a conflict of beliefs, that their own beliefs are correct and the other person’s are faulty. If they didn’t insist that _p_ is correct, they wouldn’t be _believing_ that _p_.
There is a point here about behaviour in debate, but the right lesson is the one that the cogency advocates have been saying for years. In lots of everyday cases, there is good reason to fall back from believing that _p_ to believing that _Probably p_. That’s what I do when faced with someone who gives me good, and at the time unanswerable, reason to believe _~p_, although I had previously believed _p_.
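For concreteness, one simple way to model that retreat is on a threshold picture of belief (just one way of modelling it, and the numbers are purely illustrative):

bc. \begin{align*}
  &B(p) \iff \Pr(p) \ge t && \text{threshold for outright belief, e.g. } t = 0.95 \\
  &\Pr(p) = 0.7 && \text{after the unanswered counterargument} \\
  &\therefore\ \lnot B(p), \text{ but } B(\mathit{Probably}\ p) && \text{since } \Pr(p) > 0.5
\end{align*}

On that picture, the counterargument knocks my credence below the belief threshold without pushing it below a half, which is exactly the position from which _Probably p_ is the right thing to believe.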