Preface Paradox

I’ve been reading a little on the preface paradox, so what I say in the following might be unoriginal. I doubt it is false, however.

The standard way of setting up the preface paradox is something like the following. A historian writes a book. It includes, let’s say, 4000 sentences, each of them (we’ll assume for the sake of argument!) expressing a proposition. She is careful in writing the book, and it is natural enough to say she believes each of the propositions in it. Call these P1, P2, …, P4000. In the preface she writes something like the following.

bq. Despite my best efforts, I’m sure that this book, like all books, contains some mistakes.

The thought is that she’s now contradicted herself, because she has said each of the following.

bq. P1, P2, …, P4000, ~(P1 & P2 & … & P4000)

But it is really unclear that she has asserted these things, or believes them, which is what’s really at issue. What she said was that there is a mistake in the book. Now it is true that the book is (among other things) the conjunction of P1 through P4000. (“Among other things” because the book also contains claims about evidential relationships between the claims.) But from that it doesn’t follow that she believes that one of P1 through P4000 is false unless she _believes_ that P1 through P4000 are the propositions in the book.

(Actually even that isn’t enough – she also needs to infer, from the falsity of something in the book and the fact that what’s in the book is P1 through P4000, that one of P1 through P4000 is false. One of the standard ways to resolve the preface paradox is to deny that beliefs have to be closed under conjunction. It is noticeable that even deniers of closure assume closure in setting up the puzzle.)
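The motivation for denying closure can be illustrated with a toy model (my illustration, not anything from the literature discussed here): suppose belief just is credence above some threshold, and treat the 4000 claims as probabilistically independent. Then each conjunct can comfortably clear the threshold while the conjunction falls far below it.

```python
# Toy "Lockean" model: believing P = having credence in P above a threshold.
# With 4000 claims each believed at credence 0.99, and (artificially)
# treating them as independent, the credence in their conjunction collapses.
threshold = 0.5
n = 4000
credence_each = 0.99
credence_conjunction = credence_each ** n

print(credence_each > threshold)         # each conjunct is believed
print(credence_conjunction > threshold)  # the conjunction is not
print(credence_conjunction)              # vanishingly small
```

The independence assumption is unrealistic for a real book, but the qualitative point survives without it: high credence in each conjunct does not transmit to the conjunction.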

To be sure, the author did write the book, so in some sense she knows what is in it. But if the book is long enough to get a prefatory warning of falsity, it isn’t clear that the author needs to remember everything that is in the book. At best, what she could remember is what she _intended_ to write. She can hardly remember her own typos that went uncorrected, or misprints. But in reality she probably can’t remember all the intentions either. (I hardly remember the start of this post, let alone the start of a 300 page book.)

What is unclear to me is how far this goes to solving the preface paradox. I’m half inclined to say that it _entirely_ solves it. A rational author who knew exactly what they said, and believed every claim in the book, would not take any of it back in the preface. Real authors are not like this – they are forgetful.

UPDATE: I should research first, write second. The main point I’m making here has already been made – in a paper by “Simon Evnine”:http://www.miami.edu/phi/evnine/ “Believing Conjunctions”, _Synthese_ 118: 201–227, 1999. This isn’t to say I agree with everything Evnine says, but he does make this point first, or at least before me!

Functionalism and Conjunctions

Here’s a toy functionalist definition of belief.

bq. To believe that P is to be disposed to act in ways that would tend to satisfy one’s desires, whatever they are, in a world in which P (together with one’s other beliefs) were true. (Stalnaker, _Inquiry_, p. 15)

Stalnaker says that this is much too simple, so my criticisms of this definition aren’t criticisms of Stalnaker. I’m just interested here in working out just how this is too simple. I’m interested in this in part because of how it relates to this claim that Stalnaker makes, one he doesn’t qualify.

bq. If a person is, in general, disposed to act in ways that would tend to be successful if P (together with his other beliefs) were true, and is also disposed to act in ways that would be successful if Q (together with his other beliefs) were true, then he is disposed to act in ways that would be successful if P & Q (together with his other beliefs) were true. (82)

It seems to me that this isn’t true, unless we accept the toy definition, which we should not. The following example should illustrate this. (By the way, I have no idea whether I’m just reinventing the wheel with this example. I suspect all the points in this post have been made elsewhere – I’m a little out of my expertise here.) [UPDATE: As Matt Weiner points out in comments, the case I describe is very similar to one he describes “here”:http://mattweiner.net/blog/archives/000508.html, without appealing to beliefs. So I’m _certainly_ not original!]

Papers Blog – May 14

The “papers blog”:http://opp.weatherson.org/archives/004371.html is up again. It has been getting a little slack here, but hopefully I’m not falling so far behind that it becomes utterly useless.

Richard Heck’s “PhOnline”:http://phonline.org/index.php is usually more up to date, because philosophers post their own papers there. (Though still not as many people post there as I would like.) Anyone following OPP should also be following PhOnline.

Truer

“Jon Kvanvig is rather mad”:http://bengal-ng.missouri.edu/~kvanvigj/certain_doubts/index.php?p=327#more-327 at some things Donald Kagan said in his Jefferson lecture arguing for (or at least asserting) the preeminence of history in the humanities. I don’t want to get into a war about what the leading humanities department should be (I don’t think it should be philosophy because I don’t think it’s part of the humanities, but never mind that) but I do want to agree with one thing Kagan said.

bq. [S]ome things [are] truer than others.

Truer words were never spoken!

Law, Philosophy and Naturalism Conference

Some readers may be interested in a conference on naturalism in law and philosophy to be held at Rutgers next month, June 7 to be precise. It’s a fairly impressive speaker list, including the blogworld’s own Brian Leiter, as well as Michael Smith, Jerry Fodor, Stephen Stich and many others. This link provides many more details about the program as well as the logistics. In an earlier post, Brian Leiter said, “As things stand, Stich and I will be carrying the flag for naturalism, with the Williamses and Zipursky representing the forces of retrograde philosophy!” As a proud reactionary, I’m hoping some of the other philosophers and legal scholars present, especially Michael Smith (who has never seemed averse to good a priori theorising) can help hold the fort.

Context and Questions

In their sustained “defence of insensitivity”:http://www.amazon.com/exec/obidos/ASIN/1405126752/caoineorg-20?creative=327641&camp=14573&link_code=as1, Cappelen and Lepore rely quite a bit on indirect speech reports. So, for instance, the fact that the following bit of discourse always seems natural

bq. A: S knows that p
B [later, in different context]: A said that S knows that p

is taken as evidence against contextualism. I think this is fairly strong evidence, but not everyone agrees. It might be argued that speech reports are messy things, and no one really understands them. Fair enough, perhaps. It might be worth noting though that the same kind of phenomenon occurs with questions. Consider the following scenario. A is walking aimlessly around Ithaca. Every five minutes, she asks “Is Tamar here?”. All through this time, Tamar is working in her office in Goldwin Smith Hall, so her location doesn’t change. B, who knows this, changes her answer to A’s question, depending on whether or not A is in Goldwin Smith Hall. When A asks the question downtown, B says “No”. When she asks it again in Goldwin Smith, B says “Yes”. When she asks it yet again while climbing down Cascadilla Gorge, B says “No”. ‘Here’ is a genuinely context-sensitive term.

Note that if A and B are separated, B will answer according to A’s location, not her own. So if A is downtown (and B knows this) and is talking to B by phone, and asks “Is Tamar here?”, if B is in Goldwin Smith she can answer, “No, Tamar is here.” For any context sensitive term such that different speakers are in different contexts, this kind of speech act, where we answer “No” and then follow up by uttering the sentence that looks like the indicative form of the question, is possible.

Here we have two tests for context sensitivity. First test, can we change answers while the underlying facts stay the same? Second test, can we consistently answer “No” and then repeat the question? It seems ‘knows’ fails both tests for context-sensitivity. Since neither case involves speech reports, this means the contextualist has to posit semantic blindness that extends even to fairly simple question and answer conversations. Let’s see a couple of cases illustrating this.

Knowledge of Lottery Results

A lot of people seem to have the intuition that you can’t, in ordinary circumstances, know that a particular lottery ticket will lose. Dana Nelkin, in a 2000 _Philosophical Review_ paper says this is because the belief you have is based solely on statistical evidence. (She says you can’t even have a justified belief to this effect.) Duncan Pritchard (in “this paper (PDF)”:http://www.philosophy.stir.ac.uk/staff/duncan-pritchard/documents/KnowledgeLuckLotteries.pdf) says that it is because the possible world in which the ticket wins is too similar to the actual world. As he says…

bq. After all, the possible world in which I win the lottery is a world just like this one, where all that need be different is that a few coloured balls fall in a slightly different configuration.

I rather doubt both of these explanations. I think the intuition that you can’t know you’ll lose is a bit of bad scepticism. To test that, I want to see how intuitions go on the following kind of lottery. This is a real-world case by the way, a bit of found philosophy. On Monday through Saturday, the lotteries in Australia are based around coloured balls falling in distinctive configurations. But on Sundays things are different, as “this site”:http://www.ozlotteries.com/play.php?lottery_id=6 explains.

bq. Sunday Lotto (or ‘Soccerpools’) is based on Australian and European soccer matches. You don’t need to know anything about Soccer though – it can be played just like a normal lottery game. Each week, 38 matches are listed and numbered 1 to 38 inclusive. For a standard lottery, you choose 6 numbers from the range of 1 to 38. The 6 matches that accumulate to the highest total drawn scores are the winning numbers (e.g. Match A with a final score of 4-4 has a higher total score than Match B which finished 3-3). The 7th highest result is the supplementary number.
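As a back-of-the-envelope check (my calculation, not from the lottery site): if the Sunday Lotto really were played as a uniform draw, the chance of a given ticket matching all six winning numbers out of 38 would be one in C(38, 6). Of course, the outcomes here are not uniform – the match results skew which numbers come up – and that is precisely what the case trades on.

```python
from math import comb

# Odds of a single ticket winning a uniform choose-6-from-38 lottery.
combinations = comb(38, 6)
probability = 1 / combinations

print(combinations)  # 2760681
print(probability)   # about 3.6e-07
```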

Imagine I’m looking at a particular ticket, say the ticket of someone who does play the Sunday Lotto as a lottery, and I believe it won’t win. Could this be knowledge?

Pritchard says that it isn’t knowledge if there is a nearby world in which it wins. But imagine that (unbeknownst to me or the buyer) for this ticket to win requires there to be a score draw between Chelsea (at home) and a relegation-threatened team. Chelsea normally win these one-sided games, and they very rarely concede a goal. So the worlds in which this ticket wins are rather remote. Is that sufficient for my knowing the ticket will lose? Do I have to believe it will lose because of these facts about Chelsea to know it will lose? Immediately following the quote above, Pritchard says

bq. Crucially, however, the _nearness_ of the relevant possible worlds has an impact on our judgements about the presence of luck.

That doesn’t seem right. If I believe this ticket will lose, and it turns out (because one of the numbers corresponds to the Chelsea vs scrubs game) that the nearest world in which it wins is a long way away, our judgments about the luckiness of my belief don’t seem to change. Or at least they don’t to me. Known distance from the actual world matters more than actual distance, I think, for determining whether my belief is true by luck or not.

If I just believe this ticket will lose for standard lottery reasons, then Nelkin will still say I don’t know it will lose. But by her standards, all I need to do is have a minimal amount of knowledge about the underlying games in order to genuinely know the ticket will lose. And that doesn’t seem right either. Unless I’m deeply involved in fixing the games or some such, I think intuitions about the cases are that I can no more know a particular Sunday Lotto ticket will lose than I can know a Saturday Lotto ticket will lose, even if I know a little bit about football.

If this is right about the intuitions, one of three things follows.

# We reject both intuitions (the one about Sunday and the one about Saturday) as being bad sceptical intuitions; or
# We find a way to distinguish Saturday from Sunday lotteries; or
# We find a new explanation for what is wrong with beliefs about lotteries.

I’m all for option 1, but obviously it isn’t the only option on the table.