Synthetic A Priori

I’ve been reading Scott Soames’s 20th Century history books, and I’ve been surprised by a few things. Here’s one little New Year’s Eve puzzle for you that arises out of some things in the books: did Kripke show that there are synthetic a priori propositions?

At various places Soames seems to take this to be important. It’s a mistake, he seems to say, to identify the analytic and the a priori. Not as big a mistake as identifying the necessary with either of these, but still a mistake. (At least he seems to say this is a mistake in the discussion of Wittgenstein – I’d be happy to have it shown I’ve misinterpreted him here.) But we never get a conclusive example of a synthetic a priori proposition.

I’ve “argued previously”:http://brian.weatherson.org/sre.pdf that propositions like _I’m not a brain in a vat_ are knowable a priori, though they are pretty clearly synthetic. And I’m disposed to think that mathematical truths are synthetic a priori, as are some metaphysical principles like _There is no metaphysical vagueness_ and _Any two objects have a fusion_. So I’m happy the analytic and the a priori are separate. But Soames doesn’t discuss these, and nor does Kripke, so they don’t show that _Kripke_ showed the two concepts are distinct. (I’m bracketing here discussion of whether Kripke _couldn’t_ have _shown_ the two are distinct because showing in this sense implies novelty, and Kant beat him to it.)

Soames does discuss examples like _The metre stick is a metre long_ and argues, convincingly in my view, that these are not contingent a priori. He also argues, again I think convincingly, that propositions like _Snow is white iff snow is actually white_ are *contingent* a priori. Is that enough?

Well, that depends on how we view the case. Two options arise. First, we might say that all contingent propositions are synthetic, and hence this is an example of the synthetic a priori. But there’s another option, which is to say _Snow is white iff snow is actually white_ is an example of the contingent analytic. Why should we believe that? Well, one reason is that the argument Soames gives for it being a priori knowable (and hence true) seems only to rest on premises about the meanings of terms involved, especially of the _actually_ operator. So it looks to be analytic. That would suggest there are no Kripkean examples of the synthetic a priori.
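The behaviour of the _actually_ operator here can be made vivid with a toy possible-worlds model. This is a minimal sketch, not anything from Soames or Kripke: the three worlds and the assignment of facts to them are invented purely for illustration. The point it shows is that _p iff actually p_ is true at the actual world no matter which world is actual (hence knowable a priori on semantic grounds alone), while still being false at some worlds (hence contingent).

```python
# Toy possible-worlds model for "Snow is white iff snow is actually white".
# The worlds and the facts assigned to them are hypothetical.

worlds = ["w1", "w2", "w3"]
snow_is_white = {"w1": True, "w2": False, "w3": True}  # invented facts

def biconditional(p, actual_world):
    """Truth of 'p iff actually p' at each world, with the actual world fixed.

    'actually p' is rigid: at every world it takes p's value at @.
    """
    return {w: p[w] == p[actual_world] for w in worlds}

actual = "w1"
truth = biconditional(snow_is_white, actual)

# True at the actual world -- and this holds however the facts turn out,
# i.e. whichever world we designate as actual. That is the a priori part.
assert truth[actual] is True
assert all(biconditional(snow_is_white, w)[w] for w in worlds)

# But false at some world (here w2, where snow isn't white but 'actually
# white' still holds). That is the contingency part.
assert truth["w2"] is False
```

The sketch also makes clear why one might call the proposition analytic: nothing beyond the rigid semantics of _actually_ is needed to secure its truth at the actual world.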

Now that I’ve written all this it strikes me that there must be literature on this question somewhere. But I’ll leave the lit search to the new year.

Happy and safe New Year everyone!

JFP Analysis 2004-5

By now the APA interviews are in the books, so analysis of _Jobs for Philosophers_ is a little out-of-date. But hopefully this is still of some historical interest going forward.

bq. “Analysis of jobs advertised in _Jobs for Philosophers_ October and November 2004”:http://brian.weatherson.org/jfp2004a.htm

The most striking thing to me was the paucity of jobs in logic. I don’t know if that’s compensated for by people using open area searches to hire logic people, or if departments are thinking that logic is not a pressing need, or if it’s simply random variation. Apart from that the numbers are pretty much as you might have expected.

On more sombre notes, if you want to donate money to earthquake/tsunami victims, there are a number of good links “here”:http://www.crookedtimber.org/archives/003044.html.

CFP: Workshop on Relativising Truth

I’ll be going to a conference on relative truth in Barcelona next September. Here’s the announcement and call for papers. I like the company I’m being grouped with!

bq.. 5-7 Sept 2005, Barcelona

Fregean orthodoxy has it that the contents of speech (thoughts) have absolute truth-values. If one thinks one has identified the content of an utterance and the presumed content is one whose truth value is still relative to some parameter, then one has not succeeded in identifying the content. Most philosophers of language follow this Fregean principle even today. Those who prefer not to speak of Fregean contents usually accept an analogous principle concerning utterances: that utterances of declarative sentences have absolute truth-values.

This orthodoxy has recently been challenged for a variety of different reasons. Some claim that relativizing the truth of utterances to moments of assessment is the only good way to avoid determinism. Some claim that the only way to make room for faultless disagreement is to relativize the truth of propositions. Some claim that the best semantics for epistemic modals involves relativised truth at the level of utterances. Some forms of supervaluationism about vagueness might also be seen as employing this strategy. There are further potential examples.

Thus relativizing utterance or propositional truth is a novel semantic strategy which is motivated by a variety of different phenomena. The purpose of this workshop is to bring together some proponents (and possibly opponents) in order to discuss any aspect of this topic.

Current list of contributors:

Kit Fine (NYU)
Manuel Garcia-Carpintero (Barcelona)
Andrea Iacona (Vercelli/Columbia)
Max Kölbel (Birmingham/Barcelona)
John MacFarlane (Berkeley)
Brian Weatherson (Cornell)

There is space for several further papers. If you are interested in presenting your work on this topic then please submit an abstract (ca. 1000 words) of your intended presentation by 15 April 2005 to “Max Kolbel”:mailto:m.kolbel@bham.ac.uk.

or

“Relativizing Utterance Truth”
c/o LOGOS research group
Dept. de Logica, Historia i Filosofia de la Ciencia
Facultat de Filosofia
Universitat de Barcelona
Baldiri i Reixac s/n
08028 Barcelona
Spain

There will be a limited number of rooms available at the centrally located “Residencia de Investigadors”, where the workshop will take place. Rooms there will cost 58 Euros (single) or 81 Euros (double) per night. No conference fee is planned.

There will shortly be a web page with information about the workshop accessible through the “LOGOS web site”:http://www.ub.es/grc_logos/.

Christmas in Manoguayabo

Since it’s the season for spreading good news stories, here’s a “delightful story about Pedro Martínez”:http://www.nytimes.com/2004/12/23/sports/baseball/23pedro.html?ex=1261544400&en=734c78e5d89f0103&ei=5090&partner=rssuserland and the resources he’s put back into his home town of Manoguayabo. It’s easy to feel jealous (or worse) towards sports stars for all the money they earn, but these feelings are hard to maintain when the star does so much good with the money.

For years Pedro has been my favourite player on my favourite (non-Australian) sporting team, and it was rather sad when he left so he could get more money from the New York Mets. But it’s hard to feel bad about Pedro getting the extra $13 million or so the Mets were offering when so much of it will be returned to Manoguayabo.

Ethics and Neurology

In the long Philosophical Perspectives thread there was very little discussion of the actual papers in the volume, so I thought it might be time for an ethics post around here to move the discussions back to philosophy. In particular, I wanted to note one possible complication arising out of “the paper Andy and I contributed”:http://brian.weatherson.org/prank.pdf.

A lot of people think that the way to do ethical epistemology is to systematise intuitions about a range of cases. One of the points Andy and I were making was that if you’re playing this game, it really might matter just which cases you focus on. Focus on life-and-death cases and you might get a different theory from the one you get if you focus on cases of everyday morality. This probably isn’t too surprising – I imagine a lot of people think consequentialism is at least extensionally correct for everyday matters, while some kind of deontological theory is needed for life-and-death cases. That is, I imagine there are lots of people who are happy with consequentialism plus rights as trumps, where the rights in question are only in danger of being violated in life-and-death cases. This is hardly a majority view, but it’s not a surprising view. What was odd about our position was that we went the other way, arguing that a form of consequentialism (and maybe even of Consequentialism) was extensionally adequate in life-and-death cases, but failed to give the right answers when thinking about some everyday pranksters. (Actually we were neutral on whether this consequentialist theory was _extensionally_ adequate, since the counterexamples we had in mind might not be actual. But it failed to be extensionally adequate in a nearby world.)

Bracketing the details of the cases for now, it’s worthwhile to stop back and reflect on what this should tell us about methodology. In particular, I want to think about what would happen if we found out the following things were true.

* Systematising intuitions about life-and-death cases supported moral theory X
* Systematising intuitions about everyday cases supported moral theory Y, which is inconsistent with X
* The reason for the divergence is that different parts of the brain are involved with forming moral intuitions about everyday cases as compared to life-and-death cases; everyday cases are handled by a part of the brain generally associated with cognition, life-and-death cases by a part of the brain generally associated with emotional response

The third point is an enormous oversimplification of the neurology – it’s not really true there’s a part of the brain for cognition I guess, and the divide between emotionally loaded cases and non-loaded cases doesn’t exactly track the everyday/life-and-death distinction – but from what I’m told it’s not entirely off base. There are different parts of the brain that are at work in different moral cases. (Thanks to Tamar for pointing me to the studies showing this.) And the different parts are differentially correlated with emotional response. So figuring out what to do in such a case might be of some practical import.

As I see it there are four possible responses.

First, we might take this kind of result to be evidence that we were wrong all along in thinking moral epistemology should be based around intuitions in this way. There’s something to be said for that view, though I won’t have anything useful to say about it here.

Second, we could adopt a relatively weak form of particularism, one which said not that there are necessarily no general moral principles, but there are no general principles that you can support from one kind of case that have application in a different kind of case. The idea would be that whatever we learn about life-and-death cases tells us about life-and-death cases and nothing more, so the possibility that theories X and Y above could be genuinely _inconsistent_ vanishes. I think this is a reasonable view to take I guess.

Third and fourth, we could come up with arguments for one or other of X and Y being more firmly supported by the intuitions. Which way we go here will depend, I think, on how great a role we think emotional response should play in moral epistemology. On the one hand, it is odd to think our sober reasoned judgments could have to be corrected by the judgments we make under emotional duress. (And I take it part of the point of the neurological studies is that even considering some of the cases ethicists work with does constitute at least a mild form of emotional duress.) On the other hand, it seems a moral theory coldly detached from our emotional bond with the world is somehow deficient, that moral judgments at some level are _meant_ to carry emotional commitment with them.

I don’t have any ideas for how we should proceed at this point, I think it is just a hard question. But if the neurological data suggests that moral intuitions are radically diverse in their origins, it is a question that we intuition-synthesisers will have to address sooner rather than later.

Game Theory and Uncertainty

This semester I ended up teaching some stuff on the risk/uncertainty distinction and some stuff on game theory in fairly quick succession, and that got me thinking about how the two might interact. In particular, I started thinking about what would happen if you expanded the definition of mixed strategies to include strategies where each option was given an uncertain, as opposed to merely a risky, probability of being played. (E.g. you play A if p is true and B if ~p is true, where p is a proposition about which you don’t have a numerical probability judgment.)

The upshot is that in some cases you radically expand the range of Nash equilibria. In particular, I discovered a game that by orthodox lights has a unique Nash equilibrium, a pure strategy of playing option C, but which suddenly acquires a new equilibrium in which both players play a mixture of A and B. Just what the philosophical implications of this are is unclear, and I haven’t done much to clarify them. In earlier work I’d argued that as important as the risk/uncertainty distinction is to epistemology, it isn’t that important to decision theory. Indeed, I argued it can be effectively ignored in decision theory. If you need to pay careful attention to the risk/uncertainty distinction to work out all the Nash equilibria in a game, that’s more evidence that these equilibrium concepts that game theorists toss around are radical departures from standard single-person decision theory. I could be totally confused here though, and I’m much more confident in the technical result than in any philosophical conclusions drawn from it. The technical part is written up in this short note.

bq. “Game Playing Under Ignorance”:http://brian.weatherson.org/gpui.pdf
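For readers who want the orthodox baseline spelled out, here is a minimal sketch of the standard pure-strategy Nash equilibrium check. The game used is the textbook Prisoner’s Dilemma, not the game from the note above (whose payoffs I haven’t reproduced); the point is just to show the machinery that the uncertainty-expanded definition of mixed strategies would then generalise.

```python
# Orthodox pure-strategy Nash equilibrium check, illustrated on the
# textbook Prisoner's Dilemma (a stand-in, not the game from the note).

payoffs = {  # (row_strategy, col_strategy) -> (row_payoff, col_payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def is_nash(row, col):
    """A profile is a Nash equilibrium iff neither player can gain
    by unilaterally deviating to another strategy."""
    r_pay, c_pay = payoffs[(row, col)]
    row_ok = all(payoffs[(r, col)][0] <= r_pay for r in strategies)
    col_ok = all(payoffs[(row, c)][1] <= c_pay for c in strategies)
    return row_ok and col_ok

equilibria = [(r, c) for r in strategies for c in strategies if is_nash(r, c)]
assert equilibria == [("D", "D")]  # the unique pure-strategy equilibrium
```

The move in the note amounts to enlarging the strategy space this check quantifies over: mixed strategies whose mixing probabilities are merely uncertain, rather than numerically risky, can satisfy the no-profitable-deviation condition where orthodox mixtures cannot.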

Hiatus

I’m in Australia for a while, so the papers blog won’t be updated over the break, and TAR will be updating rarely. The good news (apart from the inherent goodness in being in Australia in summer) is that I got work done on the flight over for the first time ever. As well as doing some reading, I wrote a paper!

bq. “Vagueness as Indeterminacy”:http://brian.weatherson.org/vai.pdf

Since it was written on a plane I had to guess/leave out a lot of the references, but I hope it makes sense as it stands. The paper is a response to recent work by Patrick Greenough, Nick Smith and Matti Eklund, all of whom argue that we can’t identify vagueness with indeterminacy. I argue that we can and should. The most interesting part of the paper is a long list of examples of vague words that don’t behave at all the same way as the paradigmatic examples of vague words you see in most philosophical work. I was writing this while reading _Sense and Sensibilia_ and the paper feels a little Austinian in my constant complaints about other people not having a sufficiently broad range of examples.

From the SEP

I just wanted to pass along some good news from the “Stanford Encyclopaedia”:http://plato.stanford.edu/ that some of you may not have heard. (This is from an email to authors that I imagine is meant to be basically public.)

bq. We are delighted to announce that the National Endowment for the Humanities has awarded a $500,000 Challenge Grant to core library organizations which are building support for the Stanford Encyclopedia of Philosophy (SEP). The terms of the grant require these library organizations to raise $1.5 million in matching funds from their member libraries.

Definite Descriptions and NPIs

I “agree with Kai”:http://semantics-online.org/blog/2004/12/rothschild_on_definites_and_npis that “Daniel Rothschild’s paper on definite descriptions and NPIs”:http://www.princeton.edu/%7Edrothsch/npidd.pdf looks very interesting, and worth much consideration. Two quick related comments on it.

(UPDATE: Daniel has posted a “longer version of the paper”:http://www.princeton.edu/~drothsch/NPIrev2.pdf which interested parties should look at. And see the comments for several corrections to misstatements I make in the post.)

First, there’s no discussion of any theory of negative polarity licencing apart from Ladusaw’s. Now Ladusaw’s theory is very good, but it isn’t the only theory on the market; it faces difficulties, and back when I looked at NPIs seriously (around 1996, I believe) it didn’t even seem to be the majority view. (That is, it didn’t seem to be the majority view of people actively writing on it. That’s consistent with its being the majority view of most linguists. Active workers in a field usually oversubscribe to fringe theories.)

Second, one of the difficulties for Ladusaw’s view is handling negative polarity in the antecedents of conditionals, as in (1) or (2).

(1) If John thinks about the puzzle at all, he will solve it.
(2) If John had thought about the puzzle at all, he would have solved it. (See below for second thoughts on this one.)

These aren’t downward entailing, but as we see they are NPI licencing. In many respects conditionals behave semantically like Russellian definite descriptions. (1) is similar (if not identical) to the claim “In the nearest possibility that John thinks about the puzzle at all, he will solve it”. So the worry for Rothschild’s objection is that a story about NPIs that explains what they do in conditionals might, somehow, help the Russellian.
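The claim that conditional antecedents aren’t downward entailing can be checked on a small finite model. This is a sketch under invented assumptions: three arbitrary worlds, a made-up similarity ordering, and a Stalnaker-style closest-world clause for the conditional. It verifies that negation reverses entailment (Ladusaw’s paradigm DE environment) while the antecedent position of the conditional does not (antecedent strengthening fails).

```python
# Finite model: negation is downward entailing, the antecedent position
# of a closest-world conditional is not. Worlds and ordering are invented.

worlds = [0, 1, 2]
# closest[w] lists the worlds in order of similarity to w (w itself first)
closest = {0: [0, 1, 2], 1: [1, 2, 0], 2: [2, 0, 1]}

def entails(p, q):
    return p <= q  # p entails q iff p's worlds are a subset of q's

def negate(p):
    return set(worlds) - p

def conditional(p, q):
    """Worlds where the closest p-world is a q-world (Stalnaker-style)."""
    out = set()
    for w in worlds:
        nearest = next((v for v in closest[w] if v in p), None)
        if nearest is not None and nearest in q:
            out.add(w)
    return out

p, q = {0}, {0, 1}  # p entails q
assert entails(p, q)

# Downward entailing position: strengthening preserves truth downward,
# i.e. if p entails q then not-q entails not-p.
assert entails(negate(q), negate(p))

# Antecedent position: 'if q, r' can hold at worlds where 'if p, r' fails,
# so the entailment is not reversed and the position is not DE.
r = {1}
assert not entails(conditional(q, r), conditional(p, r))
```

Since Ladusaw’s account predicts NPI licencing only in DE positions, and (1) and (2) licence NPIs anyway, something beyond downward entailment is doing the work in conditionals.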

Two big on the other hands…

Since the problem with indicatives is that they suggest the Ladusaw account is too restrictive, and the problem for the Russellian is that Russellian theories make Ladusaw’s account too permissive, it isn’t clear how fixing the account to get conditionals to work is going to help. But it might help.

There’s of course a simple explanation for why NPIs are licenced in the antecedents of subjunctives – subjunctives implicate the negations of their antecedents. If I wasn’t so lazy I’d find a dozen references of people offering this simple explanation. I always thought there was a simple reason that explanation didn’t work – subjunctives _don’t_ in general implicate the negations of their antecedents. But just as I was writing this up, I noticed that reply won’t work. It won’t work because when the implication from “Had p, would q” to not-p is blocked, so is the licencing of NPIs in p. Compare (2) and (3).

(2) If John had thought about the puzzle at all, he would have solved it.
(3) If John had thought about the puzzle at all, things would be exactly as they are.

(3) is a ‘forensic’ counterfactual of the sort discussed in Alan Ross Anderson’s 1951 Analysis paper. (I think it’s 1951, I don’t have the reference in front of me.) It’s exactly the kind of conditional that shows the (pragmatic) inference from “Had p, would q” to not-p is not universal. And it doesn’t licence NPIs. Maybe the simple explanation, which is of course consistent with Ladusaw’s theory, is right after all.

Final point. Rothschild is entirely right that a theory of DDs should take NPIs very seriously, though I don’t think his evidence (that Ladusaw’s seminal paper turns up in anthologies) provides great reason to believe that. NPIs are basically gifts from God to the semanticist – they provide non-trivial, non-obvious tests of semantic hypotheses that probably weren’t what theories were originally designed to capture, but which can’t easily be explained away. There aren’t many of those in semantics. Compare the long-running disputes over what Russellians can say about “The table is covered in books”, where there are (fittingly) too many rather than too few “explanations away”. NPI tests are (relatively speaking) good clean tests for whether a semantic theory works, and if Russellian theories of definite descriptions don’t, then those theories are wrong.