Intuitions

This is a very badly worked out laundry list of ideas for my paper on intuitions. Most of it falls under the category of responses to the “Weinberg, Nichols, and Stich”:http://ruccs.rutgers.edu/ArchiveFolder/Research%20Group/Publications/NEI/NEIPT.html experiments, but some of it is probably just repetition.

Moral Concepts and Meanings

I’m pretty sure this is discussed somewhere, but maybe it hasn’t been, so let’s try.

It’s (very) plausible that someone can share our moral concepts and disagree, perhaps extremely, about how they apply. Osama bin Laden doesn’t mean something different to what I do by ‘good’, he just has wild views about which kinds of actions are and aren’t good. The proof of this, if it’s needed, is that when he says “Killing Westerners is good”, he’s revealing he has different morals to me, not a different language. (Well, he has some different languages to me, but when we’re both speaking English we mean the same thing by good.)

It’s also plausible that some people can share our moral concepts and disagree, perhaps extremely, about the conceptual connections between moral belief and action. This is just David Brink’s case of the amoralist.

It’s not plausible, or at least not to me, that someone could share our moral concepts but differ extremely in *both* which things they apply to and what their connection to action is. That is, someone who said things like “Killing Westerners is good”, “Supporting democracy is bad” etc., but wasn’t at all moved to kill Westerners or undermine support for democracies would, I think, mean something different to us by “good” and “bad”.

Perhaps we can imagine such a person. Imagine an amoralist in Al-Qaeda land, who goes around saying “Killing Westerners is good” and so on, but is completely unmotivated, even denies that the goodness of killing Westerners provides her with a reason to actually go and kill Westerners. Perhaps she would be just like Brink’s amoralist, and perhaps she would mean what we mean by “good”. But the case looks very marginal.

All of this does make me think that the ‘scare quotes’ response to Brink is the right one. If we can only make sense of the amoralist as expressing moral concepts when her moral expressions match up with moral orthodoxy, then it’s plausible that by “good” she just means something like “usually called _good_”.

Switzerland!

Here’s the “program”:http://www.unifr.ch/philo/modern-contemporary/3eme_cycle/programme.html for a workshop on intuitions I’m attending in Fribourg in November. It looks very exciting, though I might get a little overwhelmed by my co-panelists!

I Don’t Understand Essentialists

I’m reading “Kathleen Stock’s”:http://www.sussex.ac.uk/Users/kms21/ paper _The Tower of Goldbach and Other Impossible Tales_ from _Imagination, Philosophy and the Arts_, and I just wanted to share one odd example. Stock argues that we cannot imagine impossible things. I might say a little more about this below. But for now I just want to note a very odd example of an (alleged!) impossibility.

bq. I want to deny that one can imagine that _a banana is a gun_, in the sense that one imagines that _there is an object such that it is both a gun and a banana_.

Setting aside questions of whether this is imaginable, this doesn’t even strike me as prima facie impossible. If it’s possible to have a machine-gun cane, as “Secret Squirrel”:http://www.workingforchange.com/comic.cfm?itemid=16378 did, why not a banana revolver? If we distributed all the workings of a gun throughout a banana, wouldn’t we have a banana gun? With miniaturisation these days, I imagine this is, or soon will be, technologically possible. (Note to airport security – I obviously have no idea what I’m talking about vis-à-vis the technological challenges involved. No banana _I_ take on board is also a gun.)

I guess the idea is that there would be two separate objects in such a construction, one of them a banana, the other a gun. But if the gun mechanisms were in part held together by the banana, and none of the mechanisms notably protruded from the skin of the banana (the nozzle being built into the stem) it seems to me we’d have a single ordinary object. And it would certainly be a gun. And I don’t think it becomes an ex-banana by putting things inside it, especially if they are very small relative to the size of the banana. This would make it inedible, but that’s no problem. A poisoned banana is still a banana. So why think the banana gun is an impossibility?

My diagnosis, and it’s horribly uncharitable, is that this is what happens when essentialism goes too far. People become convinced that objects have some of their properties essentially, and go overboard about how many such properties they essentially have. On occasions like this I’m inclined to react to the other extreme, and join Lewis in the class (plurality) of modern anti-essentialists.

Two-Envelopes and Variables

“Eric Schwitzgebel”:http://www.faculty.ucr.edu/~eschwitz/ and “Josh Dever”:https://webspace.utexas.edu/deverj/personal/dever.html have “a paper on the two-envelope paradox”:http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/TwoEnvelope.htm arguing that the paradox arises because of faulty reasoning involving variables. They note that if we impose a constraint on which variables can be used in decision-theoretic reasoning, the paradoxical reasoning is blocked. I won’t repeat the formal version of the constraint (from page 4 of the paper) in HTML. But the effect is that X is only a legitimate variable if “the expected value of X is the same conditional on each event in the partition.” The problem is then that the paradoxical reasoning essentially involves appeal to a variable that does not satisfy this constraint.
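To see roughly how the constraint works, here’s a toy sketch of my own (the setup and all names are mine, not Schwitzgebel and Dever’s, and the $10/$20 amounts are just for illustration). A variable is legitimate only if its expected value is the same conditional on each event in the partition, and the variable that drives the paradoxical “switching” step fails that test:

```python
from fractions import Fraction

def conditional_expectations(partition):
    """Expected value of a variable conditional on each event.

    `partition` maps event names to lists of (value, probability) pairs,
    where the probabilities within each event sum to 1.
    """
    return {
        event: sum(Fraction(p) * v for v, p in outcomes)
        for event, outcomes in partition.items()
    }

# Toy two-envelope setup: the envelopes hold $10 and $20.
# Partition: A = "my envelope holds the smaller amount ($10)",
#            B = "my envelope holds the larger amount ($20)".
# Variable Y = the amount in the *other* envelope.
y_given = conditional_expectations({"A": [(20, 1)], "B": [(10, 1)]})

# E[Y|A] = 20 but E[Y|B] = 10, so Y violates the constraint, and the
# paradoxical step "switching gets me the other envelope's amount Y,
# which is 2X or X/2" is blocked.
print(y_given["A"] != y_given["B"])  # True: Y is not a legitimate variable
```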

As an aside, this _kind_ of response is not entirely uncommon in the two-envelope literature, so it’s worth taking seriously. And Schwitzgebel and Dever’s version of the response is by far the most careful and plausible I have seen. (And it’s probably the earliest such version, given their note in the paper that they discussed this with people in Berkeley in 1993. Given the history of the two-envelope discussion, where so much happens online, etc., this kind of fact seems quite relevant to priority, if priority matters at all here.) But it still seems flawed.

Here’s the reason. It’s true that their constraint blocks the paradoxical reasoning. But getting a constraint with that property is dead easy. Just say that any decision-theoretic reasoning is invalid and you’ll do that. The hard part is finding a constraint that knocks out the two-envelope reasoning, but not any reasoning that we want, both intuitively and on reflection, to preserve. And I think Schwitzgebel and Dever’s constraint fails that test.

Consider the following example. God partitions the reals in [0, 1] into two unmeasurable sets, S1 and S2. He picks a real at random from [0, 1]. If it’s in S1, He puts $10 into a red envelope, if it’s in S2 He puts $20 into that red envelope. He then rolls two fair and independent dice. If they land double-six, he puts an amount into a blue envelope equal to the amount in the red envelope plus $5. Otherwise, he puts an amount into that blue envelope equal to $5 less than the amount in the red envelope. Got it? (It’s easier with tables, but tables are hard in blogs.)

You are not told which number He picked, or how the dice landed, but you are told all of the above. You are then given a choice of the red or blue envelopes. How should you choose?

I take it that it’s obvious you should pick the red envelope. After all, whatever is in it, you have a 35/36 chance of getting $5 less with blue, and only a 1/36 chance of getting more. So I say, pick red.
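If it helps, here’s the dominance reasoning spelled out numerically (my own formalisation of the case above). The point is that conditional on _either_ possible red amount, blue’s expectation is lower, even though neither envelope has a defined unconditional expectation, since S1 and S2 are unmeasurable:

```python
from fractions import Fraction

P_DOUBLE_SIX = Fraction(1, 36)   # dice land double-six: blue = red + $5
P_OTHERWISE = Fraction(35, 36)   # any other roll: blue = red - $5

for red in (10, 20):
    e_blue_given_red = P_DOUBLE_SIX * (red + 5) + P_OTHERWISE * (red - 5)
    # Conditional on either red amount, blue expects red - 85/18,
    # i.e. roughly $4.72 less than red.
    assert red - e_blue_given_red == Fraction(85, 18)
    print(red, float(e_blue_given_red))
```

No averaging over the red amounts is needed (or possible), which is exactly why the reasoning survives the failure of expected utility.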

But Schwitzgebel and Dever can’t say that. For they say the above reasoning violates their constraints on which variables can be used. (Or, more precisely, that any formalised version of the above reasoning would do so.) As near as I can tell, the reasoning I just made is just as bad, by their lights, as the paradoxical two-envelope reasoning.

As I see it, they are now under an obligation. For it seems obvious that red is better than blue, so they should tell us what principle they *do* endorse that gets that conclusion. It can’t just be the principle _Always maximise expected utility_, since in this case neither picking red nor picking blue has a defined expected utility. And, although this might just be a failure of imagination on my part, I can’t see what else it might be.

While I’m in this combative mood, I should also note that this example casts some doubt on _any_ attempt to resolve the two-envelope paradox by appeal to expected utility reasoning. For the two-envelope paradox rests on principles that are plausible in cases like this one, even when expected utility reasoning fails. I’ll be polite/lazy enough to not quote anyone who actually does try and solve the problem that way.

Josh Dever

I just noticed that “Josh Dever”:https://webspace.utexas.edu/deverj/personal/dever.html has a website with a number of interesting papers, many of them on *vagueness*, on it. It’s been added to the list of pages being tracked. Today’s “papers blog”:http://opp.weatherson.org is also up, with an unusual (for the papers blog!) focus on history.

New Paper

Another draft paper for me. So drafty it might not even go on my papers page, but since it is more carefully crafted than a blogpost, it will go on the blog.

bq. “Moore, Bradley and Indicative Conditionals”:http://brian.weatherson.org/mbaic.pdf

Nietzsche’s Moral and Political Philosophy

Brian Leiter has posted the Stanford Encyclopaedia entry on “Nietzsche’s Moral and Political Philosophy”:http://plato.stanford.edu/entries/nietzsche-moral-political/. If it has anything like the “page views of the main Nietzsche entry”:http://plato.stanford.edu/usage/12279.html these might be the most widely read words he’s ever written.

Harman vs Peacocke on Lewis on Conditionals

As regular readers will know, I tend to be a quivering Milquetoast when it comes to philosophical disputes. And most of the time I tend to expect my colleagues in the profession to behave similarly. So I was shocked, _shocked_, to see this paragraph in Gil Harman’s “review of Peacocke’s _The Realm of Reason_”:http://www.princeton.edu/~harman/Papers/Peacocke.pdf.

bq. At another point, Peacocke says, “There is no plausible truth-conditional content for the indicative conditional.” with a footnote reference, “See D. Lewis, ‘Probabilities of Conditionals and Conditional Probabilities’, _Philosophical Review_, 85 (1976), 297-315, and ‘Probabilities of Conditionals and Conditional Probabilities II’, _Philosophical Review_, 95 (1986), 581-9.” Peacocke gives no further explanation. This completely misrepresents the content of the two papers and Lewis’ view about indicative conditionals.

I had mostly forgotten about this bit of the book. (And it took me a while to find it again because the index references to Lewis don’t include the relevant page. 14, if you’re wondering.) When I first saw it I didn’t make too much of it. Here’s a quick telling of the back story to say why.

What Lewis proved in those papers was that (given some more or less undeniable assumptions) there is no function _f_ from pairs of propositions to propositions such that the probability of f(p, q) always equals the probability of _q_ given _p_.
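For the record, here’s my compressed reconstruction of the triviality argument (it elides the exact closure assumptions Lewis uses, so treat it as a sketch, not his proof):

```latex
% Assume P(f(p,q)) = P(q \mid p) for every P in a class closed under
% conditionalisation, with P(p \wedge q) > 0 and P(p \wedge \neg q) > 0.
% Expanding by total probability, and using closure under conditioning
% to evaluate the conditional terms:
\begin{align*}
P(f(p,q)) &= P(f(p,q) \mid q)\,P(q) + P(f(p,q) \mid \neg q)\,P(\neg q) \\
          &= P(q \mid p \wedge q)\,P(q) + P(q \mid p \wedge \neg q)\,P(\neg q) \\
          &= 1 \cdot P(q) + 0 \cdot P(\neg q) = P(q).
\end{align*}
% So P(q \mid p) = P(q): p and q would have to be probabilistically
% independent, which cannot hold for all such pairs. Hence no such f.
```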

One might go on to reason the following way. If the conditional _If p then q_ has “truth-conditional” content, then the probability of that content being accurate, i.e. of the conditional being true, is the probability of _q_ given _p_. But then there would be such an _f_, since f(p, q) could just be _If p then q_. So we conclude that the conditional does not have “truth-conditional” content. (The scare quotes are because I’m not really sure what the effect of this modifier is meant to be. Are the relativist contents that MacFarlane assigns to various claims, and that Egan, Hawthorne and I assign to epistemic modal claims, meant to be truth-conditional or not? I honestly don’t know, which makes me a little wary of _using_ this locution.)

One might argue this, and to a very rough approximation Dorothy Edgington has argued this, so it’s not like there’s no connection between Lewis’s results and the idea that conditionals don’t have “truth-conditional” content. (Edgington’s views are of course much more subtle and detailed than this, but unless I’m entirely misremembering her position, she isn’t entirely unsympathetic to this line of argument.)

But Harman is _entirely right_ to point out that it’s not what Lewis argued. Lewis denied the premise that if the conditional _If p then q_ can be true or false, the probability of it being true is the conditional probability of _q_ given _p_. Lewis acknowledged that that position has some intuitive plausibility, but suggested that it could be explained by either Gricean or Jacksonian mechanisms. In fact Lewis thought that _If p then q_ was true just in case _p_ was false or _q_ true, so the probability of _q_ given _p_ is just a lower bound for the probability of _If p then q_ being true.
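The lower-bound claim is easy to check numerically. Here’s a quick spot-check of my own (purely illustrative): on the material reading, where _If p then q_ is true iff _p_ is false or _q_ is true, P(q|p) never exceeds the probability of the conditional:

```python
import random

random.seed(0)
for _ in range(1000):
    # Random probability distribution over the four p/q truth-value cells.
    w = [random.random() + 1e-9 for _ in range(4)]
    total = sum(w)
    pq, p_notq, notp_q, notp_notq = (x / total for x in w)
    prob_material = pq + notp_q + notp_notq   # P(p false or q true)
    prob_q_given_p = pq / (pq + p_notq)       # P(q | p)
    assert prob_q_given_p <= prob_material + 1e-12
print("ok")
```

(The general fact: prob_material is 1 − P(p ∧ ¬q), while P(q|p) is 1 − P(p ∧ ¬q)/P(p), and dividing by P(p) ≤ 1 only makes the subtracted term bigger.)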

When I first started out here, I sorta meant to write a post saying that each side had some merit and some demerit. The idea was that Peacocke was wrong to not note the discrepancy between his position and the position of the papers he was using to support that position, but Harman’s comment might leave a misleading impression that there was no connection between Lewis’s results and Peacocke’s conclusion. But having actually tried to spell out the connection, that seems a little ridiculous. Harman was obviously space-constrained in a review, and couldn’t go into this level of detail. Peacocke wasn’t so constrained in a _book_, and could have easily added a few lines pointing out the differences between his position and Lewis’s, and maybe even name-checking the papers that have run the kind of argument I sketch above. (Especially since it seems rather doubtful that he thought up the whole connection on his own, given the lack of detail in the book.) So basically I’m on Harman’s side here, and I regret a little not making more of a fuss about this line when I first read it.

Maybe I should be less of a Milquetoast sometimes!