Some optimistic links

Norm’s Philosophers XI. I’m tempted to do a philosophers 9 – maybe after the conference.

I thought this week’s Google doodles were amusing in a simple-minded kinda way.

After all the disturbing reports of poor crowd behaviour at the Gabba and the Adelaide Oval, it warmed my little heart to hear how friendly the MCG was yesterday.

I don’t know how much posting I’ll do from the APA. Certainly the papers blog won’t be active again until the New Year. See lots of you in DC. Any readers who want to catch up for a drink should email me, or just drop by the hotel bar at any random time and I might well be there. The only paper I’ll be at is Ted’s book symposium on the last afternoon, otherwise I’ll be interviewing, lobbying, networking and generally carousing.

Good luck to all the job candidates!

The Christmas Post on Boxing Day

First I want to engage in something that’s a bit of a blog tradition already: begging for money. Not for me, mind you. This site costs nothing to run. And I get paid quite well, thank you, and presumably will be paid even more next year, so I don’t need the money. But there are plenty of people who do. Rather than do my own work on finding out which charities are best deserving, I’ll assume that all the people who say Oxfam does pretty good things with your money are right. Here are the links to their sites in Australia, New Zealand, Britain and America. Harry suggested giving something proportionate to the amount you spend on Christmas presents this year. In that spirit, I’d like to suggest spending less on conference socialising and New Year’s celebration and sending the $$ Oxfam’s way. (Unless you are buying me drinks, in which case you should buy me the drinks and send Oxfam a donation anyway.)

I spent Christmas with Simon Keller and his wife Maree and brother Reuben. Very good times. We didn’t spend much time talking shop, so the philosophical highlight of the day was when finger puppet Plato was recruited to play Santa Claus trying (and failing) to shimmy down the chimney of Reuben’s gingerbread house. (Pictures of this were taken, but they were put on film so I have to wait for them to be developed and scanned before I can post them.)
Continue reading

Two Envelopes

Despite this blog’s title, there really aren’t enough rants here. Let’s make up for that.

I was just reading another less than interesting paper on the two-envelope paradox when I started thinking, “Why is anyone still writing about the two-envelope paradox? Surely everything that needs to be said about it has been said.” And I was right. But perhaps everything that needs to be said about it has not been said in the one place. So I’ll say it all here. Note that nothing that I’ll say here is even close to being original – the real message of this rant is that this is a puzzle that’s well past its use-by date. (And I use lots of italics when I’m ranting.)

The argument for the paradoxical conclusion, that you’re better off switching no matter which envelope you get, relies crucially on an inference like the following.

The amount of money X in your envelope is from the set {x1, x2, …, xn, …}. (Note this is a countably infinite set. There are probably versions of the paradox where the set is uncountable. The same things can be said about that version of the paradox.) Call this set S.

For all x in S, the conditional expected utility of swapping given X=x is positive.

Therefore, it is in your interest to swap.

Call the inference here (CC). (CC) is a kind of conglomerability principle – it says if something is good according to every member of a particular partition, then it is good simpliciter. Given some standard Bayesian assumptions, (CC) is equivalent to the following principle.

Let Y and Z be bets. For any proposition p, and bet W, let W & p be the bet that pays what W pays if p, and nothing otherwise. (I assume bets can have negative ‘payouts’, so all choices are bets.) Let {p1, p2, …, pn, …} be a countable partition of possibility space. Then if, for all i, Y & pi is preferable to Z & pi, Y is preferable to Z.

It’s really important to keep in mind here that (CC), or something very much like it, is just essential to the paradoxical reasoning. There’s simply no argument that you should swap that doesn’t use as a premise the principle that you should swap whatever is in your envelope. And this premise doesn’t get you to the conclusion without (CC), or something stronger than it.

Now (CC) in either its intuitive or formal versions is a very plausible principle. To prove this, just note how many people have tacitly appealed to it in setting up the two-envelope paradox. But unfortunately it is inconsistent. Vann McGee showed this in “An Airtight Dutch Book” (Analysis, 1999). The only agents that can satisfy (CC) are those that have either (a) bounded utility curves or (b) ‘opinionated’ belief states – more precisely, there are only finitely many propositions about which their credence is neither 0 nor 1. And, as has been known since at least John Broome’s 1995 Analysis paper, the two-envelope paradox only gets going if you assume the agent in question satisfies neither condition.
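To see how the trap springs, here’s a sketch of the standard Broome-style construction (my reconstruction, not lifted from either paper): give the smaller envelope the amount 2^n with probability 2^n/3^(n+1). Then the expected gain from swapping is positive conditional on every amount you might find, even though, by symmetry, swapping can’t be good simpliciter – the unconditional expectation diverges, which is exactly where (CC) breaks down.

```python
from fractions import Fraction

def p_smaller(n):
    """Broome-style prior: the smaller envelope holds 2**n
    with probability 2**n / 3**(n+1), for n = 0, 1, 2, ..."""
    return Fraction(2**n, 3**(n + 1))

def expected_gain_from_swap(k):
    """Expected gain from swapping, given your envelope holds x = 2**k."""
    x = 2**k
    if k == 0:
        # x = 1 must be the smaller amount, so swapping surely gains 1.
        return Fraction(x)
    p_small = p_smaller(k)      # pair is (2**k, 2**(k+1)); you hold the smaller
    p_large = p_smaller(k - 1)  # pair is (2**(k-1), 2**k); you hold the larger
    pr_small = p_small / (p_small + p_large)  # works out to 2/5 for every k >= 1
    # Swapping gains x if you hold the smaller amount, loses x/2 otherwise.
    return pr_small * x - (1 - pr_small) * Fraction(x, 2)

# Positive for every possible amount -- yet swapping can't help by symmetry.
for k in range(6):
    print(k, expected_gain_from_swap(k))  # gain is x/10 for every k >= 1
```

The conditional gains never go negative, so anyone who reasons by (CC) concludes they should swap sight unseen; the divergent unconditional expectation is what blocks the conglomeration.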

Now as far as I can tell, that’s all one needs to say about the paradox. The paradoxical conclusion is only reached by taking an inconsistent principle of reasoning, and applying it in just the case where we know on independent grounds that it cannot safely be applied. But is that all people say? Well, no.

If you want to find out what they do say, Google is your friend. Note that some of the links you’ll find, such as the first one, do say reasonable things – i.e. something similar to what I say. But not many, I’m afraid.

To be sure, there still is something odd here. Principles like (CC) are very intuitive. It’s hard to know what to do when faced with a situation where you know your preferences will change as soon as you find out something, and you know you’re about to find it out. As McGee says, it looks like you face an Airtight Dutch Book in that situation. But that’s all the counterintuitiveness – there’s simply nothing special about the two-envelopes here, because there’s simply no argument from consistent premises that says you should switch.

Happy holidays!

iPod

Right now I’ve got iTunes converting songs off a CD to MP3s, transferring other MP3s to the iPod and playing a third batch of songs. And it seems capable of doing these three tasks with minimal effort. (For a while it was also doing some level calibration on the other songs on the iPod, but that was starting to drain resources.) I love multitasking! If only I could grade papers and blog at the same time, I’d be in multitasking heaven.

Now I just have to decide how many of these albums I want/need to buy from the iShop. How many new CDs can I get at once before the marginal value of each new CD goes to 0?

Ethics

I got so annoyed with the CD drive on my office computer that I had to go to Radio Shack and buy a new one before I could get back to work. You might think this is just me procrastinating again, but the old drive was really really bad. And the new one is quite good, even if I did have to change the jumper settings before I could get it to work. (Who knew that drives still had jumper settings?)

But this isn’t meant to be a technology post. I wanted to mention a couple of things about my favourite ethical theory. For the reasons Andy and I lay out here I no longer think it is the one true ethical theory, but it’s nevertheless my favourite theory.

It’s a form of consequentialism, so in general it says the better actions are those that make for better worlds. (I fudge the question of whether we should maximise actual goodness in the world, or expected goodness according to our actual beliefs, or expected goodness according to rational beliefs given our evidence. I lean towards the last, but it’s a tricky question.) What’s distinctive is how we say which worlds are better: w1 is better than w2 iff behind the veil of ignorance we’d prefer being in w1 to being in w2.

What I like about the theory is that it avoids so many of the standard counterexamples to consequentialism. We would prefer to live in a world where a doctor doesn’t kill a patient to harvest her organs, even if that means we’re at risk of being one of the people who are not saved. Or at least I think we would prefer that; I could be wrong. But I think our intuition that the doctor’s action is wrong is only as strong as our preference for not being in that world.

We even get something like agent-centred obligations out of the theory. Behind the veil of ignorance, I think I’d prefer to be in a world where parents love their children (and vice versa) and pay special attention to their needs, rather than in a world where everyone is a Benthamite maximiser. This implies it is morally permissible (perhaps even obligatory) to pay special attention to one’s nearest and dearest. And we get that conclusion without having to make some bold claims, as Frank Jackson does in his paper on the ‘nearest and dearest objection’, about the moral efficiency of everyone looking after their own friends and family. (Jackson’s paper is in Ethics 1991.)

So in practice, we might make the following judgment. Imagine that two children, a and b, are at (very mild) risk of drowning, and their parents A and B are standing on the shore. I think there’s something to be said for a world where A goes and rescues her child a, and B rescues her child b, at least if other things are entirely equal. (I assume that A and B didn’t make some prior arrangement to look after each other’s children, because the prior obligation might affect who they should rescue.)

But what if other things are not equal? (I owe this question to Jamie Dreier.) Imagine there are 100 parents on the beach, and 100 children to be rescued. If everyone goes for their own child, 98 will be rescued. If everyone goes for the child most in danger, 99 will be rescued. Could the value of paying special attention to your own loved ones make up for the disvalue of having one more drown? The tricky thing, as Jamie pointed out, is that we might ideally want the following situation: everyone is disposed to give preference to their own children, but they act against their underlying dispositions in this case so the extra child gets rescued. From behind the veil of ignorance, after all, we’d be really impressed by the possibility that we would be the drowned child, or one of her parents.
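Dreier’s question can be put as a toy calculation. Everything here beyond the 98-vs-99 figures from the example is my own illustrative assumption – in particular the idea that the value of living under a partiality norm can be scored as a per-family bonus commensurable with lives saved, which is itself a substantive consequentialist commitment.

```python
from fractions import Fraction

def world_value(survivors, partiality_value, families=100):
    """Toy veil-of-ignorance score: lives saved plus a per-family
    bonus for living under a norm of parental partiality.
    (The commensuration is an invented modelling assumption.)"""
    return survivors + families * partiality_value

# 98 survive if everyone rescues their own child;
# 99 survive if everyone rescues the most endangered child.
own_norm = world_value(98, Fraction(1, 50))  # hypothetical partiality value
impartial_norm = world_value(99, 0)

# The partiality norm wins iff 100 * v > 1, i.e. iff the per-family value
# of partiality exceeds 1/100 of a life.
print(own_norm, impartial_norm)
```

On these invented numbers partiality wins, but the threshold makes Jamie’s point vivid: the ideal from behind the veil might be partiality-disposed parents who nevertheless act against the disposition in exactly this case.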

It’s not clear this is a counterexample to the theory. It might be that the right thing is for every parent to rescue the nearest child, and that this is what we would choose behind the veil of ignorance. But it does make the theory look less like one with agent-centric obligations than I thought it was.

This leads to a tricky taxonomic question. Is the theory I’ve sketched one in which there are only neutral values (in Parfit’s sense) or relative values? Is it, that is, a form of ‘Big-C Consequentialism’? Of course in one sense there are relative values, because what is right is relative to what people would choose from behind the veil of ignorance, and different people might reasonably differ on that. But setting that aside, within a community with common interests, do we still have relative values or neutral values? This probably just reflects my ignorance, but I’m not really sure. On the one hand we have a neutrally stated principle that applies to everyone. On the other, we get the outcome that it is perfectly acceptable (perhaps even obligatory) to pay special attention to your friends and family because they are your friends and family. So I’m not sure whether this is an existence proof that Big-C Consequentialist theories can allow this kind of favouritism, or a proof that we don’t really have a Big-C Consequentialist theory at all.

Sticky Carpet

Andy Egan’s thesis defence went very well yesterday, and the post-game celebrations went so well, or at least so long, that I am a little shaky on the keyboard this morning, er, afternoon.

So rather than adding to the world’s stock of philosophical knowledge, I’ll just report one very cool thing. The Cloning paper that Sarah and I wrote got accepted to the ethics conference being held in Baton Rouge just before Mardi Gras. Andy’s submission to that conference got accepted, as did Liz’s, so the MIT-Brown crew from last year’s conference will all be back there. (Labelling warning: none of the ‘MIT people’ in this group are officially MIT affiliated any more, but it’s a convenient and not too misleading label.) The conference was a bunch of fun last year and very informative – the organisers did a fantastic job I thought – so I’m really looking forward to this year’s conference.

Ambiguity?

Is this sentence ambiguous?

(1) Vegemite could have tasted icky.

(I assume ‘icky’ is unambiguous.)

I half think it has readings with each of the following truth-conditions.

(2) There is a world w such that Vegemite in w is disposed to cause icky-tasting reactions in normal observers in the actual world.
(3) There is a world w such that Vegemite in w is disposed to cause icky-tasting reactions in normal observers in w.

I think (2) is the most natural reading, but I half think (3) is a possible reading. Do you agree?

(Two caveats. First, the dispositional analysis of tastes here is really crude, but I don’t care as long as something like it is going to work. Whatever the right story is, we’ll still be able to ask this kind of question about whether there are ambiguities. Second, I’m leaning towards treating ‘normal’ as a MacFarlanesque relative intension predicate, so what’s normal is fixed by the context of evaluation, not the context of utterance. I think that doesn’t matter to the question of whether (1) is ambiguous, but I’m a little less certain of that than I am of the first caveat.)
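One way to see the difference between (2) and (3) is as a question of where ‘normal observers’ gets evaluated: at the actual world, or at the world being considered. Here’s a toy model of the two readings – every detail (the worlds, the properties, the crude matching rule for ickiness) is invented purely for illustration.

```python
# Toy possible-worlds model. In "w", Vegemite's chemistry differs,
# but so do the tastes of the normal observers in that world.
worlds = {
    "actual": {"vegemite_chemistry": "yeasty", "normal_taste": "likes_yeasty"},
    "w":      {"vegemite_chemistry": "sugary", "normal_taste": "likes_sugary"},
}

def tastes_icky(stuff_world, observer_world):
    """Icky iff Vegemite's chemistry in stuff_world fails to match
    what normal observers in observer_world like (a crude stand-in
    for a real dispositional analysis)."""
    chem = worlds[stuff_world]["vegemite_chemistry"]
    taste = worlds[observer_world]["normal_taste"]
    return taste != "likes_" + chem

# Reading (2): observers held fixed at the actual world.
reading2 = any(tastes_icky(w, "actual") for w in worlds)
# Reading (3): observers evaluated at the same world as the Vegemite.
reading3 = any(tastes_icky(w, w) for w in worlds)
print(reading2, reading3)  # True False
```

In this model the readings come apart: actual-world observers would find w’s Vegemite icky, but w’s own normal observers wouldn’t, so (2) is true and (3) is false.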

News

As Brian Leiter noted, I just got a job offer from Cornell. This is pretty exciting stuff – Cornell is a great department (in my totally unbiased opinion). I’d start spouting advice about how prospective grad students in X, Y or Z should seriously consider applying to Cornell because of the quality of the faculty in those areas, but that advice would be of dubious objectivity right now.

Of course Brown is a great department too, and it should also be on your list of schools to apply to!

Just two quibbles with Brian’s note though. He doubly underestimated the extent of my campaign to have no area of speciality whatsoever, as illustrated by:

Ethics
(with Sarah McGrath) Cloning and Harm.
(with Andy Egan) Prankster’s Ethics.

History
Keynes and Wittgenstein.

Admittedly all those papers are unpublished, though hopefully they are all en route to that destination.

Despite the title, there’s actually remarkably little Wittgenstein in the history paper. It’s really all about Keynes, and about how (contrary to some recent suggestions) there is little evidence that Wittgenstein’s return to Cambridge influenced Keynes’s views on probability.

While pottering around Brian’s site, I found another grad student blog: the mumblings of a platonist. It looks interesting, but on my browser the font was too small to read comfortably. It’s (I think) the only grad student blog focussing on history of philosophy, and I’m always pleased to see niches like this being filled.