I’m currently in Atlanta en route to California for the Wettstein conference. (I was meant to fly via Minneapolis but I got rerouted. I imagine there’ll be plenty of chances to get to Minneapolis in the future though.) Posting will be light here and non-existent on OPP until probably the weekend.
In a nice paper in a recent Philosophical Review Alan Hajek argued that Pascal’s argument in the Wager fails because he doesn’t take account of mixed strategies. I’ve been spending too much of today wondering whether the same thing is true in other fields. (Not that I’m entirely convinced by Hajek’s argument, but the response would take another post, and historical research, and that’s for another week.)
For a while I thought mixed strategies could solve some of the problems Andy Egan discusses in his paper on causal decision theory. Maybe they can, but I’m not so sure. For now I just want to discuss what they do to Nick Bostrom’s Meta-Newcomb Problem.
The first thing to say is that it’s hard to say what they’d do, because Bostrom doesn’t say what his predictors do if they predict you’ll use a mixed strategy. I’ll follow Nozick and say that if they predict a mixed strategy, that’s the same as predicting a 2-box choice. Importantly I make this assumption both for Bostrom’s predictor and his Meta-Predictor. But if the “Predictor” is not predicting, but is in fact reacting to your choice (as is a possibility in Bostrom’s game) then I’ll assume that what matters is what choice you make, not how you make it. So choosing 1 box by a mixed strategy will be the same as choosing 1 box by a pure strategy for purposes of what causal consequences it has.
Given those assumptions, it sort of seems that the “best” thing to do in Bostrom’s case is to adopt a mixed strategy with probability e of choosing 2 boxes, for vanishingly small e. That will mean that if the meta-predictor is “right”, your choice will cause the predictor to wait until you’ve made your decision, and with probability 1 minus a vanishingly small amount, you’ll get the million. (Scare quotes because I’ve had to put an odd interpretation on the Meta-Predictor’s prediction to make it make sense as a prediction. But this is just in keeping with the Nozickian assumptions with which I started.)
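The arithmetic behind this is simple enough to sketch. Here is a minimal Python calculation, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one — my assumption, since Bostrom’s figures aren’t quoted above) and the interpretation just given, on which playing a mixed strategy makes the predictor wait and react to the actual choice:

```python
# Expected payoff of the mixed strategy in Bostrom's Meta-Newcomb case,
# under the assumptions in the post: a detected mixed strategy makes the
# predictor wait, and a waiting predictor reacts to the actual choice.
# The dollar amounts are the standard Newcomb figures, assumed here
# purely for illustration.

MILLION = 1_000_000   # opaque box, filled iff you are seen to one-box
THOUSAND = 1_000      # transparent box, always there

def expected_payoff(e):
    """Mixed strategy: two-box with probability e, one-box with 1 - e.

    Since the predictor waits, one-boxing gets you the million and
    two-boxing gets you only the thousand (the opaque box stays empty).
    """
    return (1 - e) * MILLION + e * THOUSAND

for e in (0.1, 0.01, 0.0001):
    print(e, expected_payoff(e))
```

As e shrinks, the expected payoff climbs toward the full million, which is the “problem solved” claim in the next line.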
Problem solved, at least under one set of assumptions.
Now I had to set up the assumptions about how to deal with mixed strategies in just the right way for this to work. Presumably there are other ways that would be interesting. I’m not interested in games where predictors are assumed to know the outputs of randomising devices used in mixed strategies. That seems too much like backwards causation. But there could be many other assumptions that lead to interesting puzzles.
UPDATE: Be sure to read the many interesting comments below, especially Bob Stalnaker’s very helpful remarks.
Hopefully I’ll soon post the follow up to the earlier preliminary analysis of JFP ads. Sadly actually doing search activities has taken priority over analysing searches. But until then I had a thought for how to do the classifications.
As can be seen from the comments thread in the earlier post, there is some discussion about how to label the area of philosophy that people more or less like me more or less work in. For a while Brian Leiter used “core”, but that understandably upset people working in other areas. I tried “descriptive”, but that didn’t meet with much approval either.
The best suggestions seemed to be that I have a purely disjunctive label. But “Language, Epistemology, Metaphysics and Mind” seemed to be too long. What we needed was a shortening. Maybe an acronym. But LEMM seemed boring. If we just add a suffix we could have a name. I know…
From now on all who work on Language, Epistemology, Metaphysics or Mind will be known around here as Lemmings. This little relabelling program will be a success iff within the next 3 years there’s a job ad in JFP saying “We want to hire a Lemming”. (I’m expecting a failure, though an amusing failure.)
If I had Photoshop skills I’d post here pictures of lemmings with the faces of famous actual Lemmings (e.g. Jason Stanley, John Hawthorne, Ted Sider, Daniel Stoljar) superimposed over the little cartoon lemming. But I don’t have those wicked Photoshop skills, or indeed any Photoshop skills at all.
I’m visiting Melbourne for a short while over the upcoming break, and I’m in the process of trying to get in touch with everyone I’d like to see while visiting home. But through a combination of my being a very bad email correspondent and a few computer glitches lately, my email address book is rather badly out of date. (As I’ve found when trying to search for various friends’ addresses.) So if you live in Melbourne, suspect I’m trying to email you now, and have recently changed your email address, email me (brian at weatherson point net) with the new contact details. Please!
PS: If you are reading this with an eye to burgling my house while I’m away, be warned that I have a killer guard dog and many neighbours watching the house. And remarkably little of value stored here.
Fritz Warfield sent me this link.
It consists of 1032 monkeys typing away on typewriters to see who can do the best job of replicating Shakespeare. It seems that if you log on, you can contribute some monkeys to the project, but make sure to bring some (virtual) bananas.
The best any monkey has done so far is the first 22 letters of Cymbeline. The best any of my monkeys has done is the first 20 letters of Pericles. (My monkeys have better taste in plays than the average.) These might seem like disappointing returns, but what the hosts of the site don’t say is that the monkeys have already managed to replicate two of my blog posts, and are well on their way to a third.
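For a sense of why 22 letters is actually a respectable record: if each keystroke is uniform over an alphabet of k symbols, the chance that a given run of n keystrokes matches a fixed n-letter target is k to the minus n. A back-of-the-envelope Python sketch (the 30-key alphabet is my illustrative assumption, not anything the site specifies):

```python
# Rough odds that random typing reproduces the opening of a play.
# Assumes each keystroke is uniform over a 30-symbol alphabet
# (26 letters plus a few punctuation keys) -- an illustrative
# assumption, not the simulator's actual setup.

ALPHABET = 30

def match_probability(n, k=ALPHABET):
    """Probability that n uniform random keystrokes match a fixed n-letter string."""
    return k ** -n

p22 = match_probability(22)  # the site record
p20 = match_probability(20)  # my monkeys' best
print(f"22 letters: about 1 in {1 / p22:.3g} attempts")
print(f"20 letters: about 1 in {1 / p20:.3g} attempts")
print(f"a 22-letter match is {p20 / p22:.0f}x rarer than a 20-letter one")
```

On these assumptions each extra matched letter multiplies the required attempts by 30, so the gap between my monkeys and the record is larger than it looks.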
We have four discussion club talks scheduled between Thanksgiving and the end of semester, so these are all going to run together fairly quickly. Here is the schedule:
- Thursday, December 2, 2004, Goldwin Smith Hall room 124, 7:30pm.
Eric Gilbertson, Cornell University
- Friday, December 3, 2004, Goldwin Smith Hall room G22, 4:30pm.
David Sedley, Cambridge University
“Myth, Politics and Punishment in Plato’s Gorgias”
Commentary by Amber Carpenter
- Thursday, December 9, 2004, Goldwin Smith Hall room 124, time TBA.
Karen Neilsen, Cornell University
- Friday, December 10, 2004, Goldwin Smith Hall room 124, 4:30pm.
Mathew Lu, Cornell University
I was just looking over the blogstats, and noticed that on November 9, I had 4192 unique visitors. Most of them visited just the break-up lines page, though several visitors stopped by the rather uninteresting post after the break-up. I think 4192 will stand as the single day record for quite a while, unless I start giving away money to visitors or something.
For months I’ve been thinking about writing a paper on the suddenly fashionable topic of what vagueness is. One of the most interesting views on the subject is by Matti Eklund who argues that a term is vague iff a tolerance principle is meaning-constitutive for it.
A tolerance principle is basically a Sorites premise; it says something like this: “Whereas large enough differences in F’s parameter of application sometimes matter to the justice with which it is applied, some small enough difference never thus matters.”
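One standard way to make that schematic precise (this formalization is my gloss, not notation from Matti’s paper) is to let mu(x) be F’s parameter of application (e.g. height in mm for tall) and delta the small difference that “never thus matters”:

```latex
% Tolerance principle for a vague predicate F.
% \mu is F's parameter of application (e.g. height for `tall');
% \delta is the small difference that never matters to whether F applies.
% This is my own formalization, not Eklund's notation.
\forall x \, \forall y \;
  \bigl( |\mu(x) - \mu(y)| < \delta \;\wedge\; Fx \bigr)
  \rightarrow Fy
```

Iterating this premise along a Sorites series is exactly what generates the paradox, which is why accepting it as meaning-constitutive is such a strong claim.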
A principle is meaning-constitutive for a term if “it is part of competence with it to be disposed to accept it.” (Both quotes are from Matti’s paper.)
I think that competence (in the sense of meaning the same thing as the rest of the linguistic community, which I think is the relevant sense of competence here) requires accepting very few principles, and certainly nothing as contentious as this. Note that Matti’s definition entails two other competence requirements, both of which I’ll argue against. First, being competent with vague term F requires knowing what F’s parameter of application is. Second, being competent with vague term F requires knowing that F is vague. Both of these might be plausible for tall or rich, but they aren’t true, or even that plausible I think, for vague terms in general.
Consider the plausibly vague term morally acceptable. Imagine three speakers who have some thoughts about what is and isn’t morally acceptable. Tom thinks that an action is morally acceptable iff it is approved of by God. Jack thinks an action is morally acceptable iff it produces more utils than any rival action would produce. And Mike thinks that an action is morally acceptable iff it’s an action a suitably virtuous person would perform.
It seems to me that Tom, Jack and Mike can all be competent users of the term morally acceptable. When they debate what things are morally acceptable, as they often do, they aren’t speaking past each other, rather they are genuinely contradicting what the others say. So they’re competent. But they don’t agree even on what kind of magnitude is measured by the term’s “parameter of application”. So the first competence principle is false.
As well as having very different views on what a tolerance principle for morally acceptable should look like, they have very different views on whether such principles are prima facie plausible, let alone meaning-determining. Tom thinks no such principles are plausible, and certainly doesn’t think they are meaning-determining. Jack thinks that whether such principles are true turns on hard questions about the semantics and metaphysics of counterfactuals. But since he thinks hard questions about the semantics and metaphysics of counterfactuals don’t determine what’s meaning-determining for morally acceptable, these principles are not meaning-determining. Mike is more disposed to accept the prima facie plausibility of tolerance principles, though he too doesn’t think they are meaning-determining, since he thinks that if they were Jack and Tom would be conceptually confused (which he thinks they are not) rather than morally confused (which he thinks they are).
So I think Matti’s claim runs into trouble when we try to apply it to vague normative terms. But these are a very large part of the class of vague terms.
UPDATE: Zoltan pointed out to me that Matti’s definition could, and perhaps should, be interpreted as not requiring that competent speakers know F’s parameter of application. Rather, it just requires being disposed to believe that whatever F’s parameter of application is, small changes in that parameter don’t change whether F applies. This seems correct, so one of my objections here fails. I still stand by the more general point that Tom, Jack and Mike can deploy the same concept while disagreeing about whether it is vague, but my argument needs to be more careful than I hinted at last night.
SECOND UPDATE: Matti responds at length in the comments. Be sure to read these as well.