Praising the Greats

Here is one way to run the New Evil Demon argument against externalist theories of justification. Let S be a normal person, with a justified belief that p, and S* her brain-in-vat twin.

1. If S’s belief that p is justified, then S*’s belief that p is justified.
2. If S*’s belief that p is justified, then externalism is false.
C. So externalism is false.

Now how might we motivate premise 1? One way is by something like the following argument.

1. The same reactive attitudes are appropriate towards S’s and S*’s doxastic states.
2. Whether a belief is justified supervenes on the reactive attitudes that are appropriate towards it.
C. So if S’s belief that p is justified, then S*’s belief that p is justified.

You Might be a Relativist If…

At the end of my “Conditionals and Indexical Relativism”:http://brian.weatherson.org/CaIR.pdf paper, there is a throw-away reference to the possibility that indexical relativism might be the right theory for various pronouns in modern language. ‘Modern language’ only because for traditional (i.e. spoken) languages contextualism seems to capture all the data. This post is a start on making that a bit more plausible.

I’m interested in uses of ‘you’ in written work where the writer has no way of knowing how broad the audience is. One notable feature of such uses is that it is very common to use epistemic modals scoping over the pronoun, so you often see things like “You might”, as e.g. here, or “You probably”, as, e.g. “here”:http://www.techeblog.com/index.php/tech-gadget/5-feature-firefox-tricks-you-probably-didnt-know-about. I’m particularly interested in the latter uses. What, you’re probably thinking right now, could they mean?

Privacy and Slippery Slopes

Ever since Google’s Street View service debuted there have been “many discussions over its privacy implications”:http://www.google.com/search?q=%22Google+Street+View%22+privacy&hl=en&client=firefox-a&rls=org.mozilla:en-US:official&hs=G0c&pwst=1&start=90&sa=N. I’ve found most of these fairly overblown, but this morning I started to get a better sense of what some of the concerns might be about. On the SMH’s news blog, Matthew Moore “writes”:http://blogs.smh.com.au/newsblog/archives/freedom_of_information/013696.html approvingly,

bq. Mr McKinnon reckons you can hardly have a reasonable expectation of privacy on a public street when every second person has a video camera or mobile phone and when Google is now using street-level maps with images of real people who have no idea they have been photographed.

Congratulations Language Log

“This”:http://itre.cis.upenn.edu/~myl/languagelog/archives/004576.html is a nice story. The latest issue of Southwest Airlines’ inflight magazine features some “recommended diversions”:http://spiritmag.com/clickthis/8.php. They include the usual summer books, movies and music, and a plug for “Language Log”:http://itre.cis.upenn.edu/~myl/languagelog/ as blog reading. Academic blogs have come a long way if they’re being recommended in inflight magazines. Now we just have to get them promoting other academic blogs the same way.

I’ve been seeing a lot of references to Language Log around the web recently, particularly to their prescriptivist-bashing posts. I particularly liked this attack on the “alleged rules for using less and fewer”:http://itre.cis.upenn.edu/~myl/languagelog/, complete with examples from King Alfred’s Latin translations. It’s an example of how academic blogs can make an impact on public life not by dumbing down their work, or by stretching to find alleged applications, but simply by setting out their work in a clear and accessible way. Or, to bring things back to a favourite theme of mine, of why academics should get credit for successful blogs not necessarily as examples of research, but as examples of service to the community. Now giving people diversions alongside summer blockbusters isn’t quite the same kind of service as solving their medical or social problems, but it is a service, and a praiseworthy one.

The Traveler’s Dilemma

I’ve been busy at FEW the past few days, but thanks to everyone who has responded to my previous post. Anyway, in the airport on the way back from Pittsburgh, I saw that the current issue of Scientific American has several philosophically interesting articles, including ones about the origin of life (did it start with a single replicating molecule, or a process involving several simple ones?) and anesthesia (apparently the operational definition of general anesthesia focuses more on memory blockage than you might have expected). (It looks like you’ll have to pay to get either of those.)

But I want to discuss an interesting article by economist Kaushik Basu on the Traveler’s Dilemma (available free). This game is a generalization of the Prisoner’s Dilemma, but with some more philosophically interesting structure to it. Each player names an integer from 2 to n. If they both name the same number, then that is their payoff. If they name different numbers, then they both receive the smaller amount, with the person who named the smaller number getting an additional 2 as a bonus, and the one with the larger number getting 2 less as a penalty. If n=3, then this is the standard Prisoner’s Dilemma, where naming 2 is the dominant strategy. But if n≥4, then there is no dominant strategy. However, every standard equilibrium concept still points to 2 as the “rational” choice. We can generalize this game further by letting the plays range from k to n, with k also being the bonus or penalty for naming different numbers.
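Here is a minimal sketch of the payoff rule just described (the function name and the default bonus of 2 are my own choices):

```python
def traveler_payoffs(a, b, bonus=2):
    """Payoffs when the two players name integers a and b.

    Equal claims pay face value; otherwise both get the smaller
    claim, with a bonus for the low namer and a penalty for the high."""
    if a == b:
        return (a, a)
    low = min(a, b)
    return (low + bonus, low - bonus) if a < b else (low - bonus, low + bonus)
```

With n=100, naming 99 against an opponent’s 100 yields 101 rather than 100, which is what gets the backwards induction rolling.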

Unsurprisingly, in actual play, people tend not to actually name k. Interestingly, this is even the case when economics students play, and even when game theorists at an economics conference played! Among untrained players, most play n, which interestingly enough is the only strategy that is dominated by another (namely, by n-1). Among the trained players, most named numbers between n-k and n-1.

In the article, this game was used to suggest that a better concept of rationality is needed than Nash equilibrium play, or any of the alternatives that have been proposed by economists. I think this is fairly clear. The author also uses this game to suggest that the assumption of common knowledge of rationality does a lot of the work in pushing us towards the choice of k.

I think the proper account of this game may bear some relation to Tim Williamson’s treatment of the Surprise Exam Paradox in Knowledge and its Limits. If we don’t assume common knowledge of rationality, but just some sort of bounded iteration of the knowledge operator, then the backwards induction is limited.

Say that an agent is rational_0 only if she will not choose an act that is dominated, based on what she knows about the game and her opponent’s options. Say that an agent is rational_{i+1} iff she is rational_i and knows that her opponent is rational_i. (Basically, being rational_i means that there are i iterations of the knowledge operator available to her.) I will also assume that players are reflective enough that there is common knowledge of all theorems, even if not of rationality.
Now I claim that, for each i, it is a theorem that if an agent is rational_i, then when she plays the Traveler’s Dilemma, she will pick a number less than n-i.

Proof: By induction on i. For i=0, we know that the agent will not choose any dominated strategy. However, the strategy of picking n is dominated by n-1, so she will not pick n, which is n-i for i=0, as claimed. Now, assume that it is a theorem that if an agent is rational_i, then when she plays the Traveler’s Dilemma, she will pick a number less than n-i. Then the agent knows this theorem. In addition, if an agent is rational_{i+1}, then she knows her opponent is rational_i, and by knowing this theorem, she knows that her opponent will pick a number less than n-i. Since she is also rational_i, she will pick a number less than n-i. But given these two facts, picking n-(i+2) dominates picking n-(i+1), so by rationality_0, she will not pick n-(i+1) either. So the induction step goes through, proving the theorem. QED.
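The induction can also be checked computationally. Here is a rough sketch (my own code, not from the article) that repeatedly eliminates weakly dominated strategies — weak dominance, since n-1 merely ties n against low-naming opponents — and collapses the strategy set down to the minimum claim:

```python
def payoff(a, b, bonus=2):
    """What the first player gets when naming a against b."""
    if a == b:
        return a
    return min(a, b) + (bonus if a < b else -bonus)

def eliminate_round(strategies):
    """Drop every strategy weakly dominated by another in the set."""
    survivors = set(strategies)
    for s in strategies:
        for t in strategies:
            if t != s and \
               all(payoff(t, o) >= payoff(s, o) for o in strategies) and \
               any(payoff(t, o) > payoff(s, o) for o in strategies):
                survivors.discard(s)
                break
    return sorted(survivors)

strategies = list(range(2, 21))  # claims from 2 up to n=20
while True:
    reduced = eliminate_round(strategies)
    if reduced == strategies:
        break
    strategies = reduced
print(strategies)  # [2]
```

Each pass removes only the top remaining claim, mirroring the one-step-at-a-time structure of the induction above.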

Thus, if an agent picks a number n-i, then she must be at most rational_{i-1}. But based on what Williamson says, iterations of the knowledge operator are generally hard to come by, so it should not be a surprise that even game theorists playing with common knowledge that they are game theorists will not have very high iterations of rationality. I wonder if it might be possible to use the Traveler’s Dilemma to estimate the number of iterations of knowledge that do obtain in these cases.

Different Ideas About Newcomb Cases

One advantage of going to parties with mathematicians and physicists is that you can describe a problem to them, and sometimes they’ll get stuck thinking about it and come up with an interesting new approach to it, different from most of the standard ones. This happened to me over the past few months with Josh von Korff, a physics grad student here at Berkeley, and versions of Newcomb’s problem. He shared my general intuition that one should choose only one box in the standard version of Newcomb’s problem, but that one should smoke in the smoking lesion example. However, he took this intuition seriously enough that he came up with a decision-theoretic protocol that actually seems to make these recommendations. It makes some other really strange predictions, but it seems worth considering, and it even ends up resembling something Kantian!

The basic idea is that right now, I should plan all my future decisions in such a way that they maximize my expected utility right now, and stick to those decisions. In some sense, this policy obviously has the highest expectation overall, because of how it’s designed.

In the standard Newcomb case, we see that adopting the one-box policy now means that you’ll most likely get a million dollars, while adopting a two-box policy now means that you’ll most likely get only a thousand dollars. Thus, this procedure recommends being a one-boxer.
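Prospectively, the comparison is a simple expected-value calculation. A minimal sketch, where the 99% predictor accuracy and the function name are my own illustrative assumptions:

```python
def expected_payoff(policy, accuracy=0.99,
                    million=1_000_000, thousand=1_000):
    """Expected dollars of adopting a box-taking policy in advance,
    facing a predictor who is right with the given accuracy."""
    if policy == "one-box":
        # She most likely predicted one-boxing and filled the opaque box.
        return accuracy * million
    # She most likely predicted two-boxing and left the opaque box empty.
    return accuracy * thousand + (1 - accuracy) * (million + thousand)

print(expected_payoff("one-box") > expected_payoff("two-box"))  # True
```

On these numbers the one-box policy expects about $990,000 and the two-box policy about $11,000, so any accuracy much above chance gives the same verdict.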

Now consider a slight variant of the Newcomb problem. In this version, the predictor didn’t set up the boxes, she just found them and looked inside, and then investigated the agent and made her prediction. She asserts the material biconditional “either the box has a million dollars and you will only take that box, or it has nothing and you will take both boxes”. Looking at this prospectively, we see that if you’re a one-boxer, then this situation will only be likely to emerge if there’s already a box with a million dollars there, while if you’re a two-boxer, then it will only be likely to emerge if there’s already an empty box there. However, being a one-boxer or two-boxer has no effect on the likelihood of there being a million dollars or not in the box. Thus, you might as well be a two-boxer, because in either situation (the box already containing a million or not) you get an extra thousand dollars, and you just get the situation described to you differently by the predictor.

Interestingly enough, we see that if the predictor is causally responsible for the contents of the box then we should follow evidential decision theory, while if she only provides evidence for what’s already in the box then we should follow causal decision theory. I don’t know how much people have already discussed this aspect of the causal structure of the situation, since they seem to focus instead on whether the agent is causally responsible, rather than the predictor.

Now I think my intuitive understanding of the smoking lesion case is more like the second of these two problems – if the lesion is actually determining my behavior, then decision theory seems to be irrelevant, so the way I understand the situation has to be something more like a medical discovery of the material biconditional between my having cancer and my smoking.

Here’s another situation that Josh described that started to make things seem a little more weird. In Ancient Greece, while wandering on the road, every day one either encounters a beggar or a god. If one encounters a beggar, then one can choose to either give the beggar a penny or not. But if one encounters a god, then the god will give one a gold coin iff, had there been a beggar instead, one would have given a penny. On encountering a beggar, it now seems intuitive that (speaking only out of self-interest), one shouldn’t give the penny. But (assuming that gods and beggars are randomly encountered with some middling probability distribution) the decision protocol outlined above recommends giving the penny anyway.

In a sense, what’s happening here is that I’m giving the penny in the actual world, so that my closest counterpart that runs into a god will receive a gold coin. It seems very odd to behave like this, but from the point of view before I know whether or not I’ll encounter a god, this seems to be the best overall plan. But as Josh points out, if this was the only way people got food, then people would see that the generous were doing well, and generosity would spread quickly.
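To see why the prospective calculation favours giving, plug in some made-up numbers — the 50% chance of meeting a god, and the penny and gold-coin values, are purely my assumptions:

```python
def daily_value(policy_gives, p_god=0.5, gold=1.0, penny=0.01):
    """Expected daily return of a standing policy fixed in advance.

    The god hands over the gold coin exactly when the policy is to give."""
    god_branch = gold if policy_gives else 0.0
    beggar_branch = -penny if policy_gives else 0.0
    return p_god * god_branch + (1 - p_god) * beggar_branch

print(daily_value(True) > daily_value(False))  # True
```

As long as p_god times the gold exceeds (1 - p_god) times the penny — any “middling” distribution — the giving policy wins, even though on any particular beggar day the penny is a pure loss.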

If we now imagine a multi-agent situation, we can get even stronger (and perhaps stranger) results. If two agents are playing in a prisoner’s dilemma, and they have common knowledge that they are both following this decision protocol, then it looks like they should both cooperate. In general, if this decision protocol is somehow constitutive of rationality, then rational agents should always act according to a maxim that they can intend (consistently with their goals) to be followed by all rational agents. To get either of these conclusions, one has to condition one’s expectations on the proposition that other agents following this procedure will arrive at the same choices.

Of course this is all very strange. When I actually find myself in the Newcomb situation, or facing the beggar, I no longer seem to have a reason to behave according to the dictates of this protocol – my actions benefit my counterpart rather than myself. And if I’m supposed to make all my decisions by making this sort of calculation, then it’s unclear how far back in time I should go to evaluate the expected utilities. This matters if we can somehow nest Newcomb cases, say by offering a prize if I predict that you will make the “wrong” decision on a future Newcomb case. It looks like I have to calculate everything all the way back at the beginning, with only my a priori probability distribution – which doesn’t seem to make much sense. Perhaps I should only go back to when I adopted this decision procedure – but then what stops me from “re-adopting” it at some later time, and resetting all the calculations?

At any rate, these strike me as some very interesting ideas.

2nd Online Philosophy Conference

The 2nd Online Philosophy Conference has just entered its second (and final) week. My paper on Logical Pluralism (which is, in a way, a paper about the objects of validity) is up, with comments by JC Beall and Jonathan McKeown-Green. I was really happy that JC agreed to comment on the paper, since he and Greg essentially wrote the book on logical pluralism. Jonathan is a good friend of mine from my graduate days. He had the office next door to mine for a while at Princeton, but he has since returned to Auckland, where (some of you may be interested to note) there is currently a vacancy in logic. Anyway, the paper is only 14 pages long, and I’d be really grateful for any comments.

The Online Philosophy Conference is well worth supporting of course, and this week it also features papers from Derk Pereboom, Jeff McMahan, Caspar Hare, John Martin Fischer and Jonathan Dancy.

Links and a Paper

Here are two more philosophy blogs that I don’t think I’ve previously linked to.

* Esa Díaz-León’s “been there/done that”:http://beentheredonethat-esa.blogspot.com/.
* Anthony Gillies’ “blog”:http://www-personal.umich.edu/~thony/blog.html

As Thony reports on his blog, “CIA Links”:http://www-personal.umich.edu/~thony/cia_leaks.pdf, a fine paper he wrote with “Kai von Fintel”:http://semantics-online.org/section/fintel has been accepted for publication in the _Philosophical Review_. Congratulations Thony and Kai!

I’ve been spending a bit of time recently working through a paper by “Tamina Stephenson”:http://web.mit.edu/tamina/www/ (a student of Kai’s), called “A Parallel Account of Epistemic Modals and Predicates of Personal Taste”:http://web.mit.edu/tamina/www/em-ppt-10-10-06.pdf. I don’t buy everything she says, but some of the technical resources she introduces have been incorporated into the latest version of my conditionals paper, which I can now post.

* “Conditionals and Indexical Relativism”:http://brian.weatherson.org/CaIR.pdf

This used to be called ‘Conditionals and Relativism’, and the change is something of a big one. I now defend a version of what we called ‘content relativism’ in “Epistemic Modals in Context”. Except, for reasons that become clear in the paper, I’d rather call it ‘indexical relativism’. The paper is fairly drafty, but I would be interested in knowing what people think of it. (The PDF is also bigger than I expected; I just changed some software around, and I wonder if that’s the cause.)

I’ll be off to Arché soon, so comments may take a little while to appear depending on how good my internet access is. But I’ll try to get to everything as quickly as I can.

Saturday Links Blogging

No one seemed to notice the terrible counting in the previous post. Ah well.

* Robert Stalnaker is currently doing the Locke lectures at Oxford, and Oxford has, very impressively, made the lectures available “as a podcast”:http://www.philosophy.ox.ac.uk/misc/johnlocke/index.shtml.
* “John Hawthorne”:http://www.philosophy.ox.ac.uk/members/jhawthorne/index.htm has a number of forthcoming papers available on his website. I just read a nice paper on “comparative adjectives”:http://www.philosophy.ox.ac.uk/members/jhawthorne/docs/Comparative%20Adjectives..pdf that I found while looking for something rather different. There is also a paper he wrote with Andrew McGonigal on the “Many minds theory of vagueness”:http://www.philosophy.ox.ac.uk/members/jhawthorne/docs/Many%20Minds.pdf.
* Speaking of Andrew, he just pointed out to me how developed the Uncyclopedia pages on “philosophy”:http://uncyclopedia.org/wiki/Philosophy and “Logic”:http://uncyclopedia.org/wiki/Logic have become. A lot of the humour there is pretty sophomoric, but I do like lines like “The purpose of chicken studying philosophy is to disprove your religion, your scientific methodology, the laws of your entire civilization, your ethics, and the existence of that chair you’re sitting on (although not convincingly enough as to make you feel you have to stand up).” I don’t know what the ‘chicken’ reference is though; one of the problems with the Uncyclopedia is that it is hard to tell vandalism from failed attempts at humour.
* Dan López de Sa, who has written several “papers”:http://www.st-andrews.ac.uk/~dlds/ I’ve been reading while trying to say something new about semantic relativism, has a nice looking “blog”:http://blebblog.blogspot.com/.

Two Quick Links

Because I know everyone loves these.

* “Wikipedia page on the Leiter Report”:http://en.wikipedia.org/wiki/Philosophical_Gourmet
* New “Feminist Philosophers Blog”:http://feministphilosophers.wordpress.com/
* A rather novel version of “the design argument”:http://aidanmcglynn.blogspot.com/2007/05/atheists-nightmare.html