I’ve been blogging rather lightly over the summer, in part because there has been so much else happening. In particular, earlier this month Ishani and I got married! The picture is immediately after the ceremony, when we’re looking happy and newlyweddish. It was a small ceremony in Minneapolis (where Ishani’s family lives) with just a few friends and family. It all went pretty well, and hopefully we’ll have a bunch more pictures to show you/bore you with soon. |
Dutch Books and Irrationality
One objection that Henry Kyburg raises in several places to the Dutch Book argument for the notion of subjective probability is that people can avoid Dutch Books by exercise of purely deductive reasoning, and therefore they provide no constraint on betting odds or the like. As he puts it in his 1978 paper, “Subjective Probability: Criticisms, Reflections, and Problems”:
No rational person, whatever his degrees of belief, would accept a sequence of bets under which he would be bound to lose no matter what happens. No rational person will in fact have a book made against him. If we consider a sequence of bets, then quite independently of the odds at which the person is willing to bet, he will decline any bet that converts the sequence into a Dutch Book.
I think there’s something right about the general point, but this particular passage I quoted seems just plain wrong. I’ll give an example in which it seems perfectly reasonable to get oneself into such a Dutch Book.
Let’s say that back in January I was very impressed by John McCain’s cross-partisan popularity, and his apparent front-runner status as the Republican nominee for president, so I spent $40 on a bet that pays $100 if he’s elected president. After a few months, seeing his poll numbers plummet, let’s say I became more bullish on Giuliani, and spent $40 on a bet that pays $100 if he’s elected instead. But now that Republicans seem to be backing away from him too, and that Hillary Clinton may be pulling ahead in the Democratic primary, say I now think she’s the most likely candidate to win. If Kyburg is right, then no matter what my degree of belief, I wouldn’t spend more than $20 on a bet that pays $100 if she wins, because I will have converted my set of bets into a Dutch Book against myself (assuming as I do that no more than one of them can be elected). However, it seems eminently rational for me to buy a bet on Clinton for some larger amount of money, because I regard my previous bets as sunk costs, and just want to focus on making money in the future.
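The arithmetic here is worth spelling out. A minimal sketch, using only the numbers from the example above (the function names are mine):

```python
# Three mutually exclusive bets, each paying $100 if its candidate wins.
# Already paid: $40 on McCain, $40 on Giuliani. At what price does a
# Clinton bet turn the whole set into a Dutch Book (a guaranteed loss
# no matter who is elected)?

def net_outcome(prices, winner_index, payout=100):
    """Net result if the bet at winner_index pays off (-1 means none do)."""
    total_cost = sum(prices)
    won = payout if winner_index >= 0 else 0
    return won - total_cost

def is_dutch_book(prices, payout=100):
    """True if every possible outcome (including 'none elected') loses money."""
    outcomes = [net_outcome(prices, i, payout) for i in range(len(prices))]
    outcomes.append(net_outcome(prices, -1, payout))  # nobody on the list wins
    return all(o < 0 for o in outcomes)

# At $20 for the Clinton bet, the best case exactly breaks even.
assert not is_dutch_book([40, 40, 20])
# At $21, every outcome loses: 100 - 101 = -1 even if Clinton wins.
assert is_dutch_book([40, 40, 21])
```

The point of the example is that the first two $40 outlays are sunk: looking only forward, a $21 Clinton bet can still be a good buy, even though it completes a book against me.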
Something like this is possible on the Bayesian picture whenever I change my degrees of belief at all – I might have already made bets that I now consider regrettable, but that shouldn’t stop me from making future bets (unless it perhaps does something to convince me that my overall bet-placing skills are bad).
To be fair, I’m sure that Kyburg intends his claim only in the case where the agent is sequentially accepting bets in a setting where her beliefs aren’t changing, where the basic Dutch Book theorem is meant to apply. He’s certainly right that there are ways to avoid Dutch Books while still having betting odds that violate the probability axioms, unless one is somehow required to accept any sum of bets for and against any proposition at one’s published odds.
But somehow Kyburg seems to be suggesting that deductive rationality alone is sufficient to prevent Dutch Books, even with this extra flexibility. However, I’m not sure that this will necessarily happen – one can judge a certain loss as better than some combination of chances of loss and gain. And he even provides a footnote to a remark of Teddy Seidenfeld that I think makes basically this point!
It is interesting to note, as pointed out to me by Teddy Seidenfeld, that the Dutch Book against the irrational agent can only be constructed by an irrational (whether unscrupulous or not) opponent. Suppose that the Agent offers odds of 2:1 on heads and odds of 2:1 on tails on the toss of a coin. If the opponent is rational, according to the theory under examination, there will be a number p that represents his degree of belief in the occurrence of heads. If p is less than a half, the opponent will maximize his expectation by staking his entire stake on tails in accordance with the first odds posted by the Agent. But then the Agent need not lose. Similarly, if p is greater than a half. But if p is exactly a half, then the rational opponent should be indifferent between dividing his stake (to make the Dutch Book) and putting his entire stake on one outcome: the expectation in any case will be the same.
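Seidenfeld's observation can be checked numerically. A sketch under the quoted setup (the stake size and credence values are illustrative, not from the source):

```python
# The Agent posts 2:1 odds on heads AND 2:1 on tails: a $1 stake returns
# $2 profit if that bet wins, and loses the $1 otherwise.
# The opponent splits a total stake S between the two sides.

def expected_gain(p, stake_heads, stake_tails):
    """Opponent's expected profit given credence p in heads."""
    if_heads = 2 * stake_heads - stake_tails
    if_tails = 2 * stake_tails - stake_heads
    return p * if_heads + (1 - p) * if_tails

S = 100.0
# Splitting evenly makes the Dutch Book: a guaranteed profit of S/2
# whichever way the coin lands.
assert 2 * (S / 2) - (S / 2) == S / 2

# With p < 1/2, going all-in on tails beats the even split in expectation.
assert expected_gain(0.4, 0, S) > expected_gain(0.4, S / 2, S / 2)
# With p > 1/2, all-in on heads beats it.
assert expected_gain(0.6, S, 0) > expected_gain(0.6, S / 2, S / 2)
# Only at p = 1/2 is the opponent indifferent, as Seidenfeld observes.
assert expected_gain(0.5, S, 0) == expected_gain(0.5, S / 2, S / 2) == S / 2
```

So an expectation-maximizing opponent with any credence other than 1/2 passes up the sure profit of the Dutch Book for a gamble with higher expected value.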
If Kyburg’s earlier claim that agents will never get themselves into Dutch Books is correct, then this argument by Seidenfeld can’t be – the same reasoning that keeps agents out of Dutch Books should make bookies buy them (unless a sure loss is worse than the corresponding sure gain is good). I suspect that each of the two arguments will apply in some cases but not others. At certain points, the bookie will feel safer buying the Dutch Book, while at others, she will favor maximizing expectation. Similarly, the agent will sometimes feel safer allowing a Dutch Book to be completed against her, rather than exposing herself to the risk of a much greater loss.
I think Kyburg is right that there are problems with any existing formulation of the Dutch Book argument, but I think he’s wrong in the facts of this particular criticism, and also wrong about subjective probability as a whole. Seidenfeld’s argument is really quite thought-provoking, and probably deserves further attention.
Unemployed Logicians’ Alert
Logicians often complain that there are no logic jobs in philosophy, but Adam Morton has just sent me news of one, and it’s at the University of Alberta in Edmonton, where I did a postdoc. Alberta is a great department – I had an absolutely fantastic year there – and Edmonton is a great place to be if you have any interest in winter sports…or theatre for that matter, the Edmonton Fringe Festival is some of the best fun you can have without snow.
……………………..
The Department of Philosophy, University of Alberta, invites applications for a tenure-track position in Philosophy, with a specialization in Logic. Other areas of research and teaching specialization and competence are open. The appointment will be made at the rank of Assistant Professor, effective July 1, 2008. Responsibilities include undergraduate and graduate teaching and maintaining an active research programme. Tenure stream faculty normally teach four one term courses per year. Candidates should hold a PhD in Philosophy and provide evidence of scholarly and teaching excellence. Salary is commensurate with qualifications and experience, and the benefit package is comprehensive. Applicants should arrange to send a letter of application indicating the position applied for and describing areas of research interest, curriculum vitae, all university transcripts, a sample of written work, letters from three referees, and, if available, a teaching dossier and teaching evaluations to Bruce Hunter, Chair, Department of Philosophy, University of Alberta, Edmonton, Alberta, CANADA, T6G 2E5. CLOSING DATE: November 10, 2007. The University of Alberta hires on the basis of merit. We are committed to the principle of equity in employment. We welcome diversity and encourage applications from all qualified women and men, including persons with disabilities, members of visible minorities, and Aboriginal persons. All qualified candidates are encouraged to apply; however, Canadian citizens and permanent residents will be given priority. For further information concerning the Department, please consult http://www.uofaweb.ualberta.ca/philosophy/.
BSPC Sorting Hat
Just for fun, here are the assignments of BSPC attendees into their Hogwarts houses. The Sorting Hat consisted of Ross Cameron, Hud Hudson, me and Daniel Nolan, with help and advice from many others. (As I hope is obvious, no offence of any kind to anyone is intended!)
BSPC Photos
Further to that promise of photos, I have put some online.
Maximizing, Satisficing and Gradability
Greetings from the BSPC, now complete apart from Recreation Day. Soon to follow: BSPC participants sorted into their Harry Potter houses, and lots of photos. But first, some philosophy.
This is actually unrelated to anything that happened during the sessions, and is instead something I have been chatting about with Daniel Nolan (who, incidentally, should get a joint-authorship credit on this post for helping me write up the idea and improve my examples, though I do not have evidence that he is committed to the view itself, nor should any errors herein be attributed to him, etc.).
The idea is that gradability can help accommodate the apparently conflicting intuitions of Maximizing and Satisficing consequentialists.
Maximizers think that only the action(s) with the best consequences are right; all others are wrong (though perhaps to greater or lesser degrees). Satisficers think that all actions with good enough consequences are right, and that there may be several actions, with consequences of differing values, which have good enough consequences. (It need not be assumed that to be good enough a state of affairs has to be good simpliciter; the least worst option may count as good enough even if it is not very good at all.)
My basic thought is that ‘right’ appears to be a gradable adjective like ‘tall’ or ‘flat’. Familiarly, in some contexts, such as when we are talking about basketball players, ‘tall’ is used in a very demanding way, so that someone has to be at least 6’5” to fall within its extension. In other contexts, such as when we are talking about children, it is used in a less demanding way, so that someone who is only 3’5” falls within its extension.
Another example of gradability may be helpful on the way to the gradability of ‘right’. Consider ‘at the front of the line’. (I’m in the US so it’s a line rather than a queue.) Sometimes, we use that phrase in such a way that only the one person at the very front of the line counts as ‘at the front of the line’. For instance, if we ask ‘Who is at the front of the line?’ because we want to award a prize to the person who is next to be served, we are using it in this demanding way. On other occasions, we use it in such a way that the first few people count as ‘at the front of the line’. For instance, if you and I join a queue of 50 people and I then notice that Ross is fourth in line, I might say to you ‘It’s OK, we can queue-jump: I know someone at the front of the line’.
The idea about ‘right’, then, is that in some contexts, ‘right’ is used in a very demanding way, so that only the action with the best consequences will be in its extension. On other occasions of use, ‘right’ is used in a less demanding way, so that any action with good enough consequences is in its extension. This is a common phenomenon in natural language; there are other gradable phrases, like ‘at the front of the line’, which are also sometimes used in such a way that only the first thing in some ordering falls within their extension, and on other occasions used in such a way that the first n things in that ordering fall within their extension (for some n>1).
The Maximizers and the Satisficers are therefore both half right; they are each offering a good account of how ‘right’ works on certain occasions of use. Both are motivated by good intuitions, which I think we can accommodate with this gradability point. Comments welcome (including especially, since I don’t know this literature well, comments of the form “wasn’t this said by X at t only better?”).
Hiring
“Brian Leiter”:http://leiterreports.typepad.com/blog/2007/07/summary-of-majo.html has a summary of the recent rounds of faculty movement. Here was one interesting statistic from looking at the top 30 US departments.
There were 15 senior hires, 13 male and 2 female.
There were 13 junior hires, 6 male and 7 female.
Having 19 out of 28 hires by top 30 programs be male is not great, but it is promising that so many women are being hired at tenure-track level.
Where the junior hires came from is also interesting. The most successful program by this metric was UCLA, with 3 people hired. After that, NYU, Rutgers and MIT had two graduates each hired, with the other four coming from Princeton, Duke, Freie and Colorado.
Tierney, Gott and the Philosophers
“John Tierney”:http://www.nytimes.com/2007/07/17/science/17tier.html?8dpc=&_r=1&oref=slogin&pagewanted=all today writes about Richard Gott’s Copernican principle. He has a little more on “his blog”:http://tierneylab.blogs.nytimes.com/2007/07/16/how-nigh-is-the-end-predictions-for-geysers-marriages-poker-streaks-and-the-human-race/#more-103, along with some useful discussion from “Bradley Monton”:http://www.colorado.edu/philosophy/fac/monton.html. The principle in question says that you should treat the time of your observation of some entity as being a random point in its lifetime. Slightly more formally, quoting Gott via “a paper”:http://spot.colorado.edu/~monton/BradleyMonton/Articles_files/future%20duration%20pq%20final.pdf Monton wrote with Brian Kierland,
bq. Assuming that whatever we are measuring can be observed only in the interval between times t_begin and t_end, if there is nothing special about t_now, we expect t_now to be located randomly in this interval.
As Monton and Kierland note, we can use this to argue that the probability of
bq. a t_past < t_future < b t_past
is 1/(a+1) – 1/(b+1), where t_past is the past life-span of the entity in question, and t_future is its future life-span. Most discussion of this has focussed on the case where a = 1/39 and b = 39, which yields the famous 95% interval. But I think the more interesting case is where a = 0 and b = 1. In this case we get the result that the probability of the entity in question lasting longer into the future than its current life-span is 1/2.
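The formula is easy to verify by simulation. A quick sketch, drawing the observation point uniformly over the lifetime as the Copernican principle prescribes (the trial count and seed are arbitrary):

```python
import random

def interval_prob(a, b):
    """Closed-form probability that a*t_past < t_future < b*t_past."""
    return 1 / (a + 1) - 1 / (b + 1)

def simulate(a, b, trials=200_000, seed=0):
    """Draw t_now uniformly in the lifetime and count how often the
    future span falls between a and b times the past span."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        r = rng.random()          # t_past as a fraction of total lifetime
        if r == 0.0:
            continue              # avoid division by zero on a degenerate draw
        ratio = (1 - r) / r       # t_future / t_past
        if a < ratio < b:
            hits += 1
    return hits / trials

# The 95% interval: a = 1/39, b = 39.
assert abs(interval_prob(1 / 39, 39) - 0.95) < 1e-9
# The a = 0, b = 1 case: probability exactly 1/2.
assert interval_prob(0, 1) == 0.5
# Simulation agrees with the closed form to within sampling error.
assert abs(simulate(1 / 39, 39) - 0.95) < 0.01
assert abs(simulate(0, 1) - 0.5) < 0.01
```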
As a rule I tend to be very hostile to these attempts to get precise probabilities from very little data. I have a short argument against Gott’s rule below. But first I want to try a little mockery. I’d like to know whether anyone would like to take any of the following bets.
Wikipedia’s “History of the Internet”:http://en.wikipedia.org/wiki/History_of_the_Internet dates the founding of the World Wide Web to around the early 1990s, so it is 15 or so years old. Gott’s formula would say that it is less than 50/50 that it will survive until around 2025. I’ll take that bet if anyone is offering.
The iPhone has been around for about 3 weeks at this time of writing. Again, Gott’s formula would suggest that it is 50/50 that it will last for more than 3 weeks from now. Again, I’ll take that bet!
Finally, it has been “about 100 years”:http://en.wikipedia.org/wiki/Demography_of_Australia#Historical_population_estimates since there were over 4,000,000 people on the Australian continent. I’m unlikely to be around long enough to see whether there still will be more than 4,000,000 in 100 years’ time, but I’m a lot more than 50/50 confident that there will be. I will most likely be around in 10 years to see whether there are more than 4,000,000 people there in 11 years’ time. Gott’s formula says that the probability of that is around 0.9. I’m a little more optimistic than that, to say the least.
Anyway, here is the argument. Consider any two plays, A and B, that have been running for x and y weeks respectively, with x > y. And consider the following three events.
E1 = Play A is running
E2 = Play B is running
E3 = Plays A and B are both running
Note that E3 has been ongoing for y, just like E2. The Copernican principle tells us that at some time z in the future, the probabilities of these three events are
Pr(E1 at z) = x / (x + z)
Pr(E2 at z) = y / (y + z)
Pr(E3 at z) = y / (y + z)
Now let’s try and work out the conditional probability that A will still be running at z, given that B is running at z. That is, Pr(E1 at z | E2 at z). It is
Pr(E1 at z & E2 at z) / Pr(E2 at z)
= Pr(E3 at z) / Pr(E2 at z)
= (y / (y + z)) / (y / (y + z))
= 1
So using the Copernican formula, we can deduce that the conditional probability of A still running at z given that B is still running at z is 1. And that’s given only the information that z is in the future, and that A has been running longer than B. That is, to say the least, an absurd result. So I’m sure there is something deeply mistaken with the Copernican formula.
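The reductio can be run with concrete numbers (the run lengths below are arbitrary; any x > y gives the same result):

```python
def copernican_survival(past, future):
    """Gott's rule: Pr(still around at time 'future' from now) for
    something that has existed for 'past' units = past / (past + future)."""
    return past / (past + future)

x, y, z = 20.0, 5.0, 10.0   # play A older than play B, arbitrary future time z

p_e1 = copernican_survival(x, z)   # E1: A still running at z
p_e2 = copernican_survival(y, z)   # E2: B still running at z
p_e3 = copernican_survival(y, z)   # E3: both running; ongoing for y, like E2

# Conditional probability that A survives to z, given that B does:
p_e1_given_e2 = p_e3 / p_e2
assert p_e1_given_e2 == 1.0        # certainty, whatever x, y and z are
assert p_e1 < 1.0                  # yet the unconditional probability isn't 1
```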
AAP 2007
Daniel and I gave our Backwards Explanation paper at the AAP. It survived well, even convinced a few people, so now it’s full steam ahead for its outing at the BSPC next month, where it will receive the critical attention of Alyssa Ney and Trenton Merricks. Unfortunately our presentation was scheduled up against a bunch of papers that we would have really liked to see. In fact, a downside of the AAP in general was that each timeslot seemed to contain either nothing I was particularly interested in or several very interesting papers at once.
My highlights from the AAP included Josh Parsons‘s talk on Assessment-Contextual Indexicality (draft available from his papers page), which sets out to see what the communicative point of assessment-context indexicals would be and why we might want a language to contain them, and Nic Southwood‘s paper which conjectured that the normativity of rationality is a matter of what we owe to ourselves. In question time I tried to persuade Nic that this view need not engender the rejection of naturalistic reductionism. Daniel Star also raised the question of what distinguishes rationality from prudence, which also looks like a matter of what we owe to ourselves. There was an excellent discussion in both sessions.
Some photos should be on their way soon. (Dave Chalmers has posted some already here.)
Social Choice and Trees
(I’m going to make a point about the relevance of social choice impossibility results to the drawing of phylogenetic trees, but it’ll take a while to get there.)
