Updates

I’ve added new versions of three papers to my website. They are:

There are also a couple of summer courses I’ve been asked to announce.

Problems of the Self at CEU.

bq. The course aims to present the state of the art in research on the self from philosophy, psychology, cognitive neuroscience, sociology, and cognitive anthropology. Themes revolve around the nature of the self, as revealed through self-consciousness, body perception, action and joint action, and its embedding in society and culture. Historical and developmental perspectives provide other angles on the self. The course presents a unique opportunity for interdisciplinary discussion on the self from multiple perspectives. It is directed at advanced graduate students, postdoctoral fellows and junior faculty working in philosophy, psychology, cognitive neuroscience and cognate disciplines.

Metaphysical Mayhem at Rutgers.

bq. Metaphysical Mayhem is back! Rutgers University will be hosting a 5-day summer school for graduate students May 14-18, 2012. John Hawthorne, Katherine Hawley, Ted Sider, Jonathan Schaffer, and Dean Zimmerman will lead the seminars on a variety of topics in metaphysics, including: natural properties, composition as identity, grounding, metaphysical explanation, and stuff like that…

Lewis on causation and biff

I’ve been reading Lewis’s late papers on causation, and I can’t figure out how to make consistent some of the things he says in ‘Void and Object’ and some of the things he says in ‘Causation as Influence’. Here is one of the objections to applying the Canberra plan to causation that he offers in ‘Causation as Influence’. (Page numbers are from the versions of the papers in _Causation and Counterfactuals_.)

bq. The problem of the many diverse actual causal mechanisms, or more generally of many diverse mechanisms coexisting in any one world, is still with us. If causation is, as it might be, wildly disjunctive, we need to know what unifies the disjunction. For one thing the platitudes tell us is that causation is one thing, common to the many causal mechanisms. (76)

But in ‘Void and Object’, Lewis says that the Canberra plan is a good approach to determining what biff is, and he makes the following speculations about what kind of thing biff will turn out to be.

bq. Myself, I’d like to think that the actual occupant of the biff-role is Humean-supervenient, physical, and at least fairly natural; but nothing else I shall say here is premised on that hope. (284)

Here’s the problem. There are, as Lewis says in ‘Causation as Influence’, many actually existing causal mechanisms. They don’t seem to have a lot in common. So biff looks like it should be pretty disjunctive. Yet Lewis says, or at least hopes, that it will turn out to be fairly natural. I don’t see how both those things can be true.

Three Bits of News

  • The Annual Bellingham Summer Philosophy Conference (aka the greatest conference on the annual calendar) has been announced for 2012.
  • The deadline for submissions to this year’s Formal Epistemology Workshop (which will be in Munich in early summer) is in a few days.
  • I’ve been using, and loving, John MacFarlane’s excellent program Pandoc. It is a document converter that can translate between, more or less, any two commonly used open document formats. It is particularly helpful to me for converting between TeX and file formats that can be read by Microsoft Word, since so many journals seem addicted to Word. Writing it is a really incredible public service on John’s part. It’s not what people commonly mean by being a public intellectual, but I’ve always thought a public intellectual should be someone who uses intellectual skills for the public good, and this is one of the best instances of that I’ve seen from a philosopher in a long time.

Oxford Studies in Metaphysics Prize

Sponsored by the “Ammonius Foundation”:http://www.ammonius.org/ and administered by the editorial board of _Oxford Studies in Metaphysics_, the 2012 Younger Scholar Prize annual essay competition is open to scholars who are within ten years of receiving a Ph.D. or students who are currently enrolled in a graduate program. (Independent scholars should enquire of the editor to determine eligibility.) The award is $8,000. Winning essays will appear in _Oxford Studies in Metaphysics_, so submissions must not be under review elsewhere.

Essays should generally be no longer than 10,000 words; longer essays may be considered, but authors must seek prior approval. To be eligible for the 2012 prize, submissions must be electronically submitted by 30 January 2012 (paper submissions are no longer accepted). Refereeing will be blind; authors should omit remarks and references that might disclose their identities. Receipt of submissions will be acknowledged by e-mail. The winner is determined by a committee of members of the editorial board of _Oxford Studies in Metaphysics_, and will be announced in early March. At the author’s request, the board will simultaneously consider entries in the prize competition as submissions for _Oxford Studies in Metaphysics_, independently of the prize.

Previous winners of the Younger Scholar Prize are:

  • Thomas Hofweber, “Inexpressible Properties and Propositions”, Vol. 2;
  • Matthew McGrath, “Four-Dimensionalism and the Puzzles of Coincidence”, Vol. 3;
  • Cody Gilmore, “Time Travel, Coinciding Objects, and Persistence”, Vol. 3;
  • Stephan Leuenberger, “Ceteris Absentibus Physicalism”, Vol. 4;
  • Jeffrey Sanford Russell, “The Structure of Gunk: Adventures in the Ontology of Space”, Vol. 4;
  • Bradford Skow, “Extrinsic Temporal Metrics”, Vol. 5;
  • Jason Turner, “Ontological Nihilism”, Vol. 6;
  • Rachael Briggs and Graeme A. Forbes, “The Real Truth About the Unreal Future”, Vol. 7;
  • Shamik Dasgupta, “Absolutism vs Comparativism about Quantities”, forthcoming, Vol. 8.

Enquiries should be addressed to “Dean Zimmerman”:mailto:dwzimmer@rci.rutgers.edu.

Where are the philosophical baby boomers?

Eric Schwitzgebel has “a fascinating post”:http://schwitzsplinters.blogspot.com/2011/12/baby-boom-philosophy-bust.html about how little influence baby boomers have had in philosophy. He uses a nice objective measure: looking at which philosophers are most cited in the “Stanford Encyclopedia of Philosophy”:http://plato.stanford.edu. He finds that of the 25 most cited philosophers, 15 were born between 1931 and 1945, and just 2 were born between 1946 and 1960.

Now to be sure some of this could be due to philosophers who were born in 1960 having not yet produced their best work – lots of great philosophical work is published after one’s 51st birthday. And it could be because those philosophers have produced great work that hasn’t yet circulated widely enough to be cited.

But I don’t believe either explanation. For one thing, Eric notes that if anything, the boomers are at the age where philosophers’ influence “typically peaks”:http://schwitzsplinters.blogspot.com/2010/04/discussion-arcs.html. For another, the stats Eric posts back up something I’ve heard discussed in conversation quite independently.

There are lots of very prominent, and ground-breaking, philosophers in my generation. (I’m defining generations so that mine includes, roughly, people born between 1965 and 1980.) And looking at the current crop of grad students, the next generation looks fairly spectacular too. But between the generation of Lewis, Kripke, Fodor, Jackson etc., and my generation, there aren’t as many prominent, field-defining figures. It’s not like there are none; Timothy Williamson alone would refute that claim. But I don’t think there are as many, and neither do a number of people I’ve talked about this with over the years, and Eric’s figures go some way to confirming that impression.

Eric also makes a suggestion about why this strange state of affairs – strange because you’d expect boomers to be overrepresented in any category like this – may have come about.

bq. College enrollment grew explosively in the 1960s and then flattened out. The pre-baby-boomers were hired in large numbers in the 1960s to teach the baby boomers. The pre-baby boomers rose quickly to prominence in the 1960s and 1970s and set the agenda for philosophy during that period. Through the 1980s and into the 1990s, the pre-baby-boomers remained dominant. During the 1980s, when the baby boomers should have been exploding onto the philosophical scene, they instead struggled to find faculty positions, journal space, and professional attention in a field still dominated by the depression-era and World War II babies.

That’s an interesting hypothesis, though it seems that if it is true, it should generalise to other disciplines. And I’m wondering whether it does. Are baby boomers underrepresented among the leading figures in other fields such as political science, history, sociology, English literature and so on? If not, I think we need another explanation for philosophy’s recent history.

Games and Knowledge

I’ve been interested recently in defending a particular norm relating knowledge and decision problems. To set out the norm, it will be useful to have some terminology.

  • A *decision problem* is a triple (S, A, U) consisting of a set of states, a set of actions, and a utility function that maps state-action pairs to utilities.
  • An agent *faces* a decision problem (S, A, U) if she knows that her utility function agrees with U about how much she values each state-action pair, she knows she is able to perform each of the actions in A, and she knows that exactly one of the states in S obtains.
  • A decision problem (S’, A, U’) is an *expansion* of a problem (S, A, U) for agent x iff S’ is a superset of S, U’ agrees with U on every state-action pair where the state is in S, and x knows that none of the states in S’ but not in S obtains.
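To fix ideas, here is a minimal sketch of these definitions in Python. The encoding (states and actions as strings, utility functions as dicts on state-action pairs, the agent’s knowledge modelled by a set of states she knows not to obtain) is my own illustration, not anything built into the definitions.

bc.. # A minimal sketch of the definitions above. The encoding is purely
# illustrative: states and actions are strings, and a utility function
# is a dict from (state, action) pairs to numbers.

def is_expansion(problem2, problem1, known_false_states):
    """True iff problem2 = (S2, A2, U2) is an expansion of
    problem1 = (S1, A1, U1) for an agent who knows that none of the
    states in known_false_states obtains."""
    S1, A1, U1 = problem1
    S2, A2, U2 = problem2
    return (S1 <= S2                  # S2 is a superset of S1
            and A1 == A2              # same actions available
            and all(U2[(s, a)] == U1[(s, a)] for s in S1 for a in A1)
            and (S2 - S1) <= known_false_states)

p. Nothing hangs on the representation; it just makes the superset clause, the agreement clause, and the knowledge clause explicit.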

Given that terminology, I have endorsed the following principle:

bq. Ignore Known Falsehoods. If (S’, A, U’) is an expansion for x of (S, A, U), then the rational evaluability of performing any action φ is the same whether φ is performed when x faces (S’, A, U’) or when she faces (S, A, U).

I’m now worried about the following possible counterexample. Let’s start with two games.

bq. Game One. There are two players: P1 and P2. It is common knowledge that each is rational. Each player has a green card and a red card. Their only move in the game is to play one of these cards. If at least one player plays green, they each get $1. If they both play red, they both get $0. P2 has already moved, and played green.

bq. Game Two. There are two players: P1 and P2. It is common knowledge that each is rational. Each player has a green card and a red card. Their only move in the game is to play one of these cards. If at least one player plays green, they each get $1. If they both play red, they both get $0. The moves will be made simultaneously.

Here’s the problem for Ignore Known Falsehoods. The following premises all seem true (at least to me).

  1. Games are decision problems, with the possible moves of the other player as states.
  2. In Game One, it doesn’t matter what P1 does, so it is rationally permissible to play red.
  3. In Game Two, playing green is the only rationally permissible play.
  4. If premises 1 and 3 are true, then Game Two is an expansion of Game One.

The point behind premise 4 is that if rationality requires playing green in Game Two, and P2 is rational, we know that she’ll play green. So although in Game Two there is in some sense one extra state, namely the state where P2 plays red, it is a state we know not to obtain. So Game Two is simply an expansion of Game One.
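Plugging the two games into the hypothetical @is_expansion@ sketch from earlier makes the point concrete (the states here are P2’s possible plays):

bc.. # Game One and Game Two as decision problems for P1, using the
# illustrative is_expansion sketch from earlier. Premise 3 plus P2's
# rationality is what licenses putting "P2 plays red" in the set of
# states P1 knows not to obtain.
S_one = {"P2 plays green"}
S_two = {"P2 plays green", "P2 plays red"}
A = {"green", "red"}
U_two = {("P2 plays green", "green"): 1, ("P2 plays green", "red"): 1,
         ("P2 plays red", "green"): 1, ("P2 plays red", "red"): 0}
U_one = {k: v for k, v in U_two.items() if k[0] in S_one}

print(is_expansion((S_two, A, U_two), (S_one, A, U_one),
                   known_false_states={"P2 plays red"}))  # True

p. So if premise 3 holds, Game Two is just Game One plus a state P1 knows not to obtain, which is exactly what an expansion is.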

So the big issue, I think, is premise 3. Is it true? It certainly seems true to me. If we think that rationality requires even one round of eliminating weakly dominated strategies, then it is true. Moreover, it isn’t obvious how we can coherently believe it to be false. If it is false, then rational P2 might play red. Unless we have some reason to give that possibility 0 probability, it follows that playing green maximises expected utility.
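For what it’s worth, here is a sketch of that one round of elimination, again in an encoding of my own; it confirms that green is the only surviving play in Game Two.

bc.. # One round of eliminating weakly dominated strategies in Game Two.
# payoff[(mine, theirs)] is a player's payoff: at least one green
# means $1 each, both red means $0 each.
payoff = {("green", "green"): 1, ("green", "red"): 1,
          ("red", "green"): 1, ("red", "red"): 0}

def weakly_dominates(a, b, opponent_moves):
    """a weakly dominates b: at least as good against every opponent
    move, and strictly better against at least one."""
    return (all(payoff[(a, o)] >= payoff[(b, o)] for o in opponent_moves)
            and any(payoff[(a, o)] > payoff[(b, o)] for o in opponent_moves))

moves = {"green", "red"}
surviving = {m for m in moves
             if not any(weakly_dominates(n, m, moves) for n in moves)}
print(surviving)  # {'green'}

p. Green does at least as well as red whatever the other player does, and strictly better if she plays red; that is all premise 3 needs, if rationality requires even one round of elimination.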

(There is actually a problem here for fans of traditional expected utility theory. If you say that playing green is uniquely rational for each player, you have to say that two outcomes that have the same expected utility differ in normative status. If you say that both options are permissible, then you need some reason to say they have the same expected utility, and I don’t know what that could be. I think the best solution here is to adopt some kind of lexicographic utility theory, as Stalnaker has argued is needed for cases like this. But that’s not relevant to the problem I’m concerned with.)

So I don’t know which of these premises I can abandon. And I don’t know how to square them with Ignore Known Falsehoods. So I’m worried that Ignore Known Falsehoods is false. Can anyone talk me out of this?

Lecture Notes

I’ve created a “lecture notes page”:http://brian.weatherson.org/LectureNotes.shtml on my webpage.

On that page, I’ve posted the notes I’ll be using for my decision theory class this fall.

These notes are something of a merger of the game theory notes I used over the summer, with the decision theory notes I had previously posted. A straight merger of those two would have involved a lot of overlap, and been too long for a semester. As it is, these notes are probably a little long for a semester course, but I think that with careful use they should be the basis for a good course.

Wolverine!

On January 1, Ishani Maitra and I will be starting new positions in the University of Michigan philosophy department. I’m going to be the inaugural Marshall M. Weinberg Professor of Philosophy, which is an incredible honour.

I’m really looking forward to being part of (another) great philosophy program. I’ve been incredibly impressed with the way Michigan has gone about its hiring in recent years, and you don’t need me to tell you how amazing the longer serving faculty there are. Indeed, both the newer and the older faculty there are so good that I’ve repeatedly tried to hire several of them away at my previous jobs!

Living in Ann Arbor will be great for the three of us. I’m looking forward to being able to walk to work, and to the markets, and to great public schools. And I’m really looking forward to having these folks as colleagues and neighbours. I’d make a list of which things I’m most looking forward to professionally, but it would be too long, and I’m sure to inadvertently leave something or someone out. And in any case, I suspect that most readers of this blog don’t need to be told how fantastic the philosophers at Michigan are, to put it mildly.

While I will miss many things about the Rutgers philosophy department (and the linguistics and cog sci departments), one thing I won’t miss is having to worry about what might happen to my job thanks to changes in state government policy. Twice, proposals that would have made it impossible to work at Rutgers and live in New York passed a house of the state legislature. In both cases the bills that resulted didn’t directly affect academics, but that this kind of thing would even be proposed was worrying. What passed was a huge rise in health premiums (effectively a 5% pay cut for most faculty), and an unknown extra rise in health care costs if you have any interest in seeing a doctor outside New Jersey (UPDATE: I made a couple of mistakes in this calculation; see below for laborious details). The budget cuts from Trenton have also meant cuts in carried-forward research accounts, and non-payment of contractually agreed pay raises. And who knows what they’ll think up next? All this made the choice much easier.

That said, there has been a lot I’ve loved about being part of the Rutgers philosophy department. I’m particularly fond of the current crop of grad students we have, who are truly great philosophers, and great people. I bet that in a couple of decades’ time, people will look back and say, “There was a seminar that had her and him and her and … in it? That’s like having a philosophy all-stars conference every week.” And I’ll be like, “Yep, and I was, at least nominally, teaching them.” Those of you who are on search committees over the next few years will hear much much more from me about these students, so I won’t go on too much more here. My colleagues-to-be in particular will be hearing about them a lot. (I suspect at some point I’ll be replaced on a search committee by a talking dummy saying, “I think we should hire the Rutgers student”, and no one will spot the difference.)

Building a grad student body this good takes a lot of work, but I think Jeff King (as DGS) and Ruth Chang and Jason Stanley (as admissions directors in recent years) deserve a lot of credit for it. Impressively, the students aren’t just thriving philosophically, but seem to be happier than one could reasonably expect graduate students to be. I think that wouldn’t have happened without the hard work several faculty members have put in to making the grad program work so well.

I don’t know the current Michigan students nearly as well, so it’s impossible to make any comparisons. I have been impressed by the people (and work) I have seen, so I have high hopes for what things will be like. I’m looking forward to more seminars with a different batch of philosophy all-stars-to-be!

UPDATE: I oversimplified and overstated the recent changes in NJ health care law, so I should correct this. What happened is that the costs of being part of the NJ health plan went from a fixed percentage of salary (roughly 1.5%), to a sliding percentage of premium costs, dependent on one’s income. The effect of this will be complicated, because it is being phased in over time, but I think the following is all true. (Some of this is taken from this calculator, which models what will happen if there’s no change in wages or health insurance premiums over the course of implementing the plan.)

  • Health care costs for staff and faculty will rise at the rate of health insurance premium inflation, not at the rate of wage inflation.
  • If premiums and wages don’t change between now and when the plan is fully implemented in 2014, most people with singles coverage will see a small rise in contributions, and people with family coverage will (in general) see a large rise, from something like 1.5% of income to something that could be nearly 5% of income.
  • That ‘could be’ is because the premiums as a percentage of salary will be largest for those earning between $80,000 and $133,000; on either side of that range the costs, in percentage terms, will be smaller, and hence so will the increase from the current 1.5-ish% of salary. For people getting singles coverage and earning over $200,000, or getting family coverage and earning over $500,000, costs may even fall.
  • But barring premium increases, no one will see a 5% of income rise in health care costs; that was a mistake, and I’m sorry for it.
  • I also hadn’t realised how differently the law will impact people on different salaries, and with different family situations; projecting the situation of those with families in the $80-133K salary range onto “most faculty” was also a mistake I’m sorry for.
  • There will be separate plans for people who want primarily in-state care, and those who don’t. This could massively shrink the pool of people being insured by the out-of-state plan, assuming (as I think is true) that most people who live in NJ will take the (presumably cheaper) in-state plan. If the pool shrinks too far, you’d expect to see premium rises, and premium volatility, both of which would be passed on directly to staff and faculty. But it is far too early to make such a prediction, and the effects could be much much smaller than I fear.

I still think the changes were a very bad idea, and I’m glad to not have to worry about them more. And I’m especially glad I don’t have to worry about what effect splitting the insured pool like this will have on premiums for those in the smaller group, which I think is very hard to model. (One data point for the model: at the University of Michigan, if we moved out of state, our health care contributions would more than double.) I have an aversion to this kind of uncertainty, so this bothered me more than it might bother other people; we’ll see in ten years’ time how worried I should have been.

But the changes weren’t as bad, or as simple, as I suggested in the post, hence this correction. And I apologise for the errors.

Game Theory as Epistemology

I taught a series of classes on game theory over the last few weeks at Arché. And one of the things that has always puzzled me about game theory is that it seems so hard to reduce orthodox views in game theory to orthodox views in decision theory. The puzzle is easy enough to state. A fairly standard game theoretic treatment of “Matching Pennies”:http://en.wikipedia.org/wiki/Matching_pennies and “Prisoners’ Dilemma”:http://en.wikipedia.org/wiki/Prisoner’s_dilemma involves the following two claims.

(1) In Matching Pennies, the uniquely rational solution involves each player playing a mixed strategy.
(2) In Prisoners’ Dilemma, the uniquely rational solution is for each player to defect.

Causal decision theory denies that mixed strategies can ever be *better* than all of the pure strategies of which they are mixtures, at least for strategies that are mixtures of finitely many pure strategies. So a causal decision theorist wouldn’t accept (1). And evidential decision theory says that sometimes, for example when one is playing with someone who is likely to do what you do, it is rational to cooperate in Prisoners’ Dilemma. So it seems that orthodox game theorists are neither causal decision theorists nor evidential decision theorists.
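The point about mixed strategies is just arithmetic. Here is a sketch, in my own encoding of Matching Pennies, of why the causal decision theorist balks at (1): against any fixed credence about the opponent, a mixed strategy’s expected utility is a weighted average of the pure strategies’ expected utilities, so it can never beat both of them.

bc.. # Matching Pennies from the 'matcher's' point of view: win 1 if the
# coins match, lose 1 otherwise. The encoding is my own illustration.

def eu(my_p_heads, opp_p_heads):
    """Expected utility of playing Heads with probability my_p_heads,
    against a fixed credence that the opponent plays Heads."""
    p_match = my_p_heads * opp_p_heads + (1 - my_p_heads) * (1 - opp_p_heads)
    return p_match * 1 + (1 - p_match) * (-1)

opp = 0.7  # any fixed credence will do
for mix in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(mix, eu(mix, opp))
# eu is linear in mix: eu(mix) = mix * eu(1.0) + (1 - mix) * eu(0.0),
# so every mixed strategy is sandwiched between the two pure ones.

p. Since expected utility is linear in the mixing probability, no mixture is strictly better than both of the pure strategies it mixes, which is why (1) is unavailable to the causal decision theorist.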

So what are they then? For a while, I thought they were essentially ratificationists. And all the worse for them, I thought, since I think ratificationism is a bad idea. But now I think I was asking the wrong question. Or, more precisely, I was thinking of game theoretic views as being answers to the wrong question.

The first thing to note is that problems in decision theory have a very different structure to problems in game theory. In decision theory, we state what options are available to the agent, what states are epistemically possible and, and this is crucial, what the probabilities are of those states. Standard approaches to decision theory don’t get off the ground until we have the last of those in place.

In game theory, we typically state things differently. Unless nature is to make a move, we simply state what moves are available to each of the players, and of course what will happen given each combination of moves. We are told that the players are rational, and that this is common knowledge, but we aren’t given the probabilities of each move. Now it is true that you could regard each of the moves available to the other players as a possible state of the world. Indeed, I think it should be at least consistent to do that. But in general if you do that, you won’t be left with a solvable decision puzzle, since you need to say something about the probabilities of those states/decisions.

So what game theory really offers is a model for simultaneously solving for the probability of different choices being made, and for the rational action given those choices. Indeed, given a game between two players, A and B, we typically have to solve for six distinct ‘variables’.

  1. A’s probability that A will make various different choices.
  2. A’s probability that B will make various different choices.
  3. A’s choice.
  4. B’s probability that A will make various different choices.
  5. B’s probability that B will make various different choices.
  6. B’s choice.

The game theorist’s method for solving for these six variables is typically some form of reflective equilibrium. A solution is acceptable iff it meets a number of equilibrium constraints. We could ask whether there should be quite so much focus on equilibrium analysis as we actually find in game theory textbooks (and journal articles), but it is clear that solving a complicated puzzle like this using reflective equilibrium analysis is hardly outside the realm of familiar philosophical approaches.

Looked at this way, it seems that we should think of game theory not as a part of decision theory, but as a part of epistemology. After all, what we’re trying to do here is solve for what rationality requires the players’ credences to be, given some relatively weak-looking constraints. We also try to solve for their decisions given these credences, but it turns out that is the easy part of the analysis; all the work is in the epistemology. So it isn’t wrong to call this part of game theory ‘interactive epistemology’, as is often done.

What are the constraints on an equilibrium solution to a game? At least the following constraints seem plausible. All but the first are really equilibrium constraints; the first is somewhat of a foundational constraint. (Though note that since ‘rational’ here is analysed in terms of equilibria, even that constraint is something of an equilibrium constraint.)

  • If there is a uniquely rational thing for one of the players to do, then both players must believe they will do it (with probability 1). More generally, if there is a unique rational credence for us to have, as theorists, about what A and B will do, the players must share those credences.
  • 1 and 3, and 5 and 6, must be in equilibrium. In particular, if a player believes they will do something (with probability 1), then they will do it.
  • 2 and 3, and 4 and 6, must be in equilibrium. A player’s decision must maximise expected utility given her credence distribution over the space of moves available to the other player.

That much seems relatively uncontroversial, assuming that we want to go along with the project of finding equilibria of the game. But those criteria alone are much too weak to get us near to game theoretic orthodoxy. After all, in Matching Pennies they are consistent with the following solution of the game.

  • Each player believes, with probability 1, that they will play Heads.
  • Each player’s credence that the other player will play Heads is 0.5.
  • Each player plays Heads.
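Here is a quick sketch, in an encoding of my own, of how this profile satisfies the constraints listed so far while falling short of Nash equilibrium.

bc.. # Payoffs keyed (own move, opponent's move): A wins 1 if the
# pennies match, B wins 1 if they differ.
pay_A = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}
pay_B = {("H", "H"): -1, ("H", "T"): 1, ("T", "H"): 1, ("T", "T"): -1}

def eu(pay, my_move, cred_opp_heads):
    return (cred_opp_heads * pay[(my_move, "H")]
            + (1 - cred_opp_heads) * pay[(my_move, "T")])

# With credence 0.5 that the other plays Heads, playing Heads
# maximises expected utility for both (every option has EU 0)...
print(eu(pay_A, "H", 0.5) >= eu(pay_A, "T", 0.5))  # True
print(eu(pay_B, "H", 0.5) >= eu(pay_B, "T", 0.5))  # True
# ...but (H, H) is not a Nash equilibrium: given A's actual play,
# B would do strictly better playing Tails.
print(pay_B[("T", "H")] > pay_B[("H", "H")])  # True: B regrets Heads

p. Spelling that out: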

Every player maximises expected utility given the other player’s expected move. Each player is correct about their own move. And each player treats the other player as being rational. So we have many aspects of an equilibrium solution. Yet we are a long way short of a Nash equilibrium of the game, since the outcome is one where one player deeply regrets their play. What could we do to strengthen the equilibrium conditions? Here are four proposals.

First, we could add a *truth* rule.

  • Everything the players believe must be true. This puts constraints on 1, 2, 4 and 5.

This is a worthwhile enough constraint, albeit one considerably more externalist-friendly than the constraints we usually use in decision theory. But it doesn’t rule out the ‘solution’ I described above, since everything the players believe is true.

Second, we could add a *converse truth* rule.

  • If something is true in virtue of the players’ credences, then each player believes it.

This would rule out our ‘solution’. After all, neither player believes the other player will play Heads, but both players will in fact play Heads. But in a slightly different case, the converse truth rule won’t help.

  • Each player believes, with probability 0.9, that they will play Heads.
  • Each player’s credence that the other player will play Heads is 0.5.
  • Each player plays Heads.

Now nothing is guaranteed by the players’ beliefs about their own play. But we still don’t have a Nash equilibrium. We might wonder if this is really consistent with converse truth. I think this depends on how we interpret the first clause. If we think that the first clause must mean that each player will use a randomising device to make their choice, one that has a 0.9 chance of coming up heads, then converse truth would say that each player should believe that they will use such a device. And then the Principal Principle would say that each player should have credence 0.9 that the other player will play Heads, so this isn’t an equilibrium. But I think this is an overly _metaphysical_ interpretation of the first clause. The players might just be uncertain about what they will play, not certain that they will use some particular chance device. So we need a stronger constraint.

Third, then, we could try a *symmetry* rule.

  • Each player should have the same credences about what A will do, and each player should have the same credences about what B will do.

This will get us to Nash equilibrium. That is, the only solutions that are consistent with the above constraints, plus symmetry, are Nash equilibria of the original game. But what could possibly justify symmetry? Consider the following simple cooperative game.

Each player must pick either Heads or Tails. Each player gets a payoff of 1 if the picks are the same, and 0 if the picks are different.

What could justify the claim that each player should have the same credence that A will pick Heads? Surely A could have better insight into this! So symmetry seems like too strong a constraint, but without symmetry, I don’t see how solving for our six ‘variables’ will inevitably point to a Nash equilibrium of the original game.
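For concreteness, here is a quick enumeration, again in my own encoding, showing that the coordination game has two pure-strategy Nash equilibria, so symmetry would be doing real selectional work.

bc.. # Pure-strategy Nash equilibria of the coordination game above:
# each player gets 1 if the picks match, 0 otherwise.
moves = ["Heads", "Tails"]

def pay(a, b):
    return (1, 1) if a == b else (0, 0)

equilibria = [(a, b) for a in moves for b in moves
              if pay(a, b)[0] >= max(pay(x, b)[0] for x in moves)
              and pay(a, b)[1] >= max(pay(a, y)[1] for y in moves)]
print(equilibria)  # [('Heads', 'Heads'), ('Tails', 'Tails')]

p. With two equilibria on the table, nothing in the game itself settles which one the players’ credences should converge on.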

Perhaps we could motivate symmetry by deriving it from something even stronger. This is our fourth and final constraint, called *uniqueness*.

  • There is a unique rational credence function given any evidence set.

Assume also that players aren’t allowed, for whatever reason, to use knowledge not written in the game table. Assume further that there is common knowledge of rationality, as we usually assume. Now uniqueness will entail symmetry. And uniqueness, while controversial, is a well known philosophical theory. Moreover, symmetry plus the idea that we are simultaneously solving for the players’ beliefs and actions gets us the result that players always believe that a Nash equilibrium is being played. And the correctness condition on player beliefs means that rational players will always play Nash equilibria.

So we sort of have it: an argument from well-known (if not that widely endorsed) philosophical premises to the conclusion that when there is common knowledge of rationality, any game ends up in a Nash equilibrium.

Of course, we’ve used a premise that entails something far stronger. Uniqueness entails that any game has a unique rational equilibrium. That’s not, to put it mildly, something game theorists usually accept. The little coordination game I presented a few paragraphs back sure looks like it has multiple equilibria! So I haven’t succeeded in deriving orthodox game theoretic conclusions from orthodox philosophical premises. But I think this epistemological tack makes game theory and philosophy look a little closer than they do if one starts by thinking of game theorists as working on special (and specially important) cases of decision problems.