Game Theory as Epistemology

I taught a series of classes on game theory over the last few weeks at Arché. And one of the things that has always puzzled me about game theory is that it seems so hard to reduce orthodox views in game theory to orthodox views in decision theory. The puzzle is easy enough to state. A fairly standard game theoretic treatment of “Matching Pennies”:http://en.wikipedia.org/wiki/Matching_pennies and “Prisoners’ Dilemma”:http://en.wikipedia.org/wiki/Prisoner’s_dilemma involves the following two claims.

(1) In Matching Pennies, the uniquely rational solution involves each player playing a mixed strategy.
(2) In Prisoners’ Dilemma, the uniquely rational solution is for each player to defect.

Causal decision theory denies that mixed strategies can ever be *better* than all of the pure strategies of which they are mixtures, at least for strategies that are mixtures of finitely many pure strategies. So a causal decision theorist wouldn’t accept (1). And evidential decision theory says that sometimes, for example when one is playing with someone who is likely to do what you do, it is rational to cooperate in Prisoners’ Dilemma. So it seems that orthodox game theorists are neither causal decision theorists nor evidential decision theorists.
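
To see the shape of those two claims, here is a minimal sketch in Python (the payoff numbers are my own, purely illustrative) that brute-force checks every pure strategy profile of each game. With standard payoffs, Matching Pennies has no pure-strategy Nash equilibrium, which is why the orthodox solution in (1) has to be a mixed one, while in Prisoners’ Dilemma mutual defection is the only profile that survives.

```python
# Minimal sketch: check pure-strategy Nash equilibria by brute force.
# Payoff numbers are illustrative; any standard Matching Pennies /
# Prisoners' Dilemma payoffs give the same qualitative result.

from itertools import product

def pure_nash_equilibria(payoffs, moves_a, moves_b):
    """Return pure strategy profiles where neither player can gain by deviating."""
    equilibria = []
    for a, b in product(moves_a, moves_b):
        u_a, u_b = payoffs[(a, b)]
        best_a = all(payoffs[(a2, b)][0] <= u_a for a2 in moves_a)
        best_b = all(payoffs[(a, b2)][1] <= u_b for b2 in moves_b)
        if best_a and best_b:
            equilibria.append((a, b))
    return equilibria

# Matching Pennies: A wins if the coins match, B wins if they differ.
matching_pennies = {
    ('H', 'H'): (1, -1), ('H', 'T'): (-1, 1),
    ('T', 'H'): (-1, 1), ('T', 'T'): (1, -1),
}

# Prisoners' Dilemma: C = cooperate, D = defect.
prisoners_dilemma = {
    ('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
    ('D', 'C'): (3, 0), ('D', 'D'): (1, 1),
}

print(pure_nash_equilibria(matching_pennies, 'HT', 'HT'))   # [] -- no pure equilibrium
print(pure_nash_equilibria(prisoners_dilemma, 'CD', 'CD'))  # [('D', 'D')]
```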

So what are they then? For a while, I thought they were essentially ratificationists. And all the worse for them, I thought, since I think ratificationism is a bad idea. But now I think I was asking the wrong question. Or, more precisely, I was thinking of game theoretic views as being answers to the wrong question.

The first thing to note is that problems in decision theory have a very different structure to problems in game theory. In decision theory, we state what options are available to the agent, what states are epistemically possible and (this is crucial) what the probabilities of those states are. Standard approaches to decision theory don’t get off the ground until we have the last of those in place.

In game theory, we typically state things differently. Unless nature is to make a move, we simply state what moves are available to each of the players, and of course what will happen given each combination of moves. We are told that the players are rational, and that this is common knowledge, but we aren’t given the probabilities of each move. Now it is true that you could regard each of the moves available to the other players as a possible state of the world. Indeed, I think it should be at least consistent to do that. But in general if you do that, you won’t be left with a solvable decision puzzle, since you need to say something about the probabilities of those states/decisions.

So what game theory really offers is a model for simultaneously solving for the probability of different choices being made, and for the rational action given those choices. Indeed, given a game between two players, A and B, we typically have to solve for six distinct ‘variables’.

  1. A’s probability that A will make various different choices.
  2. A’s probability that B will make various different choices.
  3. A’s choice.
  4. B’s probability that A will make various different choices.
  5. B’s probability that B will make various different choices.
  6. B’s choice.

The game theorist’s method for solving for these six variables is typically some form of reflective equilibrium. A solution is acceptable iff it meets a number of equilibrium constraints. We could ask whether there should be quite so much focus on equilibrium analysis as we actually find in game theory textbooks (and journal articles), but it is clear that solving a complicated puzzle like this using reflective equilibrium analysis is hardly outside the realm of familiar philosophical approaches.
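
Just to fix ideas, here is a sketch of those six ‘variables’ as a single record we might solve for; the field names are mine, and nothing in the argument hangs on these Python details.

```python
# Sketch only: a record of the six 'variables' we solve for in a two-player game.
# Field names are my own labels, keyed to the numbered list above.

from dataclasses import dataclass
from typing import Dict

@dataclass
class CandidateSolution:
    a_cred_about_a: Dict[str, float]  # 1. A's probability for A's possible choices
    a_cred_about_b: Dict[str, float]  # 2. A's probability for B's possible choices
    a_choice: str                     # 3. A's choice
    b_cred_about_a: Dict[str, float]  # 4. B's probability for A's possible choices
    b_cred_about_b: Dict[str, float]  # 5. B's probability for B's possible choices
    b_choice: str                     # 6. B's choice
```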

Looked at this way, it seems that we should think of game theory really not as part of decision theory, but as much a part of epistemology. After all, what we’re trying to do here is solve for what rationality requires the players’ credences to be, given some relatively weak-looking constraints. We also try to solve for their decisions given these credences, but it turns out that is an easy part of the analysis; all the work is in the epistemology. So it isn’t wrong to call this part of game theory ‘interactive epistemology’, as is often done.

What are the constraints on an equilibrium solution to a game? At least the following constraints seem plausible. All but the first are really equilibrium constraints; the first is somewhat of a foundational constraint. (Though note that since ‘rational’ here is analysed in terms of equilibria, even that constraint is something of an equilibrium constraint.)

  • If there is a uniquely rational thing for one of the players to do, then both players must believe they will do it (with probability 1). More generally, if there are unique rational credences for us to have, as theorists, about what A and B will do, the players must share those credences.
  • 1 and 3, and 5 and 6, must be in equilibrium. In particular, if a player believes they will do something (with probability 1), then they will do it.
  • 2 and 3, and 4 and 6, must be in equilibrium. A player’s decision must maximise expected utility given her credence distribution over the space of moves available to the other player.

That much seems relatively uncontroversial, assuming that we want to go along with the project of finding equilibria of the game. But those criteria alone are much too weak to get us near to game theoretic orthodoxy. After all, in Matching Pennies they are consistent with the following solution of the game.

  • Each player believes, with probability 1, that they will play Heads.
  • Each player’s credence that the other player will play Heads is 0.5.
  • Each player plays Heads.

Every player maximises expected utility given the other player’s expected move. Each player is correct about their own move. And each player treats the other player as being rational. So we have many aspects of an equilibrium solution. Yet we are a long way short of a Nash equilibrium of the game, since the outcome is one where one player deeply regrets their play. What could we do to strengthen the equilibrium conditions? Here are four proposals.
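
Here is a sketch of that point, using the same illustrative Matching Pennies payoffs as before. The all-Heads ‘solution’ passes the expected utility constraint given the players’ 50/50 credences about each other, but the profile actually played fails the Nash test, since B would have done better by playing Tails.

```python
# Sketch: check the all-Heads 'solution' of Matching Pennies against the
# expected utility constraint above, and then against the Nash condition.
# Payoffs are the same illustrative numbers as before; A wins on a match.

payoffs = {
    ('H', 'H'): (1, -1), ('H', 'T'): (-1, 1),
    ('T', 'H'): (-1, 1), ('T', 'T'): (1, -1),
}
moves = ['H', 'T']

# The proposed 'solution': each is certain of their own play, 50/50 about the other.
a_choice, b_choice = 'H', 'H'
a_cred_about_b = {'H': 0.5, 'T': 0.5}
b_cred_about_a = {'H': 0.5, 'T': 0.5}

def expected_utility(my_move, cred_about_other, player_index):
    """My expected payoff, given my credences about the other player's move."""
    if player_index == 0:   # player A: profile is (my_move, other)
        return sum(cred_about_other[o] * payoffs[(my_move, o)][0] for o in moves)
    else:                   # player B: profile is (other, my_move)
        return sum(cred_about_other[o] * payoffs[(o, my_move)][1] for o in moves)

# Constraint: each choice maximises expected utility given the player's credences.
print(all(expected_utility(a_choice, a_cred_about_b, 0) >=
          expected_utility(m, a_cred_about_b, 0) for m in moves))   # True
print(all(expected_utility(b_choice, b_cred_about_a, 1) >=
          expected_utility(m, b_cred_about_a, 1) for m in moves))   # True

# Nash condition on the actual profile (H, H): no player can gain by deviating.
u_a, u_b = payoffs[(a_choice, b_choice)]
print(all(payoffs[(m, b_choice)][0] <= u_a for m in moves) and
      all(payoffs[(a_choice, m)][1] <= u_b for m in moves))         # False: B regrets H
```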

First, we could add a *truth* rule.

  • Everything the players believe must be true. This puts constraints on 1, 2, 4 and 5.

This is a worthwhile enough constraint, albeit one considerably more externalist-friendly than the constraints we usually use in decision theory. But it doesn’t rule out the ‘solution’ I described above, since everything the players believe is true.

Second, we could add a *converse truth* rule.

  • If something is true in virtue of the players’ credences, then each player believes it.

This would rule out our ‘solution’. After all, neither player believes the other player will play Heads, but both players will in fact play Heads. But in a slightly different case, the converse truth rule won’t help.

  • Each player believes, with probability 0.9, that they will play Heads.
  • Each player’s credence that the other player will play Heads is 0.5.
  • Each player plays Heads.

Now nothing is guaranteed by the players’ beliefs about their own play. But we still don’t have a Nash equilibrium. We might wonder if this is really consistent with converse truth. I think this depends on how we interpret the first clause. If we think that the first clause must mean that each player will use a randomising device to make their choice, one that has a 0.9 chance of coming up heads, then converse truth would say that each player should believe that they will use such a device. And then the Principal Principle would say that each player should have credence 0.9 that the other player will play Heads, so this isn’t an equilibrium. But I think this is an overly _metaphysical_ interpretation of the first clause. The players might just be uncertain about what they will play, not certain that they will use some particular chance device. So we need a stronger constraint.

Third, then, we could try a *symmetry* rule.

  • The two players should have the same credences about what A will do, and the same credences about what B will do.

This will get us to Nash equilibrium. That is, the only solutions that are consistent with the above constraints, plus symmetry, are Nash equilibria of the original game. But what could possibly justify symmetry? Consider the following simple cooperative game.

Each player must pick either Heads or Tails. Each player gets a payoff of 1 if the picks are the same, and 0 if the picks are different.

What could justify the claim that each player should have the same credence that A will pick Heads? Surely A could have better insight into this! So symmetry seems like too strong a constraint, but without symmetry, I don’t see how solving for our six ‘variables’ will inevitably point to a Nash equilibrium of the original game.
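
For what it’s worth, a brute-force check of this little coordination game (again with illustrative payoffs of my own choosing) confirms the worry: it has two pure Nash equilibria, plus a 50/50 mixed one, and nothing in the constraints so far favours one over the others.

```python
# Sketch: the coordination game above has two pure Nash equilibria, and a
# 50/50 mixed equilibrium as well, so the constraints cannot single one out.

from itertools import product

coordination = {
    ('H', 'H'): (1, 1), ('H', 'T'): (0, 0),
    ('T', 'H'): (0, 0), ('T', 'T'): (1, 1),
}
moves = ['H', 'T']

pure_equilibria = []
for a, b in product(moves, moves):
    u_a, u_b = coordination[(a, b)]
    if (all(coordination[(a2, b)][0] <= u_a for a2 in moves) and
            all(coordination[(a, b2)][1] <= u_b for b2 in moves)):
        pure_equilibria.append((a, b))

print(pure_equilibria)   # [('H', 'H'), ('T', 'T')]

# With a 50/50 credence about the other player, both moves have expected
# payoff 0.5, so the 50/50 mixed profile is a third (and worst) equilibrium.
print([0.5 * coordination[(m, 'H')][0] + 0.5 * coordination[(m, 'T')][0]
       for m in moves])  # [0.5, 0.5]
```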

Perhaps we could motivate symmetry by deriving it from something even stronger. This is our fourth and final constraint, called *uniqueness*.

  • There is a unique rational credence function given any evidence set.

Assume also that players aren’t allowed, for whatever reason, to use knowledge not written in the game table. Assume further that there is common knowledge of rationality, as we usually assume. Now uniqueness will entail symmetry. And uniqueness, while controversial, is a well known philosophical theory. Moreover, symmetry plus the idea that we are simultaneously solving for the players’ beliefs and actions gets us the result that players always believe that a Nash equilibrium is being played. And the correctness condition on player beliefs means that rational players will always play Nash equilibria.

So we sort of have it, an argument from well-known (if not that widely endorsed) philosophical premises to the conclusion that when there is common knowledge of rationality, any game ends up in a Nash equilibrium.

Of course, we’ve used a premise that entails something way stronger. Uniqueness entails that any game has a unique rational equilibrium. That’s not, to put it mildly, something game theorists usually accept. The little coordination game I presented a few paragraphs back sure looks like it has multiple equilibria! So I haven’t succeeded in deriving orthodox game theoretic conclusions from orthodox philosophical premises. But I think this epistemological tack is a better way to make game theory and philosophy look a little closer than they do if one starts thinking of game theorists as working on special (and specially important) cases of decision problems.

Knowing How, Regresses and Frames

I’m just back from my annual trip to St Andrews to work at Arché. It was lots of fun, as always. The highlight of the trip was taking the baby overseas for the first time, and letting her meet so many great people, especially the other babies. And there was lots of other fun besides. I taught a 9-seminar class on game theory. I have to revise my notes a bit to correct some of the mistakes that became clear in discussion there, but hopefully soon I’ll post them.

Over the last two weekends I was there, there were two very interesting conferences. The first was on the interface between the study of language and the study of philosophy. The second was on knowing how. I didn’t get to attend all of it, so it’s possible that the things I’ll be saying here were addressed in talks I couldn’t make. And this isn’t really my field, so I suspect much of what I’m saying here will be old news to cognoscenti. But I thought that at times some of the anti-Ryleans understated, or at least misstated, the force of Ryle’s arguments.

*Regress Arguments*

Jason Stanley briefly touched on the regress argument Ryle gives in favour of a distinction between knowing how and knowing that. Or, at least, he briefly touched on *a* regress argument that Ryle gives, though I think this isn’t Ryle’s only regress argument. Here’s a rough version of the argument Jason attributes to Ryle.

* Knowing that is a static state.
* No matter how many static states a creature is in, there is no guarantee that anything dynamic will happen, e.g., that the creature will move, or change.
* But our knowledge does sometimes lead to dynamic effects.
* So there is more to knowledge than knowing that.

This is a pretty terrible argument, I think, and Jason did a fine job demolishing it. For one thing, whatever it means to say that knowing that is static, knowing how might be just as static. And given a functionalist/dispositionalist account of content, it just won’t be true that knowing that is static in the relevant sense. If an agent never has the disposition to go to the fridge even though they have a strong desire for beer, and no conflicting dispositions/impediments, then they don’t really believe there is beer in the fridge, so don’t know that there is beer in the fridge.

This way of presenting Ryle makes it sound like knowing how is some kind of ‘vital force’, and Ryle himself is a vitalist, looking for the magical force that is behind self-locomotion. I don’t think that’s a particularly fair way, though, of looking at Ryle. A better approach, I think, starts with consideration of the following kind of creature.

The creature is very good, indeed effectively perfect, at drawing conclusions of the form _I should φ_. But they do not always follow this up by doing φ. If you think it is possible to form beliefs of the form _I should φ_ without ever going on to φ, or even forming a disposition to φ, imagine the creature is like that. If you think that’s impossible, perhaps on functionalist grounds, imagine that the creature moves from knowledge she expresses with _I should φ_ to actually doing φ as rarely as is conceptually possible. (I set aside as absurd the idea that the functionalist characterisation of mental content rules out there being large differences in how frequently creatures move from _I should φ_ to actually doing φ.)

I think such a creature is missing something. If they frequently don’t do φ in cases where it would be particularly hard, what they might be missing is willpower. But let’s not assume that. They frequently just don’t do what they think they should do, given their interests, and often instead do something harder, or less enjoyable. But what they are missing doesn’t seem to be propositional knowledge, since by hypothesis they are very good at figuring out what they should do, and if they were missing propositional knowledge, that’s what they would be missing.

What they might be missing is a *skill*, such as the skill of acting on one’s normative judgments. But I think Ryle has a useful objection to that characterisation. It is natural to characterise the person’s actions as *stupid*, or more generally *unintelligent*, when they don’t do what they can quite plainly see they should do. A person who lacks a skill at digesting hot dogs quickly, or playing the saxophone, or sleeping on an airplane, isn’t thereby stupid or even unintelligent. (Though they might be stupid if they know they lack these skills and nevertheless try to do things that call for such a skill.) Indeed, we typically criticise cognitive failings as being unintelligent. So our imagined creature must have a cognitive failing. And that failing must not be an absence of knowledge that, since by hypothesis that isn’t lacking. So we call what is lacking knowledge how.

Note that I really haven’t given an argument that this is the kind of thing that natural language calls knowing how. It’s consistent with this argument that *everything* that is described as knowing how in English is in fact a kind of knowing that. But it is an argument that there is some cognitive skill that plays one of the key roles in regress-stopping that Ryle attributed to knowing how.

*Ryle on the Frame Problem*

There’s another problem for a traditional theory that identifies knowledge with knowing that, and it is the “frame problem”:http://plato.stanford.edu/entries/frame-problem/. Make the following assumptions about a creature.

* It knows most of the relevant true propositions of the form _That p is true is relevant to my decision about whether to do ψ_.
* It knows an enormous number of the relevant true propositions of the form _That q is true is irrelevant to my decision about whether to do ψ_, though of course there are infinitely many it does not know.
* If it consciously draws on a piece of knowledge that in figuring out whether to do ψ, that has large computational costs.
* If it subconsciously draws on a piece of knowledge that in figuring out whether to do ψ, that has small but not zero computational costs.
* If it simply ignores q in figuring out whether to do ψ, rather than first considering whether to ignore q and then ignoring it, that has zero computational costs.

It seems to me that such a creature has to work out a way to use its knowledge that of propositions like _That q is true is irrelevant to my decision about whether to do ψ_ in making practical deliberations without actually drawing on that knowledge. If it does draw on it, the computational costs will go to infinity, and nothing will get done. In short, it has to be able to ignore propositions like _q_, and it has to ignore them without thinking about whether to ignore them.
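
A toy calculation, with made-up cost numbers, illustrates the bind: if drawing on each piece of irrelevance-knowledge costs anything at all, however little, the cost of deliberation grows without bound as the stock of known-irrelevant propositions grows, whereas simply ignoring them keeps the cost fixed.

```python
# Toy sketch with made-up cost numbers: even a tiny per-proposition cost for
# subconsciously checking "q is irrelevant" swamps deliberation as the stock
# of known-irrelevant propositions grows; ignoring without checking costs nothing.

CONSCIOUS_COST = 100.0      # drawing on a piece of knowledge consciously
SUBCONSCIOUS_COST = 0.01    # drawing on it subconsciously: small but not zero
IGNORE_COST = 0.0           # ignoring it without first deciding to ignore it

def deliberation_cost(n_irrelevant, per_item_cost, n_relevant=5):
    """Total cost: consciously weigh the relevant items, pay per_item_cost per irrelevant one."""
    return n_relevant * CONSCIOUS_COST + n_irrelevant * per_item_cost

for n in (10, 10_000, 10_000_000):
    print(n,
          deliberation_cost(n, SUBCONSCIOUS_COST),  # grows without bound with n
          deliberation_cost(n, IGNORE_COST))        # stays fixed at 500.0
```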

It seems that a skill like this is not something one gets by simply having a lot of knowledge. You can know all you like about how propositions like _q_ should be ignored in practical deliberation. But it won’t help a bit if you have to go through the propositions one by one and conclude that they should be ignored, even if you can do all this subconsciously.

Moreover, it is a sign of *intelligence* to have such a skill. Someone whose mind drifts onto thoughts about the finer details of French medieval history when trying to decide whether to catch this local train or wait for the express is displaying a kind of unintelligence. As above, Ryle concludes from that that the skill is a distinctively cognitive skill, and worthy of being called a kind of knowledge. Since it isn’t knowledge that – our creature has all the salient knowledge that – it is a kind of knowing how.

Now I assume that the five assumptions I made above are actually true of creatures like us. Perhaps they are not; perhaps we have a way of drawing on knowledge that which doesn’t involve any computational costs. But I rather doubt that’s true. I think that we draw on knowledge by using it computationally, and computational usage is by definition costly. Not nearly as costly as conscious thought, but costly. Many of us are sensitive to our knowledge of unimportance without drawing on it; we make decisions about whether to catch the local or the express without first considering whether French medieval history is relevant, and deciding that it isn’t. But this is because we know how to ignore irrelevant information, not merely because we know that the irrelevant information is irrelevant. Knowing that it is irrelevant is no use if you don’t know how to adjust your decision-making process in light of that knowledge.