I’ve been interested recently in defending a particular norm relating knowledge and decision problems. To set out the norm, it will be useful to have some terminology.

- A **decision problem** is a triple (S, A, U) consisting of a set of states, a set of actions, and a utility function that maps state-action pairs to utilities.
- An agent **faces** a decision problem (S, A, U) if she knows that her utility function agrees with U about how much she values each state-action pair, she knows she is able to perform each of the actions in A, and she knows that exactly one of the states in S obtains.
- A decision problem (S’, A, U’) is an **expansion** of a problem (S, A, U) for agent x iff S’ is a superset of S, U’ agrees with U on every state-action pair where the state is in S, and the agent knows that none of the states that are in S’ but not in S obtains.
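To make the definitions concrete, here is a minimal sketch in Python. The representation (frozensets of states plus a dict-based utility function) is my own choice, and the check only covers the formal clauses of the definition; the epistemic condition, that the agent knows the extra states don't obtain, can't be read off the triples themselves.

```python
def is_expansion(big, small):
    """Check the formal part of the expansion definition:
    `big` = (S2, A2, U2) expands `small` = (S1, A1, U1) only if the
    actions match, S1 is a subset of S2, and the utilities agree on
    every state-action pair whose state is in S1."""
    S1, A1, U1 = small
    S2, A2, U2 = big
    return (A1 == A2
            and S1 <= S2
            and all(U1[(s, a)] == U2[(s, a)] for s in S1 for a in A1))

# The two games discussed below, from P1's point of view: the states
# are P2's possible plays, the actions are P1's plays.
A = frozenset({"green", "red"})
S_one = frozenset({"P2 green"})              # P2 has already played green
S_two = frozenset({"P2 green", "P2 red"})    # both of P2's moves are open

U_one = {("P2 green", "green"): 1, ("P2 green", "red"): 1}
U_two = {**U_one, ("P2 red", "green"): 1, ("P2 red", "red"): 0}

assert is_expansion((S_two, A, U_two), (S_one, A, U_one))
```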

Then I have endorsed the following principle:

Ignore Known Falsehoods. If (S’, A, U’) is an expansion for x of (S, A, U), then the rational evaluability of performing any action φ is the same whether φ is performed when x faces (S’, A, U’) or when she faces (S, A, U).

I’m now worried about the following possible counterexample. Let’s start with two games.

Game One. There are two players: P1 and P2. It is common knowledge that each is rational. Each player has a green card and a red card. Their only move in the game is to play one of these cards. If at least one player plays green, they each get $1. If they both play red, they both get $0. P2 has already moved, and played green.

Game Two. There are two players: P1 and P2. It is common knowledge that each is rational. Each player has a green card and a red card. Their only move in the game is to play one of these cards. If at least one player plays green, they each get $1. If they both play red, they both get $0. The moves will be made simultaneously.
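In Game Two, playing green weakly dominates playing red for P1: it is never worse, and it is strictly better in the state where P2 plays red. A quick check, with my own encoding of the payoffs as stated above:

```python
# P1's payoffs in Game Two, indexed by (P1's play, P2's play):
# $1 if at least one player plays green, $0 if both play red.
payoff = {("green", "green"): 1, ("green", "red"): 1,
          ("red", "green"): 1, ("red", "red"): 0}

def weakly_dominates(a, b, states):
    """a weakly dominates b: at least as good in every state,
    strictly better in at least one."""
    return (all(payoff[(a, s)] >= payoff[(b, s)] for s in states)
            and any(payoff[(a, s)] > payoff[(b, s)] for s in states))

states = ["green", "red"]  # P2's possible plays
assert weakly_dominates("green", "red", states)
assert not weakly_dominates("red", "green", states)
```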

Here’s the problem for Ignore Known Falsehoods. The following premises all seem true (at least to me).

1. Games are decision problems, with the possible moves of the other player as states.
2. In Game One, it doesn’t matter what P1 does, so it is rationally permissible to play red.
3. In Game Two, playing green is the only rationally permissible play.
4. If premises 1 and 3 are true, then Game Two is an expansion of Game One.

The point behind premise 4 is that if rationality requires playing green in Game Two, and P2 is rational, we know that she’ll play green. So although in Game Two there is in some sense one extra state, namely the state where P2 plays red, it is a state we know not to obtain. So Game Two is simply an expansion of Game One.

So the big issue, I think, is premise 3. Is it true? It certainly seems true to me. If we think that rationality requires even one round of eliminating weakly dominated strategies, then it is true. Moreover, it isn’t obvious how we can coherently believe it to be false. If it is false, then rational P2 might play red. Unless we have some reason to give that possibility 0 probability, it follows that playing green maximises expected utility.
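The expected-utility point in the last sentence is just arithmetic: if p is the probability that P2 plays red, then EU(green) = 1 whatever p is, while EU(red) = 1 − p, so green strictly maximises expected utility whenever p > 0. A sketch, using my own encoding of the payoffs:

```python
# Expected utility for P1 in Game Two, as a function of
# p = probability that P2 plays red. Payoffs as in the post:
# $1 unless both players play red.
def eu_green(p):
    return (1 - p) * 1 + p * 1   # $1 whatever P2 does

def eu_red(p):
    return (1 - p) * 1 + p * 0   # $1 only if P2 plays green

assert eu_green(0.3) > eu_red(0.3)   # any p > 0: green strictly better
assert eu_green(0.0) == eu_red(0.0)  # p = 0: the two options tie
```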

(There is actually a problem here for fans of traditional expected utility theory. If you say that playing green is uniquely rational for each player, you have to say that two outcomes that have the same expected utility differ in normative status. If you say that both options are permissible, then you need some reason to say they have the same expected utility, and I don’t know what that could be. I think the best solution here is to adopt some kind of lexicographic utility theory, as Stalnaker has argued is needed for cases like this. But that’s not relevant to the problem I’m concerned with.)
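One way to picture the lexicographic idea (this is only my gloss, not Stalnaker's formal construction): compare options first by expected utility under the agent's actual credences, and break ties by expected utility under a backup distribution that gives the "impossible" states positive weight. Python's built-in tuple comparison then does the lexicographic work:

```python
# A toy lexicographic comparison. The primary credence gives
# P2-plays-red probability 0 (she is known to be rational); a backup
# distribution gives it positive weight. The particular backup value
# 0.5 is arbitrary -- any p > 0 gives the same verdict.
payoff = {("green", "green"): 1, ("green", "red"): 1,
          ("red", "green"): 1, ("red", "red"): 0}

def eu(action, p_red):
    return (1 - p_red) * payoff[(action, "green")] + p_red * payoff[(action, "red")]

def lex_value(action, p_red_primary=0.0, p_red_backup=0.5):
    return (eu(action, p_red_primary), eu(action, p_red_backup))

# Green and red tie at the primary level, but green wins at the backup
# level, so green comes out uniquely rational on the lexicographic
# ordering even though the plain expected utilities are equal.
assert lex_value("green") > lex_value("red")
```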

So I don’t know which of these premises I can abandon. And I don’t know how to square them with Ignore Known Falsehoods. So I’m worried that Ignore Known Falsehoods is false. Can anyone talk me out of this?

*Posted by Brian Weatherson at 3:54 pm*