Consider the following decision problem. You have two choices, which we’ll call 1 and 2. If you choose option 1, you’ll get $1,000,000. If you choose option 2, you’ll get $1,000. There are no other consequences of your actions, and you prefer more money to less. What should you do?
It sounds easy enough, right? You should take option 1. I think that’s the right answer, but getting clear about why it is the right answer, and what question it is the right answer to, is a little tricky.
Here’s something that’s consistent with the initial description of the case. You’re in a version of Newcomb’s problem. Option 1 is taking one box; option 2 is taking two boxes. You have a crystal ball, and it perfectly reliably detects (via cues from the future) whether the demon predicted your choice correctly. And she did; so you know (with certainty) that if you pick option 1, you’ll get the million, and if you pick option 2, you’ll get the thousand. Still, I think you should pick option 1, since you should pick option 1 in any problem consistent with the description in the first paragraph. (And I think that’s consistent with causal decision theory, properly understood, though the reasons why that is so are a little beyond the scope of this post.)
Here’s something else that’s consistent with a flat-footed interpretation of the case, though not, I think, with the intended interpretation. Option 2 is a box with $1,000 in it. Option 1 is a box with a randomly selected lottery ticket in it, and the ticket has an expected value of $1. Now, as a matter of fact, it will turn out to be the winning ticket, so you will get $1,000,000 if you take option 1. Still, if all you know is that option 2 gets you $1,000 and option 1 gets you a lottery ticket worth $1 in expectation, you should take option 2.
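To spell out the comparison, assuming just for concreteness that the ticket’s only possible prize is the $1,000,000 jackpot (so the odds of winning are one in a million):

$$E[\text{option 1}] = \tfrac{1}{1{,}000{,}000} \times \$1{,}000{,}000 = \$1 < \$1{,}000 = E[\text{option 2}]$$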
Now I don’t think that undermines what I said above. And I don’t think it undermines it because, when we properly interpret descriptions of games and decision problems, we’ll see that this situation isn’t among the class of decision problems described in the first paragraph. When we describe the outcomes of certain actions in a decision problem, those aren’t merely the actual outcomes; they are things that are properly taken as fixed points in the agent’s deliberation. They are, in Stalnakerian terms, the limits of the context set. In the lottery ticket example, it is not determined by the context set that you’ll get $1,000,000 if you take option 1, even though it is in fact true.
I think “things the agent properly takes as fixed points” are all and only the things the agent knows, but that’s a highly controversial theory of knowledge. (In fact, it’s my version of interest-relative invariantism.) So rather than wade into that debate, I’ll simply talk about proper fixed points.
Saying that something is a fixed point is a very strong claim. It means the agent doesn’t even, indeed shouldn’t even, consider possibilities where it fails. So in Newcomb’s problem, the agent shouldn’t be worrying at all about possibilities where the demon miscounts the money she puts into box 1 or box 2, or possibilities where there is really a piranha in box 2 that will bite your hand, rather than $1,000. And when I say that she shouldn’t be worrying about them, I mean they shouldn’t be in the algebra of possibilities over which her credences are defined.
Formally, there’s a big difference between something being true at all points over which a probability function is defined and something (merely) having probability 1 according to that function. And that difference is something I’m relying on heavily here; I’ll give a toy illustration of it below. In particular, I think the following two things are true.
First, when we state something in the setup of a problem, we are saying that the agent can take it as given for the purposes of that problem.
Second, when we are considering the possible outcomes of a situation, the only possibilities we need to consider are ones compatible with the fixed points. So in my version of Newcomb’s Problem, the right thing to do is to take one box, because there is no possibility under consideration where you do better than you do by taking one box. On the other hand, some things that we now know to be false might (in some sense of ‘might’) become relevant, even though we now assign them probability 0. That’s what goes on in cases where backwards induction fails; the context set shifts over the course of the game, and so we have to take new things into account.
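Here’s the toy illustration promised above, in Python, with purely illustrative labels. The point is just that a probability-0 possibility is still in the algebra, whereas the failure of a fixed point isn’t represented at all.

```python
from fractions import Fraction

# A toy model (the labels are mine, purely for illustration) of the contrast
# between being true at every point the credence function is defined over and
# merely getting probability 1.

# Model A: "the demon predicted correctly" is a fixed point. Worlds where she
# mispredicted are simply absent from the space of possibilities.
credence_a = {"predicted correctly": Fraction(1)}

# Model B: mispredicted worlds are in the algebra, but carry credence 0.
credence_b = {"predicted correctly": Fraction(1), "mispredicted": Fraction(0)}

def prob(credence, event):
    """Probability of an event (a set of world labels) under a credence function."""
    return sum(p for world, p in credence.items() if world in event)

# Both functions assign probability 1 to the demon's having predicted correctly...
assert prob(credence_a, {"predicted correctly"}) == 1
assert prob(credence_b, {"predicted correctly"}) == 1

# ...but only in Model B is the mispredicted case so much as representable.
print("mispredicted" in credence_a)  # False: not a possibility for deliberation
print("mispredicted" in credence_b)  # True: a live, if probability-0, possibility
```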
Having said all that, there is one hard question that I don’t know the answer to. It’s related to some things that Adam Elga, John Collins and Andy Egan were discussing at a reading group on the weekend. In the kind of puzzle cases we usually consider in textbooks, the context set consists of the Cartesian product of some hypotheses about the world and some choices. That’s to say, the context set satisfies this principle: if _S_ is a possible state of the world (excluding my choice), and _C_ is a possible choice, then there is a possibility in the context set where _S_ and _C_ both obtain. I wonder whether that’s something we should always accept. I’ll leave the pros and cons of accepting that proposal for another post, though.
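For concreteness, here’s a toy sketch of that principle, and of the sort of context set that would violate it; my crystal-ball Newcomb case above is plausibly of the latter kind, since pairs where the demon mispredicts are off the table.

```python
from itertools import product

# A toy rendering of the Cartesian product principle (again, the labels are
# purely illustrative): every state-of-the-world / choice pair is a live
# possibility in the context set.
states = ["demon predicted one-boxing", "demon predicted two-boxing"]
choices = ["take one box", "take two boxes"]

context_set = set(product(states, choices))

# The principle holds: for every state S and choice C, (S, C) is in the set.
assert all((s, c) in context_set for s in states for c in choices)

# A context set that violates the principle drops some state/choice pairs,
# e.g. the ones where the demon mispredicts.
restricted = context_set - {
    ("demon predicted one-boxing", "take two boxes"),
    ("demon predicted two-boxing", "take one box"),
}
assert not all((s, c) in restricted for s in states for c in choices)
```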