Different Ideas About Newcomb Cases

One advantage of going to parties with mathematicians and physicists is that you can describe a problem to them, and sometimes they’ll get stuck thinking about it and come up with an interesting new approach, different from most of the standard ones. This happened to me over the past few months with Josh von Korff, a physics grad student here at Berkeley, and versions of Newcomb’s problem. He shared my general intuition that one should choose only one box in the standard version of Newcomb’s problem, but that one should smoke in the smoking lesion example. However, he took this intuition seriously enough to come up with a decision-theoretic protocol that actually seems to make these recommendations. It ends up making some other really strange recommendations as well, but it seems interesting to consider, and it also ends up resembling something Kantian!

The basic idea is that, right now, I should plan all my future decisions in such a way that they maximize my expected utility as evaluated right now, and then stick to those decisions. In some sense this policy obviously has the highest overall expectation, since it is designed precisely to maximize that expectation.

In the standard Newcomb case, we see that adopting the one-box policy now means that you’ll most likely get a million dollars, while adopting a two-box policy now means that you’ll most likely get only a thousand dollars. Thus, this procedure recommends being a one-boxer.
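To make the comparison concrete, here is a minimal sketch in Python of the policy-level expected-utility calculation the protocol performs. The specific numbers (a predictor who is right 99% of the time, $1,000,000 in the opaque box, $1,000 in the transparent one) are my own illustrative assumptions, not part of the original problem statement.

```python
# Policy-level expected utility for the standard Newcomb case.
# Assumed (illustrative) numbers: predictor accuracy 0.99,
# $1,000,000 in the opaque box, $1,000 in the transparent box.

ACCURACY = 0.99
MILLION = 1_000_000
THOUSAND = 1_000

def expected_utility(policy):
    """Expected payoff of committing to `policy` ('one-box' or 'two-box')
    before the predictor makes her prediction."""
    if policy == "one-box":
        # The predictor usually foresees one-boxing and fills the opaque box.
        return ACCURACY * MILLION + (1 - ACCURACY) * 0
    else:
        # The predictor usually foresees two-boxing and leaves it empty.
        return ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

for policy in ("one-box", "two-box"):
    print(policy, expected_utility(policy))
# one-box: ~990,000 vs. two-box: ~11,000, so the protocol recommends one-boxing.
```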

Now consider a slight variant of the Newcomb problem. In this version, the predictor didn’t set up the boxes, she just found them and looked inside, and then investigated the agent and made her prediction. She asserts the material biconditional “either the box has a million dollars and you will only take that box, or it has nothing and you will take both boxes”. Looking at this prospectively, we see that if you’re a one-boxer, then this situation will only be likely to emerge if there’s already a box with a million dollars there, while if you’re a two-boxer, then it will only be likely to emerge if there’s already an empty box there. However, being a one-boxer or two-boxer has no effect on the likelihood of there being a million dollars or not in the box. Thus, you might as well be a two-boxer, because in either situation (the box already containing a million or not) you get an extra thousand dollars, and you just get the situation described to you differently by the predictor.
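The dominance reasoning in this variant can also be laid out explicitly. Here is a minimal sketch, using the same illustrative payoffs as above: since the contents of the box are fixed independently of the policy, we simply compare the two policies world by world.

```python
# In the variant, the box contents are settled before (and independently of)
# the agent's policy, so we compare the policies within each possible world.

MILLION = 1_000_000
THOUSAND = 1_000

def payoff(contents, policy):
    """Payoff given what is already in the opaque box and the chosen policy."""
    return contents if policy == "one-box" else contents + THOUSAND

for contents in (0, MILLION):
    one = payoff(contents, "one-box")
    two = payoff(contents, "two-box")
    print(f"box contains {contents}: one-box -> {one}, two-box -> {two}")
# In both possible worlds, two-boxing yields an extra thousand dollars,
# so it dominates regardless of what the predictor tells you.
```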

Interestingly enough, we see that if the predictor is causally responsible for the contents of the box, then we should follow evidential decision theory, while if she only provides evidence about what’s already in the box, then we should follow causal decision theory. I don’t know how much people have already discussed this aspect of the causal structure of the situation, since discussions seem to focus on whether the agent, rather than the predictor, is causally responsible.

Now I think my intuitive understanding of the smoking lesion case is more like the second of these two problems. If the lesion is actually determining my behavior, then decision theory seems to be irrelevant, so the way I seem to understand the situation must be something more like a medical discovery of a material biconditional between my having cancer and my smoking.

Here’s another situation Josh described that started to make things seem a little weirder. In Ancient Greece, one wanders the roads, and every day one encounters either a beggar or a god. On encountering a beggar, one can choose whether or not to give the beggar a penny. But on encountering a god, the god will give one a gold coin iff, had there been a beggar instead, one would have given the penny. When one actually meets a beggar, it now seems intuitive that (speaking purely out of self-interest) one shouldn’t give the penny. But (assuming that gods and beggars are encountered randomly, with some middling probability distribution) the decision protocol outlined above recommends giving the penny anyway.
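Here is a rough sketch of the calculation the protocol performs in this case, under some assumed numbers (a 50% chance of meeting a god, a penny worth 0.01 and a gold coin worth 100 in the same units). These figures are not part of Josh’s example; they just make the comparison concrete.

```python
# Policy-level expected utility for the beggar/god case.
# Assumed (illustrative) numbers: 50% chance of meeting a god,
# a penny costs 0.01 and a gold coin is worth 100 in the same units.

P_GOD = 0.5
PENNY = 0.01
GOLD = 100.0

def expected_utility(generous):
    """Expected daily payoff of a policy fixed before knowing whom you meet.
    `generous` means: give the penny whenever you meet a beggar."""
    if generous:
        # Meet a beggar: lose the penny.  Meet a god: she sees that you
        # would have given, so you receive the gold coin.
        return (1 - P_GOD) * (-PENNY) + P_GOD * GOLD
    else:
        # Give nothing and receive nothing either way.
        return 0.0

print("generous:", expected_utility(True))   # ~49.995
print("stingy:  ", expected_utility(False))  # 0.0
# For any middling probability of meeting a god, giving the penny wins.
```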

In a sense, what’s happening here is that I’m giving the penny in the actual world so that my closest counterpart who runs into a god will receive a gold coin. It seems very odd to behave like this, but from the point of view before I know whether or not I’ll encounter a god, this seems to be the best overall plan. But as Josh points out, if this were the only way people got food, then people would see that the generous were doing well, and generosity would spread quickly.

If we now imagine a multi-agent situation, we can get even stronger (and perhaps stranger) results. If two agents are playing a prisoner’s dilemma, and they have common knowledge that they are both following this decision protocol, then it looks like they should both cooperate. In general, if this decision protocol is somehow constitutive of rationality, then rational agents should always act according to a maxim that they can intend (consistently with their goals) to be followed by all rational agents. To get either of these conclusions, one has to condition one’s expectations on the proposition that other agents following this procedure will arrive at the same choices.
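One way to see the prisoner’s dilemma claim is to condition the expected-payoff calculation on the other agent, who follows the same protocol, making the same choice. A minimal sketch with standard textbook payoffs (the particular numbers are my assumption, not part of the post):

```python
# Prisoner's dilemma where both agents follow the protocol and this is
# common knowledge, so each conditions on the other making the same choice.
# Standard (illustrative) payoffs: T=5 > R=3 > P=1 > S=0.

PAYOFF = {
    ("cooperate", "cooperate"): 3,  # R: reward for mutual cooperation
    ("cooperate", "defect"): 0,     # S: sucker's payoff
    ("defect", "cooperate"): 5,     # T: temptation to defect
    ("defect", "defect"): 1,        # P: punishment for mutual defection
}

def expected_utility(my_choice):
    """Under common knowledge of the shared protocol, the other agent's
    choice is expected to match mine, so only the diagonal outcomes are live."""
    return PAYOFF[(my_choice, my_choice)]

best = max(("cooperate", "defect"), key=expected_utility)
print(best)  # cooperate, since R=3 > P=1
```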

Of course this is all very strange. When I actually find myself in the Newcomb situation, or facing the beggar, I no longer seem to have a reason to behave according to the dictates of this protocol – my actions benefit my counterpart rather than myself. And if I’m supposed to make all my decisions by making this sort of calculation, then it’s unclear how far back in time I should go to evaluate the expected utilities. This matters if we can somehow nest Newcomb cases, say by offering a prize if I predict that you will make the “wrong” decision on a future Newcomb case. It looks like I have to calculate everything all the way back at the beginning, with only my a priori probability distribution – which doesn’t seem to make much sense. Perhaps I should only go back to when I adopted this decision procedure – but then what stops me from “re-adopting” it at some later time, and resetting all the calculations?

At any rate, these strike me as some very interesting ideas.