May 30th, 2007

Different Ideas About Newcomb Cases

One advantage of going to parties with mathematicians and physicists is that you can describe a problem to them, and sometimes they’ll get stuck thinking about it and come up with an interesting new approach to it, different from most of the standard ones. This happened to me over the past few months with Josh von Korff, a physics grad student here at Berkeley, and versions of Newcomb’s problem. He shared my general intuition that one should choose only one box in the standard version of Newcomb’s problem, but that one should smoke in the smoking lesion example. However, he took this intuition seriously enough that he was able to come up with a decision-theoretic protocol that actually seems to make these recommendations. It ends up making some other really strange predictions, but it seems interesting to consider, and also ends up resembling something Kantian!

The basic idea is that, right now, I should plan all my future decisions in whatever way maximizes my expected utility as evaluated now, and then stick to those decisions. In some sense this policy obviously has the highest overall expectation, since that is exactly what it is designed to maximize.

In the standard Newcomb case, we see that adopting the one-box policy now means that you’ll most likely get a million dollars, while adopting a two-box policy now means that you’ll most likely get only a thousand dollars. Thus, this procedure recommends being a one-boxer.
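
To make that recommendation concrete, here is a minimal sketch of the ex-ante calculation in Python; the 0.99 predictor accuracy is an assumed figure, not part of the problem statement.

    # Ex-ante expected value of committing now to a one-box or a two-box policy.
    # The 0.99 predictor accuracy is an assumption for illustration.

    ACCURACY = 0.99  # assumed chance the predictor correctly anticipates the chosen policy

    def expected_value(policy):
        """Expected dollars from adopting `policy` before the boxes are filled."""
        if policy == "one-box":
            # The opaque box holds $1,000,000 just in case one-boxing was predicted.
            return ACCURACY * 1_000_000
        # Two-boxing: the $1,000 is always collected; the million is probably absent.
        return (1 - ACCURACY) * (1_000_000 + 1_000) + ACCURACY * 1_000

    print(expected_value("one-box"))   # 990000.0
    print(expected_value("two-box"))   # 11000.0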

Now consider a slight variant of the Newcomb problem. In this version, the predictor didn’t set up the boxes, she just found them and looked inside, and then investigated the agent and made her prediction. She asserts the material biconditional “either the box has a million dollars and you will only take that box, or it has nothing and you will take both boxes”. Looking at this prospectively, we see that if you’re a one-boxer, then this situation will only be likely to emerge if there’s already a box with a million dollars there, while if you’re a two-boxer, then it will only be likely to emerge if there’s already an empty box there. However, being a one-boxer or two-boxer has no effect on the likelihood of there being a million dollars or not in the box. Thus, you might as well be a two-boxer, because in either situation (the box already containing a million or not) you get an extra thousand dollars, and you just get the situation described to you differently by the predictor.
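
For contrast, here is a sketch of the same calculation for the found-box variant, where the policy has no effect on the contents; the prior p is an arbitrary assumed value.

    # In the variant, the box's contents are fixed before the predictor arrives,
    # so the probability of the million is the same under either policy.
    # The prior p is an assumed value for illustration.

    p = 0.5  # assumed prior probability that the found opaque box holds $1,000,000

    def expected_value(policy):
        """Expected dollars when the contents are independent of the policy."""
        base = p * 1_000_000                         # the same under both policies
        bonus = 1_000 if policy == "two-box" else 0  # the extra thousand
        return base + bonus

    print(expected_value("one-box"))   # 500000.0
    print(expected_value("two-box"))   # 501000.0  (dominates for any value of p)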

Interestingly enough, we see that if the predictor is causally responsible for the contents of the box then we should follow evidential decision theory, while if she only provides evidence for what’s already in the box then we should follow causal decision theory. I don’t know how much people have already discussed this aspect of the causal structure of the situation, since they seem to focus instead on whether the agent is causally responsible, rather than the predictor.

Now I think my intuitive understanding of the smoking lesion case is more like the second of these two problems. If the lesion is actually determining my behavior, then decision theory seems to be irrelevant, so the way I seem to understand the situation has to be something more like a medical discovery of the material biconditional between my having cancer and my smoking.

Here’s another situation Josh described that started to make things seem a little weirder. In Ancient Greece, every day while wandering on the road, one encounters either a beggar or a god. If one encounters a beggar, then one can choose either to give the beggar a penny or not. But if one encounters a god, then the god will give one a gold coin iff, had there been a beggar instead, one would have given a penny. On encountering a beggar, it now seems intuitive that (speaking only out of self-interest) one shouldn’t give the penny. But (assuming that gods and beggars are encountered at random with some middling probability) the decision protocol outlined above recommends giving the penny anyway.
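
Here is a rough version of that ex-ante calculation; the encounter probability and the value of a gold coin in pennies are figures I am assuming purely for illustration.

    # Ex-ante value of planning to give the penny, evaluated before knowing
    # whether today's encounter is a beggar or a god. The 0.5 encounter
    # probability and the 100-penny gold coin are assumed figures.

    P_GOD = 0.5   # assumed chance of meeting a god rather than a beggar
    GOLD = 100    # assumed worth of a gold coin, in pennies

    def expected_value(plan_to_give):
        """Expected pennies from adopting the plan before setting out."""
        if plan_to_give:
            # God: rewarded for being a giver. Beggar: out one penny.
            return P_GOD * GOLD + (1 - P_GOD) * (-1)
        # God: no reward. Beggar: keep the penny.
        return 0.0

    print(expected_value(True))    # 49.5 -- the protocol recommends giving
    print(expected_value(False))   # 0.0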

In a sense, what’s happening here is that I’m giving the penny in the actual world, so that my closest counterpart that runs into a god will receive a gold coin. It seems very odd to behave like this, but from the point of view before I know whether or not I’ll encounter a god, this seems to be the best overall plan. But as Josh points out, if this was the only way people got food, then people would see that the generous were doing well, and generosity would spread quickly.

If we now imagine a multi-agent situation, we can get even stronger (and perhaps stranger) results. If two agents are playing in a prisoner’s dilemma, and they have common knowledge that they are both following this decision protocol, then it looks like they should both cooperate. In general, if this decision protocol is somehow constitutive of rationality, then rational agents should always act according to a maxim that they can intend (consistently with their goals) to be followed by all rational agents. To get either of these conclusions, one has to condition one’s expectations on the proposition that other agents following this procedure will arrive at the same choices.
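
A toy version of that reasoning, using standard prisoner's dilemma payoffs (my assumed numbers) and conditioning on the other protocol-follower making the same choice:

    # Prisoner's dilemma evaluated under the protocol, conditioning on the other
    # protocol-follower arriving at the same choice. The payoff numbers are the
    # usual textbook ones, assumed here for illustration.

    PAYOFF = {  # (my move, their move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def protocol_value(my_move):
        """My payoff, given that the other protocol-follower makes the same choice."""
        return PAYOFF[(my_move, my_move)]

    print(protocol_value("C"))  # 3 -- mutual cooperation comes out ahead
    print(protocol_value("D"))  # 1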

Of course this is all very strange. When I actually find myself in the Newcomb situation, or facing the beggar, I no longer seem to have a reason to behave according to the dictates of this protocol – my actions benefit my counterpart rather than myself. And if I’m supposed to make all my decisions by making this sort of calculation, then it’s unclear how far back in time I should go to evaluate the expected utilities. This matters if we can somehow nest Newcomb cases, say by offering a prize if I predict that you will make the “wrong” decision on a future Newcomb case. It looks like I have to calculate everything all the way back at the beginning, with only my a priori probability distribution – which doesn’t seem to make much sense. Perhaps I should only go back to when I adopted this decision procedure – but then what stops me from “re-adopting” it at some later time, and resetting all the calculations?

At any rate, these strike me as some very interesting ideas.

Posted by Kenny Easwaran in Uncategorized



7 Responses to “Different Ideas About Newcomb Cases”

  1. Branden Fitelson says:

    Kenny — This sounds a bit (but perhaps not exactly) like Ned McClennen’s “resolute choice” approaches to such problems. Have a look at the following items of his:

    “PRISONER’S DILEMMA AND RESOLUTE CHOICE” IN “PARADOXES OF RATIONALITY AND COOPERATION”, CAMPBELL, RICHMOND (ED), 94-104.

    http://www.ubcpress.ca/search/title_book.asp?BookID=1614

    “The Rationality of Being Guided by Rules” in The Oxford Handbook of Rationality, Mele, Alfred R (ed), 222-239.

    http://www.oxfordscholarship.com/oso/private/content/philosophy/9780195145397/p058.html#acprof-0195145399-chapter-12

    Also, his book “Rationality and Dynamic Choice” might be worth looking at in this connection.

  2. Branden Fitelson says:

    Kenny — One more salient paper is Meek & Glymour’s “Conditioning and Intervening”. What’s wrong with their diagnosis of Newcomb?

    http://bjps.oxfordjournals.org/cgi/content/abstract/45/4/1001

  3. Duncan Watson says:

    I think I share your worries about what stops one from re-adopting the decision procedure at some later point; that is, I don’t see what the justification is for sticking, in the future, to one’s past decisions.

    In the standard Newcomb problem the person taking the boxes is given the choice after the predictor has decided whether or not to put the million dollars in the box. It does seem like the biggest payout comes from somehow forming the intention now (before the chooser has been presented with a Newcomb problem choice) to be a one-boxer no matter what when presented with the choice in the future. The rationale for this is that the predictor will predict this and therefore put the million dollars in the box, hence the chooser will be better off than if they had formed the intention to take both boxes and the predictor had predicted this. But once the choice is given (i.e. the million dollars is now either present or absent from the box) the chooser should switch, ignore all their previous intentions, and take both boxes. Doing this makes them one thousand dollars better off than taking just the one box.

    Of course if the predictor is good at their job they will have predicted that the chooser will switch; but no matter how adamant the chooser was when forming their intentions (specifically the intention that, if they were presented with a Newcomb problem choice in the future, they would take one box), it still remains that the rational thing to do once they are presented with the choice is to switch. As Lewis puts it in ‘Why Ain’cha Rich?’, the irrational are richly pre-rewarded.

  4. Kent Bach says:

    About sticking to one’s past decisions, at least in the standard Newcomb problem, I cannot resist quoting the last bit from an old (Canadian JPhil 1985) paper of mine on the subject, only because of the great quote at the end:

    Wondering how the predictor anticipates people’s choices, you just can’t forget that what’s done is done and not worry about yielding to a last-minute temptation to take BOTH. To combat that worry you should commit yourself to taking ONE. So instead of wondering how the predictor does it, you should recall what Muhammad Ali said prior to facing the seemingly invincible heavyweight champion Sonny Liston: “If Cassius Clay says a rooster can lay an egg, don’t ask how – grease that skillet!”

  5. Joshua Von Korff says:

    Hi all, thanks for your comments on these ideas …

    I wanted to mention the example that illustrates why one-boxing makes sense to me. (It can be applied to Kenny’s other examples too.)

    Imagine you are in a world where, three times a day, you go to a cafeteria to get food. Your food is served in two Newcomb boxes. The opaque box is empty or it contains a decent meal. The transparent box contains a small snack, like an apple or a cookie. There is no other source of food in this world — and let’s say it’s for some reason impossible to beg, borrow, or steal food from other people.

    You try the game a few times, and 99% of the time when you take one box, you get a meal; 99% of the time when you take both boxes, you get only a snack. You know that if you don’t eat an average of between one and two meals a day, you will eventually starve.

    Now, how many real-life two-boxers would slowly starve to death over the course of a month, all the while saying that they were making rational decisions? I don’t think many would. If they give in and become one-boxers, does that mean they are weak or not very smart? I don’t think that’s right either.

    We have a deeply ingrained notion that causality is a crucial component of reasoned actions, because in real life it always is. I must throw the spear in order to impale the deer. I must go to the bank in order to get cash from the ATM. And so on. But Newcomb situations never happen in real life. In a hypothetical “Newcomb world” where causal reasoning frequently doesn’t work (i.e. leads to people starving), I think people would have a very different notion of what constitutes a reasoned action. And it seems to me that it’s only fair to judge Newcomb’s paradox as would an inhabitant of a “Newcomb world.”
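
    Here is a rough simulation of the cafeteria, using the 99% figure above; the thirty-day horizon, the random seed, and the one-meal-a-day survival threshold are assumed simplifications.

        # Rough simulation of the Newcomb cafeteria: three visits a day for 30 days.
        # The 0.99 accuracy comes from the description above; everything else
        # (the horizon, the random seed) is an assumed simplification.

        import random

        def meals_per_day(policy, days=30, accuracy=0.99, seed=0):
            """Average meals per day for an agent who always follows `policy`."""
            rng = random.Random(seed)
            meals = 0
            for _ in range(days * 3):
                correct = rng.random() < accuracy   # did the predictor get this visit right?
                box_filled = correct if policy == "one-box" else not correct
                if box_filled:
                    meals += 1   # the one-boxer takes it; the two-boxer collects it plus the snack
            return meals / days

        print(meals_per_day("one-box"))   # roughly 3 meals a day
        print(meals_per_day("two-box"))   # close to 0 -- mostly just snacks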

  6. Matt Weiner says:

    Josh,
    I’m a one-boxer under many circumstances; at least, I think it’s rational for me to publicly proclaim my one-boxerness now so as to make it easier for any future predictor to figure out that I’m a one-boxer (and also I think that it’s rational for me to stick to that decision). But I worry that the cafeteria example doesn’t really make the case for one-boxing. At least it may make the case for one-boxing as a signal.

    Basically there are two alternatives: Either all the meals are prepackaged far in advance, or they’re newly packaged every day. Suppose they’re newly packaged every day. Then it makes sense to one-box every day as a signal to the predictor that you’re going to one-box the next day. Causal decision theorists should have no problem with that.

    The other alternative is that the meals are prepackaged far in advance. But can we really conceive of this world? I find it possible to imagine a world in which a predictor can judge a single choice very accurately, but I’m not sure how a predictor could judge thousands of choices accurately enough to set the problem up. That requires knowing exactly when the two-boxer falls off the wagon; or for that matter exactly when someone may crave a snack. And I’m not sure that we can really draw any moral from a situation that requires a predictor of such uncanny accuracy.

  7. Joshua Von Korff says:

    Matt,

    I’m interested to hear more about your position on one-boxing. If your predictor is known to rely primarily on public proclamations, then why not proclaim yourself to be a one-boxer, but secretly plan to be a two-boxer all the while?
    (Of course, if this works, he isn’t a very good predictor. But why wouldn’t it work?)

    Regarding Newcomb’s cafeteria — it’s true that if the predictor relies on past actions in the game to predict future actions, then one-boxing is a useful signal. The game becomes like a repeated prisoner’s dilemma, with the predictor playing the tit-for-tat strategy. And, as you say, there would then be good causalist reasons for one-boxing.

    But the tit-for-tat predictor would be easy to fool every so often. All you’d have to do is one-box a few times in a row, and two-box the last time. The predictor would mess up. I guess I have been assuming that the predictor keeps his 99% accuracy rating regardless of what stratagem you apply to mess him up. So I don’t think your past one-boxing or two-boxing can play a significant role in the predictor’s decisions, even if the meals are packaged every morning. Which means that a causalist has no incentive to be a one-boxer here.
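
    As a small illustration of the point above that such a predictor would be easy to fool, here is a sketch in which the predictor simply repeats the player's previous move (an assumed tit-for-tat-style model, not part of the original setup):

        # Fooling a predictor who just predicts a repeat of the player's last move
        # (an assumed tit-for-tat-style model, used only for illustration).

        history = ["one-box", "one-box", "one-box", "two-box"]  # cooperate, then defect at the end

        def predict(past):
            """Predict a repeat of the last observed move; assume one-boxing on round one."""
            return past[-1] if past else "one-box"

        for i, move in enumerate(history):
            prediction = predict(history[:i])
            box_filled = (prediction == "one-box")
            print(move, "| predicted:", prediction, "| meal box filled:", box_filled)

        # On the last round the prediction is "one-box" but the player two-boxes,
        # collecting both the meal and the snack: the predictor messes up.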
