# How Surprising is the Two Envelope Paradox

Matt Weiner writes:

I don’t think we want it to be the case that, once you open the first envelope, you have a reason to switch. Now, one issue here is the mathematical impossibility of defining a probability distribution on which every rational number is equally likely—probabilities are supposed to be countably additive, which makes it impossible to assign the same value to an infinite number of different ones. [Afterthought: Anyone know if non-standard analysis can do any work here?]

Actually, you don’t even need non-standard analysis. As John Broome (1995) pointed out, you can get just the result that Matt doesn’t want within standard probability theory. To be sure, you can’t get the result that, whatever you see in the envelope, it is equally likely that there is twice as much in the other envelope as half as much. But we can get the result that whatever you see, you should switch. Here’s one way. (It could be Broome’s exact method for all I remember – I don’t have the paper in front of me, and the numbers feel familiar from what I remember of it.)

Let’s assume we have a red and a blue envelope, a currency denominated in utils, and a God (Eris seems appropriate) who chooses how much to put in the envelopes as follows. She takes a coin that has a 2/3 probability of falling tails, and tosses it repeatedly until it falls heads. Let n be the number of tosses this takes. (If it falls tails forever, stipulate that n = 1.) She then takes a fair coin and tosses it once to determine what goes in the envelopes, by the following rules.

If heads, then 2^n utils in the red envelope and 2^(n+1) utils in the blue envelope.

If tails, then 2^n utils in the blue envelope and 2^(n+1) utils in the red envelope.
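Eris’s procedure is easy to simulate. Here is a minimal sketch in Python (the function name `eris_envelopes` is mine, purely for illustration):

```python
import random

def eris_envelopes(p_tails=2/3):
    """Sample Eris's procedure once: toss a biased coin until it falls
    heads, then toss a fair coin to decide which envelope gets more."""
    n = 1
    while random.random() < p_tails:  # falls tails with probability 2/3
        n += 1
    # n is now the number of tosses up to and including the first heads
    if random.random() < 0.5:         # fair coin falls heads
        red, blue = 2**n, 2**(n + 1)
    else:                             # fair coin falls tails
        red, blue = 2**(n + 1), 2**n
    return red, blue
```

Every sample yields two powers of two, the larger exactly double the smaller, with the smaller at least 2.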

Let’s assume she now gives you the red envelope and you somehow (presumably with the help of another God – this part sort of defeats Eris’s purposes) get to see what’s in it. It turns out there are only two cases worth considering.

Case 1: You see 2 utils. Since 2 = 2^1 is the smallest amount Eris ever places in an envelope, you know that there are 4 utils in the other envelope, so the expected gain from switching is 2.

Case 2: You see 4 utils. In this case you know that there are either 2 utils in the other envelope or 8. The prior probability (i.e. before Eris tosses any coins, which we can assume is when she stops telling you what happens) that the red envelope would end up with 4 and the blue envelope with 2 is 1/6 (n = 1, then tails on the fair coin). The prior probability that the red envelope would end up with 4 and the blue envelope with 8 is 1/9 (n = 2, then heads on the fair coin). So by a quick application of Bayes’s theorem, the posterior probability that the blue envelope has 2 is 0.6, and that it has 8 is 0.4. So the expected value of the blue envelope is 2 × 0.6 + 8 × 0.4 = 4.4, and the expected gain from swapping is 0.4.

What about all the other cases, you ask? Well, I won’t try doing the algebra in HTML, but it isn’t too hard to prove that for any x larger than 2 that you see, the expected gain from swapping is x/10. So whatever you see, you should not only prefer to swap, you should be prepared to pay at least 0.4 to swap. And note that we’ve only used standard (countably additive) probability functions and standard reasoning.
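The Bayes calculation above generalises mechanically, and exact rational arithmetic avoids any rounding quibbles. A sketch, assuming the setup above (the function name `gain_from_switching` is mine):

```python
from fractions import Fraction

def gain_from_switching(x):
    """Expected gain from swapping when the red envelope is seen to hold
    x = 2^k utils (k >= 1), given P(n = m) = (2/3)^(m-1) * (1/3) and a
    fair coin to pick which envelope gets the larger amount."""
    k = x.bit_length() - 1                     # x == 2**k
    if k == 1:
        return Fraction(2)                     # blue must hold 4: gain is 2
    p_n = lambda m: Fraction(2, 3)**(m - 1) * Fraction(1, 3)
    # red holds 2^k either because n = k-1 and the fair coin fell tails
    # (blue holds 2^(k-1)), or because n = k and it fell heads
    # (blue holds 2^(k+1))
    p_small = p_n(k - 1) * Fraction(1, 2)
    p_big = p_n(k) * Fraction(1, 2)
    post_small = p_small / (p_small + p_big)   # posterior that blue is smaller
    post_big = 1 - post_small
    expected_blue = post_small * 2**(k - 1) + post_big * 2**(k + 1)
    return expected_blue - x
```

For instance, `gain_from_switching(4)` comes out to 2/5, i.e. 0.4, and for every x > 2 the result is exactly x/10.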

By the way, although I’ll leave the proof for another day, I’m pretty sure that the expected utility of receiving one of the envelopes does not have to be infinite for a two-envelope-paradox-style situation to arise, i.e. a situation where you will want to swap whatever you see in your envelope, whichever envelope you are given. (No one I know of has ever said that it has to be, but it’s easy to misread David Chalmers as saying that it must be.) In the case here, the expected value of Eris’s gift is infinite. (Well, until she starts getting you to pay to switch envelopes.) But I’m pretty sure that by mixing positive and negative payouts in the distribution it’s possible to produce a paradoxical distribution on which each envelope has a strictly undefined expected utility.

## 5 Replies to “How Surprising is the Two Envelope Paradox”

1. Matt Weiner says:

BTW, my question was whether or not non-standard analysis could help us define a distribution in which every rational had an equal chance—whether there might be some infinitesimal value that could be given to each rational so that it sums to 1. I think not, probably.

2. Bill Carone says:

“But we can get the result that whatever you see, you should switch.”

I don’t see that result coming out of this argument.

If n goes from 1 to N, then the answer is “If you see anything except N, switch; if you see N, don’t switch,” right?

So, let N go to infinity, and the answer converges to “Switch if you get anything except the highest possible value, otherwise don’t.” It doesn’t converge to “Switch always,” right? In other words, as N goes to infinity, the number of cases where you should switch always remains equal to 1.

You can’t just start with N=infinity; you need to start with finite N and take the limit.

You should do a similar thing with the expectation of Eris’s original deal; start summing over n=1 to N, then take the limit as N goes to infinity. This limit doesn’t exist (it diverges), so you shouldn’t take the limit; the fact that surprising things happen when you do shouldn’t surprise you 🙂

Is there something wrong with the above? I’ve always been taught that to deal with infinities you start with finite quantities and carefully take the limit. If the limit doesn’t exist, you shouldn’t use it.
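[Bill’s suggested truncation is easy to carry out. A sketch in Python, using the distribution from the post above (the function name is illustrative only): the partial sums of the expectation grow without bound, which is his point that the limit diverges.]

```python
from fractions import Fraction

def truncated_expectation(N):
    """E[red envelope] summing only over n = 1..N, ignoring the tail mass."""
    total = Fraction(0)
    for n in range(1, N + 1):
        p_n = Fraction(2, 3)**(n - 1) * Fraction(1, 3)
        # the fair coin puts 2^n or 2^(n+1) in red with equal chance
        total += p_n * Fraction(2**n + 2**(n + 1), 2)
    return total
```

Each term works out to (4/3)^(n-1), so the partial sums are 3((4/3)^N − 1), which diverge as N increases.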

3. Mark says:

I’m afraid this is a fallacy. As N goes to infinity, the number which is 1 if N is finite and 0 otherwise remains at 1, and so that is its limit. We obviously cannot conclude that for infinite N, N is finite.

4. Bill Carone says:

“I’m afraid this is a fallacy. As N goes to infinity, the number which is 1 if N is finite and 0 otherwise remains at 1, and so that is its limit. We obviously cannot conclude that for infinite N, N is finite.”

This is the problem I’m having. N is never infinite. You can’t jump to N=infinity (if nothing else, N is a real number and infinity isn’t). All you can do is take a limit, right?

Your example: I would not say for infinite N, N is finite, since I don’t know what it means to say “for infinite N.” All I know how to say is “As N increases indefinitely, N is always finite” which is true, no?

Perhaps you could point me to a good source on this kind of issue? I follow Gauss:

“I protest against the use of infinite magnitude as something accomplished, which is never permissible in mathematics. Infinity is merely a figure of speech, the true meaning being a limit.”

and Jaynes:

“Apply the ordinary processes of arithmetic and analysis only to expressions with a finite number n of terms. Then after the calculation is done, observe how the resulting finite expressions behave as the parameter n increases indefinitely.”

5. Bill Carone says:

“the number which is 1 if N is finite and 0 otherwise”

I also have a little problem with this function, since N is a real number and therefore never infinite.

Do you have a similar example without this problem?

BTW, thanks for the response; I am not a particularly good mathematician, so thanks for educating me 🙂
