August 26th, 2004

Two-Envelopes and Variables

Eric Schwitzgebel and Josh Dever have a paper on the two-envelope paradox arguing that the paradox arises because of faulty reasoning involving variables. They note that if we impose a constraint on which variables can be used in decision-theoretic reasoning, the paradoxical reasoning is blocked. I won’t repeat the formal version of the constraint (from page 4 of the paper) in HTML. But the effect is that X is only a legitimate variable if “the expected value of X is the same conditional on each event in the partition.” The problem is then that the paradoxical reasoning essentially involves appeal to a variable that does not satisfy this constraint.

As an aside, this kind of response is not uncommon in the two-envelope literature, so it’s worth taking seriously. And Schwitzgebel and Dever’s version of the response is by far the most careful and plausible I have seen. (It’s probably also the earliest such version, given their note in the paper that they discussed this with people in Berkeley in 1993. Given the history of the two-envelope discussion, where so much happens online, this kind of fact seems quite relevant to priority, if priority matters at all here.) But the response still seems flawed.

Here’s the reason. It’s true that their constraint blocks the paradoxical reasoning. But getting a constraint with that property is dead easy. Just say that any decision-theoretic reasoning is invalid and you’ll do that. The hard part is finding a constraint that knocks out the two-envelope reasoning, but not any reasoning that we want, both intuitively and on reflection, to preserve. And I think Schwitzgebel and Dever’s constraint fails that test.

Consider the following example. God partitions the reals in [0, 1] into two unmeasurable sets, S1 and S2. He picks a real at random from [0, 1]. If it’s in S1, He puts $10 into a red envelope; if it’s in S2, He puts $20 into that red envelope. He then rolls two fair and independent dice. If they land double-six, He puts an amount into a blue envelope equal to the amount in the red envelope plus $5. Otherwise, He puts an amount into that blue envelope equal to $5 less than the amount in the red envelope. Got it? (It’s easier with tables, but tables are hard in blogs.)

You are not told which number He picked, or how the dice landed, but you are told all of the above. You are then given a choice of the red or blue envelopes. How should you choose?

I take it that it’s obvious you should pick the red envelope. After all, whatever is in it, you have a 35/36 chance of getting $5 less with blue, and only a 1/36 chance of getting more. So I say, pick red.
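The dominance reasoning here can be checked mechanically. A minimal sketch in Python (the function name and the explicit $10/$20 values are just illustrative; the argument goes through for any fixed red amount):

```python
from itertools import product

def blue_given_red(red, d1, d2):
    """Amount in the blue envelope, given the red amount and the two dice."""
    return red + 5 if (d1, d2) == (6, 6) else red - 5

# For any fixed amount in the red envelope, count how often blue beats red
# over the 36 equiprobable outcomes of two fair dice.
for red in (10, 20):
    rolls = list(product(range(1, 7), repeat=2))
    blue_wins = sum(blue_given_red(red, d1, d2) > red for d1, d2 in rolls)
    assert blue_wins == 1                 # only double-six favours blue
    assert len(rolls) - blue_wins == 35   # the other 35/36 favour red
```

Whatever the red amount is, blue is $5 worse in 35 of 36 cases; that dominance is the whole argument, and it never needs a defined expectation.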

But Schwitzgebel and Dever can’t say that. For they say the above reasoning violates their constraints on which variables can be used. (Or, more precisely, that any formalised version of the above reasoning would do so.) As near as I can tell, the reasoning I just made is just as bad, by their lights, as the paradoxical two-envelope reasoning.

As I see it, they are now under an obligation. For it seems obvious that red is better than blue, so they should tell us what principle they do endorse that gets that conclusion. It can’t just be the principle “Always maximise expected utility”, since in this case neither picking red nor picking blue has a defined expected utility. And, although this might just be a failure of imagination on my part, I can’t see what else it might be.

While I’m in this combative mood, I should also note that this example casts some doubt on any attempt to resolve the two-envelope paradox by appeal to expected utility reasoning. For the two-envelope paradox rests on principles that are plausible in cases like this one, even when expected utility reasoning fails. I’ll be polite/lazy enough to not quote anyone who actually does try and solve the problem that way.

Posted by Brian Weatherson in Workbench


This entry was posted on Thursday, August 26th, 2004 at 10:41 am and is filed under Workbench.

12 Responses to “Two-Envelopes and Variables”

  1. Eric Schwitzgebel says:

    That’s a clever example, but it seems to me — I can’t speak for Josh — to be cheating a bit to appeal to undefined probabilities.
    If the probabilities were defined, then the relevant calculation would satisfy our constraint — i.e., the expected value of the blue envelope is (35/36)(X-5) + (1/36)(X+5), where X is the amount in the red envelope. This satisfies our constraint since the expectation of X is assumed to be independent of the outcome of the dice roll.
    Now, there is an easy way to define the probabilities and thus escape your difficulty: Use subjective decision theory. (Isn’t that what good Bayesians are supposed to do, anyway?) Regardless of the unmeasurability of the reals, my priors in your example would be governed by indifference, so my subjective probability of $10 in the red envelope equals my subjective probability of $20 in that envelope — i.e., both are .5. There are well-known difficulties with the principle of indifference of course, but that’s a general problem in assigning priors and nothing particular to Josh’s and my proposal.
    But maybe your problem is with subjective decision theory in general?
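For concreteness, Eric’s calculation can be verified with exact arithmetic (the flat .5/.5 prior below is the illustrative indifference assignment from his comment, not part of the original example):

```python
from fractions import Fraction

def e_blue_given_red(x):
    """Eric's formula: E[blue | red = x] = (35/36)(x - 5) + (1/36)(x + 5)."""
    return Fraction(35, 36) * (x - 5) + Fraction(1, 36) * (x + 5)

# The formula simplifies to x - 85/18, i.e. about $4.72 less than red.
assert e_blue_given_red(10) == 10 - Fraction(85, 18)
assert e_blue_given_red(20) == 20 - Fraction(85, 18)

# With the indifference prior (probability 1/2 each on $10 and $20),
# both expectations are defined and red comes out ahead.
e_red = Fraction(1, 2) * 10 + Fraction(1, 2) * 20
e_blue = Fraction(1, 2) * e_blue_given_red(10) + Fraction(1, 2) * e_blue_given_red(20)
assert e_red == 15
assert e_blue == e_red - Fraction(85, 18)
```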

  2. Brian Weatherson says:

    Wow – quick response!

    I don’t have a problem with subjective decision theory. I do have a problem, however, with indifference governing anything. And I think it’s particularly problematic here. (Warning: the following could contain serious set-theoretic errors, but I think it’s all OK.)

    Expand the example, so Zeus, who happens to be sitting around, partitions [0, 1] into three unmeasurable sets, T1, T2, and T3. He then asks us what probability we assign to God’s chosen number being in T1. I say “no answer”, because I follow measure theory. But someone using indifference should presumably say 1/3.

    Now Zeus notices something odd. “Oh look, it seems my T1 just is S1. Now you think the probability of God’s number being in that set is both 1/2 and 1/3.” And he laughs a very Zeus-ian laugh.

    At this point the defender of indifference has a few moves available, but I don’t think any of them are attractive. They could say that Zeus’s presence changes the probabilities, but I don’t see why that should be the case. After all, the partition Zeus mentions always existed. They could ignore Zeus’s tripartite partition, but I don’t see why it is less relevant to the probabilities than God’s bipartite partition. Or they could do something else I guess, but I can’t see what.

    Some days I’m a good Bayesian and think that if someone wants to arbitrarily apply indifference in one case, they may be undermotivated and on the verge of inconsistency, but as long as they’re careful they are OK. Other days I think that all applications of indifference, even consistent ones, are irrational. Yet other days (like today) I think that in cases like this the objective chances are (a) known and (b) non-numerical so by the Principal Principle (or something like it) the subjective credences should also be non-numerical. But that’s a controversial view.

    What should be less controversial is that a subject could have completely undefined subjective probabilities about the number appearing in S1 or S2. But still it seems such an agent should prefer red to blue.

  3. Eric Schwitzgebel says:

    Yeah, that’s the problem with indifference. But I needn’t use indifference to determine my priors. I might, for example, think God tends to be a little stingy, but also is a fair enough sport to give me a fighting chance and so settle on (say) p = .72 for $10 and p = .28 for $20 in the red box.
    The true blue Bayesian (or one kind of true blue Bayesian) will say it doesn’t matter how I come up with my priors, and I’ll have priors for everything, so your problem is no problem. But I accept that that view leaves something to be desired. Can’t we just have undefined subjective probabilities for some things?
    If we do have undefined priors, the whole apparatus collapses, including Josh’s and my suggested constraint.
    At the same time, I think there is a way to salvage the spirit of our constraint even with undefined priors. In your example, the outcome of the dice roll is independent of the outcome of God’s number choice. Consequently, it seems there’s a sense in which we would not “expect” the X in the (35/36)(X-5) part of the equation to be any different from the X in the (1/36)(X+5) part of the equation, even if there is no expectation of X in the formal sense.
    I can’t think now of how to put the matter formally, but it would be nice if we could broaden our constraint to include your sort of case. My hunch is that we could, without thereby admitting the two envelope case.

  4. John Quiggin says:

    When you say “unmeasurable”, I assume you mean “not Lebesgue measurable”. But I don’t think there’s any reason to confine ourselves to Lebesgue measure. For any set S1, and any p, there is a minimal algebra including S1, and an associated measure m such that m(S1)=p. This is consistent with a Bayesian prior as suggested by Eric.

    As Eric implies, we can also set up a multiple priors model and impose a version of the sure-thing principle, yielding the conclusion that we should pick the red envelope.

  5. Bill Carone says:

    “God partitions the reals in [0, 1] into two unmeasurable sets, S1 and S2.”

    You seem to use this type of argument a lot; the “and then an infinity happens” argument. In your Dr. Evil paper, it is practically your only form of argument, and you use it for each version of the two-envelope problem you have produced. It seems to me that your methods can produce paradoxes in any mathematics, not just decision theory.

    You can always mass produce paradoxes by

    1) Starting with an infinite set, without specifying a limiting process that produced it, and
    2) Asking a question that depends on how you take the limit.

    If you restate your problems as limits of finite sets, the paradoxes disappear, right?

    So as long as you confine yourself to finite sets and well-behaved limits of finite sets, there is never any problem? Same for the Zeus example?

    For example, instead of partitioning the set of reals between 0 and 1, how about go through the following sequence of finite sets:

    {0,1}
    {0,0.5,1}
    {0,0.25,0.5,0.75,1}
    {0,0.125,0.25,0.375,0.5, …}

    Why would you think that any results you get when you start with the infinite set [0,1] should be so radically different from what happens in the limit here? And in the limit, none of your paradoxes manifest themselves, right?

    You might reply that there are all sorts of cases where f(n) doesn’t equal the limit of f(x) as x approaches n; they are called discontinuities. However, f(infinity) is defined as the limit of f(x) as x goes to infinity; there are no discontinuities at infinity.

    Now different limiting processes might give different answers, or the limits may not exist, and that is a little mysterious; but when dealing with God (which is the only way this problem can occur, right?) you would expect a little mystery that human mathematics might have difficulty with.

    For example, God produces two infinite weights, labelled A and B. How many A weights will it take to balance one B weight on a scale? Our mathematics can only pose this problem as the limit of B/A as A and B go to infinity, which simply doesn’t exist. Therefore, our mathematics can’t help us solve this problem, even though if God actually did it, we could find the answer; it is a mystery, but not a contradiction or paradox.

    So, whenever you are dealing with infinite sets and come up with a paradox, restate everything in terms of finite sets, then see how they behave in the limit.

    The question then comes up: what are some problems where this way of looking at infinity cannot possibly work? In other words, can you give an example where, by looking at “infinity” only as shorthand for “increasing without bound,” we get an obviously wrong answer, or fail to find an obviously correct answer?

    For example, we pick a natural number at random; what is the probability it is even?

    Obvious answer = 1/2.

    However, we may not be able to find this obvious answer, as one way of producing the natural numbers is by the following sequence of finite sets.

    {1,2,4}
    {1,2,4,3,6,8}
    {1,2,4,3,6,8,5,10,12}

    (and in general adding the next odd number x along with 2x and 2x+2)

    and the answer in the limit is 2/3, not 1/2. Now there are all sorts of arguments as to why this isn’t the right way to take this limit, but the only times I’m really comfortable giving an answer in infinity-type cases is when all limiting procedures give the same answer. In some problems, there might be obvious limiting procedures, and this might be one such case. However, I’m not smart enough to determine what is obvious and what isn’t.
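Bill’s construction is easy to simulate (the function name is mine). Note that each step adds one odd number and two even numbers, so the frequency of evens under this ordering is exactly 2/3 at every stage and hence 2/3 in the limit; either way, the moral that the reordering changes the limit away from 1/2 stands:

```python
from fractions import Fraction

def reordered_naturals(steps):
    """Bill's ordering: at each step append the next odd number x together
    with the even numbers 2x and 2x + 2."""
    seq = []
    for x in range(1, 2 * steps, 2):  # odd numbers 1, 3, 5, ...
        seq.extend([x, 2 * x, 2 * x + 2])
    return seq

# One odd and two evens per step, so the frequency of evens is always 2/3.
for steps in (1, 10, 1000):
    seq = reordered_naturals(steps)
    evens = sum(1 for n in seq if n % 2 == 0)
    assert Fraction(evens, len(seq)) == Fraction(2, 3)
```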

  6. Akel Raoman says:

    True that setting up the problem with finite sets gets rid of the paradoxical nature, but it also detracts from the realism of the possible situation. In reality there are possibilities of non-finite situations, and if you are able to at least think and reason you could at least figure out a way out of them.

  7. Bill Carone says:

    Alex,

    “True that setting up the problem with finite sets gets rid of the paradoxical nature, but it also detracts from the realism of the possible situation.”

    Perhaps you can help my thinking and reasoning ability by showing me a practical example where modelling an infinite set as a well-behaved limit of finite sets either fails to give the obviously right answer or gives an obviously wrong answer? I haven’t found one, and I’ve been looking for years.

  8. Bill Carone says:

    “Expand the example, so Zeus, who happens to be sitting around, partitions [0, 1] into three unmeasurable sets, T1 T2 and T3. He then asks us what probability we assign to God’s chosen number being in T1. I say No answer because I follow measure theory. But someone using indifference should presumably say 1/3.”

    Someone who believes in indifference but doesn’t believe in using probability theory directly on infinite sets would also say “No answer.” One thing measure theory does is warn us when directly working on infinite sets may not work well (e.g. unmeasurable sets, sets of measure zero, etc.).

    Note that you cannot set up the same kind of paradox of indifference without the infinite set; it turns into something like the following:

    “A number from 1 to 10 must either be 1 or not-1, so you must assign a 50% chance to both.”

    which is clearly fallacious.

  9. Bill Carone says:

    Akel (and apologies for misreading your name earlier),

    “In reality there are possibilities of non-finite situations, and if you are able to at least think and reason you could at least figure out a way out of them.”

    Three things:

    1) In material reality there are no infinities, right? We might model something as infinitely big or infinitely small, but that is our model, not reality. Or am I off base? Can you give a counterexample?

    Now true, we often model finite situations using infinities, when it is easier (e.g. integrals are often easier to solve than sums). And almost all the time it works just fine. But whenever we see contradictions, we have to step back and see if the contradiction is only due to our misuse of infinities.

    If you bring in God and an immaterial reality, then ignore the above; God can create immaterial infinities all he wants, I suspect. But in that realm, I don’t think human mathematics will work all the time.

    2) I gave an example where our mathematics cannot solve an infinite problem (God creates two infinite weights), even though, presumably, there is an answer. So I argue that, even though we can think and reason, we cannot always solve problems to do with infinities. I believe we can only solve problems about infinities by modelling them as limits. For example, if God told us that, although infinite, the two weights were exactly the same weight, then we could answer the question (since the limit of N/N when N goes to infinity exists and equals 1).

    3) If you use non-standard analysis, or hyperreals, to model infinite quantities, then I believe you get the same answers as when you use limits, since hyperreals avoid the silliness of N=N+1 when N is infinite. All of Brian’s paradoxes disappear when you model them with hyperreals or limits.

    So, when you model infinities correctly, the paradoxes disappear. These are not decision-theoretic paradoxes, they are misuse-of-infinity paradoxes.

  10. Brian Weatherson says:

    What’s the sense in which there are no infinities in nature? All I was using was the idea that some variable can take a precise real value, and there can be a fact of the matter about which of two unmeasurable sets it is in. Now maybe the world is so quantised that every variable takes discrete rather than continuous values, but I don’t see why we should take that for granted in decision theory. And maybe we should give up so much of set theory that we don’t have unmeasurable sets, but I don’t really see what advantage there would be to that either. Moreover, since we’re dealing with uncountable sets here, I don’t even see how to begin thinking of the probability of the value being in an unmeasurable set being the limit of some kind of process.

    In short, if the world contains infinite precision, or even if we should take that possibility seriously in decision making, we have to face problems to do with infinite sets all the time. (After all, the probability that the weather will be between 30 and 40 degrees is an uncountable infinite set of possibilities, and that’s the kind of thing we want to reason about all the time.) I think in these cases using anything like indifference reasoning is unjustified and often inconsistent, so I don’t use it.

  11. Bill Carone says:

    “In short, if the world contains infinite precision, or even if we should take that possibility seriously in decision making, we have to face problems to do with infinite sets all the time.”

    I guess the best way I can express my difficulty is this:

    In your arguments, “arbitrarily precise” doesn’t necessarily limit to “infinitely precise.” In other words, why is it that, as my precision gets better and better without limit, I don’t approach your “infinite precision” answer? In fact, there is a radical difference between our two answers; yours produces a contradiction and mine doesn’t. Similarly with “arbitrarily large” and “infinitely large”.

    In the mathematical analysis I know, f(infinity) is defined as the limit of f(x) as x goes to infinity. So there cannot be a distinction between “arbitrarily large” and “infinitely large,” or “arbitrarily precise” and “infinitely precise.” But you seem not only to think you can make the distinction, but that it is important.

    “(After all, the probability that the weather will be between 30 and 40 degrees is an uncountable infinite set of possibilities, and that’s the kind of thing we want to reason about all the time.)”

    Let me use an example similar to the one in your Dr. Evil paper to show a simple way to deal with your uncountably large set using limits, and getting the obviously correct answer.

    I flip a coin, and I have no reason to believe that either side is more likely than the other. I argue, due to some form of indifference, that I should assign 50% probability to both sides.

    You provide the fact that it is between 30 and 40 degrees outside, and argue, “You don’t have just two possibilities, heads and tails. You have uncountably many: heads and temperature t, and tails and temperature t, where t goes continuously from 30 to 40. You must therefore assign probability 1/infinity or zero to each possibility.” This, I believe, is the exact argument from your Dr. Evil paper.

    I disagree with your plugging in infinity rather than treating it as a limit. My way to do this is to split the 30 to 40 degree interval into N intervals. Now, the probability of the coin landing heads and the temperature being in the nth interval is simply 1/(2N). Summing over the N intervals, the probability of heads is just 1/2. Now take N to infinity, and the answer remains 1/2.

    So here I have dealt with an uncountably infinite set using limits. An integral over a continuous area is only a limit of discrete sums.
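The interval argument above is simple enough to write down exactly (the helper name is mine; n is the number of equal temperature intervals):

```python
from fractions import Fraction

def p_heads_with_intervals(n):
    """Split the 30-40 degree range into n equal intervals and sum
    P(heads and temperature in the k-th interval) = 1/(2n) over all k."""
    return sum(Fraction(1, 2 * n) for _ in range(n))

# The sum is exactly 1/2 for every n, so the limit as n grows is 1/2 too.
for n in (1, 10, 10_000):
    assert p_heads_with_intervals(n) == Fraction(1, 2)
```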

    “I think in these cases using anything like indifference reasoning is unjustified and often inconsistent, so I don’t use it.”

    Maximum entropy is one justification of indifference reasoning, and physicists in statistical mechanics have been using it to make correct predictions for a century.

    And no, I don’t want to throw out measure theory or set theory or anything like that, any more than I would throw out my wood lathe because it can’t shape titanium. Measure theory is a great tool; it makes impossibly difficult calculations smooth and easy. Most of the time, infinite sets work fine and give answers quickly and easily without limits. But once your tool starts producing paradoxes, you need to go back to the basics (that is, all together now, finite sets and well-behaved limits of finite sets, where the limit is taken at the very end of the calculation).

    I also want to ask again for a clear, practical counterexample where limits either don’t produce the obviously correct answer, or produce an obviously incorrect answer.

  12. jen white says:

    I am impressed with how intelligent you all are for having analyzed something to death. I am an artist and yes, I am right brained, so keep that in mind…- now my head hurts after having read this. (so if the tylenol is in the red envelope and the advil is in the blue one, i should pick…just kidding…you guys must have a sense of humor right?…