I realised earlier this week that an example of Jamie Dreier’s might be related to my objections to Adam Elga’s indifference principle, and my continuing war against indifference in all its manifestations. So let me tell you Jamie’s example, get your opinion on it, and then tell you my variant on it. (I’m not sure how similar my example will look to Jamie’s, but it really is pretty much the same case, so I don’t want to claim any credit for it. On the other hand, blame for misappropriation is appropriate.)
The world consists of infinitely many people (or at least morally salient agents) who live forever without moving. The people are arranged in a lattice-like pattern – relative to a suitable co-ordinate system the people are set up so that a point (x, y, z) is occupied iff x, y and z are integers. (We assume our ‘people’ are point-sized!) There is a small sphere, currently of radius < 1, centred at (0,0,0). Over time, the radius of this sphere will grow at some constant rate. (It won’t matter what the rate is, but feel free to pick a value for its expansion if it helps you visualise the example.)
A less than benevolent god gives us a choice about what will happen to the people in the world throughout the rest of time. If we pick option A, call it pain in, then at every point in time those inside the sphere will be in pain, and those outside the sphere will be in pleasure, or at least happy. If we pick option B, call it pain out, then at every point in time those inside the sphere will be happy, and those outside will be in pain. Which should we choose?
Jamie points out some odd features of the case. If we poll the people in that world, we’ll find overwhelming support for option B. Every person in the world should prefer that we pick B, because it means they will be in pain for a finite amount of time, then happy for an infinite duration, rather than the reverse. (Some agents far from the centre with a high discount rate might dissent, but they are irrational dolts, so let’s ignore them.) On the other hand, if we come back to look at the world at any time after choosing B, we’ll see many more people in pain as a result of our decision than in pleasure. And while the sphere with the pained people in it is expanding, it isn’t really clear that we are buying future happiness at the price of present suffering. At every point in the future of the world, there will be more suffering than pleasure.
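To make that last point vivid, here is a small sketch (mine, not Jamie’s) that counts how many people are happy at a given moment under option B. I assume, purely for illustration, an expansion rate of one unit of radius per year; the post leaves the rate open.

```python
# A minimal sketch of why, under option B, only finitely many people are happy at
# any given time: the happy people are exactly the lattice points strictly inside
# the sphere of radius r(t), and that count is finite no matter how large t gets,
# while infinitely many lattice points remain outside (and so in pain).

from itertools import product

def happy_count_option_b(radius: float) -> int:
    """Count integer lattice points (x, y, z) with x^2 + y^2 + z^2 < radius^2."""
    # Every coordinate of a point strictly inside the sphere is at most int(radius)
    # in absolute value, so a finite search suffices.
    r = int(radius)
    coords = range(-r, r + 1)
    return sum(1 for x, y, z in product(coords, repeat=3)
               if x * x + y * y + z * z < radius * radius)

# Hypothetical expansion rate: one unit of radius per year.
for years in (1, 5, 10, 50):
    print(years, happy_count_option_b(radius=years))

# The printed counts grow roughly like (4/3) * pi * r^3, but they are finite at
# every time, whereas the set of people outside the sphere is always infinite.
```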
So, which will it be, option B or option A?
Now for the application to Elga, which surprisingly enough doesn’t turn on your answer there. Dr Evil creates infinitely many (a countable infinity of) duplicates of Alex, and tweaks their biology a little so that (a) each of them will live forever and not age, and will have the same experiences as all the others, and (b) every fifty years they will forget everything they have experienced in the last fifty years and be reverted to the epistemic state of the person who right now is being told by Dr Evil that she will live forever and, by the way, has countably many similarly situated Sisyphusians.
These immortals are epistemically alike, but they differ from each other in two small respects. First, they each have a ‘serial number’ written on their chest in noumenal ink (so they can’t get any evidence about what it says). This number records how many Alexes existed before this Alex was made. So the original Alex is 0, the first duplicate is 1, the second is 2, and so on. (Since there are a countable infinity of them, Evil could have created them in a linear order.) Secondly, on their back they have a ‘version number’ which records how many times their memories have been erased. (I just wrote that this too was in noumenal ink, but I think I used the wrong pen, so let me say it again: the version number is also written in noumenal ink.) So they are born/created with 0 on their back, and this is replaced with 1 after fifty years, 2 after a hundred, and so on ad infinitum.
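Here is a toy snapshot (mine, not part of the setup) of how the serial number S on the chest and the version number V on the back relate for a single duplicate as her wipe count ticks up. Everything in it follows from the description above; the particular serial number 3 is just a hypothetical example.

```python
# A toy illustration of how a single duplicate's chest number S compares with her
# back number V as her fifty-year cycles pass. Nothing here goes beyond the setup;
# the choice of serial number 3 is purely hypothetical.

def relation(serial: int, version: int) -> str:
    """Classify a duplicate by how her serial number S compares with her version number V."""
    if serial > version:
        return "S > V"
    if serial == version:
        return "S = V"
    return "S < V"

serial = 3  # a hypothetical duplicate: three Alexes were made before her
for version in range(7):  # her first seven fifty-year cycles
    print(f"cycle {version}: {relation(serial, version)}")

# She satisfies S > V only during her first three cycles (150 years), S = V for one
# cycle, and S < V forever after. By contrast, for any fixed version number v, all
# but finitely many serial numbers exceed v. That finite/infinite contrast is why
# this case feels like the expanding-sphere example above.
```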
Each of these souls is told about their numbers, S and V, when they are created or have their memories wiped. And each of them occasionally wonders whether she is such that S > V, or S = V, or S < V. As best I can tell, Adam’s theory says nothing at all about what they should think about this question. Not even, surprisingly enough, that Pr(S = V) = 0, which you might expect.
The point is that all he says is that any two hypotheses S = s & V = v and S = s′ & V = v′ should receive the same credence. And ‘same credence’ here is meant in the strong sense that the conditional probability of a particular one of these being true, given that one or other of them is, is 1/2, not just in the weak sense that the probability of each equals zero (although that would be problem enough). One issue with this, which I may have noted once or twice before, is that it commits Adam to denying countable additivity. For Pr(S = s) = 0 for all s, but Pr(∃x S = x) = 1, which is impossible if Pr(∃x S = x) is just equal to Pr(S = 0) + Pr(S = 1) + Pr(S = 2) + …, which countable additivity says it is. Now countable additivity is not a golden calf; many smart people have rejected it. Indeed, many smart people have rejected it for just the reason that Adam is implicitly adopting. Intuitively, they say, it is possible to have an even distribution of credences over a countably infinite set, and if countable additivity is inconsistent with that (which it is), then so much the worse for countable additivity. But with countable additivity gone, so go a lot of other things we might have expected. One of them is the ability to get from the premises we have (that any two hypotheses about the values of S and V are strongly equiprobable) to conclusions like Pr(S = V) = 0.
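To make the clash with countable additivity explicit, here is the sum from the previous paragraph written out as a display; nothing new is assumed.

```latex
% The hypotheses S = 0, S = 1, S = 2, ... are pairwise incompatible and jointly
% exhaustive, so countable additivity would give
\[
1 \;=\; \Pr(\exists x\, S = x)
  \;=\; \Pr\Bigl(\,\bigcup_{s=0}^{\infty} \{S = s\}\Bigr)
  \;=\; \sum_{s=0}^{\infty} \Pr(S = s)
  \;=\; \sum_{s=0}^{\infty} 0
  \;=\; 0,
\]
% a contradiction. So assigning each hypothesis S = s probability 0 while the
% disjunction gets probability 1 is coherent only once countable additivity is dropped.
```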
So Adam’s theory doesn’t compel Alex to take any particular attitude towards S > V. But he does seem to assume that a particular Alex should assign some number or other to the probability that, for her, S > V. And therein lies the problem. For the theory does imply the following constraints.
- "x Pr(S > V | V = x) = 1
- "y Pr(S > V | S = y) = 0
So Alex is in the following awkward position. If her credence in S > V is less than 1, then by the first constraint she knows that there is a partition of the possibility space (specifically, {<V = n>: n ∈ N}, where I use angle brackets to represent de se propositions and N for the set of natural numbers) such that conditional on every member of that partition, her credence in S > V is higher than her actual credence. That’s often taken to be a bad thing. But perhaps all it means is that Pr(S > V) = 1. Perhaps. On the other hand, if her credence in S > V is greater than 0, then by the second constraint there is a partition of the possibility space (specifically {<S = n>: n ∈ N}) such that conditional on every member of that partition, her credence in S > V is lower than her actual credence. That is not good. And since any numerical credence is either less than 1 or greater than 0, at least one of these problems is unavoidable.
So I think as long as Alex follows Adam’s advice, she is in trouble, and that’s bad news for Adam’s theory.
Some people would try and spell out the particular kind of trouble that Alex is in by using some kind of Dutch Book argument, or in some other way get an epistemological conclusion from decision theory. But that would be mistaken thrice over.
First, those ‘pragmatic’ arguments generally aren’t very good, although I won’t go into the reasons why here. (Quick summary of the reasons, if you care. There’s no obvious connection between stupid actions and stupid doxastic states. The pragmatic arguments for various conclusions within probability theory all presuppose some particular theory about how that connection is supposed to hold. But these presuppositions are often (a) false, (b) not things that people who don’t believe the arguments’ conclusions would want to believe, or (c) both. And unsound question-begging arguments aren’t worth the electrons they’re reflected from.)
Secondly, we already know from the two-envelope problem that once infinities come into play, the kind of position Alex finds herself in with respect to bets on S > V (there being some bets she prefers even though she knows that once she learns which member of a partition is actual she will no longer prefer them) is a position that anyone could be in. What is distinctive about Alex is that we don’t need to bring in decision-theoretic considerations to embarrass her. There’s a distinction between Alex’s problem (if she takes Adam’s advice) and the problem facing someone in a two-envelope paradox, and bringing decision theory in blurs that distinction.
Thirdly, Alex knows that she won’t find out which member of the partition is true, for to do that she would need to have different evidence from her twins, and as the problem clearly states, she has the same evidence as all the others, so really that’s not very likely. This matters, because the kind of argument that would be used here to show that Alex’s position is embarrassing would get her to buy or sell a bet on S > V, and then, depending on what she did, release to her the information about the value of S or V, and get her to reverse the transaction she just made, with interest. Since Alex knows this can’t happen, I don’t see how her decision-making apparatus is noticeably faulty.
None of this is to say that Alex gets off the hook. I think the awkward position I laid out really is awkward, and it’s a bug not a feature of Adam’s indifference principle that it leads his advisees there. But I can’t conclusively demonstrate that using arguments from decision theory.
So what should Alex do? Well, I think she should assign a non-numerical credence to S > V, one that is neither less than 1 nor greater than 0. Could Adam say the same thing? Possibly, though there’s a worry once he opens the door to this kind of move. Often the situations that he thinks call for the application of an indifference principle are, I think, situations that call for non-numerical probabilities. I think some of the cases he brings up in his paper are like this, for example. That’s a post for another day, but the quick version of that post will be that if Adam adopts non-numerical probabilities to get out of the Alex problem, he wins (or at least doesn’t lose) the battle, but he loses the war. But it’s late and I need to work tomorrow, so I’ll leave that for now.