There are about a million other things I should be doing right now, so it's probably time to say something more about Dr. Evil. I knew deep down that one of the reasons I disliked approaches to probability based on principles of indifference was that they threatened to collapse the important distinction between risk and uncertainty. What I hadn't realised, until very recently, was that Adam's argument for his indifference principle involves just such a collapse at one point.
First, some background. To my mind, what should have been a very important discovery in early 20th century work on probability was that there is a distinction between risk and uncertainty. Here's how Keynes introduces the concept of uncertainty in a 1937 article ("The General Theory of Employment", Quarterly Journal of Economics).
By uncertain
knowledge, let me explain, I do not mean merely to distinguish what is known
for certain from what is only probable. The game of roulette is not subject, in
this sense, to uncertainty; nor is the prospect of a Victory bond being drawn.
Or, again, the expectation of life is only slightly uncertain. Even the weather
is only moderately uncertain. The sense in which I am using the term is that in
which the prospect of a European war is uncertain, or the price of copper and
the rate of interest twenty years hence, or the obsolescence of a new
invention, or the position of private wealth owners in the social system in
1970. About these matters there is no scientific basis on which to form any
calculable probability whatever. We simply do not know. Nevertheless, the
necessity for action and decision compels us as practical men to do our best to
overlook this awkward fact and to behave exactly as we should if we had behind
us a good Benthamite calculation of a series of
prospective advantages and disadvantages, each multiplied by its appropriate
probability, waiting to be summed.
I think this is
all incredibly important, and any theory that ignores the distinction between
what is probable and what is genuinely uncertain is mistaken. Decisions based
on what is probable or improbable are grounded at least in well-understood principles about risk; decisions grounded in what is genuinely uncertain are not. And I'm inclined to think that any theory that says an agent's attitude to some uncertain propositions can be expressed by a single probability function does ignore the distinction. This is especially true for theories that say this about ideal agents.
This is hardly an original thought. It was the basis of Keynes's theory of probability, outlined in his dissertation of 1909, which eventually became the Treatise on Probability of 1921. Keynes took the probability of an uncertain proposition to be a non-numerical value (probability, for him, was just rational credence). Ramsey criticised this on the grounds that probability values are meant to enter into computations (according to the theory we can add and multiply them, for example), and we don't know how to add and multiply non-numerical values. In my dissertation, I proposed that the theory on which the credal states of a rational agent are represented by a set of probability functions, rather than by a single probability function, could capture all of Keynes's insights without being vulnerable to Ramsey's objection. This is not a new theory; it has been discussed by Isaac Levi ("Ignorance, Probability and Rational Choice", 1982), Richard Jeffrey ("Bayesianism with a Human Face", 1983), Bas van Fraassen ("Figures in a Probability Landscape", 1990) and, most extensively, by Peter Walley (Statistical Reasoning with Imprecise Probabilities, 1991). In Walley's case some connection is drawn to Keynes's work, so I still don't want to make any dramatic claims to originality.
We can draw a connection between Keynes's theory and these newer theories by identifying the probability of a proposition p with a function from members of S, the set of probability functions that represents the credal state of an ideal agent, to [0, 1], where the value of the function at each P in S is P(p). For most purposes we can simplify this by saying the probability of p is the range of that function. Then p has a numerical probability in Keynes's sense iff its probability is a singleton, and it is uncertain otherwise. Arguably the range of the function should always be an interval (well, I argue for this at any rate), and if so we can say p is more uncertain the larger that interval is. This gives us a concept of comparative uncertainty, and with that we can say that everything Keynes says in the above quote is true.
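For readers who like to see this concretely, here is a minimal sketch of the set-of-probability-functions picture in Python. All the names here (prob, credal_range, the dict representation of a probability function) are my own devices for illustration, not anything from the literature just cited.

```python
def prob(P, p):
    """P is a probability function, represented as a dict from worlds
    to non-negative weights summing to 1. p is a proposition,
    represented as the set of worlds where it is true."""
    return sum(weight for world, weight in P.items() if world in p)

def credal_range(S, p):
    """The range of values P(p) takes as P runs through the credal
    set S, summarised by its endpoints."""
    values = [prob(P, p) for P in S]
    return (min(values), max(values))
```

On this representation, p has a numerical probability (is merely risky) when credal_range returns a degenerate interval, and is uncertain otherwise, with wider intervals meaning more uncertainty.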
Now one of the surprising things about interpreting Keynes's term "uncertainty" this way is that a proposition can become more uncertain as we acquire more evidence about it. Keynes seemed to think this was impossible, but here I think he was just mistaken about the behaviour of some of his own concepts. (We all make mistakes.) Here's a case where just that happens. (As it turns out, it's a case I've written about: see my "Keynes, Uncertainty and Interest Rates", Cambridge Journal of Economics, 2000.)
I'm watching a roulette game going on, and in particular paying close attention to one player, called Kim. It's a crowded room, so I can't see the roulette wheel, or the board where bets are placed, but I can see the croupier, and I can see Kim. I see Kim place a bet on either red or black (I can see that much from where she's leaning over the table) but I can't tell which. And I have no evidence that tells me one way or the other. I know from prior observation that this is a fair roulette wheel. And I can see that the croupier is about to spin the wheel. Now consider the following propositions. (For simplicity we'll assume it's a roulette wheel with no green slots – this makes the example rather unrealistic, but simplifies the computations no end without having any major philosophical costs.)
kr = Kim bet on red
kb = Kim bet on black
br = The ball lands on red
bb = The ball lands on black
h = Kim is happy in a few seconds
At this stage, I
think I can assign numerical probabilities in the following cases:
1. P(h | kr ∧ br) = 1
2. P(h | kr ∧ bb) = 0
3. P(h | kb ∧ bb) = 1
4. P(h | kb ∧ br) = 0
5. P(br | kr) = ½
6. P(bb | kr) = ½
7. P(br | kb) = ½
8. P(bb | kb) = ½
Also note {kr, kb} and {br, bb} are partitions, and my credences reflect that (e.g. P(kr ∨ kb) = 1).
What I can't do is assign a numerical probability to kr or to kb; they are just uncertain. Perhaps they're not so uncertain that their probability is [0, 1] – that's what happens when a proposition is completely uncertain – but they are uncertain to a degree.
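To make the case concrete in the sketch above, we can build the credal set explicitly. The endpoints here are hypothetical: nothing in the case fixes exactly how uncertain kr is, so I've arbitrarily supposed that P(kr) ranges over [0.2, 0.8] across the credal set.

```python
def make_P(x):
    # A world is a pair (Kim's bet, ball colour). Constraints 1-4 are
    # built in by treating h as true exactly at ('r','r') and ('b','b');
    # constraints 5-8 hold because the ball colour is independent of
    # the bet, with probability 1/2 each way.
    return {('r', 'r'): x / 2, ('r', 'b'): x / 2,
            ('b', 'r'): (1 - x) / 2, ('b', 'b'): (1 - x) / 2}

# Hypothetical credal set: P(kr) = x for x on a grid over [0.2, 0.8].
S = [make_P(x / 100) for x in range(20, 81)]

kr = {('r', 'r'), ('r', 'b')}   # Kim bet on red
br = {('r', 'r'), ('b', 'r')}   # the ball lands on red
h  = {('r', 'r'), ('b', 'b')}   # Kim is happy

print(credal_range(S, br))   # (0.5, 0.5): br is merely risky
print(credal_range(S, kr))   # (0.2, 0.8): kr is uncertain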
Now I wait a few seconds, and see that when the wheel stops, Kim is happy. So I update my credences accordingly. What should my new credences be? Some may suggest that my credences in br and bb should be unchanged, because I have no new evidence that is relevant to their assessment. But this must be false. For if it were true, I could do the following computations (11 and 12 are background; the new assumptions come in at 13 and 14).
11. P(br) = ½ (from 5 and 7)
12. P(bb) = ½ (from 6 and 8)
13. P(br | h) = ½ (by assumption)
14. P(bb | h) = ½ (by assumption)
15. P(kr | h) = P(kr ∧ br | h) (by 2)
16. P(kr ∧ br | h) = P(br | h) (by 4)
17. P(kr | h) = ½ (by 13, 15 and 16)
18. P(kb | h) = ½ (by reasoning identical to the last three lines)
19. P(br | ¬h) = ½ (since by 11 and 13, br and h are independent)
20. P(bb | ¬h) = ½ (since by 12 and 14, bb and h are independent)
21. P(kr | ¬h) = ½ (by reasoning equivalent to 15-17, with just the relevant appeals changed)
22. P(kr) = ½ (by 17 and 21)
And 22 is just what we said we couldn't conclude, because we weren't in a position to assign numerical probabilities to kr and kb. So the simple assumption that we shouldn't change our credences in br and bb when we learn h must have been mistaken. What should happen is that after learning h, br and bb should go from being not at all uncertain to being rather uncertain, in fact exactly as uncertain as kr and kb were (and I guess still are).
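The same machinery shows this numerically. Conditioning each member of the hypothetical credal set from the sketch above on h spreads P(br) from the point value ½ across the whole interval that P(kr) occupied, a phenomenon known in the imprecise probability literature as dilation.

```python
def condition(P, e):
    """Bayes-update the probability function P on evidence e."""
    pe = prob(P, e)
    return {w: (wt / pe if w in e else 0.0) for w, wt in P.items()}

# Update every member of the credal set on h and look at br again.
# For each P in S, P(h) = 1/2, so the update is always well defined.
S_h = [condition(P, h) for P in S]
print(credal_range(S_h, br))   # (0.2, 0.8): after learning h, br is
                               # exactly as uncertain as kr was
```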
This is contentious, but I think that the same thing is going on in Adam's main argument. (I.e., it's contentious that it's the same thing.) Here are the main examples again.
TOSS&DUPLICATION: After Al goes to sleep, researchers toss a coin that has a 10% chance of landing heads. Then (regardless of the toss outcome) they duplicate Al. The next morning, Al and the duplicate awaken in subjectively indistinguishable states.
Adam wants to argue that in this case when Al wakes up his credence in HEADS should be 1/10. A crucial premise in the argument for this is that P(HEADS | HeadsAl ∨ TailsDup) is also 1/10 (TailsDup is the proposition that he's the duplicate and the coin landed tails – you can figure out the rest of the code from that). And he argues for that as follows.
COMA: As in TOSS&DUPLICATION, the experimenters toss a coin and duplicate Al. But the following morning, the experimenters ensure that only one person wakes up: if the coin lands heads, they allow Al to wake up (and put the duplicate into a coma); if the coin lands tails, they allow the duplicate to wake up (and put Al into a coma).
Suppose that in the COMA case, Al gets lucky: the coin lands heads, and so the experimenters allow him to awaken. Upon awakening, Al is immediately in a position to assert "Either I am Al and the coin landed heads, or else I am the duplicate and the coin landed tails." So when Al wakes up in the COMA case, he has just the same evidence about the coin toss as he would have if he had been awakened in TOSS&DUPLICATION and then been told [HeadsAl or TailsDup]. So to defend (3), to show that in the latter case Al's credence in HEADS ought to be 10%, it is enough to show that when Al wakes up in the COMA case, his credence in HEADS ought to be 10%. Let me argue for that claim now.
Before Al was put to sleep, he was
sure that the chance of the coin landing heads was 10%,
and his credence in HEADS should have accorded with this chance: it too should
have been 10%. When he wakes up, his epistemic situation with respect to the
coin is just the same as it was before he went to sleep. He has neither gained
nor lost information relevant to the toss outcome. So his degree of belief in
HEADS should continue to accord with the chance of HEADS at the time of the
toss. In other words, his degree of belief in HEADS should continue to be 10%.
Adam considers an objection that Al's memories should give him evidence that he's Al, and hence, given HeadsAl or TailsDup, he should have a very high credence in HEADS. He responds as follows:
That's all wrong. TRUST YOUR MEMORIES, AL makes the same mistake that TRUST YOUR MEMORIES, O'LEARY does. While it is true that in the absence of defeating auxiliary beliefs, one ought to trust one's memories, when Al wakes up he does have defeating auxiliary beliefs. He is sure that, whatever the outcome of the coin toss, someone was to wake up in just the subjective state he is currently in. As far as the outcome of the coin toss goes, the total evidence Al has when he wakes up warrants exactly the same opinions as the total evidence he had when he went to sleep.
This is what I think is wrong. Adam is concerned to reject the line of reasoning that memories provide evidence, because he thinks that they're really only q-memories and they don't count for very much. But this ignores a crucial point, I think. Al doesn't know whether his memories are real memories or mere q-memories. But Adam thinks that Al can assign a very precise credence to their being real: in this case exactly 1/10. I don't think this is true, and I think the only way you'd come to infer it is by more or less presupposing an indifference principle.
I'd put the dialectic as follows. Al has some memories. These are actually conclusive evidence that HEADS, though of course Al doesn't know this. In fact he has no idea whatsoever what the evidential force of those memories is. But that doesn't mean he should act as if they have no evidential value at all – if he does, he's drawing a substantive conclusion (that q-memories have no evidential value) from a premise that is essentially worthless (that he has no idea how much evidential worth they have). (Substantive and, we might as well note, false.) He should act like he has no idea how valuable the evidence is, just as in the casino case I should act like I have no idea what the evidential force of h is. In that case I go from regarding br as risky to regarding it as uncertain. I think Al's attitude towards HEADS should be the same in COMA. And if it is, the argument for the indifference principle in the Dr. Evil paper fails.
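To see how this might play out in the imprecise framework, here is a final sketch, reusing condition and credal_range from above. The parameter a is my own device: it stands for Al's credence that he is Al (i.e. that his memories are real), which on my view should be imprecise rather than fixed at ½ by an indifference principle.

```python
def make_Al(a):
    # A world is (toss outcome, who I am); the coin has a 10% chance
    # of landing heads. The parameter a = P(I am Al) encodes the
    # unknown evidential force of Al's (q-)memories.
    return {('H', 'Al'): 0.1 * a, ('H', 'Dup'): 0.1 * (1 - a),
            ('T', 'Al'): 0.9 * a, ('T', 'Dup'): 0.9 * (1 - a)}

S_al = [make_Al(a / 100) for a in range(1, 100)]   # a imprecise over (0, 1)
heads    = {('H', 'Al'), ('H', 'Dup')}
evidence = {('H', 'Al'), ('T', 'Dup')}             # HeadsAl or TailsDup

S_al_updated = [condition(P, evidence) for P in S_al]
print(credal_range(S_al_updated, heads))
# roughly (0.001, 0.917). Only the indifference value a = 1/2 delivers
# Adam's precise 1/10; if a is imprecise, HEADS dilates from risky to
# uncertain after conditioning, just as br did in the roulette case.
```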