A Puzzle for Counterfactual Decision Theory

The following feels like a purely technical problem for standard versions of causal decision theory, and it should yield to a purely technical solution, but that solution wasn’t immediately obvious to me, so I’m posting the puzzle here.

Let A > B be the counterfactual, if it were the case that A, it would be the case that B. Following Gibbard and Harper, who in turn follow Stalnaker, we’ll assume a logic for > that respects strong centring (i.e. A & B entails A > B) and conditional excluded middle (i.e. A > (B v C) entails (A > B) v (A > C)). In that case we can say that the expected value of an action A is the sum across possible consequences C of the following function:

Pr(A > C) U(A & C)

where Pr is the probability function and U is the utility function. We’ll need to add on some bells and whistles for dealing with cases where there is a continuum of possible consequences, but the basic approach seems to deal soundly with most cases. But there’s at least one case where there is a bug.
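
As an illustration, here is a minimal sketch of this rule in Python for a finite set of consequences (the function and argument names are mine, not anything from the decision-theory literature):

```python
# A minimal sketch of the Gibbard-Harper rule for a finite consequence
# partition: EV(A) = sum over consequences C of Pr(A > C) * U(A & C).
def expected_value(pr_cf, utility):
    """pr_cf[c] = Pr(A > C) and utility[c] = U(A & C) for each consequence c."""
    return sum(pr_cf[c] * utility[c] for c in pr_cf)

# A two-consequence toy case: 0.3 * 10 + 0.7 * (-1)
print(round(expected_value({'win': 0.3, 'lose': 0.7},
                           {'win': 10, 'lose': -1}), 2))   # 2.3
```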

For any proposition p, let @p be the proposition that says p is actually true.

Let A be the act of my taking a particular bet such that all the following are true for some particular p.

Pr(p) = Pr(@p) = 0.1
Pr(A > p) = 0.9
U(A & (p & ~@p)) = 100
U(A & ~(p & ~@p)) = -1
U(~A) = 0

What should I do? A is a bet that pays off 100 iff p & ~@p is true, and -1 otherwise. Since I know a priori that p & ~@p isn’t true, I know a priori that A will lose 1. So I should do ~A. But Pr(A > (p & ~@p)) is quite high, at least 0.8 given the numbers listed: since @p is rigid, Pr(A > ~@p) = Pr(~@p) = 0.9, and (A > p) together with (A > ~@p) entails A > (p & ~@p), so its probability is at least 0.9 + 0.9 - 1 = 0.8. So even though I know a priori that A is a losing bet, it seems I would be better off were I to take the bet.
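
To make the arithmetic concrete, here is that calculation as a short Python sketch (my own reconstruction of the bound, not anything from the post):

```python
# Reconstructing the bound on Pr(A > (p & ~@p)) and the formula's verdict.
pr_A_then_p = 0.9          # Pr(A > p), given above
pr_A_then_not_at_p = 0.9   # Pr(A > ~@p) = Pr(~@p), since @p is rigid under A
# (A > p) & (A > ~@p) entails A > (p & ~@p), so:
pr_bound = pr_A_then_p + pr_A_then_not_at_p - 1   # at least 0.8

ev_take = pr_bound * 100 + (1 - pr_bound) * (-1)
print(round(ev_take, 2))   # 79.8 by the formula, versus 0 for declining,
                           # even though A is a sure loser
```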

As far as I can see, this provides no reason whatsoever to take the bet. There is one rational thing to do here, and it is to decline the bet. There should be a version of causal/counterfactual decision theory that implies that, but I can’t quite see how it should go.

14 Replies to “A Puzzle for Counterfactual Decision Theory”

  1. Whatever else one believes, it is true at every world w that w is actual. So the proposition expressed by ‘p is true and it is not the case that p is actually true’ ought to be false at every world. And since ‘[]’ is equivalent to a wide-scope quantifier in this case, (1) should come out true.

    1. []~(p & ~@p)

    And since (1) is true, the rational choice is ~A. But then it seems that (on standard semantics for counterfactuals) the probability that A > (p & ~@p) should be zero. But of course it will be insisted that it is not zero. Here’s how to solve the problem. Simply replace the terms employing indexicals or quasi-indexicals with names. For instance, instead of referring to our world with ‘the actual world’, refer to it by ‘Sam’ (and, if you like, other worlds by Sam0, Sam0.1, and so on). Then you will know exactly what proposition you’re betting on.

  2. A related case: There will be a lottery involving 100 balls. We introduce ‘Winner’ as a name for the ball that will (actually) win, and offer you a bet that pays 10 if Winner wins, -10 if Winner doesn’t. Obviously you should take the bet. But there are 100 possible outcomes (ball n is drawn) with 0.01 probability each. And one knows that for 99 of these values of n, if ball n had won, then Winner would not have won. So the counterfactual test above gives the bet an expected utility of 0.01*10 + 0.99*-10 = -9.8.
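
    To make the counterfactual test explicit, a short Python sketch (arbitrarily labelling ball 0 as the actual winner):

    ```python
    # 'Winner' rigidly names the actual winner, so under the supposition
    # 'ball n wins' the bet pays only when n is that very ball.
    actual_winner = 0   # arbitrary label for the ball that actually wins
    ev = sum(0.01 * (10 if n == actual_winner else -10) for n in range(100))
    print(round(ev, 2))   # -9.8, though taking the bet is obviously right
    ```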

    I take it that the moral is that for terms involving relevant rigidification and/or occurrences of ‘actual’, either one shouldn’t appeal to counterfactuals here, or, if one does, one shouldn’t give them the standard semantics. There are nearby quasi-counterfactuals that seem to give the right results (e.g. there’s a reasonably natural non-epistemic reading of “If ball n were to win, then Winner would win” on which it’s correct for all n). I imagine there are various ways to do the semantics, but one way might be to keep the standard closeness relation on worlds while invoking 2D evaluation of sentences in those worlds.

  3. Mike, I don’t think your (1) is true. If p is false, then had p been true, things would have been different from the way they actually are. That is, it would have been the case that p is true although actually it is false. That is, p & ~@p would have been true in that world. So there’s a possible world (any one in which p is true) in which p & ~@p is true.

    Dave, I think that’s right that there are 2D ways out of this. If we have a ‘de-rigidifying’ operator, then we’re home free. My impression was that not every anti-2Der believed in such operators. (But I could be wrong about this – you certainly know much more about the pro- and anti-2D forces than I do!) It would be surprising if decision theory forced us to posit such operators.

  4. Sure — the 2D way is one way to do the semantics, but probably not the only way. Your original case could be handled by having the semantics invoke some sort of ‘actually’-ignoring operator, which I take it one could endorse without endorsing full-blown two-dimensionalism. (At most this would get one to the equivalent of the logical-form-based two-dimensionalism of Davies and Humberstone, which is pretty innocuous.) Whether that move could be extended to other cases involving rigid designators (e.g. ‘Winner’) depends on whether those expressions can be understood as involving an ‘actual’ in their logical form, which is pretty controversial. If they can’t be, then one either needs a stronger sort of two-dimensionalism, or one needs some quite different treatment, such as an analysis of the conditionals in more strongly epistemological terms.

  5. Brian, that’s hard to follow. Let me see. We assume that p is false. Then we consider the counterfactual that had p been true, things would have been different (from the way they actually are). True, I agree with all of this. Then you say,

    “That is, it would have been the case that p is true although actually it is false.”

    And I agree with that, too. But then,

    “That is, p & ~@p would have been true in the world.”

    This is what I don’t see, since in that counterfactual world that we are considering (say, for simplicity, the closest world, w1) it is true that p. Had p been true, we know that w1 would have been actual. So it would be true in w1 that p. But since w1 is actual, it is also true there that @p. And so there isn’t a world in which it is true that p & ~@p. Or else, I don’t yet see how.

  6. “Had p been true, we know that w1 would have been actual.”
    No, that’s not how indexicals work. Compare:

    (1) Tomorrow, May 11th will be today.
    (2) Where Dave is, Canberra is here.
    (3) If Brian asserted this sentence, then I would have red hair.

    These are false (asserted by me in Providence, on May 10th), and your counterfactual is false analogously.

  7. I agree with Jamie about the claims about indexicals.

    On Dave’s post, those of us who are 2-boxers about Newcomb think we really had better not analyse the conditionals in CDT in epistemic terms. Perhaps there is a way to understand the conditionals epistemically in a way that doesn’t lead to 1-boxing.

  8. Well, consider Lewis and Plantinga on the same point. For Lewis “at any world W, the name ‘the actual world’ names W; the predicate ‘is actual’ designates or is true of W and whatever exists in W; the operator ‘actually’ is true of propositions true at W…” (Anselm and Actuality, 185, my emphasis). Plantinga pretty much endorses this view (NN 49ff.). Brian says that there are worlds in which p & ~@p is true. Let w1 be such a world. It follows that in w1 the proposition p is true and the operator ‘actually’ is not true of the proposition p. That seems false. Rather, ‘actually’ is true of the propositions true at w1.

  9. Mike, you are confusing sentences with propositions. Let p be some false proposition. The sentence, “p & ~@p”, is false in whatever context it is uttered, but the proposition it expresses is true at any world at which p.
    Again, compare the indexical sentence (1) above, which is false, with the following true one:
    (1*) Tomorrow, an utterance of ‘May 11th is today’ will be true.
    And similarly for (2) and (3) (left as an exercise).

  10. Cute puzzle. It might help wrap one’s mind around it if one gave the abstract structure some intuitive content, so let me try a story, and see if this fits what Brian had in mind.

    The Newcomb predictor has lost her predictive powers, and given away most of her money to those annoying one-boxers, but she still has her opaque box, a hundred and one dollars that she is determined to keep, and a penchant for decision-theoretic conundrums. Hoping that you are a confused causal decision theorist, she offers you the opportunity to play the following game, for a modest fee of one dollar: 90% of the time she puts her $101 in the box if, but only if, you first choose to play, and 10% of the time she puts the money in the box if and only if you first choose NOT to play. You get to keep the money in the box if, but only if, the following two conditions are met: (1) you choose to play, and (2) there is more money in the box than there actually is. No risk there, or chance of gain for you, so naturally, you choose not to play. But it might seem that the causal decision theorist recommends that you play. For given that you know that you will in fact choose not to play, you know that it is probably true (to degree 0.9) that if you had chosen to play, there would have been more money in the box than there actually is, and under those conditions, you would get to keep the money, had you chosen to play. It seems that if one calculates expected utility by the counterfactual-causal-decision theorist’s formula (weighted average of the alternative utilities, weighted by the probabilities of the counterfactuals) then one gets (at least for one way of partitioning the alternatives) a higher value for playing (approximately 90) than for not playing (0), given that you do not, in fact, play.

    The numbers are exactly as in Brian’s abstract description: A is the action of choosing to play, p is the proposition that the money is in the box. The statement, “there is more money in the box than there actually is” is equivalent, in the circumstances, to the statement “p & ~@p”. Your credence in A is zero, in p is 0.1, and in (A > p) is 0.9.
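
    Making the arithmetic of the story explicit (a sketch with my own variable names):

    ```python
    # Expected utility of playing, by the counterfactual formula, given that
    # you in fact decline: with credence 0.9 the predictor fills the box iff
    # you play, so 'more money than there actually is' would have obtained.
    pr_cf = 0.9                # Pr(A > (p & ~@p)), given that you decline
    u_win, u_lose = 100, -1    # keep $101 minus the $1 fee, or lose the fee
    print(round(pr_cf * u_win + (1 - pr_cf) * u_lose, 2))   # 89.9, roughly 90
    ```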

    One might dispute that the credence in what is expressed by ‘@p’ is also 0.1. It is (one-dimensional) propositions that get credences and utilities, and in the 2D semantics, there are different propositions determined by the 2-dimensional intension of the sentence. On one way of interpreting the 2D semantics, ‘@p’ expresses the impossible proposition 90% of the time, and the necessarily true proposition 10% of the time, so it either has credence 1 or credence 0—you don’t know which. But if we are going to talk about the credences of what is expressed by sentences of the 2D language, we had better use what Frank Jackson calls the A-intension, or the diagonalization of the 2D matrix for the sentence, and this seems to be what Brian assumes, since this yields the conclusion that the proposition expressed by ‘@p’ is the same as that expressed by ‘p’, and so gets a credence of 0.1. But the difference between ‘@p’ and ‘p’ comes out when those sentences are embedded in complex counterfactual constructions. So applying the standard 2D semantic rules, the diagonal intension of the sentence ‘A > (p & ~@p)’ will be a proposition that is true if and only if A and p are both false (you don’t play, and there is no money in the box), but there would have been money in the box if you had played. This is a proposition that gets a credence of 0.9. The (diagonal) proposition expressed by ‘A & p & ~@p’ will be the impossible proposition, by the standard calculation, and so gets credence 0, but it is its utility, not its probability, that matters, and (one may argue) it gets, as Brian says, a utility of 100, since it describes the circumstances in which Ms. Newcomb gives you the money.
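
    Here is a toy model of that diagonal calculation (the two-disposition setup and all names are mine; a sketch of the standard 2D rules, not anything from the comment itself):

    ```python
    # A world fixes the predictor's disposition: 'track' (fill the box iff
    # you play, credence 0.9) or 'anti' (fill it iff you decline, credence 0.1).
    def money(disposition, act):
        """p: the money is in the box."""
        return act == 'play' if disposition == 'track' else act == 'decline'

    def diag_A_then_p_and_not_at_p(disposition):
        """Diagonal value of 'A > (p & ~@p)' at the world where you decline:
        read '@p' at the world of evaluation, and 'p' at the closest
        world where you play (same disposition)."""
        at_p = money(disposition, 'decline')   # @p: money actually in the box
        cf_p = money(disposition, 'play')      # p at the closest A-world
        return cf_p and not at_p

    credence = (0.9 * diag_A_then_p_and_not_at_p('track')
                + 0.1 * diag_A_then_p_and_not_at_p('anti'))
    print(credence)   # 0.9, as in the text
    ```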

    But I think one should question whether it makes sense to ascribe that utility to that proposition. It is (one-dimensional) propositions, or “events”, in the statistician’s jargon, that get probabilities and utilities, and the theory can’t coherently assign conflicting utility values to the same proposition. But consider the sentence ‘A & ~p & @p’. This is true just in case you choose to play, and there is LESS money in the box than there actually is. In these (unrealizable) circumstances, Ms. Newcomb gives you no money, so the utility (it seems) should be -1. But this is the same proposition.

    The technical fix that Brian is looking for, I think, is something like this: The theory should start with probability and utility values for propositions—subsets of the state space, or set of possible worlds. Then the expected utility rule should be stated for any partition of the state space (X1, X2, …, Xn) and action A: eu(A) = P(A > X1) × u(A & X1) + P(A > X2) × u(A & X2) + … + P(A > Xn) × u(A & Xn). One might use a 2D language, with sentences with 2D intensions, to describe the space, but the way you determine the utility to assign to an interpreted sentence of the 2D language is first to determine its A-intension, or diagonal proposition, and then to apply the expected utility formula to the propositions expressed. One is free to assign utilities to impossible propositions, or to include impossible propositions in your partition—that won’t have any effect, since if A & X is impossible, P(A > X) will equal zero. The source of the original puzzle (I think) was that the 2D semantics allowed for the same sentence (‘p & ~@p’) to determine different propositions in different contexts (the impossible proposition, in the context of the conjunction ‘A & p & ~@p’, and a contingent proposition in the modal context ‘A > (p & ~@p)’).
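
    A sketch of how that fix plays out on Brian’s numbers (hypothetical names; {p, ~p} is one natural partition):

    ```python
    # The repaired rule: probabilities and utilities attach to (one-dimensional)
    # propositions X1..Xn forming a partition of the state space.
    def eu(terms):
        """terms: list of (Pr(A > Xi), U(A & Xi)) pairs over a partition."""
        return sum(p * u for p, u in terms)

    # Diagonally, 'p & ~@p' is the impossible proposition, so the bet never
    # pays: U(A & p) = U(A & ~p) = -1 over the partition {p, ~p}.
    print(round(eu([(0.9, -1), (0.1, -1)]), 2))   # -1.0: decline, as desired
    ```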

  11. Or to make the point with ‘here’, which Lewis regards as being just like ‘actually’, note that if it is not raining in Ithaca (where I am) but raining in Seattle, I can say

    If I were in Seattle it would be raining where I was but it’s not raining here.

    That’s even though I can never say “It’s raining where I am even though it’s not raining here”. The same goes for ‘actually’. Let RainWorld be a world where it rains in Ithaca on May 10th.

    If I were in RainWorld it would be raining, but it’s not actually raining.

    That is to say, the proposition actually expressed by “It is raining, but not actually raining” is true in RainWorld. (Though of course if the inhabitants of RainWorld uttered those words, they would say something false.)

  12. I should just mention that my last post was a follow up to Jamie’s, not to Bob’s. Bob’s response seems to me like it works as a solution to the puzzle, or at least a sketch that will work once we fill in the details.

  13. What seems closer to the truth is (as in the Stalnaker post) that the same sentence is being used to express different propositions: an impossible proposition and a contingent one. I was defending the claim that (1) (way up there now) is true.

    1. []~(p & ~@p)

    But (1) is true (of course) for only one of the propositions that the sentence (depending on context) expresses.

  14. Mike:

    On the claim that ‘[]~(p & ~@p)’ is true. As I compute it, this will be true if and only if p is true. The diagonal intension of the sentence will be a contingent proposition that is necessarily equivalent to p. It is right that the A-intension of the unmodalized ‘~(p & ~@p)’ is necessary (whether or not p is true), but this does not imply that ‘[]~(p & ~@p)’ is true.
