Yet another post inspired by conversations with John Hawthorne. I think in this case some of his comments were in turn inspired by conversations with Tim Williamson. (It should be obvious where that comes in.) I also had several discussions at Bellingham about this, most productively with Daniel Nolan and Eleanor Mason. And Fred Feldman’s paper (and Liz Harman’s comments on it) were also helpful. After all that, maybe one or two of the ideas are mine. It’s all in a long dialogue because I don’t want to go on record endorsing any one of these positions. But it should be clear where my sympathies lie. A lot of the discussion turns on cases discussed by Frank Jackson in his 1991 paper, “Decision-Theoretic Consequentialism and the Nearest and Dearest Objection.”
AGNES: I believe that a rational agent should always perform that action that will maximise her actual returns, as measured by her own utility function.
EDITH: But the agent can’t always know what action that is.
AGNES: True, but in those cases she should always aim to be maximising utility. Sometimes she won’t know what she should be doing, but as Malcolm Fraser said, life wasn’t meant to be easy.
EDITH: Ah, even that seems a little strong. Consider the following, relatively realistic, case. I walk into a casino in Vegas, and I see that the line on the Packers-Patriots game is Packers +8.5 points. I have $55 in my pocket and the minimum bet is $55. I have three choices. I can (1) bet the $55 on the Patriots to cover the spread, winning $100 if they do so. Or I can (2) bet the $55 on the Packers with the points, winning $100 if they win or hold the margin under 9 points. Or I can (3) leave the money in my pocket. Now I know very little about football, but I know the market has taken into account most salient information, so I think the probability of the Patriots covering the spread is pretty close to 0.5. So the only option that doesn’t have negative expected utility is (3). But doing that means I clearly won’t maximise actual utility, since clearly either (1) or (2), but I don’t know which, is the utility maximising option.
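(Edith’s arithmetic can be checked with a quick sketch. Here the $100 is read as the total returned on a winning $55 stake, a standard -110 line, and the 0.5 probability is the one she assumes; both readings are assumptions for illustration, not part of the dialogue.)

```python
# Expected-value check for Edith's three options, assuming the $100
# is the total return on a winning $55 bet and P(Patriots cover) = 0.5.
p_patriots_cover = 0.5

def expected_value(stake, total_return, p_win):
    """Net expected dollars from placing the bet."""
    return p_win * (total_return - stake) + (1 - p_win) * (-stake)

ev_bet_patriots = expected_value(55, 100, p_patriots_cover)      # option (1)
ev_bet_packers  = expected_value(55, 100, 1 - p_patriots_cover)  # option (2)
ev_keep_money   = 0.0                                            # option (3)

# Each bet risks $55 to win $45 of profit, so both have
# EV = 0.5 * 45 - 0.5 * 55 = -5, while keeping the money has EV 0.
# So (3) uniquely maximises expected utility, even though one of
# (1) or (2) is guaranteed to maximise actual utility.
print(ev_bet_patriots, ev_bet_packers, ev_keep_money)  # -5.0 -5.0 0.0
```

The vigorish built into the -110 line is exactly what makes both bets negative in expectation while leaving their sum short of the stake.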
AGNES: Hmmm, this is tricky. I’m not sure what to say about these cases.
(Enter AOIFE)
AOIFE: Why should you do (3)? Why shouldn’t you do the one of (1) or (2) that will lead to the $100 win?
EDITH: Well, you can’t know which one that is. And a prudential principle must be action-guiding, just like an ethical principle must be.
AOIFE: I’m not sure what you mean by action-guiding. If you mean it must always deliver a knowable answer, then that’s absurd because there will always be cases of ties. If you mean that whenever X is the right thing to do (according to the principle) you can know this, that’s also absurd because it means only luminous properties can be action-guiding. And there are hardly any luminous properties, except for necessarily instantiated ones, while there are lots of principles of correct action.
EDITH: What about the principle that you should maximise expected utility? That seems action-guiding.
AOIFE: But it isn’t luminous. Look, we can in principle restate your football example the same way. Do you agree that you don’t always know what your subjective credences are?
EDITH: That seems reasonable. My margin-of-error on introspection is pretty enormous.
AOIFE: Good. Well now just rerun the football example. Assume you know that one of (1) or (2) maximises expected utility, but you don’t know which, and you know that (3) is much closer (in expected utility terms) to the more valuable one than the less valuable one. Wouldn’t your action-guiding intuition say do (3)? But your expected utility principle says do either (1) or (2).
EDITH: Hmm, this is tricky. Could I go to expected expected utility?
AOIFE: Only if you want me to repeat the example for expected expected utility.
EDITH: Yes, I see where this is going. Maybe I should just bite the bullet and say in your example the agent doesn’t know what the prudentially right thing to do is.
AOIFE (laughing slightly): Ah, but now you’ve played into my trap. Once you’ve bitten that bullet, what argument do you have against the person who says we should say the same thing about your football example?
(While EDITH thinks over that one, enter EIRTAÉ.)
EIRTAÉ: Here are two asymmetries between the cases that may block that move. First, there’s a temporal asymmetry. Intuitively in the football case it isn’t determined at the time you place the bet whether (1) or (2) maximises actual utility. But even if it is unknown what maximises expected utility in Aoife’s case, there is a fact of the matter. Second,…
AOIFE: Wait there. We’ll come back to the second point. I want to hear more about this. Surely you can’t be saying that the reason to do things that don’t maximise actual utility is always that the utility isn’t determined. There are football-type cases where we are betting on past events. The expected utility theorist can’t use your theory then.
EIRTAÉ: True, but she doesn’t have to, for there are two other points to make. First, she’s looking for a systematic theory that respects the intuition about indeterminacy, and the actual utility principle doesn’t do that. Second, this is just a defensive move. She’s just trying to show why the intuition that (3) is the right thing to do in the football case can be justified. If that intuition rested solely on an implausible commitment to luminous principles, that intuition could not be justified. But here is an independent justification of that intuition.
AOIFE: What if we are in a deterministic world?
EIRTAÉ: That makes it tougher. One option is to say that the right principle should be world-independent, and there are indeterministic worlds. But I see you’re unhappy with both premises there. Perhaps in a deterministic world maximising actual utility is the way to go. Fortunately, that’s not our world.
AOIFE: That doesn’t seem like a very stable resting point. What’s the other point you were going to make?
EIRTAÉ: The idea that principles should be action-guiding can be valuable even if we can’t give a reductive definition of that concept. It is a fact, a contingent fact but a fact, that there are lots of cases like the football case, where it is easy to see what maximises expected utility and hard to see what maximises actual utility. On the other hand, cases where it is hard to see what maximises expected utility, even after a bit of reflection, are much harder to find. Note that you didn’t provide a real-life illustration of a matching problem, just a sketch of how one would look. That suggests that in the practical, fallible sense we’re interested in, the expected utility principle is action-guiding. On the other hand, walk into any casino in Vegas, or any betting shop anywhere for that matter, and the maximise actual utility principle will not give you very useful guidance at all.
AOIFE: Look, this has all been very interesting, but I just landed all 13 winners from the day’s football, so I’m going to go and celebrate my winnings. With the $40,000 I won, the party should be fun. I started with $55 you know, which I notice is exactly how much you have in your wallet. Somehow I suspect the party you have won’t be as much fun as mine.
EIRTAÉ: That doesn’t seem to be an argument, but it does seem strangely compelling. Where’s my bookmaker?
(All exit.)