May 30th, 2011

Cian Dorr on Imprecise Credences

In the latest Philosophical Perspectives, Cian Dorr has a very interesting paper on a puzzle about what he calls the Eternal Coin. I hope to write more about the particular puzzle in future posts, but I wanted to mention one thing that comes up in passing about imprecise probabilities. In the course of rejecting a solution to one puzzle in terms of imprecise probabilities, he says

My main worries about this response are worries about the unsharp credence framework itself. In my view, there is no adequate account of the way unsharp credences should be manifested in decision-making. As Adam Elga has recently compellingly argued, the only viable strategies which would allow for someone with an unsharp credential state to maintain a reasonable pattern of behavioural dispositions over time involve, in effect, choosing a particular member of the representor as the one that will guide their actions. (The choice might be made at the outset, or might be made by means of a gradual process of narrowing down over time; the upshot is much the same.) And even though crude behaviourism must be rejected, I think that if this is all we have to say about the decision theory, we lack an acceptable account of what it is to be in a given unsharp credential state—we cannot explain what would constitute the difference between someone in a sharp credential state given by a certain conditional probability function, and someone in an unsharp credential state containing that probability function, who had chosen it as the guide to their actions. Unsharp credential states seem to have simply been postulated as states that get us out of tricky epistemological dilemmas, without an adequate theory of their underlying nature. It is rather as if some ethicist were to respond to some tricky ethical dilemma—say, whether you should join the Resistance or take care of your ailing mother—by simply postulating a new kind of action that is stipulated to be a special new kind of combination of joining the Resistance and taking care of your mother which lacks the objectionable features of obvious compromises (like doing both on a part-time basis or letting the outcome be determined by the roll of a dice). It would be epistemologically very convenient if there was a psychological state we could rationally be in in which we neither regarded P as less likely than HF, regarded HF as less likely than P, nor regarded them as equally likely. But we should be wary of positing psychological states for the sake of epistemological convenience.

I actually don’t think that imprecise (or unsharp) credences are the solution to the particular problem Cian is interested in here; I think the solution is to say the relevant credences are undefined, not imprecise. But I don’t think this is a compelling objection to imprecise credences either.

It is, I think, pretty easy to say what the behavioural difference is between imprecise credences and sharp credences, even if we accept (as I do!) what Adam and Cian have to say about decision making with imprecise credences. The difference comes up in the context of giving advice and evaluating others’ actions. Let’s say that my credence in p is imprecise over a range of about 0.4 to 0.9, and that I make decisions as if my credence is 0.7. Assume also that I have to make a choice between two options, X and Y, where X has a higher expected return iff p is more likely than not. So I choose X. And assume that you have the same evidence as me, and face the same choice.
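
To fix ideas, here is a minimal sketch of the setup. The payoffs are hypothetical (the post doesn't specify them); they are chosen so that X has the higher expected return exactly when p is more likely than not.

```python
# Hypothetical payoffs: X pays 1 if p and 0 otherwise; Y pays a flat 0.5.
# So X has the higher expected return exactly when credence in p exceeds 0.5.
def expected_return(option, cr):
    return cr * 1.0 if option == "X" else 0.5

lo, hi, acting_as_if = 0.40, 0.90, 0.70  # representor bounds, and my chosen sharpening

# Acting as if my credence were 0.7, I choose X:
print(expected_return("X", acting_as_if) > expected_return("Y", acting_as_if))  # True

# But members of the representor disagree, so on the imprecise framework
# either choice is rationally open to you:
print(expected_return("X", lo) > expected_return("Y", lo))  # False: this member favours Y
print(expected_return("X", hi) > expected_return("Y", hi))  # True:  this member favours X
```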

On the sharp credences framework, I should advise you to do X, and should be critical of you if you don’t do X. On the imprecise credences framework, I should say that you could rationally make either choice (depending on what other choices you had previously made), and shouldn’t criticise you for making either choice (unless it was inconsistent with other choices you’d previously made).

I don’t want to argue here that it makes sense to separate out the role of credences in decision making from the role they play in advice and evaluation. All I do want to argue here is that once we move beyond decision making, and think about advice and evaluation as well, there is a functional difference between sharp and unsharp credences. So the functionalist argument that there is no new state here collapses.

One other note about this argument. I don’t think of sharp and unsharp credences as different kinds of states, or as states that need to be separately postulated and justified. What I think are the fundamental states are comparative credences. The claim that all credences are sharp then becomes the (wildly implausible) claim that all comparative credences satisfy certain structural properties that allow for a sharp representation. The claim that all credences should be sharp becomes the (still implausible, but not crazy) claim that all comparative credences should satisfy those structural properties. Either way, there’s nothing new about unsharp credences that needs to be justified. What needs to be justified is the ruling out of some structural possibilities that look prima facie attractive.

Posted by Brian Weatherson in Uncategorized



11 Responses to “Cian Dorr on Imprecise Credences”

  1. Dan Greco says:

    I’m not sure I follow your response to Cian. In particular, it seems as if you may be assuming some version of the “Uniqueness thesis” (I think the term was coined by Richard Feldman, but it’s also been discussed by David Christensen, Roger White, and others), according to which a given body of evidence always rationalizes a unique doxastic attitude towards a given proposition.

    Suppose the defender of sharp credences rejects uniqueness, as I take it many such people will (e.g., orthodox subjective Bayesians). I’m not sure how the advisory/assessment role that credences play will let us distinguish between the permissivist advocate of sharp credences (i.e., the person who believes that in many cases, there is a range of precise credence functions that it is permissible to adopt), and the non-permissivist advocate of unsharp credences (i.e., the person who believes that in such cases, there is instead a unique unsharp probability function that it is permissible to adopt as one’s doxastic state).

    For instance, if we are sharp permissivists, we might describe your example above by saying that any sharp credence in the range 0.4-0.9 is permissible, rather than by saying that an unsharp state that is spread out over the 0.4-0.9 range is the unique permissible state.

    It looks as if we’ll give others advice and evaluate others’ actions in much the same way depending on whether we opt for the sharp/permissive package or the unsharp/non-permissive package.

    Either way, I’ll agree that you could rationally do X or Y, depending on the other choices you make. More specifically, I take it that Cian’s point is that either way, I’ll effectively give you the advice to act as if you had some particular precise probability function, though without singling out any particular probability function such that you should act as if you had THAT one. On both the sharp and the unsharp picture, I can give you that advice and agree that you act rationally, even if I act differently from you.

  2. barrylam says:

    Dan, good point. Off the top of my head, does this look like something in the spirit of Brian’s reply?

    An unsharper can describe A with credence .7 as disagreeing with B with credence .8, but neither is in disagreement with C with credence .4-.9. C is not in a position to evaluate or criticize either A or B’s credence, and the explanation for this is that they are not in disagreement.

    A sharp permissivist will describe A with credence .7, and B with credence .8, as disagreeing, but as both being in the range of permissible sharp credences between .4 and .9. No one is in a position to criticize A or B’s credence because they are in the range of rationally permissible credences.

    The difference is in the explanation of the evaluation and criticism, not a difference in the actual evaluation and criticism in the two cases. The sharper’s evaluation is normative, the unsharper’s is…dialectical??

    So on the sharp credence framework (even if you are a permissivist), I don’t advise or evaluate you negatively even though I disagree with you (think you are wrong?) because I think you are still reasonable. On the unsharp framework, I don’t advise or evaluate you negatively because I don’t disagree with you.

    This doesn’t sound like a practical difference in decision-making, evaluation, or advice itself, but maybe some such difference can be drawn from it?

  3. Brian Weatherson says:

    Right, the argument about evaluation gets more complicated if you are a permissivist. And it gets particularly complicated if, as Barry suggests, any sharpening of a permissible state is permissible. I used to think that was the case; I’m now less sure that’s true. But here are the two complications I would add.

    1) Instead of talking about evaluating all agents, we can ask how we would evaluate agents with the same credences as the evaluator. Or we can talk about evaluating pairs of agents with the same credences. The latter might be easier in some ways. If a pair of agents have the same credences, then the sharp credence person says that we evaluate them the same way, and the unsharp person doesn’t. That’s a behavioural difference, so there’s no behaviourist/functionalist argument against the existence of unsharp credences.

    2) I don’t think permissivism matters so much to the argument I gave about advice. I’m inclined to be a permissivist about, say, the Equal Weight View. I don’t think Adam Elga is irrational to hold it. But I’d advise him against holding it; it isn’t actually true! And I’d advise him against betting on it; he’ll lose money that way.

    So we have reason, independent of the sharp/unsharp debate, to think that one uses one’s own credences in giving advice, even if one thinks that the advisee’s (distinct) credences are also rational. And that’s all I need to get a behavioural difference between sharp and unsharp credences.

  4. Cian Dorr says:

    Thanks, Brian – I’m delighted you found the paper. And I’m eager to hear more about why you want to say that the credences that feature in the puzzle are undefined rather than unsharp – I had imagined that fans of unsharp credences would all want to appeal to them as a way out of the puzzle.

    Here are a few comments in reply to your defence of unsharp credences.

    (i) I don’t think it would be so wildly implausible to think that ‘all comparative credences satisfy certain structural properties that allow for a sharp representation’. I’m guessing that the relevant structural property is something like this: if A and B are both to some extent confident that P, then either A is at least as confident as B that P, or B is at least as confident as A that P. I think there is at least something to be said on behalf of the following completely general ‘comparability’ principle for comparative adjectives: if A and B are both F to some degree, then either A is at least as F as B or B is at least as F as A. The appeal of this comes from the sense that ‘A is not as F as B’ seems to come close to entailing ‘A is less F than B’. I grant that this can’t quite be an entailment – although ‘my coffee spoon is not as happy as me’ is true, ‘my coffee spoon is less happy than me’ isn’t. But the fact that we make this transition so easily in normal cases gives some support to the idea that all you need to add to make the inference valid is a premise to the effect that A is F to some degree.

    (I’m not suggesting that this is a knock-down argument. It might be claimed that the felt goodness of the inference can be adequately explained just by saying that ‘A is as F as B’ has ‘Either A is [at least] as F as B or B is [at least] as F as A’ as a presupposition. But views on which incomparability is a commonplace phenomenon might make it hard to understand how this presupposition could arise. On such views, one might think it would be pretty unusual for us to have reason to take the disjunction for granted without having reason to take either disjunct for granted.)

    (ii) Even if comparability is not true for comparative adjectives in general, I think it might plausibly be true for some of them – ‘full’, for instance. I think it’s a live option that ‘confident’ is one of the ones for which comparability holds. I certainly don’t think that the vagueness of ‘more confident than’ is any kind of a reason to think otherwise – ‘full’ is vague too, after all.

    (iii) If I’m wrong about (i) and (ii), then the remark in the paper about unsharp credences being “postulated” isn’t exactly on target. As you say, unsharp states can simply be defined in terms of existing concepts like ‘more confident than’. But what I still think might be unwarranted is the postulation that there is a way to be in an unsharp state without being irrational in the way that leads one to be Dutch-bookable. Suppose that A’s credence that P is x, B’s credence that P is y, x<y, and C is neither more confident that P than A is nor less confident that P than B. Thinking about what dispositions in C could ground such a description, my first thought is that they would have to involve some kind of contextual variability (or randomness) in his action-dispositions. Maybe C is someone who reacts differently to a bet depending on how you describe it to him, or depending on his mood, or something like that. Of course, fans of unsharp credence think there is another, very different kind of psychological profile which can make it correct to describe a person as having unsharp credence, and which makes for diachronically coherent action-dispositions. It is this alleged coherent form of unsharpness that I regard as a suspicious posit, driven by epistemological considerations without an adequate underpinning in the philosophy of mind.

    (iv) Your positive suggestion is that the behavioural hallmark of unsharp credence can be found in one’s dispositions to advise and evaluate others. What puzzles me about this suggestion is that I don’t see how to fit it with the thought that making speeches of advice and evaluation is itself a piece of behaviour, subject to the same requirements of cross-temporal consistency as any other. Whatever the relevant behaviour might be, what stops it from being rationalised just as well by sharp credences and utilities? As Dan suggests, if the behaviour consists of my saying things like ‘You should do such-and-such’ and ‘It would be OK for you to do such-and-such’, it could be rationalised by my (sharp) credences in propositions about what you should do and what it would be OK for you to do, and preference for speaking truly. Of course, other relevant factors might include my credences about how well doing such-and-such would work out and my relative utilities for giving advice that worked out, giving advice that didn’t work out, and withholding advice. I don’t really have a grip on what a reasonable pattern of advice-and-evaluation behaviour could look like that would resist such psychological explanation in terms of sharp credences.

    (v) In reply to Dan and Barry, you suggest that the behavioural difference between sharpness and the lack thereof should be cashed out in terms of their dispositions to evaluate people with the same credences as them. I’m not sure I’ve understood how this suggestion works. But one worry is that it seems circular: to apply it, one would need to already know who counts as having the same credences.

  5. Brian Weatherson says:

    On point (1), I think my response isn’t going to be very satisfying. I think it’s just a very widespread false belief that comparatives like ‘more F than’ are linear in this sense. I don’t know why it is that so many people have this false belief, but I’m sure it is false. I think the arguments that, for instance, ‘more intelligent than’ does not satisfy these structural properties are strong enough to overcome any doubt brought about by a kind of argument from universal assent.

    In the case of ‘more intelligent than’, I think the false belief that people can be linearly ordered has extremely pernicious effects; for instance, it promotes the idea that there is something for IQ to measure. And it wouldn’t have these pernicious effects if it weren’t a widespread belief.

    I would like to know why this false belief is so widespread, but I don’t have anything to offer on this. Perhaps the experimental philosophers can help!

    On (2), I think that if we allow for non-linear orderings, the argument that degrees of (rational) confidence are among the things that are non-linearly ordered seems pretty simple to me. Let p and q be two propositions for which the evidential support is radically dissimilar, but neither of which the rational agent is more confident in than the other. Let r be a proposition of the form “this lottery ticket will lose”, where the lottery in question has a massive number of tickets. If the ordering were linear, one would have to be equally confident in p and q, and so one should be less confident in p & r than in q. But in these cases it will be natural to say that neither p & r nor q is more likely than the other. And that requires a non-linear ordering.
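
    One toy way to model the structure (the numbers here are made up purely for illustration): take a representor, a small set of probability functions, and say one proposition is more likely than another only when every member agrees.

    ```python
    # A toy representor: each member assigns a probability to p and q. The lottery
    # proposition r gets probability 1 - 1/N with N huge, independent of p, so
    # every member puts p & r just below p. (Numbers are illustrative only.)
    N = 10**6
    representor = [
        {"p": 0.60, "q": 0.70},
        {"p": 0.70, "q": 0.55},
    ]
    for member in representor:
        member["p&r"] = member["p"] * (1 - 1 / N)

    def more_likely(a, b):
        """a is more likely than b iff every member of the representor says so."""
        return all(m[a] > m[b] for m in representor)

    print(more_likely("p", "q"), more_likely("q", "p"))      # False False: incomparable
    print(more_likely("p", "p&r"))                           # True: p beats p & r everywhere
    print(more_likely("q", "p&r"), more_likely("p&r", "q"))  # False False: still incomparable
    ```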

    It’s true that there will naturally be some amount of randomness associated with credences in P of the sort you describe in (3). But I think that’s unavoidable.

    Assume a real number z will be chosen at random from [0, 1]. Maybe this is physically unrealistic, but I don’t think it’s so obviously unrealistic that we should exclude it from discussions about the nature of credences. Let Q be the proposition that z is a member of a particular unmeasurable set S. Let’s say S has inner measure 0.4 and outer measure 0.9. The agent is offered, for 60 cents, a bet that pays $1 if Q and nothing otherwise. What will she do? Seems to me that she should make a somewhat random decision, though if she buys the bet she shouldn’t turn around and sell it for 50 cents. That is, she should make her initial decisions at random, though over time she should ensure that her decisions are collectively rational.
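
    A rough arithmetic sketch of why the initial choice is left open, assuming the representor contains, for each value between the inner and outer measure, an extension of Lebesgue measure assigning Q that value:

    ```python
    # The bet pays $1 if Q and costs 60 cents. Members of the representor assign Q
    # probabilities spanning [0.4, 0.9], so expected net payoff straddles zero.
    inner, outer = 0.40, 0.90   # inner and outer measure of the unmeasurable set S
    price = 0.60                # cost of the bet

    print(f"worst-case expected net: {inner * 1.00 - price:+.2f}")  # -0.20
    print(f"best-case expected net:  {outer * 1.00 - price:+.2f}")  # +0.30

    # Buying at 60 cents and reselling at 50 loses 10 cents however Q turns out,
    # which is why her later choices must cohere with her earlier ones.
    print(f"buy at 0.60, sell at 0.50: {0.50 - price:+.2f} in every world")  # -0.10
    ```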

    The only alternatives to having something like that kind of randomness are, I think, to say either

    (A) It is a rational requirement that the agent forms some credence in Q. But which one? And why?
    (B) There is some other decision theory for betting on unmeasurable propositions.

    I think (A) is implausible, and if (B) is true, the fan of imprecise credences can adopt the decision theory.

    On (4) and (5), I worry that I’ve misunderstood the debate. Here’s one position we could hold.

    • There’s no way to explain patterns of advice and evaluation without positing imprecise credences.

    I don’t hold that position. Here’s the position I do hold.

    • There are epistemological reasons to adopt a theory on which imprecise credences are rationally permissible, and perhaps reasons to adopt a theory on which they are rationally required. On the theory on which imprecise credences exist, we can say why they are not epiphenomenal mental states; they play a crucial role in advice and evaluation.

    It’s true that if we were just looking at the explanation of action, we wouldn’t feel compelled to posit unsharp credences. But given that we have, for independent reasons, posited them, we can show that they earn their keep in philosophical psychology.

    This is a kind of halfway behaviourism. I don’t want to say that the only mental states there are are those that we get by ruthlessly applying Occam’s Razor to a broadly Humean approach to mental causation. But I do think that our positive theory should have a causal role for each of the states it posits. That’s all I was trying to do with the advice-and-evaluation point.

  6. Brian Weatherson says:

    I forgot to add something about the undefined credences point. I was thinking that it might be best to say that credences in unmeasurable propositions are undefined rather than imprecise. And the same might go for conditional credences in cases where conditionalisation will lead to violations of countable additivity.

    I was thinking this is actually a less radical move than the imprecise credences position. Everyone, I thought, should say that the function from propositions to credences (or from pairs of propositions to conditional credences) is a partial function. It might be that some of these propositions about the eternal coin are outside its domain.
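
    A trivial sketch of that picture (the propositions here are placeholders, not anything from the paper): the credence function simply has no value at some arguments, rather than an imprecise one.

    ```python
    from typing import Optional

    # A credence function modelled as a partial function: propositions outside
    # its domain (e.g. unmeasurable ones) get None, not an imprecise value.
    credences: dict[str, float] = {"the coin lands heads on toss 1": 0.5}

    def credence(proposition: str) -> Optional[float]:
        return credences.get(proposition)  # None = undefined, not unsharp

    print(credence("the coin lands heads on toss 1"))  # 0.5
    print(credence("z is in the unmeasurable set S"))  # None: outside the domain
    ```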

  7. Cian Dorr says:

    Good!…..

    (1) I wasn’t really basing my case for the linearity of (e.g.) ‘more intelligent than’ on ‘a kind of argument from universal assent’, but on the premise that ‘x is somewhat intelligent (or: x is more intelligent than z); x is not as intelligent as y; therefore, y is more intelligent than x’ is a valid argument. And you have to admit that it has some air of validity, although I grant that there is an alternative explanation in terms of presupposition that is also worth pursuing.

    I suppose I agree about the pernicious effects of the belief about linearity. But true beliefs can have pernicious effects too, if people reason from them in fallacious ways. And in the case of the belief that for any two people a and b, either a is at least as intelligent as b or b is at least as intelligent as a, I think the bad effects you mention are mediated by the fallacious assimilation of this to something like ‘for any two people a and b, either definitely a is at least as intelligent as b, or definitely b is at least as intelligent as a’. I think we both agree that every person is either virtuous or not virtuous; but this doesn’t sound like something I would be comfortable saying in a non-philosophical context, and the reason I think is that I’d worry that people would react as if I had made something like the (dangerously false) claim that everyone is either definitely virtuous or definitely not virtuous.

    (2) I am not convinced by the premises of the argument you give here, but I have nothing new to say against them right now. I’ll just note that it seems a bit roundabout for an argument for the possibility of a certain state of affairs (unsharp credence) to turn on premises about the normative goodness (or rationality) of that state of affairs. (It’s a bit like arguing for the non-linearity of the comparative adjective ‘lenient’ by arguing that when two prisoners a and b have radically dissimilar cases, a wise judge will neither be more lenient to a than to b, more lenient to b than to a, nor equally lenient to a and b.) Presumably it is part of your view that failures of linearity are widespread in the actual world (rational unsharp credence is another matter): I think that there might be more dialectical traction in arguments that focus on rich descriptions of the underlying (physical, cognitive-scientific, dispositional) facts which according to you suffice for failures of linearity in ‘more confident’.

    (3) I think we agree here. I take it from your initial post that you think that people with unsharp credences do manage to co-ordinate their decisions so as to avoid being Dutch-bookable; if their initial decisions are random, their later decisions are less so. I was just pointing out that anyone who thinks that unsharp credence is possible at all will presumably also think it’s possible to combine unsharp credences with irrational, Dutch-book-prone levels of randomness and context-sensitivity. As you note, it is more controversial whether it is also possible to combine them with diachronically coherent dispositions.

    (4-5) Your comments here are very helpful; you are right that I was ignoring some important distinctions in my earlier comment. Let me see if I can state the worry in a better way.

    What I am looking for is not just an account of the behaviour that would typically be caused by a state of unsharp credence. What I want is a constitutive account of the difference between two specific, closely related credential states: (A) a state in which your credence in P is unsharp (e.g. [0.4,0.9]) but in which you have (perhaps because of having been forced to make certain choices in the past) decided to ‘make decisions as if’ your credence is 0.7; and (B) a state just like (A) in every respect except that your credence that P really is 0.7. I took you as suggesting that (A) will lead to different advice-and-evaluation behaviour than (B). But I’m not seeing how this is going to work out, given that your utility function, and thus your preferences about such questions as whether you speak truly about epistemology, whether you give advice that turns out to lead to bad results, whether you disappoint people by refusing to give them any specific advice, and so on, are by stipulation exactly the same in the two states.

    You might say that there is no behavioural way to distinguish these two states, but that this isn’t a problem since behaviourism is false and functionalism is true. Having unsharp credences (you might say) has certain characteristic effects on your utilities, and it is by this means that it produces its characteristic behavioural signature of diffidence in advice-giving. If we suppose that someone got to have those same utilities in some other way, we will see the same behaviour. From an abstract point of view there is nothing wrong with this proposal (it is in the nature of functionalism to allow for distinct but behaviourally indiscernible states); but I am still skeptical. The utilities required to justify the behaviour seem like quite intelligible and reasonable ones to have, even for someone with sharp credences. So it is hard to see how a person’s having these utilities could play much of a role in explaining what makes it true that they are in fact in a certain state of unsharp credence. Moreover, standard stories about how mental states should evolve suggest that your utility function should not change in any systematic way when your credences change.

    Here is a different thought. My character in state (B) gets to have those advice-and-evaluation dispositions only because of her bizarre, false introspective beliefs. Even though her credence is sharp, she thinks that it is unsharp – after all, by stipulation, her credential state is exactly the same as in (A), except for her credence in P. If she had true introspective beliefs, she would behave differently, since her utility function assigns different values to worlds where she gives a given piece of advice depending on whether her credences in those worlds are sharp or unsharp. Two responses to this thought. First, those seem like strange, and rather self-regarding, utilities to have: I don’t think it is plausible that it is ceteris paribus better to interpret people as having that kind of utility function. Second, this appeal to introspective credences seems no better than the following simpler appeal to introspection: states of unsharp credence tend to cause states of having high credence that one is in a state of unsharp credence, while states of sharp credence tend to cause states of having high credence that one is in a state of sharp credence. And the latter isn’t the sort of difference that one can legitimately appeal to in telling a non-circular functionalist story of what it is to be in a given state of sharp or unsharp credence.

  8. Brian Weatherson says:

    I have to think more about points (1)-(3), but I have at least something to say on (4) and (5), I hope.

    I think I might have been overly concessive in some of my replies about advice-and-evaluation. Not a mistake that philosophers are likely to make! (And not, I suspect, the particular mistake that most people will think I’ve made here.)

    I think when we look more closely at the particular instance of advice giving, things look a little better for the unsharp credences person. Take again the case where my credence in P is unsharp over [0.4, 0.9], but I bet as if my credence in P is 0.7. I think this little dialogue is rational.

    A: I’ve been offered a bet that pays $1 if P for 60 cents. What should I do?
    Me: You could take it or leave it; both would be rational. If it were offered to me, I would probably take it, because I have already been buying such bets for 70 cents, so 60 cents would be a bargain. But that was a somewhat arbitrary choice, you could just as well leave it.
    A: What if the price were lowered to 59 cents?
    Me: Same advice, you could take it or leave it.
    A: Or even down to 55 cents?
    Me: Still the same advice. All of these look like free choices to me, though it would be dumb to buy at 60 and then immediately sell at 55.

    I think one has to assign very odd credences/utilities to me to make that bit of advice giving consistent with my having credence 0.7 in P. It’s true that it is consistent with my not thinking the conversation is in English, or not caring about A’s welfare, and having credence 0.7 in P. But I don’t think it’s consistent with my having credence 0.7 in P, and knowing the conversation is in English, caring about A’s welfare and other usual assumptions like that.

    So given some very plausible auxiliary assumptions (e.g., I know what language we’re speaking!) I think advice contexts do bring out behavioural differences between sharp and unsharp credences.

    What I should have said earlier was that unsharp credences behave like sharp credences in betting situations. And while many, many everyday settings are betting situations in the salient sense, not all of them are. Advice contexts need not be, for instance. And that’s enough for the unsharp credences person.

  9. Dan Greco says:

    I’m not sure I see why the dialogue in the last post is any harder to make sense of on the sharp credence picture than on the unsharp one. I take it the reason it’s supposed to be hard to make sense of on the sharp credence picture is that if I really have 0.7 credence that P, and I care about your welfare, then I should advise you to bet as if your credence is 0.7 too (even if I’m a permissivist, and I think you’d be rational in adopting any credence between 0.4 and 0.9).

    But if that’s right, I don’t see why things should be different on the unsharp credence picture. Suppose I’ve been buying bets as if I have 0.7 credence in P, though my credence is in fact unsharp from 0.4 to 0.9. Now you’re asking for advice—you’ve been offered a bet that will pay $1 if P for the price of 60 cents—and you’ll heed the advice I give you. It seems to me that if I care about your welfare, this is effectively just another bet on whether P. So however the advocate of the unsharp credence picture manages to get the result that I should bet as if my credence is 0.7 (if that’s necessary to avoid being Dutch Booked, given my past behavior), that strategy will also commit her to saying that if I care about your welfare, then I should advise you to bet as if your credence is 0.7 (at least, insofar as I both care about your welfare and know that you’ll take my advice).

    If we can drive a wedge between betting and advising (though when we know that advice will be heeded, I’m not sure how), then I’m not sure why it’s any harder for the defender of sharp credences to exploit this wedge (e.g., by holding that advice should track what you think it is rational for the advisee to do, which if you’re a permissivist, can come apart from what you would do) than it is for the defender of unsharp credences to do so.

  10. Brian Weatherson says:

    “It seems to me that if I care about your welfare, this is effectively just another bet on whether P.”

    I don’t think this is correct. But the reason I think that never really seems to convince anyone. Still, I’ll try again to make it convincing!

    Let’s say I do the following things.

    First, I buy a bet that pays $1 if P for 60 cents.

    Second, I sell that bet for 55 cents.

    Is the first thing rationally permissible? Yes; it’s a take it or leave it choice.

    Is the second thing rationally permissible? Yes; it’s a take it or leave it choice.

    Is the combination rationally permissible? No; I’ve just given away 5 cents for no good reason.

    It’s true that when I perform the second act, I’ll have gone from being in a state where everything (salient) I did was rationally permissible to a state where that’s no longer the case. But that doesn’t mean the thing I did was impermissible; it clearly was permissible. The irrationality here is, I think, purely wide scope: I shouldn’t have done both those things.
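
    A minimal bit of bookkeeping for this example, reading ‘permissible’ as ‘some member of the [0.4, 0.9] representor gives the act non-negative expected value’:

    ```python
    lo, hi = 0.40, 0.90                  # bounds of the unsharp credence in P
    buy_price, sell_price = 0.60, 0.55   # dollar prices for a bet paying $1 if P

    # Buying at price c has non-negative expected value iff your credence >= c;
    # selling at price c has non-negative expected value iff your credence <= c.
    buy_permissible  = hi >= buy_price            # True: e.g. the 0.7 member
    sell_permissible = lo <= sell_price           # True: e.g. the 0.5 member
    pair_rationalised = buy_price <= sell_price   # False: no single member does both

    print(buy_permissible, sell_permissible, pair_rationalised)
    print(f"net from doing both: {sell_price - buy_price:+.2f} whether or not P")  # -0.05
    ```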

    Now let’s say that I don’t do the second thing, but instead advise you that it is a take it or leave it choice. And you decide to sell. Have I done anything irrational? I don’t see how. It would be irrational to do the thing I did (i.e., buy for 60) and the thing you do (i.e., sell for 55), but neither of us did both those things.

    Maybe an analogy would make this a little clearer. Let’s say I’m a judge, and I have to decide on a sentence for a particular criminal. Any sentence between 18 months and 3 years would be rationally and morally defensible; none stands out as being the optimal sentence. I decide on a sentence of 2 years. Now compare two ways the case might be extended.

    Case 1. The next day I have to sentence someone in a case that’s almost exactly the same as the case I just decided. I think justice requires that I treat like cases alike, and sentence this criminal to 2 years as well.

    Case 2. The next day you ask me for advice about someone in a case almost exactly the same as the case I just decided. You ask me for advice about the sentence. I think the right advice is to say, “Well, I just sentenced someone like that to 2 years, but anything between 18 months and 3 years would be equally defensible.”

    Whenever there are wide scope norms, I think action norms and advice norms will come apart, so giving advice won’t be just like placing a bet.

  11. Cian Dorr says:

    I share Dan’s puzzlement about the bifurcation between advice-giving and actions of other kinds. Let me give an example. Suppose I have two children, Anne and Brad, and I care about their welfare in a “utilitarian” way: I want them to be happy; I am indifferent to the question how happiness is shared among them; and I have no other relevant preferences. On Day 1 Anne is offered a bet that pays her $3 if P and costs her $1 if not-P; she asks my advice. I know that she will (probably) follow my advice if I give it to her, and that if I don’t give her any concrete advice she will (probably) not take the bet (she is generally quite cautious). On Day 2, Brad is offered the converse bet, which pays him $3 if not-P and costs him $1 if P; he too asks my advice, and I know that he will (probably) follow it if I give it to him and reject the bet if I don’t say anything concrete.
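
    Here is the payoff arithmetic behind that claim, given the stipulated utilities (total welfare, indifferent to distribution):

    ```python
    # Family payoff in dollars for each resolution of P. Anne's bet pays $3 if P
    # and costs $1 if not-P; Brad's converse bet pays $3 if not-P and costs $1 if P.
    def family_payoff(anne_takes, brad_takes, P):
        anne = (3 if P else -1) if anne_takes else 0
        brad = (-1 if P else 3) if brad_takes else 0
        return anne + brad

    for P in (True, False):
        print(P, family_payoff(True, True, P), family_payoff(False, False, P))
    # True  2 0
    # False 2 0
    # Advising both children to bet nets the family $2 more than advising neither,
    # whatever P's truth value; so these preferences favour advising at least one.
    ```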

    Elga-style claim: if I am rational, then given my preferences as described, I will be disposed to advise at least one of my children to take the bet that they are offered. The case for this seems to me exactly as strong as the corresponding case for the claim Elga makes in his paper: the fact that the actions involved are givings-of-advice, and that my utilities are not selfish, seems quite irrelevant.

    Of course, the utilities I am stipulated to have are rather unrealistic. It would be more realistic for me to care to some extent about equality in the distribution of goods between the children, so that I will be less enthusiastic about a shift from the status quo in which one child is enriched by $3 while the other is impoverished by $1. It would also be more realistic for me to especially disvalue outcomes where bad things happen to people as a result of their taking my advice, as against outcomes where the same bad things happen but where I didn’t play any role. And if the form my advice-giving takes involves my making assertions like ‘You should do such-and-such’, there is room for my credences about normative subject matters, and my preferences for speaking the truth about these subject matters, to play a role. All of these are considerations that may, in some cases, make the option of giving no definite advice to either child look better than it does in my original example. But I take it that if we fill in the details of how my preferences work in any of these more realistic ways, we will still be able to adjust the payoff structure in such a way that the Elga-like conclusion is still as plausible as it ever was.

    Brian’s analogy with the judge doesn’t help me to understand the opposing view. This judge prefers that relevantly similar cases decided by the same person do not attract different sentences, but does not seem to disvalue outcomes where people whose cases are judged by different judges attract different sentences. Or if he does disvalue this, the negative value in question is outweighed by the positive value he places on telling his colleagues the truth about ‘defensibility’. This is enough to explain why it is reasonable for this particular person, in this particular role and with these particular appropriate preferences, to have different dispositions as regards advice-giving and sentencing. I don’t see that it helps support the idea that a bifurcation between advice and other kinds of action will be a general feature of any theory about norms governing actions.
