So I was reading the Hájek and Pettit paper in Lewisian Themes and I was very very confused. This may be a reflection of something wrong with me, or maybe something confusing is going on. (Warning – this is written with effectively *zero* knowledge of the actual literature, so I might just be reinventing the sled.)

The paper is on Lewis’s attacks on the Desire-as-Belief thesis, which somehow gets transformed into a general-purpose anti-Humean thesis. Here’s Lewis’s statement of the thesis, slightly reworded to allow for formatting in HTML.

there is a certain function (call it the ‘star’ function) that assigns to any proposition A a proposition A* (‘A-star’) such that, necessarily, for any credence distribution C, V(A) = C(A*).

V here is the (ideal?) agent’s (normalised) valuation function and C her credence function. This thesis is shown to be false on the ground that it implies credences in starred propositions don’t change, which is, I guess, implausible. Hájek and Pettit point out that for many purposes an anti-Humean can get by with the following claim, which is invulnerable to Lewis’s arguments.

For any credence function C, there is a star function that assigns to any proposition A a proposition A* (‘A-star’) such that V(A) = C(A*).

For instance, if the star function maps A onto the proposition that A maximises expected utility, then there is no *formal* argument against their weaker claim. Why is this relevant? Well, because Lewis seems to make rather bold claims about what follows from the falsity of *his* thesis. He considers something like Hájek and Pettit’s thesis and says that its truth would not make us think that A* is the proposition that A is objectively good. As I said, Hájek and Pettit argue fairly convincingly that that is incorrect.

But they agree with Lewis about one thing. They think the claim that A is objectively good will have to be in a way indexical to avoid Lewis’s argument. (If the objectively good maximises *expected* utility according to some salient credence function it will be indexical, so the decision-theoretic utilitarian is off the hook here.) And I’m just confused about why we should think that is the case.

It might make things easier if we consider a particular kind of objectivist about ethics, the ethical actualist. The actualist says that *A is good* is necessarily equivalent to A. (Perhaps what is good is what is part of God’s plan, and God’s plan is revealed by what is true.) Now the actualist is an objectivist about ethics if anyone is. And the actualist is an anti-Humean – they certainly think you can infer an ought from an is. (The actualist theory is obviously *ethically* flawed, but Lewis’s argument doesn’t seem to turn on *ethical* considerations, so that should be set to one side.) But does the actualist accept Lewis’s anti-Humean thesis? I think no, though just how they reject it depends on how we interpret V.

Let’s start with a practical case of ignorance. The Giants are playing the Brewers, and our actualist agent’s credence in A, the proposition that the Giants win, is 0.6. That is, C(A) is 0.6. Since the agent is an actualist and an anti-Humean of Lewis’s preferred kind, then A*, the proposition that A is objectively good, is just A. So C(A*) = 0.6. What is V(A)? Two options stand out.

One possibility is that V should measure how much the agent *values* A. In that case V(A) is either 1, if the Giants win, or 0, if they don’t, but the agent doesn’t know which. Objectivists about value tend *not* to think that agents can always know what is valuable, or even what they value. So we shouldn’t assume that values (which are external) can link up with credences (which are in a sense internal). So our anti-Humean rejects Lewis’s statement of anti-Humeanism.

Another possibility is that V(A) just equals C(A*) which is just C(A). This looks more like we’re accepting Lewis’s statement of anti-Humeanism. Not so fast! Lewis’s thesis is meant to hold for *any* credence function C. In particular it’s meant to hold still when Barry Bonds hits a first-inning grand slam and her credence that the Giants will win rises to 0.8. Now C(A) is 0.8, so V(A) should also be 0.8 by this method.
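To put numbers on the baseball case (a minimal sketch; the credences are from the example, the code is mine): with the actualist’s star function as identity, a V defined as C(A*) just shadows whatever credence function the agent currently has, which is exactly why it can’t be one fixed valuation that works for *all* credence functions at once.

```python
# Actualist star function: A* is just A itself.
def star(a):
    return a

# Lewis's schema read off one credence function at a time: V(A) = C(A*).
def valuation(credence, a):
    return credence[star(a)]

A = "the Giants win"

c_before = {A: 0.6}  # before the game
c_after  = {A: 0.8}  # after the first-inning grand slam

print(valuation(c_before, A))  # 0.6
print(valuation(c_after, A))   # 0.8
# V(A) moves in lockstep with C(A): the thesis holds relative to a
# particular credence function, not for every credence function at once.
```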

What the agent accepts is that *her* valuation function V perfectly tracks her credence function. She doesn’t accept that her valuation function does (or even *could*) track *all* possible credence functions. What she accepts is something more like the following:

There is a function * from propositions to propositions such that for any evidence E, the desirability of A given E for her equals her credence in A* given E.

We know that theory is coherent because our actualist, who thinks the star function is the identity function, is coherent. (Or are they incoherent in a way I’m missing?) And the theory is anti-Humean, at least on some interpretations of star. I must be missing something here I guess. Suggestions as to just what I’m missing are welcomed.

“And the actualist is an anti-Humean – they certainly think you can infer an ought from an is.”

Brian,

There are two kinds of anti-Humeanism in the ballpark — rational anti-Humeanism and motivational anti-Humeanism. Very roughly, rational anti-Humeans think that desires can be rationally criticized in more substantive ways than just that they conflict or are self-defeating. Less roughly, motivational anti-Humeans think that beliefs can motivate. In other words, they deny Hume’s claim that reason alone can never be a motive of the will, or whatever the exact words were. Lewis is concerned to deny this kind of anti-Humeanism. And he thinks that such anti-Humeans have to think that their desire for an outcome is correlated with their belief that that outcome is good. That’s because most such anti-Humeans think that it is precisely moral beliefs that violate Hume’s stricture about beliefs not motivating by themselves (without some antecedent desire). I’m not a very good formal philosopher, so I have trouble tracking every nuance of this debate, but I think I agree with Hájek and Pettit (and Byrne in another paper) that Lewis’s interpretation of what this sort of anti-Humean has to think is not really the only one available.

But my point here is just that the use of ‘anti-Humean’ here is different from that suggested by the quoted language. It is anti-Humeanism about motivation that is at stake.

Thanks Mark – that is rather helpful in clearing up what is going on.

I guess I still agree that anti-Humeanism about motivation doesn’t require something as strong as Lewis’s ‘anti-Humean’ thesis, but nothing I said here could count as an argument for that.

I haven’t been able to access the article in the Australasian Journal of Philosophy online (stupid proxy server), so I can’t give a detailed response. However, I do suspect the situation you described is actually incoherent.

In particular, it seems that you can’t just have any function from statements to real numbers as a valuation function. For instance, it seems impossible to make sense of a function that assigns a different number to ‘I am a bachelor’ than to ‘I am an unmarried man’ as a valuation in any coherent sense. While assigning the same value to analytically equivalent statements is the most obvious restriction, I expect there will be others as well. Once again, without access to the journal or more experience with the literature I can’t be sure what these restrictions might be, but I do expect this is where the error lies.
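To illustrate why that first restriction seems automatic (a sketch, under the assumption that propositions are modelled as sets of possible worlds, with invented world-values): any valuation computed from worlds assigns logically equivalent statements the same value, because they pick out the same set.

```python
# Propositions modelled as sets of possible worlds (a common assumption).
bachelor      = frozenset({"w1", "w2"})
unmarried_man = frozenset({"w1", "w2"})  # logically equivalent: same worlds

# Invented values for individual worlds.
world_value = {"w1": 4.0, "w2": 1.0, "w3": 2.0}

def V(prop):
    # Any valuation that only looks at the worlds in the proposition...
    return sum(world_value[w] for w in prop) / len(prop)

# ...cannot tell analytic equivalents apart:
print(V(bachelor) == V(unmarried_man))  # True
```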

My intuition is that in the end we want to use this valuation function to somehow compare total states of affairs. I’m unclear what it would even mean to assign value to isolated propositions individually. Thus I would guess that what we mean by V(A) is something like how much better it would be for A to be true, other things being equal. Spelling this out mathematically, I expect V(A) would represent the weighted (by likelihood) average value of those states where A is true minus the weighted average value of those states where A is false. Of course, from a psychological perspective the function V on propositions may be primitive, but the idea that this function must somehow ultimately represent a valuation on total states of affairs would place mathematical restrictions on the function.

As an example, suppose my day involves two separate decisions. I can either go to the store (call this proposition S) or go to school (~S). I can either drive (D) or walk (~D). It would seem that once my valuation of the four propositions S&D, S&~D, ~S&D, ~S&~D is fixed, then, assuming some fixed background credence function, my values for any logical combination of S and D should be fixed too. What could one possibly mean by giving S a very high valuation but giving both S&D and S&~D very low (or negative, if allowed) valuations?
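Spelling my suggestion out in code (the values and credences are invented for illustration): with V(A) as the likelihood-weighted average value of the A-states minus that of the ~A-states, valuing the four conjunctions plus fixing a credence function determines V for every combination of S and D.

```python
from itertools import product

# Four total states of affairs (S = store?, D = drive?), with
# invented values and credences.
value = {(True, True): 2.0, (True, False): 5.0,
         (False, True): 1.0, (False, False): 3.0}
cred  = {(True, True): 0.2, (True, False): 0.3,
         (False, True): 0.1, (False, False): 0.4}

def avg_value(prop):
    """Credence-weighted average value of the states where prop is true."""
    worlds = [w for w in product([True, False], repeat=2) if prop(w)]
    mass = sum(cred[w] for w in worlds)
    return sum(cred[w] * value[w] for w in worlds) / mass

def V(prop):
    """My suggestion: average value of the prop-states minus the
    average value of the not-prop-states."""
    return avg_value(prop) - avg_value(lambda w: not prop(w))

S = lambda w: w[0]
D = lambda w: w[1]

# V(S) is fully determined once the four conjunctions are valued:
print(V(S))                        # 3.8 - 2.6 = 1.2 (approx.)
print(V(lambda w: S(w) or D(w)))   # also fixed by the same data
```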

Thus in your example I worry that setting * to be the identity will make the valuation function incoherent by dictating things like V(SvD) = C(SvD). Similarly, I just don’t understand what it would mean to assign a valuation of 1 to logical truths.

Maybe I am just confused by my lack of familiarity with the literature, but I just don’t see how one can make sense of an arbitrary function from propositions to reals as a valuation. I suspect that once you make it clear how this valuation function is supposed to be used to compare states of affairs or make choices, extra mathematical constraints will become necessary, and this very well might make your example incoherent.

Hi Brian,

You found confusing our granting Lewis that A* will have to be somehow indexical if his anti-DAB results are to be avoided. So the challenge is to come up with a *-function that is fixed, irrespective of the agent (what we called “etched in stone”), but that survives the Lewis-style arguments. You suggest that the identity function is such a function: A* = A.

But this seems incoherent to me. By decision theory, we have for any B:

V(A) = V(A & B)·C(B | A) + V(A & ¬B)·C(¬B | A).

By (DAB), we can replace all the V’s by corresponding C’s:

C(A*) = C((A & B)*)·C(B | A) + C((A & ¬B)*)·C(¬B | A).

By your suggestion, this becomes:

C(A) = C(A & B)·C(B | A) + C(A & ¬B)·C(¬B | A).

But how can this be? We have as a theorem of probability:

C(A) = C(A & B)·1 + C(A & ¬B)·1.

So it looks like you’re going to need:

C(B | A) = 1 and C(¬B | A) = 1.

And that’s incoherent.
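Here is the same point checked against a toy credence function (the numbers are invented): with * as the identity, the (DAB) equation demands that C(A) equal the decomposed right-hand side, and generic numbers refuse.

```python
# An invented credence function over the four A/B cells.
c = {("A", "B"): 0.3, ("A", "~B"): 0.3,
     ("~A", "B"): 0.2, ("~A", "~B"): 0.2}

c_A      = c[("A", "B")] + c[("A", "~B")]   # C(A) = 0.6
c_B_A    = c[("A", "B")] / c_A              # C(B | A) = 0.5
c_notB_A = c[("A", "~B")] / c_A             # C(~B | A) = 0.5

# With * as identity, (DAB) plus the desirability decomposition demands:
#   C(A) = C(A & B)·C(B | A) + C(A & ~B)·C(~B | A)
rhs = c[("A", "B")] * c_B_A + c[("A", "~B")] * c_notB_A

print(c_A, rhs)  # 0.6 vs 0.3 (approx.): the required identity fails
```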

So I part company with you when you say “our actualist, who thinks the star function is the identity function, is coherent.”

Best wishes,

Alan

Quale,

The idea of valuation functions is fairly standard in the decision theory literature. See, for instance, Richard Jeffrey’s *The Logic of Decision* for a summary of the formal constraints. They include the ones you mentioned, and a few more besides, though I don’t have a quick description of the constraints.

Alan,

That does look like a problem, but I wonder whether these problems are really based around us putting an unreasonable demand on the anti-Humean. When the anti-Humean says that (for instance) to be valued is to be true, that needn’t mean that V(A) = C(A). What it means is that V(A) = 1 if A is true, and V(A) = 0 if A is false. When C(A) is between 0 and 1, then we don’t know what V(A) is. The anti-Humean owes us a story of what happens in that case. But so does the Humean. The Humean only avoids Lewis’s results by assuming that there is never any doubt about V(A). What happens when V(A) is unknown is not entirely clear on the orthodox picture. I’d bet (small amounts at reasonable odds) that whatever they say can be transferred across to the anti-Humean.
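A minimal sketch of the reading I have in mind (the 0.6 credence is from the baseball example, the rest invented): V(A) is the truth value of A, so when C(A) is strictly between 0 and 1 the agent doesn’t know V(A), though her *expectation* of V(A) collapses to C(A).

```python
def V(a_is_true):
    # Actualist valuation: to be valued is to be true.
    return 1.0 if a_is_true else 0.0

c_A = 0.6  # credence that A is true

# The agent can't read V(A) off her credences; it is one of two values:
possible = {V(True), V(False)}          # {1.0, 0.0}

# But her expectation of V(A) is just her credence in A:
expected_V = c_A * V(True) + (1 - c_A) * V(False)
print(expected_V)  # 0.6
```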