Moral Concepts and Meanings

I’m pretty sure this is discussed somewhere, but maybe it hasn’t been, so let’s try.

It’s (very) plausible that someone can share our moral concepts and disagree, perhaps extremely, about how they apply. Osama bin Laden doesn’t mean something different to what I do by ‘good’, he just has wild views about which kinds of actions are and aren’t good. The proof of this, if it’s needed, is that when he says “Killing Westerners is good”, he’s revealing he has different morals to me, not a different language. (Well, he has some different languages to me, but when we’re both speaking English we mean the same thing by good.)

It’s also plausible that some people can share our moral concepts and disagree, perhaps extremely, about the conceptual connections between moral belief and action. This is just David Brink’s case of the amoralist.

It’s not plausible, or at least not to me, that someone could share our moral concepts but differ extremely in *both* which things they apply to and what their connection to action is. That is, someone who said things like “Killing Westerners is good”, “Supporting democracy is bad” etc., but wasn’t at all moved to kill Westerners or undermine support for democracies would, I think, mean something different to us by “good” and “bad”.

Perhaps we can imagine such a person. Imagine an amoralist in Al-Qaeda land, who goes around saying “Killing Westerners is good” and so on, but is completely unmotivated, even denies that the goodness of killing Westerners provides her with a reason to actually go and kill Westerners. Perhaps she would be just like Brink’s amoralist, and perhaps she would mean what we mean by “good”. But the case looks very marginal.

All of this does make me think that the ‘scare quotes’ response to Brink is the right one. If we can only make sense of the amoralist as expressing moral concepts when her moral expressions match up with moral orthodoxy, then it’s plausible that by “good” she just means something like “usually called _good_”.

Two-Envelopes and Variables

“Eric Schwitzgebel”:http://www.faculty.ucr.edu/~eschwitz/ and “Josh Dever”:https://webspace.utexas.edu/deverj/personal/dever.html have “a paper on the two-envelope paradox”:http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/TwoEnvelope.htm arguing that the paradox arises because of faulty reasoning involving variables. They note that if we impose a constraint on which variables can be used in decision-theoretic reasoning, the paradoxical reasoning is blocked. I won’t repeat the formal version of the constraint (from page 4 of the paper) in HTML. But the effect is that X is only a legitimate variable if “the expected value of X is the same conditional on each event in the partition.” The problem is then that the paradoxical reasoning essentially involves appeal to a variable that does not satisfy this constraint.
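To make the constraint concrete, here is one way it might be formalised for a finite outcome space. This is my own toy rendering, not Schwitzgebel and Dever’s notation: a variable counts as legitimate relative to a partition only if its expected value, conditional on each cell of the partition, is the same.

```python
# Sketch (my formalisation, not Schwitzgebel and Dever's own notation):
# a "variable" is a function from outcomes to numbers; it is legitimate
# relative to a partition iff its expected value, conditional on each
# cell of the partition, is the same.

def conditional_expectation(prob, var, cell):
    """E[var | cell] under the probability assignment prob (a dict)."""
    p_cell = sum(prob[w] for w in cell)
    return sum(prob[w] * var[w] for w in cell) / p_cell

def is_legitimate(prob, var, partition, tol=1e-9):
    exps = [conditional_expectation(prob, var, cell) for cell in partition]
    return all(abs(e - exps[0]) < tol for e in exps)

# Toy two-envelope model: my envelope holds 10 or 20, each with chance 1/2;
# "the amount in the other envelope" is twice or half what mine holds.
prob = {"mine=10": 0.5, "mine=20": 0.5}
other = {"mine=10": 20, "mine=20": 10}   # amount in the other envelope
partition = [["mine=10"], ["mine=20"]]

# E[other | mine=10] = 20 but E[other | mine=20] = 10, so "the amount in
# the other envelope" fails the constraint relative to this partition.
print(is_legitimate(prob, other, partition))  # False
```

A constant variable, by contrast, trivially passes the test, which is why ordinary expected-value reasoning about fixed sums survives the constraint.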

As an aside, this _kind_ of response is not uncommon in the two-envelope literature, so it’s worth taking seriously. And Schwitzgebel and Dever’s version of the response is by far the most careful and plausible I have seen. (And it’s probably the earliest such version, given their note in the paper that they discussed this with people in Berkeley in 1993. Given the history of the two-envelope discussion, where so much happens online etc., this kind of fact seems quite relevant to priority, if priority matters at all here.) But it still seems flawed.

Here’s the reason. It’s true that their constraint blocks the paradoxical reasoning. But getting a constraint with that property is dead easy: just say that all decision-theoretic reasoning is invalid, and you’re done. The hard part is finding a constraint that knocks out the two-envelope reasoning, but not any reasoning that we want, both intuitively and on reflection, to preserve. And I think Schwitzgebel and Dever’s constraint fails that test.

Consider the following example. God partitions the reals in [0, 1] into two unmeasurable sets, S1 and S2. He picks a real at random from [0, 1]. If it’s in S1, He puts $10 into a red envelope; if it’s in S2, He puts $20 into that red envelope. He then rolls two fair and independent dice. If they land double-six, He puts an amount into a blue envelope equal to the amount in the red envelope plus $5. Otherwise, He puts an amount into that blue envelope equal to $5 less than the amount in the red envelope. Got it? (It’s easier with tables, but tables are hard in blogs.)

You are not told which number He picked, or how the dice landed, but you are told all of the above. You are then given a choice of the red or blue envelopes. How should you choose?

I take it that it’s obvious you should pick the red envelope. After all, whatever is in it, you have a 35/36 chance of getting $5 less with blue, and only a 1/36 chance of getting more. So I say, pick red.
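A quick simulation sketch of the case. We obviously can’t sample from an unmeasurable set, so here I just flip a coin for whether the red envelope holds $10 or $20 (that substitution is mine, and the dominance point is unaffected by it): whatever red holds, blue is $5 less with chance 35/36.

```python
import random

# Sketch of the red/blue envelope case. The unmeasurable-set step is
# replaced by a plain coin flip for $10 vs $20 -- my stand-in, since an
# unmeasurable set can't be sampled. The dominance argument survives:
# whatever red holds, blue is $5 less with chance 35/36.

def play_once(rng):
    red = rng.choice([10, 20])           # stand-in for God's unmeasurable pick
    double_six = rng.randrange(36) == 0  # two fair dice: 1/36 chance
    blue = red + 5 if double_six else red - 5
    return red, blue

rng = random.Random(0)
trials = [play_once(rng) for _ in range(100_000)]
freq_red_better = sum(red > blue for red, blue in trials) / len(trials)
print(round(freq_red_better, 3))  # close to 35/36, i.e. about 0.972
```

None of this gives either envelope a defined expected utility once the unmeasurable partition is restored; the point is only that the pairwise comparison favouring red is robust.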

But Schwitzgebel and Dever can’t say that. For they say the above reasoning violates their constraints on which variables can be used. (Or, more precisely, that any formalised version of the above reasoning would do so.) As near as I can tell, the reasoning I just gave is just as bad, by their lights, as the paradoxical two-envelope reasoning.

As I see it, they are now under an obligation. For it seems obvious that red is better than blue, so they should tell us what principle they *do* endorse that gets that conclusion. It can’t just be the principle _Always maximise expected utility_, since in this case neither picking red nor picking blue has a defined expected utility. And, although this might just be a failure of imagination on my part, I can’t see what else it might be.

While I’m in this combative mood, I should also note that this example casts some doubt on _any_ attempt to resolve the two-envelope paradox by appeal to expected utility reasoning. For the two-envelope paradox rests on principles that are plausible in cases like this one, even when expected utility reasoning fails. I’ll be polite/lazy enough to not quote anyone who actually does try and solve the problem that way.

Eklund on Vagueness

I just noticed that Matti Eklund’s paper “What Vagueness Consists In” is on the ‘forthcoming papers’ list at “Philosophical Studies”:http://www.kluweronline.com/issn/0031-8116/contents. As is traditional, I’ll try to honour his paper by coming up with as many counterexamples as I can. I borrowed one of them from “Jonathan Ichikawa”:http://ichikawa.blogspot.com/2004_03_01_ichikawa_archive.html#107902380173158917. This all gets very long, and I got a little footnote-crazy (if you’re going to use footnotes you really should exploit their full comic potential) so it’s all below the fold.

Game Theory on the Spot

As noted before, I’ve never understood a lot of the attraction behind game theory. In particular, I’ve never heard a convincing argument for why Nash equilibria should be considered especially interesting. The only argument I know of for choosing your side of a Nash equilibrium in a one-shot game involves assuming, while deciding what to do, that the other guy knows what decision you will make. This doesn’t even make sense as an idealisation. There’s a better chance of defending the importance of Nash equilibria in repeated games, and I think this is what evolutionary game theorists make a living from. But even there it doesn’t make a lot of sense. In the most famous game of all, the Prisoner’s Dilemma, we know that the best strategy in repeated games is __not__ to choose the equilibrium option, but instead to uphold mutual cooperation for as long as possible.

The only time Nash equilibria even look like being important is in repeated zero-sum games. In that case I can almost understand the argument for choosing an equilibrium option. (At least, I can see why that’s a not altogether ridiculous heuristic.) One of the many benefits of the existence of professional sports is that we get a large sample of repeated zero-sum games. And in one relatively easy to model game, penalty kicks, it turns out players really do act like they are playing their side of the equilibrium position, even in surprising ways.
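For readers who haven’t seen it worked out, here is the standard indifference-condition calculation for a 2×2 zero-sum game, framed as a simplified penalty kick. The scoring probabilities below are invented for illustration; they are not the Chiappori, Levitt and Groseclose numbers.

```python
# Sketch: mixed-strategy equilibrium of a 2x2 zero-sum game via the
# standard indifference condition. Payoffs are the kicker's scoring
# probabilities; the numbers are made up, not from the paper's data.

# rows: kicker kicks Left/Right; columns: goalie dives Left/Right
A = [[0.55, 0.95],   # kicker Left
     [0.93, 0.60]]   # kicker Right

# Kicker mixes p on Left so the goalie is indifferent between her dives:
#   p*A[0][0] + (1-p)*A[1][0] = p*A[0][1] + (1-p)*A[1][1]
p = (A[1][1] - A[1][0]) / (A[0][0] - A[0][1] + A[1][1] - A[1][0])

# Goalie mixes q on Left so the kicker is indifferent between her kicks:
q = (A[1][1] - A[0][1]) / (A[0][0] - A[1][0] + A[1][1] - A[0][1])

# Value of the game (kicker's scoring probability at equilibrium).
value = p * (q * A[0][0] + (1 - q) * A[0][1]) + \
        (1 - p) * (q * A[1][0] + (1 - q) * A[1][1])
print(round(p, 3), round(q, 3), round(value, 3))
```

The equilibrium prediction the paper tests is roughly this: each player’s scoring probability should be the same across their pure strategies, which is what mixing to make the opponent indifferent delivers.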

bq. Testing Mixed Strategy Equilibria When Players Are Heterogeneous: The Case of Penalty Kicks in Soccer (P.A. Chiappori, S. Levitt, T. Groseclose). (paper, tables) (Hat tip: Tangotiger)

Some of you will have seen this before, because it was published in __American Economic Review__, but I think it will be news to enough people to post here. The results are interesting, but mostly I’m just jealous that those guys got to spend research time talking to footballers and watching game video. I haven’t heard any work that sounded less like research since I heard about that UC Davis prof whose research consists in part of making porn movies.

Changing my Mind

I change my mind on philosophical matters about once a decade, so even considering that something I have hitherto believed is wrong is quite a rare experience. It’s a pretty esoteric little point to change my mind on though.

For a long time, at least 7 or 8 years I think, I’ve thought it best to model the doxastic states of a rational but uncertain agent not by a single probability function, but by sets of such functions. I’m hardly alone in such a view. Modern adherents have (at various times) included Isaac Levi, Bas van Fraassen and Richard Jeffrey. Like Jeffrey and (I believe) van Fraassen, and unlike Levi, I thought this didn’t make any difference to decision theory. Indeed, I’ve long held that a sequence of decisions is rationally permissible for an agent characterised by set S iff there is some particular probability function P in S such that no action in the sequence is sub-optimal according to P. I’m thinking of changing that view.
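The rule just stated can be sketched in code. Everything below (the worlds, the two priors, the utilities) is a toy illustration of my own, just to make the quantifier structure vivid: permissibility requires one single P in the representor to vouch for every choice in the sequence.

```python
# Sketch of the permissibility rule above: a sequence of choices is
# permissible iff some single probability function P in the representor S
# makes every choice in the sequence expected-utility optimal.
# Worlds, priors and utilities below are toy numbers.

def expected_utility(P, utilities):
    return sum(P[w] * utilities[w] for w in P)

def optimal(P, options):
    """The options that maximise expected utility under P."""
    eus = {act: expected_utility(P, u) for act, u in options.items()}
    best = max(eus.values())
    return {act for act, eu in eus.items() if abs(eu - best) < 1e-9}

def permissible(S, sequence):
    """sequence: list of (chosen_act, options) pairs; options: act -> utilities."""
    return any(all(chosen in optimal(P, options) for chosen, options in sequence)
               for P in S)

# Representor with two sharply different priors over two worlds.
S = [{"w1": 0.3, "w2": 0.7}, {"w1": 0.6, "w2": 0.4}]
options = {"A": {"w1": 1, "w2": 0}, "B": {"w1": 0, "w2": 1}}

print(permissible(S, [("A", options), ("A", options)]))  # True: the second prior vouches for both
print(permissible(S, [("A", options), ("B", options)]))  # False: no single prior vouches for both
```

Notice that A and B are each permissible taken alone, but the mixed sequence A-then-B is not: that is the sense in which the single-P quantifier bites across sequences rather than choice by choice.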

The reason is similar to one given by Peter Walley. He argues that the position I just sketched is too restrictive. The important question for Walley concerns countable additivity. He thinks (as I do) that the arguments from conglomerability show that any agent represented by a single probability function should be represented by a countably additive function. But he notes there are sets of merely finitely additive functions such that any agent represented by such a set who follows his decision-theoretic principles will not be Dutch Booked. He argued that such an agent would be rational, so rationality cannot be equivalent to representability by acceptable probability functions.

I never liked this argument for three reasons. First, I didn’t accept his decision principles, which seemed oddly conservative. (From memory it was basically act only if all the probability functions in your representor tell you to act.) Second, I don’t think Dutch Book arguments are that important. I’d rather have completely epistemological arguments for epistemological conclusions. Third, the argument rested on an odd restriction to agents with bounded utility functions, and I don’t really see any reason to restrict ourselves to such agents. So I’d basically ignored the argument up until now. But now I’m starting to appreciate it anew.

I would like to defend as strong a conglomerability principle as possible. In particular I would like to defend the view that if Pr(p | p or q) -t/h

If I’ve done the maths right, for any interval of length l, the objective chance that g(t) falls into that interval is l. So prior to the process starting up, I had better assign probability l to g(t) falling in that interval. The question now is whether I can extend that to a complete (conditional) probability function in anything like a plausible way, remembering that I want to respect conglomerability. I’m told by people who know a lot more about this stuff than I do that it will be tricky. Let’s leave the heavy lifting maths for another day, because here is where I’m starting to come around to Walley’s view.

Consider the set of all probability functions such that, for any interval I of length l,

(1) Pr(g(t) is in I) = l.

Some of these will not be conglomerable. Consider, for instance, the function that as well as obeying (1) is such that Pr(g(t) = x | g(t) = x or y) = 1/2 for any reals x, y. That won’t be conglomerable, since Pr(g(t) … (See Barkley Rosser’s papers, especially this one on the Holmes-Moriarty problem. Rosser’s work is philosophical enough I think that I should probably track him on the papers blog. I’m very grateful to Daniel Davies for pointing out Rosser’s site to me.)

A Really Bizarre Two-Envelope Paradox

This could get complicated. I wanted to create a two-envelope paradox where the expected utility of receiving either envelope was not infinite. It’s impossible to create a paradox when the utility is finite, but it turns out it is possible to devise one where the utility is undefined. What’s the difference between an infinite utility and an undefined utility? Well, if X is infinitely valuable, it is irrational to prefer any good with finite utility to it, whereas if its utility is undefined, such a preference would be rationally acceptable. For a simple example, consider a situation like the following.

Eris tosses a fair coin repeatedly until it falls heads. She counts how many tosses that took, call that n, and then places something worth (-2)^n utils in an envelope. How much is the envelope worth?

If you try and work this out, it comes to -1+1-1+1-1+1-…, which is obviously undefined. I think (and I could be wrong about this) that for any good with finite utility, it is rationally permissible to be indifferent between Eris’s envelope and that good.
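The non-convergence is easy to see numerically. Since heads first lands on toss n with chance (1/2)^n and the payoff is (-2)^n utils, each term of the expectation contributes (-1)^n, so the partial sums just bounce between -1 and 0 forever:

```python
from itertools import islice

# Partial sums of the expected value of Eris's envelope. Heads first lands
# on toss n with chance (1/2)**n, and the payoff is (-2)**n utils, so the
# n-th term is (0.5**n) * ((-2)**n) = (-1)**n. The partial sums oscillate
# between -1 and 0 and never converge: the expectation is undefined.

def partial_sums():
    total = 0.0
    n = 1
    while True:
        total += (0.5 ** n) * ((-2) ** n)
        yield total
        n += 1

print(list(islice(partial_sums(), 8)))  # [-1.0, 0.0, -1.0, 0.0, -1.0, 0.0, -1.0, 0.0]
```

(The terms don’t even shrink, so no rearrangement or summation trick rescues a value here; this is non-convergence of the bluntest kind.)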

That isn’t the two envelope paradox I have in mind though. It works something like this. Eris takes three fair coins, A, B and C. She tosses A repeatedly until it falls heads. Let n be the number of tosses this takes. She then tosses B and C. The amount of utility put into the two envelopes is determined as follows:

         Larger     Smaller
Heads    3^(n+1)    3^n
Tails    5-g(n)     5-g(n+1)

where Heads means B lands heads, Tails means B lands tails, and g is the function recursively defined as follows:

g(1) = 2
g(n+1) = 2 * g(n) - n/4

If coin C lands heads she puts the larger amount into the blue envelope, and the smaller amount into the red envelope. If it lands tails she puts the larger amount into the red envelope, and the smaller amount into the blue envelope. She then gives you the blue envelope.
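Here is the envelope-filling procedure as a simulation sketch. Two reading assumptions of mine are baked in: I take the Heads amounts in the table to be 3^(n+1) and 3^n (exponents, not 3n+1 and 3n), and I read the Tails row at face value as 5-g(n) and 5-g(n+1).

```python
import random

# Sketch of Eris's procedure. Assumptions (mine): the Heads amounts are
# 3**(n+1) and 3**n, and the Tails amounts are 5 - g(n) and 5 - g(n+1),
# read straight off the table above.

def g(n):
    # g(1) = 2; g(n+1) = 2*g(n) - n/4
    return 2 if n == 1 else 2 * g(n - 1) - (n - 1) / 4

def fill_envelopes(rng):
    n = 1
    while rng.random() < 0.5:   # toss coin A until it falls heads
        n += 1
    if rng.random() < 0.5:      # coin B lands heads
        larger, smaller = 3 ** (n + 1), 3 ** n
    else:                       # coin B lands tails (g is increasing, so 5-g(n) > 5-g(n+1))
        larger, smaller = 5 - g(n), 5 - g(n + 1)
    if rng.random() < 0.5:      # coin C heads: larger amount goes in blue
        blue, red = larger, smaller
    else:                       # coin C tails: larger amount goes in red
        blue, red = smaller, larger
    return blue, red

print(fill_envelopes(random.Random(1)))
```

Note that on either branch the “larger” amount really is larger, since g is strictly increasing, so coin C genuinely decides which envelope wins.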

Question: How much is the blue envelope worth?
Answer (I hope): It’s undefined.

Question: Should you pay to swap envelopes?
Answer: Er, no.

Question: If you see how much is in the blue envelope, will you want to swap?
Answer (I hope): Yes – whatever you see, the expected utility of swapping is at least 1/12.

Question: Was this whole thing just an attempt to get fewer readers?
Answer: No – some people find this stuff genuinely interesting. Well, at least I find it genuinely interesting and I can project.

Complex Demonstratives and Singular ‘they’

Here’s a neat fact I learned from Geoff Pullum’s radio talk about singular ‘they’.

It’s appropriate to use ‘they’ in spoken English as a singular pronoun, provided it plays something like the role of a bound variable. So (1) could mean (2):

(1) Every scientist said they believe in evolution.
(2) [All x: scientist x]Believe in evolution(x)

The proviso is important. You can’t use ‘they’ as short for ‘he or she’ (as it appears to be used in (1)) when it is anaphoric on a name.

(3) Morgan said they believe in evolution.

In (3) ‘they’ has to refer to some group. Morgan might be part of that group, but he or she can’t be denoting him or herself with ‘they’. Wouldn’t it be easier if I could just say “they might be denoting themselves” there? I can’t, which shows that the use of ‘they’ derided by some self-ordained grammarians is actually rule-governed. (If anyone has seen Bill Safire sounding off on this use of ‘they’ I would be very happy to see quotes!)

It’s not just universal quantifiers that can bind singular ‘they’, as the following examples show. (These are all from Pullum.)

‘Everybody should marry as soon as they can do it to advantage.’ That’s Jane Austen in 1814. And there are thousands of other examples down the years.

‘A person can’t help their birth.’ (That’s William Makepeace Thackeray in 1848.)

‘Nobody fancies for a moment that they are reading anything beyond the pale.’ (That’s Walter Bagehot in his book ‘Literary Studies’ in 1877.)

‘… at the end of the season when everyone has practically said whatever they had to say …’ (That’s Lady Bracknell speaking in Oscar Wilde’s play, The Importance of being Earnest in 1895, and Lady Bracknell never says anything ungrammatical.)

‘Who ever thought of sparing their grandmother worry?’ (That’s Edith Wharton writing in 1920, using singular ‘they’ with ‘who’ as the antecedent.)

‘Too hideous for anyone in their right mind to buy.’

The conclusion Pullum draws from this data is:

The relevance of the distinction is this: in English, the pronoun ‘they’ is fairly strictly limited to having a plural-inflected antecedent when it is used as a referring pronoun, but there is no such restriction when it’s a bound pronoun.

He attributes much of this to a PhD dissertation by Rachel Lagunoff, who pointed out that some genuinely existential quantifiers can govern ‘they’, as in (4).

(4) There’s a caller with a musical question on Line 1. They realise they may have to wait. (This was an example Pullum noticed while going in to the studio to record the talk I’m ripping off here.)

I’m not sure referential/bound is quite the right distinction here, because I think (5) sounds bad, even if the definite description is uncontroversially attributive.

(5) The scientist said they believe in evolution.

Still, I think there’s a good point here: when the NP is referential, singular ‘they’ is inappropriate. Which brings me back to the title. Some of the time I can convince myself that complex demonstratives can license singular ‘they’, as in (6).

(6) That scientist said they believe in evolution.

(6) is a little marginal, especially compared to the Austen-to-Wharton examples above, but I think it can be OK. And that’s a bit of evidence (hardly compelling, but evidence) for the claim that complex demonstratives are quantificational rather than referential.

Confession. I haven’t gone and looked up the literature on complex demonstratives, and for all I know this argument has been refuted more times than I’ve had Chinese dinners. If not, I gladly offer up some more evidence for the quantificational side of the disputes about complex demonstratives.

NPIs, Modals and Tense

Here’s a cute little factoid I found out about from Ben Russell, a PhD student at Brown who is working on (among other things) the relationship between implicatures and NPI licensing. One issue that arises with NPIs is how they behave in complex sentences inside the scope of a ‘negative’ modifier. It turns out there are some surprising generalisations in the area. For instance, it seems NPIs are not licensed inside conjunctions unless something that licenses them appears in the same conjunct. So even though (1) is OK, (2) is bad.

(1) I doubt that he ate any beans.
(2) *I doubt that he ate some potatoes and any beans.

We get a similar result with NPIs inside universal quantifiers.

(3) I doubt that he lifted a finger to help.
(4) *I doubt that everyone lifted a finger to help.

(Note, by the way, that the NPIs in (2) and (4) are in downward-entailing environments.)

Those results are fairly well known, but it doesn’t seem there’s been much work on how far these results can be extended. It seems we don’t get a similar result with modals. (5) isn’t a great sentence, but it seems like it could be a reasonable way to express a certain kind of anti-essentialist view.

(5) He doesn’t necessarily have any parents.

On the other hand, we do (it seems to me) get a similar result for temporal modifiers.

(6) ??He didn’t always have any children.
(7) *He doesn’t always give a red cent to the Christmas charity appeal.

Compare

(8) He always doesn’t give a red cent to the Christmas charity appeal.

(8) isn’t perfect, but it’s nowhere near as bad as (7).

The real news, to me, was the difference between (5) and (6). I would have thought that both modifiers like always, which are quantifiers across times, and modifiers like necessarily, which are (I thought) quantifiers across worlds, would be syntactically similar. But they behave quite differently here. (By the way, I put the emphasis on any in both (5) and (6) deliberately. Apparently stressed any is a stronger NPI than unstressed any, and it certainly helps to bring out the contrast.)

Impossible Stories

Wo makes several good points about my imaginative resistance paper. It will take me a while to respond to all of them, but I just want to respond to one point for now. Wo suggests that my impossible time travel stories are not really impossible, they are just taking place in branching time. This is a good objection. I have to say more than I’ve said to show these really are impossible stories that don’t generate imaginative resistance.

One point is that The Restaurant at the End of the Universe wasn’t just supposed to be an impossible time travel story. It was supposed to be a story that was internally incoherent. I have my doubts that one could watch the end of the universe even once. Wouldn’t you be seeing it after it happened, which is after the universe ended?

I don’t have a full story here, but I think that even without the time travel component (you know, the going back and seeing it again from the same spot without running into yourself) there’s an impossibility here. And I think the impossibility arises from combinatorialism run amok. We can imagine a certain event, say the end of the universe. We can imagine ourselves watching a different event, say a lunar eclipse. So we can imagine watching the end of the universe, by substituting the first event in place of the lunar eclipse. And voila, impossibility in imagination!

Here’s another try at an impossible story that doesn’t generate imaginative resistance. At least, there’s no alethic puzzle. It’s pretty clearly true in the story that quadragons exist. You’ll have to read it to find out what a quadragon is, but suffice to say, it’s impossible.

The story is long, so I put it in the expanded section. I also don’t want to claim any virtues for the quality of the writing. If I ever use it I’ll try hamming it up a bit more because it’s meant to be a parody of cartoon superhero stories. (Whether this kind of parody is cheating, a point that Wo alludes to at the end of his post, is hard to say. I should try writing the story straight.)