Animal Communication

The other day I was reading about the amazing <A href=>waggle dance</a> that honeybees perform to tell their hivemates the location of food, and then found the Wikipedia page on <A href=>animal communication</a>. From reading this article, it seems that some very useful work could be done if philosophers of language collaborated with ethologists (or whichever scientists work in this field) to clear up some of the fundamental issues. Now, I don’t know how representative of the field the Wikipedia article is (it references many studies and papers, though it’s hard to tell whether experts agree with its overall organization), but it suggests some fundamental confusions.

The Wikipedia article states that “Animal communication is any behavior on the part of one animal that has an effect on the current or future behavior of another animal.” This is a nice operational definition for scientists to use, but it obviously has some flaws, as the article itself admits further along:<blockquote>If a prey animal moves or makes a noise in such a way that a predator can detect and capture it, that fits the definition of “communication” given above. Nonetheless, we do not feel comfortable talking about it as communication. Our discomfort suggests that we should modify the definition of communication in some way, either by saying that communication should generally be to the adaptive advantage of the communicator, or by saying that it involves something more than the inevitable consequence of the animal going about its ordinary life.</blockquote> It seems to me that just thinking about things in Gricean terms would help clear things up.

Some interesting examples that are discussed include warning coloration (many poisonous animals have very bright coloration, which has co-evolved with the perceptual systems of potential predators, saving both species much grief in the long run), pursuit deterrence (some antelopes engage in “stotting”, high jumping while starting to run, when escaping predators, to indicate that they have the energy to far outrun the predator), and warning signals (many monkeys make certain vocalizations to alert their group to the presence of predators). These particular examples seem to rely on different aspects of Grice’s account of speaker meaning. Warning coloration doesn’t seem to rely on any particular intention of the “speaker”; in fact, the animal with the coloration generally has no intentional control over it at all. Stotting is similar: a predator that sees the antelope stotting can quickly realize that it can’t catch the potential prey, and will give up. One difference between the two, however, is that warning coloration is purely conventional (a predator may know that bright orange frogs are poisonous, but if it ends up in a different environment with bright blue snakes, it might not recognize the signal), while stotting is somehow more natural (which is not to say that every potential predator will recognize the speed advantage that stotting indicates; this shows that there is still a difference between stotting and the rustle animals generally make in the bushes, which is an unmistakable sign of prey). The warning calls that monkeys make seem to involve more of the Gricean mechanism. They may or may not be intentional in the sense we are familiar with from human behavior (perhaps they’re more akin to a human saying “ouch!” when hurt), but recognition of the quasi-intention is essential for the targets of the signal. Unlike stotting, this is a signal that can be faked (stotting is presumably so demanding that an animal couldn’t fake it unless it were actually capable of outrunning the predator). Thus, the listener needs to understand the intention of the “speaker” in order to respond properly to the signal.

This last point about the potential for faking a signal has apparently been a focus of discussion – most evolutionarily stable animal communication is honest, though there are some instances of dishonesty.  (For instance, many harmless animals that live in the same environment as poisonous ones end up evolving the same coloration, to protect themselves from predators.  Human communication is another notable instance of animal communication that often involves dishonesty.)  But according to this article on animal communication, Amotz Zahavi has argued that evolutionarily stable dishonest communication is impossible – I don’t know exactly what the bounds of this claim are, but it sounds reminiscent of the Kantian argument for why lying is wrong.

Of course, even if some of this communication reaches the level of Gricean speaker-meaning, none of it seems to constitute full-fledged language. The Wikipedia article on <A href=>animal language</a> seems to make this clear, though again the categories that are studied seem like they might be slightly puzzling to philosophers of language. But I would guess there is good potential for interdisciplinary work in this area.

The Hiring (Im?)possibility Theorem

Following up on <A href=>Brian’s recent post</a> about candidates having to signal to departments that they’re actually interested, I’ll mention some ideas that my friend and colleague Mike Titelbaum and I were discussing one evening at the APA.

One thing that would remove the need for people to signal like this would be to put the decisions of who is hired where in the hands of some sort of benevolent third party (perhaps the APA or something like it). Candidates could submit a ranked list of their preferences for which department they’d like to be at, and departments could submit a ranked list of their preferences for candidates (after having seen the files and conducted interviews and such), and hopefully some sort of matching between candidates and departments could be arranged from this information. (We might also want to allow some sort of cut-off, where a candidate or department could specify that they’d rather remain unmatched this year and repeat the search next year than take anything further down their list.) If this could be centralized, it would eliminate the inefficiency that arises each year when a position goes unfilled because a department’s first few choices take other jobs, by which point its later choices have already settled for something else. These situations hurt both candidates and departments, because there are fewer actual jobs to go around, and some departments end up having to repeat the whole search process.

The important question to answer (ignoring temporarily the question of whether such a process would have negative consequences as well as positive ones) is whether such a process is even possible. Of course, one could just randomly assign candidates to jobs, but that would be no good – we’d want the assignment process to satisfy certain criteria.

1. The process should be able to take any set of rankings of departments and candidates and produce an assignment, with exceptions only if it’s impossible to construct a matching meeting the minimum acceptability cutoffs.

2. The matching should be “stable” in the sense that if C1 is matched with D1 and C2 is matched with D2, then it should not be the case that C1 prefers D2 to D1 and D2 prefers C1 to C2. (This condition guarantees that no department and candidate have an incentive to defect from the centralized assignment. Perhaps this condition can be dropped if the overall system is important enough to people’s long-term careers that there are already strong incentives not to defect.)

3. If one particular list of preferences produces a match between candidate C and department D, then keeping the same lists of preferences while raising C’s position on D’s list, or D’s position on C’s list, should also result in C and D being matched. (This is the condition that guarantees there is no incentive to falsely list one’s preferences. We might want to further require that these changes in preferences make <i>no</i> change in the overall matching, because these changes should be irrelevant to anyone else’s matching.)

4. Maybe there should be other conditions too – the only potential one that comes to mind is that changing your preferences among candidates or departments that are lower on your list than the one you were matched to shouldn’t change anything, though perhaps this criterion is more arguable.

Once we’ve got a list of criteria like this, it should be possible either to construct an algorithm that meets these criteria, or to prove that no such algorithm exists. By <A href=>Hall’s Marriage Theorem</a>, criterion 1 is always possible as long as there is no set of m departments that find only the same n candidates acceptable, or m candidates that only find the same n departments acceptable, with m>n. By the <A href=>Stable Marriage Theorem</a>, there is in fact an algorithm that satisfies criterion 2. The question is whether these two can be combined with criteria 3 and 4.
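The algorithm behind the Stable Marriage Theorem is Gale–Shapley deferred acceptance. Here is a minimal sketch of the department-proposing version, together with a checker for the stability condition in criterion 2; all department and candidate names are invented, and the sketch assumes equal numbers of departments and candidates with complete preference lists.

```python
def stable_match(dept_prefs, cand_prefs):
    """Department-proposing deferred acceptance (Gale-Shapley).
    Returns a department -> candidate matching with no blocking pair."""
    free = list(dept_prefs)                 # departments not yet matched
    next_pick = {d: 0 for d in dept_prefs}  # index of next candidate to approach
    held = {}                               # candidate -> department currently held

    while free:
        d = free.pop()
        c = dept_prefs[d][next_pick[d]]     # d's best candidate not yet tried
        next_pick[d] += 1
        if c not in held:
            held[c] = d                     # c provisionally accepts d
        elif cand_prefs[c].index(d) < cand_prefs[c].index(held[c]):
            free.append(held[c])            # c trades up; old department re-enters
            held[c] = d
        else:
            free.append(d)                  # c rejects d
    return {d: c for c, d in held.items()}

def blocking_pairs(match, dept_prefs, cand_prefs):
    """Pairs (d, c) that would both prefer each other to their assignments
    (criterion 2 fails iff any exist)."""
    dept_of = {c: d for d, c in match.items()}
    pairs = []
    for d, prefs in dept_prefs.items():
        for c in prefs:
            if c == match[d]:
                break                       # d prefers its own match from here on
            if cand_prefs[c].index(d) < cand_prefs[c].index(dept_of[c]):
                pairs.append((d, c))
    return pairs

dept_prefs = {"D1": ["C1", "C2"], "D2": ["C1", "C2"]}
cand_prefs = {"C1": ["D2", "D1"], "C2": ["D2", "D1"]}
m = stable_match(dept_prefs, cand_prefs)
assert m == {"D2": "C1", "D1": "C2"}
assert blocking_pairs(m, dept_prefs, cand_prefs) == []
```

Note that both departments want C1, but since C1 prefers D2, stability forces D1 to settle for C2; no department-candidate pair has an incentive to defect.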

Now, since criteria 3 and 4 were inspired by <A href=>Arrow’s Impossibility Theorem</a>, it might seem that such an algorithm is impossible. However, I have some hope here, because the construction is less involved than in Arrow’s setting. In Arrow’s theorem, the problem is that given a bunch of rankings, no group ranking can be found that is positively influenced by all of them. Here, we start with a bunch of rankings but don’t need to produce a ranking – we just need to produce a pairing. And differences in rankings seem like they should only make things easier (if the two of us have different first choices, a matching can satisfy both of us, while a single group ranking can’t).

In fact, if we don’t have minimum acceptability cutoffs, and all the candidates agree on the ranking of the departments (or vice versa), I can construct an algorithm that satisfies all these criteria. Just have the departments draft candidates one at a time, in order of the agreed-upon ranking, with each department taking its favorite candidate still available. (Or run the draft the other way, if the departments all agree on a ranking of the candidates.) Since disagreements in ranking look like they should intuitively make things easier, hopefully this means that there’s an algorithm that will work in general, though actually coming up with that algorithm looks much harder.
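The draft is simple enough to sketch directly; the names and preference lists here are invented for illustration.

```python
# All candidates share one ranking of the departments, so departments pick
# in that order, each taking its favorite candidate still available.

def draft(shared_dept_ranking, dept_prefs):
    """Serial draft: best-regarded department picks first."""
    taken = set()
    match = {}
    for d in shared_dept_ranking:
        match[d] = next(c for c in dept_prefs[d] if c not in taken)
        taken.add(match[d])
    return match

shared = ["D1", "D2", "D3"]   # every candidate prefers D1 to D2 to D3
dept_prefs = {"D1": ["C2", "C1", "C3"],
              "D2": ["C2", "C3", "C1"],
              "D3": ["C2", "C1", "C3"]}
assert draft(shared, dept_prefs) == {"D1": "C2", "D2": "C3", "D3": "C1"}
```

The result is stable for a simple reason: any candidate a department prefers to its own match was taken by an earlier department, which every candidate (by hypothesis) prefers, so no department-candidate pair wants to defect.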

Anyway, someone working on social choice theory should find the right set of criteria here and publish the proof of the possibility (or impossibility) theorem, and then the APA (and other professional societies) can have the discussion about whether or not to adopt the procedure. One thing that can be said for the current system is that it gives candidates and departments many chances to adjust their rankings of one another – this might be a way to get around impossibility theorems, which assume that the rankings are fixed.

Representation Theorems

This may all be old news to philosophers who work on decision theory and related things, but I think it bears repeating.

There’s an interesting post up at Cosmic Variance by the physicist Sean Carroll wondering idly about some issues that come up in the foundations of economics. One paragraph in particular caught my eye:

<blockquote>But I’d like to argue something a bit different – not simply that people don’t behave rationally, but that “rational” and “irrational” aren’t necessarily useful terms in which to think about behavior. After all, any kind of deterministic behavior – faced with equivalent circumstances, a certain person will always act the same way – can be modeled as the maximization of some function. But it might not be helpful to think of that function as utility, or [of] the act of maximizing it as the manifestation of rationality. If the job of science is to describe what happens in the world, then there is an empirical question about what function people go around maximizing, and figuring out that function is the beginning and end of our job. Slipping words like “rational” in there creates an impression, intentional or not, that maximizing utility is what we should be doing – a prescriptive claim rather than a descriptive one. It may, as a conceptually distinct issue, be a good thing to act in this particular way; but that’s a question of moral philosophy, not of economics.</blockquote>

There’s a lot of stuff in here. Part of this is a claim that science only addresses descriptive issues, not normative ones (or “prescriptive” in his words – I’m not sure what distinction there is between those two words, except that “prescriptive” sounds more like you’re meddling in other people’s activities). Now to a physicist I think this claim sounds natural, but I’m not sure that it’s true. I think it’s perhaps clearest in linguistics that scientific claims are sometimes about normative principles rather than merely descriptive facts. As discussed in this recent post by Geoffrey Pullum on Language Log, syntax is essentially an empirical study of linguistic norms – it’s not just a catalog of what sequences of words people actually utter and interpret, but includes their judgments of which sequences are right and wrong. Linguists may call themselves “descriptivists” to contrast with the “prescriptivists” that don’t use empirical evidence in their discussions of grammaticality, but they still deal with a notion of grammaticality that is essentially normative.

I think the same is true of economics, though the sort of normativity is quite different from the norms of grammaticality (and the other norms studied in semantics and pragmatics). There is some sort of norm of rationality, but of course it’s (probably) different from the sort of norm discussed in “moral philosophy”. Whether or not it’s a good thing to maximize one’s own utility, there’s a sense in which it’s constitutive of being a good decision maker that one does. Of course, using the loaded term “rationality” for this might be putting more force on this norm than we ought to (linguists don’t call grammaticality a form of rationality, for instance) but I think it’s actually a reasonable name for it. The bigger problem with the term “rationality” is that it can be used both to discuss good decision making and also good reasoning, thus confusing “practical rationality” and “epistemic rationality”.

And that brings me to the biggest point I think there is in this paragraph. While there might be good arguments that maximizing utility is the expression of rationality, and there might be some function that people descriptively go around maximizing, it’s not clear that this function will actually be utility. One prominent type of argument in favor of the claim that degrees of belief must obey the axioms of probability theory is a representation theorem. One gives a series of conditions that it seems any rational agent’s preferences should obey, and then shows that for any preferences satisfying these conditions there is a pair of a “utility function” and a “probability function” (unique, up to positive affine transformation of the utility) such that the agent’s preferences always maximize expected utility. However, for each of these representation theorems, at least some of the conditions on the preferences seem overly strong to require of rational agents, and even given the representation, Sean Carroll’s point still applies – what makes us sure that this “utility function” represents the agent’s actual utilities, or that this “probability function” represents the agent’s actual degrees of belief? Of course, the results are very suggestive – the “utility function” is in fact a function from determinate outcomes to real numbers, and the “probability function” is a function from propositions to values in the interval [0,1], so they’re functions of the right sort to do the job we claim they do. But it’s certainly not clear that there’s any psychological reality to them, the way it seems there should be (even if subconscious) for an agent’s actual utility and degree-of-belief functions.

However, if this sort of argument can be made to work, then we do get a connection between an agent’s observed behavior and her utility function. We shouldn’t assume her decisions are always made in conformity with her rational preferences (since real agents are rarely fully rational), but if these conditions of rationality are correct, then there’s a sense in which we should interpret her as trying to maximize some sort of expected utility, and just failing in certain instances. This sense is related to Donald Davidson’s argument that we should interpret someone’s language as having meanings in such a way that most of their assertions come out as true. In fact, in “The Emergence of Thought”, he argues that these representation theorems should be united with his ideas about “radical interpretation” and the “principle of charity” so that belief, desire, and meaning all fall out together. That is, the normativity of rationality in the economic sense (as maximizing expected utility) just is part of the sort of behavior agents have to approximate in order to be said to have beliefs, desires, or meaning in their thoughts or assertions – that is, in order to be an agent.

So I think Sean Carroll’s found an important point to worry about, but there’s already been a lot of discussion on both sides of this, and he’s gone a bit too fast in assuming that economic science should avoid any talk of normativity.

Dutch Books and Irrationality

One objection that Henry Kyburg raises in several places to the Dutch Book argument for the notion of subjective probability is that people can avoid Dutch Books by exercise of purely deductive reasoning, and therefore they provide no constraint on betting odds or the like. As he puts it in his 1978 paper, “Subjective Probability: Criticisms, Reflections, and Problems”:

<blockquote>No rational person, whatever his degrees of belief, would accept a sequence of bets under which he would be bound to lose no matter what happens. No rational person will in fact have a book made against him. If we consider a sequence of bets, then quite independently of the odds at which the person is willing to bet, he will decline any bet that converts the sequence into a Dutch Book.</blockquote>

I think there’s something right about the general point, but this particular passage I quoted seems just plain wrong. I’ll give an example in which it seems perfectly reasonable to get oneself into such a Dutch Book.
Let’s say that back in January I was very impressed by John McCain’s cross-partisan popularity, and his apparent front-runner status as the Republican nominee for president, so I spent $40 on a bet that pays $100 if he’s elected president. After a few months, seeing his poll numbers plummet, let’s say I became more bullish on Giuliani, and spent $40 on a bet that pays $100 if he’s elected instead. But now that Republicans seem to be backing away from him too, and that Hillary Clinton may be pulling ahead in the Democratic primary, say I now think she’s the most likely candidate to win. If Kyburg is right, then no matter what my degree of belief, I wouldn’t spend more than $20 on a bet that pays $100 if she wins, because I will have converted my set of bets into a Dutch Book against myself (assuming as I do that no more than one of them can be elected). However, it seems eminently rational for me to buy a bet on Clinton for some larger amount of money, because I regard my previous bets as sunk costs, and just want to focus on making money in the future.
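The arithmetic behind the $20 threshold can be checked directly. The dollar figures are as in the example, and (as in the text) I assume at most one of the three candidates can win.

```python
# $40 on McCain (pays $100), $40 on Giuliani (pays $100), plus a
# prospective bet on Clinton at some price, also paying $100.

def outcomes(clinton_price):
    """Net result of holding all three bets, for each way the election can go."""
    cost = 40 + 40 + clinton_price
    winners = ["McCain", "Giuliani", "Clinton", "someone else"]
    return {w: (100 if w != "someone else" else 0) - cost for w in winners}

# Paying more than $20 for the Clinton bet guarantees an overall loss:
assert max(outcomes(30).values()) < 0    # every outcome loses: a Dutch Book
assert max(outcomes(20).values()) == 0   # at exactly $20, break even at best
```

So Kyburg's threshold is right as a matter of bookkeeping: above $20 the combined bets lose in every outcome. The philosophical point in the text is that this sure loss can still be the reasonable forward-looking choice once the first $80 is treated as sunk.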

Something like this is possible on the Bayesian picture whenever I change my degrees of belief at all – I might have already made bets that I now consider regrettable, but that shouldn’t stop me from making future bets (unless it perhaps does something to convince me that my overall bet-placing skills are bad).

To be fair, I’m sure that Kyburg intends his claim only in the case where the agent is sequentially accepting bets in a setting where her beliefs aren’t changing, where the basic Dutch Book theorem is meant to apply. He’s certainly right that there are ways to avoid Dutch Books while still having betting odds that violate the probability axioms, unless one is somehow required to accept any sum of bets for and against any proposition at one’s published odds.

But somehow Kyburg seems to be suggesting that deductive rationality alone is sufficient to prevent Dutch Books, even with this extra flexibility. However, I’m not sure that this will necessarily happen – one can judge a certain loss as better than some combination of chances of loss and gain. And he even provides a footnote to a remark of Teddy Seidenfeld that I think makes basically this point!

<blockquote>It is interesting to note, as pointed out to me by Teddy Seidenfeld, that the Dutch Book against the irrational agent can only be constructed by an irrational (whether unscrupulous or not) opponent. Suppose that the Agent offers odds of 2:1 on heads and odds of 2:1 on tails on the toss of a coin. If the opponent is rational, according to the theory under examination, there will be a number p that represents his degree of belief in the occurrence of heads. If p is less than a half, the opponent will maximize his expectation by staking his entire stake on tails in accordance with the first odds posted by the Agent. But then the Agent need not lose. Similarly, if p is greater than a half. But if p is exactly a half, then the rational opponent should be indifferent between dividing his stake (to make the Dutch Book) and putting his entire stake on one outcome: the expectation in any case will be the same.</blockquote>
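Seidenfeld's indifference claim is easy to verify numerically. Here is a sketch with a total stake of 2 units against the posted 2:1 odds on each side; exact fractions avoid rounding issues.

```python
from fractions import Fraction

def ev(p, payoff_heads, payoff_tails):
    """Expected value given probability p of heads."""
    return p * payoff_heads + (1 - p) * payoff_tails

# The Agent posts 2:1 on heads and 2:1 on tails; the opponent stakes 2 units.
split = (2 - 1, 2 - 1)   # 1 unit each way: win 2 on one bet, lose 1 on the other
all_heads = (4, -2)      # 2 units on heads: win 4 if heads, lose 2 if tails

half = Fraction(1, 2)
assert ev(half, *split) == 1           # the Dutch Book: a sure gain of 1
assert ev(half, *all_heads) == 1       # same expectation at p = 1/2

# For p > 1/2, maximizing expectation says to abandon the Dutch Book:
assert ev(Fraction(2, 3), *all_heads) > ev(Fraction(2, 3), *split)
```

Splitting the stake yields a guaranteed gain of 1 whatever the coin does, while going all-in has the same expectation only at p = 1/2; anywhere else, the expectation-maximizing opponent declines to complete the book, which is exactly Seidenfeld's point.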

If Kyburg’s earlier claim that agents will never get themselves into Dutch Books is correct, then this argument by Seidenfeld can’t be – the same reasoning that keeps agents out of Dutch Books should make bookies buy them (unless a sure loss is more bad than the corresponding sure gain is good). I suspect that each of the two arguments will apply in some cases but not others. At certain points, the bookie will feel safer buying the Dutch Book, while at others, she will favor maximizing expectation. Similarly, the agent will sometimes feel safer allowing a Dutch Book to be completed against her, rather than exposing herself to the risk of a much greater loss.
I think Kyburg is right that there are problems with any existing formulation of the Dutch Book argument, but I think he’s wrong in the facts of this particular criticism, and also wrong about subjective probability as a whole. Seidenfeld’s argument is really quite thought-provoking, and probably deserves further attention.


Scholarpedia

I just found out from the blog of mathematician Terence Tao about Scholarpedia, which is apparently trying to fill the space between Wikipedia and academic encyclopedias. The goal is to be more authoritative than Wikipedia, and more responsive and current than other academic encyclopedias. In philosophy, this space is already filled quite well by the Stanford Encyclopedia of Philosophy, though I can also imagine a use for something in which multiple people can update and edit articles. Wikipedia itself seems quite spotty on philosophy right now (it seems quite good on math and physics, though perhaps less so for people who aren’t already well-educated on the relevant topics).

Since it’s quite new, there’s a lot that’s still under development, and there are especially few articles on philosophy so far. But if philosophers get involved in this early enough, it could become quite useful. It looks like they’re commissioning an article on philosophy of mind from Jerry Fodor. The article on the mind-body problem looks like it needs some revision at the moment. And the article on intentionality looks like it could use some philosophical additions – right now it seems to define intentionality as a property only of brains. (Even if this is a technical use of the word, it seems relevant to mention the different but related technical use by philosophers.)

Also, it looks like the way policy is determined depends on who has made edits that previous moderators found useful, so making some good edits now could make sure that some philosophers have a say in how this develops.

The Traveler’s Dilemma

I’ve been busy at FEW the past few days, but thanks to everyone who has responded to my previous post. Anyway, in the airport on the way back from Pittsburgh, I saw that the current issue of Scientific American has several philosophically interesting articles, including ones about the origin of life (did it start with a single replicating molecule, or a process involving several simple ones?) and anesthesia (apparently, the operational definition of general anesthesia isn’t quite what you’d expect, focusing on memory blockage more than one might have guessed). (It looks like you’ll have to pay to get either of those.)

But I want to discuss an interesting article by economist Kaushik Basu on the Traveler’s Dilemma (available free). This game is a generalization of the Prisoner’s Dilemma, but with some more philosophically interesting structure to it. Each player names an integer from 2 to n. If they both name the same number, then that is their payoff. If they name different numbers, then they both receive the smaller amount, with the person who named the smaller number getting an additional 2 as a bonus, and the one with the larger number getting 2 less as a penalty. If n=3, then this is the standard Prisoner’s Dilemma, where naming 2 is the dominant strategy. But if n≥4, then there is no dominant strategy. However, every standard equilibrium concept still points to 2 as the “rational” choice. We can generalize this game further by letting the plays range from k to n, with k also being the bonus or penalty for naming different numbers.
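The payoff rule just described is compact enough to write down directly. Here is a sketch of the bonus-2 version with plays from 2 to n, including a check of the claim (discussed below) that naming n is weakly dominated by naming n-1.

```python
def payoff(i, j, bonus=2):
    """Payoff to the player naming i when the opponent names j."""
    if i == j:
        return i
    low = min(i, j)
    return low + bonus if i < j else low - bonus

n = 10
plays = range(2, n + 1)
# Naming n is weakly dominated by n-1: never worse, sometimes strictly better.
assert all(payoff(n - 1, j) >= payoff(n, j) for j in plays)
assert any(payoff(n - 1, j) > payoff(n, j) for j in plays)
```

For instance, against an opponent naming n, the player naming n-1 collects n-1 plus the bonus of 2, beating the tied payoff of n; against any lower number the two choices do equally badly.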

Unsurprisingly, in actual play, people tend not to actually name k. Interestingly, this is even the case when economics students play, and even when game theorists at an economics conference played! Among untrained players, most play n, which interestingly enough is the only strategy that is dominated by another (namely, by n-1). Among the trained players, most named numbers between n-k and n-1.

In the article, this game was used to suggest that a better concept of rationality is needed than Nash equilibrium play, or any of the alternatives that have been proposed by economists. I think this is fairly clear. The author also uses this game to suggest that the assumption of common knowledge of rationality does a lot of the work in pushing us towards the choice of k.

I think the proper account of this game may bear some relation to Tim Williamson’s treatment of the Surprise Exam Paradox in Knowledge and its Limits. If we don’t assume common knowledge of rationality, but just some sort of bounded iteration of the knowledge operator, then the backwards induction is limited.

Say that an agent is rational<sub>0</sub> only if she will not choose an act that is dominated, based on what she knows about the game and her opponent’s options. Say that an agent is rational<sub>i+1</sub> iff she is rational<sub>i</sub> and knows that her opponent is rational<sub>i</sub>. (Basically, being rational<sub>i</sub> means that there are i iterations of the knowledge operator available to her.) I will also assume that players are reflective enough that there is common knowledge of all theorems, even if not of rationality.
Now I claim it is a theorem that if an agent is rational<sub>i</sub>, then when she plays the Traveler’s Dilemma, she will pick a number less than n-i.

Proof: By induction on i. For i=0, we know that the agent will not choose any dominated strategy. However, the strategy of picking n is dominated by n-1, so she will not pick n = n-0, as claimed. Now, assume that it is a theorem that if an agent is rational<sub>i</sub>, then when she plays the Traveler’s Dilemma, she will pick a number less than n-i. Then the agent knows this theorem. In addition, if an agent is rational<sub>i+1</sub>, then she knows her opponent is rational<sub>i</sub>, and by knowing this theorem, she knows that her opponent will pick a number less than n-i. Since she is also rational<sub>i</sub>, she will pick a number less than n-i herself. But given these two facts, picking n-(i+2) dominates picking n-(i+1), so since she is rational<sub>0</sub>, she will not pick n-(i+1) either. Thus she picks a number less than n-(i+1), and the induction step goes through. QED.
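The induction can be mirrored computationally as iterated elimination of weakly dominated strategies: each round of elimination corresponds to one more level of rationality, and exactly one strategy (the current top play) disappears per round. A sketch, assuming the bonus-2 version with plays from 2 to n:

```python
def payoff(i, j, bonus=2):
    """Payoff to the player naming i when the opponent names j."""
    if i == j:
        return i
    low = min(i, j)
    return low + bonus if i < j else low - bonus

def dominated(i, live):
    """True iff some live strategy weakly dominates i (with one strict case)."""
    return any(all(payoff(d, j) >= payoff(i, j) for j in live)
               and any(payoff(d, j) > payoff(i, j) for j in live)
               for d in live)

def surviving(k, n, rounds):
    """Strategies left after `rounds` rounds of eliminating dominated plays."""
    live = list(range(k, n + 1))
    for _ in range(rounds):
        live = [i for i in live if not dominated(i, live)]
    return live

# rational_i corresponds to i+1 rounds: everything from n-i upward is gone,
# so the agent picks a number less than n-i.
assert surviving(2, 10, 1) == list(range(2, 10))  # rational_0: only 10 eliminated
assert surviving(2, 10, 3) == list(range(2, 8))   # rational_2: picks below 8
```

Note that only the top strategy is eliminated in each round: n-2 does not dominate n-1 while n is still live, because n-1 does strictly better against an opponent playing n. This is why unbounded iteration is needed to push play all the way down to the minimum.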

Thus, if an agent picks the number n-i, then she must be at most rational<sub>i-1</sub>. But based on what Williamson says, iterations of the knowledge operator are generally hard to come by, so it should not be a surprise that even game theorists playing with common knowledge that they are game theorists will not have very high iterations of rationality. I wonder if it might be possible to use the Traveler’s Dilemma to estimate the number of iterations of knowledge that do obtain in these cases.

Different Ideas About Newcomb Cases

One advantage of going to parties with mathematicians and physicists is that you can describe a problem to them, and sometimes they’ll get stuck thinking about it and come up with an interesting new approach, different from most of the standard ones. This happened to me over the past few months with Josh von Korff, a physics grad student here at Berkeley, thinking about versions of Newcomb’s problem. He shared my general intuition that one should take only one box in the standard version of Newcomb’s problem, but that one should smoke in the smoking lesion example. However, he took this intuition seriously enough that he came up with a decision-theoretic protocol that actually seems to make these recommendations. It ends up making some other really strange predictions, but it seems interesting to consider, and it also ends up resembling something Kantian!

The basic idea is that right now, I should plan all my future decisions in such a way that they maximize my expected utility right now, and stick to those decisions. In some sense, this policy obviously has the highest expectation overall, because of how it’s designed.

In the standard Newcomb case, we see that adopting the one-box policy now means that you’ll most likely get a million dollars, while adopting a two-box policy now means that you’ll most likely get only a thousand dollars. Thus, this procedure recommends being a one-boxer.
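At the level of policies, this is just an expected-value comparison. Here is a sketch with an invented predictor accuracy of 0.9; the dollar amounts are the usual ones (box A always holds $1,000; box B holds $1,000,000 iff the predictor foresaw one-boxing).

```python
def expected(policy, acc):
    """Expected return of committing to a policy before the boxes are set,
    where acc is the chance the predictor foresees the policy correctly."""
    if policy == "one-box":
        return acc * 1_000_000                       # right prediction: B is full
    else:
        return acc * 1_000 + (1 - acc) * 1_001_000   # usually just box A

assert expected("one-box", 0.9) > expected("two-box", 0.9)
```

With these numbers the one-box policy expects $900,000 against the two-box policy's roughly $101,000, and the comparison goes the same way for any predictor accuracy meaningfully above one half.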

Now consider a slight variant of the Newcomb problem. In this version, the predictor didn’t set up the boxes, she just found them and looked inside, and then investigated the agent and made her prediction. She asserts the material biconditional “either the box has a million dollars and you will only take that box, or it has nothing and you will take both boxes”. Looking at this prospectively, we see that if you’re a one-boxer, then this situation will only be likely to emerge if there’s already a box with a million dollars there, while if you’re a two-boxer, then it will only be likely to emerge if there’s already an empty box there. However, being a one-boxer or two-boxer has no effect on the likelihood of there being a million dollars or not in the box. Thus, you might as well be a two-boxer, because in either situation (the box already containing a million or not) you get an extra thousand dollars, and you just get the situation described to you differently by the predictor.

Interestingly enough, we see that if the predictor is causally responsible for the contents of the box then we should follow evidential decision theory, while if she only provides evidence for what’s already in the box then we should follow causal decision theory. I don’t know how much people have already discussed this aspect of the causal structure of the situation, since they seem to focus instead on whether the agent is causally responsible, rather than the predictor.

Now I think my intuitive understanding of the smoking lesion case is more like the second of these two problems. If the lesion is actually determining my behavior, then decision theory seems to be irrelevant, so the way I understand the situation must be something more like a medical discovery of the material biconditional between my having cancer and my smoking.

Here’s another situation that Josh described that started to make things seem a little more weird. In Ancient Greece, while wandering on the road, every day one encounters either a beggar or a god. If one encounters a beggar, then one can choose either to give the beggar a penny or not. But if one encounters a god, then the god will give one a gold coin iff, had there been a beggar instead, one would have given a penny. On encountering a beggar, it now seems intuitive that (speaking only out of self-interest) one shouldn’t give the penny. But (assuming that gods and beggars are encountered randomly with some middling probability) the decision protocol outlined above recommends giving the penny anyway.
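To see why the protocol recommends giving, here is a minimal prospective calculation under assumed (hypothetical) numbers: a penny costs 1 unit, a gold coin is worth 100, and a god is met with probability 1/2. None of these values come from the original story; they just make the comparison concrete.

```python
# Prospective expected value of the two policies in the beggar/god case.
# PENNY and GOLD values and the 0.5 encounter probability are assumed
# illustrative numbers.

PENNY, GOLD = 1, 100

def expected_value(policy_gives: bool, p_god: float = 0.5) -> float:
    if policy_gives:
        # Meet a god: receive the gold coin (since you would have given).
        # Meet a beggar: pay the penny.
        return p_god * GOLD - (1 - p_god) * PENNY
    else:
        # A non-giver receives nothing from the god and pays nothing.
        return 0.0

print(expected_value(True))   # 49.5
print(expected_value(False))  # 0.0
```

As long as the gold coin is worth more than (1 − p)/p pennies, the giving policy wins on this prospective calculation, however counterintuitive it feels once the beggar is actually standing in front of you.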

In a sense, what’s happening here is that I’m giving the penny in the actual world so that my closest counterpart who runs into a god will receive a gold coin. It seems very odd to behave like this, but from the point of view before I know whether or not I’ll encounter a god, this seems to be the best overall plan. And as Josh points out, if this were the only way people got food, then people would see that the generous were doing well, and generosity would spread quickly.

If we now imagine a multi-agent situation, we can get even stronger (and perhaps stranger) results. If two agents are playing in a prisoner’s dilemma, and they have common knowledge that they are both following this decision protocol, then it looks like they should both cooperate. In general, if this decision protocol is somehow constitutive of rationality, then rational agents should always act according to a maxim that they can intend (consistently with their goals) to be followed by all rational agents. To get either of these conclusions, one has to condition one’s expectations on the proposition that other agents following this procedure will arrive at the same choices.
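The conditioning step in the prisoner's dilemma case can be made explicit. Under the text's assumption that both agents are known to follow the same protocol, conditioning on my own choice makes the other agent's choice match it; the payoff matrix below uses standard illustrative numbers (hypothetical, not from the original post):

```python
# Prisoner's dilemma under the assumption (from the text) that common
# knowledge of a shared decision protocol makes the other agent's choice
# match mine. Payoff numbers are standard illustrative values.

# (my payoff, their payoff) indexed by (my move, their move);
# "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def my_payoff_given_correlation(my_move: str) -> int:
    # Conditioning on the shared protocol: the other agent is expected
    # to arrive at the same choice I do.
    return PAYOFFS[(my_move, my_move)][0]

print(my_payoff_given_correlation("C"))  # 3
print(my_payoff_given_correlation("D"))  # 1
```

On this conditional calculation cooperation dominates, even though defection dominates in the usual unconditional analysis.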

Of course this is all very strange. When I actually find myself in the Newcomb situation, or facing the beggar, I no longer seem to have a reason to behave according to the dictates of this protocol – my actions benefit my counterpart rather than myself. And if I’m supposed to make all my decisions by making this sort of calculation, then it’s unclear how far back in time I should go to evaluate the expected utilities. This matters if we can somehow nest Newcomb cases, say by offering a prize if I predict that you will make the “wrong” decision on a future Newcomb case. It looks like I have to calculate everything all the way back at the beginning, with only my a priori probability distribution – which doesn’t seem to make much sense. Perhaps I should only go back to when I adopted this decision procedure – but then what stops me from “re-adopting” it at some later time, and resetting all the calculations?

At any rate, these strike me as some very interesting ideas.

Fish on Spin

Stanley Fish, in his blog behind the TimesSelect pay-wall at the New York Times, argues that “[l]anguage (or discourse), rather than either reflecting or distorting reality, produces it, at least in the arena of public debate,” and thus that people are wrong to criticize Karl Rove for spinning economic figures. After all, he suggests, “spin – the pronouncing on things from an interested angle – is not a regrettable and avoidable form of suspect thinking and judging; it is the very content of thinking and judging”.

There’s something clearly right about this – anyone pronouncing on anything does have some particular opinions, and every observation does depend to some degree on unstated assumptions – but it seems to me that in a larger sense this is just wrong. He says “Forms of language … furnish our consciousness; they are what we think with, and we can’t think without them (in two senses of “without”).” There’s something interesting about this picture, but it seems to me that there are important empirical questions as well as conceptual ones that he skips in order to reach this conclusion.

The particular example he talks about is a statement by Karl Rove that “[r]eal disposable income has risen almost 14 percent since President Bush took office.” This figure has been criticized because the 14-percent increase did not benefit everyone, but went largely “to those in the upper half of society”; the disposable income of the lower half had “fallen by 3.6 percent.” But Fish argues that this is just an argument about beliefs about what makes a healthy economy, between “trickle down” and “spread the wealth”, and that any evidence can only be interpreted in the light of which of these beliefs one holds. “Those beliefs … tell you what the relevant evidence is and what it is evidence of. But they are not judged by the evidence; they generate it.” He says that “the reality of the economic situation will emerge when one of the competing accounts … proves so persuasive that reality is identified with its descriptions.”

Some have called economics “the dismal science”, because economists have discovered relatively little about the world. But if Fish is right, then there couldn’t even be such a thing as economic knowledge. There could be no evidence for or against trickle down economics – we just have to persuade people of its merits or demerits.

There may be something to his points about the purely normative claims of economics, that one situation is better or worse than another. But I think that most of these political arguments aren’t of this sort – I think Republicans and Democrats and just about everyone else thinks the world would be better if more people had access to more material goods, other things being equal. The dispute is really about the empirical question of whether rising incomes at the top of the income distribution bring about more of this sort of effect than rising incomes at the very bottom of the income distribution. Fish seems to be denying that any sorts of discoveries of this sort could be relevant as anything other than persuasive material in an argument.

I think he’s broadly right that there’s no conceptual possibility of something like a purely neutral or disinterested way to couch all of the evidence in these social disputes. But this doesn’t mean that there’s no such thing as evidence that can be shared across the lines, or that “[o]pen-mindedness, far from being a virtue, is a condition which, if it could be achieved, would result in a mind that was spectacularly empty.”