This may all be old news to philosophers who work on decision theory and related things, but I think it bears repeating.
There’s an interesting post up at Cosmic Variance by the physicist Sean Carroll wondering idly about some issues that come up in the foundations of economics. One paragraph in particular caught my eye:
But I’d like to argue something a bit different – not simply that people don’t behave rationally, but that “rational” and “irrational” aren’t necessarily useful terms in which to think about behavior. After all, any kind of deterministic behavior – faced with equivalent circumstances, a certain person will always act the same way – can be modeled as the maximization of some function. But it might not be helpful to think of that function as utility, or [of] the act of maximizing it as the manifestation of rationality. If the job of science is to describe what happens in the world, then there is an empirical question about what function people go around maximizing, and figuring out that function is the beginning and end of our job. Slipping words like “rational” in there creates an impression, intentional or not, that maximizing utility is what we should be doing – a prescriptive claim rather than a descriptive one. It may, as a conceptually distinct issue, be a good thing to act in this particular way; but that’s a question of moral philosophy, not of economics.
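Before digging in, it’s worth making the parenthetical claim in the middle precise, since it’s true only in a rather trivial sense. Here’s a minimal sketch (my gloss, not anything in Carroll’s post): if an agent’s behavior in circumstances x is always the choice c(x), define

\[
f(x, a) =
\begin{cases}
1 & \text{if } a = c(x),\\
0 & \text{otherwise,}
\end{cases}
\qquad\text{so that}\qquad
c(x) \in \arg\max_{a} f(x, a).
\]

The observed behavior then “maximizes” f, but nothing about this construction makes f look like a utility function, which is part of what gives Carroll’s worry its bite.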
There’s a lot packed into that paragraph. Part of it is the claim that science only addresses descriptive issues, not normative ones (or “prescriptive” in his words – I’m not sure what distinction there is between the two terms, except that “prescriptive” sounds more like meddling in other people’s activities). To a physicist I think this claim sounds natural, but I’m not sure it’s true. Linguistics is perhaps the clearest case where scientific claims are sometimes about normative principles rather than merely descriptive facts. As discussed in this recent post by Geoffrey Pullum on Language Log, syntax is essentially an empirical study of linguistic norms – it isn’t just a catalog of which sequences of words people actually utter and interpret, but includes their judgments of which sequences are right and wrong. Linguists may call themselves “descriptivists” to contrast with the “prescriptivists” who don’t use empirical evidence in their discussions of grammaticality, but they still deal with a notion of grammaticality that is essentially normative.
I think the same is true of economics, though the sort of normativity involved is quite different from the norms of grammaticality (and the other norms studied in semantics and pragmatics). There is some sort of norm of rationality, though of course it’s (probably) different from the sort of norm discussed in “moral philosophy”. Whether or not it’s a good thing to maximize one’s own utility, there’s a sense in which doing so is constitutive of being a good decision maker. Of course, using the loaded term “rationality” for this norm might put more force on it than we ought to (linguists don’t call grammaticality a form of rationality, for instance), but I think it’s actually a reasonable name for it. The bigger problem with the term “rationality” is that it can be used to discuss both good decision making and good reasoning, thereby conflating “practical rationality” with “epistemic rationality”.
And that brings me to the biggest point I think there is in this paragraph. While there might be good arguments that maximizing utility is the expression of rationality, and there might be some function that people descriptively go around maximizing, it’s not clear that this function will actually be utility. One prominent type of argument for the claim that degrees of belief must obey the axioms of probability theory is a representation theorem. One states a series of conditions that it seems any rational agent’s preferences should obey, and then shows that for any preferences satisfying those conditions there is a pair of a “utility function” and a “probability function” (unique up to the standard rescaling of the utility function) such that the preferences always rank options by expected utility. However, for each of these representation theorems, at least some of the conditions on the preferences seem too strong to require of rational agents. And even granting the representation, Sean Carroll’s point still applies – what makes us so sure that this “utility function” represents the agent’s actual utilities, or that this “probability function” represents the agent’s actual degrees of belief? Of course, the results are very suggestive – the “utility function” is a function from determinate outcomes to real numbers, and the “probability function” is a function from propositions to values in the interval [0,1], so they’re functions of the right sort to do the job we claim they do. But it’s certainly not clear that there’s any psychological reality to them, the way it seems there should be (even if only subconsciously) for an agent’s actual utility and degree-of-belief functions.
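For concreteness, here is the general shape of what such a representation theorem delivers. This is only a schematic statement in the spirit of von Neumann–Morgenstern or Savage, not the exact axioms or conclusion of any particular theorem: if an agent’s preferences over acts satisfy the relevant conditions, then there are a probability function P over states and a utility function U over outcomes (with U unique up to positive affine transformation) such that, for any acts f and g,

\[
f \succeq g \iff \sum_{s \in S} P(s)\, U\bigl(f(s)\bigr) \;\ge\; \sum_{s \in S} P(s)\, U\bigl(g(s)\bigr),
\]

where S is the set of states, an act assigns an outcome to each state, P obeys the probability axioms, and U maps outcomes to real numbers. The theorem guarantees that such a pair exists and represents the preferences; it doesn’t by itself tell us that the pair is psychologically real.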
However, if this sort of argument can be made to work, then we do get a connection between an agent’s observed behavior and her utility function. We shouldn’t assume her decisions are always made in conformity with her rational preferences (since real agents are rarely fully rational), but if these conditions of rationality are correct, then there’s a sense in which we should interpret her as trying to maximize some sort of expected utility, and just failing in certain instances. This sense is related to Donald Davidson’s argument that we should interpret someone’s language as having meanings such that most of their assertions come out true. In fact, in “The Emergence of Thought”, he argues that these representation theorems should be united with his ideas about “radical interpretation” and the “principle of charity”, so that belief, desire, and meaning all fall out together. That is, the normativity of rationality in the economic sense (as maximizing expected utility) just is part of the sort of behavior agents have to approximate in order to be said to have beliefs, desires, or meanings in their thoughts and assertions – that is, in order to be agents at all.
So I think Sean Carroll’s found an important point to worry about, but there’s already been a lot of discussion on both sides of this, and he’s gone a bit too fast in assuming that economic science should avoid any talk of normativity.