In my seminar class last week we were reading over Milton Friedman's __The Methodology of Positive Economics__ and I was surprised by a couple of things. First, I agreed with much more of Friedman's view than I had remembered from the last time I'd looked at it. Second, I thought there was a rather large problem with one section of the paper that I didn't remember from before, and that I don't think has received much attention in the subsequent literature.[1]
Friedman was writing (in 1953) in response to the first stirrings of experimental economics, and the results that seemed to show people are not ideal maximisers. The actual experimental data involved wasn't the most compelling, but I think with 50 years more data we can be fairly confident that there are systematic divergences between actual human behaviour and the behaviour of the agents typical of economic models. The experimentalists urged that we should throw out the existing models and build models based on the actual behaviour of people.
Friedman's position was that this was too hasty. He argued that it is OK for models to be built on false premises, provided that the actual predictions of the model, in the intended area of application, are verified by experience. Hence he thought the impact of these experimental results was less than the experimenters claimed. When I first heard this position I thought it was absurd. How could we have a science based on false assumptions? This now strikes me as entirely the wrong attitude. Friedman's overall position is broadly correct, provided certain facts turn out the right way. But he's wrong that this means we can largely ignore the experimental results, as I'll argue.
Why do I think Friedman is basically correct? Because read aright, he can be seen as one more theorist arguing for the importance of idealisations in science. And I think those theorists are basically on the right track. On this point, and on several points in what follows, I've been heavily influenced by Michael Strevens, and some of the justifications for Friedman below will use Strevens's terminology.[2]
Often what we want a scientific theory to do is to predict roughly where a certain value will fall, or explain why it fell roughly there. In those cases, we don't want the theory to include every possible influence on the value. Some of these influences, although they are relevant to the value taking the exact value it did, are irrelevant to its taking roughly that value. In those cases, we can build a better theory, or explanation, or model, by leaving out such factors.
Here's a concrete illustration of this (one that Strevens uses). The standard explanation for Boyle's Law – that for a constant quantity of gas at constant temperature, pressure times volume is roughly constant – is a model in which, among other things, gas molecules never collide. This is clearly an inaccurate model, since gas molecules collide all the time. But for this purpose the model works, which tells us that collisions are not that relevant to the value of pressure times volume, and in particular to that value being roughly constant. Since this model is considered a good model, despite having the false feature that gas molecules do not collide, it seems in general we should be allowed to use inaccurate models as long as they work. That's one of Friedman's theses, and it's worth highlighting.
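To see why the missing collisions don't matter, it helps to look inside the model. What follows is the standard kinetic-theory sketch, not anything specific to Friedman or Strevens. For $N$ molecules of mass $m$ bouncing elastically off the walls of a container of volume $V$, adding up the momentum transferred to the walls gives

$$PV = \tfrac{1}{3} N m \overline{v^2}$$

At constant temperature the mean-square speed $\overline{v^2}$ is fixed, so $PV$ is constant. Collisions simply never enter the calculation, which is exactly why deleting them from the model leaves this prediction untouched.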
- Idealised models, models that are inaccurate in a certain respect, are acceptable as long as that respect is irrelevant to the value you are trying to predict or explain.
Let's note two more related things about the gas case. First, there's no way to tell whether the size of the idealisation – removing all collisions from the model – is large or small just by looking at how many collisions there are. By any plausible measure there are __lots__ of collisions, but they make no difference to the pressure-volume product.
Second, whether an idealisation is large or small is relative to what you are trying to model. (I got this point from Michael Strevens as well.) If you're trying to model the speed at which a gas will spread from an open container, you had better include collisions in the model, because collisions make a __big__ difference to how fast the gas spreads. Friedman makes the same point by noting that air pressure makes a big difference to how fast a feather falls, and a very small difference to how fast a baseball falls from low altitude. (There's a quick numerical sketch of this contrast after the point below.) Let's note this as an extra point.
- Whether an idealisation is large or small is relative to what you are trying to model.
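Here is that sketch: a rough numerical illustration of Friedman's feather/baseball contrast. The parameters are illustrative guesses on my part, not figures from Friedman, and the simple Euler integration is crude but fine for the point.

bc.. # Drop a baseball and a feather from 2 m, with and without quadratic
# air drag, and compare fall times. Illustrative parameters only.

RHO = 1.2   # air density, kg/m^3
G = 9.81    # gravitational acceleration, m/s^2

def fall_time(mass, drag_area, height=2.0, dt=1e-4, drag=True):
    """Time to fall `height` metres, optionally with quadratic drag
    F = 0.5 * RHO * Cd*A * v^2 opposing the motion."""
    v = fallen = t = 0.0
    while fallen < height:
        f_drag = 0.5 * RHO * drag_area * v * v if drag else 0.0
        v += (G - f_drag / mass) * dt
        fallen += v * dt
        t += dt
    return t

# Guessed figures: baseball ~145 g with effective Cd*A ~ 4e-3 m^2;
# feather ~1 g with the same Cd*A (a huge area relative to its mass).
for name, m, cda in [("baseball", 0.145, 4e-3), ("feather", 0.001, 4e-3)]:
    print(f"{name}: {fall_time(m, cda, drag=False):.2f} s in vacuum, "
          f"{fall_time(m, cda):.2f} s in air")

p. On numbers like these, adding air resistance barely moves the baseball's fall time but nearly doubles the feather's. Same idealisation, negligible in one model and fatal in the other.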
All that, I think, is basically right, though it's best to bracket issues about whether the idealisations really are small in the intended case. Let's assume for now that there are lots of nice models that idealise away from non-maximising behaviour, and that these models work – they deliver surprising but well-confirmed predictions about economic phenomena. If so, the idealisations should be acceptable, I think. The idealised models are very nice arguments that the existence of these departures from perfect maximising behaviour is irrelevant to the phenomena being modelled.
It's at this point that I think Friedman goes wrong. Friedman says that at this stage we have some prima facie evidence that other models using the same kinds of idealisations are also going to be correct. And this strikes me as entirely wrong. It's wrong because it's inconsistent with the view of the models as idealisations rather than as accurate descriptions of reality.
Note that the structure of argument Friedman is trying to use here is not always absurd. If evidence E supports hypothesis H, and the best model for hypothesis H includes assumption A as a positive claim about the world, then E is indirect evidence for A, and hence for other consequences of A. That's what Friedman wants. He says that the success of hypotheses in other areas of economics provides indirect support for the hypothesis that there is less racial and religious discrimination when there is a more competitive labour market. I think the idea is that the other hypotheses show that people are, approximately, maximisers, so when trying to explain the distribution of discrimination we can assume they are approximately maximisers.
But it should now be clear that this doesn't make sense. Remember, the very same idealisation can be a serious distortion in one context and an acceptable approximation in another. Without independent evidence, the fact that we can idealise away from non-maximising behaviour in one context is no reason at all to think we can do so when discussing, say, discrimination. If we take Friedman to be endorsing the claim that it's OK to idealise away from irrelevant factors, then at this point he's trying to defend the following argument.
bq. The fact that people aren't perfect maximisers is irrelevant to (say) the probability that various options will be exercised.
Therefore, the fact that people aren't perfect maximisers is irrelevant to (say) how much discrimination there is in various job markets.
And this doesn't even look like a good argument.
The real methodological consequence of Friedman's instrumentalism is that idealised models can be good ways to generate predictions about the economy, but every single prediction must be tested anew, because the models have little or no evidential value on their own. This conclusion might well be __true__, but I don't think it's one Friedman would want to endorse. Still, I think it's what follows inevitably from his methodological views, at least on their most charitable interpretation.
fn1. Life's too short to read all the commentaries on Friedman's paper, so this last claim is not especially well backed up.
fn2. Some of the views I’m relying on are not published, but most of the details can be gleaned from the closing pages of this paper of Michael’s.