Idealisations in Economics

In philosophy of economics through roughly the 1990s, there was a somewhat annoying dialectic that went as follows.

First, a philosopher would come along saying that economists typically make all sorts of implausible assumptions. These include:

  • Perfect rationality;
  • Perfect information;
  • Markets in everything;
  • Zero transaction costs; etc.

Obviously these are implausible, and the philosopher would say, loudly, how absurd it is that we take seriously a discipline built on such crazy foundations.

Second, an economist will come along and say that this is all a misreading of the discipline and a sign of ignorance of the state of contemporary economics. It’s true, they’ll say, that every one of those assumptions is made in intro textbooks. But they aren’t made by real working economists. Look, here’s a paper where perfect rationality is dropped, and here’s one where perfect information is dropped, and here’s one where we assume limited markets, and here’s one where we assume positive transaction costs. (If you want real-life examples of my storybook version of history, see some of the responses to Daniel Hausman’s “The Inexact and Separate Science of Economics”.)

And at this point it might look like the economist has won. At least, I haven’t seen a lot of pushback from philosophers. But I think the response rests on a scope ambiguity. It’s true that cutting-edge work doesn’t make all of the assumptions listed above. Indeed, it’s true that every one of those assumptions has been questioned in some cutting-edge work or other. But what’s not true is that there is a large body of mainstream top-level work that questions all of the assumptions simultaneously. The steady state of the discipline seemed to be that we’d start with the absurd idealisation, and then relax assumptions very slowly, seeing how far we could go before the mathematics became intractable. (Or, worse still, before we had to pay attention to institutional or sociological matters.)

None of this will be news to people who have been following Crooked Timber, because it’s been a major theme of John Quiggin’s posts on the failures of modern economics. (See, for instance, “here”:http://crookedtimber.org/2009/10/13/what-went-wrong-with-new-keynesian-macro-more-bookblogging/, “here”:http://crookedtimber.org/2009/10/12/bookblogging-micro-based-macro-2/ and “here”:http://crookedtimber.org/2009/11/19/bookblogging-what-next-for-macroeconomics/.) But it was sort of news to me. If I’d realised this point about scope ambiguities 10 years ago, I might have written several papers in philosophy of economics in the meantime. It’s obviously a bit late for that. But I do strongly recommend these Quiggin posts, which are both relevant to live policy debates, and connect to several philosophically fascinating questions about scientific methodology and the nature of rationality.

Scaffolding in New York

This post on Curbed about “scaffolding in New York”:http://curbed.com/archives/2009/11/19/scaffolding_doesnt_save_it_kills.php resonated a lot with my feelings about one aspect of New York. Before I moved here, my mental picture of a typical New York street had scaffolding over it. My mental picture of midtown (where I don’t spend a lot of time) still has that scaffolding over the streets, especially 7th avenue and Broadway. A lot of it must be unnecessary. I haven’t been to Hong Kong in 20 years, so I can’t make any comparison there, but as they say, there’s nowhere near this much scaffolding in London or Paris.

I sometimes read people saying that they don’t like the ‘boxed in’ feel of Manhattan. On the whole, I don’t really get that – what Manhattan loses in a little light from the tall buildings it makes up for by having such a high proportion of public spaces. And unlike most parts of America, you can use those public spaces without wearing 2 tons of metal armour. But in those neighbourhoods where you’re always walking under a nine-foot-high, quickly constructed wood ceiling, I can see the downside. The solution is just less scaffolding!

Recent NDPR Reviews

There have been several interesting reviews at Notre Dame Philosophical Reviews recently. These include some of the first reviews to come in since I joined the editorial board there. So far that’s involved suggesting potential reviewers and reading reviews that come in to check them over before publication. Don’t be surprised if you see a slight uptick in the proportion of Scotland-based reviews in the next few months!

But the three reviews I wanted to highlight weren’t ones I had anything to do with. They are

  • Debra Satz, Rob Reich (eds.), “The Political Philosophy of Susan Moller Okin”:http://ndpr.nd.edu/review.cfm?id=18245, Reviewed by Ann E. Cudd, University of Kansas
  • Richard Holton, “Willing, Wanting, Waiting”:http://ndpr.nd.edu/review.cfm?id=18225, Reviewed by Carl Ginet, Cornell University
  • Galen Strawson, “Selves: An Essay in Revisionary Metaphysics”:http://ndpr.nd.edu/review.cfm?id=18205, Reviewed by Sydney Shoemaker, Cornell University

I hadn’t actually realised that Richard Holton had written up his work on will and intention into a book, and it’s something I’m looking forward to reading. I’ve learned a lot from Richard’s papers on these topics over the years. (Obviously much of “Deontology and Descartes’ Demon”:http://brian.weatherson.org/DDD.pdf draws on ideas that he developed, for instance.) And although it looks from the review that the book recapitulates a lot of what’s in the papers, seeing the ideas as a unified whole will I’m sure be valuable.

Compass Articles

We’ve been publishing a lot of Philosophy Compass articles recently. Here is a selection of what’s just come out.

Formal Epistemology Festival, May 2010, Toronto

CALL FOR PAPERS

3rd Formal Epistemology Festival: Learning From Experience & Defeasible Reasoning
University of Toronto, May 11-13, 2010

This is the third of three small, thematically focused events in formal epistemology, organized by Franz Huber (Konstanz), Eric Swanson (Michigan), and Jonathan Weisberg (Toronto). This year’s festivities coincide with the 30th anniversary of Ray Reiter’s “A Logic for Default Reasoning” and the 15th anniversary of John Pollock’s Cognitive Carpentry. The event is dedicated to the memory of John Pollock. Confirmed participants include Thony Gillies, John Horty, Mohan Matthen, Jim Pryor, Susanna Siegel, and Scott Sturgeon.

We welcome submissions of papers on topics related to learning from experience, defeasible reasoning, or both. Please send a pdf prepared for blind reviewing to FEF3@utoronto.ca.

The conference website is http://www.utm.utoronto.ca/~weisber3/3FEF/. Some funding for travel expenses may become available.

Deadline for submissions: February 28, 2010.
Notification of acceptances: March 21, 2010.

Northern Institute Journal

As part of the setting up of the Northern Institute of Philosophy at Aberdeen, plans are afoot for a new journal for short papers. Here’s a brief description of what the journal will be like.

bq. The Northern Institute of Philosophy (NIP) intends to establish a journal dedicated to the publication of concise, succinct (of less than 4500 words), original, philosophical papers in the following areas: Logic, Philosophical Logic, Philosophy of Mathematics, Philosophy of Language, Metaphysics, Epistemology (including formal epistemology and confirmation theory), Philosophy of Science and Philosophy of Mind. All published papers will be analytic in style. We intend that readers of this journal will be exposed to the most central and significant issues and debates in contemporary philosophy that fall under its remit. We will publish only papers that exemplify the highest standard of clarity.

bq. The philosophical community has been vocal about the need for such a journal (broadly, a second Analysis-style journal) for many years, and the Northern Institute of Philosophy is in an optimal position to deliver one, since the editorial process will be able to draw on the wide-ranging expertise of the very extensive network of philosophers and research institutions variously affiliated with NIP. We are thus exceptionally well placed to quickly establish a journal that can attain and maintain the highest standards of contemporary analytic philosophy.

I’m very enthusiastic about this proposal, since I think there is a need for more high-quality venues for publishing in M&E broadly construed. Indeed, I’ve signed on to be the epistemology editor for the journal, assuming it comes into being.

As part of convincing the powers that be that such a journal will be a good idea, the Northern Institute is commissioning a small market survey. You can take the survey at:

.

The more people who take this, the better the journal’s prospects will be.

Rutgers News

I have very exciting news to report on behalf of Rutgers. We have just made 3 new senior hires. They are Branden Fitelson, Jonathan Schaffer and Susanna Schellenberg. Branden will be starting in Fall 10, Jonathan and Susanna in Spring 11.

Branden is of course one of the leading formal epistemologists in the world, as well as the driving force behind big events like the Formal Epistemology Workshop. He’s also been doing some incredibly interesting work in cognitive science, explaining some of the probabilistic fallacies that have been documented over the last 30 years using concepts from confirmation theory. Given Rutgers’ existing strengths in formal epistemology (Barry Loewer, Thony Gillies, etc.), I think there’s a very good case that we’re the world leader in formal epistemology, and it will be great to build up more connections with the excellent cognitive science program at Rutgers.

Jonathan is already a central figure in contemporary debates in epistemology and in philosophy of language. (Both are also strengths of Rutgers.) But, in my opinion at least, his most significant work is in metaphysics. His paper on monism was in many ways the best paper I reviewed during my term as editor of the Philosophical Review. There aren’t many philosophers (let alone metaphysicians) around right now who make contributions that are simultaneously significant to the big perennial philosophical questions, and to contemporary debates about the details of popular theories. The philosophers I value most highly are always excellent on both scores, advancing a big picture while being careful over the details. Jonathan’s work on monism, like so much of his work, is really a paradigm of this way of doing philosophy, and he’ll be a super colleague to have.

Susanna has to date largely been working on perception, and has a number of insightful papers (in very top journals) on the various debates about perception. Much of her work to date has involved synthesising apparently conflicting views, and showing that there are attractive yet under-explored grounds between some of the warring factions in today’s debates. Rutgers has a long history of being at the forefront of research in philosophy of mind, and hiring Susanna is one of the steps we’re taking to keep that tradition going.

We’d be thrilled by these hires at any time. To have pulled them all off in the middle of the Great Recession is something of a coup for the department and university. So congratulations to Barry Loewer for steering these hires through, and thanks (and congratulations) to the university for this show of faith in philosophy.

Your Favourite Theory of Knowledge is Wrong

Consider this proposition:

N: Brian does not know that N.

Assume N is false. That is, I know that N. Knowledge is factive, so N. That contradicts our original assumption. So N must not be false. So it follows, at least classically, that N is true. So I don’t know N.
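The reasoning can be compressed into a short derivation. Here is a sketch in standard epistemic-logic notation, writing K for “Brian knows that”, and relying on the fact that N is equivalent to ¬KN by construction:

```latex
% N is defined so that: N \leftrightarrow \neg K N
\begin{align*}
&1.\ \neg N        && \text{assumption, for reductio}\\
&2.\ K N           && \text{from 1 and } N \leftrightarrow \neg K N\\
&3.\ N             && \text{from 2 and factivity: } K\varphi \rightarrow \varphi\\
&4.\ N             && \text{from 1--3, discharging the assumption (classically)}\\
&5.\ \neg K N      && \text{from 4 and } N \leftrightarrow \neg K N
\end{align*}
```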

But I can follow the reasoning that showed N is true. And I accept that reasoning, so I believe N. And the reasoning justifies me in believing N. So I have a justified true belief that isn’t knowledge. So the JTB theory of knowledge fails.

My reasoning didn’t go via any false lemmas. It went via a false assumption, but making false assumptions for the purposes of a reductio is consistent with knowledge. So I have a JTB with no false lemmas, but no knowledge. So much for the JTB + no-false-lemmas theory.

I’m (generally) a competent logical reasoner. My belief in N, which is a true belief, was a product of my logical competence. Indeed, I formed the belief in N, rather than some alternative, because of that competence. So I should have Sosa-style animal knowledge of N. Indeed, I can reflectively, and aptly, endorse the claim that my belief in N is accurate because it was an exercise of competence. So I should have Sosa-style reflective knowledge that N. But I don’t; clearly I don’t know N.

It seems to me that pretty much any otherwise plausible theory of knowledge will fall this way. Whatever qualities or virtues a belief might have, short of knowledge, my belief in N has. But I don’t know N. Indeed, logic prevents me from knowing N. So any such theory must be false.

N also undermines various proposals people have relating knowledge to other things. Some people think knowledge is a norm of belief. But there seems to be nothing wrong with my believing N on the basis of the reasoning above, even though I don’t know N. So knowledge isn’t a norm of belief. Many people think knowledge is a norm of assertion. But I don’t see why I shouldn’t assert N. I have a deductive argument that it is true after all; I simply don’t know that it is true. So knowledge isn’t a norm of assertion.

I’m not sure whether N alone could knock out Williamson’s thesis that all and only evidence is knowledge, commonly known as E=K. But N’s good friend E can do the trick.

E: Brian’s evidence does not include E.

Assume E is false. Then my evidence includes E. Either evidence is factive or it isn’t. If it isn’t, then E=K is false for independent reasons. If it is, then it follows E is true, contradicting our assumption. So E is true. Since I can follow this argument competently, I know its conclusion is true. (Unlike the argument about N, logic doesn’t stop me knowing E is true.) So I know E, but E is, as it says, not part of my evidence. So E=K is false.

Note that this argument doesn’t touch the plausible view that “evidence is all and only our non-inferential knowledge”:http://www.philosophersdigest.com/philphen/fallibilism-epistemic-possibility-and-concessive-knowledge-attributions-trent-dougherty-and-patrick-rysiew. Even if I know E via that argument, it is clearly inferential knowledge. So while I can refute all theories of knowledge with self-referential propositions, I can’t refute all theories of evidence.

Decision Theory and the Context Set

Consider the following decision problem. You have two choices, which we’ll call 1 and 2. If you choose option 1, you’ll get $1,000,000. If you choose option 2, you’ll get $1,000. There are no other consequences of your actions, and you prefer more money to less. What should you do?

It sounds easy enough, right? You should take option 1. I think that’s the right answer, but getting clear as to why it is the right answer, and what question it is the right answer to, is a little tricky.

Here’s something that’s consistent with the initial description of the case. You’re in a version of Newcomb’s problem. Option 1 is taking one box, Option 2 is taking two boxes. You have a crystal ball, and it perfectly reliably detects (via cues from the future) whether the demon predicted your choice correctly. And she did; so you know (with certainty) that if you pick option one, you’ll get the million, and if you pick option two, you’ll get the thousand. Still, I think you should pick option one, since you should pick option one in any problem consistent with the description in the first paragraph. (And I think that’s consistent with causal decision theory, properly understood, though the reasons why that is so are a little beyond the scope of this post.)

Here’s something else that’s consistent with a flat-footed interpretation of the case, though not I think with the intended interpretation. Option 2 is a box with $1,000 in it. Option 1 is a box with a randomly selected lottery ticket in it, and the ticket has an expected value of $1. Now as a matter of fact, it will be the winning ticket, so you will get $1,000,000 if you take option 1. Still, if everything you know is that option 2 is $1,000, and option 1 is a $1 lottery ticket, you should take option 2.
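To see why option 2 is right from the agent’s perspective, here is a minimal sketch of the expected-value comparison. The one-in-a-million odds are my assumption, chosen so that the ticket’s expected value comes out at $1 as in the example:

```python
# Expected-value comparison for the lottery-ticket case. The probability
# is a hypothetical figure; Fraction keeps the arithmetic exact.
from fractions import Fraction

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

p_win = Fraction(1, 1_000_000)  # assumed odds, giving a $1 expected value
option1 = expected_value([(p_win, 1_000_000), (1 - p_win, 0)])
option2 = expected_value([(Fraction(1), 1_000)])

assert option1 == 1       # the ticket is worth $1 in expectation
assert option2 > option1  # so, by the agent's lights, take the $1,000
```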

Now I don’t think that undermines what I said above. And I don’t think it undermines it because when we properly interpret descriptions of games/decision problems, we’ll see that this situation isn’t among the class of decision problems described in the first paragraph. When we describe the outcomes of certain actions in a decision problem, those aren’t merely the actual outcomes, they are things that are properly taken as fixed points in the agent’s deliberation. They are, in Stalnakerian terms, the limits of the context set. In the lottery ticket example, it is not determined by the context set that you’ll get $1,000,000 if you take option 1, even though it is in fact true.

I think “things the agent properly takes as fixed points” are all and only the things the agent knows, but that’s a highly controversial theory of knowledge. (In fact, it’s my version of interest-relative invariantism.) So rather than wade into that debate, I’ll simply talk about proper fixed points.

Saying that something is a fixed point is a very strong claim. It means the agent doesn’t even, shouldn’t even, consider possibilities where they fail. So in Newcomb’s problem, the agent shouldn’t be at all worrying about possibilities where the demon miscounts the money she puts into box 1 or 2. Or possibilities where there is really a piranha in box 2 who’ll bite your hand, rather than $1000. And when I say that she shouldn’t be worrying about them, I mean they shouldn’t be in the algebra of possibilities over which her credences are defined.

There’s a big difference formally between something being true at all points over which a probability function is defined, and something (merely) having probability 1 according to that function. And that difference is something that I’m relying on heavily here. In particular, I think the following two things are true.

First, when we state something in the setup of a problem, we are saying that the agent can take it as given for the purposes of that problem.

Second, when we are considering the possible outcomes of a situation, the only situations we need to consider are ones that are not fixed points. So in my version of Newcomb’s Problem, the right thing to do is to take one box, because there is no outcome where you do better than taking one box. On the other hand, some things that we now know to be false might (in some sense of might) become relevant, even though we now assign them probability 0. That’s what goes on in cases where backwards induction fails; the context set shifts over the course of the game, and so we have to take into account new things.
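The formal difference between exclusion from the algebra and mere probability 0 can be made vivid with a toy sketch. The state labels here are made up for illustration; the point is just that in the first space the question about demon error cannot even be posed, while in the second it can be posed and gets probability 0:

```python
# Space A: the context set excludes demon error entirely; it isn't a
# possibility at all. (Hypothetical state labels.)
space_a = {"predicted_one_box": 0.5, "predicted_two_boxes": 0.5}

# Space B: demon error is in the algebra, but has probability 0.
space_b = {"predicted_one_box": 0.5, "predicted_two_boxes": 0.5,
           "demon_mispredicted": 0.0}

def prob(space, event):
    """Probability of an event (a set of states). Raises KeyError if the
    event mentions a state that isn't in the space at all."""
    return sum(space[state] for state in event)

# In space B we can ask about demon error; it just gets probability 0.
assert prob(space_b, {"demon_mispredicted"}) == 0.0

# In space A the question cannot even be posed: the state isn't there.
try:
    prob(space_a, {"demon_mispredicted"})
    posed = True
except KeyError:
    posed = False
assert not posed
```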

Having said all that, there is one hard question that I don’t know the answer to. It’s related to some things that Adam Elga, John Collins and Andy Egan were discussing at a reading group on the weekend. In the kind of puzzle cases that we usually consider in textbooks, the context set consists of the Cartesian product of some hypotheses about the world, and some choices. That’s to say, the context set satisfies this principle: If _S_ is a possible state of the world (excluding my choice), and _C_ is a possible choice, then there is a possibility in the context set where _S_ and _C_ both obtain. I wonder if that’s something we should always accept. I’ll leave the pros and cons of accepting that proposal for another post though.
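The Cartesian-product condition on the context set is easy to state programmatically. Here is a toy sketch, with made-up state and choice labels, of the principle that every combination of a world-state _S_ and a choice _C_ appears as a possibility:

```python
# Build a context set as the Cartesian product of world-states and
# choices. The labels are hypothetical; the structure is the point.
from itertools import product

states = ["ticket_wins", "ticket_loses"]   # states of the world, excluding my choice
choices = ["option_1", "option_2"]         # the agent's available choices

context_set = set(product(states, choices))

# The condition from the text: for any S and C, (S, C) is a possibility.
assert all((s, c) in context_set for s in states for c in choices)
assert len(context_set) == len(states) * len(choices)
```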

Publishing Survey

Sally Haslanger has put together a survey on publishing, and she’d like any professional philosophers to take it. It should take about 10 minutes. It will be useful to have your CV handy as you fill it out. Please go here to find it:

If all goes well, Sally Haslanger will report on the results at the December APA in the symposium on philosophy publishing (Wednesday December 30th, 11:15-1:15).