Monthly Archives: August 2008

The Australasian Association of Philosophy has compiled a new set of journal rankings for submission to the Australian Research Council. Without knowing exactly what the ARC wants them for, I would nevertheless imagine these are worth knowing about if you ever plan to work in Australia. Some of the rankings look very odd to me. Several flaws with the whole procedure are noted in the AAP’s covering letter.
Links
I’m mostly just writing lecture notes for the upcoming term and paying more attention to vice-presidential rumours than baseball rumours. So some links to keep the blog moving.
- The situation for “philosophy at University of Melbourne”:http://consequently.org/news/2008/08/20/pain_stress_redundancies_another_day_at_the_office/index.php seems to be very unpleasant. The University of Melbourne is, or at least has been, a great university, and the excellent philosophers there deserve much better treatment from their administrators.
- “Andrej Bauer”:http://math.andrej.com/2008/08/13/intuitionistic-mathematics-for-physics/ on why physicists should care about intuitionistic mathematics. (HT: “Greg Restall”:http://consequently.org/.)
- “Peter Railton and Don Loeb”:http://bloggingheads.tv/diavlogs/13443 debate moral realism.
- I may not have put this up before, but here’s Wo’s feed of “Online Papers in Philosophy”:http://www.umsu.de/wo/opp.rss.
- Via that feed, Ross Cameron argues that “There are no things that are Musical Works”:http://www.personal.leeds.ac.uk/~phlrpc/There%20are%20no%20musical%20works.pdf, and Niko Kolodny discusses a puzzle about “Ifs and Oughts”:http://johnmacfarlane.net/ifs-and-oughts.pdf.
- Richard Price emailed to tell me about “Academia.edu”:http://www.academia.edu/, which could be a useful way of keeping up with academics, and more importantly their work, throughout the world.
- Finally, as much as it pains me to write this, congratulations to the British Olympic team for identifying the valuable intersection of “complex demonstratives and rowing”:http://www.guardian.co.uk/sport/2008/aug/07/olympics.acernethercottinterview, and of “running and taboo vocabulary”:http://languagelog.ldc.upenn.edu/nll/?p=509.
Intuition isn’t Unreliable
At least since Robert Cummins’s paper “Reflections on Reflective Equilibrium”:http://books.google.com/books?id=kOjtQwQ0XmkC&pg=PA113&source=gbs_toc_r&cad=0_0&sig=ACfU3U3tRJ9Pv34k2FDeD3emET6VzxcK2g in “Rethinking Intuition”:http://books.google.com/books?id=kOjtQwQ0XmkC, a lot of people have worried that intuition, that old staple of philosophical argument, is unreliable. This is fairly important to the epistemology of philosophy, especially to intuition-based epistemologies of philosophy, so I think it’s worth considering.
(Worries about intuition obviously didn’t start 10 years ago, but the particular worry about reliability does become pronounced in Cummins. I suspect, though I don’t have the relevant papers in front of me, that there are related worries in earlier work by Stich. Note that this post is strictly about reliability, not a general defence of intuition in philosophy.)
The happy news is that there’s a simple argument that intuition isn’t unreliable. I think it isn’t clear whether intuition simply is reliable, or whether there’s no fact of the matter about how reliable it is. (Or, perhaps, whether there is no such thing as intuition.) But we can be sure that it is not unreliable.
Start with a fact that may point towards the unreliability of intuitions. Some truths are counter-intuitive. That’s to say, intuition suggests the opposite of the truth. I’m told it’s true that eating celery takes more calories than there are in the celery, so you can’t gain weight by eating it. If true, that’s pretty counterintuitive. And just about everything about “counter-steering”:http://en.wikipedia.org/wiki/Countersteering strikes me as counterintuitive. So those are some poor marks against intuition.
But now think of all the falsehoods that would be even more counterintuitive if true. If you couldn’t gain weight by eating steak, that would be really counterintuitive. Intuitively, steak eating is bad for your waistline. And that’s true! Intuitively, you have less control of a motorbike at very high speeds than at moderate speeds. And that’s true too! It would be really counterintuitive if remains from older civilisations were generally closer to the surface and easier to find than remains from more recent civilisations. And that’s false – the counterintuitive claim is false here.
In fact almost everywhere you look, from archeology to zoology, you can find falsehoods that would be very counterintuitive if true. That’s to say, intuition strongly supports the falsity of these actual falsehoods. That’s to say, intuition gets these right.
To be sure, most of these cases are boring. That’s because, to repeat a familiar point, we’re less interested in cases where common sense is correct. And here intuition overlaps common sense. But that doesn’t mean intuition is unreliable; it’s just that we don’t care about its great successes.
There are so many of these successes, so many falsehoods that would be extremely counterintuitive if true, that intuition can hardly be unreliable. But maybe it’s not actually reliable either. I can think of two reasons why we might think that.
First, there may be no fact of the matter about how reliable intuition is.
It’s counterintuitive that there can be proper subsets of a set that are equinumerous with that set. And that’s true, so bad news for intuition. It would be really counterintuitive if there could be proper subsets of a set of cardinality 7 that are also of cardinality 7. But there can’t be, so good news for intuition. And the same for cardinality 8, 9, etc. So there are infinitely many successes for intuition! A similar trick can probably be used to find infinitely many failures. So there’s no such thing as the ratio of successes to failures, so no such thing as how reliable intuition is.
On the other hand, perhaps we’re counting wrongly. Perhaps there is one intuition that covers all of these cases. Perhaps, though it isn’t clear. It isn’t clear, that is, how to individuate intuitions. Arguably our concept of an intuition isn’t precise enough to give clean rules about individuation. But if that’s right, there again won’t be any fact of the matter about how reliable intuition is.
This isn’t, I think, bad news for using intuition in philosophy. Similar arguments can be used to suggest there is no fact of the matter about how reliable vision is, or memory is. But it would be absurd on this ground to say that vision, or memory, is epistemologically suspect. So this doesn’t make intuition epistemologically suspect.
Second, there might be no single such thing as intuition. (I’m indebted here to conversations with Jonathan Schaffer, though I’m not sure he’d endorse anything as simple-minded as any of the sides presented below.)
It would be counterintuitive if steak eating didn’t lead to weight gain. It would be counterintuitive if Gettiered subjects had knowledge. In both cases intuition seems to be correct. But perhaps this is just a play on words. Perhaps there is no psychologically or epistemologically interesting state that is common to this view about steak and this view about knowledge.
If that’s so, then perhaps, just perhaps, one of the states in question will be unreliable.
I doubt that will turn out to be the case though. Even if there are distinct states, it will still turn out that each of them gets a lot of easy successes. Let’s just restrict our attention to philosophical intuition. We’ll still get the same results as above.
It would be counterintuitive if torturing babies for fun and profit was morally required. And, as it turns out, torturing babies for fun and profit is not morally required. Score one for intuition! It would be counterintuitive if I knew a lot about civilisations on causally isolated planets. And I don’t know a lot about civilisations on causally isolated planets. Score two for intuition! It would be counterintuitive if it were metaphysically impossible for me to put off serious work by writing blog posts. And it is metaphysically possible for me to put off serious work by writing blog posts. 3-0, intuition! I think we can keep running up the score this way quite easily, even if we restrict our attention to philosophy.
The real worry, and this might be a worry for the epistemological significance of intuition, is that the individuation of state types here is too fuzzy to ground any epistemological theory. For once any kind of intuition (philosophical, epistemological, moral, etc) is isolated, it should be clear that it has too many successes to possibly be unreliable.
Newcomb’s Centipede
The following puzzle is a cross between the “Newcomb puzzle”:http://en.wikipedia.org/wiki/Newcomb’s_paradox and the “centipede game”:http://en.wikipedia.org/wiki/Centipede_game.
You have to pick a number between 1 and 50; call that number _u_. A demon, who is exceptionally good at predictions, will try to predict what you pick and will pick a number _d_, between 1 and 50, that is 1 less than her prediction of _u_. If she predicts _u_ is 1, then she can’t do this, so she’ll pick 1 as well. The demon’s choice is made before your choice is made, but only revealed after your choice is made. (If the demon predicts that you’ll use a mixed strategy to choose _u_, she’ll set _d_ equal to 1 less than the lowest number that you have some probability of choosing.)
Depending on what numbers the two of you pick, you’ll get a reward by the formula below.
If _u_ is less than or equal to _d_, your reward will be 2u.
If _u_ is greater than _d_, your reward will be 2d – 1.
For an evidential decision theorist, it’s clear enough what you should do. Almost certainly, your payout will be 2u – 3, so you should maximise _u_, so you should pick 50, and get a payout of 97.
For a causal decision theorist, it’s clear enough what you should not do. We know that the demon won’t pick 50. If the demon won’t pick 50, then picking 49 has a return that’s better than picking 50 if _d_ = 49, and as good as picking 50 in all other circumstances. So picking 49 dominates picking 50, so 50 shouldn’t be picked.
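To make the payoff rule and that dominance claim concrete, here’s a minimal Python sketch. The payoff function just encodes the reward formula above; the rest is my own toy check, not part of the puzzle.

bc. def payoff(u, d):
    # reward rule from the puzzle: 2u if u <= d, otherwise 2d - 1
    return 2 * u if u <= d else 2 * d - 1
# The demon never picks 50 (her pick is one less than her prediction, floored at 1).
# Against every pick she might actually make, 49 does at least as well as 50 ...
assert all(payoff(49, d) >= payoff(50, d) for d in range(1, 50))
# ... and strictly better if she picks 49.
assert payoff(49, 49) > payoff(50, 49)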
Now the interesting question. What *should* you pick if you’re a causal decision theorist? I know of three arguments that you should pick 1, but none of them sounds completely convincing.
_Backwards Induction_
The demon knows you’re a causal decision theorist. So the demon knows that you won’t pick 50. So the demon won’t pick 49; she’ll pick at most 48. If it is given that the demon will pick at most 48, then picking 48 dominates picking 49. So you should pick at most 48. But the demon knows this, so she’ll pick at most 47, and given that, picking 47 dominates picking 48. Repeating this pattern several times gives us an argument for picking 1.
I’m suspicious of this because it’s similar to the bad backwards induction arguments that have been criticised effectively by Stalnaker, and by Pettit & Sugden. But it’s not quite the same as the arguments that they criticised, and perhaps it is successful.
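For concreteness, here’s a rough sketch of the elimination the backwards induction argument describes, under the same toy assumptions as the sketch above (in particular, I assume the demon will pick at most one less than the highest option not yet eliminated). It just automates the dominance step; it isn’t meant to settle whether the argument is legitimate.

bc. def payoff(u, d):
    return 2 * u if u <= d else 2 * d - 1
live = list(range(1, 51))  # picks the argument hasn't yet ruled out
while len(live) > 1:
    top = max(live)
    demon_max = top - 1  # she predicts a live pick and goes one lower
    # the next pick down weakly dominates the current top pick
    assert all(payoff(top - 1, d) >= payoff(top, d) for d in range(1, demon_max + 1))
    live.remove(top)
print(live)  # [1]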
_Two Kinds of Conditionals_
In his very interesting “The Ethics of Morphing”:http://web.mit.edu/~casparh/www/Papers/CJHareMorphing.pdf, Caspar Hare appears to suggest that causal decision theorists should be sympathetic to something like the following principle. (Caspar stays neutral between evidential and causal decision theory, so it isn’t his principle. And the principle might be slightly stronger than even what he attributes to the causal decision theorist, since I’m not sure the translation from his lingo to mine is entirely accurate. Be that as it may, this idea was inspired by what he said, so I wanted to note the credit.)
Say an option is unhappy if, supposing you’ll take it, there is another option that would have been better to take, and an option is happy if, supposing you take it, it would have been worse to have taken other options. Then if one option is happy, and the others all unhappy, you should take the happy option.
Every option but picking 1 is unhappy. Supposing you pick _n_, greater than 1, the demon will pick _n_-1, and given that, you would have been better off picking _n_-1. But picking 1 is happy. Supposing you pick 1, the demon will pick 1, and you would have been worse off picking anything else.
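Here’s a quick sketch of that check, reusing the toy payoff function and my simplifying assumption that the demon picks one less than the number she predicts (with a floor of 1); the function names are mine.

bc. def payoff(u, d):
    return 2 * u if u <= d else 2 * d - 1
def demon_pick(u):
    return max(u - 1, 1)  # one less than the predicted pick, floored at 1
def happy(n):
    d = demon_pick(n)  # supposing you pick n, this is what she'll (almost certainly) have picked
    return all(payoff(m, d) < payoff(n, d) for m in range(1, 51) if m != n)
def unhappy(n):
    d = demon_pick(n)
    return any(payoff(m, d) > payoff(n, d) for m in range(1, 51) if m != n)
print([n for n in range(1, 51) if happy(n)])    # [1]
print([n for n in range(1, 51) if unhappy(n)])  # 2 through 50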
There’s something to the _pick happy options_ principle, so this argument is somewhat attractive. But this does seem like a bad consequence of the principle.
_Stable Probability_
In Lewis’s version of causal decision theory, we have to look at the probability of various counterfactuals of the form _If I were to pick n, I would get k dollars_. But we aren’t really told where those probabilities come from. In the Newcomb problem that doesn’t matter; whatever probabilities we assign, two boxing comes out best. But the probabilities matter a lot here.
Now it isn’t clear what constrains the probabilities in question, but I think the following sounds like a sensible constraint. If you pick _n_, the probability the demon picks _n_-1 (or _n_ if _n_ = 1) should be very high. That’s relevant, because the counterfactuals in question (what would I have got had I picked something else) are determined by what the demon picks.
Here’s a constraint that seems plausible. Say an option is Lewis-stable if, conditional on your picking it, it has the highest “causally expected utility”. (“Causally expected utility” is my term for the value that Lewis thinks we should try to maximise.) Then the constraint is that if there’s exactly one Lewis-stable option, you should pick it.
Again, it isn’t too hard to see that only 1 is Lewis-stable. So you should pick it.
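And here’s the corresponding check, on the simplifying assumption that, conditional on your picking _n_, the causally expected utility of each option is just its payoff against the demon’s near-certain pick of _n_-1 (or 1 if _n_ is 1).

bc. def payoff(u, d):
    return 2 * u if u <= d else 2 * d - 1
def demon_pick(u):
    return max(u - 1, 1)
def lewis_stable(n):
    # conditional on picking n, is n's payoff against the demon's likely pick maximal?
    d = demon_pick(n)
    return all(payoff(n, d) >= payoff(m, d) for m in range(1, 51))
print([n for n in range(1, 51) if lewis_stable(n)])  # [1]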
_Summary_
It seems intuitively wrong to me to pick 1. It doesn’t dominate the other options. Indeed, unless the demon picks 1, it is the worst option of all. And I like causal decision theory. So I’d like a good argument that the causal decision theorist should pick something other than 1. But I’m worried (a) that causal decision theory recommends taking 1, and (b) that if that isn’t true, it makes no recommendation at all. I’m not sure either is a particularly happy result.
Limbs and Limps and Persons and Bodies
I was thinking again about Ryle’s response to Descartes’ argument that minds are immaterial. Ryle, I think, takes Descartes to be making the following inference.
(1a) My mind is not identical to any physical part of me.
(2a) My mind is a part of me.
(3a) So, my mind is a non-physical part of me.
(4a) So, there are non-physical things.
And Ryle thinks that’s a bad argument, one whose badness can be seen by considering the following argument. (Imagine, for the sake of this argument, that I have a limp.)
(1b) My limp is not identical to any physical part of me.
(2b) My limp is a part of me.
(3b) So, my limp is a non-physical part of me.
(4b) So, there are non-physical things.
Clearly the latter argument breaks down. (4b) is pretty clearly not supported by this inference, and Ryle thinks (3b) isn’t either. The problem is that Descartes thinks that _I have a mind_ is like _I have a limb_, whereas it is really like _I have a limp_. We possess things other than parts; in particular we possess attributes. These include limps and minds.
This is a way we can reject Descartes without worrying about the modal argument Descartes offers for (1a). Indeed, we can accept (1a). And, in a sense, Ryle does. He certainly thinks it is not true that my mind is a physical part of me, but perhaps as a category mistake the negation of this isn’t true either. No matter; (1a) at least gets a truth across whether it is literally true or not.
I tend to think Ryle is basically correct about all of this. The Australian tradition (represented by Smart, Lewis and Jackson), I think, says that Ryle concedes too much to Descartes, and that he shouldn’t concede (1a). Determining whether that’s right would, I think, require a large cost-benefit analysis of the different ways Smart and Ryle think about minds, which is obviously too big a task for a blog post. But here might be another reason to think Ryle was on the right track.
It seems to me that even if (5) were true, (6) would be in some way defective.
(5) I have a limp.
(6) My body has a limp.
(6) sounds like a category mistake to me. That is, it sounds like it isn’t true. If that’s right, then since (5) is true of me but (6) isn’t true of my body, it follows immediately by Leibniz’s law that I’m not identical to my body.
The kind of move I’m making here is similar to the move Kit Fine has been making over recent years, arguing for the non-identity of (what’s usually thought of as) a thing and what it is composed of without looking at modal or temporal properties of the thing. But what’s nice, I think, about this example is that it makes the non-identity point without suggesting any particularly odd metaphysics. We don’t get a sense that there are spooks of any kind by thinking about limps.
I’m inclined to take the difference between (5) and (6) as more evidence for the view I independently favour, namely that people are really *events*. I’m the event of this body having certain dynamic features. When the event that is me ends, my body will (in all probability) survive, but it will lose those dynamic features. My body isn’t an event, so I can’t be identical to it. Unless we regard events as too metaphysically spooky to contemplate, having people be distinct from bodies isn’t any kind of spooky view.
Perception and Philosophy
I’ve been thinking a fair bit about the epistemology of philosophy recently. And I find it helpful to think out the possible positions by comparison to possible positions on the epistemology of perception. Here, crudely, are four positions one might take about perception.
- Scepticism – Our perceptual evidence consists of non-factive mental states. (Perhaps it is the existence of those states, perhaps it is our beliefs about those states; the categories I’m interested in don’t discriminate along those lines.) And those states don’t give us sufficient evidence to warrant belief in propositions about the non-phenomenal world.
- Idealism – Same as scepticism, except that the propositions we pre-theoretically thought were warranted by perceptual evidence are, in some sense, about the phenomenal world.
- Indirect Realism – Our perceptual evidence consists of non-factive mental states and those states do give us sufficient evidence to warrant belief in the non-phenomenal world.
- Direct Realism – Our perceptual evidence consists in factive mental states. (This includes views on which the evidence is the existence of the state, and views on which the evidence is the content of the state.) Since our evidence consists in facts about the non-phenomenal world, we have non-inferential warrant for propositions about the external world.
There are lots of different ways in which people draw the direct/indirect realism distinction. See, for instance, “this paper”:http://consc.net/neh/papers/copenhaver.htm by Rebecca Copenhaver for a number of different ways the distinction is drawn. Note in particular that my indirect realist doesn’t require that we represent the existence of mental states in perception, let alone the existence of non-factive mental states, let alone believe that we have non-factive mental states, let alone have those beliefs be the basis for beliefs about the non-phenomenal world. If I token _that’s a table_ in my “visual representation box”, and that tokening justifies my belief that that’s a table, that counts as indirect realism for me. (Assuming that the visual representation box does, or at least could, include falsehoods.)
I make no claim that my usage here is in keeping with traditional usage, only that it’s one helpful way to divide up the categories.
Note that one can have different views about different sense modalities. One could be a sceptic about colour, an idealist about smell, and a direct realist about spatial vision, for instance. More interestingly, I think there’s a view worth taking seriously that is indirectly realist about most perception, but directly realist about touch. The motivation for this comes from (of all places) Descartes. Here’s something Descartes writes towards the end of the “sixth meditation”:http://oregonstate.edu/instruct/phl302/texts/descartes/meditations/Meditation6.html.
bq. I remark, besides, that the nature of body is such that none of its parts can be moved by another part a little removed from the other, which cannot likewise be moved in the same way by any one of the parts that lie between those two, although the most remote part does not act at all. As, for example, in the cord A, B, C, D, [which is in tension], if its last part D, be pulled, the first part A, will not be moved in a different way than it would be were one of the intermediate parts B or C to be pulled, and the last part D meanwhile to remain fixed. And in the same way, when I feel pain in the foot, the science of physics teaches me that this sensation is experienced by means of the nerves dispersed over the foot, which, extending like cords from it to the brain, when they are contracted in the foot, contract at the same time the inmost parts of the brain in which they have their origin, and excite in these parts a certain motion appointed by nature to cause in the mind a sensation of pain, as if existing in the foot; but as these nerves must pass through the tibia, the leg, the loins, the back, and neck, in order to reach the brain, it may happen that although their extremities in the foot are not affected, but only certain of their parts that pass through the loins or neck, the same movements, nevertheless, are excited in the brain by this motion as would have been caused there by a hurt received in the foot, and hence the mind will necessarily feel pain in the foot, just as if it had been hurt; and the same is true of all the other perceptions of our senses.
There’s an interesting principle here. The principle is that if we get evidence through a chain, then what evidence we get supervenes on the qualities of the last link in the chain. I’m not sure that’s right, but for the sake of developing a position, let’s say that it is. If that’s right, it follows that vision, for instance, can’t be understood the way the direct realist wants to understand it. For if the light between me and my computer (the chain through which I get visual knowledge of the computer’s properties) were altered by a malicious demon, so the computer changed but the light immediately around me stayed the same, I would get the same evidence. But that evidence would not consist in facts about the computer, for I would represent what I actually represent, and this would now be false. So we get very quickly led to indirect realism about any form of evidence that arrives through a chain.
Now for Descartes, all our evidence arrives through chains or cords of one kind or another. That’s because all our evidence has to get to the brain, and thence to the pineal gland. But perhaps that’s false. If we’re good Ryleans, and think that we (as opposed to our brains) gather evidence, then perhaps tactile evidence is not really mediated. If my hand touches my desk, there is nothing between the hand and the desk that mediates the connection. So perhaps we can be direct realists about tactile perception. Note that Descartes’ dualism isn’t doing much work in this argument; what is doing the work is that thinking takes place in (or through) the brain. And that’s a much more widely held view. But arguably it isn’t right; arguably thought, or at least representation/evidence collection, takes place at least throughout the nervous system, and perhaps throughout the whole body. Or so we’ll assume.
The theory we end up with is largely indirect. We have direct evidence that the external world exists. We can touch it. (Note the prevalence throughout history of thinking that touch gives us especially direct evidence of the external world. We refute idealism by _kicking_ the stone, not looking at it.) But we don’t get a whole lot else. Most of the details require filling in by evidence that is indirectly related to its subject.
Coming back to philosophy, all four of our positions are well represented in the contemporary debate.
The sceptic is the person who thinks that we have at best indirect evidence, i.e. intuitions, for philosophical theses, and these are not good evidence for the claims we want.
The idealist thinks that we only have intuitions, but that’s good because the desired conclusions were largely conceptual in nature. The idealist, that is, thinks philosophy is largely an investigation into the nature of concepts, so facts about mental states, i.e. intuitions, are a perfectly good guide.
The indirect realist thinks that philosophical questions are not, or at least not usually, about concepts. And she thinks that our evidence is largely intuitive, and hence indirect. But she thinks that, when we’re doing philosophy well, these can provide warrant for our desired conclusions.
And the direct realist thinks that all three are wrong about the nature of evidence. We start with evidence that bears directly on the questions we’re interested in. We simply know, and hence have as part of our evidence, facts like the fact that a Gettiered subject doesn’t know, and that torturing cats for fun is wrong. There isn’t any need to worry about the link between evidence and conclusion since the evidence often entails the desired conclusion.
Williamson’s _The Philosophy of Philosophy_ is largely an argument for direct realism in philosophy, an argument that often proceeds by attacking the other views. So chapter 2 is a direct attack on idealism. Chapter 7 is an attack on indirect realism, with some attacks on scepticism thrown in. And the sceptic is the subject of criticism throughout the book, especially in chapters 5 and 7.
I think the position I want to end up holding is something like the position on perception I outlined above. Direct realism is partially true. Some of our philosophical evidence consists of knowledge, not of non-factive states. (For instance, our knowledge that a vegetarian diet is healthy is philosophical evidence.) But this won’t get us very far, any more than touch alone gives us much perceptual insight into the world. Most of our evidence is indirect; it is intuitive. So I’m largely an indirect realist about philosophy.
Holding indirect realism leads to two challenges. First, we must respond to arguments against indirect realism. The rough response I’ve been running in recent posts has been that the arguments against indirect realism are generally arguments against a very strong form of indirect realism, and we can hold on to a modified form without any cost. Second, we must explain how indirect evidence can bear on philosophical questions. That’s obviously the harder challenge, and one I wish I had more to say about.
Refereeing Journals and Rants
Over at “Brian Leiter’s blog”:http://leiterreports.typepad.com/blog/2008/07/a-proposal-abou.html there was a long thread recently about journal refereeing and reviewing practices. I thought I’d make a few points here that are getting lost in the crush.
1) In my experience, most of the absolute disasters with refereeing delays concern (a) potential referees who simply don’t answer requests to referee, and (b) cases where the editors run out of people they know/trust on the relevant topic. If everyone who received a request to referee a paper could answer it, even in the negative, that day, and if those answering negatively suggested 1-3 names of people with some expertise in the field, that would make things flow much more smoothly.
2) Relatedly, I think a lot of people, when refereeing, don’t take into account how time-sensitive it is. Imagine you’ve got a paper that you’ve promised to referee within the month. And you’ve got a project of your own that is due at the end of that month. And you’ve got enough time in the month to do both. What should you do? I think the answer is that you should referee the paper straight away. Usually getting your paper done earlier won’t make a difference to anyone. Getting the report done earlier will make a difference. I think the system would work a lot more smoothly if every referee, upon getting a paper, seriously considered the question “Can I do this today?” Obviously if you have to present a lecture that day, or the next day, and it isn’t done, then the answer is no. But often the answer is yes. It’s not as if you’ll often spend more than a few hours on the paper, or that refereeing it that day will take any more time, but it will make a difference to editors and authors.
3) If we want to keep the model of some journals being run through departments, rather than through publishers, then some amount of delay is going to be inevitable. If nothing else, most journals run by departments have a support staff of 1. If that one person is sick, or on annual leave for a time, the whole system basically creaks to a halt. If that person is spending literally all their time for a two or three week period getting an issue ready for print, nothing happens with submissions. I’ve never had to deal with this, but I imagine if you don’t have good staff (or, more likely, don’t have good staff management) things are worse.
Probably the single biggest thing that could be done to improve journal response times would be to find a way to keep the system running when less than fully staffed. But it’s hard to do that in a small operation, when you can’t simply move staff from elsewhere onto the project.
4) The journal management software systems that are currently being rolled out make a huge difference. There’s nothing as good for keeping a paper from dropping off the face of the earth as reminders every few days that your report on it is overdue. (Since I sign off on every paper on Compass, I get a lot of these, but I’m not that late on too many.) Potentially these systems can, by automating processes now done by staff, help a lot with point (1). And that’s important, because otherwise point (1) seems to me to be intractable short of handing over all the journals to commercial presses.
Having said that, everyone hates the software when it is being rolled out. But it really makes all the difference in the world.
5) There’s been some discussion of cutting back on referee reports. I think this is basically a good idea. It’s true that referees need to say something to editors about what’s good or bad about a paper. But from experience I’ve learned that it’s *much* easier to find something informative to say about a paper to an editor than it is to say something informative and polite to an author. And anything that speeds up the process is probably good.
6) But I really don’t think the comments thread at Leiter is taking seriously how much of the problem is caused by there being too many papers being submitted. If every paper being submitted was a real philosophical advance, that wouldn’t be a problem – it would be paradise. But I don’t really think this is so.
Lots of papers I see as a referee are basically glorified blog posts that don’t attempt to make more than a very small point. Some of them would be quite good blog posts. But most journals aim a little higher than that. (Note this is different to the length point. Lots of good papers, even papers in top anthologies, are short. But they are all ambitious.)
Disturbingly, many papers seem to be largely unaware of the relevant literature, especially the most recent developments. I see too many papers that simply don’t pay attention to relevant work from the last 10 years.
Now I don’t want to pretend that I’ve never written (or published) papers that fall in one or other of these categories. But I do think that many papers get sent out when the author could profitably have either rolled the paper into a larger paper, or spent time talking to colleagues/friends/blog readers about relevant literature that should be consulted.
I used to think this was a tragedy of the commons problem. (Mark van Roojen makes this suggestion in the Leiter thread.) The pressure to publish meant that not-quite-cooked papers were frequently being released. And that’s too bad, but an inevitable consequence of everyone acting in enlightened self-interest. But really I don’t think that’s true.
That’s because I don’t think most people appreciate how important very very good papers are to one’s philosophical career. If you’re Tim Williamson or David Lewis you can write several papers a year that are important and groundbreaking. But most of us aren’t like that. For most of us, most of the papers we write will sink without much trace. The vast bulk of attention will be paid to just a few papers. This can be seen in public through looking at citation rates. (Here are “mine”:http://scholar.google.com/scholar?q=weatherson&hl=en&lr=&btnG=Search on Google Scholar for example.) The most cited papers have an order of magnitude more citations than the bulk of papers, especially when self-citations are removed.
And if we care about professional advancement as much as contribution to philosophical thought, the same story really holds. People tend to get hired based on their best papers. (And they tend to get passed over based on their worst papers.) This shouldn’t be too surprising. People are busy. They don’t have time to read a job candidate’s full dissertation, let alone their full output if they’re more senior. They read what is (reputed to be) the best work. And that’s what goes into hiring decisions. As we see every year when looking at junior hires, it doesn’t really matter if that best paper was published in _Philosophical Review_, the Proceedings of the Philosistan grad conference, or (more likely) the candidate’s own website. What matters is how good it is, or appears. As a rule, spending more time improving your best paper will do more for your professional prospects than sending it off and moving on to another paper.
Indeed, even if one just cares about publication, I imagine a lot of people (probably me included) could do with being slower on the “submit” button. Most, though not all, bad papers get rejected. And that takes time. Spending time making a good paper very good, rather than submitting the (seemingly) good paper may well mean one fewer rejection, and hence quicker publication.
So, simple solution to the problem of journals being so slow – don’t submit so much!
Barstool Philosophy
One of the things that’s been a running thread through my recent thoughts about the epistemology of philosophy is that it is importantly a group activity. This is largely for prudential reasons. For those of us who aren’t Aristotle or Kant, by far the best way to regiment our philosophical thinking is subjecting it to the criticisms of others. That’s a substantial constraint; it means giving up points that can’t convince our peers. And sometimes that will have costs; we’ll be right and our peers wrong. Sometimes we might even know we’re right and they’re wrong. But as a rule one does better philosophy if one subjects oneself to this kind of constraint from the group.
Or so it seems to me. A thorough empirical investigation would be useful here, especially in terms of trying to figure out just what exceptions, if any, exist to this general principle. But given the relatively low quality of philosophy produced by most people who don’t regard themselves as being regulated by criticisms of their peers, I think it’s pretty clear the rule as a whole is a good one.
That all suggests that the metaphor of “armchair theorising” or “armchair philosophy” is very much mistaken. For armchairs are really places where one engages in solitary activities. And contemporary philosophy is a group activity par excellence.
So we need a new metaphor. “Conference room philosophy” sounds dreary even to me. “Coffeeshop philosophy” is better. But it might be better still to keep the idea of a seat. After all, most philosophy is done sitting down. I suggest “barstool philosophy”. I’m not convinced the best philosophy is done during/after drinking, but the image is pleasingly social at least!