h-scores of Philosophy Journals

A quick and dirty (and probably inaccurate) count of the h-scores of various philosophy journals over the last 10 years (i.e., for papers published in 2004 and later). A journal's h-score here is the largest n such that the journal has published at least n articles since 2004 that have each been cited at least n times. (I thought this might be relevant to this poll.)

Mind & Language – 24
P&PA – 19
Philosophy of Science – 18
BJPS – 18
Philosophical Studies – 18
Ethics – 18
Journal of Political Philosophy – 17
Nous – 16
PPR – 16
Linguistics & Philosophy – 16
Phil Review – 15
JPhil – 14
Mind – 14
PQ – 13
AJP – 12
Economics & Philosophy – 11
Erkenntnis – 10
APQ – 8
Monist – 7
CJP – 7
Journal of the History of Philosophy – 7
Review of Metaphysics – 3
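
For concreteness, the counting rule above can be sketched in a few lines of Python. This is just an illustrative sketch with made-up citation counts, not the method or data used to produce the table.

```python
def h_score(citations):
    """Largest n such that at least n of the counts are >= n."""
    h = 0
    # Sort descending; while the i-th largest count is >= i, h can be i.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

# Made-up citation counts for nine articles, for illustration only.
print(h_score([30, 18, 12, 9, 6, 4, 2, 1, 0]))  # prints 5
```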

2 Replies to “h-scores of Philosophy Journals”

  1. Oh, it’s not intrinsically important. I just think it’s a nice way to summarise how influential a journal has been.

    Here’s the best case I can make for its importance. Any single measure of citation impact you use will have weird cases; ones where the journal is intuitively much more (or much less) important than the number you use.

    For example, if you just look at a journal's average number of citations, then a journal that publishes one article which takes off (à la Machamer, Darden and Craver 2000), while most of its other articles are barely cited, could still end up really high. But that’s not a high-impact journal; it’s a low-impact journal that published one high-impact paper.

    You can do the same kind of exercise for any measure you like – including h-scores. The question is, when you look at the cases that the measure gets ‘wrong’, how likely are they to come up in real life? The kind of example I mentioned in the previous paragraph is, I think, not too implausible. If I could get reliable data about Philosophical Topics over the last 15 years, I bet “Shifting Sands” would pull its average citation count up a lot.

    On the other hand, the kind of case that throws off h-scores is pretty weird. If a journal has 20 out of its last 100 papers with exactly 20-30 cites each, and the others with 0-5 cites each, then it will look much better by h-score than it really is. But I don’t think that happens very often. There may be 1 or 2 ‘outlier’ articles that are cited at a much higher rate than the rest of the journal, but in practice there aren’t 20.
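To make the two failure modes concrete, here is a toy comparison in Python. The citation distributions are entirely invented for illustration: journal A has one runaway paper, journal B has the cluster of 20 moderately cited papers described above.

```python
def h_score(citations):
    """Largest n such that at least n of the counts are >= n."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

# Journal A: one runaway paper, everything else barely cited.
journal_a = [500] + [1] * 99
# Journal B: 20 papers with 20-30 cites each, the rest with 0-5.
journal_b = [25] * 20 + [2] * 80

# A's mean (5.99) is flattered by the one outlier, but its h-score is 1.
print(sum(journal_a) / len(journal_a), h_score(journal_a))
# B's mean (6.6) is similar, but its h-score is a much flashier 20.
print(sum(journal_b) / len(journal_b), h_score(journal_b))
```

So the mean misreads journal A, and the h-score misreads journal B; the question in the text is which distribution actually occurs.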

    What does worry me, as several people pointed out on Facebook, is that what we’re seeing here is an artifact of different citation practices in different disciplines. Psychology and linguistics cite at higher rates than philosophy does. So Mind & Language papers that get picked up outside of philosophy get more citations than equally influential articles in, say, the AJP, which are only important to philosophers.

    That’s part of the story of what’s going on here, and why all the journals at the top either publish a lot of stuff of interest to non-philosophers, or publish a lot full stop (like Phil Studies). But even that is interesting to know, I think: that Mind & Language, Ethics and Philosophy of Science all routinely publish work that gets picked up outside philosophy (in different departments, of course) should inform our views about those journals.
