Placement Rates for Top Philosophy Programs

(Originally posted on Crooked Timber.)

We all know there are lots of horror stories about trying to find work in academia. The smart money is on not even starting a PhD unless you are prepared to sell your soul on the job market. Just say no to those fancy scholarships. Unless, it seems, they’re from a good school in philosophy, where the numbers don’t exactly support the bad tidings.

Thanks to lobbying from various sources (prominent amongst them being Brian Leiter’s Philosophical Gourmet Report) we now have quite a bit of data about how philosophy PhDs do on the job market. And the news on the whole is fairly good, or at least much better than I had expected.

Here are the recent placement records of (most of) the top 15 U.S. philosophy departments.

Princeton: 48%, 81%
Rutgers: 36%, 85%
Michigan: 33%, 70%
Pittsburgh: 40%, 84%
Stanford: 27%, 72%
Harvard: 63%, 96%
MIT: 33%, 77%
Arizona: 13%, 91%*
UCLA: 19%, 75%
UNC: 20%, 80%
Berkeley: 35%, 82%
Notre Dame: 11%, 80%
Texas: 4%, 60%

Note that the omissions are NYU, which hasn’t had a PhD program long enough to have a meaningful placement record, and Columbia, who either don’t want to share this information with us, or (more likely) have posted it somewhere too hard for an amateur sleuth like me to find.

So what are those numbers after the records? They are my rough estimates of, first, the percentage of grads who ended up in great jobs, and second, the percentage of grads who ended up in good jobs. The ‘great’ classification is fairly subjective, and I don’t think I really kept to a constant standard throughout. The ‘good’ classification is meant to be a 3/3 teaching load or better, tenure-track or tenured, plus the occasional 2–3 year research-oriented position at a good school (provided it is a first job). I count those as good jobs because people take them over 3/3-load tenure-track jobs. I don’t know the teaching load at every school in the country, and I probably counted too many jobs as good. Arizona had lots of grads at schools I hadn’t heard of – I counted most of them as good, but plenty might not be so good. The 91% is probably high – but it is still over 70%.

(This concession isn’t meant to suggest that I stand by all the other numbers. The margin of error on my calculations is probably +/- 20%. But I think they’re a fair approximate indication of what is happening.)

Overall, I’d say, those are pretty good numbers. The Texas percentages aren’t great, but Texas has a very big PhD program. In numerical terms they were placing as many people as most of their peers; they just had lots of non-placements (several apparently voluntary) as well. The top 14 schools had placement rates of 70% or better. It’d be surprising to find even an average student from one of those schools who didn’t have at least a decent job.

There are of course limits to one’s optimism. Things get tougher for students not from a top 15 school. The data on these schools starts to get sketchier as well, perhaps not coincidentally. (For one thing, schools suddenly stop listing how many of their grads didn’t get jobs, something all the schools listed do.) And obviously there are people even at the best departments who aren’t getting good jobs. And even good academic jobs occasionally leave something to be desired. It’s hard to tell from the publicly available information whether some of these people have, say, never been offered a job within 10,000 miles of their home. That can be a little annoying, even if there are very good jobs offered 11,000 miles away. And of course many of these people don’t start in good jobs, even if they end up in them. (And some start in good jobs and don’t get tenure or leave for other reasons. But I don’t think it’s fair to chalk those up to a bad job market.)

So it’s not all a bed of roses. But the impression the information creates is that in philosophy at least, median to somewhat below median students at good to great departments will get pretty good jobs. And that’s a lot better than both the impression I have of most humanities disciplines and the impression many people in philosophy have of the state of play within our discipline. I don’t know if there have been any good cross-disciplinary studies done on this recently, but I would be surprised if philosophy isn’t one of the better humanities to be in from the point of view of finding work.

13 Replies to “Placement Rates for Top Philosophy Programs”

  1. As Brian said recently, no-one reads old blog postings, so no-one will read this one, either, which is probably a good thing. But I recently read the post to which this comment is attached and was immediately struck by the fact that, since Brian has produced a quantitative measure of placement success, we can do something I’ve wanted to do for a long time: Use statistical methods to examine the claim, often made by fans of the Philosophical Gourmet Report, that its rankings (or faculty quality, which is what it purports to measure) correlate well with placement results.

    I used Brian’s percentages to calculate a number of correlation figures, using the Pearson Product Moment Correlation formula. I did so in a number of different ways, which are listed in the table below. I primarily used the mean scores for departments, since the 2003-04 version of PGR counsels users to pay more attention to them than to ordinal rank, but I calculated one of the measures using ordinal rank, as well, for comparison.

    Three bits of explanation. First, “my formula” is an attempt to bring Brian’s two percentages together, so that placement into “good” jobs counts but placement into “great” jobs counts a bit more. Since the “good” percentage includes the “great” jobs, the formula is (2*GREAT) + (GOOD − GREAT). So Princeton gets (2*48) + (81 − 48). Second, I did the calculation both including and excluding Texas. As Brian notes, their placement numbers may be artificially low due to the size of their program, and their being as low as they are compared to the other programs therefore skews the scatter plot. If Texas’s “true” figures are better than Brian’s numbers, as he suspects they are, then the correlation would not be as good. Finally, I report both the correlation and its square, which is sometimes a more useful figure. In my girlfriend’s stats course, long ago, they said that the square of the correlation gives a measure of “predictive value”: it is the proportion of the variance in y that x accounts for. So if the square of the correlation is 0.5, knowing x accounts for about half the variation in y; if it’s 0.25, only about a quarter. In the latter case, a high value of x makes it somewhat, but not very much, more likely that one will find a high value of y.
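To make the arithmetic concrete, here is that composite score as a small Python sketch (the function name is mine; the two percentages per school are Brian’s figures from the post):

```python
# Composite placement score: "great" placements count double.
# Brian's "good" percentage includes the "great" jobs, so we
# subtract to avoid counting them twice.
def composite(great, good):
    return 2 * great + (good - great)

princeton = composite(48, 81)  # 2*48 + 33 = 129
texas = composite(4, 60)       # 2*4 + 56 = 64
```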

    Unfortunately, the blog won’t do HTML tables, so what follows is kind of ugly….

    X-value         Correlation   Square   Corr. (excl. Texas)   Square (excl. Texas)   Square (ordinal rank)

    “Good” Jobs     0.108         0.011    -0.15                 0.023                  0.003
    “Great” Jobs    0.509         0.259    0.42                  0.176                  0.228
    My Formula      0.477         0.228    0.418                 0.174                  0.387

    As one can see from the table, PGR is most useful if one wants to predict how likely a department is to get its students “great” jobs. Even then, however, its predictive value is not very impressive, at just 26%. (If we exclude Texas, predictive value falls to 18%.) PGR is essentially useless if one’s goal is to predict how likely a department is to get its students “good” jobs, the predictive value, even with Texas excluded, being below 10%; with Texas included, the correlation is actually negative. If one combines the two, using my formula, then predictive value is close to what it was for “great” jobs, but is below 25%.

    Using ordinal rank gives worse results in the first two cases (the correlation is again negative in the case of “good” jobs, at -0.05), but improves matters when my formula is used. Predictive value then increases to 39%, which is not great, but isn’t terrible, either.

    Of course, as Brian W notes, there are all sorts of concerns one might have about the data he has provided us, so one should not draw any strong conclusions from the foregoing. Nonetheless, this exercise suggests that it is at least an open question whether the mean scores or rankings reported in PGR correlate at all well with placement results. Perhaps an improved measure of placement success could be derived that would permit stronger conclusions to be drawn.

    A stats question: Can someone explain to me why scaling the figures in these calculations gives different results? So, if I divide all the placement values by 10, or normalize them to a five point scale, like the mean scores, the correlations are much worse? Shouldn’t a good measure of correlation be insensitive to linear transformations of the data?
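For what it’s worth, the ordinary Pearson coefficient is, algebraically, unchanged by positive linear rescaling of either variable, so a discrepancy after rescaling is more likely an artifact of how the spreadsheet was set up than of the statistic itself. A minimal check in plain Python (the “great” percentages are from the post; the paired scores are purely illustrative stand-ins, not actual PGR means):

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson product-moment correlation of two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

great = [48, 36, 33, 40, 27, 63, 33, 13, 19, 20, 35, 11, 4]  # from the post
scores = [4.6, 4.5, 4.4, 4.3, 4.2, 4.1, 4.0,
          3.9, 3.8, 3.7, 3.6, 3.5, 3.4]  # illustrative only, NOT real PGR means

r = pearson(great, scores)
r_rescaled = pearson([g / 10 for g in great], scores)  # same value up to rounding
```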

  2. Wow, cool results!

    A few little disclaimers, though I think it’s a very interesting study. (And I don’t know enough stats to know why linear transformations matter. I thought they didn’t with usual measures of correlations.)

    It turns out my standards for ‘good’ and ‘great’ in this study corresponded to no identifiable objective quality. They did sort of correspond to a subjective quality. Great jobs were jobs I would have left Australasia for (at least in the short term), good jobs were ones I would have stayed in academia for, provided they were in Australasia. That’s a pretty crude test, but it gives you some sense of what I was actually measuring. I was a little too confident that my preferences tracked objective reality.

    I suspect the correlations get much stronger if we go further down the rankings. The problem is that lower ranked departments simply don’t report all the necessary data. All I can say is, if you’re planning to go to a school outside the top 15, demand comprehensive placement data. (And if you’re planning to go to a school outside the top 25, make sure you really are a philosophy junkie, because the job prospects start looking less than rosy.)

  3. It was posited that “no-one reads old blog postings, so no-one will read this one, either.”

    I write to tell you of your error.

    Jill

    I’m sure Prof. Heck’s question has been answered by now, but I just came across this and he may have forgotten about it. At any rate, correlations are only invariant under linear transformations if the data fall under a Gaussian distribution. Typically this is not the case: the small sample size here would suggest a t distribution in any event, and ordinal data tend to follow something much closer to a Poisson distribution.

    Hope that helps.

    Ryan
    (a philosophy and one-time sociology student at Boston College)

  5. What do placement coordinators actually do?

    Some people talk about departments’ placement records as though placement were simply an indicator of various other virtues: Good departments attract the best students. Good departments give their students a better education. Prestigious faculty members write compelling letters of recommendation for their students. Good departments earn good reputations, which help distinguish their graduates in the job market, etc.

    This is all plausible, but that can’t be the whole story. Otherwise, why do departments need placement coordinators? It isn’t because they find out about jobs that nobody else knows about—philosophy jobs are rare but they’re well-publicized.

    The term “placement” implies that departments are actively doing something to place their graduates in good jobs. What is it, exactly? A lot of excellent academic departments are concerned that their placement records don’t reflect their overall stature. The mandate of their new placement coordinators is probably not to improve the overall excellence of the department, or the quality of graduate education, or any of the other factors that ought to help graduates get jobs.

    I’m afraid that this emphasis on placement is distorting what should be a meritocratic hiring process. In an ideal world, programs would strive to improve their placement records by producing better graduates, or better scholarship – thereby conferring a competitive advantage upon their graduates. I worry that all this emphasis on placement is rewarding departments for systematically working their connections and promoting their students. There’s nothing morally wrong with this. On the contrary, departments that didn’t do this for their students would be letting them down.

    However, I can’t help but wonder whether this trend is healthy for the profession as a whole. As a small profession, we should take extra care to guard memetic diversity. As the best departments devote more of their superior resources to increasing placement rates, even more jobs will be occupied by the graduates of a handful of programs.

    This post is already a lot longer than I intended, so I’ll leave it there.

  6. Interesting point.

    1. Of course, if the programs with the most resources concentrated on producing better scholarship and students, that would still presumably reduce memetic diversity. So diversity seems to me to be an independent point.

    2. Different placement officers do very different things, I believe. Some coach job market students, some send e-mail or phone search committees, some are very laissez faire.

    3. A lot of the interest in placement is, obviously, perfectly reasonable. It’s of great interest to prospective graduate students, and even if ideally they would have more platonic interests, one can hardly blame them for taking some interest in the practical prospects.

    4. I still agree with you about placement officers.

  7. You’re right about the diversity. What I meant was that excessive preoccupation with active “placement” risks reducing diversity without increasing quality.

    Interest in placement is reasonable, especially for graduate students. But now that placement has become a closely-studied quantified variable, a lot of institutions are getting caught up in a potentially worrisome game. They’re vying to bump up their placement numbers to improve their ratings.

    I suspect that the most effective active placement strategies leverage social, rather than academic, capital. But then improved placement scores get quantified and fed back into the overall rating system. If this continues, the relative importance of prestige and connections will “feed forward” to the advantage of the existing establishment.

  8. Because there is no standard specifying what information should be reported, I’m worried that the data the schools report is skewed. Michigan, e.g., lists “the results of every job search by students who first entered the market in the years …” — that’s great, though their data may compare less favorably as a result. It’s not clear that other schools are listing all students who got a PhD, went on the market, and failed to get a job this last year. It seems clear that a student who is on the verge of getting her PhD, went out, and failed to get a position is not listed on many sites. (I’m probably missing something simple, but I don’t understand Harvard’s distinction between students that are ‘noted’ and those that are ‘listed’ — I assume this isn’t a bookkeeper’s way of getting a high placement percentage, but it could be clearer.)
    It would be ideal to have the results for all students who entered the school, since a school with a 100% placement record and an 80% drop-out rate is far from ideal, but I doubt that sort of data is forthcoming.

  10. A few months ago I collected some hiring numbers that loosely complement BW’s placement numbers. I visited the websites of the top 50 U.S. philosophy programs as reported in the 2004-06 Gourmet Report, identifying all the faculty members of these departments who received their PhDs after 1998. There were 108 of these, by my count. (I may have missed a few, since in a handful of cases it wasn’t clear when the degree was conferred.) For each faculty member, I looked up the Gourmet Report ranking of his or her PhD program at the time that he or she graduated, and for the two years prior to that. (I counted Oxford and ANU as top ten programs.) Of 108 individuals, 81 graduated from top 10 departments, 97 from top 20 departments, 103 from top 30 departments, and 106 from ranked departments. So:

    Top 10: 75%
    Top 20: 90%
    Top 30: 95%
    Ranked: 98%
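Mike’s cumulative percentages follow directly from the raw counts he reports; a quick arithmetic check in Python:

```python
hires = 108  # post-1998 PhDs on top-50 faculties, per the comment above
counts = {"Top 10": 81, "Top 20": 97, "Top 30": 103, "Ranked": 106}
shares = {tier: round(100 * n / hires) for tier, n in counts.items()}
# shares == {"Top 10": 75, "Top 20": 90, "Top 30": 95, "Ranked": 98}
```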

  11. As Mike’s data indicate, the PGR rankings track placement success—at least generally. That isn’t their intended application, though, and there are exceptions: for example, a prominent philosopher at UCLA has alleged (in personal communication) that grad student quality at Arizona lags behind faculty reputation, an allegation that Brian’s data bear out. Harvard may be a minor exception; my impression is that they place better than some schools ranked higher than them in the PGR, but it’s hard to know for sure because the data they provide is inadequate. Most notably, they don’t distinguish between tenure-track and non-tenure-track positions, and they don’t tell us about people who have gone on the market and failed (so far) to get a job. Their numbers do seem abnormally high, which casts doubt on the data.

  12. I haven’t read through the entire thread but had a thought & wondered what people think: when estimating the placement record of schools, one should take into account placement at non-Ph.D.-granting institutions, e.g. four-year colleges. Doing so would be difficult, but any attempt to produce a comparative assessment of placement records, esp. at programs outside the top 15, is misleading if it doesn’t do so. If the method under discussion does not take this into account, it will be somewhat misleading – esp. for prospective students who consider a job at a (non-ranked) four-year college like Williams to be a good job.

    Sorry if this is an old point or if my assumption about the method is wrong.

  13. Further web searching for the meaning of “good job” revealed (in an old CT post) that my worry is most likely taken into account in Brian’s criteria. But this might still be a problem for those who determine placement rates by looking at ranked institutions only (Mike’s study above). It would be interesting to expand on Mike’s study by looking at “good” four-year institutions and seeing where the faculty earned their degrees.
