In the comments to this post, Richard Heck responds to a few of the arguments Brian Leiter made here and here in defence of the PGR. Being a wimpy Milquetoast, I don’t want to enter into the fray. But there were some comments about what is measured by the PGR and what is important in a graduate school that I couldn’t resist making.
As Richard points out in his comments, the idea that he circulated the anti-PGR letter in defence of Harvard is absurd for several reasons. One suspects the most charitable motivation to attribute to him for circulating the letter is that he believed what was in there was true. But this is a blog, so let’s set charity (and reason!) to one side and assume for the sake of the argument it was just score-settling. (Is this like the assumption in a reductio proof? Something like that.) If it wasn’t Harvard’s honour that was being defended, whose might it have been? One natural thought is that it was MIT’s.
Now MIT has always been fairly hard done by on the PGR. On the last report it was 8th. Before that it was 9th, and on the report before that it was (somewhat laughably) 13th. I suspect that if I were advising a grad student who had offers from everywhere as to where they should go to grad school, then, barring some special circumstances, I’d advise them to go to MIT. Maybe this would be bad advice, but given it’s my blog I’m going to assume it’s good advice, and see what that says about the PGR.
At this point I have a small confession to make. When I was voting on the PGR last time, I didn’t put MIT at the top, as you might think I should given my advisory dispositions. Partially that was because I was just following orders – I was ranking faculty quality, and in terms of overall faculty quality, there’s just no way MIT stacks up well against, say, Rutgers. And partially that was because of the way the options were presented on the voting paper: just lists of faculty, with no affiliations listed. Since MIT has a very short list, this made MIT look less attractive than it really is. (And this isn’t just a matter of MIT having only 10 listed faculty. Ned Hall and Alex Byrne may well be world-class philosophers, but their names don’t significantly increase the string length of the list of faculty, and when it comes to MIT that string length is deceptively short.)
But maybe ‘confession’ is the wrong word here. Perhaps I did the right thing, and the PGR simply isn’t a measure of overall school quality. And here’s why MIT is an interesting case to focus on. In lots of important respects not explicitly measured by the PGR (which is primarily a measure of perceived faculty quality), MIT seems like a better school than its most prominent competitors.
- Faculty Availability
This is a big point. There are several cases I can think of around the country where, in principle, faculty member X at not-MIT would be as good to have on a faculty, ceteris paribus, as faculty member Y at MIT, if not better. But ceteris ain’t paribus around here. Having faculty member Y on faculty, with their level of presence and availability in the department, would be much better for the typical grad student than having faculty member X and their levels of (non-)availability. (I was going to start citing examples here, but my defamation lawyers advised against it.)
- Access to Other Schools
It’s a very good thing that MIT students get to attend Harvard classes. (Especially given that the area coverage 10 faculty members can provide is, by necessity really, less than comprehensive.) I remember when Jeff King gave a (very good) visiting seminar at Harvard, it seemed some days I was the only non-MIT person there. Obviously being at MIT isn’t quite the same thing as having Christine Korsgaard or Richard Heck or whomever on the faculty where you are studying, but it’s not like they are on another planet. (There’s also this good mind and epistemology department just down the highway from MIT, but it seems few people from Cambridge ever get there.)
- Quality of the Students
I think I’ve learned more philosophy in the last two years talking to MIT students than I have from the faculty in any department I can think of. Presumably that’s a help when one is a student there as well, at least if the point of having a high-quality faculty is that you’re going to learn things from them.
- Speed of Completion
This is a little pet peeve of mine. There’s no reason students should be taking 7, 8, even 10 years to do PhDs. For one thing, this is very costly. Assuming that taking this extra time not only slows down your entry to the job market, but also slows down your progress towards tenure, full professorships etc. (which might be false, but looks true from anecdotal observation), every extra year in grad school has an opportunity cost of potentially over $100,000. Given this, it is a very good thing that departments strongly encourage timely completion. MIT isn’t the only school that does well in this respect (Brown, as it turns out, is better than most), but it’s clearly one of the best.
- Proximity to Fenway Park
I can see why people might not treat this as a serious reason for choosing a grad school, but while we’re making a list…
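The opportunity-cost point under ‘Speed of Completion’ above relies on an implicit reasoning step: an extra year in grad school shifts your whole career back a year, so the year you effectively give up is a peak-earning year at the far end of your career, traded for a year on a grad stipend. A back-of-the-envelope sketch, with entirely hypothetical dollar figures (the stipend and salary numbers are my assumptions, not anything from the PGR):

```python
def extra_year_cost(stipend, peak_salary):
    """Rough opportunity cost of one extra year in grad school.

    The marginal year lost is a peak-earning year at the end of the
    career, replaced by a year spent on the grad stipend instead.
    """
    return peak_salary - stipend

# Hypothetical figures: a $20k stipend versus a forgone $120k
# final-career-year salary.
cost = extra_year_cost(stipend=20_000, peak_salary=120_000)
print(cost)  # 100000
```

On these (made-up) numbers, the ‘over $100,000 per extra year’ figure comes out as at least plausible, though of course it leans on the assumption that the delay propagates all the way through to tenure and promotion.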
The point here isn’t to gratuitously praise MIT, which is hardly a perfect department. The point is rather to stress how many factors other than faculty quality can and should enter into a decision about where to go to grad school, and that when all these factors point in the same direction, they can override quite large differences in faculty quality. MIT is a nice case to focus on, because there are so many things in its favour other than faculty quality. (I should stress, by the way, that I’m using a somewhat additive measure of faculty quality here. If we were just measuring average faculty quality, some of the smaller departments might do a lot better than they actually do by the standards I’m using.)
Finally we can get back to a point Richard makes in his comments. At the top of the scale, students can be expected to find out these things and judge accordingly. (And in practice some students do choose MIT over significantly higher-ranked rivals.) But in the middle, especially between 20 and 40, it gets harder. The difference in faculty quality between the number 20 and number 40 school may not be that great, and may be small enough that these kinds of considerations should be overriding for a smart grad student. (I’ve deliberately not looked at who is 20 and 40 before writing this – I hope it’s not too embarrassing a comparison.) Are there schools like MIT further down the list, schools that are much better to be at as a grad student than their PGR ranking indicates? I don’t know, even after reading the PGR from top to bottom. And as someone who is occasionally asked for advice about grad schools, I think it is something I should know.
When there are so many schools to compare, as there are around the 20s and 30s in the rankings, it’s just impossible for any prospective student to get an impression of what all of them are like in their day-to-day qualities. Here the rankings may (and I stress may, I have no evidence of this) distort options, by causing students to just not look at places they would be well suited to.
Having said all that, I should note that the PGR isn’t just meant to be a survey of faculty quality. Respondents are asked to make adjustments for other significant factors. But I’m not sure that helps. For one thing, the structure of the survey still makes it the case that it is overwhelmingly a survey of faculty quality. For another, the different judgments people make about which things are important in a graduate school would start to play too large a role if people took this seriously. And finally, it’s not at all clear how anyone is meant to know the relevant facts about a large number of schools. I can keep up with how good Z’s research is through the journals and his/her website. I can’t keep up with how good his/her seminars are, or how closely s/he reads draft chapters submitted by students s/he is supervising. And those are the things that would matter.
I think if anything the PGR should make less of an effort to measure things other than faculty quality. It would be better, or at least clearer, to say upfront that this is what we are measuring, and while it is an important component in a decision about where to go to grad school, it shouldn’t be the only component. Indeed, if you’re lucky enough to get accepted to MIT, throw out the 7 higher-ranked acceptances and take their offer. Just how we are to get and publicise enough information about the ‘business end’ of the survey to help students make similar decisions, I don’t know, but if someone found out I’d be very pleased. The more information the better, I say, which is why despite some objections I still think the PGR is a very good thing, since it provides at least one important piece of information (perceived faculty quality, which is both important and a decent guide to actual faculty quality) that potential grad students should have, and judiciously use.
UPDATE: I really was trying to stay out of the partisan fray with this one. But I didn’t really manage it, I guess, so let me clarify a few things.
First, the one time MIT was ranked 13 was the only time it was outside the top 10, so perhaps I shouldn’t place too much weight on a rogue result.
Second, I actually think that as a ranking of overall faculty quality, the PGR gets MIT’s ranking about right. The only point I was making (and I think this is one all sides agree on) is that there’s more to choosing a grad school than faculty quality. So it’s consistent to say that MIT is correctly ranked, and that it’s right to advise most students to accept offers there.
Third, I’m not sure that surveys are the right way to measure the other factors that go into picking a grad school. Indeed, apart from placement data I’m not sure that the other factors are objectively measurable. (How do you rank in importance the quality of the local sporting teams, to pick a salient characteristic?)
Fourth, I still think it’s valuable, very valuable, to have the two most easily measurable qualities (faculty quality and placement record) being measured and reported as widely as possible. And I think Brian Leiter has done more than anyone in recent times to make that data available to incoming students. Provided students are smart enough, and well informed enough, to know how to use those pieces of data, it is a very good thing to have them available. (And if students aren’t smart enough to use the data, it’s not obvious they should be in grad school.)