• September 4, 2015

An Elaborate Ranking of Doctoral Programs Makes Its Long-Awaited Debut

Now it can be told. The American doctoral program with the longest median time-to-degree is the music program at Washington University in St. Louis: 16.3 years.

That's just one of a quarter million data points that appear in the National Research Council's new report on doctoral education in the United States, which was finally unveiled Tuesday afternoon after years of delay. (The Chronicle has published an interactive tool that allows readers to compare doctoral programs across 21 variables.)

The NRC's new ranking system will draw the most immediate attention. It is far more complex than the method the agency used in its 1982 and 1995 doctoral-education reports. Whereas Cornell University's philosophy program was once simply ranked as the eighth strongest in the country, it must now be content to know that it has an "R-ranking" between 2 and 19 and an "S-ranking" between 16 and 34. (The first is derived indirectly from programs' reputations, and the second is derived more directly from programs' characteristics. For a detailed explanation, see our Frequently Asked Questions page.)


Why did the project adopt those complex ranges? Because the old system of simple ordinal ranks offered a "spurious precision," said Jeremiah P. Ostriker, chairman of the project committee, in a conference call with reporters on Monday.

"There are many different sources of uncertainty in the data," said Mr. Ostriker, who is a professor of astrophysics at Princeton University. "We put them together as well as we could.... That means that we can't say that this is the 10th-best program by such-and-such criteria. Instead, we can say that it's between fifth and 20th, where that range includes a 90-percent confidence level. It's a little unsatisfactory, but at least it's honest."
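Mr. Ostriker's point about rank ranges can be illustrated with a small simulation. The sketch below is purely hypothetical: the scores, the noise level, and the resampling method are illustrative assumptions, not the NRC's actual procedure. It shows how, once a program's quality score carries measurement uncertainty, repeatedly re-drawing the scores and re-ranking yields a range of plausible ranks rather than a single ordinal position.

```python
import random

# Hypothetical quality scores for ten programs (index 0 = highest).
true_scores = [10.0, 9.5, 9.4, 9.0, 8.8, 8.7, 8.5, 8.0, 7.5, 7.0]
NOISE = 0.5      # assumed measurement uncertainty on each score
TRIALS = 5000

random.seed(1)
ranks_of_program = []   # track the rank of program index 3 across simulations
for _ in range(TRIALS):
    # Re-draw every program's score with Gaussian noise, then re-rank.
    noisy = [(s + random.gauss(0, NOISE), i) for i, s in enumerate(true_scores)]
    noisy.sort(reverse=True)
    rank = [i for _, i in noisy].index(3) + 1
    ranks_of_program.append(rank)

# The 5th and 95th percentiles of the simulated ranks give a 90% range.
ranks_of_program.sort()
lo = ranks_of_program[int(0.05 * TRIALS)]
hi = ranks_of_program[int(0.95 * TRIALS) - 1]
print(f"program's rank: between {lo} and {hi} in 90% of simulations")
```

Under these assumptions the program cannot honestly be called "the 4th-best"; only a range like "between 2nd and 6th" is defensible, which is the shape of the R- and S-rankings in the report.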

Over the long run, scholars may focus less on those baroque rankings and more on the report's underlying data. The NRC report contains some of the most-thorough measures ever collected of doctoral-student completion rates, time-to-degree, faculty diversity, and student-support activities.

Evidence of Age

The bad news is that many of those data points have surely gone stale, because the NRC conducted its surveys back in 2006-7. In some departments, so many faculty members have come and gone since 2006 that the research-productivity numbers may no longer be reliable. In other departments, student services and financial aid have changed for better or worse since 2006. So all of the figures should be approached with some caution. The (tentative) good news is that many graduate-school deans hope to continue to collect and analyze such data, even if the NRC itself never conducts another national study.

"There's going to be a short-term response and a long-term response to this report," said Debra W. Stewart, president of the Council of Graduate Schools, in an interview on Monday. "The long-term response will be the important one. I think that the framework of this report will help support an ethos of continuous improvement."

Donna Heiland, vice president of the Teagle Foundation, who has written about the challenges of doctoral assessments, hopes that scholars will not spend too much time picking at the data's inevitable flaws.

"With projects like this," she said in an interview last week, "the first thing that happens is that everyone looks at the data and complains that it's stale or that it isn't right. But I've become converted to the idea that data are just not going to be perfect. The data are not going to be correct down to every single detail. But if you can use this report to open up conversations about student funding or other elements of your program, it's accomplished its purpose."

Others are not so sure. Many of the data in the report depend crucially on the correctness of the underlying counts of each program's faculty members. (Critics of the previous NRC reports said they unreasonably favored large programs, so in this report, several variables are scored on a "per-full-time-faculty-member" basis.) On Monday, the University of Washington's College of Engineering published a note of protest, saying that the NRC had used incorrect, severely inflated faculty counts when assessing Washington's engineering programs. Because those denominators are wrong, the statement says, each program's faculty-publication rates and citation rates look much weaker than they actually are.
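The arithmetic behind that complaint is simple. With hypothetical numbers (not the University of Washington's actual figures), an inflated faculty denominator mechanically deflates a per-faculty rate:

```python
publications = 400        # hypothetical five-year publication count
actual_faculty = 50       # hypothetical correct faculty head count
inflated_faculty = 80     # hypothetical inflated head count

rate_correct = publications / actual_faculty     # 8.0 papers per faculty member
rate_inflated = publications / inflated_faculty  # 5.0 papers per faculty member
print(rate_correct, rate_inflated)
```

The same scholarly output looks 37.5 percent weaker simply because the denominator is wrong.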

Then there is the inevitable issue: Will universities use the report to think about culling weak programs? Should they?

Some officials say the report shouldn't be used to guide the ax, while others say that the data may indirectly point to winners and losers. "Context is everything," said Joan F. Lorden, provost and vice chancellor for academic affairs at the University of North Carolina at Charlotte and a member of the project committee. "Maybe a low-ranked program is one that you want to invest in."

"The conversation will be much more complicated than just producing a cut list from the NRC rankings," said Richard Wheeler, a vice provost of the University of Illinois at Urbana-Champaign and another member of the NRC project committee, during Monday's conference call. "Any time a program is looked at in a really stringent review, an enormous amount of information is brought forward. At our universities, an enormous amount is known about these programs that couldn't possibly be captured in a report like this."

Potential for Program Cuts

Be that as it may, Ms. Heiland of the Teagle Foundation believes the report could affect the survival of some programs. Many institutions, she argues, will soon restructure for economic reasons, which might sometimes mean shedding doctoral programs or merging them with those of nearby institutions.

"Universities tend to think of themselves as complete universes," said Ms. Heiland. "But I think we need to rethink campuses and to think of them not as self-contained entities but as hubs of learning, open to the world, networked." The NRC report, she believes, holds the seeds of much of this rethinking. "I'd love to see these data, with all of their flaws and limitations, spark some kind of creative discussion that responds to the national need to educate more scientists, to educate more humanists."

Some scholars, of course, believe that the entire project of ranking academic programs is folly.

"Rankings have been perceived as synonymous with quality," said Bruce E. Keith, an associate dean at the United States Military Academy, in an interview on Monday. But projects like the NRC's, he said, tend to measure quality overwhelmingly in terms of research prestige while paying too little attention to how students are shaped by the programs. Where do their graduates work five years after they have completed their degrees? How many of their dissertations are later published as books? How many of them receive major grants from the National Science Foundation or the National Institutes of Health? (The new report does include a measure of whether graduates of the programs immediately get academic jobs or postdoctoral fellowships, but there are no long-term measures of students' careers.)

Mr. Keith wishes the new NRC project had focused more explicitly on how programs affect students—an idea that was endorsed in the research council's 1995 report. One passage in that report said, "The primary questions to be answered are, 'Do differences in scholarly quality of program faculty or other ratings result in measurable differences in careers of research and scholarship among program graduates? Are these differences attributable to program factors, or are other factors at work?'"

The quarter million data points in the new NRC report will probably shed light on many mysteries, Mr. Keith said. But those fundamental questions about programs' effects on students still wait to be answered by some new study over the horizon.


1. impossible_exchange - September 28, 2010 at 02:19 pm

Why is "guiding the ax" the first thing that comes up?
Why not guiding the support?

Why negative?
What dog are you folks trying to wag?

2. rightwingprofessor - September 28, 2010 at 04:23 pm

According to these rankings, the University of Delaware mathematics graduate program ranks 11th-40th, and the University of Chicago's ranks 27th-57th. This is one of many, many absurdities in this report.

3. princeton67 - September 28, 2010 at 07:43 pm

Go to http://graduate-school.phds.org/ to see what you really want to see: the actual rankings. None of this 8 - 18 obfuscation.

Also, go to http://www.math.columbia.edu/~woit/wordpress/?p=3197 for criticism of the criteria.

4. amnirov - September 28, 2010 at 10:12 pm

This is the stupidest thing in the history of things.

5. lkmwi - September 28, 2010 at 11:50 pm

I second comment #4. I am the department chair at a Big Ten school. In looking over the data for our department, I found several errors. For instance, they state our graduate cohort is 25% female, when the reverse is true. The system is clearly flawed. There are too many points at which inaccuracies can be introduced, starting with the faculty surveys.

6. ajkind - September 29, 2010 at 12:18 am

The ranking criteria and the concepts used are innovative and fairly accurate. Academicians, don't waste your time and my tax dollars in writing rebuttals to the ranking. Spend the time in producing students who can get a decent job in their major.

7. penguin17 - September 29, 2010 at 09:52 am

In my field, linguistics, one of the absolute best departments in the entire world - never mind the US alone - is at the University of California Santa Cruz. I can be pretty objective about this: I don't work there, I've never worked there, and I have no degree from there. It's just a great department, and everyone in the field knows that. You can argue about whether they should be #1, tied for #1, in the top 5, or whatever - that's a matter of taste - but they are at the top of the profession by every rational assessment.

Now go look at where they "rank" on both r-rating and s-rating scales. Utterly absurd!

8. gsawpenny - September 29, 2010 at 11:15 am

This is great stuff. It does not matter what the "meaning" of the ranking system is; it offers a tool for those realistically looking for a full-time job to see where their pedigree ranks in terms of the institution where they are applying to work. I think this is a great tool, a breath of fresh air: easy-to-understand data in an absurd world of near-meaningless rankings.

9. anonscribe - September 29, 2010 at 11:48 am

The "S" rankings - what most comments so far seem to be criticizing - are intended (it seems to my layman's mind) to assess both the research productivity/scholarly reputation of faculty members AND how good the department is at teaching and supporting graduate students. Aren't both integral (even defining) goals of Ph.D.-granting departments? The folk wisdom is to always go study with the brightest mind in a field, never mind whether this shining star will actually care about mentoring you.

It seems many of the departments with the highest reputations ALSO do an excellent job of educating their grad students. Perhaps what appear to be anomalies aren't: the reputation of faculty may be top-notch...and they may treat grad students like cattle and do a poor job of educating them (or they may just get awful funding and are thus unable to support grad students properly). I think the S-rankings will actually go a long way toward helping grad students make more informed choices (unless, of course, websites like phds.org do something cynical like only rank programs according to the reputational "R" rankings....oh wait...)

I also love how powerful the self-validating cycle of prestige is: X is the best school. Everyone "just knows it." Any data-driven approach that doesn't conform to what everyone "just knows" is invalid. What more can high-performing but low-profile programs do to try to break this cycle?

10. penguin17 - September 29, 2010 at 11:52 am

Anonscribe: The R-rankings aren't exactly reputational rankings. They are based on exactly the same data as the S-rankings, but the data are prioritized in a way that is supposed to match people's perceptions of what the great, good, middle, and not-so-great departments actually are in their fields. Same data, different prioritization. (And it's not circular: "reputation" isn't part of the data.)

All I know is that in my field, both rankings are coming out looking screwy by any standard, and only the union of the two paints a picture that begins to look somewhat reasonable (albeit still with some utterly bizarre anomalies).

11. gavitt - September 29, 2010 at 12:14 pm

There are 145 History graduate programs in the US News rankings but only 138 here. Where did the other 7 go?

12. anonscribe - September 29, 2010 at 01:05 pm

penguin17 - Thank you. That clarifies things for me. So, is phds.org's ranking system some modification of these S/R rankings, or are they just an itemized way of compiling your own rankings based on personal preference? Just curious (and maybe you haven't looked at those).

13. countinplaces - September 29, 2010 at 01:10 pm

Business Programs?

14. rightwingprofessor - September 29, 2010 at 02:59 pm

I'm pretty sure phds.org is still using the data from the 1995 rankings.

15. fiscalwiz - September 29, 2010 at 04:26 pm

Whether a program is strong or weak has little to do with whether it should shut down. If a strong program produces graduates who cannot become employed, there is no particular reason why it should continue. It exists for the enjoyment of the faculty, and that is scarce reason for continuation. If a weak program is producing graduates who head for immediate gainful employment, why on earth should it close? It might work to improve, but never to close.

"Excellence" is no reason by itself for a program to operate.

16. andyj - September 29, 2010 at 05:09 pm

It's easy to take potshots at any ranking system. Certainly they are imperfect, but data-based comparisons are better than reputational ones, or at least they provide another kind of useful information. I am wondering, however, why some programs from participating institutions were omitted (e.g., no public health ranking for Northwestern and Loma Linda)? Also I question, as have others, what went into the numerator and denominator for publications. The numbers don't look right in a number of instances. Having taken these random shots, I applaud what the NRC is trying to do.

17. penguin17 - September 29, 2010 at 05:29 pm

Dear andyj, both rating systems are fully data-based. The only difference is how the sets of data get prioritized: according to what people say they value in a program in the abstract, or according to the properties of the programs they actually value. And yes, although it is easy to take potshots at any ranking system, that doesn't mean they are all equally useful or fair.

18. andyj - September 29, 2010 at 07:54 pm

Dear penguin17,
Understood and agreed. Smaller programs may feel more comfortable with S, since it reduces, although does not eliminate, the big-program bias. Large programs may feel that the "bias" has a substantive basis and prefer R. Something here for many if not for all :-)

19. john_deere - September 29, 2010 at 11:06 pm

Dear rightwingprofessor,
the University of Delaware has a math department oriented purely toward applied math. The numbers of grants, publications, and citation indices are surely different because of this. And the UChicago math program has been considered not-so-prestigious for a while now. The market reflects that: many of their graduates have had problems finding positions.

20. rightwingprofessor - September 30, 2010 at 09:37 am

I don't know what you are smoking, but the University of Chicago is one of the top mathematics departments in the world, and its hires in the last two or three years have only cemented that position, including two Fields medalists.

21. john_deere - September 30, 2010 at 10:52 am

Says who, exactly? Because according to the numbers, they are not. Hiring Fields medalists is a poor measure of quality (by the way, I believe they hired only one recently). There are other departments that have Fields medalists more influential than the ones at Chicago and don't rank too well either, for example Stony Brook or the University of Florida.
That is exactly what is said in comment #9: Chicago is presumably great because everybody "just knows it." Well, apparently it is not.

22. fearless_winnower - October 04, 2010 at 10:34 am

One of the reasons the publications/citations numbers look "off," as andyj put it, is that they don't count all publications. For all non-humanities fields, they don't count books at all (as either a "publication" or a "citation"). So for departments in, say, the social sciences that have a lot of faculty who produce books, the rankings are WAY off, especially given that books are often the most influential pieces of work those faculty produce. This seems to me a pretty fatal flaw for the ranking of social science departments, at the very least.

23. mmd1960 - October 04, 2010 at 12:30 pm

I LOL'ed while reading this story. So the NRC cannot say with any degree of certainty which are the best programs? All this stupid report will do is give greater acceptance to the US News and World Report rankings, which is a pity and a missed opportunity for academics and administrators who care about the quality differences among institutions.

24. zbicyclist - October 04, 2010 at 12:57 pm

"There are 145 History graduate programs in the US News rankings but only 138 here. Where did the other 7 go?"

Obviously, they're history.

25. 11272784 - October 04, 2010 at 01:11 pm

Research productivity may be a factor in "counting coup" at the university level, but I consider it directly antithetical to teaching students. Faculty whose time is spent in pursuing grants and generating money for the institution often avoid the classroom and minimize interactions with students. I always approach the term "research productivity" with great suspicion. It usually equates to "get as much money from research and spend as little time with students as possible."

26. moongate - October 04, 2010 at 06:02 pm

Well, I hate to enter snark-filled waters (particularly regarding a subject such as math, which I hated in school and know nothing about), but the UofC ranks in the top 10 or so in any number of other ranking outfits (US News, for instance), so it might appear that UofC's reputation is different from what the NRC's ranking would indicate. On the other hand, perhaps US News is inaccurate? Certainly its methodology is not as complex or in-depth.

Personally, I like the NRC...my grad institution ranked somewhat higher than in US News and much more in line with phds.org. And, interestingly, so did our good friend's institution, which I believe is 4th tier in US News but ranks well in the NRC. For whatever it is worth (since obviously no one but me knows who I am referring to), these "4th tier" people are some of the brightest, most successful academics I know.

27. john_deere - October 05, 2010 at 10:59 am

The US News ranking is purely reputation-based: they just ask department heads and graduate directors to rank the departments and average the results.
The NRC is at the other end of the spectrum: they numerically measured the characteristics of the departments (grants, publications, citations, awards). One can argue, but for me personally the NRC ranking is much more convincing.

The biggest influence it is going to have is that places that experienced a significant drop will no longer be able to attract graduate students as good as before. UofC is one example, but in math there are other places that ranked much lower than in the reputation-based rankings: Rice, UNC, Northwestern, and a few more.

