Doctoral-Program Rankings, Delayed Years, May Be Merely a Historical Record


Zach Cofran, a graduate student at the U. of Michigan, which was highly ranked by the NRC in 1995, compares skulls of different hominids.
June 13, 2010

If you ask a dean or provost about the National Research Council's long-delayed assessments of American doctoral programs, you might hear this: "The NRC report? That's so last decade."

That line, which apparently circulated on an e-mail list this year, isn't the greatest witticism. But somehow the very staleness of its humor signals how weary university leaders are of the multimillion-dollar project.

The NRC report—a sequel to the research council's widely cited 1982 and 1995 rankings of doctoral programs—has been in the works since 2003. And in the works, and in the works. In November 2007 the research council booked space at a Washington hotel for a public unveiling of the report. But that release date came and went, and so did several subsequent ones. The most recent delays have sprung from peer reviewers' concerns about the report's complex statistical apparatus.

Now there are signs that the report may finally see the light of day. At a conference in Chicago last month, the study's director, Charlotte V. Kuh, said the report would be released "soon," though she declined to be more specific.

"This report is going to come out, or I'm going to die trying," said Ms. Kuh, who is the research council's deputy executive director of policy and global affairs.

But at this late hour, it is not clear how reliable the report will be. The survey data that underlie the program assessments were gathered way back in late 2006 and early 2007—and many of the survey questions concerned the 2005-6 academic year. A lot may have changed since then.

"In five or six years at a research university, people come and go," says Karen L. Klomparens, associate provost for graduate education at Michigan State University. "They retire. They die. They get hired away. And if you're going to have a rating based largely on faculty research productivity, it almost becomes moot because those aren't the same faculty anymore."

Ms. Klomparens has some kind words for the NRC project. Some universities, including her own, have already used the data they gathered internally for the project to improve their stipend systems and their students' average time-to-degree. When the NRC report finally emerges, she hopes to use its national analyses to guide similar improvements.

But alongside such hopes lies deep frustration.

"When you put this much work into something, you'd like for it to end up being a good project," says Mary M. Sapp, assistant vice president for planning and institutional research at the University of Miami. In that role, she spent countless hours in 2006 and 2007 prodding departments to accurately respond to the NRC's surveys.

"I think we're just kind of all worn out," she says. "We wait, and then we think it's coming, and then it doesn't. We wait, and—I mean, it's become a bit of a joke."

Attempts at Objectivity

By all accounts, the problems with the latest NRC report were born from the best of intentions. The 1982 and 1995 editions of the report had been widely embraced; even 15 years later, doctoral programs occasionally boast about having been highly ranked in the 1995 report. But many scholars were concerned that those reports were too heavily based on subjective, reputational factors that unfairly privileged larger, well-established doctoral programs.

So for the third edition, Ms. Kuh and her colleagues have tried to rely on objective measures of faculty research productivity, student completion, and other factors. They have weighted some of those factors on a "per full-time-faculty-member" basis, so that strong, small programs can be duly recognized.

They also decided that giving programs a specific ordinal ranking—for example, the sociology department at the University of Michigan is the fourth-best in the country—is foolish. It is impossible to be so certain about rankings, Ms. Kuh's committee decided.

They chose instead to devise "ranges of rankings" that would more realistically reflect the statistical uncertainties in their data. So readers might be told, for example, that there was a 50-percent chance that Michigan's sociology program was between second-best and sixth-best in the country.

But all of those new approaches turned out to be more cumbersome than the research council had expected. Collecting the data was hugely labor-intensive. Among other things, Ms. Kuh's staff had to make sure that different programs defined faculty members the same way.

The NRC's surveys asked programs to break their instructors down into three categories: "core faculty," "new faculty," and "associated faculty." Core faculty members were people who had served on at least one dissertation committee between 2001-2 and 2005-6, or who had served on the graduate-admissions or curriculum committees. The new faculty comprised people hired in tenure-track positions between 2003-4 and 2005-6, who did not meet the criteria for the core faculty. Associated faculty members were people in other departments or programs who had served on at least one dissertation committee for the program under scrutiny between 2001-2 and 2005-6. In the report's analysis of professors' publication and citation records, core faculty members are weighted more heavily than associated faculty.
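The three-way classification described above amounts to a small set of rules. The sketch below expresses them in Python; the record fields and the academic-year encoding are assumptions made purely for illustration, not the NRC's actual data format.

```python
# A minimal sketch of the NRC's core/new/associated faculty classification.
# Field names and year strings are hypothetical.

def classify_faculty(person, program):
    """Return 'core', 'new', 'associated', or None for one instructor.

    `person` is a dict with (hypothetical) fields:
      - 'department': home department
      - 'dissertation_committees': academic years served, e.g. ['2004-5']
      - 'on_admissions_or_curriculum': bool
      - 'tenure_track_hire_year': academic year hired, or None
    `program` is the doctoral program under review.
    """
    window = {'2001-2', '2002-3', '2003-4', '2004-5', '2005-6'}
    served = bool(window & set(person['dissertation_committees']))

    if person['department'] == program:
        # Core: served on a dissertation committee between 2001-2 and
        # 2005-6, or on the graduate-admissions or curriculum committee.
        if served or person['on_admissions_or_curriculum']:
            return 'core'
        # New: tenure-track hire between 2003-4 and 2005-6 who does not
        # already meet the core criteria.
        if person['tenure_track_hire_year'] in {'2003-4', '2004-5', '2005-6'}:
            return 'new'
        return None
    # Associated: based in another department, but served on at least one
    # dissertation committee for this program during the window.
    return 'associated' if served else None
```

As the report's weighting scheme suggests, a person's label matters: core faculty count more heavily than associated faculty in the publication and citation analyses.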

If all of that sounds complicated, it is. A major reason for the report's delay in 2007 and 2008 was that Ms. Kuh's committee realized that in their survey responses, universities were not using those faculty definitions in consistent, comparable ways. So the NRC staff spent hundreds of hours on the phone with various programs, trying to fix those discrepancies.

"They made very good efforts to try to fix it after the fact," Ms. Sapp says. "But it obviously would have been better to have done it correctly from the beginning." Even after all of the data cleaning, Ms. Sapp says, she worries that the study may still harbor anomalies.

Ms. Sapp and others are also concerned about measures of student retention and time-to-degree. Some universities seem to have used inconsistent definitions of the starting line. Imagine a student who entered graduate school in 2002 intending only to earn a master's degree, but who decided in 2004 to continue into the doctoral program. Should this student be counted as a first-year doctoral student in 2004, or as a third-year? Here, too, the NRC made extensive efforts to iron out inconsistencies, but Ms. Sapp is not entirely confident about the accuracy.

A Historical Document

Timeliness, not accuracy, is the major concern among people awaiting the report.

"We aren't the same institution that we were five years ago," says F. Douglas Boudinot, dean of graduate studies at Virginia Commonwealth University. "We're interested in the report, but we've grown so much that it has less meaning for us. We have 33 percent more doctoral students than we did then, and we have about 57 percent more doctoral students graduating each year."

Julie W. Carpenter-Hubin, director of institutional research and planning at Ohio State University, says that certain departments there have seen so much faculty turnover that the NRC report's analyses of research output will be badly out of date.

Take Ohio State's doctoral program in communications. The program had 18 faculty members when the surveys were completed, in 2006. Five of those people subsequently departed, and the department has since made 20 new hires, including two at the senior level. More than half its faculty members today are people who were not there at the time of the survey.

Graduate deans at the University of Maryland at College Park, the University of Minnesota-Twin Cities, and the University of California's Berkeley, Irvine, and Los Angeles campuses voiced similar fears to The Chronicle. The year 2006 seems like a long time ago, they said.

Officials at the research council say such concerns are overblown. In her speech in Chicago last month, Ms. Kuh said that the pace of overall doctoral-faculty turnover has declined in the last decade. She also argued that the report's major function will be to provide a general analysis of patterns that lead to successful programs.

Jeremiah P. Ostriker, chair of the rankings-project committee and a Princeton astronomy professor, suggested in an e-mail message to The Chronicle that the new report's data will be no more stale than the data used in the NRC's previous reports.

The surveys that underlay the 1995 report were conducted in 1992-93, so they were not exactly oven-fresh. And "the primary input data in the '95 report were the reputations of the individual programs," Mr. Ostriker wrote. "Reputation is, in itself, a lagging indicator."

Last-Minute Changes

Although the methodology of the report is still undergoing peer review, at the Chicago meeting Ms. Kuh described two changes that are likely to be reflected in the final report:

First, the report's "ranges of rankings" will be broader. Rather than saying that there is a 50-percent chance that the doctoral program in economics at Imaginary University is somewhere between the 10th-best in the country and the 16th-best, the final report will probably present 90-percent confidence intervals. "Provosts didn't like those interquartile rankings," Ms. Kuh said. So instead the report will say, for example, that there is a 90-percent chance that the economics program is somewhere between the seventh-best and the 23rd-best in the country.
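The difference between the interquartile (50-percent) ranges and the wider 90-percent ranges can be seen in a toy simulation: perturb program scores with noise, rank each draw, and read off percentile intervals of one program's rank. The scores, noise level, and program names below are invented for illustration; this is a generic sketch of the idea, not the NRC's actual statistical procedure.

```python
# Toy "range of rankings": resample noisy scores, rank each draw, and
# report percentile intervals of a single program's rank.
import random

random.seed(0)
scores = {'A': 0.90, 'B': 0.85, 'C': 0.84, 'D': 0.70, 'E': 0.65}

def simulate_ranks(program, trials=10_000, noise=0.05):
    ranks = []
    for _ in range(trials):
        noisy = {p: s + random.gauss(0, noise) for p, s in scores.items()}
        ordered = sorted(noisy, key=noisy.get, reverse=True)
        ranks.append(ordered.index(program) + 1)  # 1 = best
    return sorted(ranks)

ranks = simulate_ranks('B')
n = len(ranks)
# 50-percent ("interquartile") range: 25th to 75th percentile of the ranks.
iqr = (ranks[n // 4], ranks[3 * n // 4])
# 90-percent range: 5th to 95th percentile -- wider, as in the final report.
r90 = (ranks[n // 20], ranks[19 * n // 20])
print('50% range:', iqr, ' 90% range:', r90)
```

Because programs B and C have nearly identical underlying scores, their ranks swap frequently across draws, and the 90-percent interval is necessarily at least as wide as the interquartile one.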

Second, instead of offering a single range of rankings for each program, the report will present two separate ranges of rankings. One will be based on 20 objective elements, such as faculty publications and students' average time-to-degree. The other will be based on extrapolations from survey respondents' subjective assessments of particular programs.

But Ms. Kuh said that she hoped no one would be too obsessed with the rankings. The report will be presented online in a flexible format that will allow deans, policy makers, and prospective students to analyze the data according to the criteria important to them.

The final report will include data on approximately 4,900 doctoral programs in 61 disciplines at 222 American universities.

Ms. Kuh said she was hopeful that the report could be updated soon with fresher data, now that its statistical framework is finally (almost) in place.

Other Players in the Game

But it is not certain that all universities would support such an effort. "There are other places to go for some of this data now," says Ms. Carpenter-Hubin, of Ohio State. (Among other things, she cited the research-productivity reports produced by Academic Analytics, a five-year-old company in New York.) "The data and analyses that the NRC is putting together will be terribly valuable. But in the future, do we need the NRC to do it?"

Ms. Klomparens, of Michigan State, wonders whether administrators would invest the staff needed for future rounds of the study.

"Any of us who work on research understand that it takes time to get things right," she says. "But to have that much money in play, to have that many individuals working for as many months as they did to provide the data, and then not to have results—it just makes people skeptical."

The Shelf Life of a Doctoral Survey

The National Research Council's study of U.S. doctoral programs is based on surveys that were sent out in 2006—but the results of the study have still not been released. Some university leaders are concerned that those survey data are getting stale, especially where doctoral programs, such as this one at Ohio State U., have seen heavy faculty turnover.