After many rounds of delay, the National Research Council seems to be drawing nearer to releasing its comprehensive assessments and rankings of American doctoral programs.
This afternoon the council released a 201-page methodology guide that explains how its forthcoming report was constructed. The council’s last major doctoral-program evaluations were released in 1995, and graduate deans have been impatiently awaiting the sequel.
Now that the methodology guide has been released, how imminent is the assessment report itself? The council isn’t making promises — perhaps wisely, given the project’s history. In a “frequently asked questions” page that was also unveiled today, the council says simply that “we are working to complete all this work as expeditiously as possible.”
In an interview this afternoon, Jeremiah P. Ostriker, a professor of astrophysical sciences at Princeton University and the chair of the committee that oversees the doctoral-assessment project, said that he does not expect the report to be released within the next several weeks, but that he would be surprised if it were not released by the end of 2009.
“It’s more important to get it done right than to get it done fast,” Mr. Ostriker said.
Perhaps so — but some scholars are concerned that because the forthcoming report is based largely on data from the 2005-6 academic year, some of the information may be too stale to reliably interpret.
“I think the data will be fresh enough for some programs, but not for others,” said Julie Carpenter-Hubin, the director of institutional research and planning at Ohio State University, in an interview this afternoon.
“I’ve looked at how our regular tenure-track faculty have changed since 05-06,” Ms. Carpenter-Hubin continued. “Twenty-two percent of our tenure-track faculty have been hired since that time. And 18 percent of the tenure-track faculty who were there then have retired or have left for other reasons. So especially if you’re in a small program, that could be a pretty big shift.”
Ms. Carpenter-Hubin and other institutional researchers have also expressed concern that universities may not have answered some of the research council's questionnaires consistently and uniformly, especially questions pertaining to students' average time to degree.
But Ms. Carpenter-Hubin also praised the research council's general approach to the forthcoming report. Unlike the 1995 report, the new one weights most of its measures by the number of full-time faculty members in a given program, so programs with large faculties should not enjoy an artificial advantage.
The report will also weight its variables differently in different fields, according to scholars’ own reports of which variables are most important in their field.
Mr. Ostriker said that this is one of the forthcoming report’s most important innovations. “In the humanities, external honors and awards might be the most important, so that’s what we counted in the humanities. If in the physical sciences citations are important, that’s what we counted most heavily there. The criteria that we used we obtained from the academic departments themselves.”
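The field-specific weighting Mr. Ostriker describes can be sketched roughly as follows. Everything here is a hypothetical illustration: the field names, weights, measures, and program figures are invented for the sake of example and are not the council's actual data or formula.

```python
# Rough, hypothetical sketch of field-specific weighted scoring, in the
# spirit of the methodology described above. All weights and data are
# invented for illustration; they are not the NRC's actual figures.

# Per-field weights: in the real study these would come from scholars'
# own reports of which variables matter most in their field.
FIELD_WEIGHTS = {
    "humanities":        {"awards": 0.6, "citations": 0.1, "publications": 0.3},
    "physical_sciences": {"awards": 0.1, "citations": 0.6, "publications": 0.3},
}

def program_score(field, measures, faculty_count):
    """Combine a program's measures using the weights that scholars in
    that field assigned to each variable, after normalizing by faculty
    size so large programs gain no artificial advantage."""
    weights = FIELD_WEIGHTS[field]
    per_faculty = {k: v / faculty_count for k, v in measures.items()}
    return sum(weights[k] * per_faculty[k] for k in weights)

# Two hypothetical programs of different sizes:
history_score = program_score(
    "humanities",
    {"awards": 12, "citations": 40, "publications": 30},
    faculty_count=20,
)
physics_score = program_score(
    "physical_sciences",
    {"awards": 4, "citations": 900, "publications": 120},
    faculty_count=30,
)
```

Note how the same raw citation count contributes far more to a physical-sciences score than to a humanities score, which is the point of letting each discipline set its own weights.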
In that way, Mr. Ostriker said, the report will thread the needle between subjective reputational rankings, which are famously subject to “halo effects” and other biases, and overly crude quantitative measures such as citation counts, which are not always reasonable measures for every field. —David Glenn