It’s actually happening this time. On Tuesday at 1 p.m. Eastern time, the National Research Council will lift the curtain on its long-awaited assessments of U.S. doctoral programs.
The research council will present the data in the form of a gigantic Excel file that will allow users to sort programs according to whatever criteria are most interesting to them: research productivity, faculty diversity, student financial aid, and so on. The Chronicle will also unveil some interactive data tools on Tuesday.
As most of our readers probably know by now, the new report is based on fundamentally different methods from the NRC’s previous two efforts to rank doctoral programs, which appeared in 1982 and 1995. The earlier reports were based heavily on reputational surveys, but the new one is grounded in programs’ objective characteristics, including such measures as faculty citation rates and students’ median time-to-degree. (Some people are concerned that those data may have gone stale, as the NRC conducted its surveys back in 2006 and 2007.)
And where the previous reports ranked programs in simple ordinal lists that your grandparents could understand, the new report gives each program two different ranges of rankings, known as S-rankings and R-rankings.
(Since you asked: S-rankings are derived by comparing individual programs’ characteristics with the characteristics that scholars in the field say they value. For example, if political scientists say that citation rates are the most important measure of a program’s quality, programs that have high citation rates will do well in the S-rankings game. R-rankings are derived by comparing individual programs’ characteristics to faculty members’ opinions of a sample of programs in the field. For example, if political scientists say that Harvard, Wisconsin, and Berkeley are the three strongest programs in their field, then programs that are objectively similar to those three will do well in the R-rankings game. That’s an oversimplified sketch, but it will do for purposes of this blog post. Stay tuned for the flow chart that we’ll publish Tuesday afternoon—or, if you’re impatient, see this recent presentation by the project’s staff directors or this summary prepared by officials at Stanford University.)
The political-science program at Hypothetical University might be told that it has an R-ranking of 12-36, meaning that we can say with 90 percent confidence that its “true” position is somewhere between 12th-best and 36th-best in the country. Its S-ranking, meanwhile, might be 14-23, meaning that we’re 90 percent certain that its true position under this kind of analysis is somewhere between 14th-best and 23rd-best in the country.
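To make those ranges a bit more concrete, here is a toy sketch in Python of how weighted criteria plus uncertainty can produce a range of ranks rather than a single rank. Everything in it is invented for illustration: the program names, the criteria, the weights, and the perturbation scheme are assumptions, and the NRC’s actual statistical procedure is considerably more elaborate.

```python
import random

# Toy illustration (not the NRC's actual procedure): score hypothetical
# programs on made-up criteria, then see how their ranks shift when the
# criteria weights are perturbed. The spread of ranks across perturbations
# is reported as a range, analogous in spirit to an S-ranking range.

# Hypothetical per-program data, scaled 0-1 (entirely invented).
programs = {
    "Hypothetical U":   {"citations": 0.72, "funding": 0.55, "time_to_degree": 0.60},
    "Flagship State":   {"citations": 0.81, "funding": 0.70, "time_to_degree": 0.50},
    "Private Research": {"citations": 0.65, "funding": 0.85, "time_to_degree": 0.75},
    "Regional College": {"citations": 0.40, "funding": 0.30, "time_to_degree": 0.45},
}

# Weights that scholars in the field might assign to each criterion (also invented).
base_weights = {"citations": 0.5, "funding": 0.3, "time_to_degree": 0.2}

def rank_once(weights):
    """Rank programs (1 = best) by their weighted score under one set of weights."""
    scores = {name: sum(weights[c] * vals[c] for c in weights)
              for name, vals in programs.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: ordered.index(name) + 1 for name in ordered}

# Perturb the weights many times and collect each program's rank each time.
random.seed(0)
ranks = {name: [] for name in programs}
for _ in range(5000):
    jittered = {c: max(0.0, w + random.gauss(0, 0.1)) for c, w in base_weights.items()}
    for name, rank in rank_once(jittered).items():
        ranks[name].append(rank)

# Report the middle 90 percent of each program's ranks as its range.
for name, rs in ranks.items():
    rs.sort()
    lo, hi = rs[int(0.05 * len(rs))], rs[int(0.95 * len(rs)) - 1]
    print(f"{name}: ranked between {lo} and {hi} in 90% of trials")
```

The point of the exercise is simply that a wide range signals a rank that depends heavily on which criteria you emphasize (and on noise in the underlying data), while a narrow range signals a rank that holds up no matter how the weights are shuffled.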
Are the NRC’s new methods conceptually and statistically sound? Not everyone thinks so. We’ll explore those debates in our coverage this week. But whatever its merits, the new ranking system seems bound to cause a healthy disruption in the traditional status economies of academe. Scholars who have walked with a certain confidence-bordering-on-arrogance, serene in the knowledge that they teach in a top-10 program, may now be prompted to think in terms like these: “Well, we seem to be somewhere between sixth and 20th place. At least, we’re 90 percent certain of that. But then there’s that second analysis that says we’re somewhere between 11th and 32nd . . . ”
And that brings us to a paper that appeared this weekend, in a perfect bit of timing, on the Web site of the journal Research in Higher Education.
In the paper, “Are You Satisfied? Ph.D. Education and Faculty Taste for Prestige: Limits of the Prestige Value System,” four scholars poke holes in the truism that prestige, not money, is the primary currency of academic life. (Money is definitely the currency of the journal’s Web site; the article is behind a paywall.)
The authors—Emory Morrison of Mississippi State University and Elizabeth Rudd, Joseph Picciano, and Maresi Nerad of the University of Washington—mined data from a recent large survey of social scientists conducted five years after they had earned their doctorates. At that early stage of their careers, were scholars more satisfied with their jobs if they taught at high-prestige programs (as measured in part by the 1995 NRC rankings)?
In general, the answer turned out to be no. Salaries, not program prestige, were the strongest predictor of the scholars’ job satisfaction. But there was an interesting exception: Among scholars who had earned their own doctorates in high-prestige programs, the prestige of the university where they were teaching significantly affected their job satisfaction, especially their satisfaction with their level of autonomy.
Morrison and his co-authors argue that high-prestige programs culturally transmit a “taste for prestige” among their graduates—but that it is a mistake to assume that all scholars share that taste. Among people who earn their doctorates at not-so-prestigious institutions, salary is the primary determinant of job satisfaction.
The new paper tips its hat to The Academic Marketplace, a 1958 treatise on scholarly prestige by the sociologists Theodore Caplow and Reese McGee. In a memorable passage in that book’s introduction, Jacques Barzun described
the radical ambiguity of a profession in which one is hired for one purpose, expected to carry out another, and prized for achieving a third: teaching, research, and prestige are independent variables, besides being incommensurable per se. The upshot is as lively a set of anxieties for the agents and the responsible heads of the institution as one could hope to produce by teasing hamsters in electrified cages. Hence the peculiar governance and subdued restlessness of the American university.