October 31, 2014

The Rankings Game: Who's Playing Whom?

This summer, Forbes magazine published its new rankings of "America's Best Colleges," implying that it had developed a methodology that would give the public the information it needed to choose a college wisely. U.S. News & World Report, which published the first annual ranking in 1983, announced its latest ratings just last week — including new rankings of "up-and-coming" colleges and of institutions favored by high-school counselors. The publication has spawned a cottage industry, transformed how the public thinks about higher education, and in the process made a lot of money.

Over the past three decades, I've had ample opportunity to dissect the various rankings and to discuss the validity of their methodologies with those responsible for publishing them, in an effort to explain to a wide range of university constituencies, including the news media, why the universities where I worked — the University of Illinois at Urbana-Champaign, Cornell, and, for the last 17 years, Duke — were rated where they were. As I retire from university administration, it's fun to ruminate on the absurdity of it all.

Ours is a competitive culture, and it should be no surprise that many people are interested in such external assessments of the quality of American higher education. After all, students and families spend as much as $50,000 a year to go to college, and it is reasonable for them to want a credible, independent assessment to help guide their thinking about where to make that significant investment.

That said, I don't know anyone in higher education I've talked to since the ratings game began who believes that the magazine rankings of undergraduate education can capture what makes the experience offered by an individual institution unique or effective. One can make a plausible argument for the National Research Council's long-delayed assessment of the quality of Ph.D. programs — due out this fall — because it both takes a long enough view and uses recognized qualitative performance criteria to evaluate the relative strengths of programs in a field. But even that assessment has a time lag and cannot capture the appointment of one or two faculty members who can transform the quality and productivity of a graduate department.

The undergraduate magazine rankings, in contrast, give considerable weight to perception and tend to be based on annual assessments, as if innovations or tweaks to undergraduate programs could manifest significant change in two semesters. But if the objective is to sell magazines, manifesting change is important. U.S. News has artfully — in the guise of improving the accuracy of its rankings — made one or more changes to its methodology every few years, which enables it to argue that the new methodology has captured some shift in the quality of institutions. The cynic in me says that changing the methodology is more a strategy for getting different results in the rankings, which helps the publication sell more copies. If the rankings stayed constant, why buy the magazine?

Moreover, the precision that U.S. News claims for its methodology is, on its face, rather silly. If you look at the top 10 institutions, you will see that some of them are separated by small fractions of a percent. In the Olympics, those fractions make a difference, but it's hard to understand how, across the real-life breadth of a university's activities, they make any difference at all to a student. I have talked with many people at U.S. News who share my skepticism and, in some cases, are embarrassed by the magazine's rankings. But they recognize that the rankings are a significant moneymaker. (The magazine has created separate rankings of graduate and professional programs, as well as research hospitals, not to mention books based on the rankings.)

Let me cite a few examples from previous U.S. News rankings, along with some experiences of my own with other magazine rankings, which have made me so cynical.

Several years ago, the California Institute of Technology, admittedly one of the great institutions of this country — although a very specialized one, which makes it difficult to compare with other leading private institutions or such public giants as Ohio State University or the University of Texas at Austin — jumped from being ranked eighth to No. 1. I asked people at Caltech if they could identify any differences in the educational programming they offered that year that could possibly justify such a dramatically different evaluation. Of course they couldn't, and they smiled at their good fortune to be named No. 1. At the same time, they also knew that as soon as the methodology was changed, they would probably drop back into the pack of the top 10, which is essentially what happened a year or so later. Interestingly, when I asked people at U.S. News how they could justify such a big change, they shrugged and said they couldn't.

During my years at Duke, the university ranked as high as tied for third and as low as tied for eighth. The year we tied for third was my favorite. Folks at Duke were understandably elated. That year U.S. News had added a new item to the methodology called "value added," which assumed that if you started with high board scores, your graduation rates should correlate in some way. Duke, which has very high board scores but was in the lower part of the top 10 institutions in that category, had one of the highest graduation rates. The assumption was that we must have done something to improve the performance of our students because they graduated at rates disproportionately higher than their board scores would indicate. Thus Duke jumped up a few places, while institutions like the Massachusetts Institute of Technology and the University of Chicago fell more than a full point and dropped in their overall rankings. I recall telling leaders of Duke, including our trustees, not to crow too much about this jump to third place because inevitably the methodology would change, and we would drop a few places — which, of course, is what happened.

My favorite magazine-ranking experience wasn't with U.S. News but with Money magazine, which, in the '90s, had a "Best Buys in Higher Education" issue. In that one, the public universities, almost by definition, ended up having a built-in advantage, although 15 private institutions were listed among the top 100. (Meanwhile, the criteria U.S. News uses are skewed heavily to the advantage of private institutions with smaller student bodies, better faculty-student ratios, and, in many cases, considerably larger endowments.) Duke was not among the 15 in Money's rankings, much to the consternation of some of our trustees and others. So I met with the editors of Money and asked how they could explain that we could be ranked in the top 10 in the country by U.S. News and other ratings (as skeptical as I was about them) and not make the top 10 private institutions, or even come close, in Money's listing. They really didn't have an answer for me, mumbling something about our library resources, and I was able to document that their numbers were wrong. The next year, Money came out with a new category: "Costly Schools That Are Worth the Price." Duke was ranked highly in that, and people at Duke were pleased. Alas, I didn't keep the pressure on the magazine, and one year later it dropped the category.

It's also true that it is possible to game the U.S. News system. For example, it is widely known that at least one of the better institutions in the country developed a strategy to improve its profile. Since U.S. News gives great weight to yield — the percentage of admitted students who choose to attend — that institution significantly increased the number of students it admitted under early admission, to around 50 percent, knowing that the early-admit students would accept its offer. The institution moved from the bottom of the top 10 into the top four or five rather quickly and has managed to maintain that position. That is not to suggest that it's not a very strong university, only that a smart marketing strategy has enabled it to position itself in a very different way, with the U.S. News rankings at the core of the strategy.

I've long felt that one could make a reasonable case that the top 25 or so institutions that U.S. News has identified are probably about right, give or take a few at the lower end. Knowing what I do about Duke and other institutions, do I believe we're one of the top 10 universities in the country in terms of the quality of the undergraduate educational experience we offer our students? Absolutely. But can I say that we're fifth, or eighth, or 10th? Absolutely not. I've argued to people at U.S. News that they would be far more intellectually honest if they stopped pretending they can rate institutions with great precision and instead grouped institutions into a top 10 or top 25. But that wouldn't sell magazines.

Although a group of college presidents last year announced that most of its members would no longer complete the reputational survey U.S. News distributes, virtually all of higher education has fallen into the trap of playing the rankings game. I remember well a wonderful speech by a distinguished faculty member at my son's freshman convocation several years ago. The scholar compared the founding of that institution to Odysseus' journey, noting that both had decided not to let others define who they were. He urged the freshmen to create their own identity through the choices they made during their college years. Within a moment or two of the faculty member taking his seat, the chancellor of the university — a person I admire enormously — told the assembled freshmen and their parents that while the information was embargoed publicly until 11:59 that night, he felt comfortable telling them in confidence that the university for the first time had cracked the top 10 of U.S. News rankings. The response was predictable, with students jumping up and down, and parents smiling at the thought that their investment clearly was going to be worth it. The faculty member sat there, his head bowed.

In the earliest days of the U.S. News & World Report ratings, Stanford University was ranked No. 1. I greatly admired Donald Kennedy, then president of Stanford, for saying that while it was nice to have the magazine make the assessment of his institution, he surely hoped that students didn't make their judgments based on magazine ratings. Mr. Kennedy got it right. If you live by U.S. News, you're likely to die by it.

Thus, whenever reporters and others sought my reaction to Duke's being ranked somewhere in the top 10, I always said: "It's nice to have confirmed by U.S. News what we know about the quality of our students and faculty. But magazine ratings are really designed to help sell magazines and should not be the basis upon which any student makes a choice. Rather, students should visit a campus, spend real time learning about the academic programs, and determine whether or not they have the right fit with a particular institution." I still think that's very sound advice.

John F. Burness is a visiting professor of the practice of public policy at Duke University. He previously served as senior vice president for public affairs and government relations at the university.


http://chronicle.com — Section: Commentary, Volume 55, Issue 2, Page A80
