November 1, 2014

30 Ways to Rate a College

The lines below connect raters to each of the measures they take into account. Notice how few measures are shared by two or more raters. That indicates a lack of agreement among them on what defines quality. Much of the emphasis is on “input measures” such as student selectivity, faculty-student ratio, and retention of freshmen. Except for graduation rates, almost no “outcome measures,” such as whether a student comes out prepared to succeed in the work force, are used.

* Published in a partnership between these two organizations through 2009.
Note: In some cases, separate measures shown here are combined to create a single variable used for ranking colleges.

Comments

1. 22228715 - August 30, 2010 at 08:54 am

In the introductory paragraph, I would argue with the statement that "freshman retention" is an input measure. Using the I-E-O model, it is either an outcome of the first full year, or it might be considered an environmental variable if one takes the long view or wants to see the first year as more determined by input variables than created by the institution. But yes, input variables such as selectivity and high-school class rank can be used to calculate a predicted first-to-second-year retention rate, which can then be compared with the actual rate (a sketch of that comparison follows this comment). And, yes, there are very few outcome measures.

This suggests a bigger issue for the rankings: what is the outcome product or variable being measured, and how do the listed variables fare as appropriate proxies for that outcome?
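A minimal, illustrative sketch of the predicted-versus-actual retention comparison described in the comment above. Nothing here comes from any rater's actual method; the institutions, input variables, and numbers are all hypothetical.

```python
# Illustrative only: predict first-to-second-year retention from input
# variables, then compare each institution's actual rate with its prediction.
# All data below are made up.
import numpy as np

# Inputs per institution: [admit rate, mean high-school class-rank percentile]
inputs = np.array([
    [0.10, 96.0],
    [0.25, 88.0],
    [0.45, 74.0],
    [0.60, 65.0],
    [0.80, 52.0],
])
actual_retention = np.array([0.97, 0.92, 0.83, 0.76, 0.64])

# Simple linear model: retention ~ intercept + inputs
design = np.column_stack([np.ones(len(inputs)), inputs])
coef, *_ = np.linalg.lstsq(design, actual_retention, rcond=None)

predicted = design @ coef
# A positive difference means the institution retains more students
# than its inputs alone would predict.
for i, (pred, actual) in enumerate(zip(predicted, actual_retention)):
    print(f"Institution {i}: predicted {pred:.2f}, actual {actual:.2f}, "
          f"difference {actual - pred:+.3f}")
```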

2. 11217546 - August 30, 2010 at 10:29 am

Many of these measures are not even indicators of quality. These rankings publications make use of data that are readily available and are collected for purposes other than creating ordinal rankings among institutions. Some of the measures are quite subjective.

No serious educational researchers have attempted to create such rankings because reliable data to systematically compare the quality of an educational experience do not exist. These amateurish attempts by magazines to measure quality are inadequate. Unfortunately, they get a lot of popular attention.

3. fambus2009 - August 30, 2010 at 10:39 am

The absence of shared measures does not indicate a "lack of agreement" on quality definition, but rather attempts by the raters to differentiate what they're selling to the public to gain competitive advantage. These current efforts are not the place to start.

4. nweinstein - August 30, 2010 at 10:45 am

The categories absent from these surveys are powerfully suggestive of what is missing in rating colleges. How students and faculty evaluate themselves as learners is nowhere to be found. Gaps between the images of colleges created by marketing departments vs. their daily performative reality are also neglected. As for "outcome assessment," it might be novel and valuable to probe the range of potential careers graduates find themselves prepared to enter as a consequence of their college experience. It is also surprising that no category exists for "campus atmosphere catalyzing creativity." Finally, this entire business of rating colleges like so many football teams might be looked at through creating a scenario where a roundtable discussion of great educators from the past could be brought into 2010 to ponder the ratings game. Here are suggested participants: Socrates, Lao Tzu, Emerson, and James. How would they care to play the ratings game?

5. metacomets1 - August 30, 2010 at 10:58 am

Where are the ratings by institution for number of graduates that obtain meaningful employment? This should be very important information for a student to know when choosing a college, especially in today's environment.

6. srojstaczer - August 30, 2010 at 11:08 am

It should be noted that the authors of this chart for some strange reason did not include the third highest ranked (according to the Google search engine) college ranking system in the world, The College Ranking Service (http://www.rankyourcollege.com). As the Boston Globe notes, "The most piercing rankings are found on rankyourcollege.com." The French newspaper Le Monde says that the CRS provides "satisfaction garantie." The College Ranking Service is the acknowledged world leader in college evaluation.

Yes, the CRS is a parody. But the quotes above are real and the methods the CRS uses are no more or less ridiculous than the methods described on the Chronicle of Higher Ed chart.

7. ccnorm - August 30, 2010 at 11:11 am

Ditto metacomets1 - Where are the placement rates? Where are the satisfaction measures from the graduates?

8. 22024814 - August 30, 2010 at 11:23 am

None of these rankings look at what students and employers themselves see as "measures" of success today: job placement; indebtedness relative to salary; real-world curriculum; student satisfaction; teacher effectiveness and engagement; and so on. These are all "sunset" models of rankings based on what academics and insiders (experts) thought equated to quality, or "ought" to equate to quality, two or three decades ago. People themselves don't see the items measured as relevant to what they are looking for. For rankings to hold meaning, they need to relate to what people are searching for from higher education. That is not research spending or the number of Nobel Laureates on the faculty.

9. 11310086 - August 30, 2010 at 11:54 am

But the big issue, which no one seems to mention, is that quality is *complicated*, much too complicated to be summarized in a single summary measure or ranking on a list. That's why when colleges themselves do self-studies for reaccreditation they're 300 pages long; the many, many functions of the modern college or university, and the many measurements that may legitimately be made of each, are not something you can easily or even rationally combine into some global measure. That anyone even thinks it's a good idea to try suggests to me that part of the problem is that colleges are now expected to be---and themselves buy into the idea that they must be---everything to everybody.

10. unusedusername - August 30, 2010 at 01:04 pm

"Except for graduation rates, almost no “outcome measures,” such as whether a student comes out prepared to succeed in the work force, are used."

This is key. The real gauge of an education is how well students do after college compared with their predicted performance when they walked in. But this is really hard to do. Looking at the list, it looks to me like the Forbes list comes closest.

11. sstandif - August 30, 2010 at 01:40 pm

If we assume that markets are relatively efficient, one could argue that the quality of inputs IS a key measure of success. Good students will gravitate toward the best universities.

There is a Buddhist proverb that suggests, "When the student is ready, the teacher will appear." A slight variation of this idea could be, "When the student is demanding, the teacher will respond." One could argue that the best students (high-quality inputs) bring out the best that a university has to offer. At a minimum, we know students tend to learn together and from one another. Having high-quality colleagues matters.

Clearly, it's not all about inputs. The fact that the ratings focus only on inputs is problematic for a variety of reasons. That said, I would not want to make the mistake of assuming that inputs don't matter.

12. 22213708 - August 30, 2010 at 03:11 pm

Where are the ratings by institution for number of graduates that end up leading meaningful lives? This should be very important information for a student to know when choosing a college.

13. josephofoley - August 30, 2010 at 03:39 pm

The proof of the pudding is in the eating. Wouldn't it be helpful if we rated graduates rather than their institutions? Selective colleges and universities are more than willing to rate their applicants. Moreover, if we grant that much of the motivation for education is based on the quest for vocational advantage, shouldn't we remember that employers rate job seekers?

To the degree that anybody can agree on the desired characteristics of a college graduate (if no one can, why rate colleges?), we should be able to create some sort of evaluation of a graduate's proximity to that standard. If writing is important, evaluate writing. If reading is important, evaluate reading. If subject knowledge is important, evaluate that. The same goes for problem solving, teamwork, etc. CPAs and lawyers are required to show that they have learned enough to practice their profession. Of course, this would encourage self-education and undermine the importance of degrees.

If we were ever able to enshrine individual competency over educational pedigree, we would finally have an objective way to evaluate institutions. Ye shall know them by their fruits.

14. triumphus - August 30, 2010 at 03:49 pm

So much to measure; so little time.

15. a_voice - August 30, 2010 at 04:31 pm

One commenter said, "Where are the ratings by institution for number of graduates that obtain meaningful employment?" Another asked, "Where are the ratings by institution for number of graduates that end up leading meaningful lives?"

What is "meaningful employment" or "meaningful life"? How and how often can we measure that? Is the single role of college to provide for "meaningful employment"? Is college the only predictor of a "meaningful life"? If not, how can we separate the contributions of college from other variables?

For some reason, metrics-oriented people scare me more than the absence of metrics does.

16. hoppingmadjunct - August 30, 2010 at 05:48 pm

Only two raters use FT/PT faculty ratio, and none use the ratio of their salaries or any kind of relative measure of their benefits, job security, or opportunities for professional advancement. True, these are a little trickier to measure. But with 70% of American faculty hired off the tenure track now, teaching half of undergraduate courses, such oversights seem as negligent as the hiring practices that have so deeply entrenched the inequitable two-tiered faculty itself. Negligent? Nay, criminal.

17. mattymel - August 30, 2010 at 07:16 pm

The Provão test in Brazil is something that deals with measuring outcomes and doesn't seem to get much of a mention in this conversation. I can't see it getting much support where I'm from, but it is interesting nonetheless.

18. mathprof47 - August 30, 2010 at 11:03 pm

We need to divest ourselves of the idea that rankings, because they are numerical, are therefore "objective." Yes, it's true that 5 is a lower number than 10, but everything that goes into the determination of those numbers is the product of thought, judgment, argument, and, yes, subjectivity.
The same is true of scores on standardized tests.
It would help public understanding of things like rankings if we avoided the word "objective" for things whose only objectivity is in the numerical outcomes of completely subjective processes.

19. shanda10 - August 31, 2010 at 07:59 am

Will you update the map with the new (2010) measures/categories of the Times Higher Education Ranking?
Since we have to live with rankings, this map is a really amazing way to demonstrate which measures rankings include and therefore how many shortcomings they have. Thank you!

20. siwasher - August 31, 2010 at 04:06 pm

These rankings remind me of teenagers' debates as to which is the 'best' computer. For what? For whom? Evidently, the best college, whatever the criteria for that judgment, is the best one for the 'average' student, a high-order abstraction, which means it doesn't really exist. Students are particular, individual, unique, with distinct talents, needs, learning styles, and definitions of success and satisfaction. If educators didn't so often forget that, starting in K-12, students would be even more distinct, higher education a more diverse ecosystem, and the choices among schools more meaningful.

21. 22260020 - August 31, 2010 at 04:58 pm

Outcomes Assessment In Higher Education by Hernon & Dugan (2004) is one serious attempt by respected academics to address methods of rating colleges and universities. They survey the current methods and suggest options as well, looking at input and outputs to be measured. I don't agree with all the suggestions, but it's a worthy effort, and I don't know of any other by people in the academy.

22. rambo - August 31, 2010 at 07:30 pm

an equal balance of intellectual diversity (liberals and conservatives)
an equal balance of gender (males and females)

23. bwhite123 - September 01, 2010 at 04:41 pm

Why is there a need to rank schools or programs within schools? Given the cost of college, success could be measured by employment, but a student's first job may not be a measure of his or her career, let alone lifetime interests. Georgetown, to its credit, has surveyed and published online exactly what its seniors did upon graduation. It may not be comforting to those looking at it on its face, but for the students it might have been what they wanted. Current students' ratings of their professors could be a better measure, if they were objective. Prospective students use social media to evaluate schools they are interested in by contacting students who are attending. This helps, but opinions are no more than that and are subject to change. I think we need to stop looking for a silver bullet through metrics. Students can obtain a good education at most schools if they are interested and apply themselves. With grade inflation, even this cannot be measured.

24. jthelin - September 01, 2010 at 04:44 pm

I draw inspiration from those Philadelphia high school students in the 1950s who danced away on Dick Clark's American Bandstand each afternoon on national television. When asked to rate and rank a new rock 'n' roll record, the analytic-aesthetic response was, "It's got a good beat. You can dance to it. I give it a 7."

Ratings and measures, not unlike a monetary currency, only work if consumers have confidence in them and believe them to be true or worthy. Consider how inane it is to rely on "batting averages" to rate and rank major league baseball players' ability and skill. But the custom persists even though it's not clear what the batting average connotes.

25. lexisaro - September 04, 2010 at 10:07 pm

I can critique the rating systems as much as the next person. Obviously they miss the mark in many, many ways. At the same time, for students and parents looking for a college, what do you suggest they do? Tour around and talk to five friends to get a better assessment? I mean that seriously: the rankings are flawed, but they are better than nothing.

As for outcomes, while it would be nice to add those too, they are pretty darn useless on their own, since you have no idea whether the outcome is related to the type of student who gets into said college (and who would have the same outcome at ANY college), the distribution of majors (schools emphasizing engineering over the arts will report higher employment and salary figures), or the location (schools in economically vibrant centers will fare far better than schools in rural locations). Unless these factors and others are accounted for and adjusted (see the sketch after this comment), it is all pretty pointless. I want to know which schools PRODUCE talent, not just which ones buy existing talent and entertain it for four years.

Metrics are here to stay and will always be flawed, and I'm delighted that so many different ones exist, so one can compare a school across different methods of assessment.
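As a rough illustration of the kind of adjustment the comment above asks for, here is a minimal sketch that regresses a raw outcome (median graduate salary) on entering-student quality, major mix, and location, and treats the leftover as a crude proxy for what the school itself adds. The data, variable names, and model are all hypothetical, not drawn from any actual ranking.

```python
# Illustrative only: adjust a raw outcome (median graduate salary) for
# incoming-student quality, major mix, and location. Every number and
# variable here is made up.
import numpy as np

# Per-institution factors: [mean entering SAT, share of engineering majors,
#                           located in a major metro area (1) or not (0)]
factors = np.array([
    [1480, 0.45, 1],
    [1390, 0.20, 1],
    [1250, 0.15, 0],
    [1180, 0.10, 1],
    [1100, 0.08, 0],
    [1020, 0.05, 0],
], dtype=float)
median_salary = np.array([74000, 61000, 52000, 50000, 44000, 39000], dtype=float)

# Regress salary on the factors; the residual is the part of the outcome the
# factors do not explain -- a rough stand-in for value added by the school.
design = np.column_stack([np.ones(len(factors)), factors])
coef, *_ = np.linalg.lstsq(design, median_salary, rcond=None)
residual = median_salary - design @ coef  # positive = outperforms expectation

for i, r in enumerate(residual):
    print(f"Institution {i}: salary relative to expectation {r:+,.0f}")
```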

26. lexisaro - September 04, 2010 at 10:08 pm

Why the sole fixation on job outcomes? Is it not the case that a lot of undergraduate work leads many into graduate or professional schools? Surely that is as important an indicator as employment after undergrad.

27. mikereddin - September 17, 2010 at 02:57 am

World university rankings take national ranking systems from the ridiculous to the bizarre. Two of the most glaring examples are made more so by these latest meta-analyses.

Number One: R&D funding is scored not by its quality or contribution to learning or understanding but by the amount of money spent on that research; it ranks expensive research higher than cheap research; it ranks a study of 'many things' better than the study of a 'few things'; it ranks higher the extensive and expensive pharmacological trial than the paper written in tranquility over the weekend. I repeat, it does not score 'contribution to knowledge'.

Number Two: Something deceptively similar happens in the ranking of citations. We rank according to number alone - not 'worth' - not whether the paper merited writing in the first place, not whether we are the better for it or the worse without it, not whether it adds to or detracts from the sum of human knowledge. Write epic or trash... as long as it is cited, you score. Let me offer utter rubbish - the more of you who denounce me the better, as long as you cite my name and my home institution.

Which brings me full circle: the 'rankings conceit' equates research / knowledge / learning / thinking / understanding with institutions - in this case, universities and universities alone. Our ranking of student 'outcomes' (our successes/failures as individuals on many scales) wildly presumes that they flow from 'inputs' (universities). Do universities *cause* these outcomes - do they add value to what they admitted? Think on't. Mike Reddin www.publicgoods.co.uk
