College-Rating Plan Is Starting Out in an ‘Absurdly Wrong Direction’

To the Editor:

As you’ve probably heard, a federal college-rating program is in the works at the Department of Education. President Obama has directed the department to develop a system to quantify institutional outcomes, accessibility, and affordability. By the end of 2014, this information will be made publicly available as a consumer guide for students and their families. In addition, Congress and states will be encouraged to use the rating system as an investing guide, tying levels of aid to institutional ratings. The hopes for this plan are ambitious: The rating system will enable students to make better choices, colleges will respond to new incentives towards improved quality, and the resulting competition will help control or decrease tuition costs.

The rating system hasn’t been fully developed yet, so the Secretary of Education, Arne Duncan, has warned that criticizing it is “a little premature—and more than a little silly.” Perhaps. But do we need to wait until the entire outfit has been sewn to proclaim that the emperor has no clothes? The plan being put forward is foolish. It won’t help students make good decisions because it gives them only some of the information they need while simultaneously offering them terrible investing advice. It will fail at incentivizing excellence because it provides rewards to the wrong people for the wrong reasons. And it will fail at either cost control or quality improvement because achieving both at the same time is rare except in Silicon Valley and the fantasies of policy makers.

Fortunately, the department has pledged that it won’t risk the quality of our higher-education system, along with billions in aid dollars, without first collecting feedback from “literally hundreds and hundreds of stakeholders.” Assuming this quota has not yet been reached, let’s look at what this plan gets wrong and what can be done to fix it.

College rating is not helpful; college matching is. A key goal of the college-rating plan is to “empower students with information” so that they can “make more informed choices.” Much like an investor choosing a mutual fund, a student will be able to consult each college’s rating to help determine “which colleges offer good value and which ones do not.”

This plan would make lots of sense if choosing a college actually were like choosing a mutual fund. Unfortunately, it is not. Once purchased, the value of a mutual fund is essentially independent of your own efforts. Moreover, additional investors in the same fund will not dilute the value of your investment, and may well raise it. It should be obvious, but neither of these conditions applies to higher education. Student characteristics are the largest factors in determining the value realized from tuition payments. Moreover, the more students who enter the same field or institution, the more the potential value of the degree is diluted (Harvard wouldn’t quite be Harvard if it graduated millions each year, would it?).

Both of these points require rethinking the college-rating plan. First, the question “Is college worth it?” cannot be answered from institutional data alone. This would be like trying to choose the best clothes based on their resale value alone, regardless of your own size and shape. Clearly, students should select a college with consideration of their own academic and non-academic qualities, as these are the biggest factors influencing their likelihood of graduating. To the extent that college characteristics matter, it is mostly in the degree to which student characteristics match with the environment offered at that school.

What is really needed is a college-match website where students can see how their own characteristics (high-school GPA, anticipated amount of study time, etc.) relate to outcomes at different higher-education institutions. Certainly, this is what most colleges are doing on the admissions end, and to good purpose, as smart selection is essential for financial health at many institutions. Shouldn’t students have access to the same predictive power when selecting a school that the school has in selecting its students?

Performance-chasing is bad investing; pursue a balanced portfolio. The college-rating plan couples incomplete information with terrible investing advice. Specifically, the plan is meant to encourage both students and funders to favor those institutions with the highest ratings. On the stock market, this naïve investing strategy is known as “performance chasing.” While the strategy is seductive, it has repeatedly been shown to be suboptimal. Due to regression to the mean, today’s high performers are not especially likely to be tomorrow’s (Arne Duncan: I hope you’re not managing your own portfolio this way!).

Performance chasing is especially disastrous for commodities. In agriculture, for example, farmers may rush to plant more of a crop with a current high price only to find that the popularity of this strategy has led to overproduction and price collapse. Does it really make sense, then, to coordinate college decision-making and funding through the publication of a federally backed rating system?

As with regular investing, the key goal for college planning should be diversification—diversification on an individual level through a focus on common core competencies useful in many vocational pursuits, and diversification across higher education in terms of the majors, programs, and institutional contexts available for students to choose from. Rather than chasing statistical noise, the department should think more about how to help students and the industry develop a balanced portfolio.

Don’t reward students for alumni behavior; apply incentives directly to student performance. A second key aim of the college-rating system is to provide incentives to promote excellence within higher education. According to Mr. Duncan, this is necessary because our current financial-aid model “doesn’t support excellence.” Instead, “our existing funding model essentially only provides incentives for enrollment-growth—the more students you have, the more money you get.”

This is an odd way to characterize the situation. The rest of the economy operates with the same incentives: the more customers you can attract, the more money you get. This seems to be a potent motivation for excellence in the rest of the economy; it’s not clear why Mr. Duncan is so down on this model for higher education.

But supposing that we do need to enhance the incentives for excellence: how should this be done? Bizarrely, the proposed system does this by tracking institutions in order to reward students. For example, the president’s fact sheet on the program suggests that “students attending high-performing colleges could receive larger Pell Grants and more affordable student loans.” From an incentive perspective, this would make B.F. Skinner’s head explode. In this scheme, students gain financially for the previous good performance of other students—the school’s alumni. Once admitted to a top-rated school, will there be any incentive for a student to match the performance of the illustrious alums who went before? No. That student’s outcome data won’t be available until after graduation, so it cannot affect their financial aid during their college education. Imagine, though, being a first-year student and realizing that the seniors about to graduate are out partying, potentially leading, upon their graduation, to higher costs for your next year of college. We could call this scheme Pay It Backwards.

It would make more sense to reward students for their own performance. For example, the HOPE scholarships in Georgia provide about 90 percent of state tuition costs to any Georgia student with at least a 3.0 high-school GPA. This type of incentive structure has some weaknesses (it is a bit regressive and also prone to problems of grade inflation), but it provides a straightforward way to address accessibility, learning, and timely degree completion. Even better, the program is pegged to a statistic already available (individual GPA), so the overhead is very low.

You get what you measure, so only measure what you want. For incentives to work, they must not only be applied to the right people, but also given for the right reasons. A critical consideration, then, is how the college-rating plan will assess quality. This part of the plan is the most in flux, so there is no definitive strategy at this stage. Still, it has already been determined that it “will be looking at three big performance buckets”: access, affordability, and outcomes. For outcomes, there are many possible metrics, but Mr. Duncan states the initial focus will be on outcomes such as “graduation and transfer rates, alumni-satisfaction surveys, graduate earnings, and the advanced degrees of college graduates,” and how many students at an institution “get a job in a field they choose.”

Did you notice the dog that didn’t bark in the night? There is no mention of rating colleges based on student learning, the core mission of higher education! This is inexplicable. It would be like ranking college football teams based on their merchandise sales rather than their actual wins and losses. This comparison should be especially apt for President Obama, who was “fed up with these computer rankings and this and that and the other” in college football and thus encouraged the creation of a national football championship series to directly assess football quality. The same principle applies here: If you want to foster learning, you need to measure it and reward it directly. Easy-to-measure but indirect metrics are no substitute, and in fact can be harmful because they tend to be unresponsive to goal-directed change. Consider, for example, a medical school that develops an innovative pedagogy that strongly increases learning. Is this likely to produce a large difference in the employment rate of its graduates?

Of all the non-learning metrics being discussed in the college-rating plan, completion rates stand out as the most egregiously misguided. Unfortunately, this is the metric that seems to have most captivated policy makers: It is consistently the first metric mentioned in materials about the college-rating plan and has been the focus of much recent discussion at both the state and federal levels.

What’s so bad about rewarding high completion? First, low completion rates partly reflect the very competition over students that the overall plan is meant to foster. Which states lead the nation in completion? South Dakota and Wyoming, two states which rank near the bottom in terms of student choice per square mile. It doesn’t make any sense to punish volatility if you want to foster competition. It makes even less sense when you consider that completion rates are slightly anti-correlated with attainment, the overall proportion of degrees earned within the adult population. For example, Connecticut, Rhode Island, and Delaware are currently the bottom three states for completion, but rank 4th, 13th, and 18th in attainment, respectively. Even more striking, sustained efforts to improve completion rates have not yielded improved graduation rates. While this may seem counter-intuitive, it reflects the basic principle that you get only what you measure: measure completion and colleges will find ways to increase completion and completion alone. For example, boxing students out of transferring or reaching a “gentleman’s agreement” to reduce transfer-admissions recruitment are two strategies which could increase completion without boosting overall attainment or learning. Consider the latest innovation from the University of Phoenix: a scholarship that enables students to pay less the longer they attend. This reduces students’ incentives to transfer elsewhere, but it also front-loads their costs! Is this the type of shake-up we need in higher education?

Focusing on completion rates is antithetical not only to fostering competition, but also to increasing student learning. Consider the fact that poor academic performance is one of the primary factors students cite when dropping out or transferring. To substantially increase completion rates, then, an institution can either a) enroll fewer students who are at risk for poor academic performance, b) lower academic standards, or c) create innovative pedagogies and support services to help these students. Will tying an institution’s financial health to its completion rates make it more likely to take risks on innovative approaches? Wouldn’t that be something?

If the government insists on pushing forward with rating college “value,” it should recognize that completion rates are worthless for this endeavor. Similarly, it would be a mistake to reward schools for the many other outcome measures that are easily at hand but only loosely connected with learning (alumni earnings!). As the great statistician John Tukey noted, “the combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted.”

The best course of action would be to ditch the completion agenda. Instead, the department could double down on its support for developing a variety of reliable and valid measures of student learning. With good tools for tracking learning, schools would be able to better assess and improve their impact and students would be better able to articulate what it is, exactly, they gained from their efforts and their tuition. This may be an arduous path to pursue, but basing billions of dollars of aid on whatever data is at hand will wreak havoc on higher education.

Competition is not magic; focus on higher quality or lower costs. The design flaws of the college-rating plan can be remedied. But it is still important not to oversell it. Currently, the plan is being pitched as a panacea that can increase accessibility, foster innovation, boost completion, and control costs! Mr. Duncan sums up this incredible set of promises by saying that under this plan, colleges will have incentives to “do more with less.” Well, even Jesus stuck to one miracle at a time. In a less fantastical reality, it is well established that quality and cost tend to move in opposite directions (semiconductor industry excepted).

Consider the automotive industry. Over the past 30 years, competition in this industry has grown tremendously, first through globalization and more recently through the wider dissemination of quality and cost metrics to eager consumers. Has this yielded cars which do more for less? No. The average price for a new car grew from $10,600 in 1983 to $30,500 in 2013, a 188-percent increase that is far beyond the rate of inflation. It is instructive, however, that the Consumer Price Index rates this as only 46-percent inflation by estimating that a large portion of the price increase has been due to increased quality (notice that estimates of inflation in college costs never get adjusted like this; no wonder they seem so out of line). Brisk competition, then, has dramatically increased automotive quality. Sadly, though, it remains impossible to get something for nothing: Increased quality has significantly increased price.

If a college-rating system really could track college quality, it seems unlikely that competition would provide more for less. Like the automotive industry, consumers would probably end up getting more by paying more. Indeed, one could argue that it is the very strong consumer power of students which has helped spur runaway college costs: Students are the arbiters of where aid dollars go, and they wield this power to demand higher grades, more support services, and more amenities. The modern college campus, with its lavish dining hall, tricked-out recreational center, and rampant grade inflation is very much the mirror image of the infotainment-laden SUVs rolling down our highways. A college-rating plan seems a lot like adding fuel to the fire, particularly if it is not tied very closely to actual student learning.

There is certainly room for improvement in higher education. Students and their families do need better information to help make the difficult choice of whether and where to attend college. Colleges probably should be competing more on the merits of education than on their amenities, brochure design, and sporting success. And of course, all stakeholders are interested in reducing costs to ensure and improve the accessibility of a college education. But these goals are so critically important that we cannot be sanguine about a college-rating plan which is starting out in such an absurdly wrong direction. Let’s hope that instead of chastising critics of the plan for being silly, the department will make good on its promise to genuinely listen to feedback from all stakeholders. With some clear thinking and significant revision, a much better plan for higher education can be developed.

Bob Calin-Jageman
Associate Professor
Department of Psychology
Dominican University
River Forest, Ill.