
Student Learning: Measure or Perish

Illustration: Randy Enos for The Chronicle

For the past three months, The Chronicle's reporters have been writing a series of articles collectively titled Measuring Stick, describing the consequences of a higher-education system that refuses to consistently measure how much students learn. From maddening credit-transfer policies and barely regulated for-profit colleges to a widespread neglect of teaching, the articles show that without information about learning, many of the most intractable problems facing higher education today will go unsolved.


Failing to fill the learning-information deficit will have many consequences:

  • The currency of exchange in higher education will continue to suffer from abrupt and unpredictable devaluation. Students trying to assemble course credits from multiple institutions into a single degree—that is, most students—frequently have their credits discounted for no good reason. That occurs not only when students transfer between the two- and four-year sectors, or when the institutions involved have divergent educational philosophies. A student trying to transfer credits from an introductory technical-math course at Bronx Community College to other colleges within the City University of New York system, for example, would be flatly denied by five institutions and given only elective credit by three others. John Jay College of Criminal Justice, by contrast, would award the student credit for an introductory modern-math course acceptable for transfer by every CUNY campus, including Bronx Community College—except that BCC would translate that course into trigonometry and college algebra, not technical math.

    Students who emerge from this bureaucratic labyrinth should be awarded credit in Kafka studies for their trouble.

    Credit devaluation, which wastes enormous amounts of time, money, and credentialed learning every year, is rooted in mistrust. Because colleges don't know what students in other colleges learned, they're reluctant to give foreign courses their imprimaturs.

  • Taxpayers have few defenses against those who would exploit the federal financial-aid system for profit. Last year the U.S. Department of Education rightly criticized the Higher Learning Commission of the North Central Association of Colleges and Schools for accrediting American InterContinental University, despite AIU's "egregious" policy of awarding nine credits for five-week courses. But the department's follow-up proposal to solidify the traditional, time-based definition of credits as signifying one hour spent within the classroom and two without was also criticized, and for good reason. Nearly a third of all college students took online courses last year. Why would anyone define credits in terms of seat time when, increasingly, there are no seats and no fixed learning time? Because they have no other basis for doing so.

    Lacking objective information about student learning, the crumbling quality-control triad of accreditors, states, and the federal government is faced with an unwelcome choice: Reinforce a time-based measuring stick that was already flawed when it was developed, in the late 19th century, or allow unscrupulous operators to write checks to themselves, all to be paid by the U.S. Treasury.

  • Upward mobility in higher education will remain limited to institutions that happen to be located in the cities favored by Richard Florida's "creative class." If your campus is in Greenwich Village or Foggy Bottom, the sky's the limit. If all you have to offer is unusually good teaching, you're out of luck. How can you prove it? How would anyone know? So aspiring colleges are forced to compete for students by means of marketing campaigns, recreation centers, and other expensive things that continually drive up tuition until there are no students left to pay full freight and subsidize all the rest. And then the whole rickety system comes crashing down. It's not a question of whether this will happen to many mid-tier institutions—it's when.
  • The public definition of institutional quality is left to think-tank entrepreneurs and journalists with agendas to push and magazines to sell. Those who are terrified by the notion of Congress's using such information to create an accountability system for higher education should consider that, in fact, we've had such a system in this country since 1983. It's run by U.S. News & World Report.
  • Most important, without information about learning, there is less learning. Faculty cultures and incentive regimes that systematically devalue teaching in favor of research are allowed to persist because there is no basis for fixing them and no irrefutable evidence of how much students are being shortchanged.

Reasonable higher-education leaders acknowledge all of those points. Yet the prevailing attitude toward information about learning still ranges from infinite caution to outright hostility. Assessing student learning is difficult, particularly learning at the elevated levels to which colleges ought to aspire. Still, possible instruments of assessment are seen either as gross violations of institutional autonomy or as so crude and imperfect that they require further refinement and study, lasting approximately forever. "The perfect is the enemy of the good" has become a rhetorical strategy to be deployed, rather than a problem to be avoided, when outsiders ask uncomfortable questions about teaching and learning.

American universities grant 50,000 research doctorates per year. Even if we consider only full-time staff in Ph.D. programs, there are upward of 170,000 people working in colleges today who have been rigorously trained to find meaning in chaos. They explore the furthest theoretical reaches of time and space; ponder the nature of justice, beauty, and truth; develop new ways of understanding the human condition; and contribute countless innovations that make the world a more vibrant, humane place to be. Are we to understand that it is beyond their intellectual means to produce a reasonably accurate estimate of how much chemistry majors learn at Institution A compared with Institution B? That a student's relative capacity to think analytically and write clearly is a mystery that no mortal can hope to reveal?

Nonsense. Comparable learning information doesn't exist because many groups have a strong interest in its not existing. Institutions that thrive on centuries-old reputations, despite their present-day failure to challenge students in the classroom. Companies looking to exploit the federal financial-aid system. Faculty who hate teaching and love research. Colleges that profit from forcing students to take the same course twice.

Institutional autonomy is important, and so is the academic freedom that allows faculty to shape the content and character of their courses. But there are reasonable limits to most things, including these. When the autonomy of CUNY math departments produces a Mad Hatter credit-transfer system, it's time to draw the line.

There are, of course, many people in higher education with enlightened motives and views. Public institutions are beginning to publish results from the Collegiate Learning Assessment and other assessments of critical-thinking skills. Seventy-one presidents, many from liberal-arts colleges that specialize in teaching, have formed the Presidents' Alliance for Excellence in Student Learning and Accountability. The better accreditors are using their limited leverage to prod institutions toward more assessment and transparency.

But the question remains: Will those efforts come fast enough or go far enough?

The "gainful employment" regulations that the Department of Education is working to impose on for-profit colleges are nothing less than a wholesale repudiation of traditional higher-education quality control. All of the institutions in question are accredited to do business. Yet the federal government still doesn't trust that their students are learning enough for what they're paying. So the department has chosen to define learning in purely economic terms, comparing students' postgraduate earnings with their debt.

That makes sense for vocational programs. But how long will it be before politicians who see higher education as nothing more than a way to train future workers simply cross out the "for profit" limitation on the gainful-employment measures?

College rankings, meanwhile, are proliferating as private companies compete to sate the growing appetite for comparative information among prospective students at home and abroad. As much as colleges complain that their unique essence can't be distilled into a single number, students choosing a college (or, increasingly, a course) can choose only one. Yet, rather than produce alternative rankings that reflect the core values of higher learning, many people in higher education seem to believe that the rankings genie can be put back in the bottle through a campaign of frequent, uncoordinated complaining, accompanied by the hope that U.S. News, which doesn't even publish an actual newsmagazine anymore, will somehow see the error of its ways.

Meanwhile, a few of those 170,000 smart people are actually interested in how much students learn in college, and are using new psychometric instruments to find out. When their results become public, the myth that everyone with a college degree actually learned something will be definitively punctured, and along with it any justification for keeping information on learning hidden.

The real debate shouldn't be about whether we need a measuring stick for higher education. We need a debate about who gets to design the stick, who owns it, and who decides how it will be used. If higher education has the courage to take responsibility for honestly assessing student learning and for publishing the results, the measuring stick will be a tool. If it doesn't, the stick could easily become a weapon. The time for making that choice is drawing to a close.

Kevin Carey is policy director of Education Sector, an independent think tank in Washington.

Comments

1. 11167997 - December 13, 2010 at 07:26 am

Good job, Kevin, but forget the tests. Watch, instead, the Degree Qualifications Profile that the Lumina Foundation will release in January to kick-start a transformational process of defining competency-based student learning outcomes for associate's, bachelor's, and master's degrees. This will be a 2-3-year iterative process involving engagement with the major stakeholders in higher education: accreditors, chief academic officers, students, governance bodies, faculty groups, IR officers, professional associations, and learned societies. Institutions will be challenged to take the Profile, sand and polish it, and add competencies appropriate to their mission, but most of all to write any changes with true student learning outcomes, i.e., with active, concrete verbs that describe what students know, can do, and can apply with their knowledge and skills. It's not merely "can"; it is "that" they do to qualify for degrees. And if the verbs are operational, they lead directly to the design of assessments (assignments, examinations, performances, exhibits, field-based projects, papers) that would validate student attainment. This is a faculty prerogative, and something some faculty do well now, but most do not. And by addressing three levels of degrees, each with increasing levels of challenge built into the verbs of performance, the Profile cleverly challenges us to think up the whole ladder of attainment.
We will come to know what a bachelor's degree means only if we articulate, in the same breath, what both associate's and master's degrees mean. You don't need any standardized tests after that. Again, watch for it next month!

2. jwr12 - December 13, 2010 at 08:29 am

"The public definition of institutional quality is left to think-tank entrepreneurs and journalists with agendas to push and magazines to sell."

Um, physician, heal thyself? What agenda is the think-tank you head up selling? One of educational consulting on the creation of outcomes assessment regimes, perhaps? Or is attending big picture conferences and giving keynotes more your bag?

Alright, I know that's unfair. It's unfair for someone outside your institution to accuse it of doing what it does because of crass, narrow motives. So why is it okay for this columnist to routinely denounce the cravenness and narrowness of current college faculty, all the while pretending that he and "reasonable leaders" who agree with him see the future writing on the wall?

It seems that outcomes assessment -- and the even ghastlier learning-verb pyramid proposed by 11167997 -- is about making education, soup to nuts, linked to servile skill sets: the sorts of things that fit on the resumes of people who are looking to hold whatever niche job the corporate world of the 21st century happens to give them. Meanwhile, as has been shown elsewhere, these jobs tend to go away in about 10 years (holla C++ programmers!), and then not only do you still have to confront the meaning of life, but you have to conduct a job search as an obsolescent specialist.

I digress. How will students know the value of their degrees? They will study all the information already out there; they will talk to current students and faculty; they will consider the historic reputations and particular attributes of the campus they attend. And on the basis of all this, rather than some silly and inevitably misleading test, they will make their choices. It is folly to imagine that a system will be designed that will do better than that, and in the meantime the corporatization of the University will continue, led by well-meaning administrators and, yes, think-tank entrepreneurs.

3. educationfrontlines - December 13, 2010 at 10:56 am

Using the charge that we have a "...system that fails to consistently measure what students learn..." assumes that consistency, which is to say standardization through external assessment, is good. Of course, a wide variety of professors across a wide variety of disciplines do indeed measure what their students learn inside their classes in far greater detail; that is part of our job. To say that most of the 170,000 professors are not interested in what students learn in their college education ignores the fact that we all are concerned with what they learn in OUR class. Students' college education is the sum of those classes and programs designed to integrate those course skills. The goal is not to raise scores on a highly questionable external CLA.

We have already seen what standardization in the name of competency outcomes using external assessments has done in American schools under K-12 NCLB reforms. And 43 states are finishing the standardization job by adopting a national curriculum and external test, ignoring the deprofessionalization of teachers and the long-standing abysmal history of such standardized, externally assessed systems in other countries (many of which are working to get off nationalized testing).

The argument that we must measure and compare chemistry students at institution A versus institution B is needless. You have an array of students evaluated in detail, course-by-course from both institutions and it is students that graduate programs accept and individual students that industries hire. ACT/SAT and GRE provide some indication of institutional grade inflation, etc. without destroying the academic freedom and responsibility needed for creativity.

External assessments, combined with the push of online programs (fraudulent in the case of science labs and performance courses), are actually the real threat de-valuing American diplomas and college degrees in the public sector.

John Richard Schrock

4. unusedusername - December 13, 2010 at 10:57 am

Has the assessment movement improved K-12 education? It's pretty clear that the answer is no. We have taken almost all academic freedom from K-12 teachers, forced them to teach to all-multiple-choice tests, and made them fill out reams of paperwork, and the schools are worse than they were before the whole thing started. Students don't even perform better on the multiple-choice tests, much less on higher-level thinking skills like writing.

A recent article in Science shows that the reason why it is so hard to get secondary science teachers is because of high turnover. The number one reason for the turnover is administrative interference: too much curriculum regulation in the classroom, and spoiled kids backed up by administrators who define student success strictly in terms of the number of students who get through.

Please assessment people, go away. Leave us alone.

5. newfudgeman - December 13, 2010 at 10:58 am

Thank you for this straightforward piece. I have been left wanting to scream after sitting in meetings with other faculty who believe what we do "can't be assessed" (we teach intro to literature). Really? Even though every lit anthology in the world has the same basic information, the same vocabulary defined, and the same theories explained? It's not easy--but it's not that difficult either!

6. shopkow - December 13, 2010 at 11:20 am

I'm always curious about the assumption that figuring out what our students are learning would have to mean a standardized test. Faculty know what they want students to learn (or should). If departments sat down and said, "This is what our students will be able to do when they graduate with a major in our department," departments would be better able to see whether and to what degree the students could actually do it. The problem is that departments tend to ask instead, "What courses should our students have taken before they graduate?" The same sort of thinking works on the course level also. So what should a student be able to do when she or he completes an introductory literature course? This shifts the focus away from "What should the students have read?" to "How should the students be able to apply what they've learned?" If the students are to avoid the misfortunes of the C++ programmers, they'll need to apply what they've learned to new contexts.

You will note that this approach does not mean that every school would teach the same thing or that it would be measured in the same way. We couldn't do that anyway. We have different missions and recruit different students. We couldn't have the same expectations for students in open-enrollment institutions that we would have for students at insanely selective institutions. But we can have clearly articulated expectations and we should have some means to show whether our students have met them.

Leah Shopkow
History Learning Project

7. kevincarey1 - December 13, 2010 at 01:36 pm

To be clearer than I was in the piece, I don't think standardized tests are the only way to measure learning. Surely there are many ways to gather such evidence and it's likely that many if not most won't involve standardized tests.

National standards are a significant element of education policy in many countries whose students outperform ours; Finland is a good example. The idea that individual teachers or even localities should decide on their own what students should learn would strike the Finns as strange if you asked them, which I have.

"we all are concerned with what they learn in OUR class" strikes me as transparently incorrect. All?

8. jffoster - December 13, 2010 at 02:00 pm

Finland is a small country with only a few million people and a short tradition of centralized government. The United States of America are a large country with (probably too) many millions of people and a long tradition of uncentralized federalism and state control of a number of things, including education.

9. crankycat - December 13, 2010 at 02:18 pm

Tempest in a teapot - there isn't any one of those five terrible points of any interest to me. Paper tigers may eat straw men, but I'm dining elsewhere.

10. 986960 - December 13, 2010 at 02:49 pm

test

11. 986960 - December 13, 2010 at 02:59 pm

"Failing to fill the learning-information deficit will have many consequences:

The currency of exchange in higher education will continue to suffer from abrupt and unpredictable devaluation. ......"

With regard to the effect of precise learning outcome measurements upon the course/student transfer issue.

OK, so we learn how to measure what our students are learning and what our faculty are teaching. Then what?

Knowing precisely what is being taught in courses may not increase the ease with which courses transfer between institutions.

Suppose a student takes an algebra course (AC) at institution A and wants to transfer that course to institution B. Oh. Sorry, AC at institution A doesn't cover exactly the same material - and we have data to confirm the distinction between AC at institution A versus that at institution B. Your course doesn't transfer.

Currently, if the courses appear close, a student benefits from the uncertainty in the content (and mastery). It is understood that course material differences and shortcomings can be made up by effort on the student's part in the new institution.

The finer our understanding of our institution's course content and delivery, the finer the scale by which we can judge and rank our institutions. As a corollary, we will obtain a finer gradation of which institutions (and their courses) are comparable to which, and which institutions are in the same class.

The ability to transfer courses and student(s) from institutions of one class to another "of a higher class" may get more difficult, not easier.

12. rpm13 - December 13, 2010 at 05:51 pm

jwr12 is right on the mark, even if a bit too reluctant to accuse the Assessment and Think Tank industries of being self-serving. 986960 shows how just one of the arguments in this essay is wrong. Point by point, the essay is a set of post hoc, mindless rationalizations of the purportedly obvious notions that assessment and standardization are good things. Where are the data? unusedusername is the only one to hint at data, suggesting that this approach has failed even in K-12 where it is at least a plausible model.

13. jffoster - December 14, 2010 at 08:18 am

...960 (next above but one)'s ability to spot and anticipate unintended consequences is commendable and worthy of a high appointment in the Office of the Curmudgeon-General.

But in this case the consequence may be intended, assuming Kevin Carey is that smart or conniving. I suspect what he would really like is that courses -- Algebra (AC) was your example -- be standardized around the country.

14. impossible_exchange - December 14, 2010 at 10:23 am

"measure how much students learn" how does one do that?
You CANNOT. It is like measuring how much I love my wife.
It is impossible to measure something like that.
No test can do it.
So all these measuring rubrics are fake.
Learning isn't like banking: Knowledge isn't placed in the student's head. Rather it evolves in their minds over time and some lessons don't bear fruit for years. Learning is collaborative and our concept of teaching, as an assertive, phallic penetration into the student's mind is complete horse dung. That isn't what happens and WE ALL KNOW IT.
What happens is the student sometimes under our direction, sometimes under their own direction, figures "it" out. They are the ones memorizing terms. We are not doing it for them. They are the ones choosing what to learn. We cannot do that for them.
The problem with measuring like this is that it seeks to quantify the unquantifiable. It is trying to measure something that CANNOT BE MEASURED.

15. inverhills_sophia - December 14, 2010 at 11:26 am

What about creativity? What about types of thinking that push the envelope, that challenge the status quo? Should this no longer be taught? Or are we suggesting a measure that would measure one's ability to learn, when the measure itself is simply a construct created to allow those without graduate degrees in the humanities, for example, to tell others what is or is not occurring in said courses? Or should we focus on teaching only those things that can be easily measured, such as math and basic logic? Is that all thinking means in today's world? Really?

16. goxewu - December 14, 2010 at 01:07 pm

Tiny, but telling:

"Presidents' Alliance for Excellence in Student Learning and Accountability."

These seventy-one eminences can't even title their pretentious organization intelligibly. They want "excellence" in "student accountability"? (Those are called "A's.") No, they want excellence in student learning and somebody else--the faculty--to be accountable for it. You'd think they could get the syntax to indicate that.

(Yet another pretentiously titled organization, the kind that extraneous bureaucratic opportunists such as Kevin Carey love.)

17. shirley77 - December 14, 2010 at 02:24 pm

I agree; one cannot adequately measure "student learning." It's fair to say that students who are interested and motivated will learn a great deal, while those who are not will learn far less.

It's also fair to conclude that if universities eliminated faculty course questionnaires (FCQ's), which administrators use to evaluate faculty for merit increases and tenure, faculty would feel freer to raise class expectations and students would learn considerably more. As it stands, tenure-track and tenured faculty often feel compelled to water down their courses and expectations lest students retaliate on the FCQ's. Get rid of FCQ's and the bar will be raised.

18. bemsha - December 15, 2010 at 01:07 pm

Right, the measure of "customer satisfaction" and the measure of how much students learn do not correlate, or in some cases correlate negatively.

19. tcolb01 - December 15, 2010 at 01:36 pm

For years and years, we've wasted precious energy debating how students learn best. Let's, instead, concentrate on effective teaching. Then, students will learn because of the teaching, not in spite of it.

20. gplm2000 - January 03, 2011 at 04:24 pm

Sorry, Kevin, but you just don't get it! "Comparable learning information doesn't exist because many groups have a strong interest in its not existing." Yes, for-profit schools have the most interest in making a profit because that is what they do!

They exist for no other reason. Take away the federal largesse and they will leave the business of higher ed. Most of their customers, poor and minorities, also are interested in making a profit, but for themselves: some type of paper degree that will help them get a job. Neither group has an interest in accurate measurements of learning.

The same can be said for the athletic depts. of BCS universities. College administrators and athletic directors want to make a profit to make teams self-sustaining as well as get an occasional new building. It is a promotion to get outside donations and government grants. Earthworm research makes big bucks for old State U.

21. texastextbook - January 04, 2011 at 09:00 am

Pennsylvania's governor, Ed Rendell, recently described as "woosies" folks who failed to assert their rights to attend a football game that would've been held during foul weather.

Nobody demands of Rendell that he explain to PA's elementary- and secondary-school students how it comes to be that pro-players aren't serving in their nation's military during this time of war, or even what the meaning of the term "pro-player" is.
