• December 18, 2014

Without Assessment, Great Teaching Stays Secret

Geoffrey Moss for The Chronicle

A few weeks ago, I spent a day at the University of Maryland-Baltimore County. The first thing you see on the drive into the campus is a six-foot-tall sign, stuck in the grassy median of the entrance road, that says, "WE'RE NUMBER 1" and "Top Up-and-Coming National University AGAIN!" It sets a tone: UMBC is on the move. How far it will be allowed to go is less certain.

The No. 1 designation was courtesy of U.S. News & World Report, which conducts an "up and coming" survey along with its regular annual ranking of which colleges are sitting atop the biggest piles of money and fame. The campus itself is fairly standard, with clusters of dorms encircling a compact group of grassy lawns and academic buildings. Throngs of students were out that day, lounging in the kind of late-summer sunlight that keeps brochure photographers in business. Everyone was fiddling with cellphones, and there was nowhere to park.

In other ways, UMBC is unusual. Although the present campus wasn't opened until the mid-1960s, making it an adolescent in higher-education years, UMBC has built a robust base of research faculty and financing. It is highly diverse: Half the students are white, 20 percent are black or Hispanic, and 20 percent are Asian. Its program to help minority students earn Ph.D.'s in science, engineering, and mathematics is nationally known. And its president has the kind of résumé you couldn't make up: Freeman A. Hrabowski III marched for civil rights in Birmingham with the Rev. Martin Luther King Jr. before graduating from the Hampton Institute at age 19, earned a Ph.D. in higher-education administration and statistics from the University of Illinois at Urbana-Champaign at 24, and became UMBC's president in 1992, when he was 41. He's been living, breathing, and promoting the place ever since.

But perhaps the most radical thing about UMBC is that it appears to have substantially organized itself around the task of helping students learn. After lunch with the provost and department chairs, Hrabowski took me to a science building, with frequent stops to greet students and faculty members, by name, along the way. We arrived at an unusual sight: About 50 students were milling around the door of the chemistry lab, 10 minutes before class, waiting to get in. The reason was simple: Latecomers to Chem 101 aren't allowed in the door.

Once they enter, students gather around tables with built-in computer terminals to work and learn together on group projects. Charts are projected onto screens around the room as the professor moves among the groups. UMBC switched Chem 101 away from the standard lecture format five years ago. The result was an increase in the course pass rate from 71 percent to 86 percent, even though the passing standards are higher now than then. The number of students majoring in chemistry has nearly doubled, and other departments are moving to adopt the model.

For this and myriad other reasons—the latest issue of the university's journal of undergraduate research includes articles like "Parallel Performance Studies for a Numerical Simulator of Atomic Layer Deposition"—bright students in Maryland are flocking to UMBC, and people in the know cite it as a university to watch.

Yet I wonder how far UMBC can go from here. Far more people in the world remain out of the know than in.

That's because they don't have enough ways of knowing.

The Chronicle is publishing a series of articles titled "Measuring Stick," which explores the highly charged debate over assessing how much college students learn. The reaction from readers has been fascinating. While many embrace the assessment challenge, others reject it on principle. "I did not give birth to any of these students, and I am, therefore, not responsible for their ability or inability to learn anything," wrote one.

But even those with a more enlightened view of their teaching responsibilities tend to balk at anything that hints of standardization and comparability. Subjecting university teaching to the kind of public, widely shared standards of quality that we routinely apply to university research remains a bridge too far. Learning is too complex, we are told, and the available measures too crude. The specters of homogenization and government control are often invoked, and for good reason. It's not hard to imagine the consequences of assessment done wrong.

It is, by contrast, hard to fully imagine the consequences of assessment done right. And that's the problem. At the risk of sounding Rumsfeldian, not all the unknowns are known.

What if, for example, UMBC isn't actually up-and-coming? What if it's already there?

UMBC specializes in the task that every parent, pundit, and lawmaker in America most wants universities to accomplish: teaching young people to become great scientists and engineers. It may already be better at this than the Ivies and Research I universities that everyone knows.

But without reliable, public assessment information to prove that to the world, UMBC has few ways of elevating its standing to a level that matches the quality of its academic work. Right now, universities' reputations rest on wealth, fame, and selectivity. UMBC can't hit up rich alumni for giant donations because it hasn't existed long enough for many of its alumni to get rich. Starting a big-time sports program is a bad bet, as the scandal-plagued basketball program at Binghamton University, a fellow America East Conference member, shows. If UMBC becomes too selective, it risks sacrificing diversity and its obligations as a public institution. And it will be hard for whoever follows Hrabowski to match his particular talents.

Without a good measuring stick, great public universities can't prove their greatness. In the long run, that means we'll have fewer great public universities than we need. That shortage won't matter much to the institutions that control the existing higher-education power structure, or to the small number of privileged students who are allowed to attend them (that is, the groups that have the most to lose from shifting the terms of prestige toward learning). But it will be a slow-motion calamity for everyone else.

Early in the afternoon, Hrabowski led me up a narrow spiral staircase to the roof of the UMBC administration building. We followed what I suspect is a well-used path to the top of the building's north face, where he pointed to the Baltimore skyline in the distance, a new arts-and-humanities building rising from a construction site to the east, a research park in the woods behind us, and a center of NASA-financed research below. After nearly two decades in the presidency and a remarkable American journey, Freeman Hrabowski III can see every good thing about UMBC. But unless other people have the means of fully sharing his vision, it will take too long for the rest of the world to catch up.

Kevin Carey is policy director of Education Sector, an independent think tank in Washington.

Comments

1. segads - October 11, 2010 at 06:44 am

Don't do this to yourselves! Can't you see what has happened to K-12 education because of "assessments" meant to show what students have learned? These measures depend upon a student score on one day. Student motivation & background determine the results, as well as good teaching. We, too, were told that it would be schools measured; it has now become a part of our evaluations (even though research demonstrates that "value-added" evaluations are too imprecise and variable to be used in this manner).
These assessments are neither necessary nor fair. What of a state college whose students come in needing more remediation? These tests will not demonstrate growth in knowledge and skills -- they'll just compare them to other, more fortunate colleges/universities whose students began at a higher skill level. No matter what the "experts" say, these are not objective measures of learning; these "assessments" have become weapons against education and educators.

2. trendisnotdestiny - October 11, 2010 at 08:02 am


Why now? How does assessment fit into the narratives of concentrated wealth, massive inequality, wage stagnation, the enormous growth of personal debt and competitive advantages?

Why does assessment take priority during the final turn of privatization? Again, why now, at this economic moment? True, schools that succeed in producing scholars should benefit. However, remind us how this is not UMBC's "personal responsibility" to compete on their own merit of bootstrap pull-up and hard work?

It seems to me, in this world of cost-benefit analyses and students as economic, rational actors that the benefits (even after reading your article) do not account for how the system of assessment will be used by the same people who brought you:

sub-prime lending and foreclosure mills
pay-to-play corporate model of research
environmental disasters
corrupt politicians

I would really like to hear how assessment would work when accounting for our current context (instead of a branding campaign designed to build new markets around millions of young and unemployed americans).

3. 11134525 - October 11, 2010 at 09:23 am

To answer trendisnotdestiny's question: The ideology of learning assessment takes priority at "the final turn of privatization" because it provides the perfect cover to avoid investing more in the human (well paid full-time tenure track positions) and capital resources of an institution. To be more blunt, the budgets now depend on proving that students learn; no proof, no dough! And even if there is "proof", it will always be episodic and subject to skeptical scrutiny.

So the central administrators and their governmental allies, abetted by an ignorant public, have more than sufficient ideological ammunition to kill off inputs, demanding virtually 100% emphasis on outcomes, and thus save money.

I do not think I need to comment further on the essentially contestable concept of higher education implicit in this disingenuous posture, now the holy grail of every college administration and accrediting body in the country. All semblance of balance between inputs and outputs has been eviscerated in the never ending quest for learning assessment.

But at least quite a few folk have made profitable careers from this advance in improving education. I am sure Plato and Aristotle would have benefitted from the exercise of summative assessment techniques! (E.g., Dear students, what is the muddiest point you think you can describe about my lecture on the Theory of the Forms? You have five minutes to write it out for me on this 3X5 index card.)

4. impossible_exchange - October 11, 2010 at 09:31 am

Great comments, continuing in the same vein:
Education is not assessable on the sorts of scales being proposed here or executed in K-12.
Any fool should see that.
How, after the utter failure of assessment in K-12, are we now talking about this for college?
Because the dominant class, the leaders, are idiots who do not assess themselves. Rather they prefer to feel smug and believe their own propaganda.
The US education system worked spectacularly well once. From K-12 and at the college level, people learned, our students were the smartest in the world. Then the "guard began to change" and in came the Baby-Boomers -- a more inept generation has not been seen since Western Romans circa 454. Name one measure of success and prosperity that the Boomers haven't messed up? (The rise of technology is their only claim to fame.)
All of this garbage brought to us by the technocrats and bean-counting assessors of the Baby Boomers, this effort to "science" it up, is little more than an attempt to justify admins' oversized sense of their own merit.
If authority wants to be respected it must respect itself less than it respects those it claims to govern, lead, or direct.

5. jibarosoy - October 11, 2010 at 09:44 am

What is frightening about this radical quest to assess learning outcomes is that the advocates of "Outcomes Ass" can't see good outcomes even when they stare them in the face. Carey describes a scene of 50 students waiting to get into a lab ten minutes early because they would not be admitted into the lab late. He also mentions that, in that Chem course, the pass rate increased "from 71 percent to 86 percent, even though the passing standards are higher now than then." It seems clear to me that UMBC is doing a lot of things right. Aren't grades and course completions good "indicators" of learning? I believe that if UMBC spent more time "assessing," the faculty and administrators would not have the time or resources to devote to helping students learn. The biggest tragedy and contradiction to the recent corporate demand that universities "prove their greatness" is how much the time spent on assessment will actually shortchange students and learning.

6. 11223435 - October 11, 2010 at 10:26 am

Ask yourselves where assessment came from. You are right if you say that it first raised its head (in academe and newspapers) in an age when eugenics was talked about openly. Yes, the dominant class measures the underlings and decides where they can be tolerated. English professors at Ivy League schools once judged "Human Livestock Contests" at rural fairs, I'm told. Now we simply give the Betas and Gammas "high-stakes" tests.

7. collegeexplorations - October 11, 2010 at 10:33 am

In your thumbnail sketch of what is intriguing about UMBC, you left out reference to the availability of chess scholarships and the ongoing chess rivalries built around various international competitions (http://www.examiner.com/college-admissions-in-washington-dc/college-rivalry-plays-out-local-chess-championship). The chess program is an interesting component of the UMBC culture and definitely adds to the unique flavor of the campus.

Nancy Griesemer

8. goxewu - October 11, 2010 at 10:34 am

In essence, Mr. Carey seems to be lamenting that, while UMBCC is no long "up and coming" but has actually arrived as a very good university, there's no quick, convenient, quantitative marker (i.e., a 142.67 on the Carey Scale where 100 is average) to communicate (a) to other schools and the public.

What's the problem? The school seems to be doing better than fine, and word will get out through normal channels of communication without a team of Careyites being called in to render a formal, numerically codified assessment.

An update on the old joke: If you can't teach, assess.

9. goxewu - October 11, 2010 at 10:35 am

Sorry: "UMBC" and "no longer"

10. wilcoxlibrary - October 11, 2010 at 10:44 am

How do you know you are teaching and/or students are learning without assessment? What method do you use to evaluate your competency?

11. trendisnotdestiny - October 11, 2010 at 11:22 am

I second the great comments piece!

11134525 and jibarosoy, I am going to take with me today two phrases that I hope to co-opt

1) No proof, no dough
2) Outcomes Ass

The world is a better place because you two exist....

12. gsp_123 - October 11, 2010 at 11:33 am

All the posts before me basically hit the nail on the head.
Assessments/surveys can be useful in helping institutions get a handle on what is going on in terms of admissions. Assessments/surveys of learning seem a bit ridiculous since every student is unique and these assessments only provide a snapshot of the student at a particular point in time which could inaccurately reflect learning. The place for assessments is in an evaluation of motivation, strengths, weaknesses, etc.

13. historymistress1 - October 11, 2010 at 11:36 am

All I can say is that I am so tired of assessment as a yard stick for student learning outcomes-- how long are we going to keep pandering to the political right?

14. cwehlburgtcu - October 11, 2010 at 12:07 pm

Assessment isn't just one thing -- there are good assessments that are valid and reliable, and there are bad assessments (and everything in between). I am all for good assessment (that is my job and my education), but we need to stop trying to use assessment as a ranking tool. This is where we get into all kinds of trouble. And, as several posts have mentioned, this is at least partly how the K-12 No Child Left Behind fiasco occurred. Assessment is important. We need to know what students are learning and what they are not learning so that we can become better educators and better institutions where education takes place. But, trying to compare universities to each other on a single scale is ludicrous. We need to continue to teach and students need to continue to learn AND we need to find ways to accurately measure learning so that assessment can be a transformative process. And, when I say "we" I don't mean that every single faculty member has to become an expert in assessing student learning at the program and institution level -- what I mean is that higher education needs to step up to the plate and create ways of measuring learning that are viable, meaningful, appropriate, provide useful feedback, and can be done in a reasonable amount of time.

--Catherine Wehlburg, TCU
Assistant Provost for Institutional Effectiveness

15. fruupp - October 11, 2010 at 12:35 pm

Kevin Carey wrote: "And its president has the kind of résumé you couldn't make up..."

Indeed. Hrabowski is an invertebrate lackey of the UM Chancellor and Trustees. In the mid-90s he enthusiastically participated in the termination of programs and departments in the arts and social sciences in the cause of his own personal advancement. A real stand-up guy.

#4 wrote: "Baby-Boomers -- a more inept generation has not been seen since Western Romans circa 454."

LOL. As opposed to whom? The so-called "Greatest Generation"? The GenX/Y/Z generations of apex slackers? Get a grip....

16. betterschools - October 11, 2010 at 12:37 pm

Well put, Catherine.

Setting aside the comments of those whose vision is limited to Marxist politics, note how quick some are to refer to 'assessment' as if it were an entity rather than the professional and scientific discipline it is. These same folks would howl if you made analogous comments with respect to their discipline.

I agree as well with respect to the commensurability issue and the silliness that results when you attempt to compare inter-institutional change. (Aside from the real measurement-scientific difficulty of measuring any change of this kind.) It is easy to be lulled into a false sense of similarity by the fact that we work under the appellations "college," "university," and the like, when the within and between differences are so large.

We have a long way to go, and on many fronts. There are still folks in your institution who proudly display pre-scientific reasoning when they insist that no one can measure what they teach, failing to notice that, validly or not, they managed to measure it when they assigned a grade and, sometimes too much to hope for, made distinctions among levels of student performance.

Keep up the good effort. Most of the strongest objectors will retire soon.


17. duchess_of_malfi - October 11, 2010 at 01:00 pm

I think more faculty would support the development of good assessment programs if they realized that assessment could be a better alternative to the customer satisfaction surveys that now rule our lives. But using assessment as a way to rank universities a) does not make sense and b) does not reflect the way that students actually decide which school to attend.

If the entire university is assessed, it should be assessed in terms of how well it achieves its mission. Clarifying mission, getting rid of the inflated and generic PR language, using honest language and focusing on distinctive purpose, would do a lot to help students get information about the best fit for them.

18. trendisnotdestiny - October 11, 2010 at 02:23 pm

#16 betterschools,

QUOTE
"Setting aside the comments of those whose vision is limited to Marxist politics, note how quick some are to refer to 'assessment' as if it were an entity rather than the professional and scientific discipline it is."

Bob, this comment assumes many things and obfuscates your own personal/economic interests. It reflects the very same process by which Milton Friedman gained prominence in the 1950s and the economics profession in general became legitimized as a stand-alone field (Yves Smith, 2010, "Econned," p. 37). Its origins and current status are at odds in this evolving market-based system.

Before we anoint a whole generation of administrators and executives with more power to discipline or preside over teachers, we might have a conversation about their own legitimacy instead of using the students as bait to overthrow shared governance and system resistance...

QUOTE
"Most of the strongest objectors will retire soon."

Spoken like a true neoliberal... (TINA: There is No Alternative)
You sound like Margaret Thatcher with your tone; Cry for Teachers in Argentina

19. maryza - October 11, 2010 at 02:51 pm

Don't we already have ways of assessing how well our programs are doing? Statistics for graduation rates, percentage of students going on to graduate school, student scores on standardized grad school admissions tests, all tell us a lot about our undergraduates.

20. unusedusername - October 11, 2010 at 03:06 pm

For those claiming that assessment is "scientific," here's one scientist who sees it as a terrible idea. It is now obvious that assessment has been bad for K-12 education. Since the assessment movement began, we've seen

Less emphasis on writing because writing can't be reduced to a scantron test.

Less emphasis on higher level thinking because higher level thinking can't be reduced to a scantron test.

Less academic freedom (and resulting high turnover of K-12 science teachers) because freedom can't be reduced to a scantron test.

An overemphasis on the bottom third of students to get pass rates up, and the destruction of gifted programs.

and on, and on, and on.

"Most of the strongest objectors will retire soon."
Sorry to disappoint you, but I'll be around awhile.

21. wlgoffe - October 11, 2010 at 03:21 pm

Physicists have developed a number of assessments that measure in fundamental ways how well their students learn; for a list, see http://www.ncsu.edu/PER/TestInfo.html . They're widely used and have been used to bring about better methods of instruction. One very influential paper in this regard is http://se.cersp.com/yjzy/UploadFiles_5449/200607/20060705142003187.pdf (Google Scholar reports 1,100 cites). Here's a snippet of what they found about their students' learning through one assessment (the FCI): "The implications could not be more serious. Since the students have evidently not learned the most basic Newtonian concepts, they must have failed to comprehend most of the material in the course. They have been forced to cope with the subject by rote memorization of isolated fragments and by carrying out meaningless tasks." From there, many have changed how they teach (i.e. "closed the loop"); perhaps the most influential paper there is http://web.mit.edu/rsi/www/2005/misc/minipaper/papers/Hake.pdf (Google Scholar reports 1,150 cites). Finally, a great intro to this literature is http://www.cwsei.ubc.ca/resources/files/Wieman-Change_Sept-Oct_2007.pdf . The author, Carl Wieman, is both a Nobel Laureate and a College Professor of the Year (research institutions). He's currently Associate Science Advisor to the President (for science education). It would likely pay for many of us to learn what those in other disciplines are doing with regards to assessment.

22. betterschools - October 11, 2010 at 04:10 pm

wlgoffe -- These are great references. Thanks for making them available. I would note that some other professions are doing a lot in this area as well. Accounting and nursing come to mind. I also recall that a professional counseling organization deconstructed their masters in counseling to 150 or so learning objectives. I don't know if the assessments followed but the hard work was largely done. Reading your comments made me think of accounting, where some of the professionals have advanced their research on effectiveness to higher-order constructs as a test not only of whether the lower order constructs have been learned, but if they have been correctly integrated in terms of their relations, etc. I'll dig up some references when I return to the office.

23. rpm13 - October 11, 2010 at 04:37 pm

"Kevin Carey is policy director of Education Sector, an independent think tank in Washington."

Of course. If the assessment professionals and education think tankers can't sell their products, they won't have jobs. This piece is an ad for another product we don't need. I just wish they would think about something else, leave impressionable administrators alone, and let the real educators do their jobs.

24. gahnett - October 11, 2010 at 04:41 pm

I guess the author's point is made in that I don't believe a word he says about how great UMBC is. Sounds like every other currently resourced university to me.

25. sahara - October 11, 2010 at 04:54 pm

UMBC does some great things...AND, students living in Maryland have a chance to go there...so it can't be compared to other public universities detail-for-detail because it exists to serve its own population...I would submit that there are similar institutions in every state just as worthy of recognition for the roles they serve as public institutions.

26. gahnett - October 11, 2010 at 05:27 pm

really support sahara's comment, i do...

27. barbarawright - October 11, 2010 at 05:45 pm

Come on, folks, get real. Two points. First, as noted above, there's good assessment and there's terrible assessment. The standardized testing for punitive purposes that has been inflicted on K-12 education is terrible. BUT THAT IS NOT THE WHOLE ASSESSMENT STORY. Colleges and universities (and K-12 educators, too, btw) have developed some very high-quality ways to understand student learning and foster improvement over the last 20 years. We disregard that work at our peril, and our students will suffer.

Second, assessment is a tool, not an end in itself. It can be used for good or ill. Was it invented to support the corporate power structure's takeover of HE? Sounds like a Faux News-level canard to me, but whatever. The point is to learn how to use the tool skillfully for good. We know it can help at-risk and underprepared students succeed in college, at UMBC and beyond, which is exactly what this country urgently needs at the moment. So get with the program and do the right thing. Otherwise, we "higher educators" become as laughable as the faculty member who refuses to use a computer or email.

28. archman - October 11, 2010 at 07:51 pm

I really do not think the U.S. public would like to see the results of a truly accurate learning assessment of most university students today. The results of such assessments would make parents and employers either very angry or very sad. Once the politicians got yelled at enough, colleges would be pressured to do something about it... like water down curriculum, doctor assessment reports, hire incompetent professors who replace actual instruction with test-training, blah blah.

Basically, college would *devolve* into the current version of the K-12 public education system. I believe that the people who think that national assessment standards (or anything large and sweeping) will *improve* college student performance are rather naive, or have an agenda of their own.

Given how much higher education is becoming corporatized or targeted for exploitation from the business sector, I am wary of hearing anything "innovative" that isn't coming directly from experienced, practicing, professional educators. It is becoming increasingly difficult for us to sort out expensive fads or gimmicks from actual progression in the art of teaching.

I agree with many that assessment is important and can be useful. It is important enough that the only people that should be in control of it are the faculty and the (academic) administration.

29. wlgoffe - October 11, 2010 at 09:03 pm

@archman: see http://www.changemag.org/Archives/Back%20Issues/March-April%202010/transforming-science-full.html and http://chronicle.com/article/Teaching-Experiment-Decodes-a/49140/ .

I think both have exactly what you're looking for -- assessment used by experienced, practicing, professional educators to increase student learning. One is in the sciences and the other is in the humanities.

Maybe not quite as striking, but http://www.thencat.org/RA.htm has been done in many disciplines and it is driven by faculty as well.

30. duchess_of_malfi - October 12, 2010 at 01:22 pm

I wonder how many college graduates would pass the GED.

31. betterschools - October 12, 2010 at 02:55 pm

@duchess,

I understand your point but, in fact, virtually all would pass the GED. If you take a look at that test, you will see that it is positioned pretty low.

So much of the insider-hand-wringing I see with respect to the "decline in quality" of U.S. higher education flows from excessive focus on non-causal details that make up our wistful reveries, combined with a lack of understanding of the big picture (here's yet another column idea for Kevin).

This stuff we call 'college' and 'university,' which we used to offer only to the smart and the rich, is now offered to everyone under evolving conceptions of equality. The result has been 60 years of outsize systemic growth. As the qualifications for admission have broadened and, some would say, become more shallow, so have the qualifications to become a service provider. In IQ terms, one does not have to be nearly as smart to participate in college today -- as a consumer or as a provider -- as one had to be 100 years ago. (The exception, then and now, being those among the very rich who were so unfortunate as to not also be very smart.) Similar generalizations apply to other ability constructs.

So, in 2010, the average professor and the average student aren't as intellectually capable as their average counterparts were in 1910. These higher-end ability strata still exist, but now within departments or some institutions that specialize in educating the smart and the rich. So what?

The ghost of higher education's past haunts us, I think, because too many members of the professoriate find their identity and even worth in the abilities of the students they teach. Sad.

We need to get used to the fact that higher education is a family resemblance construct admitting of many different strands. The challenges and potential rewards of providing decent, cost-effective education to participants in those strands are greater than ever. Whereas we once taught largely students who were smart enough to learn in spite of our poor teaching skills, we now admit students who have to be taught by individuals who have teaching skills. This challenge is enormous and growing because too few of our professors have superior teaching skills.

32. kaune - October 12, 2010 at 05:40 pm

The problem with assessment is it first appeared on our campuses couched in the rhetoric of accountability. The people promoting legitimate student learning assessment on our campuses today have never been able to scrape that stink off their shoes.

The simple fact is if you test your students, you assess student learning. The dirty little secret of HE is that we were never taught how to teach during our graduate years; we were certainly never taught anything about psychometrics and student learning, and consequently our tests are horribly unsystematic. We simply cannot use our current testing methods to tell us much of anything about how well we have been teaching or how well our students are learning. We only use our tests to assign grades and create a final distribution of student rankings.

What student learning assessment is *supposed* to be about (if we scrape off the stink of accountability and look at it objectively) is rather simple: identify a priori what we want our students to learn in our classes and systematically devise ways to collect data on how well we are meeting those objectives. The subsequent "tests" will tell individual students which components of the course they are learning well and where they need more work. The data tell the instructor where the class is progressing well and where he or she needs to give more attention.

We simply need to bring to our classrooms the systematic rigor that we bring to our research and scholarship. Screw admin, screw the state; articulating SLOs and collecting feedback on them is for us and our students.
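The loop this comment describes (state objectives a priori, tag each test item with the objective it probes, then aggregate results per objective) can be sketched in a few lines. This is only an illustration of the idea, not a real instrument; the objective names and scores below are invented.

```python
# Minimal sketch of per-objective assessment: each test item is tagged
# with the learning objective it measures, and scores are aggregated
# per objective so both student and instructor can see where attention
# is needed. All data here are hypothetical.

from collections import defaultdict

# (item_id, objective, points_earned, points_possible)
responses = [
    ("q1", "define key terms", 4, 5),
    ("q2", "define key terms", 3, 5),
    ("q3", "apply method to new data", 1, 5),
    ("q4", "apply method to new data", 2, 5),
]

def mastery_by_objective(responses):
    """Return the fraction of available points earned per objective."""
    earned = defaultdict(float)
    possible = defaultdict(float)
    for _item, objective, got, avail in responses:
        earned[objective] += got
        possible[objective] += avail
    return {obj: earned[obj] / possible[obj] for obj in possible}

report = mastery_by_objective(responses)
for obj, frac in sorted(report.items()):
    flag = "needs attention" if frac < 0.7 else "on track"
    print(f"{obj}: {frac:.0%} ({flag})")
```

The point of the sketch is only that the same test already being given for grades can, with item-level tagging, double as feedback on the course's stated objectives.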

33. dboyles - October 12, 2010 at 06:56 pm

Management tends toward the ideal of modularization, as well as disposable parts. That's pretty much all knowledge, students, or teachers are to management. The experience that is the classroom makes management uncomfortable. Too much room for litigation. Too much concern over 'accountability.' Gotta protect themselves. Assessment is the way such management knows to cover its backside. Assessment has little else to commend it.

34. mrmars - October 13, 2010 at 03:33 am

When based solely on simplistic logic and a desire to see everyone succeed, assessment efforts seem to make sense, and doubtless go over well with today's general public (who think money spent on college is equivalent to buying a product) and with "educators" who, for whatever reason, are out of touch with the tensions involved in applying uniform standards to today's increasingly diverse and often underprepared and under-motivated students.

If we were all producing widgets from the same raw materials, then these Henry Ford-esque assessment efforts might make some sense, but as a means to compare student outcomes over time at a given institution, or worse, compare student outcomes among institutions, they simply make no sense. At least not to anyone with any appreciation for the effects of human nature and the vastly under-appreciated impact of student effort and abilities on learning. If we could control all the variables, then assessment efforts beyond earned grades might be meaningful but we can't so they aren't.

35. betterschools - October 13, 2010 at 11:04 am

Why, I wonder, are so many otherwise well-educated and intellectually open-minded individuals seemingly incapable of understanding evaluation science accurately, much less completely? Whatever your academic specialty, do you understand how intellectually foolish it is to describe assessment as "Henry Ford-esque?"

The complexity of human learning, and the rich and endlessly varied paths it can take, with unanticipated causes and effects, is not lost on any such scientist worth his or her salt. Neither was it lost on them 30 years ago, when they were first-year graduate students. How rude!

Most of you would agree that it is a bad idea to defer the evaluation of your physician's treatment for 30 years, ostensibly to see how your life turns out. It is a bad idea not because his treatment is entirely irrelevant to your life 30 years hence, but because it denies him the opportunity to assess and continuously improve current treatment based on incremental results. In a few cases, it also permits us to get a few really bad physicians away from patients before they kill someone.

I'll suggest a few assessments that are easily performed with 25-year-old methodologies and are orders of magnitude more valid than the average classroom assessment currently in use. Tell me which ones (a) you do now with rigor and demonstrated use of the results and (b) are a bad idea.

- Did we teach what we said we would teach, and what the student paid money to learn?
- Did we identify the student's goals coming into the educational context, and determine how well we met them upon graduation, including adjustments for goal migration (gamma and beta calibration error)?
- If the student sought the degree to secure a particular kind of job, did he get it?
- If he got the job, did his employer find him well prepared for it?
- If we promised a degree in four years (assuming diligence on the part of the student), did we deliver it, or did it take five years because we can teach but not practice supply-chain management?
- Can the student provide us with concrete examples of ways in which his life (personal, familial, social, professional, etc.) was enriched that are attributable to his education?
- If the education we provided is related to professional certification (CPA, PA, etc.), did the student secure it? How difficult was it for him to secure it in relation to graduates of other institutions?
- If the degree the student earned has a defined knowledge base (physics, psychology, etc.) does the student have operational mastery of it?

Many of you rail when you are labeled Mandarins. The appellation fits when you, in effect, say that no one can assess what you do, either because it is too esoteric for any evaluation scientist to figure out or because you believe no one has the right. In my opinion, you are wrong on both counts. If you can figure out how to teach it, someone smarter than you can figure out how to evaluate it, and they do as a matter of routine. The issue of "rights" needs no comment.

Finally, I would recommend a research methodology 101 course to anyone who would say "If we could control all the variables, then assessment efforts beyond earned grades might be meaningful but we can't so they aren't." It is difficult to believe that anyone teaching at the college level could make such a statement.

36. trendisnotdestiny - October 14, 2010 at 09:27 am

@ Betterschools


I think a previous poster said it best. If you want to be the King of Outcomes Ass then go for it; you would be well suited. But do not lecture us on mandarins and how important it is to society (especially when you have a vested financial interest in it)... Sell outcomes ass somewhere else.

37. jrllanes - October 15, 2010 at 12:24 am

I've been thinking about this well-reasoned argument for assessment for many years and working at designing and conducting them for just as long. I have some ideas. Here's one http://jrllanes.wordpress.com/2010/08/15/how-to-tell-if-a-college-is-good-enough/

38. betterschools - October 15, 2010 at 02:25 pm

Your ideas are on the right track. They are a subset of a larger set of transparency requirements I and others have been urging on the Secretary of Education for some time.
