• August 29, 2014

Let's Close the Gap Between Teaching and Assessing

Gianpaolo Pagni for The Chronicle

Too many students are learning too little in college. Recent research in higher education has driven that lesson home, even as it has shown that some students do very well and that there are clear factors contributing to student success or the lack of it.

Good assessment can give us concrete information about whether students are learning, how much they are learning, and in what areas. And there are institutions, faculty members, and administrators that not only know this body of work but make good use of it as they seek to strengthen their institutions and serve their students better. All too often, though, there is a considerable gap between institutional assessment and teaching. Some faculty members embrace assessment efforts, some are highly critical of them, but most, perhaps, are barely aware of them. Why is this so, and should we do something about it? Working with reference to major points made in the essay collection we've just co-edited, Literary Study, Measurement, and the Sublime: Disciplinary Assessment, we would like to propose the mutual benefits of closing that gap.

This lack of understanding has resulted in an unproductive divide between faculty members devoted to their disciplines and assessment practices that seem overly general and imposed from outside (even if "outside" is the institutional research office). Many campuses now use the National Survey of Student Engagement, which measures levels of student engagement in learning, or the Collegiate Learning Assessment test, which measures gains in complex skills, including critical thinking and analytical reasoning. For the most part, though, faculty members think and teach within the frameworks of their disciplines, even when they venture into interdisciplinary projects. But this disciplinary knowledge and approach to critical thought are not captured by the standardized assessment instruments that have been developed so far. Some things, faculty members quickly discover, are easily measured: In the field of literary study, we can gauge students' skill as grammarians, even their ability to construct a persuasive argument. But can we get at the kind of learning that matters most? The kind of learning that leads to full engagement with a topic and a nuanced understanding of its meaning? We will not be able to do so until we think about the particular forms of engagement that draw us to deep learning within a discipline, and about the connections between those disciplines and larger social contexts as well as institutional goals.

Exploring forms of alignment among disciplinary goals, institutional aims, and broader social contexts will also better engage faculty in the assessment process. If faculty members were talking regularly with assessment researchers and practitioners, they would have a voice in emerging national conversations about how students learn in different disciplines and what strategies bring student learning to the highest possible levels. They would have a voice in saying what kind of learning really matters in their fields—what outcomes need to be measured—and a chance of aiding in the development of assessment methods genuinely suited to what they teach. They would, in other words, be a guiding force in the work that is a necessary first step in improving learning.

While we encourage assessment practitioners, then, to think seriously and respectfully about the potential contributions to learning through particular fields of study, we also hope that faculty members will see what can be learned from assessment research and practices. We recognize that there are still learning outcomes that seem difficult, if not impossible, to measure—those that have been described as ineffable, including both the long-term shifts in understanding that a great learning experience can provide and those moments of sudden and definitive insight that so many of us value. Yet it might be possible to understand more about both.

In our research, we insist that every point find solid evidentiary support. Most of us teach our students to do the same: "Can you back that up?" "On what basis do you reach that conclusion?" Yet when it comes to whether our students are learning, we rely on evidence that is dubious (teaching evaluations) or circular (grades). Or we abandon the Enlightenment altogether and lapse into faith: We just know. While gut feeling is a crucial part of inquiry, most of us have been rigorously trained to interrogate both received wisdom and unexamined assumptions in our scholarship. Why should our approach to student learning be different?

We—faculty, administrators, assessment researchers—need to do better. What is at stake? Student learning, first of all. The clearer we are about our goals for learning, and the better we are at seeing whether we are meeting those goals, and then proceeding—on the basis of that evidence—to strengthen teaching and learning in our classrooms, the better our students will do. With improved learning, we also ensure the viability of higher education and of specific areas of study. Solid data on what students are learning demonstrate the value of a field: We know that students in the humanities and social sciences, for example, take more of the rigorous courses that result in gains on measures of critical thinking, analytical ability, and writing than do students in other fields.

We also know that other factors contribute to faculty members' (and even administrators') skepticism. Assessment at the college level has not escaped its undeniable associations with the Bush-era No Child Left Behind law, in spite of fundamental differences between the two. Concerns over standardization, especially in light of recent developments in higher education in Europe, considerably dampen enthusiasm among American faculty members. Furthermore, most faculty members have long understood teaching as an individual project rather than as a collaborative one connected to larger institutional and societal goals, an attitude reinforced by most systems of evaluation. And, of course, on most campuses, assessment has been an add-on to current workloads, with little reward and, in many cases, little communication of its potential significance. Institutions convey their priorities through the decisions they make (How much do you teach? Who gets tenure?) and the resources they allocate (Who pays for assessment efforts?).

All of that said, it is not an exaggeration to say that higher education in general, and the liberal arts in particular, are now under attack in ways that we do not need to explain for most readers of The Chronicle. In that context, resisting efforts to figure out how well our students are learning, for the purposes of improvement, seems counterproductive. Many academic professional organizations are wisely encouraging their members to reach out to the public and explain the value of their pursuits. We want to remind our colleagues, however, that you don't have to be on the Today show or NPR to talk to the public. Faculty members are already doing this every day, engaging groups of people who will have a disproportionate influence in society compared with their peers who do not attend college. We need to be thinking collaboratively about how best to educate them.

Donna Heiland is vice president of the Teagle Foundation, and Laura J. Rosenthal is a professor of English at the University of Maryland at College Park. Their book, Literary Study, Measurement, and the Sublime: Disciplinary Assessment, is being published this month by the Teagle Foundation.
