(Yesterday we took a slightly glib look at faculty members’ fear and loathing of learning-assessment projects. Today we’re pleased to offer a more serious contribution, from Pat Hutchings, a senior associate with the Carnegie Foundation for the Advancement of Teaching, who has written about this topic for several years. —Ed.)
The barriers to faculty involvement in assessment have been extensively catalogued over the years. Promotion and tenure systems do not reward such work. Time is short and other agendas loom larger. Most faculty members have no formal training in assessment—or, for that matter, in teaching and course design. Given developments in K-12, there are concerns, too, about the misuse of data, and skepticism about whether assessment brings real benefits to learners. These and other impediments are widespread and well known, and they no doubt help to explain the finding from a 2009 NILOA survey that involving faculty members in assessment continues to be a major challenge.
But they are also generalizations—true in many settings but perhaps less (or differently) so in others. Higher education is not, after all, an even weave; assessment may be a hard sell in one setting and an integral part of institutional culture in another. Moreover, as Robin Wilson points out, some campuses have found ways to open up the assessment conversation, shifting the focus away from external reporting, and inviting faculty members to examine their own students’ learning in ways that lead to improvement. As many observers would point out, the examples she cites are part of a growing turn toward serious attention to learning and teaching in higher education.
In this spirit, maybe a next chapter in what appears to be renewed attention to the role of the faculty in assessment should include in-depth case studies of individuals (or perhaps departments) who become involved in studying their students’ learning—work that may or may not be called “assessment” but that is critical to improvement. What motivates involvement in such work—especially in contexts where impediments like those listed above are clear and present? Does engagement with assessment’s questions change the way faculty members think about their students and their learning? How and under what conditions does it change what they do in their classrooms—and are those changes improvements for learners? How does evidence—which can be messy, ambiguous, discouraging, or just plain wrong—actually get translated into pedagogical action? What effects—good, bad, or uncertain—might engagement in assessment have on a faculty member’s scholarship, career trajectory, or sense of professional identity?
Much of the rhetoric around assessment has discounted the possibility of serious faculty engagement. But experience on the ground, captured in honest, in-depth case studies, might just point to more complex (and hopeful) conclusions.