[This is a guest post by Jonathan Sterne, an associate professor in the Department of Art History and Communication Studies at McGill University. His latest books are MP3: The Meaning of a Format (Duke University Press) and The Sound Studies Reader (Routledge). Find him online at http://sterneworks.org and follow him on Twitter @jonathansterne.--@JBJ]
Every summer, before I assemble my fall courses, I read a book on pedagogy. Last summer’s choice was Cathy Davidson’s Now You See It (except I read it in the spring). Readers familiar with critiques of mainstream educational practice will recognize many of the arguments, but Now You See It crucially connects them with US educational policy. The book also challenges teachers who did not grow up online to think about what difference it makes that their students did. In particular, Davidson skewers pieties about attention, mastery, testing and evaluation.
The one part of the book I couldn’t make my peace with was her critique of multiple choice testing. I agree in principle with everything she says, but what can you do in large lecture situations, where many of the small class principles—like the ones she put into practice for This Is Your Brain on the Internet—won’t work simply because of the scale of the operation?
When I asked her about it, we talked about multiple choice approaches that might work. Clickers are currently popular in one corner of pedagogical theory for large lectures. Like many schools, McGill promotes them as a kind of participation (which is roughly at the level of voting on American Idol – except as Henry Jenkins shows, there’s a lot more affect invested there). I dislike clickers because they eliminate even more spontaneity from the humanities classroom than slideware already does. I prefer in-class exercises built around techniques like think-write-pair-share.
Multiple-Choice Testing for Comprehension, Not Recognition
I’ve got another system I want to share here, which is admittedly imperfect. Indeed, I brought it up because I was hoping Cathy knew a better solution for big classes. She didn’t, so I’m posting it here because it’s the best thing I currently know of.
It’s based on testing theory I read many years ago, and it seems to work in my large-lecture introduction to Communication Studies course. It is a multiple choice system that tests for comprehension, rather than recognition. As Derek Bruff explained in a 2010 ProfHacker post, multiple-choice works best when it operates at the conceptual level, rather than at the level of regurgitating facts. This works perfectly for me, since Intro to Communication Studies at McGill is largely concept-driven.
A couple of caveats are in order here: 1) students generally don’t like it. It looks like other multiple choice tests but it’s not, so skills that were well developed over years of standardized testing are rendered irrelevant. 2) Multiple choice is only one axis of evaluation for the course, and as with Bruff’s final, multiple choice makes up only part of the exam, with the other part being free-written short answers. Students must write and synthesize, and they are subject to pop quizzes, which they also dislike (except for a small subset who realize that, as a side effect, the quizzes keep them up to date with the readings). On the syllabus, I am completely clear about which evaluation methods are coercive (those I use to make them keep up with the reading and material) and which are creative (where they must analyze, synthesize and make ideas their own).
So, here’s my multiple choice final exam formula.
Step 1: Make it semi-open book. Each student is allowed to bring in a single sheet of 8.5″ × 11″ paper, double sided, single-layered (don’t ask). On that sheet, they can write anything they want, so long as it’s in their own handwriting. They must submit the sheet with the exam.
The advantage of this method is that it allows students to write down anything they have trouble memorizing, but it forces them to study and synthesize before they get to the moment of the test. Even if they copy someone else, they still have to expend all that energy writing down the information. And most students turn in very original, very intricate study guides.
Step 2: Eliminate recognition as a factor in the test.
Most multiple choice questions rely on recognition as the path to the right answer. You get a question stem, and then four or five answers, one of which will be right. Often, the right answer is something the student will recognize from the reading, while the wrong answers aren’t.
But recognition isn’t the kind of thinking we want to test for. We want to test if the student understands the reading.
The answer to this problem is simple: spend more time writing the wrong answers.
Pretty much all my multiple choice exam questions take this form:
–> The right answer
–> A true statement from the same reading or a related reading that does not correctly answer the question
–> An argument or position the author rehearsed and dismissed, or one that appears in another reading and contradicts the right answer
From here, you’re basically set, though I often add a fourth option that is the “common sense” answer (since people bring a lot of preconceptions to media studies), or I take the opportunity to crack a joke.
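For readers who keep their question banks in software, the formula above can be encoded as data so that each distractor’s type stays explicit and exam forms can be shuffled automatically. This is a minimal Python sketch of one way to do that, not anything from the original post; the sample question, answer texts, and function names are all invented for illustration.

```python
import random

# Each answer is tagged with its role in the formula:
# "correct", "true-but-wrong", "rehearsed-and-dismissed", or "common-sense".
question = {
    "stem": "According to Stuart Hall, what does 'decoding' describe?",
    "answers": [
        ("How audiences interpret media messages", "correct"),
        ("How producers embed meaning in messages", "true-but-wrong"),
        ("Messages carry fixed meanings to passive viewers", "rehearsed-and-dismissed"),
        ("People mostly watch TV to relax", "common-sense"),
    ],
}

def shuffled_form(q, seed=None):
    """Return the stem, the answer texts in random order, and the
    index of the correct answer (the answer key for this form)."""
    rng = random.Random(seed)
    answers = q["answers"][:]          # copy so the bank stays unshuffled
    rng.shuffle(answers)
    key = [i for i, (_, kind) in enumerate(answers) if kind == "correct"]
    return q["stem"], [text for text, _ in answers], key

stem, options, key = shuffled_form(question, seed=1)
```

Because the correct answer is tagged rather than positioned, generating the four different forms mentioned later is just a matter of calling the function with four different seeds.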
Step 3: Give the students practice questions, and explain the system to them. I hide nothing. I tell them how I write the questions, why I write them the way I do, and what I expect of them. I even have them talk about what to write on their sheets of paper. I use my university’s online courseware, which, as Jason Jones explained in a 2010 ProfHacker post, takes the practice quiz out of class time and lets students have multiple cracks at it as they get ready for the exam.
A few other guidelines:
- Answers should be as short as possible; most of the detail should be in the question stem
- Answers should be of roughly the same length
- I never use “all of the above” or “none of the above”
- Since we are testing on comprehension of arguments, I always attribute positions to an author (“According to Stuart Hall”), so it is not a question about reality or what the student thinks, but what the student understands authors to mean.
- Exception: I will ask categorical questions, i.e., “According to Terranova, which of the following four items would not be an example of ‘free labour’?”
Step 4 (optional): For the first time in 2012, I had students try to write questions themselves. Over the course of about 10 weeks, I had groups of 18 students write up and post questions on the discussion board (following the rules above) that pertained to readings or lectures from their assigned week. A large number of them were pretty good, so I edited them and added them to my question bank for the final exam. So for fall 2012, my COMS 210 students wrote about half the questions they were likely to encounter on the final. If they were exceptionally lucky, their own question might wind up on their own exam (we used 4 different forms for the final).
- This is an imperfect system, but it’s the best I’ve found that combines an economy of labor, rigorous testing, analytical thinking (rather than recognition) and expansiveness—the students need to engage with all of the readings. It is certainly not, as Cathy says, a “boss task” – that’s the term paper.
- McGill undergraduates are generally very strong students. This format, or the optional assignment, may be less appropriate for undergrad populations who don’t arrive at university “already very good at school.”
- The optional assignment was definitely more work than just writing new questions myself. And not all of the students will appreciate it, or admit as much (though I got only one complaint out of 187 students). It did seem to reduce test anxiety among the students I talked with, though, which is always a good thing.
I think a lot about large-lecture pedagogy and I’d be delighted to hear from other profs—in any university field—who teach big classes and who find ways to nurture student learning and intense evaluation in an environment structured by limited resources and large numbers.