
An end to course evaluations

February 17, 2008, 4:49 pm

Having been on the Promotion and Tenure Committee now for two years, and having the job of reading reams of course evaluations, not only my own but those of many of my colleagues, to determine how good a job (or not) they are doing at teaching, I have a new appreciation for just how bad an evaluative instrument the typical student course evaluation really is. I say let’s ditch the whole system and start over.

I suppose I should elaborate. The whole point of any kind of evaluation of anybody is to gather information. And I think of information the way Claude Shannon did, i.e., information is that which reduces uncertainty. Alice does an evaluation of Bob for some official purpose because the people in charge do not themselves have a clear idea of what Bob is doing, and it would be a little biased to have Bob evaluate himself, so Alice goes in to provide some kind of substantive information that clears up the picture and reduces the uncertainty of the people in charge. Maybe it’s not a single Alice but a whole roomful of Alices, all of whom have been taking a course from Bob for the last 9-10 weeks. With all that information, you might have some outliers on the positive end (“He’s great!”) or the negative end (“He’s awful!”), but on average you should get a pattern of information that provides a little more certainty as to the kind of teacher Bob really is.
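
(If you want to see “uncertainty” the way Shannon did, here’s a toy sketch in Python. The ratings and the entropy function below are made up by me purely for illustration; they are not from any real evaluation instrument. Shannon entropy measures how spread out a distribution of responses is: the more peaked the distribution, the less uncertain you are about what it’s telling you.)

```python
import math
from collections import Counter

def entropy(responses):
    """Shannon entropy, in bits, of a list of categorical responses."""
    counts = Counter(responses)
    n = len(responses)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Made-up 1-5 ratings on the same question from two hypothetical classes.
peaked    = [5, 5, 4, 5, 5, 4, 5, 5]   # students converge on an answer
scattered = [1, 5, 2, 4, 3, 5, 1, 3]   # responses are all over the map

print(f"peaked:    {entropy(peaked):.2f} bits")     # about 0.81 bits
print(f"scattered: {entropy(scattered):.2f} bits")  # about 2.25 bits
```

The toy numbers make the point: a class that converges on an answer leaves you less uncertain about Bob than a class whose responses are noise, and that kind of uncertainty reduction is exactly what a good evaluation question is supposed to buy you.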

Except most of the time, you don’t get the kind of information you want, or for that matter any information at all. Much of the trouble starts with the evaluation form itself. The questions that ask students to give a numerical response are often ill-posed, inappropriate for students to be answering, or simply absurd. Examples:

  • Ill-posed: “The professor handed out a syllabus on the first day of class.” This (or pretty close to it) is a question on our evaluation forms, and students are asked to give an answer on a scale of 1 (strongly disagree) to 5 (strongly agree). But this is obviously a binary question — either I gave the syllabus out on the first day of class or I didn’t. You don’t “strongly agree”. Or what if I don’t hand out a paper copy but rather post it to our course web site and show students where it is? This question is kind of innocuous, so the fact that it yields no useful information due to its ill-posed nature is OK in some ways because you can just ignore it if you’re the prof or the P&T committee. But if we’re ignoring it, why is it on there in the first place?
  • Inappropriate: “The professor’s teaching methods are appropriate for this class.” Another item off our evaluation form, and I have a hard time believing most students have any idea what makes a teaching method “appropriate” or not, unless they are junior or senior education majors who have done some crossover thinking about what high school teaching techniques work for the college classroom (and what teaching techniques are ineffective in K-12 but still effective in college). If I were a student, I’d interpret “appropriate” to mean “amenable to my lifestyle”, which is not what the question has in mind at all. So again, you might get a strong pattern of data from a question like this, but it actually increases uncertainty rather than decreasing it. If a prof gets evaluated really badly on an item like that, does it mean that his teaching methods really are inappropriate, or that they are appropriate but students just don’t care for them? We don’t know. More uncertainty.
  • Absurd: I could go on and on. I’ll mention my favorite, which was mercifully removed from our course evaluations some years ago: “My instructor senses when some students are not understanding.” Pardon me? Sensing? I’m not a frickin’ Betazoid, folks.

Written comments are a little better, but not by much. You get some very useful written comments sometimes, but you also get a great many comments that are wildly out of context or simply unintelligible. A student gets a test back with a bad grade on the day of the evaluation — possibly even in another professor’s class — walks in with a chip on his shoulder, and selectively ignores a semester’s worth of hard, quality work on the professor’s part just to make a point on the evaluation. The professor reads this and wonders who this person is and what class they thought they were evaluating. The P&T committee reads it and wonders what the deal was, left with lots of questions about what really happened and what was really going on — again, the uncertainty level is raised, not lowered.

In the worst cases, students will create a meme that runs throughout all the comments on the evaluations for a single class. It’s easy to spot, because it’s as if the students had copied the same slogan onto different evaluation forms. “The professor thinks this is the only class we are taking” is one you see, verbatim, multiple times across the evaluations for the same class — a sure sign that students have decided to group-think rather than honestly give their reasoned assessment of the course in light of everything that has taken place. This is just as bad when the meme is positive as when it is negative. When students, many of whom have studiously avoided being honest with the professor about their difficulties with the course or coming to office hours to talk things over, get together and adopt a slogan rather than give their own honest opinions, it raises rather than reduces uncertainty for the professor and the P&T people.

So, like I said, I advocate a wholesale, unilateral rejection of the student evaluation system as we know it. There’s no point in holding fast to an information-gathering system that requires more information to interpret its results than it generates in the first place.

I do think students need to have a voice in evaluating their professors, so I wouldn’t recommend simply not having student evaluations in any form. But my ideal format looks a little like what I used to do when I worked for the Center for Teaching at Vanderbilt University. My job was to conduct a “small group analysis” (SGA) for TAs in different departments. We’d have the TA end class 20 minutes early, and then I would go in and lead a discussion among the students in which they had to voice, in person and out loud, their thoughts on a series of well-designed questions about the TA’s teaching. (I’ll try to find a copy of the questions I used.) I took notes and directed traffic. The SGAs were great because the students whose issues were merely personal gripes disguised as real pedagogical problems were often shouted down by other students who felt those issues were as ridiculous as they sounded. For example, a student would complain that homework wasn’t returned fast enough. “What are you talking about? He hands them back within four days, and anyhow you don’t even come to class but once a week, so what do you know?” the others would say. I saw exchanges like this, usually less pejorative but always very revealing, almost every time I did an SGA.

That’s information — a comment arises from one student and is put into context by another, and it all appears on the one set of notes that the TA gets. And it takes no more time from class than the usual evaluation session. (At Vandy, students did traditional course evaluations too.) You have to hire and pay people to run the SGAs, but personally, I’d do it for free at my current job if I knew I’d be getting a saner and more informative evaluation process out of it.
