April 18, 2012, 3:31 pm
"Insufficient number of supporting examples. C-minus. Meep." (Photo by Flickr/CC user geishaboy500)
A just-released report confirms earlier studies showing that machines score many short essays about the same as human graders do. Once again, panic ensues: We can’t let robots grade our students’ writing! That would be so, uh, mechanical. Admittedly, this panic isn’t about Scantron grading of multiple-choice tests but about an ideological, market- and foundation-driven effort to automate assessment of that exquisite brew of rhetoric, logic, and creativity called student writing. Without question, the study was performed by people with huge financial stakes in the results, driven by motives outside education. But isn’t the real question not whether the machines deliver similar scores, but why they do?