The case of the curious boxplots

April 25, 2010, 1:47 pm

I just graded my second hour-long assessment for the Calculus class (yes, I do teach other courses besides MATLAB). I break these assessments up into three sections: Concept Knowledge, where students have to reason from verbal, graphical, or numerical information (24/100 points); Computations, where students do basic context-free symbol-crunching (26/100 points); and Problem Solving, consisting of problems that combine conceptual knowledge and computation (50/100 points). Here’s the Assessment itself. (There was a problem with the very last item — the function doesn’t have an inflection point — but we fixed it and students got extra time because of it.)

Unfortunately, the students as a whole did quite poorly. The class average was around 51%. As has been my practice this semester, I turn to data analysis whenever things go really badly to try to find out what might have happened. I made boxplots for each of the three sections and for the overall scores; the red bars inside the boxplots mark the average for each.
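(In case you want to make plots like these yourself, here's a minimal sketch in Python with matplotlib, which is one reasonable tool for the job. The score arrays below are invented placeholders standing in for the real gradebook columns, weighted by the section point values above, and the showmeans/meanline options are what draw the red average bars.)

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder score vectors -- hypothetical, NOT the actual gradebook.
    rng = np.random.default_rng(1)
    concept = rng.normal(48, 15, 32).clip(0, 100)       # Concept Knowledge (%)
    computations = rng.normal(55, 13, 32).clip(0, 100)  # Computations (%)
    problems = rng.normal(50, 14, 32).clip(0, 100)      # Problem Solving (%)
    # Overall score weighted by the section point values (24/26/50).
    overall = 0.24 * concept + 0.26 * computations + 0.50 * problems

    fig, ax = plt.subplots()
    ax.boxplot(
        [concept, computations, problems, overall],
        labels=["Concept", "Computations", "Prob. Solving", "Overall"],
        showmeans=True,   # mark the mean inside each box...
        meanline=True,    # ...as a line rather than a point
        meanprops={"color": "red", "linestyle": "solid"},
    )
    ax.set_ylabel("Score (%)")
    plt.show()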

I think there’s some very interesting information in here.

The first thing I noticed was how similar the Computations and Problem Solving distributions were. Typically students do significantly better on Computations than on anything else, and the Problem Solving and Concept Knowledge distributions mirror each other. But this time, Computations and Problem Solving appear to be nearly identical.

But then you ask: where's the median in the boxplots for these two distributions? The median shows up nicely in the first and fourth plots, but doesn't appear in the middle two. Well, it turns out that for Computations, the median and the 75th percentile are equal, while for Problem Solving, the median and the 25th percentile are equal; in each case the median line is drawn right on top of an edge of the box, so it's invisible. The middle half of each distribution lies between 40% and 65%, but the Computations middle half is totally top-heavy while the Problem Solving middle half is totally bottom-heavy. Shocking, I guess.
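(Here's a quick way to see how a median can hide like this, using a tiny hypothetical score list rather than the real data: if at least half of a section's scores pile up on one high value, the median equals the 75th percentile, and the median line gets drawn right on top of the box's upper edge.)

    import numpy as np

    # Hypothetical top-heavy section scores -- illustrative only.
    computations = np.array([30, 40, 45, 65, 65, 65, 65, 65])

    q1, median, q3 = np.percentile(computations, [25, 50, 75])
    print(q1, median, q3)   # 43.75 65.0 65.0 -> median == 75th percentile

    # In a boxplot of this data, the median line at 65 coincides with
    # the box's top edge at 65, so the median seems to disappear.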

So, clearly, conceptual knowledge in general (the ability to reason and draw conclusions by non-computational means) is a huge concern. That over 75% of the class is scoring less than 60% on a fairly routine conceptual problem is unacceptable. Issues with conceptual knowledge carry over into problem solving: notice that the average on Concept Knowledge is roughly equal to the median on Problem Solving. And problem solving is the main purpose of having students take the course in the first place.

Computation was not as much of an issue for these students because they get tons of repetition with it (although it looks like they could use some more) via WeBWorK problems, which are overwhelmingly oriented toward context-free algebraic calculations. But what kind of repetition and supervised practice do they get with conceptual problems? We do a lot of group work, but it's not graded. There is still a considerable amount of lecturing going on during the class period, and when I throw out a conceptual question to the class, there is no expectation that everybody answer it. Students spend far less time working on conceptual problems and longer-form contextual problems than they do on basic, context-free computation problems.

This has got to change, both right now (so I don't end up failing two-thirds of my class) and for the future (so the next class will be better equipped to do calculus taught at a college level). I'm talking with the students tomorrow about the short term. As for the long term, two things come to mind that could help.

  • Clickers. Derek Bruff mentioned this in a Twitter conversation, and I think he's right: clickers can elicit serious work on conceptual questions and alert me to how students are doing with these kinds of questions before the assessment hits, when it would be too late to do anything proactive about it. I've been meaning to take the plunge and start using clickers, and this might be the right, um, stimulus for it.
  • Inverted classroom. I'm so enthusiastic about how well the inverted classroom model has worked in the MATLAB course that I find myself projecting that model onto everything. But I do think this model would provide students with the repetition and accountability they need on conceptual work, as well as give me the information I need about how they're doing. Set up some podcasts of the course lectures for students to watch or listen to outside of class; assign WeBWorK to assess the routine computational problems (no change from what we're doing now); and spend every class period on a graded in-class conceptual or problem-solving activity. That would take some work and a considerable amount of sales pitching to get students to buy into it, but I think I like what it might become.
This entry was posted in Calculus, Clickers, Critical thinking, Inverted classroom, Math, Teaching, Technology.