My apologies for being a little behind the curve on the MATLAB-course-blogging. It’s been a very interesting last couple of weeks in the class, and there’s a lot to catch up on. The issues being brought up in this course that have to do with general thinking and learning are fascinating, deep, and complicated. It’s almost as if the course is becoming only secondarily a course on MATLAB and primarily a course on critical thinking and lifelong learning in a technological context.

This past week’s lab really brought that to the forefront. The lab was all about working with external data sets, and it involved students going to this web site and looking at this data set (XLS, 33 Kb) about electoral vote counts of the various states in the US (and the District of Columbia). One of the tasks asked students to make a scatterplot of the land area of the states versus their electoral vote counts. Once you make that scatterplot, it looks like this:

Most students' reaction to this plot was really surprising. Almost unanimously, and without consulting each other, they said: **"That can't be right."** When I'd ask them why not, they would say something like: **it looks strange**; or **it's not like scatter plots I've done before**; or **it just doesn't look right**.

The first instinct of those who felt like they had made a critical error in their plot was to ask me to verify whether or not they had gotten it right. That’s understandable, but it doesn’t go very far because I have a rule that I don’t answer “*Is this right?*” questions in the lab. (See the instructions in the lab assignment.) Student teams are responsible in the labs for determining by themselves the rightness or wrongness of their work. So it’s time for critical thinking to take center stage — which in this context would refer to using your brain and all available tools and information to self-verify your work. (I wrote about the idea of self-verification here using Wolfram|Alpha.)

Some of the suggestions I gave these teams were:

- **Have you checked your plot against the actual data?** For example, look at the outliers. Can you find them in the data set itself? And look at the main cluster of data; given a cursory glance through the data set, does it look like most states have a land area less than \(10^6\) square miles and an electoral vote count between 5 and 15?
- **Have you tried to create the same scatterplot using different tools?** For example, everybody in the class knows Excel (because we teach it in Calculus I); the data are in Excel already, so it would be virtually no work to make a scatterplot in Excel. Have you tried that? If so, does it look like what MATLAB is creating?
- **Have you taken a moment just to think about the possible relationship between the variables, and does the shape of the data match your expectations?** Probably we don't really expect much of a relationship at all between the land area of a state and its electoral vote count, even with the outliers trimmed out, so a diffuse cloud of data markers is exactly what we want. If we got some sort of perfectly lined-up string of data points, we should be *suspicious* this time.
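The first suggestion above can be carried out right in MATLAB. Here's a minimal sketch; the filename, column positions, and variable names are my placeholders, not the actual layout of the lab's spreadsheet:

```matlab
% Sketch of the self-check workflow: plot the data, then pull the
% outliers back out of the data set to confirm they're really there.
% Filename and column indices are assumed, not from the lab handout.
data  = xlsread('electoral_votes.xls');  % numeric columns of the spreadsheet
area  = data(:, 1);                      % land area in square miles
votes = data(:, 2);                      % electoral vote count

scatter(area, votes)
xlabel('Land area (sq. mi.)')
ylabel('Electoral votes')

% Self-verification: find the biggest outlier by area and check it
% against the raw spreadsheet by eye.
[maxArea, idx] = max(area);
fprintf('Largest state: %g sq. mi., %d electoral votes\n', maxArea, votes(idx))
```

Checking that the printed outlier matches a row you can point to in the spreadsheet is exactly the kind of self-verification the lab is after.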

Once you phrase it like this, students pretty quickly gain confidence in their results. But, importantly, most of them have never been put into situations — at least in the classroom — where this sort of thing has been necessary. If critical thinking means anything, it means training yourself to ask questions like this and pursue their answers in an attempt to be your own judge of your work.

I was particularly surprised by the rejection of any scatter plot that doesn’t look like points on the graph of a function. “Authentic instruction” is a term without an operational definition, a lot like the term “critical thinking”, but here I think we may have a clue to what that term means. Students said their scatterplots didn’t “look right”, meaning they didn’t look like what their textbook examples had looked like, i.e. the points didn’t have an overwhelmingly strong correlation despite the existence of a few token outliers. In other words, students are trained by the use of made-up data that “right” means “strong correlation”. So when they encounter data that are very much not correlated, the scatter plot “looks wrong” rather than “looks like there’s not much correlation”. Students are somehow trained to place value judgements on scatter plots, with strong correlation = good and weak correlation = bad. I’m not sure where that perception comes from, but I bet if we gave students real data to work with, it would never take root.
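One way to replace the eyeballed "looks wrong" judgment with a number is to compute the correlation coefficient, which MATLAB provides via `corrcoef`. A sketch, assuming `area` and `votes` are column vectors read from the data set (the names are placeholders):

```matlab
% Assumes 'area' and 'votes' are column vectors of land areas and
% electoral vote counts read from the data set (names assumed).
R = corrcoef(area, votes);   % 2x2 correlation matrix
r = R(1, 2);                 % Pearson correlation coefficient
fprintf('r = %.2f\n', r)     % near 0: a diffuse cloud, not a wrong plot
```

A value of \(r\) near zero is not a sign of a mistake; it's a quantitative statement that the scatter plot *should* look like a diffuse cloud.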