Sorry for the absence, but things have been busy around here as we step fully into the new semester. The big experiment this term is with my flipped introduction to proofs class. As I wrote last time, I was pretty nervous going into the semester about the course. But things seem to be working really well so far. I don’t want to jinx the experience by saying so, but so far, nobody in either of my two sections of the course has given any indication that the flipped model isn’t working for them. In fact, I gave a survey in the first week of class that included an item soliciting their concerns or questions about the flipped model, and here’s a sample of the responses:

- *I think the “flipped” structure will be better for a lot of the students and end with success from more students than normal.*
- *I think this sounds really great. The idea of actually working on problems in class with you there to help us, I believe, will be extremely beneficial.*
- *I enjoy the flipped structure because it uses modern technology that we have to allow us to spend the time that we have together more efficient instead of you lecturing and us taking notes.*
- *I think it will be very effective in helping me learn and grasp the material this semester, and will help me be successful in this class.*
- *[This] pedagogy can be tailored to human thought processes, and a willingness to do it. Please return to the 1980s and inform my HS math teachers.*

And on and on. Not every student responded to the survey or to this item, but all of them had the opportunity to do so, and *none* of the respondents said *anything* to indicate negative feelings or even uncertainty about the flipped model. This is quite a change from the last time I did a flipped class, where about half the students really struggled with this idea before coming on board and the other half never got on board (and some were quite belligerent about it). I have either sold the concept really well to these folks, or I just have a bunch of students who “get it”, or both. Either way I am very thankful!

To parallel the class this semester, I’m also conducting a research study across all five sections of this course that are currently running at GVSU to try to understand two things. First, how do student attitudes and strategies toward studying and doing independent work change over the course of taking this class? And second, do those attitudes and strategies change in a qualitatively or quantitatively different way for students in a flipped classroom versus a traditional classroom?

The reason I am doing this study, and asking these questions, is that this course — and in my experience, transition-to-proof courses everywhere — is both pivotal in the math major curriculum and also problematic. Many students do poorly in it despite having decent grades in their math prerequisites. At GVSU, the percentage of students during the 2011–2012 academic year who made an “F” grade in this course was 14.9% — which is really high. But what’s surprising is that this is *almost precisely the same number of students who got an “A” in the class over the same time period* — 14.6%. So there is a bimodal shape to the performance of students in the course. A lot of students do extremely well and about the same number wash out. Why? And more to the point, what’s the difference between the two groups?

The general consensus among us faculty is that some of the difference can be accounted for by mathematical preparation and skill — but not all of it. To some degree, whether or not a student does well in the proofs course, and then later on in the upper-division courses, has to do with the extent to which the student can learn to become a self-motivated learner. Some students come into the course able to learn on their own, and our sense is most of those students excel in the proofs course. Other students do not come in with strong self-learning abilities, but they are able — and willing! — to grow in those areas, and they tend to do at least OK in the course. It’s the students who fail, or refuse, to make the jump from a passive-reception model of school to an active-engagement model who tend to wash out, even if they had good grades in Calculus 1 and 2. Or at least, this is our consensus. Moving beyond consensus to data is what I am hoping to accomplish through the study.

I’ve managed to get about 40–45 of the currently enrolled students to volunteer for the study (that’s about a 50% response rate), and this week they will be taking the Motivated Strategies for Learning Questionnaire online to self-rate on an inventory of items related to their attitudes, goals, and strategies for learning in general. Then, in the last week of the semester (12 weeks from now) they will take it again. I’ll also be gathering their percentage scores on their final exam, their proof portfolios, and their course grades. I’ll be looking for changes in the self-ratings of students on key MSLQ items that correlate to self-motivated learning strategies and then correlating those with their actual performance on major course assessments.

If our department consensus is accurate, then what I’ll see is a strong correlation between students’ change in self-ratings across the two administrations of the MSLQ and their assessment grades. That is, students who self-rate low on the self-motivated learning items but end high will perform well, as will students who start high and end high. And students who start low and stay low will have lower grades on those assessments. Again — this may not actually happen. It could be more complex than our consensus, but I want to know if that’s the case.
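To make the planned analysis a bit more concrete, here is a minimal, hypothetical sketch of the kind of computation involved. Everything in it is invented for illustration — the real study uses the full MSLQ subscales and actual assessment data, not a single made-up rating per student:

```python
# Hypothetical sketch of the planned analysis (all data invented):
# correlate each student's change in MSLQ self-motivation rating
# (post minus pre) -- and, for comparison, their final rating --
# with a score on a major assessment.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented records: (pre-MSLQ, post-MSLQ, final exam %)
students = [
    (3.1, 5.8, 92),  # starts low, ends high
    (5.5, 5.9, 95),  # starts high, stays high
    (2.9, 3.0, 58),  # starts low, stays low
    (3.4, 3.2, 61),
    (4.0, 6.1, 88),
]

gains  = [post - pre for pre, post, _ in students]
finals = [post for _, post, _ in students]
grades = [grade for _, _, grade in students]

r_gain  = pearson(gains, grades)
r_final = pearson(finals, grades)
print(f"MSLQ gain  vs. exam: r = {r_gain:.2f}")
print(f"MSLQ final vs. exam: r = {r_final:.2f}")
```

One wrinkle the consensus hypothesis implies: students who start high make only small jumps but still do well, so the *ending* self-rating may track grades more closely than the raw gain does — the invented data above is arranged to show exactly that.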

I’ll also be looking for differences in the self-rating differentials between my sections and the unflipped sections. Will students show more big jumps from low self-motivation to high self-motivation in the flipped setting than in the traditional setting? Or will the numbers be about the same, or lower? I have a sense that being in a flipped classroom, since it forces the issue of self-motivation, causes students to adopt self-motivated learning strategies more readily and more fully. But I don’t know — hence the study.

In fact one of the big marks in the “con” column of the flipped classroom right now is the lack of systematic research on its effectiveness. There’s a lot of enthusiasm and interest but not a lot of data. I’m hoping to do my part to rectify that.