Learning in an introductory physics MOOC: All cohorts learn equally, including an on-campus class | Colvin | The International Review of Research in Open and Distance Learning

Thanks to Tony Bates for pointing to, and providing a fine review of, this interesting article, which shows evidence of learning gains in people taking an xMOOC.

I have little to add to Tony’s comments apart from mentioning the very obvious elephant in this room: the sampling was skewed by the fact that it considered only the considerably less than 10% of the MOOC’s original population who actually got close to finishing it. It is not too surprising that most of those who had the substantial motivation needed to finish the course (a large percentage of whom were very experienced learners in related fields) actually did pretty well. What it does not tell us is whether, say, a decent open textbook might have been equally or more effective for these manifestly highly motivated and proficient students. If so, the MOOC might not be a particularly cost-effective way of learning.

The study does compare the performance of a remedial class of students (i.e. students who had failed an initial course) who received plentiful further face-to-face support with that of the voluntarily subscribed online students. But the authors rightly note that it would be foolish to read anything into any of the differences found, including the fact that the campus-based students seemed to gain nothing from the additional remedial tuition (they may be overly pessimistic about that: without that remedial effort, they might have done even worse), because the demographics and motivations of these students were a million miles removed from the rest of the cohort. Chalk and cheese.

One other interesting thing is worth highlighting: this is one in a long line of articles focusing on interventions that, when looked at closely, suggest that people who spend more time learning learn more. I suspect that a lot of the value of this and indeed many courses comes from being given permission to learn (or, for the campus students, being made to do so), along with having a few signposts to show the way, a community to learn with, and a schedule to follow. Note that almost none of this has anything to do with how well or how badly a specific course is designed or implemented: it is in the nature of the beast itself. Systems teach as much as teachers. The example of the campus-based students suggests that this may not always be enough, although, sadly, the article doesn’t compare time on task for this group with the rest. It may well be that, despite an extra four hours in class each week, they still spent less time actually learning. In fact, given a prima facie case that these students had already mostly demonstrated a lack of interest and/or ability in the subject, even that tutorial time may not have been dedicated learning time.

A small niggle: the comparison with in-class learning on different courses, conducted by Hake in a 1998 study and mentioned a couple of times in the article, is quite spurious. There is a world of difference between predominantly extrinsically motivated classroom-bound students and those doing it because, self-evidently, they actually want to do it. If you were to extract the most motivated 10% of any class you might see rather different learning patterns too. The nearest comparison that would make a little sense here is with the remedial campus-bound students, though, for the aforementioned reasons, that would not be quite fair either.

Little or none of this is news to the researchers, who in their conclusion carefully write:

“Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course (but over 50% of students making a serious attempt at the second weekly test received certificates) and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.”

This is a good and fair account of reasons to be wary of these results. What it boils down to is that there are almost no firm conclusions to be drawn from them about MOOCs in general, save that people taking them sometimes learn something or, at least, are able to pass tests about them. This is also true of most people who read Wikipedia articles.

For all that, the paper is very well written, the interventions are well described (and include some useful statistics, such as the fact that 95% of the small number who attempted more than 50% of the questions went on to gain a certificate), the research methods are excellent, and the analysis is very well conducted. In combination with others that I hope will follow, this very good paper should contribute a little to a larger body of future work from which more solid conclusions can be drawn. As Tony says, we need more studies like this.

Address of the bookmark: http://www.irrodl.org/index.php/irrodl/article/view/1902/3009

