Instructional quality of Massive Open Online Courses (MOOCs)

This is a very interesting, if (I will argue) flawed, paper by Margaryan, Bianco and Littlejohn using a Course Scan instrument to examine the instructional design qualities of 76 randomly selected MOOCs (26 cMOOCs and 50 xMOOCs – the imbalance was caused by difficulties finding suitable cMOOCs). The conclusions drawn are that very few MOOCs, if any, show much evidence of sound instructional design strategies. In fact they are, according to the authors, almost all an instructional designer’s worst nightmare, on at least some dimensions.  
I like this paper but I have some fairly serious concerns about the way this study was conducted, which means a very large pinch of salt is needed when considering its conclusions. The central problem lies in the use of prescriptive criteria to identify ‘good’ instructional design practice, which are then applied as quantitative measures of things deemed essential to any completed course design.

Doubtful criteria 

It starts reasonably well. Margaryan et al. use David Merrill’s well-accepted abstracted principles of instructional design to identify the kinds of activities that should be present in any course and that, being derived from a variety of models and theories, are pretty reasonable: problem-centricity, activation of prior learning, expert demonstration, application and integration. However, the chinks begin to show even here, as it is not always essential that all of these be explicitly contained within the course itself, even though consideration of them may be needed in the design process – for example, in an apprenticeship model, integration might be a natural part of learners’ lives, while in an open ‘by negotiated outcome’ course (e.g. a typical European PhD) the problems may be inherent in the context. But, as a fair approximation of what activities should be in most conventional taught courses, it’s not bad at all, even though it might mark some courses as ‘bad’ when they are in fact ‘good’.
The authors also add five more criteria abstracted from the literature, relating rather loosely to ‘resources’: expert feedback; differentiation (i.e. personalization); collaboration; authentic resources; and use of collective knowledge (i.e. cooperative sharing). These are far more contentious, with the exception of feedback, which almost all would agree should be considered in some form in any learning design (and which is a process thing anyway, not a resource issue). However, even this does not always need to be the expert feedback that the authors demand: automated feedback (which is, to be fair, a kind of ossified expert feedback, at least when done right), peer feedback or, best of all, intrinsic feedback can often be at least as good. Intrinsic feedback (e.g. when learning to ride a bike, falling off or staying upright) is almost always better than any expert feedback, albeit that it can be enhanced by expert advice. None of the rest of these ‘resources’ criteria is essential to an effective learning design. They can be very useful, for sure, although that depends a great deal on context and on how they are used, and there are often many other things that may matter as much or more in a design: support for reflection, for example, or scope for caring or passion to be displayed, or design for personal relevance. It is worth noting that Merrill himself observes that, beyond the areas of broad agreement (which I reckon are somewhat shoehorned to fit), there is much more in other instructional design models that demands further research and that may be equally if not more important than the principles identified as common.

It ain’t what you do…

Like all things in education, it ain’t what you do but how you do it that makes all the difference, and it all depends massively on subject, context, learners and many other things. Prescriptive measures of instructional design quality like these make no sense when applied post hoc because they ignore all of this. They are very reasonable starting frameworks for a designer, encouraging focus on things that matter, and they can make a big difference in the design process, but real-life learning designs have to take the entire context into account and can (and often should) be done differently. Learning design (I shudder at the word ‘instructional’ because it implies so many unhealthy assumptions and attitudes) is a creative and situated activity. It makes no more sense to prescribe what kinds of activities and resources should be in a course than it does to prescribe how paintings should be composed. Yes, a few basics like golden ratios, rules of thirds and colour theory can help the novice painter produce something acceptable, but the fact that a painting disobeys these ‘rules’ does not make it a bad painting: sometimes, quite the opposite. Some of the finest teaching I have ever seen or partaken of has used the most appalling instructional design techniques, by any theoretical measure.

Over-rigid assumptions and requirements

One of the biggest problems with such general-purpose abstractions is that they make some very strong prior assumptions about what a course is going to be like and about the context of delivery. Thanks to their closer resemblance to traditional courses (from which, it should be clearly noted, the design criteria are derived) this is, to an extent, fair-ish for xMOOCs. But, even in the case of xMOOCs, the demand that collaboration, say, must occur is a step too far: as decades of distance learning research have shown (and as Athabasca University has long demonstrated), great learning can happen without it and, while cooperative sharing is pragmatic and cost-effective, it is not essential in every course. Yes, these things are often a very good idea. No, they are not essential. Terry Anderson’s well-verified (and possibly self-confirming, though none the worse for it) interaction equivalency theorem – which holds that deep and meaningful learning can occur as long as at least one of learner-teacher, learner-learner or learner-content interaction is at a high level – makes this pretty clear.

cMOOCs are not xMOOCs

Prescriptive criteria as a tool for evaluation make no sense whatsoever in a cMOOC context. This is made worse because the traditional model is carried to extremes in this paper, to the extent that the authors bemoan the lack of clear learning outcomes. This doesn’t naturally fall out from the design principles at all, so I don’t understand why learning outcomes are even mentioned: it seems an arbitrary criterion with no validity or justification beyond the fact that such outcomes are typically used in university teaching. As teacher-prescribed learning outcomes are anathema to Connectivism, it is very surprising indeed that the cMOOCs actually scored higher than the xMOOCs on this metric, which makes me wonder whether the means of differentiating the two were sufficiently rigorous. A MOOC that genuinely followed Connectivist principles would not provide learning outcomes at all: foci and themes, for sure, but not ‘at the end of this course you will be able to x’. And, anyway, as a lot of research and debate has shown, learning outcomes are of far greater value to teachers and instructional designers than they are to learners, for whom they may, if not handled with great care, actually get in the way of effective learning. It’s a process thing – helpful for creating courses, almost useless for taking them. The same problem occurs in the use of course organization as a criterion – cMOOC content is organized bottom-up by learners, so it is not very surprising that such courses lack careful top-down planning, and that is part of the point.

Apparently, some cMOOCs are not cMOOCs either

As well as concerns about the means of differentiating courses and the metrics used, I am also concerned with how they were applied. It is surprising that there was even a single cMOOC that didn’t incorporate use of ‘collective knowledge’ (the authors’ term for cooperative sharing and knowledge construction) because, without that, it simply isn’t a cMOOC: it’s there in the definition of Connectivism. As for differentiation, part of the point of cMOOCs is that learning happens through the network which, by definition, means people are getting different options or paths, and choosing those that suit their needs. The big point in both cases is that, in a cMOOC, the teacher-designed course does not contain the content: beyond the process support needed to build and sustain a network, any content provided by the facilitators of such a course is just a catalyst for network formation and a centre around which activity flows and learner-generated content and activity is created.

With that in mind, it is worth pointing out that problem-centricity in learning design is an expression of teacher control which, again, is anathema to how cMOOCs work. Assuming that a cMOOC succeeds in connecting and mobilizing a network, it is all but certain that a great deal of problem-based and inquiry-based learning will be going on as people post, others respond, and issues become problematized. Moreover, the problems and issues will be relevant and meaningful to learners in ways that no pre-designed course can ever be. The content of a cMOOC is largely learner-generated, so of course a problem focus is often simply not there in the static materials supplied by the people running it. cMOOCs do not tell learners what to do or how to do it, beyond the very broad process support needed to help those networks accrete. It would therefore be more than a little weird if their designed content adhered to instructional design principles derived from teacher-led face-to-face courses because, if it did, they would not be cMOOCs.

Of course, it is perfectly reasonable to criticize cMOOCs as a matter of principle on these grounds: given that (depending on the network) few participants will know much about learning and how to support it, one of the big problems with connectivist methods is that of getting lost in social space, with insufficient structure or guidance to suit all learning needs, insufficient feedback, inefficient paths and so on. I’d have some sympathy with such an argument, but it is not fair to judge cMOOCs on criteria that their instigators would reject in the first place and that they are actively avoiding. It’s like criticizing cheese for not being chalky enough.

It’s still a good paper though

For all that I find the conclusions of this paper highly arguable and the methods open to serious criticism, it does provide an interesting portrait of MOOCs through an unconventional lens. We need more research along these lines because, even where the conclusions are debatable, what is revealed in the process is a much richer picture of the kinds of things that are and are not happening in MOOCs. These are fine researchers who have told an old story in a new way, and the result is enlightening stuff that is worth reading.
 
As an aside, we also need better editors and reviewers for papers like this: little tell-tales, like ‘cMOOC’ being defined at one point as ‘constructivist MOOC’ (I’m sure it’s just a slip of the keyboard, as the authors are well aware of what they are writing about) and more typos than you might expect in a published paper, suggest that not quite enough effort went into quality control at the editorial end. I note too that this is a closed journal: you’d think that they might offer better value for the money they cream off for their services.

Address of the bookmark: http://www.sciencedirect.com/science/article/pii/S036013151400178X

Learning in an introductory physics MOOC: All cohorts learn equally, including an on-campus class | Colvin | The International Review of Research in Open and Distance Learning

Thanks to Tony Bates for pointing to, and providing a fine review of, this interesting article, which shows evidence of learning gain in people taking an xMOOC.

I have little to add to Tony’s comments apart from mentioning the very obvious elephant in this room: the sampling was skewed by the fact that it considered only those who got close to finishing the course – considerably less than 10% of the MOOC’s original population. It is not too surprising that most of those who had the substantial motivation needed to finish the course (a large percentage of whom were very experienced learners in related fields) actually did pretty well. What it does not tell us is whether, say, a decent open textbook might have been equally or more effective for these manifestly highly motivated and proficient students. If so, the MOOC might not be a particularly cost-effective way of learning.

The study does compare the performance of a remedial class of students (i.e. students who had failed an initial course) who received plentiful further face-to-face support with that of the voluntarily subscribed online students. But the authors rightly note that it would be foolish to read anything into any of the differences found, including the fact that the campus-based students seemed to gain nothing from additional remedial tuition (they may be overly pessimistic about that: without the remedial effort, those students might have done even worse), because the demographics and motivations of these students were a million miles removed from the rest of the cohort. Chalk and cheese.

One other interesting thing is worth highlighting: this is one in a long line of articles focusing on interventions that, when looked at closely, suggest that people who spend more time learning learn more. I suspect that a lot of the value of this and indeed many courses comes from being given permission to learn (or, for the campus students, being made to do so), along with having a few signposts to show the way, a community to learn with, and a schedule to follow. Note that almost none of this has anything to do with how well or how badly a specific course is designed or implemented: it is in the nature of the beast itself. Systems teach as much as teachers. The example of the campus-based students suggests that this may not always be enough, although, sadly, the article doesn’t compare the time on task for this group with that of the rest. It may well be that, despite an extra 4 hours in class each week, they still spent less time actually learning. In fact, given a prima facie case that these students had already mostly demonstrated a lack of interest and/or ability in the subject, even that tutorial time may not have been dedicated learning time.

A small niggle: the comparison with in-class learning on different courses, drawn from a 1998 study by Hake and mentioned a couple of times in the article, is quite spurious. There is a world of difference between predominantly extrinsically motivated classroom-bound students and those doing it because, self-evidently, they actually want to do it. If you were to extract the most motivated 10% of any class you might see rather different learning patterns too. The nearest comparison that would make a little sense here is with the remedial campus-bound students, though, for the aforementioned reasons, that would not be quite fair either.

Little or none of this is news to the researchers, who in their conclusion carefully write:

“Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course (but over 50% of students making a serious attempt at the second weekly test received certificates) and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.”

This is a good and fair account of reasons to be wary of these results. What it boils down to is that almost no firm conclusions can be drawn from them about MOOCs in general, save that people taking them sometimes learn something or, at least, are able to pass tests about them. The same is true of most people who read Wikipedia articles.

For all that, the paper is very well written, the interventions are well described (and the paper includes some useful statistics, like the fact that 95% of the small number who attempted more than 50% of the questions went on to gain a certificate), the research methods are excellent and the analysis is very well conducted. In combination with others that I hope will follow, this very good paper should contribute a little to a larger body of future work from which more solid conclusions can be drawn. As Tony says, we need more studies like this.

Address of the bookmark: http://www.irrodl.org/index.php/irrodl/article/view/1902/3009

Five myths about Moocs | Opinion | Times Higher Education

Diana Laurillard chips in with a perceptive set of observations, most interestingly describing education as a personal client industry, in which tutor/student ratios are remarkably consistent at around 1:25, so it is no great surprise that it doesn’t scale up. It seems to me that she is quite rightly attacking a particular breed of xMOOC (EdX, Coursera, etc.), but it doesn’t have to be this way, and she carefully avoids discussing *why* that ratio is really needed – her own writings and her variant on conversation theory suggest there might be alternative ways of looking at this.

Her critique that xMOOCs appear to succeed only for those who already know how to be self-guided learners is an old chestnut that hits home. She is right in saying that MOOCs (xMOOCs) are pretty poor educational vehicles if the only people who benefit are those who can already drive, and this supports her point about the need for actual teachers for most people *if* we continue to teach in a skeuomorphic manner, copying the form of traditional courses without thinking about why we do what we do and how courses actually work.

For me, this shows clearly once again that the way MOOCs are being implemented is wrong and that we have to get away from the ‘course’ part of the acronym and start thinking about what learners really need, rather than what universities want to give them.

Address of the bookmark: http://www.timeshighereducation.co.uk/comment/opinion/five-myths-about-moocs/2010480.article