Very interesting new development, not quite finished yet but showing great promise – a simple means to aggregate content from your learning journey, supporting open standards. This is not so much a personal learning environment as a bit of glue to hold it together. The team putting it together have some great credentials, including one of the co-founders of Elgg (used here on the Landing) and the creator of the Curatr social learning platform.
Currently its main open standard appears to be the Tin Can API (xAPI), developed by ADL as the successor to SCORM, but there are bigger plans afoot. I think that this kind of small, powerful service that disaggregates learning journeys from monolithic systems (including those such as the Landing, Moodle, MOOCs and Blackboard-based systems) is going to be a vital disruptive component in enabling richer, more integrated learning in the 21st Century.
This is the description of the tool from the site itself:
“It’s never been easier to be a self-directed learner. Whether you’re in school or at work, you’re always learning. And it’s not just courses that teach. The websites you visit, the blogs you write, the job you do; it’s all activity that contributes to your personal growth.
Right now you’re letting the data all this activity creates slip through your fingers. You could be taking control of your learning; storing your experience, making sense of what you do and showing off what you know.
Learning Locker helps you to aggregate and use your learning data in an environment that you control. You can use this data to help quantify your abilities, to help you reach personal targets and to let others share in what you do.
It’s time to take your data out of the hands of archaic learning management systems that you can’t reach. We use new technologies, like the xAPI, to help you take control of your learning. It’s your data. Own it.”
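To make the xAPI idea concrete: at its heart the standard records learning activity as simple 'actor, verb, object' statements that any tool can emit and any learning record store (such as Learning Locker) can collect. Here is a minimal sketch of one such statement in Python; the verb URI comes from ADL's published vocabulary, while the learner and activity details are hypothetical examples, not anything from Learning Locker itself.

```python
import json

# A minimal xAPI ("Tin Can") statement: who did what, to what.
# The verb ID is from ADL's standard vocabulary; the actor and
# activity below are made-up examples for illustration only.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/intro-to-moocs",
        "definition": {"name": {"en-US": "Intro to MOOCs"}},
    },
}

# Statements are sent to a learning record store as JSON.
print(json.dumps(statement, indent=2))
```

Because the statement is just JSON with well-known URIs for verbs and activities, a blog, a game, or an LMS can all report to the same store, which is precisely the disaggregation described above.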
Every institution of higher learning I visit or talk with seems intent on joining the MOOC scrum or, if not, is coming up with arguments why it shouldn’t. There’s a wealth of poorly considered, badly researched opinion pieces too, many of them published by otherwise fairly reputable journals and news sources. I’ve been doing my bit to add poorly researched opinion too, talking in various venues about a few ideas and opinions that are not sufficiently rigorously explored to make into a decent paper. This post is still not worthy of a paper, but I think the main idea in it is worth sharing anyway. To save you the trouble of reading the whole thing, I’m going to be making the point that MOOCs disrupt because they quietly remove two of the almost-never-questioned but most-totally-nonsensical foundations on which most traditional university teaching is based – integral accreditation and fixed course lengths – and their poor completion rates therefore encourage us, or force us, to ask ourselves why we do such things. My hope is that the result of such reflection will be to bring about change. To situate my opinions relative to those of others, I will start by offering a slight caricature of the three main stances that people seem to be taking on MOOCs.
Opinion 1 – it’s all rubbish and online learning is pants
The cantankerati are, of course, telling us that there is nothing new here, or that online learning isn’t as good as face to face, or that it is all hype, or that the learning outcomes are not as good as those at (insert preferred institution, preferably one’s alma mater, here) etc. This is a fad, they tell us. They look at things like drop-out rates or Udacity partnering with Georgia Tech or Coursera moving into competition with Blackboard, or the fact that millennial college students prefer traditional to online classes (err – seriously? that’s like asking iPhone users if they prefer them to Android phones) and nod their heads sagely, smugly and in an ‘I told you so’ fashion. No doubt, when the bubble bursts (as it will) they will be the first to gloat. But they are wrong about the failings of MOOCs, on most significant counts.
Opinion 2 – it’s a step in the right direction, but (insert prejudice here)
Others think that there is something worth preserving here and are trying to invent new variants – usually xOOCs of some kind, or MOOxs, or, in rare cases, xOOxs, liking some aspect of the MOOC idea such as openness or size but not liking others. The acolytes of online learning (AOLs for short, oddly enough) are getting all excited about the fact that people are at last paying attention to what they have been saying for years, though most are tempering their enthusiasm with observations about the appalling pedagogies, the creation of a two-tier system of higher education, problems with accrediting MOOC learning, and high ‘dropout’ rates. They are wondering why these MOOCish upstarts haven’t read their own august works on the subject, which would obviously steer them right. They will, when pressed, grudgingly admit that these rank enthusiastic amateurs are (dammit) quite signally succeeding in ways they have only dreamed of, but they still know better. There are many of these, some of which are actually very thoughtful and penetrating and by no means unsubtle in their analysis: John Daniel’s well-informed, sagacious overview, Paul Stacey’s intelligent mourning of the overshadowing of a good idea, or Carol Edwards’s slightly jaundiced but interesting and revealing first-person report for BCIT, for instance. There are far more unsubtle and far less well-informed rants, which I won’t bother linking here, that complain about the pedagogies, tell us that there is nothing new at all in this, or think they see an alternative future, etc. Oh, alright – here’s one that I find particularly silly and here are my comments on it.
Opinion 3 – the sky is falling! The sky is falling!
There is a third group that is fairly sure that MOOCs are very important and that they are causing or, at least, catalyzing a seismic shift in education. The popular press clearly demonstrates that there’s a revolution happening, for better or worse, and most people who hold this position want to be on that bandwagon, wherever it may be going. If not, they fear they will be left in the dust. There are some notable holders of this perspective who justify and examine their beliefs in intelligent ways, such as the ever-brilliant Donald Clark, for example, who has recently written a great series of posts that are both critical and rabble-rousing.
And many in between…
Between and spanning these caricatures are some really interesting and perceptive commentaries, and only a few have as clear-cut an opinion as I portray here. Aaron Bady’s post casting a critical eye on the hype, for example, picks apart the sky falling very carefully, and situates itself a little in the ‘right direction’ camp without being too much on the ‘but…’ side of things. The recent Edinburgh report on their pilot MOOCs is a model of careful research and openness to critical and creative thinking. George Siemens’s excellent analysis of x-vs-c MOOCs is another great piece that avoids much bias one way or the other while identifying some of the key issues for the future.
Where I sit
You could call me a fan. My PhD (completed well over 10 years ago) was largely about how large online crowds can learn together. I’ve signed up for (but not completed) quite a few MOOCs since 2008, and I’ve been a more active participant at times, playing a teaching role in a couple and helping to lead one in early 2011. I ran my first education-oriented web server offering what we would now call open educational resources in 1993. I read an average of two or three articles on MOOCs every day, maybe more. I’ve joined up with the newly formed WideWorldEd project and have been engaged in discussions and planning about MOOCs at three different institutions.
I am definitely not one of the cantankerati though I am highly sceptical of any blanket claim that a particular flavour of teaching leads to better or worse learning than any other, be it online or not. It ain’t what you do, it’s the way that you do it.
I do not believe that the pedagogies of most MOOCs are particularly bad or retrograde. Talking heads, objective tests and other favourite tools of early xMOOC providers are not my cup of tea, and the chaos of cMOOCs (that I like a lot more) seems to favour only a few neterate winners, but most that I have seen are actually at least as good as their paid-for counterparts. There are quite a lot that do not fall neatly into either of these main camps too – e.g. http://ds106.us – and both camps share a lot in common with each other that neither camp seems particularly happy to acknowledge: connectivist networks thread through and around xMOOCs and disrupt their neat outlines, while cMOOCs often employ what look and smell a lot like instructivist lectures as significant parts of the process. But, whatever the similarities, what and how people teach is seldom what and how people actually learn so it is not that important. Quality is not a direct correlate of the pedagogies and other technologies used. In fact, it is interesting to note that a recent article on MOOC junkies highlighted the greater significance of passion in the professor, something I and many others have been saying for quite a while. It ain’t what you do, it’s the way that you do it.
For me, the sky is not falling yet though it certainly has a few more interesting colours than it had a year or two ago and there are some fascinating systemic effects that are mostly, but not all, positive. But this is not the beginning of the end of higher education as we know it. In some ways, it could be the beginning of something much more interesting.
What really appeals to me most about MOOCs is their almost universally low completion rates. Whatever this means for MOOCs themselves, and however much it upsets their providers (not their learners), in my opinion this is by far their most positive systemic feature. While it ain’t what you do, it’s the way that you do it, there is one important proviso that needs to be added to that: there are some things that you can do that will most probably, and in some cases definitely, fail to get results. And this is really what this post is about.
So, what about those completion rates?
One thing that many of the cantankerati, the fearfully curious and the AOLs amicably agree on is the fact that most people drop out of most MOOCs, which shows that there is something wrong with the idea, or how it has been implemented, or both. Some MOOCs struggle to keep 2% of their students while the best (on horse feeding, as it happens) have managed a little over 40%. The vast majority (so far) have succeeded in keeping less than 10% of their students to the bitter end. This is particularly odd given that, on most MOOCs, the majority of course-takers have at least one degree, many are educators, and quite a few have post-graduate qualifications. These are, for the most part, mature learners who know how to learn and probably think about how they do it.
For some, this is proof that online learning doesn’t work (self-evidently wrong, I’m glad to say, or I and hundreds of thousands of others would be out of a job, Wikipedia would vanish and Google Search would be largely abandoned). For others, it is proof that the pedagogies don’t work (not entirely right either, or no one would take them). The more informed, also known as those who think about it for more than two seconds, realize pretty quickly that MOOCs do not require any strong interest, let alone any significant commitment, to sign up to, nor do they demand any prerequisites. So, of course, most people ‘drop out’ within the first couple of weeks, if indeed they pay any attention at all beyond spending less than a minute signing up and vaguely thinking that it might be interesting to take part. They may have insufficient interest, they may find it too hard, too easy, too boring, or too engrossing and demanding of their time. Maybe they don’t like the professor. Maybe they have better things to do. Nor is it any surprise that people whose only commitment is time might drop out after the first couple of weeks – many get what they came for and stop, or they lose interest, or get distracted, or break their computers, or simply run out of time to keep working on it. There has been a little good research and a lot of useful speculation on this, for instance at http://www.katyjordan.com/MOOCproject.html, http://blogs.kqed.org/mindshift/2013/04/why-do-students-enroll-in-but-dont-complete-mooc-courses/, http://www.openculture.com/2013/04/10_reasons_you_didnt_complete_a_mooc.html, http://mfeldstein.com/emerging_student_patterns_in_moocs_graphical_view/ and http://donaldclarkplanb.blogspot.ca/2013/01/moocs-dropout-category-mistake-look-at.html.
But there is something odder going on here that seems to be mostly slipping under the radar, apart from the odd mention here and there by people like Alan Levine and a few others. I’ve long been bothered by the mysterious and improbable fact that, in higher education, all learning is neatly divisible into 13 (or 15, or 10, or something in that region) week chunks. This normally equates to an average of around 100 hours of study time, give or take a bit. Whatever the particular length chosen, they are almost always unaccountably multiples of chunks of the same size at any given institution, and that size is broadly comparable to other courses/modules/papers/units/etc in other institutions. It’s enough to make you wonder whether there might be a god as it suggests intelligent design may be at work here.
Actually, it’s the result of unintelligent design. This is an evolutionary process in which path dependencies pile up and push their way into adjacent possibles.
So, why do we have courses (or modules/papers/units/etc depending on your geographical region)?
Well, in the first place, it is true that some things take longer to learn than others. Not everything can be mastered by asking a question or looking it up on Wikipedia. That’s completely fair and reasonable. It doesn’t, however, explain why it takes the same amount of time (or multiples of it) for everyone, regardless of skill, experience or engagement, to master everything – Modern European Philosophy, Chemistry 101, Java Data Structures, Literary Culture & the Enlightenment, Icelandic Politics: all fit the same evenly sized periods, or multiples of them. For an explanation of that, we have to turn to a combination of harvest schedules, Christian holidays and the complexities of managing scarce physical resources that are bound by physics to a single and somewhat constrained teaching space.
The word ‘lecturer’ derives from the fact that lecturers used to read from the very valuable and scarce single copies of books held by institutions. Lecture theatres and classrooms were thus the most efficient way to get the content of books heard by the largest possible number of people. If you want to get a lot of people to listen at once then it helps if they are actually there so, if they are taking a religious holiday or helping with the harvest (this last point is a little contentious as it doesn’t fully explain a long break from July to October), there is no point in standing up and talking to an empty lecture hall. So, putting aside Easter’s irritating habit of moving around from year to year that continues to mess up university teaching schedules, this divides things up quite neatly into roughly 13 week chunks separating harvest, Christmas, and Easter breaks. The period may vary a little, but the principle is the same.
This pattern has become quite deeply set into how learning happens at most universities, even though the original reasons it occurred might have faded into insignificance had they not become firmly embedded through momentum and the power of path dependencies. Assessment became intimately linked to the schedule, with ‘mid-terms’ and ‘finals’ and then came to act as a major driver in its own right. Teacher pay and time was allocated according to easily managed chunks and resources. Enrolments, registrations, convocations and the familiar rhythms of the university calendar helped to consolidate the pattern, largely driven by a need for efficiency and bureaucratic convenience. It is really hard to allocate teachers and students to rooms. Up to this point, there was no particular reason to divide the learning experience into modularized chunks and many universities did (and some still do) simply have programs (or programmes or, to confuse matters, courses lasting 3-5 years) with perhaps a few streams but without distinct modularized elements. To cap it off and set it in stone, three forces coincided. One was a laudable desire to allow students the flexibility to take some control over what they learned. Another was the need to simplify the administration of programs. The last was the need to assert equivalence between what is taught at institutions, whether for certification purposes or for credit transfer. This last force, in particular, has meant that this way of dividing learning into modular chunks of a similar length has become a worldwide phenomenon, even in countries for which Easter and Christmas have no meaning or value.
All of this happened because there had to be a means of managing scarce resources shared among many co-present people as efficiently as possible but, for centuries, there has been no good reason for picking this particular term-length apart from the force of technological momentum. There have been innovations, here and there. Athabasca University, for instance, gives undergraduates 6 months (extendible at a price) in which to complete work in any way and timeframe that will fit their needs. Similarly, the University of Brighton runs ‘short fat’ masters modules that last for half a week, combined with a period of self-study before and after. But, in order to maintain accreditation parity, the amount of work expected of students on such courses broadly equates to what, in conventional classes, would take – yes – 13-15 weeks. Technically, thanks to a bit of reverse engineering, this translates into roughly 100 hours of study in the UK, a little more or less elsewhere, particularly where people take the insanely bad North American approach of counting teaching hours rather than study hours (what madness gripped people that made them think that was a good idea?). Whatever the rationale, this has nothing to do with learning, nothing to do with the nature of a topic or subject area, nothing to do with the best way to teach. It’s just the way it turned out, and certification requirements reinforce that anti-educational trend.
Courses are not neutral technologies. One of the least loveable things about them is that their content, form and process are, at least ostensibly, controlled by teachers from start to finish. Courses are a power trip for educators that, in institutional incarnations, often require some quite unpleasant measures to maintain control, typically based on long-discredited models of human psychology that rely heavily on rewards and punishments – grades, attendance requirements, behavioural constraints in classrooms, etc. That is just plain stupid if you actually want people to learn and believe that it is your job to help that process. There can be few methods apart from deliberate torture and punishment that more reliably take motivated, enthusiastic learners and sap the desire to learn from them. We do this because courses are a certain length and we think that students have to engage in the whole thing or not at all.
Students, meanwhile, have little choice but to accept this or to drop out of the system, but that’s tricky because those uniform-size credentials have become the currency for gaining career advancement and getting a job in the first place.
Teachers need to work on maintaining that control because there are very few topics that can, in and of themselves, sustain a large number of individuals’ interest for 13 solid weeks, and those that do are highly unlikely to naturally fit into that precise timeframe. Sure, some students may passionately love the whole thing and may have learned to gain some immunity from the demotivating madness of it all, or the teacher may be one of those rare inspiring people that enthuses everyone she gets to teach. But, for most students, it will be, at best, a mixed bag. Even for those that enjoy much of it, some will be irrelevant, some too easy, some too complicated, some simply dull. But they have to do it because that is what the teacher demands; teachers have to fit their courses to this absurd length limit because that is what their institutions demand; and institutions do it because that is how it has always been done and everyone else does it.
This is not logical.
So much of what makes a great teacher is therefore the ability to overcome insanely stacked odds and work the system so that at least a fair number of people get something good out of it. Teachers have to find ways to enthuse and motivate, to design assessments that are constructively aligned, to perform magic tricks that limit the damage of grading, to build flexible activities that provide learners with a bit of self-determination and control. Sadly, many do not even do that, relying on this juggernaut and the whole unwieldy process to crush students into submission (of assignments). It really doesn’t have to work like that.
This systemic failure is tragic, but understandable and forgivable. There is massive momentum here and opposition to change is designed into the system. It would take a brave teacher to explain to administrators and examination boards that she has decided that the topic she is teaching actually only needs 4 weeks to teach. Or 33 weeks. Or whatever. And, no, it will not have any parity with other courses on the same subject: OK? I would not relish that fight. It is considerably more tragic and less easy to forgive when, without any of those constraints – no formal accreditation, no institutional timetables, no harvest, no regulations, no scarcity of resources – a few MOOC purveyors do the same thing. What is going on in their heads? My sense is that it is the Meow Mix song…
Thankfully, an increasing number are not doing that at all: a glance through the range of MOOCs currently on offer via the (excellent) MOOC aggregator at http://www.class-central.com/ shows a range of lengths between 2 and 15 weeks as well as a goodly range of self-paced courses of somewhat indeterminate length. After early attempts mostly replicated university courses, the norm now appears to be around 6 weeks, and falling fast. The rough graphs below (that I created based on class-central’s data) of those starting soon and those that have already finished illustrate this trend quite nicely. Note in particular the relative drop in 10-week and longer courses and the rise in those of 4, 6 and 8 weeks. It is far from all being down to better teaching – some of the rise in shorter courses is notably due to a trend towards samplers that are intended to draw people in to fee-paying courses – but there is a pattern here. And, to counterbalance such forces, it should be remembered that a fair number of the longer courses have ambitions to reintegrate their students within their paid-for broken systems, so they are sometimes timetabled with learning as a secondary consideration and retain their infeasible length.
MOOC lengths till now…
MOOC lengths for courses about to start…
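The tally behind those rough graphs is easy to reproduce for anyone who wants to check the trend themselves. A minimal sketch follows; the course lengths here are made-up stand-ins for what you would scrape or copy from class-central, not the actual data behind the graphs above.

```python
from collections import Counter

# Hypothetical course lengths in weeks, standing in for a list
# gathered from class-central.com; these numbers are illustrative,
# not the real dataset used for the graphs in this post.
course_lengths = [4, 6, 6, 8, 6, 10, 4, 12, 6, 8, 4, 15, 6, 8, 2]

# Count how many courses run for each length, shortest first,
# and print a crude text histogram.
tally = Counter(course_lengths)
for weeks in sorted(tally):
    print(f"{weeks:>2} weeks: {'#' * tally[weeks]} ({tally[weeks]})")
```

Run against real listings, the same few lines make the shift towards 4-, 6- and 8-week courses immediately visible without any charting software at all.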
Getting away from courses
Though the interest in MOOCs is fuelled and sustained by the fact they are free (though sadly, increasingly not as open as they were in the halcyon days of cMOOCs), popular and online, the really interesting thing about them is the attention they are drawing to what is wrong with the notion, form and above all the length of the course. This little thing is the real revolution. It radically changes the power dynamics. If people begin to disaggregate their courses, making them shorter and less teacher-controlled, they will put learners ever more in control of their own learning, giving them choices and the power to make those choices. Better still, it means that teachers are starting to create courses without unnecessary time constraints that are the size they need to be for the subject being taught. Pedagogy, though still not coming first, is playing a more significant role. But this is just a step in the right direction.
The power of small things
People who question completion rates for MOOCs almost never ask those same questions about Q&A sites, Wikipedia, Khan Academy, Fixya or How-Stuff-Works tutorials, OERs and Google Search. Indeed, the notion of ‘completion’ probably means nothing significant for such just-in-time tools: they are useful, or they are not, they work or they don’t. People use them or they don’t. You might waste a few minutes here and there on things that are unhelpful and those minutes add up but, on the whole, just-in-time learning does what it says on the box. And people use these tools because they need to learn. If someone needs to or wants to learn, you have to try really hard to stop them. But just-in-time is not always the way to go.
Clubs, not courses
I am not a great programmer but it is something I have been doing from time to time for about 30 years. When I’m stuck, I increasingly turn to StackOverflow, a brilliant set of sites based around a collectivized form of discussion forum – a bit more sophisticated than Reddit, a bit less intimidating than SlashDot (which remains perhaps the greatest of all learning tools for anyone with geek tendencies, but which needs a fair bit of skill and effort to get the most out of). StackOverflow doesn’t have courses, but it does have answers, it does have discussions, and it does have some very powerful tools for finding answers that are reliable, useful and appropriate to any particular need. The need can range from the very specific and esoteric (‘why am I getting this error?’) to matters of principle (‘what methodology is best for this problem?’) to general learning (‘what’s the best way to get started in Ruby-on-Rails?’) and everything in between. It’s like having your own immensely wise team of personal tutors, without a beginning date, an end date, or a fixed schedule of activities. This is not a course – it’s more like a Massive Open Online Club, with no restrictions to membership, no commitments, no threshold to joining. Conveniently, this has the same acronym as a MOOC. In fact, just as MOOCs subtly transform the social contract that is involved with traditional courses, so these ‘clubs’ are not exactly like their hierarchical, closed, membership-based forebears. They are what Terry Anderson and I have described as sets: not exactly a network of people you know, certainly not a hierarchically organized system like a group, just a bunch of people with a shared interest, some of whom know more than others about some things.
But what about accreditation?
Why should accreditation be something that happens only in and as a result of a course? It is bizarre and open to abuse that the people who teach a course should also be its accreditors. It is strange in the extreme that they should be the ones to say that students have ‘failed’ when it is obvious that this failure is not just on the part of the students but also of their teachers, which makes those teachers very poor and biased judges of success. It might be just about acceptable if those teachers really are the only ones who know the topic of the course but that is rare. In Eire, students have a right to write and defend a PhD (by definition a unique bit of learning) in Gaelic. Despite the fact that the number of Gaelic speakers who are also experts in many PhD topics is not likely to be huge (unless the topic is Irish history or somesuch) they still manage to find expert examiners for them. It can be done.
At Athabasca University we have a challenge for credit option for many of our courses that can be used to demonstrate competence for certification purposes. Alternatively, if the match in knowledge is not precisely tuned to the credentials we award, we and many others have PLAR or APEL processes that typically use some form of portfolio to demonstrate competence in an area. And then there are upcoming and increasingly significant trends like the move to Open Badges, closed LinkedIn endorsements, gamified learning, or good old fashioned h-index scores that sometimes tell us more, at least as reliably, and in some ways in greater detail than many of our traditional accreditation methods.
There is seldom a good reason to closely link accreditation and learning and every reason not to. Giving rewards or punishments for learning is the academic equivalent of celery: digesting it burns more calories than it provides. Extrinsic motivators distort motivation so much that, on balance, they demotivate.
I have no doubt that some people might bemoan the loss of attention implied by just-in-time learning or this weakly structured club-oriented perspective on learning which has no distinct beginning and no specific end. It is true that courses do sometimes include things like ‘problem solving’, ‘argument’, ‘enquiry’, ‘research’ and ‘creativity’ among their intended outcomes and, assuming they provide opportunities to exercise and develop such skills, that’s a lot better than not having them. And some (indeed, many) courses are a genuinely good idea, because it really does take x amount of time to learn some things (where x is a large number) and learning works much more smoothly when you learn with other people and have a specific goal in mind. But many are not such a good idea, and most get the value of x completely wrong. No more should we assume that a 10-week (or 100-hour) course is the right amount of time needed to learn something than we should assume that the answer to teaching is a one-hour lecture (even though it sometimes really is part of a good answer).
There are those who cynically believe that the sole purpose of going to a university is to build a network of contacts and gain credentials that will be valuable in a future career, so you can do what you like to students while they are in college and it won’t matter a bit. In fact, there’s a fair bit of research that shows that it typically doesn’t, which is yet another reason to express concern that we are not doing it right. If that were really what universities were about then I would stop teaching now because it would be boring and pointless. I think that, if we claim that what we are doing is teaching then we should at least try to do so. But accredited, fixed-length courses get in the way of doing that.
It is true that much of the really interesting learning that goes on in courses is not really about the topic, but the process of learning itself – that is why there is a vague and hard to pin down notion of graduateness that makes a fair bit of sense even if it cannot be well expressed or measured, a problem that Dave Cormier and others have grappled with in interesting ways. I’m not at all against lengthy learning paths if that is what is needed to learn, nor do I object at all to letting someone guide you along that path if that is what will get you where you want to be, and I am very much in favour of learning with other people. My problem is that the fixed-size course with fixed learning outcomes and tightly integrated accreditation is not the only way, is seldom the best way, and is often the worst way to do it. The biggest thing that MOOCs are doing, and the most disruptive, is visibly disaggregating the learning process from the unholy alliance of mediaeval bureaucracy and Victorian accreditation methods. As long as MOOCs retain the form and structure of courses that are tied to these unholies, they will (from their purveyors’ rather than their students’ perspectives) mostly fail, and that is a good thing. Even cMOOCs, that deliberately eschew learning outcomes and fixed accreditation, still often fall into a trap of fixed lengths and processes. If we can learn something from that then they have served a useful purpose.
So there you have it – another long, opinionated piece about MOOCs with little empirical data and a lot of hot air. But I think the central point, that fixed course lengths and integrated accreditation lie at the heart of much that is wrong with traditional university education and that MOOCs bring that absurdity into sharp relief, is worth making. I hope you agree.
You may have seen my recent post on MOOPhDs and might be wondering whether I am contradicting myself here. Well, maybe a little, and there was a hint of satirical intent when I first suggested the idea, which attempted to exaggerate the concept of the MOOC to show the absurdity of courses. But the MOOPhD idea grew on me and it actually makes a little sense – it does not demand fixed-length courses and completely separates out the accreditation from the process, and is far more like an open club or support network than an open course. Indeed, the way PhDs, at least those that follow a vaguely European model, tend to be taught provides an expensive-to-implement but workable model of learning that entirely (or, following a sad trend towards greater bureaucratization in some countries, to a moderate extent) avoids courses. So, universities do know how to break the chains. Most just haven’t yet figured out how to do that for their mass-produced courses.
A brave or, more accurately, foolhardy attempt to marry Bloom’s (unempirical and unsubtle) taxonomy to the (equally unempirical but worthy of reflection) SAMR model of technology, which categorizes technologies in terms of relative transformative capacity, with examples of appropriate iPad tools to cover each segment of both wheels. Like most such models, it is way too neat. You simply cannot categorize things that relate to the complex world of learning in such coarse and simple ways – in the case of both Bloom and SAMR, it ain’t what you do so much as the way that you do it that makes all the difference in the world, and the tools linked to are mostly much more interesting (and, conversely, much more boring) than the diagram suggests. However, like many such models, it is not a bad bit of scaffolding, or at least a springboard for reflection that encourages one to think about things that, without it, might be missed, especially if you are not an expert in pedagogy or technology.
Infographic on relative cheating rates online and not (summary – no significant difference overall). A bit lacking in references to reliable sources though – I suspect a certain amount of cherry picking has gone on here.
I’d feel a lot more positive towards this report if its abstract did not begin: “Online learning is quickly gaining in importance in U.S. higher education, but little rigorous evidence exists as to its effect on student learning outcomes”.
For all the ‘rigour’ of their review, it appears that they failed to do a literature review, because an absolutely massive amount of rigorous evidence on exactly this question already exists, all showing the same thing. Anyway, here is some more. And, like all the rest, the graphs look nice but otherwise it is pretty pointless. You might just as well look at the effect of transistors or buildings on student learning outcomes. It ain’t what you do, it’s the way that you do it – that’s what’s missing here.
For the record, they are not actually looking at online learning but at blended (or, as they prefer, ‘hybrid’) approaches.
A charmingly naive article taking a common-sense, straightforward approach to asking whether the woefully uniform pedagogies of the more popular Coursera-style MOOCs might actually work. The authors identify the common pedagogies of popular MOOCs then use narrative analysis to see whether there has been empirical research to show whether those pedagogies can work. The answer, unsurprisingly, is that they can. It would have been a huge surprise if they couldn’t. This is a bit like asking whether email can be used to communicate.
I like the way this article is constructed and the methods used. Its biggest contribution is probably the very simple (arguably simplistic) description of the central pedagogies of MOOCs. Its ‘discoveries’ are, however, spurious. The fact that countless millions of people do learn online using some or all of the pedagogical approaches used by MOOCs is plenty evidence enough that their methods can work and it really doesn’t demand narrative analysis to demonstrate this blindingly obvious fact – one for the annals of obvious research, I think. Like all soft technologies, it ain’t what you do, it’s the way that you do it, that’s what gets results. ‘Can work well’ in general does not mean ‘does work well’ in the particular. We know that billions of people have learned well from books, but that does not mean that all books teach well, nor that books are the best way to teach any given subject.
At the Edtech Innovation 2013 conference last week I attended an impressive talk from Jose Ferreira on Knewton, a tool that does both large-scale learning analytics and adaptive teaching. Interesting and ingenious though the tool is, its implications are chilling.
Ferreira started out his talk with a view of the history of educational technology that somewhat mirrors my own, starting with language as the seminal learning technology that provided the foundation for the rest (I would also consider other thinking tools like drawing, dance and music as being important here, but language is definitely a huge one). He then traced technology innovations like writing, printing, etc. and, a little inaccurately, mapped these to their reach within the world population. So, printing reached more people than writing, for instance, and formal schooling opened up education to more people than earlier cottage industry approaches. That mapping was a bit selective as it ignored the near-100% reach of language, as well as the high penetration of broadcast technologies like TV, radio and cinema. But I was OK with the general idea – that educational technologies offer the potential for more people to learn more stuff. That is good.
The talk continued with a commendable dismissal of the industrial model of education that developed a couple of hundred years ago. This model made good economic sense at the time and made much of the improvement to the human condition since then possible (and the improvements are remarkable), but it makes use of a terrible process that was a necessary evil at the time but that, with modern technologies and needs, no longer makes sense. From a learning perspective it is indeed ludicrous to suggest that groups of people of a similar age should learn the same way at the same time. But there is more. Ferreira skipped over an additional, crucial concern with this model of education. A central problem with the industrial model, when used for more than basic procedural knowledge, is not just that everyone is learning the same way at the same time but that they are (at least if it works, which it thankfully doesn’t) learning the same things. That is a product of the process, not its goal. No one but a fool would deliberately design a system that way: it is simply what happens when you have to find a solution to teaching a lot of people at once, with only simple technologies like timetables, classrooms and books to help, and a very limited set of teaching resources to handle it. It is not something to strive for, unless your goal is cultural and socio-economic subjugation. Although, as people like Illich and Freire eloquently demonstrated a long time ago, such oppression may be the implicit intent, most of us would prefer that not to be the case. Thankfully, what and how we think we teach is very rarely, if ever, precisely what and how people actually learn. At least, that has been the case till now. The Knewton system might actually make that process work.
Knewton has two distinct functions that were not clearly separated in Ferreira’s talk but that are fundamentally different in nature. The first is the feedback on progress for teachers and learners that the system provides. With a small proviso that bad interpretations of such data may do much harm, I think the general idea behind that is great, assuming a classroom model and the educational system that surrounds it remain much as they are now. The technology provides information about learner progress and teaching effectiveness in a palatable form that is genuinely useful in guiding teachers to better understand both how they teach and the ways that students are engaging with the work. It is technically impressive and visually appealing – little fleas on an ontology map showing animated versions of students’ learning paths are cool. Given the teaching context that it is trying to deal with, I have no problems with that idea and applaud the skill and ingenuity of the Knewton team in creating a potentially useful tool for teachers. If that were all it did, it would be excellent. However, the second and far more worrying broad function of Knewton is to channel and guide learners themselves in the ‘right’ direction. This is adaptive hypermedia writ large, and it is emphatically not great. This is particularly problematic as it is based on a (large) ontology of facts and concepts that represent what is ‘right’ from an expert perspective, not on the values of such things nor on the processes for achieving mastery, which may be very different from their ontological relationships with one another.
There is one massive problem with adaptive hypermedia of this nature, notwithstanding the technical problems thanks to the inordinate complexity of the algorithms and mass of data points used here, and ignoring the pedagogical weaknesses of treating expert understanding as a framework for teaching. The big problem is more basic: it assumes there is a right answer to everything. This is an approach to teaching and learning (in that order) that is mired in an objectives-driven model. But my reaction here (and while he was talking) to Ferreira’s talk, which I assume was meant to teach me about Knewton, self-referentially shows that’s not always the main value in effective teaching and learning. Basically, what he wanted to tell me is clearly not, mainly, what I learned. And that is always the case in any decent learning experience worthy of the name. In fact, the backstories, interconnections, recursive, iterative constructions and reconstructions of knowledge that go on in most powerful learning contexts are typically the direct result of what might be perceived by those seeking efficient mastery of learning outcomes as inefficiency. In educational transactions that work as they should, some of what we learn can be described by learning outcomes but the real big learning that goes on is usually under the waterline and goes way beyond the defined objectives. While skill acquisition is a necessary part of the process and helps to provide foci and tools to think with, meaningful learning is also transformative, creative and generative, and it hooks into what we already know in unpredictable ways.
So Knewton is reinforcing a model that deals with a less-than-complete subset of the value of education. So what? There’s nothing wrong with that in principle and that’s fine if that is all it does. We don’t have to listen to its recommendations; the whole Web is just a click away; and, most importantly, we can construct our own interpretations and make our own connections based on what it helps to teach us. It gives us tools to think with. If Knewton is part of a learning experience, surely there is nothing wrong with making it easier to reach certain objectives? If nothing else, teaching should make learning less painful and difficult than it would otherwise have been, and that’s exactly what the system is doing. The problem, though, is that, if Knewton works as advertised, the paths it provides probably are the most efficient way to learn whatever fact or procedure the system is trying to teach us. This leads to the crucial problem: assuming it works, Knewton reinforces our successful learning strategies (as measured by the narrow objectives of the teacher) and encourages us to take those paths again and again. Because it adapts to us, rather than making us adapt ourselves, we are not stretched to find our own ways through something confusing or vague, and we don’t get to explore less fruitful paths that sometimes lead to serendipity and, less commonly but more importantly, to transformation – paths that stretch us to learn differently. Knewton, if it works as intended, makes a filter bubble that restricts the range of ways that we have to learn, creating habits of behaviour that send us ever more efficiently to our goals. Fundamentally, learning changes people, and learning how to learn in different ways, facing different problems differently, is precisely what it is all about: mechanical skills just give us better tools for doing that. The Knewton model does not encourage change and diversity in how we learn: it encourages reinforcement.
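The reinforcement worry can be made concrete with a toy simulation. This is emphatically not Knewton’s actual (proprietary) algorithm, and all the numbers are invented for illustration – it is just a minimal epsilon-greedy sketch showing the shape of the behaviour: a recommender that always picks the path with the highest estimated success rate locks onto a single path almost immediately, while a learner who occasionally wanders tries many more of them.

```python
import random

def simulate(epsilon, rounds=500, n_paths=5, seed=0):
    """Count how many distinct learning paths a learner ever tries when a
    recommender exploits its best estimate, exploring alternatives only
    with probability `epsilon`. All probabilities here are invented."""
    rng = random.Random(seed)
    true_p = [0.9] + [0.6] * (n_paths - 1)   # hidden 'true' success rates; path 0 is best
    estimates = [0.5] * n_paths              # the recommender's running estimates
    counts = [0] * n_paths
    for _ in range(rounds):
        if rng.random() < epsilon:
            path = rng.randrange(n_paths)            # explore: wander off the recommended path
        else:
            path = estimates.index(max(estimates))   # exploit: take the 'most efficient' path
        counts[path] += 1
        success = rng.random() < true_p[path]
        # incremental mean: update the estimate only for the path actually taken
        estimates[path] += (success - estimates[path]) / counts[path]
    return sum(1 for c in counts if c > 0)   # number of distinct paths ever tried

bubble = simulate(epsilon=0.0)   # a purely 'efficient' adaptive recommender
wander = simulate(epsilon=0.2)   # a learner who sometimes takes a less promising path
```

The recommended path really is the best one, which is exactly the point: the narrowing happens not because the system is wrong but because it is right, so the learner never practises finding a way through anything else.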
That is probably fine if we want to learn (say) how to operate a machine, perform mathematical operations, or remember facts, as part of a learning process intended to achieve something more. However, though important, this is not the be-all and end-all of what effective education is all about and is arguably the lesser part of its value. Effective education is about changing how we think. Something that reinforces how we already think is therefore very bad. Human teachers model ways of knowing and thinking that open us up to different ways of thinking and learning – that’s what makes Knewton a useful tool for teachers, because it helps to better reveal how that happens and allows them to reflect and adapt.
None of this would matter all that much if Knewton remains simply one of an educational arsenal of weapons against ignorance in a diverse ecosystem of tools and methods. However, that does not match Ferreira’s ambitions for it: he wants it to reach and teach 100% of the world’s population. He wants it to be freely available and used by everyone, to be the Google of education. That makes it far more dangerous and that’s why it worries me. I am pleased to note that Ferreira is not touting the tool as having value in the teaching of softer subjects like art, literature, history, philosophy, or education, and that’s good. But there are those, and I hope Ferreira is not among them, who would like to analyse development in such learning contexts and build tools that make learning in such areas easier in much the same way as Knewton currently does in objectives-driven skill learning. In fact, that is almost an inevitability, an adjacent possible that is too tempting to ignore. This is the thin end of a wedge that could, without much care, critical awareness and reflection about the broader systemic implications, be even more disastrous than the industrial model that Ferreira rightly abhors. Jose Ferreira is a likeable person with good intentions and some neat ideas, so I hope that Knewton achieves modest success for him and his company, especially as a tool for teachers. But I hope even more that it doesn’t achieve the ubiquitous penetration that he intends.
Another post about MOOCs that misses the point. The author, Ronald Legon, seems hopeful that ‘MOOC 2.0’ will arrive with better pedagogy, more support and better design. I have no doubt that what he describes will happen, at least in places, but it is certainly not worthy of the ‘2.0’ moniker. It is simply an incremental natural evolution that adds efficiency and refinement to a weak model, but it’s not a paradigm shift.
The trouble is that Legon hasn’t bothered to check the history of the genre. The xMOOCs under attack here are not far off the 1990s attempts by organizations and companies to replicate the same strategies that worked for old-fashioned mass media. They were not so much ‘Web 1.0’ as a bastardization of what the Web was meant from the start to be. That is why those of us who had been doing ‘Web 2.0’ stuff since the early nineties hate the term. Similarly, xMOOCs are a bastardization of what MOOCs started out to achieve, and they miss the point entirely. What is the point? George Siemens has explained this better than I could.
Happily, many people are using xMOOCs in a cMOOC-like way, so they are succeeding in learning with one another despite weak pedagogies, unsuitable structures, and excessive length. While the intentions of the people who run them are quite different, many of the people using them to learn are doing so as part of a personal learning journey, in networks and learning communities with others, taking pieces that interest them from different MOOCs and mashing them up. They are in control, not the MOOC creators. Completion rates of less than 10% are a worry to the people who run them, not to those who don’t complete them (true, there may be some who are discouraged by the process, but I hope not).
MOOC 2.0, like Web 2.0, is likely to be what MOOC 1.0 (the real MOOC 1.0) tried to be – a cMOOC.
I do see a glowing future for great content of the sort created for these xMOOCs (big information-heavy sites of the sort found in the 1990s have never gone away and continue to flourish) but they may have to adapt a little. I think that they will have to disaggregate the chunks and let go of the control. It is encouraging to see an increasing tendency to reduce their size to 4-week versions, but the whole notion of a fixed-length course is crazy. Sometimes, 4 weeks will do. Sometimes, 4 minutes would be better. Occasionally, 4 years might be the ideal length. Whatever they turn out to be, they must be seen as parts of an individually assembled whole, not as large-scale equivalents of traditional approaches to teaching that only exist due to physical constraints in the first place and that are sustained not only by continuing constraints of that nature but by a ludicrously clunky, counter-productive and unreliable accreditation process.
During my recent visit to Curtin University, Torsten Reiners, Lincoln Wood and I started brainstorming what we think might be an interesting idea. In brief, it is to build and design what should eventually become a massive, open, online PhD program. Well, nearly. This is a work in progress, but we thought it might be worth sharing the idea to help spark other ideas, get feedback and maybe gather a few people around us who might be interested in it.
The starting point for this was thinking about ways of arranging crowd funding for PhD students, which evolved into thinking about other crowd-based/funded research support tools and systems to support that. For example, we looked at possible ways to not only crowd-fund research projects but to provide structures and tools to assist the process: forming and setting up project teams, connecting with others, providing project management support, proposal writing assistance, presenting and sharing results, helping with the process of writing reports and papers for publication, and so on. Before long, what we were designing began to look a little like a research program. And hence, the MOOPhD (or MOOD – massive open online doctorate).
A MOOPhD is a somewhat different kind of animal from a MOOC. It is much longer and much bigger, for a start – more of a program than a course. For many students it might, amongst other things, encapsulate a variety of MOOCs that would help them to gain knowledge of the research process, including a range of research methods courses and perhaps some more specific subject-related courses. This is quite apart from the central process of supporting the conduct of original research that would form the ‘course’ itself.
Perhaps the biggest difference between a MOOPhD and a MOOC, at least of the xMOOC variety, is the inevitable lack of certainty about the path to the destination. MOOCs usually have a fairly fixed and clear trajectory, as well as moderately fixed content and coverage. Even cMOOCs, which largely lack specified resources, outcomes and assessments, have topics and timetables mapped out in advance. While the intended outcomes of a PhD are typically pretty clear (the ability to perform original and rigorous research, to write academically sound papers and reports, to design a methodology, review literature, etc.), and there are commonalities in the process and landmarks along the way, the paths to reaching those goals are anything but determined. A PhD, to a far greater degree than most courses and lower level programs, specifies a method and processes, but not the content or pathways that will be taken along the way. This raises some very interesting and challenging questions about what we mean by ‘course’ and the wisdom and validity of MOOCs in general, but discussion of that can wait for another post. Suffice to say, it is a bit different from what we have seen so far.
It is likely that, for many, a PhD or other doctorate would not be the final outcome. People would pick and choose the parts that are of value, helping them to set up projects, write papers or form networks. Others might treat it as a useful resource for a more traditional doctoral learning journey.
So what might a MOOPhD look like?
A MOOPhD would, of necessity, be highly modular, offering student-controlled support for all parts of the research process, from research process teaching through initial proposals, project management and community support to paper writing and beyond. Students would choose the parts that would be of value to them at different times. Different students would have different needs and interests, and would need different support at different points along the journey. Some might just need a bit of help with writing papers. For others, the need might be for gaining specific skills such as statistical analysis or learning how to do reviews. More broadly, the role of a supervisory team in modelling practice and attitudes would be embedded throughout.
Importantly, apart from badges and certificates of ‘attendance’, a MOOPhD would not be concerned with accreditation. We would normally expect existing processes for PhDs by publication that are available at many institutions to provide the summary assessment, so the program itself would simply be preparation for that. As a result of this process, students would accrue a body of research publications that could be used as evidence of a sustained research journey, and a set of skills that would prepare them for viva voces and other more formal assessment methods. This would be good for universities as they would be able to award more PhDs without the immense resources that are normally needed, and good for students who would need to invest less money (and maybe be surrounded by a bigger learning community).
Some features and tools
A MOOPhD might contain (amongst other things):
A community of other research students, with opportunities to build and sustain networks of both peers (other students) and established researchers
MOOCs to help cover research methods, subject specialisms, etc
A great deal of scaffolding: resources to help explain the process, information about everything from ethics to citation, means and criteria to self-assess such as wizards, forms and questionnaires, guidelines for reviewing papers, etc
Mentors (not exactly supervisors – too tricky to deal with the numbers) including both experienced academics and others further on in the PhD process. Mentors might provide input to a group/action learning set of students rather than to individuals, and thus allow students to observe behaviours that the academics model.
Exemplars – e.g. marked-up reviews of papers. This is vital as one of the ways of allowing established academics to provide role models and show what it means to be an academic
Plentiful resources and links relevant to the field (crowd-generated)
A filtering and search system to help identify people and things
A means to provide peer review to others (akin to an online journal submission system)
A means to have one’s own ideas and papers reviewed by peers
Tutorial support – most likely a variant on action learning sets to support the process. This would cover the whole process from brainstorming, to literature review, to methodology design, to conduct and analysis of research, to evaluation etc. Ideally, each set would be facilitated by a professional academic or at least an experienced peer.
A professionally peer reviewed journal system, with experienced academic editorial committees and reviewers (who would only see papers already ranked highly in peer review), leading to publication
Support for gaining funding – including crowd funding – for the research, particularly with regard to projects needing resources not already available
Support for finding collaborators
Support for managing the process – both of the whole venture as well as specific projects
Non-academic support – counselling and advice
Tools and resources to find accreditors – this is not about providing qualifications but preparing students so that they can easily get them
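To make the pick-and-choose modularity described above concrete, here is a minimal sketch of how such a catalogue of support modules might be modelled. Every name, stage label and module in it is invented for illustration – this is a thought experiment, not a specification of any actual system:

```python
from dataclasses import dataclass, field

@dataclass
class SupportModule:
    """One self-contained piece of MOOPhD support; all names are illustrative."""
    name: str
    stage: str                   # e.g. "proposal", "methods", "writing"
    peer_reviewed: bool = False  # whether the module feeds the peer-review system

@dataclass
class Student:
    name: str
    current_stage: str
    completed: list = field(default_factory=list)

def recommend(student: Student, catalogue: list) -> list:
    """Suggest uncompleted modules matching the student's current stage.
    Nothing is mandatory: students pick and choose what is of value now."""
    return [m for m in catalogue
            if m.stage == student.current_stage and m.name not in student.completed]

catalogue = [
    SupportModule("Research methods MOOC", "methods"),
    SupportModule("Crowd-funding toolkit", "proposal"),
    SupportModule("Paper-writing action learning set", "writing"),
    SupportModule("Open peer-review exchange", "writing", peer_reviewed=True),
]
```

The key design point is that the student, not the program, drives the traversal: the catalogue knows nothing about a fixed path or a fixed length, only about which kinds of support exist at which stages.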
There are some complex and significant problems to solve before this becomes a reality, including:
Accreditation and motivation
The main idea behind this is to prepare students for a PhD by publication, not to award doctorates. It is essentially about managing a research learning process and helping students to publish results. However, sustaining motivation over a long period without the promise of accreditation might be an issue.
Access to resources
One of the biggest benefits of an institution for a PhD student is access to closed journals and libraries. While it is possible to pay for such access separately from a course, and a system would certainly contain links to ways of discovering open articles, this could be an obstacle. Of course, while we would not and could not condone the use of the community to share closed articles, it is hard to see how we could police such sharing.
Ethics
Without an institutional backdrop, there would be no easy way to ensure ethical research. Resources could be provided, action learning sets could be used to discuss such concerns, and counselling might be available (perhaps at a price) to help ensure that a process would be followed that wouldn’t pose an obstacle to gaining accreditation, but it would be difficult to ensure an ethically sound process was followed. This is an area where different countries, regions and universities follow different procedures anyway, and there is only broad uniformity around the world, so some flexibility would be needed.
Disputes and approvals
Beyond issues of ethics, there is a need to find solutions to disputes, grievances, allegations of cheating etc. This might be highly distributed and enabled through crowd-based processes. A similar issue relates to ‘approvals’ of research projects: there would probably need to be something akin to the typical review processes that determine whether a student’s progress and/or proposed path are sufficient. It is likely that action learning sets could play a big role in assisting this process.
Diversity of disciplines
The skills (and resources) needed for different types of PhD can vary enormously – the skills and resources needed by a mathematician are worlds away from those needed by someone engaged in literary criticism, which are worlds away from those needed by a physicist, astronomer or biologist. It would probably be too big a task to cater for all, and some might be all but impossible (e.g. if they require access to large hadron colliders or telescopes, or involve dangerous, large-scale or simply complex experiments). To some extent this is not the huge problem it first appears to be. Most of those interested in pursuing this process would probably already be working in a relevant field (and thus have resources to call upon) or already be enrolled in an academic program, which would reduce some of the problem. Even so, the most likely areas where this process could successfully be applied would be those requiring few resources beyond a good brain, commitment and a computer. There are opportunities for multiple instances of this process across multiple subject areas and disciplines. Given our interests and constraints, we would probably aim in the first instance for people interested in education, technology, business, or some combination of these. However, there is scope for a much broader diversity of systems, probably linked in some ways to gain the benefits of common shared resources and a larger community.
Building the crowd
As the point of this is to leverage the crowd, it will be of little value if there is not already a crowd involved. The availability of high-quality resources, links and MOOCs might be sufficient to provide an initial boost to draw people to the system, as would a team of interesting mentors and participants, but it would still take a while to pick up steam.
Openness and trust
In some fields, students are already reluctant to share information about their research, so this might be especially tricky in an open PhD process. Building sufficient trust in action learning sets and across the broader community may be problematic. Already, the openness needed for many MOOCs poses a challenge for some, but this process would require more disclosure on an ongoing basis than normal. This might be the price to be paid for an otherwise free program. However, the anticipated high drop-out rate would make it difficult to sustain tight-knit research groups/action learning sets over a prolonged period, and we would probably need to think more about cooperative than collaborative processes, so this may be difficult to manage.
Start-up costs and maintenance
This will not be a cheap system to build, though development might be staggered. Resources would be needed for building and maintaining the server(s), creating content, managing the editing process for the journal, and so on. Potential funding models include start-up grants, company sponsorship (the value to organizations of a process like this could be immense), crowd-funding, subscription, advertising/marketing, etc. Selling lists of participants bothers me, ethically, but a voluntary entry onto a register that might be passed on to interested companies for a fee might have high value. While we might not award doctorates, those who could stay the course would clearly be very desirable potential employees or research team members.
Encouraging academics to participate
Altruism and social capital can sustain a relatively brief open course, but this kind of process would (unless a different approach can be discovered) require long-term commitment and engagement by professional academics. There may be ways to provide value to academics beyond the pleasure of contributing and learning from students. For instance, students might be expected or required to cite academics as co-authors where those academics have had some input into the process, whether in feedback along the way or in reviewing and completing papers the students have written; academics might also be granted access to data collected by students. This would provide some incentive for academics to help ensure the quality of the research, and would help students by letting them see an experienced academic’s thinking processes in action.
This is a work in progress and there are some big obstacles in the way of making it a reality. We would welcome any ideas, suggestions or expressions of interest!