EdTechnology Ideas – Education Technology Journal

A new open-access educational technology journal. Looks slick, CC licence, a social approach, and I know and respect a couple of the editorial team, so I think it should be reliable and interesting.

Slightly less clear about the need for yet another journal in a crowded market, though I guess it’s good to have a thriving ecosystem with plenty of competing species. However, there is a balance between those benefits and the relatively small amount of attention that can be spread around. Now that there are plenty of open-access journals of this nature, I see a strong place for metajournals that consolidate writings around particular themes and/or that use curatorial skills to identify the best of the best. To some extent this occurs in isolated pockets like blogs and curated sites like Pinterest, but there is scope for more concerted and formalized efforts in this field.

Address of the bookmark: http://edtechnologyideas.com/

Guesses and Hype Give Way to Data in Study of Education – NYTimes.com

This is a report on the What Works Clearinghouse, a set of ‘evidence-based’ experimental studies of things that affect learning outcomes in US schools, measured in the traditional ‘did they do better on the tests’ manner. It’s a great series of reports.

I have a number of big concerns with this approach, however, quite apart from the simplistic measurements of learning outcomes that ignore what is arguably the most important role of education – it is about changing how you think, not just about knowing stuff or acquiring specific skills. There is not much measurement of that apart from, indirectly, through the acquisition of the metaskill of passing tests, which seems counter-productive to me. What bothers me more though is the naive analogy between education and clinical practice. The problem is an old one that Checkland expressed quite nicely when talking of soft systems:

“Thus, if a reader tells the author ‘I have used your methodology and it works’, the author will have to reply ‘How do you know that better results might not have been obtained by an ad hoc approach?’ If the assertion is: ‘The methodology does not work’ the author may reply, ungraciously but with logic, ‘How do you know the poor results were not due simply to your incompetence in using the methodology?’”

Not only can good methodologies be used badly, bad methodologies can be used well. Teaching and learning are creative acts, each transaction unique and unrepeatable. The worst textbook in the world can be saved by the best teacher, the best methodology can be wrecked by an incompetent or uncaring implementation. Viewed by statistical evidence alone, lectures are rubbish, but most of us who have been educated for long enough using such methods can probably identify at least the odd occasion when our learning has been transformed by one. Equally, if we have been subjected to a poorly conducted active learning methodology, we may have been untouched or, worse, put off learning about the subject. It ain’t what you do, it’s the way that you do it.

Comparing education with medicine is a category mistake. It would be better to compare it with music or painting, for instance. ‘Experimental studies show that children make better art with pencils than with paints’ might be an interesting finding as a statistical oddity, but it would be a crass mistake to therefore no longer allow children to have access to paintbrushes. ‘On average, children playing violins make a horrible noise’ would not be a reason to stop children from learning to play the violin, though it is undoubtedly true. But it is no more ridiculous than telling us that ‘textbook X leads to better outcomes than textbook Y’, that a particular pedagogy is more effective than another, or that a particular piece of educational software produces no measurable improvement over not using it. Interestingly, the latter point is made in a report from the ‘What Works Clearinghouse’ site at http://ies.ed.gov/ncee/pubs/20094041/pdf/20094041.pdf which, amongst other interesting observations, makes the point that the only thing that does make a statistical difference in the study is student/teacher ratios. Low ratios allow teachers to exhibit artistry, to adapt to learners’ needs, to demonstrate caring for individuals’ learning more easily. This is not about a method that works – it is about enabling multiple methods, adapted to needs. It is about allowing the teacher to be an artist, not an assembly-line worker implementing a fixed set of techniques.

I am not against experimental studies as long as we are very clear and critical in our interpretation of them and do not over-generalize the results. It would be very useful to know that something really does not ever work for anyone, but I’m not aware of many unequivocal examples of this. Even reward and punishment, which fails in the overwhelming majority of cases, has at least some evidence of success in some cases for some people – very few, but enough to show it is not always wrong.

Even doing nothing, which must surely be a prime candidate for universal failure, sometimes works very well. I was once in a maths class at school taken by a teacher who, for the last few months of the two-year course, was taken ill. His replacements (for some time we had a different teacher every week, most of whom were not maths teachers and knew nothing of the syllabus) did very little more than sit at the front of the class and keep order while we studied the textbook and chatted amongst ourselves. The average class grade in the national exams sat at the end of it all was considerably higher than had ever been achieved in that school previously – over half of us got A grades where, in the past, twenty percent would have been a good showing.

Of course, ‘nothing’ does not begin to describe what actually happened in the class in the absence of a teacher. The textbook itself was a teacher and, more importantly, we were one another’s teachers. Our sick teacher had probably inspired us, and the very fact that we were left adrift probably pulled us closer together and made us focus differently than we would have done in the presence of a teacher. Maybe we benefited from the diversity of stand-in teachers. We were probably the kind of group that would benefit from being given more control over our own learning – we were the top set in a school that operated a streaming policy so, had it happened to a different group, the results might have been disastrous. Perhaps we were just a statistically improbable group of maths geniuses (not so for me, certainly, so we might rule that one out!). Maybe the test was easier that year (unlikely, as about half a dozen other groups didn’t show such improvement, but perhaps we just happened to have learned the right things for that particular test). I don’t know.

And that is the point: the process of learning is hugely complex, multi-faceted, influenced by millions of small and large factors. Again, this is more like art than medicine. The difference between a great painting and a mediocre one is, in many cases, quantitatively small, and often a painting that disobeys the ‘rules’ may be far greater than one that keeps to them. The difference between a competent musician and a maestro is not that great, viewed objectively. In fact, many of my favourite musicians have objectively poor technique, but I would listen to them any day rather than a ‘perfect’ rendition of a midi file played by an unerring computer. The same is true of great teaching, although this doesn’t mean it is necessarily the result of a single great teacher – the role may be distributed among other learners, creators of content, designers of education systems, etc. I’m fairly sure that, on average, removing a teacher from a classroom at a critical point would not be the best way to ensure high grades in exams, but in this case it appeared to work, for reasons that are unclear but worth investigating. An experimental study might have overlooked us and, even if it did not, would tell us very little about the most important thing here: why it worked.

We can use experimental studies as a starting point for exploring how and why things fail and how and why they succeed. They are the beginning of a design process, or steps along the way, but they are not the end. It is useful to know that low student/teacher ratios are a strong predictor of success, but only because it encourages us to investigate why that is so. It is even more interesting to investigate why it does not always appear to work. Unlike clinical studies, the answer is seldom reducible to science and definitely not to statistics, but knowing such things can make us better teachers.

I look forward to the corollary of the What Works Clearinghouse – the Why it Works Clearinghouse.

Address of the bookmark: http://www.nytimes.com/2013/09/03/science/applying-new-rigor-in-studying-education.html?_r=0

LinkedIn launches LinkedIn for Education

This is about connecting people with the colleges they attended and with the people they went to college with, rather than being a service for academics like academia.edu or others of that ilk. It is an incremental change to the ways LinkedIn already pulls together people who claim the same institutional background, but an interesting development nonetheless.

Address of the bookmark: http://pro.gigaom.com/blog/linkedin-launches-linkedin-for-education/

MOOPhD accreditation

A recent post at http://www.insidehighered.com/views/2013/06/05/essay-two-recent-discussions-massive-open-online-education reminded me that the half-formed plan that Torsten Reiners, Lincoln Wood and I dreamt up needs a bit of work.

So, to add a little kindling to get this fire burning…

Our initial ideas centred around supporting the process of doing research and writing papers for a PhD by publication. This makes sense and, we have learned, PhDs by publication are actually the norm in many countries, Sweden and Malaysia among them, so it is, in principle, do-able and does not require us to think more than incidentally about the process of accreditation. However, there are often invisible or visible obstacles that institutions put in place to limit the flow of PhDs by publication: residency requirements, only allowing them for existing staff, high costs, and so on.

So why stop there?

Cranking the levers of this idea pump a little further, a mischievous thought occurs to me. Why not get a PhD on reputation alone? That is, after all, exactly how any doctorate is awarded, when it comes down to it: it is basically a means of using transferable reputation (think of this as more like a disease than a gift – reputations are non-rival goods), passing it on from an institution to an awardee, with a mutational process built in whereby the institution itself gets its own research reputation enhanced by a similar pass-it-on process. This system honours the institution at least as much as the awardee, so there’s a rich interchange of honour going on here. Universities are granted the right to award PhDs, typically through a government mandate, but they sustain their reputation and capacity to do so through ongoing scholarship, publication and related activities, and through the activities of those that they honour. A university that awarded PhDs without itself being a significant producer of research, or that produced doctors who never achieved any further research of any note, would not get very far. So, a PhD is only a signal of the research competence of its holder because an awarding body with a high reputation believes the holder to be competent, and it sustains its own reputation through the activities of its members and alumni. That reputation occurs because of the existence of a network of peers, and the network has, till now, mostly been linked through journals, conferences and funding bodies. In other words, though someone goes to the trouble of aggregating the data, the actual vector of reputation transmission is through individuals and teams that are linked via a publication process.

So why not skip the middle man? What if you could get a PhD based on the direct measures of reputation that are currently aggregated at an institutional level rather than those that have been intentionally formalized and aggregated using conventional methods?

Unpicking this a little further, the fact that someone has had papers published in journals implies that they have undergone the ordeal by fire of peer review, which should mean they are of doctoral quality. But that doesn’t mean they are any good. Journals are far from equal in their acceptance rates and the quality of their reviewers – there are those with good reputations, those with bad ones, and a lot in between. Citations by others help to assure us that papers may have something of value in them, but citations often come as a result of criticism, and do not imply approval of the source. We need a means to gauge quality more accurately. That’s why the h-index was invented. There are lots of reasons to be critical of this and similar measures: they fail to value great contributions (Einstein would have had a very low h-index had he only published his most important contributions), they embody the Matthew Effect in ways that make their real value questionable, they poorly distinguish large and small contributions to collaborative papers, and the way they rank the importance of journals etc. is positively mediaeval. It is remarkable to me to surf through Google Scholar’s rankings and find that people who are among the most respected in my field have relatively low indexes while those who just plug away at good but mundane research have higher ones. Such indexes do nonetheless imply the positive judgements of many peers, with more rigour and fairness than would normally be found in a doctoral committee, and they give a usable number to grade contributions. So, a high h-index or i10-index (Google’s measure of papers with at least 10 citations) would satisfy at least part of the need for validation of quality of research output. But, by definition, they undervalue the work of new researchers, so they would be poor discriminators if they were the only means to evaluate most doctorates. On the other hand, funding councils have already developed fairly mature processes for evaluating early-career researchers, so perhaps some use could be made of those. Indeed, the fact that someone has successfully gained funding from such a council might be used as partial evidence towards accreditation.
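
For concreteness, the h-index is easy to compute: it is the largest h such that an author has h papers each cited at least h times, while Google’s i10-index simply counts papers with at least 10 citations. A minimal sketch in Python (all citation counts invented) makes the Einstein problem obvious:

```python
def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    # Google Scholar's i10-index: number of papers with at least 10 citations.
    return sum(1 for cites in citations if cites >= 10)

# Invented examples: a hypothetical 'Einstein' with a few monumental papers
# scores lower on h than a steady producer of modestly cited ones.
monumental = [5000, 3000, 2000]            # h = 3, i10 = 3
steady = [15, 14, 12, 11, 11, 10, 10, 9]   # h = 8, i10 = 7
print(h_index(monumental), h_index(steady))  # 3 8
```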

A PhD, even one by publication, is more than just an assortment of papers. It is supposed to show a sustained research program and an original contribution to knowledge. I hope that there are few institutions that would award a PhD to someone who had simply had a few unrelated papers published over a period of years, or to someone who had done a lot of mundane but widely cited reports with no particular research merit. So, we need a bit more than citation indexes or other evidence of being a world-class researcher to offer a credible PhD-standard alternative form of certification.

One way to do this would be to broadly mirror the PhD-by-publication process within the MOOC. We could require peer ‘marking’, by a suitable panel, of a paper linking a range of others into a coherent bit of doctoral research, perhaps defended in a public webmeeting. This would be a little like common European defence processes, in which theses are defended in front of not just professors but also any members of the public (typically colleagues, friends and families) who want to come along. We could increase the rigour a little by requiring that those participating in such a panel have a sufficiently high h-index or i10-index of their own in a similar subject area, and/or a relevant doctorate. Eventually the system could become self-supporting, once a few graduates had emerged. In time, being part of such a panel would become a mark of prestige in itself. Perhaps, for pedagogic and systemic reasons, engagement in such a panel would be a prerequisite for making your own ‘doctoral’ defence. Your rating might carry a weighting according with your own reputational index, with those starting out weighted quite low and those with doctorates, ‘real’ doctoral students and so on weighted higher. The candidates themselves and other more experienced examiners might rate these novice examiners, so a great review by an early-career examiner might increase his or her own ranking. It might be possible to make use of OpenBadges for this, with badges carrying different weights according to who awarded them and what they were awarded for.
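
To make the weighting idea a little more concrete, here is a toy sketch of how reputation-weighted panel verdicts might be aggregated. This is not a worked-out proposal: the names, weights and pass mark are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Examiner:
    name: str
    weight: float  # invented reputational weighting, e.g. derived from an
                   # h-index, a doctorate, or ratings from past panels

def panel_verdict(ratings, pass_mark=0.7):
    # Weighted mean of examiners' ratings (each in 0..1): a novice's view
    # counts for something, an established expert's view counts for more.
    total_weight = sum(examiner.weight for examiner, _ in ratings)
    score = sum(examiner.weight * rating for examiner, rating in ratings) / total_weight
    return score, score >= pass_mark

panel = [
    (Examiner("early-career volunteer", 0.2), 0.9),
    (Examiner("holder of a relevant PhD", 1.0), 0.8),
    (Examiner("high h-index researcher", 1.5), 0.7),
]
score, passed = panel_verdict(panel)
print(round(score, 2), passed)  # 0.75 True
```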

Apart from issues of motivation, the big problem with the peer-based approach is that it could be seen as one of the blind leading the blind, as well as potentially raising ethical issues in terms of bias and lack of accountability. A ‘real’ PhD committee/panel/etc is made up of carefully chosen gurus with an established reputation or, at least, it should be. In North America these are normally the people that supervise the student, which is dodgy, but which normally works OK due to accountability and professional ethics. Elsewhere examiners are external and deliberately unconnected with the candidate, or consist of a mix of supervisors and externals. Whatever the details, the main point here is that the examiners are fully accredited experts, chosen and vetted by the institutional processes that make universities reliable judges in the first place. So, to make it more accountable, more use needs to be made of that reputational network that sustains traditional institutions, at least at the start. To make this work, we would need to get a lot of existing academics with the relevant skills on board. Once it had been rolling for a few years, it ought to become self-sustaining.

This is just the germ of an idea – there are lots of ways we could build a very cheap system that would have at least as much validity as the accreditation procedures used by most universities. If I were an employer, I’d be a lot more impressed by someone with such a qualification than I would by someone with a PhD from most universities. But I’m just playing with ideas here. My intent is not to create an alternative to the educational system, though that would be very interesting and I don’t object to the idea at all, but to highlight the often weird assumptions on which our educational systems are based and to ask some hard questions about them. Why and on what grounds do we set ourselves up as arbiters of competence? What value do we actually add to the process? How, given the propensities of new technologies and techniques, could we do it better?

Our educational systems are not broken at all: they are actually designed not to work. Well, ‘design’ is too strong a word as it suggests a central decision-making process has led to them, whereas they are mainly the result of many interconnected decisions (most of which made sense at the time but, in aggregate, result in strange outcomes) that stretch back to mediaeval times. Things like MOOCs (and related learning tools like Wikipedia, the Khan Academy, StackOverflow, etc) provide a good opportunity to think more clearly and concretely about how we can do it better and why we do it the way we do in the first place.

The Psychology of Hiring: Why Brainteasers Don't Belong in Job Interviews : The New Yorker

An interesting article that makes a very straightforward and obvious point, with some evidence: brainteasers in job interviews do little more than demonstrate the candidate’s ability to do brainteasers in job interviews. They do not predict success in the jobs they are filtering for. The parallel implications relating to typical exam processes and practices in educational systems are clear.

Address of the bookmark: http://www.newyorker.com/online/blogs/elements/2013/06/why-brainteasers-dont-belong-in-job-interviews.html

Students riot after teachers try to stop them from cheating on exams

If someone had made this up I might have thought they had gone a little too far down the satirical path to be entirely believable. And yet…

‘Outside, more than 2,000 people had gathered to vent their rage, smashing cars and chanting: “We want fairness. There is no fairness if you do not let us cheat.” The protesters claim cheating is endemic in China and that sitting the exams without help puts their children at a disadvantage.’

One parent assaulted an invigilator who had refused a bribe after confiscating a cellphone hidden in a student’s underwear. The invigilators were holed up in the examination halls and had to send calls for help over the Internet. Radio transmitters and receivers were confiscated (some hidden inventively in erasers), and at least two groups trying to communicate with examinees were found in a nearby hotel. I don’t know whether they found all of them. Probably not, if they were anything like those discussed at http://www.china.org.cn/english/China/172006.htm which reports on things like earpieces that had to be surgically removed when they got stuck or, most awe-inspiring of all, an ‘interphone’ that exploded inside a student’s abdomen.

A study at http://ojs.library.ubc.ca/index.php/cjhe/article/view/183537/183482 suggests that 58% of Canadian students cheated in high school exams, though the numbers fall as level of study increases, with ‘only’ 9% of graduate students admitting to cheating in exams. The level of cheating in coursework is significantly higher across the board. These are sobering figures, given that the results are self-reported and may thus give an optimistic picture.

From ingenious uses of high-tech cameras and transmitters, watches that display books’ worth of notes, and hidden earpieces, to bottles of water with crib sheets printed on the inside of the label or notes engraved on fingernails, cheating technologies are big business. There are some amazingly smart tools and methods available online, such as those at http://24kupi.com, http://www.cheat-on-exam.com and http://www.wikihow.com/Cheat-On-a-Test (which, for any students thinking this might be a good idea, invigilators know about too). However, with embeddable technologies, tattooed circuits, and increasingly tiny smart devices, the possibilities are growing fast.

This is an arms race that no one can win. Cheats get smarter at least as fast as institutions get wiser but some will always be caught and all will live in fear of being caught. However, the value of a qualification is directly proportional to its validity so, if that is called into question, everyone loses – cheats, institutions, non-cheats and society as a whole. It is more than a bit worrying that there are medical professionals, safety inspectors and architects who cheated in their exams, especially as the evidence suggests this attitude persists throughout cheats’ careers. Endemic cheating is a tragedy of the commons. If you cannot trust a qualification then there is no point in having one and all become valueless.

Can we do something about it? Yes, but it requires a concerted effort, and better detection technologies are only a small part of the answer. It is perfectly possible to design assignments that are engaging, personal, relevant and largely cheat-proof. I’ve yet to find a foolproof method that cannot be foiled by a determined cheat who employs someone else to impersonate them and take a whole course on their behalf. However, we can stop or render harmless simpler contract cheating, plagiarism, collusion, bribes and other common methods of cheating through simple process design. Courses where no student ever does the same thing, where learning is linked to personal interests and aspirations, where each part is interconnected with every other and the output of one part is the input of the next, are both more engaging and more cheat-proof. Amazingly, I have had students who attempt to cheat even then but, because of the built-in checks of the design, they fail anyway. Multiple examiners and public displays of work are a good idea too – non-cheating students can usually be relied upon to point out examples of cheating even if the examiners miss them.

We can get rid of the traditional regurgitation format of exams, or make use of alternative and less spoofable variations like oral exams, especially those that require students to draw on unique coursework experience rather than uniform replication of process and content. We can help educate students in how not to cheat and make a point of reminding them that it is a bad thing to do. And we can get to know our students better, both to reduce the likelihood of cheating and to discover it more easily should it occur. Most of these methods cost time, effort and money when compared with the common industrial one-size-fits-all models they are up against. But they all lead to better learning, provide more reliable discrimination of competence, offer greater immunity to cheating, and are fairer to everyone. If we stack that up against the staggeringly high costs of endemic cheating, they begin to look like much more efficient alternatives.

Address of the bookmark: http://www.theprovince.com/news/Students+China+riot+after+teachers+stop+them+from/8554083/story.html

The Roots of Grades-and-Tests

Excellent dismissal by Alfie Kohn of the massive systematic idiocy of grading and testing. Some great arguments made, but I think the main one is summarized most succinctly thus: 

“Extrinsic inducements, of which G&T is the classic example in a school setting, are devices whereby those with more power induce those with less to do something.  G&T isn’t needed for assessment, but it is very nearly indispensable for compelling students to do what they (understandably) may have very little interest in doing. “

We have to work out better ways of teaching than this. It is not right for an educational institution to continue to do something so antagonistic to learning.

Address of the bookmark: http://www.alfiekohn.org/teaching/gradesandtests.htm

Learning Locker

Very interesting new development, not quite finished yet but showing great promise – a simple means to aggregate content from your learning journey, supporting open standards. This is not so much a personal learning environment as a bit of glue to hold it together. The team putting it together have some great credentials, including one of the co-founders of Elgg (used here on the Landing) and the creator of the Curatr social learning platform.

Currently it appears that its main open standard is the Tin Can API (xAPI), ADL’s successor to SCORM, but there are bigger plans afoot. I think that this kind of small, powerful service that disaggregates learning journeys from monolithic systems (including those such as the Landing, Moodle, MOOCs and Blackboard-based systems) is going to be a vital disruptive component in enabling richer, more integrated learning in the 21st century.
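
For the curious, an xAPI statement is just a tiny actor–verb–object record in JSON, sent to a learning record store (LRS) such as Learning Locker. Here is a minimal sketch in Python of what sending one looks like; the endpoint URL and credentials are invented placeholders, not Learning Locker’s actual configuration:

```python
import requests

# A minimal xAPI (Tin Can) statement: who did what to which activity.
statement = {
    "actor": {"name": "Example Learner", "mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/intro-to-xapi",
        "definition": {"name": {"en-US": "Intro to xAPI"}},
    },
}

# Hypothetical LRS endpoint and credentials -- substitute your own.
response = requests.post(
    "https://lrs.example.com/data/xAPI/statements",
    json=statement,
    headers={"X-Experience-API-Version": "1.0.0"},
    auth=("key", "secret"),
)
response.raise_for_status()  # the LRS returns the new statement id(s) on success
```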

This is the description of the tool from the site itself:

“It’s never been easier to be a self-directed learner. Whether you’re in school or at work, you’re always learning. And it’s not just courses that teach. The websites you visit, the blogs you write, the job you do; it’s all activity that contributes to your personal growth.

Right now you’re letting the data all this activity creates slip through your fingers. You could be taking control of your learning; storing your experience, making sense of what you do and showing off what you know.

Learning Locker helps you to aggregate and use your learning data in an environment that you control. You can use this data to help quantify your abilities, to help you reach personal targets and to let others share in what you do.

It’s time to take your data out of the hands of archaic learning management systems that you can’t reach. We use new technologies, like the xAPI, to help you take control of your learning. It’s your data. Own it.”

Address of the bookmark: http://www.learninglocker.net/

Unintelligent design and the modern MOOC

Everyone is talking about MOOCs.

Every institution of higher learning I visit or talk with seems intent on joining the MOOC scrum or, if not, is coming up with arguments why it shouldn’t. There is also a wealth of poorly considered, badly researched opinion pieces, many of them published by otherwise fairly reputable journals and news sources. I’ve been doing my bit to add poorly researched opinion too, talking in various venues about a few ideas and opinions that are not sufficiently rigorously explored to make into a decent paper. This post is still not worthy of a paper, but I think the main idea in it is worth sharing anyway. To save you the trouble of reading the whole thing, I’m going to be making the point that MOOCs disrupt because they quietly remove two of the almost-never-questioned but most-totally-nonsensical foundations on which most traditional university teaching is based – integral accreditation and fixed course lengths – and that their poor completion rates therefore encourage, even force, us to ask ourselves why we do such things. My hope is that the result of such reflection will be to bring about change. To situate my opinions relative to those of others, I will start by offering a slight caricature of the three main stances that people seem to be taking on MOOCs.

Opinion 1 – it’s all rubbish and online learning is pants

The cantankerati are, of course, telling us that there is nothing new here, or that online learning isn’t as good as face-to-face, or that it is all hype, or that the learning outcomes are not as good as those at (insert preferred institution, preferably one’s alma mater, here) etc. This is a fad, they tell us. They look at things like drop-out rates or Udacity partnering with Georgia Tech or Coursera moving into competition with Blackboard, or the fact that millennial college students prefer traditional to online classes (err – seriously? that’s like asking iPhone users if they prefer them to Android phones) and nod their heads sagely, smugly and in an ‘I told you so’ fashion. No doubt, when the bubble bursts (as it will) they will be the first to gloat. But they are wrong about the failings of MOOCs, on most significant counts.

Opinion 2 – it’s a step in the right direction, but (insert prejudice here)

Others think that there is something worth preserving here and are trying to invent new variants – usually xOOCs of some kind, or MOOxs, or, in rare cases, xOOxs – liking some aspects of the MOOC idea, such as openness or size, but not others. The acolytes of online learning (AOLs for short, oddly enough) are getting all excited about the fact that people are at last paying attention to what they have been saying for years, though most are tempering their enthusiasm with observations about the appalling pedagogies, the creation of a two-tier system of higher education, problems with accrediting MOOC learning, and high ‘dropout’ rates. They are wondering why these MOOCish upstarts haven’t read their own august works on the subject, which would obviously steer them right. They will, when pressed, grudgingly admit that these rank enthusiastic amateurs are (dammit) quite signally succeeding in ways they have only dreamed of, but they still know better. There are many of these, some of which are actually very thoughtful, penetrating and by no means unsubtle in their analysis: John Daniel’s well-informed, sagacious overview, Paul Stacey’s intelligent mourning of the overshadowing of a good idea, or Carol Edwards’s slightly jaundiced but interesting and revealing first-person report for BCIT, for instance. There are far more unsubtle and far less well-informed rants that I won’t bother linking here that complain about the pedagogies, or tell us that there is nothing new at all in this, or that think they see an alternative future etc. Oh, alright – here’s one that I find particularly silly and here are my comments on it.

Opinion 3 – the sky is falling! The sky is falling!

There is a third group that is fairly sure that MOOCs are very important and that they are causing or, at least, catalyzing a seismic shift in education. The popular press clearly demonstrates that there’s a revolution happening, for better or worse, and most people who hold this position want to be on that bandwagon, wherever it may be going. If not, they fear they will be left in the dust. There are some notable holders of this perspective who justify and examine their beliefs in intelligent ways, such as the ever-brilliant Donald Clark, for example, who has recently written a great series of posts that are both critical and rabble-rousing.

And many in between…

Between and spanning these caricatures are some really interesting and perceptive commentaries, and only a few have as clear-cut an opinion as I portray here. Aaron Bady’s post casting a critical eye on the hype, for example, picks apart the sky falling very carefully, and situates itself a little in the ‘right direction’ camp without being too much on the ‘but…’ side of things. The recent Edinburgh report on their pilot MOOCs is a model of careful research and openness to critical and creative thinking.   George Siemens’s excellent analysis of x-vs-c MOOCs is another great piece that avoids much bias one way or the other while identifying some of the key issues for the future.

Where I sit

You could call me a fan. My PhD (completed well over 10 years ago) was largely about how large online crowds can learn together. I’ve signed up for (but not completed) quite a few MOOCs since 2008, and I’ve been a more active participant at times, playing a teaching role in a couple and helping to lead one in early 2011. I ran my first education-oriented web server offering what we would now call open educational resources in 1993. I read an average of two or three articles on MOOCs every day, maybe more. I’ve joined up with the newly formed WideWorldEd project and have been engaged in discussions and planning about MOOCs at three different institutions.

I am definitely not one of the cantankerati, though I am highly sceptical of any blanket claim that a particular flavour of teaching leads to better or worse learning than any other, be it online or not. It ain’t what you do, it’s the way that you do it.

I do not believe that the pedagogies of most MOOCs are particularly bad or retrograde. Talking heads, objective tests and other favourite tools of early xMOOC providers are not my cup of tea, and the chaos of cMOOCs (which I like a lot more) seems to favour only a few neterate winners, but most that I have seen are actually at least as good as their paid-for counterparts. There are quite a lot that do not fall neatly into either of these main camps too – e.g. http://ds106.us – and both camps have a lot in common with each other that neither seems particularly happy to acknowledge: connectivist networks thread through and around xMOOCs and disrupt their neat outlines, while cMOOCs often employ what look and smell a lot like instructivist lectures as significant parts of the process. But, whatever the similarities, what and how people teach is seldom what and how people actually learn, so it is not that important. Quality is not a direct correlate of the pedagogies and other technologies used. In fact, it is interesting to note that a recent article on MOOC junkies highlighted the greater significance of passion in the professor, something I and many others have been saying for quite a while. It ain’t what you do, it’s the way that you do it.

For me, the sky is not falling yet, though it certainly has a few more interesting colours than it had a year or two ago, and there are some fascinating systemic effects that are mostly, but not all, positive. But this is not the beginning of the end of higher education as we know it. In some ways, it could be the beginning of something much more interesting.

What really appeals to me most about MOOCs is their almost universally low completion rates. Whatever this means for MOOCs themselves, and however much it upsets their providers (not their learners), in my opinion this is by far their most positive systemic feature. While ‘it ain’t what you do, it’s the way that you do it’, there is one important proviso that needs to be added: there are some things you can do that will most probably – and in some cases definitely – fail to get results. And this is really what this post is about.

So, what about those completion rates?

One thing that many of the cantankerati, the fearfully curious and the AOLs amicably agree on is that the fact that most people drop out of most MOOCs shows that there is something wrong with the idea, or with how it has been implemented, or both. Some MOOCs struggle to keep 2% of their students, while the best (on horse feeding, as it happens) have managed a little over 40%. The vast majority (so far) have succeeded in keeping less than 10% of their students to the bitter end. This is particularly odd given that, on most MOOCs, the majority of course-takers have at least one degree, many are educators, and quite a few have post-graduate qualifications. These are, for the most part, mature learners who know how to learn and probably think about how they do it.

For some, this is proof that online learning doesn’t work (self-evidently wrong, I’m glad to say, or I and hundreds of thousands of others would be out of a job, Wikipedia would vanish and Google Search would be largely abandoned). For others, it is proof that the pedagogies don’t work (not entirely right either, or no one would take them). The more informed, also known as those who think about it for more than two seconds, realize pretty quickly that MOOCs do not require any strong interest, let alone any significant commitment, to sign up for, nor do they demand any prerequisites. So, of course, most people ‘drop out’ within the first couple of weeks, if indeed they pay any attention at all beyond spending less than a minute signing up and vaguely thinking that it might be interesting to take part. They may have insufficient interest, they may find it too hard, too easy, too boring, or too engrossing and demanding of their time. Maybe they don’t like the professor. Maybe they have better things to do. Nor is it any surprise that people whose only commitment is time might drop out after the first couple of weeks – many get what they came for and stop, or they lose interest, or get distracted, or break their computers, or simply run out of time to keep working on it. There has been a little good research and a lot of useful speculation on this, for instance at http://www.katyjordan.com/MOOCproject.html, http://blogs.kqed.org/mindshift/2013/04/why-do-students-enroll-in-but-dont-complete-mooc-courses/, http://www.openculture.com/2013/04/10_reasons_you_didnt_complete_a_mooc.html, http://mfeldstein.com/emerging_student_patterns_in_moocs_graphical_view/ and http://donaldclarkplanb.blogspot.ca/2013/01/moocs-dropout-category-mistake-look-at.html

But there is something odder going on here that seems to be mostly slipping under the radar, apart from the odd mention here and there by people like Alan Levine and a few others.  I’ve long been bothered by the mysterious and improbable fact that, in higher education, all learning is neatly divisible into 13 (or 15, or 10, or something in that region) week chunks. This normally equates to an average of around 100 hours of study time, give or take a bit. Whatever the particular length chosen, they are almost always unaccountably multiples of chunks of the same size at any given institution, and that size is broadly comparable to other courses/modules/papers/units/etc in other institutions. It’s enough to make you wonder whether there might be a god as it suggests intelligent design may be at work here.

Actually, it’s the result of unintelligent design. This is an evolutionary process in which path dependencies pile up and push their way into adjacent possibles.

So, why do we have courses (or modules/papers/units/etc depending on your geographical region)?

Well, in the first place, it is true that some things take longer to learn than others. Not everything can be mastered by asking a question or looking it up on Wikipedia. That’s completely fair and reasonable. It doesn’t, however, explain why it takes the same amount of time (or multiples of it) for everyone, regardless of skill, experience or engagement, to master everything – Modern European Philosophy, Chemistry 101, Java Data Structures, Literary Culture & the Enlightenment, Icelandic Politics: all fit the same evenly sized periods, or multiples of them. For an explanation of that, we have to turn to a combination of harvest schedules, Christian holidays and the complexities of managing scarce physical resources that are bound by physics to a single and somewhat constrained teaching space.

The word ‘lecturer’ derives from the fact that lecturers used to read from the very valuable and scarce single copies of books held by institutions. Lecture theatres and classrooms were thus the most efficient way to get the content of books heard by the largest possible number of people. If you want to get a lot of people to listen at once then it helps if they are actually there so, if they are taking a religious holiday or helping with the harvest (this last point is a little contentious as it doesn’t fully explain a long break from July to October), there is no point in standing up and talking to an empty lecture hall. So, putting aside Easter’s irritating habit of moving around from year to year, which continues to mess up university teaching schedules, this divides things up quite neatly into roughly 13-week chunks separated by harvest, Christmas, and Easter breaks. The period may vary a little, but the principle is the same.

This pattern has become quite deeply set into how learning happens at most universities, even though the original reasons it occurred might have faded into insignificance had they not become firmly embedded through momentum and the power of path dependencies. Assessment became intimately linked to the schedule, with ‘mid-terms’ and ‘finals’ and then came to act as a major driver in its own right. Teacher pay and time was allocated according to easily managed chunks and resources. Enrolments, registrations, convocations and the familiar rhythms of the university calendar helped to consolidate the pattern, largely driven by a need for efficiency and bureaucratic convenience. It is really hard to allocate teachers and students to rooms. Up to this point, there was no particular reason to divide the learning experience into modularized chunks and many universities did (and some still do) simply have programs (or programmes or, to confuse matters, courses lasting 3-5 years) with perhaps a few streams but without distinct modularized elements. To cap it off and set it in stone, three forces coincided. One was a laudable desire to allow students the flexibility to take some control over what they learned.  Another was the need to simplify the administration of programs. The last was the need to assert equivalence between what is taught at institutions, whether for certification purposes or for credit transfer. This last force, in particular, has meant that this way of dividing learning into modular chunks of a similar length has become a worldwide phenomenon, even in countries for which Easter and Christmas have no meaning or value.

All of this happened because there had to be a means of managing scarce resources shared among many co-present people as efficiently as possible but, for centuries, there has been no good reason for picking this particular term-length apart from the force of technological momentum.  There have been innovations, here and there. Athabasca University, for instance, gives undergraduates 6 months (extendible at a price) in which to complete work in any way and timeframe that will fit their needs. Similarly, the University of Brighton runs ‘short fat’ masters modules that last for half a week, combined with a period of self-study before and after. But, in order to maintain accreditation parity, the amount of work expected of students on such courses broadly equates to what, in conventional classes, would take – yes – 13-15 weeks. Technically, thanks to a bit of reverse engineering, this translates into roughly 100 hours of study in the UK, a little more or less elsewhere, particularly where people take the insanely bad North American approach of counting teaching hours rather than study hours (what madness gripped people that made them think that was a good idea?).  Whatever the rationale, this has nothing to do with learning, nothing to do with the nature of a topic or subject area, nothing to do with the best way to teach. It’s just the way it turned out, and certification requirements reinforce that anti-educational trend.

So what?

Courses are not neutral technologies. One of the least loveable things about them is that their content, form and process are, at least ostensibly, controlled by teachers from start to finish. Courses are a power trip for educators that, in institutional incarnations, often require some quite unpleasant measures to maintain control, typically based on long-discredited models of human psychology that rely heavily on rewards and punishments – grades, attendance requirements, behavioural constraints in classrooms, etc. That is just plain stupid if you actually want people to learn and believe that it is your job to help that process. There can be few methods apart from deliberate torture and punishment that more reliably take motivated, enthusiastic learners and sap the desire to learn from them. We do this because courses are a certain length and we think that students have to engage in the whole thing or not at all.

Students, meanwhile, have little choice but to accept this or to drop out of the system, but that’s tricky because those uniform-size credentials have become the currency for gaining career advancement and getting a job in the first place.

Teachers need to work on maintaining that control because there are very few topics that can, in and of themselves, sustain a large number of individuals’ interest for 13 solid weeks, and those that do are highly unlikely to naturally fit into that precise timeframe. Sure, some students may passionately love the whole thing and may have learned to gain some immunity from the demotivating madness of it all, or the teacher may be one of those rare inspiring people who enthuses everyone she gets to teach. But, for most students, it will be, at best, a mixed bag. Even for those who enjoy much of it, some will be irrelevant, some too easy, some too complicated, some simply dull. But they have to do it because that is what the teacher demands, and teachers have to fit their courses to this absurd length limit because that is what their institutions demand, and institutions do it because that is how it has always been done and everyone else does it.

This is not logical.

So much of what makes a great teacher is therefore the ability to overcome insanely stacked odds and work the system so that at least a fair number of people get something good out of it. Teachers have to find ways to enthuse and motivate, to design assessments that are constructively aligned, to perform magic tricks that limit the damage of grading, to build flexible activities that provide learners with a bit of self-determination and control. Sadly, many do not even do that, relying on this juggernaut and the whole unwieldy process to crush students into submission (of assignments). It really doesn’t have to work like that.

This systemic failure is tragic, but understandable and forgivable. There is massive momentum here and opposition to change is designed into the system. It would take a brave teacher to explain to administrators and examination boards that she has decided that the topic she is teaching actually only needs 4 weeks to teach. Or 33 weeks. Or whatever. And, no, it will not have any parity with other courses on the same subject: OK? I would not relish that fight. It is considerably more tragic and less easy to forgive when, without any of those constraints – no formal accreditation, no institutional timetables, no harvest, no regulations, no scarcity of resources  – a few MOOC purveyors do the same thing. What is going on in their heads? My sense is that it is the Meeowmix song…

[Video: Meeow-Mix song]

Thankfully, an increasing number are not doing that at all: a glance through the range of MOOCs currently on offer via the (excellent) MOOC aggregator at http://www.class-central.com/ shows a range of lengths between 2 and 15 weeks as well as a goodly range of self-paced courses of somewhat indeterminate length. After early attempts mostly replicated university courses, the norm now appears to be around 6 weeks, and falling fast. The rough graphs below (that I created based on class-central’s data) of those starting soon and those that have already finished illustrate this trend quite nicely. Note in particular the relative drop in 10-week and higher courses and the rise in those of 4, 6 and 8 weeks. While it is far from all being down to better teaching – some of the rise in shorter courses is notably due to a trend towards samplers that are intended to draw people in to fee-paying courses – there is a pattern here. And, to counterbalance such forces, it should be remembered that a fair number of the longer courses have ambitions to reintegrate their students within their paid-for broken systems, so they are sometimes timetabled with learning as a secondary consideration and so retain their infeasible length.

MOOC lengths till now…

[Figure: MOOC lengths (past)]

MOOC lengths for courses about to start…

[Figure: MOOC lengths (future)]

Getting away from courses

Though the interest in MOOCs is fuelled and sustained by the fact that they are free (though sadly, increasingly not as open as they were in the halcyon days of cMOOCs), popular and online, the really interesting thing about them is the attention they are drawing to what is wrong with the notion, the form and, above all, the length of the course. This little thing is the real revolution. It radically changes the power dynamics. If people begin to disaggregate their courses, making them shorter and less teacher-controlled, they will put learners ever more in control of their own learning, giving them choices and the power to make those choices. Better still, it means that teachers are starting to create courses without unnecessary time constraints – courses that are the size they need to be for the subject being taught. Pedagogy, though still not coming first, is playing a more significant role. But this is just a step in the right direction.

The power of small things

People who question completion rates for MOOCs almost never ask those same questions about Q&A sites, Wikipedia, Khan Academy, Fixya or How-Stuff-Works tutorials, OERs and Google Search. Indeed, the notion of ‘completion’ probably means nothing significant for such just-in-time tools: they are useful, or they are not; they work, or they don’t; people use them, or they don’t. You might waste a few minutes here and there on things that are unhelpful, and those minutes add up but, on the whole, just-in-time learning does what it says on the box. And people use these tools because they need to learn. If someone needs to or wants to learn, you have to try really hard to stop them. But just-in-time is not always the way to go.

Clubs, not courses

I am not a great programmer but it is something I have been doing from time to time for about 30 years. When I’m stuck, I increasingly turn to StackOverflow, a brilliant set of sites based around a collectivized form of discussion forum – a bit more sophisticated than Reddit, a bit less intimidating than SlashDot (which remains perhaps the greatest of all learning tools for anyone with geek tendencies, but which needs a fair bit of skill and effort to get the most out of). StackOverflow doesn’t have courses, but it does have answers, it does have discussions, and it does have some very powerful tools for finding answers that are reliable, useful and appropriate to any particular need. The need can range from the very specific and esoteric (‘why am I getting this error?’) to matters of principle (‘what methodology is best for this problem?’) to general learning (‘what’s the best way to get started in Ruby-on-Rails?’) and everything in between. It’s like having your own immensely wise team of personal tutors, without a beginning date, an end date, or a fixed schedule of activities. This is not a course – it’s more like a Massive Open Online Club, with no restrictions on membership, no commitments, no threshold to joining. Conveniently, this has the same acronym as a MOOC. In fact, just as MOOCs subtly transform the social contract that is involved with traditional courses, so these ‘clubs’ are not exactly like their hierarchical, closed, membership-based forebears. They are what Terry Anderson and I have described as sets: not exactly a network of people you know, certainly not a hierarchically organized system like a group, just a bunch of people with a shared interest, some of whom know more than others about some things.

But what about accreditation?

Why should accreditation be something that happens only in and as a result of a course? It is bizarre and open to abuse that the people who teach a course should also be its accreditors. It is strange in the extreme that they should be the ones to say that students have ‘failed’ when it is obvious that this failure is not just on the part of the students but also of their teachers, which makes those teachers very poor and biased judges of success. It might be just about acceptable if those teachers really are the only ones who know the topic of the course but that is rare. In Eire, students have a right to write and defend a PhD (by definition a unique bit of learning) in Gaelic. Despite the fact that the number of Gaelic speakers who are also experts in many PhD topics is not likely to be huge (unless the topic is Irish history or somesuch) they still manage to find expert examiners for them. It can be done.

At Athabasca University we have a challenge-for-credit option for many of our courses that can be used to demonstrate competence for certification purposes. Alternatively, if the match in knowledge is not precisely tuned to the credentials we award, we and many others have PLAR (prior learning assessment and recognition) or APEL (accreditation of prior experiential learning) processes that typically use some form of portfolio to demonstrate competence in an area. And then there are upcoming and increasingly significant trends like the move to Open Badges, closed LinkedIn endorsements, gamified learning, or good old-fashioned h-index scores that sometimes tell us more, at least as reliably, and in some ways in greater detail than many of our traditional accreditation methods.
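
As a glimpse of how lightweight such credentials can be, here is a sketch of an Open Badges assertion, loosely following the early (1.x) specification: a small piece of hosted JSON that anyone can fetch and verify. It is expressed here as a Python dict, and every URL and value is invented for illustration.

```python
# A hypothetical Open Badges assertion with hosted verification, loosely
# following the 1.x specification. Every URL and value is invented.
assertion = {
    "uid": "abc123",
    "recipient": {
        "type": "email",
        "hashed": True,
        "salt": "somesalt",
        "identity": "sha256$<hash of recipient email + salt>",  # elided
    },
    "badge": "https://example.org/badges/doctoral-examiner.json",  # the BadgeClass
    "evidence": "https://example.org/evidence/defence-review-42.html",
    "issuedOn": 1370000000,  # Unix timestamp
    "verify": {
        "type": "hosted",
        "url": "https://example.org/assertions/abc123.json",
    },
}
```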

There is seldom a good reason to closely link accreditation and learning and every reason not to.  Giving rewards or punishments for learning is the academic equivalent of celery – to digest it consumes more calories than it actually provides, distorting motivation so much that it demotivates.

Summing up

I have no doubt that some people might bemoan the loss of attention implied by just-in-time learning or this weakly structured club-oriented perspective on learning which has no distinct beginning and no specific end. It is true that courses do sometimes include things like ‘problem solving’, ‘argument’, ‘enquiry’, ‘research’ and ‘creativity’ among their intended outcomes and, assuming they provide opportunities to exercise and develop such skills, that’s a lot better than not having them. And some (indeed, many) courses are a genuinely good idea, because it really does take x amount of time to learn some things (where x is a large number) and learning works much more smoothly when you learn with other people and have a specific goal in mind. But many are not such a good idea, and most get the value of x completely wrong. No more should we assume that a 10-week (or 100-hour) course is the right amount of time needed to learn something than we should assume that the answer to teaching is a one-hour lecture (even though it sometimes really is part of a good answer).

There are those who cynically believe that the sole purpose of going to a university is to build a network of contacts and gain credentials that will be valuable in a future career, so you can do what you like to students while they are in college and it won’t matter a bit. In fact, there’s a fair bit of research showing that it typically doesn’t matter, which is yet another reason to be concerned that we are not doing it right. If that were really what universities were about then I would stop teaching now, because it would be boring and pointless. I think that, if we claim that what we are doing is teaching, then we should at least try to do so. But accredited, fixed-length courses get in the way of doing that.

It is true that much of the really interesting learning that goes on in courses is not really about the topic but about the process of learning itself – that is why there is a vague and hard-to-pin-down notion of graduateness that makes a fair bit of sense even if it cannot be well expressed or measured, a problem that Dave Cormier and others have grappled with in interesting ways. I’m not at all against lengthy learning paths if that is what is needed to learn, nor do I object at all to letting someone guide you along that path if that is what will get you where you want to be, and I am very much in favour of learning with other people. My problem is that the fixed-size course with fixed learning outcomes and tightly integrated accreditation is not the only way, is seldom the best way, and is often the worst way to do it. The biggest thing that MOOCs are doing, and the most disruptive, is visibly disaggregating the learning process from the unholy alliance of mediaeval bureaucracy and Victorian accreditation methods. As long as MOOCs retain the form and structure of courses that are tied to these unholies, they will (from their purveyors’ rather than their students’ perspectives) mostly fail, and that is a good thing. Even cMOOCs, which deliberately eschew learning outcomes and fixed accreditation, still often fall into a trap of fixed lengths and processes. If we can learn something from that then they have served a useful purpose.

So there you have it – another long, opinionated piece about MOOCs with little empirical data and a lot of hot air. But I think the central point, that fixed course lengths and integrated accreditation lie at the heart of much that is wrong with traditional university education and that MOOCs bring that absurdity into sharp relief, is worth making. I hope you agree.

Afterword

You may have seen my recent post on MOOPhDs and might be wondering whether I am contradicting myself here. Well, maybe a little, and there was a hint of satirical intent when I first suggested the idea, which attempted to exaggerate the concept of the MOOC to show the absurdity of courses. But the MOOPhD idea grew on me and it actually makes a little sense – it does not demand fixed-length courses, it completely separates the accreditation from the process, and it is far more like an open club or support network than an open course. Indeed, the way PhDs tend to be taught, at least those that follow a vaguely European model, provides an expensive-to-implement but workable model of learning that entirely (or, following a sad trend towards greater bureaucratization in some countries, to a moderate extent) avoids courses. So, universities do know how to break the chains. Most just haven’t yet figured out how to do that for their mass-produced courses.

Wheel on SAMR and Bloom's Digital Taxonomy

A brave or, more accurately, foolhardy attempt to marry Bloom’s (unempirical and unsubtle) taxonomy with the (equally unempirical but worthy of reflection) SAMR model, which categorizes technologies in terms of relative transformative capacity, with examples of appropriate iPad tools to cover each segment of both wheels. Like most such models, it is way too neat. You simply cannot categorize things that relate to the complex world of learning in such coarse and simple ways – in the case of both Bloom and SAMR, it ain’t what you do so much as the way that you do it that makes all the difference in the world, and the tools linked to are mostly much more interesting (and, conversely, much more boring) than the diagram suggests. However, like many such models, it is not a bad bit of scaffolding, or at least a springboard for reflection, that encourages one to think about things that might otherwise be missed, especially if you are not an expert in pedagogy or technology.

Address of the bookmark: http://www.educatorstechnology.com/2013/05/a-new-wonderful-wheel-on-samr-and.html?utm_source=dlvr.it&utm_medium=linkedin