Journal of Interactive Media in Education – Open for Learning Special Issue

A special issue of JIME on open learning with five chapters (full disclosure: including one by me and Terry Anderson) from a forthcoming book edited by Chris Pegler and Allison Littlejohn, ‘Reusing Open Resources: Learning in Open Networks for Work, Life and Education’.

I’ve skimmed through the pre-publication draft of the book from which these articles are taken and (not counting our own chapter, about which I may be a little biased) I’m impressed. It covers some very important topics, features some excellent authors, and has a great pair of editors. It deserves to do well.

When responding to the call for chapters, Terry and I were concerned about the irony of a book on openness appearing as a closed publication. It is therefore very pleasing that, at least for these five chapters, it is walking the talk. JIME is a fine journal and has been open since back when that was unfashionable, so I am delighted to at last have an article appear there, and I congratulate Chris and Allison on a job very well done.

Address of the bookmark: http://www-jime.open.ac.uk/jime/issue/view/2014-ReusingResources-OpenforLearning

Five myths about Moocs | Opinion | Times Higher Education

Diana Laurillard chipping in with a perceptive set of observations, most interestingly describing education as a personal client industry, in which tutor/student ratios are remarkably consistent at around 1:25, so it is no great surprise that it doesn’t scale up. It seems to me that she is quite rightly attacking a particular breed of xMOOC (edX, Coursera, etc.), but it doesn’t have to be this way, and she carefully avoids discussing *why* that ratio is really needed – her own writings and her variant on conversation theory suggest there might be alternative ways of looking at this.

Her critique that xMOOCs appear to succeed only for those who already know how to be self-guided learners is an old chestnut that hits home. She is right to say that MOOCs (xMOOCs, at least) are pretty poor educational vehicles if the only people who benefit are those who can already drive, and it supports her point about the need for actual teachers for most people *if* we continue to teach in a skeuomorphic manner, copying the form of traditional courses without thinking about why we do what we do and how courses actually work.

For me, this shows once again that the way MOOCs are being implemented is wrong: we have to get away from the ‘course’ part of the acronym and start thinking about what learners really need, rather than what universities want to give them.

Address of the bookmark: http://www.timeshighereducation.co.uk/comment/opinion/five-myths-about-moocs/2010480.article

Thirteen Ways of Looking at a MOOC | The Seven Futures

A charming variant on a Wallace Stevens poem, replacing the blackbird with the MOOC. A little heavy on metaphor and simile here and there, but it makes a lot more sense than most scholarly articles I’ve read on the subject of MOOCs, and I’ve read far too many of them.

Address of the bookmark: http://www.thesevenfutures.com/blog/thirteen-ways-looking-mooc-0

IGI Global: Open Access

This is a very interesting development. I’ve not looked fully into the fine print, but on the face of it IGI, publisher of my first book and of a number of chapters and articles I have written over the years, may have seen the light and be moving, at least in part, to an open publishing model, with free and open sharing and a Creative Commons licensing structure. This is big news, as IGI is quite a significant player in the academic publication market.

Until now, IGI’s draconian terms and conditions and shameless profiteering at the expense of hard-working academics had put me off ever working with them again, but this looks like something that might well change my mind. At the moment it is in beta and looks like it is intended only for papers, but I applaud them for taking this initiative and hope that it will be extended to their book publishing business too.

Address of the bookmark: http://www.igi-global.com/open-access/

Killing stupid software patents is really easy, and you can help

I’ve very rarely come across a software patent that is not really stupid, that does not harm everyone apart from patent trolls and lawyers, and that is not anticipated by obvious prior art. This article explains how anyone can easily put a stop to such patents before they do any damage. Great stuff.

Address of the bookmark: http://boingboing.net/2013/07/24/killing-stupid-software-patent.html

MOOPhD accreditation

A recent post at http://www.insidehighered.com/views/2013/06/05/essay-two-recent-discussions-massive-open-online-education reminded me that the half-formed plan that Torsten Reiners, Lincoln Wood and I dreamt up needs a bit of work.

So, to add a little kindling to get this fire burning…

Our initial ideas centred around supporting the process of doing research and writing papers for a PhD by publication. This makes sense: PhDs by publication are, we have learned, actually the norm in many countries, including Sweden and Malaysia, so the idea is in principle do-able and does not require us to think more than incidentally about the process of accreditation. However, institutions often put visible or invisible obstacles in place that limit the flow of PhDs by publication: residency requirements, restricting them to existing staff, high costs, and so on.

So why stop there?

Cranking the levers of this idea pump a little further, a mischievous thought occurs to me. Why not get a PhD on reputation alone? That is, after all, exactly how any doctorate is awarded, when it comes down to it: it is basically a means of using transferable reputation (think of this as more like a disease than a gift – reputations are non-rival goods), passing it on from an institution to an awardee, with a mutational process built in whereby the institution itself gets its own research reputation enhanced by a similar pass-it-on process. This system honours the institution at least as much as the awardee, so there is a rich interchange of honour going on here.

Universities are granted the right to award PhDs, typically through a government mandate, but they sustain their reputation and capacity to do so through ongoing scholarship, publication and related activities, and through the activities of those they honour. A university that awarded PhDs without itself being a significant producer of research, or that produced doctors who never achieved any further research of note, would not get very far. So, a PhD is only a signal of research competence in its holder because an awarding body with a high reputation believes the holder to be competent, and that body sustains its own reputation through the activities of its members and alumni. That reputation exists because of a network of peers, and the network has, till now, mostly been linked through journals, conferences and funding bodies. In other words, although institutions do the work of aggregating it, the actual vector of reputation transmission is the individuals and teams linked via the publication process.

So why not skip the middle man? What if you could get a PhD based on direct measures of reputation, rather than on reputation that is currently formalized and aggregated at an institutional level using conventional methods?

Unpicking this a little further, the fact that someone has had papers published in journals implies that they have undergone the ordeal by fire of peer review, which should mean the papers are of doctoral quality. But that doesn’t mean they are any good. Journals are far from equal in their acceptance rates and in the quality of their reviewers: there are those with good reputations, those with bad ones, and a lot in between. Citations by others help to assure us that a paper may have something of value in it, but citations often come as a result of criticism and do not imply approval of the source. We need a means to gauge quality more accurately, which is why the h-index was invented.

There are lots of reasons to be critical of this and similar measures: they fail to value great contributions (Einstein would have had a very low h-index had he only published his most important work), they embody the Matthew Effect in ways that make their real value questionable, they poorly distinguish large and small contributions to collaborative papers, and the way they rank the importance of journals and the like is positively mediaeval. It is remarkable to me to surf through Google Scholar’s rankings and find that some of the most respected people in my field have relatively low indexes, while those who just plug away at good but mundane research have higher ones. Such indexes do nonetheless imply the positive judgements of many peers, with more rigour and fairness than would normally be found in a doctoral committee, and they give a usable number by which to grade contributions. So a high h-index or i10-index (Google’s measure of the number of papers with at least 10 citations) would satisfy at least part of the need for validation of the quality of research output. But, by definition, these indexes undervalue the work of new researchers, so they would be poor discriminators if they were the only means to evaluate most doctorates. On the other hand, funding councils have already developed fairly mature processes for evaluating early-career researchers, so perhaps some use could be made of those. Indeed, the fact that someone has successfully gained funding from such a council might be used as partial evidence towards accreditation.
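To make these measures concrete, here is a minimal sketch (the function names are my own) of how the two indexes are computed from a list of per-paper citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Google Scholar's i10-index: the number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# A hypothetical researcher with a handful of modestly cited papers:
papers = [48, 22, 10, 6, 3, 1, 0]
print(h_index(papers))    # 4 (four papers with at least 4 citations each)
print(i10_index(papers))  # 3
```

Note that a single paper with thousands of citations still yields an h-index of just 1, which is exactly the Einstein problem mentioned above.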

A PhD, even one by publication, is more than just an assortment of papers. It is supposed to show a sustained research program and an original contribution to knowledge. I hope that there are few institutions that would award a PhD to someone who had simply had a few unrelated papers published over a period of years, or to someone who had done a lot of mundane but widely cited reports with no particular research merit. So, we need a bit more than citation indexes or other evidence of being a world-class researcher to offer a credible PhD-standard alternative form of certification.

One way to do this would be to broadly mirror the PhD-by-publication process within the MOOC. We could require peer ‘marking’, by a suitable panel, of a paper linking a range of others into a coherent piece of doctoral research, perhaps defended in a public webmeeting. This would be a little like common European defence processes, in which theses are defended not just in front of professors but also in front of any members of the public (typically colleagues, friends and families) who want to come along. We could increase the rigour a little by requiring that those participating in such a panel have a sufficiently high h-index or i10-index of their own in a similar subject area, and/or a relevant doctorate. Eventually, once a few graduates had emerged, the system could become self-supporting. In time, being part of such a panel would become a mark of prestige in itself. Perhaps, for pedagogic and systemic reasons, engagement in such a panel would be a prerequisite for making your own ‘doctoral’ defence. Your rating might carry a weighting that accorded with your own reputational index, with those starting out weighted quite low and those with doctorates, ‘real’ doctoral students and so on weighted higher. The candidates themselves and other more experienced examiners might rate these novice examiners, so a great review from an early-career candidate might increase a novice examiner’s own ranking. It might be possible to make use of OpenBadges for this, with badges carrying different weights according to who awarded them and for what they were awarded.
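To make the weighting idea a little more concrete, here is a minimal sketch of how a panel’s verdict might be combined, with each examiner’s rating weighted by his or her own reputational index. Every name and number here is a hypothetical assumption, not a worked-out scheme:

```python
from dataclasses import dataclass

@dataclass
class Examiner:
    name: str
    rating: float      # this examiner's rating of the defence, say 0-10
    reputation: float  # hypothetical reputational weight, e.g. derived from
                       # an h-index, a doctorate, or reviews of past marking

def panel_score(panel):
    """Reputation-weighted mean of ratings: established examiners count
    for more, novices count for less, but everyone contributes."""
    total_weight = sum(e.reputation for e in panel)
    if total_weight == 0:
        raise ValueError("panel carries no reputational weight")
    return sum(e.rating * e.reputation for e in panel) / total_weight

# A hypothetical panel: a senior academic, a mid-career researcher, and
# an early-career participant examining ahead of their own defence.
panel = [
    Examiner("senior", rating=7.0, reputation=5.0),
    Examiner("mid-career", rating=8.0, reputation=2.0),
    Examiner("novice", rating=9.5, reputation=0.5),
]
print(round(panel_score(panel), 2))  # 7.43 - dominated by the senior examiner
```

Ratings of novice examiners by candidates and senior examiners could then feed back into the reputation figures, so that good marking gradually earns a louder voice on future panels.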

Apart from issues of motivation, the big problem with the peer-based approach is that it could be seen as the blind leading the blind, as well as potentially raising ethical issues in terms of bias and lack of accountability. A ‘real’ PhD committee/panel is made up of carefully chosen gurus with established reputations or, at least, it should be. In North America these are normally the people who supervise the student, which is dodgy, but which usually works OK thanks to accountability and professional ethics. Elsewhere, examiners are external and deliberately unconnected with the candidate, or panels consist of a mix of supervisors and externals. Whatever the details, the main point is that the examiners are fully accredited experts, chosen and vetted by the institutional processes that make universities reliable judges in the first place. So, to make the peer-based approach more accountable, more use would need to be made of the reputational network that sustains traditional institutions, at least at the start. To make this work, we would need to get a lot of existing academics with the relevant skills on board. Once it had been rolling for a few years, it ought to become self-sustaining.

This is just the germ of an idea – there are lots of ways we could build a very cheap system that would have at least as much validity as the accreditation procedures used by most universities. If I were an employer, I’d be a lot more impressed by someone with such a qualification than by someone with a PhD from most universities. But I’m just playing with ideas here. My intent is not to create an alternative to the educational system, though that would be very interesting and I don’t object to the idea at all, but to highlight the often weird assumptions on which our educational systems are based and to ask some hard questions about them. Why, and on what grounds, do we set ourselves up as arbiters of competence? What value do we actually add to the process? How, given the propensities of new technologies and techniques, could we do it better?

Our educational systems are not broken at all: they are actually designed not to work. Well, ‘designed’ is too strong a word, as it suggests a central decision-making process, whereas they are mainly the result of many interconnected decisions (most of which made sense at the time but which, in aggregate, produce strange outcomes) stretching back to mediaeval times. Things like MOOCs (and related learning tools like Wikipedia, the Khan Academy, StackOverflow, etc.) provide a good opportunity to think more clearly and concretely about how we can do it better, and about why we do it the way we do in the first place.

The pedagogical foundations of massive open online courses | Glance | First Monday

A charmingly naive article taking a common-sense, straightforward approach to asking whether the woefully uniform pedagogies of the more popular Coursera-style MOOCs might actually work. The authors identify the common pedagogies of popular MOOCs and then use narrative analysis to see whether empirical research shows that those pedagogies can work. The answer, unsurprisingly, is that they can. It would have been a huge surprise if they couldn’t: this is a bit like asking whether email can be used to communicate.

I like the way this article is constructed and the methods it uses. Its biggest contribution is probably its very simple (arguably simplistic) description of the central pedagogies of MOOCs. Its ‘discoveries’ are, however, spurious. The fact that countless millions of people do learn online using some or all of the pedagogical approaches found in MOOCs is evidence enough that those methods can work, and it really doesn’t take a narrative analysis to demonstrate this blindingly obvious fact – one for the annals of obvious research, I think. Like all soft technologies, it ain’t what you do, it’s the way that you do it: that’s what gets results. ‘Can work well’ in general does not mean ‘does work well’ in the particular. We know that billions of people have learned well from books, but that does not mean that all books teach well, nor that books are the best way to teach any given subject.

Address of the bookmark: http://firstmonday.org/ojs/index.php/fm/article/view/4350/3673

Elgg source code evolution (before 4th May 2013) – YouTube

A fascinating visualization showing developer contributions to the open source core of the Elgg project (used here on the Landing) over the past five years or so. It is mesmerizing to watch, and especially pleasing to see how the number of contributors has grown over the past year or so, probably as much due to the move from Trac to GitHub as anything else, though the great work of the Elgg Foundation team in building on and employing the work of the community goes hand in hand with that. It makes me feel quite a lot more secure about the future of the technology to know that so many people are actively pushing it forward. It would be intriguing to apply a similar visualization to the larger ecosystem of plugins that sits around the core.

Address of the bookmark:

Discourse – rebooted forum software

Discourse is an extremely cool, open-source reinvention of forum software, replete with modern features: real-time AJAX loading of threads (which are not the usual tree-like things but a flatter form, with contextual threading as and when needed), plus lots of collective features including reputation management, tagging, rating and ranking, what’s-hot lists and so on. It looks slick and hooks into plenty of other services. I’d like to see something like this on the Landing instead of its simple discussion boards. It would not be trivial to integrate, but it does have an open and rich API, so it can easily be called from other systems.
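As a taste of how such an integration might begin, here is a minimal sketch that pulls the latest topics from a Discourse site. It relies on Discourse’s convention of serving a JSON version of most pages when .json is appended to the URL; the forum address is a placeholder:

```python
import requests  # third-party HTTP library: pip install requests

BASE = "http://forum.example.com"  # hypothetical Discourse instance

# Discourse serves a JSON version of most pages when .json is appended.
resp = requests.get(BASE + "/latest.json", timeout=10)
resp.raise_for_status()

# List the title and reply count of each of the latest topics.
for topic in resp.json()["topic_list"]["topics"]:
    print(topic["title"], "-", topic["posts_count"] - 1, "replies")
```

Anything deeper (posting, reputation data and so on) would need authenticated calls, but the same pattern applies.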

Address of the bookmark: http://www.discourse.org/

MOOCs do not represent the best of online learning (essay) | Inside Higher Ed

Another post about MOOCs that misses the point. The author, Ronald Legon, seems hopeful that ‘MOOC 2.0’ will arrive with better pedagogy, more support and better design. I have no doubt that what he describes will happen, at least in places, but it is certainly not worthy of the ‘2.0’ moniker. It is simply an incremental, natural evolution that adds efficiency and refinement to a weak model; it is not a paradigm shift.

The trouble is that Legon hasn’t checked the history of the genre. The xMOOCs under attack here are not far off the 1990s attempts by organizations and companies to replicate the strategies that had worked for old-fashioned mass media. They were not so much ‘Web 1.0’ as a bastardization of what the Web was meant from the start to be, which is why those of us who had been doing ‘Web 2.0’ stuff since the early nineties hate the term. Similarly, xMOOCs are a bastardization of what MOOCs started out to achieve, and they miss the point entirely. What is the point? George Siemens explains this better than I could, so here is his take on the topic:

Happily, many people are using xMOOCs in a cMOOC-like way, so they are succeeding in learning with one another despite weak pedagogies, unsuitable structures, and excessive length. While the intentions of the people who run them are quite different, many of the people using them to learn are doing so as part of a personal learning journey, in networks and learning communities with others, taking the pieces that interest them from different MOOCs and mashing them up. They are in control, not the MOOC creators. Completion rates of less than 10% are a worry to the people who run them, not to those who don’t complete them (true, there may be some who are discouraged by the process, but I hope not).

MOOC 2.0, like Web 2.0, is likely to be what MOOC 1.0 (the real MOOC 1.0) tried to be – a cMOOC.  

I do see a glowing future for great content of the sort created for these xMOOCs (big, information-heavy sites of the kind found in the 1990s have never gone away and continue to flourish), but they may have to adapt a little. I think they will have to disaggregate the chunks and let go of control. It is encouraging to see an increasing tendency to reduce their size to 4-week versions, but the whole notion of a fixed-length course is crazy. Sometimes 4 weeks will do; sometimes 4 minutes would be better; occasionally 4 years might be the ideal length. Whatever they turn out to be, they must be seen as parts of an individually assembled whole, not as large-scale equivalents of traditional approaches to teaching that only exist because of physical constraints in the first place, and that are sustained not only by continuing constraints of that nature but by a ludicrously clunky, counter-productive and unreliable accreditation process.

Address of the bookmark: http://www.insidehighered.com/views/2013/04/25/moocs-do-not-represent-best-online-learning-essay