Agoraphobia and the modern learner

Abstract: Read/write social technologies enable rich pedagogies that centre on sharing and constructing content but have two notable weaknesses. Firstly, beyond the safe, nurturing environment of closed groups, students participating in more or less public network- or set-oriented communities may be insecure in their knowledge and skills, leading to resistance to disclosure. Secondly, it is hard to know who and what to trust in an open environment where others may be equally unskilled or, sometimes, malevolent. We present partial solutions to these problems through the use of collective intelligence, discretionary disclosure controls and mindful design.

Address of the bookmark: http://www-jime.open.ac.uk/jime/article/viewArticle/2014-03/html

Professor forces students to buy his own $200 textbook

This article is ostensibly about the very unsurprising discovery that students who can’t afford textbooks are downloading them illegally, even for ethics classes. Shocking! Not. However, the thing that really shocks me about this article is the example given of the professor demanding that his students purchase his own $200 etextbook. Piracy seems a pretty minor crime compared with this blatant, extortionate abuse of power.

Address of the bookmark: http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/09/17/more-students-are-illegally-downloading-college-textbooks-for-free/

Teaching Crowds: Learning and Social Media

The free PDF preview of the new book by me and Terry Anderson is now available from the AU Press website. It is a complete and unabridged version of the paper book. It’s excellent value!

The book is about both how to teach crowds and how crowds can teach us, particularly at a distance and especially with the aid of social software.

For the sake of your health we do not recommend trying to read the whole thing in PDF format unless you have a very big, high-resolution tablet or e-reader, or are unusually comfortable reading from a computer screen, but the PDF file is not a bad way to get a flavour of the thing, to skip-read it, or to find or copy passages within it. You can also download individual chapters and sections if you wish.

The paper and epub versions should be available for sale at the end of September, 2014, at a very reasonable price. 

Address of the bookmark: http://www.aupress.ca/index.php/books/120235

Book: Reusing Open Resources

Now in print, a new and interesting edited book by Allison Littlejohn and Chris Pegler on open educational resources (disclaimer: it includes a chapter by me and Terry Anderson). Apart from us, Allison and Chris have gathered a great bunch of people together to explore issues from some distinctly learner-oriented perspectives, across a broad range of contexts, including informal and non-formal learning as well as formal education.

If you want to get a good flavour of the kind of chapters it contains, and in keeping with the subject matter, a few selected chapters (including ours) have been published openly at http://jime.open.ac.uk/jime/issue/view/2014-ReusingResources-OpenforLearning

Address of the bookmark: http://routledge-ny.com/books/details/9780415838696/

Journal of Interactive Media in Education – Open for Learning Special Issue

A special issue of JIME on open learning with five articles (full disclosure: including one by me and Terry Anderson) from a forthcoming book edited by Chris Pegler and Allison Littlejohn, ‘Reusing Open Resources: Learning in Open Networks for Work, Life and Education’.

I’ve skimmed through the pre-publication draft of the book from which these articles are taken and (not counting our own chapter, about which I may be a little biased) I’m impressed. It has some very important topics, some excellent authors, and a great pair of editors. Deserves to do well.

Terry and I were concerned when responding to the call for chapters about the irony of a book on openness appearing as a closed publication. It is therefore very pleasing that, at least for these five chapters, it is walking the talk. JIME is a fine journal and has been open since it was unfashionable to be so, so I am delighted to at last have an article appear there and congratulate Chris and Allison on a job very well done.

Address of the bookmark: http://www-jime.open.ac.uk/jime/issue/view/2014-ReusingResources-OpenforLearning

Five myths about Moocs | Opinion | Times Higher Education

Diana Laurillard chipping in with a perceptive set of observations, most interestingly describing education as a personal client industry in which tutor/student ratios are remarkably consistent at around 1:25 – so it is no great surprise that it doesn’t scale up. It seems to me that she is quite rightly attacking a particular breed of xMOOC (EdX, Coursera, etc.), but it doesn’t have to be this way, and she carefully avoids discussing *why* that ratio is really needed – her own writings and her variant on conversation theory suggest there might be alternative ways of looking at this.

Her critique that xMOOCs appear to succeed only for those that already know how to be self-guided learners is an old chestnut that hits home. She is right in saying that MOOCs (xMOOCs) are pretty poor educational vehicles if the only people who benefit are those that can already drive, and it supports her point about the need for actual teachers for most people *if* we continue to teach in a skeuomorphic manner, copying the form of traditional courses without thinking why we do what we do and how courses actually work.

For me this explains clearly once again that the way MOOCs are being implemented is wrong and that we have to get away from the ‘course’ part of the acronym, and start thinking about what learners really need, rather than what universities want to give them.

Address of the bookmark: http://www.timeshighereducation.co.uk/comment/opinion/five-myths-about-moocs/2010480.article

Thirteen Ways of Looking at a MOOC | The Seven Futures

Charming variant on a Wallace Stevens poem, replacing the blackbird with the MOOC. It is a little heavy on metaphor and simile here and there, but it makes a lot more sense than most scholarly articles I’ve read on the subject of MOOCs, and I’ve read rather too many of them.

Address of the bookmark: http://www.thesevenfutures.com/blog/thirteen-ways-looking-mooc-0

IGI Global: Open Access

This is a very interesting development. I’ve not looked fully into the fine print but, on the face of it, it looks as though IGI, publishers of my first book and a number of chapters and articles I have written over the years, may have seen the light and be partially moving to an open publishing model, with free and open sharing and a Creative Commons licensing structure. This is big news, as IGI is quite a significant player in the academic publication market.

Until now, IGI’s draconian terms and conditions and shameless profiteering at the expense of hard-working academics had put me off working with them ever again, but this looks like something that might well change my mind. At the moment it is in beta and looks like it is intended only for papers, but I applaud them for taking this initiative and hope that it will be extended into their book publishing business too.

Address of the bookmark: http://www.igi-global.com/open-access/

Killing stupid software patents is really easy, and you can help

I’ve very rarely come across a software patent that is not really stupid, that does not harm everyone apart from patent trolls and lawyers, and that is not anticipated by prior art. This article explains how anyone can easily put a stop to them before they do any damage. Great stuff.

Address of the bookmark: http://boingboing.net/2013/07/24/killing-stupid-software-patent.html

MOOPhD accreditation

A recent post at http://www.insidehighered.com/views/2013/06/05/essay-two-recent-discussions-massive-open-online-education reminded me that the half-formed plan that Torsten Reiners, Lincoln Wood and I dreamt up needs a bit of work.

So, to add a little kindling to get this fire burning…

Our initial ideas centred around supporting the process of doing research and writing papers for a PhD by publication. This makes sense: PhDs by publication are, we have learned, actually the norm in many countries, including Sweden and Malaysia, so the idea is, in principle, do-able and does not require us to think more than incidentally about the process of accreditation. However, there are often visible or invisible obstacles that institutions put in place to limit the flow of PhDs by publication: residency requirements, only allowing them for existing staff, high costs, and so on.

So why stop there?

Cranking the levers of this idea pump a little further, a mischievous thought occurs to me. Why not get a PhD on reputation alone? That is, after all, exactly how any doctorate is awarded, when it comes down to it: it is basically a means of using transferable reputation (think of this as more like a disease than a gift – reputations are non-rival goods), passing it on from an institution to an awardee, with a mutational process built in whereby the institution itself gets its own research reputation enhanced by a similar pass-it-on process. This system honours the institution at least as much as the awardee, so there’s a rich interchange of honour going on here.

Universities are granted the right to award PhDs, typically through a government mandate, but they sustain their reputation and capacity to do so through ongoing scholarship, publication and related activities, and through the activities of those that they honour. A university that awarded PhDs without itself being a significant producer of research, or that produced doctors who never achieved any further research of any note, would not get very far.

So, a PhD is only a signal of research competence in its holder because an awarding body with a high reputation believes the holder to be competent, and the awarding body sustains its own reputation through the activities of its members and alumni. That reputation exists because of a network of peers, and the network has, till now, mostly been linked through journals, conferences and funding bodies. In other words, though institutions go to the trouble of aggregating the data, the actual vector of reputation transmission is the individuals and teams that are linked via a publication process.

So why not skip the middle man? What if you could get a PhD based on the direct measures of reputation that are currently aggregated at an institutional level rather than those that have been intentionally formalized and aggregated using conventional methods?

Unpicking this a little further, the fact that someone has had papers published in journals implies that they have undergone the ordeal by fire of peer review, which should mean they are of doctoral quality. But that doesn’t mean they are any good. Journals are far from equal in their acceptance rates and the quality of their reviewers: there are those with good reputations, those with bad ones, and a lot in between. Citations by others help to assure us that the papers may have something of value in them, but citations often come as a result of criticism, and do not imply approval of the source. We need a means to gauge quality more accurately. That’s why the h-index was invented. There are lots of reasons to be critical of this and similar measures: they fail to value great contributions (Einstein would have had a very low h-index had he only published his most important papers), they embody the Matthew Effect in ways that make their real value questionable, they poorly distinguish large and small contributions to collaborative papers, and the way they rank the importance of journals and the like is positively mediaeval. It is remarkable to me to surf through Google Scholar’s rankings and find that people who are among the most respected in my field have relatively low indexes while those that just plug away at good but mundane research have higher ones. Such indexes do nonetheless imply the positive judgements of many peers, with more rigour and fairness than would normally be found in doctoral committees, and they give a usable number to grade contributions. So, a high h-index or i10-index (Google’s count of papers with at least 10 citations) would satisfy at least part of the need for validation of the quality of research output. But, by definition, they undervalue the work of new researchers, so they would be poor discriminators if they were the only means to evaluate most doctorates.
On the other hand, funding councils have already developed fairly mature processes for evaluating early-career researchers, so perhaps some use could be made of those. Indeed, the fact that someone has successfully gained funding from such a council might be used as partial evidence towards accreditation.
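For concreteness, both of the metrics just discussed can be computed from a bare list of per-paper citation counts; here is a minimal sketch in Python (the function names are mine, not from any established library):

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

def i10_index(citations):
    """Count of papers with at least 10 citations (Google Scholar's i10-index)."""
    return sum(1 for cites in citations if cites >= 10)
```

A researcher whose papers have been cited [10, 8, 5, 4, 3] times has an h-index of 4 but an i10-index of only 1 – a small illustration of how differently the two measures treat the same record, and of how a new researcher with few papers is undervalued by both.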

A PhD, even one by publication, is more than just an assortment of papers. It is supposed to show a sustained research program and an original contribution to knowledge. I hope that there are few institutions that would award a PhD to someone who had simply had a few unrelated papers published over a period of years, or to someone who had done a lot of mundane but widely cited reports with no particular research merit. So, we need a bit more than citation indexes or other evidence of being a world-class researcher to offer a credible PhD-standard alternative form of certification.

One way to do this would be to broadly mirror the PhD by publication process within the MOOC. We could require peer ‘marking’, by a suitable panel, of a paper linking a range of others into a coherent piece of doctoral research, perhaps defended in a public webmeeting. This would be a little like common European defence processes, in which theses are defended not just in front of professors but also any members of the public (typically colleagues, friends and families) who want to come along. We could increase the rigour a little by requiring that those participating in such a panel have a sufficiently high h-index or i10-index of their own in a similar subject area, and/or a relevant doctorate. Eventually the system could become self-supporting, once a few graduates had emerged. In time, being part of such a panel would become a mark of prestige in itself. Perhaps, for pedagogic and systemic reasons, engagement in such a panel would be a prerequisite for making your own ‘doctoral’ defence. Your rating might carry a weighting that accorded with your own reputational index, with those starting out weighted quite low and those with doctorates, ‘real’ doctoral students and so on carrying higher weightings. The candidates themselves and other more experienced examiners might rate these novice examiners, so a great review from an early-career candidate might increase their own ranking. It might be possible to make use of OpenBadges for this, with badges carrying different weights according to who awarded them and for what they were awarded.
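The weighting scheme sketched above could be aggregated very simply. As a hypothetical illustration in Python (the function name and the example weights are entirely invented for this sketch):

```python
def weighted_defence_score(ratings):
    """Aggregate panel ratings as a reputation-weighted mean.

    ratings: list of (score, examiner_weight) pairs, where an examiner's
    weight might reflect their own reputational index.
    """
    total_weight = sum(weight for _, weight in ratings)
    if total_weight == 0:
        raise ValueError("at least one examiner must carry some weight")
    return sum(score * weight for score, weight in ratings) / total_weight
```

For example, a score of 8 from a heavily weighted examiner (weight 3.0) combined with a 6 from a novice (weight 1.0) would aggregate to 7.5, rather than the unweighted mean of 7.0.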

Apart from issues of motivation, the big problem with the peer-based approach is that it could be seen as the blind leading the blind, as well as potentially raising ethical issues in terms of bias and lack of accountability. A ‘real’ PhD committee or panel is made up of carefully chosen gurus with an established reputation – or, at least, it should be. In North America these are normally the people that supervise the student, which is dodgy, but which usually works OK thanks to accountability and professional ethics. Elsewhere, examiners are external and deliberately unconnected with the candidate, or consist of a mix of supervisors and externals. Whatever the details, the main point here is that the examiners are fully accredited experts, chosen and vetted by the institutional processes that make universities reliable judges in the first place. So, to make it more accountable, more use needs to be made of the reputational network that sustains traditional institutions, at least at the start. To make this work, we would need to get a lot of existing academics with the relevant skills on board. Once it had been rolling for a few years, it ought to become self-sustaining.

This is just the germ of an idea – there are lots of ways we could build a very cheap system that would have at least as much validity as the accreditation procedures used by most universities. If I were an employer, I’d be a lot more impressed by someone with such a qualification than by someone with a PhD from most universities. But I’m just playing with ideas here. My intent is not to create an alternative to the educational system, though that would be very interesting and I don’t object to the idea at all, but to highlight the often weird assumptions on which our educational systems are based and to ask some hard questions about them. Why and on what grounds do we set ourselves up as arbiters of competence? What value do we actually add to the process? How, given the propensities of new technologies and techniques, could we do it better?

Our educational systems are not broken at all: they are actually designed not to work. Well, ‘design’ is too strong a word, as it suggests that a central decision-making process led to them, whereas they are mainly the result of many interconnected decisions (most of which made sense at the time but, in aggregate, produce strange outcomes) that stretch back to mediaeval times. Things like MOOCs (and related learning tools like Wikipedia, the Khan Academy, StackOverflow, etc.) provide a good opportunity to think more clearly and concretely about how we can do it better and why we do it the way we do in the first place.