The LMS as a paywall

I was writing today about openness in education, in a chapter I am struggling with, and had just read Tony Bates’s comments on iQualify, an awful cloud rental service offering a monolithic, locked-in throwback that makes me exclaim, in horror, ‘Oh good grief! Seriously?’ And it got me thinking.

Learning management systems, as implemented in academia, are basically paywalls. You don’t get in unless you pay your fees. So why not pick up on what publishers infamously already do and allow people to pay per use? In a self-paced model like that used at Athabasca it makes perfect sense, and most of the infrastructure – role-based, time-based access and so on – and, of course, the content already exists. Not every student needs 6 months of access or the trimmings of a whole course but, especially for those taking a challenge route (just the assessment), it would often be useful to have access to a course for a little while in order to get a sense of what the expectations might be, the scope of the content, and the norms and standards employed. On occasion, it might even be a good idea to interact with others. Perhaps we could sell daily, weekly or monthly passes. Or we could do it at a finer level of granularity, as well as or instead of that: a different pass for different topics, or for different components like forums, quizzes or assignment marking. Following the publishers’ lead, such passes might, taken together, cost 10 or 20 times the total cost of simply subscribing to a whole course if every option were purchased, but students could strategically pick the parts they actually need, so reducing their own overall costs.
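Purely as a thought experiment, here is a minimal sketch (in Python, with entirely hypothetical names – Pass, has_access and the component labels are mine, not any real LMS’s) of what such granular, time-limited passes might look like. The point is only that the role-based, time-based machinery is trivial to express:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Pass:
    """A hypothetical pay-per-use pass: one component of one course,
    valid between two dates."""
    course: str
    component: str        # e.g. 'forums', 'quizzes', 'marking', or '*' for everything
    valid_from: date
    valid_until: date

def has_access(passes: list[Pass], course: str, component: str, on: date) -> bool:
    """True if any purchased pass covers this course component on this date."""
    return any(
        p.course == course
        and p.component in (component, "*")
        and p.valid_from <= on <= p.valid_until
        for p in passes
    )

# A student buys a week of forum access and a single day of quiz access.
purchased = [
    Pass("COMP650", "forums", date(2015, 6, 1), date(2015, 6, 7)),
    Pass("COMP650", "quizzes", date(2015, 6, 3), date(2015, 6, 3)),
]
print(has_access(purchased, "COMP650", "forums", date(2015, 6, 5)))   # True
print(has_access(purchased, "COMP650", "quizzes", date(2015, 6, 5)))  # False
```

The technical part really is that easy; the objections that follow are not technical at all.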

This idea is, of course, stupid. This is not because it doesn’t make economic and practical sense: it totally does, notwithstanding the management, technical and administrative complexity it entails. It is stupid because it flips education on its head. It makes chunks of learning into profit centres rather than the stuff of life. It makes education into a product rather than celebrating its role as an agent of personal and societal growth. It reduces the rich, intricately interwoven fabric of the educational experience to a set of instrumentally-driven isolated events and activities. It draws attention to accreditation as the be-all and end-all of the process. It is aggressively antisocial, purpose-built to reduce the chances of forming a vibrant learning community. This is beginning to sound eerily familiar. Is that not exactly what, in too high a percentage of our courses, we are doing already?

If we and other universities are to survive and thrive, the solution is not to treat courses and accreditation as products or services. The ongoing value of a university is to catalyze the production and preservation of knowledge: that is what we are here for, that is what makes us worth having. Courses are just tools that support that process, though they are far from the only ones, while accreditation is not even that: it’s just a byproduct, effluent from the educational process that happens to have some practical societal value (albeit at enormous cost to learning). In physical universities there are vast numbers of alternatives that support the richer purpose of creating and sustaining knowledge: cafes, quads, hallways, common rooms, societies, clubs, open lectures, libraries, smoking areas, student accommodation, sports centres, theatres, workshops, studios, research labs and so on. Everywhere you go you are confronted with learning opportunities and people to learn with and from, and the taught courses are just part of the mix, often only a small part. At least, that is true in a slightly idealized world – sadly, the vast majority of physical universities are as stupidly focused on the tools as we are, so those benefits are an afterthought rather than the main thing to celebrate, and are often the first things to suffer when cuts come along. Online, such beyond-the-course opportunities are few and far between: the Landing is (of course) built with exactly that concern in mind, but there’s precious little sign of it anywhere else at AU, one of the most advanced online universities in the world. The nearest thing most students get to it is the odd Facebook group or Twitter interaction, which seems an awful waste to me, though it is a fascinating phenomenon that blurs the lines between the institution and the broader community.

It is already possible to take a high quality course for free in almost any subject that interests you and, more damagingly, there will soon be sources of accreditation that are as prestigious as those awarded by universities but orders of magnitude cheaper, not to mention compellingly cut-price options from universities that can leverage their size and economies of scale (and, perhaps, cheap labour) to undercut the rest of us. Competing on these grounds makes no sense for a publicly funded institution whose role is not to be an accreditation mill but to preserve, critique, observe, transform and support society as a whole. We need to celebrate and cultivate the iceberg, not just its visible tip. Our true value is not in our courses but in our people (staff and students) and the learning community that they create.

What is a BOOC?

This acronym is very ripe for satire. My local public library is full of BOOCs, quite a few of which might be thought of as big, open, offline courses.

Dan Hickey explains: a BOOC is in fact a big open online course with up to about 500 members. This size limit is intended to enable more interactive, participatory and socially-driven, though still teacher-managed, pedagogies in a large-ish setting without breaking the bank. It’s an open and pedagogically enlightened version of the recently hyped SPOC or, as we normally refer to it, an ‘online course’.

Dan’s model is most interesting as a testbed for OpenBadges and the use of WikiFolios, allowing participatory, learner-driven and on-demand assessment in some very admirable and interesting ways. So, though the acronym is a little painful, the ideas behind the implementation itself are most cool. One to follow.

Address of the bookmark: http://www.indiana.edu/~booc/what-is-a-booc/

The EDUCAUSE NGDLE and an API of One's Own (Michael Feldstein)

Michael Feldstein responds on NGDLEs with a brilliant in-depth piece on the complex issues involved in building standards for online learning tool interoperability and more. I wish I’d read this before posting my own most recent response because it addresses several of the same issues with similar conclusions, but in greater depth and with more eloquence, as well as bringing up some other important points such as the very complex differences in needs between different contexts of application. My post does add things that Michael’s overlooks and the perspective is a little different (so do read it anyway!), but the overlapping parts are far better and more thoroughly expressed by Michael.

This is an idea that has been in the air and ripe for exploitation for a very long time but, as Michael says in his post and as I also claim in mine, there are some very big barriers when it comes down to implementing such a thing and a bunch of wicked problems that are very hard to resolve to everyone’s satisfaction. We have been here before, several times: let’s hope the team behind NGDLE finds ways to avoid the mistakes we made in the past.

Address of the bookmark: http://mfeldstein.com/the-educause-ngdle-and-an-api-of-ones-own/

Niggles about NGDLEs – lessons from ELF

Malcolm Brown has responded to Tony Bates and me in an Educause guest post in which he defends the concept of the NGDLE and expands a bit on the purposes behind it. This does help to clarify the intent although, as I mentioned in my earlier post, I am quite firmly in favour of the idea, so I am already converted on the main points. I don’t mind the Lego metaphor if it works, but I do think we should concentrate more on the connections than the pieces. I also see that it is fairly agnostic to pedagogy, at least in principle. And I totally agree that we desperately need to build more flexible, assemblable systems along these lines if we are to enable effective teaching, management of the learning process and, much, much more importantly, if we are to support effective learning. Something like the proposed environment (more of an ecosystem, I’d say) is crucial if we want to move on.

But…

It has been done before, over ten years ago, in the form of ELF, in much more depth and detail and with large government and standards bodies supporting it, and it is important to learn the lessons of what was ultimately a failed initiative. Well – maybe not failed, but certainly severely stalled. Parts persist and have become absorbed, but the real value of it was as a model for building tools for learning, and that model is still not as widespread as it should be. The fact that the Educause initiative describes itself as ‘next generation’ is perhaps the most damning evidence of ELF’s failure.

Why ELF ‘failed’

I was not part of, nor close to, the ELF project but, as an outsider, I suspect that it suffered from four major and interconnected problems:

  1. It was very technically driven and framed in the language of ICTs, not educators or learners. Requirements from educators were gathered in many ways, with workshops, working groups and a highly distributed team of experts in the UK, Australia, the US, Canada, the Netherlands and New Zealand (it was a very large project). Some of the central players had a very deep understanding of the pedagogical and organizational needs of not just learners but organizations that support them, and several were pioneers in personal learning environments (PLEs) that went way beyond the institution. But the focus was always on building the technical infrastructure – indeed, it had to be, in order to operationalize it. For those outside the field, who had not reflected deeply on the reasons this was necessary, it likely just seemed like a bunch of techies playing with computers. It was hard to get the message across.
  2. It was far too ambitious, perhaps bolstered by the large amounts of funding and support from several governments and large professional bodies. The e-learning framework was just one of several strands – e-science, e-libraries and so on – that went to make up the e-framework. After a while, it simply became the e-framework and, though conceptually wonderful, in practical terms it was attempting far too much in one fell swoop. It became so broad, complex and fuzzy that it collapsed under its own weight. It was not helped by commercial interests that were keen to keep things as proprietary and closed as they could get away with. Big players were not really on board with the idea of letting thousands of small players enter their locked-in markets, which was one of the avowed intents behind it. So, when government funding fizzled out, there was no one to take up such a huge banner. A few small flags might have been way more successful.
  3. It was too centralized (oddly, given its aggressively decentralized intent and the care taken to attempt to avoid that). With the best of intent, developers built over-engineered standards relying on web service architectures that the rest of the world was abandoning because they were too clunky, insufficiently agile and much too troublesome to implement. I am reminded, when reading many of the documents that were produced at the time, of the ISO OSI network standards of the late 80s that took decades to reach maturity through ornate webs of committees and working groups, were beautifully and carefully engineered, and that were thoroughly and completely trounced by the lighter, looser, more evolved, more distributed TCP/IP standards that are now pretty much ubiquitous. For large complex systems, evolution beats carefully designed engineering every single time.
  4. The fact that it was created by educators whose framing was entirely within the existing system meant that most of the pieces that claimed to relate to e-learning (as opposed to generic services) had nothing to do with learning at all, but were representative of institutional roles and structures: marking, grading, tracking, course management, resource management, course validation, curriculum, reporting and so on. None of this has anything to do with learning and, as I have argued on many occasions elsewhere, may often be antagonistic to learning. While there were also components that were actually about learning, they tended to be framed in the context of existing educational systems (writing lessons, creating formal portfolios, sequencing of course content, etc). Though it was very much built to support things like PLEs as well as institutional environments, the focus was on the institution far more than on the learner.

As far as I can tell, any implementation of the proposed NGDLE is going to run into exactly the same problems. Though the components described are contemporary and the odd bit of vocabulary has evolved a bit, all of them can be found in the original ELF model and the approach to achieving it seems pretty much the same. Moreover, though the proposed architecture is flexible enough to support pretty much anything – as was ELF – there is a tacit assumption that this is about education as we know it, updated to support the processes and methods that have been developed since (and often in response to) the heinous mistakes we made when we designed the LMSs that dominate education today. This is not surprising – if you ask a bunch of experts for ideas you will get their expertise, but you will not get much in the way of invention or new ideas. The methodology is therefore almost guaranteed to miss the next big thing. Those ideas may come up but they will be smoothed out in an averaging process and dissenting models will not become part of the creed. This is what I mean when I criticize it as a view from the inside.

Much better than the LMS

If implemented, a NGDLE will undoubtedly be better than any LMS, with which there are manifold problems. In the first place, LMSs are uniformly patterned on mediaeval educational systems, with all their ecclesiastic origins, power structures and rituals intact. This is crazy, and actually reinforces a lot of things we should not be doing in the first place, like courses, intimately bound assessment and accreditation, and laughably absurd attempts to exert teacher control, without the slightest consideration of the fact that pedagogies determined by the physics of spaces in which we lock doors and keep learners controlled for an hour or two at a time make no sense whatsoever in online learning. In the second place, centralized systems have to maintain an uneasy and seldom great balance between catering to every need and remaining usably simple. This inevitably leads to compromises, from small things (e.g. minor formatting annoyances in discussion forums) to large ones (e.g. embedded roles or units of granularity that make everything a course). While customization options can soften this a little, centralized systems are structurally flawed by their very nature. I have discussed such things in some depth elsewhere, including in both my published books. Suffice to say, the LMS shapes us in its own image, and its own image is authoritarian, teacher-controlled and archaic. So, a system that componentizes things so that we can disaggregate any or all of it, provide local control (for teachers and other learners as well as institutions and administrators) and allow creative assemblies is devoutly to be wished for. Such a system architecture can support everything from the traditional authoritarian model to the loosest of personal learning environments, and much in between.

Conclusion

NGDLE is a misnomer. We have already seen that generation come and go. But, as a broad blueprint for where we should be going and what we should be doing now, both ELF and NGDLE provide patterns that we should be using and thinking about whenever we implement online learning tools and content and, for that, I welcome it. I am particularly appreciative that NGDLE provides reinvigorated support for approaches that I have been pushing for over a decade but that ICT departments and even faculty resist implacably. It’s great to be able to point to the product of so many experts and say ‘look, I am not a crank: this is a mainstream idea’. We need a sea-change in how we think of learning technologies and such initiatives are an important part of creating the culture and ethos that lets this happen. For that I totally applaud this initiative.

In practical terms, I don’t think much of this will come from the top down, apart from in the development of lightweight, non-prescriptive standards and the norming of the concepts behind it. Of current standards, I think Tin Can (xAPI) is hopeful, though I am a bit concerned that it is becoming over-ornate as its development continues. LTI is a good idea, sufficiently mature, and light enough to be usable but, again, in its new iteration it is aiming higher than might be wise. Caliper is OK but also showing signs of excessive ambition. Open Badges are great, but I gather the standard is becoming less lightweight in its latest incarnation. We need more of such things, not more elaborate versions of them. Unfortunately, the nature of technology is that it always evolves towards increasing complexity. It would be much better if we stuck with small, working pieces and assembled those together rather than constantly embellishing good working tools. Unix provides a good model for that, with tools that have worked more or less identically for decades but that constantly gain new value in recombination.
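To make the ‘lightweight’ point concrete, this is roughly what a minimal Tin Can (xAPI) statement looks like: just an actor, a verb and an object, posted as JSON to a learning record store. The statement structure follows the xAPI specification; the endpoint URL and credentials below are placeholders, not a real service:

```python
import json
import urllib.request

# A minimal xAPI statement: who did what to what.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/activities/intro-unit",
               "definition": {"name": {"en-US": "Introductory unit"}}},
}

# POST it to a (placeholder) learning record store.
req = urllib.request.Request(
    "https://lrs.example.com/xapi/statements",   # hypothetical LRS endpoint
    data=json.dumps(statement).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",
        # plus whatever Authorization header the LRS requires
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send it
```

The appeal is Unix-like: anything that can emit or consume such statements can join the assembly. The worry expressed above is precisely that these small, composable pieces keep sprouting elaborations.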

Footnote: what became of ELF?

It is quite hard to find information about ELF today. As an outsider, it seems to me that the project simply ground to a halt rather than being deliberately killed. There were lots of exemplar projects, lots of hooks and plenty of small systems built that applied the idea and the standards, many of which are still in use today, but it never achieved traction. If you want to find out more, here is a small reading list:

http://www.elframework.org/ – the main site (the link to the later e-framework site leads to a broken page)

http://www.elframework.org/projects.html  – some of the relevant projects ELF incorporated.

https://web.archive.org/web/20061112235250/http://www.jisc.ac.uk/uploaded_documents/Altilab04-ELF.pd – good, brief overview from 2004 of what it involved and how it fitted together

 https://web.archive.org/web/20110522062036/http://www.jisc.ac.uk/uploaded_documents/AltilabServiceOrientedFrameworks.pdf  – spooky: this is about ‘Next Generation E-Learning Environments’ rather than digital ones. But, though framed in more technical language, the ideas are the same as NGDLE.

http://www.webarchive.org.uk/wayback/archive/20110621221935/http://www.elearning.ac.uk/features/nontechguide2 – a slightly less technical variant (links to part 1, which explains web services for non-technical people)

See also https://web.archive.org/web/20090330220421/http://www.elframework.org/general/requirements/scenarios/Scenario%20Apparatus%20UK%205%20(manchester%20lipsig).doc and https://web.archive.org/web/20090330220553/http://www.elframework.org/general/requirements/use_cases/EcSIGusecases.zip, a set of scenarios and use cases that are eerily similar to those proposed for NGDLE.

If anyone has any information about what became of ELF, or documents that describe its demise, or details of any ongoing work, I’d be delighted to learn more!

On learning styles

This post by James Atherton makes the case that, whether or not it is possible to identify distinctive learning styles or preferences, they are largely irrelevant to teaching, and are potentially even antagonistic to effective learning. Regular readers, colleagues and friends will know that this conforms well with my own analysis of learning styles literature. The notion that learning styles should determine teaching styles is utter stuff and nonsense based on a very fuzzy understanding of the relationship between teaching and learning, and a desperate urge to find a theory to make the process seem more ‘scientific’, with no believable empirical foundation whatsoever. This doesn’t make the use of learning styles pointless, however.

Teaching is a design discipline much more than it is a science. One of the biggest challenges of teaching is making it work for as many students as possible, which means thinking carefully about different needs, interests, skills, concerns and contexts. So, if learning styles theories can help you to think about different learner needs more clearly when designing a learning path, then that can be a good thing.

The trouble is, thinking about personality patterns associated with learners’ astrological star signs or Chinese horoscope animals would probably work just as well. A comparative study would be fun to do and, I think, the methodological issues would reveal a lot about how and why existing research has signally failed to find any plausible link.

There are alternatives. In the field of web design we often use personas – fictional but well fleshed-out representative individuals – in order to try to empathize with the users of our sites and to help us to see our designs through different eyes. See https://www.interaction-design.org/encyclopedia/personas.html for a thorough introduction to the area. I use these in my learning design process and find them very useful. Thinking ‘how would John Smith react to this?’ makes much more sense to me than thinking ‘would this appeal to kinaesthetic learners?’, especially as I can imagine how John Smith might change his ways of thinking as a course progresses, how different life events might affect him, and how he might interact with his peers.

Address of the bookmark: http://www.learningandteaching.info/learning/learning_styles.htm

From discipline-and-punish to a culture of prevention | Philadelphia Public School Notebook

It all sounds so reasonable – reward kids with play money for behaving the way you want them to behave. It is certainly, as the article explains, way better than punishing kids to make them behave the way you want them to behave. But, like all such things, it completely misses the point.

Best quote:

“As the day started, Dallaire said, the boy told him, “‘I’m not going to behave until it’s 9:15.’ And as soon as 9:14 hit, I swear he sat down and started to do the assignment on the board.””

Simply replacing punishments with rewards, without making the process actually rewarding, is really no improvement at all. It’s crowd control, not teaching, and it preserves all the extrinsic, despotic, controlling nonsense that kills the love of learning in both teachers and, above all, their students.

Address of the bookmark: http://thenotebook.org/blog/158653/from-discipline-and-punish-to-culture-of-prevention#

Surprise! Our Attention Spans Aren’t Dead!

Interesting bit of analysis from the makers of Pocket (which, self-referentially, I used to save this link to read – and bookmark here – later), published on Medium, showing that users of Pocket save and read articles of 2000-5000 words more often than those of other lengths, equating to around 16 minutes of reading time on average. I don’t think this means that our attention spans are alive and kicking at all – we save things to Pocket (and ReadItLater, Evernote, etc) precisely because we are busy being distracted by other things most of the time – though it does show that at least some of us do spend more than a second or two reading some things.

The most interesting bit of this, though, comes towards the end, under the heading of ‘self improvement’. Pocket is clearly an important part of a self-directed personal learning environment (PLE) for many people and it is highly significant that a very high percentage of what people keep for later reading is about learning, with psychology (actual psychology, not self-help nonsense) topping the list of saved articles, and technology, current events, culture, science and history making solid showings.

I have highlighted this role myself many times in presentations I have given on the subject of PLEs. Pocket and others of its ilk (from the more free-form Evernote, Microsoft OneNote and Google Keep to simple browser bookmarks or the elderly but still useful del.icio.us) are important parts of the learning technology assemblies that we make to support our learning. They are a major feature of the teaching crowd: tools that allow us to make sense and meaning of the torrent of knowledge we swim in all the time. In the old days I used to keep paper notebooks but they were of limited use. Now I just have a physical whiteboard pad for odd quick jottings but, if they are important, they always make their way into my electronic notes. The fact that these notes are searchable, taggable, reorganizable in many ways, and all in one place turns them into powerful and persistent learning technologies that are orders of magnitude more useful than the primitive self-directed learning technologies of the past.

Apart from Pocket – a convenient, simple and very usable tool – having experimented with a great many alternatives, I now almost exclusively use Apple Notes for original jottings because I like the simplicity, the cross-platform capability, the fact that they are auto-saved in multiple redundant online accounts as well as locally and, above all, the fact that (underneath it all) they are actually saved as emails in IMAP folders, so are open, standards-based and not bound at all by proprietary tools. That matters: things like Evernote, Keep and OneNote are functionally great but they are built to lock you in. I forgive Pocket that flaw because it is a one-trick pony, does provide a useful RSS feed, and little would be lost if I jumped ship to another tool at a moment’s notice. I stick with it because it is a high quality piece of software that does what I need. We need such things to build robust personal learning ecosystems: small, robust pieces, interconnected and replaceable. This bookmark is just another one of those pieces – part of the public face of my PLE. And it too is not bound to a single tool. You can also find it at https://jondron.ca (or will be able to when it gets harvested some time within the next 24 hours!).
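Because the notes are just messages in an IMAP mailbox (a ‘Notes’ folder, at least in the way Apple Notes synced at the time of writing), any standards-based tool can read them. A minimal sketch, with placeholder server, credentials and folder name:

```python
import email
import imaplib

# Server, account and mailbox name are placeholders – the point is only that
# the notes are ordinary IMAP messages, readable by any standard mail tool.
imap = imaplib.IMAP4_SSL("imap.example.com")
imap.login("me@example.com", "app-password")
imap.select("Notes", readonly=True)

# List every note: each one is just an email message.
_, data = imap.search(None, "ALL")
for num in data[0].split():
    _, msg_data = imap.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    print(msg["Subject"], "-", msg["Date"])

imap.logout()
```

That openness is exactly what makes these small pieces replaceable: if Apple Notes disappeared tomorrow, the notes themselves would still be sitting in a standard mailbox.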

Address of the bookmark: https://medium.com/@Pocket/surprise-our-attention-spans-aren-t-dead-154ce24e5aab

The Voice Magazine – Interview with Vive Kumar

Vive Kumar waxes lyrical on the differences between online and face-to-face learning, the value of analytics, and the importance of culture and spirituality in learning. Good, thought-provoking stuff. I too get a bit sentimental about some of the things about physical proximity that Vive misses in the online teaching environment, but I think there are lots of positive differences too, not least the control it offers, the student-centred shifts in power relationships it almost enforces, the rich variety of pace that it effortlessly supports, and the huge knowledge-forming benefits of reified dialogue. Also, overcoming the challenges and understanding the nature of those differences is one of the main things that keeps it interesting for me. It’s different, but that’s often a good thing.

I have greatly enjoyed this series of faculty interviews in AUSU’s Voice Magazine (and been the subject of one of them, as have Terry Anderson and George Siemens). It’s really helpful in starting to make those human connections that Vive talks about in this interview. I have also really enjoyed the student interviews with which they are interspersed, which help to provide a glimpse of the human beings that we normally only see in caricature through their learning interactions. A great series. The Voice Magazine is a treasure that I only discovered as a result of being interviewed. For those who work at Athabasca U or who want to understand its culture and processes, it’s a great read. It’s a bit hard to navigate around it at times, but it’s well worth the effort.

Address of the bookmark: http://www.voicemagazine.org/articles/featuredisplay.php?ART=10501

How Do You Motivate Kids To Stop Skipping School?

Not like this!

This article starts with the line ‘it seems like a no-brainer’  and indeed it is. The no-brainer solution to low attendance is to make the schools relevant, meaningful and interesting to the kids.

However, bizarrely, that is not what seemed obvious to the writer of the article, nor to those who carried out this harmful and doomed research, who thought the obvious answer was an incentive scheme, and inflicted it on 799 kids, mostly aged 9. Basically, they told the kids they would get two pencils and a cute eraser if they turned up 85% of the time during the 38-day study.

It seems that they did not bother with a literature review because, had they done so, they would have found out right away that rewards are totally the opposite of what is needed to motivate kids to attend school. There is over 50 years of compelling evidence from research on motivation, in many fields and from many disciplines, that demonstrates this unequivocally and beyond any reasonable doubt. The only possible consequence of this intervention would be to demotivate the kids so that, at best, they would revert to former behaviours at the end of it and, at worst, many would be even less likely to attend when it was over.

Unsurprisingly, this is exactly what they found. The reward program did indeed increase attendance while it was in effect (this is the allure of behaviourism and why it still holds sway – it does achieve immediate results) and, when it was over, kids were indeed even less motivated to attend than they had been before, exactly as theory and empirical research predict. In fact, many of the kids got off very lightly: formerly high attenders, and those who were not great attenders before but who succeeded in getting the reward, only fell back to baseline levels as soon as it was over, which is actually pretty good going. A more significant reward or longer study period might have had worse consequences. Unfortunately, the effect on the ones that were the real target (those who were initially low attenders, 60% of whom failed to meet the goal) was disastrous: once the intervention was over, these already at-risk kids were only a quarter as likely to attend as they had been before the intervention began.

One of the surprised researchers said:

“I almost felt badly about what we had done,” she says. “That in the end, we should not have done this reward program at all.”

Almost? Seriously. This borders on child abuse. I generally think of research ethics boards as an arguably necessary evil but, when I hear that experiments like this are still going on, I could easily become a fan.

Address of the bookmark: http://www.npr.org/sections/goatsandsoda/2015/05/22/407947554/how-do-you-motivate-kids-to-stop-skipping-school