Workflow Automates Any Task on iOS

This is a very cool app that greatly extends what an iOS device can do. I used a workflow on my iPad to add this link to the Landing from an item saved in Pocket, for instance, simply by selecting the workflow I had created from Pocket’s Share menu. Now, if only I could bundle that up and share it as an app, we could make Landing bookmarking a lot easier.

This app sorely lacks help so far, though it is early days and this will clearly be coming soon. While the app is pretty intuitive and has helpful hints, and there are some nice examples to play with, existing programming skills are definitely valuable: it took me about an hour of trial and error to figure out this simple workflow.

Address of the bookmark: http://pocket.co/sBfTW

Investigating student motivation in the context of a learning analytics intervention during a summer bridge program

Very interesting, carefully performed and well-articulated study that seems to suggest that showing students their data from early warning systems (learning analytics systems designed to identify at-risk student behaviours, usually through their interactions, or lack of interactions, in a learning management system) generally has a negative impact on their intrinsic motivation.

This is pretty much what one might expect because, as the researchers suggest, it inevitably shifts the focus from mastery to performance, and away from doing something for its own sake. This is probably among the worst things you could do to a learner, so it is not a trivial problem. It doesn’t negate the value of an EWS when used as intended, to help identify at-risk students and to focus tutor attention where it is most needed. I believe that an EWS can be very useful, as long as it is used with care (in every sense) and the results are treated critically. But it does raise a few alarm bells about the need to educate educators not just on the effective use of EWSs but on the nature of motivation in general. 

Address of the bookmark: http://www.sciencedirect.com/science/article/pii/S0747563214003793#

Automated Collaborative Filtering and Semantic Transports – draft 0.72

I had to look up this article by the late Sasha Chislenko for a paper I was reviewing today, and I am delighted that it is still available at its original URL, though Chislenko himself died in 2000. I’ve bookmarked this page on various systems dating back to 1997, but I don’t think I’ve ever done so on this site, so here it is, still open to the world. Chislenko was writing in public way before it was fashionable and, I think, probably before the first blogs – this is still and, sadly, will always be a work in progress.

This particular page was one of a handful of articles that deeply influenced my early research and set me on a course I’m still pursuing to this day. Back in 1997, as I started my PhD, I had conceived of and started to build a web-based tagging and bookmark sharing system to gather learner-generated recommendations of resources and people so that the crowd could teach itself. It seemed like a common sense idea but I was not aware of anything else like it (this was long before del.icio.us, and Slashdot was just a babe in arms), so I was looking for related work, and then I found this. It depressed me a little that my idea was not quite as novel as I had hoped, but this article knocked me for six then and it continues to impress me now. It’s still great reading, though many of the suggestions and hopes/fears expressed in it are so commonplace that we seldom give them a second thought any more.

This, along with a special issue of the Communications of the ACM released the same year, was my first introduction to collaborative filtering, the technology that would soon sit behind Amazon and, later, everything from Google Search to Netflix and eBay. It gave a name to what I was doing and to the system I was building, which was consequently christened ‘CoFIND’ (Collaborative Filter in N-Dimensions).
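For anyone who has never looked under the hood, the kernel of the idea is disarmingly simple: find people whose past judgements resemble yours, and weight their opinions of things you have not yet seen accordingly. The sketch below is emphatically not CoFIND or Chislenko’s ACF design, just a minimal illustration of user-based collaborative filtering in Python; the tiny ratings dataset and the function names are invented for the example.

```python
# Minimal user-based collaborative filtering sketch (illustrative only --
# not CoFIND or Chislenko's ACF). Ratings map each user to {item: score}.
from math import sqrt

ratings = {
    "ann": {"article1": 5, "article2": 3, "article3": 4},
    "bob": {"article1": 4, "article2": 1, "article4": 5},
    "cat": {"article2": 2, "article3": 5, "article4": 4},
}

def similarity(a, b):
    """Cosine similarity over the items both users have rated."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in common)
    norm_a = sqrt(sum(ratings[a][i] ** 2 for i in common))
    norm_b = sqrt(sum(ratings[b][i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Score unseen items by the similarity-weighted ratings of others."""
    scores, weights = {}, {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, score in ratings[other].items():
            if item in ratings[user]:
                continue  # only recommend things the user has not rated
            scores[item] = scores.get(item, 0.0) + sim * score
            weights[item] = weights.get(item, 0.0) + sim
    return sorted(((scores[i] / weights[i], i) for i in scores if weights[i]),
                  reverse=True)

print(recommend("ann"))  # e.g. article4, weighted by Bob's and Cat's tastes
```

Real systems add normalization for rater bias, implicit ratings and a great deal of engineering for scale, but the core really is this small – which is part of what made it so obviously applicable to learners recommending resources and people to one another.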

Chislenko was a visionary who foresaw many of the developments of the past couple of decades and, as importantly, understood many of their potential consequences. More of his work is available at http://www.lucifer.com/~sasha/articles/ – just a small sample of his astonishing range, most of it incomplete notes and random ideas, but packed with inspiration and surprisingly accurate predictions. He died far too young.

Address of the bookmark: http://www.lucifer.com/~sasha/articles/ACF.html

Constructivism versus objectivism: Implications for interaction, course design, and evaluation in distance education.

I’d not come across this (2000) article from Vrasidas till now, more’s the pity, because it is one of the clearest papers I have read on the distinction between objectivist (behaviourist/cognitivist) and constructivist/social-constructivist approaches to teaching. It wasn’t new by any means even 15 years ago, but it provides an excellent overview of the schism (both real and perceived) between objectivism and constructivism and, in many ways, presages a lot of the debate that has gone on since surrounding the strengths, weaknesses and novelty of connectivist approaches. It also contains some good practical hints about how to design learning activities.

Address of the bookmark: http://vrasidas.intercol.edu/continuum.pdf

Instructional quality of Massive Open Online Courses (MOOCs)

This is a very interesting, if (I will argue) flawed, paper by Margaryan, Bianco and Littlejohn using a Course Scan instrument to examine the instructional design qualities of 76 randomly selected MOOCs (26 cMOOCs and 50 xMOOCs – the imbalance was caused by difficulties finding suitable cMOOCs). The conclusions drawn are that very few MOOCs, if any, show much evidence of sound instructional design strategies. In fact they are, according to the authors, almost all an instructional designer’s worst nightmare, on at least some dimensions.

I like this paper, but I have some fairly serious concerns with the way the study was conducted, which means a very large pinch of salt is needed when considering its conclusions. The central problem lies in using prescriptive criteria to identify ‘good’ instructional design practice and then treating them as quantitative measures of things deemed essential to any completed course design.

Doubtful criteria 

It starts reasonably well. Margaryan et al. use David Merrill’s well-accepted abstracted principles for instructional design to identify the kinds of activities that should be there in any course and that, being somewhat derived from a variety of models and theories, are pretty reasonable: problem centricity, activation of prior learning, expert demonstration, application and integration. However, the chinks begin to show even here, as it is not always essential that all of these are explicitly contained within a course itself, even though consideration of them may be needed in the design process – for example, in an apprenticeship model, integration might be a natural part of learners’ lives, while in an open ‘by negotiated outcome’ course (e.g. a typical European PhD) the problems may be inherent in the context. But, as a fair approximation of the activities that should be in most conventional taught courses, it’s not bad at all, even though it might show some courses as ‘bad’ when they are in fact ‘good’.
The authors also add five more criteria, abstracted from the literature and relating rather loosely to ‘resources’: expert feedback; differentiation (i.e. personalization); collaboration; authentic resources; and use of collective knowledge (i.e. cooperative sharing). These are far more contentious, with the exception of feedback, which almost all would agree should be considered in some form in any learning design (and which is a process thing anyway, not a resource issue). However, even this does not always need to be the expert feedback that the authors demand: automated feedback (which is, to be fair, a kind of ossified expert feedback, at least when done right), peer feedback or, best of all, intrinsic feedback can often be at least as good in most learning contexts. Intrinsic feedback (e.g. when learning to ride a bike, falling off it or succeeding in staying upright) is almost always better than any expert feedback, albeit that it can be enhanced by expert advice.

None of the rest of these ‘resources’ criteria is essential to an effective learning design. They can be very useful, for sure, although that depends a great deal on context and how they are done, and there are often many other things that may matter as much or more in a design: support for reflection, for example, or scope for caring or passion to be displayed, or design to ensure personal relevance. It is worth noting that Merrill himself observes that, beyond the areas of broad agreement (which I reckon are somewhat shoehorned to fit), there is much more in other instructional design models that demands further research and that may matter as much as, or more than, the principles identified as common.

It ain’t what you do…

Like all things in education, it ain’t what you do but how you do it that makes all the difference, and it is all massively dependent on subject, context, learners and many other things. Prescriptive measures of instructional design quality like these make no sense when applied post hoc because they ignore all this. They are very reasonable starting frameworks for a designer, encouraging focus on things that matter, and they can make a big difference in the design process, but real-life learning designs have to take the entire context into account and can (and often should) be done differently. Learning design (I shudder at the word ‘instructional’ because it implies so many unhealthy assumptions and attitudes) is a creative and situated activity. It makes no more sense to prescribe what kinds of activities and resources should be in a course than it does to prescribe how paintings should be composed. Yes, a few basics like golden ratios, rules of thirds and colour theory can help the novice painter produce something acceptable, but the fact that a painting disobeys these ‘rules’ does not make it a bad painting: sometimes, quite the opposite. Some of the finest teaching I have ever seen or partaken of has used the most appalling instructional design techniques, by any theoretical measure.

Over-rigid assumptions and requirements

One of the biggest troubles with such general-purpose abstractions is that they make some very strong prior assumptions about what a course is going to be like and the context of delivery. Because xMOOCs more closely resemble the traditional courses from which (it should be clearly noted) the design criteria are derived, this is, to an extent, fair-ish for them. But, even in the case of xMOOCs, the demand that collaboration, say, must occur is a step too far: as decades of distance learning research have shown (and as Athabasca University has long demonstrated), great learning can happen without it and, while cooperative sharing is pragmatic and cost-effective, it is not essential in every course. Yes, these things are often a very good idea. No, they are not essential. Terry Anderson’s well-verified (and possibly self-confirming, though none the worse for it) interaction equivalency theorem – roughly, that deep and meaningful learning can occur as long as at least one of student-teacher, student-student or student-content interaction is at a high level, even if the other two are reduced or eliminated – makes this pretty clear.

cMOOCs are not xMOOCs

Prescriptive criteria as a tool for evaluation make no sense whatsoever in a cMOOC context. This is made worse because the traditional model is carried to extremes in this paper, to the extent that the authors bemoan the lack of clear learning outcomes. This doesn’t naturally fall out from the design principles at all, so I don’t understand why learning outcomes are even mentioned; it seems an arbitrary criterion with no validity or justification beyond the fact that they are typically used in university teaching. As teacher-prescribed learning outcomes are anathema to Connectivism, it is very surprising indeed that the cMOOCs actually scored higher than the xMOOCs on this metric, which makes me wonder whether the means of differentiating the two were sufficiently rigorous. A MOOC that genuinely followed Connectivist principles would not provide learning outcomes at all: foci and themes, for sure, but not ‘at the end of this course you will be able to x’. And anyway, as a lot of research and debate has shown, learning outcomes are of far greater value to teachers and instructional designers than they are to learners, for whom they may, if not handled with great care, actually get in the way of effective learning. It’s a process thing – helpful for creating courses, almost useless for taking them. The same problem occurs in the use of course organization as a criterion: cMOOC content is organized bottom-up by learners, so it is not very surprising that such courses lack careful top-down planning, and that is part of the point.

Apparently, some cMOOCs are not cMOOCs either

As well as concerns about the means of differentiating courses and the metrics used, I am also concerned with how they were applied. It is surprising that there was even a single cMOOC that didn’t incorporate use of ‘collective knowledge’ (the authors’ term for cooperative sharing and knowledge construction) because, without that, it simply isn’t a cMOOC: it’s there in the definition of Connectivism. As for differentiation, part of the point of cMOOCs is that learning happens through the network which, by definition, means people are getting different options or paths and choosing those that suit their needs.

The big point in both cases is that the teacher-designed course does not contain the content in a cMOOC: beyond the process support needed to build and sustain a network, any content that may be provided by the facilitators of such a course is just a catalyst for network formation and a centre around which activity flows and learner-generated content and activity is created. With that in mind, it is worth pointing out that problem-centricity in learning design is an expression of teacher control which, again, is anathema to how cMOOCs work. Assuming that a cMOOC succeeds in connecting and mobilizing a network, it is all but certain that a great deal of problem-based and inquiry-based learning will be going on as people post, others respond, and issues become problematized. Moreover, the problems and issues will be relevant and meaningful to learners in ways that no pre-designed course can ever be. The content of a cMOOC is largely learner-generated, so of course a problem focus is often simply not there in the static materials supplied by the people running it. cMOOCs do not tell learners what to do or how to do it, beyond the very broad process support needed to help those networks to accrete. It would therefore be more than a little weird if they adhered to instructional design principles derived from teacher-led face-to-face courses in their designed content because, if they did, they would not be cMOOCs.

Of course, it is perfectly reasonable to criticize cMOOCs as a matter of principle on these grounds: given that (depending on the network) few participants will know much about learning and how to support it, one of the big problems with connectivist methods is that of getting lost in social space, with insufficient structure or guidance to suit all learning needs, insufficient feedback, inefficient paths and so on. I’d have some sympathy with such an argument, but it is not fair to judge cMOOCs on criteria that their instigators would reject in the first place and that they are actively avoiding. It’s like criticizing cheese for not being chalky enough.

It’s still a good paper though

For all that I find the conclusions of this paper very arguable and the methods highly open to criticism, it does provide an interesting portrait of MOOCs using an unconventional lens. We need more research along these lines because, though the conclusions are mostly debatable, what is revealed in the process is a much richer picture of the kinds of things that are and are not happening in MOOCs. These are fine researchers who have told an old story in a new way, and this is enlightening stuff that is worth reading.
 
As an aside, we also need better editors and reviewers for papers like this: little tell-tales like the fact that ‘cMOOC’ gets defined as ‘constructivist MOOC’ at one point (rather than ‘connectivist MOOC’ – I’m sure it’s just a slip of the keyboard, as the authors are well aware of what they are writing about) and more typos than you might expect in a published paper suggest that not quite enough effort went into quality control at the editorial end. I note too that this is a closed journal: you’d think that they might offer better value for the money that they cream off for their services.

Address of the bookmark: http://www.sciencedirect.com/science/article/pii/S036013151400178X

7 traits of online graduates that trump campus colleagues

The excellent Donald Clark pinpointing some of the main reasons why it is great to be a graduate of an online university. I’d humbly suggest that Athabasca University is way ahead of the game here: Donald highlights how wonderful it is that these degrees in the UK can be started three whole times a year. We, of course, have a program that starts 12 times a year.

Address of the bookmark: http://donaldclarkplanb.blogspot.ca/2014/11/7-traits-of-online-graduates-that-trump.html

Microsoft Open Sources .NET, Saying It Will Run On Linux and Mac | WIRED

This is a sign of what appear to be some remarkable seismic shifts at Microsoft. To be fair, Microsoft has long been a contributor to open source initiatives, but .NET was, until fairly recently, seen as one of the crown jewels, only slightly less significant than Windows and Office, which makes me and the writer of this article wonder whether they might be heading towards open sourcing these at some point. (The mobile version of Windows is already free, albeit with many provisos, terms and conditions, but that’s just common sense: otherwise no one would use the substandard pile of pants at all.)

Note that they are apparently only open-sourcing the core of .NET, which is not that wonderful without all the accompanying framework and goodies. The open source Mono project has provided this functionality for many years, thanks to Microsoft’s wisely open approach of treating .NET and C# as specifications rather than completely closed technologies in the first place. But, and it’s a big but, there are few Windows .NET apps that can run on Mono under Unix without some significant tweaking or acceptance of limitations and bugs, because so much relies on the premium libraries, controls and other proprietary closed tools that only paying Windows users can take advantage of. It’s much better than it used to be, but Mono is still a shim rather than a solution. I’m guessing few would use it in preference to, say, Java unless their primary target were Windows machines or they were inveterate C# or VB fans.

This is probably not a sign of deeper openness, however. Microsoft, like most others in the industry, clearly sees that the future is in net-delivered, cloud-based subscription services. Azure, Office 365, Skype, Exchange Online and so on are likely to be where most of the money comes from in the years ahead. .NET is nothing like as effective at locking people in as a service that handles all the data, communication and business processes of an individual or organization. Moreover, if more .NET developers can be sucked into developing for other platforms, that means more who can be pulled into Microsoft’s cloud systems, though, to be fair, it does mean Microsoft has to actually compete on even ground to win, rather than relying solely on market dominance. But it does have a lot of cash to outspend many of its rivals, and raw computing power, together with the money to support it, plays a large role in achieving success in this area.

The cloud is a new (well, extremely old, but now accepted and dominant) form of closed system in which the development technology really shouldn’t matter much any more. I worry a great deal about this, though. In the past we were just locked in by data formats, closed licences and closed software (perniciously driven by upgrade cycles that rendered what we had purchased obsolete and unsupported), but at least the data were under our control. Now they are not. I know of no cloud-based services that have not at some point changed their terms and conditions, often for the worse, few that I would trust with my data any further than I could throw them, and none at all that are impervious to bankruptcy, takeovers and mergers. When this happened in the past we always had a little lead time to look for an alternative solution, and our systems kept running. Nowadays, a business can be destroyed in the seconds it takes to shut down or alter a system in the cloud.

Address of the bookmark: http://www.wired.com/2014/11/microsoft-open-sources-net-says-will-run-linux-mac/

Multiple types of motives don't multiply the motivation of West Point cadets

Interesting study analysing the relationship between internal and instrumental motivation (the authors’ take on intrinsic vs. extrinsic), as revealed in entry questionnaires for West Point cadets, and long-term success in army careers. As you might expect, those with intrinsic motivation significantly outperformed those with extrinsic motivation on every measure.

What is particularly interesting, however, is that extrinsic motivation crowded out the intrinsic in those with mixed motivations. Having both extrinsic and intrinsic motivation turned out to be no better than having extrinsic motivation on its own, which is to say virtually useless. In other words, as we already know from hundreds of experiments and studies over shorter periods, but herein demonstrated over periods of more than a decade, extrinsic motivation kills intrinsic motivation. This is further proof that the use of rewards (like grades, performance-related pay and service awards) in the hope that they will motivate people is an incredibly dumb idea, because they actively demotivate.

Address of the bookmark: http://m.pnas.org/content/111/30/10990.full