For Sale: “Your Name Here” in a Prestigious Science Journal

A Scientific American article on the prevalence of plagiarism and contract cheating in journal articles. The tl;dr version lies near the end of the article:

“Now that a number of companies have figured out how to make money off of scientific misconduct, that presumption of honesty is in danger of becoming an anachronism. ‘The whole system of peer review works on the basis of trust,’ Pattinson says. ‘Once that is damaged, it is very difficult for the peer review system to deal with.’”

Very sad. The only heartening thing about all this is that there are now thousands of scam journals (I now get at least half a dozen solicitations from these every day, which I have learned to junk immediately) that would be more than willing to publish such articles. I rather like the idea that worse-than-useless fraudulent articles might get published in worse-than-useless scam journals: a nice little self-contained economy. Unfortunately, some of the cheats target real journals with real reputations and, worse, may be believed by genuine researchers who are taken in by the lies they purvey, endangering the whole academic research endeavour. Apparently the going price for a fraudulent paper in a real journal in China is around 93,000 RMB, or roughly $15,000.

This is very much like the issue we face in course assessment too. In some of my own courses I have designed what I reckon to be virtually foolproof methods of preventing most forms of cheating. They mostly work pretty well, but they don’t cope much better with contract cheating than more traditional assignment- or exam-based courses. My only partial solution to that problem is to try to price cheats out of the market: most of my courses have to be done from start to finish in order to pass, which is a lot more time-consuming than writing a few boilerplate essays, exams or exercises. For assignments and exams on most courses you can get a passing grade for as little as $5, if you are willing to take the risk. The risk of discovery is very high, because the essay mills tend to plagiarize or self-plagiarize (well, they are cheats – caveat emptor!) and, thanks to the semi-public nature of cheating sites, it is just as easy for us to discover students seeking ghost writers as it is for them to find one. In fact, when we find such sites, we tend to pass on our findings to colleagues in other institutions, a nice example of informal crowd-sourcing. However, I am absolutely sure some do get away with it, and it makes little or no difference whether teaching is online or face to face. There is an example of contract cheating in exams in today’s news, but it is hardly newsworthy, except as a reminder that the practice is endemic. Beyond contract cheating, I also know that some students have family members or friends who are motivated to ‘help’, sometimes quite considerably. There was a charmingly improbable example of a mother sitting her daughter’s exam a while back, for instance.

I suspect that the ultimate solution to this, in the case of courses, is structural, not technological nor even directly pedagogical. We are in an unwinnable arms race in which everyone loses as long as the purpose of courses is seen to be to get accreditation rather than to enable learning. As long as a grade sits enticingly at the end, it will inevitably cause some students to seek shortcuts to getting it. Cheats destroy the credibility not just of their own qualifications but of those of every other student who has honestly taken the course. If we got rid of grades altogether, cheating during the learning process would dry up to the merest trickle (though, bizarrely, it might not go away altogether). Making accreditation a separate issue, completely dissociated from learning and teaching, would allow us to concentrate our firepower on preventing cheating at the point of accreditation rather than distracting us during a course, so we could make our courses far more engaging, enjoyable and useful: we could simply concentrate on pedagogy rather than trying to design cheating out of them. For the (entirely separate) accreditation, we could let rip with all the weaponry at our disposal, of course: biometrics, Faraday cages, style detectors, plagiarism detection tools and all the multifarious technologies and techniques we have developed to thwart cheats could be employed with relative ease by specialists trained to spot miscreants. Better still, we could use other means of proving authenticity, such as social network analysis combined with public-facing posts, or employer reports, or authentic portfolios created over long periods with multiple sources of authentication. This would also have the enormous benefit of largely solving what is perhaps the biggest challenge in all of education, that of motivation, by getting rid of the extrinsic driver that eats at the soul of learning in our educational systems. It would also allow learners to control how, when, with whom and what they learn, rather than having to take a course that might bore or confuse them. They could easily take a course elsewhere – even a MOOC – and prove their knowledge separately. It would make it easier for us to design courses that are apt for the learning need, rather than having to fit everything into one uniform size and shape. And it would overcome the insane contradiction of teachers telling students they have failed to learn when, quite clearly, it is the teachers who have failed to teach. Athabasca does, of course, have the mechanisms for this, in its PLAR (prior learning assessment and recognition) and challenge processes. It could easily be done.

A similar solution might work, at least a little, for journal cheats. There are different cultural norms around cheating in China, as I have observed previously, that perhaps play a role in the preponderance of Chinese culprits mentioned in the article, but much of the problem might be put down to the over-valuation of publication for career progression, prestige and reward in that country. If rewards and reputation were less tightly bound to publication and more intrinsic to the process, we might see some improvement. This could be done in many ways: for instance, greater value could be given to internal dissemination of results, open publication (inherently less liable to fraud thanks to many eyes), teamwork, blogging, supervisor reports, peer review (of people, not papers) and citations (though citations are inevitably going to be the next easy target for fraud, if they are not already, so should not be treated too seriously). There are lots of ways to measure academic value apart from counting publications, many of which relate to hard-to-spoof process rather than an easily forged product. The worrisome trend of journals charging authors for publication is an extremely bad idea that can only exacerbate the problem: publication becomes a commodity that is bought and sold, of value in and of itself (like grades), rather than a medium for disseminating research.

These are sad times for academia, eaten from the inside and out, but they also present an opportunity for us to rethink the process. The standards and values that have evolved over many centuries and that once stood us in good stead when adult education was an elite affair just don’t apply any more. What our forebears sought in opening up academia was to expand the reach of education to all. Instead, we turned it into a system to deliver accreditation. That system is on a self-destruct course as long as we continue to act as though nothing has really changed. 

Address of the bookmark: http://www.scientificamerican.com/article/for-sale-your-name-here-in-a-prestigious-science-journal/

Defaults matter

I have often written about the subtle and not-so-subtle constraints of learning management systems (LMSs) that channel teaching down a limited number of paths, imposing implicit pedagogies on us that may be highly counterproductive and that dissuade us from teaching well – this paper is an early expression of my thoughts on the matter. I came across another example today.

When a teacher enters comments on assignments in Moodle (and in most LMSs), it is a one-time, one-way publication event. The student gets a notification and that’s it. While it is perfectly possible for a dialogue to continue via email or internal messaging, or to avoid having to use such a system altogether, or to overlay processes on top of it to soften the hard structure of the tool, the design of the software makes it quite clear this is not expected or normal. At best, it is treated as a separate process. The design of such an assignment submission system is entirely about delivering a final judgement. It is a tacit assertion of teacher power. The most we can do to subvert that in Moodle is to return an assignment for resubmission, but that carries its own meanings and, on resubmission, still returns us to the same single feedback box.

Defaults are very powerful things that profoundly shape how we behave (e.g. see here, here and here). Imagine how different the process would be if the comment box were, by default, part of a dialogue, inviting response from the student. Imagine how different it would be if the student could respond by submitting a new version (not replacing the old) or by posting amendments in a further submission, to keep going until it is just right, not as a process of replacement but of evolution and augmentation. You might think of this as being something like a journal submission system, where revisions are made in response to reviewers until the article is acceptable. But we could go further. What if it were treated as a debugging process, using approaches like those in Bugzilla or Github to track down issues and refine solutions until they were as good as they could be, incorporating feedback and help from students and others on or beyond the course? It seems to me that, if we are serious about assignments as a formative means of helping someone to learn (and we should be), that’s what we should be doing. There is really no excuse, ever, for a committed student to get less than 100% in the end. If students are committed and willing to persist until they have learned what they came to learn, it is never the students’ failure when they achieve less than the best: it is the teachers’.
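
To make the idea concrete, here is a minimal sketch of the data model a dialogue-first submission system might use: feedback as an open thread over an evolving history of revisions, closed only when the work is as good as it can be. All names here are hypothetical, invented for illustration – this is not any real Moodle or Bugzilla API:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Comment:
    author: str          # student, teacher, peer, or anyone beyond the course
    text: str
    posted: datetime = field(default_factory=datetime.now)

@dataclass
class Revision:
    content: str         # each new version is kept alongside, not instead of, the old
    submitted: datetime = field(default_factory=datetime.now)

@dataclass
class Assignment:
    """An assignment treated as an open issue rather than a one-way judgement."""
    title: str
    revisions: list = field(default_factory=list)
    thread: list = field(default_factory=list)
    status: str = "open"  # stays open until student and teacher agree it is done

    def submit(self, content: str) -> None:
        self.revisions.append(Revision(content))

    def comment(self, author: str, text: str) -> None:
        self.thread.append(Comment(author, text))

    def resolve(self) -> None:
        self.status = "resolved"
```

The particular shape matters less than the defaults it embodies: history, dialogue and an ‘open until resolved’ status are first-class citizens, where the default Moodle design makes the single, final feedback box first-class.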

This is, of course, one of the motivations behind the Landing. In part we built this site to enable pedagogies like this that do not fit the moulds that LMSs ever-so-subtly press us into. The Landing has its own set of constraints and assumptions, but it is an alternative and complementary set, albeit one that is designed to be soft and malleable in many more ways than a standard LMS. The point, though, is not that any one system is better than any other but that all of them embed pedagogical and process assumptions, some of which are inherently incompatible.

The solution is, I think, not to build a one-size-fits-all system. Yes, we could easily enough modify Moodle to behave the way I suggest and in myriad other ways (e.g. I’d love to see dialogue available in every component, to allow student-controlled spaces wherever we need them, to allow students to add to their own courses, etc) but that doesn’t work either. The more we pack in, the softer the system becomes, and so the harder it is to operate it effectively. Greater flexibility always comes at a high price, in cognitive load, technical difficulty and combinatorial complexity. Moreover, the more we make it suit one group of people, the less well it suits others. This is the nature of monolithic systems.

There are a few existing ways to greatly reduce this problem without massive reinvention and disruption. One is to disaggregate the pieces. We could build the LMS out of interoperable blocks so that we could, for instance, replace the standard submission system with a different one without impacting other parts of the system. That was the goal of OKI and the now-defunct E-Framework although, in both cases, assembly was almost always a centralized IT management function, not available to those who most needed it – students and teachers. Neither has really made it to the mainstream. Sakai (an also-ran LMS that still persists) continues to use OKI technologies under the hood, but the E-Framework (a far better idea) seems dead in the water. These were both great ideas. There just wasn’t the will or the money, and competition from incumbents like Moodle and Blackboard was too strong. Other widget-based methods (e.g. using Wookie) offer more hope, because they do not demand significant retooling of existing systems, but they are currently far from in the ascendant, and the promising EU TENCompetence project that was a leader behind this seems moribund, its site offline.

Another approach is to use modules/plugins/building blocks within an existing system. However, this can be difficult or impossible to manage in a manner that delivers control to the end user without at the same time making things harder for those who do not want or need such control, because LMSs are monoliths that have to address the needs of many people. Not everyone needs a big toolkit and, for many, having one would actively make things worse. Judicious use of templates can help with that, but the real problem is that one size does not fit all. It also locks you into a particular platform, making evolution dependent on designers whose goals may not align with how you want to teach.

Bearing that in mind, another way to cope with the problem is to use multiple independent systems bound by interoperability standards – LTI, OpenBadges or TinCan (now known as xAPI), for example. With such standards, different learning platforms can become part of the same federated environment, sharing data, processing, learning paths and so on, allowing records to be kept centrally while enabling incompatible pedagogies to run independently within each system. That seems to me to be the most sensible option right now. It is still more complex for all concerned than taking the easy path, and it increases the management burden as well as replicating too much functionality for no particularly good reason. But sometimes the easy path is the wrong one, and diversity drives growth and improvement.
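
To give a flavour of how lightweight such federation can be: a TinCan/xAPI ‘statement’ is just a small JSON document recording that an actor performed a verb on an object, which any compliant platform can POST to a shared learning record store (LRS). A minimal sketch follows – the LRS endpoint and addresses are hypothetical, and a real LRS would also require an Authorization header:

```python
import json
import urllib.request

# A minimal xAPI (TinCan) statement: who did what to which object.
statement = {
    "actor": {"name": "A. Learner", "mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/unit1",  # hypothetical activity id
        "definition": {"name": {"en-US": "Unit 1"}},
    },
}

# POST the statement to a (hypothetical) learning record store.
req = urllib.request.Request(
    "https://lrs.example.com/xapi/statements",
    data=json.dumps(statement).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",  # version header required by the xAPI spec
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send against a real LRS
```

Because every platform writes to and reads from the same store in the same vocabulary, each can keep its own pedagogical assumptions while the records remain shared.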

Investigating student motivation in the context of a learning analytics intervention during a summer bridge program

Very interesting, carefully performed and well-articulated study that seems to suggest that showing students their data from early warning systems (EWSs – learning analytics systems designed to identify at-risk student behaviours, usually through their interactions, or lack of interactions, in a learning management system) generally has a negative impact on their intrinsic motivation.
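
Under the hood, such systems often boil down to fairly simple heuristics or models over LMS interaction data. A sketch of the kind of rule an EWS might apply (all names, data and thresholds here are invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical LMS activity log: student id -> timestamps of interactions.
activity = {
    "s001": [datetime(2014, 10, 1), datetime(2014, 10, 20)],
    "s002": [datetime(2014, 10, 25), datetime(2014, 10, 26), datetime(2014, 10, 27)],
}

def at_risk(interactions, now, window_days=14, min_events=2):
    """Flag a student whose recent LMS interactions fall below a threshold."""
    cutoff = now - timedelta(days=window_days)
    recent = [t for t in interactions if t >= cutoff]
    return len(recent) < min_events

now = datetime(2014, 10, 28)
flagged = [sid for sid, events in activity.items() if at_risk(events, now)]
print(flagged)  # -> ['s001']: little recent activity, so worth a tutor's attention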

This is pretty much what one might expect because, as the researchers suggest, it inevitably shifts the focus from mastery to performance, and away from doing something for its own sake. This is probably among the worst things you could do to a learner, so it is not a trivial problem. It doesn’t negate the value of an EWS when used as intended, to help identify at-risk students and to focus tutor attention where it is most needed. I believe that an EWS can be very useful, as long as it is used with care (in every sense) and the results are treated critically. But it does raise a few alarm bells about the need to educate educators not just on the effective use of EWSs but on the nature of motivation in general. 

Address of the bookmark: http://www.sciencedirect.com/science/article/pii/S0747563214003793

Automated Collaborative Filtering and Semantic Transports – draft 0.72

I had to look up this article by the late Sasha Chislenko for a paper I was reviewing today, and I am delighted that it is still available at its original URL, though Chislenko himself died in 2000. I’ve bookmarked the page in systems dating back to 1997, but I don’t think I’ve ever done so on this site, so here it is, still open to the world. Chislenko was writing in public way before it was fashionable and, I think, probably before the first blogs – this is still and, sadly, will always be a work in progress.

This particular page was one of a handful of articles that deeply influenced my early research and set me on a course I’m still pursuing to this day. Back in 1997, as I started my PhD, I had conceived of and started to build a web-based tagging and bookmark sharing system to gather learner-generated recommendations of resources and people so that the crowd could teach itself. It seemed like a common sense idea but I was not aware of anything else like it (this was long before del.icio.us and Slashdot was just a babe in arms), so I was looking for related work and then I found this. It depressed me a little that my idea was not quite as novel as I had hoped, but this article knocked me for six then and it continues to impress me now. It’s still great reading, though many of the suggestions and hopes/fears expressed in it are so commonplace that we seldom give them a second thought any more.

This, along with a special issue of Communications of the ACM released the same year, was my first introduction to collaborative filtering, the technology that would soon sit behind Amazon and, later, everything from Google Search to Netflix and eBay. It gave a name to what I was doing and to the system I was building, which was consequently christened ‘CoFIND’ (Collaborative Filter in N-Dimensions).
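
For anyone unfamiliar with the technique: collaborative filtering predicts how much someone will like an item from the ratings of people with similar tastes. A minimal user-based sketch (data and names invented for illustration; systems like Amazon’s are vastly more sophisticated):

```python
from math import sqrt

# Toy ratings: user -> {item: rating}.
ratings = {
    "ann": {"article_a": 5, "article_b": 3, "article_c": 4},
    "bob": {"article_a": 4, "article_b": 3, "article_c": 5},
    "cat": {"article_a": 1, "article_b": 5},
}

def similarity(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in shared)
    norm_u = sqrt(sum(ratings[u][i] ** 2 for i in shared))
    norm_v = sqrt(sum(ratings[v][i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for an unseen item."""
    scores = [(similarity(user, v), r[item])
              for v, r in ratings.items() if v != user and item in r]
    total = sum(s for s, _ in scores)
    return sum(s * r for s, r in scores) / total if total else None

print(round(predict("cat", "article_c"), 2))  # cat's likely rating, inferred from ann and bob
```

Roughly speaking, the ‘N-Dimensions’ of CoFIND generalized this single rating scale to many qualities along which a resource might be valued.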

Chislenko was a visionary who foresaw many of the developments of the past couple of decades and, as importantly, understood many of their potential consequences. More of his work is available at http://www.lucifer.com/~sasha/articles/ – just a small sample of his astonishing range, most of it incomplete notes and random ideas, but packed with inspiration and surprisingly accurate predictions. He died far too young.

Address of the bookmark: http://www.lucifer.com/~sasha/articles/ACF.html

Constructivism versus objectivism: Implications for interaction, course design, and evaluation in distance education.

I’d not come across this (2000) article from Vrasidas till now, more’s the pity, because it is one of the clearest papers I have read on the distinction between objectivist (behaviourist/cognitivist) and constructivist/social-constructivist approaches to teaching. It wasn’t new by any means even 15 years ago, but it provides an excellent overview of the schism (both real and perceived) between objectivism and constructivism and, in many ways, presages a lot of the debate that has gone on since surrounding the strengths, weaknesses and novelty of connectivist approaches. It also contains some good practical hints about how to design learning activities.

Address of the bookmark: http://vrasidas.intercol.edu/continuum.pdf

Instructional quality of Massive Open Online Courses (MOOCs)

This is a very interesting, if (I will argue) flawed, paper by Margaryan, Bianco and Littlejohn, using a Course Scan instrument to examine the instructional design qualities of 76 randomly selected MOOCs (26 cMOOCs and 50 xMOOCs – the imbalance was caused by difficulties in finding suitable cMOOCs). The conclusions drawn are that very few MOOCs, if any, show much evidence of sound instructional design strategies. In fact they are, according to the authors, almost all an instructional designer’s worst nightmare, on at least some dimensions.

I like this paper, but I have some fairly serious concerns about the way the study was conducted, which means a very large pinch of salt is needed when considering its conclusions. The central problem lies in using prescriptive criteria to identify ‘good’ instructional design practice, and then treating those criteria as quantitative measures of things deemed essential to any completed course design.

Doubtful criteria 

It starts reasonably well. Margaryan et al. use David Merrill’s well-accepted abstracted principles for instructional design to identify kinds of activities that should be there in any course and that, being somewhat derived from a variety of models and theories, are pretty reasonable: problem centricity, activation of prior learning, expert demonstration, application and integration. However, the chinks begin to show even here, as it is not always essential that all of these are explicitly contained within a course itself, even though consideration of them may be needed in the design process – for example, in an apprenticeship model, integration might be a natural part of learners’ lives, while in an open ‘by negotiated outcome’ course (e.g. a typical European PhD) the problems may be inherent in the context. But, as a fair approximation of what activities should be in most conventional taught courses, it’s not bad at all, even though it might show some courses as ‘bad’ when they are in fact ‘good’.

The authors also add five more criteria, abstracted from literature relating rather loosely to ‘resources’: expert feedback; differentiation (i.e. personalization); collaboration; authentic resources; and use of collective knowledge (i.e. cooperative sharing). These are far more contentious, with the exception of feedback, which almost all would agree should be considered in some form in any learning design (and which is a process thing anyway, not a resource issue). However, even this does not always need to be the expert feedback that the authors demand: automated feedback (which is, to be fair, a kind of ossified expert feedback, at least when done right), peer feedback or, best of all, intrinsic feedback can often be at least as good in most learning contexts. Intrinsic feedback (e.g. when learning to ride a bike, falling off it or succeeding in staying upright) is almost always better than any expert feedback, albeit that it can be enhanced by expert advice. None of the rest of these ‘resources’ criteria is essential to an effective learning design. They can be very useful, for sure, although it depends a great deal on context and how it is done, and there are often many other things that may matter as much or more in a design: support for reflection, for example, or scope for caring or passion to be displayed, or design to ensure personal relevance. It is worth noting that Merrill himself observes that, beyond the areas of broad agreement (which I reckon are somewhat shoehorned to fit), there is much more in other instructional design models that demands further research and that may be equally if not more important than those identified as common.

It ain’t what you do…

Like all things in education, it ain’t what you do but how you do it that makes all the difference, and it is all massively dependent on subject, context, learners and many other things. Prescriptive measures of instructional design quality like these make no sense when applied post-hoc because they ignore all this. They are very reasonable starting frameworks for a designer that encourage focus on things that matter and can make a big difference in the design process, but real life learning designs have to take the entire context into account and can (and often should) be done differently. Learning design (I shudder at the word ‘instructional’ because it implies so many unhealthy assumptions and attitudes) is a creative and situated activity. It makes no more sense to prescribe what kinds of activities and resources should be in a course than it does to prescribe how paintings should be composed. Yes, a few basics like golden ratios, rules of thirds, colour theory, etc can help the novice painter produce something acceptable, but the fact that a painting disobeys these ‘rules’ does not make it a bad painting: sometimes, quite the opposite. Some of the finest teaching I have ever seen or partaken of has used the most appalling instructional design techniques, by any theoretical measure.

Over-rigid assumptions and requirements

One of the biggest troubles with such general-purpose abstractions is that they make some very strong prior assumptions about what a course is going to be like and the context of delivery. Thanks to their closer resemblance to traditional courses (from which, it should be clearly noted, the design criteria are derived), this is, to an extent, fair-ish for xMOOCs. But, even in the case of xMOOCs, the demand that collaboration, say, must occur is a step too far: as decades of distance learning research has shown (and as Athabasca University has long proved), great learning can happen without it and, while cooperative sharing is pragmatic and cost-effective, it is not essential in every course. Yes, these things are often a very good idea. No, they are not essential. Terry Anderson’s well-verified (and possibly self-confirming, though none the worse for it) interaction equivalency theorem makes this pretty clear.

cMOOCs are not xMOOCs

Prescriptive criteria as a tool for evaluation make no sense whatsoever in a cMOOC context. This is made worse because the traditional model is carried to extremes in this paper, to the extent that the authors bemoan the lack of clear learning outcomes. This doesn’t naturally fall out from the design principles at all, so I don’t understand why learning outcomes are even mentioned; it seems an arbitrary criterion with no validity or justification beyond the fact that such outcomes are typically used in university teaching. As teacher-prescribed learning outcomes are anathema to Connectivism, it is very surprising indeed that the cMOOCs actually scored higher than the xMOOCs on this metric, which makes me wonder whether the means of differentiation were sufficiently rigorous. A MOOC that genuinely followed Connectivist principles would not provide learning outcomes at all: foci and themes, for sure, but not ‘at the end of this course you will be able to x’. And, anyway, as a lot of research and debate has shown, learning outcomes are of far greater value to teachers and instructional designers than they are to learners, for whom they may, if not handled with great care, actually get in the way of effective learning. It’s a process thing – helpful for creating courses, almost useless for taking them. The same problem occurs in the use of course organization in the criteria – cMOOC content is organized bottom-up by learners, so it is not very surprising that such courses lack careful top-down planning: that is part of the point.

Apparently, some cMOOCs are not cMOOCs either

As well as concerns about the means of differentiating courses and the metrics used, I am also concerned with how they were applied. It is surprising that there was even a single cMOOC that didn’t incorporate use of ‘collective knowledge’ (the authors’ term for cooperative sharing and knowledge construction) because, without that, it simply isn’t a cMOOC: it’s there in the definition of Connectivism. As for differentiation, part of the point of cMOOCs is that learning happens through the network which, by definition, means people are getting different options or paths, and choosing those that suit their needs. The big point in both cases is that the teacher-designed course does not contain the content in a cMOOC: beyond the process support needed to build and sustain a network, any content that may be provided by the facilitators of such a course is just a catalyst for network formation and a centre around which activity flows and learner-generated content is created. With that in mind, it is worth pointing out that problem-centricity in learning design is an expression of teacher control which, again, is anathema to how cMOOCs work. Assuming that a cMOOC succeeds in connecting and mobilizing a network, it is all but certain that a great deal of problem-based and inquiry-based learning will be going on as people post, others respond, and issues become problematized. Moreover, the problems and issues will be relevant and meaningful to learners in ways that no pre-designed course can ever be. The content of a cMOOC is largely learner-generated, so of course a problem focus is often simply not there in the static materials supplied by the people running it. cMOOCs do not tell learners what to do or how to do it, beyond the very broad process support needed to help those networks accrete. It would therefore be more than a little weird if they adhered to instructional design principles derived from teacher-led face-to-face courses in their designed content because, if they did, they would not be cMOOCs. Of course, it is perfectly reasonable to criticize cMOOCs as a matter of principle on these grounds: given that (depending on the network) few participants will know much about learning and how to support it, one of the big problems with connectivist methods is that of getting lost in social space, with insufficient structure or guidance to suit all learning needs, insufficient feedback, inefficient paths and so on. I’d have some sympathy with such an argument, but it is not fair to judge cMOOCs on criteria that their instigators would reject in the first place and that they are actively avoiding. It’s like criticizing cheese for not being chalky enough.

It’s still a good paper though

For all that I find the conclusions of this paper very arguable and the methods highly questionable, it does provide an interesting portrait of MOOCs through an unconventional lens. We need more research along these lines because, though the conclusions are mostly arguable, what is revealed in the process is a much richer picture of the kinds of things that are and are not happening in MOOCs. These are fine researchers who have told an old story in a new way, and this is enlightening stuff that is worth reading.
 
As an aside, we also need better editors and reviewers for papers like this: little tell-tales like the fact that ‘cMOOC’ gets defined as ‘constructivist MOOC’ at one point (rather than ‘connectivist’ – I’m sure it is just a slip of the keyboard, as the authors are well aware of what they are writing about) and more typos than you might expect in a published paper suggest that not quite enough effort went into quality control at the editorial end. I note too that this is a closed journal: you’d think they might offer better value for the money they cream off for their services.

Address of the bookmark: http://www.sciencedirect.com/science/article/pii/S036013151400178X

Multiple types of motives don't multiply the motivation of West Point cadets

Interesting study analysing the relationship between internal and instrumental (the authors’ take on intrinsic and extrinsic) motivation, as revealed in entry questionnaires for West Point cadets, and long-term success in army careers. As you might expect, those with intrinsic motivation significantly outperformed those with extrinsic motivation on every measure.

What is particularly interesting, however, is that extrinsic motivation crowded out the intrinsic in those with mixed motivations. Having both extrinsic and intrinsic motivation is no better than having extrinsic motivation on its own, which is to say it is virtually useless. In other words, as we already know from hundreds of experiments and studies over shorter periods, but herein demonstrated over more than a decade, extrinsic motivation kills intrinsic motivation. This is further proof that the use of rewards (like grades, performance-related pay and service awards) in the hope that they will motivate people is an incredibly dumb idea, because such rewards actively demotivate.

Address of the bookmark: http://m.pnas.org/content/111/30/10990.full

Zombie Skinner returns from the dead: an educational horror story

A dark tale for the Halloween season

It was a dark and stormy night when Phil deMuth, an investment advisor, sat down to pen this article for Forbes. His voodoo incantations would raise from the dead the ghastly zombified remains of the behaviourist dogmatist, the appropriately named Skinner, and, together, their nightmarish vision would turn the nation’s young into mindless zombies: apathetic, disenfranchised, undead and unthinking fodder suited only to sustaining the ghost of an industrial past. It is because of idiotic but superficially plausible ideas like the ones in this skillfully written article that I have to struggle to unteach, to try (and often fail) to help students learn how to love learning again after years of having it beaten out of them. So, in case anyone is persuaded by the slick but outrageously wrong arguments of the article, or is one of the many educators who actually make use of this claptrap, this post is meant as a small antidote to the zombie plague.

A good beginning

DeMuth’s article starts very well. The first six paragraphs present a fine and impassioned analysis of the failings of popular educational technologies and methods, striking a well-aimed blow at the heart of many of the problems in existing educational systems, the atrociousness of traditional methods, and the unwitting replication of harmful and outmoded ways of teaching in poorly designed MOOCs and misuse of Khan tutorials (he actually attacks Khan tutorials themselves but, had he thought to check out a few rather than lumping them in with the rest, he would have discovered that they closely conform to his vision). DeMuth might also have attacked much of university e-learning on the same grounds. I could not agree more. He is absolutely right to bemoan the crazy transmission model underlying much of education and the appallingness of monolithic and intimidating exams and tests, not to mention the foolishness of replicating old and weak methods in new and shiny tools. His motives are unimpeachable and he makes a very strong, eloquently argued case. His solution, though, is not to change that system but to make it do the (wrong) job more efficiently. Here I take issue.

And then the horror starts

The remainder of the article is entertainingly and slickly written, but the worse for that, because it carries a very dangerous message indeed. In brief, it spends a while self-referentially demonstrating the value of programmed learning, then winds up by asking for programmers/behaviourist psychologists to produce modern equivalents of Skinner’s Teaching Machine (see illustration) so that the transmission model can work better and kids can pass more tests.

Behaviourism revisited

[Image: B. F. Skinner’s Teaching Machine]

I’m amazed that anyone still thinks radical behaviourism, as espoused by Skinner, has any value whatsoever. I guess that some people learned this stuff before it was soundly discredited or, as I’m guessing was the case for deMuth, discovered it in passing without looking into what the rest of the world thought about it. Alas, such beliefs do still persist. Indeed, we see behaviourist shortcuts all too regularly in education and industry to this day, even though hardly anyone who has followed any of the research over the past 40 years or so would find them at all acceptable.

For those unfamiliar with Skinner’s radical behaviourist model: in brief, it was meant to apply a reductionist scientific method to discovering how animals (including people) learn. Recognizing that internal cognitive processes are hard to observe (and, in Skinner’s radical version of the theory, are themselves simply a consequence of external conditioning), the ‘behaviour’ part of the name is a reflection of the fact that this is what behaviourists concentrated on and, in Skinner’s case, the only thing that counted. Skinner only allowed for stimuli and responses that could be observed and measured – nothing else mattered. The brain, for Skinnerian behaviourists, was a black box about which they needed to know nothing apart from the effects of particular inputs and the observable outputs. They performed interventions and observed their effects on behaviour. Based on these observations (not uncommonly starting, and sometimes ending, with experiments on animals), those who tried to apply these methods in teaching sought to work out how to teach better without ever having to make any assumptions about what was going on inside people’s minds. It’s a laudable goal, if quixotic and utterly misguided. One big trouble with it (though far from the only one) is that it ignores our minds’ own inputs, which are often a great deal more significant than any external stimuli, which always modify them in unpredictable ways, and whose complex effects often fail to emerge until long after the stimuli have gone. We now know that reductionist methods simply don’t work in this context. Skinner lacked the framework of complexity and chaos theories that demonstrate the theoretical and practical impossibility of predicting even such simple causes and effects as the motion of a double pendulum, so it is perhaps a little forgivable that he remained lost in a reductionist paradigm. We also now know that the operant conditioning methods Skinner espoused are relatively ineffective in the short term and highly ineffective in the long term, so behaviourism fails to achieve much even on its own terms. Again, Skinner could not really be blamed for misunderstanding the significance of his results or their long-term weaknesses, because such research was in its infancy while he was still alive.

So why did anyone ever believe in this stuff?

To a limited extent, behaviourism works. Among radical behaviourist ‘discoveries’, in large part guided by experiments in which Skinner was able to train animals like pigeons, little by little, to perform complex tasks, is the one focused on in this article: that small, chunked lessons with immediate feedback, allowing subjects to take things at their own pace, can reliably lead to learning. Up to a point this is absolutely true, especially in the ‘spaced’ form in which Skinner actually presented it, rather than the simplistic caricature demonstrated in the article. For some kinds of rote learning, the effects of which are easily observed, small chunks and immediate feedback are a very good idea indeed. There are good reasons for this that cognitivist psychologists and constructivist thinkers had hit upon long before Skinner came on the scene. Although it is not the archetypal behaviourist way of doing things, it works particularly well when the feedback is innate to the task rather than extrinsically imposed: staying upright on a bicycle, reciting lines in a play, playing a piece of music, building a program that does what it should, or writing a satisfying piece of work all provide immediate feedback that is intrinsic to the process. This is the form that deMuth uses in the article: he self-referentially demonstrates the effectiveness of the approach by leaving ever larger chunks out of the key terms it employs. In doing this he is actually relying on a cognitivist model of what motivates us (in this case, achievable challenges) rather than a purely behaviourist model of reward and punishment, so it’s not the greatest example of the effectiveness of behaviourism. He is not exactly an expert in the field. Behaviourists also hit on a few other good tricks, more by luck than design. It is absolutely true that putting people in control of the pace of their own learning works very well, both for obvious common-sense reasons (we don’t all learn the same things at the same speed) and for motivation: a sense of being in control is central to intrinsic motivation. This is not a behaviourist notion, but it happens to be true.

The most problematic outcome of behaviourist thinking, which follows from its wilful ignorance of internal motivations and stimuli, lies in the use of rewards and punishments to drive learning, using extrinsic motivation as though we were all pigeons. The big trouble with the reward/punishment idea is that extrinsic motivation actually eliminates intrinsic motivation, which means that a reward/punishment model is positively harmful to effective learning. Countless studies and experiments from Deci, Ryan, Kohn and very many others show this. I am quite taken by a recent paper on the subject that rather neatly shows these effects using 10,000 West Point cadets tracked over 14 years; it summarizes some of the classic research quite well, as well as adding its own compelling evidence. By nature, humans love to learn and enjoy achievable challenges but, if you beat or reward that love out of them for long enough, they will stop wanting to do so. The big lesson we learn from extrinsic rewards and punishments is that learning is done to gain rewards that have no connection with that learning, or (just as bad) to avoid punishment. We also, in passing, learn that the purveyors of those rewards and punishments have power over us.

If it actually worked then even this ugly power trip might be worth it, though I have strong reservations about the ultimate value of teaching people to bow down to authority figures without question. Unfortunately, many studies have demonstrated unequivocally that, though such methods may result in short-term gains that may be sufficient (if not particularly efficient) to pass the big sticks and carrots of exams designed to test behaviour, learning this way does not persist, especially if no attention is paid to meaning, value and connections between things. This accords with common sense and experience. If we are taught that the value of what we are learning is to pass a test or get a grade then, once we have achieved that, it is perfectly natural to promptly forget it. It’s much like remembering your hotel room number: very important as long as you stay there, completely irrelevant when you leave, and therefore promptly forgotten.

Learning that persists is learning that we can continue to use: learning that relates to our goals, to the things we want or need to do, and to our social context; that we can apply; and that has meaning and value to us because of who we are, where we come from, what we want to do, the communities that bind us, and who we want to be. For this kind of learning, self-pacing, small chunks of increasing complexity and fast feedback can be extremely useful tools (if far from the only ones), but the point is that it cannot be done effectively in isolation, and especially not under the control of someone else. Values and meaning are not part of the behaviourist vocabulary, nor can they be usefully described by it, but they are exactly what deMuth’s reviled educators, against the odds and against the flow of a system that is designed to work in total opposition to them, are trying to foster. At least, the good ones are doing that. Too many of us are buckled down by a system that thwarts us by standardizing learning, trying to make us teach the same things, in the same physical or virtual spaces, at the same times, over fixed periods, without any thought for the reasons it might be worthwhile to people or for their individual and unique needs. It is no wonder that education has one of the highest dropout rates of any profession. DeMuth rightly attacks education but wrongly attacks educators.

The failure of educational systems

As long as we have abominations like the core curriculum, obligatory courses with defined objectives, or coarse-grained programs that ignore individual needs, that make it a requirement to learn a specified body of facts and skills regardless of their personal value or interest to us, this will ever be so. Unless we can devise ways of doing education that will be meaningful, applicable and valuable to the individuals that are learning, without extrinsic rewards or punishments, we have failed. If we teach students that the purpose of learning is to pass tests (or receive some other extrinsic reward or punishment), we will have doubly failed, because we will have made it harder for them to learn anything ever again and, in all probability, will have fostered an aversion to what might otherwise have been an important and interesting thing if it were learned at the right time. Skinnerian teaching machines deployed without addressing these fundamental problems will simply reinforce the same old patterns, making things worse, far worse, than before.

When used to support personally and socially meaningful goals, some behaviourist methods can have limited value, though none of those methods are unique to behaviourism and most come with important provisos and modifiers. Practice can be very good for acquiring a wide range of skills, especially when interleaved and spaced (those studying for exams or learning to play a musical instrument would do well to take note of this), learning things when, how and at what pace we wish to learn them is crucial, and we do need to take things a little at a time. Some such skills are foundational and, once learned, can become self-sustaining and supportive of intrinsically motivated learning: reading and writing, for example, or arithmetic. But behaviourism is not right just because it made a few hits. DeMuth wants to improve literacy (good) but he seeks to improve it through behaviourist methods (very bad) and measure it by standardized tests (very, very bad). This is a bit like assuming that the purpose of the army is to kill people, and therefore providing all soldiers with nuclear weapons. It is putting the cart before the horse. 

The purpose of education is not to pass tests but, along with sustaining some cultural continuity, to help people both to learn and to continue to learn. Behaviourist methods may achieve short-term testing goals but are singularly poor at fostering long-term learning and are positively antagonistic to lifelong learning. They encourage a dependent and submissive attitude and stamp on critical or creative thought. We should let them rest in peace.

BOOK: Teaching Crowds: Learning and Social Media

About the Book

Within the rapidly expanding field of educational technology, learners and educators must confront a seemingly overwhelming selection of tools designed to deliver and facilitate both online and blended learning. Many of these tools assume that learning is configured and delivered in closed contexts, through learning management systems (LMS). However, while traditional “classroom” learning is by no means obsolete, networked learning is in the ascendant. A foundational method in online and blended education, as well as the most common means of informal and self-directed learning, networked learning is rapidly becoming the dominant mode of teaching as well as learning.

In Teaching Crowds, Dron and Anderson introduce a new model for understanding and exploiting the pedagogical potential of Web-based technologies, one that rests on connections — on networks and collectives — rather than on separations. Recognizing that online learning both demands and affords new models of teaching and learning, the authors show how learners can engage with social media platforms to create an unbounded field of emergent connections. These connections empower learners, allowing them to draw from one another’s expertise to formulate and fulfill their own educational goals. In an increasingly networked world, developing such skills will, they argue, better prepare students to become self-directed, lifelong learners.

 

Address of the bookmark: http://www.aupress.ca/index.php/books/120235

Transactional distance and new media literacies

Moore’s theory of transactional distance describes the communications and psychological gulf between learner and teacher in a distance education setting. The theory was formulated in a correspondence era of distance learning and matured in an era where discussion forums and virtual learning environments reduced transactional distance in a closed-group setting that enabled interactions akin to those in a traditional classroom. In recent years the growth of social networking and social interest sites has led to social forms that fit less easily in these traditional formal models of teaching and learning. When the “teacher” is distributed through the network or is an anonymous agent in a set or is an emergent actor formed by collective intelligence, transactional distance becomes a more complex variable. Evolved social literacies are mutated by new social forms and require us to establish new or modified ways of thinking about learning and teaching. In this missive we explore the notion of transactional distance and the kinds of social literacy that are required for or that emerge from network, set, and collective modes of social engagement. We discuss issues such as preferential attachment, confirmation bias, and trust and describe social literacies needed to cope with them.

Address of the bookmark: http://www.mitpressjournals.org/doi/abs/10.1162/IJLM_a_00104