Hacking Our Brains: Motivating Others By Snatching Back Rewards

An ingenious approach to extrinsic motivation – give something, then use the threat of taking it away to ‘motivate’ people to do what you want them to do. It’s an old idea, but one that has not seen as much use as you would expect in things like student grading or occupational performance assessments. Though tied up in the language of the endowment effect, the essence of this method is punishment rather than reward, and we tend to be more punishment-averse than reward-seeking, so it works ‘better’. It’s still rampant behaviourism, presented in a cognitivist wrapper to make it look shinier.

As with all forms of extrinsic motivation, this does two things, both inimical to learning. Firstly, it leads to a focus on avoiding the punishment, rather than on the pleasure of the learning activity itself. I don’t see this as a great leap forward from rewarding with grades in a learning context – it just makes the motivation even more extrinsic and even more likely to destroy any intrinsic motivation a learner might have had in the first place, so that, once the punishment has been avoided, the value of the activity itself is diminished and, mostly, the things that make it useful are forgotten. Secondly, it is an even worse assertion of power than a reward. Again, I don’t see this as having any meaningful value in a learning context. It teaches greater compliance, not the topic at hand. That’s a bad lesson, unless you think that education is preparation for a life in which you should be a compliant tool that reluctantly does the bidding of those in power through fear of punishment. A society organized that way is not the kind of society I want to live in. Surely we have grown out of this? If not, surely we should?

The notion that people need to be forced to comply in order to learn what we want to teach them is barbaric, distasteful and, ultimately, deeply counter-productive. Countless generations of learners have had their love of learning viciously attacked by such attitudes, and have learned with less efficiency, less depth, and less value to society as a result. It’s a systemic failure on an unbelievably massive scale, embedded so deeply in our educational systems that we hardly even notice it any more. Done to one person it is bad enough but, done systematically at a worldwide scale, to ever younger generations of children, it hampers the intelligence and compassion of our species in ways that cut deep and leave us bleeding. Despite all this, most of us still manage to come out the other side without our innate love of learning completely destroyed. Our intrinsic motivation can be a powerful counter-force: just occasionally, what we are taught aligns well enough with what we want and need to learn; we discover other ways and things to learn that are meaningful and not imposed upon us; and there are quite a lot of great teachers out there who manage to enthuse and inspire despite the odds stacked against them. Few if any of us survive unscathed, though most of us get something useful here and there despite the obstacles. But we could be so much more.

Address of the bookmark: http://readwrite.com/2015/05/07/reward-then-deduct-loss-aversion-brain-hack

Scientists: Earth Endangered by New Strain of Fact-Resistant Humans

“The research, conducted by the University of Minnesota, identifies a virulent strain of humans who are virtually immune to any form of verifiable knowledge, leaving scientists at a loss as to how to combat them.”

Marvellous.

Address of the bookmark: http://www.newyorker.com/humor/borowitz-report/scientists-earth-endangered-by-new-strain-of-fact-resistant-humans

Half an Hour: The Study, and Other Stuff

This is the latest in a fascinating ongoing argument between George Siemens and Stephen Downes over the value, reliability and focus of Preparing for the Digital University, a report created by George, Dragan Gasevic, Shane Dawson and many others on the current state of research and practice in (mainly) online and distance learning. Because I am quoted in Stephen’s post, and in George’s post to which it is a response, as agreeing with Stephen, I’d like to clarify just what I agree with.

Where I disagree with Stephen is that I do think it is a good report that pulls together a lot of good research as well as other sources to provide a rich and informative picture of what universities have been doing in the field of online and distance learning, and how they got there. I think its audience is mainly not seasoned edtech researchers, though there is a lot of valuable synthesis and analysis in it that those of us in the field can and will certainly use. I see it as a strength that it does not just limit itself to ‘reliable’ research (whatever that may be – I’ve seldom found an unequivocal example of that elusive beast) and I am quite happy with the range and depth of the sources it uses. This is an expert summary and analysis by some of the top experts in the field who know whereof they speak. Of course it misses some things and over-emphasizes others, but that is the nature of the animal and I think it does a very good job of remaining broad, informative and clear.

I think Stephen and I are in rough agreement, though, in observing the boundary that the report does not try too hard to cross: the challenge that some of the research presents to the very notion of the university as we now know it and, to a lesser extent, the under-representation of ideas and research that relate to that. The latter is a tricky systemic problem because, on the whole, work and writing in that space is under-represented in a literature that, coming mostly from universities, tends to focus on universities. As this report is about universities, it is quite reasonable that this is the body of literature it uses.

The relative lack of beyond-the-institution thinking is, I believe, a concern for Stephen but, for me, it’s just something that, if I were writing it, I would want to add more of. It seems to me that this report will have most value in providing information for policy makers, managers of institutions, and those who are beginning to discover the field. It will open some eyes, help people to avoid old mistakes, and open up some important discussions. But, thanks to the intentional focus on the university and how we got to now, the structures, processes and measures are mostly rooted in an assumption that the university as we know it can and should persist. The discussion that emerges will inevitably tend to focus on how digital technologies can be used to do what we already do in (from a birds-eye perspective) only slightly different ways. In doing so, it may blind participants to the very real threats to their whole way of life as well as opportunities that are worth grasping. This may not be the best idea but it is not a weakness in the report as such – it is, after all, doing exactly what it says on the box. In fact, it is to its credit that it does address some approaches and tools that are transformative. 

I agree with George (and with Stephen’s hopes) that universities are and will continue to be really important institutions that can and should offer great value to our societies for a long time to come. We would invent them very differently, or maybe not invent anything like them at all, if we started afresh, knowing what we now know. The reality, however, is that this is what we have, and it has enormous momentum that is not going to stop any time soon, so, if we are to make the best use of it, we should both understand it and make improvements to it. This report is a solid foundation for that. There is some risk that it might, without further reflection, lead to ‘improvement’ of the wrong things – those that are counter-productive to the goals of increasing knowledge and learning in the world – and so further entrench harmful practices. My pet hobby horses include courses, grades, and the unholy linking of learning and accreditation. But there are other huge problems, like the trend towards the systemic exclusion of disadvantaged people, the treatment of students as customers, and the unnatural separation of disciplines and fields. Stephen mentions more. With that in mind, it would be useful to think a bit further about the ways that the foundations of the university (teaching, accreditation, community, knowledge production, knowledge dissemination, its roles as a knowledge repository and a source of expertise, and so on) are being not-too-subtly eroded by things that the net enables, as well as to further critique the embedded patterns, limitations, biases, and blind spots that make those foundations brittle and liable to crack or crumble.

The basis for that is all there in this report but, on reflection, I think the discussion of those issues is something for a further report, as it demands a different level and kind of analysis. I am not at all sure that the Gates Foundation would want to fund such a thing, but it should. Actually, maybe that line of thinking is a bit too narrow. After all, the exchange between George and Stephen, as well as contributions by others (e.g. George Veletsianos), is already, and self-referentially, doing much the same job such a report might do, and maybe doing it better. The learning dialogue and knowledge creation occurring through this distributed conversation is as rich as the report itself and, in its own way, at least as valuable. If the report had not been written then that dialogue might not have occurred, so it is a good anchor, but it is part of a richer knowledge network. And that’s exactly the point: technologies like social media are deeply subversive because they enable us to do some of the job that universities traditionally do without requiring a university as a necessary intermediary, with all the limitations and exclusions that implies. The patterns, technologies, economic models, and checks and balances are not yet there to replace all of a university’s functions – we have much research to do and many inventions yet to invent, and I am very aware that it is only because of the university that I, for one, am able to participate in this – but the change is already happening, and it is quite profound.

Address of the bookmark: http://halfanhour.blogspot.ca/2015/05/the-study-and-other-stuff.html

Half an Hour: Research and Evidence

Stephen Downes defends his attack on the recent report on the current state of online (etc) learning developed by George Siemens, Dragan Gasevic and Shane Dawson.

I have mixed feelings about this. As such reports go, I think it is a good one. It does knit together a fair sample of the literature, including bits from journalists and bloggers as well as more and less credible research, in a form that I think is digestible enough and sufficiently broad for corporate folk who need to get up to speed, including those running academies. Its methods are clear and its outputs are accessible. Though written by very well-informed researchers (not just George) and making use of copious amounts of research, I don’t think it is really aimed at researchers in the field. My impression is that it’s mainly for under-informed policy makers who need to be better informed, not for those of us who already know this stuff, and it does that job well. It’s a lot more than journalism, but a little less than an academic paper. I can also see a useful role for it for those who need to know roughly where we are now in online learning (e.g. edtech developers) but who are not seeking to become researchers in the field.

I think the more fundamental problem, and one that both George and Stephen seem to be fencing around, is in its title. The suggestion that it is about ‘preparing for the digital university’ is tricky on two counts. First of all, ‘preparing’ seems a funny word to use: it’s like saying we are ‘preparing’ for a storm when the waves are high around us and we are on the verge of capsizing. Secondly, and more tellingly, ‘digital university’ implies an expected outcome that is rather at odds with a lot of both George’s and Stephen’s work. The assumption that a university is the answer to the problem (which is what the title implies) is tricky, to say the least, especially given quite a lot of the discussion surrounding incursions by commercial and alternative (especially bottom-up) forms of accreditation and learning that step far outside the realms of traditional academia and challenge its very foundations. The final chapter mentions quite a few tools and approaches that relegate the institution to a negligible role, but there are hints of this scattered through much of the report, from commercial incursions to uses of reputation measures in Stack Overflow. If we are thinking of preparing for the future, the language and methods of formal education, courses, and mediaeval institutions might be a fair place to start, but maybe not the place to aim for. There’s a tension throughout the report between the soft disruptive nature of digital technologies (not so much the tools but what people do with them) and the hard mechanization of arbitrarily evolved patterns: between social recommendation and automated marking, for instance. The latter reinforces the university as an institution, even if it does upset some power structures and working practices a little. The former (potentially) disrupts the notion and societal role of the university itself. For the most part, this report is a review of the history and current state of online/distance/blended learning in formal education. This is in keeping with the title, but not with the ultimate thrust of at least a few of the findings. That does rather stifle the potential for really getting under the skin of the problem. It’s a view from the inside, not from above. Though it hints at transformation, it is ultimately in defence of the realm. Personally speaking, I would have liked to see a bit more critique of the realm itself. The last chapter, in particular, provides some evidence that could be used to make such a case, but does not really push it where it wants to go. But I’m not the one this report is aimed at.

Address of the bookmark: http://halfanhour.blogspot.ca/2015/05/research-and-evidence.html

The cost of time

A few days back, an email was sent to our ‘allstaff’ mailing list inviting us to join in a bocce tournament. This took me a bit of time to digest, not least because I felt impelled to look up what ‘bocce’ means (it’s an Italian variant of pétanque, if you are interested). I guess this took a couple of minutes of my time in total. And then I realized I was probably not alone in this – that over a thousand people had also been reading it and, perhaps, wondering the same thing. So I started thinking about how we measure costs.
 

The cost of reading an email

A single allstaff email at Athabasca will likely be read by about 1200 people, give or take. If such an email takes one minute to read, that’s 1200 minutes – 20 hours – of the institution’s time being taken up with a single message. This is not, however, counting the disruption costs of interrupting someone’s train of thought, which may be quite substantial. For example, this study from 2002 reckons that, not counting the time taken to read email, it takes an average of 64 seconds to return to previous levels of productivity after reading one. Other estimates based on different studies are much higher – some studies suggest the real recovery time from interruptions to tasks could be as high as 15-20 minutes. Conservatively, though, it is probably safe to assume that, taking interruption costs into account, an average allstaff email that is read but not acted upon consumes an average of two minutes of a person’s time: in total, that’s about 40 hours of the institution’s time, for every message sent. Put another way, we could hire another member of staff for a week for the time taken to deal with a single allstaff message, not counting the work entailed by those that do act on the message, nor the effort of writing it. It would therefore take roughly 48 such messages to account for a whole year of staff time. We get hundreds of such messages each year.
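For anyone who wants to check or play with the arithmetic, here is a minimal back-of-the-envelope sketch in Python. It simply encodes the rough assumptions above (about 1200 readers, a couple of minutes per reader including recovery time, a 40-hour week and a 48-week working year); none of these are measured values.

```python
# Back-of-the-envelope cost of a single allstaff email, using the rough
# assumptions above rather than measured values.
READERS = 1200           # approximate number of staff who read the message
MINUTES_PER_READER = 2   # ~1 minute to read it + ~1 minute to refocus
HOURS_PER_WEEK = 40      # nominal working week
WEEKS_PER_YEAR = 48      # nominal working year

hours_per_message = READERS * MINUTES_PER_READER / 60
weeks_per_message = hours_per_message / HOURS_PER_WEEK
messages_per_staff_year = (WEEKS_PER_YEAR * HOURS_PER_WEEK) / hours_per_message

print(f"One allstaff message: ~{hours_per_message:.0f} hours "
      f"(~{weeks_per_message:.1f} weeks of one person's time)")
print(f"Messages adding up to a year of staff time: ~{messages_per_staff_year:.0f}")
```

The precise numbers matter far less than the multiplier: with a thousand-plus readers, a trivially small individual cost becomes an institutional one.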
 
But it’s not just about such tangible interruptions. Accessing emails can take a lot of time before we even get so far as reading them. Page rendering just to view a list of messages on our web front end for our email system is an admirably efficient 2 seconds (i.e. 40 minutes of the organization’s time for everyone to be able to see a page of emails, not even to read their titles). Let’s say we all did that an average of 12 times a day –  that’s 8 hours, or more than a day of the institution’s time, taken up with waiting for that page to render each day. Put another way, as we measure such things, if it took four seconds, we would have to fire someone to pay for it. As it happens, for another university for which I have an account, using MS Exchange, simply getting to the login screen of its web front end takes 4 seconds. Once logged in (a further few seconds thanks to Exchange’s insistence on forcing you to tell it that your computer is not shared even though you have told it that a thousand times before), loading the page containing the list of emails takes a further 17 seconds. If AU were using the same system, using the same metric of 12 visits each day, that could equate to around 68 hours of the institution’s time every single day, simply to view a list of emails, not including a myriad of other delays and inefficiencies when it comes to reading, responding to and organizing such messages. Of course, we could just teach people to use a proper email client and reduce the delay to one that is imperceptible, because it occurs in the background – webmail is a truly terrible idea for daily use – or simply remind them not to close their web browsers so often, or to read their emails less regularly. There are many solutions to this problem. Like all technologies, especially softer ones that can be used in millions of ways, it ain’t what you do it’s the way that you do it. 
 

But wait – there’s more

Email is just a small part of the problem, though: we use a lot of other websites each day. Let’s conservatively assume that, on average, everyone at AU visits, say, 24 pages in a working day (for me that figure is always vastly higher) and that each page averages out at about 5 seconds to load. That’s two minutes per person. Multiplied by 1200 people, it’s another week of the institution’s time ‘gone’ every day, simply spent waiting for pages to load.
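The same seconds-times-visits-times-people arithmetic covers all of these per-page delays. A tiny sketch (again using the illustrative figures from the preceding paragraphs, with 1200 people assumed rather than measured) makes it easy to vary the assumptions:

```python
# Hours of institutional time consumed per day by a small per-page delay.
# All figures are the illustrative ones used above, not measurements.
def daily_cost_hours(seconds_per_page, pages_per_person_per_day, people=1200):
    return seconds_per_page * pages_per_person_per_day * people / 3600

print(daily_cost_hours(2, 12))   # webmail list view at 2 s, 12 times a day: 8.0 hours
print(daily_cost_hours(17, 12))  # a 17 s Exchange-style front end: 68.0 hours
print(daily_cost_hours(5, 24))   # 24 general pages a day at 5 s each: 40.0 hours (a staff-week)
```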
 
And then there are the madly inefficient bureaucratized processes that are dictated and mediated by poorly tailored software. When I need to log into our CRM system I reckon that simply reading my tasks takes a good five minutes. Our leave reporting system typically eats 15 minutes of my time each time I request leave (it replaces one that took 2-3 minutes).  Our finance system used to take me about half an hour to add in expenses for a conference but, since downgrading to a baseline version, now takes me several hours, and it takes even more time from others that have to give approvals along the way. Ironically, the main intent behind implementing this was to save us money spent on staffing. 
 
I could go on, but I think you see where this is heading. Bear in mind, though, that I am just scratching the surface. 
 

Time and work

My point in writing this is not to ask for more efficient computer and admin systems, though that would indeed likely be beneficial. Much more to the point, I hope that you are feeling uncomfortable, or even highly sceptical, about how I am measuring this. Not about the figures: it doesn’t much matter whether I am wrong about the detailed timings or even the math. It is indisputable that we spend a lot of time dealing with computer systems and the processes that surround them every day, and small inefficiencies add up. There’s nothing particularly peculiar to ICTs about this either – for instance, think of the time taken to walk from one office to another, to visit the mailroom, to read a noticeboard, to chat with a colleague, and so on. But is that actually time lost, or does it even equate precisely to time spent? I hope you are wondering about the complex issues with equating time and dollars, how we learn, why and how we account for project costs in time, the nature of technologies, the cost vs. value of ICTs, the true value of bocce tournament messages to people who have no conceivable chance of participating in them (much greater than you might at first imagine), and a whole lot more. I know I am. If there is even a shred of truth in my analysis, it does not automatically lead to the conclusion that the solution is simply more efficient computer systems and organizational procedures. It certainly does bring into question how we account for such things, though, and, more interestingly, it highlights even bigger intangibles: the nature and value of work itself, the nature and value of communities of practice, the role of computers in distributed intelligence, and the meaning, identity and purpose of organizations. I will get to that in another post, because it demands more time than I have to spend right now (perhaps because I receive around 100 emails a day, on average).
 

Can Behavioral Tools Improve Online Student Outcomes? Experimental Evidence from a Massive Open Online Course

A well-written and intelligently argued paper from Richard W. Patterson, using an experimental (well, nearly) approach to discover the effects of a commitment device, a reminder tool, and a focusing tool on course completion and performance in a MOOC. It seems that providing a tool that supports students in pre-committing to limits on ‘distracting Internet time’ (and that both measures and controls that time) has a striking positive effect, though largely on those who appear to be extrinsically motivated: they want to successfully complete the course, rather than to enjoy the process of learning. Reminders are pretty useless for anyone (I concur – personally I find them irritating and, after a while, guilt-inducing and thus more liable to cause procrastination) and blocking distracting websites has very little if any effect – unsurprising really, because such blocks don’t really remove distractions at all: if you want to be distracted, you will simply find another way. This is good information.

It seems to me that those who have learned to be extrinsically motivated might benefit from this, though it will reinforce their dangerous predilection, encourage bad habits, and benefit most those that have already figured out how to work within a traditional university system and that are focused on the end point rather than the journey. While I can see some superficially attractive merit in providing tools that help you to achieve your goals by managing the process, it reminds me a little of diet plans and techniques that, though sometimes successful in the short term, are positively harmful in the long term. This is the pattern that underlies all behaviourist models – it sort-of works up to a point (the course-setter’s goals are complied with), but the long-term impact on the learner is generally counter-productive. This approach will lead to more people completing the course, not more people learning to love the subject, hungry to apply that knowledge and learn more. In fact, it opposes such a goal. This is not about inculcating habits of mind but about making people do things that, though they want to reach some further end as a result, they do not actually want to do and, once the stimulus is taken away, will likely never want to do again. It is far better to concentrate on supporting intrinsic motivation and to build learning activities that people will actually want to do – challenges that they feel impelled to solve, that support their social needs, and over which they feel some control. For that, the instructivist course format is ill suited to the needs of most.

Abstract

Online education is an increasingly popular alternative to traditional classroom-based courses. However, completion rates in online courses are often very low. One explanation for poor performance in online courses is that aspects of the online environment lead students to procrastinate, forget about, or be distracted from coursework. To address student time-management issues, I leverage insights from behavioral economics to design three software tools including (1) a commitment device that allows students to pre-commit to time limits on distracting Internet activities, (2) a reminder tool that is triggered by time spent on distracting websites, and (3) a focusing tool that allows students to block distracting sites when they go to the course website. I test the impact of these tools in a large-scale randomized experiment (n=657) conducted in a massive open online course (MOOC) hosted by Stanford University. Relative to students in the control group, students in the commitment device treatment spend 24% more time working on the course, receive course grades that are 0.29 standard deviations higher, and are 40% more likely to complete the course. In contrast, outcomes for students in the reminder and focusing treatments are not statistically distinguishable from the control. These results suggest that tools designed to address procrastination can have a significant impact on online student performance.

Address of the bookmark: http://www.human.cornell.edu/pam/academics/phd/upload/PattersonJMP11_18.pdf

Thomas Frey: By 2030 over 50% of Colleges will Collapse

Thomas Frey provides an analysis of current trends in education (and, more broadly, learning) and predicts a grim future for colleges and, by extension, schools and universities. This is not a uniformly well-informed article – Frey is clearly an outsider with a somewhat caricatured, or at least highly situated and US-centric, view of the educational system – and it repeats arguments that have been made for decades while offering no novel insights. But the issues are well summarized, well expressed, and the overall thrust of the article is hard to argue with.

His main points are summarized in a list:

  1. Overhead costs too high – Even if the buildings are paid for and all money-losing athletic programs are dropped, the costs associated with maintaining a college campus are very high. Everything from utilities, to insurance, to phone systems, to security, to maintenance and repair are all expenses that online courses do not have. Some of the less visible expenses involve the bonds and financing instruments used to cover new construction, campus projects, and revenue inconsistencies. The cost of money itself will be a huge factor.
  2. Substandard classes and teachers – Many of the exact same classes are taught in thousands of classroom simultaneously every semester. The law of averages tells us that 49.9% of these will be below average. Yet any college that is able to electronically pipe in a top 1% teacher will suddenly have a better class than 99% of all other colleges.
  3. Increasingly visible rating systems – Online rating systems will begin to torpedo tens of thousands of classes and teachers over the coming years. Bad ratings of one teacher and one class will directly affect the overall rating of the institution.
  4. Inconvenience of time and place – Yes, classrooms help focus our attention and the world runs on deadlines. But our willingness to flex schedules to meet someone else’s time and place requirements is shrinking. Especially when we have a more convenient option.
  5. Pricing competition – Students today have many options for taking free courses without credits vs. expensive classes with credits and very little in between. That, however, is about to change. Colleges focused primarily on course delivery will be facing an increasingly price sensitive consumer base.
  6. Credentialing system competition – Much like a doctor’s ability to write prescriptions, a college’s ability to grant credits has given them an unusual competitive advantage, something every startup entrepreneur is searching for. However, traditional systems for granting credits only work as long as people still have faith in the system. This “faith in the system” is about to be eroded with competing systems. Companies like Coursera, Udacity, and iTunesU are well positioned to start offering an entirely new credentialing system.
  7. Relationships formed in colleges will be replaced with other relationship-building systems – Social structures are changing and the value of relationships built in college, while often quite valuable, are equally often overrated. Just as a dating relationship today is far more likely to begin online, business and social relationships in the future will also happen in far different ways.
  8. Sudden realization that “the emperor has no clothes!” – Education, much like our money supply, is a system built on trust. We are trusting colleges to instill valuable knowledge in our students, and in doing so, create a more valuable workforce and society. But when those who find no tangible value begin to openly proclaim, “the emperor has no clothes!” colleges will find themselves in a hard-to-defend downward spiral.

It is notable that many of the issues raised are fully addressed by online universities like AU, and have been for decades. We have moved on to bigger and more intractable problems! In particular, the idea that classes and teachers are a fixture that cannot be changed is a bit quaint. It is also fair to say that Frey has only a rough idea of how education works: the notion that high-quality lectures have anything much to do with learning or the university experience shows a failure to understand the beast – but then, the same is true of potential students and more than a few professors. But the pricing competition, credentialling competition, relationship-building and, above all, the ‘emperor has no clothes’ arguments hit home, and I think they will have the effects he anticipates much sooner than 2030. Nothing new here, and a bit coarse, but it clearly expresses the stark reality of the consequences.

Address of the bookmark: http://www.futuristspeaker.com/2013/07/by-2030-over-50-of-colleges-will-collapse/

Virtual Canuck: Assessing teachers’ digital competencies

Terry Anderson on an Estonian approach to assessing teacher competences (and other projects) using Elgg – the same framework that underpins the Landing. I’ve downloaded the tool they have developed, Digimina, and will be trying it out, not just for exactly the purposes for which it was developed, but as the foundation for a more generalized toolset for sharing the process of assessment. It may spark some ideas, I think.

A nice approach to methodology: Terry prefers the development of design principles as the ‘ultimate’ aim of design-based research (DBR), but I like the notion of software as a hypothesis that is used here. It’s essentially a ‘sciency’ way of describing the process of trying out an idea to see whether it works: one that makes no particular claims to generality, but that both derives from and feeds a model of what can be done, what needs to be done, and why it should be done. The generalizable part is not the final stage, but the penultimate stage of design in this DBR model. In this sense, it formalizes the very informal notion of bricolage, capturing some of its iterative nature. It’s not quite enough, I think, any more than other models of DBR quite capture the process in all its richness, because the activity of formulating that hypothesis itself follows a very similar pattern, at a much finer-grained scale, to that of the bigger model. When building code, you try out ideas, see where they take you, and that inspires new ideas through the process of writing as much as of designing and specifying. Shovelling that into a large-scale process model hides where a significant amount of the innovation actually happens, perhaps over-emphasizing the importance of explicit evaluation phases and underplaying the role of construction itself.

Address of the bookmark: http://terrya.edublogs.org/2015/04/24/assessing-teachers-digital-competencies/