Why do we not ban use of cellphones in online learning?

‘Banning mobile phones is cargo cult science’ is a good, laudably brief, dismissive, critical review of the dangerously reported, recently published London School of Economics study that, amongst other things, shows a correlation between banning mobile phones in schools and improved grades. As the title of the post suggests, the report does not show that banning mobile phones in schools improves grades in any way at all, despite the fact that the report writers do seem to believe that this is what they have shown: indeed, they recommend banning mobile phones as a cost-effective measure to improve grades! That is so opposite to the obvious conclusion it is not even funny. To me, it shows a terrible failure at a massive systemic level. It’s not cellphone use that’s the problem – it’s the teaching. More precisely, it’s the system of teaching. I am sure that the vast majority of individual teachers are doing wonders, under extremely adverse circumstances. But they are doing so in a completely broken system. 

The interesting thing for me is that this would never come up as an issue for online and distance learners. Well, almost never: perhaps occasionally, study guides might recommend that you set aside undistracted time for some (not all) kinds of study, and webinar leaders might suggest that participants switch off phones and other distractions. But this is, at most, a bit of practical advice, not an edict.

The point here is that the command-and-control teaching methods of traditional classrooms have no meaning or relevance in online learning. This makes it all the more odd that we continue to see substantially the same pedagogies being used for online teaching as those found in the over-controlling environment of teacher-led classrooms. Obvious culprits like lecture-based MOOCs are just the more visible tip of this weird bit of skeuomorphism, but the general principle runs across the board, from instructivist textbooks through to more enlightened uses of social constructivist methods in discussion forums. Too often, implicitly or explicitly, we act under the illusion that how we teach is how people learn, as though we still had students trapped in a classroom, controlling (almost literally) their every move. The unholy and inseparable continued twinning of fixed-length courses and the use of grades to drive student progress is very much to blame, though a lack of imagination doesn’t help. These technologies evolved because of the physics of classrooms, not because they are good ways to support learning. In almost every way, they are actually antagonistic to learning. Online learning can, does and should liberate learners, giving them control. So let’s stop teaching them as though we were the ones in charge. It is crazy that we should voluntarily shackle ourselves when there is absolutely no need for it.

Address of the bookmark: http://www.educate1to1.org/banning-mobile-phone-is-cargo-cult-science/

Interview with George Siemens in AU student union's Voice magazine (part 3)

Final part of a three-part interview with George Siemens (following on from the first and second parts), in which he describes some thoughts about the future and nature of educational systems and has some great stuff to say about motivation and assessment in particular. I like this:

“Make things relevant to students, but also give students an opportunity to write themselves into the curriculum. That is, to be able to see the outcome of the benefits, the way in which it can make them a better person, and the way it can make the world a better place. You can’t directly motivate someone, but you can set conditions under which people of different attributes will become motivated.”

Exactly so – it’s about creating conditions, not about telling or controlling. It’s about making and supporting a space (physical, virtual, social, conceptual, organizational, temporal, curricular, etc) that learners both belong to and own. 

Address of the bookmark: http://www.voicemagazine.org/articles/featuredisplay.php?ART=10462

Hacking Our Brains: Motivating Others By Snatching Back Rewards

Ingenious approach to extrinsic motivation – give something, then use the threat of taking it away to ‘motivate’ people to do what you want them to do. It’s an old idea, but one that has not seen as much use as you would expect in things like student grading or occupational performance assessments. Though tied up in the language of the endowment effect, the essence of this method is punishment rather than reward, and we tend to be more punishment-averse than reward-seeking, so it works ‘better’. It’s still rampant behaviourism, presented in a cognitivist wrapper to make it look shinier. 

As with all forms of extrinsic motivation, this does two things, both inimical to learning. Firstly, it leads to a focus on avoiding the punishment, rather than on the pleasure of the learning activity itself. I don’t see this as a great leap forward from rewarding with grades in a learning context – it just makes the motivation even more extrinsic and even more likely to destroy any intrinsic motivation a learner might have had in the first place. Once the punishment has been avoided, the value of the activity itself is diminished and, mostly, the things that make it useful are forgotten. Secondly, it is an even worse assertion of power than a reward. Again, I don’t see this as having any meaningful value in a learning context. It teaches greater compliance, not the topic at hand. That’s a bad lesson, unless you think that education is preparation for a life in which you should be a compliant tool that reluctantly does the bidding of those in power through fear of punishment. A society organized that way is not the kind of society I want to live in. Surely we have grown out of this? If not, surely we should?

The notion that people need to be forced to comply in order to learn what we want to teach them is barbaric, distasteful and, ultimately, deeply counter-productive. Countless generations of learners have had their love of learning viciously attacked by such attitudes, and have learned with less efficiency, less depth, and less value to society as a result. It’s a systemic failure on an unbelievably massive scale, embedded so deeply in our educational systems we hardly even notice it any more. Done to one person it is bad enough but, done systematically at a worldwide scale, to ever younger generations of children, it hampers the intelligence and compassion of our species in ways that cut deep and leave us bleeding. Despite this, most of us still manage to come out of it without all of our innate love of learning completely destroyed. Our intrinsic motivation can be a powerful counter-force: just occasionally, what we are taught aligns well enough with what we want and need to learn; we discover other ways and things to learn that are meaningful and not imposed upon us; and there are quite a lot of great teachers out there who manage to enthuse and inspire despite the odds stacked against them. Few if any of us survive unscathed, though most of us get something useful here and there despite the obstacles. But we could be so much more.

Address of the bookmark: http://readwrite.com/2015/05/07/reward-then-deduct-loss-aversion-brain-hack

Scientists: Earth Endangered by New Strain of Fact-Resistant Humans

“The research, conducted by the University of Minnesota, identifies a virulent strain of humans who are virtually immune to any form of verifiable knowledge, leaving scientists at a loss as to how to combat them.”

Marvellous.

Address of the bookmark: http://www.newyorker.com/humor/borowitz-report/scientists-earth-endangered-by-new-strain-of-fact-resistant-humans

Half an Hour: The Study, and Other Stuff

This is the latest in a fascinating ongoing argument between George Siemens and Stephen Downes over the value, reliability and focus of Preparing for the Digital University, a report created by George, Dragan Gasevic, Shane Dawson and many others on the current state of research and practice in (mainly) online and distance learning. Because I am quoted in Stephen’s post, and in George’s post to which it is a response, as agreeing with Stephen, I’d like to clarify just what I agree with.

Where I disagree with Stephen is that I do think it is a good report that pulls together a lot of good research as well as other sources to provide a rich and informative picture of what universities have been doing in the field of online and distance learning, and how they got there. I think its audience is mainly not seasoned edtech researchers, though there is a lot of valuable synthesis and analysis in it that those of us in the field can and will certainly use. I see it as a strength that it does not just limit itself to ‘reliable’ research (whatever that may be – I’ve seldom found an unequivocal example of that elusive beast) and I am quite happy with the range and depth of the sources it uses. This is an expert summary and analysis by some of the top experts in the field who know whereof they speak. Of course it misses some things and over-emphasizes others, but that is the nature of the animal and I think it does a very good job of remaining broad, informative and clear.

I think Stephen and I are in rough agreement, though, in observing the boundary that the report does not try too hard to cross: the challenge that some of the research presents to the very notion of the university as we now know it and, to a lesser extent, the under-representation of ideas and research that relate to that. The latter point is a tricky systemic problem because, on the whole, the majority of work and writing in that space is under-represented in literature that, because it tends to come from universities, tends to focus on universities. As this report is about universities, it is quite reasonable that this is the body of literature it uses.

The relative lack of beyond-the-institution thinking is, I believe, a concern for Stephen but, for me, it’s just something that, if I were writing it, I would want to add more of. It seems to me that this report will have most value in providing information for policy makers, managers of institutions, and those who are beginning to discover the field. It will open some eyes, help people to avoid old mistakes, and open up some important discussions. But, thanks to the intentional focus on the university and how we got to now, the structures, processes and measures are mostly rooted in an assumption that the university as we know it can and should persist. The discussion that emerges will inevitably tend to focus on how digital technologies can be used to do what we already do in (from a birds-eye perspective) only slightly different ways. In doing so, it may blind participants to the very real threats to their whole way of life as well as opportunities that are worth grasping. This may not be the best idea but it is not a weakness in the report as such – it is, after all, doing exactly what it says on the box. In fact, it is to its credit that it does address some approaches and tools that are transformative. 

I agree with George (and with Stephen’s hopes) that universities are and will continue to be really important institutions that can and should offer great value to our societies for a long time to come. We would invent them very differently, or maybe not invent anything like them at all, if we started afresh, knowing what we now know. The reality is, however, that this is what we have and it has enormous momentum that is not going to stop any time soon so, if we are to make the best use of it, we should both understand and make improvements to it. This report is a solid foundation for that. There are some risks that it might, without further reflection, lead to ‘improvement’ of the wrong things – those that are counter-productive to the goals of increasing knowledge and learning in the world – and so further ensconce harmful practices. My pet hobby horses include courses, grades, and the unholy linking of learning and accreditation, for instance.  But there are other huge problems like the trend to systemic exclusion of disadvantaged people, the treatment of students as customers and the unnatural separation of disciplines and fields. Stephen mentions more. With that in mind, it would be useful to think a bit further about ways that the foundations of the university like teaching, accreditation, community, knowledge production, knowledge dissemination, being a knowledge repository, a source of expertise and so on are being not-too-subtly eroded by things that are enabled by the net, as well as to further critique the embedded patterns, limitations, biases, and blind-spots that make those foundations brittle and liable to crack or crumble.

The basis for that is all there in this report but, on reflection, I think the discussion of those issues is something for a further report as it demands a different level and kind of analysis. I am not at all sure that the Gates Foundation would want to fund such a thing, but it should. Actually, maybe that line of thinking is a bit too narrow. After all, the exchange between George and Stephen, as well as contributions by others (e.g. George Veletsianos), is already, and self-referentially, doing much the same job such a report might do, and maybe doing it better. The learning dialogue and knowledge creation that is occurring through this distributed conversation is as rich as the report itself and, in its own way, at least as valuable. If the report had not been written then that dialogue might not have occurred, so it is a good anchor, but it is part of a richer knowledge network. And that’s exactly the point: technologies like social media are deeply subversive because they enable us to do some of the job that universities traditionally do without requiring a university as a necessary intermediary, with all the limitations and exclusions that implies. The patterns, technologies, economic models, checks and balances are not there yet to replace all of a university’s functions – we have much research to do and many inventions yet to invent, and I am very aware that it is only because of the university that I, for one, am able to participate in this – but the change is already happening, and it is quite profound.

Address of the bookmark: http://halfanhour.blogspot.ca/2015/05/the-study-and-other-stuff.html

Half an Hour: Research and Evidence

Stephen Downes defends his attack on the recent report on the current state of online (etc.) learning developed by George Siemens, Dragan Gasevic and Shane Dawson.

I have mixed feelings about this. As such reports go, I think it is a good one. It does knit together a fair sample of the literature, including bits from journalists and bloggers as well as more and less credible research, in a form that I think is digestible enough and sufficiently broad for corporate folk who need to get up to speed, including those running academies. Its methods are clear and its outputs are accessible. Though written by very well-informed researchers (not just George) and making use of copious amounts of research, I don’t think it is really aimed at researchers in the field. My impression is that it’s mainly for the under-informed policy makers that need to be better informed, not for those of us who already know this stuff, and it does that job well. It’s a lot more than journalism, but a little less than an academic paper. I can also see a useful role for it for those that need to know roughly where we are now in online learning (e.g. edtech developers), but that are not seeking to become researchers in the field. 

I think the more fundamental problem, and one that both George and Stephen seem to be fencing around, is in its title. The suggestion that it is about ‘preparing for the digital university’ is tricky on two counts. First of all, ‘preparing’ seems a funny word to use: it’s like saying we are ‘preparing’ for a storm when the waves are high around us and we are on the verge of capsizing. Secondly, and more tellingly, ‘digital university’ implies an expected outcome that is rather at odds with a lot of both George and Stephen’s work. The assumption that a university is the answer to the problem (which is what the title implies) is tricky, to say the least, especially given quite a lot of the discussion surrounding incursions by commercial and alternative (especially bottom-up) forms of accreditation and learning that step far outside the realms of traditional academia and challenge its very foundations. The final chapter mentions quite a few tools and approaches that relegate the institution to a negligible role, but there are hints of this scattered through much of the report, from commercial incursions to uses of reputation measures on Stack Overflow. If we are thinking of preparing for the future, the language and methods of formal education, courses, and mediaeval institutions might be a fair place to start but maybe not the place to aim for. There’s a tension throughout the report between the soft disruptive nature of digital technologies (not so much the tools but what people do with them) and the hard mechanization of arbitrarily evolved patterns: for instance, between social recommendation and automated marking. The latter reinforces the university as an institution even if it does upset some power structures and working practices a little. The former (potentially) disrupts the notion and societal role of the university itself. For the most part, this report is a review of the history and current state of online/distance/blended learning in formal education. This is in keeping with the title, but not with the ultimate thrust of at least a few of the findings. That does rather stifle the potential for really getting under the skin of the problem. It’s a view from the inside, not from above. Though it hints at transformation, it is ultimately in defence of the realm. Personally speaking, I would have liked to see a bit more critique of the realm itself. The last chapter, in particular, provides some evidence that could be used to make such a case, but does not really push it where it wants to go. But I’m not the one this report is aimed at.

 

Address of the bookmark: http://halfanhour.blogspot.ca/2015/05/research-and-evidence.html

The cost of time

A few days back, an email was sent to our ‘allstaff’ mailing list inviting us to join in a bocce tournament. This took me a bit of time to digest, not least because I felt impelled to look up what ‘bocce’ means (it’s an Italian variant of pétanque, if you are interested). I guess this took a couple of minutes of my time in total. And then I realized I was probably not alone in this – that over a thousand people had also been reading it and, perhaps, wondering the same thing. So I started thinking about how we measure costs.
 

The cost of reading an email

A single allstaff email at Athabasca will likely be read by about 1200 people, give or take. If such an email takes one minute to read, that’s 1200 minutes – 20 hours – of the institution’s time being taken up with a single message. This is not, however, counting the disruption costs of interrupting someone’s train of thought, which may be quite substantial. For example, this study from 2002 reckons that, not counting the time taken to read email, it takes an average of 64 seconds to return to previous levels of productivity after reading one. Other estimates based on different studies are much higher – some suggest the real recovery time from interruptions to tasks could be as high as 15-20 minutes. Conservatively, though, it is probably safe to assume that, taking interruption costs into account, a typical allstaff email that is read but not acted upon consumes around two minutes of a person’s time: in total, that’s about 40 hours of the institution’s time for every message sent. Put another way, we could hire another member of staff for a week for the time taken to deal with a single allstaff message, not counting the work entailed by those that do act on the message, nor the effort of writing it. It would therefore take roughly 48 such messages to account for a whole year of staff time. We get hundreds of such messages each year.
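For anyone inclined to play with the assumptions, here is a minimal back-of-envelope sketch (in Python) of the same arithmetic. The staff count, minutes per message, and the hours treated as a person-week or person-year are the rough estimates from the paragraph above, not measured values.

    # Back-of-envelope cost of a single allstaff email, using the rough figures above.
    STAFF = 1200             # approximate number of recipients (an estimate, not a measurement)
    PERSON_WEEK_HOURS = 40   # hours treated here as one person-week
    PERSON_YEAR_WEEKS = 48   # working weeks treated here as one person-year

    def allstaff_cost_hours(minutes_per_person: float, staff: int = STAFF) -> float:
        """Total institutional time, in hours, consumed by one allstaff message."""
        return minutes_per_person * staff / 60

    reading_only = allstaff_cost_hours(1)    # ~20 hours for reading alone
    with_recovery = allstaff_cost_hours(2)   # ~40 hours once interruption costs are included

    messages_per_person_year = PERSON_YEAR_WEEKS * PERSON_WEEK_HOURS / with_recovery
    print(f"One allstaff message: about {with_recovery:.0f} hours (~one person-week)")
    print(f"Messages that add up to a person-year: about {messages_per_person_year:.0f}")

Changing the constants shows how sensitive the conclusion is to the assumptions: even halving the per-message cost still leaves each allstaff email consuming days of institutional time.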
 
But it’s not just about such tangible interruptions. Accessing emails can take a lot of time before we even get so far as reading them. Page rendering just to view a list of messages on the web front end for our email system is an admirably efficient 2 seconds (i.e. 40 minutes of the organization’s time for everyone to be able to see a page of emails, not even to read their titles). Let’s say we all did that an average of 12 times a day – that’s 8 hours, or more than a working day of the institution’s time, taken up with waiting for that page to render each day. Put another way, as we measure such things, if it took four seconds rather than two, the extra waiting would add up to a full-time position: we would have to fire someone to pay for it. As it happens, for another university for which I have an account, using MS Exchange, simply getting to the login screen of its web front end takes 4 seconds. Once logged in (a further few seconds, thanks to Exchange’s insistence on forcing you to tell it that your computer is not shared even though you have told it that a thousand times before), loading the page containing the list of emails takes a further 17 seconds. If AU were using the same system, using the same metric of 12 visits each day, that could equate to around 68 hours of the institution’s time every single day, simply to view a list of emails, not including a myriad of other delays and inefficiencies when it comes to reading, responding to and organizing such messages. Of course, we could just teach people to use a proper email client and reduce the delay to one that is imperceptible, because it occurs in the background – webmail is a truly terrible idea for daily use – or simply remind them not to close their web browsers so often, or to read their emails less regularly. There are many solutions to this problem. Like all technologies, especially softer ones that can be used in millions of ways, it ain’t what you do, it’s the way that you do it. 
 

But wait – there’s more

Email is just a small part of the problem, though: we use a lot of other websites each day. Let’s conservatively assume that, on average, everyone at AU visits, say, 24 pages in a working day (for me that figure is always vastly higher) and that each page averages out at about 5 seconds to load. That’s two minutes per person. Multiplied by 1200, it’s another week of the institution’s time ‘gone’ every day, simply waiting for pages to load.
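Again, purely as an illustrative sketch in the same spirit as before (the per-person page counts and load times are the assumptions above, not measurements), the daily cost works out like this:

    # Rough daily cost of ordinary page loads across the institution, using the assumptions above.
    STAFF = 1200
    PAGES_PER_PERSON_PER_DAY = 24   # assumed average page visits per person per working day
    SECONDS_PER_PAGE = 5            # assumed average load time per page

    hours_per_day = STAFF * PAGES_PER_PERSON_PER_DAY * SECONDS_PER_PAGE / 3600
    print(f"About {hours_per_day:.0f} hours per day spent waiting for pages – roughly a person-week")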
 
And then there are the madly inefficient bureaucratized processes that are dictated and mediated by poorly tailored software. When I need to log into our CRM system I reckon that simply reading my tasks takes a good five minutes. Our leave reporting system typically eats 15 minutes of my time each time I request leave (it replaces one that took 2-3 minutes).  Our finance system used to take me about half an hour to add in expenses for a conference but, since downgrading to a baseline version, now takes me several hours, and it takes even more time from others that have to give approvals along the way. Ironically, the main intent behind implementing this was to save us money spent on staffing. 
 
I could go on, but I think you see where this is heading. Bear in mind, though, that I am just scratching the surface. 
 

Time and work

My point in writing this is not to ask for more efficient computer and admin systems, though that would indeed likely be beneficial. Much more to the point, I hope that you are feeling uncomfortable or even highly sceptical about how I am measuring this. Not about the figures: it doesn’t much matter whether I am wrong with the detailed timings or even the math. It is indisputable that we spend a lot of time dealing with computer systems and the processes that surround them every day, and small inefficiencies add up. There’s nothing particularly peculiar to ICTs about this either – for instance, think of the time taken to walk from one office to another, to visit the mailroom, to read a noticeboard, to chat with a colleague, and so on. But is that actually time lost, or does it even equate precisely to time spent?  I hope you are wondering about the complex issues with equating time and dollars, how we learn, why and how we account for project costs in time, the nature of technologies, the cost vs. value of ICTs, the true value of bocce tournament messages to people that have no conceivable chance of participating in them (much greater than you might at first imagine), and a whole lot more. I know I am. If there is even a shred of truth in my analysis, it does not automatically lead to the conclusion that the solution is simply more efficient computer systems and organizational procedures. It certainly does bring into question how we account for such things, though, and, more interestingly, it highlights even bigger intangibles: the nature and value of work itself, the nature and value of communities of practice, the role of computers in distributed intelligence, and the meaning, identity and purpose of organizations. I will get to that in another post, because it demands more time than I have to spend right now (perhaps because I receive around 100 emails a day, on average).
 

Can Behavioral Tools Improve Online Student Outcomes? Experimental Evidence from a Massive Open Online Course

Well-written and intelligently argued paper from Richard W. Patterson, using an experimental (well, nearly) approach to discover the effects of a commitment device, a reminder tool, and a focusing tool on course completion and performance in a MOOC. It seems that providing a tool that supports students in pre-committing to limits on ‘distracting Internet time’ (and that both measures and enforces those limits) has a striking positive effect, though largely on those who appear to be extrinsically motivated: they want to successfully complete the course, rather than to enjoy the process of learning. Reminders are pretty useless for anyone (I concur – personally I find them irritating and, after a while, guilt-inducing and thus more liable to cause procrastination) and blocking distracting websites has very little if any effect – unsurprising really, because such blocks don’t really block distractions at all: if you want to be distracted, you will simply find another way. This is good information.

It seems to me that those who have learned to be extrinsically motivated might benefit from this, though it will reinforce their dangerous predilection, encourage bad habits, and benefit most those that have already figured out how to work within a traditional university system and that are focused on the end point rather than the journey. While I can see some superficially attractive merit in providing tools that help you to achieve your goals by managing the process, it reminds me a little of diet plans and techniques that, though sometimes successful in the short term, are positively harmful in the long term. This is the pattern that underlies all behaviourist models – it sort-of works up to a point (the course-setter’s goals are complied with), but the long-term impact on the learner is generally counter-productive. This approach will lead to more people completing the course, not more people learning to love the subject and hungry to apply that knowledge and learn more. In fact, it opposes such a goal. This is not about inculcating habits of mind but about making people do things that, though they want to reach some further end as a result, they do not actually want to do and, once the stimulus is taken away, will likely never want to do again. It is far better to concentrate on supporting intrinsic motivation and to build learning activities that people will actually want to do – challenges that they feel impelled to solve, that support social needs, and over which they feel some control. For that, the instructivist course format is ill-suited to the needs of most. 

Abstract

Online education is an increasingly popular alternative to traditional classroom-based courses. However, completion rates in online courses are often very low. One explanation for poor performance in online courses is that aspects of the online environment lead students to procrastinate, forget about, or be distracted from coursework. To address student time-management issues, I leverage insights from behavioral economics to design three software tools including (1) a commitment device that allows students to pre-commit to time limits on distracting Internet activities, (2) a reminder tool that is triggered by time spent on distracting websites, and (3) a focusing tool that allows students to block distracting sites when they go to the course website. I test the impact of these tools in a large-scale randomized experiment (n=657) conducted in a massive open online course (MOOC) hosted by Stanford University. Relative to students in the control group, students in the commitment device treatment spend 24% more time working on the course, receive course grades that are 0.29 standard deviations higher, and are 40% more likely to complete the course. In contrast, outcomes for students in the reminder and focusing treatments are not statistically distinguishable from the control. These results suggest that tools designed to address procrastination can have a significant impact on online student performance. 

Address of the bookmark: http://www.human.cornell.edu/pam/academics/phd/upload/PattersonJMP11_18.pdf