A Quartz article that claims (accurately) that p-learning bootcamps dominate for those learning programming and other technical skills and (inaccurately) that the reason is that e-learning is much less engaging. In fact, there’s a sneaky and almost unnoticeable sleight of hand here: what is actually claimed is that online learning can be less engaging and, from that indubitable fact, the article extrapolates from the particular to the general, asserting that all online learning suffers the same way.
Nonsense.
Yes, there is a lot of rubbish online learning and, in fairness, even a well-evolved establishment like Athabasca University has some of it, at least for some people some of the time. But that’s not at all surprising because every university (online or not) presents the same problem and, if you are trying to make a single learning design work for everyone, you are sure to make it too complex for some and too boring for others (personalization technologies and intelligent personal learning designs notwithstanding). Athabasca has a lot less of it thanks to its extremely rigorous quality assurance processes, but it would be crazy to imagine that everything it does is perfect for every learner at every time, as much as it would be crazy to imagine that everyone at a bricks and mortar institution gets a wonderful learning experience every time. Crazier, in fact.
The thing is, it ain’t what you do, it’s the way that you do it, that’s what gets results. It’s not that online learning is less effective (countless studies prove otherwise), it’s simply that it tends to be done in a way that gives control and flexibility to learners. The immersion of physical bootcamps does have one major and very distinctive benefit: it pulls people out of everything else and forces them to engage for many hours in the day. The ‘bootcamp’ part of it ensures that they are well and truly immersed, with no way to back out apart from backing out completely. It’s not that the learning experience is any better – very far from it in most cases. Most bootcamps I have seen use inane pedagogies that would not pass muster even in a conventional university, let alone somewhere like Athabasca University that actually pays attention to such things. It’s just that people are there and they have to do stuff (a lot of social pressure is involved, as well as loss aversion), so they wind up learning a lot simply because they put in the hours, and that happens simply because they are enrolled on a bootcamp and cannot get away. Not dissimilar to the way traditional universities work, as it happens, just a lot more intense.
Online learning gives people more choice and greater control so, if they are not innately fascinated or have not single-mindedly set aside enough time, then, of course, they wind up learning less quickly, because they put in less time over a longer period. Duh. It’s not rocket science. This is not about teaching effectiveness or smart learning designs; it is simply about stopping people from being distracted and doing other things. The solution, if a solution is needed, is for online learners to block out the time and drop the distractions. I can imagine plenty of learning designs for online learning that would make that happen – simply making it real-time and using smart tools for desktop sharing, real-time interaction, and monitoring of progress would achieve much the same results, as long as the ground rules are fully understood by all concerned. I was at a conference the other day that did pretty much that. Such an approach doesn’t happen much, of course, for all the reasons people go for online learning in the first place, inasmuch as such methods take away the control and flexibility that make it so appealing. On the other hand, perhaps there is a place for such techniques. It seems there is a market and, as long as expectations are carefully managed (you don’t distract yourself with reading emails and engaging in social media, you pledge to be available, you block out your calendar), it might work pretty well. But why bother? Seems to me that online learning is better precisely because of the control it gives people. If they need extrinsic motivation to force them to learn then that’s the problem that should be solved before enrolling on any courses, and solving it will do them a world of good in many other situations too.
Address of the bookmark: https://qz.com/1064814/the-awkward-irony-of-not-being-able-to-take-a-good-coding-bootcamp-online/
Earlier today I responded to a prospective student who was, amongst other things, seeking advice on strategies for success on a couple of our self-paced programming courses. My response was just a stream of consciousness off the top of my head, but I think it might be useful to others. Here, then, with some very light editing to remove references to specific courses, are a few fairly random thoughts on how to succeed on a self-paced online programming course (and, for the most part, other courses) at Athabasca University. In no particular order:

Online learning can be great fun as long as you are aware of the big differences, primarily relating to control and personal agency. Our role is to provide a bit of structure and a supportive environment to enable you to learn, rather than to tell you stuff and make you do things, which can be disconcerting at first if you are used to traditional classroom learning. This puts more pressure on you, and more onus on you to organize and manage your own learning, but don’t ever forget that you are never really alone – we are here to help. In summary, I think it really comes down to three big things, all of which are really about motivation, and all of which are quite different when learning online compared to face-to-face. This advice is by no means comprehensive! If you have other ideas or advice, or things that have worked for you, or things that you disagree with, do feel free to share them in the comments.

An interview with me by Graham Allcott, author of the bestselling How to be a Productivity Ninja and other books, for his podcast series Beyond Busy, and as part of the research for his next book. In it I ramble a lot about issues like social media, collective intelligence, motivation, technology, education, leadership, and learning, and Graham makes some incisive comments and asks some probing questions. The interview was conducted on the landing of the Grand Hotel, Brighton, last year.
Address of the bookmark: http://getbeyondbusy.com/e/35495d7ba89876L/?platform=hootsuite

The always wonderful Alfie Kohn describes an airline survey that sought to find out how it compared with others, which he chose not to answer because the airline was thus signalling no interest in providing the best quality experience possible, just aiming to do enough to beat the competition. The thrust of his article is that much the same is true of standardized tests in schools. As Kohn rightly observes, the central purpose of testing as it tends to be used in schools and beyond is not to evaluate successful learning but to compare students (or teachers, or institutions, or regions) with one another in order to identify winners and losers.

‘When you think about it, all standardized tests — not just those that are norm-referenced — are based on this compulsion to compare. If we were interested in educational excellence, we could use authentic forms of assessment that are based on students’ performance at a variety of classroom projects over time. The only reason to standardize the process, to give all kids the same questions under the same conditions on a contrived, one-shot, high-stakes test, is if what we wanted to know wasn’t “How well are they learning?” but “Who’s beating whom?”’

It’s a good point, but I think it is not just an issue with standardized tests. The problem occurs with all the summative assessments (the judgments) we use. Our educational assessment systems are designed to create losers as much as they are made to find winners. Whether they follow the heinous practice of norm-referencing or not, they are sorting machines, built to discover competent people and to discard the incompetent. In fact, as Kohn notes, when there are too many winners we are accused of grade inflation or a dropping of standards. This makes no sense if you believe, as I do, that the purpose of education is to educate.
In a system that demands grading, unless 100% of students that want to succeed get the best possible grades, we have failed to make the grade ourselves. The problem, though, is not so much the judgments themselves as the intimate, inextricable binding of judgment with the learning process. Given enough time, effort, and effective teaching, almost anyone can achieve pretty much any skill or competence, as long as they stick at it. We have very deliberately built a system that does not aim for that at all. Instead, it aims to sort wheat from chaff. That’s not why I do the job I do, and I hope it is not why you do it either, but that’s exactly what the system is made to do. And yet we (at least I) think of ourselves as educators, not judges. These two roles are utterly separate and irreconcilably inconsistent. It might be argued that some students don’t actually want to get the best possible grades. True. And sure, we don’t always want or need to learn everything we could learn. If I am learning how to use a new device or musical instrument I sometimes read/watch enough to get me started and do not go any further, or skim through to get the general gist. Going for a less-than-perfect understanding is absolutely fine if that’s all you need right now. But that’s not quite how it works in formal education, in part because we punish those that make such choices (by giving lower grades) and in part because we systematically force students to learn stuff they neither want nor need to learn, at a time that we choose, using the lure of the big prizes at the end to coax them. Even those that actually do want or need to learn a topic must stick with it to the bitter end, regardless of whether it is useful to do the whole thing, regardless of whether they need more or less of it, regardless of whether it is the right time to learn it, regardless of whether it is the right way for them to learn it.
They must do all that we say they must do, or we won’t give them the gold star. That’s not even a good way to train a dog. It gets worse. At least dogs normally get a second chance. Having set the bar, we normally give just a single chance at winning or, at best, an option to be re-tested (often at a price and usually only once), rather than doing the human thing of allowing people to take the time they need and learn from their mistakes until they get as good as they want or need to get. We could learn a thing or two from computer games – the ability to repeat over and over, achieving small wins all along the way without huge penalties for losing, is a powerful way to gain competence and sustain motivation. It is better if students have some control over the pacing but, even at Athabasca, an aggressively open university that does its best to give everyone all the opportunity they need to succeed, where self-paced learners can choose the point at which they are ready to take the assessments, we still have strict cut-offs for contract periods and, like all the rest, we still tend to allow just a single stab at each assessment. In most of my own self-paced courses (and in some others) we try to soften that by allowing students to iterate without penalty until the end but, when that end comes, that’s still it. This is not for the benefit of the students: this is for our convenience. Yes, there is a cost to giving greater freedom – it takes time, effort, and compassion – but that’s a business problem to solve, not an insuperable barrier. WGU’s subscription model, for instance, in which students pay for an all-you-can-eat smorgasbord, appears to work pretty well. It might be argued that there are other important lessons that we teach when we competitively grade. Some might suggest that competition is a good thing to learn in and of itself, because it is one of the things that drives society and everyone has to do it at least sometimes. 
Sure, but cooperation and mutual support are usually better, or at least an essential counterpart, so embedding competition as the one and only modality seems a bit limiting. And, if we are serious about teaching people how to compete, then that is what we should do, and not actively put them in jeopardy to achieve that: as Jerome Bruner succinctly put it, ‘Learning something with the aid of an instructor should, if instruction is effective, be less dangerous or risky or painful than learning on one’s own’ (Bruner 1966, p.44). Others might claim that sticking with something you don’t like doing is a necessary lesson if people are to play a suitably humble/productive role in society. Such lessons have a place, I kind-of agree. Just not a central place, just not a pervasive place that underpins or, worse, displaces everything else. Yes, grit can be really useful, if you are pursuing your goals or helping others to reach theirs. By all means, let’s teach that, let’s nurture that, and by all means let’s do what we can to help students see how learning something we are teaching can help them to reach their goals, even though it might be difficult or unpleasant right now. But there’s a big difference between doing something for self or others, and subservient compliance with someone else’s demands. ‘Grit’ does not have to be synonymous with ‘taking orders’. Doing something distasteful because we feel we must, because it aligns with our sense of self-worth, because it will help those we care about, because it will lead us where we want to be, is all good. Doing something because someone else is making us do it (with the threat/reward of grades) might turn us into good soldiers, might generate a subservient workforce in a factory or at a coal face, might keep an unruly subjugated populace in check, but it’s not the kind of attitude that is going to be helpful if we want to nurture creative, caring, useful members of 21st Century society.
It might be argued that accreditation serves a powerful societal function, ranking and categorizing people in ways that (at least for the winners and for consumers of graduates) have some value. It’s a broken and heartless system, but our societies do tend to be organized around it and it would be quite disruptive if we got rid of it without finding some replacement. Without it, employers might actually need to look at evidence of what people have done, for instance, rather than speedily weeding out those with insufficient grades. Moreover, circularly enough, most of our students currently want and expect it because it’s how things are done in our culture. Even I, a critic of the system, proudly wear the label ‘Doctor’, because it confers status and signals particular kinds of achievement, and there is no doubt that it and other qualifications have been really quite useful in my career. If that were all accreditation did then I could quite happily live with it, even though the fact that I spent a few years researching something interesting about 15 years ago probably has relatively little bearing on what I do or can do now. The problem is not accreditation in itself, but that it is inextricably bound to the learning process. Under such conditions, educational assessment systems are positively harmful to learning. They are anti-educative. Of necessity, because they determine precisely what students should do and how they should do it, they sap intrinsic motivation and undermine love of learning. Even the staunchest defenders of tightly integrated learning and judgment would presumably accept that learning is at least as important as grading so, if grading undermines learning (and it quite unequivocally does), something is badly broken. It does not have to be this way. I’ve said it before but it bears repeating: at least a large part of the solution is to decouple learning and accreditation altogether.
There is a need for some means to indicate prowess, sure. But the crude certificates we currently use may not be the best way to do that in all cases, and it doesn’t have to dominate the learning process to the point of killing love of learning. If we could drop the accreditation role during the teaching process we could focus much more on providing useful feedback, on valorizing failures as useful steps towards success, on making interesting diversions, on tailoring the learning experience to the learner’s interests and capabilities rather than to credential requirements, on providing learning experiences that are long enough and detailed enough for the students’ needs, rather than a uniform set of fixed lengths to suit our bureaucracies. Equally, we could improve our ability to provide credentials. For those that need it, we could still offer plenty of accreditation opportunities, for example through a portfolio-based approach and/or collecting records of learning or badges along the way. We could even allow for some kind of testing, like oral, written, or practical exams for those that must, where it is appropriate to the competence (not, as now, as a matter of course), and we could actually do it right, rather than in ways that positively enable and reward cheating. None of this has to be bound to specific courses. This decoupling would also give students the freedom to choose other ways of learning apart from our own courses, which would be quite a strong incentive for us to concentrate on teaching well. It might challenge us to come up with authentic forms of assessment that allow students to demonstrate competence through practice, or to use evidence from multiple sources, or to show their particular and unique skillset. It would almost certainly let us do both accreditation and teaching better.
And it’s not as though we have no models to work from: from driving tests to diving tests to uses of portfolios in job interviews, there are plenty of examples of ways this can work already. Apart from some increased complexities of managing such a system (which is where online tools can come in handy, and where opportunities exist for online institutions with which conventional face-to-face institutions cannot compete), this is not a million miles removed from what we do now: it doesn’t require a revolution, just a simple shift in emphasis, and a separation of two unnecessarily intertwined and mutually inconsistent roles. Especially when processes and tools already exist for that, as they do at Athabasca University, it would not even be particularly costly. Inertia would be a bigger problem than anything else, but even big ships can eventually be steered in other directions. We just have to choose to make it so.

Bruner, J. S. (1966). Toward a Theory of Instruction. Cambridge, MA: The Belknap Press of Harvard University Press.

Excellent post from Mike Taylor on the inevitable consequences of the use of incentives to shape a system (in this case, an educational system). As Mike notes, the problem is well known and well understood, yet otherwise intelligent people continue to rely on extrinsic incentives to attempt to shape behaviour. It’s a classic Monkey’s Paw problem – you get what you wish for, but something very bad will inevitably happen, often worse than the problem you are trying to solve. We can make people do things with extrinsic incentives (reward and punishment), sure, but in doing so we change the focus from what we want to achieve to the reward itself, which invariably destroys intrinsic motivation to do what we want done, reinforces our power (and thus the weakness of those we ‘incentivize’), and ultimately backfires on us in tragically predictable ways, because what we actually want done is almost never the thing we choose to measure.
Our educational systems (and many others) are built around extrinsic incentives, from grades through to performance-related pay through to misguided research assessment exercises, evaluations based on publication records, etc. The consequences are uniformly dire. Mike quotes Tim Harford (from http://timharford.com/2016/09/4035/) as providing what seems to me to be the only sensible solution: “The basic principle for any incentive scheme is this: can you measure everything that matters? If you can’t, then high-powered financial incentives will simply produce short-sightedness, narrow-mindedness or outright fraud. If a job is complex, multifaceted and involves subtle trade-offs, the best approach is to hire good people, pay them the going rate and tell them to do the job to the best of their ability.” Well said. Except that I would add that the effects on motivation of any incentive scheme are always awful, and that’s the biggest reason not to do it. It’s not just that it doesn’t achieve the results we hope for: it’s that it is unkind and dehumanizing. With that in mind, I wouldn’t tell them to do the job to the best of their ability. I might ask them. I might help to structure a system so that they and everyone else can see the positive and negative consequences of actions they take. I might try to nurture a community where people value one another and are mutually supportive. I might talk to them about what they are doing and offer my support in helping them to do it better. I might try to structure the system around what people want to do rather than trying to make them fit the system I want to build. At least, that’s what I would do on a good day. On a bad day, under pressure from multiple quarters, overworked and overstressed, I might fall back on a three-line whip or a plea to do their bit.
I might make trades (‘do this and I will take away that’) or appeal to a higher authority (‘the Dean says we must…’) or to my own authority (‘this has to be done and you are the best one to do it…’), or to duty (‘it is in our contract that we have to do performance assessments…’). And that’s where the problems begin. Mike recommends Tim Harford’s ‘The Undercover Economist’ as a way out of this loop. I will read this, as I have read many books offering similar insights. It seems at first glance to fit very well with the findings of self-determination theory as well as behavioural economics. However, though the causes described here are the result of a failure to understand human motivation, this is, at heart, a systems problem of a broader nature: I recommend The Systems Bible (formerly Systemantics) by John Gall for a comprehensive set of explanations of the kinds of phenomena that give rise to stupid behaviour by groups of intelligent people. The book is deliberately funny, but the underlying theory on which it is based is extremely sound.

Address of the bookmark: https://svpow.com/2017/03/17/every-attempt-to-manage-academia-makes-it-worse/

A short article from Lisa Legault that summarizes self-determination theory (SDT) and its findings very succinctly and clearly. It’s especially effective at highlighting the way the spectrum of extrinsic-to-intrinsic motivation works (including the integrated/identified/introjected continuum), and in describing the relationships between autonomy, competence, and relatedness.
Nothing new here, nothing inspirational, just a useful resource to point people at so they can learn about the central tenets of SDT.

Address of the bookmark: https://www.researchgate.net/profile/Lisa_Legault/publication/311692691_Intrinsic_and_Extrinsic_Motivation/links/5856e60d08ae77ec37094289.pdf

A nice one-minute summary of Alfie Kohn’s case against grades at www.youtube.com/watch?v=EQt-ZI58wpw. There’s a great deal more Kohn has to say on the subject that is worth reading, such as at http://www.alfiekohn.org/article/case-grades/ or http://www.alfiekohn.org/article/grading/ or in an interview at http://www.education.com/magazine/article/Grades_Any_Good/ From that interview, this captures the essence of the case pretty well:

“The research suggests three consistent effects of giving students grades – or leading them to focus on what grade they’ll get. First, their interest in the learning itself is diminished. Second, they come to prefer easier tasks – not because they’re lazy, but because they’re rational. After all, if the point is to get an A, your odds are better if you avoid taking intellectual risks. Third, students tend to think in a more superficial fashion – and to forget what they learned more quickly – when grades are involved. To put it positively, students who are lucky enough to be in schools (or classrooms) where they don’t get letter or number grades are more likely to want to continue exploring whatever they’re learning, more likely to want to challenge themselves, and more likely to think deeply. The evidence on all of these effects is very clear, and it seems to apply to students of all ages. As far as I can tell, there are absolutely no benefits of giving grades to balance against these three powerful negative consequences – except that doing so is familiar to us and doesn’t take much effort.”

Note: if this video shows up as a blank space in your browser, then your security settings are preventing embedding of untrusted content in a trusted page.
This video is totally trustworthy, so look for the alert to override it, typically near the address bar in your browser.

Address of the bookmark:

I describe some of what I do as ‘unteaching’, so I find this highly critical article by Miss Smith – The Unlearning Zone – interesting. Miss Smith dislikes the terms ‘unteaching’ and ‘unlearning’ for some well-expressed aesthetic and practical reasons: as she puts it, they are terms “that would not be out of place in a particularly self-satisfied piece of poststructuralist literary analysis circa 1994.” I partially agree. However, she also seems equally unenamoured with what she thinks they stand for. I disagree with her profoundly on this so, as she claims to be new to these terms, here is my attempt to explain a little about what I mean by them, why I think they are a useful part of the educators’ lexicon, and why they are crucially important for learners’ development in general. Yes, ‘unteaching’ is an ugly neologism and it doesn’t really make sense: that’s part of the appeal of using it – a bit of cognitive dissonance can be useful for drawing attention to something. However, it is totally true that someone who is untaught is just someone who has not (yet) been taught, so ‘unteaching’, seen in that light, is at best pointless, at worst self-contradictory. On the other hand, it does seem to follow pretty naturally from ‘unlearning’ which, contrary to Miss Smith’s assertion, has been in common use for centuries and makes perfect sense. Have you ever had to unlearn bad habits? Me too. As I understand it, ‘unteach’ is to ‘teach’ as ‘undo’ is to ‘do’. Unteaching is still teaching, just as undoing is still doing, and unlearning is still learning. Perhaps ‘deteaching’ would be a better term.
Whatever we choose to call it, unteaching is concerned with intentionally dismantling the taught belief that teaching is about exerting power over learners, and replacing it with the attitude that teachers are there to empower learners to learn. This is not a particularly radical idea. It is what all teachers should do anyway, I reckon. But it is worth drawing attention to it as a distinct activity because it runs counter to the tide, and the problem it addresses is virtually ubiquitous in education up to, and sometimes at, doctoral level. Traditional teaching of the sort Miss Smith seems to defend in her critique does a lot more than teach a subject, skill, or way of thinking. It teaches that learning is a chore that is not valuable in and of itself, that learners must be forced to do it for some other purpose, often someone else’s purpose. It teaches that teaching is something done to students by a teacher: at its worst, it teaches that teaching is telling; at best, that teaching involves telling someone to do something. It’s not that (many) teachers deliberately seek these outcomes, but that they are the most likely lessons to be learned, because they are the ones that are repeated most often. The need for unteaching arises because traditional teaching, in addition to whatever it intends to teach (with luck), teaches some terrible lessons about learning and the role of teaching in that process that must be unlearned. Miss Smith claims that unteaching means “open plan classes, unstructured lessons and bean bags.” That’s not the way I see it at all.
Unlike traditional teaching, with its timetables, lesson plans, learning objectives, and uniform tests, unteaching does not have its own technologies and methods, though it does, for sure, tend to be a precursor to connectivist, social constructivist, constructionist, and other more learner-centred ways of thinking about the learning process, which may sometimes be used as part of the process of unteaching itself. Such methods, models, and attitudes emerge fairly naturally when you stop forcing people to do your bidding. However, they are just as capable of being used in a controlling way as the worst of instructivist methods: the number of reports on such interventions that include words like ‘students must…’, ‘I make my students…’ or (less blatantly) ‘students (do X)’ far outnumber all others, and that is the very opposite of unteaching. The specific technologies (including pedagogies as much as open-plan classrooms and beanbags) are not the point. Lectures, drill-and-practice and other instructivist methods are absolutely fine, as long as: No matter how cool and groovy your problem-based, inquiry-based, active methods might be, if they are imposed on students (especially with the use of threats for non-compliance and rewards for compliance – e.g. qualifications, grades, etc) then it is not unteaching at all: it’s just another way of doing the same kind of teaching that caused the problem in the first place. But if students have control – and ‘control’ includes being able to delegate control to someone else who can scaffold, advise, assist, instruct, direct, and help them when needed, as well as being able to take it back whenever they wish – then such methods can be very useful. So can lectures. To all those educational researchers that object to lectures, I ask whether they have ever found them valuable in a conference (and, if not, why did they go to a conference in the first place?). It’s not the pedagogy of lectures that is at fault.
It’s the requirement to attend them and the accompanying expectation that people are going to learn what you are teaching as a result. That’s, simply put, empirically wrong. It doesn’t mean that lecturees learn nothing. Far from it. But what you teach and what they learn are different kinds of animal. It’s really easy to be a bad unteacher – I think that is what Miss Smith is railing against, and it’s a fair criticism. I’m often pretty bad at it myself, though I have had a few successes along the way too. Unteaching and, especially, the pedagogies that result from having done unteaching, are far more likely to go wrong, and they take a lot more emotional, intellectual, and social effort than traditional teaching because they don’t come pre-assembled. They have no convenient structures and processes in place to do the teaching for you. Traditional teaching ‘works’ even when it doesn’t. If you throw someone into a school system, with all its attendant rewards, punishments, timetables, rules and curricula, and if you give them the odd textbook and assessment along the way, then most students will wind up learning something like what is intended to be taught by the system, no matter how awful the teachers might be. In such a system, students will rarely learn well, rarely persistently, rarely passionately, seldom kindly, and the love of learning will have been squashed out of many of them along the way (survivors often become academics and teachers themselves). But they will mostly pass tests at the end of it. With a bit of luck many might even have gained a bit of useful knowledge or skill, albeit that much will be not just wasted and forgotten as easily as a hotel room number when your stay is over, but actively disliked by the end of it. And, of course, they will have learned dependent ways of learning that will serve them poorly outside institutional systems. 
To make things far worse, the very structures that assist the traditional teacher (grades, compulsory attendance, fixed outcomes, the concept of failure, etc.) are deeply antagonistic to unteaching, and are exactly why it is needed in the first place. Unteachers face a huge upstream struggle against an overwhelming tide that threatens to drown passionate learning every inch of the way. The results of unteaching can be hard to defend within a traditional educational system because, by conventional measures, it is often inefficient and time-consuming. But conventional measures only make sense when you are trying to make everyone do the same things, through the same means, with the same ends, measured by and in order to meet the same criteria. That is precisely the problem. The final nail in unteaching’s coffin is that it is applied very unevenly across the educational system, so every freedom it brings is counterbalanced by a mass of reiterated antagonistic lessons from other courses and programs. Every time we unteach someone, two others reteach them.

Ideally, we should design educational systems that are friendlier to and more supportive of learner autonomy, and that are, above all else, respectful of learners as human beings. In K-12 teaching there are plenty of models to draw from, including Summerhill, Steiner (AKA Waldorf) schools, Montessori schools, Experiential Learning Schools, and so on. Few are even close to perfect, but most are at least no worse than their conventional counterparts, and they start with an attitude of respect for the children rather than a desire to make them conform. That alone makes them worthwhile. There are even some regional systems, such as those found in Finland or (recently) British Columbia, that are heading broadly in the right direction.
In universities and colleges there are plenty of working models, from Oxford tutorials and Cambridge supervisions to traditional theses and projects, independent study courses and programs, competency-based programs, PLAR/APEL portfolios, and much more. It is not a new idea at all. There is copious literature and there are many theoretical models that have stood the test of time, from andragogy to communities of practice, through to teachings from Freire, Illich, Dewey and even (a bit quirkily) Vygotsky.

Furthermore, generically and innately, most distance and e-learning unteaches better than its p-learning counterparts, because teachers cannot exert the same level of control and students must learn to learn independently. Sadly, much of it is spoiled by coercing students with grades, thereby providing the worst of both worlds: students are forced to behave as the teacher demands in their terminal behaviours but, without physical copresence, are less empowered by guidance and emotional/social support along the way. Much of my own research and teaching is concerned with inverting that dynamic – increasing empowerment and social support through online learning while decreasing coercion. I’d like to believe that my institution, Athabasca University, is largely dedicated to the same goal, though we still have some way to go before we get it right. Unteaching is to a large extent concerned with helping learners – including adult learners – to get back to the point at which most children start their school careers: driven by curiosity, personal interest, social value, joy, and delight, before that is schooled out of them over years of being taught dependency.
Once misconceptions about what education is for, what teachers do, and how we learn have been removed, teaching can happen much more effectively: supporting, nurturing, inspiring, challenging, responding, and so on, but not controlling, not making students do things they are not ready to do for reasons that mean little to them and have even less to do with what they are learning. However, though it is an immensely valuable terminal outcome, improved learning is perhaps not the biggest reason for unteaching. The real issue is moral: it is simply the right thing to do. The greatest value is that students are far more likely to have been treated along the way with the respect, care, and honour that all human beings deserve. Not ‘care’ of the sort you would give to a dog when you train it to be obedient and well behaved: care of the sort that recognizes and valorizes autonomy and diversity, that respects individuals, that cherishes their creativity and passion, that sees learners as ends in themselves, not products or (perish the thought) customers. That’s a lesson worth teaching, a way of being that is worth modelling. If that demands more effort, if it is more fallible, and if it means that fewer students pass your tests, then I’m OK with that. That’s the price of admission to the unlearning zone.

This is a report by Simon Burgess, Robert Metcalfe, and Sally Sadoff on a large-scale study conducted in the UK on the effects of financial and non-financial incentives on GCSE scores (GCSEs are UK qualifications usually taken around age 16 and usually involving exams), involving over 10,000 students in 63 schools being given cash or ‘non-financial incentives’. The ‘non-financial incentives’ did not stretch as far as a pat on the back or encouragement from caring teachers – this was about giving out tickets for appealing events.
The rewards were given not for getting good results but for particular behaviours the researchers felt should be useful proxies for effective study: specifically, attendance, conduct, homework, and classwork. None of the incentives were huge rewards for those already possessing plenty of creature comforts but, for poorer students, they might have seemed substantial. The effectiveness of the intervention was measured in terminal grades. The researchers were very thorough and careful to note limitations and concerns. It is as close to an experimental design as you can get in a messy real-world educational intervention, with numbers that are sufficient and diverse enough to make justifiable empirical claims about the generalizability of the results.

Rewards had little effect on average marks overall, and it made little difference whether the rewards were financial or not. However, in high-risk groups (poor students, immigrants, etc.) there was a substantial improvement in GCSE results for those given rewards, compared with the control groups. The only thing that surprises me a little is that so little effect was seen overall, but I hypothesize that the reward/punishment conditions are already so extreme among GCSE students that adding more to the mix made little difference. The only students who might be affected would be those for whom the extrinsic motivation is not already strong enough. There is also a possibility that the demotivating effects for some were balanced out by the compliance effects for others: averages are incredibly dangerous things, and this study is big on averages.

What makes me sad is that there appears to be no sense of surprise or moral outrage about the basic premise in this report. It appears reasonable at first glance: who would not want kids to be more successful in their exams?
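To make that danger of averages concrete, here is a toy sketch with invented numbers (not data from the Burgess et al. study): if an incentive lifts the results of one subgroup and depresses those of another by the same amount, the cohort-wide mean effect is exactly zero, even though nobody was unaffected.

```python
# Hypothetical illustration: invented grade-point effects, not study data.
compliers = [1] * 50      # students whose results improve under the incentive
demotivated = [-1] * 50   # students whose intrinsic motivation is undermined

cohort = compliers + demotivated
average_effect = sum(cohort) / len(cohort)

print(average_effect)  # 0.0 – the average hides two real, opposite effects
```

The subgroup means (+1 and -1) tell the real story; the overall mean alone would suggest the intervention did nothing at all.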
When my own kids had to do this sort of thing I would have been very keen on anything that would improve their chances of success, and especially keen on something that appears to help reduce systemic inequalities. But this is not about helping students to learn or improving education: this is completely and utterly about enforcing compliance and improving exam results. The fact that there might be a perceived benefit to the victims is a red herring: it’s like saying that hitting dogs harder is good for the dogs because it makes them behave better than hitting them gently does. The point is that we should not be hitting them at all. It’s not just morally wrong; it doesn’t even work very well, and it only continues to work at all if you keep hitting them. It teaches students that the end matters more than the process, that learning is inherently undesirable and should only be done when there is a promise of a reward or threat of punishment, and that they are not in charge of it. The inevitable result of increasing rewards (or punishments – they are functionally equivalent) is to further quench any love of learning that might be left at this point in their school careers, to reinforce harmful beliefs about how to learn, and to put students off, for life, subjects they might have loved under other circumstances. In years to come people will look back on barbaric practices like this much as we now look back on the slave trade or on women’s rights before emancipation. Studies like this make me feel a bit sick.

Address of the bookmark: http://www.efm.bris.ac.uk/economics/working_papers/pdffiles/dp16678.pdf

An interesting observation… “Helen Abadzi, an expert in cognitive psychology and neuroscience, who was an education specialist at the World Bank, said that pupils who “overlearn” and repeatedly practise tasks, such as mental arithmetic, free up their working memory for more “higher order” analytical thinking.” Yes, they do, good point.
We should not forget that. Unfortunately, she goes way beyond her field of expertise, and explicitly picks on Sir Ken Robinson in the process: “Go out and play, well sure – but is that going to teach me mental math so I can go to a store and instantly make a decision about what is the best offer to buy?” she said.

I cannot be certain but, as far as I know, and although he has made the occasional wild assertion, Sir Ken has never for one moment suggested that overlearning should be avoided. In fact, that’s rather obvious from the examples he gives in what the article acknowledges is the most popular TED talk of all time. I have yet to meet a good ballerina who has not practiced until it hurt. When you get into the flow of something and truly play, rote learning is exactly what you do. I have practiced my guitar until my fingers bled. Indeed, for each of my many interests in life, I have practiced again, again, and again, doing it until I get it right (or at least right enough). I’m doing it right now. I am fairly certain that you have done the same. To suggest that play does not involve an incredible amount of gruelling repetition and rote learning (particularly valuable when done from different angles, in different contexts, and with different purposes – a point Abadzi fails to highlight but, I am sure, understands) is bizarre. Even my cats do it.

It is even more bizarre to leap from suggesting that overlearning is necessary to a wildly wrong and completely unsubstantiated statement like: “People may not like methods like direct instruction – “repeat after me” – but they help students to remember over the long term. A class of children sitting and listening is viewed as a negative thing, yet lecturing is highly effective for brief periods.” Where the hell did that come from? A scientist should be ashamed of such unsupported and unsupportable tripe. It does not follow from the premises.
We need to practice, so extrinsic motivation is needed to make students learn? And play is not essential? Seriously? Such idiocy needs to be stamped on, stamped out, and stamped out hard. This is a good case study in why neuroscience is inadequate as a means to explain learning, and completely inadequate as a means to explain education. In the interests of fairness, I should note that brief lectures (and, actually, even long lectures) can indeed lead to effective learning, albeit not necessarily of what is being lectured about, and only when they are actually interesting. The problem is not lectures per se, but the fact that people are forced to attend them and that they are expected to learn what the lecturer intends to teach.
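As an aside, the “best offer” decision Abadzi invokes is, in computational terms, a tiny exercise: compare offers by unit price and take the cheapest. The offers below are invented for illustration; the point is only that fluent mental arithmetic automates exactly this kind of comparison.

```python
# Hypothetical shop offers (invented numbers): price divided by quantity
# gives cost per gram, and the best offer is the one with the lowest unit price.
offers = {
    "500 g for $4.00": 4.00 / 500,
    "750 g for $5.70": 5.70 / 750,
    "1 kg for $7.90": 7.90 / 1000,
}

best = min(offers, key=offers.get)
print(best)  # 750 g for $5.70 – lowest cost per gram
```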