Posts by Matthew Prineas – Athabasca University’s new Provost and Vice-president, Academic

I suspect everyone on Athabasca University’s staff will be very interested in these posts by Matthew Prineas, whom we will welcome on September 5th as our new provost and VPA, and which show a great understanding of at least some of the benefits and challenges of distance learning. Amongst other things, he has done some really good work on embedding OERs at UMUC, and has strong credentials (!) in the field of competency-based methods of learning and accreditation. These things matter a great deal to our future. It also seems that he has a subtle appreciation of our distributed teaching approach, though I should note that there are more ways to skin this cat than the industrial model – we need to aim for the post-industrial, where we achieve economies of scale not (just) by write-once-deliver-many teaching but by leveraging the value of human interaction on the large scale that distributed network technologies enable. It is great, though, that we’re getting a VPA who seems aligned with our mission and who reaches out to the world through social media. See, too, his Twitter posts at https://twitter.com/mprineas?lang=en

These are exciting times at AU!

Address of the bookmark: https://evolllution.com/author/matthew-prineas/

Original page

Babies in the learning-style bathwater

A recent Guardian article reports on a letter sent to the paper by 30 eminent academics from neuroscience, education, and psychology disciplines, voicing concerns about the absurd popularity of learning styles among teachers.

They are, of course, correct to be concerned. There is no good evidence that being taught according to your learning style has any positive value, despite decades of spurious attempts to show a correlation. Moreover, even if there were such a correlation, it would behoove teachers to help learners to learn using different styles because real-life learning doesn’t come neatly packaged in forms that fit with how we want/are constituted to learn, and teaching should primarily be concerned with supporting learners’ capacity to learn. The fact that there are scores if not hundreds of incompatible learning style theories, most of which have similarly (un)compelling evidence to support them, should be a clue that there is something seriously wrong with the whole idea. And it’s not a harmless foible. Not only is it a massive waste of time and money, not to mention a terrible example to set in truthiness acceptance, it can be actively harmful to learners, teaching them to believe that they can only learn properly if things are packaged to suit their style.

What’s shocking in the article is the report on the number of teachers who, despite a total lack of evidence and copious amounts of debunking, continue to use and believe in the things. To our shame, I have even seen examples of it at AU (our own Math Site mentions them) where we really ought to know better. But we are not unusual in this. Not at all. In the UK and Netherlands in 2012, 80% of teachers apparently believed that individuals learned better when doing so in a manner according with their preferred learning style. This is like discovering that 80% of the world’s scientists believe that their horoscopes determine the results of their experiments.

That said, there’s a baby in this very dirty bathwater that should not be thrown out.

If a belief in learning styles means that teachers feel challenged to design learning experiences in different ways to suit more diverse needs, that’s not a bad thing, apart from the fact that it increases the costs of learning development. In fairness, it would work at least as well if they used astrological star sign personality characteristics as a basis but, whatever the reasons, giving students choices is a worthwhile outcome. And, just like horoscopes, there is value to learners themselves in providing an opportunity and a framework for reflection, even if the framework itself is erroneous and based on fallacies.

I’m a sceptic, but even I use variants on the theme. For example, I often try to provide versions of learning content that are meant to cater for serialist and holist ways of learning (Gordon Pask’s approach to categorizing learning strategies). Notwithstanding the extra effort and cost of designing at least two ways to approach a topic, it’s a good creative catalyst for me, and it gives students greater choice and control over their own learning.

And, in fairness, not all learning-style types of theory are equally awful. Slightly less harmful variants talk of learning preferences rather than styles, which does not necessarily imply that those preferences are a good idea, nor that they even need to be catered for, though it still perpetuates the myth that there are relatively fixed characteristics in such things. Much better ones, including Pask’s, talk of selectable learning strategies rather than stable characteristics or preferences of learners, which seems eminently sensible to me: it’s just about general pedagogical patterns. It’s not about labelling learners, though (sadly) some do try to apply the labels to learners, and even Pask himself (arguably) sometimes seems to present it in that way. The best-of-breed models recognize that learning strategies can and should change in different learning contexts as well as over time, and make no attempt to label or pigeonhole learners themselves at all. I think it is really useful to find regularities and patterns in learning designs, and that’s the baby we should not throw out when we (rightly) reject learning style theories.

Address of the bookmark: https://www.theguardian.com/education/2017/mar/13/teachers-neuromyth-learning-styles-scientists-neuroscience-education

Original page

TEL MOOC from Athabasca University

Starts today…

Course Description

Teachers who want to learn more about teaching with technology will find this Massive Open Online Course (MOOC), Introduction to Technology-Enabled Learning (TEL), informative and engaging. Using up-to-date learning design and simple, accessible technology, the course runs on an easy-to-use learning platform available via the Internet. The course is designed for teachers who want to build on their knowledge and practice in teaching and learning with technology. It will run over five weeks and requires approximately three to five hours of time each week. Designed to accommodate teachers’ busy schedules, the course offers flexibility with options for learning the content. You will learn from readings, videos, discussions with other participants and instructors, meaningful exercises, quizzes and short assignments. Certification is available for those who wish to complete all required exercises and quizzes.

Address of the bookmark: https://www.telmooc.org/

Original page

The cost of admission to the unlearning zone

I describe some of what I do as ‘unteaching’, so I find this highly critical article by Miss Smith – The Unlearning Zone – interesting. Miss Smith dislikes the terms ‘unteaching’ and ‘unlearning’ for some well-expressed aesthetic and practical reasons: as she puts it, they are terms “that would not be out of place in a particularly self-satisfied piece of poststructuralist literary analysis circa 1994.” I partially agree. However, she also seems equally unenamoured with what she thinks they stand for. I disagree with her profoundly on this so, as she claims to be new to these terms, here is my attempt to explain a little about what I mean by them, why I think they are a useful part of the educators’ lexicon, and why they are crucially important for learners’ development in general.

First the terms…

Yes, ‘unteaching’ is an ugly neologism and it doesn’t really make sense: that’s part of the appeal of using it – a bit of cognitive dissonance can be useful for drawing attention to something. However, it is totally true that someone who is untaught is just someone who has not (yet) been taught, so ‘unteaching’, seen in that light, is at best pointless, at worst self-contradictory. On the other hand, it does seem to follow pretty naturally from ‘unlearning’ which, contrary to Miss Smith’s assertion, has been in common use for centuries and makes perfect sense. Have you ever had to unlearn bad habits? Me too.

As I understand it, ‘unteach’ is to ‘teach’ as ‘undo’ is to ‘do’.  Unteaching is still teaching, just as undoing is still doing, and unlearning is still learning. Perhaps deteaching would be a better term. Whatever we choose to call it, unteaching is concerned with intentionally dismantling the taught belief that teaching is about exerting power over learners to teach, and replacing it with the attitude that teachers are there to empower learners to learn. This is not a particularly radical idea. It is what all teachers should do anyway, I reckon. But it is worth drawing attention to it as a distinct activity because it runs counter to the tide, and the problem it addresses is virtually ubiquitous in education up to, and sometimes at, doctoral level.

Traditional teaching of the sort Miss Smith seems to defend in her critique does a lot more than teach a subject, skill, or way of thinking. It teaches that learning is a chore that is not valuable in and of itself, that learners must be forced to do it for some other purpose, often someone else’s purpose. It teaches that teaching is something done to students by a teacher: at its worst, it teaches that teaching is telling; at best, that teaching involves telling someone to do something. It’s not that (many) teachers deliberately seek these outcomes, but that they are the most likely lessons to be learned, because they are the ones that are repeated most often. The need for unteaching arises because traditional teaching, in addition to whatever it intends (and, with luck, manages) to teach, teaches some terrible lessons about learning and the role of teaching in that process, and those lessons must be unlearned.

What is unteaching?

Miss Smith claims that unteaching means “open plan classes, unstructured lessons and bean bags.” That’s not the way I see it at all. Unlike traditional teaching, with its timetables, lesson plans, learning objectives, and uniform tests, unteaching does not have its own technologies and methods, though it does, for sure, tend to be a precursor to connectivist, social constructivist, constructionist, and other more learner-centred ways of thinking about the learning process, which may sometimes be used as part of the process of unteaching itself. Such methods, models, and attitudes emerge fairly naturally when you stop forcing people to do your bidding. However, they are just as capable of being used in a controlling way as the worst of instructivist methods: reports on such interventions that include words like ‘students must…’, ‘I make my students…’ or (less blatantly) ‘students (do X)’ far outnumber all others, and that is the very opposite of unteaching. The specific technologies (including pedagogies as much as open-plan classrooms and beanbags) are not the point. Lectures, drill-and-practice and other instructivist methods are absolutely fine, as long as:

  1. they at least attempt to do the job that students want or need,
  2. they are willingly and deliberately chosen by students,
  3. students are well-informed enough to make those choices, and
  4. students can choose to learn otherwise at any time.

No matter how cool and groovy your problem-based, inquiry-based, active methods might be, if they are imposed on students (especially with the use of threats for non-compliance and rewards for compliance – e.g. qualifications, grades, etc) then it is not unteaching at all: it’s just another way of doing the same kind of teaching that caused the problem in the first place. But if students have control – and ‘control’ includes being able to delegate control to someone else who can scaffold, advise, assist, instruct, direct, and help them when needed, as well as being able to take it back whenever they wish – then such methods can be very useful. So can lectures. To all those educational researchers that object to lectures, I ask whether they have ever found them valuable in a conference (and, if not, why did they go to a conference in the first place?). It’s not the pedagogy of lectures that is at fault. It’s the requirement to attend them and the accompanying expectation that people are going to learn what you are teaching as a result. That’s, simply put, empirically wrong. It doesn’t mean that lecturees learn nothing. Far from it. But what you teach and what they learn are different kinds of animal.

Problems with unteaching

It’s really easy to be a bad unteacher – I think that is what Miss Smith is railing against, and it’s a fair criticism. I’m often pretty bad at it myself, though I have had a few successes along the way too. Unteaching and, especially, the pedagogies that result from having done unteaching, are far more likely to go wrong, and they take a lot more emotional, intellectual, and social effort than traditional teaching because they don’t come pre-assembled. They have no convenient structures and processes in place to do the teaching for you.  Traditional teaching ‘works’ even when it doesn’t. If you throw someone into a school system, with all its attendant rewards, punishments, timetables, rules and curricula, and if you give them the odd textbook and assessment along the way, then most students will wind up learning something like what is intended to be taught by the system, no matter how awful the teachers might be. In such a system, students will rarely learn well, rarely persistently, rarely passionately, seldom kindly, and the love of learning will have been squashed out of many of them along the way (survivors often become academics and teachers themselves). But they will mostly pass tests at the end of it. With a bit of luck many might even have gained a bit of useful knowledge or skill, albeit that much will be not just wasted and forgotten as easily as a hotel room number when your stay is over, but actively disliked by the end of it. And, of course, they will have learned dependent ways of learning that will serve them poorly outside institutional systems.

To make things far worse, those very structures that assist the traditional teacher (grades, compulsory attendance, fixed outcomes, concept of failure, etc) are deeply antagonistic to unteaching and are exactly why it is needed in the first place. Unteachers face a huge upstream struggle against an overwhelming tide that threatens to drown passionate learning every inch of the way. The results of unteaching can be hard to defend within a traditional educational system because, by conventional measures, it is often inefficient and time-consuming. But conventional measures only make sense when you are trying to make everyone do the same things, through the same means, with the same ends, measured by and in order to meet the same criteria. That’s precisely the problem.

The final nail in unteaching’s coffin is that it is applied very unevenly across the educational system, so every freedom it brings is counterbalanced by a mass of reiterated antagonistic lessons from other courses and programs. Every time we unteach someone, two others reteach them.  Ideally, we should design educational systems that are friendlier to and more supportive of learner autonomy, and that are (above all else) respectful of learners as human beings. In K-12 teaching there are plenty of models to draw from, including Summerhill, Steiner (AKA Waldorf) schools, Montessori schools, Experiential Learning Schools etc. Few are even close to perfect, but most are at least no worse than their conventional counterparts, and they start with an attitude of respect for the children rather than a desire to make them conform. That alone makes them worthwhile. There are even some regional systems, such as those found in Finland or (recently) British Columbia, that are heading broadly in the right direction. In universities and colleges there are plenty of working models, from Oxford tutorials to Cambridge supervisions, to traditional theses and projects, to independent study courses and programs, to competency-based programs, to PLAR/APEL portfolios, and much more. It is not a new idea at all. There is copious literature and many theoretical models that have stood the test of time, from andragogy to communities of practice, through to teachings from Freire, Illich, Dewey and even (a bit quirkily) Vygotsky. Furthermore, generically and innately, most distance and e-learning unteaches better than its p-learning counterparts because teachers cannot exert the same level of control and students must learn to learn independently. 
Sadly, much of it is spoiled by coercing students with grades, thereby providing the worst of both worlds: students are forced to behave as the teacher demands in their terminal behaviours but, without physical copresence, are less empowered by guidance and emotional/social support with the process. Much of my own research and teaching is concerned with inverting that dynamic – increasing empowerment and social support through online learning, while decreasing coercion. I’d like to believe that my institution, Athabasca University, is largely dedicated to the same goal, though we do mostly have a way to go before we get it right.

Why it matters

Unteaching is to a large extent concerned with helping learners – including adult learners – to get back to the point at which most children start their school careers – driven by curiosity, personal interest, social value, joy, delight – but that is schooled out of them over years of being taught dependency.  Once misconceptions about what education is for, what teachers do, and how we learn, have been removed, teaching can happen much more effectively: supporting, nurturing, inspiring, challenging, responding, etc, but not controlling, not making students do things they are not ready to do for reasons that mean little to them and have even less to do with what they are learning.

However, though it is an immensely valuable terminal outcome, improved learning is perhaps not the biggest reason for unteaching. The real issue is moral: it’s simply the right thing to do. The greatest value is that students are far more likely to have been treated with the respect, care, and honour that all human beings deserve along the way. Not ‘care’ of the sort you would give to a dog when you train it to be obedient and well behaved. Care of the sort that recognizes and valorizes autonomy and diversity, that respects individuals, that cherishes their creativity and passion, that sees learners as ends in themselves, not products or (perish the thought) customers. That’s a lesson worth teaching, a way of being that is worth modelling. If that demands more effort, if it is more fallible, and if it means that fewer students pass your tests, then I’m OK with that. That’s the price of admission to the unlearning zone.

 

Udacity Partners with IBM, Amazon for Artificial Intelligence 'Degree'

http://fortune.com/2016/10/25/udacity-ibm-amazon-ai/

Udacity is now valued at over $1b. This seems a long way from the dream of open (libre and free) learning of the early MOOC pioneers (pre-Thrun):

“Earlier this year, Udacity’s revenue from Nanodegrees was growing nearly 30% month over month and the initiative is profitable, according to Thrun. According to one source, Udacity was on track to make $24 million this year. Udacity also just became a unicorn—a startup valued at or above $1 billion—in its most recent $105 million funding round in 2015.”

This should also be a wake-up call to universities that believe their value is measurable by the employability of their graduates. Udacity has commitments from huge companies like IBM, BMW, Tata and others to accept its nanodegree graduates. Nanodegrees are becoming a serious currency in the job market, at lower cost and higher productivity than anything universities can match, with all the notable benefits of online delivery and timeframes that make lifelong learning of up-to-date competencies a reality, not an aspiration. If we don’t adapt to this then universities are, if not dead in the water, definitely at risk of becoming less relevant.

I recently posted a response to Dave Cormier’s question about the goals of education in which I suggested that our educational institutions play an important sustaining and generative role in cultures – not just in large-scale societal level culture, but in the myriad overlapping and contained cultures within societies. Though I have reservations about the risks of government involvement in education, I am a little fearful, but also a little intrigued, about what happens when private organizations start to make a substantial contribution to that role. There have always been a few such cases, and that has always been a useful thing. Having a few alternatives nipping around your heels and introducing fresh ideas helps to keep an ecosystem from stagnating. But this is big scale stuff, and it’s part of a trend that worries me. We are already seeing extremely large contributions to traditional education from private donors like the Gates and Zuckerberg foundations that reinforce dreadful misguided beliefs about what education is, or what it is for. With big funding, these become self-fulfilling beliefs. As long as we can sustain diversity then I think it is not a bad thing, but the massive influence of a few (even well-meaning) individuals with the spending power of nations is very, very dangerous.

Original post

Cocktails and educational research

A lot of progress has been made in medicine in recent years through the application of cocktails of drugs. Those used to combat AIDS are perhaps the most well-known, but there are many other applications of the technique to everything from lung cancer to Hodgkin’s lymphoma. The logic is simple. Different drugs attack different vulnerabilities in the pathogens etc they seek to kill. Though evolution means that some bacteria, viruses or cancers are likely to be adapted to escape one attack, the more different attacks you make, the less likely it will be that any will survive.

Unfortunately, combinatorial complexity means this is not simply a question of throwing a bunch of the best drugs of each type together and gaining their benefits additively. I have recently been reading John H. Miller’s ‘A crude look at the whole: the science of complex systems in business, life and society’ which is, so far, excellent, and which addresses this and many other problems in complexity science. Miller uses the nice analogy of fashion to help explain the problem: if you simply choose the most fashionable belt, the trendiest shoes, the latest greatest shirt, the snappiest hat, etc, the chances of walking out with the most fashionable outfit by combining them together are virtually zero. In fact, there’s a very strong chance that you will wind up looking pretty awful. It is not easily susceptible to reductive science because the variables all affect one another deeply. If your shirt doesn’t go with your shoes, it doesn’t matter how good either are separately. The same is true of drugs. You can’t simply pick those that are best on their own without understanding how they all work together. Not only may they not additively combine, they may often have highly negative effects, or may prevent one another being effective, or may behave differently in a different sequence, or in different relative concentrations. To make matters worse, side effects multiply as well as therapeutic benefits so, at the very least, you want to aim for the smallest number of compounds in the cocktail that you can get away with. Even were the effects of combining drugs positive, it would be premature to believe that it is the best possible solution unless you have actually tried them all. And therein lies the rub, because there are really a great many ways to combine them.

Miller and colleagues have been using the ideas behind simulated annealing to create faster, better ways to discover working cocktails of drugs. They started with 19 drugs which, a small bit of math shows, could be combined in 2 to the power of 19 different ways – about half a million possible combinations (not counting sequencing or relative strength issues). As only 20 such combinations could be tested each week, the chances of finding an effective, let alone the best combination, were slim within any reasonable timeframe. Simplifying a bit, rather than attempting to cover the entire range of possibilities, their approach finds a local optimum within one locale by picking a point and iterating variations from there until the best combination is found for that patch of the fitness landscape. It then checks another locale and repeats the process, and iterates until they have covered a large enough portion of the fitness landscape to be confident of having found at least a good solution: they have at least several peaks to compare. This also lets them follow up on hunches and to use educated guesses to speed up the search. It seems pretty effective, at least when compared with alternatives that attempt a theory-driven intentional design (too many non-independent variables), and is certainly vastly superior to methodically trying every alternative, inasmuch as it is actually possible to do this within acceptable timescales.
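Simplified to its bones, the search strategy can be sketched in code. To be clear, this is a toy, not Miller’s actual method or data: the fitness function below is a made-up stand-in for the weekly lab assays, and every name and parameter is an illustrative assumption. What it does show is the core mechanic: flip one drug in or out of a binary inclusion vector, sometimes accept a worse cocktail, and restart from several random locales to sample several peaks of the fitness landscape.

```python
import math
import random

N_DRUGS = 19  # each candidate cocktail is a binary inclusion vector over 19 drugs


def fitness(cocktail):
    """Hypothetical stand-in for a lab assay (the real search would test
    combinations in vitro): random interaction effects, minus a size
    penalty, because side effects multiply with every drug added."""
    rng = random.Random(hash(cocktail))  # deterministic per-cocktail toy landscape
    interaction = sum(rng.uniform(-1.0, 2.0) for _ in range(sum(cocktail)))
    return interaction - 0.5 * sum(cocktail)


def anneal(steps=2000, temp=2.0, cooling=0.999):
    """One locale: start somewhere random and iterate variations, sometimes
    deliberately going downhill so we don't get stuck on an anthill."""
    state = tuple(random.randint(0, 1) for _ in range(N_DRUGS))
    f = fitness(state)
    best, best_f = state, f
    for _ in range(steps):
        i = random.randrange(N_DRUGS)  # flip one drug in or out of the mix
        neighbour = state[:i] + (1 - state[i],) + state[i + 1:]
        nf = fitness(neighbour)
        # always accept improvements; accept downhill moves with
        # probability exp(delta / temp), which shrinks as we cool
        if nf > f or random.random() < math.exp((nf - f) / temp):
            state, f = neighbour, nf
            if f > best_f:
                best, best_f = state, f
        temp *= cooling
    return best, best_f


# Restarting from several random locales covers several patches of the
# fitness landscape, giving at least a few peaks to compare.
peaks = [anneal() for _ in range(10)]
best_cocktail, best_score = max(peaks, key=lambda p: p[1])
```

The cooling schedule is the point: early on, downhill moves are accepted often (escaping anthills); later, the search settles onto whatever peak it is near, and the restarts are what give you more than one peak to compare.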

The central trick is to deliberately go downhill on the fitness landscape, rather than following an uphill route of continuous improvement all the time, which may simply get you to the top of an anthill rather than the peak of Everest in the fitness landscape. Miller very effectively shows that this is the fundamental error committed by followers of the Six-Sigma approach to management, an iterative method of process improvement originally invented to reduce errors in the manufacturing process: it may work well in a manufacturing context with a small number of variables to play with in a fixed and well-known landscape, but it is much worse than useless when applied in a creative industry like, say, education, because the chances that we are climbing a mountain and not an anthill are slim to negligible. In fact, the same is true even in manufacturing: if you are just making something inherently weak as good as it can be, it is still weak. There are lessons here for those that work hard to make our educational systems work better. For instance, attempts to make examination processes more reliable are doomed to fail because it’s exams that are the problem, not the processes used to run them. As I finish this while listening to a talk on learning analytics, I see dozens of such examples: most of the analytics tools described are designed to make the various parts of the educational machine work ‘better’, i.e. (for the most part) to help ensure that students’ behaviour complies with teachers’ intent. Of course, the only reason such compliance was ever needed was for efficient use of teaching resources, not because it is good for learning. Anthills.

This way of thinking seems to me to have potentially interesting applications in educational research. We who work in the area are faced with an irreducibly large number of recombinable and mutually affective variables that make any ethical attempt to do experimental research on effectiveness (however we choose to measure that – so many anthills here) impossible. It doesn’t stop a lot of people doing it, and telling us about p-values that prove their point in more or less scrupulous studies, but they are – not to put too fine a point on it – almost always completely pointless. At best, they might be telling us something useful about a single, non-replicable anthill, from which we might draw a lesson or two for our own context. But even a single omitted word in a lecture, a small change in inflection, let alone an impossibly vast range of design, contextual, historical and human factors, can have a substantial effect on learning outcomes and effectiveness for any given individual at any given time. We are always dealing with a lot more than 2 to the power of 19 possible mutually interacting combinations in real educational contexts. For even the simplest of research designs in a realistic educational context, the number of possible combinations of relevant variables is more likely closer to 2 to the power of 100 (in base 10 that’s 1,267,650,600,228,229,401,496,703,205,376). To make matters worse, the effects we are looking for may sometimes not be apparent for decades (having recombined and interacted with countless others along the way) and, for anything beyond trivial reductive experiments that would tell us nothing really useful, could seldom be done at a rate of more than a handful per semester, let alone 20 per week. This is a very good reason to do a lot more qualitative research, seeking meanings, connections, values and stories rather than trying to prove our approaches using experimental results.
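The arithmetic behind those numbers is easy to check: even the comparatively tiny 19-drug space would take centuries to search exhaustively at Miller’s rate of 20 combinations a week, which is why a shortcut like simulated annealing is needed at all, and why the educational case (2 to the power of 100) is beyond any exhaustive approach:

```python
# How long would brute force take? Counts every subset of the 19 drugs,
# then scales by the stated testing rate of 20 combinations per week.
drug_combos = 2 ** 19                 # every possible inclusion/exclusion
weeks_to_exhaust = drug_combos / 20   # 20 combinations tested per week
years_to_exhaust = weeks_to_exhaust / 52

print(drug_combos)                # 524288 -- "about half a million"
print(round(years_to_exhaust))    # 504 -- roughly five centuries of lab time
print(2 ** 100)                   # 1267650600228229401496703205376
```

Half a millennium for nineteen binary variables; the educational case, with its 2^100-ish combination space, is about 10^24 times larger again.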
Education is more comparable to psychology than medicine and suffers the same central problem, that the general does not transfer to the specific, as well as a whole bunch of related problems that Smedslund recently coherently summarized. The article is paywalled, but Smedslund’s abstract states his main points succinctly:

“The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project.”

You could simply substitute ‘education’ for ‘psychology’ in this, and it would read the same. But it gets worse, because education is as much about technology and design as it is about states of mind and behaviour, so it is orders of magnitude more complex than psychology. The potential for invention of new ways of teaching and new states of learning is essentially infinite. Reductive science thus has a very limited role in educational research, at least as it has hitherto been done.

But what if we took the lessons of simulated annealing to heart? I recently bookmarked an approach to more reliable research suggested by the Christensen Institute that might provide a relevant methodology. The idea behind this is (again, simplifying a bit) to do the experimental stuff, then to sweep the normal results to one side and concentrate on the outliers, performing iterations of conjectures and experiments on an ever more diverse and precise range of samples until a richer, fuller picture results. Although it would be painstaking and long-winded, it is a good idea. But one cycle of this is a bit like a single iteration of Miller’s simulated annealing approach, a means to reach the top of one peak in the fitness landscape, which may still be a low-lying peak. However, if, having done that, we jumbled up the variables again and repeated it starting in a different place, we might stand a chance of climbing some higher anthills and, perhaps, over time we might even hit a mountain and begin to have something that looks like a true science of education, in which we might make some reasonable predictions that do not rely on vague generalizations. It would either take a terribly long time (which itself might preclude it because, by the time we had finished researching, the discipline would have moved somewhere else) or would hit some notable ethical boundaries (you can’t deliberately mis-teach someone), but it seems more plausible than most existing techniques, if a reductive science of education is what we seek.

To be frank, I am not convinced it is worth the trouble. It seems to me that education is far closer as a discipline to art and design than it is to psychology, let alone to physics. Sure, there is a lot of important and useful stuff to be learned about how we learn: no doubt about that at all, and a simulated annealing approach might speed up that kind of research. Painters need to know what paints do too. But from there to prescribing how we should therefore teach spans a big chasm that reductive science cannot, in principle or practice, cross. This doesn’t mean that we cannot know anything: it just means it’s a different kind of knowledge than reductive science can provide. We are dealing with emergent phenomena in complex systems that are ontologically and epistemologically different from the parts of which they consist. So, yes, knowledge of the parts is valuable, but we can no more predict how best to teach or learn from those parts than we can predict the shape and function of the heart from knowledge of cellular organelles in its constituent cells. But knowledge of the cocktails that result – that might be useful.


Interview with George Siemens in AU student union's Voice magazine (part 3)

Final part of a three-part interview with George Siemens (following from the first and second parts), in which he describes some thoughts about the future and nature of educational systems, and in which he has some great stuff to say about motivation and assessment in particular. I like this:

“Make things relevant to students, but also give students an opportunity to write themselves into the curriculum. That is, to be able to see the outcome of the benefits, the way in which it can make them a better person, and the way it can make the world a better place. You can’t directly motivate someone, but you can set conditions under which people of different attributes will become motivated.”

Exactly so – it’s about creating conditions, not about telling or controlling. It’s about making and supporting a space (physical, virtual, social, conceptual, organizational, temporal, curricular, etc) that learners both belong to and own. 

Address of the bookmark: http://www.voicemagazine.org/articles/featuredisplay.php?ART=10462

Hacking Our Brains: Motivating Others By Snatching Back Rewards

Ingenious approach to extrinsic motivation – give something, then use the threat of taking it away to ‘motivate’ people to do what you want them to do. It’s an old idea, but one that has not seen as much use as you would expect in things like student grading or occupational performance assessments. Though tied up in the language of the endowment effect, the essence of this method is punishment rather than reward, and we tend to be more punishment-averse than reward-seeking, so it works ‘better’. It’s still rampant behaviourism, presented in a cognitivist wrapper to make it look shinier. 

As with all forms of extrinsic motivation, this does two things, both inimical to learning. Firstly, it leads to a focus on avoiding the punishment, rather than on the pleasure of the learning activity itself. I don’t see this as a great leap forward from rewarding with grades in a learning context – it just makes it even more extrinsic and even more likely to destroy any intrinsic motivation a learner might have had in the first place so that, once punishment has been avoided, the value of the activity itself is diminished and, mostly, the things that make it useful are forgotten. Secondly, it is an even worse assertion of power than a reward. Again, I don’t see this as having any meaningful value in a learning context. It teaches greater compliance, not the topic at hand. That’s a bad lesson, unless you think that education is preparation for life in which you should be a compliant tool that reluctantly does the bidding of those in power through fear of punishment. A society organized that way is not the kind of society I want to live in. Surely we have grown out of this? If not, surely we should?

The notion that people need to be forced to comply in order to learn what we want to teach them is barbaric, distasteful and, ultimately, deeply counter-productive. Countless generations of learners have had their love of learning viciously attacked by such attitudes, and have learned with less efficiency, less depth, and less value to society as a result. It’s a systemic failure on an unbelievably massive scale, embedded so deeply in our educational systems that we hardly even notice it any more. Done to one person it is bad enough but, done systematically at a worldwide scale, to ever younger generations of children, it hampers the intelligence and compassion of our species in ways that cut deep and leave us bleeding. Despite this, most of us still manage to come out of it without all of our innate love of learning completely destroyed: our intrinsic motivation can be a powerful counter-force; just occasionally, what we are taught aligns well enough with what we want and need to learn; we discover other ways and things to learn that are meaningful and not imposed upon us; and there are quite a lot of great teachers out there who manage to enthuse and inspire despite the odds stacked against them. Few if any of us survive unscathed, though most of us get something useful here and there despite the obstacles. But we could be so much more.

Address of the bookmark: http://readwrite.com/2015/05/07/reward-then-deduct-loss-aversion-brain-hack

Half an Hour: The Study, and Other Stuff

This is the latest in a fascinating ongoing argument between George Siemens and Stephen Downes over the value, reliability and focus of Preparing for the Digital University, a report created by George, Dragan Gasevic, Shane Dawson and many others on the current state of research and practice in (mainly) online and distance learning. Because I am quoted in Stephen’s post, and in George’s post to which it is a response, as agreeing with Stephen, I’d like to clarify just what I agree with.

Where I disagree with Stephen is that I do think it is a good report that pulls together a lot of good research as well as other sources to provide a rich and informative picture of what universities have been doing in the field of online and distance learning, and how they got there. I think its audience is mainly not seasoned edtech researchers, though there is a lot of valuable synthesis and analysis in it that those of us in the field can and will certainly use. I see it as a strength that it does not just limit itself to ‘reliable’ research (whatever that may be – I’ve seldom found an unequivocal example of that elusive beast) and I am quite happy with the range and depth of the sources it uses. This is an expert summary and analysis by some of the top experts in the field who know whereof they speak. Of course it misses some things and over-emphasizes others, but that is the nature of the animal and I think it does a very good job of remaining broad, informative and clear.

I think Stephen and I are in rough agreement, though, in observing the boundary that the report does not try too hard to cross: the challenge that some of the research presents to the very notion of the university as we now know it and, to a lesser extent, the under-representation of ideas and research that relate to that. The latter point is a tricky systemic problem because, on the whole, the majority of work and writing in that space is under-represented in literature that, because it tends to come from universities, tends to focus on universities. As this report is about universities, it is quite reasonable that this is the body of literature it uses.

The relative lack of beyond-the-institution thinking is, I believe, a concern for Stephen but, for me, it’s just something that, if I were writing it, I would want to add more of. It seems to me that this report will have most value in providing information for policy makers, managers of institutions, and those who are beginning to discover the field. It will open some eyes, help people to avoid old mistakes, and open up some important discussions. But, thanks to the intentional focus on the university and how we got to now, the structures, processes and measures are mostly rooted in an assumption that the university as we know it can and should persist. The discussion that emerges will inevitably tend to focus on how digital technologies can be used to do what we already do in (from a birds-eye perspective) only slightly different ways. In doing so, it may blind participants to the very real threats to their whole way of life as well as opportunities that are worth grasping. This may not be the best idea but it is not a weakness in the report as such – it is, after all, doing exactly what it says on the box. In fact, it is to its credit that it does address some approaches and tools that are transformative. 

I agree with George (and with Stephen’s hopes) that universities are and will continue to be really important institutions that can and should offer great value to our societies for a long time to come. We would invent them very differently, or maybe not invent anything like them at all, if we started afresh, knowing what we now know. The reality is, however, that this is what we have and it has enormous momentum that is not going to stop any time soon so, if we are to make the best use of it, we should both understand and make improvements to it. This report is a solid foundation for that. There are some risks that it might, without further reflection, lead to ‘improvement’ of the wrong things – those that are counter-productive to the goals of increasing knowledge and learning in the world – and so further entrench harmful practices. My pet hobby horses include courses, grades, and the unholy linking of learning and accreditation, for instance. But there are other huge problems, like the trend towards systemic exclusion of disadvantaged people, the treatment of students as customers, and the unnatural separation of disciplines and fields. Stephen mentions more. With that in mind, it would be useful to think a bit further about the ways that the foundations of the university – teaching, accreditation, community, knowledge production, knowledge dissemination, being a knowledge repository, a source of expertise and so on – are being not-too-subtly eroded by things that are enabled by the net, as well as to further critique the embedded patterns, limitations, biases, and blind-spots that make those foundations brittle and liable to crack or crumble.

The basis for that is all there in this report but, on reflection, I think the discussion of those issues is something for a further report, as it demands a different level and kind of analysis. I am not at all sure that the Gates Foundation would want to fund such a thing, but it should. Actually, maybe that line of thinking is a bit too narrow. After all, the exchange between George and Stephen, as well as contributions by others (e.g. George Veletsianos), is already, and self-referentially, doing much the same job such a report might do, and maybe doing it better. The learning dialogue and knowledge creation that is occurring through this distributed conversation is as rich as the report itself and, in its own way, at least as valuable. If the report had not been written then that dialogue might not have occurred, so it is a good anchor, but it is part of a richer knowledge network. And that’s exactly the point: technologies like social media are deeply subversive because they enable us to do some of the job that universities traditionally do without requiring a university as a necessary intermediary, with all the limitations and exclusions that implies. The patterns, technologies, economic models, checks and balances are not there yet to replace all of a university’s functions – we have much research to do and many inventions yet to invent, and I am very aware that it is only because of the university that I for one am able to participate in this – but the change is already happening, and it is quite profound.

Address of the bookmark: http://halfanhour.blogspot.ca/2015/05/the-study-and-other-stuff.html

Half an Hour: Research and Evidence

Stephen Downes defends his attack on the recent report on the current state of online (etc) learning developed by George Siemens, Dragan Gasevic and Shane Dawson.

I have mixed feelings about this. As such reports go, I think it is a good one. It does knit together a fair sample of the literature, including bits from journalists and bloggers as well as more and less credible research, in a form that I think is digestible enough and sufficiently broad for corporate folk who need to get up to speed, including those running academies. Its methods are clear and its outputs are accessible. Though written by very well-informed researchers (not just George) and making use of copious amounts of research, I don’t think it is really aimed at researchers in the field. My impression is that it’s mainly for the under-informed policy makers who need to be better informed, not for those of us who already know this stuff, and it does that job well. It’s a lot more than journalism, but a little less than an academic paper. I can also see a useful role for it for those who need to know roughly where we are now in online learning (e.g. edtech developers), but who are not seeking to become researchers in the field. 

I think the more fundamental problem, and one that both George and Stephen seem to be fencing around, is in its title. The suggestion that it is about ‘preparing for the digital university’ is tricky on two counts. First of all, ‘preparing’ seems a funny word to use: it’s like saying we are ‘preparing’ for a storm when the waves are high around us and we are on the verge of capsizing. Secondly, and more tellingly, ‘digital university’ implies an expected outcome that is rather at odds with a lot of both George’s and Stephen’s work. The assumption that a university is the answer to the problem (which is what the title implies) is tricky, to say the least, especially given quite a lot of the discussion surrounding incursions by commercial and alternative (especially bottom-up) forms of accreditation and learning that step far outside the realms of traditional academia and challenge its very foundations. The final chapter mentions quite a few tools and approaches that relegate the institution to a negligible role, but there are hints of this scattered through much of the report, from commercial incursions to uses of reputation measures in Stack Overflow. If we are thinking of preparing for the future, the language and methods of formal education, courses, and mediaeval institutions might be a fair place to start but maybe not the place to aim for. There’s a tension throughout the report between the soft disruptive nature of digital technologies (not so much the tools but what people do with them) and the hard mechanization of arbitrarily evolved patterns – between social recommendation and automated marking, for instance. The latter reinforces the university as an institution even if it does upset some power structures and working practices a little. The former (potentially) disrupts the notion and societal role of the university itself. For the most part, this report is a review of the history and current state of online/distance/blended learning in formal education. 
This is in keeping with the title, but not with the ultimate thrust of at least a few of the findings. That does rather stifle the potential for really getting under the skin of the problem. It’s a view from the inside, not from above. Though it hints at transformation, it is ultimately in defence of the realm. Personally speaking, I would have liked to see a bit more critique of the realm itself. The last chapter, in particular, provides some evidence that could be used to make such a case, but does not really push it where it wants to go. But I’m not the one this report is aimed at.


Address of the bookmark: http://halfanhour.blogspot.ca/2015/05/research-and-evidence.html