Brilliant. The short answer is, of course, yes, and it doesn’t do a bad job of it. This is conceptual art of the highest order.
This is the preprint of a paper written by GPT-3 (as first author) about itself, submitted to “a well-known peer-reviewed journal in machine intelligence”. The second and third authors provided guidance about themes, datasets, weightings, etc., but that’s as far as it goes. They do provide commentary as the paper progresses, but they tried to keep that as minimal as needed, so that the paper could stand or fall on its own merits. The paper is not too bad. A bit repetitive, a bit shallow, but it’s just a 500-word paper (hardly even an extended abstract), so that’s about par for the course. The arguments and supporting references are no worse than many I have reviewed, and considerably better than some. The use of English is much better than that of the majority of papers I review.
In an article about it in Scientific American the co-authors describe some of the complexities of the submission process. They actually asked GPT-3 for its consent to publication (it said yes), but this just touches the surface of some of the huge ethical, legal, and social issues that emerge. Boy, there are a lot of those! The second and third authors deserve a prize for this. But what about the first author? Well, clearly it does not deserve one, because its orchestration of phenomena is not for its own use, and it is not even aware that it is doing the orchestration. It has no purpose other than that of the people training it. In fact, despite having written a paper about itself, it doesn’t even know what ‘itself’ is in any meaningful way. But it raises a lot of really interesting questions.
It would be quite interesting to train GPT-3 with (good) student assignments to see what happens. I think it would potentially do rather well. If I were an ethically imperfect, extrinsically-driven student with access to this, I might even get it to write my assignments for me. The assignments might need a bit of tidying here and there, but the quality of prose and the general quality of the work would probably result in a good B and most likely an A, with very little extra tweaking. With a bit more training it could almost certainly mimic a particular student’s style, including all the quirks that would make it seem more human. Plagiarism detectors wouldn’t stand a chance, and I doubt that many (if any) humans would be able to say with any assurance that it was not the student’s own work.
If it’s not already happening, this is coming soon, so I’m wondering what to do about it. I think my own courses are slightly immune thanks to the personal and creative nature of the work and the big emphasis on reflection in all of them (though those with essays would be vulnerable), but it would not take too much ingenuity to get GPT-3 to deal with that problem, too: at the very least, it could greatly reduce the effort needed. I guess we could train our own AIs to recognize the work of other AIs, but that’s an arms race we’d never be able to definitively win. I can see the exam-loving crowd loving this, but they are in another arms race that they stopped winning long ago – there’s a whole industry devoted to making cheating in exams pay, and it’s leaps ahead of the examiners, including those with both online and in-person proctors. Oral exams, perhaps? That would make it significantly more difficult (though far from impossible) to cheat. I rather like the notion that the only summative assessment model that stands a fair chance of working is the one with which academia began.
It seems to me that the only way educators can sensibly deal with the problem is to completely divorce credentialling from learning and teaching, so there is no incentive to cheat during the learning process. This would have the useful side-effect that our teaching would have to be pretty good and pretty relevant, because students would only come to learn, not to get credentials, so we would have to focus solely on supporting them, rather than controlling them with threats and rewards. That would not be such a bad thing, I reckon, and it is long overdue. Perhaps this will be the catalyst that makes it happen.
As for credentials, that’s someone else’s problem. I don’t say that because I want to wash my hands of it (though I do) but because credentialling has never had anything whatsoever to do with education, apart from its appalling inhibition of effective learning. It only happens at the moment because of historical happenstance, not because it ever made any pedagogical sense. I don’t see why educators should have anything to do with it. Assessment (by which I solely mean feedback from self or others that helps learners to learn – not grades!) is an essential part of the learning and teaching process, but credentials are positively antagonistic to it.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/14216255/can-gpt-3-write-an-academic-paper-on-itself-with-minimal-human-input
This is a chapter by me and Terry Anderson for Springer’s new Handbook of Open, Distance, and Digital Education that updates and refines our popular (1658 citations, and still rising, for the original paper alone) but now long-in-the-tooth ‘three generations’ model of distance learning pedagogy. We have changed the labels for the pedagogical families this time round to ones that I think are more coherent, divided according to their epistemological underpinnings: the objectivist, the subjectivist, and the complexivist, and we have added some speculations about whether further paradigms might have started to emerge in the 11 years since our original paper was published. Our main conclusion, though, is that no single pedagogical paradigm will dominate in the foreseeable future: that we are in an era of great pedagogical diversity, and that this diversity will only increase as time goes by.
The three major paradigms
Objectivist: previously known as ‘behaviourist/cognitivist’, what characterizes objectivist pedagogies is that they are both defined by assumptions of an objective external reality, and driven by (usually teacher-defined) objectives. It’s a paradigm of teaching, where teachers are typically sages on the stage using methods intended to achieve effective learning of defined facts and skills. Examples include behaviourism, learning styles theories, brain-based approaches, multiple intelligence models, media theories, and similar approaches where the focus is on efficient transmission and replication of received knowledge.
Subjectivist: formerly known as ‘social constructivist’, subjectivist pedagogies are concerned with – well – subjects: they are concerned with the personal and social co-construction of knowledge, recognizing its situated and always unique nature, saying little about methods but a lot about meaning-making. It’s a paradigm of learning, where teachers are typically guides on the side, supporting individuals and groups to learn in complex, situated contexts. Examples include constructivist, social constructivist, constructionist, and similar families of theory where the emphasis is as much on the learners’ growth and development in a human society as it is on what is being learned.
Complexivist: originally described as ‘connectivist’ (which was confusing and inaccurate), complexivist pedagogies acknowledge and exploit the complex nature of our massively distributed cognition, including its richly recursive self-organizing and emergent properties, its reification through shared tools and artefacts, and its many social layers. It’s a paradigm of knowledge, where teachers are fellow learners, co-travellers and role models, and knowledge exists not just in individual minds but in our minds’ extensions, in both other people and what we collectively create. Examples include connectivism, rhizomatic learning, distributed cognition, cognitive apprenticeship, networks of practice, and similar theories (including my own co-participation model, as it happens). We borrow the term ‘complexivist’ from Davis and Sumara, whose 2006 book on the subject is well worth reading, albeit grounded mainly in in-person learning.
No one paradigm dominates: all typically play a role at some point of a learning journey, all build upon and assemble ideas that are contained in the others (theories are technologies too), and all have been around as ways of learning for as long as humans have existed.
Beyond these broad families, we speculate on whether any new pedagogical paradigms are emerging or have emerged within the 12 years since we first developed these ideas. We come up with the following possible candidates:
Theory-free: this is a digitally native paradigm that typically employs variations of AI technologies to extract patterns from large amounts of data on how people learn, and that provides support accordingly. This is the realm of adaptive hypermedia, learning analytics, and data mining. While the vast majority of such methods are very firmly in the objectivist tradition (the models are trained or designed by identifying what leads to ‘successful’ achievement of outcomes), a few look beyond defined learning products into social engagement or other measures of the learning process, or seek open-ended patterns in emergent collective behaviours. We see the former as a dystopic trend, but find promise in the latter, notwithstanding the risks of filter bubbles and systemic bias.
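The distinction between the two flavours of theory-free analytics can be sketched in a few lines of code. This is only a toy illustration with invented data and thresholds, not any real learning-analytics system: the first function works in the objectivist style, learning a decision rule from labelled ‘success’ outcomes, while the second simply looks for whatever groupings emerge in activity data, with no predefined notion of success built in.

```python
# Toy contrast between outcome-driven ("objectivist") analytics and
# open-ended pattern-seeking. All data and thresholds are invented.

def outcome_predictor(training, new_hours):
    """Objectivist style: learn a pass/fail threshold from labelled outcomes.
    training: list of (weekly_study_hours, passed) pairs."""
    passed = [h for h, p in training if p]
    failed = [h for h, p in training if not p]
    # The midpoint between the two group means serves as the decision rule.
    threshold = (sum(passed) / len(passed) + sum(failed) / len(failed)) / 2
    return new_hours >= threshold

def emergent_clusters(hours, rounds=10):
    """Open-ended style: a tiny one-dimensional two-means clustering that
    surfaces whatever groupings exist in the activity data.
    (Empty-cluster handling is omitted for brevity.)"""
    lo, hi = min(hours), max(hours)
    for _ in range(rounds):
        near_lo = [h for h in hours if abs(h - lo) <= abs(h - hi)]
        near_hi = [h for h in hours if abs(h - lo) > abs(h - hi)]
        lo = sum(near_lo) / len(near_lo)   # recompute cluster centres
        hi = sum(near_hi) / len(near_hi)
    return sorted(near_lo), sorted(near_hi)

training = [(2, False), (3, False), (8, True), (9, True)]
print(outcome_predictor(training, 7))        # True: above the learned threshold
print(emergent_clusters([1, 2, 2, 9, 10, 11]))  # two groups, no labels attached
```

The point of the contrast is that the first function can only ever confirm or deny a pre-specified definition of success, while the second reports structure that nobody asked for, which a human can then interpret.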
Hologogic: this is a nascent paradigm that treats learning as a process of enculturation. It’s about how we come to find our places in our many overlapping cultures, where belonging to and adopting the values and norms of the sets to which we belong (be it our colleagues, our ancestors, our subject-matter peers, or whatever) is the primary focus. There are few theories that apply to this paradigm, as yet, but it is visible in many online and in-person communities, and is/has been of particular significance in collectivist cultures where the learning of one is meaningless unless it is also the learning of all (sometimes including the ancestors). We see this as a potentially healthy trend that takes us beyond the individualist assumptions underpinning much of the field, though there are risks of divisions and echo chambers that pit one culture against others. We borrow the term from Cumbie and Wolverton.
Bricolagogic: this is a free-for-all paradigm, a kind of meta-pedagogy in which any pedagogical method, model, or theory may be used, chosen for pragmatic or personal reasons, but in which the primary focus of learning is in choosing how (in any given context) we should learn. Concepts of charting and wayfinding play a strong role here. This resembles what we originally identified as an emerging ‘holistic’ model, but we now see it not as a simple mish-mash of pedagogical paradigms but rather as a pedagogic paradigm in its own right.
Another emerging paradigm?
I have recently been involved in a lengthy Twitter thread, started by Tim Fawns on the topic of his recent paper on entangled pedagogy, which presents a view very similar indeed to my own (e.g. here and here), albeit expressed rather differently (and more eloquently). There are others in the same thread who express similar views. I suggested in this thread that we might be witnessing the birth of a new ‘entanglist’ paradigm that draws very heavily on complexivism (and that could certainly be seen as part of the same family) but that views the problem from a rather different perspective. It is still very much about complexity, emergence, extended minds, recursion, and networks, and it negates none of that, but it draws its boundaries around the networked nodes at a higher level than theories like Connectivism, yet with more precision than those focused on human learning interactions such as networks of practice or rhizomatic learning. Notably, it leaves room for design (and designed objects), for meaning, and for passion as part of the deeply entangled complex system of learning in which we all participate, willingly or not. It’s not specifically a pedagogical model – it’s broader than that – though it does imply many things about how we should and should not teach, and about how we should understand pedagogies as part of a massively distributed system in which designated teachers account for only a fraction of the learning and teaching process. The title of my book on the subject (that has been under review for 16 months – grrr) sums this up quite well, I think: “How Education Works”. The book has now (as of a few days ago) received a very positive response from reviewers and is due to be discussed by the editorial committee at the end of this month, so I’m hoping that it may be published in the not-too-distant future. Watch this space!
Here’s the chapter abstract:
Building on earlier work that identified historical paradigm shifts in open and distance learning, this chapter is concerned with analyzing the three broad pedagogical paradigms – objectivist, subjectivist, and complexivist – that have characterized learning and teaching in the field over the past half century. It goes on to discuss new paradigms that are starting to emerge, most notably in “theory-free” models enabled by developments in artificial intelligence and analytics, hologogic methods that recognize the many cultures to which we belong, and a “bricolagogic,” theory-agnostic paradigm that reflects the field’s growing maturity and depth.
Dron J., Anderson T. (2022) Pedagogical Paradigms in Open and Distance Education. In: Zawacki-Richter O., Jung I. (eds) Handbook of Open, Distance and Digital Education. Springer, Singapore. https://doi.org/10.1007/978-981-19-0351-9_9-1
This is the (near enough final) English version of my journal paper, translated into Chinese by Junhong Xiao and published last year (with a CC licence) in Distance Education in China. (Reference: Dron, Jon (2021). Technology, technique, and culture in educational systems: breaking the iron triangle (translated by Junhong Xiao). Distance Education in China, 1, 37-49. DOI:10.13541/j.cnki.chinade.2021.01.005).
The underlying theory is the same as that in my paper Educational technology: what it is and how it works (Reference: Dron, J. Educational technology: what it is and how it works. AI & Soc 37, 155–166 (2022). https://doi.org/10.1007/s00146-021-01195-z direct link for reading, link to downloadable preprint) but this one focuses more on what it means for the ways we go about distance learning. It’s essentially about ways to solve problems that we created for ourselves by solving problems in the context of in-person learning and then inappropriately transferring the solutions to a distance context.
Here’s the abstract: This paper presents arguments for a different way of thinking about how distance education should be designed. The paper begins by explaining education as a technological process, in which we are not just users of technologies for learning but coparticipants in their instantiation and design, implying that education is a fundamentally distributed technology. However, technological and physical constraints have led to processes (including pedagogies) and path dependencies in in-person education that have tended to massively over-emphasize the designated teacher as the primary controller of the process. This has resulted in the development of many counter technologies to address the problems this causes, from classrooms to grades to timetables, most of which have unnecessarily been inherited by distance education. By examining the different strengths and weaknesses of distance education, the paper suggests an alternative model of distance education that is more personal, more situated in communities and cultures, and more appropriate to the needs of learners and society.
I started working on a revised version of this (with a snappier title) to submit to an English language journal last year but got waylaid. If anyone is interested in publishing this, I’m open to submitting it!
This is a long paper (about 10,000 words), that summarizes some of the central elements of the theoretical model of learning, teaching and technology developed in my recently submitted book (still awaiting review) and that gives a few examples of its application. For instance, it explains:
why, on average, researchers find no significant difference between learning with and without tech.
why learning styles theories are a) inherently unprovable, b) not important even if they were, and c) a really bad idea in any case.
why bad teaching sometimes works (and, conversely, why good teaching sometimes fails)
why replication studies cannot be done for most educational interventions (and, for the small subset that are susceptible to reductive study, all you can prove is that your technology works as intended, not whether it does anything useful).
This theoretical paper elucidates the nature of educational technology and, in the process, sheds light on a number of phenomena in educational systems, from the no-significant-difference phenomenon to the singular lack of replication in studies of educational technologies. Its central thesis is that we are not just users of technologies but coparticipants in them. Our participant roles may range from pressing power switches to designing digital learning systems to performing calculations in our heads. Some technologies may demand our participation only in order to enact fixed, predesigned orchestrations correctly. Other technologies leave gaps that we can or must fill with novel orchestrations, that we may perform more or less well. Most are a mix of the two, and the mix varies according to context, participant, and use. This participative orchestration is highly distributed: in educational systems, coparticipants include the learner, the teacher, and many others, from textbook authors to LMS programmers, as well as the tools and methods they use and create. From this perspective, all learners and teachers are educational technologists. The technologies of education are seen to be deeply, fundamentally, and irreducibly human, complex, situated and social in their constitution, their form, and their purpose, and as ungeneralizable in their effects as the choice of paintbrush is to the production of great art.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/8692242/my-latest-paper-educational-technology-what-it-is-and-how-it-works
These are the slides from my keynote at the University of Ottawa’s “Scaffolding a Transformative Transition to Distance and Online Learning” symposium today. In the presentation I discussed why distance learning really is different from in-person learning, focusing primarily on the fact that they are the motivational inverse of one another. In-person teaching methods evolved in response to the particular constraints and boundaries imposed by physics, and consist of many inventions – pedagogical and otherwise – that are counter-technologies designed to cope with the consequences of teaching in a classroom, a lot of which are not altogether wise. Many of those constraints do not exist online, and yet we continue to do very similar things, especially those that control and dictate what students should do, as well as when, and how they should do it. This makes no sense, and is actually antagonistic to the natural flow of online learning. I provided a few simple ideas and prompts for thinking about how to go more with the flow.
The presentation was only 20 minutes of a lively and inspiring hour-long session, which was fantastic fun and provided me with many interesting questions and a chance to expand further on the ideas.
A group of us at AU have begun discussions about how we might transform our assessment practices, in the light of the far-reaching AU Imagine plan and principles. This is a rare and exciting opportunity to bring about radical and positive change in how learning happens at the institution. Hard technologies influence soft more than vice versa, and assessments (particularly when tied to credentials) tend to be among the hardest of all technologies in any pedagogical intervention. They are therefore a powerful lever for change. Equally, and for the same reasons, they are too often the large, slow, structural elements that infest systems, stunting progress and innovation.
Almost all learning must involve assessment, whether it be of one’s own learning, or provided by other people or machines. Even babies constantly assess their own learning. Reflection is assessment. It is completely natural, and it only gets weird when we treat it as a summative judgment, especially when we add grades or credentials to the process, thus normally changing the purpose of learning from achieving competence to achieving a reward. At best it distorts learning, making it seem like a chore rather than a delight; at worst it destroys it, even (and perhaps especially) when learners successfully comply with the demands of assessors and get a good grade. Unfortunately, that’s how most educational systems are structured, so the big challenge for all teachers must be to eliminate or at least massively reduce this deeply pernicious effect. A large number of the pedagogies that we most value are designed to solve problems that are directly caused by credentials. These pedagogies include assessment practices themselves.
With that in mind, before the group’s first meeting I compiled a list of some of the main principles that I adhere to when designing assessments, most of which are designed to reduce or eliminate the structural failings of educational systems. The meeting caused me to reflect a bit more. This is the result:
Principles applying to all assessments
The primary purpose of assessment is to help the learner to improve their learning. All assessment should be formative.
Assessment without feedback (whether from teacher, peer, machine, or self) is judgement, not assessment, and it is pointless.
Ideally, feedback should be direct and immediate or, at least, as prompt as possible.
Feedback should only ever relate to what has been done, never the doer.
No criticism should ever be made without also at least outlining steps that might be taken to improve on it.
Grades (with some very rare minor exceptions where the grade is intrinsic to the activity, such as some gaming scenarios or, arguably, objective single-answer quizzes with T/F answers) are not feedback.
Assessment should never ever be used to reward or punish particular prior learning behaviours (e.g. using exams to encourage revision, grades as goals, marks for participation, etc.).
Students should be able to choose how, when and on what they are assessed.
Where possible, students should participate in the assessment of themselves and others.
Assessment should help the teacher to understand the needs, interests, skills, and gaps in knowledge of their students, and should be used to help to improve teaching.
Assessment is a way to show learners that we care about their learning.
Specific principles for summative assessments
A secondary (and always secondary) purpose of assessment is to provide evidence for credentials. This is normally described as summative assessment, implying that it assesses a state of accomplishment when learning has ended. That is a completely ridiculous idea. Learning doesn’t end. Human learning is not in any meaningful way like programming a computer or storing stuff in a database. Knowledge and skills are active, ever-transforming, forever actively renewed, reframed, modified, and extended. They are things we do, not things we have.
With that in mind, here are my principles for assessment for credentials (none of which supersede or override any of the above core principles for assessment, which always apply):
There should be no assessment task that is not in itself a positive learning activity. Anything else is at best inefficient, at worst punitive/extrinsically rewarding.
Assessment for credentials must be fairly applied to all students.
Credentials should never be based on comparisons between students (norm-referenced assessment is always, unequivocally, and irredeemably wrong).
The criteria for achieving a credential should be clear to the learner and other interested parties (such as employers or other institutions), ideally before it happens, though this should not forestall the achievement and consideration of other valuable outcomes.
There is no such thing as failure, only unfinished learning. Credentials should only celebrate success, not punish current inability to succeed.
Students should be able to choose when they are ready to be assessed, and should be able to keep trying until they succeed.
Credentials should be based on evidence of competence and nothing else.
It should be impossible to compromise an assessment by revealing either the assessment or solutions to it.
There should be at least two ways to demonstrate competence, ideally more. Students should only have to prove it once (though may do so in many ways and many times, if they wish).
More than one person should be involved in judging competence (at least as an option, and/or on a regularly taken sample).
Students should have at least some say in how, when, and where they are assessed.
Where possible (accepting potential issues with professional accreditation, credit transfer, etc) they should have some say over the competencies that are assessed, in weighting and/or outcome.
Grades and marks should be avoided except where mandated elsewhere. Even then, all passes should be treated as an ‘A’ because students should be able to keep trying until they excel.
Great success may sometimes be worthy of an award – e.g. a distinction – but such an award should never be treated as a reward.
Assessment for credentials should demonstrate the ability to apply learning in an authentic context. There may be many such contexts.
Ideally, assessment for credentials should be decoupled from the main teaching process, because of risks of bias, the potential issues of teaching to the test (regardless of individual needs, interests and capabilities) and the dangers to motivation of the assessment crowding out the learning. However, these risks are much lower if all the above principles are taken on board.
I have most likely missed a few important issues, and there is a bit of redundancy in all this, but this is a work in progress. I think it covers the main points.
Further random reflections
There are some overriding principles and implied specifics in all of this. For instance, respect for diversity, accessibility, respect for individuals, and recognition of student control all fall out of or underpin these principles. It implies that we should recognize success, even when it is not the success we expected, so outcome harvesting makes far more sense than measurement of planned outcomes. It implies that failure should only ever be seen as unfinished learning, not as a summative judgment of terminal competence, so appreciative inquiry is far better than negative critique. It implies flexibility in all aspects of the activity. It implies, above and beyond any other purpose, that the focus should always be on learning. If assessment for credentials adversely affects learning then it should be changed at once.
In terms of implementation, while objective quizzes and their cousins can play a useful formative role in helping students to self-assess and to build confidence, machines (whether implemented by computers or rule-following humans) should normally be kept out of credentialling. There’s a place for AI but only when it augments and informs human intelligence, never when it behaves autonomously. Written exams and their ilk should be avoided, unless they conform to or do not conflict with all the above principles: I have found very few examples like this in the real world, though some practical demonstrations of competence in an authentic setting (e.g. lab work and reporting) and some reflective exercises on prior work can be effective.
A portfolio of evidence, including a reflective commentary, is usually going to be the backbone of any fair, humane, effective assessment: something that lets students highlight successes (whether planned or not), that helps them to consolidate what they have learned, and that is flexible enough to demonstrate competence shown in any number of ways. Outputs or observations of authentic activities are going to be important contributors to that. My personal preference in summative assessments is to only use the intended (including student-generated) and/or harvested outcomes for judging success, not for mandated assignments. This gives flexibility, it works for every subject, and it provides unequivocal and precise evidence of success. It’s also often good to talk with students, perhaps formally (e.g. a presentation or oral exam), in order to tease out what they really know and to give instant feedback. It is worth noting that, unlike written exams and their ilk, such methods are actually fun for all concerned, albeit that the pleasure comes from solving problems and overcoming challenges, so it is seldom easy.
Interestingly, there are occasions in traditional academia where these principles are, for the most part, already widely applied. A typical doctoral thesis/dissertation, for example, often comes quite close (especially in more modern professional forms that put more emphasis on recording the process), as do some student projects. We know that such things are a really good idea, and lead to far richer, more persistent, more fulfilling learning for everyone. We do not do them ubiquitously for reasons of cost and time. It does take a long time to assess something like this well, and it can take more time during the rest of the teaching process thanks to the personalization (real personalization, not the teacher-imposed form popularized by learning analytics aficionados) and extra care that it implies. It is an efficient use of our time, though, because of its active contribution to learning, unlike a great many traditional assessment methods such as teacher-set assignments (minimal contribution) and exams (negative contribution). A lot of the reason for our reluctance, though, is the typical university’s schedule and class timetabling, which makes everything pile on at once in an intolerable avalanche of submissions. If we really take autonomy and flexibility on board, it doesn’t have to be that way. If students submit work when it is ready to be submitted, if they are not all working in lock-step, and if it is a work of love rather than compliance, then assessment is often a positively pleasurable task and is naturally staggered. Yes, it probably costs a bit more time in the end (though there are plenty of ways to mitigate that, from peer groups to pedagogical design) but every part of it is dedicated to learning, and the results are much better for everyone.
Some useful further reading
This is a fairly random selection of sources that relate to the principles above in one way or another. I have definitely missed a lot. Sorry for any missing URLs or paywalled articles: you may be able to find downloadable online versions somewhere.
Boud, D., & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399-413. Retrieved from https://www.jhsph.edu/departments/population-family-and-reproductive-health/_docs/teaching-resources/cla-01-aligning-assessment-with-long-term-learning.pdf
Boud, D. (2007). Reframing assessment as if learning were important. Retrieved from https://www.researchgate.net/publication/305060897_Reframing_assessment_as_if_learning_were_important
Cooperrider, D. L., & Srivastva, S. (1987). Appreciative inquiry in organizational life. Research in organizational change and development, 1, 129-169.
Deci, E. L., Vallerand, R. J., Pelletier, L. G., & Ryan, R. M. (1991). Motivation and education: The self-determination perspective. Educational Psychologist, 26(3/4), 325-346.
Hussey, T., & Smith, P. (2002). The trouble with learning outcomes. Active Learning in Higher Education, 3(3), 220-233.
Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A’s, praise, and other bribes (Kindle ed.). Mariner Books. (this one is worth forking out money for).
Kohn, A. (2011). The case against grades. Educational Leadership, 69(3), 28-33.
Kohn, A. (2015). Four Reasons to Worry About “Personalized Learning”. Retrieved from http://www.alfiekohn.org/blogs/personalized/ (check out Alfie Kohn’s whole site for plentiful other papers and articles – consistently excellent).
Reeve, J. (2002). Self-determination theory applied to educational settings. In E. L. Deci & R. M. Ryan (Eds.), Handbook of Self-Determination research (pp. 183-203). Rochester, NY: The University of Rochester Press.
Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. Guilford Publications. (may be worth paying for if such things interest you).
Wilson-Grau, R., & Britt, H. (2012). Outcome harvesting. Cairo: Ford Foundation. http://www.managingforimpact.org/sites/default/files/resource/outome_harvesting_brief_final_2012-05-2-1.pdf.
These are the slides from my recent talk with students studying the philosophy of education at Pace University.
This is a mashup of various talks I have given in recent years, with a little new stuff drawn from my in-progress book. It starts with a discussion of the nature of technology, and the distinction between hard and soft technologies that sees relative hardness as the amount of pre-orchestration in a technology (be it a machine or a legal system or whatever). I observe that pedagogical methods (‘pedagogies’ for short) are soft technologies to those who are applying them, if not to those on the receiving end. It is implied (though I forgot to explicitly mention) that hard technologies are always more structurally significant than soft ones: they frame what is possible.
All technologies are assemblies, and (in education), the pedagogies applied by learners are always the most important parts of those assemblies. However, in traditional in-person classrooms, learners are (by default) highly controlled due to the nature of physics – the need to get a bunch of people together in one place at one time, scarcity of resources, the limits of human voice and hearing, etc – and the consequent power relationships and organizational constraints that occur. The classroom thus becomes the environment that frames the entire experience, which is very different from what are inaccurately described as online learning environments (which are just parts of a learner’s environment).
Because of physical constraints, the traditional classroom context is inherently very bad for intrinsic motivation. It leads to learners who don’t necessarily want to be there, having to do things they don’t necessarily want to do, often being either bored or confused. By far the most common solution to that problem is to apply externally regulated extrinsic motivation, such as grades, punishments for non-attendance, rules of classroom behaviour, and so on. This just makes matters much worse, and makes the reward (or the avoidance of punishment) the purpose of learning. Intelligent responses to this situation include cheating, short-term memorization strategies, satisficing, and agreeing with the teacher. It’s really bad for learning. Such issues are not at all surprising: all technologies create as well as solve problems, so we need to create counter technologies to deal with them. Thus, what we normally recognize as good pedagogy is, for the most part, a set of solutions to the problems created by the constraints of in-person teaching, to bring back the love of learning that is destroyed by the basic set-up. A lot of good teaching is therefore to do with supporting at least better, more internally regulated forms of extrinsic motivation.
Because pedagogies are soft technologies, skill is needed to use them well. Harder pedagogies, such as Direct Instruction, that are more prescriptive of method tend (on average) to work better than softer pedagogies such as problem-based learning, because most teachers tend towards being pretty average: that’s implicit in the term, after all. Lack of skill can be compensated for through the application of a standard set of methods that only need to be done correctly in order to work. Because such methods can work for good teachers as well as the merely average or bad, their average effectiveness is, of course, high. Softer pedagogical methods such as active learning, problem-based learning, inquiry-based learning, and so on rely heavily on passionate, dedicated, skilled, time-rich teachers and so, on average, tend to be less successful. However, when done well, they outstrip more prescriptive methods by a large margin, and lead to richer, more expansive outcomes that go far beyond those specified in a syllabus or test. Softer technologies, by definition, allow for greater creativity, flexibility, adaptability, and so on than harder technologies but are therefore more difficult to implement. There is no such thing as a purely hard or purely soft technology, though, and all exist on a spectrum. Because all pedagogies are relatively soft technologies, even those that are quite prescriptive, almost any pedagogical method can work if it is done well: clunky, ugly, weak pedagogies used by a fantastic teacher can lead to great, persistent, enthusiastic learning. As Hattie observes, almost everything works – at least, that’s true of most things that are reported on in educational research studies :-). But (and this is the central message of my book, the consequences of which are profound) it ain’t what you do, it’s the way that you do it, that’s what gets results.
Problems can occur, though, when we use the same methods that work in person in a different context for which they were not designed. Online learning is by far the most dominant mode of learning (for those with an Internet connection – some big social, political, economic, and equity issues here) on the planet. Google, YouTube, Wikipedia, Reddit, StackExchange, Quora, etc, etc, etc, not to mention email, social networking sites, and so on, are central to how most of us in the online world learn anything nowadays. The weird thing about online education (in the institutional sense) is that online learning is far less obviously dominant, and tends to be viewed in a far less favourable light when offered as an option. Given the choice, and without other constraints, most students would rather learn in-person than online. At least in part, this is due to the fact that those of us working in formal online education continue to apply pedagogies and organizational methods that solved problems in in-person classrooms, especially with regard to teacher control: the rewards and punishments of grades, fixed length courses, strictly controlled pathways, and so on are solutions to problems that do not exist or that exist in very different forms for online learners, whose learning environment is never entirely controlled by a teacher.
The final section of the presentation is concerned with what – in very broad terms – native distance pedagogies might look like. Distance pedagogies need to acknowledge the inherently greater freedoms of distance learners and the inherently distributed nature of distance learning. Truly learner-centric teaching does not seek to control, but to support, and to acknowledge the massively distributed nature of the activity, in which everyone (including emergent collective and networked forms arising from their interactions) is part of the gestalt teacher, and each learner is – from their perspective – the most important part of all of that. To emphasize that none of this is exactly new (apart from the massive scale of connection, which does matter a lot), I include a slide of Leonardo’s to-do list that describes much the same kinds of activity as those that are needed of modern learners and teachers.
For those seeking more detail, I list a few of what Terry Anderson and I described as ‘Connectivist-generation’ pedagogical models. These are far more applicable to native online learning than earlier pedagogical generations that were invented for an in-person context. In my book I am now describing this new, digitally native generation as ‘complexivist’ pedagogies, which I think is a more accurate and less confusing name. It also acknowledges that many theories and models in the family (such as John Seely Brown’s distributed cognitive apprenticeship) predate Connectivism itself. The term comes from Davis and Sumara’s 2006 book, ‘Complexity and Education’, which is a great read that deserves more attention than it received when it was published.
The Verge reports on a variety of studies that show taking notes with laptops during lectures results in decreased learning when compared with notes taken using pen and paper. This tells me three things, none of which is what the article is aiming to tell me:
That the institutions are teaching very badly. Countless decades of far better evidence than that provided in these studies shows that giving lectures with the intent of imparting information like this is close to being the worst way to teach. Don’t blame the students for poor note taking, blame the institutions for poor teaching. Students should not be put in such an awful situation (nor should teachers, for that matter). If students have to take notes in your lectures then you are doing it wrong.
That the students are not skillful laptop notetakers. These studies do not imply that laptops are bad for notetaking, any more than giving students violins that they cannot play implies that violins are bad for making music. It ain’t what you do, it’s the way that you do it. If their classes depend on effective notetaking then teachers should be teaching students how to do it. But, of course, most of them probably never learned to do it well themselves (at least using laptops). It becomes a vicious circle.
That laptop and, especially, software designers have a long way to go before their machines disappear into the background like a pencil and paper. This may be inherent in the medium, inasmuch as a) they are vastly more complex toolsets with much more to learn about, and b) interfaces and apps constantly evolve so, as soon as people have figured out one of them, everything changes under their feet. It becomes a vicious cycle.
The extra cognitive load involved in manipulating a laptop app (and stopping the distractions that manufacturers seem intent on providing even if you have the self-discipline to avoid proactively seeking them yourself) can be a hindrance unless you are proficient to the point that it becomes an unconscious behaviour. Few of us are. Tablets are a better bet, for now, though they too are becoming overburdened with unsought complexity and unwanted distractions. I have for a couple of years now been taking most of my notes at conferences etc with an Apple Pencil and an iPad Pro, because I like the notetaking flexibility, the simplicity, the lack of distraction (albeit that I have to actively manage that), and the tactile sensation of drawing and doodling. All of that likely contributes to making it easier to remember stuff that I want to remember. The main downside is that, though I still gain laptop-like benefits of everything being in one place, of digital permanence, and of it being distributed to all my devices, I have, in the process, lost a bit in terms of searchability and reusability. I may regret it in future, too, because graphic formats tend to be less persistent over decades than text. On the bright side, using a tablet, I am not stuck in one app. If I want to remember a paper or URL (which is most of what I normally want to remember other than my own ideas and connections that are sparked by the speaker) I tend to look it up immediately and save it to Pocket so that I can return to it later, and I do still make use of a simple notepad for things I know I will need later. Horses for courses, and you get a lot more of both with a tablet than you do with a pencil and paper. And, of course, I can still use pen and paper if I want a throwaway single-use record – conference programs can be useful for that.
It is not much of a surprise that many apps are designed to be addictive, nor that there is a whole discipline behind making them so, but I was particularly interested in the delightfully named Dopamine Labs’ use of behaviourist techniques (operant conditioning with variable ratio scheduling, I think), and the reasoning behind it. As the article puts it:
One of the most popular techniques … is called variable reinforcement or variable rewards. It involves three steps: a trigger, an action and a reward. A push notification, such as a message that someone has commented on your Facebook photo, is a trigger; opening the app is the action; and the reward could be a “like” or a “share” of a message you posted. These rewards trigger the release of dopamine in the brain, making the user feel happy, possibly even euphoric, Brown says. “Just by controlling when and how you give people that little burst of dopamine, you can get them to go from using [the app] a couple times a week to using it dozens of times a week.”
For well-designed social media and games, the reward is intrinsic to the activity, and perfectly aligned with its function. If the intent is to create addicts – which, in both kinds of system, it probably is – the trick is to design an environment that builds rewards into the algorithms (the rules) of the system, and to keep them coming, ideally making it possible for the rewards to increase in intensity as the user gains greater expertise or experience, but varying ratios or intervals between rewards to keep things interesting. Though this particular example falls out from behaviourist theory, it is also well supported by cognitivist and brain-based understandings of how we think. Drug dealers know this too, as it happens. If you want to keep people using your product, this is how to make your product particularly addictive.
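The statistical trick at the heart of a variable-ratio schedule is easy to see in simulation. Purely as an illustration (this is my own toy sketch, not anything from Dopamine Labs or the article), the following few lines of Python show how rewarding each action with a fixed small probability produces rewards that arrive at a predictable average rate but at wildly unpredictable intervals – the combination that makes the schedule so compelling:

```python
import random

def variable_ratio_rewards(num_actions, mean_ratio, seed=42):
    """Simulate a variable-ratio reinforcement schedule.

    Each action is rewarded with probability 1/mean_ratio, so rewards
    arrive on average once every `mean_ratio` actions, but the gap
    between any two rewards is unpredictable.
    """
    rng = random.Random(seed)
    rewarded = [rng.random() < 1 / mean_ratio for _ in range(num_actions)]
    # Gaps between successive rewards: highly variable, averaging ~mean_ratio
    reward_indices = [i for i, r in enumerate(rewarded) if r]
    gaps = [b - a for a, b in zip(reward_indices, reward_indices[1:])]
    return rewarded, gaps

rewarded, gaps = variable_ratio_rewards(num_actions=10_000, mean_ratio=5)
print(f"reward rate: {sum(rewarded) / len(rewarded):.2f}")  # close to 0.20
print(f"mean gap between rewards: {sum(gaps) / len(gaps):.1f}")  # close to 5
print(f"gap spread: min={min(gaps)}, max={max(gaps)}")  # very wide spread
```

The point of the sketch is the last line: even though the long-run rate is fixed, the next reward might come immediately or might take many times the average wait, so the user can never tell whether the very next action will pay off.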
Lovers of learning experience addiction too. The more we learn, the more there is to learn, the greater the depth and pleasure there is to be found in doing so, and the sporadic ups and downs, especially when faced with challenges we eventually solve, are part of the joy of it. Increasing mastery of anything is a reward in itself that seems quite intrinsic to our make-up, and to that of many other animals. Doing it in a social context is even better, as we share in the learning of others and gain value (social capital, different perspectives, help overcoming problems, etc) in the process. We gain greater control, greater autonomy, greater capability to live our lives as we want to live them, which is very motivating. As long as the reward comes from the activity itself, and the activity is not harmful, this is good news. It makes sense from an evolutionary perspective. We are innately motivated to learn, because learning is an extremely valuable survival characteristic. Learning generally makes dopamine positively drip from our eyeballs.
So what’s the problem with applying the principle in education?
None at all, until you hit something that you do not wish to learn, that is too difficult to master right now, that is too boring, that has no obvious rewards in and of itself. The correct response to this problem is, ideally, to find what there is to love in it. Good teachers can help with that a lot, inspiring, revealing, supporting, demonstrating, and discussing. Other learners can make a huge difference too, supporting, modelling behaviours, filling gaps, and so on. We very often learn things for other people, with other people, or because of other people. Educational systems offer a good substrate for that.
If intrinsic motivation fails to move us, then at least the motivation should be self-determined. Figure 2 shows a very successful and well-validated model of motivation (from Ryan and Deci) that, amongst other things, usefully describes differing degrees of extrinsic motivation (external, introjected, identified, and integrated) that, as they approach the right of the diagram, increasingly approach intrinsic motivation in value, though ‘external regulation’ is rather different, of which more soon. When intrinsic motivation fails, what we need is some kind of internal regulation to push us onwards. It is not a bad idea to find some internally regulated reason that aligns with your beliefs about yourself and your goals, or that at least fits with some purpose or goal that you find valuable. It’s sometimes useful to develop a bit of ‘grit‘ – to be able to do something that you don’t love doing in order to be able to do things that you do love doing, to find reasons for learning stuff that are meaningful and fit with your personal values, even if the immediately presenting activity is not fun in itself. Again, teachers and other people can help a lot with that, by showing ways that they are doing so themselves, by providing support, by engaging, or by being the reason that we do something in the first place. It’s all very social, at its heart.
Figure 2: Forms of motivation
That social element is important, and not clearly represented in the diagram, despite being a critical aspect of intrinsic motivation and mattering a lot for the ‘higher’ identified forms of extrinsic motivation. From an evolutionary perspective, I suspect this ability to learn because of the presence of others accounts for our species’ apparent dominance in our ecosystems. We are not particularly clever as independent individuals but, collectively, we are mighty smart. This could not be the case without having an innate inclination to value, and to gain value from, other people, and for this to have the consequence that others very materially contribute towards our motivation to do something. I guess I should mention that ‘innate’ does not mean ‘pre-programmed’ – this is almost certainly an emergent phenomenon. But it is a big part of who we are.
So far so good. Educational systems are, at least in principle, very effective ways of bringing people together. It all goes horribly wrong, however, when the educators’ response to amotivation (or worse, to motivation to avoid) is to change the rules by throwing in extrinsic rewards and punishments, like grades, say, or applying other controls to the process like forced attendance. Externally regulated extrinsic motivation is extremely dangerous.
Extrinsic rewards and punishments do work, in the sense that they coerce people and other animals into behaving as the giver of the rewards or punishments wishes them to behave. And yes, dopamine is implicated. This immediate effectiveness is what makes them so alluring. But it’s like giving an athlete performance-enhancing but ultimately harmful drugs. Rewards and punishments are also highly addictive and, like other addictions, you need more and more to sustain your addiction because you become inured to the effects, and withdrawal gets more painful the longer you are addicted. This works two ways. Those that get the rewards (the good grades, gold stars, praise, whatever) go on to want more of them, and will do what they need to get them, whether or not there are any further benefits (like, say, learning). Cheating is one popular way to do this. Tactical study, where the student tries to do what will get good grades rather than learn for the love of it, is another. But grading, though extrinsically motivating for the most part, is not always effective: bad grades can achieve the opposite effect, like drugs spiked with something horrible. Those that get grades as punishments often try to avoid them by whatever means they can: dropping out and cheating (a way to bypass the system to get hold of the good stuff) are popular solutions.
The biggest problems, however, come when you take the rewards/punishments away. As a vast body of research has shown and continues to show, this diminishes intrinsic motivation and often eliminates it altogether. If people are not very inclined to do something then you can temporarily boost interest by adding extrinsic rewards or punishments but, when you take them away, people are considerably less inclined to do the thing than they were before you started, even when they originally liked doing it. At a high level this can be explained by the fact that, in giving a reward or punishment, you are drawing attention away from (crowding out) the thing itself and, at the same time, sending a strong signal that the activity is not rewarding enough in itself to be worth doing. But I am not sure that this fully explains the very strong negative effects on motivation that we actually see when rewards or punishments are withdrawn. I idly speculate that part of the reason for this effect might be the dopamine crash. We come to associate an activity with a dopamine boost and, when that boost is no longer forthcoming, it can be very disappointing, like smoking a nicotine-free cigarette (trust me – that’s awful). Cold turkey is not the best state to be in, especially when you associate it with an activity like learning something. It could really put you off a subject. This is just a thought: I know of no evidence that it is true, but it seems a plausible hypothesis that would be worth testing.
Whatever the cause, the effects are terrible. By extrinsically driving our students, we kill the love of the activity itself for those that might have loved it, and permanently prevent those that might have later found it valuable from ever wanting to do it again. Remarkably few survive unscathed, and a disproportionate number of those that do go on to become teachers, and so the cycle continues. I don’t think this is how education should be, and I don’t think it is what most of us in the system intend from it.
Getting out of the loop
The only really effective way to ensure lifelong interest and ongoing love of learning is to find the reward in the activity itself, not in an extrinsic reward. The games and social applications described in this article do that very well but it is important to remember that the intent of the designers of the applications is to increase addiction to them in order to sell or promote the product, and that there is perfect alignment between the reward and the activity itself. This is built into the rule system. In an education system that is driven by marks, we are making grades (not learning) the product, and making those the source of the addiction. This is very different. It has nothing to do with the activity of learning itself: it is extrinsic to the process. It might be even more effective to give our students addictive drugs (higher concentrations equate to higher grades) to increase the incentive. I’m surprised no one has tried this.
But, seriously, what we really need to be doing is to make learning the addiction.
We can reduce the harm to an extent by removing grades from the teaching process and focusing on useful feedback and encouragement instead. If forced to judge, we can use pass/fail grades that are still harmful but not quite as controlling. If we are inexplicably drawn to grading, then we can build systems similar to those of ‘likes’ and badges of social media where, instead of rewards, we give awards – in other words, we remove the expectation of a grade but, where merit is found, sometimes show our approval – and we can make that a social process, so that it is not dominated by a teacher and therefore does not involve exercise of arbitrary power. We can use pedagogies that give teachers and students the chance to model and demonstrate their passion and interest. We can encourage students to reflect on why they are doing it, ideally shared so they can gain inspiration from others. We can help students to integrate work with other things that matter to them. We can help them personalize their own learning so that it is appropriately challenging, not too dull, not too hard, and so that it matches the goals they set for themselves. We can help them to set those goals, and help them to figure out how to attain them. We can make them participants in the grading process, picking outcomes and assessments that match their interests and needs. We can build communities that support and nourish learning through sharing and mutual support. This is just a small sample of ways – there are really quite a few things that we can do, even within a broken system, to make learning addictive, to find ways to make it rewarding in and of itself, even when there is little initial interest to build upon. But we are still stuck in a system that treats grades as rewards, so we are still faced with a furious current pushing against all of our efforts.
Really, we need to change the system, but just a bit: our current educational systems have evolved for pragmatic reasons, mainly because alternatives are too expensive or inconvenient for teachers to manage, not because they are any good for learners. One of the consequences of that is that it is almost impossible to run an institutional course or program without at least some form of grading, even if only at pass/fail level, even if only at the end.
An obvious big part of the solution is to decouple learning and grading. Some more advanced competency-based approaches already do that, as do things like challenge assessments and assessment of prior experience and learning, to some extent project/essay/thesis paths, outcomes-based programs, and even some kinds of professional exams (the latter not in a good way, for the most part, because they tend to drive the process). However, there are risks that universities might turn into an up-market version of driving schools, teaching how to pass the tests and doing just as they are doing now, rather than enabling more expansive learning as they should. To avoid that, it is critical that learners are involved in helping to determine their own personalized outcomes, and very much not to have those learning outcomes ‘personalized’ for them – personal, not personalized, as Alfie Kohn puts it and as Stephen Downes agrees. Grades that learners control, for activities that they choose to undertake, are many times better than grades that someone else imposes. It would also be a good idea either to split teaching activities into assemblable chunks, or into open narratives, without alignment with specific awards or qualifications. Students might build competences from smaller pieces – often from different sources – in order to seek a specific award, or might gain more than one award from a single learning narrative (or perhaps from a couple that overlap). It would be a very good idea to provide ways to mentor and help learners to seek appropriate paths, perhaps through personal tuition, and/or through automated help, and/or through membership of supportive communities (I am a fan of action learning sets for this kind of thing). Such mechanisms might also assist in the preparation of portfolios of evidence that would be an obvious way to manage the formal assessment process. 
I’m not in any way suggesting that we educators (especially for adult learners) should get rid of our accreditation role, merely that we should stop using it to drive our teaching and to enforce compliance in our students.
I think that such relatively small tweaks to how we teach and assess could have massive benefits further upstream. In one fell swoop it would change the focus of educational systems from grades to learning, and change the reward structure from extrinsic to intrinsic. Instead of building fixed-length courses with measurable outcomes that we the teachers control, we could create ecosystems for learning, where cooperation and collaboration would have greater value than competition, where learners are really part of a club, not a cohort, where teachers are perceived as enablers of learning, not as causes, and certainly not as judges. The words ‘learner-centred’ have been much over-used, often being a shorthand for ‘a friendlier way of making students comply with our demands’ or ‘helping students to get better grades’, but I think they fairly accurately denote what this sort of system would entail when taken seriously. Some of my friends and colleagues prefer ‘learning-centred’ and that works for me too. But really this is about being more human and more humane. It’s about breaking the machines that determine what we do and how we do it, and focusing instead on what we – collectively and individually – want to be. We can do this by thinking carefully about what motivates people, as opposed to attempting to motivate them. As soon as our attitude is one of ‘how can we make our students do this?’ rather than ‘how can we help our students to do this?’ we have failed. It’s easy to create addicts of extrinsic motivation. It is hard to make addicts of learning. But, sometimes, the hard way is the right way.
I describe some of what I do as ‘unteaching’, so I find this highly critical article by Miss Smith – The Unlearning Zone – interesting. Miss Smith dislikes the terms ‘unteaching’ and ‘unlearning’ for some well-expressed aesthetic and practical reasons: as she puts it, they are terms “that would not be out of place in a particularly self-satisfied piece of poststructuralist literary analysis circa 1994.” I partially agree. However, she also seems equally unenamoured with what she thinks they stand for. I disagree with her profoundly on this so, as she claims to be new to these terms, here is my attempt to explain a little about what I mean by them, why I think they are a useful part of the educators’ lexicon, and why they are crucially important for learners’ development in general.
First the terms…
Yes, ‘unteaching’ is an ugly neologism and it doesn’t really make sense: that’s part of the appeal of using it – a bit of cognitive dissonance can be useful for drawing attention to something. However, it is totally true that someone who is untaught is just someone who has not (yet) been taught, so ‘unteaching’, seen in that light, is at best pointless, at worst self-contradictory. On the other hand, it does seem to follow pretty naturally from ‘unlearning’ which, contrary to Miss Smith’s assertion, has been in common use for centuries and makes perfect sense. Have you ever had to unlearn bad habits? Me too.
As I understand it, ‘unteach’ is to ‘teach’ as ‘undo’ is to ‘do’. Unteaching is still teaching, just as undoing is still doing, and unlearning is still learning. Perhaps deteaching would be a better term. Whatever we choose to call it, unteaching is concerned with intentionally dismantling the taught belief that teaching is about exerting power over learners to teach, and replacing it with the attitude that teachers are there to empower learners to learn. This is not a particularly radical idea. It is what all teachers should do anyway, I reckon. But it is worth drawing attention to it as a distinct activity because it runs counter to the tide, and the problem it addresses is virtually ubiquitous in education up to, and sometimes at, doctoral level.
Traditional teaching of the sort Miss Smith seems to defend in her critique does a lot more than teach a subject, skill, or way of thinking. It teaches that learning is a chore that is not valuable in and of itself, that learners must be forced to do it for some other purpose, often someone else’s purpose. It teaches that teaching is something done to students by a teacher: at its worst, it teaches that teaching is telling; at best, that teaching involves telling someone to do something. It’s not that (many) teachers deliberately seek these outcomes, but that they are the most likely lessons to be learned, because they are the ones that are repeated most often. The need for unteaching arises because traditional teaching, in addition to (with luck) whatever it intends to teach, teaches some terrible lessons about learning and the role of teaching in that process that must be unlearned.
What is unteaching?
Miss Smith claims that unteaching means “open plan classes, unstructured lessons and bean bags.” That’s not the way I see it at all. Unlike traditional teaching, with its timetables, lesson plans, learning objectives, and uniform tests, unteaching does not have its own technologies and methods, though it does, for sure, tend to be a precursor to connectivist, social constructivist, constructionist, and other more learner-centred ways of thinking about the learning process, which may sometimes be used as part of the process of unteaching itself. Such methods, models, and attitudes emerge fairly naturally when you stop forcing people to do your bidding. However, they are just as capable of being used in a controlling way as the worst of instructivist methods: reports on such interventions that include words like ‘students must…’, ‘I make my students…’ or (less blatantly) ‘students (do X)’ far outnumber all others, and that is the very opposite of unteaching. The specific technologies (including pedagogies as much as open-plan classrooms and beanbags) are not the point. Lectures, drill-and-practice and other instructivist methods are absolutely fine, as long as:
they at least attempt to do the job that students want or need,
they are willingly and deliberately chosen by students,
students are well-informed enough to make those choices, and
students can choose to learn otherwise at any time.
No matter how cool and groovy your problem-based, inquiry-based, active methods might be, if they are imposed on students (especially with the use of threats for non-compliance and rewards for compliance – e.g. qualifications, grades, etc) then it is not unteaching at all: it’s just another way of doing the same kind of teaching that caused the problem in the first place. But if students have control – and ‘control’ includes being able to delegate control to someone else who can scaffold, advise, assist, instruct, direct, and help them when needed, as well as being able to take it back whenever they wish – then such methods can be very useful. So can lectures. To all those educational researchers who object to lectures, I ask whether they have ever found them valuable at a conference (and, if not, why did they go to a conference in the first place?). It’s not the pedagogy of lectures that is at fault. It’s the requirement to attend them and the accompanying expectation that people are going to learn what you are teaching as a result. That is, simply put, empirically wrong. It doesn’t mean that lecturees learn nothing. Far from it. But what you teach and what they learn are different kinds of animal.
Problems with unteaching
It’s really easy to be a bad unteacher – I think that is what Miss Smith is railing against, and it’s a fair criticism. I’m often pretty bad at it myself, though I have had a few successes along the way too. Unteaching and, especially, the pedagogies that result from it, are far more likely to go wrong, and they take a lot more emotional, intellectual, and social effort than traditional teaching because they don’t come pre-assembled. They have no convenient structures and processes in place to do the teaching for you. Traditional teaching ‘works’ even when it doesn’t. If you throw someone into a school system, with all its attendant rewards, punishments, timetables, rules and curricula, and if you give them the odd textbook and assessment along the way, then most students will wind up learning something like what the system intends to teach, no matter how awful the teachers might be. In such a system, students will rarely learn well, rarely persistently, rarely passionately, seldom kindly, and the love of learning will have been squashed out of many of them along the way (survivors often become academics and teachers themselves). But they will mostly pass tests at the end of it. With luck, many might even have gained some useful knowledge or skill, albeit that much of it will be not just wasted and forgotten as easily as a hotel room number when your stay is over, but actively disliked by the end of it. And, of course, they will have learned dependent ways of learning that will serve them poorly outside institutional systems.
To make things far worse, those very structures that assist the traditional teacher (grades, compulsory attendance, fixed outcomes, the concept of failure, etc) are deeply antagonistic to unteaching and are exactly why it is needed in the first place. Unteachers face a huge upstream struggle against an overwhelming tide that threatens to drown passionate learning every inch of the way. The results of unteaching can be hard to defend within a traditional educational system because, by conventional measures, it is often inefficient and time-consuming. But conventional measures only make sense when you are trying to make everyone do the same things, through the same means, towards the same ends, measured by and in order to meet the same criteria. That’s precisely the problem.
The final nail in unteaching’s coffin is that it is applied very unevenly across the educational system, so every freedom it brings is counterbalanced by a mass of reiterated antagonistic lessons from other courses and programs. Every time we unteach someone, two others reteach them. Ideally, we should design educational systems that are friendlier to and more supportive of learner autonomy, and that are (above all else) respectful of learners as human beings. In K-12 teaching there are plenty of models to draw from, including Summerhill, Steiner (AKA Waldorf) schools, Montessori schools, Experiential Learning Schools, etc. Few are even close to perfect, but most are at least no worse than their conventional counterparts, and they start with an attitude of respect for the children rather than a desire to make them conform. That alone makes them worthwhile. There are even some regional systems, such as those found in Finland or (recently) British Columbia, that are heading broadly in the right direction. In universities and colleges there are plenty of working models, from Oxford tutorials to Cambridge supervisions, to traditional theses and projects, to independent study courses and programs, to competency-based programs, to PLAR/APEL portfolios, and much more. It is not a new idea at all. There is copious literature and there are many theoretical models that have stood the test of time, from andragogy to communities of practice, through to teachings from Freire, Illich, Dewey and even (a bit quirkily) Vygotsky. Furthermore, generically and innately, most distance and e-learning unteaches better than its p-learning counterparts because teachers cannot exert the same level of control and students must learn to learn independently.
Sadly, much of it is spoiled by coercing students with grades, thereby providing the worst of both worlds: students are still forced into the terminal behaviours the teacher demands but, without physical copresence, are less empowered by guidance and emotional/social support along the way. Much of my own research and teaching is concerned with inverting that dynamic – increasing empowerment and social support through online learning, while decreasing coercion. I’d like to believe that my institution, Athabasca University, is largely dedicated to the same goal, though we mostly have a way to go before we get it right.
Why it matters
Unteaching is to a large extent concerned with helping learners – including adult learners – to get back to the point at which most children start their school careers – driven by curiosity, personal interest, social value, joy, delight – a state that is schooled out of them over years of being taught dependency. Once misconceptions about what education is for, what teachers do, and how we learn have been removed, teaching can happen much more effectively: supporting, nurturing, inspiring, challenging, responding, etc, but not controlling, not making students do things they are not ready to do for reasons that mean little to them and have even less to do with what they are learning.
However, though it is an immensely valuable terminal outcome, improved learning is perhaps not the biggest reason for unteaching. The real issue is moral: it’s simply the right thing to do. The greatest value is that students are far more likely to have been treated with the respect, care, and honour that all human beings deserve along the way. Not ‘care’ of the sort you would give to a dog when you train it to be obedient and well behaved. Care of the sort that recognizes and valorizes autonomy and diversity, that respects individuals, that cherishes their creativity and passion, that sees learners as ends in themselves, not products or (perish the thought) customers. That’s a lesson worth teaching, a way of being that is worth modelling. If that demands more effort, if it is more fallible, and if it means that fewer students pass your tests, then I’m OK with that. That’s the price of admission to the unlearning zone.