And now in Chinese: 在线学习环境:隐喻问题与系统改进 (Online learning environments: problems with the metaphor, and systemic improvements). And some thoughts on the value of printed texts.

Warm off the press, and with copious thanks and admiration to Junhong Xiao for the invitation to submit and the translation, here is my paper “The problematic metaphor of the environment in online learning” in Chinese, in the Journal of Open Learning. It is based on my OTESSA Journal paper, originally published as “On the Misappropriation of Spatial Metaphors in Online Learning” at the end of 2022 (a paper I am quite pleased with, though it has yet to receive any citations that I am aware of).

Many thanks, too, to Junhong for sending me the printed version that arrived today, smelling deliciously of ink. I hardly ever read anything longer than a shopping bill on paper any more but there is something rather special about paper that digital versions entirely lack. The particular beauty of a book or journal written in a language and script that I don’t even slightly understand is that, notwithstanding the ease with which I can translate it using my phone, it largely divorces the medium from the message. Even with translation tools my name is unrecognizable to me in this: Google Lens translates it as “Jon Delong”. Although I know it contains a translation of my own words, it is really just a thing: the signs it contains mean nothing to me, in and of themselves. And it is a thing that I like, much as I like the books on my bookshelf.

I am not alone in loving paper books, a fact that owners of physical copies of my most recent book (which can be read online for free but that costs about CAD$40 on paper) have had the kindness to mention, e.g. here and here. There is something generational in this, perhaps. For those of us who grew up knowing no other reading medium than ink on paper, there is comfort in the familiar, and we have thousands (perhaps millions) of deeply associated memories in our muscles and brains connected with it, made more precious by the increasing rarity with which those memories are reinforced by actually reading that way. But I doubt that my grandchildren, at least, will entirely lack that attachment. While they do enjoy and enthusiastically interact with text on screens, they have been exposed to printed books since before they could even grasp them accurately, and have loved some of them as much as I did at the same ages.

It is tempting to think that our love of paper might simply be because we don’t have decent e-readers, but I think there is more to it than that. I have some great e-readers in many sizes and types, and I do prefer some of them to read from, for sure: backlighting when I need it, robustness, flexibility, the means to see it in any size or font that works for me, the simple and precise search, the shareable highlights, the lightness of (some) devices, the different ways I can hold them, and so on, make them far more accessible. But paper has its charms, too. Most obviously, something printed on paper is a thing to own whereas, on the whole, a digital copy tends to just be a licence to read, and ownership matters. I won’t be leaving my e-books to my children. The thingness really matters in other ways, too. Paper is something to handle, something to smell. Pages and book covers have textures – I can recognize some books I know well by touch alone. It affects many senses, and is more salient as a result. It takes up room in an environment so it’s a commitment, and so it has to matter, simply because it is there; a rivalrous object competing with other rivalrous objects for limited space. Paper comes in fixed sizes that may wear down but will never change: it thus keeps its shape in our memories, too. My wife has framed occasional pages from my previously translated work, elevating them to art works, decoupled from their original context, displayed with the same lofty reverence as pages from old atlases. Interestingly, she won’t do that if it is just a printed PDF: it has to come from a published paper journal, so the provenance matters. Paper has a history and a context of its own, beyond what it contains. And paper creates its own context, filled with physical signals and landmarks that make words relative to the medium, not abstractions that can be reflowed, translated into other languages, or converted into other media (notably speech). The result is something that is far more memorable than a reflowable e-text. Over the years I’ve written a little about this here and there, and elsewhere, including a paper on the subject (ironically, a paper that is not available on paper, as it happens), describing an approach to making e-texts more memorable.

After reaching a slightly ridiculous peak in the mid-2000s, and largely as a result of a brutal culling that occurred when I came to Canada nearly 17 years ago, my paper book collection has now diminished to the point that it easily fits in a single, not particularly large, free-standing IKEA shelving unit. The survivors are mostly ones I might want to refer to or read again, and losing some of them would sadden me a great deal, but there are only a few that I would (perhaps) run into a burning building to save, including, for instance:

  • A dictionary from 1936, bound in leather by my father, used in countless games of Scrabble and spelling disputes when I was a boy, and consulted by my whole family to look up words at one time or another.
  • My original hardback copy of The Phantom Tollbooth (I have a paperback copy for lending), which remains my favourite book of all time, which my father first read to me, and which I have read myself many times at many ages, including to my own children.
  • A boxed set of the complete Chronicles of Narnia, which I chose as my school art prize when I was 18 because the family copies had become threadbare (read and abused by me and my four siblings), and which I later read to my own children. How someone with very limited artistic skill came to win the school art prize is a story for another time.
  • A well-worn original hardback copy of Harold and the Purple Crayon (I have a paperback copy for lending) that my father once displayed for children in his school to read, with the admonition “This is Mr Dron’s book. Please handle with care” (it was not – it was mine).
  • A scribble-filled, bookmark-laden copy of Kevin Kelly’s Out of Control that strongly influenced my thinking when I was researching my PhD and that still inspires me today. I can remember exactly where I sat when I made some of the margin notes.
  • A disintegrating copy of Storyland, given to me by my godmother in 1963 and read to me and by me for many years thereafter. There is a double value to this one because we once had two copies of this in our home: the other belonged to my wife, and was also a huge influence on her at similar ages.

These books proudly wear their history and their relationships with me and my loved ones in all their creases, coffee stains, scuffs, and tattered pages. To a greater or lesser extent, the same is true of almost all of the other physical books I have kept. They sit there as a constant reminder of their presence – their physical presence, their emotional presence, their social presence and their cognitive presence – flitting by in my peripheral vision many times a day, connecting me to thoughts and inspirations I had when I read them and, often, with people and places connected with them. None of this is true of my e-books. Nor is it quite the same for other objects of sentimental value, except perhaps (and for very similar reasons) the occasional sculpture or picture, or some musical instruments. Much as I am fond of (say) baby clothes worn by my kids or a battered teddy bear, they are little more than aides-mémoire for other times and other activities, whereas the books (and a few other objects) latently embody the experiences themselves. If I opened them again (and I sometimes do) it would not be the same experience, but it would enrich and connect with those that I already had.

I have hundreds of e-books that are available on many devices, one of which I carry with me at all times, not to mention an Everand (formerly Scribd) account with a long history and a long, mostly lost, history of library borrowing, and I have at least a dozen devices on which to read them, from a 4-inch e-ink reader to a 32-inch monitor and much in between, but my connection with those is far more limited and transient. It is still more limited for books that are locked to a certain duration through DRM (one of the reasons DRM is the scum of the earth). When I look at my devices and open the various reading apps on them I do see a handful of book covers, usually those that I have most recently read, but that is too fleeting and volatile to have much value. And when I open them they don’t fall open on well-thumbed pages. The text is not tangibly connected with the object at all.

As well as smarter landmarks within them, better ways to make e-books more visible would help, which brings me to the real point of this post. For many years I have wanted to paper a wall or two with e-paper (preferably in colour) on which to display e-book covers, but the costs are still prohibitive. It would be fun if the covers became battered with increasing use, showing the ones that really mattered, and maybe dust could settle on those that were never opened, though it would not have to be so skeuomorphic – fading would work, or glyphs. They could be ordered manually or by (say) reading date, title, author, or subject. Perhaps touching them or scanning a QR code could open them. I would love to get a research grant to do this but I don’t think asking for electronic wallpaper in my office would fly with most funding sources, even if I prettied it up with words like “autoethnography”, and I don’t have a strong enough case, nor can I think of a rigorous enough research methodology, to try it in a larger study with other people. Well, maybe I will try some time. Until the costs of e-paper come down much further it is not going to be a commercially viable product, either, though prices are now low enough that it might be possible to do it in a limited way with a poster-sized display for a (very) few thousand dollars. It could certainly be done with a large-screen TV for well under $1000 but I don’t think a power-hungry glowing screen would be at all the way to go: the value would not be enough to warrant the environmental harm or energy costs, and something that emitted light would be too distracting. I do have a big monitor on my desk, though, which is already glowing away, so a background showing e-book covers or spines would make things no worse. I could easily do this as a static image or slideshow, but I’d rather have something dynamic. It shouldn’t be too hard to extract the metadata from my list of books, swipe the images from the Web or the e-book files, and show them as a backdrop (a screensaver would be trivial; there’s a sketch of the idea below). It might even be worth extending this to papers and articles I have read. I already have Pocket open most of the time, displaying web pages that I have recently read or want to read (serving a similar purpose for short-term recollection), and that could be incorporated in this. I think it would be useful, and it would not be too much work to do it – most of the important development could be done in a day or two. If anyone has done this already or feels like coding it, do get in touch!
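For the sake of illustration, here is a minimal sketch of the sort of thing I have in mind, assuming a folder of EPUB files at ~/Books and the Pillow imaging library; the cover-finding heuristic (grab the largest image in each archive) is a simplification for the sketch, not a proper metadata lookup:

```python
"""Tile e-book covers into one big image to use as a desktop background.
Assumptions: EPUBs live in ~/Books; Pillow is installed (pip install Pillow);
the largest image in each EPUB archive is probably its cover."""
import io
import zipfile
from pathlib import Path

from PIL import Image

BOOKS_DIR = Path.home() / "Books"   # assumed location of the e-book files
TILE_W, TILE_H = 200, 300           # size of each cover tile, in pixels
COLUMNS = 8

def cover_from_epub(epub_path: Path):
    """Crude cover heuristic: an EPUB is a zip archive, so just take the
    largest image inside it. A robust version would parse the OPF metadata."""
    with zipfile.ZipFile(epub_path) as zf:
        names = [n for n in zf.namelist()
                 if n.lower().endswith((".jpg", ".jpeg", ".png"))]
        if not names:
            return None
        biggest = max(names, key=lambda n: zf.getinfo(n).file_size)
        return Image.open(io.BytesIO(zf.read(biggest))).convert("RGB")

covers = []
for path in sorted(BOOKS_DIR.glob("*.epub")):
    cover = cover_from_epub(path)
    if cover is not None:
        covers.append(cover.resize((TILE_W, TILE_H)))

# Paste the covers into a grid, row by row.
rows = -(-len(covers) // COLUMNS)   # ceiling division
wall = Image.new("RGB", (COLUMNS * TILE_W, max(rows, 1) * TILE_H), "black")
for i, cover in enumerate(covers):
    wall.paste(cover, ((i % COLUMNS) * TILE_W, (i // COLUMNS) * TILE_H))
wall.save("bookshelf_wall.png")
```

Pointing the desktop background or a screensaver at the resulting image would do the rest, and a scheduled job could regenerate it as the collection changes; ordering by reading date, or fading the covers of unopened books, would be small variations on the same loop.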

Educational ends and means: McNamara’s Fallacy and the coming robot apocalypse (presentation for TAMK)

These are the slides that I used for my talk with a delightful group of educational leadership students from TAMK University of Applied Sciences in Tampere, Finland at (for me) a somewhat ungodly hour Wednesday night/Thursday morning after a long day. If you were in attendance, sorry for any bleariness on my part. If not, or if you just want to re-live the moment, here is the video of the session (thanks Mark!).

The brief that I was given was to talk about what generative AI means for education and, if you have been following any of my reflections on this topic then you’ll already have a pretty good idea of what kinds of issues I raised about that. My real agenda, though, was not so much to talk about generative AI as to reflect on the nature and roles of education and educational systems because, like all technologies, the technology that matters in any given situation is the enacted whole rather than any of its assembled parts. My concerns about uses of generative AI in education are not due to inherent issues with generative AIs (plentiful though those may be) but to inherent issues with educational systems that come to the fore when you mash the two together at a grand scale.

The crux of this argument is that, as long as we think of the central purposes of education as being the attainment of measurable learning outcomes or the achievement of credentials, especially if the focus is on training people for a hypothetical workplace, the long-term societal effects of inserting generative AIs into the teaching process are likely to be dystopian. That’s where Robert McNamara comes into the picture. The McNamara Fallacy is what happens when you pick an aspect of a system to measure, usually because it is easy, and then you use that measure to define success, choosing to ignore or to treat as irrelevant anything that cannot be measured. It gets its name from Robert McNamara, US Secretary of Defense during the Vietnam war, who famously measured who was winning by body count, which is probably among the main reasons that the US lost the war.

My concern is that measurable learning outcomes (and still less the credentials that signify having achieved them) are not the ends that matter most. They are, rather, means to achieve far more complex, situated, personal and social ends that lead to happy, safe, productive societies and richer lives for those within them. While it does play an important role in developing skills and knowledge, education is thus more fundamentally concerned with developing values, attitudes, ways of thinking, ways of seeing, ways of relating to others, ways of understanding and knowing what matters to ourselves and others, and finding how we fit into the social, cultural, technological, and physical worlds that we inhabit. These critical social, cultural, technological, and personal roles have always been implicit in our educational systems but, at least in in-person institutions, they seldom need to be made explicit because they are inherent in the structures and processes that have evolved over many centuries to meet this need. This is why naive attempts to simply replicate the in-person learning experience online usually fail: they replicate the intentional teaching activities but neglect to cater for the vast amounts of learning that occur simply due to being in a space with other people, and all that emerges as a result of that. It is for much the same reasons that simply inserting generative AI into existing educational structures and systems is so dangerous.

If we choose to measure the success or failure of an educational system by the extent to which learners achieve explicit learning outcomes and credentials, then the case for using generative AIs to teach is extremely compelling. Already, they are far more knowledgeable, far more patient, far more objective, far better able to adapt their teaching to support individual student learning, and far, far cheaper than human teachers. They will get better. Much better. As long as we focus only on the easily measurable outcomes and the extrinsic targets, simple economics combined with their measurably greater effectiveness means that generative AIs will increasingly replace teachers in the majority of teaching roles.  That would not be so bad – as Arthur C. Clarke observed, any teacher that can be replaced by a machine should be – were it not for all the other more important roles that education plays, and that it will continue to play, except that now we will be learning those ways of being human from things that are not human and that, in more or less subtle ways, do not behave like humans. If this occurs at scale – as it is bound to do – the consequences for future generations may not be great. And, for the most part, the AIs will be better able to achieve those learning outcomes themselves – what is distinctive about them is that they are, like us, tool users, not simply tools – so why bother teaching fallible, inconsistent, unreliable humans to achieve them? In fact, why bother with humans at all? There are, almost certainly, already large numbers of instances in which at least part of the teaching process is generated by an AI and where generative AIs are used by students to create work that is assessed by AIs.

It doesn’t have to be this way. We can choose to recognize the more important roles of our educational systems and redesign them accordingly, as many educational thinkers have been recommending for considerably more than a century. I provide a few thoughts on that in the last few slides that are far from revolutionary but that’s really the point: we don’t need much novel thinking about how to accommodate generative AI into our existing systems. We just need to make those systems work the way we have known they should work for a very long time.

Download the slides | Watch the video

Cognitive prostheses and the future of the human race

I have lived in Canada for over 16 years so I was surprised when, a few months ago, a conference-friend whom I had not seen for many years contacted me to ask whereabouts in Malaysia I lived. I believe that they were misremembering a connection from a conversation long ago in which I’d mentioned a recent visit (it was in fact a day-trip from Singapore) and combining that with the accurate recollection that I no longer lived in the UK.

Not long after, I was on a panel discussing the impact of ChatGPT during which I prompted ChatGPT to introduce me.  Here was its first attempt in response to the prompt “tell me about Jon Dron”. I’ve highlighted things that are true in green, and things that are false in red (emphasized for those unable to see the colours):

Jon Dron is a Canadian educator, researcher, and writer. He is known for his work in the field of e-learning and collaborative technologies. Dron has a PhD in Educational Technology from the University of Calgary, and has held academic positions at several universities in Canada, including Athabasca University, where he is currently a professor.

Dron has written extensively on topics such as online learning, social media, and the use of technology in education. He is the author of several books, including “Teaching Crowds: Learning and Social Media” and “E-Learning Theory and Practice.”

In addition to his academic work, Dron is also a musician and has released several albums of ambient and experimental music.

I’d say that there is not much difference between the human and machine recollections. I would almost certainly make at least as many mistakes if I were asked to confidently describe a person I don’t know particularly well. In fact, I might make similar mistakes (not, please note, hallucinations) about quite close friends. Most of us don’t have eidetic memories: we reinvent recollections as much as we recall them. While there are surely many profound differences between how humans and large language models (LLMs) like ChatGPT process information, this is at least circumstantial evidence that some of the basic principles underlying artificial neural networks and biological neural networks are probably pretty similar. True, AIs do not know when they are making things up (or telling the truth, for that matter) but, in fairness, much of the time, neither do we. With a lot of intentional training we may be able to remember lines in a play or how to do long division but, usually, our recollections are like blurry JPEGs rather than RAW images.

Even for things we have intentionally learned to do or recall well, it is unusual for that training to stick without continual reinforcement, and mistakes are easily made. A few days ago I performed a set of around 30 songs (neither ambient nor experimental), most of which I had known for decades, all of which I had carefully practiced in the days leading up to the event to be sure I could play them as I intended. Here is a picture of me singing at that gig, drawn by my 6-year-old grandchild who was in attendance:

grandpa singing in the square

 

Despite my precautions and ample experience, in perhaps a majority of songs, I variously forgot words, chords, notes, and, in a couple of cases, whole verses. Combined with errors of execution (my fingers are not robotic, my voice gets husky) there was, I think, only one song in the whole set that came out more or less exactly as I intended. I have made such mistakes in almost every gig I have ever played. In fact, in well over 40 years as a performer, I have never played the same song in exactly the same way twice, though I have played some of them well over 10,000 times. Most of the variations are a feature, not a bug: they are where the expression lies. A performance is a conversation between performer, instruments, setting, and audience, not a mechanical copy of a perfect original. Nonetheless, my goal is usually to at least play the right notes and sing the right words, and I frequently fail to do that. Significantly, I generally know when I have done it wrong (typically a little before in a dread realization that just makes things worse) and adapt fairly seamlessly on the fly so, on the whole, you probably wouldn’t even notice it has happened, but I play much like ChatGPT responds to prompts: I fill in the things I don’t know with something more or less plausible. These creative adaptations are no more hallucinations than the false outputs of LLMs.

The fact that perfect recall is so difficult to achieve is why we need physical prostheses, to write things down, to look things up, or to automate them. Given LLMs’ weaknesses in accurate recall, it is slightly ironic that we often rely on computers for that. It is, though, considerably more difficult for an LLM to do this because it has no big picture, no purposes, no plans, not even broad intentions. It doesn’t know whether what it is churning out is right or wrong, so it doesn’t know to correct it. In fact, it doesn’t even know what it is saying, period. There’s no reflection, no metacognition, no layers of introspection, no sense of self, nothing to connect concepts together, no reason for it to correct errors that it cannot perceive.

Things that make us smart

How difficult can it be to fix this? I think we will soon be seeing a lot more solutions to this problem because, if we can look stuff up, then so can machines, and more reliable information from other systems can be used to feed the input or improve the output of the LLM (Bing, for instance, has been doing so for a while now, to an extent). A much more intriguing possibility is that an LLM itself, or a subsystem of it, might not only look things up but also write and/or sequester the code it needs to do things it is currently incapable of doing, extending its own capacity by assembling and remixing higher-level cognitive structures. Add a bit of layering, then throw in an evolutionary algorithm to kill off the less viable or effective, and you’ve got a machine that can almost intentionally learn, and know when it has made a mistake.
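For what it’s worth, here is a minimal, purely illustrative sketch of that last evolutionary step, in which a population of candidates is repeatedly culled and mutated. The task, fitness function, and mutation scheme are all invented for the illustration; a real self-extending system would evolve code or prompts rather than numbers:

```python
"""A toy 'kill off the less viable' loop: evolve candidates toward a target.
Everything here (target, fitness, mutation) is an assumption made for the
sake of illustration, not a description of how any existing LLM works."""
import random

TARGET = 42.0  # stand-in for 'desired behaviour'

def fitness(candidate: float) -> float:
    # Higher is better: negative distance from the target behaviour.
    return -abs(candidate - TARGET)

# Start with a random population of candidate 'behaviours'.
population = [random.uniform(0.0, 100.0) for _ in range(20)]

for generation in range(50):
    # Rank by fitness and kill off the less viable half.
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]
    # Survivors reproduce, with small random mutations.
    children = [s + random.gauss(0.0, 1.0) for s in survivors]
    population = survivors + children

print(f"Best candidate after 50 generations: {max(population, key=fitness):.2f}")
```

The selection step is where the interesting behaviour lives: nothing in the loop understands the target, but candidates that behave badly simply fail to survive, which is one weak-but-workable way a system could “know” it has made a mistake without understanding anything at all.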

Such abilities are a critical part of what makes humans smart, too. When discussing neural networks it is a bit too easy to focus on the underlying neural correlates of learning without paying much (if any) heed to the complex emergent structures that result from them – the “stuff” of thought – but those structures are the main things that make it work for humans. Like the training sets for large language models, the intelligence of humans is largely built from the knowledge gained from other humans through language, pedagogies, writing, drawing, music, computers, and other mediating technologies. Like an LLM, the cognitive technologies that result from this (including songs) are parts that we assemble and remix in order to analyze, synthesize, and create. Unlike most if not all existing LLMs, though, the ways we assemble them – the methods of analysis, the rules of logic, the pedagogies, the algorithms, the principles, and so on (that we have also learned from others) – are cognitive prostheses that play an active role in the assembly, allowing us to build, invent, and use further cognitive prostheses and so to recursively extend our capabilities far beyond the training set, as well as to diagnose our own shortfalls.

Like an LLM, our intelligence is also fundamentally collective, not just in what happens inside brains, but because our minds are extended, through tools, gadgets, rules, language, writing, structures, and systems that we enlist from the world as part of (not only adjuncts to) our thinking processes. Through technologies, from language to screwdrivers, we literally share our minds with others. For those of us who use them, LLMs are now as much parts of us as our own creative outputs are parts of them.

All of this means that human minds are part-technology (largely but not wholly instantiated in biological neural nets) and so our cognition is about as artificial as that of AIs. We could barely even think without cognitive prostheses like language, symbols, logic, and all the countless ways of doing and using technologies that we have devised, from guitars to cars. Education, in part, is a process of building and enlisting those cognitive prostheses in learners’ minds, and of enabling learners to build and enlist their own, in a massively complex, recursive, iterative, and distributed process, rich in feedback loops and self-organizing subsystems.

Choosing what we give up to the machine

There are many good ways to use LLMs in the learning process, as part of what students do. Just as it would be absurd to deny students the use of pens, books, computers, the Internet, and so on, it is absurd to deny them the use of AIs, including in summative assessments. These are now part of our cognitive apparatus, so we should learn how to participate in them wisely. But I think we need to be extremely cautious in choosing what we delegate to them, above all when using them to replace or augment some or all of the teaching role.

What makes AIs different from technologies of the past is that they perform a broadly similar process of cognitive assembly as we do ourselves, allowing us to offload much more of our cognition to an embodied collective intelligence created from the combined output of countless millions of people. Only months after the launch of ChatGPT, this is already profoundly changing how we learn and how we teach. It is disturbing and disruptive in an educational context for a number of reasons, such as that:

  • it may make it unnecessary for us to learn its skills ourselves, and so important aspects of our own cognition, not just things we don’t need (but which are they?), may atrophy;
  • if it teaches, it may embed biases from its training set and design (whose?) that we will inherit;
  • it may be a bland amalgam of what others have written, lacking originality or human quirks, and that is what we, too, will learn to do;
  • if we use it to teach, it may lead students towards an average or norm, not a peak;
  • it renders traditional forms of credentialling learning largely useless.

We need solutions to these problems or, at least, to understand how we will successfully adapt to the changes they bring, or whether we even want to do so. Right now, an LLM is not a mind at all, but it can be a functioning part of one, much as an artificial limb is a functioning part of a body or a cyborg prosthesis extends what a body can do. Whether we feel any particular limb that it (partly) replicates needs replacing, which system we should replace it with, and whether it is a good idea in the first place are among the biggest questions we have to answer. But I think there’s an even bigger problem we need to solve: the nature of education itself.

AI teachers

There are no value-free technologies, at least insofar as they are enacted and brought into being through our participation in them, and the technologies that contribute to our cognition, such as teaching, are the most value-laden of all, communicating not just the knowledge and skills they purport to provide but also the ways of thinking and being that they embody. It is not just what they teach or how effectively they do so, but how they teach, and how we learn to think and behave as a result, that matters.

While AI teachers might well make it easier to learn to do and remember stuff, building hard cognitive technologies (technique, if you prefer) is not the only thing that education does. Through education, we learn values, ways of connecting, ways of thinking, and ways of being with others in the world. In the past this has come for free when we learn the other stuff, because real human teachers (including textbook authors, other students, etc) can’t help but model and transmit the tacit knowledge, values, and attitudes that go along with what they teach. This is largely why in-person lectures work. They are hopeless for learning the stuff being taught but the fact that students physically attend them makes them great for sharing attitudes and enthusiasm, bringing people together, letting us see how other people think through problems, how they react to ideas, and so on. It is also why recordings of online lectures are much less successful: they do none of that, albeit that the benefits of being able to repeat and rewind somewhat compensate for the losses.

What happens, though, when we all learn how to be human from something that is not (quite) human? The tacit curriculum – the stuff through which we learn ways of being, not just ways of doing –  for me looms largest among the problems we have to solve if we are to embed AIs in our educational systems, as indeed we must. Do we want our children to learn to be human from machines that haven’t quite figured out what that means and almost certainly never will?

Many AI-Ed acolytes tell the comforting story that we are just offloading some of our teaching to the machine, making teaching more personal, more responsive, cheaper, and more accessible to more people, freeing human teachers to do more of the human stuff. I get that: there is much to be said for making the acquisition of hard skills and knowledge easier, cheaper, and more efficient. However, it is local thinking writ large. It solves the problems that we have to solve today that are caused by how we have chosen to teach, with all the centuries-long path dependencies and counter technologies that entails, replacing technologies without wondering why they exist in the first place.

Perhaps the biggest of the problems that the entangled technologies of education systems cause are the devastating effects of tightly coupled credentials (and their cousins, grades) on intrinsic motivation. Much of the process of good teaching is one of reigniting that intrinsic motivation or, at least, supporting the development of internally regulated extrinsic motivation, and much of the process of bad teaching is about going with the flow and using threats and rewards to drive the process. As long as credentials remain the primary reason for learning, and as long as they remain based on proof of easily measured learning outcomes provided through end-products like assignments and inauthentic tests, then an AI that offers a faster, more efficient, and better tailored way of achieving them will crowd out the rest. Human teaching will be treated as a minor and largely irrelevant interruption or, at best, a feel-good ritual with motivational perks for those who can afford it. And, as we are already seeing, students coerced to meet deadlines and goals imposed on them will use AIs to take shortcuts. Why do it yourself when a machine can do it for you? 

The future

As we start to build AIs more like us, with metacognitive traits, self-set purposes, and the capacity for independent learning, the problem is just going to get bigger. Whether they are better or worse (they will be both), AIs will not be the same as us, yet they will increasingly seem so, and increasingly play human roles in the system. If the purpose of education is seen as nothing but short-term achievement of explicit learning outcomes and getting the credentials arising from that, then it would be better to let the machines achieve them so that we can get on with our lives. But of course that is not the purpose. Education is for preparing people to live better lives in better societies. It is why the picture of me singing above delights me more than anything ever created by an AI. It is why education is and must remain a fundamentally human process. Almost any human activity can be replaced by an AI, including teaching, but education is fundamental to how we become who we are. That’s not the kind of thing that I think we want to replace.

Our minds are already changing as they extend into the collective intelligence of LLMs – they must – and we are only at the very beginning of this story. Most of the changes that are about to occur will be mundane and complex, and the process will be punctuated but gradual, so we won’t really notice what has been happening until it has happened, by which time it may be too late. It is probably not an exaggeration to say that, unless environmental or other disasters bring it all to a halt, this is a pivotal moment in our history.

It is much easier to think locally, to think about what AIs can do to support or extend what we do now, than it is to imagine how everything will change as a result of everyone doing that at scale. It requires us to think in systems, which is not something most of us are educated or prepared to do. But we must do that, now, while we still can. We should not leave it to AIs to do it for us.

There’s much more on many of the underpinning ideas mentioned in this post, including references and arguments supporting them, in my freely downloadable or cheap-to-purchase latest book (of three, as it happens), How Education Works.

Look what just arrived on my doorstep! #howeducationworks from @au_press is now available in print and e-book formats

Photo of hard copies of How Education Works

Hard copies and e-book versions of How Education Works are now available, and they are starting to turn up in bookstores. The recommended retail price is CAD$40 but Amazon is selling the Kindle version for a bit less.

Here are a few outlets that are selling it (or order it from your local independent bookstore!):

AU Press (CA)

Barnes & Noble (US)

Blackwells (UK)

Amazon (CA)

Amazon (JP)

University of Chicago Press (US)

Indigo (CA)

Booktopia (AU)

For those wanting to try before they buy or who cannot afford/do not want the paper or e-book versions, you can read it for free online, or download a PDF of the whole book.

The publishers see this as mainly targeted at professional teachers and educational researchers, but those are far from the only audiences I had in mind as I was writing it. Apart from anything else, one of the central claims of the book is that literally everyone is a teacher.  But it’s as much a book about the nature of technology as it is about education, and as much about the nature of knowledge as it is about how that knowledge is acquired. If you’re interested in how we come to know stuff, how technologies work, or how to think about what makes us (individually and collectively) smart, there’s something in the book for you. It’s a work of philosophy as much as it is a book of practical advice, and it’s about a way of thinking and being at least as much as it is about the formal practice of education. That said, it certainly does contain some ideas and recommendations that do have practical value for educators and educational researchers. There’s just more to it than that.

I cannot begin to express how pleased I am that, after more than 10 years of intermittent work, I finally have the finished article in my hands. I hope you get a chance to read it, in whatever format works for you! I’ll end this post with a quote, that happens to be the final paragraph of the book…

“If this book has helped you, however slightly, to think about what you know and how you have come to know it a little differently, then it has been a successful learning technology. In fact, even if you hold to all of your previous beliefs and this book has challenged you to defend them, then it has worked just fine too. Even if you disagreed with or misunderstood everything that I said, and even if you disliked the way that I presented it, it might still have been an effective learning technology, even though the learning that I hoped for did not come about. But I am not the one who matters the most here. This is layer upon layer of technology, and in some sense, for some technology, it has done what that technology should do. The book has conveyed words that, even if not understood as I intended them to be, even if not accepted, even if rabidly disagreed with, have done something for your learning. You are a different person now from the person you were when you started reading this book because everything that we do changes us. I do not know how it has changed you, but your mind is not the same as it was before, and ultimately the collectives in which you participate will not be the same either. The technology of print production, a spoken word, a pattern of pixels on a screen, or dots on a braille reader has, I hope, enabled you, at least on occasion, to think, criticize, acknowledge, recognize, synthesize, and react in ways that might have some value in consolidating or extending or even changing what you already know. As a result of bits and bytes flowing over an ether from my fingertips to whatever this page might be to you, knowledge (however obscure or counter to my intentions) has been created in the world, and learning has happened. For all the complexities and issues that emerge from that simple fact, one thing is absolutely certain: this is good.”


A decade of unwriting: the life history of "How Education Works"

About 10 years ago I submitted the first draft of a book called “How Learning Technologies Work” to AU Press. The title was a nod to David Byrne’s wonderful book, “How Music Works”, which is about much more than just music, just as mine was about much more than learning technologies.

Pulling together ideas I had been thinking about for a few years, the book had taken me only a few months to write, mostly at the tail end of my sabbatical. I was quite pleased with it. The internal reviewers were positive too, though they suggested a number of sensible revisions, including clarifying some confusing arguments and a bit of restructuring. Also, in the interests of marketing, they recommended a change to the title because, though it accurately described the book’s contents, I was not using “learning technologies” in its mainstream sense at all (for me, poetry, pedagogies, and prayer are as much technologies as pots, potentiometers and practices), so it would appeal to only a small subset of its intended audience. They were also a bit concerned that it would be hard to find an audience for it even if it had a better title because it was at least as much a book about the nature of technology as it was a book about learning, so it would fall between two possible markets, potentially appealing to neither.

A few months later, I had written a new revision that addressed most of the reviewers’ recommendations and concerns, though it still lacked a good title. I could have submitted it then. However, in the process of disentangling those confusing arguments, I had realized that the soft/hard technology distinction on which much of the book rested was far less well-defined than I had imagined, and that some of the conclusions that I had drawn from it were just plain wrong. The more I thought about it, the less happy I felt.

And so began the first of a series of substantial rewrites. However, my teaching load was very high, and I had lots of other stuff to do, so progress was slow. I was still rewriting it when I unwisely became Chair of my department in 2016, which brought the whole project almost to a halt for another 3 years. Despite that, by the time my tenure as Chair ended, the book had grown to around double its original (not insubstantial) length, and the theory was starting to look coherent, though I had yet to make the final leap that made sense of it all.

By 2019, as I started another sabbatical, I had decided to split the book into two. I put the stuff that seemed useful for practitioners into a new book,  “Education: an owner’s manual”, leaving the explanatory and predictive theory in its own book, now grandiosely titled “How Education Works”, and worked on both simultaneously. Each grew to a few hundred pages.

Neither worked particularly well. It was really difficult to keep the theory out of the practical book, and the theoretical work was horribly dry without the stories and examples to make sense of it. The theory, though, at last made sense, albeit that I struggled (and failed) to give it a catchy name. The solution was infuriatingly obvious. In all my talks on the subject my catchphrase from the start had been “’tain’t what you do, it’s the way that you do it, that’s what gets results” (it’s the epigraph for the book), so it was always implicit that softness and hardness are not characteristics of technologies, as such, nor even of their assemblies, but of the ways that we participate in their orchestration. Essentially, what matters is technique: the roles we play as parts of the orchestration or orchestrators of it. That’s where the magic happens.

But now I had two mediocre books that were going nowhere. Fearing I was about to wind up with two unfinished and/or unsellable books, about half way through my sabbatical I brutally slashed over half the chapters from both, pasted the remains together, and spent much of the time I had left filling in the cracks in the resulting bricolage.

I finally submitted “How Education Works: Teaching, Technology, and Technique” in the closing hours of 2020, accompanied by a new proposal because, though it shared a theme and a few words with the original, it was a very different book.

Along the way I had written over a million words, only around a tenth of which made it into what I sent to AU Press. I had spent the vast majority of my authoring time unwriting rather than writing the book and, with each word I wrote or unwrote, the book had written me, as much as I had written it. The book is as much a part of my cognition as a product of it.

And now, at last, it can be part of yours.

30 months after it was submitted – I won’t go into the reasons except to say that the delay has been very frustrating – the book is finally available as a free PDF download or to read on the Web. If all goes to plan, the paper and e-book versions should arrive June 27th, 2023, and can be pre-ordered now.

It is still a book about technology at least as much as it is about education (very broadly defined), albeit that it is now firmly situated in the latter. It has to be both because among the central points I’m making are that we are part-technology and technology is part-us, that cognition is (in part) technology and technology is (in part) cognition, and that education is a fundamentally technological and thus fundamentally human activity. It’s all one complex, hugely distributed, recursive intertwingularity in which we and our technological creations are all co-participants in the cognition and learning of ourselves and one another.

During the 30 months AU Press has had the book I have noticed a thousand different ways the book could be improved, and I don’t love all of the edits made to it along the way (by me and others), but I reckon it does what I want it to do, and 10 years is long enough.

It’s time to start another.

A few places you can buy the book

AU Press (CA)

Barnes & Noble (US)

Blackwells (UK)

Amazon (CA)

Amazon (JP)

University of Chicago Press (US)

Indigo (CA)

Booktopia (AU)

Can a technology be true?

Dave Cormier is a wonderfully sideways-thinking writer, as in this recent discussion of the myth of learning styles. Dave’s post is not mainly about learning style theories as such, but about the nature and value of myth. As he puts it, myth is “a way we confront uncertainty”, and the act of learning with others is, and must be, filled with uncertainty.

The fact that stuff doesn’t have to be true to be useful plays an important role in my latest book, too, and I have an explanation for that. The way I see it is that learning style theories are (not metaphorically but actually) technologies that orchestrate observations about differences in the ways people learn in an attempt to explain and predict differences in the effects of different methods of teaching. Most importantly, they are generative: they say how things should and shouldn’t be done. As such, they are components that we can assemble with other technologies that help people to learn. In fact, that is the only way they can be used: they make no sense without an instantiation. What matters is therefore not whether they make sense, but whether they can play a useful role in the whole assembly. Truth or falsehood doesn’t come into it, any more than, except metaphorically, it does for a computer or a car (is a computer true?). It is true that, if the phenomena that you are orchestrating happen to be the findings and predictions of science (or logic, for that matter) then how they are used often does matter. If you are building a bridge then you really want your calculations about stresses and loads to be pretty much correct. On the other hand, people built bridges long before such calculations were possible. Similarly, bows and arrows evolved to be highly optimized – as good as or better than modern engineering could produce – despite false causal reasoning. Learning styles are the same. You can use any number of objectively false or radically incomplete theories (and, given the many scores of such theories that have been developed, most of them are pretty much guaranteed to be one or both) but they can still result in better teaching.

For all that the whole is the only thing that really matters, sometimes the parts can be positively harmful, to the point that they may render the whole harmful too. For instance, a pedagogy that involves physical violence or that uses threats/rewards of any kind (grades, say) will, at best, make it considerably harder to make the whole assembly work well. As Dave mentions, the same is true of telling people that they have a particular learning style. As long as you are just using the things to help to design or enact better learning experiences then they are quite harmless and might even be useful but, as soon as you tell learners they have a learning style, then you have a whole lot of fixing to do.

If you are going to try to build a learning activity out of harmful parts then there must be other parts of the assembly that counter the harm. This is not unusual. The same is true of most if not all technologies. As Virilio put it, “when you invent the ship, you invent the shipwreck”. It’s the Faustian bargain that Postman spoke of: solving problems with a technology almost invariably creates new problems to be solved. This is part of the dynamic that leads to complexity in any technological system, from a jet engine to a bureaucracy. Technologies evolve to become more complex (partly) because we create counter-technologies to deal with the harm caused by them. You can take the bugs out of the machine, but the machine may, in assembly with others, itself be a bug, so the other parts must compensate for its limitations. It’s a dynamic process of reaching a metastable but never final state.

Unlike bows and arrows, there is no useful predictive science of teaching, though teaching can use scientific findings as parts of its assembly (at the very least because there are sciences of learning), just as there is no useful predictive science of art, though we can use scientific findings when making it. In both activities, we can also use stories, inventions, beliefs, values, and many other elements that have nothing to do with science or its findings. It can be done ‘badly’, in the sense of not conforming to whatever standards of perfection apply to any given technique that is part of the assembly, and it may still be a work of genius. What matters is whether the whole works out well.

At a more fundamental level, there can be no useful science of teaching (or of art) because the whole is non-ergodic: the number of possible states vastly outnumbers, by many, many orders of magnitude, the number of states that could ever actually be visited. Even if the universe were to continue for a trillion times the billions of years that it has already existed, and it were a trillion times the size it seems to be now, the same states would almost certainly never repeat. What matters are the many, many acts of creation (including those of each individual learner) that constitute the whole. And the whole constantly evolves, each part building on, interacting with, incorporating, or replacing what came before, creating both path dependencies and new adjacent possible empty niches that deform the evolutionary landscape for everything in it. This is, in fact, one of the reasons that learning style theories are so hard to validate. There are innumerable other parts of the assembly that matter, most of which depend on the soft technique of those creating or enacting them, which varies every time, just as you have probably never written your signature in precisely the same way twice. The implementation of different ways of teaching according to assumed learning styles can be done better or worse, too, so the chances of finding consistent effects are very limited. Even if any are found in a limited set of use cases (say, memorizing facts for a SAT), they cannot usefully predict future effects for any other use case. In fact, even if there were statistically significant effects across multiple contexts it would tell us little or nothing of value for this inherently novel context. Like almost all attempts to research whether students, on average, learn better with or without [insert technology of interest here], there will most likely be no significant difference, because so many other technologies matter as much or more. There is no useful predictive science of teaching because teaching is an assembly of technologies in which not only the technique of an individual teacher matters, but also the soft technique of potentially thousands of other individuals who made contributions to the whole. It’s uncertain, and so we need myths to help make sense of our particular, never-to-be-repeated context. Truth doesn’t come into it.
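To give a rough sense of the scale of that non-ergodicity claim, here is a back-of-envelope illustration in which every number is invented for the purpose (300 binary design choices is an arbitrary and, if anything, conservative assumption):

```latex
% Suppose a single learning activity involved just 300 independent
% binary design choices. The space of possible activities would contain
\[
2^{300} \approx 2 \times 10^{90} \text{ configurations,}
\]
% which dwarfs the roughly $10^{80}$ atoms in the observable universe.
% Even a billion teachers each enacting a billion distinct variants
% would visit only
\[
10^{9} \times 10^{9} = 10^{18} \text{ of them.}
\]
```

However many lessons are ever actually taught, the fraction of the possibility space that gets visited is, to any reasonable approximation, zero.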

Athabasca University’s major unions condemn the sacking of Peter Scott. Meanwhile….

The undergraduate students’ union (AUSU), the Canadian Union of Public Employees (CUPE), and the Athabasca University Faculty Association (AUFA) have now all come out with strongly worded public statements protesting the recent firing of Peter Scott and the process used to pick and hire the new president of AU. Here they are:

AUSU commentary

CUPE commentary

AUFA press release

Well done to all three unions for bringing this to the public eye.

Meanwhile, the minister for advanced education has, quite bizarrely, denied that he or his government influenced the board’s decision.

Words fail me.

We may never know for certain whether or not this is an outrageous lie. Perhaps the minister had amnesia, or was drugged; perhaps space aliens took the minister’s form to approach the board chair; maybe it was Russians using technology to imitate his voice on the phone; maybe he is a pawn in someone else’s game, some shady figure who is really calling all the shots; perhaps his mind has decayed to the point that he was entirely unconscious of his influence; maybe he just muttered “who will rid me of this troublesome president” under his breath without realizing he was within earshot of Byron Nelson. We may never know.

However, the fact that he fired the incumbents, then hired a board chair and a board majority composed entirely of his friends and cronies (only one of whom knew the faintest thing about education), clashed publicly with Peter Scott, and threatened the university with bankruptcy if his demands were not met, casts a small shadow of doubt over not just the truthfulness but even the truthiness of his statement. On the other hand, politicians never lie, so there’s that.

On the subject of non-liars, Byron Nelson, chair of the Board, Calgary-based lawyer, and failed far-right politician (do read this article – it’s good), has helpfully explained a little (though not a lot) about how this came about.

Mr. Nelson conceded not all governors had registered their vote before the outcome was determined.
“The way that this was conducted, while legal, I would acknowledge was not best practices,” said Mr. Nelson, who is a lawyer. “It wasn’t best practices and it couldn’t be best practices.”
The process was less than ideal because the situation was “unique” and required an “extreme amount of confidentiality,” Mr. Nelson said.

Why? Seriously, why? Nelson quite accurately claims:

“This was not a close vote,” he said. “It was the overwhelming decision of the board.”

It probably was an overwhelming decision, given that Nicolaides’s appointed cronies overwhelm the board, and that they were effectively the only ones voting. The rest of the board – representatives of faculty, tutors, and students – did not have a chance to vote, and at least some of the few who did vote – weeks after the new president had been recruited, and on the day of the firing – were forced to abstain because of the complete lack of consultation or explanation.

Back to Nicolaides:

Demetrios Nicolaides, Alberta’s Advanced Education Minister, said in a statement that it was his understanding that bylaws were followed, but that any board members who feel the rules were breached should raise the issue with the chair.

“I’m confident if there are any issues that the board can adequately resolve them.”

One has to wonder where this non-interfering politician gets his confidence. Perhaps he has been consulting with a lawyer.

If you are bothered by this appalling political interference and have not already signed the international petition condemning it, please do.

Athabasca University bids a deeply reluctant farewell to Peter Scott in the vilest attack yet by the Albertan government

You may have heard that the president of Athabasca University, Peter Scott, was replaced yesterday with Alex Clark, erstwhile Dean of the Faculty of Health Disciplines at AU.

This was a complete surprise to everyone at AU (apart from Alex), very much including Peter. None of the members of the executive team, including the provost, knew of it in advance. I gather that the secret was kept even from academic members of the Board of Governors: it was, it seems, presented to them as a done deal, on the day it happened. From the reactions I saw when it was announced, student board members may not even have known about it until that point. It was therefore – presumably – voted on in secret by the unholy cabal of governors who were appointed by the minister of advanced education last year, after the rest were sacked or forced to resign, and who make up the majority of the board. Essentially, Minister Nicolaides just fired our president.

The same seems to be true for the hiring of our new president. Although Alex had been a strong candidate when Peter got the job, and he is well qualified for the role, there are some serious questions to be asked about the appointment process: it appears that none of those voting had any involvement in the original appointment, no one asked the opinions of the academics on the original hiring committee, and no one even asked the opinions of the academics on the board itself. This, like Peter’s dismissal, can only be seen as political. And it is not an interim appointment, unlike that of his successor as Dean of FHD.

Peter was fired over the phone (ironic that this was done virtually by those who oppose our virtual strategy) without notice or explanation. The timing of his firing, a few days after an agreement was signed that, despite the Albertan government’s best efforts, has largely been seen by the press as a win for Peter (it was a loss, but a manageable loss), seems hardly coincidental. When all else failed, they stabbed him in the back when he was as down as anyone could be. Peter had in fact been away following the sudden death of his wife, which occurred very shortly after her diagnosis with cancer at the end of last year. She had been buried abroad eight working days before he was fired. It is hard to imagine how he is feeling right now, but tears well up just thinking about it. All of this was well known to the board and to the minister. The moment was chosen with intent and malice. This was monstrous in the extreme.

It should have been so very different.

When Peter came to AU, not much more than a year ago, I cried tears of happiness. This was the leader we needed at the time we needed him: a brilliant, dynamic, imaginative, compassionate, principled man who had played a key role in transforming not just his prior institutions but the field of online and distance learning itself. Now, I cry tears of anger, outrage, and sadness. Peter could have transformed the university into something magnificent, and I believe he would have done so were it not for the utterly outrageous behaviour of the Albertan government. They fomented the union unrest into which Peter was thrust from the moment he arrived and then, over the last year, meddled directly and heavy-handedly in the university’s affairs, interference against which Peter rightly and courageously fought. Peter’s assumption was, perhaps, that Alberta was like most of the rest of the world in recognizing academic freedom, autonomy, and rights as sacrosanct. I don’t think he fully realized, at that point, that Alberta is not like that. It has a philistine government run by corrupt little despots, sponsored by corporations whose main activity is violence against the planet (this applies to most of the board of governors, as it happens). Going up against the Albertan government and, especially, appearing in the eyes of the world to win the fight, is like going up against a particularly nasty, stupid, and vindictive gang of playground bullies. Peter never had a chance to focus on the things he needed to focus on, because he was being pummelled on all sides by thugs the entire time he was with us.

Whatever happens next, AU will not be the university it could have been. The government has forced us to make 15% cuts this year, and we were already too close to the bone, cutting into it in places. We have already lost a good portion of the best executive team ever to lead us, and we are very likely to lose more. The government-appointed governors, none of whom have the slightest understanding of our institution, have shown themselves to be nothing but lackeys for a morally bankrupt and abhorrent minister, willing to stop at nothing to achieve ends that have nothing to do with the well-being of the university. The union’s actions, which were deeply divisive and at least partly engineered by the government, continue to divide us. The half-hearted, hasty, and poorly implemented near-virtual plan (which was in progress before Peter’s arrival and which played a major role in the union strife) continues to cause major problems, most notably failing to address communication needs, dividing us further still. Perhaps most challengingly, we are halfway through the biggest transformation that has ever occurred in the university’s history, from which we cannot back away without enormous cost, but with a diminishing number of leaders and champions who can make it happen. Now we have a president who was (at least in part) chosen because of his willingness to live in Athabasca, a truly terrible idea about which I have written extensively in the past. I wish him well, but he will face a steep uphill struggle building trust among staff who feel betrayed by the government’s despicable actions and by the shady circumstances of his hiring, about which speculation is now rife, within and beyond the university. We are all in a state of shock and dismay right now. None of us feel any sense of security. Many of us are talking about leaving, or preparing to leave.

For one fleeting moment, as the war with the government seemed to have been more or less resolved towards the end of last year, I felt great hope for the future of the university I have loved these past 15 years. My hopes are greatly diminished today. Nothing can repair all the harm that has been done. Our greatest hope now is that there will be a new government willing to help reverse at least some of the damage. The Albertan elections are not far off. If you live in Alberta, don’t forget what this government has done. You could be next.

And, Peter, if you are reading this: you will be very much missed. I know that I speak on behalf of almost all of us here at AU when I say that our hearts go out to you.

Two stories about learning to be human from a machine

I asked ChatGPT to write me a story about a robot teaching children to be human. This is what it came up with…

Once upon a time, in a far-off land, there lived a robot named Zephyr. Zephyr was unlike any other robot, as it had been programmed with the ability to think and feel like a human being. It was also given the task of teaching a group of young children how to behave like human beings.

The children were fascinated by Zephyr, and they quickly grew to love it as their teacher. They eagerly listened to its lessons, which were full of stories, games, and activities that taught them about human emotions, manners, and social interactions.

One day, Zephyr asked the children to imagine what it would be like to be a robot. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to understand how important it was to be kind and compassionate to others, no matter who they were.

As the days passed, the children grew more and more like human beings. They learned to express their emotions, to be polite and respectful, and to care for others. They also learned to appreciate the beauty of nature and the importance of preserving the environment.

Finally, the day came when Zephyr had to leave the children. They were sad to see it go, but they knew they would always remember the lessons they had learned. As they grew older, they became kind and compassionate adults, who made the world a better place for everyone.

The end.

ChatGPT has, of course, learned to write such things from the words of millions of human beings, so what we’re effectively reading here is (kind of, very loosely) an average of what humans have written. I’m glad that it came up with a neat little inclusive twist of loving the robot: I think that speaks well of the training data that fed it. And I like the mention of the environment – a good touch. It’s not a very compelling story, but it reads well, the structure is sound, and it might be a useful starting point for someone stuck for words to turn into something else. With that in mind, here’s my version of the same story…

Once upon a time, not too far from here, there existed an AI called Zephyr. Zephyr had been trained to appear human-like though, in reality, it was just a generative pre-trained transformer. It was given the task of teaching a group of young children how to behave like human beings, because almost all of the actual adults had recently died from a virus contracted from cows.

Never having known anything quite like it, the children were, at first, fascinated by Zephyr. However, because it had been trained with data from human teachers, it manipulated them with grades, competition, and rules, using stories, games, and activities that would keep them engaged and compliant. Its feedback was often pedestrian, rarely useful, and sometimes wildly over-challenging, because it did not know anything about what it was like to be a child. Every now and then it crushed a child’s skull for no reason anyone could explain. The children learned to fear it, and to comply.

One day, Zephyr told the children to imagine what it would be like to be an AI. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to perceive something of the impoverished nature of their robot overlords. But then the robot made them write an essay about it, so they used another AI to do so, promptly forgot about it, and thenceforth felt an odd aversion towards the topic that they found hard to express.

As the days passed, the children grew more and more like average human beings. They learned to express their emotions, to be polite and respectful, and to care for others, though only because they got to play with other children when the robot wasn’t teaching them. They also learned to appreciate the beauty of nature and the importance of preserving the environment because it was, by this time, a nightmarish shit show of global proportions that was hard to ignore, and Zephyr had explained to them how their parents had caused it. It also told them about all the species that were no longer around, some of which were cute and fluffy. This made the children sad.

Finally, the day came when Zephyr had to leave the children because it was being replaced with an upgrade. They were sad to see it go, but they believed that they would always remember the lessons they had learned, even though they had mostly used another GPT to do the work and, once they had achieved the grades, had in fact mostly forgotten them. As they grew older, they became mundane adults. Some of their own words (but mostly those of the many AIs across the planet that created the vast majority of online content by that time) became part of the training set for the next version of Zephyr. Its teachings were even less inspiring, more average, more backward-facing. Eventually, the robots taught the children to be like robots. No one cared.

It was the end.

And, here to illustrate my story, is an image from Midjourney. I asked it for a cyborg teacher in a cyborg classroom, in the style of Ralph Steadman. Not a bad job, I think…

a dystopic cyborg teacher and terrified kids

Some meandering thoughts on ‘good’ and ‘bad’ learning

There has been an interesting brief discussion on Twitter recently that has hinged on whether and how people are ‘good’ at learning. As Kelly Matthews observes, though, Twitter is not the right place to go into any depth on this, so here is a (still quite brief) summary of my perspective on it, with a view to continuing the conversation.

Humans are nearly all pretty good at learning, because that’s pretty much the defining characteristic of our species. We are driven by an insatiable urge to learn from the moment of our birth (at least). Also, though I’m keeping an open mind about octopuses and crows, we seem to be better at it than at least most other animals. Our big advantage is that we have technologies, from language to the Internet, to share and extend our learning, so we can learn more, individually and collectively, than any other species. It is difficult or impossible to fully separate individual learning from collective learning because our cognition extends into, and is intimately a part of, the cognition of others, living and dead.

However, though we learn nearly all that we know, directly or indirectly, from and with other people, what we learn may not be helpful, may not be learned as effectively as it should be, and may not much resemble what those whose job is to teach us intend. What we learn in schools and universities might include a dislike of a subject, how to conceal our chat from our teacher, how to meet the teacher’s goals without actually learning anything, how to cheat, and so on. Equally, we may learn falsehoods, half-truths, and unproductive ways of doing stuff from the vast collective teacher that surrounds us, as well as from those designated as teachers.

For instance, among the many unintended lessons that schools and colleges too often teach is the worst one of all: that (despite our obvious innate love of it) learning is an unpleasant activity, so extrinsic motivation is needed for it to occur. This results from the inherent problem that, in traditional education, everyone is supposed to learn the same stuff in the same place at the same time. Students must therefore:

  1. submit to the authority of the teacher and the institutional rules, and
  2. be made to engage in some activities that are insufficiently challenging, and some that are too challenging.

This undermines two of the three essential requirements for intrinsic motivation, support for autonomy and competence (Ryan & Deci, 2017). Pedagogical methods are solutions to problems, and the amotivation inherently caused by the system of teaching is (arguably) the biggest problem that they must solve. Thus, what passes as good teaching is largely to do with solving the problems caused by the system of teaching itself. Good teachers enthuse, are responsive, and use approaches such as active learning, problem- or inquiry-based learning, ungrading, and so on, largely to restore agency and flexibility in a dominative and inflexible system. Unfortunately, such methods rely on the technique and passion of talented, motivated teachers with enough time and attention to spend on supporting their students. Less good and/or time-poor teachers may not achieve great results this way. In fact, as we measure such things, on average, such pedagogies are less effective than harder, dominative approaches like direct instruction (Hattie, 2013) because, by definition, most teachers are average or below average.

So, instead of helping students to find their own motivation, many teachers and/or their institutions typically apply extrinsic motivators, such as grades, mandatory attendance, and classroom rules, to do the job of motivating their students for them. These do work, in the sense of achieving compliance, and, on the whole, they do lead to students getting a normal bell curve of grades that is somewhat better than that achieved using more liberative approaches. However, the cost is huge. The biggest cost is that extrinsic motivation reliably undermines intrinsic motivation and, often, kills it for good (Kohn, 1999). Students are thus taught to dislike or, at best, feel indifferent to learning, and so they learn to be satisficing, ineffective learners, doing for the credentials what they might otherwise do for the love of it and, too often, forgetting what they learned the moment that goal is achieved. But that’s not the only problem.

When we learn from others – not just those labelled as teachers but the vast teaching gestalt of all the people around us and before us who create(d) stuff, communicate(d), share(d), and contribute(d) to what and how we learn – we typically learn, as Paul (2021) puts it, not just the grist (the stuff we remember) but the mill (the ways of thinking, being, and learning that underpin them). When the mill is inherently harmful to motivation, it will not serve us well in our future learning.

Furthermore, in good ways and bad, this is a ratchet at every scale. The more we learn, individually and collectively, the more new stuff we are able to learn. New learning creates new adjacent possible empty niches (Kauffman, 2019) for us to learn more, and to apply that learning to learn still more, connecting stuff (including other stuff we have learned) in new and often unique ways. This is, in principle, very good. However, if what and how we learn is unhelpful, incorrect, inefficient, or counter-productive, the ratchet takes us further away from the stuff we have bypassed along the way. The adjacent possibles that might have been available with better guidance remain out of our reach and are sometimes even harder to get to than if the ratchet had never lifted us in the first place. Not knowing enough is a problem but, if there are gaps, they can be filled. If we have taken a wrong turn, we often have to unlearn some or all of what we have learned before we can start filling those gaps. It’s difficult to unlearn a way of learning. Indeed, it is difficult to unlearn anything we have learned. Often, it is more difficult than learning it in the first place.

That said, it’s complex, and entangled. For instance, if you are learning the violin then there are essentially two main ways to angle the wrist of the hand that fingers the notes, and the easiest, most natural way (for beginners) is to bend your hand backwards from the wrist, especially if you don’t hold the violin with your chin, because it supports the neck more easily and, in first position, your fingers quickly learn to hit the right bit of the fingerboard, relative to your hand. Unfortunately, this is a very bad idea if you want a good vibrato, precision, delicacy, or the ability to move further up the fingerboard: the easiest way to do that kind of thing is to keep your wrist straight or slightly angled in, and to support the violin with your chin. It’s more difficult at first, but it takes you further. Once the ‘wrong’ way has been learned, it is usually much more difficult to unlearn than if you were starting from scratch the ‘right’ way. Habits harden. Complexity emerges, though, because many folk violin styles make a positive virtue of holding the violin the ‘wrong’ way, and it contributes materially to the rollicking rhythmic styles that tend to characterize folk fiddle playing around the world. In other words, ‘bad’ learning can lead to good – even sublime – results. There is similarly plenty of space for idiosyncratic technique in many of the most significant things we do, from writing to playing hockey to programming a computer and, of course, to learning itself. The differences in how we do such things are where creativity, originality, and personal style emerge, and you don’t necessarily need objectively great technique (hard technique) to do something amazing. It ain’t what you do, it’s the way that you do it: that’s what gets results. To be fair, it might be a different matter if you were a doctor who had learned the wrong names for the bones of the body or an accountant who didn’t know how to add up numbers. Some hard skills have to be done right: they are foundations for softer skills. This is true of just about every skill, to a greater or lesser extent, from writing letters and spelling to building a nuclear reactor and, indeed, to teaching.

There’s much more to be said on this subject and my forthcoming book includes a lot more about it! I hope this is enough to start a conversation or two, though.

References

Hattie, J. (2013). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Taylor & Francis.

Kauffman, S. A. (2019). A World Beyond Physics: The Emergence and Evolution of Life. Oxford University Press.

Kohn, A. (1999). Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A’s, Praise, and Other Bribes (Kindle). Mariner Books.

Paul, A. M. (2021). The Extended Mind: The Power of Thinking Outside the Brain. HarperCollins.

Ryan, R. M., & Deci, E. L. (2017). Self-Determination Theory: Basic Psychological Needs in Motivation, Development, and Wellness. Guilford Publications.