▶ I got air: interview with Terry Greene

Since 2018, Terry Greene has been producing a wonderful series of podcast interviews with open and online learning researchers and practitioners called Getting Air. Prompted by the publication of How Education Works (Terry is also responsible for the musical version of the book, so I think he likes it), this week’s episode features an interview with me.

I probably should have been better prepared. Terry asked some probing, well-informed, and sometimes disarming questions, most of which led to me rambling more than I might have done if I’d thought about them in advance. It was fun, though, drifting through a broad range of topics from the nature of technology to music to the perils of generative AI (of course).

I hope that Terry does call his PhD dissertation “Getting rid of instructional designers”.

Educational ends and means: McNamara’s Fallacy and the coming robot apocalypse (presentation for TAMK)

 

These are the slides that I used for my talk with a delightful group of educational leadership students from TAMK University of Applied Sciences in Tampere, Finland, at (for me) a somewhat ungodly hour on Wednesday night/Thursday morning after a long day. If you were in attendance, sorry for any bleariness on my part. If not, or if you just want to re-live the moment, here is the video of the session (thanks, Mark!).

man shaking hands with a robot

The brief that I was given was to talk about what generative AI means for education and, if you have been following any of my reflections on this topic, you’ll already have a pretty good idea of what kinds of issues I raised about that. My real agenda, though, was not so much to talk about generative AI as to reflect on the nature and roles of education and educational systems because, like all technologies, the technology that matters in any given situation is the enacted whole rather than any of its assembled parts. My concerns about uses of generative AI in education are not due to inherent issues with generative AIs (plentiful though those may be) but to inherent issues with educational systems that come to the fore when you mash the two together at a grand scale.

The crux of this argument is that, as long as we think of the central purposes of education as being the attainment of measurable learning outcomes or the achievement of credentials, especially if the focus is on training people for a hypothetical workplace, the long-term societal effects of inserting generative AIs into the teaching process are likely to be dystopian. That’s where Robert McNamara comes into the picture. The McNamara Fallacy is what happens when you pick an aspect of a system to measure, usually because it is easy, and then you use that measure to define success, choosing to ignore or to treat as irrelevant anything that cannot be measured. It gets its name from Robert McNamara, US Secretary of Defense during the Vietnam War, who famously measured who was winning by body count, which is probably among the main reasons that the US lost the war.

My concern is that measurable learning outcomes (and still less the credentials that signify having achieved them) are not the ends that matter most. They are, rather, means to achieve far more complex, situated, personal, and social ends that lead to happy, safe, productive societies and richer lives for those within them. While it does play an important role in developing skills and knowledge, education is thus more fundamentally concerned with developing values, attitudes, ways of thinking, ways of seeing, ways of relating to others, ways of understanding and knowing what matters to ourselves and others, and finding how we fit into the social, cultural, technological, and physical worlds that we inhabit. These critical social, cultural, technological, and personal roles have always been implicit in our educational systems but, at least in in-person institutions, they have seldom needed to be made explicit because they are inherent in the structures and processes that have evolved over many centuries to meet this need. This is why naive attempts to simply replicate the in-person learning experience online usually fail: they replicate the intentional teaching activities but neglect to cater for the vast amounts of learning that occur simply from being in a space with other people, and all that emerges as a result of that. It is for much the same reasons that simply inserting generative AI into existing educational structures and systems is so dangerous.

If we choose to measure the success or failure of an educational system by the extent to which learners achieve explicit learning outcomes and credentials, then the case for using generative AIs to teach is extremely compelling. Already, they are far more knowledgeable, far more patient, far more objective, far better able to adapt their teaching to support individual student learning, and far, far cheaper than human teachers. They will get better. Much better. As long as we focus only on the easily measurable outcomes and the extrinsic targets, simple economics combined with their measurably greater effectiveness means that generative AIs will increasingly replace teachers in the majority of teaching roles. That would not be so bad – as Arthur C. Clarke observed, any teacher that can be replaced by a machine should be – were it not for all the other, more important roles that education plays and will continue to play. The difference is that we will now be learning those ways of being human from things that are not human and that, in more or less subtle ways, do not behave like humans. If this occurs at scale – as it is bound to do – the consequences for future generations may not be great. And, for the most part, the AIs will be better able to achieve those learning outcomes themselves – what is distinctive about them is that they are, like us, tool users, not simply tools – so why bother teaching fallible, inconsistent, unreliable humans to achieve them? In fact, why bother with humans at all? There are, almost certainly, already large numbers of instances in which at least part of the teaching process is generated by an AI and where generative AIs are used by students to create work that is assessed by AIs.

It doesn’t have to be this way. We can choose to recognize the more important roles of our educational systems and redesign them accordingly, as many educational thinkers have been recommending for considerably more than a century. I provide a few thoughts on that in the last few slides; they are far from revolutionary, but that’s really the point: we don’t need much novel thinking about how to accommodate generative AI into our existing systems. We just need to make those systems work the way we have known they should work for a very long time.

Download the slides | Watch the video

Presentation – Generative AIs in Learning & Teaching: the Case Against

Here are the slides from my presentation at AU’s Lunch ‘n’ Learn session today. The presentation itself took 20 minutes and was followed by a wonderfully lively and thoughtful conversation for another 40 minutes, though it was only scheduled for half an hour. Thanks to all who attended for a very enjoyable discussion!

self portrait of ChatGPT, showing an androgynous human face overlaid with circuits

The arguments made in this were mostly derived from my recent paper on the subject (Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020) but, despite the title, my point was not to reject the use of generative AIs at all. The central message I was hoping to get across was a simpler and more important one: to encourage attendees to think about what education is for, and what we would like it to be. As the slides suggest, I believe that is only partially to do with the objectives and outcomes we set out to achieve, that it has nothing much at all to do with the products of the system, such as grades and credentials, and that a focus on those mechanical aspects of the system often creates obstacles to achieving what education is really for. Beyond those easily measured things, education is about the values, beliefs, attitudes, relationships, and development of humans and their societies. It’s about ways of being, not just the capacity to do stuff. It’s about developing humans, not (just) developing skills. My hope is that the disruptions caused by generative AIs are encouraging us to think like the Amish, and to place greater value on the things we cannot measure. These are good conversations to have.

Published in Digital – The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education

A month or two ago I shared a “warts-and-all” preprint of this paper on the risks of educational uses of generative AIs. The revised, open-access published version, The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education, is now available in the journal Digital.

The process has been a little fraught. Two reviewers really liked the paper and suggested minimal but worthwhile changes. One quite liked it but had a few reasonable suggestions for improvements that mostly helped to make the paper better. The fourth, though, was bothersome in many ways, and clearly wanted me to write a completely different paper altogether. Despite this, I did most of what they asked, even though some of the changes, in my opinion, made the paper a bit worse. However, I drew the line at the point that they demanded (without giving any reason) that I should refer to 8 very mediocre, forgettable, cookie-cutter computer science papers which, on closer inspection, had all clearly been written by the reviewer or their team. The big problem I had with this was not so much the poor quality of the papers, nor even the blatant nepotism/self-promotion of the demand, but the fact that none were in any conceivable way relevant to mine, apart from being about AI: they were about algorithm-tweaking, mostly in the context of traffic movements in cities. It was as ridiculous as a reviewer of a work on Elizabethan literature requiring the author to refer to papers on slightly more efficient manufacturing processes for staples. Though it is normal and acceptable for reviewers to suggest reference to their own papers when it would clearly lead to improvements, this was an utterly shameless abuse of power, on a scale and of a kind that I have never seen before. I politely refused, making it clear that I was on to their game but not directly calling them out on it.

In retrospect, I slightly regret not calling them out. For a grizzled old researcher like me, who could probably find another publisher without too much hassle, it doesn’t matter much if I upset a reviewer enough to make them reject my paper. However, for early-career researchers stuck in the publish-or-perish cycle, it would be very much harder to say no. This kind of behaviour is harmful to the author, the publisher, the reader, and the collective intelligence of the human race. The fact that the reviewer was so desperate to get a few more citations for their own team, with so little regard for quality or relevance, seems to me to be a poor reflection on them and their institution but, more so, a damning indictment of a broken system of academic publishing, and of the reward systems driving academic promotion and recognition. I do blame the reviewer, but I understand the pressures they might have been under to do such a blatantly immoral thing.

As it happens, my paper has more than a thing or two to say about this kind of McNamara phenomenon, whereby the means used to measure success in a system come to define, and ultimately warp, its purpose, because it is among the main reasons that generative AIs pose such a threat. It is easy to forget that the ways we establish goals and measure success in educational systems are no more than signals of a much more complex phenomenon with far more expansive goals, concerned with helping humans to be, individually and in their cultures and societies, as much as with helping them to do particular things. Generative AIs are great at both generating and displaying those signals – better than most humans in many cases – but that’s all they do: the signals signify nothing. For well-defined tasks with well-defined goals they provide a lot of opportunities for cost-saving, quality improvement, and efficiency and, in many occupations, that can be really useful. If you want to quickly generate some high-quality advertising copy, the intent of which is to sell a product, then it makes good sense to use a generative AI. Not so much in education, though, where it is too easy to forget that learning objectives, learning outcomes, grades, credentials, and so on are not the purposes of learning but just means for and signals of achieving them.

Though there are other big reasons to be very concerned about using generative AIs in education, some of which I explore in the paper, this particular problem is not so much with the AIs themselves as with the technological systems into which they are, piecemeal, inserted. It’s a problem with thinking locally, not globally; of focusing on one part of the technology assembly without acknowledging its role in the whole. Generative AIs could, right now and with little assistance, perform almost every measurable task in an educational system from (for students) producing essays and exam answers, to (for teachers) writing activities and assignments, or acting as personal tutors. They could do so better than most people. If that is all that matters to us then we might as well remove the teachers and the students from the system because, quite frankly, they only get in the way. This absurd outcome is more or less exactly the end game that will occur, though, if we don’t rethink (or double down on existing rethinking of) how education should work and what it is for, beyond the signals that we usually use to evaluate success or intent. Just thinking of ways to use generative AIs to improve our teaching is well-meaning, but it risks destroying the woods by focusing on the trees. We really need to step back a bit and think of why we bother in the first place.

For more on this, and for my tentative partial solutions to these and other related problems, do read the paper!

Abstract and citation

This paper analyzes the ways that the widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. Methodologically, the paper applies a theoretical model and grounded argument to present a case that GAIs are different in kind from all previous technologies. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing these soft and hard techniques in humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain; the very things that technologies enabled us to do can now be done by the technologies themselves. Because they replace things that learners have to do in order to learn and that teachers must do in order to teach, the consequences for what, how, and even whether learning occurs are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs. Its distinctive contributions include a novel means of understanding the distinctive differences between GAIs and all other technologies, a characterization of the nature of generative AIs as collectives (forms of collective intelligence), reasons to avoid the use of GAIs to replace teachers, and a theoretically grounded framework to guide adoption of generative AIs in education.

Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21104429/published-in-digital-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

▶ How Education Works, the audio book: now with beats

My book has been set to music!

Many thanks to Terry Greene for converting How Education Works into the second in his inspired series of podcasts, EZ Learning – Audio Books with Beats. There’s a total of 15 episodes that can be listened to online, subscribed to with your preferred podcast app, or downloaded for later listening, read by a computer-generated voice and accompanied by some cool, soothing beats.

Terry chose a deep North American voice for the reader and Eaters In Coffeeshops Mix 1 by Eaters to accompany my book. I reckon it works really well. It’s bizarre, at first – the soothing robotic voice introduces weird pauses, mispronunciations, and curious emphases, and there are occasional voice parts in the music that can be slightly distracting – but you soon get used to it if you relax into the rhythm, and it leads to the odd serendipitous emphasis that enhances rather than detracts from the text. Oddly, in some ways it almost feels more human as a result. Though it can be a bit disconcerting at times and there’s a fair chance of being lulled to sleep by the gentle rhythm, I have a hunch that the addition of music might make it easier to remember passages from it, for reasons discussed in a paper I wrote with Rory McGreal, Vive Kumar, and Jennifer Davies a year or so ago.

I have been slowly and painfully working on a manually performed audiobook of How Education Works, but it is taking much longer than expected thanks to living on the flight path of a surprising number of float planes, being in a city built on a rain forest with a noisy gutter outside my window, having two very vocal cats, and so on, not to mention not having a lot of free time to work on it. I am therefore very pleased that Terry has done this. I won’t stop working on the human-read version – I think this fills a different and very complementary niche – but it’s great to have something to point people towards when they ask for an audio version.

The first season of Audio Books with Beats, appearing in the feed after the podcasts for my book chapters, was another AU Press book, Terry Anderson’s Theory and Practice of Online Learning, which is also well worth a listen – those chapters follow directly from mine in the list of episodes. I hope and expect there will be more seasons to come so, if you are reading this some time after it was posted, you may need to scroll down through other podcasts until you reach the How Education Works episodes. In case it’s hard to find, here’s a list of direct links to the episodes.

Acknowledgements, Prologue, Introduction

Chapter 1: A Handful of Anecdotes About Elephants

Chapter 2: A Handful of Observations About Elephants

Part I: All About Technology

Chapter 3: Organizing Stuff to Do Stuff

Chapter 4: How Technologies Work

Chapter 5: Participation and Technique

Part II: Education as a Technological System

Chapter 6: A Co-Participation Model of Teaching

Chapter 7: Theories of Teaching

Chapter 8: Technique, Expertise, and Literacy

Part III: Applying the Co-Participation Model

Chapter 9: Revealing Elephants

Chapter 10: How Education Works

Epilogue

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20936998/%E2%96%B6-how-education-works-the-audio-book-now-with-beats

Recording and slides from my ESET 2023 keynote: Artificial humanity and human artificiality

Here are the slides from my keynote at ESET23 in Taiwan (I was online, alas, not in Taipei!).

Here’s a recording of the actual keynote.

The themes of my talk will be familiar to anyone who follows my blog or who has read my recent paper on the subject. This is about applying the co-participation theory from How Education Works to generative AI, raising concerns about the ways it mimics the soft technique of humans, and discussing how problematic that will be if the skills it replaces atrophy or are never learned in the first place, amongst other issues.

This is the abstract:

We are participants in, not just users of, technologies. Sometimes we participate as orchestrators (for instance, when choosing words that we write) and sometimes as part of the orchestration (for instance, when spelling those words correctly). Usually, we play both roles. When we automate aspects of technologies in which we are just parts of the orchestration, it frees us up to be able to orchestrate more, to do creative and problem-solving tasks, while our tools perform the hard, mechanical tasks better, more consistently, and faster than we could ourselves. Collectively and individually, we therefore become smarter. Generative AIs are the first of our technologies to successfully automate those soft, open-ended, creative cognitive tasks. If we lack sufficient time and/or knowledge to do what they do ourselves, they are like tireless, endlessly flexible personal assistants, expanding what we can do alone. If we cannot draw, or draw up a rental agreement, say, an AI will do it for us, so we may get on with other things. Teachers are therefore scrambling to use AIs to assist in their teaching as fast as students use AIs to assist with their assessments.

For achieving measurable learning outcomes, AIs are or will be effective teachers, opening up greater learning opportunities that are more personalized, at lower cost, in ways that are superior to average human teachers.  But human teachers, be they professionals, other students, or authors of websites, do more than help learners to achieve measurable outcomes. They model ways of thinking, ways of being, tacit knowledge, and values: things that make us human. Education is a preparation to participate in human cultures, not just a means of imparting economically valuable skills. What will happen as we increasingly learn those ways of being from a machine? If machines can replicate skills like drawing, reasoning, writing, and planning, will humans need to learn them at all? Are there aspects of those skills that must not atrophy, and what will happen to us at a global scale if we lose them? What parts of our cognition should we allow AIs to replace? What kinds of credentials, if any, will be needed? In this talk I will use the theory presented in my latest book, How Education Works: Teaching, Technology, and Technique to provide a framework for exploring why, how, and for what purpose our educational institutions exist, and what the future may hold for them.

Pre-conference background reading, including the book, articles, and blog posts on generative AI and education may be found linked from https://howeducationworks.ca

Preprint – The human nature of generative AIs and the technological nature of humanity: implications for education

Here is a preprint of a paper I just submitted to MDPI’s Digital journal that applies the co-participation model that underpins How Education Works (and a number of my papers over the last few years) to generative AIs (GAIs). I don’t know whether it will be accepted and, even if it is, it is very likely that some changes will be required. This is a warts-and-all raw first submission. It’s fairly long (around 10,000 words).

The central observation around which the paper revolves is that, for the first time in the history of technology, recent generations of GAIs automate (or at least appear to automate) the soft technique that has, till now, been the sole domain of humans. Up until now, every technology we have ever created, be it physically instantiated, cognitive, organizational, structural, or conceptual, has left all of the soft part of the orchestration to human beings.

The fact that GAIs replicate the soft stuff is a matter for some concern when they start to play a role in education, mainly because:

  • the skills they replace may atrophy or never be learned in the first place. This is not even slightly like replacing hard skills of handwriting or arithmetic: we are talking about skills like creativity, problem-solving, critical inquiry, design, and so on. We’re talking about the stuff that GAIs are trained with.
  • the AIs themselves are an amalgam, an embodiment of our collective intelligence, not actual people. You can spin up any kind of persona you like and discard it just as easily. Much of the crucially important hidden/tacit curriculum of education is concerned with relationships, identity, ways of thinking, ways of being, ways of working and playing with others. It’s about learning to be human in a human society. It is therefore quite problematic to delegate how we learn to be human to a machine with (literally and figuratively) no skin in the game, trained on a bunch of signals signifying nothing but more signals.

On the other hand, to not use them in educational systems would be as stupid as to not use writing. These technologies are now parts of our extended cognition, intertwingled with our collective intelligence as much as any other technology, so of course they must be integrated in our educational systems. The big questions are not about whether we should embrace them but how, and what soft skills they might replace that we wish to preserve or develop. I hope that we will value real humans and their inventions more, rather than less, though I fear that, as long as we retain the main structural features of our education systems without significant adjustments to how they work, we will no longer care, and we may lose some of our capacity for caring.

I suggest a few ways we might avert some of the greatest risks by, for instance, treating them as partners/contractors/team members rather than tools, by avoiding methods of “personalization” that simply reinforce existing power imbalances and pedagogies designed for better indoctrination, by using them to help connect us and support human relationships, by doing what we can to reduce extrinsic drivers, by decoupling learning and credentials, and by doubling down on the social aspects of learning. There is also an undeniable explosion in adjacent possibles, leading to new skills to learn, new ways to be creative, and new possibilities for opening up education to more people. The potential paths we might take from now on are unprestatable and multifarious but, once we start down them, resulting path dependencies may lead us into great calamity at least as easily as they may expand our potential. We need to make wise decisions now, while we still have the wisdom to make them.

MDPI invited me to submit this article free of their normal article processing charge (APC). The fact that I accepted is therefore very much not an endorsement of APCs, though I respect MDPI’s willingness to accommodate those who find payment difficult, the good editorial services they provide, and the fact that all they publish is open. I was not previously familiar with the Digital journal itself. It has been publishing 4 articles a year since 2021, mostly offering a mix of reports on application designs and literature reviews. The quality seems good.

Abstract

This paper applies a theoretical model to analyze the ways that widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing the soft and hard techniques of humans to participate in the technologies and thus the collective intelligence of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain: the very things that technologies enabled us to do can now be done by the technologies themselves. The consequences for what, how, and even whether we learn are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20512771/preprint-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

Cognitive prostheses and the future of the human race

head made of cogs

I have lived in Canada for over 16 years, so I was surprised when, a few months ago, a conference-friend whom I had not seen for many years contacted me to ask whereabouts in Malaysia I lived. I believe that they were misremembering a connection from a conversation long ago in which I’d mentioned a recent visit (it was in fact a day-trip from Singapore) and combining that with the accurate recollection that I no longer lived in the UK.

Not long after, I was on a panel discussing the impact of ChatGPT, during which I prompted ChatGPT to introduce me. Here was its first attempt in response to the prompt “tell me about Jon Dron”. I’ve highlighted things that are true in green, and things that are false in red (emphasized for those unable to see the colours):

Jon Dron is a Canadian educator, researcher, and writer. He is known for his work in the field of e-learning and collaborative technologies. Dron has a PhD in Educational Technology from the University of Calgary, and has held academic positions at several universities in Canada, including Athabasca University, where he is currently a professor.

Dron has written extensively on topics such as online learning, social media, and the use of technology in education. He is the author of several books, including “Teaching Crowds: Learning and Social Media” and “E-Learning Theory and Practice.”

In addition to his academic work, Dron is also a musician and has released several albums of ambient and experimental music.

I’d say that there is not much difference between the human and machine recollections. I would almost certainly make at least as many mistakes if I were asked to confidently describe a person I don’t know particularly well. In fact, I might make similar mistakes (not, please note, hallucinations) about quite close friends. Most of us don’t have eidetic memories: we reinvent recollections as much as we recall them. While there are surely many profound differences between how humans and large language models (LLMs) like ChatGPT process information, this is at least circumstantial evidence that some of the basic principles underlying artificial neural networks and biological neural networks are probably pretty similar. True, AIs do not know when they are making things up (or telling the truth, for that matter) but, in fairness, much of the time, neither do we. With a lot of intentional training we may be able to remember lines in a play or how to do long division but, usually, our recollections are like blurry JPEGs rather than RAW images.

Even for things we have intentionally learned to do or recall well, it is unusual for that training to stick without continual reinforcement, and mistakes are easily made. A few days ago I performed a set of around 30 songs (neither ambient nor experimental), most of which I had known for decades, all of which I had carefully practiced in the days leading up to the event to be sure I could play them as I intended. Here is a picture of me singing at that gig, drawn by my 6-year-old grandchild who was in attendance:

grandpa singing in the square

 

Despite my precautions and ample experience, in perhaps a majority of songs I variously forgot words, chords, notes, and, in a couple of cases, whole verses. Combined with errors of execution (my fingers are not robotic, my voice gets husky), there was, I think, only one song in the whole set that came out more or less exactly as I intended. I have made such mistakes in almost every gig I have ever played. In fact, in well over 40 years as a performer, I have never played the same song in exactly the same way twice, though I have played some of them well over 10,000 times. Most of the variations are a feature, not a bug: they are where the expression lies. A performance is a conversation between performer, instruments, setting, and audience, not a mechanical copy of a perfect original. Nonetheless, my goal is usually to at least play the right notes and sing the right words, and I frequently fail to do that. Significantly, I generally know when I have done it wrong (typically a little before, in a dread realization that just makes things worse) and adapt fairly seamlessly on the fly so, on the whole, you probably wouldn’t even notice it has happened, but I play much like ChatGPT responds to prompts: I fill in the things I don’t know with something more or less plausible. These creative adaptations are no more hallucinations than the false outputs of LLMs.

The fact that perfect recall is so difficult to achieve is why we need physical prostheses, to write things down, to look things up, or to automate them. Given LLMs’ weaknesses in accurate recall, it is slightly ironic that we often rely on computers for that. It is, though, considerably more difficult for LLMs to do this because they have no big pictures, no purposes, no plans, not even broad intentions. They don’t know whether what they are churning out is right or wrong, so they don’t know to correct it. In fact, they don’t even know what they are saying, period. There’s no reflection, no metacognition, no layers of introspection, no sense of self, nothing to connect concepts together, no reason for them to correct errors that they cannot perceive.

Things that make us smart

How difficult can it be to fix this? I think we will soon be seeing a lot more solutions to this problem because, if we can look stuff up, then so can machines, and more reliable information from other systems can be used to feed the input or improve the output of the LLM (Bing, for instance, has been doing so for a while now, to an extent). A much more intriguing possibility is that an LLM itself, or a subsystem of it, might not only look things up but also write and/or sequester the code it needs to do things it is currently incapable of doing, extending its own capacity by assembling and remixing higher-level cognitive structures. Add a bit of layering, then throw in an evolutionary algorithm to kill off the less viable or effective, and you’ve got a machine that can almost intentionally learn, and know when it has made a mistake.
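The first part of that paragraph is gesturing at what is now often called retrieval-augmented generation combined with some form of self-checking. To make the shape of that loop a little more concrete, here is a minimal, purely illustrative Python sketch. All of the names in it (retrieve_facts, query_llm, verify_answer) are hypothetical stand-ins rather than any real model or search API, and the stubs simply echo their inputs: the point is the structure of the look-up/generate/check cycle, not the implementation.

```python
# Illustrative sketch only: every function below is a hypothetical stand-in,
# not a real LLM or retrieval API.

def retrieve_facts(question: str) -> list[str]:
    """Stand-in for a retrieval step (e.g. a web search or database lookup)."""
    return [f"fact relevant to: {question}"]

def query_llm(prompt: str) -> str:
    """Stand-in for a call to a language model."""
    return f"answer drafted from: {prompt[:60]}..."

def verify_answer(answer: str, facts: list[str]) -> bool:
    """Stand-in for a checker: another model, a rule set, or the retrieved sources.
    A real checker would test the answer against the facts; this stub accepts any
    non-empty answer."""
    return bool(answer)

def answer_with_lookup(question: str, max_attempts: int = 3) -> str:
    """Look facts up first, generate a candidate answer, and self-check before replying."""
    facts = retrieve_facts(question)
    prompt = f"Facts: {facts}\nQuestion: {question}"
    for _ in range(max_attempts):
        answer = query_llm(prompt)          # generate a candidate answer
        if verify_answer(answer, facts):    # accept it only if the check passes
            return answer
        prompt += f"\nThe previous attempt was flagged as unreliable: {answer}"
    return "I don't know."                  # admit failure rather than confabulate

if __name__ == "__main__":
    print(answer_with_lookup("Where does Jon Dron live?"))
```

In any real system the verifier is the hard part, which is rather the point: an LLM on its own does not know when it is wrong, so something else in the assembly has to.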

Such abilities are a critical part of what makes humans smart, too. When discussing neural networks it is a bit too easy to focus on the underlying neural correlates of learning without paying much (if any) heed to the complex emergent structures that result from them – the “stuff” of thought – but those structures are the main things that make it work for humans. Like the training sets for large language models, the intelligence of humans is largely built from the knowledge gained from other humans through language, pedagogies, writing, drawing, music, computers, and other mediating technologies. Like an LLM, the cognitive technologies that result from this (including songs) are parts that we assemble and remix in order to analyze, synthesize, and create. Unlike most if not all existing LLMs, though, the ways we assemble them – the methods of analysis, the rules of logic, the pedagogies, the algorithms, the principles, and so on (that we have also learned from others) – are cognitive prostheses that play an active role in the assembly, allowing us to build, invent, and use further cognitive prostheses and so to recursively extend our capabilities far beyond the training set, as well as to diagnose our own shortfalls.

Like an LLM, our intelligence is also fundamentally collective, not just in what happens inside brains, but because our minds are extended, through tools, gadgets, rules, language, writing, structures, and systems that we enlist from the world as part of (not only adjuncts to) our thinking processes. Through technologies, from language to screwdrivers, we literally share our minds with others. For those of us who use them, LLMs are now as much parts of us as our own creative outputs are parts of them.

All of this means that human minds are part-technology (largely but not wholly instantiated in biological neural nets) and so our cognition is about as artificial as that of AIs. We could barely even think without cognitive prostheses like language, symbols, logic, and all the countless ways of doing and using technologies that we have devised, from guitars to cars. Education, in part, is a process of building and enlisting those cognitive prostheses in learners’ minds, and of enabling learners to build and enlist their own, in a massively complex, recursive, iterative, and distributed process, rich in feedback loops and self-organizing subsystems.

Choosing what we give up to the machine

There are many good ways to use LLMs in the learning process, as part of what students do. Just as it would be absurd to deny students the use of pens, books, computers, the Internet, and so on, it is absurd to deny them the use of AIs, including in summative assessments. These are now part of our cognitive apparatus, so we should learn how to participate in them wisely. But I think we need to be extremely cautious in choosing what we delegate to them, above all when using them to replace or augment some or all of the teaching role.

What makes AIs different from technologies of the past is that they perform a process of cognitive assembly broadly similar to the one we perform ourselves, allowing us to offload much more of our cognition to an embodied collective intelligence created from the combined output of countless millions of people. Only months after the launch of ChatGPT, this is already profoundly changing how we learn and how we teach. It is disturbing and disruptive in an educational context for a number of reasons, such as that:

  • it may make it unnecessary for us to learn its skills ourselves, and so important aspects of our own cognition, not just things we don’t need (but which are they?), may atrophy;
  • if it teaches, it may embed biases from its training set and design (whose?) that we will inherit;
  • it may be a bland amalgam of what others have written, lacking originality or human quirks, and that is what we, too, will learn to do;
  • if we use it to teach, it may lead students towards an average or norm, not a peak;
  • it renders traditional forms of credentialling learning largely useless.

We need solutions to these problems or, at least, to understand how we will successfully adapt to the changes they bring, or whether we even want to do so. Right now, an LLM is not a mind at all, but it can be a functioning part of one, much as an artificial limb is a functioning part of a body or a cyborg prosthesis extends what a body can do. Whether we feel any particular limb that it (partly) replicates needs replacing, which system we should replace it with, and whether it is a good idea in the first place are among the biggest questions we have to answer. But I think there’s an even bigger problem we need to solve: the nature of education itself.

AI teachers

There are no value-free technologies, at least insofar as they are enacted and brought into being through our participation in them, and the technologies that contribute to our cognition, such as teaching, are the most value-laden of all, communicating not just the knowledge and skills they purport to provide but also the ways of thinking and being that they embody. It is not just what they teach or how effectively they do so, but how they teach, and how we learn to think and behave as a result, that matters.

While AI teachers might well make it easier to learn to do and remember stuff, building hard cognitive technologies (technique, if you prefer) is not the only thing that education does. Through education, we learn values, ways of connecting, ways of thinking, and ways of being with others in the world. In the past this has come for free when we learn the other stuff, because real human teachers (including textbook authors, other students, etc.) can’t help but model and transmit the tacit knowledge, values, and attitudes that go along with what they teach. This is largely why in-person lectures work. They are hopeless for learning the stuff being taught, but the fact that students physically attend them makes them great for sharing attitudes and enthusiasm, bringing people together, and letting us see how other people think through problems, how they react to ideas, and so on. It is also why recordings of online lectures are much less successful: they do none of that, albeit that the benefits of being able to repeat and rewind somewhat compensate for the losses.

What happens, though, when we all learn how to be human from something that is not (quite) human? The tacit curriculum – the stuff through which we learn ways of being, not just ways of doing – looms largest, for me, among the problems we have to solve if we are to embed AIs in our educational systems, as indeed we must. Do we want our children to learn to be human from machines that haven’t quite figured out what that means and almost certainly never will?

Many AI-Ed acolytes tell the comforting story that we are just offloading some of our teaching to the machine, making teaching more personal, more responsive, cheaper, and more accessible to more people, freeing human teachers to do more of the human stuff. I get that: there is much to be said for making the acquisition of hard skills and knowledge easier, cheaper, and more efficient. However, it is local thinking writ large. It solves the problems that we have to solve today that are caused by how we have chosen to teach, with all the centuries-long path dependencies and counter technologies that entails, replacing technologies without wondering why they exist in the first place.

Perhaps the biggest of the problems that the entangled technologies of education systems cause is the devastating effect of tightly coupled credentials (and their cousins, grades) on intrinsic motivation. Much of the process of good teaching is one of reigniting that intrinsic motivation or, at least, supporting the development of internally regulated extrinsic motivation, and much of the process of bad teaching is about going with the flow and using threats and rewards to drive the process. As long as credentials remain the primary reason for learning, and as long as they remain based on proof of easily measured learning outcomes provided through end-products like assignments and inauthentic tests, then an AI that offers a faster, more efficient, and better tailored way of achieving them will crowd out the rest. Human teaching will be treated as a minor and largely irrelevant interruption or, at best, a feel-good ritual with motivational perks for those who can afford it. And, as we are already seeing, students coerced to meet deadlines and goals imposed on them will use AIs to take shortcuts. Why do it yourself when a machine can do it for you?

The future

As we start to build AIs more like us, with metacognitive traits, self-set purposes, and the capacity for independent learning, the problem is just going to get bigger. Whether they are better or worse (they will be both), AIs will not be the same as us, yet they will increasingly seem so, and increasingly play human roles in the system. If the purpose of education is seen as nothing but short-term achievement of explicit learning outcomes and getting the credentials arising from that, then it would be better to let the machines achieve them so that we can get on with our lives. But of course that is not the purpose. Education is for preparing people to live better lives in better societies. It is why the picture of me singing above delights me more than anything ever created by an AI. It is why education is and must remain a fundamentally human process. Almost any human activity can be replaced by an AI, including teaching, but education is fundamental to how we become who we are. That’s not the kind of thing that I think we want to replace.

Our minds are already changing as they extend into the collective intelligence of LLMs – they must – and we are only at the very beginning of this story. Most of the changes that are about to occur will be mundane and complex, and the process will be punctuated but gradual, so we won’t really notice what has been happening until it has happened, by which time it may be too late. It is probably not an exaggeration to say that, unless environmental or other disasters bring it all to a halt, this is a pivotal moment in our history.

It is much easier to think locally, to think about what AIs can do to support or extend what we do now, than it is to imagine how everything will change as a result of everyone doing that at scale. It requires us to think in systems, which is not something most of us are educated or prepared to do. But we must do that, now, while we still can. We should not leave it to AIs to do it for us.

There’s much more on many of the underpinning ideas mentioned in this post, including references and arguments supporting them, in my freely downloadable or cheap-to-purchase latest book (of three, as it happens), How Education Works.

The artificial curriculum

evolving into a robot

“Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings” by Simone Grassini is a well-researched, concise but comprehensive overview of the state of play for generative AI (GAI) in education. It gives a very good overview of current uses, by faculty and students, and provides a thoughtful discussion of issues and concerns arising. It addresses technical, ethical, and pragmatic concerns across a broad spectrum. If you want a great summary of where we are now, with tons of research-informed suggestions as to what to do about it, this is a very worthwhile read.

However, underpinning much of the discussion is an implied (and I suspect unintentional) assumption that education is primarily concerned with achieving and measuring explicit specified outcomes. This is particularly obvious in the discussions of ways GAIs can “assist” with instruction. I have a problem with that.

There has been an increasing trend in recent decades towards the mechanization of education: modularizing rather than integrating, measuring what can be easily measured, creating efficiencies, focusing on an end goal of feeding industry, and so on. It has resulted in a classic case of the McNamara Fallacy, which starts with the laudable goal of measuring success, as much as we are able, and ends with that measure defining success, to the exclusion of anything we do not or cannot measure. Learning becomes the achievement of measured outcomes.

It is true that consistent, measurable, hard techniques must be learned to achieve almost anything in life, and that it takes sustained effort and study to achieve most of them, something educators can and should help with. Measurable learning outcomes and what we do with them matter. However, the more profound and, I believe, the more important ends of education, regardless of the subject, are concerned with ways of being in the world, with other humans. It is the tacit curriculum that ultimately matters more: how education affects the attitudes, the values, the ways we can adapt, how we can create, how we make connections, pursue our dreams, live fulfilling lives, and engage with our fellow humans as parts of cultures and societies.

By definition, the tacit curriculum cannot be meaningfully expressed in learning outcomes or measured on a uniform scale. It can be expressed only obliquely, if it can be expressed at all, in words. It is largely emergent and relational, expressed in how we are, interacting with one another, not as measurable functions that describe what we can do. It is complex, situated, and idiosyncratic. It is about learning to be human, not achieving credentials.

Returning to the topic of AI: to learn to be human from a blurry JPEG of the web, or autotune for knowledge, especially given that models will increasingly be trained on the output of earlier models, seems to me to be a very bad idea indeed.

The real difficulty that teachers face is not that students solve the problems set to them using large language models, but that in so doing they bypass the process, thus avoiding the tacit learning outcomes we cannot or choose not to measure. And the real difficulty that those students face is that, in delegating the teaching process to an AI, their teachers are bypassing the teaching process, thus failing to support the learning of those tacit outcomes or, at best, providing an averaged-out caricature of them. If we heedlessly continue along this path, it will wind up with machines teaching machines, with humans largely playing the roles of cogs and switches in them.

Some might argue that, if the machines do a good enough job of mimicry, then it really doesn’t matter that they happen to be statistical models with no feelings, no intentions, no connection, and no agency. I disagree. Just as it makes a difference whether a painting ascribed to Picasso is a fake or not, or whether a letter is faxed or delivered through the post, or whether this particular guitar was played by John Lennon, it matters that real humans are on each side of a learning transaction. It means something different for an artifact to have been created by another human, even if the form of the exchange, in words or whatever, is the same. Current large language models have flaws, confidently spout falsehoods, fail to remember previous exchanges, and so on, so they are easy targets for criticism. However, I think it will be even worse when AIs are “better” teachers. When they seem endlessly tireless, patient, respectful, and responsive; when the help they give is unerringly accurate, personal, and targeted; when they draw on knowledge no one human could ever possess, they will not be modelling human behaviour. The best-case scenario is that they will not be teaching students how to be, they will just be teaching them how to do, and that human teachers will provide the necessary tacit curriculum to support the human side of learning. However, the two are inseparable, so that is not particularly likely. The worst scenarios are that they will be teaching students how to be machines, or how to be an average human (with significant biases introduced by their training), or both.

And, frankly, if AIs are doing such a good job of it then they are the ones who should be doing whatever it is that they are training students to do, not the students. This will most certainly happen: it already is (witness the current actors and screenwriters strike). For all the disruption that results, it’s not necessarily a bad thing, because it increases the adjacent possible for everyone in so many ways. That’s why the illustration to this post is made to my instructions by Midjourney, not drawn by me. It does a much better job of it than I could do.

In a rational world we would not simply incorporate AI into teaching as we have always taught. It makes no more sense to let it replace teachers than it does to let it replace students. We really need to rethink what and why we are teaching in the first place. Unfortunately, such reinvention is rarely if ever how technology works. Technology evolves by assembly with and in the context of other technology, which is why we have inherited mediaeval solutions to indoctrination as a fundamental mainstay of all modern education (there’s a lot more about such things in my book, How Education Works, if you want to know more about that). The upshot will be that, as we integrate rather than reinvent, we will keep on doing what we have always done, with a few changes to topics, a few adjustments in how we assess, and a few “efficiencies”, but we will barely notice that everything has changed because students will still be achieving the same kinds of measured outcomes.

I am not much persuaded by most apocalyptic visions of the potential threat of AI. I don’t think that AI is particularly likely to lead to the world ending with a bang, though it is true that more powerful tools do make it more likely that evil people will wield them. Artificial General Intelligence, though, especially anything resembling consciousness, is very little closer today than it was 50 years ago and most attempts to achieve it are barking in the wrong forest, let alone up the wrong tree. The more likely and more troubling scenario is that, as it embraces GAIs but fails to change how everything is done, the world will end with a whimper, a blandification, a leisurely death like that of lobsters in water coming slowly to a boil. The sad thing is that, by then, with our continued focus on just those things we measure, we may not even notice it is happening. The sadder thing still is that, perhaps, it already is happening.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/19390937/the-artificial-curriculum

Look what just arrived on my doorstep! #howeducationworks from @au_press is now available in print and e-book formats

Photo of hard copies of How Education Works

Hard copies and e-book versions of How Education Works are now available, and they are starting to turn up in bookstores. The recommended retail price is CAD$40 but Amazon is selling the Kindle version for a bit less.

Here are a few outlets that are selling it (or order it from your local independent bookstore!):

AU Press (CA)

Barnes & Noble (US)

Blackwells (UK)

Amazon (CA)

Amazon (JP)

University of Chicago Press (US)

Indigo (CA)

Booktopia (AU)

For those wanting to try before they buy or who cannot afford/do not want the paper or e-book versions, you can read it for free online, or download a PDF of the whole book.

The publishers see this as mainly targeted at professional teachers and educational researchers, but those are far from the only audiences I had in mind as I was writing it. Apart from anything else, one of the central claims of the book is that literally everyone is a teacher.  But it’s as much a book about the nature of technology as it is about education, and as much about the nature of knowledge as it is about how that knowledge is acquired. If you’re interested in how we come to know stuff, how technologies work, or how to think about what makes us (individually and collectively) smart, there’s something in the book for you. It’s a work of philosophy as much as it is a book of practical advice, and it’s about a way of thinking and being at least as much as it is about the formal practice of education. That said, it certainly does contain some ideas and recommendations that do have practical value for educators and educational researchers. There’s just more to it than that.

I cannot begin to express how pleased I am that, after more than 10 years of intermittent work, I finally have the finished article in my hands. I hope you get a chance to read it, in whatever format works for you! I’ll end this post with a quote, that happens to be the final paragraph of the book…

“If this book has helped you, however slightly, to think about what you know and how you have come to know it a little differently, then it has been a successful learning technology. In fact, even if you hold to all of your previous beliefs and this book has challenged you to defend them, then it has worked just fine too. Even if you disagreed with or misunderstood everything that I said, and even if you disliked the way that I presented it, it might still have been an effective learning technology, even though the learning that I hoped for did not come about. But I am not the one who matters the most here. This is layer upon layer of technology, and in some sense, for some technology, it has done what that technology should do. The book has conveyed words that, even if not understood as I intended them to be, even if not accepted, even if rabidly disagreed with, have done something for your learning. You are a different person now from the person you were when you started reading this book because everything that we do changes us. I do not know how it has changed you, but your mind is not the same as it was before, and ultimately the collectives in which you participate will not be the same either. The technology of print production, a spoken word, a pattern of pixels on a screen, or dots on a braille reader has, I hope, enabled you, at least on occasion, to think, criticize, acknowledge, recognize, synthesize, and react in ways that might have some value in consolidating or extending or even changing what you already know. As a result of bits and bytes flowing over an ether from my fingertips to whatever this page might be to you, knowledge (however obscure or counter to my intentions) has been created in the world, and learning has happened. For all the complexities and issues that emerge from that simple fact, one thing is absolutely certain: this is good.”