Educational ends and means: McNamara’s Fallacy and the coming robot apocalypse (presentation for TAMK)

 

These are the slides that I used for my talk with a delightful group of educational leadership students from TAMK University of Applied Sciences in Tampere, Finland at (for me) a somewhat ungodly hour Wednesday night/Thursday morning after a long day. If you were in attendance, sorry for any bleariness on my part. If not, or if you just want to re-live the moment, here is the video of the session (thanks Mark!).

The brief that I was given was to talk about what generative AI means for education and, if you have been following any of my reflections on this topic then you’ll already have a pretty good idea of what kinds of issues I raised about that. My real agenda, though, was not so much to talk about generative AI as to reflect on the nature and roles of education and educational systems because, like all technologies, the technology that matters in any given situation is the enacted whole rather than any of its assembled parts. My concerns about uses of generative AI in education are not due to inherent issues with generative AIs (plentiful though those may be) but to inherent issues with educational systems that come to the fore when you mash the two together at a grand scale.

The crux of this argument is that, as long as we think of the central purposes of education as being the attainment of measurable learning outcomes or the achievement of credentials, especially if the focus is on training people for a hypothetical workplace, the long-term societal effects of inserting generative AIs into the teaching process are likely to be dystopian. That’s where Robert McNamara comes into the picture. The McNamara Fallacy is what happens when you pick an aspect of a system to measure, usually because it is easy, and then you use that measure to define success, choosing to ignore or to treat as irrelevant anything that cannot be measured. It gets its name from Robert McNamara, US Secretary of Defense during the Vietnam war, who famously measured who was winning by body count, which is probably among the main reasons that the US lost the war.

My concern is that measurable learning outcomes (and even less the credentials that signify having achieved them) are not the ends that matter most. They are, rather, means to achieve far more complex, situated, personal and social ends that lead to happy, safe, productive societies and richer lives for those within them. While it does play an important role in developing skills and knowledge, education is thus more fundamentally concerned with developing values, attitudes, ways of thinking, ways of seeing, ways of relating to others, ways of understanding and knowing what matters to ourselves and others, and finding how we fit into the social, cultural, technological, and physical worlds that we inhabit. These critical social, cultural, technological, and personal roles have always been implicit in our educational systems but, at least in in-person institutions, they seldom need to be made explicit because they are inherent in the structures and processes that have evolved over many centuries to meet this need. This is why naive attempts to simply replicate the in-person learning experience online usually fail: they replicate the intentional teaching activities but neglect to cater for the vast amounts of learning that occur simply due to being in a space with other people, and all that emerges as a result of that. It is for much the same reasons that simply inserting generative AI into existing educational structures and systems is so dangerous.

If we choose to measure the success or failure of an educational system by the extent to which learners achieve explicit learning outcomes and credentials, then the case for using generative AIs to teach is extremely compelling. Already, they are far more knowledgeable, far more patient, far more objective, far better able to adapt their teaching to support individual student learning, and far, far cheaper than human teachers. They will get better. Much better. As long as we focus only on the easily measurable outcomes and the extrinsic targets, simple economics combined with their measurably greater effectiveness means that generative AIs will increasingly replace teachers in the majority of teaching roles.  That would not be so bad – as Arthur C. Clarke observed, any teacher that can be replaced by a machine should be – were it not for all the other more important roles that education plays, and that it will continue to play, except that now we will be learning those ways of being human from things that are not human and that, in more or less subtle ways, do not behave like humans. If this occurs at scale – as it is bound to do – the consequences for future generations may not be great. And, for the most part, the AIs will be better able to achieve those learning outcomes themselves – what is distinctive about them is that they are, like us, tool users, not simply tools – so why bother teaching fallible, inconsistent, unreliable humans to achieve them? In fact, why bother with humans at all? There are, almost certainly, already large numbers of instances in which at least part of the teaching process is generated by an AI and where generative AIs are used by students to create work that is assessed by AIs.

It doesn’t have to be this way. We can choose to recognize the more important roles of our educational systems and redesign them accordingly, as many educational thinkers have been recommending for considerably more than a century. I provide a few thoughts on that in the last few slides that are far from revolutionary but that’s really the point: we don’t need much novel thinking about how to accommodate generative AI into our existing systems. We just need to make those systems work the way we have known they should work for a very long time.

Download the slides | Watch the video

Presentation – Generative AIs in Learning & Teaching: the Case Against

Here are the slides from my presentation at AU’s Lunch ‘n’ Learn session today. The presentation itself took 20 minutes and was followed by a wonderfully lively and thoughtful conversation for another 40 minutes, though it was only scheduled for half an hour. Thanks to all who attended for a very enjoyable discussion!

The arguments made in this were mostly derived from my recent paper on the subject (Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020) but, despite the title, my point was not to reject the use of generative AIs at all. The central message I was hoping to get across was a simpler and more important one: to encourage attendees to think about what education is for, and what we would like it to be. As the slides suggest, I believe that is only partially to do with the objectives and outcomes we set out to achieve, that it is nothing much at all to do with the products of the system such as grades and credentials, and that a focus on those mechanical aspects of the system often creates obstacles to achieving it. Beyond those easily measured things, education is about the values, beliefs, attitudes, relationships, and development of humans and their societies. It’s about ways of being, not just capacity to do stuff. It’s about developing humans, not (just) developing skills. My hope is that the disruptions caused by generative AIs are encouraging us to think like the Amish, and to place greater value on the things we cannot measure. These are good conversations to have.

Published in Digital – The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education

A month or two ago I shared a “warts-and-all” preprint of this paper on the risks of educational uses of generative AIs. The revised, open-access published version, The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education, is now available in the journal Digital.

The process has been a little fraught. Two reviewers really liked the paper and suggested minimal but worthwhile changes. One quite liked it but had a few reasonable suggestions for improvements that mostly helped to make the paper better. The fourth, though, was bothersome in many ways, and clearly wanted me to write a completely different paper altogether. Despite this, I did most of what they asked, even though some of the changes, in my opinion, made the paper a bit worse. However, I drew the line at the point that they demanded (without giving any reason) that I should refer to 8 very mediocre, forgettable, cookie cutter computer science papers which, on closer inspection, had all clearly been written by the reviewer or their team. The big problem I had with this was not so much the poor quality of the papers, nor even the blatant nepotism/self-promotion of the demand, but the fact that none were in any conceivable way relevant to mine, apart from being about AI: they were about algorithm-tweaking, mostly in the context of traffic movements in cities.  It was as ridiculous as a reviewer of a work on Elizabethan literature requiring the author to refer to papers on slightly more efficient manufacturing processes for staples. Though it is normal and acceptable for reviewers to suggest reference to their own papers when it would clearly lead to improvements, this was an utterly shameless abuse of power of a scale and kind that I have never seen before. I politely refused, making it clear that I was on to their game but not directly calling them out on it.

In retrospect, I slightly regret not calling them out. For a grizzled old researcher like me who could probably find another publisher without too much hassle, it doesn’t matter much if I upset a reviewer enough to make them reject my paper. However, for early-career researchers stuck in the publish-or-perish cycle, it would be very much harder to say no. This kind of behaviour is harmful for the author, the publisher, the reader, and the collective intelligence of the human race. The fact that the reviewer was so desperate to get a few more citations for their own team with so little regard for quality or relevance seems to me to be a poor reflection on them and their institution but, more so, a damning indictment of a broken system of academic publishing, and of the reward systems driving academic promotion and recognition. I do blame the reviewer, but I understand the pressures they might have been under to do such a blatantly immoral thing.

As it happens, my paper has more than a thing or two to say about this kind of McNamara phenomenon, whereby the means used to measure success in a system become its purpose and warp it, because it is among the main reasons that generative AIs pose such a threat. It is easy to forget that the ways we establish goals and measure success in educational systems are no more than signals of a much more complex phenomenon with far more expansive goals that are concerned with helping humans to be, individually and in their cultures and societies, as much as with helping them to do particular things. Generative AIs are great at both generating and displaying those signals – better than most humans in many cases – but that’s all they do: the signals signify nothing. For well-defined tasks with well-defined goals they provide a lot of opportunities for cost-saving, quality improvement, and efficiency and, in many occupations, that can be really useful. If you want to quickly generate some high quality advertising copy, the intent of which is to sell a product, then it makes good sense to use a generative AI. Not so much in education, though, where it is too easy to forget that learning objectives, learning outcomes, grades, credentials, and so on are not the purposes of learning but just means for and signals of achieving them.

Though there are other big reasons to be very concerned about using generative AIs in education, some of which I explore in the paper, this particular problem is not so much with the AIs themselves as with the technological systems into which they are, piecemeal, inserted. It’s a problem with thinking locally, not globally; of focusing on one part of the technology assembly without acknowledging its role in the whole. Generative AIs could, right now and with little assistance, perform almost every measurable task in an educational system from (for students) producing essays and exam answers, to (for teachers) writing activities and assignments, or acting as personal tutors. They could do so better than most people. If that is all that matters to us then we might as well remove the teachers and the students from the system because, quite frankly, they only get in the way. This absurd outcome is more or less exactly the end game that will occur, though, if we don’t rethink (or double down on existing rethinking of) how education should work and what it is for, beyond the signals that we usually use to evaluate success or intent. Just thinking of ways to use generative AIs to improve our teaching is well-meaning, but it risks destroying the woods by focusing on the trees. We really need to step back a bit and think of why we bother in the first place.

For more on this, and for my tentative partial solutions to these and other related problems, do read the paper!

Abstract and citation

This paper analyzes the ways that the widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. Methodologically, the paper applies a theoretical model and grounded argument to present a case that GAIs are different in kind from all previous technologies. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing these soft and hard techniques in humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain; the very things that technologies enabled us to do can now be done by the technologies themselves. Because they replace things that learners have to do in order to learn and that teachers must do in order to teach, the consequences for what, how, and even whether learning occurs are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs. 
Its distinctive contributions include a novel means of understanding the distinctive differences between GAIs and all other technologies, a characterization of the nature of generative AIs as collectives (forms of collective intelligence), reasons to avoid the use of GAIs to replace teachers, and a theoretically grounded framework to guide adoption of generative AIs in education.

Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21104429/published-in-digital-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

10 minute chats on Generative AI – a great series, now including an interview with me

This is a great series of brief interviews between Tim Fawns and an assortment of educators and researchers from across the world on the subject of generative AI and its impact on learning and teaching.

The latest (tenth in the series) is with me.

Tim asked us all to come up with 3 key statements beforehand that he used to structure the interviews. I only realized that I had to do this on the day of the interview so mine are not very well thought-through, but there follows a summary of very roughly what I would have said about each if my wits were sharper. The reality was, of course, not quite like this. I meandered around a few other ideas and we ran out of time, but I think this captures the gist of what I actually wanted to convey:

Key statement 1: Most academics are afraid of AIs being used by students to cheat. I am afraid of AIs being used by teachers to cheat.

For much the same reasons that many of us balk at students using, say, ChatGPT to write part or all of their essays or code, I think we should be concerned when teachers use it to replace or supplement their teaching, whether it be for writing course outlines, assessing student work, or acting as intelligent tutors (to name but a few common uses).  The main thing that bothers me is that human teachers (including other learners, authors, and many more) do not simply help learners to achieve specified learning outcomes. In the process, they model ways of thinking, values, attitudes, feelings, and a host of other hard-to-measure tacit and implicit phenomena that relate to ways of being, ways of interacting, ways of responding, and ways of connecting with others. There can be huge value in seeing the world through another’s eyes, of interacting with them, adapting your responses, seeing how they adapt to yours, and so on. This is a critical part of how we learn the soft stuff, the ways of doing things, the meaning, the social value, the connections with our own motivations, and so on. In short, education is as much about being a human being, living in human communities, as it is about learning facts and skills. Even when we are not interacting but, say, simply reading a book, we are learning not just the contents but the ways the contents are presented, the quirks, the passions, the ways the authors think of their readers, their implicit beliefs, and so on.

While a generative AI can mimic this pretty well, it is by nature a kind of average, a blurry reconstruction mashed up from countless examples of the work of real humans. It is human-like, not human. It can mimic a wide assortment of nearly-humans without identity, without purpose, without persistence, without skin in the game. As things currently stand (though this will change) it is also likely to be pretty bland – good enough, but not great.

It might be argued that this is better than nothing at all, or that it augments rather than replaces human teachers, or it helps with relatively mundane chores, or it provides personalized support and efficiencies in learning hard skills, or it allows teachers to focus on those human aspects, or even that using a generative AI is a good way of learning in itself. Right now and in the near future, this may be true because we are in a system on the verge of disruption, not yet in the thick of it, and we come to it with all our existing skills and structures intact. My concern is what happens as it scales and becomes ubiquitous; as the bean-counting focus on efficiencies that relate solely to measurable outcomes increasingly crowds out the time spent with other humans; as the generative AIs feed on one another, becoming more and more divorced from their human originals; as the skills of teaching that are replaced by AIs atrophy in the next generation; as the time we spend with one another is replaced with time spent with not-quite-human simulacra; as the AIs themselves become more and more a part of our cognitive apparatus in both what is learned and how we learn it. There are Monkeys’ Paws all the way down the line: for everything that might be improved, there are at least as many things that can and will get worse.

Key statement 2: We and our technologies are inherently intertwingled so it makes no more sense to exclude AIs from the classroom than it would to exclude, say, books or writing. The big questions are about what we need to keep.

Our cognition is fundamentally intertwingled with the technologies that we use, both physical and cognitive, and those technologies are intertwingled with one another, and that’s how our collective intelligence emerges. For all the vital human aspects mentioned above, a significant part of the educational process is concerned with building cognitive gadgets that enable us to participate in the technologies of our cultures, from poetry and long division to power stations and web design. Through that participation our cognition is highly distributed, and our intelligence is fundamentally collective. Now that generative AIs are part of that, it would be crazy to exclude them from classrooms or from their use in assessments. It does, however, raise more than a few questions about what cognitive activities we still need to keep for ourselves.

Technologies expand or augment what we can do unaided. Writing, say, allows us (among other things) to extend our memories. This creates many adjacent possibles, including sharing them with others, and allowing us to construct more complex ideas using scaffolding that would be very difficult to construct on our own because our memories are not that great.

Central to the nature of writing is that, as with most technologies, we don’t just use it but we participate in its enactment, performing part of the orchestration ourselves (for instance, we choose what words and ideas we write – the soft stuff), but also being part of its orchestration (e.g., we must typically spell words and use grammar sufficiently uniformly that others can understand them – the hard stuff).

In the past, we used to do nearly all of that writing by hand. Handwriting was a hard skill that had to be learned well enough that others could read what we had written, a process that typically required years of training and practice, demanding mastery of a wide range of technical proficiencies from spelling and punctuation to manual dexterity and the ability to sharpen a quill/fill a fountain pen/insert a cartridge, etc. To an increasingly large extent we have now offloaded many of those hard skills, first to typewriters and now to computers. While some of the soft aspects of handwriting have been lost – the cognitive processes that affect how we write and how we think, the expressiveness of the never-perfect ways we write letters on a page, etc. – this was a sensible thing to do. From a functional perspective, text produced by a computer is far more consistent, far more readable, far more adaptable, far more reusable, and far more easily communicated. Why should we devote so much effort and time to learning to be part of a machine when a machine can do that part for us, and do it better?

Something that can free us from having to act as an inflexible machine seems, by and large, like a good thing. If we don’t have to do it ourselves then we can spend more time and effort on what we do, how we do it, the soft stuff, the creative stuff, the problem-solving stuff, and so on. It allows us to be more capable, to reach further, to communicate more clearly. There are some really big issues relating to the ways that the constraints of handwriting (such as the relative difficulty of making corrections, the physicality of the movements, and the ways our brains are changed by handwriting) result in different ways of thinking, some of which may be very valuable. But, as Postman wrote, all technologies are Faustian bargains involving losses and harms as well as gains and benefits. A technology that thrives is usually (at least in the short term) one in which the gains are perceived to outweigh the losses. And, even when largely replaced, old technologies seldom if ever die, so it is usually possible to retrieve what is lost, at least until the skills atrophy, components are no longer made, or they are designed to die (old printers with chip-protected cartridges that are no longer made, for instance).

What is fundamentally different about generative AIs, however, is that they allow us to offload to a machine exactly the soft, creative, problem-solving aspects of our cognition that technologies normally support and expand. They provide extremely good pastiches of human thought and creativity that can act well enough to be considered drop-in replacements. In many cases, they can do so a lot better – from the point of view of someone seeing only the outputs – than an average human. An AI image generator can draw a great deal better than me, for instance. But, given that these machines are now part of our extended, intertwingled minds, what is left for us? What parts of our minds should they or will they replace? How can we use them without losing the capacity to do at least some of the things they do better or as well as us? What happens if we lack those cognitive gadgets we never installed in our minds because AIs did it for us? This is not the same as, say, not knowing how to make a bow and arrow or write in cuneiform. Even when atrophied, such skills can be recovered. This is the stuff that we learn the other stuff for. It is especially important in the field of education which, traditionally at least, has been deeply concerned with cultivating the hard skills largely if not solely so that we can use them creatively, socially and productively once they are learned. If the machines are doing that for us, what is our role? This is not (yet) Kurzweil’s singularity, the moment when machines exceed our own intelligence and start to develop on their own, but it is the (drawn-out, fragmented) moment that machines have become capable of participating in soft, creative technologies on at least an equal footing with humans. That matters. This leads to my final key statement.

Key statement 3: AIs create countless new adjacent possible empty niches. They can augment what we can do, but we need to go full-on Amish when deciding whether they should replace what we already do.

Every new creation in the world opens up new and inherently unprestatable adjacent possible empty niches for further creation, not just in how it can be used as part of new assemblies but in how it connects with those that already exist. It’s the exponential dynamic ratchet underlying natural evolution as much as technology, and it is what results in the complexity of the universe. The rapid acceleration in use and complexity of generative AIs – itself enabled by the adjacent possibles of the already highly disruptive Internet – that we have seen over the past couple of years has resulted in a positive explosion of new adjacent possibles, in turn spawning others, and so on, at a hitherto unprecedented scale and speed.

This is exactly what we should expect in an exponentially growing system. It makes it increasingly difficult to predict what will happen next, or what skills, attitudes, and values we will need to deal with it, or how we will be affected by it. As the number of possible scenarios increases at the same exponential rate, and the time between major changes gets ever shorter, patterns of thinking, ways of doing things, skills we need, and the very structures of our societies must change in unpredictable ways, too. Occupations, including in education, are already being massively disrupted, for better and for worse. Deeply embedded systems, from assessment for credentials to the mass media, are suddenly and catastrophically breaking. Legislation, regulations, resistance from groups of affected individuals, and other checks and balances may slightly alter the rate of change, but likely not enough to matter. Education serves both a stabilizing and a generative role in society, but educators are at least as unprepared and at least as disrupted as anyone else. We don’t – in fact we cannot – know what kind of world we are preparing our students for, and the generative technologies that now form part of our cognition are changing faster than we can follow. Any AI literacies we develop will be obsolete in the blink of an eye. And, remember, generative AIs are not just replacing hard skills. They are replacing the soft ones, the things that we use our hard skills to accomplish.

This is why I believe we would do well to heed the example of the Amish, who (contrary to popular belief) are not opposed to modern technologies but, in their communities, debate and discuss the merits and disadvantages of any technology that is available, considering the ways in which it might affect or conflict with their values, only adopting those agreed to be, on balance, good, and only doing so in ways that accord with those values. Different communities make different choices according to their contexts and needs. In order to do that, we have to have values in the first place. But what are the values that matter in education?

With a few exceptions (laws and regulations being the main ones) technologies do not determine how we will act but, through the ways they integrate with our shared cognition, existing technologies, and practices, they have a lot of momentum and, unchecked, generative AIs will inherit the values associated with what currently exists. In educational systems that are increasingly regulated by government mandates that focus on nothing but their economic contributions to industry, where success or failure is measured solely by proxy criteria like predetermined outcomes of learning and enrolments, where a millennium of path dependencies still embodies patterns of teacher control and indoctrination that worked for mediaeval monks and skillsets that suited the demands of factory owners during the industrial revolution, this will not end well. Now seems the time we most need to reassert and double down on the human, the social, the cultural, the societal, the personal, and the tacit value of our institutions. This is the time to talk about those values, locally and globally. This is the time to examine what matters, what we care about, what we must not lose, and why we must not lose it. Tomorrow it will be too late. I think this is a time of great risk but it is also a time of great opportunity, a chance to reflect on and examine the value and nature of education itself. Some of us have been wanting to have these conversations for decades.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20146256/10-minute-chats-on-generative-ai-a-great-series-now-including-an-interview-with-me

The artificial curriculum

“Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings” by Simone Grassini is a well-researched, concise but comprehensive overview of the state of play for generative AI (GAI) in education. It gives a very good overview of current uses, by faculty and students, and provides a thoughtful discussion of issues and concerns arising. It addresses technical, ethical, and pragmatic concerns across a broad spectrum. If you want a great summary of where we are now, with tons of research-informed suggestions as to what to do about it, this is a very worthwhile read.

However, underpinning much of the discussion is an implied (and I suspect unintentional) assumption that education is primarily concerned with achieving and measuring explicit specified outcomes. This is particularly obvious in the discussions of ways GAIs can “assist” with instruction. I have a problem with that.

There has been an increasing trend in recent decades towards the mechanization of education: modularizing rather than integrating, measuring what can be easily measured, creating efficiencies, focusing on an end goal of feeding industry, and so on. It has resulted in a classic case of the McNamara Fallacy, which starts with the laudable goal of measuring success, as much as we are able, and ends with that measure defining success, to the exclusion of anything we do not or cannot measure. Learning becomes the achievement of measured outcomes.

It is true that consistent, measurable, hard techniques must be learned to achieve almost anything in life, that mastering most of them takes sustained effort and study, and that educators can and should help with that. Measurable learning outcomes, and what we do with them, matter. However, the more profound and, I believe, the more important ends of education, regardless of the subject, are concerned with ways of being in the world, with other humans. It is the tacit curriculum that ultimately matters more: how education affects our attitudes and values, the ways we adapt, how we create, how we make connections, pursue our dreams, live fulfilling lives, and engage with our fellow humans as parts of cultures and societies.

By definition, the tacit curriculum cannot be meaningfully expressed in learning outcomes or measured on a uniform scale. It can be expressed in words only obliquely, if it can be expressed at all. It is largely emergent and relational, expressed in how we are when interacting with one another, not as measurable functions that describe what we can do. It is complex, situated, and idiosyncratic. It is about learning to be human, not achieving credentials.

Returning to the topic of AI: to learn to be human from a blurry JPEG of the web, or autotune for knowledge, especially given that models will increasingly be trained on the output of earlier models, seems to me to be a very bad idea indeed.

The real difficulty that teachers face is not that students solve the problems set to them using large language models, but that in so doing they bypass the process, thus avoiding the tacit learning outcomes we cannot or choose not to measure. And the real difficulty that those students face is that, in delegating the teaching process to an AI, their teachers are bypassing the teaching process, thus failing to support the learning of those tacit outcomes or, at best, providing an averaged-out caricature of them. If we heedlessly continue along this path, it will wind up with machines teaching machines, with humans largely playing the roles of cogs and switches in them.

Some might argue that, if the machines do a good enough job of mimicry, then it really doesn’t matter that they happen to be statistical models with no feelings, no intentions, no connection, and no agency. I disagree. Just as it makes a difference whether a painting ascribed to Picasso is a fake or not, or whether a letter is faxed or delivered through the post, or whether this particular guitar was played by John Lennon, it matters that real humans are on each side of a learning transaction. It means something different for an artifact to have been created by another human, even if the form of the exchange, in words or whatever, is the same. Current large language models have flaws, confidently spout falsehoods, fail to remember previous exchanges, and so on, so they are easy targets for criticism. However, I think it will be even worse when AIs are “better” teachers. When they seem endlessly tireless, patient, respectful, and responsive; when the help they give is unerringly accurate, personal, and targeted; when they draw on knowledge no one human could ever possess, they will not be modelling human behaviour. The best case scenario is that they will not be teaching students how to be, only how to do, and that human teachers will provide the necessary tacit curriculum to support the human side of learning. However, the two are inseparable, so that is not particularly likely. The worst scenarios are that they will be teaching students how to be machines, or how to be an average human (with significant biases introduced by their training), or both.

And, frankly, if AIs are doing such a good job of it, then they are the ones who should be doing whatever it is that they are training students to do, not the students. This will most certainly happen: it already is (witness the current actors’ and screenwriters’ strikes). For all the disruption that results, it’s not necessarily a bad thing, because it increases the adjacent possible for everyone in so many ways. That’s why the illustration to this post is made to my instructions by Midjourney, not drawn by me. It does a much better job of it than I could do.

In a rational world we would not simply incorporate AI into teaching as we have always taught. It makes no more sense to let it replace teachers than it does to let it replace students. We really need to rethink what and why we are teaching in the first place. Unfortunately, such reinvention is rarely if ever how technology works. Technology evolves by assembly with and in the context of other technology, which is how come we have inherited mediaeval solutions to indoctrination as a fundamental mainstay of all modern education (there’s a lot more about such things in my book, How Education Works, if you want to know more about that). The upshot will be that, as we integrate rather than reinvent, we will keep on doing what we have always done, with a few changes to topics, a few adjustments in how we assess, and a few “efficiencies”, but we will barely notice that everything has changed because students will still be achieving the same kinds of measured outcomes.

I am not much persuaded by most apocalyptic visions of the potential threat of AI. I don’t think that AI is particularly likely to lead to the world ending with a bang, though it is true that more powerful tools do make it more likely that evil people will wield them. Artificial General Intelligence, though, especially anything resembling consciousness, is very little closer today than it was 50 years ago and most attempts to achieve it are barking in the wrong forest, let alone up the wrong tree. The more likely and more troubling scenario is that, as it embraces GAIs but fails to change how everything is done, the world will end with a whimper, a blandification, a leisurely death like that of lobsters in water coming slowly to a boil. The sad thing is that, by then, with our continued focus on just those things we measure, we may not even notice it is happening. The sadder thing still is that, perhaps, it already is happening.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/19390937/the-artificial-curriculum