Preprint – The human nature of generative AIs and the technological nature of humanity: implications for education

Here is a preprint of a paper I just submitted to MDPI’s Digital journal that applies the co-participation model that underpins How Education Works (and a number of my papers over the last few years) to generative AIs (GAIs). I don’t know whether it will be accepted and, even if it is, it is very likely that some changes will be required. This is a warts-and-all raw first submission. It’s fairly long (around 10,000 words).

The central observation around which the paper revolves is that, for the first time in the history of technology, recent generations of GAIs automate (or at least appear to automate) the soft technique that has, till now, been the sole domain of humans. Every previous technology we have created, be it physically instantiated, cognitive, organizational, structural, or conceptual, has left all of the soft part of the orchestration to human beings.

The fact that GAIs replicate the soft stuff is a matter for some concern when they start to play a role in education, mainly because:

  • the skills they replace may atrophy or never be learned in the first place. This is not even slightly like replacing the hard skills of handwriting or arithmetic: we are talking about skills like creativity, problem-solving, critical inquiry, design, and so on. We’re talking about the very stuff on which GAIs are trained.
  • the AIs themselves are an amalgam, an embodiment of our collective intelligence, not actual people. You can spin up any kind of persona you like and discard it just as easily. Much of the crucially important hidden/tacit curriculum of education is concerned with relationships, identity, ways of thinking, ways of being, ways of working and playing with others. It’s about learning to be human in a human society. It is therefore quite problematic to delegate how we learn to be human to a machine with (literally and figuratively) no skin in the game, trained on a bunch of signals signifying nothing but more signals.

On the other hand, to not use them in educational systems would be as stupid as to not use writing. These technologies are now parts of our extended cognition, intertwingled with our collective intelligence as much as any other technology, so of course they must be integrated into our educational systems. The big questions are not about whether we should embrace them but how, and about what soft skills they might replace that we wish to preserve or develop. I hope that we will value real humans and their inventions more, rather than less, though I fear that, as long as we retain the main structural features of our education systems without significant adjustments to how they work, we will no longer care, and we may lose some of our capacity for caring.

I suggest a few ways we might avert some of the greatest risks by, for instance, treating them as partners/contractors/team members rather than tools, by avoiding methods of “personalization” that simply reinforce existing power imbalances and pedagogies designed for better indoctrination, by using them to help connect us and support human relationships, by doing what we can to reduce extrinsic drivers, by decoupling learning and credentials, and by doubling down on the social aspects of learning. There is also an undeniable explosion in adjacent possibles, leading to new skills to learn, new ways to be creative, and new possibilities for opening up education to more people. The potential paths we might take from now on are unprestatable and multifarious but, once we start down them, resulting path dependencies may lead us into great calamity at least as easily as they may expand our potential. We need to make wise decisions now, while we still have the wisdom to make them.

MDPI invited me to submit this article free of their normal article processing charge (APC). The fact that I accepted is therefore very much not an endorsement of APCs, though I respect MDPI’s willingness to accommodate those who find payment difficult, the good editorial services they provide, and the fact that all they publish is open. I was not previously familiar with the Digital journal itself. It has been publishing four issues a year since 2021, mostly offering a mix of reports on application designs and literature reviews. The quality seems good.

Abstract

This paper applies a theoretical model to analyze the ways that widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. The model extends Brian Arthur’s insights into the nature of technologies as orchestrations of phenomena to our use, explaining the nature of human participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing the soft and hard techniques that enable humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft technique that, until now, was humanity’s sole domain: the very things that technologies enabled us to do can now be done by the technologies themselves. The consequences for what, how, and even whether we learn are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20512771/preprint-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

10 minute chats on Generative AI – a great series, now including an interview with me

This is a great series of brief interviews between Tim Fawns and an assortment of educators and researchers from across the world on the subject of generative AI and its impact on learning and teaching.

The latest (tenth in the series) is with me.

Tim asked us all to come up with 3 key statements beforehand that he used to structure the interviews. I only realized that I had to do this on the day of the interview, so mine are not very well thought through, but there follows a summary of very roughly what I would have said about each if my wits were sharper. The reality was, of course, not quite like this. I meandered around a few other ideas and we ran out of time, but I think this captures the gist of what I actually wanted to convey:

Key statement 1: Most academics are afraid of AIs being used by students to cheat. I am afraid of AIs being used by teachers to cheat.

For much the same reasons that many of us balk at students using, say, ChatGPT to write part or all of their essays or code, I think we should be concerned when teachers use it to replace or supplement their teaching, whether it be for writing course outlines, assessing student work, or acting as intelligent tutors (to name but a few common uses). The main thing that bothers me is that human teachers (including other learners, authors, and many more) do not simply help learners to achieve specified learning outcomes. In the process, they model ways of thinking, values, attitudes, feelings, and a host of other hard-to-measure tacit and implicit phenomena that relate to ways of being, ways of interacting, ways of responding, and ways of connecting with others. There can be huge value in seeing the world through another’s eyes, in interacting with them, in adapting your responses, in seeing how they adapt to yours, and so on. This is a critical part of how we learn the soft stuff, the ways of doing things, the meaning, the social value, the connections with our own motivations, and so on. In short, education is as much about being a human being, living in human communities, as it is about learning facts and skills. Even when we are not interacting but, say, simply reading a book, we are learning not just the contents but the ways the contents are presented, the quirks, the passions, the ways the authors think of their readers, their implicit beliefs, and so on.

While a generative AI can mimic this pretty well, it is by nature a kind of average, a blurry reconstruction mashed up from countless examples of the work of real humans. It is human-like, not human. It can mimic a wide assortment of nearly-humans without identity, without purpose, without persistence, without skin in the game. As things currently stand (though this will change) it is also likely to be pretty bland – good enough, but not great.

It might be argued that this is better than nothing at all, or that it augments rather than replaces human teachers, or that it helps with relatively mundane chores, or that it provides personalized support and efficiencies in learning hard skills, or that it allows teachers to focus on those human aspects, or even that using a generative AI is a good way of learning in itself. Right now and in the near future, this may be true because we are in a system on the verge of disruption, not yet in the thick of it, and we come to it with all our existing skills and structures intact. My concern is what happens as it scales and becomes ubiquitous; as the bean-counting focus on efficiencies that relate solely to measurable outcomes increasingly crowds out the time spent with other humans; as the generative AIs feed on one another, becoming more and more divorced from their human originals; as the skills of teaching that are replaced by AIs atrophy in the next generation; as time we spend with one another is replaced with time spent with not-quite-human simulacra; as the AIs themselves become more and more a part of our cognitive apparatus in both what is learned and how we learn it. There are Monkeys’ Paws all the way down the line: for everything that might be improved, there are at least as many things that can and will get worse.

Key statement 2: We and our technologies are inherently intertwingled so it makes no more sense to exclude AIs from the classroom than it would to exclude, say, books or writing. The big questions are about what we need to keep.

Our cognition is fundamentally intertwingled with the technologies that we use, both physical and cognitive, and those technologies are intertwingled with one another, and that’s how our collective intelligence emerges. For all the vital human aspects mentioned above, a significant part of the educational process is concerned with building cognitive gadgets that enable us to participate in the technologies of our cultures, from poetry and long division to power stations and web design. Through that participation our cognition is highly distributed, and our intelligence is fundamentally collective. Now that generative AIs are part of that, it would be crazy to exclude them from classrooms or to bar their use in assessments. It does, however, raise more than a few questions about what cognitive activities we still need to keep for ourselves.

Technologies expand or augment what we can do unaided. Writing, say, allows us (among other things) to extend our memories. This creates many adjacent possibles, including sharing them with others, and allowing us to construct more complex ideas using scaffolding that would be very difficult to construct on our own because our memories are not that great.

Central to the nature of writing is that, as with most technologies, we don’t just use it: we participate in its enactment, performing part of the orchestration ourselves (for instance, we choose what words and ideas we write – the soft stuff), but also being part of its orchestration (e.g. we must typically spell words and use grammar sufficiently uniformly that others can understand them – the hard stuff).

In the past, we did nearly all of our writing by hand. Handwriting was a hard skill that had to be learned well enough that others could read what we had written, a process that typically required years of training and practice, demanding mastery of a wide range of technical proficiencies from spelling and punctuation to manual dexterity and the ability to sharpen a quill/fill a fountain pen/insert a cartridge, etc. To an increasingly large extent we have now offloaded many of those hard skills, first to typewriters and now to computers. While some of the soft aspects of handwriting have been lost – the cognitive processes that affect how we write and how we think, the expressiveness of the never-perfect ways we write letters on a page, etc. – this was a sensible thing to do. From a functional perspective, text produced by a computer is far more consistent, far more readable, far more adaptable, far more reusable, and far more easily communicated. Why should we devote so much effort and time to learning to be part of a machine when a machine can do that part for us, and do it better?

Something that can free us from having to act as an inflexible machine seems, by and large, like a good thing. If we don’t have to do it ourselves then we can spend more time and effort on what we do and how we do it: the soft stuff, the creative stuff, the problem-solving stuff, and so on. It allows us to be more capable, to reach further, to communicate more clearly. There are some really big issues relating to the ways that the constraints of handwriting, such as the relative difficulty of making corrections and the physicality of the movements, change our brains and result in different ways of thinking, some of which may be very valuable. But, as Postman wrote, all technologies are Faustian bargains involving losses and harms as well as gains and benefits. A technology that thrives is usually (at least in the short term) one in which the gains are perceived to outweigh the losses. And, even when largely replaced, old technologies seldom if ever die, so it is usually possible to retrieve what is lost, at least until the skills atrophy, components are no longer made, or they are designed to die (old printers with chip-protected cartridges that are no longer made, for instance).

What is fundamentally different about generative AIs, however, is that they allow us to offload to a machine exactly the soft, creative, problem-solving aspects of our cognition that technologies normally support and expand. They provide extremely good pastiches of human thought and creativity that can act well enough to be considered drop-in replacements. In many cases, they can do so a lot better – from the point of view of someone seeing only the outputs – than an average human. An AI image generator can draw a great deal better than me, for instance. But, given that these machines are now part of our extended, intertwingled minds, what is left for us? What parts of our minds should they or will they replace? How can we use them without losing the capacity to do at least some of the things they do better or as well as us? What happens if we lack the cognitive gadgets we never installed in our minds because AIs did the work for us? This is not the same as, say, not knowing how to make a bow and arrow or write in cuneiform. Even when atrophied, such skills can be recovered. This is the stuff that we learn the other stuff for. It is especially important in the field of education which, traditionally at least, has been deeply concerned with cultivating hard skills largely if not solely so that we can use them creatively, socially, and productively once they are learned. If the machines are doing that for us, what is our role? This is not (yet) Kurzweil’s singularity, the moment when machines exceed our own intelligence and start to develop on their own, but it is the (drawn-out, fragmented) moment when machines have become capable of participating in soft, creative technologies on an at least equal footing with humans. That matters. This leads to my final key statement.

Key statement 3: AIs create countless new adjacent possible empty niches. They can augment what we can do, but we need to go full-on Amish when deciding whether they should replace what we already do.

Every new creation in the world opens up new and inherently unprestatable adjacent possible empty niches for further creation, not just in how it can be used as part of new assemblies but in how it connects with those that already exist. It’s the exponential dynamic ratchet underlying natural evolution as much as technology, and it is what results in the complexity of the universe. The rapid acceleration in use and complexity of generative AIs – itself enabled by the adjacent possibles of the already highly disruptive Internet – that we have seen over the past couple of years has resulted in a positive explosion of new adjacent possibles, in turn spawning others, and so on, at a hitherto unprecedented scale and speed.

This is exactly what we should expect in an exponentially growing system. It makes it increasingly difficult to predict what will happen next, or what skills, attitudes, and values we will need to deal with it, or how we will be affected by it. As the number of possible scenarios increases at the same exponential rate, and the time between major changes gets ever shorter, patterns of thinking, ways of doing things, skills we need, and the very structures of our societies must change in unpredictable ways, too. Occupations, including in education, are already being massively disrupted, for better and for worse. Deeply embedded systems, from assessment for credentials to the mass media, are suddenly and catastrophically breaking. Legislation, regulations, resistance from groups of affected individuals, and other checks and balances may slightly alter the rate of change, but likely not enough to matter. Education serves both a stabilizing and a generative role in society, but educators are at least as unprepared and at least as disrupted as anyone else. We don’t – in fact we cannot – know what kind of world we are preparing our students for, and the generative technologies that now form part of our cognition are changing faster than we can follow. Any AI literacies we develop will be obsolete in the blink of an eye. And, remember, generative AIs are not just replacing hard skills. They are replacing the soft ones, the things that we use our hard skills to accomplish.

This is why I believe we would do well to heed the example of the Amish, who (contrary to popular belief) are not opposed to modern technologies but, in their communities, debate and discuss the merits and disadvantages of any technology that is available, considering the ways in which it might affect or conflict with their values, only adopting those agreed to be, on balance, good, and only doing so in ways that accord with those values. Different communities make different choices according to their contexts and needs. In order to do that, we have to have values in the first place. But what are the values that matter in education?

With a few exceptions (laws and regulations being the main ones) technologies do not determine how we will act but, through the ways they integrate with our shared cognition, existing technologies, and practices, they have a lot of momentum and, unchecked, generative AIs will inherit the values associated with what currently exists. In educational systems that are increasingly regulated by government mandates that focus on nothing but their economic contributions to industry, where success or failure is measured solely by proxy criteria like predetermined outcomes of learning and enrolments, where a millennium of path dependencies still embodies patterns of teacher control and indoctrination that worked for mediaeval monks and skillsets that suited the demands of factory owners during the industrial revolution, this will not end well. Now seems the time we most need to reassert and double down on the human, the social, the cultural, the societal, the personal, and the tacit value of our institutions. This is the time to talk about those values, locally and globally. This is the time to examine what matters, what we care about, what we must not lose, and why we must not lose it. Tomorrow it will be too late. I think this is a time of great risk but it is also a time of great opportunity, a chance to reflect on and examine the value and nature of education itself. Some of us have been wanting to have these conversations for decades.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20146256/10-minute-chats-on-generative-ai-a-great-series-now-including-an-interview-with-me