The collective ochlotecture of large language models: slides from my talk at CI.edu, 2024

Here are my slides from the 1st International Symposium on Educating for Collective Intelligence last week, here is the paper on which my talk was based, and here is the video of the talk itself:

You can find this and videos of the rest of the stunning line-up of speakers at https://www.youtube.com/playlist?list=PLcS9QDvS_uS6kGxefLFr3kFToVIvIpisn. It was an incredibly engaging and energizing event: the chat alone was a masterclass in collective intelligence, difficult to follow at times but filled with rich insights and enlightening debates. The symposium site, which has all this and more, is at https://cic.uts.edu.au/events/collective-intelligence-edu-2024/

With just 10 minutes to make the case and 10 minutes for discussion, none of us were able to go into much depth in our talks. In mine I introduced the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning “multitude”, and τέκτων (tektōn), meaning “builder”, to describe the structures and processes that give shape and form to collections of people and their interactions. I think we need such a term because there are virtually infinite ways that such things can be configured, and the configuration makes all the difference. We blithely talk of things like groups, teams, clubs, companies, squads, and, of course, collectives, assuming that others will share an understanding of what we mean when, of course, they don’t. There were at least half a dozen quite distinct uses of the term “collective intelligence” in this symposium alone. I’m still working on a big paper on this subject that goes into some depth on the various dimensions of interest as they pertain to a wide range of social organizations but, for this talk, I was only concerned with the ochlotecture of collectives (a term I much prefer to “collective intelligence” because intelligence is such a slippery word, and collective stupidity is at least as common). From an ochlotectural perspective, these consist of a means of collecting crowd-generated information, processing it, and presenting the processed results back to the crowd. Human collective ochlotectures often contain other elements – group norms, structural hierarchies, schedules, digital media, etc. – but I think those are the defining features. If I am right then large language models (LLMs) are collectives, too, because that is exactly what they do. Unlike most other collectives, though (a collectively driven search engine like Google Search being one of a few partial exceptions), the processing is unique to each run of the cycle, generated via a prompt or similar input. This is what makes them so powerful, and it is what makes their mimicry of human soft technique so compelling.
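
To make that defining cycle a little more concrete, here is a minimal toy sketch in Python (the Collective class, the most_popular processor, and the sample data are all invented for illustration, not taken from any real system):

```python
# A toy version of the ochlotectural cycle described above: collect
# crowd-generated information, process it, and present the processed results
# back to the crowd, whose next contributions are shaped by what they see.
# All names and data here are invented for illustration only.

from collections import Counter
from typing import Callable, Iterable, List


class Collective:
    def __init__(self, process: Callable[[List[str]], List[str]]):
        self.process = process          # how the crowd's information is aggregated
        self.presented: List[str] = []  # what gets fed back to the crowd

    def run_cycle(self, contributions: Iterable[str]) -> List[str]:
        gathered = list(contributions)           # collect crowd-generated information
        self.presented = self.process(gathered)  # process it
        return self.presented                    # present the results back to the crowd


# A fixed, search-engine-like processor: surface whatever is most popular.
def most_popular(items: List[str]) -> List[str]:
    return [item for item, _ in Counter(items).most_common(3)]


crowd = ["wiki", "forum", "video", "wiki", "forum", "wiki"]
print(Collective(process=most_popular).run_cycle(crowd))  # ['wiki', 'forum', 'video']
```

In most collectives the process step is fixed in advance like this; the point about LLMs, above, is that something playing that role is generated afresh with every prompt.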

I did eventually get around to the theme of the conference. I spent a while discussing why LLMs are troubling – the fact that we learn values, attitudes, ways of being, etc. from interacting with them; the risks to our collective intelligence caused by them being part of the crowd, not just aggregators and processors of its outputs; and the potential loss of the soft, creative skills they can replace – and ended with what that implies for how we should act as educators: essentially, to focus on the tacit curriculum that has, till now, always come for free; to focus on community, because learning to be human from and with other humans is what it is all about; and to decouple credentials so as to reduce the focus on measurable outcomes that AIs can both teach and achieve better than an average human. I also suggested a couple of principles for dealing with generative AIs: to treat them as partners rather than tools, and to use them to support and nurture human connections, as ochlotects as much as parts of the ochlotecture.

I had a point to make in a short time, so the way I presented it was a bit of a caricature of my more considered views on the matter. If you want a more balanced view, and to get a bit more of the theoretical backdrop to all this, Tim Fawns’s talk (which follows mine and will probably play automatically after it if you play the video above) says it all, with far greater erudition and lucidity, and adds a few very valuable layers of its own. Though he uses different words and explains it far better than I do, his notion of entanglement closely echoes my own ideas about the nature of technology and the roles it plays in our cognition. I like the word “intertwingled” more than “entangled” because of its more positive associations and the sense of emergent order it conveys, but we mean substantially the same thing: in fact, the example he gave of a car is one that I have frequently used myself, in exactly the same way.

New paper: The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

I’m proud to be the 7th of 47 authors on this excellent new paper, led by the indefatigable Aras Bozkurt and featuring some of the most distinguished contemporary researchers in online, open, mobile, distance, e- and [insert almost any cognate sub-discipline here] learning, as well as a few, like me, hanging on their coat-tails.

As the title suggests, it is a manifesto: it makes a series of statements (divided into 15 positive and 20 negative themes) about what is or what should be, and it is underpinned by a firm set of humanist pedagogical and ethical attitudes that are anything but neutral. What makes it interesting to me, though, can mostly be found in the critical insights that accompany each theme, that capture a little of the complexity of the discussions that led to them, and that add a lot of nuance. The research methodology, a modified and super-iterative Delphi design in which all participants are also authors, is, I think, an incredibly powerful approach to research in the technology of education (broadly construed) that provides rigour and accountability without succumbing to science-envy.


Notwithstanding the lion’s share of the work of leading, assembling, editing, and submitting the paper being taken on by Aras and Junhong, it was a truly collective effort, so I have very little idea what percentage of it could be described as my work. We were thinking and writing together. Being a part of that was a fantastic learning experience for many of us, one that stretched the limits of what can be done with tracked changes and comments in a Google Doc, with contributions coming in at all times of day and night, and from just about every timezone, over weeks. The depth and breadth of dialogue was remarkable, as much an organic process of evolution and emergence as intelligent design, and one in which the document itself played a significant participant role. I felt a strong sense of belonging, not so much as part of a community but as part of a connectome.

For me, this epitomizes what learning technologies are all about. It would be difficult if not impossible to do this in an in-person setting: even if the researchers worked together on an online document, the simple fact that they met in person would utterly change the social dynamics, the pacing, and the structure. Indeed, even online, replicating this in a formal institutional context would be very difficult because of the power relationships, assessment requirements, motivational complexities and artificial schedules that formal institutions add to the assembly. This was an online-native way of learning of a sort I aspire to but seldom achieve in my own teaching.

The paper offers a foundational model or framework on which to build or situate further work, as well as providing a moderately succinct summary of a very significant percentage of the issues relating to generative AI and education as they exist today. Even if it were only ever referred to by each of its 47 authors, it would get more citations than most of my papers, but the paper is highly citeable in its own right, whether you agree with its statements or not. I know I am biased but, if you’re interested in the impacts of generative AI on education, I think it is a must-read.

The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., … Asino, T. I. (2024). The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis, 16(4), 487–513. https://doi.org/10.55982/openpraxis.16.4.777

Full list of authors:

  • Aras Bozkurt
  • Junhong Xiao
  • Robert Farrow
  • John Y. H. Bai
  • Chrissi Nerantzi
  • Stephanie Moore
  • Jon Dron
  • Christian M. Stracke
  • Lenandlar Singh
  • Helen Crompton
  • Apostolos Koutropoulos
  • Evgenii Terentev
  • Angelica Pazurek
  • Mark Nichols
  • Alexander M. Sidorkin
  • Eamon Costello
  • Steven Watson
  • Dónal Mulligan
  • Sarah Honeychurch
  • Charles B. Hodges
  • Mike Sharples
  • Andrew Swindell
  • Isak Frumin
  • Ahmed Tlili
  • Patricia J. Slagter van Tryon
  • Melissa Bond
  • Maha Bali
  • Jing Leng
  • Kai Zhang
  • Mutlu Cukurova
  • Thomas K. F. Chiu
  • Kyungmee Lee
  • Stefan Hrastinski
  • Manuel B. Garcia
  • Ramesh Chander Sharma
  • Bryan Alexander
  • Olaf Zawacki-Richter
  • Henk Huijser
  • Petar Jandrić
  • Chanjin Zheng
  • Peter Shea
  • Josep M. Duart
  • Chryssa Themeli
  • Anton Vorochkov
  • Sunagül Sani-Bozkurt
  • Robert L. Moore
  • Tutaleni Iita Asino

Abstract

This manifesto critically examines the unfolding integration of Generative AI (GenAI), chatbots, and algorithms into higher education, using a collective and thoughtful approach to navigate the future of teaching and learning. GenAI, while celebrated for its potential to personalize learning, enhance efficiency, and expand educational accessibility, is far from a neutral tool. Algorithms now shape human interaction, communication, and content creation, raising profound questions about human agency and biases and values embedded in their designs. As GenAI continues to evolve, we face critical challenges in maintaining human oversight, safeguarding equity, and facilitating meaningful, authentic learning experiences. This manifesto emphasizes that GenAI is not ideologically and culturally neutral. Instead, it reflects worldviews that can reinforce existing biases and marginalize diverse voices. Furthermore, as the use of GenAI reshapes education, it risks eroding essential human elements—creativity, critical thinking, and empathy—and could displace meaningful human interactions with algorithmic solutions. This manifesto calls for robust, evidence-based research and conscious decision-making to ensure that GenAI enhances, rather than diminishes, human agency and ethical responsibility in education.

Announcing the First International Symposium on Educating for Collective Intelligence (and some thoughts on collective intelligence)

First International Symposium on Educating for Collective Intelligence | UTS:CIC

Free-to-register International online symposium, December 5th, 2024, 12-3pm PST


This is going to be an important symposium, I think.

I will be taking 3 very precious hours out of my wedding anniversary to attend, though in fairness that was unintentional: I did not do the timezone conversion when I submitted my paper, so I thought it was the next day. However, I have not cancelled, despite the potentially dire consequences, partly because the line-up of speakers is wonderful, partly because we all use the words “collective intelligence” (CI) but we come from diverse disciplinary areas and sometimes mean very different things by them (so there will be some potentially inspiring conversations), and partly for a bigger reason that I will get to at the end of this post. You can read abstracts and most of the position papers on the symposium website.

In my own position paper I have invented the term ochlotecture (from the Classical Greek ὄχλος (ochlos), meaning something like “multitude” and τέκτων (tektōn) meaning “builder”) to describe the structures and processes of a collection of people, whether it be a small seminar group, a network of researchers, or a set of adherents to a world religion. An ochlotecture includes elements like names, physical/virtual spaces, structural hierarchies, rules, norms, mythologies, vocabularies, and purposes, as well as emergent phenomena occurring through individual and subgroup interactions, most notably the recursive cycle of information capture, processing, and (re)presentation that I think characterizes any CI. Through this lens, I can see both what is common and what distinguishes the different kinds of CI described in these position papers a bit more clearly. In fact, my own use of the term has changed a few times over the years, so it helps me make sense of my own thoughts on the matter too.

Where I’ve come from that leads me here

I have been researching CI and education for a long time. Initially, I used the term very literally to describe something very distinct from individual intelligence, and largely independent of it. My PhD, started in 1997, was inspired by the observation that (even then) there were at least tens of thousands of very good resources (people, discussions, tutorials, references, videos, courseware, etc.) openly available on the Web to support learners in most subject areas, resources that could meet almost any conceivable learning need. The problem was and remains how to find the right ones. These were pre-Google times but even the good-Google of olden days (a classic application of collective intelligence as I was using the term) only showed the most implicitly popular, not those that would best meet a particular learner’s needs. As a novice teacher, I also observed that, in a typical classroom, the students’ combined knowledge and ability to seek more of it far exceeded my own. I therefore hit upon the idea of using a nature-inspired evolutionary approach to collectively discover and recommend resources, which led me very quickly into the realm of evolutionary theory and thence to the dynamics of self-organizing systems, complex adaptive systems, stigmergy, flocking, city planning, markets, and collective intelligence.

And so I became an ochlotect. I built a series of self-organizing social software systems that used stuff like social navigation (stigmergy), evolutionary, and flocking algorithms to create environments that both shaped and were shaped by the crowd. Acknowledging that “intelligence” is a problematic word, I simply called these collectives, a name inspired by Star Trek TNG’s Borg (the pre-Borg-Queen Borg, before the writers got bored or lazy). The intelligence of a “pure” collective as I conceived it back then was largely to be found in the algorithm, not the individual agents. Human stock markets are no smarter than termite mounds by this way of thinking (and they are not). I was trying to amplify the intelligence of crowds while avoiding the stupidity of mobs by creating interfaces and algorithms that made value to learners a survival characteristic. I was building systems that played some of the roles of a teacher but that were powered by collectives consisting of learners.  Some years later, Mark Zuckerberg hit on the idea of doing the exact opposite, with considerably greater success, making a virtue out of systems that amplified collective stupidity, but the general principles behind both EdgeRank and my algorithms were similar.
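
To give a flavour of the general idea, here is a toy sketch in Python (not the actual systems I built; all names, weights, and thresholds are invented for the example) of what it looks like to make value to learners a survival characteristic:

```python
# A toy evolutionary recommender in the spirit described above: resources that
# learners judge valuable gain "fitness" and are more likely to be shown again;
# those that don't are eventually culled. Everything here is invented for
# illustration only.

import random
from dataclasses import dataclass


@dataclass
class Resource:
    title: str
    fitness: float = 1.0  # the survival characteristic: perceived value to learners


def recommend(population: list, k: int = 3) -> list:
    # Stigmergy-flavoured selection: fitter resources are more likely to be seen.
    return random.choices(population, weights=[r.fitness for r in population], k=k)


def feedback(resource: Resource, found_valuable: bool) -> None:
    # Learners' judgements of value feed back into fitness.
    resource.fitness *= 1.2 if found_valuable else 0.8


def cull(population: list, threshold: float = 0.3) -> list:
    # Resources that learners consistently ignore or dislike die off.
    return [r for r in population if r.fitness > threshold]


population = [Resource("tutorial"), Resource("forum thread"), Resource("video"), Resource("old slides")]
for _ in range(50):  # many rounds of learner interaction
    if not population:
        break
    for shown in recommend(population):
        feedback(shown, found_valuable=random.random() < 0.5)  # stand-in for real learner behaviour
    population = cull(population)

print([(r.title, round(r.fitness, 2)) for r in population])
```

In these terms, the difference from something like EdgeRank is largely a matter of what the fitness function rewards.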

When I say that I “built” systems, though, I mean that I built the software part. I came to increasingly realize that the largest part of all of them was always the human part: what the individuals did, and the surrounding context in which they did it, including the norms, the processes, the rules, the structures, the hierarchies, and everything else that formed the ochlotecture, was intrinsic to their success or failure. Some of those human-enacted parts were as algorithmic as the software environments I provided and were no smarter than those used by termites (e.g. “click on the results from the top of the list or in bigger fonts”), but many others were designed, and played critical roles. This slightly more complex concept of CI played a major supporting role in my first book, providing a grounded basis for the design of social software systems that could support maximal learner control. In it I wound up offering a set of 10 design principles that addressed human, organizational, pedagogical, and technological factors, as well as the emergent collective characteristics that were prerequisites if social software systems were to evolve to become educationally useful.

Collectives also formed a cornerstone of my work with Terry Anderson over the next decade or so, and our use of the term evolved further. In our first few papers, starting  in 2007, we conflated the dynamic process with the individual agents who made it happen: for us back then, a collective was the people and processes (a sort of cross between my original definition and a social configuration the Soviets were once fond of) and so we treated a collective as somewhat akin to a group or a network. Before too long we realized that was dumb and separated these elements out, categorizing three primary social forms (the set, the net, and the group) that could blend, and from which collectives could emerge and interact, as a different kind of ochlotectural entity altogether. This led us to a formal abstract definition of collectives that continues to get the odd citation to this day. We wrote a book about social media and learning in which this abstract definition of collectives figured largely, and designed The Landing to take advantage of it (not well – it was a learning experience). It appears in my position paper, too.

Collectives have come back with a vengeance but wearing different clothes in my work of the last decade, including my most recent book. I am a little less inclined to use the word “collective” now because I have come to understand all intelligence as collective, almost all of it mediated and often enacted through technologies. Technologies are the assemblies we construct from stuff to do stuff, and the stuff that they do then forms some of the stuff from which we construct more stuff to do stuff. A single PC alone, for instance, might contain hundreds of billions of instances of technologies in its assembly. A shelf of books might contain almost as many, not just in words and letters but in the concepts, theories, and models they make. As for the processes of making them, editing them, manufacturing the paper and the ink, printing them, distributing them, reading them, and so on… it’s a massive, constantly evolving, ever-adapting, partly biological system, not far off from natural ecosystems in its complexity, and equally diverse. Every use of a technology is also a technology, from words in your head to flying a space ship, and it becomes part of the stuff that can be organized by yourself or others. Through technique (technologies enacted intracranially), technologies are parts of us and we are parts of them, and that is what makes us smart.  Collective behaviour in humans can occur without technologies but what makes it collective intelligence is a technological connectome that grows, adapts, evolves, replicates, and connects every one of us to every other one of us: most of what we think is the direct result of assembling what we and others, stretching back in time and outward in space, have created. The technological connectome continuously evolves as we connect and orchestrate the vast web of technologies in which we participate, creating assemblies that have never occurred the same way twice, maybe thousands of times every day: have you ever even brushed your teeth or eaten a mouthful of cereal exactly the same way twice, in your whole life? Every single one of us is doing this, and quite a few of those technologies magnify the effects, from words to drawing to numbers to  writing to wheels to screws to ships to postal services to pedagogical methods to printing to newspapers to libraries to broadcast networks to the Internet to the World Wide Web to generative AI. It is not just how we are able to be individually smart: it is an indivisible part of that smartness. Or stupidity. Whatever. The jury is out. Global warming, widening inequality, war, epidemics of obesity, lies, religious bigotry, famine and many other dire phenomena are a direct result of this collective “intelligence”, as much as Vancouver, the Mona Lisa, and space telescopes. Let’s just stick with “collective”.

The obligatory LLM connection and the big reason I’m attending the symposium

My position paper for this symposium wanders a bit circuitously towards a discussion of the collective nature of large language models (LLMs) and their consequent global impact on our education systems. LLMs are collectives in their own right, with algorithms that are not only orders of magnitude more complex than any of their predecessors, but that are unique to every instantiation of them, operating from and on vast datasets, presenting results to users who also feed those datasets. This is what makes them capable of very convincingly simulating both the hard (inflexible, correct) and the soft (flexible, creative) technique of humans, which is both their super-power and the cause of the biggest threat they pose. The danger is a) that they replace the need to learn the soft technique ourselves (not necessarily a disaster if we use them creatively in further assemblies) and, more worryingly, b) that we learn ways of being human from collectives that, though made of human stuff, are not human. They will in turn become parts of all the rest of the collectives in which we participate. This can and will change us. It is happening now, frighteningly fast, even faster and at a greater scale than the similar changes that the Zuckerbergian style of social media has also brought about.

As educators, we should pay attention to this. Unfortunately, with its emphasis on explicit measurable outcomes, combined with the extrinsic lure of credentials, the ochlotecture of our chronically underfunded educational systems is not geared towards compensating for these tendencies. In fact, exactly the reverse. LLMs can already both teach and meet those explicit outcomes far more effectively than most humans, at a very compelling price, so, more and more, they will. Both students and teachers are replaceable components in such a system. The saving grace and/or problem is that, though they matter, and though they are how we measure educational success, those explicit outcomes are not in fact the most important ends of education, albeit that they are means to those ends.

The things that matter more are the human ways of thinking, of learning, and of seeing, that we learn while achieving such outcomes; the attitudes, values, connections, and relationships; our identities and the ways we learn to exist in our societies and cultures. It’s not just about doing and knowing: it’s about being, it’s about love, fear, wonder, and hunger. We don’t have to (and can’t) measure those because they all come for free when humans and the stuff they create are the means through which explicit outcomes are achieved. It’s an unavoidable tacit curriculum that underpins every kind of intentional and most unintentional learning we undertake, for better or (too often) for worse. It’s the (largely) non-technological consequence of the technologies in which we participate, and how we participate in them. Technologies don’t make us less human, on the whole: they are exactly what make us human.

We will learn such things from generative AIs, too, thanks to the soft technique they mimic so well, but what we will learn to be as a result will not be quite human. Worse, the outputs of the machines will begin to dominate their own inputs, and the rest will come from humans who have been changed by their interactions with them, like photocopies of photocopies, constantly and recursively degrading. In my position paper I argue that we therefore need to cherish the human parts of these new collectives in our education systems far more than we have before, and I suggest some ways of doing that. It matters not just to avoid model collapse in LLMs, but to prevent model collapse in the collective intelligence of the whole human race. I think that is quite important, and that’s the real reason I will spend some of my wedding anniversary talking with some very intelligent and influential people about it.


Recording and slides from my ESET 2023 keynote: Artificial humanity and human artificiality

Here are the slides from my keynote at ESET23 in Taiwan (I was online, alas, not in Taipei!).

I will try to remember to update this post with a link to the recording, when it is available.

Here’s a recording of the actual keynote.

The themes of my talk will be familiar to anyone who follows my blog or who has read my recent paper on the subject. This is about applying the coparticipation theory from How Education Works to generative AI, raising concerns about the ways it mimics the soft technique of humans, and discussing how problematic that will be if the skills it replaces atrophy or are never learned in the first place, amongst other issues.

This is the abstract:

We are participants in, not just users of technologies. Sometimes we participate as orchestrators (for instance, when choosing words that we write) and sometimes as part of the orchestration (for instance, when spelling those words correctly). Usually, we play both roles.  When we automate aspects of technologies in which we are just parts of the orchestration, it frees us up to be able to orchestrate more, to do creative and problem-solving tasks, while our tools perform the hard, mechanical tasks better, more consistently, and faster than we could ourselves. Collectively and individually, we therefore become smarter. Generative AIs are the first of our technologies to successfully automate those soft, open-ended, creative cognitive tasks. If we lack sufficient time and/or knowledge to do what they do ourselves, they are like tireless, endlessly flexible personal assistants, expanding what we can do alone. If we cannot draw, or draw up a rental agreement, say, an AI will do it for us, so we may get on with other things. Teachers are therefore scrambling to use AIs to assist in their teaching as fast as students use AIs to assist with their assessments.

For achieving measurable learning outcomes, AIs are or will be effective teachers, opening up greater learning opportunities that are more personalized, at lower cost, in ways that are superior to average human teachers.  But human teachers, be they professionals, other students, or authors of websites, do more than help learners to achieve measurable outcomes. They model ways of thinking, ways of being, tacit knowledge, and values: things that make us human. Education is a preparation to participate in human cultures, not just a means of imparting economically valuable skills. What will happen as we increasingly learn those ways of being from a machine? If machines can replicate skills like drawing, reasoning, writing, and planning, will humans need to learn them at all? Are there aspects of those skills that must not atrophy, and what will happen to us at a global scale if we lose them? What parts of our cognition should we allow AIs to replace? What kinds of credentials, if any, will be needed? In this talk I will use the theory presented in my latest book, How Education Works: Teaching, Technology, and Technique to provide a framework for exploring why, how, and for what purpose our educational institutions exist, and what the future may hold for them.

Pre-conference background reading, including the book, articles, and blog posts on generative AI and education may be found linked from https://howeducationworks.ca

Preprint – The human nature of generative AIs and the technological nature of humanity: implications for education

Here is a preprint of a paper I just submitted to MDPI’s Digital journal that applies the co-participation model that underpins How Education Works (and a number of my papers over the last few years) to generative AIs (GAIs). I don’t know whether it will be accepted and, even if it is, it is very likely that some changes will be required. This is a warts-and-all raw first submission. It’s fairly long (around 10,000 words).

The central observation around which the paper revolves is that, for the first time in the history of technology, recent generations of GAIs automate (or at least appear to automate) the soft technique that has, till now, been the sole domain of humans. Up until now, every technology we have ever created, be it physically instantiated, cognitive, organizational, structural, or conceptual, has left all of the soft part of the orchestration to human beings.

The fact that GAIs replicate the soft stuff is a matter for some concern when they start to play a role in education, mainly because:

  • the skills they replace may atrophy or never be learned in the first place. This is not even slightly like replacing hard skills of handwriting or arithmetic: we are talking about skills like creativity, problem-solving, critical inquiry, design, and so on. We’re talking about the stuff that GAIs are trained with.
  • the AIs themselves are an amalgam, an embodiment of our collective intelligence, not actual people. You can spin up any kind of persona you like and discard it just as easily. Much of the crucially important hidden/tacit curriculum of education is concerned with relationships, identity, ways of thinking, ways of being, ways of working and playing with others. It’s about learning to be human in a human society. It is therefore quite problematic to delegate how we learn to be human to a machine with (literally and figuratively) no skin in the game, trained on a bunch of signals signifying nothing but more signals.

On the other hand, to not use them in educational systems would be as stupid as to not use writing. These technologies are now parts of our extended cognition, intertwingled with our collective intelligence as much as any other technology, so of course they must be integrated in our educational systems. The big questions are not about whether we should embrace them but how, and what soft skills they might replace that we wish to preserve or develop. I hope that we will value real humans and their inventions more, rather than less, though I fear that, as long as we retain the main structural features of our education systems without significant adjustments to how they work, we will no longer care, and we may lose some of our capacity for caring.

I suggest a few ways we might avert some of the greatest risks by, for instance, treating them as partners/contractors/team members rather than tools, by avoiding methods of “personalization” that simply reinforce existing power imbalances and pedagogies designed for better indoctrination, by using them to help connect us and support human relationships, by doing what we can to reduce extrinsic drivers, by decoupling learning and credentials, and by doubling down on the social aspects of learning. There is also an undeniable explosion in adjacent possibles, leading to new skills to learn, new ways to be creative, and new possibilities for opening up education to more people. The potential paths we might take from now on are unprestatable and multifarious but, once we start down them, resulting path dependencies may lead us into great calamity at least as easily as they may expand our potential. We need to make wise decisions now, while we still have the wisdom to make them.

MDPI invited me to submit this article free of their normal article processing charge (APC). The fact that I accepted is therefore very much not an endorsement of APCs, though I respect MDPI’s willingness to accommodate those who find payment difficult, the good editorial services they provide, and the fact that all they publish is open. I was not previously familiar with the Digital journal itself. It has been publishing 4 articles a year since 2021, mostly offering a mix of reports on application designs and literature reviews. The quality seems good.

Abstract

This paper applies a theoretical model to analyze the ways that widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique performed creatively or idiosyncratically). Education may be seen as a technological process for developing the soft and hard techniques of humans to participate in the technologies and thus the collective intelligence of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain: the very things that technologies enabled us to do can now be done by the technologies themselves. The consequences for what, how, and even whether we learn are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20512771/preprint-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

10 minute chats on Generative AI – a great series, now including an interview with me

This is a great series of brief interviews between Tim Fawns and an assortment of educators and researchers from across the world on the subject of generative AI and its impact on learning and teaching.

The latest (tenth in the series) is with me.

Tim asked us all to come up with 3 key statements beforehand that he used to structure the interviews. I only realized that I had to do this on the day of the interview so mine are not very well thought-through, but there follows a summary of very roughly what I would have said about each if my wits were sharper. The reality was, of course, not quite like this. I meandered around a few other ideas and we ran out of time, but I think this captures the gist of what I actually wanted to convey:

Key statement 1: Most academics are afraid of AIs being used by students to cheat. I am afraid of AIs being used by teachers to cheat.

For much the same reasons that many of us balk at students using, say, ChatGPT to write part or all of their essays or code, I think we should be concerned when teachers use it to replace or supplement their teaching, whether it be for writing course outlines, assessing student work, or acting as intelligent tutors (to name but a few common uses).  The main thing that bothers me is that human teachers (including other learners, authors, and many more) do not simply help learners to achieve specified learning outcomes. In the process, they model ways of thinking, values, attitudes, feelings, and a host of other hard-to-measure tacit and implicit phenomena that relate to ways of being, ways of interacting, ways of responding, and ways of connecting with others. There can be huge value in seeing the world through another’s eyes, of interacting with them, adapting your responses, seeing how they adapt to yours, and so on. This is a critical part of how we learn the soft stuff, the ways of doing things, the meaning, the social value, the connections with our own motivations, and so on. In short, education is as much about being a human being, living in human communities, as it is about learning facts and skills. Even when we are not interacting but, say, simply reading a book, we are learning not just the contents but the ways the contents are presented, the quirks, the passions, the ways the authors think of their readers, their implicit beliefs, and so on.

While a generative AI can mimic this pretty well, it is by nature a kind of average, a blurry reconstruction mashed up from countless examples of the work of real humans. It is human-like, not human. It can mimic a wide assortment of nearly-humans without identity, without purpose, without persistence, without skin in the game. As things currently stand (though this will change) it is also likely to be pretty bland – good enough, but not great.

It might be argued that this is better than nothing at all, or that it augments rather than replaces human teachers, or that it helps with relatively mundane chores, or that it provides personalized support and efficiencies in learning hard skills, or that it allows teachers to focus on those human aspects, or even that using a generative AI is a good way of learning in itself. Right now and in the near future, this may be true, because we are in a system on the verge of disruption, not yet in the thick of it, and we come to it with all our existing skills and structures intact. My concern is what happens as it scales and becomes ubiquitous; as the bean-counting focus on efficiencies that relate solely to measurable outcomes increasingly crowds out the time spent with other humans; as the generative AIs feed on one another, becoming more and more divorced from their human originals; as the skills of teaching that are replaced by AIs atrophy in the next generation; as the time we spend with one another is replaced with time spent with not-quite-human simulacra; as the AIs themselves become more and more a part of our cognitive apparatus, in both what is learned and how we learn it. There are Monkeys’ Paws all the way down the line: for everything that might be improved, there are at least as many things that can and will get worse.

Key statement 2: We and our technologies are inherently intertwingled, so it makes no more sense to exclude AIs from the classroom than it would to exclude, say, books or writing. The big questions are about what we need to keep.

Our cognition is fundamentally intertwingled with the technologies that we use, both physical and cognitive, and those technologies are intertwingled with one another, and that’s how our collective intelligence emerges. For all the vital human aspects mentioned above, a significant part of the educational process is concerned with building cognitive gadgets that enable us to participate in the technologies of our cultures, from poetry and long division to power stations and web design. Through that participation our cognition is highly distributed, and our intelligence is fundamentally collective. Now that generative AIs are part of that, it would be crazy to exclude them from classrooms or to forbid their use in assessments. It does, however, raise more than a few questions about what cognitive activities we still need to keep for ourselves.

Technologies expand or augment what we can do unaided. Writing, say, allows us (among other things) to extend our memories. This creates many adjacent possibles, including sharing them with others, and allowing us to construct more complex ideas using scaffolding that would be very difficult to construct on our own because our memories are not that great.

Central to the nature of writing is that, as with most technologies, we don’t just use it but we participate in its enactment, performing part of the orchestration ourselves (for instance, we choose what words and ideas we write – the soft stuff), but also being part of its orchestration (e.g. we must typically spell words and use grammar sufficiently uniformly that others can understand them – the hard stuff).

In the past, we used to do nearly all of that writing by hand. Handwriting was a hard skill that had to be learned well enough that others could read what we had written, a process that typically required years of training and practice, demanding mastery of a wide range of technical proficiencies from spelling and punctuation to manual dexterity and the ability to sharpen a quill/fill a fountain pen/insert a cartridge, etc. To an increasingly large extent we have now offloaded many of those hard skills, first to typewriters and now to computers. While some of the soft aspects of handwriting have been lost – the cognitive processes that affect how we write and how we think, the expressiveness of the never-perfect ways we write letters on a page, etc. – this was a sensible thing to do. From a functional perspective, text produced by a computer is far more consistent, far more readable, far more adaptable, far more reusable, and far more easily communicated. Why should we devote so much effort and time to learning to be part of a machine when a machine can do that part for us, and do it better?

Something that can free us from having to act as an inflexible machine seems, by and large, like a good thing. If we don’t have to do it ourselves then we can spend more time and effort on what we do, how we do it, the soft stuff, the creative stuff, the problem-solving stuff, and so on. It allows us to be more capable, to reach further, to communicate more clearly. There are some really big issues relating to the constraints of handwriting – the relative difficulty of making corrections, the physicality of the movements, the ways our brains are changed by writing by hand – that result in different ways of thinking, some of which may be very valuable. But, as Postman wrote, all technologies are Faustian bargains involving losses and harms as well as gains and benefits. A technology that thrives is usually (at least in the short term) one in which the gains are perceived to outweigh the losses. And, even when largely replaced, old technologies seldom if ever die, so it is usually possible to retrieve what is lost, at least until the skills atrophy, components are no longer made, or they are designed to die (old printers with chip-protected cartridges that are no longer made, for instance).

What is fundamentally different about generative AIs, however, is that they allow us to offload exactly the soft, creative, problem solving aspects of our cognition, that technologies normally support and expand, to a machine. They provide extremely good pastiches of human thought and creativity that can act well enough to be considered as drop-in replacements. In many cases, they can do so a lot better – from the point of view of someone seeing only the outputs – than an average human. An AI image generator can draw a great deal better than me, for instance. But, given that these machines are now part of our extended, intertwingled minds, what is left for us? What parts of our minds should they or will they replace? How can we use them without losing the capacity to do at least some of the things they do better or as well as us? What happens if we lack those cognitive gadgets we never installed in our minds because AIs did it for us? This is not the same as, say, not knowing how to make a bow and arrow or write in cuneiform. Even when atrophied, such skills can be recovered. This is the stuff that we learn the other stuff for. It is especially important in the field of education which, traditionally at least, has been deeply concerned with cultivating the hard skills largely if not solely so that we can use them creatively, socially and productively once they are learned. If the machines are doing that for us, what is our role? This is not (yet) Kurzweil’s singularity, the moment when machines exceed our own intelligence and start to develop on their own, but it is the (drawn-out, fragmented) moment that machines have become capable of participating in soft, creative technologies on at least equal footing to humans. That matters. This leads to my final key statement.

Key statement 3: AIs create countless new adjacent possible empty niches. They can augment what we can do, but we need to go full-on Amish when deciding whether they should replace what we already do.

Every new creation in the world opens up new and inherently unprestatable adjacent possible empty niches for further creation, not just in how it can be used as part of new assemblies but in how it connects with those that already exist. It’s the exponential dynamic ratchet underlying natural evolution as much as technology, and it is what results in the complexity of the universe. The rapid acceleration in use and complexity of generative AIs – itself enabled by the adjacent possibles of the already highly disruptive Internet – that we have seen over the past couple of years has resulted in a positive explosion of new adjacent possibles, in turn spawning others, and so on, at a hitherto unprecedented scale and speed.

This is exactly what we should expect in an exponentially growing system. It makes it increasingly difficult to predict what will happen next, or what skills, attitudes, and values we will need to deal with it, or how we will be affected by it. As the number of possible scenarios increases at the same exponential rate, and the time between major changes gets ever shorter, patterns of thinking, ways of doing things, skills we need, and the very structures of our societies must change in unpredictable ways, too. Occupations, including in education, are already being massively disrupted, for better and for worse. Deeply embedded systems, from assessment for credentials to the mass media, are suddenly and catastrophically breaking. Legislation, regulations, resistance from groups of affected individuals, and other checks and balances may slightly alter the rate of change, but likely not enough to matter. Education serves both a stabilizing and a generative role in society, but educators are at least as unprepared and at least as disrupted as anyone else. We don’t – in fact we cannot – know what kind of world we are preparing our students for, and the generative technologies that now form part of our cognition are changing faster than we can follow. Any AI literacies we develop will be obsolete in the blink of an eye. And, remember, generative AIs are not just replacing hard skills. They are replacing the soft ones, the things that we use our hard skills to accomplish.

This is why I believe we would do well to heed the example of the Amish, who (contrary to popular belief) are not opposed to modern technologies but, in their communities, debate and discuss the merits and disadvantages of any technology that is available, considering the ways in which it might affect or conflict with their values, only adopting those agreed to be, on balance, good, and only doing so in ways that accord with those values. Different communities make different choices according to their contexts and needs. In order to do that, we have to have values in the first place. But what are the values that matter in education?

With a few exceptions (laws and regulations being the main ones) technologies do not determine how we will act but, through the ways they integrate with our shared cognition, existing technologies, and practices, they have a lot of momentum and, unchecked, generative AIs will inherit the values associated with what currently exists. In educational systems that are increasingly regulated by government mandates that focus on nothing but their economic contributions to industry, where success or failure is measured solely by proxy criteria like predetermined outcomes of learning and enrolments, where a millennium of path dependencies still embodies patterns of teacher control and indoctrination that worked for mediaeval monks and skillsets that suited the demands of factory owners during the industrial revolution, this will not end well. Now seems the time we most need to reassert and double down on the human, the social, the cultural, the societal, the personal, and the tacit value of our institutions. This is the time to talk about those values, locally and globally. This is the time to examine what matters, what we care about, what we must not lose, and why we must not lose it. Tomorrow it will be too late. I think this is a time of great risk but it is also a time of great opportunity, a chance to reflect on and examine the value and nature of education itself. Some of us have been wanting to have these conversations for decades.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20146256/10-minute-chats-on-generative-ai-a-great-series-now-including-an-interview-with-me

Cognitive prostheses and the future of the human race

I have lived in Canada for over 16 years so I was surprised when, a few months ago, a conference-friend whom I had not seen for many years contacted me to ask whereabouts in Malaysia I lived. I believe that they were misremembering a connection from a conversation long ago in which I’d mentioned a recent visit (it was in fact a day-trip from Singapore) and combining that with the accurate recollection that I no longer lived in the UK.

Not long after, I was on a panel discussing the impact of ChatGPT during which I prompted ChatGPT to introduce me.  Here was its first attempt in response to the prompt “tell me about Jon Dron”. I’ve highlighted things that are true in green, and things that are false in red (emphasized for those unable to see the colours):

Jon Dron is a Canadian educator, researcher, and writer. He is known for his work in the field of e-learning and collaborative technologies. Dron has a PhD in Educational Technology from the University of Calgary, and has held academic positions at several universities in Canada, including Athabasca University, where he is currently a professor.

Dron has written extensively on topics such as online learning, social media, and the use of technology in education. He is the author of several books, including “Teaching Crowds: Learning and Social Media” and “E-Learning Theory and Practice.”

In addition to his academic work, Dron is also a musician and has released several albums of ambient and experimental music.

I’d say that there is not much difference between the human and machine recollections. I would almost certainly make at least as many mistakes if I were asked to confidently describe a person I don’t know particularly well. In fact, I might make similar mistakes (not, please note, hallucinations) about quite close friends. Most of us don’t have eidetic memories: we reinvent recollections as much as we recall them. While there are surely many profound differences between how humans and large language models (LLMs) like ChatGPT process information, this is at least circumstantial evidence that some of the basic principles underlying artificial neural networks and biological neural networks are probably pretty similar. True, AIs do not know when they are making things up (or telling the truth, for that matter) but, in fairness, much of the time, neither do we. With a lot of intentional training we may be able to remember lines in a play or how to do long division but, usually, our recollections are like blurry JPEGs rather than RAW images.

Even for things we have intentionally learned to do or recall well, it is unusual for that training to stick without continual reinforcement, and mistakes are easily made. A few days ago I performed a set of around 30 songs (neither ambient nor experimental), most of which I had known for decades, all of which I had carefully practiced in the days leading up to the event to be sure I could play them as I intended. Here is a picture of me singing at that gig, drawn by my 6-year-old grandchild who was in attendance:

grandpa singing in the square


Despite my precautions and ample experience, in perhaps a majority of songs, I variously forgot words, chords, notes, and, in a couple of cases, whole verses. Combined with errors of execution (my fingers are not robotic, my voice gets husky) there was, I think, only one song in the whole set that came out more or less exactly as I intended. I have made such mistakes in almost every gig I have ever played. In fact, in well over 40 years as a performer, I have never played the same song in exactly the same way twice, though I have played some of them well over 10,000 times. Most of the variations are a feature, not a bug: they are where the expression lies. A performance is a conversation between performer, instruments, setting, and audience, not a mechanical copy of a perfect original. Nonetheless, my goal is usually to at least play the right notes and sing the right words, and I frequently fail to do that. Significantly, I generally know when I have done it wrong (typically a little before in a dread realization that just makes things worse) and adapt fairly seamlessly on the fly so, on the whole, you probably wouldn’t even notice it has happened, but I play much like ChatGPT responds to prompts: I fill in the things I don’t know with something more or less plausible. These creative adaptations are no more hallucinations than the false outputs of LLMs.

The fact that perfect recall is so difficult to achieve is why we need physical prostheses, to write things down, to look things up, or to automate them. Given LLMs’ weaknesses in accurate recall, it is slightly ironic that we often rely on computers for that. It is, though, considerably more difficult for LLMs to do this because they have no big pictures, no purposes, no plans, not even broad intentions. They don’t know whether what they are churning out is right or wrong, so they don’t know to correct it. In fact, they don’t even know what they are saying, period. There’s no reflection, no metacognition, no layers of introspection, no sense of self, nothing to connect concepts together, no reason for them to correct errors that they cannot perceive.

Things that make us smart

How difficult can it be to fix this? I think we will soon be seeing a lot more solutions to this problem because if we can look stuff up then so can machines, and more reliable information from other systems can be used to feed the input or improve the output of the LLM (Bing, for instance, has been doing so for a while now, to an extent). A much more intriguing possibility is that an LLM itself, or a subsystem of it, might not only look things up but also write and/or sequester code it needs to do things it is currently incapable of doing, extending its own capacity by assembling and remixing higher-level cognitive structures. Add a bit of layering, then throw in an evolutionary algorithm to kill off the less viable or effective, and you’ve got a machine that can almost intentionally learn, and know when it has made a mistake.
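
The looking-things-up part, at least, follows a familiar pattern. Here is a minimal, hedged sketch of the idea (the retrieve, generate, and grounded callables are invented stand-ins, not any particular product’s API): more reliable information from another system feeds the model’s input, and the output is checked against it before being used.

```python
# A rough sketch of the "look things up" idea: information retrieved from a
# more reliable system is fed into the model's input, and the output is
# checked against the retrieved sources before being used. The retrieve,
# generate, and grounded callables are hypothetical stand-ins, not a real API.

from typing import Callable, List


def answer_with_lookup(
    question: str,
    retrieve: Callable[[str], List[str]],        # e.g. a search engine or database query
    generate: Callable[[str], str],              # the LLM call itself
    grounded: Callable[[str, List[str]], bool],  # does the answer agree with the sources?
    retries: int = 2,
) -> str:
    sources = retrieve(question)  # feed the input...
    prompt = "Sources:\n" + "\n".join(sources) + "\n\nQuestion: " + question
    answer = generate(prompt)
    for _ in range(retries):      # ...and check/repair the output
        if grounded(answer, sources):
            return answer
        answer = generate(prompt + "\n\nYour last answer was not supported by the sources. Try again.")
    return "Not sure: the sources do not support a confident answer."


# Toy demonstration with trivial stand-ins for the three components:
facts = ["Athabasca University is in Alberta, Canada."]
print(answer_with_lookup(
    "Where is Athabasca University?",
    retrieve=lambda q: facts,
    generate=lambda p: facts[0],                     # a real LLM call would go here
    grounded=lambda a, s: any(a in src for src in s),
))
```

The harder part, as noted above, is that the checking has to come from somewhere outside the model, because the model itself has no way of knowing when it is wrong.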

Such abilities are a critical part of what makes humans smart, too. When discussing neural networks it is a bit too easy to focus on the underlying neural correlates of learning without paying much (if any) heed to the complex emergent structures that result from them – the “stuff” of thought – but those structures are the main things that make it work for humans. Like the training sets for large language models, the intelligence of humans is largely built from the knowledge gained from other humans through language, pedagogies, writing, drawing, music, computers, and other mediating technologies. Like an LLM, the cognitive technologies that result from this (including songs) are parts that we assemble and remix in order to analyze, synthesize, and create. Unlike most if not all existing LLMs, though, the ways we assemble them – the methods of analysis, the rules of logic, the pedagogies, the algorithms, the principles, and so on (that we have also learned from others) – are cognitive prostheses that play an active role in the assembly, allowing us to build, invent, and use further cognitive prostheses and so to recursively extend our capabilities far beyond the training set, as well as to diagnose our own shortfalls.

Like an LLM, our intelligence is also fundamentally collective, not just in what happens inside brains, but because our minds are extended, through tools, gadgets, rules, language, writing, structures, and systems that we enlist from the world as part of (not only adjuncts to) our thinking processes. Through technologies, from language to screwdrivers, we literally share our minds with others. For those of us who use them, LLMs are now as much parts of us as our own creative outputs are parts of them.

All of this means that human minds are part-technology (largely but not wholly instantiated in biological neural nets) and so our cognition is about as artificial as that of AIs. We could barely even think without cognitive prostheses like language, symbols, logic, and all the countless ways of doing and using technologies that we have devised, from guitars to cars. Education, in part, is a process of building and enlisting those cognitive prostheses in learners’ minds, and of enabling learners to build and enlist their own, in a massively complex, recursive, iterative, and distributed process, rich in feedback loops and self-organizing subsystems.

Choosing what we give up to the machine

There are many good ways to use LLMs in the learning process, as part of what students do. Just as it would be absurd to deny students the use of pens, books, computers, the Internet, and so on, it is absurd to deny them the use of AIs, including in summative assessments. These are now part of our cognitive apparatus, so we should learn how to participate in them wisely. But I think we need to be extremely cautious in choosing what we delegate to them, above all when using them to replace or augment some or all of the teaching role.

What makes AIs different from technologies of the past is that they perform a broadly similar process of cognitive assembly as we do ourselves, allowing us to offload much more of our cognition to an embodied collective intelligence created from the combined output of countless millions of people. Only months after the launch of ChatGPT, this is already profoundly changing how we learn and how we teach. It is disturbing and disruptive in an educational context for a number of reasons, such as that:

  • it may make it unnecessary for us to learn its skills ourselves, and so important aspects of our own cognition, not just things we don’t need (but which are they?), may atrophy;
  • if it teaches, it may embed biases from its training set and design (whose?) that we will inherit;
  • it may be a bland amalgam of what others have written, lacking originality or human quirks, and that is what we, too, will learn to do;
  • if we use it to teach, it may lead students towards an average or norm, not a peak;
  • it renders traditional forms of credentialling learning largely useless.

We need solutions to these problems or, at least, to understand how we will successfully adapt to the changes they bring, or whether we even want to do so. Right now, an LLM is not a mind at all, but it can be a functioning part of one, much as an artificial limb is a functioning part of a body or a cyborg prosthesis extends what a body can do. Whether we feel any particular limb that it (partly) replicates needs replacing, which system we should replace it with, and whether it is a good idea in the first place are among the biggest questions we have to answer. But I think there’s an even bigger problem we need to solve: the nature of education itself.

AI teachers

There are no value-free technologies, at least insofar as they are enacted and brought into being through our participation in them, and the technologies that contribute to our cognition, such as teaching, are the most value-laden of all, communicating not just the knowledge and skills they purport to provide but also the ways of thinking and being that they embody. It is not just what they teach or how effectively they do so, but how they teach, and how we learn to think and behave as a result, that matters.

While AI teachers might well make it easier to learn to do and remember stuff, building hard cognitive technologies (technique, if you prefer) is not the only thing that education does. Through education, we learn values, ways of connecting, ways of thinking, and ways of being with others in the world. In the past this has come for free when we learn the other stuff, because real human teachers (including textbook authors, other students, etc) can’t help but model and transmit the tacit knowledge, values, and attitudes that go along with what they teach. This is largely why in-person lectures work. They are hopeless for learning the stuff being taught, but the fact that students physically attend them makes them great for sharing attitudes and enthusiasm, bringing people together, and letting us see how other people think through problems and how they react to ideas. It is also why recordings of online lectures are much less successful: they do none of that, albeit that the benefits of being able to repeat and rewind somewhat compensate for the losses.

What happens, though, when we all learn how to be human from something that is not (quite) human? The tacit curriculum – the stuff through which we learn ways of being, not just ways of doing –  for me looms largest among the problems we have to solve if we are to embed AIs in our educational systems, as indeed we must. Do we want our children to learn to be human from machines that haven’t quite figured out what that means and almost certainly never will?

Many AI-Ed acolytes tell the comforting story that we are just offloading some of our teaching to the machine, making teaching more personal, more responsive, cheaper, and more accessible to more people, freeing human teachers to do more of the human stuff. I get that: there is much to be said for making the acquisition of hard skills and knowledge easier, cheaper, and more efficient. However, it is local thinking writ large. It solves the problems we have to solve today, problems caused by how we have chosen to teach, with all the centuries-long path dependencies and counter-technologies that entails, replacing technologies without asking why they exist in the first place.

Perhaps the biggest of the problems that the entangled technologies of education systems cause is the devastating effect of tightly coupled credentials (and their cousins, grades) on intrinsic motivation. Much of the process of good teaching is one of reigniting that intrinsic motivation or, at least, supporting the development of internally regulated extrinsic motivation, and much of the process of bad teaching is about going with the flow and using threats and rewards to drive the process. As long as credentials remain the primary reason for learning, and as long as they remain based on proof of easily measured learning outcomes provided through end-products like assignments and inauthentic tests, an AI that offers a faster, more efficient, and better-tailored way of achieving them will crowd out the rest. Human teaching will be treated as a minor and largely irrelevant interruption or, at best, a feel-good ritual with motivational perks for those who can afford it. And, as we are already seeing, students coerced to meet deadlines and goals imposed on them will use AIs to take shortcuts. Why do it yourself when a machine can do it for you?

The future

As we start to build AIs more like us, with metacognitive traits, self-set purposes, and the capacity for independent learning, the problem is just going to get bigger. Whether they are better or worse (they will be both), AIs will not be the same as us, yet they will increasingly seem so, and increasingly play human roles in the system. If the purpose of education is seen as nothing but short-term achievement of explicit learning outcomes and getting the credentials arising from that, then it would be better to let the machines achieve them so that we can get on with our lives. But of course that is not the purpose. Education is for preparing people to live better lives in better societies. It is why the picture of me singing above delights me more than anything ever created by an AI. It is why education is and must remain a fundamentally human process. Almost any human activity can be replaced by an AI, including teaching, but education is fundamental to how we become who we are. That’s not the kind of thing that I think we want to replace.

Our minds are already changing as they extend into the collective intelligence of LLMs – they must – and we are only at the very beginning of this story. Most of the changes that are about to occur will be mundane and complex, and the process will be punctuated but gradual, so we won’t really notice what has been happening until it has happened, by which time it may be too late. It is probably not an exaggeration to say that, unless environmental or other disasters bring it all to a halt, this is a pivotal moment in our history.

It is much easier to think locally, to think about what AIs can do to support or extend what we do now, than it is to imagine how everything will change as a result of everyone doing that at scale. It requires us to think in systems, which is not something most of us are educated or prepared to do. But we must do that, now, while we still can. We should not leave it to AIs to do it for us.

There’s much more on many of the underpinning ideas mentioned in this post, including references and arguments supporting them, in my freely downloadable or cheap-to-purchase latest book (of three, as it happens), How Education Works.

The artificial curriculum

“Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings” by Simone Grassini is a well-researched, concise but comprehensive overview of the state of play for generative AI (GAI) in education. It gives a very good overview of current uses, by faculty and students, and provides a thoughtful discussion of issues and concerns arising. It addresses technical, ethical, and pragmatic concerns across a broad spectrum. If you want a great summary of where we are now, with tons of research-informed suggestions as to what to do about it, this is a very worthwhile read.

However, underpinning much of the discussion is an implied (and I suspect unintentional) assumption that education is primarily concerned with achieving and measuring explicit specified outcomes. This is particularly obvious in the discussions of ways GAIs can “assist” with instruction. I have a problem with that.

There has been an increasing trend in recent decades towards the mechanization of education: modularizing rather than integrating, measuring what can be easily measured, creating efficiencies, focusing on an end goal of feeding industry, and so on. It has resulted in a classic case of the McNamara Fallacy, which starts with the laudable goal of measuring success, as much as we are able, and ends with that measure defining success, to the exclusion of anything we do not or cannot measure. Learning becomes the achievement of measured outcomes.

It is true that consistent, measurable, hard techniques must be learned to achieve almost anything in life, and that it takes sustained effort and study to achieve most of them, something educators can and should help with. Measurable learning outcomes, and what we do with them, matter. However, the more profound and, I believe, more important ends of education, regardless of the subject, are concerned with ways of being in the world, with other humans. It is the tacit curriculum that ultimately matters more: how education affects our attitudes and values, the ways we adapt, how we create, how we make connections, pursue our dreams, live fulfilling lives, and engage with our fellow humans as parts of cultures and societies.

By definition, the tacit curriculum cannot be meaningfully expressed in learning outcomes or measured on a uniform scale. It can be expressed only obliquely, if it can be expressed at all, in words. It is largely emergent and relational, expressed in how we are, interacting with one another, not as measurable functions that describe what we can do. It is complex, situated, and idiosyncratic. It is about learning to be human, not achieving credentials.

Returning to the topic of AI: to learn to be human from a blurry JPEG of the web, or autotune for knowledge, seems to me a very bad idea indeed, especially given that future models will increasingly be trained on the output of earlier ones.

The real difficulty that teachers face is not that students solve the problems set to them using large language models, but that in so doing they bypass the process, thus avoiding the tacit learning outcomes we cannot or choose not to measure. And the real difficulty that those students face is that, in delegating the teaching process to an AI, their teachers are bypassing the teaching process, thus failing to support the learning of those tacit outcomes or, at best, providing an averaged-out caricature of them. If we heedlessly continue along this path, it will wind up with machines teaching machines, with humans largely playing the roles of cogs and switches in them.

Some might argue that, if the machines do a good enough job of mimicry, then it really doesn’t matter that they happen to be statistical models with no feelings, no intentions, no connection, and no agency. I disagree. Just as it makes a difference whether a painting ascribed to Picasso is a fake or not, or whether a letter is faxed or delivered through the post, or whether this particular guitar was played by John Lennon, it matters that real humans are on each side of a learning transaction. It means something different for an artifact to have been created by another human, even if the form of the exchange, in words or whatever, is the same. Current large language models have flaws: they confidently spout falsehoods, fail to remember previous exchanges, and so on, so they are easy targets for criticism. However, I think it will be even worse when AIs are “better” teachers. When they seem endlessly tireless, patient, respectful, and responsive; when the help they give is unerringly accurate, personal, and targeted; when they draw on knowledge no one human could ever possess, they will not be modelling human behaviour. The best-case scenario is that they will not be teaching students how to be, only how to do, and that human teachers will provide the necessary tacit curriculum to support the human side of learning. However, the two are inseparable, so that is not particularly likely. The worst scenarios are that they will be teaching students how to be machines, or how to be an average human (with significant biases introduced by their training), or both.

And, frankly, if AIs are doing such a good job of it, then they are the ones who should be doing whatever it is that they are training students to do, not the students. This will most certainly happen: it already is (witness the current actors’ and screenwriters’ strikes). For all the disruption that results, it’s not necessarily a bad thing, because it increases the adjacent possible for everyone in so many ways. That’s why the illustration to this post was made to my instructions by Midjourney, not drawn by me. It does a much better job of it than I could do.

In a rational world we would not simply incorporate AI into teaching as we have always taught. It makes no more sense to let it replace teachers than it does to let it replace students. We really need to rethink what and why we are teaching in the first place. Unfortunately, such reinvention is rarely if ever how technology works. Technology evolves by assembly with and in the context of other technology, which is how come we have inherited mediaeval solutions to indoctrination as a fundamental mainstay of all modern education (there’s a lot more about such things in my book, How Education Works, if you want to know more about that). The upshot will be that, as we integrate rather than reinvent, we will keep on doing what we have always done, with a few changes to topics, a few adjustments in how we assess, and a few “efficiencies”, but we will barely notice that everything has changed because students will still be achieving the same kinds of measured outcomes.

I am not much persuaded by most apocalyptic visions of the potential threat of AI. I don’t think that AI is particularly likely to lead to the world ending with a bang, though it is true that more powerful tools do make it more likely that evil people will wield them. Artificial General Intelligence, though, especially anything resembling consciousness, is very little closer today than it was 50 years ago and most attempts to achieve it are barking in the wrong forest, let alone up the wrong tree. The more likely and more troubling scenario is that, as it embraces GAIs but fails to change how everything is done, the world will end with a whimper, a blandification, a leisurely death like that of lobsters in water coming slowly to a boil. The sad thing is that, by then, with our continued focus on just those things we measure, we may not even notice it is happening. The sadder thing still is that, perhaps, it already is happening.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/19390937/the-artificial-curriculum