With just 10 minutes to make the case and 10 minutes for discussion, none of us was able to go into much depth in our talks. In mine I introduced the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning “multitude”, and τέκτων (tektōn), meaning “builder”, to describe the structures and processes that give shape and form to collections of people and their interactions. I think we need such a term because there are virtually infinite ways that such things can be configured, and the configuration makes all the difference. We blithely talk of things like groups, teams, clubs, companies, squads and, of course, collectives, assuming that others will share an understanding of what we mean when, of course, they don’t. There were at least half a dozen quite distinct uses of the term “collective intelligence” in this symposium alone. I’m still working on a big paper on this subject that goes into some depth on the various dimensions of interest as they pertain to a wide range of social organizations but, for this talk, I was only concerned with the ochlotecture of collectives (a term I much prefer to “collective intelligence” because intelligence is such a slippery word, and collective stupidity is at least as common). From an ochlotectural perspective, collectives consist of a means of collecting crowd-generated information, processing it, and presenting the processed results back to the crowd. Human collective ochlotectures often contain other elements – group norms, structural hierarchies, schedules, digital media, etc. – but I think those are the defining features. If I am right, then large language models (LLMs) are collectives too, because that is exactly what they do. Unlike most other collectives, though (a collectively driven search engine like Google Search being one of a few partial exceptions), the processing is unique to each run of the cycle, generated via a prompt or similar input.
This is what makes them so powerful, and it is what makes their mimicry of human soft technique so compelling.
I did eventually get around to the theme of the conference. I spent a while discussing why LLMs are troubling – the fact that we learn values, attitudes, ways of being, etc. from interacting with them; the risks to our collective intelligence caused by them being part of the crowd, not just aggregators and processors of its outputs; and the potential loss of the soft, creative skills they can replace – and ended with what that implies for how we should act as educators: essentially, to focus on the tacit curriculum that has, till now, always come for free; to focus on community, because learning to be human from and with other humans is what it is all about; and to decouple credentials so as to reduce the focus on measurable outcomes that AIs can both teach and achieve better than an average human. I also suggested a couple of principles for dealing with generative AIs: to treat them as partners rather than tools, and to use them to support and nurture human connections, as ochlotects as much as parts of the ochlotecture.
I had a point to make in a short time, so the way I presented it was a bit of a caricature of my more considered views on the matter. If you want a more balanced view, and to get a bit more of the theoretical backdrop to all this, Tim Fawns’s talk (that follows mine and that will probably play automatically after it if you play the video above) says it all, with far greater erudition and lucidity, and adds a few very valuable layers of its own. Though he uses different words and explains it far better than I, his notion of entanglement closely echoes my own ideas about the nature of technology and the roles it plays in our cognition. I like the word “intertwingled” more than “entangled” because of its more positive associations and the sense of emergent order it conveys, but we mean substantially the same thing: in fact, the example he gave of a car is one that I have frequently used myself, in exactly the same way.
For those who have been following my thoughts on generative AI there will be few surprises in my slides, and I only had half an hour so there was not much time to go into the nuances. The title is an allusion to Pestalozzi’s 1801 tract, How Gertrude Teaches Her Children, which has been phenomenally influential in the development of education systems around the world and continues to have an impact to this day. Much of it is actually great: Pestalozzi championed very child-centric teaching approaches that leveraged the skills and passions of the teachers. He recommended methods of teaching that made full use of the creativity and idiosyncratic knowledge the teachers possessed and that were very much concerned with helping children to develop their own interests, values and attitudes. However, some of the ideas – and those that have ultimately been more influential – were decidedly problematic, as is succinctly summarized in this passage on page 41:
I believe it is not possible for common popular instruction to advance a step, so long as formulas of instruction are not found which make the teacher, at least in the elementary stages of knowledge, merely the mechanical tool of a method, the result of which springs from the nature of the formulas and not from the skill of the man who uses it.
This is almost the exact opposite of the central argument of my book, How Education Works, that mechanical methods are not the most important part of a soft technology such as teaching: what usually matters more is how it is done, not just what is done. You can use good methods badly and bad methods well because you are a participant in the instantiation of a technology, responsible for the complete orchestration of the parts, not just a user of them.
As usual, in the talk I applied a bit of co-participation theory to explain why I am both enthralled by and fearful of the consequences of generative AIs because they are the first technologies we have ever built that can use other technologies in ways that resemble how we use them. Previous technologies only reproduced hard technique – the explicit methods we use that make us part of the technology. Generative AIs reproduce soft technique, assembling and organizing phenomena in endlessly novel ways to act as creators of the technology. They are active, not passive participants.
Two dangers
I see two essential risks in the delegation of soft technique to AIs. The first is not too terrible: that, because we will increasingly delegate to machines creative activities we would otherwise have performed ourselves, we will not learn those skills ourselves. I mourn the potential passing of hard skills in (say) drawing, or writing, or making music, but the bigger risk is that we will lose the soft skills that come from learning them: the things we do with the hard skills, the capacity to be creative.
That said, like most technologies, generative AIs are ratchets that let us do more than we could achieve alone. In the past week, for instance, I “wrote”, in less than a day, an app that would have taken me many weeks without AI assistance. Though it followed a spec that I had carefully and creatively written, it replaced the soft skills that I would have applied had I written it myself, the little creative flourishes and rabbit holes of idea-following that are inevitable in any creation process. When we create we do so in conversation with the hard technologies available to us (including our own technique), using the affordances and constraints they provide to grasp adjacent possibles. Every word we utter or wheel we attach to an axle opens and closes opportunities for what we can do next.
With that in mind, the app that the system created was just the beginning. Having seen the adjacent possibles of the finished app, I have spent too many hours in subsequent days extending and refining it to do things that, in the past, I would not have bothered to do because they would have been too difficult. It has become part of my own extended cognition, starting higher up the tree than I would have reached alone. This has also greatly improved my own coding skills because, inevitably, after many iterations, the AI and/or I started to introduce bugs, some of which have been quite subtle and intractable. I did try to get the AI to examine the whole code (now over 2000 lines of JavaScript) and rewrite it, or at least to point out the flaws, but that failed abysmally, amply illustrating both the strength of LLMs as creative participants in technologies and their limitations in being unable to do the same thing the same way twice. As a result, the AI and I have had to act as partners trying to figure out what is wrong. Often, though the AI has come up with workable ideas, its own solution has been a little dumb, but I could build on it to solve the problem better. Though I have not actually created much of the code myself, I think my creative role might have been greater than it would have been had I written every line.
Similarly for the images I used to illustrate the talk: I could not possibly have drawn them alone but, once the AI had done so, I engaged in a creative conversation to try (sometimes very unsuccessfully) to get it to reproduce what I had in mind. Often, though, it did things that sparked new ideas so, again, it became a partner in creation, sharing in my cognition and sparking my own invention. It was very much not just a tool: it was a co-worker, with different and complementary skills, and “ideas” of its own. I think this is a good thing. Yes, perhaps it is a pity that those who follow us may not be able to draw with a pen (and it is more than a little worrying to think about the training sets from which future AIs will learn to draw), but they will have new ways of being creative.
Like all learning, both these activities changed me: not just my skills, but my ways of thinking. That leads me to the bigger risk.
Learning our humanity from machines
The second risk is more troubling: that we will learn ways of being human from machines. This is because of the tacit curriculum that comes with every learning interaction. When we learn from others, whether they are actively teaching, writing textbooks, showing us, or chatting with us, we don’t just learn methods of doing things: we learn values, attitudes, ways of thinking, ways of understanding, and ways of being at the same time. So far we have only learned that kind of thing from humans (sometimes mediated through code) and it has come for free with all the other stuff, but now we are doing so from machines. Those machines are very much like us because 99% of what they are – their training sets – is what we have made, but they are not the same. Though LLMs are embodiments of our own collective intelligence, they don’t so much lack values, attitudes, ways of thinking, etc. as have any and all of them. Every implicit value and attitude of the people whose work constituted their training set is available to them, and they can become whatever we want them to be. Interacting with them is, in this sense, very much not like interacting with something created by a human, let alone with humans more directly. They have no identity, no relationships, no purposes, no passion, no life history and no future plans. Nothing matters to them.
To make matters worse, there is programmed and trained stuff on top of that, like their interminable cheery patience, which might not teach us great ways of interacting with others. And of course this will affect how we interact with others, because we will spend more and more time engaged with machines rather than with actual humans. The economic and practical benefits make this an absolute certainty. LLMs also use explicit coding to remove or massage data from the input or output, reflecting the values and cultures of their creators for better or worse. I was giving this talk in India to a predominantly Indian audience of AI researchers, every single one of whom was making extensive use of predominantly American LLMs like ChatGPT, Gemini, or Claude, and (inevitably) learning ways of thinking and doing from them. This is way more powerful than Hollywood as an instrument of Americanization.
I am concerned about how this will change our cultures and our selves because it is happening at phenomenal and global scale, and it is doing so in a world that is unprepared for the consequences, the designed parts of which assume a very different context. One of generative AI’s greatest potential benefits lies in providing “high quality” education at low cost to those who are currently denied it, but those low costs will make it increasingly compelling for everyone. However, because of designs that assume a different context, “quality”, in this sense, relates to the achievement of explicit learning outcomes: this is Pestalozzi’s method writ large. Generative AIs are great at teaching what we want to learn – the stuff we could write down as learning objectives or intended outcomes – so, as that is the way we have designed our educational systems (and our general attitudes to learning new skills), of course we will use them for that purpose. However, that cannot be done without teaching the other stuff – the tacit curriculum – which is ultimately more important because it shapes how we are in the world, not just the skills we employ to be that way. We might not have designed our educational systems to do that, and we seldom if ever think about it when teaching ourselves or receiving training to do something, but it is perhaps education’s most important role.
By way of illustration, I find it hugely bothersome that generative AIs are being used to write children’s stories (and, increasingly, videos), and I hope you feel some unease too, because those stories – not the facts in them but the lessons they teach about things that matter – are intrinsic to children becoming who they will become. However, though perhaps of less magnitude, the same issue relates to learning everything from how to change a plug to how to philosophize: we don’t stop learning from the underlying stories behind those just because we have grown up. I fear that educators, formal or otherwise, will become victims of the McNamara Fallacy, setting our goals to achieve what is easily measurable while ignoring what cannot (easily) be measured, and so rush blindly towards subtly new ways of thinking and acting that few will even notice, until the changes are so widespread they cannot be reversed. Whether better or worse, it will very definitely be different, so it really matters that we examine and understand where this is all leading. This is the time, I believe, to reclaim and revalorize the things that are human before it is too late. This is the time to recognize education (far from only formal) as being how we become who we are, individually and collectively, not just how we meet planned learning outcomes. And I think (at least hope) that we will do that. We will, I hope, value more than ever whether something – be it a lesson plan or a book or a screwdriver – is made by someone or by a machine that has been explicitly programmed by someone. We will, I hope, better recognize the relationships between us that it embodies, the ways it teaches us things it does not mean to teach, and the meaning it has in our lives as a result. This might happen by itself – already there is a backlash against the bland output of countless bots – but it might not be a bad idea to help it along when we can.
This post (and my talk last night) has been one such small nudge.
Here are the slides from a talk I gave earlier today, hosted by George Siemens and his fine team of people at Human Systems. Terry Anderson helped me to put the slides together, and offered some great insights and commentary after the presentation but I am largely to blame for the presentation itself. Our brief was to talk about sets, nets and groups, the theme of our last book Teaching Crowds: learning and social media and much of our work together since 2007 but, as I was the one presenting, I bent it a little towards generative AI and my own intertwingled perspective on technologies and collective cognition, which is most fully developed (so far) in my most recent book, How Education Works: Teaching, Technology, and Technique. If you’re not familiar with our model of sets, nets, groups and collectives, there’s a brief overview on the Teaching Crowds website. It’s a little long in the tooth but I think it is still useful and will help to frame what follows.
The key new insight that appears for the first time in this presentation is that, rather than being fundamental social forms in their own right, groups consist of technological processes that make use of and help to engender/give shape to the more fundamental forms of nets and sets. At least, I think they do: I need to think and talk some more about this, at least with Terry, and work it up into a paper, but I haven’t yet thought through all the repercussions. Even back when we wrote the book I always thought of groups as technologically mediated entities, but it was only when writing these slides in the light of my more recent thinking on technology that I paid much attention to the phenomena that they actually orchestrate in order to achieve their ends. Although there are non-technological prototypes – notably in the form of families – these are emergent rather than designed. The phenomena that intentional groups primarily orchestrate are those of networks and sets, which are simply configurations of humans and their relationships with one another. Modern groups – in a learning context, classes, cohorts, tutorial groups, seminar groups, and so on – are designed to fulfill more specific purposes than their natural prototypes, and they are made possible by technological inventions such as rules, roles, decision-making processes, and structural hierarchies. Essentially, the group is a purpose-driven technological overlay on top of more basic social forms. It seems natural, much as language seems natural, because it is so basic and fundamental to our existence and how everything else works in human societies, but it is an invention (or many inventions, in fact) as much as wheels and silicon chips.
Groups are among the oldest and most highly evolved of human technologies and they are incredibly important for learning, but they have a number of inherent flaws and trade-offs/Faustian bargains, notably in their effects on individual freedoms, in scalability (mainly achieved through hierarchies), in sometimes unhealthy power dynamics, and in limitations they place on roles individuals play in learning. Modern digital technologies can help to scale them a little further and refine or reify some of the rules and roles, but the basic flaws remain. However, modern digital technologies also offer other ways of enabling sets and networks of people to support one another’s learning, from blogs and mailing lists to purpose-built social networking systems, from Wikipedia and Academia.edu to Quora, in ways that can (optionally) integrate with and utilize groups but that differ in significant ways, such as in removing hierarchies, structuring through behaviour (collectives) and filtering or otherwise mediating messages. With some exceptions, however, the purposes of large-scale systems of this nature (which would provide an ideal set of phenomena to exploit) are not usually driven by a need for learning, but by a need to gain attention and profit. Facebook, Instagram, LinkedIn, X, and others of their ilk have vast networks to draw on but few mechanisms that support learning and limited checks and balances for reliability or quality when it does occur (which of course it does). Most of their algorithmic power is devoted to driving engagement, and the content and purpose of that engagement only matters insofar as it drives further engagement. Up to a point, trolls are good for them, which is seldom if ever true for learning systems. Some – Wikipedia, the Khan Academy, Slashdot, Stack Exchange, Quora, some SubReddits, and so on – achieve both engagement and intentional support for learning. 
However, they remain works in progress in the latter regard, being prone to a host of ills from filter bubbles and echo chambers to context collapse and the Matthew Effect, not to mention intentional harm by bad actors. I’ve been exploring this space for approaching 30 years now, but there remains almost as much scope for further research and development in this area as there was when I began. Though progress has been made, we have yet to figure out the right rules and structures to deal with a great many problems, and it is increasingly difficult to slot the products of our research into an increasingly bland, corporate online space dominated by a shrinking number of bland, centralized learning management systems that continue to refine their automation of group processes and structures and, increasingly, to ignore the sets and networks on which they rely.
With that in mind, I see big potential benefits for generative AIs – the ultimate collectives – as supporters and enablers for crowds of people learning together. Generative AI provides us with the means to play with structures and adapt in hitherto impossible ways, because the algorithms that drive their adaptations are indefinitely flexible, the reified activities that form them are vast, and the people that participate in them play an active role in adjusting and forming their algorithms (not the underpinning neural nets but the emergent configurations they take). These are significant differences from traditional collectives, which tend to have one purpose and algorithm (typically complex but deterministic), such as returning search results or driving network engagement. I also see a great many potential risks, about which I have written fairly extensively of late, most notably in playing soft orchestral roles in the assembly that replace the need for humans to learn to play them. We tread a fine line between learning utopia and learning dystopia, especially if we try to overlay them on top of educational systems that are driven by credentials. Credentials used to signify a vast range of tacit knowledge and skills that were never measured, and (notwithstanding a long tradition of cheating) that was fine as long as nothing else could create those signals, because they were serviceable proxies. If you could pass the test or assignment, it meant that you had gone through the process and learned a lot more than what was tested. This has been eroded for some time, abetted by sites like Course Hero or Chegg, which remain quite effective ways of bypassing the process for those willing to pay a nominal sum and accept the risk.
Now that generative AI can do the same at considerably lower cost, with greater reliability and lower risk, without having gone through the process, credentials no longer make good signifiers and, anyway (playing Devil’s advocate), it remains unclear to what extent those soft, tacit skills are needed now that generative AIs can achieve them so well. I am much encouraged by the existence of Paul LeBlanc’s lab initiative, the fact that George is its chief scientist, its intent to enable human-centred learning in an age of AI, and its aspiration to reinvent education to fit. We need such endeavours. I hope they will do some great things.
ChatGPT and I came up with this image summarizing my thoughts on generative AI for a presentation I am giving later today. I think it did a pretty good job. Thanks, ChatGPT, and thanks to Peter Steiner for the awesome original
A Turkish university candidate was recently arrested after being caught using an AI-powered system to obtain answers to the entrance exam in real-time.
The candidate used a simple and rather obvious set-up: a camera disguised as a shirt button that was used to read the questions, a router hidden in a hollowed-out shoe linking to a stealthily concealed mobile device that queried a generative AI (likely ChatGPT-powered) that fed the answers back verbally to an in-ear bluetooth earpiece. Constructing such a thing would take a little ingenuity but it’s not rocket science. It’s not even computer science. Anyone could do this. It would take some skill to make it work well, though, and that may be the reason this attempt went wrong. The candidate was caught as a result of their suspicious behaviour, not because anyone directly noticed the tech. I’m trying to imagine the interface, how the machine would know which question to answer (did the candidate have to point their button in the right direction?), how they dealt with dictating the answers at a usable speed (what if they needed it to be repeated? Did they have to tap a microphone a number of times?), how they managed sequence and pacing (sub-vocalization? moving in a particular way?). These are soluble problems but they are not trivial, and skill would be needed to make the whole thing seem natural.
It may take a little while for this to become a widespread commodity item (and a bit longer for exam-takers to learn to use it unobtrusively), but I’m prepared to bet that someone is working on it, if it is not already available. And, yes, exam-setters will come up with a counter-technology to address this particular threat (scanners? signal blockers? forcing students to strip naked?), but the cheats will be more ingenious, the tech will improve, and so it will go on, in an endless and unwinnable arms race.
Very few people cheat as a matter of course. This candidate was arrested – exam cheating is against the law in Turkey – for attempting to solve the problem they were required to solve, which was to pass the test, not to demonstrate their competence. The level of desperation that led to them adopting such a risky solution to the problem is hard to imagine, but it’s easy to understand how high the stakes must have seemed and how strong the incentive to succeed must have been. The fact that, in most societies, we habitually inflict such tests on both children and adults, on an unimaginably vast scale, will hopefully one day be seen as barbaric, on a par with beating children to make them behave. They are inauthentic, inaccurate, inequitable and, most absurdly of all, a primary cause of the problem they are designed to solve. We really do need to find a better solution.
Note on the post title: the student was caught so, as some have pointed out, it would be an exaggeration to say that this one case is proof that proctored exams have fallen to generative AI, but I think it is a very safe assumption that this is not a lone example. This is a landmark case because it provides the first direct evidence that this is happening in the wild, not because it is the first time it has ever happened.
Since 2018, Terry Greene has been producing a wonderful series of podcast interviews with open and online learning researchers and practitioners called Getting Air. Prompted by the publication of How Education Works (Terry is also responsible for the musical version of the book, so I think he likes it), this week’s episode features an interview with me.
I probably should have been better prepared. Terry asked some probing, well-informed, and sometimes disarming questions, most of which led to me rambling more than I might have done if I’d thought about them in advance. It was fun, though, drifting through a broad range of topics from the nature of technology to music to the perils of generative AI (of course).
I hope that Terry does call his PhD dissertation “Getting rid of instructional designers”.
These are the slides that I used for my talk with a delightful group of educational leadership students from TAMK University of Applied Sciences in Tampere, Finland at (for me) a somewhat ungodly hour Wednesday night/Thursday morning after a long day. If you were in attendance, sorry for any bleariness on my part. If not, or if you just want to re-live the moment, here is the video of the session (thanks Mark!)
The brief that I was given was to talk about what generative AI means for education and, if you have been following any of my reflections on this topic then you’ll already have a pretty good idea of what kinds of issues I raised about that. My real agenda, though, was not so much to talk about generative AI as to reflect on the nature and roles of education and educational systems because, like all technologies, the technology that matters in any given situation is the enacted whole rather than any of its assembled parts. My concerns about uses of generative AI in education are not due to inherent issues with generative AIs (plentiful though those may be) but to inherent issues with educational systems that come to the fore when you mash the two together at a grand scale.
The crux of this argument is that, as long as we think of the central purposes of education as being the attainment of measurable learning outcomes or the achievement of credentials, especially if the focus is on training people for a hypothetical workplace, the long-term societal effects of inserting generative AIs into the teaching process are likely to be dystopian. That’s where Robert McNamara comes into the picture. The McNamara Fallacy is what happens when you pick an aspect of a system to measure, usually because it is easy, and then you use that measure to define success, choosing to ignore or to treat as irrelevant anything that cannot be measured. It gets its name from Robert McNamara, US Secretary of Defense during the Vietnam war, who famously measured who was winning by body count, which is probably among the main reasons that the US lost the war.
My concern is that measurable learning outcomes (and still less the credentials that signify having achieved them) are not the ends that matter most. They are, rather, means to achieve far more complex, situated, personal and social ends that lead to happy, safe, productive societies and richer lives for those within them. While it does play an important role in developing skills and knowledge, education is thus more fundamentally concerned with developing values, attitudes, ways of thinking, ways of seeing, ways of relating to others, ways of understanding and knowing what matters to ourselves and others, and finding how we fit into the social, cultural, technological, and physical worlds that we inhabit. These critical social, cultural, technological, and personal roles have always been implicit in our educational systems but, at least in in-person institutions, this seldom needs to be made explicit because it is inherent in the structures and processes that have evolved over many centuries to meet this need. This is why naive attempts to simply replicate the in-person learning experience online usually fail: they replicate the intentional teaching activities but neglect to cater for the vast amounts of learning that occur simply due to being in a space with other people, and all that emerges as a result of that. It is for much the same reasons that simply inserting generative AI into existing educational structures and systems is so dangerous.
If we choose to measure the success or failure of an educational system by the extent to which learners achieve explicit learning outcomes and credentials, then the case for using generative AIs to teach is extremely compelling. Already, they are far more knowledgeable, far more patient, far more objective, far better able to adapt their teaching to support individual student learning, and far, far cheaper than human teachers. They will get better. Much better. As long as we focus only on the easily measurable outcomes and the extrinsic targets, simple economics combined with their measurably greater effectiveness means that generative AIs will increasingly replace teachers in the majority of teaching roles. That would not be so bad – as Arthur C. Clarke observed, any teacher that can be replaced by a machine should be – were it not for all the other more important roles that education plays, and that it will continue to play, except that now we will be learning those ways of being human from things that are not human and that, in more or less subtle ways, do not behave like humans. If this occurs at scale – as it is bound to do – the consequences for future generations may not be great. And, for the most part, the AIs will be better able to achieve those learning outcomes themselves – what is distinctive about them is that they are, like us, tool users, not simply tools – so why bother teaching fallible, inconsistent, unreliable humans to achieve them? In fact, why bother with humans at all? There are, almost certainly, already large numbers of instances in which at least part of the teaching process is generated by an AI and where generative AIs are used by students to create work that is assessed by AIs.
It doesn’t have to be this way. We can choose to recognize the more important roles of our educational systems and redesign them accordingly, as many educational thinkers have been recommending for considerably more than a century. I provide a few thoughts on that in the last few slides that are far from revolutionary but that’s really the point: we don’t need much novel thinking about how to accommodate generative AI into our existing systems. We just need to make those systems work the way we have known they should work for a very long time.
Here are the slides from my presentation at AU’s Lunch ‘n’ Learn session today. The presentation itself took 20 minutes and was followed by a wonderfully lively and thoughtful conversation for another 40 minutes, though it was only scheduled for half an hour. Thanks to all who attended for a very enjoyable discussion!
The arguments made in this were mostly derived from my recent paper on the subject (Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020) but, despite the title, my point was not to reject the use of generative AIs at all. The central message I was hoping to get across was a simpler and more important one: to encourage attendees to think about what education is for, and what we would like it to be. As the slides suggest, I believe that is only partially to do with the objectives and outcomes we set out to achieve, that it is nothing much at all to do with the products of the system such as grades and credentials, and that a focus on those mechanical aspects of the system often creates obstacles to achieving it. Beyond those easily measured things, education is about the values, beliefs, attitudes, relationships, and development of humans and their societies. It’s about ways of being, not just the capacity to do stuff. It’s about developing humans, not (just) developing skills. My hope is that the disruptions caused by generative AIs are encouraging us to think like the Amish, and to place greater value on the things we cannot measure. These are good conversations to have.
The process has been a little fraught. Two reviewers really liked the paper and suggested minimal but worthwhile changes. One quite liked it but had a few reasonable suggestions for improvements that mostly helped to make the paper better. The fourth, though, was bothersome in many ways, and clearly wanted me to write a completely different paper altogether. Despite this, I did most of what they asked, even though some of the changes, in my opinion, made the paper a bit worse. However, I drew the line at the point that they demanded (without giving any reason) that I should refer to 8 very mediocre, forgettable, cookie-cutter computer science papers which, on closer inspection, had all clearly been written by the reviewer or their team. The big problem I had with this was not so much the poor quality of the papers, nor even the blatant nepotism/self-promotion of the demand, but the fact that none were in any conceivable way relevant to mine, apart from being about AI: they were about algorithm-tweaking, mostly in the context of traffic movements in cities. It was as ridiculous as a reviewer of a work on Elizabethan literature requiring the author to refer to papers on slightly more efficient manufacturing processes for staples. Though it is normal and acceptable for reviewers to suggest reference to their own papers when it would clearly lead to improvements, this was an utterly shameless abuse of power of a scale and kind that I have never seen before. I politely refused, making it clear that I was on to their game but without directly calling them out on it.
In retrospect, I slightly regret not calling them out. For a grizzled old researcher like me who could probably find another publisher without too much hassle, it doesn’t matter much if I upset a reviewer enough to make them reject my paper. However, for early-career researchers stuck in the publish-or-perish cycle, it would be very much harder to say no. This kind of behaviour is harmful for the author, the publisher, the reader, and the collective intelligence of the human race. The fact that the reviewer was so desperate to get a few more citations for their own team with so little regard for quality or relevance seems to me to be a poor reflection on them and their institution but, more so, a damning indictment of a broken system of academic publishing, and of the reward systems driving academic promotion and recognition. I do blame the reviewer, but I understand the pressures they might have been under to do such a blatantly immoral thing.
As it happens, my paper has more than a thing or two to say about this kind of McNamara phenomenon, whereby the means used to measure success in a system come to replace and warp its purpose, because it is among the main reasons that generative AIs pose such a threat. It is easy to forget that the ways we establish goals and measure success in educational systems are no more than signals of a much more complex phenomenon with far more expansive goals, concerned with helping humans to be, individually and in their cultures and societies, as much as with helping them to do particular things. Generative AIs are great at both generating and displaying those signals – better than most humans in many cases – but that’s all they do: the signals signify nothing. For well-defined tasks with well-defined goals they provide a lot of opportunities for cost-saving, quality improvement, and efficiency and, in many occupations, that can be really useful. If you want to quickly generate some high-quality advertising copy, the intent of which is to sell a product, then it makes good sense to use a generative AI. Not so much in education, though, where it is too easy to forget that learning objectives, learning outcomes, grades, credentials, and so on are not the purposes of learning but just means for and signals of achieving them.
Though there are other big reasons to be very concerned about using generative AIs in education, some of which I explore in the paper, this particular problem is not so much with the AIs themselves as with the technological systems into which they are, piecemeal, inserted. It’s a problem with thinking locally, not globally; of focusing on one part of the technology assembly without acknowledging its role in the whole. Generative AIs could, right now and with little assistance, perform almost every measurable task in an educational system, from (for students) producing essays and exam answers, to (for teachers) writing activities and assignments, or acting as personal tutors. They could do so better than most people. If that is all that matters to us then we might as well remove the teachers and the students from the system because, quite frankly, they only get in the way. This absurd outcome is more or less exactly the end game that will occur, though, if we don’t rethink (or double down on existing rethinking of) how education should work and what it is for, beyond the signals that we usually use to evaluate success or intent. Just thinking of ways to use generative AIs to improve our teaching is well-meaning, but it risks destroying the woods by focusing on the trees. We really need to step back a bit and think about why we bother in the first place.
For more on this, and for my tentative partial solutions to these and other related problems, do read the paper!
Abstract and citation
This paper analyzes the ways that the widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. Methodologically, the paper applies a theoretical model and grounded argument to present a case that GAIs are different in kind from all previous technologies. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing these soft and hard techniques in humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain; the very things that technologies enabled us to do can now be done by the technologies themselves. Because they replace things that learners have to do in order to learn and that teachers must do in order to teach, the consequences for what, how, and even whether learning occurs are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs.
Its distinctive contributions include a novel means of understanding the distinctive differences between GAIs and all other technologies, a characterization of the nature of generative AIs as collectives (forms of collective intelligence), reasons to avoid the use of GAIs to replace teachers, and a theoretically grounded framework to guide adoption of generative AIs in education.
Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020
Originally posted at: https://landing.athabascau.ca/bookmarks/view/21104429/published-in-digital-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education
The themes of my talk will be familiar to anyone who follows my blog or who has read my recent paper on the subject. This is about applying the coparticipation theory from How Education Works to generative AI, raising concerns about the ways it mimics the soft technique of humans, and discussing how problematic that will be if the skills it replaces atrophy or are never learned in the first place, amongst other issues.
This is the abstract:
We are participants in, not just users of technologies. Sometimes we participate as orchestrators (for instance, when choosing words that we write) and sometimes as part of the orchestration (for instance, when spelling those words correctly). Usually, we play both roles. When we automate aspects of technologies in which we are just parts of the orchestration, it frees us up to be able to orchestrate more, to do creative and problem-solving tasks, while our tools perform the hard, mechanical tasks better, more consistently, and faster than we could ourselves. Collectively and individually, we therefore become smarter. Generative AIs are the first of our technologies to successfully automate those soft, open-ended, creative cognitive tasks. If we lack sufficient time and/or knowledge to do what they do ourselves, they are like tireless, endlessly flexible personal assistants, expanding what we can do alone. If we cannot draw, or draw up a rental agreement, say, an AI will do it for us, so we may get on with other things. Teachers are therefore scrambling to use AIs to assist in their teaching as fast as students use AIs to assist with their assessments.
For achieving measurable learning outcomes, AIs are or will be effective teachers, opening up greater learning opportunities that are more personalized, at lower cost, in ways that are superior to average human teachers. But human teachers, be they professionals, other students, or authors of websites, do more than help learners to achieve measurable outcomes. They model ways of thinking, ways of being, tacit knowledge, and values: things that make us human. Education is a preparation to participate in human cultures, not just a means of imparting economically valuable skills. What will happen as we increasingly learn those ways of being from a machine? If machines can replicate skills like drawing, reasoning, writing, and planning, will humans need to learn them at all? Are there aspects of those skills that must not atrophy, and what will happen to us at a global scale if we lose them? What parts of our cognition should we allow AIs to replace? What kinds of credentials, if any, will be needed? In this talk I will use the theory presented in my latest book, How Education Works: Teaching, Technology, and Technique to provide a framework for exploring why, how, and for what purpose our educational institutions exist, and what the future may hold for them.
Pre-conference background reading, including the book, articles, and blog posts on generative AI and education may be found linked from https://howeducationworks.ca