The paper itself delves a bit into the theory and dynamics of genAI, cognition, and education. It draws heavily on how the theory in my last book has evolved, adding a few of its own refinements here and there, most notably in its distinction between use-as-purpose and use-as-process. Because genAIs are not tools but cognitive Santa Claus machines, this helps to explain how the use of genAI can simultaneously enhance and diminish learning, both individually and collectively, to varying degrees that range from cognitive apocalypse to cognitive nirvana, depending on what we define learning to be, whose learning we care about, and what kind of learning gets enhanced or diminished. A fair portion of the paper is taken up with explaining why, in a traditional credentials-driven, fixed-outcomes-focused institutional context, generative AI will usually fail to enhance learning and, in many typical learning and institutional designs, may even diminish our individual (and ultimately collective) capacity to learn. As always, it is only the whole assembly that matters, especially the larger structural elements, and genAI can easily short-circuit a few of those, making the whole seem more effective (courses seem to work better, students seem to display better evidence of success) while the things that actually matter get left out of the circuit.
The conclusion describes the broad characteristics of educational paths that will tend to lead towards learning enhancement by, first of all, focusing our energies on education’s social role in building and sharing tacit knowledge, then on ways of using genAI to do more than we could do alone, and, underpinning this, on expanding our definitions of what “learning” means beyond the narrow confines of “individuals meeting measurable learning outcomes”. The devil is in the detail and there are certainly other ways to get there than by the broad paths I recommend, but I think that, if we start with the assumption that our students are neither products nor consumers nor vessels for learning outcomes, but co-participants in our richly complex, ever-evolving, technologically intertwingled learning communities, we probably won’t go too far wrong.
Abstract:
Every technology we create, from this sentence to the Internet, changes us but, through generative AI (genAI), we can now access a kind of cognitive Santa Claus machine that invents other technologies, so the rate of change is rising exponentially. Educators struggle to maintain a balance between sustaining pre-genAI values and skills, and using the new possibilities genAIs offer. This paper provides a conceptual lens for understanding and responding to this tension. It argues that, on the one hand, educators must acknowledge and embrace the changes genAI brings to our extended cognition while, on the other, we must valorize and double down on the tacit curriculum, through which we learn ways of being human in the world.
This inaugural issue is a great start to what I think will come to be recognized as a leading journal in the field of AI and education. As not just an author but also an associate editor I am naturally a little biased, but I’m very picky about the journals I work with and this one ticks all the right boxes. It is genuinely open, without fees for authors or readers. It is explicitly very multidisciplinary. The editors – Mike Searson, Theo Bastiaens and Gary Marks – are truly excellent, and prominent in the field of online and technology-enhanced learning. The publisher, AACE, is a very well-oiled, prominent, professional, and likeable organization that has been a major player in the field for over 30 years, with extensive reach into institutional libraries the world over via LearnTechLib.
And the journal has an attitude that I like very much: it’s about learning enhancement through AI, not just AI and education. This fills a huge pragmatic need in an area where many practitioners are like deer caught in the headlights when it comes to thinking about what positive things we can do with our new robot friends/overlords/interlopers, and where too much of the conversation is implicitly focused on protecting the traditional forms and structures of our mediaeval education systems and the kinds of knowledge generative AI can more easily and effectively replicate.
This first issue crosses many disciplinary boundaries and aspects of the educational endeavour with a very diverse range of reflective papers by recognized experts in many facets of AI, education, and learning. All are ultimately optimistic about the potential for learning enhancement but few back away from the wicked problems and potential for the opposite effect. My own paper finds a thread of hope that we might not so much reinvent as simply notice what education currently does (it’s about learning to be as much as learning to do), and that we might recognize generative AIs not as tools but as cognitive Santa Claus machines, sharing their cognitive gifts to help us collectively achieve things we could not dream of before. It has a bit of theory to back that up.
If you have influence over such things, do encourage your libraries to subscribe!
For a few minutes the other day I thought that I had invented a new kind of fallacy or, at least, a great term to describe it. Disappointingly, a quick search revealed that it was not only an old idea but one that has been independently invented at least twice before (Berry & Martin, 1974; Weinstock, 1981). Here is its definition from Weinstock (1981):
“a synecdochic fallacy is a deceptive, misleading, erroneous, or false notion, belief, idea, or statement where a part is substituted for a whole, a whole for a part, cause for effect, effect for cause, and so on.”
Most synecdoches (syn-NEK-doh-kees in case you were wondering – I have been getting it totally wrong for decades) are positively useful. Synecdoches make aspects of a whole more salient by focusing on the parts. No one, for instance, thinks “all hands on deck” actually means the crew should put their hands on the deck let alone that disembodied hands should crew the ship, but it does focus on an aspect of the whole that is of great interest: that there is an expectation that those hands will be used to do what hands do. Equally, synecdoches can make the parts more salient by focusing on the whole. When we say “Canada beat the USA in the finals” no one thinks that one literal country got up and thrashed the other, but it draws attention to a symbolic aspect of a hockey game that reveals one of its richer social roles. It becomes a fallacy only when we take it literally. Unfortunately, doing so is surprisingly common in research about education and educational technologies.
Technologies as synecdoches
The labels we use for technologies are very liable to be synecdochic (syn-nek-DOH-kik if you were wondering): it is almost a defining characteristic. Technologies are assemblies, and parts of assemblies, often contained by other technologies, often containing an indeterminate number of technologies that themselves consist of indeterminate numbers of technologies, that participate in richly recursive webs of further technologies with dynamic boundaries, where the interplay of process, product, structure, and use constantly shifts and shimmers. The labels we give to technologies are as much descriptions of sets of dynamic relationships as they are of objects (cognitive, physical, virtual, organizational, etc) in the world, and the boundaries we use to distinguish one from another are very, very fluid.
There is no technology that cannot be combined with different others or in different ways in order to create a different whole. Without changing or adding anything to the physical assembly, a screwdriver, say, can be a paint stirrer, a pointer, a weapon, or unprestatably many other technologies, far from all of which are so easily labelled. Virtually every use of a technology is itself a technology, and it is often one that has never occurred in exactly the same way in the entire history of the universe. This sentence is one such technology: though there may be lots of sentences that are similar, the chances that anyone has ever used exactly this combination of words and punctuation before now are close to zero. Same for this post. This post has a title: that is the name of this technology, though it is a synecdoche for… what? The words it contains? Not quite, because now (literally as I write) it contains more of them but it is still this post. Is it still this post when it is syndicated? If the URL changes? Or the title? Or if I read it and turn it into a podcast? I don’t know. This sentence does not have a name, but it is no less a technology. So is your reading of it. So is much of what is involved in the sense you are making of it, and that is the technology that probably matters most right now. No one has ever made sense of anything in exactly this way, right now, the way you are doing it, and no one ever will. The technosphere is almost as awesomely complex as the biosphere and, in education, the technosphere extends deep into every learner, not just as an object of learning but as part of learning itself.
Synecdoches and educational/edtech research
Let’s say you wanted to investigate the effects of putting computers in classrooms. It seems reasonable enough: after all, it’s a big investment so you’d want to know whether it was worth it. But what do you actually learn from doing so apart from that, in this particular instance, with this particular set of orchestrations and uses, something happened? Yes, computers might have been prerequisites for it happening but so what? An infinite number of different things could have happened if you had done something else even slightly different with them, there are infinitely many other things you could have done that might have been better, and all bets would be off if the computers themselves had been different. The same is equally true for what happens in classrooms without computers. What can you predict as a result? Even if you were to find that, 100% of the time until now, computers in classrooms led to better/worse learning (whatever that might mean to you) I guarantee that I could find plenty of ways of using them to do the precise opposite. This is functionally similar to taking “all hands on deck” literally: the hands may be very salient but, without taking into account the people they are attached to and exactly what they are doing with those hands, there is little or no value in making comparisons. Averages, maybe; patterns, perhaps, as long as you can keep everything else more or less similar (e.g. a traditional formal school setting); but reliable predictions of cause and effect? No. Or anything that can usefully transfer to a different setting (e.g. unschooling or – ha – online learning)? Not at all.
Conversely, but following the same synecdochic logic, we might ask questions about the effectiveness of online and distance learning (the whole), comparing it with in-person learning. Both encompass immense numbers of wildly diverse technologies, including not just course and class technologies but things like pedagogical techniques, institutional structures, and national standards, instantiated with wildly varying degrees of skill and talent, all of which matter at least as much as the fact that it is online and at a distance. Many may matter more. This is functionally similar to taking “Canada beat the US” literally. It did not. It remains a fallacy even if, on average, Canada (the hockey team) does win more often, or if online and distance learning is generally more effective than in-person learning, whatever that means. The problem is that it does not distinguish which of the many millions of parts of the distance or the in-person orchestration of phenomena matter and, for the aforementioned and soon-to-be-mentioned reasons, it cannot.
Beyond causing physical harm – and even then with caveats – there is virtually nothing you could do or use to teach someone that, if you modified some other part of the assembly or organized the parts a little differently, could not have exactly the opposite effect the next time you do or use it. This sentence, say, will have quite different effects from the next despite using almost the exact same components. Almost components effects next the despite using different quite will sentence, say, this have the from exact. It’s a silly example and it is not difficult to argue that further components (rules of grammar, say) are sufficiently different that the comparison is flawed, but that’s exactly the point: all instantiations of educational technologies are different, in countless significant ways, each of which impacts lots of others which in turn impact others, in a complex adaptive system filled with positive and negative feedback loops, emergence, evolution, and random impacts from the systems that surround it. I didn’t actually even have to mix up the words. Had I repeated the exact same statement, its impact would have been different from the first because something else in the system had changed as a result of it: you and the sentence after. And this is just one sentence, and you are just one reader. Things get much more complex really fast.
In a nutshell, the synecdochic fallacy is why reductive research methods that serve us so well in the natural sciences are often completely inappropriate in the field of technology in general and education in particular. Natural science seeks and studies invariant phenomena but, because every use (at least in education) is a unique orchestration, technologies as they are actually enacted (i.e. the whole, including the current use) are never invariant and, even on those odd occasions that they do remain sufficiently similar for long enough to make study worthwhile, it just takes one small tweak to render useless everything we have learned about them.
All is not lost
There are lots of useful and effective kinds of research that we can do about educational technologies. Reductive science is great for identifying phenomena and what we can do with them in a technological assembly, and that can include other technologies that are parts of assemblies. It is really useful, say, to know about the properties of nuts and bolts used to build desks or computers, the performance characteristics of a database, or that students have persistent difficulties answering a particular quiz question. We can use this information to make good creative choices when changing or creating designs. Notice, though, that this is not a science of teaching or education. This is a science of parts and, if we do it with caution, their interactions with other parts. It is never going to tell us anything useful about, say, whether teaching to learning styles has any positive effect, whether direct instruction is better than problem-based learning, or whether blended learning is better than in-person or online learning, but it might help us build a better LMS or design a lesson or two more effectively, if (and only if) we used the information creatively and wisely.
Other effective methods involve the telling of rich stories that reveal phenomena of interest and reasons for or effects of decisions we made about putting them together: these can help others faced with similar situations, providing inspirations and warnings that might be very useful. If we find new ways of assembling or orchestrating the parts (we do something no one has done before) then it is really helpful to share what we have done: this helps others to invent because it expands the adjacent possible. Similarly we can look for patterns in the assembly that seem to work and that we can re-use (as parts) in other assemblies. We can sometimes come up with rules of thumb that might help us to (though never to predict that we will) build better new ones. We can share plans. We can describe reasons.
What this all boils down to is that we can and we should learn a great deal that is useful about the component technologies and we can and should seek broad patterns in the ways that they intertwingle. What we cannot do, either in principle or in practice, is use what we have learned to accurately predict anything specific about what happens when we put them together to support learning. It’s about improving the palette, not improving the painting. As Longo, Montévil, and Kauffman (2012) put it, in a complex system of this nature – and this applies as much to the biosphere, culture, and economics as it does to education and technology – there are no laws of entailment, just of enablement. We are firmly in the land of emergence, evolution, craft, design, and bricolage, not engineering, manufacture, and mass-production. I find this quite liberating.
References
Berry, K. J., & Martin, T. W. (1974). The Synecdochic Fallacy: A Challenge to Recent Research and Theory-Building in Sociology. Pacific Sociological Review, 17(2), 139–166. https://doi.org/10.2307/1388339
Longo, G., Montévil, M., & Kauffman, S. (2012). No entailing laws, but enablement in the evolution of the biosphere. Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, 1379–1392. https://doi.org/10.1145/2330784.2330946
Here’s a characteristically well-expressed and succinct summary of the complex nature of technologies, our relationships with them, and what that means for education by the ever-wonderful Tim Fawns. I like it a lot, and it expresses much of what I have tried to express about the nature and value of technologies, far better than I could and in far fewer words. Some of it, though, feels like it wants to be unpacked a little further, especially the notions that there are no tools, that tools are passive, and that tools are technologies. None of what follows contradicts or negates Tim’s points, but I think it helps to reveal some of the complexities.
There are tools
Tim starts provocatively with the claim that:
There are no tools. Tools are passive, neutral. They can be picked up and put down, used to achieve human goals without changing the user (the user might change, but the change is not attributed to the tool).
I get the point about the connection between tools and technology (in fact it is very similar to one I make in the “Not just tools” section of Chapter 3 of How Education Works) and I understand where Tim is going with it (which is almost immediately to consciously sort-of contradict himself), but I think it is a bit misleading to claim there are no tools, even in the deliberately partial and over-literal sense that Tim uses the term. This is because to call something a tool is to describe a latent or actual relationship between it and an agent (be it a person, a crow, or a generative AI), not just to describe the object itself. At the point at which that relationship is instantiated it very much changes the agent: at the very least, they now have a capability that they did not have before, assuming the tool works and is used for a purpose. Figuring out how to use the tool is not just a change to the agent but a change to what the agent may become that expands the adjacent possible. And, of course, many tools are intracranial so, by definition, having them and using them changes the user. This is particularly obvious when the tool in question is a word, a concept, a model, or a theory, but it is just as true of a hammer, a whiteboard, an iPhone, or a stick picked up from the ground with some purpose in mind, because of the roles we play in them.
Tools are not (exactly) technologies
Tim goes on to claim:
Tools are really technologies. Each technology creates new possibilities for acting, seeing and organising the world.
Again, he is sort-of right and, again, not quite, because “tool” is (as he says) a relational term. When it is used, a tool is always part of a technology because the technique needed to use it is a technology that is part of the assembly, and the assembly is the technology that matters. However, the thing that is used – the tool itself – is not necessarily a technology in its own right. A stick on the ground that might be picked up to hit something, point to something, or scratch something is simply a stick.
Tools are not neutral
Tim says:
So a hammer is not just sitting there waiting to be picked up, it is actively involved in possibility-shaping, which subtly and unsubtly entangles itself with social, cognitive, material and digital activity. A hammer brings possibilities of building and destroying, threatening and protecting, and so forth, but as part of a wider, complex activity.
I like this: by this point, Tim is telling us that there are tools and that they are not neutral, in an allusion to Culkin’s/McLuhan’s dictum that we shape our tools and thereafter our tools shape us. Every new tool changes us, for sure, and it is an active participant in cognition, not a non-existent neutral object. But our enactment of the technology in which the tool participates is what defines it as a tool, so we don’t so much shape it as we are part of the shape of it, and it is that participation that changes us. We are our tools, and our tools are us.
There is interpretive flexibility in this – a natural result of the adjacent possibles that all technologies enable – which means that any technology can be combined with others to create a new technology. An iPhone, say, can be used by anyone, including monkeys, to crack open nuts (I wonder whether that is covered by AppleCare?), but this does not make the iPhone neutral to someone who is enmeshed in the web of technologies of which the iPhone is designed to be a part. As the kind of tool (actually many tools) it is designed to be, it plays quite an active role in the orchestration: as a thing, it is not just used but using. The greater the pre-orchestration of any tool, the more its designers are co-participants in the assembled technology, and it can often be a dominant role that is anything but neutral.
Most things that we call tools (Tim uses the hammer as an example) are also technologies in their own right, regardless of their tooliness: they are phenomena orchestrated with a purpose, stuff that is organized to do stuff and, though softer tools like hammers have a great many adjacent possibles that provide almost infinite interpretive flexibility, they also – as Tim suggests – have propensities that invite very particular kinds of use. A good hardware store sells at least a dozen different kinds of hammer with slightly different propensities, labelled for different uses. All demand a fair amount of skill to use them as intended. Such stores also sell nail guns, though, that reduce the amount of skill needed by automating elements of the process. While they do open up many further adjacent possibles (with chainsaws, making them mainstays of a certain kind of horror movie), and they demand their own sets of skills to use them safely, the pre-orchestration in nail guns greatly reduces many of the adjacent possibles of a manual hammer: they aren’t much good for, say, prying things open, or using as a makeshift anchor for a kayak, or propping up the lid of a tin of paint. Interestingly, nor are they much use for quite a wide range of nail hammering tasks where delicacy or precision are needed. All of this is true because, as a nail driver, there is a smaller gap between intention and execution that needs to be filled than for even the most specialized manual hammer, due to the creators of the nail gun having already filled a lot of it, thus taking quite a few choices away from the tool user. This is the essence of my distinction between hard and soft technologies, and it is exactly the point of making a device of this nature. By filling gaps, the hardness simplifies many of the complexities and makes for greater speed and consistency which in turn makes more things possible (because we no longer have to spend so much time being part of a hammer) but, in the process, it eliminates other adjacent possibles. The gaps can be filled further. The person using such a machine to, say, nail together boxes on a production line is not so much a tool user as a part of someone else’s tool. Their agency is so much reduced that they are just a component, albeit a relatively unreliable component.
Being tools
In an educational context, a great deal of hardening is commonplace, which simplifies the teaching process and allows things to be done at scale. This in turn allows us to do something approximating reductive science, which gives us the comforting feeling that there is some objective value in how we teach. We can, for example, look at the effects of changes to pre-specified lesson plans on SAT results, if both lesson plans and SATs are very rigid, and infer moderately consistent relationships between the two, and so we can improve the process and measure our success quite objectively. The big problem here, though, is what we do not (and cannot) examine by such approaches, such as the many other things that are learned as a result of being treated as cogs in a mechanical system, the value of learning vs the value of grades, or our places in social hierarchies in which we are forced to comply with a very particular kind of authority. SATs change us, in many less than savoury ways. SATs also fail to capture more than a minuscule fraction of the potentially useful learning that also (hopefully) occurred. As tools for sorting learners by levels of competence, SATs are as far from neutral as you can get, and as situated as they could possibly be. As tools for learning or for evaluating learning they are, to say the least, problematic, at least in part because they make the learner a part of the tool rather than a user of it. Either way, you cannot separate them from their context because, if you did, it would be a different technology. If I chose to take a SAT for fun (and I do like puzzles and quizzes, so this is not improbable) it would be a completely different technology than for a student, or a teacher, or an administrator in an educational system. They are all, in very different ways, parts of the tool that is in part made of SATs. I would be a user of it.
All of this reinforces Tim’s main and extremely sound points, that we are embroiled in deeply intertwingled relationships with all of our technologies, and that they cannot be de-situated. I prefer the term “intertwingled” to the term “entangled” that Tim uses because, to me, “entangled” implies chaos and randomness but, though there may (formally) be chaos involved, in the sense of sensitivity to initial conditions and emergence, this is anything but random. It is an extremely complex system but it is highly self-organizing, filled with metastabilities and pockets of order, each of which acts as a further entity in the complex system from which it emerges.
It is incredibly difficult to write about the complex wholes of technological systems of this nature. I think the hardest problem of all is the massive amount of recursion it entails. We are in the realms of what Kauffman calls Kantian Wholes, in which the whole exists for and by means of the parts, and the parts exist for and by means of the whole, but we are talking about many wholes that are parts of or that depend on many other wholes and their parts that are wholes, and so on ad infinitum, often crossing and weaving back and forth so that we sometimes wind up with weird situations in which it seems that a whole is part of another whole that is also part of the whole that is a part of it, thanks to the fact that this is a dynamic system, filled with emergence and in a constant state of becoming. Systems don’t stay still: their narratives are cyclic, recursive, and only rarely linear. Natural language cannot easily do this justice, so it is not surprising that, in his post, Tim is essentially telling us both that tools are neutral and that they are not, that tools exist and that they do not, and that tools are technologies and they are not. I think that I just did pretty much the same thing.
I’m the 16th of 47 coauthors, led by the truly wonderful Junhong Xiao, who is the primary orchestrator and mastermind behind it. This is a companion piece to our Manifesto for Teaching and Learning in a Time of Generative AI and it starts where the other paper left off, delving further into what we don’t know (or at least do not agree that we know) and (taking up most of the paper) into what we might do about that lack of knowledge. I think this presents a pretty useful and wide-ranging research agenda for anyone with an interest in AI and education.
Methodologically, it emerged through a collaborative writing process among a very multinational group of researchers in open, digital, and online learning. It’s not a random sample of people who happen to know one another: the huge group represents a rich mix of (extremely) well-established and (excellent) emerging researchers from a broad set of cultural backgrounds, covering a wide range of research interests in the field. Junhong does a great job of extracting the themes and organizing all of that into a coherent narrative.
In many ways I like this paper more than its companion piece. I think this is because, though its findings are – as the title implies – less well-defined than the first, I am more closely aligned with the underlying assumptions, attitudes and values that underpin the analysis. It grapples more firmly with the wicked problems and it goes deeper into the broader, situated, human nature of the systems in which generative AI is necessarily intertwingled, skimming over the more simplistic conversations about cheating, reliability, and so on to get at some meatier but more fundamental issues that, ultimately, relate to how and why we do this education thing in the first place.
Abstract
Advocates of AI in Education (AIEd) assert that the current generation of technologies, collectively dubbed artificial intelligence, including generative artificial intelligence (GenAI), promise results that can transform our conceptions of what education looks like. Therefore, it is imperative to investigate how educators perceive GenAI and its potential use and future impact on education. Adopting the methodology of collective writing as an inquiry, this study reports on the participating educators’ perceived grey areas (i.e. issues that are unclear and/or controversial) and recommendations on future research. The grey areas reported cover decision-making on the use of GenAI, AI ethics, appropriate levels of use of GenAI in education, impact on learning and teaching, policy, data, GenAI outputs, humans in the loop and public–private partnerships. Recommended directions for future research include learning and teaching, ethical and legal implications, ownership/authorship, funding, technology, research support, AI metaphor and types of research. Each theme or subtheme is presented in the form of a statement, followed by a justification. These findings serve as a call to action to encourage a continuing debate around GenAI and to engage more educators in research. The paper concludes that unless we can ask the right questions now, we may find that, in the pursuit of greater efficiency, we have lost the very essence of what it means to educate and learn.
Reference
Xiao, J., Bozkurt, A., Nichols, M., Pazurek, A., Stracke, C. M., Bai, J. Y. H., Farrow, R., Mulligan, D., Nerantzi, C., Sharma, R. C., Singh, L., Frumin, I., Swindell, A., Honeychurch, S., Bond, M., Dron, J., Moore, S., Leng, J., van Tryon, P. J. S., … Themeli, C. (2025). Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research. TechTrends. https://doi.org/10.1007/s11528-025-01060-6
The content is in fact the product of two discussions, one coming from student questions at the end of a talk that I gave for the Asian University for Women just before Christmas, the other asynchronously with Stefanie herself.
Stefanie did a very good job of making sense of my rambling replies to the students, which spanned quite a few issues, including some from my book, How Education Works, some concerning (mainly) generative AI, and a little about the intersection of collective and artificial intelligence. Stefanie’s own prompts were great: they encouraged me to think a little differently, and to take some enjoyable detours along the way around the evils of learning management systems, artificially-generated music, and social media, as well as a discussion of the impact of generative AI on learning designers, thoughts on legislation to control AI, and assessment.
Here are the slides from that talk at AUW – I’ve not posted this separately because hardly any are new: it mostly cobbles together two recent talks, one for Contact North and the other my keynote for ICEEL ’24. The conversation afterwards was great, though, thanks to a wonderfully thoughtful and enthusiastic bunch of very smart students.
I’m proud to be the 7th of 47 authors on this excellent new paper, led by the indefatigable Aras Bozkurt and featuring some of the most distinguished contemporary researchers in online, open, mobile, distance, e- and [insert almost any cognate sub-discipline here] learning, as well as a few of us hanging on their coat tails like me.
As the title suggests, it is a manifesto: it makes a series of statements (divided into 15 positive and 20 negative themes) about what is or what should be, and it is underpinned by a firm set of humanist pedagogical and ethical attitudes that are anything but neutral. What makes it interesting to me, though, can mostly be found in the critical insights that accompany each theme, that capture a little of the complexity of the discussions that led to them, and that add a lot of nuance. The research methodology, a modified and super-iterative Delphi design in which all participants are also authors, is, I think, an incredibly powerful approach to research in the technology of education (broadly construed) that provides rigour and accountability without succumbing to science-envy.
Notwithstanding the lion’s share of the work of leading, assembling, editing, and submitting the paper being taken on by Aras and Junhong, it was a truly collective effort so I have very little idea about what percentage of it could be described as my work. We were thinking and writing together. Being a part of that was a fantastic learning experience for many of us, that stretched the limits of what can be done with tracked changes and comments in a Google Doc, with contributions coming in at all times of day and night and just about every timezone, over weeks. The depth and breadth of dialogue was remarkable, as much an organic process of evolution and emergence as intelligent design, and one in which the document itself played a significant participant role. I felt a strong sense of belonging, not so much as part of a community but as part of a connectome.
For me, this epitomizes what learning technologies are all about. It would be difficult if not impossible to do this in an in-person setting: even if the researchers worked together on an online document, the simple fact that they met in person would utterly change the social dynamics, the pacing, and the structure. Indeed, even online, replicating this in a formal institutional context would be very difficult because of the power relationships, assessment requirements, motivational complexities and artificial schedules that formal institutions add to the assembly. This was an online-native way of learning of a sort I aspire to but seldom achieve in my own teaching.
The paper offers a foundational model or framework on which to build or situate further work as well as providing a moderately succinct summary of a very significant percentage of the issues relating to generative AI and education as they exist today. Even if it only ever gets referred to by each of its 47 authors this will get more citations than most of my papers, but the paper is highly cite-able in its own right, whether you agree with its statements or not. I know I am biased but, if you’re interested in the impacts of generative AI on education, I think it is a must-read.
The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future
Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., … Asino, T. I. (2024). The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis, 16(4), 487–513. https://doi.org/10.55982/openpraxis.16.4.777
Full list of authors:
Aras Bozkurt
Junhong Xiao
Robert Farrow
John Y. H. Bai
Chrissi Nerantzi
Stephanie Moore
Jon Dron
Christian M. Stracke
Lenandlar Singh
Helen Crompton
Apostolos Koutropoulos
Evgenii Terentev
Angelica Pazurek
Mark Nichols
Alexander M. Sidorkin
Eamon Costello
Steven Watson
Dónal Mulligan
Sarah Honeychurch
Charles B. Hodges
Mike Sharples
Andrew Swindell
Isak Frumin
Ahmed Tlili
Patricia J. Slagter van Tryon
Melissa Bond
Maha Bali
Jing Leng
Kai Zhang
Mutlu Cukurova
Thomas K. F. Chiu
Kyungmee Lee
Stefan Hrastinski
Manuel B. Garcia
Ramesh Chander Sharma
Bryan Alexander
Olaf Zawacki-Richter
Henk Huijser
Petar Jandrić
Chanjin Zheng
Peter Shea
Josep M. Duart
Chryssa Themeli
Anton Vorochkov
Sunagül Sani-Bozkurt
Robert L. Moore
Tutaleni Iita Asino
Abstract
This manifesto critically examines the unfolding integration of Generative AI (GenAI), chatbots, and algorithms into higher education, using a collective and thoughtful approach to navigate the future of teaching and learning. GenAI, while celebrated for its potential to personalize learning, enhance efficiency, and expand educational accessibility, is far from a neutral tool. Algorithms now shape human interaction, communication, and content creation, raising profound questions about human agency and biases and values embedded in their designs. As GenAI continues to evolve, we face critical challenges in maintaining human oversight, safeguarding equity, and facilitating meaningful, authentic learning experiences. This manifesto emphasizes that GenAI is not ideologically and culturally neutral. Instead, it reflects worldviews that can reinforce existing biases and marginalize diverse voices. Furthermore, as the use of GenAI reshapes education, it risks eroding essential human elements—creativity, critical thinking, and empathy—and could displace meaningful human interactions with algorithmic solutions. This manifesto calls for robust, evidence-based research and conscious decision-making to ensure that GenAI enhances, rather than diminishes, human agency and ethical responsibility in education.
Here are the slides from my keynote at the 8th International Conference on Education and E-Learning in Tokyo yesterday. Sadly I was not actually in Tokyo for this, but the online integration was well done and there was some good audience interaction. I am also the conference chair (an honorary title) so I may be a bit biased, but I think it’s a really good conference, with an increasingly rare blend of both the tech and the pedagogical aspects of the field, and some wonderfully diverse keynotes ranging in subject matter from the hardest computer science to reflections on literature and love (thanks to its collocation with ICLLL, a literature and linguistics conference). My keynote was somewhere in between, and deliberately targeted at the conference theme, “Transformative Learning in the Digital Era: Navigating Innovation and Inclusion.”
As my starting point for the talk I introduced the concept of the technological connectome, about which I have just written a paper (currently under revision, hopefully due for publication in a forthcoming issue of the new Journal of Open, Distance, and Digital Education), which is essentially a way of talking about extended cognition from a technological rather than a cognitive perspective. From there I moved on to the adjacent possible and the exponential growth in technology that has, over the past century or so, reached such a breakneck rate of change that innovations such as generative AI, the transformation I particularly focused on (because it is topical), can transform vast swathes of culture and practice in months if not in weeks. This is a bit of a problem for traditional educators, who are as unprepared as anyone else for it, but who find themselves in a system that could not be more vulnerable to the consequences. At the very least it disrupts the learning outcomes-driven teacher-centric model of teaching that still massively dominates institutional learning the world over, both in the mockery it makes of traditional assessment practices and in the fact that generative AIs make far better teachers if all you care about are the measurable outcomes.
The solutions I presented and that formed the bulk of the talk, largely informed by the model of education presented in How Education Works, were mostly pretty traditional, emphasizing the value of community, and of passion for learning, along with caring about, respecting, and supporting learners. There were also some slightly less conventional but widely held perspectives on assessment, plus a bit of complexivist thinking about celebrating the many teachers and acknowledging the technological connectome as the means, the object and the subject of learning, but nothing Earth-shatteringly novel. I think this is as it should be. We don’t need new values and attitudes; we just need to emphasize those that are learning-positive rather than the increasingly mainstream learning-negative, outcomes-driven, externally regulated approaches that the cult of measurement imposes on us.
Post-secondary institutions have had to grapple with their learning-antagonistic role of summative assessment since not long after their inception so this is not a new problem but, until recent decades, the two roles have largely maintained an uneasy truce. A great deal of the impetus for the shift has come from expanding access to PSE. This has resulted in students who are less able, less willing, and less well-supported than their forebears who were, on average, far more advantaged in ability, motivation, and unencumbered time simply because fewer were able to get in. In the past, teachers hardly needed to teach. The students were already very capable, and had few other demands on their time (like working to get through college), so they just needed to hang out with smart people, some of whom knew the subject and could guide them through it so that they knew what to learn and whether they had been successful, along with the time and resources to support their learning. Teachers could be confident that, as long as students had the resources (libraries, lecture notes, study time, other students), they would be sufficiently driven by the need to pass the assessments and/or by intrinsic interest that they could largely be left to their own devices (OK, a slight caricature, but not far off the reality).
Unfortunately, though this is no longer even close to the norm, it is still the model on which most universities are based. Most of the time professors are still hired because of their research skills, not teaching ability, and it is relatively rare that they are expected to receive more than the most perfunctory training, let alone education, in how to teach. Those with an interest usually have opportunities to develop their skills but, if they do not, there are few consequences. Thanks to the technological connectome, the rewards and punishments of credentials continue to do the job well enough, notwithstanding the vast amounts of cheating, satisficing, student suffering, and lost love of learning that ensue. There are still plenty of teachers: students have textbooks, YouTube tutorials, other students, help sites, and ChatGPT, to name but a few, of which there are more every day. This is probably all that is propping up a fundamentally dysfunctional system. Increasingly, the primary value of post-secondary education comes to lie in its credentialling function.
No one who wants to teach wants this, but virtually all of those who teach in universities are the ones who succeeded in retaining their love of learning for its own sake despite it, so they find it hard to understand students who don’t. Too many (though, I believe, a minority) are positively hostile to their students as a result, believing that most students are lazy, willing to cheat, or to otherwise game the system, and they set up elaborate means of control and gotchas to trap them. The majority who want the best for their students, however, are also to blame, seeing their purpose as to improve grades, using “learning science” (which is like using colour theory to paint – useful, not essential) to develop methods that will, on average, do so more effectively. In fairness, though grades are not the purpose, they are not wrong about the need to teach the measurable stuff well: it does matter to achieve the skills and knowledge that students set out to achieve. However, it is only part of the purpose. Mostly, education is a means to less measurable ends; of forming identities, attitudes, values, ways of relating to others, ways of thinking, and ways of being. You don’t need the best teaching methods to achieve that: you just need to care, and to create environments and structures that support stuff like community, diversity, connection, sharing, openness, collaboration, play, and passion.
The keynote was recorded but I am not sure if or when it will be available. If it is released on a public site, I will share it here.
Free-to-register international online symposium, December 5th, 2024, 12-3pm PST
This is going to be an important symposium, I think.
I will be taking 3 very precious hours out of my wedding anniversary to attend, in fairness, unintentionally: I did not do the timezone conversion when I submitted my paper so I thought it was the next day. However, I have not cancelled despite the potentially dire consequences, partly because the line-up of speakers is wonderful, partly because we all use the words “collective intelligence” (CI) but we come from diverse disciplinary areas and we sometimes mean very different things by them (so there will be some potentially inspiring conversations) and partly for a bigger reason that I will get to at the end of this post. You can read abstracts and most of the position papers on the symposium website.
In my own position paper I have invented the term ochlotecture (from the Classical Greek ὄχλος (ochlos), meaning something like “multitude” and τέκτων (tektōn) meaning “builder”) to describe the structures and processes of a collection of people, whether it be a small seminar group, a network of researchers, or a set of adherents to a world religion. An ochlotecture includes elements like names, physical/virtual spaces, structural hierarchies, rules, norms, mythologies, vocabularies, and purposes, as well as emergent phenomena occurring through individual and subgroup interactions, most notably the recursive cycle of information capture, processing, and (re)presentation that I think characterizes any CI. Through this lens, I can see both what is common and what distinguishes the different kinds of CI described in these position papers a bit more clearly. In fact, my own use of the term has changed a few times over the years so it helps me make sense of my own thoughts on the matter too.
Where I’ve come from that leads me here
I have been researching CI and education for a long time. Initially, I used the term very literally to describe something very distinct from individual intelligence, and largely independent of it. My PhD, started in 1997, was inspired by the observation that (even then) there were at least tens of thousands of very good resources (people, discussions, tutorials, references, videos, courseware etc) openly available on the Web to support learners in most subject areas, that could meet almost any conceivable learning need. The problem was and remains how to find the right ones. These were pre-Google times but even the good-Google of olden days (a classic application of collective intelligence as I was using the term) only showed the most implicitly popular, not those that would best meet a particular learner’s needs. As a novice teacher, I also observed that, in a typical classroom, the students’ combined knowledge and ability to seek more of it far exceeded my own. I therefore hit upon the idea of using a nature-inspired evolutionary approach to collectively discover and recommend resources, that led me very quickly into the realm of evolutionary theory and thence to the dynamics of self-organizing systems, complex adaptive systems, stigmergy, flocking, city planning, markets, and collective intelligence.
And so I became an ochlotect. I built a series of self-organizing social software systems that used stuff like social navigation (stigmergy), evolutionary, and flocking algorithms to create environments that both shaped and were shaped by the crowd. Acknowledging that “intelligence” is a problematic word, I simply called these collectives, a name inspired by Star Trek TNG’s Borg (the pre-Borg-Queen Borg, before the writers got bored or lazy). The intelligence of a “pure” collective as I conceived it back then was largely to be found in the algorithm, not the individual agents. Human stock markets are no smarter than termite mounds by this way of thinking (and they are not). I was trying to amplify the intelligence of crowds while avoiding the stupidity of mobs by creating interfaces and algorithms that made value to learners a survival characteristic. I was building systems that played some of the roles of a teacher but that were powered by collectives consisting of learners. Some years later, Mark Zuckerberg hit on the idea of doing the exact opposite, with considerably greater success, making a virtue out of systems that amplified collective stupidity, but the general principles behind both EdgeRank and my algorithms were similar.
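To make that general mechanism a little more concrete, here is a purely illustrative sketch, a toy rather than the actual algorithms of any of the systems I built, of the stigmergic idea behind those collectives: learners leave traces on resources, later recommendations follow the traces, and value to learners acts as a survival characteristic because traces on unhelpful resources evaporate. All of the names and parameters below are invented for illustration.

```python
# A toy, purely illustrative sketch of a stigmergic/evolutionary recommender:
# learners leave traces on resources, later recommendations follow the traces,
# and "value to learners" becomes a survival characteristic because traces on
# unhelpful resources fade away. Names and numbers are invented, not taken
# from any real system.
import random
from dataclasses import dataclass

@dataclass
class Resource:
    url: str
    weight: float = 1.0  # pheromone-like trace left by earlier learners

class Collective:
    def __init__(self, resources, evaporation=0.05):
        self.resources = resources
        self.evaporation = evaporation  # traces decay, so stale resources die out

    def recommend(self, n=5):
        # Weighted sampling: resources that have helped learners are more
        # visible, but every resource keeps some chance of rediscovery.
        weights = [r.weight for r in self.resources]
        picks = random.choices(self.resources, weights=weights, k=n)
        unique, seen = [], set()
        for r in picks:
            if r.url not in seen:
                seen.add(r.url)
                unique.append(r)
        return unique

    def feedback(self, resource, helped: bool):
        # Helpful use reinforces the trace; unhelpful use erodes it; everything
        # evaporates a little, so only resources that keep being useful survive.
        resource.weight = max(resource.weight + (1.0 if helped else -0.5), 0.01)
        for r in self.resources:
            r.weight *= (1 - self.evaporation)
```

As with the real systems, the interesting behaviour comes less from the code than from the people feeding it: the same loop can amplify the intelligence of crowds or the stupidity of mobs, depending on what counts as a signal of value.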
When I say that I “built” systems, though, I mean that I built the software part. I came increasingly to realize that the largest part of all of them was always the human part: what the individuals did, and the surrounding context in which they did it, including the norms, the processes, the rules, the structures, the hierarchies, and everything else that formed the ochlotecture, was intrinsic to their success or failure. Some of those human-enacted parts were as algorithmic as the software environments I provided and were no smarter than those used by termites (e.g. “click on the results from the top of the list or in bigger fonts”), but many others were designed, and played critical roles. This slightly more complex concept of CI played a major supporting role in my first book, providing a grounded basis for the design of social software systems that could support maximal learner control. In it I wound up offering a set of 10 design principles that addressed human, organizational, pedagogical and tech factors as well as emergent collective characteristics that were prerequisites if social software systems were to evolve to become educationally useful.
Collectives also formed a cornerstone of my work with Terry Anderson over the next decade or so, and our use of the term evolved further. In our first few papers, starting in 2007, we conflated the dynamic process with the individual agents who made it happen: for us back then, a collective was the people and processes (a sort of cross between my original definition and a social configuration the Soviets were once fond of) and so we treated a collective as somewhat akin to a group or a network. Before too long we realized that was dumb and separated these elements out, categorizing three primary social forms (the set, the net, and the group) that could blend, and from which collectives could emerge and interact, as a different kind of ochlotectural entity altogether. This led us to a formal abstract definition of collectives that continues to get the odd citation to this day. We wrote a book about social media and learning in which this abstract definition of collectives figured largely, and designed The Landing to take advantage of it (not well – it was a learning experience). It appears in my position paper, too.
Collectives have come back with a vengeance but wearing different clothes in my work of the last decade, including my most recent book. I am a little less inclined to use the word “collective” now because I have come to understand all intelligence as collective, almost all of it mediated and often enacted through technologies. Technologies are the assemblies we construct from stuff to do stuff, and the stuff that they do then forms some of the stuff from which we construct more stuff to do stuff. A single PC alone, for instance, might contain hundreds of billions of instances of technologies in its assembly. A shelf of books might contain almost as many, not just in words and letters but in the concepts, theories, and models they make. As for the processes of making them, editing them, manufacturing the paper and the ink, printing them, distributing them, reading them, and so on… it’s a massive, constantly evolving, ever-adapting, partly biological system, not far off from natural ecosystems in its complexity, and equally diverse. Every use of a technology is also a technology, from words in your head to flying a space ship, and it becomes part of the stuff that can be organized by yourself or others. Through technique (technologies enacted intracranially), technologies are parts of us and we are parts of them, and that is what makes us smart. Collective behaviour in humans can occur without technologies but what makes it collective intelligence is a technological connectome that grows, adapts, evolves, replicates, and connects every one of us to every other one of us: most of what we think is the direct result of assembling what we and others, stretching back in time and outward in space, have created. The technological connectome continuously evolves as we connect and orchestrate the vast web of technologies in which we participate, creating assemblies that have never occurred the same way twice, maybe thousands of times every day: have you ever even brushed your teeth or eaten a mouthful of cereal exactly the same way twice, in your whole life? Every single one of us is doing this, and quite a few of those technologies magnify the effects, from words to drawing to numbers to writing to wheels to screws to ships to postal services to pedagogical methods to printing to newspapers to libraries to broadcast networks to the Internet to the World Wide Web to generative AI. It is not just how we are able to be individually smart: it is an indivisible part of that smartness. Or stupidity. Whatever. The jury is out. Global warming, widening inequality, war, epidemics of obesity, lies, religious bigotry, famine and many other dire phenomena are a direct result of this collective “intelligence”, as much as Vancouver, the Mona Lisa, and space telescopes. Let’s just stick with “collective”.
The obligatory LLM connection and the big reason I’m attending the symposium
My position paper for this symposium wanders a bit circuitously towards a discussion of the collective nature of large language models (LLMs) and their consequent global impact on our education systems. LLMs are collectives in their own right, with algorithms that are not only orders of magnitude more complex than any of their predecessors, but that are unique to every instantiation of them, operating from and on vast datasets, presenting results to users who also feed those datasets. This is what makes them capable of very convincingly simulating both the hard (inflexible, correct) and the soft (flexible, creative) technique of humans, which is both their super-power and the cause of the biggest threat they pose. The danger is a) that they replace the need to learn the soft technique ourselves (not necessarily a disaster if we use them creatively in further assemblies) and, more worryingly, b) that we learn ways of being human from collectives that, though made of human stuff, are not human. They will in turn become parts of all the rest of the collectives in which we participate. This can and will change us. It is happening now, frighteningly fast, even faster and at a greater scale than the similar changes that the Zuckerbergian style of social media has already brought about.
As educators, we should pay attention to this. Unfortunately, with their emphasis on explicit measurable outcomes, combined with the extrinsic lure of credentials, the ochlotecture of our chronically underfunded educational systems is not geared towards compensating for these tendencies. In fact, exactly the reverse. LLMs can already both teach and meet those explicit outcomes far more effectively than most humans, at a very compelling price so, more and more, they will. Both students and teachers are replaceable components in such a system. The saving grace and/or problem is that, though they matter, and they are how we measure educational success, those explicit outcomes are not in fact the most important ends of education, albeit that they are means to those ends.
The things that matter more are the human ways of thinking, of learning, and of seeing, that we learn while achieving such outcomes; the attitudes, values, connections, and relationships; our identities and the ways we learn to exist in our societies and cultures. It’s not just about doing and knowing: it’s about being, it’s about love, fear, wonder, and hunger. We don’t have to (and can’t) measure those because they all come for free when humans and the stuff they create are the means through which explicit outcomes are achieved. It’s an unavoidable tacit curriculum that underpins every kind of intentional and most unintentional learning we undertake, for better or (too often) for worse. It’s the (largely) non-technological consequence of the technologies in which we participate, and how we participate in them. Technologies don’t make us less human, on the whole: they are exactly what make us human.
We will learn such things from generative AIs, too, thanks to the soft technique they mimic so well, but what we will learn to be as a result will not be quite human. Worse, the outputs of the machines will begin to dominate their own inputs, and the rest will come from humans who have been changed by their interactions with them, like photocopies of photocopies, constantly and recursively degrading. In my position paper I argue for the need to therefore cherish the human parts of these new collectives in our education systems far more than we have before, and I suggest some ways of doing that. It matters not just to avoid model collapse in LLMs, but to prevent model collapse in the collective intelligence of the whole human race. I think that is quite important, and that’s the real reason I will spend some of my wedding anniversary talking with some very intelligent and influential people about it.
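For readers who like a concrete picture of the “photocopies of photocopies” worry, here is a deliberately crude toy simulation, an analogy rather than an LLM, of what happens when a generative model is repeatedly re-fitted to nothing but its own outputs: the diversity of what it produces tends to shrink, generation after generation. The numbers are arbitrary; only the qualitative tendency matters.

```python
# A crude toy analogy for model collapse: fit a simple "model" (a Gaussian) to
# data, generate new data only from that model, re-fit, and repeat. Because
# each generation sees only what the last model produced, variety that is lost
# never comes back, and the spread of outputs tends to dwindle over time.
import random
import statistics

def next_generation(samples):
    mu = statistics.mean(samples)        # "train" on the current data...
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]  # ...then generate from the model alone

data = [random.gauss(0, 1) for _ in range(30)]  # stand-in for the original "human" data
for g in range(201):
    if g % 50 == 0:
        print(f"generation {g:3d}: spread = {statistics.stdev(data):.3f}")
    data = next_generation(data)
```

The remedy argued for in the position paper is not so much technical as ochlotectural: keeping enough genuinely human participation in these new collectives that their inputs never become only their own outputs.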
For those with an interest, here are the slides from my webinar for Contact North | Contact Nord that I gave today: How to be an educational technology (warning: large download, about 32MB).
that how we do teaching matters more than what we do (“T’ain’t what you do, it’s the way that you do it”) and
that we can only understand the process if we examine the whole complex assembly of teaching (very much including the technique of all who contribute to it, including learners, textbooks, and room designers) not just the individual parts.
Along the way I had a few other things to say about why that must be the case, the nature of teaching, the nature of collective cognition, and some of the profound consequences of seeing the world this way. I had fun persuading ChatGPT to illustrate the slides in a style that was not that of Richard Scarry (ChatGPT would not do that, for copyright reasons) but that was reminiscent of it, so there are lots of cute animals doing stuff with technologies on the slides.
I rushed and rambled, I sang, I fumbled and stumbled, but I think it sparked some interest and critical thinking. Even if it didn’t, some learning happened, and that is always a good thing. The conversations in the chat went too fast for me to follow but I think there were some good ones. If nothing else, though I was very nervous, I had fun, and it was lovely to notice a fair number of friends, colleagues, and even the odd relative among the audience. Thank you all who were there, and thank you anyone who catches the recording later.