Just a metatool? Some thoughts on why generative AIs are not tools

Many people brush generative AI aside as being just a tool. ChatGPT describes itself as such (I asked). I think it’s more complicated than that, and this post is going to be an attempt to explain why. I’m not sure about much of what follows and welcome any thoughts you may have on whether this resonates with you and, if not, why not.

What makes something a tool

I think that to call something a tool is shorthand for it having all of the following 5 attributes:

  1. It is an object (physical, digital, cognitive, procedural, organizational, structural, conceptual, spiritual, etc. – i.e. the thing we normally identify as the tool),
  2. used with/designed for a purpose, that
  3. can extend the capabilities of an actor (an intelligent agent, typically human), who
  4. may perform an organized action or series of actions with it, that
  5. cause changes to a subject other than the tool itself (such as a foodstuff, a piece of paper, a mental state, or a configuration of bits).

More informally, less precisely, but perhaps more memorably:

A tool is something that an intelligent agent does something with in order to do something to something else

Let me unpack that a bit.

A pebble used as a knife sharpener is a tool, but one used to reinforce concrete is not. A pen used to write on paper is a tool, but the paper is not. The toolness in each case emerges from what the agent does and the fact that it is done to something, in order to achieve something (a sharp knife, some writing).

Any object we label as a tool can become part of another tool with a different organization. A screwdriver can become an indefinitely large number of other tools apart from one intended for driving screws. In fact, almost anything can become a tool with the right organization. The paper can be a tool if it is, say, used to scoop up dirt. And, when I say “paper”, remember that this is the label for the object I am calling a tool, but it is the purpose, what it does, how it is organized, and the subject it acts upon that makes it so.

It is not always easy to identify the “something else” that a tool affects. A saw used to cut wood is an archetypal tool, but a saw played with a bow to make music is, I think, not. Perhaps the bow is a tool, and maybe we could think of the saw as a tool acting on air molecules, but I think we tend to perceive it as the thing that is acted upon rather than the thing we do something with.

Toolness is intransitive: a computer may be a tool for running programs, and a program running on it may be a tool that fixes a corrupt disk, but a computer is not a tool for fixing a corrupt disk.

A great many tools are also technologies in their own right. The intention and technique of the tool maker combine with those of the tool user, so the tool user may achieve more (or more reliably, faster, more consistently, etc.) than would be possible without both. A fountain pen adds more to the writing assembly than a quill, for instance, and so demands less of the writer. Many tools are partnerships of this nature, allowing the cognition of more than one person to be shared. This is the ratchet that makes humans smart.

Often, the organization performed by the maker of a technology entirely replaces that of the tool user. A dish sponge is a tool, but a dishwasher is not: it is an appliance. Some skill is needed to load it but the dishwashing itself – the purpose for which it is designed – is entirely managed by the machine.

The case is less clear for an appliance like, say, a vacuum cleaner. I think this is because there are two aspects to the device: the mechanism that autonomously sucks dirt is what makes it an appliance, but the hose (or whatever) used to select the dirt to be removed is a tool. This is reflected in common usage, inasmuch as a vacuum cleaner is normally sold with what are universally described as tools (i.e. the things that a person actively manipulates). The same distinction is still there in a handheld machine, too – in fact, many come with additional tools – though I would be much more comfortable describing the whole device as a tool, because that’s what is manipulated to suck up the dirt. Many power tools fit in this category: they do some of the work autonomously but they are still things people do something with in order to do something to something else.

Humans can occasionally be accurately described as tools: the movie Swiss Army Man, for instance, features Daniel Radcliffe as a corpse that turns out to have many highly inventive uses. For real live humans, though, the case is less clear. Employees in scripted call centres or teachers following scripted lesson plans are more like appliances than tools: having been “programmed”, they run autonomously, so the scripts may be tools but the people are not. Most other ways of using other people are even less tool-like. If I ask you to pick up some shopping for me, say, then my techniques of persuasion may be tools, but you are the one organizing phenomena to shop, which is the purpose in question.

The case is similar for sheepdogs (though they are not themselves tool users), which I would be reluctant to label as tools, though skills are clearly needed to make them do our bidding and they do serve tool-like purposes as part of the technology of shepherding. The tools, though, are the commands, methods of training, treats, and so on, not the animals themselves.

Why generative AIs are not tools

For the same reasons of intransitivity that dishwashers, people, and sheepdogs are not normally tools, neither are generative AIs. Prompts and other means of getting AIs to do our bidding are tools, but generative AIs themselves work autonomously. This comes with the proviso that almost anything can be repurposed, so there is nothing that is not at least latently a tool but, at least in their most familiar guises, generative AIs tend not to be.

Unlike the work of conventional appliances, but more like that of sheepdogs, the work generative AIs perform is neither designed by humans nor scrutable to us. Unlike sheepdogs, but more like humans, generative AIs are tool users, too: not just (or not so much) of words, but of libraries, programming languages, web crawlers, filters, and so on. Unlike humans, though, generative AIs act with their users’ intentions, not their own, expressed through the tools with which we interact with them. They are a bit like partial brains, perhaps: remarkably capable but neither aware of nor able to use that capability autonomously.

It’s not just chatbots. Many recommender systems and search engines (increasingly incorporating deep learning) also sit uncomfortably in the category of tools, though they are often presented as such. Amazon’s search, say, is not (primarily) designed to help you find what you are looking for but to push things at you that Amazon would like you to buy, which is why you must trawl through countless not-quite-right things despite it being perfectly capable of exactly matching your needs. If it is anyone’s tool, it is Amazon’s, not ours. The same goes for a Google search: the tools are your search terms, not Google Search, and it is acting quite independently in performing the search and returning results that are likely more beneficial to Google than to you. This is not true of all search systems. If I search for a file on my own computer then, if it fails to provide what I am looking for, it is a sign that the tool (and I think it is a tool because the results should be entirely determinate) is malfunctioning. Back in those far-off days when Amazon wanted you to find what you wanted or Google tried to provide the closest match to your search term, if not tools then we could at least think of them as appliances designed to be controlled by us.

I think we need a different term for these things. I like “metatool” because it is catchy and fairly accurate. A metatool is something that uses tools to do our bidding, not a tool in its own right. It is something that we use tools to act upon and that is itself a tool user. I think this is better than a lot of other metaphors we might use: slave, assistant (Claude describes itself, incidentally, not as ‘merely’ a tool, but as an intelligent assistant), partner, co-worker, contractor, etc. all suggest more agency and intention than generative AIs actually possess, while appliance, machine, device, etc. fail to capture the creativity, tailoring, and unpredictability of the results.

Why it matters

The big problem with treating generative AIs as tools is that it overplays our own agency and underplays the creative agency of the AI. It encourages us to think of them, like actual tools, as cognitive prostheses: ways of augmenting and amplifying but still using and preserving human cognitive capabilities, when what we are actually doing is using theirs. It also encourages us to think the results will be more deterministic than they actually are. This is not to negate the skill needed to use prompts effectively, nor to underplay the need to understand what the prompt is acting upon. Just as the shepherd needs to know the sheepdog, the genAI user has to know how their tools will affect the medium.

Like all technologies, these strange partial brains effectively enlarge our own. All other technologies, though, embed or embody other humans’ thinking and/or our own. Though largely consisting of the compressed expressed thoughts of millions of people, AI’s thoughts are not human thoughts: even using the most transparent of them, we have very little access to the mechanisms behind their probabilistic deliberations. And yet, nor are they independent thinking agents. As with any technology, we might think of them as cognitive extensions but, if they are, then it is as though we have undergone an extreme form of corpus callosotomy, or we are experiencing something like Jaynes’s bicameral mind. Generative AIs are their own thing: an embodiment of collective intelligence as well as contributors to our own, wrapped up in a whole bunch of intentional programming and training that imbues them in part with (and I find this very troubling) the values of their creators and in part with the sum output of a great many humans who created the data on which they are trained.

I don’t know whether this is, ultimately, a bad thing. Perhaps it is another stage in our evolution that will make us more fit to deal with the complex world and new problems in it that we collectively continue to create. Perhaps it will make us less smart, or more the same, or less creative. Perhaps it will have the opposite effects. Most likely it will involve a bit of all of that. I think it is important that we recognize it as something new in the world, though, and not just another tool.

Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research | TechTrends

The latest paper I can proudly add to my list of publications, Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research, has been published in the (unfortunately) closed journal TechTrends. Here’s a direct link to the paper that should hopefully bypass the paywall, if it has not been used too often.

I’m 16th of 47 coauthors, led by the truly wonderful Junhong Xiao, who is the primary orchestrator and mastermind behind it. This is a companion piece to our Manifesto for Teaching and Learning in a Time of Generative AI and it starts where the other paper left off, delving further into what we don’t know (or at least do not agree that we know), and (taking up most of the paper) what we might do about that lack of knowledge. I think this presents a pretty useful and wide-ranging research agenda for anyone with an interest in AI and education.

Methodologically, it emerged through a collaborative writing process between a very multinational group of researchers in open, digital, and online learning. It’s not a random sample of people who happen to know one another: the huge group represents a rich mix of (extremely) well-established and (excellent) emerging researchers from a broad set of cultural backgrounds, covering a wide range of research interests in the field. Junhong did a great job of extracting the themes and organizing all of that into a coherent narrative.

In many ways I like this paper more than its companion piece. I think this is because, though its findings are – as the title implies – less well-defined than those of the first, I am more closely aligned with the underlying assumptions, attitudes, and values that underpin the analysis. It grapples more firmly with the wicked problems and it goes deeper into the broader, situated, human nature of the systems in which generative AI is necessarily intertwingled, skimming over the more simplistic conversations about cheating, reliability, and so on to get at some meatier but more fundamental issues that, ultimately, relate to how and why we do this education thing in the first place.

Abstract

Advocates of AI in Education (AIEd) assert that the current generation of technologies, collectively dubbed artificial intelligence, including generative artificial intelligence (GenAI), promise results that can transform our conceptions of what education looks like. Therefore, it is imperative to investigate how educators perceive GenAI and its potential use and future impact on education. Adopting the methodology of collective writing as an inquiry, this study reports on the participating educators’ perceived grey areas (i.e. issues that are unclear and/or controversial) and recommendations on future research. The grey areas reported cover decision-making on the use of GenAI, AI ethics, appropriate levels of use of GenAI in education, impact on learning and teaching, policy, data, GenAI outputs, humans in the loop and public–private partnerships. Recommended directions for future research include learning and teaching, ethical and legal implications, ownership/authorship, funding, technology, research support, AI metaphor and types of research. Each theme or subtheme is presented in the form of a statement, followed by a justification. These findings serve as a call to action to encourage a continuing debate around GenAI and to engage more educators in research. The paper concludes that unless we can ask the right questions now, we may find that, in the pursuit of greater efficiency, we have lost the very essence of what it means to educate and learn.

Reference

Xiao, J., Bozkurt, A., Nichols, M., Pazurek, A., Stracke, C. M., Bai, J. Y. H., Farrow, R., Mulligan, D., Nerantzi, C., Sharma, R. C., Singh, L., Frumin, I., Swindell, A., Honeychurch, S., Bond, M., Dron, J., Moore, S., Leng, J., van Tryon, P. J. S., … Themeli, C. (2025). Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research. TechTrends. https://doi.org/10.1007/s11528-025-01060-6

Understanding collective stupidity in social computing systems

Here are the slides from a talk I just gave to a group of grad students at AU in our ongoing seminar series, on the nature of collectives and ways we can use and abuse them. It’s a bit of a sprawl covering some 30-odd years of a particularly geeky, semi-philosophical branch of my research career (not much on learning and teaching in this one, but plenty of termites), winding up with very much a work in progress. I rushed through it at the end of a very long day/week/month/year/life but I hope someone may find it useful!

This is the abstract:

“Collective intelligence” (CI)  is a widely-used but fuzzy term that can mean anything from the behaviour of termites, to the ability of an organization to adapt to a changing environment, to the entire human race’s capacity to think, to the ways that our individual neurons give rise to cognition. Common to all, though, is the notion that the combined behaviours of many independent agents can lead to positive emergent changes in the behaviour of the whole and, conversely, that the behaviour of the whole leads to beneficial changes in the behaviours of the agents of which it is formed. Many social computing systems, from Facebook to Amazon, are built to enable or to take advantage of CI. Here I define social computing systems as digital systems that have no value unless they are used by at least two participants, and in which those participants play significant roles in affecting one another’s behaviour. This is a broad definition that embraces Google Search as much as email, wikis, and blogs, and in which the behaviour of humans and the surrounding structures and systems they belong to are at least as important as the algorithms and interfaces that support them.  Unfortunately, the same processes that lead to the wisdom of crowds can at least as easily result in the stupidity of mobs, including phenomena like filter bubbles and echo chambers that may be harmful in themselves or that render systems open to abuse such as trolling, disinformation campaigns, vote brigading, and successful state manipulation of elections.  If we can build better models of social computing systems, taking into account their human and contextual elements, then we stand a better chance of being able to avoid their harmful effects and using them for good.  To this end I have coined the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning  “multitude” and τέκτων (tektōn) meaning “builder”. In this seminar I will identify some of the main ochlotectural elements that contribute to collective intelligence, describe some of the ways it can be undermined, and explore some of the ramifications as they relate to social software design and management.

 

How AI works for education: an interview with me for AACE Review

Thanks to Stefanie Panke for some great questions and excellent editing in this interview with me for the AACE Review.

The content is in fact the product of two discussions, one coming from student questions at the end of a talk that I gave for the Asian University for Women just before Christmas, the other asynchronously with Stefanie herself.

Stefanie did a very good job of making sense of my rambling replies to the students, which spanned quite a few issues: some from my book, How Education Works, some about (mainly) generative AI, and a little about the intersection of collective and artificial intelligence. Stefanie’s own prompts were great: they encouraged me to think a little differently, and to take some enjoyable detours along the way around the evils of learning management systems, artificially-generated music, and social media, as well as a discussion of the impact of generative AI on learning designers, thoughts on legislation to control AI, and assessment.

Here are the slides from that talk at AUW – I’ve not posted this separately because hardly any are new: it mostly cobbles together two recent talks, one for Contact North and the other my keynote for ICEEL ’24. The conversation afterwards was great, though, thanks to a wonderfully thoughtful and enthusiastic bunch of very smart students.

The collective ochlotecture of large language models: slides from my talk at CI.edu, 2024

Here are my slides from the 1st International Symposium on Educating for Collective Intelligence, last week, here is my paper on which it was based, and here is the video of the talk itself:

You can find this, and videos of the rest of the stunning line-up of speakers, at https://www.youtube.com/playlist?list=PLcS9QDvS_uS6kGxefLFr3kFToVIvIpisn. It was an incredibly engaging and energizing event: the chat alone was a masterclass in collective intelligence that was difficult to follow at times but that was filled with rich insights and enlightening debates. The symposium site, which has all this and more, is at https://cic.uts.edu.au/events/collective-intelligence-edu-2024/

With just 10 minutes to make the case and 10 minutes for discussion, none of us were able to go into much depth in our talks. In mine I introduced the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning “multitude”, and τέκτων (tektōn), meaning “builder”, to describe the structures and processes that give shape and form to collections of people and their interactions. I think we need such a term because there are virtually infinite ways that such things can be configured, and the configuration makes all the difference. We blithely talk of things like groups, teams, clubs, companies, squads, and, of course, collectives, assuming that others will share an understanding of what we mean when, of course, they don’t. There were at least half a dozen quite distinct uses of the term “collective intelligence” in this symposium alone. I’m still working on a big paper on this subject that goes into some depth on the various dimensions of interest as they pertain to a wide range of social organizations but, for this talk, I was only concerned with the ochlotecture of collectives (a term I much prefer to “collective intelligence” because intelligence is such a slippery word, and collective stupidity is at least as common). From an ochlotectural perspective, these consist of a means of collecting crowd-generated information, processing it, and presenting the processed results back to the crowd. Human collective ochlotectures often contain other elements – group norms, structural hierarchies, schedules, digital media, etc. – but I think those are the defining features. If I am right then large language models (LLMs) are collectives, too, because that is exactly what they do. Unlike most other collectives, though (a collectively driven search engine like Google Search being one of a few partial exceptions), the processing is unique to each run of the cycle, generated via a prompt or similar input. This is what makes them so powerful, and it is what makes their mimicry of human soft technique so compelling.
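To make that cycle concrete, here is a minimal, hypothetical sketch in Python – not code from the talk or the paper – of the collect–process–present loop just described: crowd-generated information is collected, aggregated, and presented back in a form that shapes the crowd’s next round of behaviour. The items, the number of agents, and the popularity-weighting rule are all illustrative assumptions.

# A minimal, hypothetical sketch of a collective's collect -> process -> present cycle.
import random
from collections import Counter

ITEMS = ["a", "b", "c", "d", "e"]  # illustrative crowd-generated options

def collect(agents):
    """Collect one signal per agent: the item each agent picks this cycle."""
    return [agent() for agent in agents]

def process(picks):
    """Process the crowd's behaviour into an aggregate (here, simple counts)."""
    return Counter(picks)

def present(counts):
    """Present the aggregate back in a form that shapes future behaviour:
    a popularity-weighted chooser, i.e. a rich-get-richer feedback loop."""
    weights = [counts.get(item, 1) for item in ITEMS]
    return lambda: random.choices(ITEMS, weights=weights)[0]

agents = [lambda: random.choice(ITEMS) for _ in range(100)]  # start at random
for cycle in range(5):
    counts = process(collect(agents))   # collect and process crowd information
    chooser = present(counts)           # feed the processed result back
    agents = [chooser] * 100            # the crowd now acts on its own aggregate
    print(f"cycle {cycle}: {counts.most_common(3)}")

Run it and the fed-back aggregate quickly comes to dominate what the agents do, which is one way the same ochlotecture that produces collective intelligence can amplify early noise into collective stupidity.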

I did eventually get around to the theme of the conference. I spent a while discussing why LLMs are troubling – the fact that we learn values, attitudes, ways of being, etc. from interacting with them; the risks to our collective intelligence caused by them being part of the crowd, not just aggregators and processors of its outputs; and the potential loss of the soft, creative skills they can replace – and ended with what that implies for how we should act as educators: essentially, to focus on the tacit curriculum that has, till now, always come for free; to focus on community, because learning to be human from and with other humans is what it is all about; and to decouple credentials so as to reduce the focus on measurable outcomes that AIs can both teach and achieve better than an average human. I also suggested a couple of principles for dealing with generative AIs: to treat them as partners rather than tools, and to use them to support and nurture human connections, as ochlotects as much as parts of the ochlotecture.

I had a point to make in a short time, so the way I presented it was a bit of a caricature of my more considered views on the matter. If you want a more balanced view, and to get a bit more of the theoretical backdrop to all this, Tim Fawns’s talk (that follows mine and that will probably play automatically after it if you play the video above) says it all, with far greater erudition and lucidity, and adds a few very valuable layers of its own. Though he uses different words and explains it far better than I, his notion of entanglement closely echoes my own ideas about the nature of technology and the roles it plays in our cognition. I like the word “intertwingled” more than “entangled” because of its more positive associations and the sense of emergent order it conveys, but we mean substantially the same thing: in fact, the example he gave of a car is one that I have frequently used myself, in exactly the same way.

New paper: The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

I’m proud to be the 7th of 47 authors on this excellent new paper, led by the indefatigable Aras Bozkurt and featuring some of the most distinguished contemporary researchers in online, open, mobile, distance, e- and [insert almost any cognate sub-discipline here] learning, as well as a few of us hanging on their coat tails like me.

As the title suggests, it is a manifesto: it makes a series of statements (divided into 15 positive and 20 negative themes) about what is or what should be, and it is underpinned by a firm set of humanist pedagogical and ethical attitudes that are anything but neutral. What makes it interesting to me, though, can mostly be found in the critical insights that accompany each theme, that capture a little of the complexity of the discussions that led to them, and that add a lot of nuance. The research methodology, a modified and super-iterative Delphi design in which all participants are also authors is, I think, an incredibly powerful approach to research in the technology of education (broadly construed) that provides rigour and accountability without succumbing to science-envy.

 

Notwithstanding the lion’s share of the work of leading, assembling, editing, and submitting the paper being taken on by Aras and Junhong, it was a truly collective effort so I have very little idea about what percentage of it could be described as my work. We were thinking and writing together. Being a part of that was a fantastic learning experience for many of us, that stretched the limits of what can be done with tracked changes and comments in a Google Doc, with contributions coming in at all times of day and night and just about every timezone, over weeks. The depth and breadth of dialogue was remarkable, as much an organic process of evolution and emergence as intelligent design, and one in which the document itself played a significant participant role. I felt a strong sense of belonging, not so much as part of a community but as part of a connectome.

For me, this epitomizes what learning technologies are all about. It would be difficult if not impossible to do this in an in-person setting: even if the researchers worked together on an online document, the simple fact that they met in person would utterly change the social dynamics, the pacing, and the structure. Indeed, even online, replicating this in a formal institutional context would be very difficult because of the power relationships, assessment requirements, motivational complexities and artificial schedules that formal institutions add to the assembly. This was an online-native way of learning of a sort I aspire to but seldom achieve in my own teaching.

The paper offers a foundational model or framework on which to build or situate further work as well as providing a moderately succinct summary of a very significant percentage of the issues relating to generative AI and education as they exist today. Even if it only ever gets referred to by each of its 47 authors this will get more citations than most of my papers, but the paper is highly cite-able in its own right, whether you agree with its statements or not. I know I am biased but, if you’re interested in the impacts of generative AI on education, I think it is a must-read.

The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., … Asino, T. I. (2024). The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis, 16(4), 487–513. https://doi.org/10.55982/openpraxis.16.4.777

Full list of authors:

  • Aras Bozkurt
  • Junhong Xiao
  • Robert Farrow
  • John Y. H. Bai
  • Chrissi Nerantzi
  • Stephanie Moore
  • Jon Dron
  • Christian M. Stracke
  • Lenandlar Singh
  • Helen Crompton
  • Apostolos Koutropoulos
  • Evgenii Terentev
  • Angelica Pazurek
  • Mark Nichols
  • Alexander M. Sidorkin
  • Eamon Costello
  • Steven Watson
  • Dónal Mulligan
  • Sarah Honeychurch
  • Charles B. Hodges
  • Mike Sharples
  • Andrew Swindell
  • Isak Frumin
  • Ahmed Tlili
  • Patricia J. Slagter van Tryon
  • Melissa Bond
  • Maha Bali
  • Jing Leng
  • Kai Zhang
  • Mutlu Cukurova
  • Thomas K. F. Chiu
  • Kyungmee Lee
  • Stefan Hrastinski
  • Manuel B. Garcia
  • Ramesh Chander Sharma
  • Bryan Alexander
  • Olaf Zawacki-Richter
  • Henk Huijser
  • Petar Jandrić
  • Chanjin Zheng
  • Peter Shea
  • Josep M. Duart
  • Chryssa Themeli
  • Anton Vorochkov
  • Sunagül Sani-Bozkurt
  • Robert L. Moore
  • Tutaleni Iita Asino

Abstract

This manifesto critically examines the unfolding integration of Generative AI (GenAI), chatbots, and algorithms into higher education, using a collective and thoughtful approach to navigate the future of teaching and learning. GenAI, while celebrated for its potential to personalize learning, enhance efficiency, and expand educational accessibility, is far from a neutral tool. Algorithms now shape human interaction, communication, and content creation, raising profound questions about human agency and biases and values embedded in their designs. As GenAI continues to evolve, we face critical challenges in maintaining human oversight, safeguarding equity, and facilitating meaningful, authentic learning experiences. This manifesto emphasizes that GenAI is not ideologically and culturally neutral. Instead, it reflects worldviews that can reinforce existing biases and marginalize diverse voices. Furthermore, as the use of GenAI reshapes education, it risks eroding essential human elements—creativity, critical thinking, and empathy—and could displace meaningful human interactions with algorithmic solutions. This manifesto calls for robust, evidence-based research and conscious decision-making to ensure that GenAI enhances, rather than diminishes, human agency and ethical responsibility in education.

Slides from my ICEEL ’24 Keynote: “No Teacher Left Behind: Surviving Transformation”

Here are the slides from my keynote at the 8th International Conference on Education and E-Learning in Tokyo yesterday. Sadly I was not actually in Tokyo for this but the online integration was well done and there was some good audience interaction. I am also the conference chair (an honorary title) so I may be a bit biased, but I think it’s a really good conference, with an increasingly rare blend of both the tech and the pedagogical aspects of the field, and some wonderfully diverse keynotes ranging in subject matter from the hardest computer science to reflections on literature and love (thanks to its collocation with ICLLL, a literature and linguistics conference). My keynote was somewhere in between, and deliberately targeted at the conference theme, “Transformative Learning in the Digital Era: Navigating Innovation and Inclusion.”

As my starting point for the talk I introduced the concept of the technological connectome, about which I have just written a paper (currently under revision, hopefully due for publication in a forthcoming issue of the new Journal of Open, Distance, and Digital Education), which is essentially a way of talking about extended cognition from a technological rather than a cognitive perspective. From there I moved on to the adjacent possible and the exponential growth in technology that has, over the past century or so, reached such a breakneck rate of change that innovations such as generative AI, the transformation I particularly focused on (because it is topical), can transform vast swathes of culture and practice in months if not in weeks. This is a bit of a problem for traditional educators, who are as unprepared as anyone else for it, but who find themselves in a system that could not be more vulnerable to the consequences. At the very least it disrupts the learning outcomes-driven teacher-centric model of teaching that still massively dominates institutional learning the world over, both in the mockery it makes of traditional assessment practices and in the fact that generative AIs make far better teachers if all you care about are the measurable outcomes.

The solutions I presented and that formed the bulk of the talk, largely informed by the model of education presented in How Education Works, were mostly pretty traditional, emphasizing the value of community, and of passion for learning, along with caring about, respecting, and supporting learners. There were also some slightly less conventional but widely held perspectives on assessment, plus a bit of complexivist thinking about celebrating the many teachers and acknowledging the technological connectome as the means, the object and the subject of learning, but nothing Earth-shatteringly novel. I think this is as it should be. We don’t need new values and attitudes; we just need to emphasize those that are learning-positive rather than the increasingly mainstream learning-negative, outcomes-driven, externally regulated approaches that the cult of measurement imposes on us.

Post-secondary institutions have had to grapple with their learning-antagonistic role of summative assessment since not long after their inception, so this is not a new problem but, until recent decades, the two roles largely maintained an uneasy truce. A great deal of the impetus for the shift has come from expanding access to PSE. This has resulted in students who are less able, less willing, and less well-supported than their forebears, who were, on average, far more advantaged in ability, motivation, and unencumbered time simply because fewer were able to get in. In the past, teachers hardly needed to teach. The students were already very capable, and had few other demands on their time (like working to get through college), so they just needed to hang out with smart people, some of whom knew the subject and could guide them through it in order to know what to learn and whether they had been successful, along with the time and resources to support their learning. Teachers could be confident that, as long as students had the resources (libraries, lecture notes, study time, other students), they would be sufficiently driven by the need to pass the assessments and/or intrinsic interest that they could largely be left to their own devices (OK, a slight caricature, but not far off the reality).

Unfortunately, though this is no longer even close to the norm, it is still the model on which most universities are based. Most of the time professors are still hired because of their research skills, not teaching ability, and it is relatively rare that they are expected to receive more than the most perfunctory training, let alone education, in how to teach. Those with an interest usually have opportunities to develop their skills but, if they do not, there are few consequences. Thanks to the technological connectome, the rewards and punishments of credentials continue to do the job well enough, notwithstanding the vast amounts of cheating, satisficing, student suffering, and lost love of learning that ensue. There are still plenty of teachers: students have textbooks, YouTube tutorials, other students, help sites, and ChatGPT, to name but a few, of which there are more every day. This is probably all that is propping up a fundamentally dysfunctional system. Increasingly, the primary value of post-secondary education comes to lie in its credentialling function.

No one who wants to teach wants this, but virtually all of those who teach in universities are the ones who succeeded in retaining their love of learning for its own sake despite it, so they find it hard to understand students who don’t. Too many (though, I believe, a minority) are positively hostile to their students as a result, believing that most students are lazy, willing to cheat, or to otherwise game the system, and they set up elaborate means of control and gotchas to trap them. The majority who want the best for their students, however, are also to blame, seeing their purpose as being to improve grades, using “learning science” (which is like using colour theory to paint – useful, not essential) to develop methods that will, on average, do so more effectively. In fairness, though grades are not the purpose, they are not wrong about the need to teach the measurable stuff well: it does matter that students achieve the skills and knowledge they set out to achieve. However, it is only part of the purpose. Mostly, education is a means to less measurable ends: of forming identities, attitudes, values, ways of relating to others, ways of thinking, and ways of being. You don’t need the best teaching methods to achieve that: you just need to care, and to create environments and structures that support stuff like community, diversity, connection, sharing, openness, collaboration, play, and passion.

The keynote was recorded but I am not sure if or when it will be available. If it is released on a public site, I will share it here.

Video and slides from my webinar, How to Be an Educational Technology: An Entangled Perspective on Teaching

For those with an interest, here are the slides from my webinar for Contact North | Contact Nord that I gave today: How to be an educational technology (warning: large download, about 32MB).

Here is a link to the video of the session.

I was invited to do this webinar because my book (How Education Works: Teaching, Technology, and Technique, briefly reviewed on the Contact North | Contact Nord site last year) was among the top 5 most viewed books of the year, so that was what the talk was about. Among the most central messages of the book and the ones that I was trying to get across in this presentation were:

  1. that how we do teaching matters more than what we do (“T’ain’t what you do, it’s the way that you do it”) and
  2. that we can only understand the process if we examine the whole complex assembly of teaching (very much including the technique of all who contribute to it, including learners, textbooks, and room designers) not just the individual parts.

Along the way I had a few other things to say about why that must be the case, the nature of teaching, the nature of collective cognition, and some of the profound consequences of seeing the world this way. I had fun persuading ChatGPT to illustrate the slides in a style that was not that of Richard Scarry (ChatGPT would not do that, for copyright reasons) but that was reminiscent of it, so there are lots of cute animals doing stuff with technologies on the slides.

I rushed and rambled, I sang, I fumbled and stumbled, but I think it sparked some interest and critical thinking. Even if it didn’t, some learning happened, and that is always a good thing. The conversations in the chat went too fast for me to follow but I think there were some good ones. If nothing else, though I was very nervous, I had fun, and it was lovely to notice a fair number of friends, colleagues, and even the odd relative among the audience. Thank you all who were there, and thank you anyone who catches the recording later.

How AI Teaches Its Children: slides and reflections from my keynote for AISUMMIT-2024

Late last night I gave the opening keynote at the Global AI Summit 2024, International Conference on Artificial Intelligence and Emerging Technology, hosted by Bennett University, Noida, India. My talk was online. Here are the slides: How AI Teaches Its Children. It was recorded but I don’t know when or whether or with whom it will be shared: if possible I will add it to this post.

a robot teaching children in the 18th Century

For those who have been following my thoughts on generative AI there will be few surprises in my slides, and I only had half an hour so there was not much time to go into the nuances. The title is an allusion to Pestalozzi’s 18th Century tract, How Gertrude Teaches Her Children, which has been phenomenally influential in the development of education systems around the world and continues to have impact to this day. Much of it is actually great: Pestalozzi championed very child-centric teaching approaches that leveraged the skills and passions of their teachers. He recommended methods of teaching that made full use of the creativity and idiosyncratic knowledge the teachers possessed and that were very much concerned with helping children to develop their own interests, values, and attitudes. However, some of the ideas – and those that have ultimately been more influential – were decidedly problematic, as is succinctly summarized in this passage on page 41:

I believe it is not possible for common popular instruction to advance a step, so long as formulas of instruction are not found which make the teacher, at least in the elementary stages of knowledge, merely the mechanical tool of a method, the result of which springs from the nature of the formulas and not from the skill of the man who uses it.

This is almost the exact opposite of the central argument of my book, How Education Works, that mechanical methods are not the most important part of a soft technology such as teaching: what usually matters more is how it is done, not just what is done. You can use good methods badly and bad methods well because you are a participant in the instantiation of a technology, responsible for the complete orchestration of the parts, not just a user of them.

As usual, in the talk I applied a bit of co-participation theory to explain why I am both enthralled by and fearful of the consequences of generative AIs because they are the first technologies we have ever built that can use other technologies in ways that resemble how we use them. Previous technologies only reproduced hard technique – the explicit methods we use that make us part of the technology. Generative AIs reproduce soft technique, assembling and organizing phenomena in endlessly novel ways to act as creators of the technology. They are active, not passive participants.

Two dangers

I see there to be two essential risks in delegating soft technique to AIs. The first is not too terrible: that, because we will increasingly delegate creative activities we would otherwise have performed ourselves to machines, we will not learn those skills ourselves. I mourn the potential passing of hard skills in (say) drawing, or writing, or making music, but the bigger risk is that we will lose the soft skills that come from learning them: the things we do with the hard skills, the capacity to be creative.

That said, like most technologies, generative AIs are ratchets that let us do more than we could achieve alone. In the past week, for instance, I “wrote” an app in less than a day that would have taken me many weeks without AI assistance. Though it followed a spec that I had carefully and creatively written, it replaced the soft skills that I would have applied had I written it myself: the little creative flourishes and rabbit holes of idea-following that are inevitable in any creation process. When we create, we do so in conversation with the hard technologies available to us (including our own technique), using the affordances and constraints they provide to grasp adjacent possibles. Every word we utter or wheel we attach to an axle opens and closes opportunities for what we can do next.

With that in mind, the app that the system created was just the beginning. Having seen the adjacent possibles of the finished app, I have spent too many hours in subsequent days extending and refining it to do things that, in the past, I would not have bothered to do because they would have been too difficult. It has become part of my own extended cognition, starting higher up the tree than I would have reached alone. This has also greatly improved my own coding skills because, inevitably, after many iterations, the AI and/or I started to introduce bugs, some of which have been quite subtle and intractable. I did try to get the AI to examine the whole code (now over 2000 lines of JavaScript) and rewrite it, or at least to point out the flaws, but that failed abysmally, amply illustrating both the strength of LLMs as creative participants in technologies and their limitations in being unable to do the same thing the same way twice. As a result, the AI and I have had to act as partners trying to figure out what is wrong. Often, though the AI has come up with workable ideas, its own solution has been a little dumb, but I could build on it to solve the problem better. Though I have not actually created much of the code myself, I think my creative role might have been greater than it would have been had I written every line.

Similarly for the images I used to illustrate the talk: I could not possibly have drawn them alone but, once the AI had done so, I engaged in a creative conversation to try (sometimes very unsuccessfully) to get it to reproduce what I had in mind. Often, though, it did things that sparked new ideas so, again, it became a partner in creation, sharing in my cognition and sparking my own invention. It was very much not just a tool: it was a co-worker, with different and complementary skills, and “ideas” of its own. I think this is a good thing. Yes, perhaps it is a pity that those who follow us may not be able to draw with a pen (and it is more than a little worrying to think about the training sets that future AIs will learn to draw from), but they will have new ways of being creative.

Like all learning, both these activities changed me: not just my skills, but my ways of thinking. That leads me to the bigger risk.

Learning our humanity from machines

The second risk is more troubling: that we will learn ways of being human from machines. This is because of the tacit curriculum that comes with every learning interaction. When we learn from others, whether they are actively teaching, writing textbooks, showing us, or chatting with us, we don’t just learn methods of doing things: we learn values, attitudes, ways of thinking, ways of understanding, and ways of being at the same time. So far we have only learned that kind of thing from humans (sometimes mediated through code) and it has come for free with all the other stuff, but now we are doing so from machines. Those machines are very much like us, because 99% of what they are – their training sets – is what we have made, but they are not the same. Though LLMs are embodiments of our own collective intelligence, they don’t so much lack values, attitudes, ways of thinking, etc. as have any and all of them. Every implicit value and attitude of the people whose work constituted their training set is available to them, and they can become whatever we want them to be. Interacting with them is, in this sense, very much not like interacting with something created by a human, let alone with humans more directly. They have no identity, no relationships, no purposes, no passion, no life history, and no future plans. Nothing matters to them.

To make matters worse, there is programmed and trained stuff on top of that, like their interminable cheery patience, which might not teach us great ways of interacting with others. And of course this will affect how we interact with others, because we will spend more and more time engaged with machines rather than with actual humans. The economic and practical benefits make this an absolute certainty. LLMs also use explicit coding to remove or massage data from the input or output, reflecting the values and cultures of their creators for better or worse. I was giving this talk in India to a predominantly Indian audience of AI researchers, every single one of whom was making extensive use of predominantly American LLMs like ChatGPT, Gemini, or Claude, and (inevitably) learning ways of thinking and doing from them. This is way more powerful than Hollywood as an instrument of Americanization.

I am concerned about how this will change our cultures and our selves because it is happening at phenomenal and global scale, and it is doing so in a world that is unprepared for the consequences, the designed parts of which assume a very different context. One of generative AI’s greatest potential benefits lies in providing “high quality” education at low cost to those who are currently denied it, but those low costs will make it increasingly compelling for everyone. However, because of designs that assume a different context, “quality”, in this sense, relates to the achievement of explicit learning outcomes: this is Pestalozzi’s method writ large. Generative AIs are great at teaching what we want to learn – the stuff we could write down as learning objectives or intended outcomes – so, as that is the way we have designed our educational systems (and our general attitudes to learning new skills), of course we will use them for that purpose. However, that cannot be done without teaching the other stuff – the tacit curriculum – which is ultimately more important because it shapes how we are in the world, not just the skills we employ to be that way. We might not have designed our educational systems to do that, and we seldom if ever think about it when teaching ourselves or receiving training to do something, but it is perhaps education’s most important role.

By way of illustration, I find it hugely bothersome that generative AIs are being used to write children’s stories (and, increasingly, videos) and I hope you feel some unease too, because those stories – not the facts in them but the lessons about things that matter that they teach – are intrinsic to children becoming who they will become. However, though perhaps of less magnitude, the same issue relates to learning everything from how to change a plug to how to philosophize: we don’t stop learning from the underlying stories behind those just because we have grown up. I fear that educators, formal or otherwise, will become victims of the McNamara Fallacy, setting our goals to achieve what is easily measurable while ignoring what cannot (easily) be measured, and so rush blindly towards subtly new ways of thinking and acting that few will even notice, until the changes are so widespread they cannot be reversed. Whether better or worse, it will very definitely be different, so it really matters that we examine and understand where this is all leading. This is the time, I believe, to reclaim and revalorize the value of things that are human before it is too late. This is the time to recognize education (far from only formal) as being how we become who we are, individually and collectively, not just how we meet planned learning outcomes. And I think (at least hope) that we will do that. We will, I hope, value more than ever the fact that something – be it a lesson plan or a book or a screwdriver – is made by someone, or by a machine that has been explicitly programmed by someone. We will, I hope, better recognize the relationships between us that it embodies, the ways it teaches us things it does not mean to teach, and the meaning it has in our lives as a result. This might happen by itself – already there is a backlash against the bland output of countless bots – but it might not be a bad idea to help it along when we can. This post (and my talk last night) has been one such small nudge.

Forthcoming webinar, September 24, 2024 – How to be an Educational Technology: An Entangled Perspective on Teaching

This is an announcement for an event I’ll be facilitating as part of TeachOnline’s excellent ongoing series of webinars. In it I will be discussing some of the key ideas of my open book, How Education Works, and exploring what they imply about how we should teach and, more broadly, how we should design systems of education. It will be fun. It will be educational. There may be music.

Here are the details:

Date: Tuesday, September 24, 2024

Time: 1:00 PM – 2:00 PM (Eastern Time) (find your time zone here)

Register (free of charge) for the event here

 

Source: How to be an Educational Technology: An Entangled Perspective on Teaching | Welcome to TeachOnline