Just a metatool? Some thoughts on why generative AIs are not tools

Many people brush generative AI aside as being just a tool. ChatGPT describes itself as such (I asked). I think it’s more complicated than that, and this post is going to be an attempt to explain why. I’m not sure about much of what follows and welcome any thoughts you may have on whether this resonates with you and, if not, why not.

What makes something a tool

I think that to call something a tool is shorthand for it having all of the following 5 attributes:

  1. It is an object (physical, digital, cognitive, procedural, organizational, structural, conceptual, spiritual, etc. – i.e. the thing we normally identify as the tool),
  2. used with/designed for a purpose, that
  3. can extend the capabilities of an actor (an intelligent agent, typically human), who
  4. may perform an organized action or series of actions with it, that
  5. cause changes to a subject other than the tool itself (such as a foodstuff, a piece of paper, a mental state, or a configuration of bits).

More informally, less precisely, but perhaps more memorably:

A tool is something that an intelligent agent does something with in order to do something to something else.

Let me unpack that a bit.

A pebble used as a knife sharpener is a tool, but one used to reinforce concrete is not. A pen used to write on paper is a tool, but the paper is not. The toolness in each case emerges from what the agent does and the fact that it is done to something, in order to achieve something (a sharp knife, some writing).

Any object we label as a tool can become part of another tool with a different organization. A screwdriver can become an indefinitely large number of other tools apart from the one intended for driving screws. In fact, almost anything can become a tool with the right organization. The paper can be a tool if it is, say, used to scoop up dirt. And, when I say “paper”, remember that this is the label for the object I am calling a tool, but it is the purpose, what it does, how it is organized, and the subject it acts upon that makes it so.

It is not always easy to identify the “something else” that a tool affects. A saw used to cut wood is an archetypal tool, but a saw played with a bow to make music is, I think, not. Perhaps the bow is a tool, and maybe we could think of the saw as a tool acting on air molecules, but I think we tend to perceive it as the thing that is acted upon rather than the thing we do something with.

Toolness is intransitive: a computer may be a tool for running programs, and a program running on it may be a tool that fixes a corrupt disk, but a computer is not a tool for fixing a corrupt disk.

A great many tools are also technologies in their own right. The intention and technique of the tool maker combine with those of the tool user, so the tool user may achieve more (or achieve it more reliably, faster, or more consistently) than would be possible without both. A fountain pen adds more to the writing assembly than a quill, for instance, and so demands less of the writer. Many tools are partnerships of this nature, allowing the cognition of more than one person to be shared. This is the ratchet that makes humans smart.

Often, the organization performed by the maker of a technology entirely replaces that of the tool user. A dish sponge is a tool, but a dishwasher is not: it is an appliance. Some skill is needed to load it but the dishwashing itself – the purpose for which it is designed – is entirely managed by the machine.

The case is less clear for an appliance like, say, a vacuum cleaner. I think this is because there are two aspects to the device: the mechanism that autonomously sucks dirt is what makes it an appliance, but the hose (or whatever) used to select the dirt to be removed is a tool. This is reflected in common usage, inasmuch as a vacuum cleaner is normally sold with what are universally described as tools (i.e. the things that a person actively manipulates). The same distinction is still there in a handheld machine, too – in fact, many come with additional tools – though I would be much more comfortable describing the whole device as a tool, because that’s what is manipulated to suck up the dirt. Many power tools fit in this category: they do some of the work autonomously but they are still things people do something with in order to do something to something else.

Humans can occasionally be accurately described as tools: the movie Swiss Army Man, for instance, features Daniel Radcliffe as a corpse that turns out to have many highly inventive uses. For real live humans, though, the case is less clear.  Employees in scripted call centres, or teachers following scripted lesson plans are more like appliances than tools: having been “programmed”, they run autonomously, so the scripts may be tools but the people are not. Most other ways of using other people are even less tool-like. If I ask you to pick up some shopping for me, say, then my techniques of persuasion may be tools, but you are the one organizing phenomena to shop, which is the purpose in question.

The case is similar for sheepdogs (though they are not themselves tool users), which I would be reluctant to label as tools, even though skill is clearly needed to make them do our bidding and they serve tool-like purposes as part of the technology of shepherding. The tools, though, are the commands, methods of training, treats, and so on, not the animals themselves.

Why generative AIs are not tools

For the same reasons of intransitivity that dishwashers, people, and sheepdogs are not normally tools, neither are generative AIs. Prompts and other means of getting AIs to do our bidding are tools, but generative AIs themselves work autonomously. This comes with the proviso that almost anything can be repurposed, so there is nothing that is not at least latently a tool but, at least in their most familiar guises, generative AIs tend not to be.

Unlike conventional appliances, but more like sheepdogs, generative AIs perform work that is neither designed by humans nor scrutable to us. Unlike sheepdogs, but more like humans, generative AIs are tool users, too: users not just (or not so much) of words, but of libraries, programming languages, web crawlers, filters, and so on. Unlike humans, though, generative AIs act with their users’ intentions, not their own, expressed through the tools with which we interact with them. They are a bit like partial brains, perhaps: remarkably capable, but neither aware of that capability nor able to use it autonomously.

It’s not just chatbots. Many recommender systems and search engines (increasingly incorporating deep learning) also sit uncomfortably in the category of tools, though they are often presented as such. Amazon’s search, say, is not (primarily) designed to help you find what you are looking for but to push things at you that Amazon would like you to buy, which is why you must trawl through countless not-quite-right things despite it being perfectly capable of exactly matching your needs. If it is anyone’s tool, it is Amazon’s, not ours. The same goes for a Google search: the tools are your search terms, not Google Search, and it acts quite independently in performing the search and returning results that are likely more beneficial to Google than to you. This is not true of all search systems. If I search for a file on my own computer and it fails to provide what I am looking for, it is a sign that the tool (and I think it is a tool, because the results should be entirely determinate) is malfunctioning. Back in those far-off days when Amazon wanted you to find what you wanted and Google tried to provide the closest match to your search term, we could at least think of them, if not as tools, then as appliances designed to be controlled by us.

I think we need a different term for these things. I like “metatool” because it is catchy and fairly accurate. A metatool is something that uses tools to do our bidding, not a tool in its own right. It is something that we use tools to act upon and that is itself a tool user. I think this is better than a lot of other metaphors we might use: slave, assistant (Claude describes itself, incidentally, not as ‘merely’ a tool but as an intelligent assistant), partner, co-worker, contractor, and so on all suggest more agency and intention than generative AIs actually possess, while appliance, machine, device, and the like fail to capture the creativity, tailoring, and unpredictability of the results.

Why it matters

The big problem with treating generative AIs as tools is that it overplays our own agency and underplays the creative agency of the AI. It encourages us to think of them, like actual tools, as cognitive prostheses: ways of augmenting and amplifying, but still using and preserving, human cognitive capabilities, when what we are actually doing is using theirs. It also encourages us to think the results will be more deterministic than they actually are. This is not to negate the skill needed to use prompts effectively, nor to underplay the need to understand what the prompt is acting upon. Just as the shepherd needs to know the sheepdog, the genAI user has to know how their tools will affect the medium.

Like all technologies, these strange partial brains effectively enlarge our own. All other technologies, though, embed or embody other humans’ thinking and/or our own. Though largely consisting of the compressed, expressed thoughts of millions of people, AI’s thoughts are not human thoughts: even using the most transparent of them, we have very little access to the mechanisms behind their probabilistic deliberations. And yet nor are they independent thinking agents. Like any technology, we might think of them as cognitive extensions but, if they are, then it is as though we have undergone an extreme form of corpus callosotomy, or we are experiencing something like Jaynes’s bicameral mind. Generative AIs are their own thing: an embodiment of collective intelligence as well as contributors to our own, wrapped up in a whole bunch of intentional programming and training that imbues them in part with (and I find this very troubling) the values of their creators, and in part with the sum output of the great many humans who created the data on which they are trained.

I don’t know whether this is, ultimately, a bad thing. Perhaps it is another stage in our evolution that will make us more fit to deal with the complex world and new problems in it that we collectively continue to create. Perhaps it will make us less smart, or more the same, or less creative. Perhaps it will have the opposite effects. Most likely it will involve a bit of all of that. I think it is important that we recognize it as something new in the world, though, and not just another tool.

We are (in part) our tools and they are (in part) us

Here’s a characteristically well-expressed and succinct summary of the complex nature of technologies, our relationships with them, and what that means for education, by the ever-wonderful Tim Fawns. I like it a lot, and it expresses much of what I have tried to express about the nature and value of technologies, far better than I could do it and in far fewer words. Some of it, though, feels like it wants to be unpacked a little further, especially the notions that there are no tools, that tools are passive, and that tools are technologies. None of what follows contradicts or negates Tim’s points, but I think it helps to reveal some of the complexities.

There are tools

Tim starts provocatively with the claim that:

There are no tools. Tools are passive, neutral. They can be picked up and put down, used to achieve human goals without changing the user (the user might change, but the change is not attributed to the tool).

I get the point about the connection between tools and technology (in fact it is very similar to one I make in the “Not just tools” section of Chapter 3 of How Education Works) and I understand where Tim is going with it (which is almost immediately to consciously sort-of contradict himself), but I think it is a bit misleading to claim there are no tools, even in the deliberately partial and over-literal sense that Tim uses the term. This is because to call something a tool is to describe a latent or actual relationship between it and an agent (be it a person, a crow, or a generative AI), not just to describe the object itself. At the point at which that relationship is instantiated it very much changes the agent: at the very least, they now have a capability that they did not have before, assuming the tool works and is used for a purpose. Figuring out how to use the tool is not just a change to the agent but a change to what the agent may become that expands the adjacent possible. And, of course, many tools are intracranial so, by definition, having them and using them changes the user. This is particularly obvious when the tool in question is a word, a concept, a model, or a theory, but it is just as true of a hammer, a whiteboard, an iPhone, or a stick picked up from the ground with some purpose in mind, because of the roles we play in them.

Tools are not (exactly) technologies

Tim goes on to claim:

Tools are really technologies. Each technology creates new possibilities for acting, seeing and organising the world.

Again, he is sort-of right and, again, not quite, because “tool” is (as he says) a relational term. When it is used, a tool is always part of a technology, because the technique needed to use it is a technology that is part of the assembly, and the assembly is the technology that matters. However, the thing that is used – the tool itself – is not necessarily a technology in its own right. A stick on the ground that might be picked up to hit something, point to something, or scratch something is simply a stick.

Tools are not neutral

Tim says:

So a hammer is not just sitting there waiting to be picked up, it is actively involved in possibility-shaping, which subtly and unsubtly entangles itself with social, cognitive, material and digital activity. A hammer brings possibilities of building and destroying, threatening and protecting, and so forth, but as part of a wider, complex activity.

I like this: by this point, Tim is telling us that there are tools and that they are not neutral, in an allusion to Culkin’s/McLuhan’s dictum that we shape our tools and thereafter our tools shape us.  Every new tool changes us, for sure, and it is an active participant in cognition, not a non-existent neutral object. But our enactment of the technology in which the tool participates is what defines it as a tool, so we don’t so much shape it as we are part of the shape of it, and it is that participation that changes us. We are our tools, and our tools are us.

There is interpretive flexibility in this – a natural result of the adjacent possibles that all technologies enable – which means that any technology can be combined with others to create a new technology. An iPhone, say, can be used by anyone, including monkeys, to crack open nuts (I wonder whether that is covered by AppleCare?), but this does not make the iPhone neutral to someone who is enmeshed in the web of technologies of which the iPhone is designed to be a part. As the kind of tool (actually many tools) it is designed to be, it plays quite an active role in the orchestration: as a thing, it is not just used but using. The greater the pre-orchestration of any tool, the more its designers are co-participants in the assembled technology, and it can often be a dominant role that is anything but neutral.

Most things that we call tools (Tim uses the hammer as an example) are also technologies in their own right, regardless of their toolness: they are phenomena orchestrated with a purpose, stuff that is organized to do stuff and, though softer tools like hammers have a great many adjacent possibles that provide almost infinite interpretive flexibility, they also – as Tim suggests – have propensities that invite very particular kinds of use. A good hardware store sells at least a dozen different kinds of hammer with slightly different propensities, labelled for different uses. All demand a fair amount of skill to use them as intended. Such stores also sell nail guns, though, which reduce the amount of skill needed by automating elements of the process. While they do open up many further adjacent possibles (as with chainsaws, making them mainstays of a certain kind of horror movie), and they demand their own sets of skills to use them safely, the pre-orchestration in nail guns greatly reduces many of the adjacent possibles of a manual hammer: they aren’t much good for, say, prying things open, or using as a makeshift anchor for a kayak, or propping up the lid of a tin of paint. Interestingly, nor are they much use for quite a wide range of nail-hammering tasks where delicacy or precision is needed. All of this is true because, as a nail driver, a nail gun leaves a smaller gap between intention and execution to be filled than even the most specialized manual hammer does, its creators having already filled a lot of it, thus taking quite a few choices away from the tool user. This is the essence of my distinction between hard and soft technologies, and it is exactly the point of making a device of this nature. By filling gaps, the hardness simplifies many of the complexities and makes for greater speed and consistency, which in turn makes more things possible (because we no longer have to spend so much time being part of a hammer) but, in the process, it eliminates other adjacent possibles. The gaps can be filled further. The person using such a machine to, say, nail together boxes on a production line is not so much a tool user as a part of someone else’s tool. Their agency is so much reduced that they are just a component, albeit a relatively unreliable component.

Being tools

In an educational context, a great deal of hardening is commonplace, which simplifies the teaching process and allows things to be done at scale. This in turn allows us to do something approximating reductive science, which gives us the comforting feeling that there is some objective value in how we teach. We can, for example, look at the effects of changes to pre-specified lesson plans on SAT results, if both lesson plans and SATs are very rigid, and infer moderately consistent relationships between the two, and so we can improve the process and measure our success quite objectively. The big problem here, though, is what we do not (and cannot) examine by such approaches, such as the many other things that are learned as a result of being treated as cogs in a mechanical system, the value of learning vs the value of grades, or our places in social hierarchies in which we are forced to comply with a very particular kind of authority. SATs change us, in many less than savoury ways. SATs also fail to capture more than a minuscule fraction of the potentially useful learning that also (hopefully) occurred. As tools for sorting learners by levels of competence, SATs are as far from neutral as you can get, and as situated as they could possibly be. As tools for learning or for evaluating learning they are, to say the least, problematic, at least in part because they make the learner a part of the tool rather than a user of it. Either way, you cannot separate them from their context because, if you did, it would be a different technology. If I chose to take a SAT for fun (and I do like puzzles and quizzes, so this is not improbable) it would be a completely different technology than for a student, or a teacher, or an administrator in an educational system. They are all, in very different ways, parts of the tool that is in part made of SATs. I would be a user of it.

All of this reinforces Tim’s main and extremely sound points, that we are embroiled in deeply intertwingled relationships with all of our technologies, and that they cannot be de-situated. I prefer the term “intertwingled” to the term “entangled” that Tim uses because, to me, “entangled” implies chaos and randomness but, though there may (formally) be chaos involved, in the sense of sensitivity to initial conditions and emergence, this is anything but random. It is an extremely complex system but it is highly self-organizing, filled with metastabilities and pockets of order, each of which acts as a further entity in the complex system from which it emerges.

It is incredibly difficult to write about the complex wholes of technological systems of this nature. I think the hardest problem of all is the massive amount of recursion it entails. We are in the realms of what Kauffman calls Kantian Wholes, in which the whole exists for and by means of the parts, and the parts exist for and by means of the whole, but we are talking about many wholes that are parts of or that depend on many other wholes and their parts that are wholes, and so on ad infinitum, often crossing and weaving back and forth so that we sometimes wind up with weird situations in which it seems that a whole is part of another whole that is also part of the whole that is a part of it, thanks to the fact that this is a dynamic system, filled with emergence and in a constant state of becoming. Systems don’t stay still: their narratives are cyclic, recursive, and only rarely linear. Natural language cannot easily do this justice, so it is not surprising that, in his post, Tim is essentially telling us both that tools are neutral and that they are not, that tools exist and that they do not, and that tools are technologies and they are not. I think that I just did pretty much the same thing.

Source: There are no tools – Timbocopia

Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research | TechTrends

The latest paper I can proudly add to my list of publications,  Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research has been published in the (unfortunately) closed journal TechTrends. Here’s a direct link to the paper that should hopefully bypass the paywall, if it has not been used too often.

I’m the 16th of 47 coauthors, led by the truly wonderful Junhong Xiao, who is the primary orchestrator and mastermind behind it. This is a companion piece to our Manifesto for Teaching and Learning in a Time of Generative AI and it starts where the other paper left off, delving further into what we don’t know (or at least do not agree that we know) and, taking up most of the paper, what we might do about that lack of knowledge. I think this presents a pretty useful and wide-ranging research agenda for anyone with an interest in AI and education.

Methodologically, it emerged through a collaborative writing process among a very multinational group of researchers in open, digital, and online learning. It’s not a random sample of people who happen to know one another: the huge group represents a rich mix of (extremely) well-established and (excellent) emerging researchers from a broad set of cultural backgrounds, covering a wide range of research interests in the field. Junhong did a great job of extracting the themes and organizing all of that into a coherent narrative.

In many ways I like this paper more than its companion piece. I think this is because, though its findings are – as the title implies – less well-defined than the first, I am more closely aligned with the underlying assumptions, attitudes and values that underpin the analysis. It grapples more firmly with the wicked problems and it goes deeper into the broader, situated, human nature of the systems in which generative AI is necessarily intertwingled, skimming over the more simplistic conversations about cheating, reliability, and so on to get at some meatier but more fundamental issues that, ultimately, relate to how and why we do this education thing in the first place.

Abstract

Advocates of AI in Education (AIEd) assert that the current generation of technologies, collectively dubbed artificial intelligence, including generative artificial intelligence (GenAI), promise results that can transform our conceptions of what education looks like. Therefore, it is imperative to investigate how educators perceive GenAI and its potential use and future impact on education. Adopting the methodology of collective writing as an inquiry, this study reports on the participating educators’ perceived grey areas (i.e. issues that are unclear and/or controversial) and recommendations on future research. The grey areas reported cover decision-making on the use of GenAI, AI ethics, appropriate levels of use of GenAI in education, impact on learning and teaching, policy, data, GenAI outputs, humans in the loop and public–private partnerships. Recommended directions for future research include learning and teaching, ethical and legal implications, ownership/authorship, funding, technology, research support, AI metaphor and types of research. Each theme or subtheme is presented in the form of a statement, followed by a justification. These findings serve as a call to action to encourage a continuing debate around GenAI and to engage more educators in research. The paper concludes that unless we can ask the right questions now, we may find that, in the pursuit of greater efficiency, we have lost the very essence of what it means to educate and learn.

Reference

Xiao, J., Bozkurt, A., Nichols, M., Pazurek, A., Stracke, C. M., Bai, J. Y. H., Farrow, R., Mulligan, D., Nerantzi, C., Sharma, R. C., Singh, L., Frumin, I., Swindell, A., Honeychurch, S., Bond, M., Dron, J., Moore, S., Leng, J., van Tryon, P. J. S., … Themeli, C. (2025). Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research. TechTrends. https://doi.org/10.1007/s11528-025-01060-6

Slides from my TRU TPC keynote: “It’s a technology problem: How education doesn’t work and why we shouldn’t fix it”

Here are the slides from my keynote at Thompson Rivers University’s Teaching Practices Colloquium this morning. I quite like the mediaeval theme (thanks ChatGPT), which I created to provide a constant reminder that the problems we have to solve are the direct result of decisions made 1000 years ago. There was a lot of stuff from my last book in the talk, framed in terms of Faustian Bargains, intrinsic motivation, counter technologies, and adjacent possibles. This was the abstract:

Why is it that educators feel it is necessary to motivate students to learn when love of learning is a defining characteristic of our species? Why do students disengage from education? Why do so many cheat? How can we be better teachers? What does “good teaching” even mean? And what role does technology play in all of this? Drawing on ideas, theories, and models from his book, How Education Works: Teaching, Technology, and Technique, Jon Dron will provide some answers to these and many more questions through a tale that straddles most of a millennium, during which you may encounter a mutilated monk, a man who lost a war, a robot named Claude, part of a monkey, and an unsuccessful Swiss farmer who made a Faustian bargain and changed education forever. Along the way you will learn why most educational science is pointless, why the best teaching methods fail, why the worst succeed, and why you should learn to love learning technologies. There may be singing.

I had a lot of fun – there was indeed singing, a silicone gorilla hand that turned out to be really useful, and some fun activities from which I learned stuff. I think it worked fine as a hybrid event. It was a sympathetic audience, online and in-person. TRU has a really interesting (and tension-filled, in good and bad ways) mix of online and in-person teaching practices, and I’ve met and listened to some really smart, thoughtful, reflective practitioners today. Almost all cross disciplinary boundaries – who knew you could combine culinary science and nursing? – so there’s a lot of invention going on. Unexpectedly, and far more than from a lot of bigger international conferences, I’m going to go home armed with a whole bunch of new ideas.

Understanding collective stupidity in social computing systems

Here are the slides from a talk I just gave to a group of grad students at AU in our ongoing seminar series, on the nature of collectives and ways we can use and abuse them. It’s a bit of a sprawl, covering some 30-odd years of a particularly geeky, semi-philosophical branch of my research career (not much on learning and teaching in this one, but plenty of termites) and winding up with very much a work in progress. I rushed through it at the end of a very long day/week/month/year/life but I hope someone may find it useful!

This is the abstract:

“Collective intelligence” (CI)  is a widely-used but fuzzy term that can mean anything from the behaviour of termites, to the ability of an organization to adapt to a changing environment, to the entire human race’s capacity to think, to the ways that our individual neurons give rise to cognition. Common to all, though, is the notion that the combined behaviours of many independent agents can lead to positive emergent changes in the behaviour of the whole and, conversely, that the behaviour of the whole leads to beneficial changes in the behaviours of the agents of which it is formed. Many social computing systems, from Facebook to Amazon, are built to enable or to take advantage of CI. Here I define social computing systems as digital systems that have no value unless they are used by at least two participants, and in which those participants play significant roles in affecting one another’s behaviour. This is a broad definition that embraces Google Search as much as email, wikis, and blogs, and in which the behaviour of humans and the surrounding structures and systems they belong to are at least as important as the algorithms and interfaces that support them.  Unfortunately, the same processes that lead to the wisdom of crowds can at least as easily result in the stupidity of mobs, including phenomena like filter bubbles and echo chambers that may be harmful in themselves or that render systems open to abuse such as trolling, disinformation campaigns, vote brigading, and successful state manipulation of elections.  If we can build better models of social computing systems, taking into account their human and contextual elements, then we stand a better chance of being able to avoid their harmful effects and using them for good.  To this end I have coined the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning  “multitude” and τέκτων (tektōn) meaning “builder”. In this seminar I will identify some of the main ochlotectural elements that contribute to collective intelligence, describe some of the ways it can be undermined, and explore some of the ramifications as they relate to social software design and management.

 

Published in JODDE – Learning: A technological perspective

Dron, J. (2024). Learning: A technological perspective. Journal of Open, Distance, and Digital Education, 1(2), Article 2. https://doi.org/10.25619/dpvg4687

My latest paper, Learning: A technological perspective, was published today in the (open) Journal of Open, Distance, and Digital Education. Methodologically, it provides a connected series of (I think) reasonable and largely uncontroversial assertions about the nature of technology and, for each assertion, offers some examples of why that matters to educators. In the process it wends its way towards a view of learning that is firmly situated in the field of extended cognition (and related complexivist learning theories such as Connectivism, Rhizomatic Learning, Networks of Practice, etc), with a technological twist that is, I think, pragmatically useful and theoretically interesting. Much of it repeats ideas from How Education Works but it extends and generalizes them further into the realms of intelligence and cognition through what I describe as the technological connectome.

I wrote this paper to align with the themes of the journal so, as a result, it has a greater focus on education than on the technological connectome, but I intend to write more on the subject some time soon. The essence of the idea is that what we recognize as intelligent behaviour consists largely of intracranial technologies like words, symbols, theories, models, procedures, structures, skills, ways of doing things, and so on – our cognitive gadgets – that we largely share with others, and that exist in vastly interconnected, hugely recursive, massively layered assemblies in and beyond our heads. I invoke Reed’s Law to help explain how and why this makes our intracranial cognition so much greater than the neural networks that host it: it’s not just the neural connections but the groups and multi-scaled clusters of technological entities that emerge as a result that can then be a part of the network that embodies them, and of one another, and so on and so on. In passing, I have a vague and hard-to-express hunch that the “and so on” is at least part of the answer to the hard problem: networks that form other networks that themselves become parts of the networks that form them (rinse and repeat) seem to me like a potential path to self-consciousness. However, the ludicrous levels of intertwingularity implied by this, not to mention an almost total absence of any idea about the underlying mechanism, tie my little mind in knots that I cannot yet and probably will never unravel.
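To give a rough sense of the scaling argument I am leaning on (a back-of-the-envelope illustration, not anything taken from the paper itself): where Metcalfe’s Law values a network of N nodes by the number of possible pairwise links, Reed’s Law counts the possible subgroups that could form, which grows exponentially rather than quadratically:

\[ \text{Metcalfe: } \binom{N}{2} = \frac{N(N-1)}{2} \qquad\qquad \text{Reed: } 2^{N} - N - 1 \]

Even for a modest N = 50, the first gives 1,225 possible links while the second gives somewhere around 10^15 possible groupings, which is the intuition behind the claim that the clusters and assemblies emerging from a network can vastly outnumber the connections of the network that hosts them.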

At least as importantly, these private intracranial technologies are in turn parts of even greater assemblies that extend into our bodies, our environments, and above all into the technologies around us, and thence into the minds of others. To a large extent it is our ability to make use of and participate in this extended technological connectome, that is both within us and beyond us, that forms the object, the subject, and the purpose of education. Our technologies as much form a part of our cognition as they enable it. We continuously shape and are shaped by them, assembling and reassembling them as we move into the adjacent possibles that result, creating further adjacent possibles every time we do, for ourselves and others. There is something incredibly awesome about that.

Abstract

This paper frames technology as a phenomenon that is inextricable from individual and collective cognition. Technologies are not “the other”, separate from us: we are parts of them and they are parts of us. We learn to be technologies as much as we learn to use them, and each use is itself a technology through which we participate both as parts and as creators of nodes in a vast technological connectome of awesome complexity. The technological connectome in turn forms a major part of what makes us, individually and collectively, smart. With that framing in mind, the paper is presented as a series of sets of observations about the nature of technology followed by examples of consequences for educators that illustrate some of the potential value of understanding technology this way, ending with an application of the model to provide actionable insights into what large language models imply for how we should teach.

How AI works for education: an interview with me for AACE Review

Thanks to Stefanie Panke for some great questions and excellent editing in this interview with me for the AACE Review.

The content is in fact the product of two discussions, one coming from student questions at the end of a talk that I gave for the Asian University for Women just before Christmas, the other asynchronously with Stefanie herself.

Stefanie did a very good job of making sense of my rambling replies to the students, which spanned quite a few issues: some from my book, How Education Works, some about (mainly) generative AI, and a little about the intersection of collective and artificial intelligence. Stefanie’s own prompts were great: they encouraged me to think a little differently, and to take some enjoyable detours along the way around the evils of learning management systems, artificially generated music, and social media, as well as a discussion of the impact of generative AI on learning designers, thoughts on legislation to control AI, and assessment.

Here are the slides from that talk at AUW – I’ve not posted them separately because hardly any of the slides are new: the deck mostly cobbles together two recent talks, one for Contact North and the other my keynote for ICEEL ’24. The conversation afterwards was great, though, thanks to a wonderfully thoughtful and enthusiastic bunch of very smart students.