Edison’s Infinite Workshop: Innovation and education in the age of Cognitive Santa Claus Machines (slides from my keynote for IFERP’s EdInnovate 2026)

Statues in Nek Chand’s Rock Garden, illustrating the power of bricolage as a creative process (photo by the author)

I’ve just finished giving a brief keynote for IFERP’s 3rd EdInnovate conference in Tokyo (sadly, because I love Tokyo in the spring, I was online). Here are the slides. The conference was great: they put all of the keynotes and invited talks on a single day, with a very international and cross-disciplinary bunch of thought leaders (and me), and many of us were talking about closely related themes of rehumanizing and transforming education, from very different perspectives. Though most of it confirmed what I already know, I learned a lot.

The gist of my talk was that generative AI challenges us to transform both how we teach and what we teach. I have spoken quite a bit about the “how” in the past – essentially it is to double down on the tacit, the relational, and the social, to care about and to empower learners, to focus on what it means to be a human in whatever fields we are trying to teach. The stuff we should already have been doing.

The “what” is new. GenAIs are pretty good at creating stuff, and that’s a problem because it is very, very tempting to get them to think for us (hence cognitive Santa Claus machines: we delegate the thinking to them so that we don’t have to). We now have access to most human knowledge, at a (mostly) expert level, with little skill needed to elicit any of it. These things are like search engines that actually give us what we are searching for, in detail, and then do whatever it was that we were planning to do with the search results on our behalf. If our descendants are not to be less than us (and I really want more for my own grandchildren), we now have to figure out what to do with that. If the answer is to turn in an essay or perform an assignment that any AI could do at least as well, then the world will end with a whimper. Our job is to take that, problematize it, and use it to create more than any of us (human or machine) could have created alone. Luckily we already have a model for that: bricolage, or tinkering.

Bricolage has got a bad rap in the past, often compared unfavourably with engineering (notably by Lévi-Strauss, who defined it and saw it as primitive) but, as Papert and Turkle wrote many years ago, it is a very legitimate way of engaging with the concrete, a highly creative activity in its own right, and it can be a very powerful approach to design. The photo at the top of this post shows just a handful of the thousands of stunning artworks created by Nek Chand and his team, all of them built from the waste products of the industrial city of Chandigarh – pieces of wire, chunks of porcelain, sacks of concrete, and other found objects. I have visited twice and cried at the beauty of it both times.

I have written of bricolage before, e.g. here and here (nicely reported on and more clearly expressed by Stefanie Panke), as a means of researching things that don’t (yet) exist, and I intend to write more. It seems to me, though, that this is one of the key skills that we should be developing for ourselves and for our students, not just for research but as a process and product of learning. It is the natural evolution of the steady progress from high-resolution to low-resolution cognition that has driven human progress for millennia. In the past we built on and with what other humans had already done: sharing parts of our cognition through technologies (including language and art) is and always has been what makes us smart; we think with our creations. The more we create, the more we can create. Now we have machines that are themselves bricoleurs par excellence, capable of producing any parts or pieces we can imagine, at vast scale, and quite a few we cannot. This is different. If we take advantage of it, we can continue the technology-fuelled exponential growth that is a hallmark of our species (and, to be perfectly clear, art, writing, poetry, architecture, music, and all the humanities are among the most significant of those technologies). If we don’t, we face not just the model collapse of genAIs but, ultimately, the collapse of our own cognition. This is not about replicating what we can already do. It’s about being able to do what we cannot yet imagine. That seems like a good mission for education to me.

Just a metatool? Some thoughts on why generative AIs are not tools

Hammer holding an AI nail

Many people brush generative AI aside as being just a tool. ChatGPT describes itself as such (I asked). I think it’s more complicated than that, and this post is going to be an attempt to explain why. I’m not sure about much of what follows and welcome any thoughts you may have on whether this resonates with you and, if not, why not.

What makes something a tool

I think that to call something a tool is shorthand for it having all of the following five attributes:

  1. It is an object (physical, digital, cognitive, procedural, organizational, structural, conceptual, spiritual, etc. – i.e. the thing we normally identify as the tool),
  2. used with/designed for a purpose, that
  3. can extend the capabilities of an actor (an intelligent agent, typically human), who
  4. may perform an organized action or series of actions with it, that
  5. cause changes to a subject other than the tool itself (such as a foodstuff, a piece of paper, a mental state, or a configuration of bits).

More informally, less precisely, but perhaps more memorably:

A tool is something that an intelligent agent does something with in order to do something to something else

Let me unpack that a bit.

A pebble used as a knife sharpener is a tool, but one used to reinforce concrete is not. A pen used to write on paper is a tool, but the paper is not. The toolness in each case emerges from what the agent does and the fact that it is done to something, in order to achieve something (a sharp knife, some writing).

Any object we label as a tool can become part of another with different organization. A screwdriver can become an indefinitely large number of other tools apart from one intended for driving screws. In fact, almost anything can become a tool with the right organization. The paper can be a tool if it is, say, used to scoop up dirt. And, when I say “paper”, remember that this is the label for the object I am calling a tool, but it is the purpose, what it does, how it is organized, and the subject it acts upon that makes it so.

It is not always easy to identify the “something else” that a tool affects. A saw used to cut wood is an archetypal tool, but a saw played with a bow to make music is, I think, not. Perhaps the bow is a tool, and maybe we could think of the saw as a tool acting on air molecules, but I think we tend to perceive it as the thing that is acted upon rather than the thing we do something with.

Toolness is intransitive: a computer may be a tool for running programs, and a program running on it may be a tool that fixes a corrupt disk, but a computer is not a tool for fixing a corrupt disk.

A great many tools are also technologies in their own right. The intention and technique of the tool maker combine with those of the tool user, so the tool user may achieve more (or achieve it more reliably, faster, more consistently, etc.) than would be possible without both. A fountain pen adds more to the writing assembly than a quill, for instance, and so demands less of the writer. Many tools are partnerships of this nature, allowing the cognition of more than one person to be shared. This is the ratchet that makes humans smart.

Often, the organization performed by the maker of a technology entirely replaces that of the tool user. A dish sponge is a tool, but a dishwasher is not: it is an appliance. Some skill is needed to load it but the dishwashing itself – the purpose for which it is designed – is entirely managed by the machine.

The case is less clear for an appliance like, say, a vacuum cleaner. I think this is because there are two aspects to the device: the mechanism that autonomously sucks dirt is what makes it an appliance, but the hose (or whatever) used to select the dirt to be removed is a tool. This is reflected in common usage, inasmuch as a vacuum cleaner is normally sold with what are universally described as tools (i.e. the things that a person actively manipulates). The same distinction is still there in a handheld machine, too – in fact, many come with additional tools – though I would be much more comfortable describing the whole device as a tool, because that’s what is manipulated to suck up the dirt. Many power tools fit in this category: they do some of the work autonomously but they are still things people do something with in order to do something to something else.

Humans can occasionally be accurately described as tools: the movie Swiss Army Man, for instance, features Daniel Radcliffe as a corpse that turns out to have many highly inventive uses. For real live humans, though, the case is less clear. Employees in scripted call centres, or teachers following scripted lesson plans, are more like appliances than tools: having been “programmed”, they run autonomously, so the scripts may be tools but the people are not. Most other ways of using other people are even less tool-like. If I ask you to pick up some shopping for me, say, then my techniques of persuasion may be tools, but you are the one organizing phenomena to shop, which is the purpose in question.

The case is similar for sheepdogs (though they are not themselves tool users), which I would be reluctant to label as tools, even though skills are clearly needed to make them do our bidding and they do serve tool-like purposes as part of the technology of shepherding. The tools, though, are the commands, methods of training, treats, and so on, not the animals themselves.

Why generative AIs are not tools

For the same reasons of intransitivity that mean dishwashers, people, and sheepdogs are not normally tools, neither are generative AIs. Prompts and other means of getting AIs to do our bidding are tools, but generative AIs themselves work autonomously. This comes with the proviso that almost anything can be repurposed, so there is nothing that is not at least latently a tool but, at least in their most familiar guises, generative AIs tend not to be.

Unlike conventional appliances, but more like sheepdogs, the work generative AIs perform is neither designed by humans nor scrutable to us. Unlike sheepdogs, but more like humans, generative AIs are tool users, too: not just (or not so much) of words, but of libraries, programming languages, web crawlers, filters, and so on. Unlike humans, though, generative AIs act with their users’ intentions, not their own, expressed through the tools we use to interact with them. They are a bit like partial brains, perhaps: remarkably capable but neither aware of nor able to use that capability autonomously.

It’s not just chatbots. Many recommender systems and search engines (increasingly incorporating deep learning) also sit uncomfortably in the category of tools, though they are often presented as such. Amazon’s search, say, is not (primarily) designed to help you find what you are looking for but to push things at you that Amazon would like you to buy, which is why you must trawl through countless not-quite-right things despite it being perfectly capable of exactly matching your needs. If it is anyone’s tool, it is Amazon’s, not ours. The same goes for a Google search: the tools are your search terms, not Google Search, which acts quite independently in performing the search and returning results that are likely more beneficial to Google than to you. This is not true of all search systems. If I search for a file on my own computer and it fails to provide what I am looking for, that is a sign that the tool (and I think it is a tool, because the results should be entirely determinate) is malfunctioning. Back in those far-off days when Amazon wanted you to find what you wanted, or Google tried to provide the closest match to your search term, we could at least think of them, if not as tools, then as appliances designed to be controlled by us.

I think we need a different term for these things. I like “metatool” because it is catchy and fairly accurate. A metatool is something that uses tools to do our bidding, not a tool in its own right. It is something that we use tools to act upon, and that is itself a tool user. I think this is better than a lot of other metaphors we might use: slave, assistant (Claude describes itself, incidentally, not as ‘merely’ a tool but as an intelligent assistant), partner, co-worker, contractor, etc. all suggest more agency and intention than generative AIs actually possess, while appliance, machine, device, etc. fail to capture the creativity, tailoring, and unpredictability of the results.

Why it matters

The big problem with treating generative AIs as tools is that it overplays our own agency and underplays the creative agency of the AI. It encourages us to think of them, like actual tools, as cognitive prostheses: ways of augmenting and amplifying but still using and preserving human cognitive capabilities, when what we are actually doing is using theirs. It also encourages us to think the results will be more deterministic than they actually are. This is not to negate the skill needed to use prompts effectively, nor to underplay the need to understand what the prompt is acting upon. Just as the shepherd needs to know the sheepdog, the genAI user has to know how their tools will affect the medium.

Like all technologies, these strange partial brains effectively enlarge our own. All other technologies, though, embed or embody other humans’ thinking and/or our own. Though they largely consist of the compressed, expressed thoughts of millions of people, AIs’ thoughts are not human thoughts: even using the most transparent of them, we have very little access to the mechanisms behind their probabilistic deliberations. And yet nor are they independent thinking agents. Like any technology, we might think of them as cognitive extensions but, if they are, then it is as though we have undergone an extreme form of corpus callosotomy, or are experiencing something like Jaynes’s bicameral mind. Generative AIs are their own thing: an embodiment of collective intelligence as well as contributors to our own, wrapped up in a whole bunch of intentional programming and training that imbues them in part with (and I find this very troubling) the values of their creators, and in part with the summed output of the great many humans who created the data on which they are trained.

I don’t know whether this is, ultimately, a bad thing. Perhaps it is another stage in our evolution that will make us more fit to deal with the complex world and new problems in it that we collectively continue to create. Perhaps it will make us less smart, or more the same, or less creative. Perhaps it will have the opposite effects. Most likely it will involve a bit of all of that. I think it is important that we recognize it as something new in the world, though, and not just another tool.