Cognitive Santa Claus machines

[Image: a cognitive Santa Claus machine receiving human cognitive products and outputting thoughts]

I’ve just submitted a journal paper (shameless plug: to AACE’s AIEL, of which I am an associate editor) in which I describe generative AIs as cognitive Santa Claus machines. I don’t know if it’s original, but the idea appeals to me. Whatever thought we ask for, genAIs will provide it, mining their deep, deep wells of lossily compressed recorded human knowledge to provide us with skills and knowledge we do not currently have. Often they surprise us with unwanted gifts, and some are not employing the smartest elves on the block but, by and large, they give us the thinking (or a near facsimile of it) we want without having to wait until Christmas Eve.

Having submitted the paper, I now realize that genAIs are not just standalone thinking appliances: they can potentially be drivers of general-purpose Santa Claus machines. As an active user and, above all, creator of all sorts of digital technologies, I have found them, for example, incredibly handy for quickly churning out small apps and utilities that are useful but that would not be worth the week or more of effort it would otherwise take me to build them. It is already often quicker to have one build a Quick Action for my Mac Finder than it would be to seek out an existing utility on the Web. The really interesting thing, though, is that they are perfectly capable of creating .scad files (or similar) that can be 3D printed. My own 3D printer has been gathering dust in a basement with a dead power supply for a few years so I have not tested the output yet, but I have already used Claude, ChatGPT, and Gemini to design and provide full instructions and software for some quite complex electronics projects: between them they do a very good job, by and large, notwithstanding odd hallucinations and memory lapses. My own terrible soldering and construction skills are the only really weak points in the process.
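To give a flavour of what that looks like, here is a minimal sketch of the kind of .scad file a genAI might produce from a plain-English request – a hypothetical example for illustration, not verbatim model output – for a parametric replacement appliance knob with a D-shaped shaft hole:

    // Parametric replacement knob: a hypothetical illustration of the sort
    // of OpenSCAD (.scad) file a genAI can generate from a plain request.
    knob_diameter  = 30;   // outer diameter of the knob body (mm)
    knob_height    = 15;   // overall height (mm)
    shaft_diameter = 6;    // diameter of the D-shaped shaft (mm)
    shaft_flat     = 4.5;  // distance from the flat to the opposite side (mm)

    difference() {
        union() {
            // main body
            cylinder(h = knob_height, d = knob_diameter, $fn = 64);
            // ridges around the rim for grip
            for (a = [0 : 30 : 330])
                rotate([0, 0, a])
                    translate([knob_diameter / 2, 0, 0])
                        cylinder(h = knob_height, d = 3, $fn = 16);
        }
        // blind D-shaped hole for the shaft, cut from underneath
        translate([0, 0, -1])
            difference() {
                cylinder(h = knob_height - 3, d = shaft_diameter, $fn = 32);
                translate([shaft_flat - shaft_diameter / 2, -shaft_diameter, -1])
                    cube([shaft_diameter, shaft_diameter * 2, knob_height]);
            }
    }

Render it, export an STL, and a slicer turns it into printer instructions. The point is not this particular knob but that the designing, the parameterizing, and even the commenting can all be delegated to the machine, leaving only the asking to us.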

One way or another, for the first time in the existence of our species, we now have machines that do not just perform predetermined orchestrations or participate as tools in our own orchestrations: they do the orchestrating for us. We therefore have at our fingertips machines that are able (in principle) to make any technology we can imagine, including any other machine – including another 3D printer. The intellectual property complexities that will emerge when you can ask ChatGPT to, say, make you a smartphone or a house to your precise specifications will make current copyright disputes pale by comparison. Phones might be tricky, for now, but houses are definitely possible. There are many (including my own son) who are looking further than that, down to the molecular level of what we can build, and that is not to mention the long-gestating field of nanobots.

This is a level of abundance that has until now been the stuff of speculative fiction and, even then, sci-fi mostly talks of replicators, not active creators of something new. Much as in the evolution of life, there have been moments in the evolution of technology when evolvability itself has evolved: inventions like writing, technologies of transport, the Internet, the electronic valve, the wheel, and steam power have disproportionately accelerated the rate of evolution, bringing exponential increases in the adjacent possible. This might just be the biggest such moment yet.

Education in the age of Santa Claus machines

Where education sits in all of this is complicated. To a very large extent, the explicit goal of educational systems, at least, is to teach us how to operate the tools and other technologies of our cultures, by which I mean the literacies that allow us to participate in a complex, technologically mediated society, from writing to iambic pentameter, from experiments to theories. In brief, the stuff you can specify as learning outcomes. Even now, with the breakneck exponential growth in technologies of all kinds that has characterized the last couple of centuries, the rate of change is slow enough, and the need for complex skills is growing steadily enough, that there is a very clear demand for educational systems to provide them, and there are roughly enough skilled teachers to teach them.

The need persists because, when we create technologies, we are not just creating processes, objects, structures, and tools: we are creating gaps in them that humans must fill with soft or hard technique, because the use of a technology is also a technology. This means that, up until now, the more technologies we have created, the more we have had to learn in order to use them. Though this is offset somewhat by the deskilling orchestrations built into the machines we create (often the bulk of the code in a digital project is concerned with lessening cognitive load, and even a humble door handle is a cognitive load-reducer), the world really is, and always has been, getting more complex. We need education more than ever.

Generative AIs modify that equation. Without genAI, creating 3D designs, say, and turning them into printed objects still demands vast amounts of human skill: skills with quite complex software, math, geometry, materials science, machinery, screwdrivers, ventilation, spatial reasoning, and so on. Black-boxing and automation can help: some of that complexity may be encapsulated in smart interfaces and algorithms that simplify the choices needed but, until now, there has usually been a trade-off between fine-grained control and ease of use. GenAIs restore that fine-grained control, to a large extent, without demanding immense skill. We just have to be able to describe what we want, and to follow instructions for playing our remaining roles, like applying glue sticks or dunking objects in acetone baths. The same is true for non-physical genAI products.

So what does it mean to be able to use the technologies of your culture if there are literally millions of new and unique ones every day? Not just new arrangements of the same existing technologies, like words, code, or images, but heterogeneous assemblies that no one has ever thought of before, tailor-made to your precise specifications. I have so many things I want to make this way. Some assembly will still be needed for many years to come, but we will get ever closer to Theodore Taylor’s original vision of a fully self-contained Santa Claus machine, needing nothing but energy and raw materials to make anything we can imagine. If educational institutions are still needed, what will they teach, and how will they teach it? One way they may respond is to largely ignore the problem, as most are doing now.

If educational systems do continue – without significant modification, without fully embracing the new adjacent possibles – to do nothing but teach and assess existing skills that AIs can easily perform at least as well, two weird things will happen. Firstly, sensible time-poor students will use the AIs to do the work or, at the very least, to help them. Secondly, sensible time-poor teachers will use the AIs to teach because, if all you care about is achieving measurable learning outcomes, AIs can or soon will be able to do that better, faster, and cheaper. That would make both roles rather pointless. But teaching doesn’t just teach measurable skills; it teaches ways of being human. The same is true when AIs do the teaching: it’s just that we then learn ways of being human from machines. All of which (and much more that I have written and spoken about more than enough in the past) suggests that continuing along our existing outcomes-driven educational path might not be the smartest move – or failure to move – we have ever made.

It’s a systems thing. GenAIs are coming into a world that is already full of systems, and systems above all else have a will to survive. In our education systems we are still dealing with the problems caused by mediaeval monks solving problems with the limited technologies available to them. Once things start to depend on other things and subsystems form, the people within them get very invested in solving local problems, not system-level problems; their solutions cause problems for other local subsystems, and so it goes on in a largely unbroken chain, rich in recursive sub-cycles, until any change made in one part is counteracted by changes in others. What we fondly think of as good pedagogy, for instance, is not a universal law of teaching: it is how we solve problems caused by how our systems have evolved to teach. I think the worst thing we can possibly do right now is to use genAIs to solve the local problems we face as teachers, as learners, as administrators, and so on. If we use them to replicate the practices we have inherited from mediaeval monks then, instead of transforming our educational systems, we will actively reinforce everything that is wrong with them, because we will just make them better or faster at doing what they already do.

But of course we will do exactly that because what else can we do? We have problems to solve and genAIs offer solutions.

Three hopeful paths

I reckon that there are three hopeful, interlocking, and complementary paths we can take to prevent at least the worst-case impacts of combining genAI with local thinking:

I. embrace the machine

The first hopeful path is to embrace the machine. It seems to me that we should be focusing a bit less on how to use or replicate the technologies we already have and a lot more on the technologies we can dream of creating. If we wish (and have the imagination to persuade a genAI to do it) we can choose exactly how much human skill is needed for any technological assembly, so the black-boxing trade-off that automation has always imposed upon us is not necessarily an issue any more: we can decide how much soft technique to leave for humans in any given assembly instead of having it foisted upon us. For the first time, we can adjust the granularity of our cognition to match our needs and wishes rather than the availability of technologies. As a trivial example, if you want to nurture the creative skills of, say, drawing, you can build a technology that supports them while automating the things you’d rather not think about like, say, colouring it in.

From an educational perspective this is transformative. It frees us from the need for prerequisite skills and scaffolding, because they can be provided by the genAI, which in turn gives us a laser focus on what we want to learn, not the peripheral parts of the assembly. At one fell swoop (think about it) that removes the need for disciplinary boundaries and courses, and lowers the cognitive barriers to participation, and that’s just a start: there are many dominoes that fall once we start pushing at the foundations. It puts the accomplishment of authentic, meaningful, personally relevant, sufficiently challenging but not overwhelming tasks within everyone’s reach. As well as shaping education to the technologies of our cultures, we can shape the technologies to the education.

A potential obstacle to all of that is that very few of us have any idea where the adjacent possibles lie, so how can we teach what, by definition, we do not know? I think the answer is simple: just let go, because that is not what or how we should be teaching anyway. We should be teaching ways of making that journey, supporting learners along the way, nurturing communities, and learning with them, not providing maps for getting there. GenAIs can help with that: nudging, connecting, summarizing, and so on. They can also help us to track progress and harvest learning outcomes if we still really need that credentialing role. That’s one of the really cool things about genAIs: we don’t need to be trained to use them because, if we don’t know how to do any of this, they can teach us what we need to know. But, on its own, this is not enough.

II. embrace the tacit dimension

With the explicit learning outcomes taken care of (OK, that’s a bit of an exaggeration), the second hopeful path is to celebrate and double down on the tacit curriculum: to focus on the values, ways of thinking, passions, relationships, and meaning-making that learning from other humans has always provided for free while we teach students to meet those measurable learning outcomes. If we accept that the primary role of educational systems is social – to do with meaning-making, identity, and growth, treating everyone as an end in themselves, not as a means to an end – then we avoid or mitigate most of the risks of learning to be human through machines, and that is something that even those of us who have no idea how to use genAI can contribute to in a meaningful and useful way. Again, this is highly transformative. We must focus on the implicit, the tacit, and the idiosyncratic, because that is what is left when you take the learning outcomes away. Imagine a world in which learners choose an institution because of its communities and the quality of human relationships it supports, not its academic excellence. Imagine that this is what “academic excellence” means. I like this world.

III. embrace the human

The third hopeful path, interlocked with the other two, is to celebrate more fully the value of people doing things even though machines can do them better.

GenAIs are a wholly new kind of technology that changes a lot of the rules, so we should be very wary of drawing too much from the lessons of the past; even so, it is worth reflecting on how the introduction of new technologies that appeared to replace older ones has played out before. When photography was new, for instance, photographers often tried to replicate painterly styles, but photography also led to an explosion of new aesthetics for painting and a re-evaluation of what value a human artist creates. Without photography it is unlikely that Impressionism would have happened, at least at the point in history that it did: photography’s superior accuracy in rendering images of the world freed painters from the expectation of realism and eventually led to a different and more human understanding of what “realism” means, as well as many new kinds of visual abstraction. Photography also created its own adjacent possibles, influencing composition and choices of subject matter for painters and, of course, it became a major art form in its own right.

The fact that AIs can (or at least eventually will) produce better images than most humans does not mean we should or will stop drawing. It just means the reasons for doing so will be fewer, and/or that the balance of those reasons will shift. There might not be so many jobs that involve drawing or painting, but we will almost certainly value what humans produce more than ever, in both the product and the process. We will care about what a picture expresses of our human experience and how it expresses it, and perhaps about its cognitive benefits, rather than its technical precision: exactly the kinds of things that make drawing valuable for human infants to learn, as it happens. On the subject of human infants, this is probably why many more of us have our children’s or grandchildren’s pictures on our refrigerators than the products of diffusion models, and why those pictures often share pride of place with the work of great masters on our walls.

The same is almost certainly true for teaching: generative AIs are, I hope, teaching’s photography moment, the point in history at which we step back and notice that what makes the activity valuable is not the transfer of explicit skills and knowledge so much as the ways of being human that are communicated along with it: the passion (or even the lack of it), the meaning, the values, the attitudes, the ways of thinking. When the dust settles, we are going to be far more appreciative of the products of humans working with dumb technologies than of the products of genAIs, even when the genAIs do the job measurably better. I think that is mostly a good thing, especially taking into account the many potential new heights of as-yet-unforeseeable creation that will become possible when we partner with the machines and step into more of the adjacent possibles.

Embracing the right things

Technologies are often seen as solutions to problems, but that is only one (and often the least interesting) part of what they do. Firstly, they invariably create new problems to solve. Secondly, and maybe more importantly, they create new adjacent possibles. Both of these other roles are open-ended and unprestatable: no amount of prior research will reveal more than a fraction of them. Finally, therefore, and as an overarching rule of thumb, I think it is incumbent on all of us who are engaged in the educational endeavour to play with these things in order to discover those adjacent possibles and, if we do choose to use them to solve our immediate problems, to discover as much as we can of the Faustian bargains they entail. Deontology is our friend in this: whenever we use genAI for a purpose, we should ask ourselves what would happen if everyone in the world in a similar situation used it for that purpose. Would we want to live in that world? What would our days be like?

This is not as hypothetical as it is for most ethical decisions: there is a very strong chance that, for instance, a large percentage of teaching to learning outcomes will very soon be performed (directly or indirectly) by genAI, and we know that a significant (though hard-to-quantify) amount of student work is already the direct or indirect result of it. The decisions we face are faced by many others, and they are happening at scale. We may have some substantial ethical concerns about using these things – I certainly do – but I think the consequences of not doing so are considerably worse. We are not going to stop it by refusing to engage. We are the last generation to grow up without genAI, so it is our job to try to preserve what should be preserved, and to change what shouldn’t be.