Cognitive Santa Claus machines

I’ve just submitted a journal paper (shameless plug: to AACE’s AIEL, of which I am an associate editor) in which I describe generative AIs as cognitive Santa Claus machines. I don’t know if the idea is original but it appeals to me. Whatever thought we ask for, genAIs will provide it, mining their deep, deep wells of lossily compressed recorded human knowledge to provide us with skills and knowledge we do not currently have. Often they surprise us with unwanted gifts, and some are not employing the smartest elves on the block but, by and large, they give us the thinking (or a near facsimile of it) we want without having to wait until Christmas Eve.

Having submitted the paper, though, I now realize that they are not just standalone thinking appliances: they can potentially be drivers of general-purpose Santa Claus machines. As an active user of and, above all, creator of all sorts of digital technologies, I have found them, for example, incredibly handy for quickly churning out small apps and utilities that are useful but that would not be worth the week or more of effort they would otherwise take me to build. It is already often quicker to have one build a Quick Action for my Mac Finder than it would be to seek out an existing utility on the Web. The really interesting thing, though, is that they are perfectly capable of creating .scad files (or similar) that can be 3D printed. My own 3D printer has been gathering dust in a basement with a dead power supply for a few years so I have not tested the output yet, but I have already used Claude, ChatGPT, and Gemini to design and provide full instructions and software for some quite complex electronics projects: between them they do a very good job, by and large, notwithstanding odd hallucinations and memory lapses. My own terrible soldering and construction skills are the only really weak points in the process.
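By way of a concrete illustration (an invented example of my own, not anything from the paper), here is the kind of thing a genAI will cheerfully produce on request: a few lines of Python that write a parametric .scad file for a simple open-topped box, ready to render in OpenSCAD and export for printing. All of the names and dimensions are made up.

```python
# A minimal, hypothetical sketch of the sort of thing a genAI can produce on
# request: a tiny Python script that writes a parametric OpenSCAD file for a
# simple open-topped box. Render it in OpenSCAD, export an STL, and it is
# ready for a 3D printer. All dimensions (in millimetres) are invented.

def open_box_scad(width=40, depth=30, height=20, wall=2):
    """Return OpenSCAD source for an open-topped box."""
    return f"""
// Open-topped box: outer shell minus inner cavity
difference() {{
    cube([{width}, {depth}, {height}]);
    translate([{wall}, {wall}, {wall}])
        cube([{width - 2 * wall}, {depth - 2 * wall}, {height}]);
}}
"""

if __name__ == "__main__":
    with open("box.scad", "w") as f:
        f.write(open_box_scad())
    print("Wrote box.scad - open it in OpenSCAD and export an STL to print.")
```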

One way or another, for the first time in the existence of our species, we now have machines that do not just perform predetermined orchestrations or participate as tools in our own orchestrations: they do the orchestration for us. We therefore have at our fingertips machines that are able (in principle) to make any technology – including any other machine (including another 3D printer) – we can imagine. The intellectual property complexities that will emerge when you can ask ChatGPT to, say, make you a smartphone or a house to your precise specifications make current copyright disputes pale by comparison. Phones might be tricky, for now, but houses are definitely possible. There are many (including my own son) who are looking further than that, down to what we can build at a molecular level, and that is not to mention the long-gestating field of nanobots.

This is a level of abundance that, until now, has been the stuff of speculative fiction and, even then, sci-fi mostly talks of replicators, not active creators of something new. Much as in the evolution of life, there have been moments in the evolution of technology when evolvability itself has evolved: inventions like writing, technologies of transport, the Internet, the electronic valve, the wheel, or steam power have disproportionately accelerated the rate of evolution, bringing exponential increases in the adjacent possible. This might just be the biggest such moment yet.

Education in the age of Santa Claus machines

Where education sits in all of this is complicated. To a very large extent, the explicit goal of educational systems, at least, is to teach us how to operate the tools and other technologies of our cultures, by which I mean the literacies that allow us to participate in a complex, technologically mediated society, from writing to iambic pentameter, from experiments to theories: in brief, the stuff you can specify as learning outcomes. Even now, with the breakneck exponential increase in technologies of all kinds that has characterized the last couple of centuries, the rate of change is slow enough, and the need for complex skills is growing steadily enough, that there is a very clear demand for educational systems to provide them, and there are roughly enough skilled teachers to teach them.

The need persists because, when we create technologies, we are not just creating processes, objects, structures, and tools: we are creating gaps in them that humans must fill with soft or hard technique, because the use of a technology is also a technology. This means that (up until now) the more technologies we create, the more we have had to learn in order to use them. Though this is offset somewhat by the deskilling orchestrations built into the machines we create (often the bulk of the code in a digital project is concerned with lessening the cognitive load, and even a humble door handle is a cognitive load-reducer), the world really is, and always has been, getting more complex. We need education more than ever.

Generative AIs modify that equation. Without genAI, creating 3D designs, say, and turning them into printed objects still demands vast amounts of human skill: skills in using quite complex software, math, geometry, materials science, machinery, screwdrivers, ventilation, spatial reasoning, etc., etc., etc. Black-boxing and automation can help: some of that complexity may be encapsulated in smart interfaces and algorithms that simplify the choices needed but, until now, there has usually been a trade-off between fine-grained control and ease of use. GenAIs restore that fine-grained control, to a large extent, without demanding immense skill. We just have to be able to describe what we want, and to follow instructions for playing our remaining roles, like applying glue sticks or dunking objects in acetone baths. The same is true for non-physical genAI products.

So what does it mean to be able to use the technologies of your culture if there are literally millions of new and unique ones every day? Not just new arrangements of the same existing technologies, like words, code, or images, but heterogeneous assemblies that no one has ever thought of before, tailor-made to your precise specifications. I have so many things I want to make this way. Some assembly will still be needed for many years to come, but we will get ever closer to Theodore Taylor’s original vision of a fully self-contained Santa Claus machine, needing nothing but energy and raw materials to make anything we can imagine. If educational institutions are still needed, what will they teach and how will they teach it? One way they may respond is to largely ignore the problem, as most are doing now.

If educational systems do continue – without significant modification, without fully embracing the new adjacent possibles – to do nothing but teach and assess existing skills that AIs can easily perform at least as well, two weird things will happen. Firstly, sensible time-poor students will use the AIs to do the work or, at the very least, to help them. Secondly, sensible time-poor teachers will use the AIs to teach because, if all you care about is achieving measurable learning outcomes, AIs can or soon will be able to do that better, faster, and cheaper. That would make both roles rather pointless. But teaching doesn’t just teach measurable skills; it teaches ways of being human. The same is true when AIs do it: it’s just that we then learn ways of being human from machines. All of which (and much more that I have written and spoken about more than enough in the past) suggests that continuing along our existing outcomes-driven educational path might not be the smartest move – or failure to move – we have ever made.

It’s a systems thing. GenAIs are coming into a world that is already full of systems, and systems, above all else, have a will to survive. In our education systems we are still dealing with the problems caused by mediaeval monks solving problems with the limited technologies available to them. Once things start to depend on other things and subsystems form, people within them get very invested in solving local problems, not system-level problems; those solutions cause problems for other local subsystems, and so it goes on, in a largely unbroken chain, rich in recursive sub-cycles, until any change made in one part is counteracted by changes in others. What we fondly think of as good pedagogy, for instance, is not a universal law of teaching: it is how we solve problems caused by how our systems have evolved to teach. I think the worst thing we can possibly do right now is to use genAIs to solve the local problems we face as teachers, as learners, as administrators, and so on. If we use them to replicate the practices we have inherited from mediaeval monks then, instead of transforming our educational systems, we will actively reinforce everything that is wrong with them, because we will just make them better or faster at doing what they already do.

But of course we will do exactly that because what else can we do? We have problems to solve and genAIs offer solutions.

Three hopeful paths

I reckon that there are three hopeful, interlocking, and complementary paths we can take to prevent at least the worst-case impacts of what happens when genAI is combined with local thinking:

I. embrace the machine

The first hopeful path is to embrace the machine. It seems to me that we should be focusing a bit less on how to use or replicate the technologies we already have and a lot more on the technologies we can dream of creating. If we wish (and have the imagination to persuade a genAI to do it) we can choose exactly how much human skill is needed for any technological assembly, so the black-boxing trade-off that automation has always imposed upon us is not necessarily an issue any more: we can choose exactly the amount of soft technique we want to leave for humans in any given assembly instead of having it foisted upon us. For the first time, we can adjust the granularity of our cognition to match our needs and wishes rather than the availability of technologies. As a trivial example, if you want to nurture the creative skill of, say, drawing, you can build a technology that supports it while automating the things you’d rather not think about, like, say, colouring it in. From an educational perspective this is transformative. It frees us from the need for prerequisite skills and scaffolding, because they can be provided by the genAI, which in turn gives us a laser focus on what we want to learn, not on the peripheral parts of the assembly. At one fell swoop (think about it) that negates the need for disciplinary boundaries, courses, and cognitive barriers to participation, and that’s just a start: there are many dominoes that fall once we start pushing at the foundations. It puts the accomplishment of authentic, meaningful, personally relevant, sufficiently challenging but not overwhelming tasks within everyone’s reach. As well as shaping education to the technologies of our cultures, we can shape the technologies to the education.

A potential obstacle to all of that is that very few of us have any idea where the adjacent possibles lie, so how can we teach what, by definition, we do not know? I think the answer is simple: just let go, because that’s not what or how we should be teaching anyway. We should be teaching ways of making that journey, supporting learners along the way, nurturing communities, and learning with them, not providing maps for getting there. GenAIs can help with that: nudging, connecting, summarizing, and so on. They can also help us to track progress and harvest learning outcomes, if we still really need that credentialing role. That’s one of the really cool things about genAIs: we don’t need to be trained to use them, because they can teach us whatever we need to know themselves. But, on its own, this is not enough.

II. embrace the tacit dimension

With the explicit learning outcomes taken care of (OK, that’s a bit of an exaggeration), the second hopeful path is to celebrate and double down on the tacit curriculum: to focus on the values, ways of thinking, passions, relationships, and meaning-making that learning from other humans has always provided for free while we teach students to meet those measurable learning outcomes. If we accept that the primary role of educational systems is social (to do with meaning-making, identity, and growth, treating everyone as an end in themselves, not as a means to an end), then most of the risks of learning to be human through machines are avoided or mitigated, and that is something that even those of us who have no idea how to use genAI can contribute to in a meaningful and useful way. Again, this is highly transformative. We must focus on the implicit, the tacit, and the idiosyncratic, because that’s what’s left when you take the learning outcomes away. Imagine a world in which learners choose an institution because of its communities and the quality of human relationships it supports, not its academic excellence. Imagine that this is what “academic excellence” means. I like this world.

III. embrace the human

The third hopeful path, interlocked with the other two, is to more fully celebrate the value of people doing things despite the fact that machines can do them better.

GenAIs are a wholly new kind of technology that changes a lot of rules, so we should be very wary of drawing too much from the lessons of the past, but it is worth reflecting on how the introduction of new technologies that appeared to replace older ones has played out before. When photography was new, for instance, photographers often tried to replicate painterly styles, but photography also led to an explosion of new aesthetics for painting and a re-evaluation of what value a human artist creates. Without photography it is unlikely that Impressionism would have happened, at least at the point in history that it did: photography’s superior accuracy in rendering images of the world freed painters from the expectation of realism and eventually led to a different and more human understanding of what “realism” means, as well as many new kinds of visual abstraction. Photography also created its own adjacent possibles, influencing composition and choices of subject matter for painters and, of course, it became a major art form in its own right. The fact that AIs can (or at least eventually will) produce better images than most humans does not mean we should or will stop drawing. It just means the reasons for doing so will be fewer, and/or that the balance of reasons for doing it will shift. There might not be so many jobs that involve drawing or painting, but we will almost certainly value what humans produce more than ever, in both the product and the process. We will care about what it expresses of our human experience and how it expresses it, and perhaps about its cognitive benefits, rather than about its technical precision: exactly the kinds of things that make it valuable for human infants to learn, as it happens. On the subject of human infants, this is why our refrigerators are far more likely to display our children’s or grandchildren’s pictures than the products of diffusion models, and why those pictures often share pride of place with the work of great masters on our walls.

The same is almost certainly true for teaching: generative AIs are, I hope, teaching’s photography moment, the point in history at which we step back and notice that what makes the activity valuable is not the transfer of explicit skills and knowledge so much as the ways of being human that are communicated along with it: the passion (or even the lack of it), the meaning, the values, the attitudes, the ways of thinking. When the dust settles, we are going to be far more appreciative of the products of humans working with dumb technologies than the products of genAIs, even when the genAI does it measurably better. I think that is mostly a good thing, especially taking into account the many potential new heights of as-yet-unforeseeable creation that will be possible when we partner up with the machines and step into more of the adjacent possibles.

Embracing the right things

Technologies are often seen as solutions to problems, but that is only one (and often the least interesting) part of what they do. Firstly, they also, invariably, create new problems to solve. Secondly, and maybe more importantly, they create new adjacent possibles. Both of these other roles are open-ended and unprestatable: no amount of prior research will tell us more than a fraction of them. Finally, therefore, and as an overarching rule of thumb, I think it is incumbent on all of us who are engaged in the educational endeavour to play with these things in order to discover those adjacent possibles and, if we do choose to use them to solve our immediate problems, to discover as much as we can of the Faustian bargains they entail. Deontology is our friend in this: whenever we use genAI for a purpose, we should ask ourselves what would happen if everyone in the world in a similar situation used it for that purpose, and whether we would want to live in that world. What would our days be like if they did? This is not as hypothetical as it is for most ethical decisions: there is a very strong chance that, for instance, a large percentage of teaching to learning outcomes will very soon be performed (directly or indirectly) by genAI, and we know that a significant (though hard-to-quantify) amount of student work is already the direct or indirect result of it. The decisions we face are faced by many others, and they are being made at scale. We may have some substantial ethical concerns about using these things – I certainly do – but I think the consequences of not doing so are considerably worse. We’re not going to stop it by refusing to engage. We are the last generation to grow up without genAI, so it is our job to try to preserve what should be preserved, and to try to change what shouldn’t.

 

Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research | TechTrends

The latest paper I can proudly add to my list of publications, Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research, has been published in the (unfortunately) closed journal TechTrends. Here’s a direct link to the paper that should, I hope, bypass the paywall, if it has not been used too often.

I’m 16th of 47 coauthors, led by the truly wonderful Junhong Xiao, who is the primary orchestrator and mastermind behind it. This is a companion piece to our Manifesto for Teaching and Learning in a Time of Generative AI and it starts where the other paper left off, delving further into what we don’t know (or at least do not agree that we know) and, taking up most of the paper, what we might do about that lack of knowledge. I think this presents a pretty useful and wide-ranging research agenda for anyone with an interest in AI and education.

Methodologically, it emerged through a collaborative writing process among a large international group of researchers in open, digital, and online learning. It’s not a random sample of people who happen to know one another: the huge group represents a rich mix of (extremely) well-established and (excellent) emerging researchers from a broad set of cultural backgrounds, covering a wide range of research interests in the field. Junhong did a great job of extracting the themes and organizing all of that into a coherent narrative.

In many ways I like this paper more than its companion piece. I think this is because, though its findings are – as the title implies – less well-defined than the first paper’s, I am more closely aligned with the assumptions, attitudes, and values that underpin the analysis. It grapples more firmly with the wicked problems and it goes deeper into the broader, situated, human nature of the systems in which generative AI is necessarily intertwingled, skimming over the more simplistic conversations about cheating, reliability, and so on to get at some meatier and more fundamental issues that, ultimately, relate to how and why we do this education thing in the first place.

Abstract

Advocates of AI in Education (AIEd) assert that the current generation of technologies, collectively dubbed artificial intelligence, including generative artificial intelligence (GenAI), promise results that can transform our conceptions of what education looks like. Therefore, it is imperative to investigate how educators perceive GenAI and its potential use and future impact on education. Adopting the methodology of collective writing as an inquiry, this study reports on the participating educators’ perceived grey areas (i.e. issues that are unclear and/or controversial) and recommendations on future research. The grey areas reported cover decision-making on the use of GenAI, AI ethics, appropriate levels of use of GenAI in education, impact on learning and teaching, policy, data, GenAI outputs, humans in the loop and public–private partnerships. Recommended directions for future research include learning and teaching, ethical and legal implications, ownership/authorship, funding, technology, research support, AI metaphor and types of research. Each theme or subtheme is presented in the form of a statement, followed by a justification. These findings serve as a call to action to encourage a continuing debate around GenAI and to engage more educators in research. The paper concludes that unless we can ask the right questions now, we may find that, in the pursuit of greater efficiency, we have lost the very essence of what it means to educate and learn.

Reference

Xiao, J., Bozkurt, A., Nichols, M., Pazurek, A., Stracke, C. M., Bai, J. Y. H., Farrow, R., Mulligan, D., Nerantzi, C., Sharma, R. C., Singh, L., Frumin, I., Swindell, A., Honeychurch, S., Bond, M., Dron, J., Moore, S., Leng, J., van Tryon, P. J. S., … Themeli, C. (2025). Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research. TechTrends. https://doi.org/10.1007/s11528-025-01060-6

Can GPT-3 write an academic paper on itself, with minimal human input?

Brilliant. The short answer is, of course, yes, and it doesn’t do a bad job of it. This is conceptual art of the highest order.

This is the preprint of a paper written by GPT-3 (as first author) about itself, submitted to “a well-known peer-reviewed journal in machine intelligence”. The second and third authors provided guidance about themes, datasets, weightings, and so on, but that’s as far as it goes. They do provide commentary as the paper progresses, but they tried to keep it as minimal as possible, so that the paper could stand or fall on its own merits. The paper is not too bad: a bit repetitive, a bit shallow, but it’s just a 500-word paper – hardly even an extended abstract – so that’s about par for the course. The arguments and supporting references are no worse than many I have reviewed, and considerably better than some. The use of English is much better than that of the majority of papers I review.

In an article about it in Scientific American, the co-authors describe some of the complexities of the submission process. They actually asked GPT-3 whether it consented to publication (it said yes), but this just touches the surface of some of the huge ethical, legal, and social issues that emerge. Boy, there are a lot of those! The second and third authors deserve a prize for this. But what about the first author? Well, clearly it does not deserve one, because its orchestration of phenomena is not for its own use, and it is not even aware that it is doing the orchestration. It has no purpose other than that of the people training it. In fact, despite having written a paper about itself, it doesn’t even know what ‘itself’ is in any meaningful way. But it raises a lot of really interesting questions.

It would be quite interesting to fine-tune GPT-3 with (good) student assignments to see what happens. I think it would potentially do rather well. If I were an ethically imperfect, extrinsically driven student with access to this, I might even get it to write my assignments for me. The assignments might need a bit of tidying here and there, but the quality of the prose and the general quality of the work would probably earn at least a good B, and quite possibly an A, with very little extra tweaking. With a bit more training it could almost certainly mimic a particular student’s style, including all the quirks that would make it seem more human. Plagiarism detectors wouldn’t stand a chance, and I doubt that many (if any) humans would be able to say with any assurance that it was not the student’s own work.
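For the curious, here is a rough sketch of what that experiment would involve, using the openai Python package as it existed in the GPT-3 era (pre-1.0). I have not run this; the file name and data format are my own invented placeholders.

```python
# A purely illustrative sketch (not something I have tried) of fine-tuning
# GPT-3 on a corpus of past assignments, using the openai package circa 2022.
# Each line of the (hypothetical) JSONL file pairs an assignment brief with
# the finished essay: {"prompt": "<brief>", "completion": "<essay>"}.
import openai

openai.api_key = "sk-..."  # your API key

# Upload the training data to OpenAI
upload = openai.File.create(
    file=open("student_assignments.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against the base davinci (GPT-3) model
job = openai.FineTune.create(
    training_file=upload.id,
    model="davinci",
)
print("Fine-tune job started:", job.id)
```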

If it’s not already happening, this is coming soon, so I’m wondering what to do about it. I think my own courses are somewhat immune thanks to the personal and creative nature of the work and the big emphasis on reflection in all of them (though those with essays would be vulnerable), but it would not take too much ingenuity to get GPT-3 to deal with that problem, too: at the very least, it could greatly reduce the effort needed. I guess we could train our own AIs to recognize the work of other AIs, but that’s an arms race we’d never be able to definitively win. I can see the exam-loving crowd loving this, but they are in another arms race that they stopped winning long ago – there’s a whole industry devoted to making cheating in exams pay, and it’s leaps ahead of the examiners, including those with both online and in-person proctors. Oral exams, perhaps? That would make it significantly more difficult (though far from impossible) to cheat. I rather like the notion that the only summative assessment model that stands a fair chance of working is the one with which academia began.

It seems to me that the only way educators can sensibly deal with the problem is to completely divorce credentialling from learning and teaching, so there is no incentive to cheat during the learning process. This would have the useful side-effect that our teaching would have to be pretty good and pretty relevant, because students would only come to learn, not to get credentials, so we would have to focus solely on supporting them, rather than controlling them with threats and rewards. That would not be such a bad thing, I reckon, and it is long overdue. Perhaps this will be the catalyst that makes it happen.

As for credentials, that’s someone else’s problem. I don’t say that because I want to wash my hands of it (though I do) but because credentialling has never had anything whatsoever to do with education, apart from its appalling inhibition of effective learning. It only happens at the moment because of historical happenstance, not because it ever made any pedagogical sense. I don’t see why educators should have anything to do with it. Assessment (by which I solely mean feedback from self or others that helps learners to learn – not grades!) is an essential part of the learning and teaching process, but credentials are positively antagonistic to it.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/14216255/can-gpt-3-write-an-academic-paper-on-itself-with-minimal-human-input

Ernst & Young fined $100 million after employees cheated in exams

Not just any exams: ethics exams.

These are the very accountants who are supposed to catch cheats. I guess at least they’ll understand their clientele pretty well.

But how did this happen? There are clues in the article:

“Many of the employees interviewed during the federal investigation said they knew cheating was a violation of the company’s code of conduct but did it anyway because of work commitments or the fact that they couldn’t pass training exams after multiple tries.” (my emphasis).

I think there might have been a clue about their understanding of ethical behaviour in that fact alone, don’t you? But I don’t think it’s really their fault: at least, it’s completely predictable to anyone with even the slightest knowledge of how motivation works.

If passing the exam is, by design, much more important than actually being able to do what is being examined, then of course people will cheat. For those with too much else to do or too little interest to succeed, when the pressure is high and the stakes are higher, it’s a perfectly logical course of action. But, even for all the rest who don’t cheat, the main focus for them will be on passing the exam, not on gaining any genuine competence or interest in the subject. It’s not their fault: that’s how it is designed. In fact, the strong extrinsic motivation it embodies is pretty much guaranteed to (at best) persistently numb their intrinsic interest in ethics, if it doesn’t extinguish it altogether. Most will do enough to pass and no more, taking shortcuts wherever possible, and there’s a good chance they will forget most of it as soon as they have done so.

Just to put the cherry on the cake, and not unexpectedly, EY refer to the process by which their accountants are expected to learn about ethics as ‘training’, and it is mandatory. So you have a bunch of unwilling people who are already working like demons to meet company demands, to whom you are doing something normally reserved for dogs or AI models, and then you are forcing them to take high-stakes exams about it, on which their futures depend. It’s a perfect shitstorm. I’d not trust a single one of their graduates, exam cheats or not, and the tragedy is that the people who were trying to force them to behave ethically were the ones directly responsible for their unethical behaviour.

There may be a lesson or two to be learned from this for academics, who tend to be the biggest exam fetishists around, and who seem to love to control what their students do.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/14163409/ernst-young-fined-100-million-after-employees-cheated-in-exams