Paradigm shifts, bricoleers [sic], and other animals

Bricoleering, or adaptafacture, illustrated

Ben Werdmuller is a serial innovator, edtech veteran, and deeply insightful commentator on the tech industry whose skills defy easy categorization. I like him a lot. In One size fits none: let communities build for themselves, Ben tells us how to build digital social systems that fit the needs of their communities, and it is well worth reading if you have any interest in social software.

The post starts with a description of the reaction of developers when, in the summer of 2007, at an Elgg-jam at my then-university in Brighton, Ben first introduced the newly refactored Elgg 1.0 framework. In its several pre-1.0 iterations, Elgg was not a development framework but a full-blown web application. It had blogs, wikis, file sharing, bookmarking, groups, and much more, all wrapped up in a robust social network system with smart discretionary access controls, easily extensible through a simple plugin system. It was easy to use, rich in features, and highly adaptable, and it may have been the most popular open source social networking system on the planet at that point. It was a bit hacked-together and not exactly an engineering masterpiece, but it worked really well.

What Ben announced that day stripped away virtually all of its existing functionality, leaving only a tiny core that could do almost nothing user-facing on its own apart from simple user management, the display of activities, and some basic admin tasks. I don’t think it was even possible to create a post, and I have a feeling there were floppy disks around at the time onto which the whole thing would have fit. The idea was that it was up to developers to provide plugins that end-users could configure to create any kind of social system they wanted, with the core providing the API and data structures to support and greatly simplify their development. A few common tools like blogs, wikis, file sharing, and bookmarks were provided in a package of core plugins to help get things started, but all were (and are) optional. It was extremely elegant.
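
For readers who have never worked with a hook-based core, this is roughly what the pattern looks like. It is a minimal sketch in Python purely for illustration: Elgg itself is PHP and its real API is much richer, and every name here is hypothetical.

```python
# Sketch of the tiny-core pattern: the core does almost nothing user-facing
# itself, but exposes events that plugins register handlers against.

from collections import defaultdict

class Core:
    def __init__(self):
        self.hooks = defaultdict(list)   # event name -> list of handlers

    def register(self, event, handler):
        """Called by plugins to attach behaviour to a core event."""
        self.hooks[event].append(handler)

    def trigger(self, event, payload):
        """Called by the core; every user-facing feature lives in handlers."""
        for handler in self.hooks[event]:
            payload = handler(payload)
        return payload

core = Core()

# A "blog" plugin is, in essence, just handlers hung on core events.
def render_blog_post(post):
    post["html"] = f"<article>{post['body']}</article>"
    return post

core.register("render:post", render_blog_post)
print(core.trigger("render:post", {"body": "Hello, Elgg-jam"}))
```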

I believe that I was the person Ben refers to who, many years later (at another Elgg-jam, in San Francisco, as it happens), described his “big reveal” as a mind-blowing moment. Almost every hair on my body stood on end. I got it immediately because I had been thinking along very similar lines – there’s a chapter on such things in my first book, published earlier the same year – and had been, up until that point, intending to spend my newly-acquired national teaching fellowship money on building it. Instead I went with Elgg, which provided the framework on which the Landing and a few other sites (including the one at Brighton to which Ben refers) were built, and the money mostly went towards plugin development for it. 

In fact, in the form in which it first launched, Elgg 1.0 wasn’t exactly what I wanted. My vision was more distributed and centred around small services, loosely joined, rather than a single monolithic plugin-based server. The roadmap, though, that Ben described that day made exactly that possible, with plans for a robust and extensible range of services and standards for information interchange that, had they gained any traction, would have made a federated social system of almost any kind simple to create and evolve.

They didn’t gain that traction.

I think a big part of the reason might be that, with no backwards compatibility at all with the older version, and no good migration path for those already running Elgg, it lost almost all of the momentum and goodwill it had previously gained, while others that could provide an off-the-shelf experience at least as good as the replacement, without the need for further development, had moved into the space in the interim. In particular, WordPress and BuddyPress were already on the rise. Ben eventually moved on to do other things; Elgg gained a loyal and slowly growing following and became a foundation, but its focus shifted to being a development platform for building bespoke servers rather than a distributed social system. The web services and neat ODD protocol never took off enough to be usable beyond some very limited use cases. However, the plugin-based architecture and tiny core were still a cool idea, and building with small pieces for almost everything seemed to me to be a really good way to build a social system, so that’s what I and my teams did. It turns out to be much less cool when you want to maintain it, though: a fact that I was quite well aware of but failed to grasp in its full magnitude until it was too late.

Red Queen development

Running ever faster to stay in the same place

As we built the Landing we soon ran into the painful flipsides of plugins: you can’t easily remove them once many people use them, they create a large number of dependencies, and they have to be maintained, at least every time the core gets updated. It is not helped by the fact that, I think for efficiency, backwards compatibility is still rarely much of a consideration when Elgg gets an upgrade: though they will generally survive (with deprecation notices) for a version or two, many old plugins will simply break if they are not updated, often in subtle, difficult-to-debug ways. And part of the elegance of the design is also one of its greatest flaws: though you can design things in a more robust way, any plugin can fully override almost anything provided by any other simply by including a file of the same name and position in the directory hierarchy. This plays havoc with new versions, and makes plugins far more co-dependent than the very self-contained, well-encapsulated services I had been imagining. To make things worse, it does not scale at all well: Elgg’s object-over-relational data model is very elegant, but it is not very efficient when your site grows large, and every data-storing plugin adds to the problem.
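
To make that override mechanism concrete, here is a rough sketch (again in Python for illustration; Elgg’s real view system is PHP and differs in detail) of resolution by relative path, where whichever plugin loads later and supplies a file at the same path silently wins:

```python
# Sketch of override-by-path: a view is resolved by searching each plugin's
# directory, in load order, for a file at the same relative path. A later
# plugin can therefore replace any other plugin's output without declaring
# any dependency. Directory names here are hypothetical.

from pathlib import Path

PLUGIN_ORDER = ["core_views", "blog", "my_theme"]  # later entries win

def resolve_view(relative_path: str, root: Path = Path("mod")) -> Path | None:
    winner = None
    for plugin in PLUGIN_ORDER:                # walk plugins in load order
        candidate = root / plugin / "views" / relative_path
        if candidate.exists():
            winner = candidate                 # a later plugin overrides it
    return winner

# If both blog/ and my_theme/ provide views/object/blog.php, my_theme's copy
# is the one every other plugin now implicitly depends on.
print(resolve_view("object/blog.php"))
```

The elegance and the danger are the same thing: nothing in the code records that the override has happened, so upgrades break things in ways that are invisible until they aren’t.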

At one point the Landing had 116 plugins (admittedly with a few turned off by default), about a third of which we built, a third of which were distributed with the core, and a third of which were community-developed. As well as our own plugins, we gradually had to take on more and more of the community-plugin development ourselves as the original developers abandoned them, or face the wrath of those who needed them. Of the 90 or so that are left today, about half are our/my responsibility. When things were going well and we had the funding for a full-time developer, I reckon a typical plugin took about a person-week of design, development, and testing to upgrade, though the various dependencies and bottlenecks meant that it was rarely less than a month from start to finish before it arrived on the site. Meanwhile, the core was getting updates, sometimes more than once a year. With very little spare cash, especially after losing our full-time developer, there was no way that we could ever hope to keep up with the release cycles of the core given the number of plugins we had to maintain. We were stuck in a Red Queen regime, running harder and harder to stay in the same place. Some call this technological debt, but it’s just the price of ownership, and we couldn’t pay enough.
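
To put rough numbers on that treadmill, using the figures above (the release cadence is my guess, as is the length of a working year):

```python
# Back-of-envelope arithmetic for the maintenance burden described above.

plugins           = 116
effort_per_plugin = 1      # person-weeks of work per plugin per upgrade
releases_per_year = 1.5    # "sometimes more than once a year"

person_weeks_per_year = plugins * effort_per_plugin * releases_per_year
print(f"{person_weeks_per_year:.0f} person-weeks/year")        # ~174

# At ~46 working weeks per person per year, core upgrades alone need several
# full-time developers -- before any new features, bug fixes, or support.
print(f"{person_weeks_per_year / 46:.1f} full-time developers")
```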

It may be a blessing in disguise, then, that, some 10 or 11 years ago, the decision over whether to continue development was taken out of our hands by a CIO who refused us any resources even to test, let alone to install, anything, as a result of a grossly misguided “back to baseline” principle that ravaged many good systems during his tenure, even though we (then) had plenty of money to continue and offered to put it all into his budget. The Landing limped along regardless: it was embedded in many courses, research groups, centres, and so on, so it couldn’t simply be switched off; no off-the-shelf alternative came close to doing anything similar; and we had built it to be robust (though never expecting it to still be around, almost unaltered, over a decade later), so it carried on working. With the help of less hostile but never exactly enthusiastic CIOs, we have limped along ever since, very slowly creeping up through the versions on a shoestring budget and odd moments of my own spare time, but we are very far behind the cutting edge.

And then came ChatGPT

LLMs – Claude in particular – can be great at coding, especially for small projects like plugins. I have been vibe coding for a few years now, and it has been incredibly useful in many aspects of my life. However, even the best of them tend to struggle with Elgg plugins. I think it is because there is not enough Elgg code out in the wild, and there have been too many versions and too many approaches to development, so there’s not enough good quality training data. Since the first week of the launch of ChatGPT I have been trying to get genAIs to help me with Elgg plugin upgrades and bug fixing but, though I have picked up some very helpful ideas in the midst of some very bad attempts at solutions, and they have spotted a few bugs for me, not a single line of actual AI-generated code has ever made it onto the Landing. This is going to change.

A few days before Ben wrote his post, on a hunch, after some frustrating attempts at getting Claude, ChatGPT and Gemini to upgrade an existing plugin that was too difficult for me to take on alone, I instead simply asked Claude to make me a new one, with specs I had extracted from the original (using ChatGPT and tweaking the output), but giving it no access to any of the original’s source code or program structure.

Apart from a couple of minor syntax problems that took hardly a minute to fix, it worked first time. It was considerably more polished than the original and, indeed, than almost all the plugins we had written ourselves or commissioned at costs of up to $10,000. It has no deprecated code at all – something that is not even true of plugins in the core for our current Elgg version – and it has all sorts of useful little configuration options that Claude extrapolated from the specs and that I would have been too lazy to bother with, but that make it way more adaptable than its predecessor. It even has a complete set of language files for both French and English – extremely rare in human-made plugins – and it would be trivial to ask it for other languages if we needed them.

I think this works because of the different way Claude approaches the problem compared with how it handles an existing plugin. When trying to fix a broken or obsolete plugin, the plugin itself plays a large influencing role, and Claude draws on a ragtag bunch of existing plugins as examples, but the paucity and mixed quality of the training data means they are less than wonderful role models. Almost all of its prior attempts included code from a future version of Elgg, or an older one, or one that has never existed, and it quite often did things in a very non-Elgg way. In contrast, when building a new plugin from scratch, its strategy appears to be to read the entire core codebase and all of the official documentation, then to build the plugin to fit, with little or no reference to any existing plugins beyond those that come with the core distribution. When things go wrong, it goes straight to the definitive source of a function in the core, not to a muddle of existing solutions, and its context window (at least in the paid versions) is now large enough to contain much if not all of the whole thing, or at least for retrieval-augmented generation to pull in the correct pieces. The small core that was so useful to human developers turns out to be ideal for LLMs.

The key lesson to be drawn from this is that, if the architecture is sufficiently and cleanly modular (as Elgg’s is), then it may now be more effective to recreate components from scratch than to maintain the ones you have already written. If it continues to pan out as it has so far, I’d say this is a potential game changer. As well as making development extremely agile, it even improves the security of the system because, though any one plugin may yet have flaws despite the apparently high quality of the coding, it is not going to stick around for long enough for them to be exploited, and anyone who follows this approach is not going to have the same plugins as anyone else, so it’s not worth anyone’s while to develop a specific hack for it. The next upgrade is almost ready so I am only going to use this approach sparingly for now but, when the time comes for the next major upgrade, this is how I intend to do most of it. I won’t let it near core plugins or still-maintained community plugins but, for all those we inherited or created, ChatGPT or Gemini will provide me with the spec. I’ll then run each spec through Claude, getting it to produce the complete plugin including unit tests. It will still take time, and I don’t expect it to work as well all the time, but much of that time will be spent by Claude, not me. At one fell swoop, this almost eliminates the technological debt.

Illustrating spec extraction and plugin creation using LLMs, in the style of Alice in Wonderland
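
For the curious, the two-stage workflow might look something like the sketch below, using the Anthropic Python SDK. The prompts, the model string, and the single-shot structure are all my assumptions rather than a definitive recipe: in practice each stage is an iterative conversation, and the spec extraction would go through a different LLM (ChatGPT or Gemini, as above).

```python
# Minimal sketch of spec-driven regeneration (pip install anthropic).
import anthropic

client = anthropic.Anthropic()         # reads ANTHROPIC_API_KEY from the env
MODEL = "claude-sonnet-4-20250514"     # hypothetical; substitute your model

def extract_spec(old_source: str) -> str:
    """Stage 1: turn a legacy plugin into a behavioural spec, with no code
    or structure leaking through (in practice, done by a different LLM)."""
    msg = client.messages.create(
        model=MODEL, max_tokens=4096,
        messages=[{"role": "user", "content":
            "Describe, as a functional specification only (no code, no "
            "file structure), what this Elgg plugin does:\n\n" + old_source}])
    return msg.content[0].text

def build_plugin(spec: str, elgg_version: str) -> str:
    """Stage 2: build fresh from the spec and the core docs, never from
    the old code, including tests and language files."""
    msg = client.messages.create(
        model=MODEL, max_tokens=8192,
        messages=[{"role": "user", "content":
            f"Write a complete Elgg {elgg_version} plugin, with unit tests "
            f"and en/fr language files, implementing this spec:\n\n{spec}"}])
    return msg.content[0].text
```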

This principle is not necessarily limited to elegantly engineered systems like Elgg. A night or two ago I went through my regular quandary about how to schedule ad hoc meetings for one of my courses. In the past I’ve used wikis, discussion forums, various free (but not quite right) poll-based schedulers like Doodle, and more. None were great, and the ones that worked best raised potential privacy concerns that I was not willing to grapple with. The length of time it takes to get a plugin to production made a Landing plugin a non-starter. Then it struck me that my own personal website would be more private and controllable than any of those, and hosted on Canadian soil (unlike any of the rest), so I went in search of a plugin. WordPress is very inelegant, sprawling software, and plugin development is positively painful compared with Elgg, but the vast numbers of WP developers mean that, among the many tens of thousands of plugins, no matter what the task, at least one will usually do the job I want, or come close enough for me to tweak it so that it does. At least, that had always been the case until now. To my great surprise, this time there were none. Something like the functionality does exist in a few polling and scheduling plugins, but with very complex configurations and a lot of unwanted fluff around them, not to mention the need to buy premium non-open versions to do what I want. I just wanted a small subset of Doodle’s functionality that would not store any private data, nor cater for needs I don’t have. So I asked Claude to make it, knowing that it would already be quite skilled in WP development because of the vast number of examples to learn from. It took about four attempts to get exactly what I wanted. Overall the whole process took about an hour, including writing the spec, Claude’s thinking time, and the time it took to upload, configure, and test it. It works really nicely. I actually spent more time earlier looking for the right software than it took to make it from scratch. I have some experience writing specs, but even a beginner could do this with a bit of help from the AI.

Ochlotecture management

I might ask an LLM to build the Spec Manager that Ben writes about – essentially a means of managing the application architecture, not unlike a traditional source code management system – to simplify and automate some of the workflow, not that it is particularly onerous. However, the time it would save would allow me more time to work on another idea sparked by Ben’s post.

Doing what we already do, better, cheaper, and faster, is quite cool, but the most significant benefits of any new technology come from being able to do things that were previously impossible: it is the adjacent possibles they create, and we exploit, that drive progress. As Ben says, some of the biggest things that matter in a social system are the what, why, and for whom, and that’s very true, but there’s more. I’ve written previously of the ochlotecture of a social system, by which I mean all the human as well as non-human elements that make it do what it does, including the whats, whys, and for-whoms: the written and unwritten rules, the structural topography (networks, group hierarchies, set clusters, etc.), the norms around posting, the pace, the interests of the community, the cross-cutting networks, the ethical principles, the aesthetic preferences, the physical spaces they inhabit, and so on, that combine to give shape to a community. In essence it is much like a user model, only for crowds.

It strikes me that it should be possible to build an Ochlotecture Manager in much the same way as we might build the Spec Manager. Exactly how this would work is to be determined, but I envisage it including an assortment of personas and scenarios as well as rules, demographics, contextual information, and network/group/set structures. The idea is to try to get away from traditional functional definitions and instead describe relationships, policies, norms, and so on in a way that, with a bit of work, LLMs will be able to interpret, and thus to better fit the site to its community. This would be particularly useful in a learning context, where a lot of software is built or chosen to perform a function with far too little regard to how it achieves it. It almost never fits exactly what a teacher would like to do, because it ain’t what you do, it’s the way that you do it, that’s what gets results, and you can’t do the same thing the same way for everyone and expect it to be a perfect fit for all of them. The app would most likely generate some YAML or JSON and instructions about how to deal with it. But this doesn’t end with the design.
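
To give a flavour of what such a description might contain, here is one speculative sketch of an ochlotecture serialized as JSON. Every field is a guess at this stage: the point is to capture relationships, norms, and policies rather than functional requirements, in a form an LLM can be given as context.

```python
# One guess at a machine-readable ochlotecture description (all fields
# speculative).
import json

ochlotecture = {
    "community": "landing-photography-group",
    "personas": [
        {"name": "lurker", "share": 0.7, "needs": ["low-pressure browsing"]},
        {"name": "mentor", "share": 0.1, "needs": ["ways to give feedback"]},
    ],
    "structures": {"networks": "dense", "groups": "few, stable", "sets": []},
    "norms": {
        "pace": "slow, bursty around assignment deadlines",
        "tone": "supportive; critique only when invited",
    },
    "policies": ["no public member lists", "content visible to group only"],
    "scenarios": ["member posts a photo and requests critique"],
}

print(json.dumps(ochlotecture, indent=2))
```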

A much under-utilized adjacent possible of LLMs lies in their potential to connect people and sustain communities. From summarizing conversations or connecting individuals with complementary needs, to nudging conversations or analyzing sentiment, there are many ways LLMs can catalyze interaction, not as a participant but as an enabler. Having a clearly specified ochlotecture would make this much easier to achieve. The LLM might not be a bad ochlotectural analyst, too, suggesting and implementing improvements in the design based not on user models but on crowd models.

Having done that, we open up the potential to make this a truly adaptive system, not just changing data and parameters but also the underlying code itself as a community evolves. Imagine, to give a simple example, a discussion forum in which the system observes people regularly responding with “this is great” or similar replies. The system could identify a need for some kind of rating system and, rather than simply implementing a “like” button (which is far from ideal in all situations), it could consult its ochlotectural model to identify what would work best. This could range from a simple change of wording – “recommend”, perhaps, or “rate”, depending on the community – to a multi-dimensional ranking system that might work better if more precise feedback is needed (e.g. in peer review). More complex changes are possible: it might build a system to (say) manage events, or create photo albums, or implement breakout spaces, or shift between threaded and non-threaded discussions. Perhaps it could shuffle menus to better fit community needs, or fix accessibility issues, or identify more relevant posts. I’d be extremely nervous of taking humans out of that loop – that way disaster lies – but perhaps the humans would not need to be developers, as long as a developer had crafted the spec and the ochlotecture carefully enough in the first place. Community members themselves could suggest things, the LLM could present them to the group (perhaps creating a poll system for voting, or some other dispute-settling mechanism to do so), and it could use the ochlotectural and architectural models to help guide the actual development. It might even do a bit of proactive A/B testing, making an evolutionary (survival of the fittest) approach possible. Ultimately, it might even evolve how it evolves, developing its own strategies for engaging the community and responding to changing needs. The constant change would be no more annoying than it is with existing cloud services, with the added benefit that, if the community doesn’t like it, they can fix it.
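
A minimal sketch of that loop, with the LLM call and the community-proposal step stubbed out as hypothetical functions (nothing is deployed without the humans), might look like this:

```python
# Sketch: notice a recurring behaviour, consult the crowd model, propose a
# change to the community. consult_llm and propose_to_community are stubs.
import re

PRAISE = re.compile(r"\b(this is great|love this|nice one)\b", re.I)

def consult_llm(question: str, ochlotecture: dict) -> dict:
    """Stub: would call an LLM with the ochlotecture as context."""
    return {"proposal": "rename 'like' to 'recommend'", "rationale": question}

def propose_to_community(suggestion: dict) -> None:
    """Stub: would open a member poll -- humans stay in the loop."""
    print("Proposed to members:", suggestion["proposal"])

def scan_for_emerging_needs(replies: list[str], ochlotecture: dict) -> None:
    if not replies:
        return
    praise_rate = sum(bool(PRAISE.search(r)) for r in replies) / len(replies)
    if praise_rate > 0.25:   # arbitrary threshold: many approval-only replies
        suggestion = consult_llm(
            "Members often reply only to express approval. Given this "
            "ochlotecture, what feedback mechanism would fit?", ochlotecture)
        propose_to_community(suggestion)

scan_for_emerging_needs(["this is great!", "Love this", "interesting point"],
                        {"norms": {"tone": "supportive"}})
```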

In my perfect world all of this would rely on a local, open LLM but, though some are now extremely good for coding assistance, none currently have the large context windows and sophisticated tuning of the bigger commercial models. This will probably change. A hybrid approach might work in the interim, where the local model deals with everything apart from the coding itself, and the commercial model does the rest, but I’ve not thought through the economics of that.

Bricoleering: a new paradigm?

We are at the bottom of a learning curve with genAI right now. Most of us are simply replacing things we already do with LLMs, and that is highly problematic for reasons I and many others have written about extensively (see at least half my posts at https://jondron.ca/ai). In a world with machines that can creatively replicate almost any human cognitive skill, often at an expert level, there are high risks that our descendants will lose at least a portion of their own capacity to do so unaided. That’s not necessarily a bad thing in itself. Few of us can still recite every word of a novel from memory, or create a bow and arrow, or perform complex mental arithmetic, because we don’t need to. Coarse-grained cognition – thinking in bigger chunks, using the products of our own and other humans’ thought – is what has let us build pyramids, spaceships, welfare systems, and virtually every invention ever, including this sentence. It’s our collective, extended cognition that makes it possible to constantly create more. That’s more of a problem when creativity itself is at stake, however, because we risk delegating too much of it to the machine and allowing our own capabilities to atrophy. Already, I quite often tell the machine what I’m trying to do, then ask it for a list of ideas and select one, rather than trying to think of one myself: that’s how the picture at the top of this post was conceived. At scale, this is not a great idea.

If the world is going to be a better and not a worse place, we need to learn to be creative with the creative outputs of the cognitive Santa Claus machines, not simply to specify and use them. I think that the idea I suggest above is one of the ways this can happen. A plugin-based (or other component-oriented) approach enables us to do bricolage with the pieces, assembling, disassembling, and reassembling them in new and creative ways that neither we nor genAIs could do alone. It is not Lévi-Strauss’s bricolage of the “savage mind”, however, nor is it engineering. I think it is a new paradigm in which we do not simply assemble pieces we happen to have lying around but actively help to shape them so that they will fit. Our roles are closer to those of architects like Frank Gehry, who famously couldn’t use the machines that were essential to creating his iconic machine-made designs, instead relying on hand-drawn sketches to communicate his ideas to those who could. I don’t know what to call this: “bricoleering” perhaps, or “adaptafacture”?

Edison’s Infinite Workshop: Innovation and education in the age of Cognitive Santa Claus Machines (slides from my keynote for IFERP’s EdInnovate 2026)

Statues in Nek Chand’s Rock Garden, illustrating the power of bricolage as a creative process (photo by the author)

I’ve just finished giving a brief keynote for IFERP’s 3rd EdInnovate conference in Tokyo (sadly, because I love Tokyo in the spring, I was online). Here are the slides. The conference was great: they put all of the keynotes and invited talks on a single day, with a very international and cross-disciplinary bunch of thought leaders (and me), and many of us were talking about very closely related themes of rehumanizing and transforming education, from very different perspectives. Though most of it confirmed what I already know, I learned a lot.

The gist of my talk was that generative AI challenges us to transform both how we teach and what we teach. I have spoken quite a bit about the “how” in the past – essentially it is to double down on the tacit, the relational, and the social, to care about and to empower learners, to focus on what it means to be a human in whatever fields we are trying to teach. The stuff we should already have been doing.

The “what” is new. GenAIs are pretty good at creating stuff, and that’s a problem because it is very, very tempting to get them to think for us (hence cognitive Santa Claus machines: we delegate the thinking to them so that we don’t have to). We now have access to most human knowledge, at a (mostly) expert level, with little skill needed to elicit any of it. These things are like search engines that actually give us what we are searching for, in detail, and then do whatever it was that we were planning to do with the search results on our behalf. If our descendants are not to be less than us (and I really want more for my own grandchildren), we now have to figure out what to do with that. If the answer is to turn in an essay or perform an assignment that any AI could do at least as well, then the world will end with a whimper. Our jobs are to take that, problematize it, and use it to create more than any of us (human or machine) could have created alone. Luckily we already have a model for that: bricolage, or tinkering.

Bricolage has got a bad rap in the past, often compared unfavourably with engineering (notably by Lévi-Strauss, who defined the term and saw bricolage as primitive) but, as Papert and Turkle wrote many years ago, it is a very legitimate way of engaging with the concrete, a highly creative activity in its own right, and it can be a very powerful approach to design. The photo at the top of this post shows just a handful of the thousands of stunning artworks created by Nek Chand and his team, all of them built from the waste products of the industrial city of Chandigarh – pieces of wire, chunks of porcelain, sacks of concrete, and other found objects. I have visited twice and cried at the beauty of it both times.

I have written of bricolage before, e.g. here and here (nicely reported on and more clearly expressed by Stefanie Panke), as a means of researching things that don’t (yet) exist, and I intend to write more. It seems to me, though, that this is one of the key skills that we should be developing for ourselves and for our students, not just for research but as a process and product of learning. It is the natural evolution of the steady progress from high-resolution to low-resolution cognition that has driven human progress for millennia. In the past we built on and with what other humans had already done: what makes us smart is, and has always been, that we can share parts of our cognition through technologies (including language and art); we think with our creations. The more we create, the more we can create. Now we have machines that are themselves bricoleurs par excellence, capable of producing any parts or pieces we can imagine, at vast scale, and quite a few we cannot. This is different. If we take advantage of it, we can continue the technology-fuelled exponential growth that is a hallmark of our species (and, to be perfectly clear, art, writing, poetry, architecture, music, and all the humanities are among the most significant of those technologies). If we don’t, we face not just the model collapse of genAIs but, ultimately, the collapse of our own cognition. This is not about replicating what we can already do. It’s about being able to do what we cannot yet imagine. This seems like a good mission for education to me.

More than a game: some thoughts on David Wiley’s “Random Audits as a Scalable Deterrent to Cheating”

Source: Random Audits as a Scalable Deterrent to Cheating: Using Game Theory to Design Fair and Effective Academic Integrity Systems for the AI Era by David Wiley

Though not particularly common, the general principle of only assessing a sample of work with oral exams (viva voces) is well established, and is common practice in a number of institutions (e.g. UC Berkeley or University College London). What’s smart and novel about David Wiley’s new variation on the theme is the rigour with which he approaches the problem. The headliner is his use of game theory to identify the optimum sample range (no point in auditing mediocre results or fails), sample rate (to make the risk of detection significant enough to deter wrongdoers), penalty for failure (neither so small that the risk is acceptable nor so large that people are deterred from applying it), and audit bonus (so honest students gain some, but not too much, benefit from being audited, to make up for the discomfort, inconvenience, and pain). It’s a nicely balanced process, playing with the incentives so as to take some of the sting out of being selected for assessment by offering opportunities to increase grades. There’s also a lot of careful thought given to the administrative and pedagogical details of how to make it all work, so that students are forced to think clearly about the pros and cons of cheating, and it is all done fairly and efficiently. It’s a very well considered set of techniques for reducing the faculty workload and the chances of cheating.
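
Stripped to its essentials (my simplification, not David’s full model), the deterrence logic is a single inequality: a rational student cheats only when the expected gain exceeds the expected cost, so the audit probability p and the penalty F must be set so that p × F is at least the grade benefit G of cheating:

```python
# Minimal deterrence arithmetic; units are arbitrary "grade points" and the
# numbers are illustrative only.

def minimum_audit_rate(gain: float, penalty: float) -> float:
    """Smallest audit probability p making cheating irrational: p * F >= G."""
    return gain / penalty

# If cheating buys ~10 points and a failed audit costs ~50, auditing 20% of
# the targeted band removes the rational incentive.
print(minimum_audit_rate(gain=10, penalty=50))   # 0.2
```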

For all that is good about it, I think it’s almost exactly the wrong idea, though I have an idea to save it.

Problems with oral exams

For the majority of students in search of credentials, oral exams are at the better end of the summative assessment spectrum, because they are:
  • efficient (on average, it takes no longer to ascertain someone knows what they are talking about than it does to properly mark an exam or assignment and, crucially, it demands less time from the student),
  • reliable (very hard, though not impossible to fake or cheat),
  • personal (you can explore personal strengths and misconceptions),
  • responsive (feedback can be immediate),
  • social (caring can be demonstrated),
  • often authentic (depends on context), and, above all,
  • useful learning experiences in their own right, for all concerned, including examiners.
In universities, oral exams predate written exams by many centuries. They were by far the most common way to assess students for credentials right up to at least the 19th century, and they generally worked well, notwithstanding the problems of dealing with geometry and other visual disciplines that led to the Cambridge Tripos (the first modern written exams) in the late 18th century. Oral examination is still very popular in some regions, especially for higher degrees, though it has fallen out of favour across much of higher education because it is hard work and difficult to scale. While each exam is quite efficient in itself, when you have to schedule a few hundred of them it really eats into your time and energy. There are some major issues for students who have speech impediments, hearing problems, or who are simply using a foreign language, so alternatives or workarounds must be available, and extraordinary care must be taken to avoid personal biases, because it is prohibitively expensive and impractical to anonymize oral exams. All in all, though, for most students it is one of the least bad of a bad bunch.

Unfortunately, oral exams have one fatal flaw: far more than written exams (which are unpleasant enough for most students), they can be incredibly intimidating. Few students actually like them but, for a significant number, they are beyond mortifying. I have known students to freeze, cry, walk out, and even fail an entire PhD (though that was later corrected) as a result of having to defend their work this way. The stress can be mitigated somewhat with counselling, therapy, practice, caring tuition, and sensitive questioning, but it is difficult if not impossible to eliminate the problem completely, and time spent developing counter-technologies to the technology of assessment is time better spent learning the subject in question.

I think that David’s rational game-theoretic approach fails to take this sufficiently into account. For students facing the prospect of extreme trauma, no matter how competent they might be in the subject, the most rational course of action in David’s system would often be to aim for a low mark that would not get audited rather than risk having to be examined. There are plenty of students who don’t need high GPAs, for whom a straight pass is a rational choice. In itself, however, this would be a risky strategy, because it is really difficult to tread the fine line between a low pass and either a fail or a higher pass, both of which would be very bad news, all of which would add stress not just at exam time but throughout the course. Under such circumstances, a student who had taken the game theory to heart would probably realize that the most effective way to get a low pass would be to ask a generative AI to produce work at that level: in my own experiments I have found them to be remarkably good at targeting a particular grade, as long as you feed them half-decent rubrics.

It is also far from infallible, because few of us are rational game players. On the whole, cheating tends to occur when students are very stressed and they panic: it’s often barely a rational choice at all. Few actually want to cheat and all of them already know it is a risky option: it’s just the least bad of a limited number of very bad alternatives. Making the risks higher and quantifying them is not a solution to this. If anything, for at least a few of the most at-risk students, it will just make the problem worse because the pressure is greater. Also, for the truly disengaged students who are most likely to cheat, this might just be another thing they do not learn, so they would not even be playing the game, though they would certainly come to regret it if they were audited.

Sampling problems

Another problem with David’s approach is that it is a much stronger signal than the conventional process of the authority and control that the teacher/institution has over the student, with no pretence that it serves any purpose other than to catch cheats. If it were meant to support learning then everyone should be doing it, and the fact that there is a reward for being audited just further emphasizes that it is an undesirable activity that students are being forced to do. At least as bad, it doesn’t just allow but actively recommends an instrumental approach to learning: it literally teaches students how to game the system. For anyone wanting to use this approach, I would therefore strongly recommend combining it with ways to attempt to restore lost autonomy, for example by encouraging students to design some of their own outcomes, or to have input into the means of assessment, or to have plenty of flexibility in the timing of submissions, or at the very least to be able to choose different ways of demonstrating their competence from a range of options. Among the benefits of doing this, the chances of them cheating in the first place would be significantly reduced.

There is also a time commitment to learning how to play that game rather than learning the stuff the course is actually about. I don’t see an easy way of avoiding this altogether, though if it were applied across the board to a whole program, the proportion of time spent on it could be reduced for each course. It would be a brilliant idea to use it in a course on game theory, of course.

It bothers me that the method deliberately excludes students who don’t get great results. It seems to me that they are the ones who would most benefit from a chance to improve them, so it amplifies the divide between the haves and the have-nots. At the very least, it should be possible for such students to ask for an oral exam, under the same conditions as those who get selected for random testing. The selection process again sends a bad message: that high achievement makes you a suspect.

While the proposed sample rates make sense for a single course, if all courses worked this way then, by the end of the program, almost every student would have at some point been audited, most likely more than once. For someone with a strong phobia, this might actually be worse than having to do it for every course: knowing that, at any point, your worst nightmare is going to happen is probably not going to improve your chances of persisting to the end of a program. It’s a problem both in the stress-filled build-up and (if not selected) the massive surge of relief that follows. The pain/relief patterns are not dissimilar to those of, say, gambling or drug addiction.
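
The arithmetic behind this is simple: if each course audits with probability p, the chance of being audited at least once over n courses is 1 − (1 − p)^n. The numbers below are illustrative, not David’s:

```python
# Cumulative probability of at least one audit across a program.

def p_audited_at_least_once(p: float, n: int) -> float:
    """Chance of at least one audit over n independently audited courses."""
    return 1 - (1 - p) ** n

print(p_audited_at_least_once(p=0.10, n=40))   # ~0.985 for a 40-course degree
```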

Motivation problems

David claims that it is not a technology problem but an incentive problem. I disagree. This very much is a technology problem, and David’s solution is totally a technological solution: it’s just not a digital technology problem. And, in the context of the technology in question – that of credentialing – it is not an incentive problem but a motivation problem. Treating it as an incentive problem limits it to the subset of motivation that is both extrinsic and externally regulated: the worst possible kind. Externally regulated extrinsic motivation reliably kills intrinsic motivation so this both takes away the love of simply doing the work and actively harms motivation to do so in future.

The trouble with David’s solution is that it doesn’t deal with or consider the reasons that students cheat in the first place: it’s just a response to the fact that some do. Vanishingly few students start out a course with the intention of cheating their way through it. Rather, the pressures they face (almost all extrinsic) make cheating a rational response and/or the result of panic. All that David’s solution does is to make it a bit less rational. Students will still do it for irrational, emotionally charged reasons, and it not only does nothing to eliminate the root causes but it actually amplifies them, piling on additional pressure.

As with all technologies, there are other ways to solve this problem and, like all technologies, each is a Faustian bargain that creates new problems of its own. David’s solution, with the aforementioned provisos, is a potentially effective and efficient response to cheating, but it is likely to have the opposite effect on learning, especially once the course is over. It’s just a counter-technology for dealing with flaws in the underlying credentialing approach, and it demands further counter-technologies of its own to deal with its big fatal flaw if it is going to work at all well. It’s not at all unusual in this.

A better solution?

I think this is fixable. I reckon David’s solution would work a lot better if, instead of auditing assignments or exams for a single course, it were applied to a basket of courses (say, 3-6 of them) and, in the oral exam, students were asked to synthesize, connect and utilize what they have learned in all of them. This is not unlike some fairly common approaches to PhDs or capstone projects, where students create something then talk about it in more or less formal ways (presentations, demos, crits, viva voces, etc). If done with commitment, it could largely decouple learning and assessment because instrumental revision would not be an option: the only way to revise effectively would be to engage in positive learning activities that involve exactly the kind of synthesis we would examine, which would make it personal, relevant, and interesting, especially if (to make it authentic) it were done with other people.

With a bit of ingenuity, it might be possible to remove all grades and credit for the courses themselves, so students could learn without the usual extrinsic pressures. Every student would automatically get a provisional generic pass on each of the basket of courses, no questions asked. If they were audited then they might improve that (or fail), as David suggests. For the sake of equity, every student would have the right to ask to be audited, so the high-flyers who cared about getting a high grade could have an opportunity to get one. The rest could learn with significantly reduced pressure.

An obvious objection is that it would raise the stakes when that assessment did actually happen. One way to reduce that problem would be to allow repeated attempts, with no additional penalty, or to make it a “best of three” or something along those lines. Though that would somewhat reduce the efficiency of the solution, as long as it were structured to make retakes relatively rare, it would be worth the extra bother. It would also be good to provide coaching, counselling, and plentiful opportunities to practise. For some subjects there might be less pressured approaches than oral exams that would achieve similar results, such as observational studies of students working on a problem, or group discussions, or structured peer interviews. Perhaps it could be a series of conversations throughout the program, none of which carries a definitive grade in itself but that, together, add up to an overall assessment. There’s scope for further innovation here.

It would be more important than ever to provide plentiful formative assessment during the courses themselves, and to provide ways of practising those skills in synthesis. The latter could be done within those courses or, perhaps better, a “synthesis” course could be provided for this purpose, operating in much the same way as Brunel’s assessment modules in their Integrated Programme Assessment approach. Among the advantages of this, it would allow students to do some work that might be used as part of an alternative assessment for those suffering from extreme fear of or difficulties participating in the oral exam.

It is not perfect, and it would be no use for situations such as those at Athabasca University, where many students are taking only one or two courses, often as visitors from other programs. However, for program students, even more than David’s approach, this would massively reduce the marking burden while making a positive contribution to learning and motivation to learn.    

Is higher education broken? Not exactly.

A university in collapse, in the style of illustrations of the Fall of the House of Usher

What does it mean for higher education to work?

The problem with claiming (as I sometimes do) that higher education is broken and needs to be transformed is that it raises the question of what it means for higher education to work, and that depends on what you think it is for.

From the name you’d expect that higher education might be for… well… education, assuming that to be concerned with learning and teaching, but it outgrew that single purpose a very long time ago. Yes, learning and teaching still loom large, but credentialing is at least as significant (often more so) and, at least for some, so are research and various forms of service. But, depending on your perspective and context, a university or college might also or alternatively be thought of quite differently as, for example:

  • a driver of peace or prosperity in a society;
  • a creator of knowledge in the world;
  • a support for local economies;
  • training for industry;
  • a market for contract cheating;
  • a home for sports teams;
  • a sharer and preserver of cultural artifacts;
  • an incubator for the performing arts;
  • a means to get a better job;
  • a medical facility;
  • a production line for professors;
  • an enabler of social mobility;
  • a profit-/surplus-making business;
  • a political pawn;
  • a selection filter for smart people;
  • and so on, and on, and on.

You might reasonably object that you could take any one of these away apart from the teaching role and you would still be left with a recognizable educational institution and, indeed, some are possible only because of the teaching role. However, to some people, somewhere, some time, every one of those roles is the role that matters most, and might be a target for transformation. Like every instantiated technology, a university or college is an assembly. In fact it is a huge assembly. It is part of and contains countless other assemblies, and is thoroughly, deeply entangled with a host of other systems and subsystems on which it depends and that depend on it.  Everyone within it or interacting with it perceives it from a different perspective, in different ways at different times, working together or independently as mutually affective coparticipants to do whatever it is that, from each of those different perspectives, it does. In many ways, as a whole, it thus resembles an ecosystem and, like an ecosystem, each individual part can be perceived as having a goal and a relationship with other parts, and with the whole, but the whole itself does not. I think this is probably a feature of institutions in general, and may be what distinguishes them most clearly from simple organizations and businesses.

So what?

As long as the distinct roles, from each individual’s perspective, do their jobs, this is not a problem. If you are interested in, say, getting an education, then you can largely ignore everything else an educational institution does and judge it solely by whether it teaches, notwithstanding the huge complexities of knowing what that even means, let alone with what proxies to measure it.

Unfortunately, a fair number of these roles deeply and negatively impact others. For me, by far the biggest problem is that the credentialing role is fundamentally at odds with the teaching role, due to the profound negative impact of extrinsic motivation on intrinsic motivation (I’ve written a lot about this, e.g. in these slides and in How Education Works, so I won’t repeat the arguments here). Combined with the side effects of trying to teach everyone the same thing at the same time, this results in the vast majority of our most cherished teaching and assessment methods being nothing more than ways of restoring or replacing the intrinsic motivation sucked out of students by how we teach and assess. Other big conflicts matter too, though. When patents or copyrights are at stake, the business role battles with the underlying goal of increasing knowledge in the world, turning non-rival knowledge into a rivalrous commodity. Ditto for the insanity that is journal publishing, where the public pays us to provide our editorial and reviewing services for papers on research that it also pays for, then the journals sell the papers back to us or charge us for sharing them, making obscene profits for an increasingly trivial service. Similarly, the research role, which should in principle exist in a virtuous circle with teaching, is too often in competition with it and, in many institutions, teaching loses. The filtering role that rewards most universities (not mine) for excluding as many students as possible is in direct conflict with a mission to bring higher forms of learning to as many people as possible, and undermines the incentive to teach well, because those carefully selected students will learn pretty well regardless of how they are taught. There are countless other examples like this: public vs private good, excellence vs equity, local vs global responsibilities, supporting student diversity vs economic stability, and so on. Fixing one role invariably impacts others, usually negatively. These are structural issues that will persist as long as higher education continues to play those roles. The solutions to the problems in one role are the problems that other roles have to solve and, to a large extent, they must be.

At a micro scale the problem is even more ubiquitous. Everyone is solving problems in their own local sphere, creating problems for others in their own local spheres, whose solutions cause problems for others, and so it goes around and comes around. Every time we create a solution to one problem we give rise to other problems elsewhere. To give a few trivial and commonplace examples of issues I am trying to deal with right now:

  • I recently learned of two courses that could not be launched because tutors for the single course that they replace would have to be rehired and lose benefits gained for long service. In terms of priorities and primary roles, this implies that offering stable employment to staff matters more than teaching. That’s not the intent of any particular individual involved in the process but it’s how the system works, thanks to union agreements that solved different problems a long time ago.
  • For nearly 50 years now, our undergraduate students have had 6 months to complete a course, unless they are grant-funded (an important minority), in which case they only get 4 months because funding bodies assume universities always teach in semesters of a standardized length and demand results within that timeframe. And so we are in the process of making all contracts 4 months, knowing full well that students will be more pressured, cheating will increase, and pass rates will go down, but at least it will be fairer.
  • When we commit structures to code they are supposed to model the system but, having done so, they normally come to dictate it. For instance, my need for all of our faculty to be able to see the teaching sites of all of our courses (a critical part of my strategy to improve our teaching) is under threat because of the cascading roles, baked into the implementation of our LMS, that determine who can do what. These make it difficult and long-winded for our editors to edit our courses, because the roles have to be modified each time the editors use the impersonation function that is necessary for viewing courses as they will be experienced. The obvious solution is to fix those roles, not to remove access for those who need it, but the editors lack such rights, and those who have them support other faculties with different and conflicting needs.
  • We have recently shifted to a centralized front-line support system, explicitly to deal with common difficulties students have in navigating and using our administrative systems and websites. The more obvious solution would be to make those systems work better in the first place. Instead, we employ vast numbers of people whose job it is to patch over gaps, errors, and poor design decisions made elsewhere. This reduces the pressure to fix the systems, so the need persists, except that now we have a whole load of people with jobs that would be in jeopardy if we fixed them. We employ many people whose job is to fix problems caused by issues with how others do theirs: people dedicated to dealing with exam cheating, say, or to accommodating disabilities, or the aforementioned editors. There’s a fine and indistinct line between dividing a workload so that people with the right expertise do the right things, and creating a workload because people with the wrong expertise have done the wrong things.

I could easily write pages of similar examples and, if you work for a university or college, I’m sure you could too: the specific problems may be peculiar to Athabasca University, but the underlying dynamics are ubiquitous in higher education and, for that matter, most large organizations. And I’m sure that you can think of ways to deal with any of them but that’s exactly the point: fixing them is what we all do, all the time, every day, on a grand scale, and educators have been doing so for nearly 1000 years so the number of fixes to fixes to fixes to fixes is vast.  For almost any role or activity, no matter how small or how large, there is probably another role and set of activities on which it impinges, directly or otherwise.

The big problem is that, on the whole, we create counter-technologies to fix the worst of the problems and that’s a policy of despair, every counter-technology creating new problems for further counter-technologies to solve. In fact, a large part of the reason for all those many roles is precisely because counter-technologies were created to solve what probably seemed like pressing problems and, in an inevitable Faustian bargain, created the problems we now need to address. Every one of these counter-technologies increases the robustness of the whole, increasing the interdependencies, making the patterns more and more indelible so, even if we do occasionally come up with something truly different, the overall system holds together as a massive web of mutually interdependent pieces more strongly than ever.

The more things change…

For all the many structural problems, it would be a synecdochic fallacy of mistaking the part for the whole to describe higher education as broken. Sure, thanks to all those competing roles (especially credentialing) it is not particularly great at education (at least), so transformation is devoutly to be wished for but, by the most basic and essential criterion of all –  survival – it is rampantly successful. In fact, it is exactly those competing and complementary roles that have sustained it because a diverse ecosystem is a resilient ecosystem. The webs of dependencies are mutually sustaining even, to a well-evolved point, when one is antagonistic to the other.

For nearly a millennium the university and its brethren have not only survived but have now spread to almost every populated region of the world, and they continue to expand. Within my lifetime, in my country of birth, enrolments in higher education have risen from around 5% of the population to around 50%. To achieve such success, it has had to evolve: the invention of written exams, say, in the 18th Century, Humboldtian models that justified and embedded research, the adoption of flexible curricula, or the admittance of women in the 19th Century, were all huge changes. It has lost the trivium and quadrivium along the way, and diversified enormously in the range of subjects taught. The technological systems are way more advanced and varied than they were.  There are regional variations, and a few speciated niches (colleges, open universities, distance education, etc). Administratively, a lot has changed, from recruitment and enrolment to the roles of professional bodies, industry, and governments.  It is constantly evolving, for sure.

But.

The main technological features that universities acquired in the first century of their existence are still fully present, in virtually unaltered form.  Courses, classes, terms/semesters, professors, credentials, methods of teaching, organizational structures, methods of assessment, and plenty more are visibly the same species as their mediaeval forebears, and remain the central motifs of virtually all formal higher education. We may use a few more polyesters and zippers, and the gowns now come in women’s sizes but, at least once a year, many of us even dress the same, a behaviour shared with only a few other institutions like (in some countries) the legal profession or the church. On the subject of which, most universities continue to have roles like dean, chancellor, rector, provost, registrar, bursar and even the odd beadle (what even is that?) that not only reveal their ecclesiastic origins but also how little the basic entities in the system have since evolved.

If the purpose of higher education were simply to educate then we would expect it to work a lot better, and we would expect to see a whole lot more variation in how it is done, especially given the wide range of technologies that can now be used to overcome the problems caused by those features, but we don’t. It’s not just the purpose that survives: it’s the form. We can radically alter a great many processes, but changing even one or two of the central motifs themselves – which, to me, is what “transformation” must entail – is hardly ever on the table.

Adaptation, not transformation

If the institution had a clear overriding goal then we could re-engineer it to work differently, but this is not an engineering problem: it’s an evolutionary problem. We build with what we have on what we have, a process of tinkering or bricolage that is anything but engineered. It is, though, not natural but technological evolution. In natural ecosystems, massive disruption can occur when populations become isolated, or when the environment radically changes. Technological evolution emerges through the recombination and assembly of parts, not genes, and the technologies of higher education have evolved to be globally connected and massively intertwingled with nearly every other part of nearly every society, making isolation virtually impossible. In nature, ecosystems can be disrupted by invasive species, parasites, and the like, but our educational systems – technologies one and all – have evolved to be great at absorbing stuff rather than competing with it, so even that path is fraught. Even something as apparently disruptive as generative AI, which is impacting almost every aspect of the system and all the systems with which it interacts, is currently reinforcing objectives-driven models of teaching, (at least in Western countries) cultural individualism, and highly traditionalist responses to fears of cheating, like written and oral exams, at least as much as it is inspiring change.

For those of us who care about the education role, there are plenty of ways we could actually transform it if we had the power to make the necessary changes. Decoupling learning and assessment would be a good start. Not just separating teaching and tests: that would just result in teaching to the test, as we see now. The decoupling would have to be asymmetrical, so that the assessed tasks demand synthesis of many taught things. Or we could get rid of classes and courses: to a large extent, this is what (despite the name) many Connectivist MOOCs have attempted to do, and it is also the pattern behind things like the Khan Academy or Contact North’s AI Tutor Pro, not to mention traditional PhDs (at least in some countries), apprenticeship models of learning, most instructional videos on sites like YouTube, Stack Exchange and Quora, and the bulk of student projects (which, like MOOCs, are labelled as courses but lack most if not all of their traditional trappings). Or we could keep courses but drop the schedules and time limits. If nothing else, imagining how things might work if we messed with those central motifs is a good way to stimulate creative use of what we have. If done at scale, such things could make a huge impact on our educational systems.

But they probably won’t.

The problem always comes back to the fact that, though (collectively) we could change the fitness landscape itself, making survival dependent on whatever we think matters most, we are unlikely to agree on what matters most. For some, better higher education would be measured in credentials, or explicit learning outcomes, or better fits with industry needs. Others would like it to advance their personal careers or status, or to do research without a profit motive. For me, improvements would lie in far harder-to-measure aspects like building safer, kinder, smarter, more creative societies. Unfortunately (for me and others who feel that way), thanks to pace layering, the ones who could shape the fitness landscape the most are governments, and they are the least likely to do so. Governments tend to prefer things that are easier to measure, quicker to show results, and more likely to keep voters voting for them and sponsors (especially from industry) sponsoring them. Increasingly, institutional mandates are measured by industry impact, which erodes some traditional aspects of higher education but reinforces the big ones, like the measurable, assessed, outcome-driven course, with its classes, its schedules, its semesters, its textbooks, its assessments, its teachers, and so on. It doesn’t have to be this way in principle but, in practice, those are not the things we adapt. If radical transformation ever does occur it will therefore most likely be the result of something so disruptive that the loss of higher education would be a minor concern: devastation caused by climate change, or nuclear war, or being hit by a large asteroid, for instance. And, to be honest, I’m not even sure that would be enough.

The limited chances of success should not discourage us from tinkering, all the time, whenever we can. Evolution must happen because the world that higher education inhabits evolves, so, if this is the system we are stuck with, we should make it do what we want it to do as best we can. There are usually ways to reduce dependencies, techniques to decouple antagonistic roles, strategies of simplification, approaches to parcellating the landscape (skunkworks, etc), and values-based principles for prioritizing activities, all of which can make it more likely that the changes will be successful and persistent. However, if we have learned anything from biological studies over the past many decades, it is that you shouldn’t mess with an ecosystem. Whatever we do will put it out of balance, and self-organizing dynamics will ensure either that the balance is restored or that it spirals out of control and breaks altogether. Either way, it will never be exactly what we planned and, on average, it will eventually tend to keep things much the same as they are, making most of the system worse while it restabilizes itself.

Knowing that, though, can be useful. If every change will result in changes elsewhere, it is not enough to monitor the direct impact of an intervention: rather, we need to figure out ways of harvesting the outcomes across the system and/or, as best we are able, to model them in advance. No one has access to more than a fraction of the information needed, not least because a significant amount of it is tacit, embedded in the culture and practices of people and communities within the system. However, we can try to intentionally capture it, to tell stories, to share experiences and understandings across all those many niches. We can do what we can to make the invisible visible. We can talk. And we have technologies to help, inasmuch as we can train AIs to know our stories, ask them about the impacts of things we do, and have them point out impacts that would be difficult if not impossible for any person to see. And that, I think, is the only viable path we have. The problems we generally have to deal with are a direct result of local thinking: solutions in one space that cause problems in another. The less locally we think about such things, the greater the chances that we will avoid unwanted impacts elsewhere or, equally good, that we will cause wanted impacts. To achieve that demands openness and dialogue, channels through which we can share and communicate, and some way of compressing, parsing, and relaying all of that so that sharing and communication are not the only things we ever do. This is not an impossibly tall order but it certainly isn’t easy.
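By way of a deliberately toy illustration (the stories, niches, and effects below are all invented), the harvesting problem can be thought of as a simple data problem: gather stories from every niche, tag them with the interventions they touch and the effects people noticed, and query across the whole corpus before making a change, rather than only within the niche proposing it. A minimal sketch in Python:

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    teller: str        # who experienced it
    niche: str         # where in the system it happened
    intervention: str  # what was changed
    effects: list[str] = field(default_factory=list)  # what was noticed

# An invented corpus: in reality these would be harvested, shared stories.
CORPUS = [
    Story("instructor", "assessment", "oral exams",
          ["less cheating", "higher anxiety"]),
    Story("registrar", "scheduling", "oral exams",
          ["exam-period bottlenecks"]),
    Story("student", "learning", "AI tutor",
          ["faster feedback", "less peer discussion"]),
]

def cross_system_effects(intervention: str) -> dict[str, list[str]]:
    """Gather the effects of an intervention as reported from every niche,
    not just the one proposing it: the anti-local-thinking query."""
    effects: dict[str, list[str]] = {}
    for story in CORPUS:
        if story.intervention == intervention:
            effects.setdefault(story.niche, []).extend(story.effects)
    return effects

print(cross_system_effects("oral exams"))
# {'assessment': ['less cheating', 'higher anxiety'],
#  'scheduling': ['exam-period bottlenecks']}
```

Nothing so tidy exists in practice, of course: the interesting (and hard) part is the tacit residue that never makes it into any tag, which is where trained AIs, stories, and talk come in.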

Tool-using tools – Perceptions and misperceptions of generative AI (slides from my keynote for the Global AI Summit, 2025, at Bennett University, India)

Here are the slides from the first of my two keynotes last week, Tool-using tools – Perceptions and misperceptions of generative AI. This one was for the Global AI Summit 2025, hosted at Bennett University in India.

The talk covered ground that I’ve already blogged about. My big point is that it is not just inaccurate but misleading to think of genAIs as tools: it grants us too much agency. If you have to use an existing term then I think “appliance” is a much more accurate label because they are technologies that do thinking for us, much as refrigerators do cooling for us, or dishwashers wash our dishes. Just as some skill is needed to use a dishwasher or fridge, some skill is needed to get a genAI to think: it’s OK to think of prompts as tools for that purpose. However, it is not our thinking, and that matters. GenAIs are unlike any prior technology because they are, like us, tool users and creators. It is possible to ask genAIs to act as (or at least create and host) tools. It’s just not what we usually use them for. I think “metatool” is a better term.

I gave this talk online, at 4am Wednesday morning, finishing less than an hour before I had to leave for the airport for Japan, where I was due to give my second keynote of the week,  on generative vs degenerative AI, so I might not have been at the top of my game!

Generative vs Degenerative AI (my ICEEL 2025 keynote slides)

I gave my second keynote of the week last week (in person!) at the excellent ICEEL conference in Tokyo. Here are the slides: Generative AI vs degenerative AI: steps towards the constructive transformation of education in the digital age. The conference theme was “AI-Powered Learning: Transforming Education in the Digital Age”, so this is roughly what I talked about…

Transformation in (especially higher) education is quite difficult to achieve. There is gradual evolution, for sure, and the occasional innovation, but the basic themes, motifs, and patterns – the stuff universities do and the ways they do it – have barely changed in nigh-on a millennium. A mediaeval professor or student would likely feel right at home in most modern institutions, at times right down to the clothing. There are lots of path dependencies that have led to this, but a big part of the reason is the multiple subsystems that have evolved within education, and the vast number of supersystems in which education participates. Anything new has to thrive in an ecosystem alongside countless other parts that have co-evolved together over the last thousand years. There aren’t a lot of new niches, the incumbents are very well established, and they are very deeply enmeshed.

There are several reasons why things may be different now that generative AI has joined the mix. Firstly, generative AIs are genuinely different – not tools but cognitive Santa Claus machines, a bit like appliances, a bit like partners, capable of becoming almost anything, but not really the same as anything else we have ever created. Let’s call them metatools: manifestations of our collective intelligence and generators of it. One consequence of this is that they are really good at doing what humans can do, including teaching, and students are turning to them in droves because they already teach the explicit stuff (the measurable skills and knowledge we tend to assess, as opposed to the values, attitudes, and motivational and socially connected stuff that we rarely even notice) better than most human teachers. Secondly, genAI has been highly disruptive to traditional assessment approaches: change (not necessarily positive change) must happen. Thirdly, our cognition itself is changed by this new kind of technology, for better or worse, creating a hybrid intelligence we are only beginning to understand but that cannot be ignored for long without rendering education irrelevant. Finally, genAI really is changing everything everywhere all at once: everyone needs to adapt to it, across the globe and at every scale, ecosystem-wide.

There are huge risks that it can (and plentiful evidence that it already does) reinforce the worst of the worst of education: simply replacing what we already do with something that hardens it further, that does the bad things more efficiently and more pervasively, and that revives obscene forms of assessment and archaic teaching practices, but without any of the saving graces and intricacies that make educational systems work despite their apparent dysfunctionality. This is the most likely outcome, sadly. If we follow this path, it ends in model collapse, not just for LLMs but for human cognition. However, just perhaps, how we respond to it could change the way we teach in good if not excellent ways. It can do so as long as human teachers are able to focus on the tacit, the relational, the social, and the immeasurable aspects of what education does, rather than the objectives-led, credential-driven, instrumentalist stuff that currently drives it and that genAI can replace very efficiently, reliably, and economically. In the past, the tacit came for free when we did the explicit thing, because the explicit thing could not easily be achieved without it. When humans teach, no matter how terribly, they teach ways of being human. Now, if we want it to happen (and of course we do, because education is ultimately more about learning to be than learning to do), we need to pay considerably more deliberate attention to it.

The table below, copied from the slides, summarizes some of the ways we might productively divide the teaching role between humans and AIs:

Relationships
Human role (e.g.): Interacting, role modelling, expressing, reacting.
AI role (e.g.): Nurturing human relationships, discussion catalyzing/summarizing.

Values
Human role (e.g.): Establishing values through actions, discussion, and policy.
AI role (e.g.): Staying out of this as much as possible!

Information
Human role (e.g.): Helping learners to see the personal relevance, meaning, and value of what they are learning. Caring.
AI role (e.g.): Helping learners to acquire the information. Providing the information.

Feedback
Human role (e.g.): Discussing and planning, making salient, challenging. Caring.
AI role (e.g.): Analyzing objective strengths and weaknesses, helping with subgoals, offering support, explaining.

Credentialling
Human role (e.g.): Responsibility, qualitative evaluation.
AI role (e.g.): Tracking progress, identifying unprespecified outcomes, discussion with human teachers.

Organizing
Human role (e.g.): Goal setting, reacting, responding.
AI role (e.g.): Scheduling, adaptive delivery, supporting, reminding.

Ways of being
Human role (e.g.): Modelling, responding, interacting, reflecting.
AI role (e.g.): Staying out of this as much as possible!

I don’t think this is a particularly tall order but it does demand a major shift in culture, process, design, and attitude. Achieving that from scratch would be simple. Making it happen within existing institutions without breaking them is going to be hard, and the transition is going to be complex and painful. Failing to make it happen, though, doesn’t bear thinking about.

Abstract

In all of its nearly 1000-year history, university education has never truly been transformed. Rather, the institution has gradually evolved in incremental steps, each step building on but almost never eliminating the last. As a result, a mediaeval professor dropped into a modern university would still find plenty that was familiar, including courses, semesters, assessments, methods of teaching and perhaps, once or twice a year, scholars dressed like him. Even such hugely disruptive innovations as the printing press or the Internet have not transformed so much as reinforced and amplified what institutions have always done. What chance, then, does generative AI have of achieving transformation, and what would such transformation look like?
In this keynote I will discuss some of the ways that, perhaps, it really is different this time: for instance, that generative AIs are the first technologies ever invented that can themselves invent new technologies; that the unprecedented rate and breadth of adoption is sufficient to disrupt stabilizing structures at every scale; that their disruption to credentialing roles may push the system past a tipping point; and that, as cognitive Santa Claus machines, they are bringing sweeping changes to our individual and collective cognition, whether we like it or not, that education cannot help but accommodate. However, complex path dependencies make it at least as likely that AI will reinforce the existing patterns of higher education as disrupt them. Already, a surge in regressive throwbacks like oral and written exams is leading us to double down on what ought to be transformed, while rendering vestigial the creative, relational, and tacit aspects of our institutions that never should be. Together, we will explore ways to avoid this fate and to bring about constructive transformation at every layer, from the individual learner to the institution itself.

Paper: Cognitive Santa Claus Machines and the Tacit Curriculum

This is my contribution to the inaugural issue of AACE’s new journal of AI-Enhanced Learning, Cognitive Santa Claus Machines and the Tacit Curriculum. If the title sounds vaguely familiar, it might be because you have seen my post offering some further thoughts on cognitive Santa Claus machines, written not long after I had submitted this paper.

The paper itself delves a bit into the theory and dynamics of genAI, cognition, and education. It draws heavily from how the theory in my last book has evolved, adding a few refinements of its own here and there, most notably in its distinction of use-as-purpose vs use-as-process. Because genAIs are not tools but cognitive Santa Claus machines, this helps to explain how the use of genAI can simultaneously enhance and diminish learning, both individually and collectively, to varying degrees that range from cognitive apocalypse to cognitive nirvana, depending on what we define learning to be, whose learning we care about, and what kind of learning gets enhanced or diminished. A fair portion of the paper is taken up with explaining why, in a traditional credentials-driven, fixed-outcomes-focused institutional context, generative AI will usually fail to enhance learning and, in many typical learning and institutional designs, may even diminish our individual (and ultimately collective) capacity to do so. As always, it is only the whole assembly that matters, especially the larger structural elements, and genAI can easily short-circuit a few of those, making the whole seem more effective (courses seem to work better, students seem to display better evidence of success) while the things that actually matter get left out of the circuit.

The conclusion describes the broad characteristics of educational paths that will tend to lead towards learning enhancement: first of all, focusing our energies on education’s social role in building and sharing tacit knowledge; then on ways of using genAI to do more than we could do alone; and, underpinning this, on expanding our definitions of what “learning” means beyond the narrow confines of “individuals meeting measurable learning outcomes”. The devil is in the detail and there are certainly other ways to get there than by the broad paths I recommend but I think that, if we start with the assumption that our students are neither products nor consumers nor vessels for learning outcomes, but co-participants in our richly complex, ever-evolving, technologically intertwingled learning communities, we probably won’t go too far wrong.

Abstract:

Every technology we create, from this sentence to the Internet, changes us but, through generative AI (genAI), we can now access a kind of cognitive Santa Claus machine that invents other technologies, so the rate of change is rising exponentially. Educators struggle to maintain a balance between sustaining pre-genAI values and skills, and using the new possibilities genAIs offer. This paper provides a conceptual lens for understanding and responding to this tension. It argues that, on the one hand, educators must acknowledge and embrace the changes genAI brings to our extended cognition while, on the other, we must valorize and double down on the tacit curriculum, through which we learn ways of being human in the world.

New open journal from AACE: AI-Enhanced Learning (with a paper from me)

The Journal of Artificial Intelligence Enhanced Learning (AIEL), a diamond open-access journal published under the auspices of AACE and distributed worldwide through LearnTechLib, has just launched its inaugural issue, which includes a paper from me (Cognitive Santa Claus Machines and the Tacit Curriculum).

This inaugural issue is a great start to what I think will come to be recognized as a leading journal in the field of AI and education. As not just an author but an associate editor I am naturally a little biased, but I am very picky about the journals I work with and this one ticks all the right boxes. It is genuinely open, without fees for authors or readers. It is explicitly very multidisciplinary. The editors – Mike Searson, Theo Bastiaens and Gary Marks – are truly excellent, and prominent in the field of online and technology-enhanced learning. The publisher, AACE, is a very well-oiled, prominent, professional, and likeable organization that has been a major player in the field for over 30 years, with extensive reach into institutional libraries the world over via LearnTechLib.

And the journal has an attitude that I like very much: it’s about learning enhancement through AI, not just AI and education. This fills a huge pragmatic need in an area where many practitioners are like deer caught in the headlights when it comes to thinking about what positive things we can do with our new robot friends/overlords/interlopers, and where too much of the conversation is implicitly focused on protecting the traditional forms and structures of our mediaeval education systems and the kinds of knowledge generative AI can more easily and effectively replicate.

This first issue crosses many disciplinary boundaries and aspects of the educational endeavour with a very diverse range of reflective papers by recognized experts in many facets of AI, education, and learning.  All are ultimately optimistic about the potential for learning enhancement but few back away from the wicked problems and potential for the opposite effect.  My own paper finds a thread of hope that we might not so much reinvent as simply notice what education currently does (it’s about learning to be as much as learning to do), and that we might recognize generative AIs not as tools but as cognitive Santa Claus machines, sharing their cognitive gifts to help us collectively achieve things we could not dream of before. It has a bit of theory to back that up.

If you have influence over such things, do encourage your libraries to subscribe!

Educational technologies and the synecdochic fallacy

For a few minutes the other day I thought that I had invented a new kind of fallacy or, at least, a great term to describe it. Disappointingly, a quick search revealed that it was not only an old idea but one that has been independently invented at least twice before (Berry & Martin, 1974; Weinstock, 1981). Here is its definition from Weinstock (1981):

“a synecdochic fallacy is a deceptive, misleading, erroneous, or false notion, belief, idea, or statement where a part is substituted for a whole, a whole for a part, cause for effect, effect for cause, and so on.”

Most synecdoches (syn-NEK-doh-kees in case you were wondering – I have been getting it totally wrong for decades) are positively useful. Synecdoches make aspects of a whole more salient by focusing on the parts. No one, for instance, thinks “all hands on deck” actually means the crew should put their hands on the deck let alone that disembodied hands should crew the ship, but it does focus on an aspect of the whole that is of great interest: that there is an expectation that those hands will be used to do what hands do. Equally, synecdoches can make the parts more salient by focusing on the whole. When we say “Canada beat the USA in the finals” no one thinks that one literal country got up and thrashed the other, but it draws attention to a symbolic aspect of a hockey game that reveals one of its richer social roles. It becomes a fallacy only when we take it literally. Unfortunately, doing so is surprisingly common in research about education and educational technologies.

Technologies as synecdoches

The labels we use for technologies are very liable to be synecdochic (syn-nek-DOH-kik if you were wondering): it is almost a defining characteristic. Technologies are assemblies, and parts of assemblies, often contained by other technologies, often containing an indeterminate number of technologies that themselves consist of indeterminate numbers of technologies, that participate in richly recursive webs of further technologies with dynamic boundaries, where the interplay of process, product, structure, and use constantly shifts and shimmers. The labels we give to technologies are as much descriptions of sets of dynamic relationships as they are of objects (cognitive, physical, virtual, organizational, etc) in the world, and the boundaries we use to distinguish one from another are very, very fluid.

There is no technology that cannot be combined with different others or in different ways in order to create a different whole. Without changing or adding anything to the physical assembly, a screwdriver, say, can be a paint stirrer, a pointer, a weapon, or unprestateably many other technologies, far from all of which are so easily labelled. Virtually every use of a technology is itself a technology, and it is often one that has never occurred in exactly the same way in the entire history of the universe. This sentence is one such technology: though there may be lots of sentences that are similar, the chances that anyone has ever used exactly this combination of words and punctuation before now are close to zero. Same for this post. This post has a title: that is the name of this technology, though it is a synecdoche for… what? The words it contains? Not quite, because now (literally as I write) it contains more of them but it is still this post. Is it still this post when it is syndicated? If the URL changes? Or the title? Or if I read it aloud and turn it into a podcast? I don’t know. This sentence does not have a name, but it is no less a technology. So is your reading of it. So is much of what is involved in the sense you are making of it, and that is the technology that probably matters most right now. No one has ever made sense of anything in exactly this way, right now, the way you are doing it, and no one ever will. The technosphere is almost as awesomely complex as the biosphere and, in education, the technosphere extends deep into every learner, not just as an object of learning but as part of learning itself.
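For anyone who likes their metaphors executable, here is a toy model (an illustration, not a formal theory) of the recursive-assembly idea: the same part participating in different wholes, each a different technology because the use differs, and assemblies nesting inside other assemblies. All the names are, of course, invented:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Part:
    name: str

@dataclass
class Assembly:
    use: str                                    # the enacted purpose
    parts: list = field(default_factory=list)   # Parts and/or other Assemblies

    def flatten(self):
        """Recursively enumerate every Part this technology contains."""
        for p in self.parts:
            if isinstance(p, Assembly):
                yield from p.flatten()
            else:
                yield p

screwdriver = Part("screwdriver")

# One physical object, three different technologies, because the use differs:
driving = Assembly("driving a screw", [screwdriver, Part("screw"), Part("hand")])
stirring = Assembly("stirring paint", [screwdriver, Part("paint tin")])
pointing = Assembly("pointing at a slide", [screwdriver, Part("presenter")])

# Assemblies nest inside other assemblies, indefinitely:
lesson = Assembly("teaching a class", [pointing, Part("slides"), Part("talk")])
print([p.name for p in lesson.flatten()])
# ['screwdriver', 'presenter', 'slides', 'talk']
```

The label we give any of these assemblies names the whole orchestration, use included, which is exactly why the label is a synecdoche waiting to become a fallacy.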

Synecdoches and educational/edtech research

Let’s say you wanted to investigate the effects of putting computers in classrooms. It seems reasonable enough: after all, it’s a big investment so you’d want to know whether it was worth it. But what do you actually learn from doing so apart from that, in this particular instance, with this particular set of orchestrations and uses, something happened? Yes, computers might have been prerequisites for it happening but so what? An infinite number of different things could have happened if you had done something else even slightly different with them, there are infinitely many other things you could have done that might have been better, and all bets would be off if the computers themselves had been different. The same is equally true for what happens in classrooms without computers. What can you predict as a result? Even if you were to find that, 100% of the time until now, computers in classrooms led to better/worse learning (whatever that might mean to you) I guarantee that I could find plenty of ways of using them to do the precise opposite. This is functionally similar to taking “all hands on deck” literally: the hands may be very salient but, without taking into account the people they are attached to and exactly what they are doing with those hands, there is little or no value in making comparisons. Averages, maybe; patterns, perhaps, as long as you can keep everything else more or less similar (e.g. a traditional formal school setting); but reliable predictions of cause and effect? No. Or anything that can usefully transfer to a different setting (e.g. unschooling or – ha – online learning)? Not at all.

Conversely, but following the same synecdochic logic, we might ask questions about the effectiveness of online and distance learning (the whole), comparing it with in-person learning. Both encompass immense numbers of wildly diverse technologies, including not just course and class technologies but things like pedagogical techniques, institutional structures, and national standards, instantiated with wildly varying degrees of skill and talent, all of which matter at least as much as the fact that the learning is online and at a distance. Many may matter more. This is functionally similar to taking “Canada beat the US” literally. It did not. It remains a fallacy even if, on average, Canada (the hockey team) does win more often, or if online and distance learning is generally more effective than in-person learning, whatever that means. The problem is that the comparison does not distinguish which of the many millions of parts of the distance or in-person orchestration of phenomena matter and, for aforementioned and soon-to-be-mentioned reasons, it cannot.

Beyond causing physical harm – and even then with caveats – there is virtually nothing you could do or use to teach someone that, if you modified some other part of the assembly or organized the parts a little differently, could not have exactly the opposite effect the next time you do or use it. This sentence, say, will have quite different effects from the next despite using almost the exact same components. Almost components effects next the despite using different quite will sentence, say, this have the from exact. It’s a silly example and it is not difficult to argue that further components (rules of grammar, say) are sufficiently different that the comparison is flawed, but that’s exactly the point: all instantiations of educational technologies are different, in countless significant ways, each of which impacts lots of others which in turn impact others, in a complex adaptive system filled with positive and negative feedback loops, emergence, evolution, and random impacts from the systems that surround it. I didn’t actually even have to mix up the words. Had I repeated the exact same statement, its impact would have been different from the first because something else in the system had changed as a result of it: you and the sentence after. And this is just one sentence, and you are just one reader. Things get much more complex really fast.

In a nutshell, the synecdochic fallacy is why reductive research methods that serve us so well in the natural sciences are often completely inappropriate in the field of technology in general and education in particular. Natural science seeks and studies invariant phenomena but, because every use (at least in education) is a unique orchestration, technologies as they are actually enacted (i.e. the whole, including the current use) are never invariant and, even on those odd occasions that they do remain sufficiently similar for long enough to make study worthwhile, it just takes one small tweak to render useless everything we have learned about them.

All is not lost

There are lots of useful and effective kinds of research that we can do about educational technologies. Reductive science is great for identifying phenomena and what we can do with them in a technological assembly, and that can include other technologies that are parts of assemblies. It is really useful, say, to know about the properties of nuts and bolts used to build desks or computers, the performance characteristics of a database, or that students have persistent difficulties answering a particular quiz question. We can use this information to make good creative choices when changing or creating designs. Notice, though, that this is not a science of teaching or education. This is a science of parts and, if we do it with caution, their interactions with other parts. It is never going to tell us anything useful about, say, whether teaching to learning styles has any positive effect, whether direct instruction is better than problem-based learning, or whether blended learning is better than in-person or online learning, but it might help us build a better LMS or design a lesson or two more effectively, if (and only if) we use the information creatively and wisely.

Other effective methods involve the telling of rich stories that reveal phenomena of interest and reasons for or effects of decisions we made about putting them together: these can help others faced with similar situations, providing inspirations and warnings that might be very useful. If we find new ways of assembling or orchestrating the parts (we do something no one has done before) then it is really helpful to share what we have done: this helps others to invent because it expands the adjacent possible. Similarly we can look for patterns in the assembly that seem to work and that we can re-use (as parts) in other assemblies. We can sometimes come up with rules of thumb that might help us to (though never to predict that we will) build better new ones. We can share plans. We can describe reasons.

What this all boils down to is that we can and should learn a great deal that is useful about the component technologies, and we can and should seek broad patterns in the ways that they intertwingle. What we cannot do, neither in principle nor in practice, is use what we have learned to accurately predict anything specific about what happens when we put them together to support learning. It’s about improving the palette, not improving the painting. As Longo, Montévil, and Kauffman (2012) put it, in a complex system of this nature – and this applies as much to the biosphere, culture, and economics as it does to education and technology – there are no laws of entailment, just of enablement. We are firmly in the land of emergence, evolution, craft, design, and bricolage, not engineering, manufacture, and mass-production. I find this quite liberating.


References

Berry, K. J., & Martin, T. W. (1974). The Synecdochic Fallacy: A Challenge to Recent Research and Theory-Building in Sociology. Pacific Sociological Review, 17(2), 139–166. https://doi.org/10.2307/1388339
Longo, G., Montévil, M., & Kauffman, S. (2012). No entailing laws, but enablement in the evolution of the biosphere. Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, 1379–1392. https://doi.org/10.1145/2330784.2330946
Weinstock, S. M. (1981). Synecdochic fallacy [Panel paper]. 67th Annual Meeting of the Speech Communication Association, Anaheim, CA. https://www.scribd.com/document/396524982/Synecdochic-Fallacy-1981

Cognitive Santa Claus machines

I’ve just submitted a journal paper (shameless plug: to AACE’s AIEL, of which I am an associate editor) in which I describe generative AIs as cognitive Santa Claus machines. I don’t know if it’s original but the idea appeals to me. Whatever thought we ask for, genAIs will provide it, mining their deep, deep wells of lossily compressed recorded human knowledge to provide us with skills and knowledge we do not currently have. Often they surprise us with unwanted gifts and some are not employing the smartest elves in the block but, by and large, they give us the thinking (or near facsimile) we want without having to wait until Christmas Eve.

Having submitted the paper, I now realize that they are not just standalone thinking appliances: they can potentially be drivers of general-purpose Santa Claus machines. As an active user of and, above all, creator of all sorts of digital technologies, I have found them, for example, incredibly handy for quickly churning out small apps and utilities that are useful but that would not be worth the week or more of effort they would otherwise take me to build. It is already often quicker to build a Quick Action for my Mac Finder than it would be to seek out an existing utility on the Web. The really interesting thing, though, is that they are perfectly capable of creating .scad files (or similar) that can be 3D printed. My own 3D printer has been gathering dust in a basement with a dead power supply for a few years so I’ve not tested the output yet, but I have already used Claude, ChatGPT, and Gemini to design and provide full instructions and software for some quite complex electronics projects: between them they do a very good job, by and large, notwithstanding odd hallucinations and memory lapses. My own terrible soldering and construction skills are the only really weak points in the process.
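To give a flavour of what I mean (this is a made-up example, not a transcript of any actual genAI session), the sort of artefact a genAI will happily emit on request is a parametric OpenSCAD model, here wrapped in a few lines of Python that write it to a file ready for rendering and slicing; the bracket, its dimensions, and the file name are all invented for illustration:

```python
# The .scad source below is the kind of thing a genAI will draft on request;
# the bracket, its dimensions, and the file name are invented for illustration.

BRACKET = """
// Simple parametric L-bracket
thickness = {t};
arm = {arm};
width = {w};

union() {{
    cube([arm, width, thickness]);   // horizontal arm
    cube([thickness, width, arm]);   // vertical arm
}}
"""

def write_bracket(path="bracket.scad", t=4, arm=40, w=20):
    """Write an OpenSCAD source file, ready for rendering and slicing."""
    with open(path, "w") as f:
        f.write(BRACKET.format(t=t, arm=arm, w=w))

if __name__ == "__main__":
    write_bracket()
    # Then, for example: openscad -o bracket.stl bracket.scad
```

The point is not this particular trinket but that the whole chain, from vague description to printable object, no longer demands much human skill beyond knowing what you want.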

One way or another, for the first time in the existence of our species, we now have machines that do not just perform predetermined orchestrations or participate as tools in our own orchestrations: they do the orchestration for us. We therefore have at our fingertips machines that are able (in principle) to make any technology – including any other machine (including another 3D printer) – that we can imagine. The intellectual property complexities that will emerge when you can ask ChatGPT to, say, make you a smartphone or a house to your precise specifications make current copyright disputes pale by comparison. Phones might be tricky, for now, but houses are definitely possible. There are many (including my own son) who are looking further than that, down to a molecular level for what we can build, and that’s not to mention the long-gestating field of nanobots.

This is a level of abundance that has only been the stuff of speculative fiction until now and, for the most part, even scifi mostly talks of replicators, not active creators of something new. Much as in the evolution of life, there have been moments in the evolution of technology when evolvability itself has evolved: inventions like writing, technologies of transport, the Internet, the electronic valve, the wheel, or steam power, for example, have disproportionately accelerated the rate of evolution, bringing exponential increases in the adjacent possible. This might just be the biggest such moment yet.

Education in the age of Santa Claus machines

Where education sits in all of this is complicated. To a very large extent, the explicit goal of educational systems, at least, is to teach us how to operate the tools and other technologies of our cultures, by which I mean the literacies that allow us to participate in a complex, technologically mediated society, from writing to iambic pentameter, from experiments to theories. In brief, the stuff you can specify as learning outcomes. Even now, with the breakneck exponential increase in technologies of all kinds that has characterized the last couple of centuries, the rate of change is slow enough, and the need for complex skills is growing steadily enough, that there is a very clear demand for educational systems to provide them, and there are roughly enough skilled teachers to teach them.

The need persists because, when we create technologies, we are not just creating processes, objects, structures, and tools: we are creating gaps in them that humans must fill with soft or hard technique, because the use of a technology is also a technology. This means that the more technologies we create, the more we have (up until now) had to learn in order to use them. Though this is offset somewhat by the deskilling orchestrations built into the machines we create (often the bulk of the code in a digital project is concerned with lessening cognitive load, and even a humble door handle is a cognitive load-reducer), the world really has been, and still is, getting ever more complex. We need education more than ever.

Generative AIs modify that equation. Without genAI, creating 3D designs, say, and turning them into printed objects still demands vast amounts of human skill – skills using quite complex software, math, geometry, materials science, machinery, screwdrivers, ventilation, spatial reasoning, etc, etc etc. Black-boxing and automation can help: some of that complexity may be encapsulated in smart interfaces and algorithms that simplify the choices needed but, until now, there has usually been a trade-off between fine-grained control and ease of use. GenAIs restore that fine-grained control, to a large extent, without demanding immense skill. We just have to be able to describe what we want, and to follow instructions for playing our remaining roles like applying glue sticks or dunking objects in acetone baths. The same is true for non-physical genAI products.

So what does it mean to be able to use the technologies of your culture if there are literally millions of new and unique ones every day? Not just new arrangements of the same existing technologies like words, code, or images, but heterogeneous assemblies that no one has ever thought of before, tailor-made to your precise specifications. I have so many things I want to make this way. Some assembly will still be needed for many years to come but we will get ever closer to Theodore Taylor’s original vision of a fully self-contained Santa Claus machine, needing nothing but energy and raw materials to make anything we can imagine. If educational institutions are still needed, what will they teach and how will they teach it? One way they may respond is to largely ignore the problem, as most are doing now.

If educational systems do continue – without significant modification, without fully embracing the new adjacent possibles – to do nothing but teach and assess existing skills that AIs can easily perform at least as well, two weird things will happen. Firstly, sensible time-poor students will use the AIs to do the work or, at the very least, to help them. Secondly, sensible time-poor teachers will use the AIs to teach because, if all you care about is achieving measurable learning outcomes, AIs can or soon will be able to do that better, faster, and cheaper. That would make both roles rather pointless. But teaching doesn’t just teach measurable skills; it teaches ways of being human. The same is true when AIs do it, too. It’s just that we then learn ways of being human from machines. All of which (and much more that I have written and spoken about more than enough in the past) suggests that continuing along our existing outcomes-driven educational path might not be the smartest move – or failure to move – we have ever made.

It’s a systems thing. GenAIs are coming into a world that is already full of systems, and systems, above all else, have a will to survive. In our education systems we are still dealing with the problems caused by mediaeval monks solving problems with the limited technologies available to them because, once things start to depend on other things and subsystems form, people within them get very invested in solving local problems, not system-level problems, and those solutions cause problems for other local subsystems, and so it goes on, in a largely unbroken chain rich in recursive sub-cycles, until any change made in one part is counteracted by changes in others. What we fondly think of as good pedagogy, for instance, is not a universal law of teaching: it is how we solve problems caused by the ways our systems have evolved to teach. I think the worst thing we can possibly do right now is to use genAIs to solve the local problems we face as teachers, as learners, as administrators, and so on. If we use them to replicate the practices we have inherited from mediaeval monks then, instead of transforming our educational systems, we will actively reinforce everything that is wrong with them, because we will just make them better or faster at doing what they already do.

But of course we will do exactly that because what else can we do? We have problems to solve and genAIs offer solutions.

Three hopeful paths

I reckon that there are three hopeful, interlocking, and complementary paths we can take to prevent at least the worst-case impacts of what happens when genAI is combined with local thinking:

I. embrace the machine

The first hopeful path is to embrace the machine. It seems to me that we should be focusing a bit less on how to use or replicate the technologies we already have and a lot more on the technologies we can dream of creating. If we wish (and have the imagination to persuade a genAI to do it), we can choose exactly how much human skill is needed for any technological assembly, so the black-boxing trade-off that automation has always imposed upon us is not necessarily an issue any more: we can choose exactly the amount of soft technique we want to leave for humans in any given assembly instead of having it foisted upon us. For the first time, we can adjust the granularity of our cognition to match our needs and wishes rather than the availability of technologies. As a trivial example, if you want to nurture the creative skill of, say, drawing, you can build a technology that supports it while automating the things you’d rather not think about like, say, colouring it in. From an educational perspective this is transformative. It frees us from the need for prerequisite skills and scaffolding, because they can be provided by the genAI, which in turn gives us a laser focus on what we want to learn, not the peripheral parts of the assembly. At one fell swoop (think about it) that negates the need for disciplinary boundaries, courses, and cognitive barriers to participation, and that’s just a start: there are many dominoes that fall once we start pushing at the foundations. It puts the accomplishment of authentic, meaningful, personally relevant, sufficiently challenging but not overwhelming tasks within everyone’s reach. As well as shaping education to the technologies of our cultures, we can shape the technologies to the education.

A potential obstacle to all of that is that very few of us have any idea where the adjacent possibles lie, so how can we teach what, by definition, we do not know? I think the answer to that is simple: just let go, because that’s not what or how we should be teaching anyway. We should be teaching ways of making that journey, supporting learners along the way, nurturing communities, and learning with them, not providing maps for getting there. GenAIs can help with that, nudging, connecting, summarizing, and so on. They can also help us to track progress and harvest learning outcomes, if we still really need that credentialing role. And that’s one of the really cool things about genAIs: we don’t need to be trained to use them, because they can teach us whatever we need to know themselves. But, on its own, this is not enough.

II. embrace the tacit dimension

With the explicit learning outcomes taken care of (OK, that’s a bit of an exaggeration), the second hopeful path is to celebrate and double down on the tacit curriculum: to focus on the values, ways of thinking, passions, relationships, and meaning-making that learning from other humans has always provided for free while we teach students to meet those measurable learning outcomes. If we accept that the primary role of educational systems is social – to do with meaning-making, identity, and growth, treating everyone as an end in themselves, not as a means to an end – then we avoid or mitigate most of the risks of learning to be human through machines, and that is something that even those of us who have no idea how to use genAI can contribute to in a meaningful and useful way. Again, this is highly transformative. We must focus on the implicit, the tacit, and the idiosyncratic, because that’s what’s left when you take the learning outcomes away. Imagine a world in which learners choose an institution because of its communities and the quality of human relationships it supports, not its academic excellence. Imagine that this is what “academic excellence” means. I like this world.

III. embrace the human

The third hopeful path, interlocked with the other two, is to more fully celebrate the value of people doing things despite the fact that machines can do them better.

Though genAIs are a wholly new kind of technology that changes a lot of the rules, so we should be very wary of drawing too much from lessons of the past, it is worth reflecting on how the introduction of new technologies that appear to replace older ones has worked before. When photography was new, for instance, photographers often tried to replicate painterly styles, but photography also led to an explosion of new aesthetics for painting and a re-evaluation of the value a human artist creates. Without photography it is unlikely that Impressionism would have happened, at least at the point in history that it did: photography’s superior accuracy in rendering images of the world freed painters from the expectation of realism and eventually led to a different and more human understanding of what “realism” means, as well as to many new kinds of visual abstraction. Photography also created its own adjacent possibles, influencing composition and choices of subject matter for painters and, of course, it became a major art form in its own right. The fact that AIs can (or at least eventually will) produce better images than most humans does not mean we should or will stop drawing. It just means that the reasons for doing so will be fewer and/or that the balance of reasons for doing it will shift. There might not be so many jobs that involve drawing or painting, but we will almost certainly value what humans produce more than ever, both in the product and the process. We will care about what it expresses of our human experience and how it expresses it, and perhaps about its cognitive benefits, rather than its technical precision: exactly the kinds of things that make it valuable for human infants to learn, as it happens. On the subject of human infants, this is why the pictures on our refrigerators are far more likely to be our children’s or grandchildren’s than the products of diffusion models, and why they often share pride of place with the work of great masters on our walls.

The same is almost certainly true for teaching: generative AIs are, I hope, teaching’s photography moment, the point in history at which we step back and notice that what makes the activity valuable is not the transfer of explicit skills and knowledge so much as the ways of being human that are communicated along with it: the passion (or even the lack of it), the meaning, the values, the attitudes, the ways of thinking. When the dust settles, we are going to be far more appreciative of the products of humans working with dumb technologies than of the products of genAIs, even when the genAI does it measurably better. I think that is mostly a good thing, especially taking into account the many potential new heights of as-yet-unforeseeable creation that will be possible when we partner up with the machines and step into more of the adjacent possibles.

Embracing the right things

Technologies are often seen as solutions to problems but that is only one (and often the least interesting) part of what they do. Firstly, they also, and invariably, create new problems to solve. Secondly, and maybe more importantly, they create new adjacent possibles. Both of these other roles are open-ended and unprestateable: no amount of prior research will reveal more than a fraction of them. Finally, therefore, and as an overarching rule of thumb, I think it is incumbent on all of us who are engaged in the educational endeavour to play with these things in order to discover those adjacent possibles and, if we do choose to use them to solve our immediate problems, to discover as much as we can of the Faustian bargains they entail. Deontology is our friend in this: when we use genAI for a purpose, we should always ask ourselves what would happen if everyone in the world who was in a similar situation used it for that purpose, and whether we would want to live in that world. What would our days be like if they did? This is not as hypothetical as it is for most ethical decisions: there is a very strong chance that, for instance, a large percentage of teaching to learning outcomes will very soon be performed (directly or indirectly) by genAI, and we know that a significant (though hard-to-quantify) amount of student work is already the direct or indirect result of them. The decisions we are faced with are faced by many others, and they are happening at scale. We may have some substantial ethical concerns about using these things – I certainly do – but I think the consequences of not doing so are considerably worse. We’re not going to stop it by refusing to engage. We are the last generation to have grown up without genAI, so it is our job to try to preserve what should be preserved, and to try to change what shouldn’t.