Is higher education broken? Not exactly.

What does it mean for higher education to work?

The problem with claiming (as I sometimes do) that higher education is broken and needs to be transformed is that it raises the question of what it means for higher education to work, and that depends on what you think it is for.

From the name you’d expect that higher education might be for… well… education, taking that to be concerned with learning and teaching, but it outgrew that single purpose a very long time ago. Yes, learning and teaching still loom large, but credentialing is at least as significant (often more so) and, at least for some, so are research and various forms of service. But, depending on your perspective and context, a university or college might also or alternatively be thought of quite differently as, for example:

  • a driver of peace or prosperity in a society;
  • a creator of knowledge in the world;
  • a support for local economies;
  • training for industry;
  • a market for contract cheating;
  • a home for sports teams;
  • a sharer and preserver of cultural artifacts;
  • an incubator for the performing arts;
  • a means to get a better job;
  • a medical facility;
  • a production line for professors;
  • an enabler of social mobility;
  • a profit-/surplus-making business;
  • a political pawn;
  • a selection filter for smart people;
  • and so on, and on, and on.

You might reasonably object that you could take any one of these away apart from the teaching role and you would still be left with a recognizable educational institution and, indeed, some are possible only because of the teaching role. However, to some people, somewhere, some time, every one of those roles is the role that matters most, and might be a target for transformation. Like every instantiated technology, a university or college is an assembly. In fact it is a huge assembly. It is part of and contains countless other assemblies, and is thoroughly, deeply entangled with a host of other systems and subsystems on which it depends and that depend on it.  Everyone within it or interacting with it perceives it from a different perspective, in different ways at different times, working together or independently as mutually affective coparticipants to do whatever it is that, from each of those different perspectives, it does. In many ways, as a whole, it thus resembles an ecosystem and, like an ecosystem, each individual part can be perceived as having a goal and a relationship with other parts, and with the whole, but the whole itself does not. I think this is probably a feature of institutions in general, and may be what distinguishes them most clearly from simple organizations and businesses.

So what?

As long as the distinct roles, from each individual’s perspective, do their jobs, this is not a problem. If you are interested in, say, getting an education then you can largely ignore everything else an educational institution does and judge it solely by whether it teaches, notwithstanding the huge complexities of knowing what that even means, let alone with what proxies to measure it.

Unfortunately, a fair number of these roles deeply and negatively impact others. For me, by far the biggest problem is that the credentialing role is fundamentally at odds with the teaching role, due to the profound negative impact of extrinsic motivation on intrinsic motivation (I’ve written a lot about this, e.g. in these slides and in How Education Works, so I won’t repeat the arguments here). Combined with the side effects of trying to teach everyone the same thing at the same time, this results in the vast majority of our most cherished teaching and assessment methods being nothing more than ways of restoring or replacing the intrinsic motivation sucked out of students by how we teach and assess. Other big conflicts matter too, though. For instance, when patents or copyrights are at stake, the business role battles with the underlying goal of increasing knowledge in the world, turning non-rival knowledge into a rivalrous commodity; ditto for the insanity that is journal publishing, where the public pays us to provide our editorial and reviewing services for papers on research that it also pays for, then the journals sell the papers back to us or charge us for sharing them, making obscene profits for an increasingly trivial service. Similarly, the research role, which should in principle exist in a virtuous circle with teaching, is too often in competition with it and, in many institutions, teaching loses. The filtering role that rewards most universities (not mine) for excluding as many students as possible is in direct conflict with a mission to bring higher forms of learning to as many people as possible, and undermines the incentive to teach well, because those carefully selected students will learn pretty well regardless of how well they are taught. There are countless other examples like this: public vs private good, excellence vs equity, local vs global responsibilities, supporting student diversity vs economic stability, and so on. Fixing one role invariably impacts others, usually negatively. These are structural issues that will persist as long as higher education continues to play those roles: the solutions to the problems in one role are the problems that other roles have to solve, and (to a large extent) that is how it must be.

At a micro scale the problem is even more ubiquitous. Everyone is solving problems in their own local sphere, creating problems for others in their own local spheres, whose solutions cause problems for others, and so it goes around and comes around. Every time we create a solution to one problem we give rise to other problems elsewhere. To give a few trivial and commonplace examples of issues I am trying to deal with right now:

  • I recently learned of two courses that could not be launched because tutors for the single course that they replace would have to be rehired and lose benefits gained for long service. In terms of priorities and primary roles, this implies that offering stable employment to staff matters more than teaching. That’s not the intent of any particular individual involved in the process but it’s how the system works, thanks to union agreements that solved different problems a long time ago.
  • For nearly 50 years now, our undergraduate students have had 6 months to complete a course, unless they are grant-funded (an important minority), in which case they only get 4 months because funding bodies assume universities always teach in semesters of a standardized length and demand results within that timeframe. And so we are in the process of making all contracts 4 months, knowing full well that students will be more pressured, cheating will increase, and pass rates will go down, but at least it will be fairer.
  • When we commit structures to code they are supposed to model the system but, having done so, they normally dictate it. For instance, I need all of our faculty to be able to see the teaching sites of all of our courses (a critical part of my strategy to improve our teaching). That access is now under threat because of the cascading roles, baked into the implementation of our LMS, that determine who can do what: they make it difficult and long-winded for our editors to edit our courses, because the roles have to be modified every time the editors use the impersonation function that is necessary for viewing courses as students will experience them. The obvious solution is to fix those roles, not to remove access from those who need it, but the editors lack such rights, and those who have them support other faculties with different and conflicting needs.
  • We have recently shifted to a centralized front-line support system, explicitly to deal with common difficulties students have in navigating and using our administrative systems and websites. The more obvious solution would be to make those systems work better in the first place. Instead, we employ vast numbers of people whose job it is to patch over gaps, errors, and poor design decisions made elsewhere. This reduces the pressure to fix the systems, so the need persists, except that now we have a whole load of people whose jobs would be in jeopardy if we fixed them. We employ many people whose job is to fix problems caused by issues with how others do theirs: people dedicated to dealing with exam cheating, say, or to accommodating disabilities, or the aforementioned editors. There’s a fine and indistinct line between dividing a workload so that people with the right expertise do the right things, and creating a workload because people with the wrong expertise have done the wrong things.

I could easily write pages of similar examples and, if you work for a university or college, I’m sure you could too: the specific problems may be peculiar to Athabasca University, but the underlying dynamics are ubiquitous in higher education and, for that matter, most large organizations. And I’m sure that you can think of ways to deal with any of them but that’s exactly the point: fixing them is what we all do, all the time, every day, on a grand scale, and educators have been doing so for nearly 1000 years so the number of fixes to fixes to fixes to fixes is vast.  For almost any role or activity, no matter how small or how large, there is probably another role and set of activities on which it impinges, directly or otherwise.

The big problem is that, on the whole, we create counter-technologies to fix the worst of the problems and that’s a policy of despair, every counter-technology creating new problems for further counter-technologies to solve. In fact, a large part of the reason for all those many roles is precisely because counter-technologies were created to solve what probably seemed like pressing problems and, in an inevitable Faustian bargain, created the problems we now need to address. Every one of these counter-technologies increases the robustness of the whole, increasing the interdependencies, making the patterns more and more indelible so, even if we do occasionally come up with something truly different, the overall system holds together as a massive web of mutually interdependent pieces more strongly than ever.

The more things change…

For all the many structural problems, it would be a synecdochic fallacy of mistaking the part for the whole to describe higher education as broken. Sure, thanks to all those competing roles (especially credentialing) it is not particularly great at education (at least), so transformation is devoutly to be wished for but, by the most basic and essential criterion of all –  survival – it is rampantly successful. In fact, it is exactly those competing and complementary roles that have sustained it because a diverse ecosystem is a resilient ecosystem. The webs of dependencies are mutually sustaining even, to a well-evolved point, when one is antagonistic to the other.

For nearly a millennium the university and its brethren have not only survived but have now spread to almost every populated region of the world, and they continue to expand. Within my lifetime, in my country of birth, enrolments in higher education have risen from around 5% of the population to around 50%. To achieve such success, it has had to evolve: the invention of written exams, say, in the 18th Century, Humboldtian models that justified and embedded research, the adoption of flexible curricula, and the admission of women in the 19th Century were all huge changes. It has lost the trivium and quadrivium along the way, and diversified enormously in the range of subjects taught. The technological systems are way more advanced and varied than they were. There are regional variations, and a few speciated niches (colleges, open universities, distance education, etc). Administratively, a lot has changed, from recruitment and enrolment to the roles of professional bodies, industry, and governments. It is constantly evolving, for sure.

But.

The main technological features that universities acquired in the first century of their existence are still fully present, in virtually unaltered form.  Courses, classes, terms/semesters, professors, credentials, methods of teaching, organizational structures, methods of assessment, and plenty more are visibly the same species as their mediaeval forebears, and remain the central motifs of virtually all formal higher education. We may use a few more polyesters and zippers, and the gowns now come in women’s sizes but, at least once a year, many of us even dress the same, a behaviour shared with only a few other institutions like (in some countries) the legal profession or the church. On the subject of which, most universities continue to have roles like dean, chancellor, rector, provost, registrar, bursar and even the odd beadle (what even is that?) that not only reveal their ecclesiastic origins but also how little the basic entities in the system have since evolved.

If the purpose of higher education were simply to educate then we would expect it to work a lot better and to see a whole load more variation in how it is done, especially given the wide range of technologies that can now be used to overcome the problems caused by those features, but we don’t. It’s not just the purpose that survives: it’s the form. We can radically alter a great many processes, but changing at least one or two of the central motifs themselves – which, to me, is what “transformation” must entail – is hardly ever even on the table.

Adaptation, not transformation

If the institution had a clear overriding goal then we could re-engineer it to work differently, but this is not an engineering problem: it’s an evolutionary problem. We build with what we have on what we have, a process of tinkering or bricolage that is anything but engineered. It is, though, not natural but technological evolution. In natural ecosystems massive disruption can occur when populations become isolated, or when the environment radically changes. Technological evolution emerges through recombination and assembly of parts, not genes, and the technologies of higher education have evolved to be globally connected and massively intertwingled with nearly every other part of nearly every society, making isolation virtually impossible. In nature, ecosystems can be disrupted by invasive species, parasites, and so on, but our educational systems – technologies one and all – have evolved to be great at absorbing stuff rather than competing with it, so even that path is fraught. Even something as apparently disruptive as generative AI, which is impacting almost every aspect of the system and all the systems with which it interacts, is currently reinforcing objectives-driven models of teaching, (at least in Western countries) cultural individualism, and highly traditionalist responses to fears of cheating, such as written and oral exams, at least as much as it is inspiring change.

For those of us who care about the education role, there are plenty of ways we could actually transform it if we had the power to make the necessary changes. Decoupling learning and assessment would be a good start. Not just separating teaching and tests: that would just result in teaching to the test, as we see now. The decoupling would have to be asymmetrical, so the assessed tasks would demand synthesis of many taught things. Or we could get rid of classes and courses: to a large extent, this is what (despite the name) many Connectivist MOOCs have attempted to do, and it is also the pattern behind things like the Khan Academy or Contact North’s AI Tutor Pro, not to mention traditional PhDs (at least in some countries), apprenticeship models of learning, most instructional videos on sites like YouTube, or Stack Exchange or Quora, and the bulk of student projects (like MOOCs, labelled as courses but lacking most if not all of their traditional trappings). Or we could keep courses but drop the schedules and time limits. If nothing else, imagining how things might work if we messed with those central motifs is a good way to stimulate creative use of what we have. If done at scale, such things could make a huge impact on our educational systems.

But they probably won’t.

The problem always comes back to the fact that, though (collectively) we could change the fitness landscape itself, making survival dependent on whatever we think matters most, we are unlikely to agree on what does matter most. For some, better higher education would be measured in credentials, or explicit learning outcomes, or better fits with industry needs. Others would like it to advance their personal careers or status, or to do research without a profit motive. For me, improvements would be in far harder-to-measure aspects like building safer, kinder, smarter, more creative societies. Unfortunately (for me and others who feel that way), thanks to pace layering, the ones who could shape the fitness landscape the most are governments, and they are the least likely to do so. Governments tend to prefer things that are easier to measure, quicker to show results, and most likely to keep voters voting for them and sponsors (especially from industry) sponsoring them. Increasingly, institutional mandates are measured by industry impact, which erodes some traditional aspects of higher education but reinforces the big ones, like the measurable, assessed, outcome-driven course, with its classes, its schedules, its semesters, its textbooks, its assessments, its teachers, and so on. It doesn’t have to, in principle, but, in practice, those are not the things we adapt. If radical transformation ever does occur it will therefore most likely be the result of something so disruptive that the loss of higher education would be a minor concern: devastation caused by climate change, or nuclear war, or being hit by a large asteroid, for instance. And, to be honest, I’m not even sure that would be enough.

The limited chances of success should not discourage us from tinkering, all the time, whenever we can. Evolution must happen because the world that higher education inhabits evolves so, if this is the system we are stuck with, we should make it do what we want it to do as best we can. There are usually ways to reduce dependencies, techniques to decouple antagonistic roles, strategies of simplification, approaches to parcellating the landscape (skunkworks, etc), and values-based principles for prioritizing activities, all of which can make it more likely that the changes will be successful and persistent. However, if we have learned anything from biological studies over the past many decades, it is that you shouldn’t mess with an ecosystem. Whatever we do will put it out of balance, and self-organizing dynamics will ensure either that the balance is restored or that it spirals out of control and breaks altogether. Either way, it will never be exactly what we planned and, on average, it will tend eventually to keep things much the same as they are, while making much of it worse as it restabilizes itself.

Knowing that, though, can be useful. If every change will result in changes elsewhere, it is not enough to monitor the direct impact of an intervention: rather, we need to figure out ways of harvesting the outcomes across the system and/or, as best we are able, to model them in advance. No one has access to more than a fraction of the information needed, not least because a significant amount of it is tacit, embedded in the culture and practices of people and communities within the system. However, we can try to intentionally capture it, to tell stories, to share experiences and understandings across all those many niches. We can do what we can to make the invisible visible. We can talk. And we have technologies to help, inasmuch as we can train AIs to know our stories and ask them about the impacts of things we do, and to point out impacts that would be difficult if not impossible for any person to foresee. And that, I think, is the only viable path we have. The problems we generally have to deal with are a direct result of local thinking: solutions in one space that cause problems in another. The less locally we think about such things, the greater the chances that we will avoid unwanted impacts elsewhere or, equally good, that we will cause wanted impacts. To achieve that demands openness and dialogue, channels through which we can share and communicate, and some way of compressing, parsing, and relaying all that so that sharing and communication is not the only thing we ever do. This is not an impossibly tall order but it certainly isn’t easy.
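To make that less abstract, here is a minimal sketch, in Python, of the kind of story-harvesting I have in mind. It is prompt-stuffing rather than literal training, and everything in it is an assumption for illustration: the openai package with an API key in the environment, a hypothetical folder of plain-text accounts gathered from across an institution, and an illustrative model name and question.

```python
# A minimal sketch of asking a genAI to surface cross-system impacts
# from a folder of written "stories" about how our local systems interact.
# Assumptions: the `openai` package, an OPENAI_API_KEY in the environment,
# and a hypothetical stories/ folder; the model name is illustrative only.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Gather the written accounts (prompt-stuffing stands in for "training" here).
stories = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("stories").glob("*.txt"))
)

question = (
    "We plan to shorten all undergraduate course contracts from six months "
    "to four. Given these accounts of how our systems interact, what "
    "knock-on effects elsewhere in the institution should we anticipate?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable model would do; an assumption, not advice
    messages=[
        {"role": "system", "content": "You reason about institutional systems and second-order effects."},
        {"role": "user", "content": stories + "\n\n" + question},
    ],
)

print(response.choices[0].message.content)
```

A real version would need retrieval rather than stuffing every account into one prompt, but the principle is the same: pooling local stories so that the likely non-local effects of a change become visible before we make it.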

Tool-using tools – Perceptions and misperceptions of generative AI (slides from my keynote for the Global AI Summit, 2025, at Bennett University, India)

Here are the slides from the first of my two keynotes last week, Tool-using tools – Perceptions and misperceptions of generative AI. This one was for the Global AI Summit 2025, hosted at Bennett University in India.

The talk covered ground that I’ve already blogged about. My big point is that it is not just inaccurate but misleading to think of genAIs as tools: it grants us too much agency. If you have to use an existing term then I think “appliance” is a much more accurate label because they are technologies that do thinking for us, much as refrigerators do cooling for us, or dishwashers wash our dishes. Just as some skill is needed to use a dishwasher or fridge, some skill is needed to get a genAI to think: it’s OK to think of prompts as tools for that purpose. However, it is not our thinking, and that matters. GenAIs are unlike any prior technology because they are, like us, tool users and creators. It is possible to ask genAIs to act as (or at least create and host) tools. It’s just not what we usually use them for. I think “metatool” is a better term.

I gave this talk online, at 4am Wednesday morning, finishing less than an hour before I had to leave for the airport for Japan, where I was due to give my second keynote of the week,  on generative vs degenerative AI, so I might not have been at the top of my game!

Generative vs Degenerative AI (my ICEEL 2025 keynote slides)

I gave my second keynote of the week last week (in person!) at the excellent ICEEL conference in Tokyo. Here are the slides: Generative AI vs degenerative AI: steps towards the constructive transformation of education in the digital age. The conference theme was “AI-Powered Learning: Transforming Education in the Digital Age”, so this is roughly what I talked about…

Transformation in (especially higher) education is quite difficult to achieve.  There is gradual evolution, for sure, and the occasional innovation, but the basic themes, motifs, and patterns – the stuff universities do and the ways they do it – have barely changed in nigh-on a millennium. A mediaeval professor or student would likely feel right at home in most modern institutions, now and then right down to the clothing. There are lots of path dependencies that have led to this, but a big part of the reason is down to the multiple subsystems that have evolved within education, and the vast number of supersystems in which education participates. Anything new has to thrive in an ecosystem along with countless other parts that have co-evolved together over the last thousand years. There aren’t a lot of new niches, the incumbents are very well established, and they are very deeply enmeshed.

There are several reasons that things may be different now that generative AI has joined the mix. Firstly, generative AIs are genuinely different – not tools but cognitive Santa Claus machines, a bit like appliances, a bit like partners, capable of becoming but not really the same as anything else we’ve ever created. Let’s call them metatools, manifestations of our collective intelligence and generators of it. One consequence of this is that they are really good at doing what humans can do, including teaching, and students are turning to them in droves because they already teach the explicit stuff (the measurable skills and knowledge we tend to assess, as opposed to the values, attitudes, motivational and socially connected stuff that we rarely even notice) better than most human teachers. Secondly, genAI has been highly disruptive to traditional assessment approaches: change (not necessarily positive change) must happen. Thirdly, our cognition itself is changed by this new kind of technology for better or worse, creating a hybrid intelligence we are only beginning to understand but that cannot be ignored for long without rendering education irrelevant. Finally genAI really is changing everything everywhere all at once: everyone needs to adapt to it, across the globe and at every scale, ecosystem-wide.

There are huge risks that it can (and plentiful evidence that it already does) reinforce the worst of the worst of education, simply replacing what we already do with something that hardens it further, that does the bad things more efficiently and more pervasively, and that revives obscene forms of assessment and archaic teaching practices, but without any of the saving graces and intricacies that make educational systems work despite their apparent dysfunctionality. This is the most likely outcome, sadly. If we follow this path, it ends in model collapse, not just for LLMs but for human cognition. However, just perhaps, how we respond to it could change the way we teach in good if not excellent ways. It can do so as long as human teachers are able to focus on the tacit, the relational, the social, and the immeasurable aspects of what education does rather than the objectives-led, credential-driven, instrumentalist stuff that currently drives it and that genAI can replace very efficiently, reliably, and economically. In the past, the tacit came for free when we did the explicit thing, because the explicit thing could not easily be achieved without it. When humans teach, no matter how terribly, they teach ways of being human. Now, if we want it to happen (and of course we do, because education is ultimately more about learning to be than learning to do), we need to pay considerably more deliberate attention to it.

The table below, copied from the slides, summarizes some of the ways we might productively divide the teaching role between humans and AIs:

| | Human role (e.g.) | AI role (e.g.) |
|---|---|---|
| Relationships | Interacting, role modelling, expressing, reacting | Nurturing human relationships, discussion catalyzing/summarizing |
| Values | Establishing values through actions, discussion, and policy | Staying out of this as much as possible! |
| Information | Helping learners to see the personal relevance, meaning, and value of what they are learning; caring | Helping learners to acquire the information; providing the information |
| Feedback | Discussing and planning, making salient, challenging; caring | Analyzing objective strengths and weaknesses, helping with subgoals, offering support, explaining |
| Credentialing | Responsibility, qualitative evaluation | Tracking progress, identifying unprespecified outcomes, discussion with human teachers |
| Organizing | Goal setting, reacting, responding | Scheduling, adaptive delivery, supporting, reminding |
| Ways of being | Modelling, responding, interacting, reflecting | Staying out of this as much as possible! |

I don’t think this is a particularly tall order but it does demand a major shift in culture, process, design, and attitude. Achieving that from scratch would be simple. Making it happen within existing institutions without breaking them is going to be hard, and the transition is going to be complex and painful. Failing to do so, though, doesn’t bear thinking about.

Abstract

In all of its nearly 1000-year history, university education has never truly been transformed. Rather, the institution has gradually evolved in incremental steps, each step building on but almost never eliminating the last. As a result, a mediaeval professor dropped into a modern university would still find plenty that was familiar, including courses, semesters, assessments, methods of teaching and perhaps, once or twice a year, scholars dressed like him. Even such hugely disruptive innovations as the printing press or the Internet have not transformed so much as reinforced and amplified what institutions have always done. What chance, then, does generative AI have of achieving transformation, and what would such transformation look like?
In this keynote I will discuss some of the ways that, perhaps, it really is different this time: for instance, that generative AIs are the first technologies ever invented that can themselves invent new technologies; that the unprecedented rate and breadth of adoption is sufficient to disrupt stabilizing structures at every scale; that their disruption to credentialing roles may push the system past a tipping point; and that, as cognitive Santa Claus machines, they are bringing sweeping changes to our individual and collective cognition, whether we like it or not, that education cannot help but accommodate. However, complex path dependencies make it at least as likely that AI will reinforce the existing patterns of higher education as disrupt them. Already, a surge in regressive throwbacks like oral and written exams is leading us to double down on what ought to be transformed while rendering vestigial the creative, relational and tacit aspects of our institutions that never should. Together, we will explore ways to avoid this fate and to bring about constructive transformation at every layer, from the individual learner to the institution itself.

Paper: Cognitive Santa Claus Machines and the Tacit Curriculum

This is my contribution to the inaugural issue of AACE’s new journal of AI-Enhanced Learning, Cognitive Santa Claus Machines and the Tacit Curriculum. If the title sounds vaguely familiar, it might be because you have seen my post offering some further thoughts on cognitive Santa Claus machines, written not long after I submitted this paper.

The paper itself delves a bit into the theory and dynamics of genAI, cognition, and education. It draws heavily on how the theory in my last book has evolved, adding a few refinements of its own here and there, most notably in its distinction of use-as-purpose vs use-as-process. Because genAIs are not tools but cognitive Santa Claus machines, this helps to explain how the use of genAI can simultaneously enhance and diminish learning, both individually and collectively, to varying degrees that range from cognitive apocalypse to cognitive nirvana, depending on what we define learning to be, whose learning we care about, and what kind of learning gets enhanced or diminished. A fair portion of the paper is taken up with explaining why, in a traditional credentials-driven, fixed-outcomes-focused institutional context, generative AI will usually fail to enhance learning and, in many typical learning and institutional designs, may even diminish our individual (and ultimately collective) capacity to do so. As always, it is only the whole assembly that matters, especially the larger structural elements, and genAI can easily short-circuit a few of those, making the whole seem more effective (courses seem to work better, students seem to display better evidence of success) while the things that actually matter get left out of the circuit.

The conclusion describes the broad characteristics of educational paths that will tend to lead towards learning enhancement: first, by focusing our energies on education’s social role in building and sharing tacit knowledge; then on ways of using genAI to do more than we could do alone; and, underpinning this, on expanding our definitions of what “learning” means beyond the narrow confines of “individuals meeting measurable learning outcomes”. The devil is in the detail and there are certainly other ways to get there than by the broad paths I recommend, but I think that, if we start with the assumption that our students are neither products nor consumers nor vessels for learning outcomes, but co-participants in our richly complex, ever evolving, technologically intertwingled learning communities, we probably won’t go too far wrong.

Abstract:

Every technology we create, from this sentence to the Internet, changes us but, through generative AI (genAI), we can now access a kind of cognitive Santa Claus machine that invents other technologies, so the rate of change is rising exponentially. Educators struggle to maintain a balance between sustaining pre-genAI values and skills, and using the new possibilities genAIs offer. This paper provides a conceptual lens for understanding and responding to this tension. It argues that, on the one hand, educators must acknowledge and embrace the changes genAI brings to our extended cognition while, on the other, we must valorize and double down on the tacit curriculum, through which we learn ways of being human in the world.

New open journal from AACE: AI-Enhanced Learning (with a paper from me)

The Journal of Artificial Intelligence Enhanced Learning (AIEL), a diamond open-access journal published under the auspices of AACE and distributed worldwide through LearnTechLib, has just launched its inaugural issue, which includes a paper from me (Cognitive Santa Claus Machines and the Tacit Curriculum).

This inaugural issue is a great start to what I think will come to be recognized as a leading journal in the field of AI and education. As not just an author but an associate editor I am naturally a little biased, but I’m very picky about the journals I work with and this one ticks all the right boxes. It is genuinely open, without fees for authors or readers. It is explicitly very multidisciplinary. The editors – Mike Searson, Theo Bastiaens and Gary Marks – are truly excellent, and prominent in the field of online and technology-enhanced learning. The publisher, AACE, is a very well-oiled, prominent, professional, and likeable organization that has been a major player in the field for over 30 years, with extensive reach into institutional libraries the world over via LearnTechLib.

And the journal has an attitude that I like very much: it’s about learning enhancement through AI, not just AI and education. This fills a huge pragmatic need in an area where many practitioners are like deer caught in the headlights when it comes to thinking about what positive things we can do with our new robot friends/overlords/interlopers, and where too much of the conversation is implicitly focused on protecting the traditional forms and structures of our mediaeval education systems and the kinds of knowledge generative AI can more easily and effectively replicate.

This first issue crosses many disciplinary boundaries and aspects of the educational endeavour with a very diverse range of reflective papers by recognized experts in many facets of AI, education, and learning.  All are ultimately optimistic about the potential for learning enhancement but few back away from the wicked problems and potential for the opposite effect.  My own paper finds a thread of hope that we might not so much reinvent as simply notice what education currently does (it’s about learning to be as much as learning to do), and that we might recognize generative AIs not as tools but as cognitive Santa Claus machines, sharing their cognitive gifts to help us collectively achieve things we could not dream of before. It has a bit of theory to back that up.

If you have influence over such things, do encourage your libraries to subscribe!

Educational technologies and the synecdochic fallacy

For a few minutes the other day I thought that I had invented a new kind of fallacy or, at least, a great term to describe it. Disappointingly, a quick search revealed that it was not only an old idea but one that has been independently invented at least twice before (Berry & Martin, 1974; Weinstock, 1981). Here is its definition from Weinstock (1981):

“a synecdochic fallacy is a deceptive, misleading, erroneous, or false notion, belief, idea, or statement where a part is substituted for a whole, a whole for a part, cause for effect, effect for cause, and so on.”

Most synecdoches (syn-NEK-doh-kees in case you were wondering – I have been getting it totally wrong for decades) are positively useful. Synecdoches make aspects of a whole more salient by focusing on the parts. No one, for instance, thinks “all hands on deck” actually means the crew should put their hands on the deck let alone that disembodied hands should crew the ship, but it does focus on an aspect of the whole that is of great interest: that there is an expectation that those hands will be used to do what hands do. Equally, synecdoches can make the parts more salient by focusing on the whole. When we say “Canada beat the USA in the finals” no one thinks that one literal country got up and thrashed the other, but it draws attention to a symbolic aspect of a hockey game that reveals one of its richer social roles. It becomes a fallacy only when we take it literally. Unfortunately, doing so is surprisingly common in research about education and educational technologies.

Technologies as synecdoches

The labels we use for technologies are very liable to be synecdochic (syn-nek-DOH-kik if you were wondering): it is almost a defining characteristic. Technologies are assemblies, and parts of assemblies, often contained by other technologies, often containing an indeterminate number of technologies that themselves consist of indeterminate numbers of technologies, that participate in richly recursive webs of further technologies with dynamic boundaries, where the interplay of process, product, structure, and use constantly shifts and shimmers. The labels we give to technologies are as much descriptions of sets of dynamic relationships as they are of objects (cognitive, physical, virtual, organizational, etc) in the world, and the boundaries we use to distinguish one from another are very, very fluid.

There is no technology that cannot be combined with different others or in different ways in order to create a different whole. Without changing or adding anything to the physical assembly a screwdriver, say, can be a paint stirrer, a pointer, a weapon, or unprestatably many other technologies, far from all of which are so easily labelled. Virtually every use of a technology is itself a technology, and it is often one that has never occurred in exactly the same way in the entire history of the universe. This sentence is one such technology: though there may be lots of sentences that are similar, the chances that anyone has ever used exactly this combination of words and punctuation before now are close to zero. Same for this post. This post has a title: that is the name of this technology, though it is a synecdoche for… what? The words it contains? Not quite, because now (literally as I write) it contains more of them but it is still this post. Is it still this post when it is syndicated? If the URL changes? Or the title? Or if I read it and turn it into a podcast? I don’t know. This sentence does not have a name, but it is no less a technology. So is your reading of it. So is much of what is involved in the sense you are making of it, and that is the technology that probably matters most right now. No one has ever made sense of anything in exactly this way, right now, the way you are doing it, and no one ever will. The technosphere is almost as awesomely complex as the biosphere and, in education, the technosphere extends deep into every learner, not just as an object of learning but as part of learning itself.

Synecdoches and educational/edtech research

Let’s say you wanted to investigate the effects of putting computers in classrooms. It seems reasonable enough: after all, it’s a big investment so you’d want to know whether it was worth it. But what do you actually learn from doing so apart from that, in this particular instance, with this particular set of orchestrations and uses, something happened? Yes, computers might have been prerequisites for it happening but so what? An infinite number of different things could have happened if you had done something else even slightly different with them, there are infinitely many other things you could have done that might have been better, and all bets would be off if the computers themselves had been different. The same is equally true for what happens in classrooms without computers. What can you predict as a result? Even if you were to find that, 100% of the time until now, computers in classrooms led to better/worse learning (whatever that might mean to you) I guarantee that I could find plenty of ways of using them to do the precise opposite. This is functionally similar to taking “all hands on deck” literally: the hands may be very salient but, without taking into account the people they are attached to and exactly what they are doing with those hands, there is little or no value in making comparisons. Averages, maybe; patterns, perhaps, as long as you can keep everything else more or less similar (e.g. a traditional formal school setting); but reliable predictions of cause and effect? No. Or anything that can usefully transfer to a different setting (e.g. unschooling or – ha – online learning)? Not at all.

Conversely, but following the same synecdochic logic, we might ask questions about the effectiveness of online and distance learning (the whole), comparing it with in-person learning. Both encompass immense numbers of wildly diverse technologies, including not just course and class technologies but things like pedagogical techniques, institutional structures, and national standards, instantiated with wildly varying degrees of skill and talent, all of which matter at least as much as the fact that the learning is online and at a distance. Many may matter more. This is functionally similar to taking “Canada beat the US” literally. It did not. It remains a fallacy even if, on average, Canada (the hockey team) does win more often, or if online and distance learning is generally more effective than in-person learning, whatever that means. The problem is that it does not distinguish which of the many millions of parts of the distance or the in-person orchestration of phenomena matter and, for aforementioned and soon-to-be-mentioned reasons, it cannot.

Beyond causing physical harm – and even then with caveats – there is virtually nothing you could do or use to teach someone that, if you modified some other part of the assembly or organized the parts a little differently, could not have exactly the opposite effect the next time you do or use it. This sentence, say, will have quite different effects from the next despite using almost the exact same components. Almost components effects next the despite using different quite will sentence, say, this have the from exact. It’s a silly example and it is not difficult to argue that further components (rules of grammar, say) are sufficiently different that the comparison is flawed, but that’s exactly the point: all instantiations of educational technologies are different, in countless significant ways, each of which impacts lots of others which in turn impact others, in a complex adaptive system filled with positive and negative feedback loops, emergence, evolution, and random impacts from the systems that surround it. I didn’t actually even have to mix up the words. Had I repeated the exact same statement, its impact would have been different from the first because something else in the system had changed as a result of it: you and the sentence after. And this is just one sentence, and you are just one reader. Things get much more complex really fast.

In a nutshell, the synecdochic fallacy is why reductive research methods that serve us so well in the natural sciences are often completely inappropriate in the field of technology in general and education in particular. Natural science seeks and studies invariant phenomena but, because every use (at least in education) is a unique orchestration, technologies as they are actually enacted (i.e. the whole, including the current use) are never invariant and, even on those odd occasions that they do remain sufficiently similar for long enough to make study worthwhile, it just takes one small tweak to render useless everything we have learned about them.

All is not lost

There are lots of useful and effective kinds of research that we can do about educational technologies. Reductive science is great for identifying phenomena and what we can do with them in a technological assembly, and that can include other technologies that are parts of assemblies. It is really useful, say, to know about the properties of nuts and bolts used to build desks or computers, the performance characteristics of a database, or that students have persistent difficulties answering a particular quiz question. We can use this information to make good creative choices when changing or creating designs. Notice, though, that this is not a science of teaching or education. This is a science of parts and, if we do it with caution, their interactions with other parts. It is never going to tell us anything useful about, say, whether teaching to learning styles has any positive effect, whether direct instruction is better than problem-based learning, or whether blended learning is better than in-person or online learning, but it might help us build a better LMS or design a lesson or two more effectively, if (and only if) we use the information creatively and wisely.

Other effective methods involve the telling of rich stories that reveal phenomena of interest and reasons for or effects of decisions we made about putting them together: these can help others faced with similar situations, providing inspirations and warnings that might be very useful. If we find new ways of assembling or orchestrating the parts (we do something no one has done before) then it is really helpful to share what we have done: this helps others to invent because it expands the adjacent possible. Similarly we can look for patterns in the assembly that seem to work and that we can re-use (as parts) in other assemblies. We can sometimes come up with rules of thumb that might help us to (though never to predict that we will) build better new ones. We can share plans. We can describe reasons.

What this all boils down to is that we can and should learn a great deal that is useful about the component technologies, and we can and should seek broad patterns in the ways that they intertwingle. What we cannot do, neither in principle nor in practice, is use what we have learned to accurately predict anything specific about what happens when we put them together to support learning. It’s about improving the palette, not improving the painting. As Longo, Montévil, and Kauffman (2012) put it, in a complex system of this nature – and this applies as much to the biosphere, culture, and economics as it does to education and technology – there are no laws of entailment, just of enablement. We are firmly in the land of emergence, evolution, craft, design, and bricolage, not engineering, manufacture, and mass-production. I find this quite liberating.

 

References

Berry, K. J., & Martin, T. W. (1974). The Synecdochic Fallacy: A Challenge to Recent Research and Theory-Building in Sociology. Pacific Sociological Review, 17(2), 139–166. https://doi.org/10.2307/1388339
Longo, G., Montévil, M., & Kauffman, S. (2012). No entailing laws, but enablement in the evolution of the biosphere. Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, 1379–1392. https://doi.org/10.1145/2330784.2330946
Weinstock, S. M. (1981). Synecdochic fallacy [Panel paper]. 67th Annual Meeting of the Speech Communication Association, Anaheim, CA. https://www.scribd.com/document/396524982/Synecdochic-Fallacy-1981

Cognitive Santa Claus machines

I’ve just submitted a journal paper (shameless plug: to AACE’s AIEL, of which I am an associate editor) in which I describe generative AIs as cognitive Santa Claus machines. I don’t know if it’s original but the idea appeals to me. Whatever thought we ask for, genAIs will provide it, mining their deep, deep wells of lossily compressed recorded human knowledge to provide us with skills and knowledge we do not currently have. Often they surprise us with unwanted gifts, and some are not employing the smartest elves on the block, but, by and large, they give us the thinking (or a near facsimile of it) we want without having to wait until Christmas Eve.

Now that I have submitted the paper, it occurs to me that they are not just standalone thinking appliances: they can potentially be drivers of general-purpose Santa Claus machines. As an active user and, above all, creator of all sorts of digital technologies, I have found them, for example, incredibly handy for quickly churning out small apps and utilities that are useful but that would not be worth the week or more of effort they would otherwise take me to build. It is already often quicker to build a Quick Action for my Mac Finder than it would be to seek out an existing utility on the Web. The really interesting thing, though, is that they are perfectly capable of creating .scad files (or similar) that can be 3D printed. My own 3D printer has been gathering dust in a basement with a dead power supply for a few years so I’ve not tested the output yet, but I have already used Claude, ChatGPT and Gemini to design and provide full instructions and software for some quite complex electronics projects: between them they do a very good job, by and large, notwithstanding odd hallucinations and memory lapses. My own terrible soldering and construction skills are the only really weak points in the process.
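To make that workflow concrete, here is a minimal sketch, in Python, of the sort of thing I mean: asking a genAI to draft an OpenSCAD file and saving it for printing. The openai package, the model name, the prompt, and the bracket itself are all illustrative assumptions, not a recipe, and the generated source would of course need checking before it went anywhere near a printer.

```python
# A minimal sketch: asking a genAI to draft a 3D-printable part as OpenSCAD.
# Assumptions: the `openai` package and an OPENAI_API_KEY in the environment;
# the model name, prompt, and filenames are illustrative only.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a complete OpenSCAD file for a simple shelf bracket: "
    "a 60 x 20 x 3 mm plate with two 4 mm screw holes, 10 mm from each end. "
    "Return only the OpenSCAD code, with no commentary."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable model would do; an assumption, not advice
    messages=[{"role": "user", "content": prompt}],
)

# Save the generated source; it can then be rendered to an STL for printing
# with the OpenSCAD command-line tool: openscad -o bracket.stl bracket.scad
with open("bracket.scad", "w") as f:
    f.write(response.choices[0].message.content)
```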

One way or another, for the first time in the existence of our species, we now have machines that do not just perform predetermined orchestrations or participate as tools in our own orchestrations: they do the orchestration for us. We therefore have at our fingertips machines that are able (in principle) to make any technology – including any other machine (including another 3D printer) – we can imagine. The intellectual property complexities that will emerge when you can ask ChatGPT to, say, make you a smartphone or a house to your precise specifications make current copyright disputes pale by comparison. Phones might be tricky, for now, but houses are definitely possible. There are many (including my own son) who are looking further than that, down to a molecular level, for what we can build, and that’s not to mention the long-gestating field of nanobots.

This is a level of abundance that has only been the stuff of speculative fiction until now and, for the most part, even scifi mostly talks of replicators, not active creators of something new. Much as in the evolution of life, there have been moments in the evolution of technology when evolvability itself has evolved: inventions like writing, technologies of transport, the Internet, the electronic valve, the wheel, or steam power, for example, have disproportionately accelerated the rate of evolution, bringing exponential increases in the adjacent possible. This might just be the biggest such moment yet.

Education in the age of Santa Claus machines

Where education sits in all of this is complicated. To a very large extent, the explicit goal of educational systems, at least, is to teach us how to operate the tools and other technologies of our cultures, by which I mean the literacies that allow us to participate in a complex, technologically mediated society, from writing to iambic pentameter, from experiments to theories. In brief, the stuff you can specify as learning outcomes. Even now, with the breakneck exponential increase in technologies of all kinds that has characterized the last couple of centuries, the rate of change is slow enough, and the need for complex skills is growing steadily enough, that there is a very clear demand for educational systems to provide them, and there are roughly enough skilled teachers to teach them.

The need persists because, when we create technologies, we are not just creating processes, objects, structures, and tools: we are creating gaps in them that humans must fill with soft or hard technique, because the use of a technology is also a technology. This means that the more technologies we create, the more we have (up until now) had to learn in order to use them. Though it is offset somewhat by the deskilling orchestrations built into the machines we create (often the bulk of the code in a digital project is concerned with lessening the cognitive load, and even a humble door handle is a cognitive load-reducer), the world really is, and always has been, getting more complex. We need education more than ever.

Generative AIs modify that equation. Without genAI, creating 3D designs, say, and turning them into printed objects still demands vast amounts of human skill – skills using quite complex software, math, geometry, materials science, machinery, screwdrivers, ventilation, spatial reasoning, etc, etc etc. Black-boxing and automation can help: some of that complexity may be encapsulated in smart interfaces and algorithms that simplify the choices needed but, until now, there has usually been a trade-off between fine-grained control and ease of use. GenAIs restore that fine-grained control, to a large extent, without demanding immense skill. We just have to be able to describe what we want, and to follow instructions for playing our remaining roles like applying glue sticks or dunking objects in acetone baths. The same is true for non-physical genAI products.

So what does it mean to be able to use the technologies of your culture if there are literally millions of new and unique ones every day? Not just new arrangements of the same existing technologies like words, code, or images, but heterogeneous assemblies that no one has ever thought of before, tailor-made to your precise specifications. I have so many things I want to make this way. Some assembly will still be needed for many years to come but we will get ever closer to Theodore Taylor’s original vision of a fully self-contained Santa Claus machine, needing nothing but energy and raw materials to make anything we can imagine. If educational institutions are still needed, what will they teach and how will they teach it? One way they may respond is to largely ignore the problem, as most are doing now.

If educational systems do continue – without significant modification, without fully embracing the new adjacent possibles – to do nothing but teach and assess existing skills that AIs can easily perform at least as well, two weird things will happen. Firstly, sensible time-poor students will use the AIs to do the work or, at the very least, to help them. Secondly, sensible time-poor teachers will use the AIs to teach because, if all you care about is achieving measurable learning outcomes, AIs can or will be able to do that better, faster, and cheaper. That would make both roles rather pointless. But teaching doesn’t just teach measurable skills; it teaches ways of being human. The same is true when AIs do it, too: it’s just that we then learn ways of being human from machines. All of which (and much more that I have written and spoken about more than enough in the past) suggests that continuing along our existing outcomes-driven educational path might not be the smartest move – or failure to move – we have ever made.

It’s a systems thing. GenAIs are coming into a world that is already full of systems, and systems above all else have a will to survive. In our education systems we are still dealing with the problems caused by mediaeval monks solving problems with the limited technologies available to them because, once things start to depend on other things and subsystems form, people within them get very invested in solving local problems, not system-level problems. Those solutions cause problems for other local subsystems, and so it goes on in a largely unbroken chain, rich in recursive sub-cycles, until any change made in one part is counteracted by changes in others. What we fondly think of as good pedagogy, for instance, is not a universal law of teaching: it is how we solve problems caused by how our systems have evolved to teach. I think the worst thing we can possibly do right now is to use genAIs to solve the local problems we face as teachers, as learners, as administrators, and so on. If we use them to replicate the practices we have inherited from mediaeval monks then, instead of transforming our educational systems, we will actively reinforce everything that is wrong with them, because genAI will just make them better or faster at doing what they already do.

But of course we will do exactly that because what else can we do? We have problems to solve and genAIs offer solutions.

Three hopeful paths

I reckon that there are three hopeful, interlocking, and complementary paths we can take to prevent at least the worst-case impacts of what happens when genAI is combined with local thinking:

I. embrace the machine

The first hopeful path is to embrace the machine. It seems to me that we should be focusing a bit less on how to use or replicate the technologies we already have and a lot more on the technologies we can dream of creating. If we wish (and have the imagination to persuade a genAI to do it) we can choose exactly how much human skill is needed for any technological assembly so the black-boxing trade-off that automation has always imposed upon us is not necessarily an issue any more: we can choose exactly the amount of soft technique we want to leave for humans in any given assembly instead of having it foisted upon us. For the first time, we can adjust the granularity of our cognition to match our needs and wishes rather than the availability of technologies. As a trivial example, if you want to nurture the creative skills of, say, drawing, you can build a technology that supports it, while automating the things you’d rather not think about like, say, colouring it in. From an educational perspective this is transformative. It frees us from the need for prerequisite skills and scaffolding, because they can be provided by the genAI, which in turn gives us a laser focus on what we want to learn, not the peripheral parts of the assembly. At one fell swoop (think about it) that negates the need for disciplinary boundaries, courses, and cognitive barriers to participation, and that’s just a start: there are many dominoes that fall once we start pushing at the foundations. It makes the accomplishment of authentic, meaningful, personally relevant, sufficiently challenging but not overwhelming tasks within everyone’s reach. As well as shaping education to the technologies of our cultures, we can shape the technologies to the education.
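By way of a purely illustrative sketch (mine, not anything from the text above): with a modern genAI API, the division of labour between human and machine can literally be specified in words. Assuming the OpenAI Python client and an API key in the environment – the model name and prompt are illustrative assumptions, not a recipe – one might ask for a drawing tool that deliberately keeps the line work soft and human:

```python
# A minimal sketch, assuming the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# The design decision is the division of labour, stated in natural language:
# keep the line work soft (human), harden the colouring (machine).
prompt = (
    "Write a single-file HTML/JavaScript drawing app in which the user "
    "draws outlines freehand with the mouse (keep that entirely manual) "
    "but colouring-in is automated with a flood fill that the user merely "
    "triggers. Do not assist or correct the user's line work in any way."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The interesting thing is not the specific call but that the amount of remaining human technique is a design parameter, expressed in words rather than fixed by the available tools.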

A potential obstacle to all of that is that very few of us have any idea where the adjacent possibles lie, so how can we teach what, by definition, we do not know? I think the answer is simple: just let go, because that’s not what or how we should be teaching anyway. We should be teaching ways of making that journey, supporting learners along the way, nurturing communities, and learning with them, not providing maps for getting there. GenAIs can help with that: nudging, connecting, summarizing, and so on. They can also help us to track progress and harvest learning outcomes, if we still really need that credentialing role. And, if we don’t know how to do any of that, they can teach us. That’s one of the really cool things about genAIs: we don’t need to be trained to use them, because they can teach us what we need themselves. But, on its own, this is not enough.

II. embrace the tacit dimension

With the explicit learning outcomes taken care of (OK, that’s a bit of an exaggeration), the second hopeful path is to celebrate and double down on the tacit curriculum: to focus on the values, ways of thinking, passions, relationships, and meaning-making that learning from other humans has always provided for free while we teach students to meet those measurable learning outcomes. If we accept the primary role of educational systems as being social – to do with meaning-making, identity, and growth, treating everyone as an end in themselves, not as a means to an end – then most of the risks of learning to be human through machines are avoided or mitigated, and that is something to which even those of us who have no idea how to use genAI can contribute in a meaningful and useful way. Again, this is highly transformative. We must focus on the implicit, the tacit, and the idiosyncratic, because that’s what’s left when you take the learning outcomes away. Imagine a world in which learners choose an institution because of its communities and the quality of human relationships it supports, not its academic excellence. Imagine that this is what “academic excellence” means. I like this world.

III. embrace the human

The third hopeful path, interlocked with the other two, is to more fully celebrate the value of people doing things despite the fact that machines can do them better.

GenAIs are a wholly new kind of technology that changes a lot of the rules, so we should be very wary of drawing too much from lessons of the past, but it is still worth reflecting on how the introduction of new technologies that appeared to replace older ones has played out before. When photography was new, for instance, photographers often tried to replicate painterly styles, but photography also led to an explosion of new aesthetics for painting and a re-evaluation of what value a human artist creates. Without photography it is unlikely that Impressionism would have happened, at least at the point in history that it did: photography’s superior accuracy in rendering images of the world freed painters from the expectation of realism and eventually led to a different and more human understanding of what “realism” means, as well as many new kinds of visual abstraction. Photography also created its own adjacent possibles, influencing composition and choices of subject matter for painters and, of course, it became a major art form in its own right. The fact that AIs can (or at least eventually will) produce better images than most humans does not mean we should or will stop drawing. It just means the reasons for doing so will be fewer, and/or that the balance of reasons will shift. There might not be so many jobs that involve drawing or painting, but we will almost certainly value what humans produce more than ever, both in the product and in the process. We will care about what it expresses of our human experience and how it expresses it, and perhaps about its cognitive benefits, rather than its technical precision: exactly the kinds of things that make it valuable for human infants to learn, as it happens. On the subject of human infants, this is why our refrigerators are far more likely to display our children’s or grandchildren’s pictures than the products of diffusion models, and why those pictures often share pride of place with the work of great masters on our walls.

The same is almost certainly true for teaching: generative AIs are, I hope, teaching’s photography moment, the point in history at which we step back and notice that what makes the activity valuable is not the transfer of explicit skills and knowledge so much as the ways of being human that are communicated along with it: the passion (or even the lack of it), the meaning, the values, the attitudes, the ways of thinking. When the dust settles, we are going to be far more appreciative of the products of humans working with dumb technologies than of the products of genAIs, even when the genAI does it measurably better. I think that is mostly a good thing, especially taking into account the many potential new heights of as-yet-unforeseeable creation that will be possible when we partner up with the machines and step into more of the adjacent possibles.

Embracing the right things

Technologies are often seen as solutions to problems but that is only (and often the least interesting) part of what they do. Firstly, they also, invariably, create new problems to solve. Secondly, and maybe more importantly, they create new adjacent possibles. Both of these other roles are open-ended and unprestatable: no amount of prior research will reveal more than a fraction of them. Finally, therefore, and as an overarching rule of thumb, I think it is incumbent on all of us who are engaged in the educational endeavour to play with these things in order to discover those adjacent possibles and, if we do choose to use them to solve our immediate problems, to discover as much as we can of the Faustian bargains they entail. Deontology is our friend in this: whenever we use genAI for a purpose we should ask ourselves what would happen if everyone in the world in a similar situation used it for that purpose, and whether we would want to live in that world. What would our days be like if they did? This is not as hypothetical as it is for most ethical decisions: there is a very strong chance that, for instance, a large percentage of teaching to learning outcomes will very soon be performed (directly or indirectly) by genAI, and we know that a significant (though hard-to-quantify) amount of student work is already the direct or indirect result of its use. The decisions we face are faced by many others, and they are happening at scale. We may have some substantial ethical concerns about using these things – I certainly do – but I think the consequences of not doing so are considerably worse. We’re not going to stop it by refusing to engage. We are the last generation to grow up without genAI, so it is our job to try to preserve what should be preserved, and to try to change what shouldn’t.

 

Democratech: reflections on the human nature of blockchain

At short notice I was invited to be guest of honour and keynote speaker at Bennett University’s International Conference on Blockchain for Inclusive and Representative Democracy yesterday. I was not able to attend the entire conference – my opening keynote was at 9:30pm here in Vancouver and I eventually needed to sleep – but I made it for a few hours. I was impressed with the diversity and breadth of the work going on, mainly in India, and the passionate, smart people in attendance. It was a particular pleasure to hear from Ramesh Sharma, whom I have known for many years in an online learning context, here speaking of very different things, and I really loved the ceremonial lighting of the lantern – the sharing of the light – with which the conference began. It is a powerful and connecting metaphor.

Like most geeks I do have the occasional thought about blockchain and democracy but I can’t describe myself as an expert or even an enthusiastic amateur in either field. So, rather than speaking about things the delegates knew far more about than I, and given the compressed time-frame for preparing the keynote, I chose to ground the talk in familiar territory, taking a broad-brush view of how to think of the technological ecosystem into which the technologies must fit. It led to some new thoughts here and there: in particular, I rather like the idea of technologies in general acting as a kind of distributed ledger of human cognition. The result was these slides – Democratech: reflections on the human nature of blockchain.

In rough note form (not a polished academic work and not particularly coherent!), the text below is approximately what I spoke about for each of the slides:

1 In this talk I will be using ideas from my most recent book: here it is. You can download it for free or buy it in paper or electronic form if you wish. See http://teachingcrowds.ca. It is at least as much about the nature of technology as it is about the nature of education, and that’s what I want to talk about today: what kind of a technology is blockchain, and why does it matter?

2 “Technology” is a fuzzy term that can mean many things to different people. I spend a whole chapter in the book exploring many definitions of what “technology” means. To save time, I am going to use what I conclude to be the best definition, from Brian Arthur: “orchestrating phenomena to our use”.

3 I prefer to think of this as “organizing stuff to do stuff”, because it makes it clearer that the stuff that it organizes nearly always includes stuff already organized to do stuff: as Arthur observes, almost all if not all technologies are assemblies of other technologies, at least when they are put to use.

Technologies are made of technologies, at every scale, and they are parts of webs of technologies that stretch far into time and space. Kevin Kelly calls this massively interconnected network the technium. And, as he puts it, technology can be thought of as both a thing and a verb – or, as Ursula Franklin puts it, fish and water – a slippery thing to pin down. It is something we do and something we have done. In fact it is typically both.

4 By this definition, democracies are technologies too – in fact, hugely complex assemblies of technologies. They orchestrate phenomena using systems, physical objects, and assemblies of them, to approximate a fair voice for all in the governance of where we dwell. So are words, and language, and, as Franklin notes, there are technologies of prayer.

5 If you take nothing else from this speech, take this: only the whole assembly matters. The parts are very important to the designer and make a big difference to how a technology works and is experienced, but it is how the parts are assembled and act together that makes the technology as it is experienced, as it is instantiated. That includes what we do with them – more on that in a moment.

If you are not convinced, think about some of the parts of the computer you are looking at now: some are sharp, some contain harmful chemicals, and there’s a good chance that there is a deadly amount of electricity flowing through them, and yet we gain benefit from them, not loss of life, because we assemble them in ways that (at least normally) eliminate the harm by adding technologies to prevent it: counter technologies. Often, a large part of what we recognize as a technology is in fact a counter technology to other parts of it – think of cars, for example, where many of the components are simply there to stop other components blowing up, seizing, or killing people.

6 Technologies create what Stuart Kauffman calls “adjacent possibles” – empty niches that further technologies can fill, individually or in conjunction with others, including others that already exist. Every new technology makes further technologies possible, adding new parts to new assemblies. This accounts for the exponential growth in technologies over the past 10,000 years or so: technologies evolve from and with other technologies, almost never out of nothing.

Those adjacent possible empty niches are fundamentally unprestatable, as Kauffman puts it: no one can imagine all the possible assemblies into which we might put something as simple as a screwdriver. A stirrer of paint, a back scratcher, a scribe, a pointer, a stabbing weapon, a weight, a missile, a crowbar… And this is true of every technology. All can be assembled differently, in indefinitely many assemblies, to make indefinitely many wholes. This is true at the finest of scales. Though there may be some very close resemblances between instances, you have never written your own signature, nor washed your clothes, nor eaten your food the same way twice. Only machines can do that, but they are part of our technologies as much as we are part of them: the machine may behave consistently but the technology through which we use it – the instantiation in which we participate – most likely does not.

Technologies also come with path dependencies that can harden and distort assemblies, because the soft must shape itself around the hard. What exists shapes what can exist.

7, 8 When instantiated, we are participants in, not just users of, the technology. Using a technology is also a technology, whether we are organizing it or are part of the organization.

9, 10 We are coparticipants in a largely self-organizing web of technology that is part organic, part process, part physical object, part conceptual, part structural. Technologies democratize cognition, though they also embed and harden the values of the powerful, and the uses to which they are put are too often to subdue, constrain, or abuse our fellow humans. It is always important to remember that the technology that matters is seldom its most obvious components: it is the assembly they are in. As they are used, technologies are different to everyone who uses them, because they are parts of different assemblies: the production line is a very different technology for its boss, its workers, its shareholders, and the consumers of what it produces, orchestrating different phenomena for different users. This means that technologies – as instantiated – are never neutral. They have histories, contexts, and propensities.

11 And our input matters: it is not just the method but the way things are done. Every assembly can be a creative assembly, and it is possible to do it well or badly. And so we all create new adjacent possibles for one another. Through technologies we participate in the collective cognition of the human race: in effect, technologies form the distributed ledger of our shared cognition. But all of us assemble and interpret in the ways we use technology, whether we form part of it (hard technique) or are its organizers (soft technique).

12 Blockchain is a technology capable of achieving great good: potentially accountable, but equally interesting for the ways it can support anonymity; free from central control, but also interesting in the context of an existing system of trust; good for both privacy and transparency; and so on. It has indefinitely many adjacent possibles, from the exchange of property to the assertion of identity, from enabling reliable voting to making supply chains accountable.

13 But all technologies are what Neil Postman called Faustian bargains: when you invent the ship you invent the shipwreck, as Paul Virilio put it. “The Monkey’s Paw”, by W.W. Jacobs, is a tale of horror in which a monkey’s paw grants three wishes to a modest couple, who ask only to pay off their mortgage with their first wish. Moments later, they learn that their son has died in a horrible accident at the factory in which he worked, and that the company will pay compensation: the exact amount of the outstanding mortgage. And so the story goes on. Technologies are like that.

Blockchain can be subverted by organized crowds (botnets and human), malware, cracking, etc, and quantum computing means all bets are off about reliability and security. It is possible to lose votes as easily as it is to lose millions in bitcoin. Blockchain can conceal criminal activity and, conversely, enable a level of surveillance never seen before. Remember, this is all about the assembly, and blockchain is a very versatile component. It’s a super-soft technology that connects many others. Blockchain makes new forms of democracy possible, but it also enables new forms of tyranny.

To understand blockchain we must understand the technologies of which it forms only part of the assembly. Never forget that it is only ever the assembly that matters, not the parts. This is and has always been true of all the technologies of democracy. Paper voting, say, in its raw form is incredibly and fundamentally unreliable: prone to loss, error, abuse, corruption, coercion, and loss of privacy, and terribly, terribly inefficient and insecure. However, we throw in a lot of counter technologies – systems to assure reliability, safes, multiple counts, policing procedures, surveillance, electronic counts, observers, etc – and the process is now so well evolved that it often enough works. Paper is not the technology of interest: it is the whole system that surrounds it. The same is true for blockchain.
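To make that concrete, here is a minimal, purely illustrative sketch (mine, not anything presented at the conference) of the hash-chain mechanism at blockchain’s core, in Python. Notice how little of the democratic assembly it captures: it can detect tampering with recorded data, but it knows nothing about who may append blocks, whether a voter was coerced, or whether the software running it is honest. All of those assurances have to come from counter technologies in the surrounding system.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (its data plus the previous block's hash)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Append a block that commits to the current head of the chain."""
    prev = chain[-1]["hash"] if chain else "genesis"
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash({"data": data, "prev": prev})
    chain.append(block)

def verify(chain: list) -> bool:
    """Check that every block is intact and linked to its predecessor."""
    prev = "genesis"
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev": block["prev"]}):
            return False
        prev = block["hash"]
    return True

chain: list = []
append_block(chain, {"vote": "candidate A"})
append_block(chain, {"vote": "candidate B"})
print(verify(chain))                      # True: the ledger is internally consistent
chain[0]["data"]["vote"] = "candidate C"  # tamper with a recorded vote
print(verify(chain))                      # False: the chain detects the change
```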

14 Understanding technologies means we must know the adjacent possibles but, remember, we can only ever see the most brightly lit of these from where we currently stand. The creative potential, for both good and evil, is barely visible at all. Someone, somehow, somewhere, will find new assemblies that achieve their ends, whether they benefit all of us or not. Sadly, those most able are typically those least trustworthy, thanks to the fundamental inequalities of our societies that reward greed and give most to those who already have most. Anything is weaponizable, including democracy, as (here in Canada) our neighbours south of the border are discovering to their cost. And it means understanding what happens at scale: the environmental impacts, and the counter technologies needed to address them. But, as René Dubos put it, fixing problems with counter technologies is a philosophy of despair, because every counter technology we create is another Faustian bargain that creates new problems to solve, and new adjacent possibles we never foresaw.

15 We must understand where blockchain fits in the massive web of the collective technium – the Ricardian contracts, the oracles, the legal frameworks that surround them, the ZKP techniques, the privacy laws, the voting practices, the laws of ownership, and so on. It is unwise to simply drop it in as a replacement for what we already do, because it will harden what should not be hardened – when we automate we tend to simplify – and create new relationships that may be incompatible with, or positively dangerous to, existing technologies of democracy. But, as we reinvent it, we must always remember the unprestatable adjacent possibles we create, the things we reinforce, the things we lose. And we must remember that someone, somewhere is seeing adjacent possibles we did not imagine, assemblies we have yet to conceive, and they may not be friendly to democratic ideals.

16 To understand this means we must look far beyond the bits and bytes and flashing lights; we must make empathetic leaps into the hearts and minds of our coparticipants in the technium. We are technologies, as much a part of blockchain as it is part of the broader web of the technium.

What kind of technologies do we want to be?

Just a metatool? Some thoughts on why generative AIs are not tools

Many people brush generative AI aside as being just a tool. ChatGPT describes itself as such (I asked). I think it’s more complicated than that, and this post is going to be an attempt to explain why. I’m not sure about much of what follows and welcome any thoughts you may have on whether this resonates with you and, if not, why not.

What makes something a tool

I think that to call something a tool is shorthand for it having all of the following five attributes:

  1. It is an object (physical, digital, cognitive, procedural, organizational, structural, conceptual, spiritual, etc. – i.e. the thing we normally identify as the tool),
  2. used with/designed for a purpose, that
  3. can extend the capabilities of an actor (an intelligent agent, typically human), who
  4. may perform an organized action or series of actions with it, that
  5. cause changes to a subject other than the tool itself (such as a foodstuff, a piece of paper, a mental state, or a configuration of bits).

More informally, less precisely, but perhaps more memorably:

A tool is something that an intelligent agent does something with in order to do something to something else

Let me unpack that a bit.

A pebble used as a knife sharpener is a tool, but one used to reinforce concrete is not. A pen used to write on paper is a tool, but the paper is not. The toolness in each case emerges from what the agent does and the fact that it is done to something, in order to achieve something (a sharp knife, some writing).

Any object we label as a tool can become part of another with different organization. A screwdriver can become an indefinitely large number of other tools apart from the one intended for driving screws. In fact, almost anything can become a tool with the right organization. The paper can be a tool if it is, say, used to scoop up dirt. And, when I say “paper”, remember that this is the label for the object I am calling a tool, but it is the purpose, what it does, how it is organized, and the subject it acts upon that makes it so.
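Because toolness is relational, it can be expressed, a little playfully, as a data structure. This is a minimal illustrative sketch in Python (my own framing of the five attributes above, not anything formal): the same object can appear as the thing acted with in one relation and the thing acted upon in another, and only the former is a tool use.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolUse:
    """Toolness as a five-way relation, not a property of the object alone."""
    obj: str      # the thing we usually label "the tool"
    agent: str    # the intelligent actor whose capability is extended
    action: str   # the organized action performed with the object
    subject: str  # the something-else that is changed (never the obj itself)
    purpose: str  # what the agent is trying to achieve

# A pebble used to sharpen a knife is a tool in this relation...
sharpening = ToolUse(obj="pebble", agent="cook", action="honing",
                     subject="knife edge", purpose="a sharp knife")

# ...but a pebble in concrete is not: there the pebble can only fill the
# subject slot, as the thing being organized rather than the thing acted with.
reinforcing = ToolUse(obj="trowel", agent="builder", action="embedding",
                      subject="pebbles in wet concrete", purpose="a stronger slab")
```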

It is not always easy to identify the “something else” that a tool affects. A saw used to cut wood is an archetypal tool, but a saw played with a bow to make music is, I think, not. Perhaps the bow is a tool, and maybe we could think of the saw as a tool acting on air molecules, but I think we tend to perceive it as the thing that is acted upon rather than the thing we do something with.

Toolness is intransitive: a computer may be a tool for running programs, and a program running on it may be a tool that fixes a corrupt disk, but a computer is not a tool for fixing a corrupt disk.

A great many tools are also technologies in their own right. The intention and technique of the tool maker combine with those of the tool user, so the tool user may achieve more (or achieve it more reliably, faster, or more consistently) than would be possible without both. A fountain pen adds more to the writing assembly than a quill, for instance, so it demands less of the writer. Many tools are partnerships of this nature, allowing the cognition of more than one person to be shared. This is the ratchet that makes humans smart.

Often, the organization performed by the maker of a technology entirely replaces that of the tool user. A dish sponge is a tool, but a dishwasher is not: it is an appliance. Some skill is needed to load it but the dishwashing itself – the purpose for which it is designed – is entirely managed by the machine.

The case is less clear for an appliance like, say, a vacuum cleaner. I think this is because there are two aspects to the device: the mechanism that autonomously sucks dirt is what makes it an appliance, but the hose (or whatever) used to select the dirt to be removed is a tool. This is reflected in common usage, inasmuch as a vacuum cleaner is normally sold with what are universally described as tools (i.e. the things that a person actively manipulates). The same distinction is still there in a handheld machine, too – in fact, many come with additional tools – though I would be much more comfortable describing the whole device as a tool, because that’s what is manipulated to suck up the dirt. Many power tools fit in this category: they do some of the work autonomously but they are still things people do something with in order to do something to something else.

Humans can occasionally be accurately described as tools: the movie Swiss Army Man, for instance, features Daniel Radcliffe as a corpse that turns out to have many highly inventive uses. For real live humans, though, the case is less clear. Employees in scripted call centres, or teachers following scripted lesson plans, are more like appliances than tools: having been “programmed”, they run autonomously, so the scripts may be tools but the people are not. Most other ways of using other people are even less tool-like. If I ask you to pick up some shopping for me, say, then my techniques of persuasion may be tools, but you are the one organizing phenomena to shop, which is the purpose in question.

The case is similar for sheepdogs (though they are not themselves tool users), which I would be reluctant to label as tools, even though skills are clearly needed to make them do our bidding and they serve tool-like purposes as part of the technology of shepherding. The tools, though, are the commands, the methods of training, the treats, and so on, not the animals themselves.

Why generative AIs are not tools

For the same reasons of intransitivity that dishwashers, people, and sheepdogs are not normally tools, neither are generative AIs. Prompts and other means of getting AIs to do our bidding are tools, but generative AIs themselves work autonomously. This comes with the proviso that almost anything can be repurposed, so there is nothing that is not at least latently a tool, but, at least in their most familiar guises, generative AIs tend not to be.

Unlike the work of conventional appliances, but rather like that of sheepdogs, the work generative AIs perform is neither designed by humans nor scrutable to us. Unlike sheepdogs, but more like humans, generative AIs are tool users, too: not just (or not so much) words, but libraries, programming languages, web crawlers, filters, and so on. Unlike humans, though, generative AIs act with their users’ intentions, not their own, expressed through the tools with which we interact with them. They are a bit like partial brains, perhaps: remarkably capable, but neither aware of nor able to use that capability autonomously.

It’s not just chatbots. Many recommender systems and search engines (increasingly incorporating deep learning) also sit uncomfortably in the category of tools, though they are often presented as such. Amazon’s search, say, is not (primarily) designed to help you find what you are looking for but to push things at you that Amazon would like you to buy, which is why you must trawl through countless not-quite-right things despite it being perfectly capable of exactly matching your needs. If it is anyone’s tool, it is Amazon’s, not ours. The same goes for a Google search: the tools are your search terms, not Google Search itself, which acts quite independently in performing the search and returning results that are likely more beneficial to Google than to you. This is not true of all search systems. If I search for a file on my own computer and it fails to provide what I am looking for, it is a sign that the tool (and I think it is a tool, because the results should be entirely determinate) is malfunctioning. Back in those far-off days when Amazon wanted you to find what you wanted and Google tried to provide the closest match to your search term, we could think of them, if not as tools, then at least as appliances designed to be controlled by us.

I think we need a different term for these things. I like “metatool” because it is catchy and fairly accurate. A metatool is not a tool in its own right but something that uses tools to do our bidding: something that we use tools to act upon, and that is itself a tool user. I think this is better than a lot of other metaphors we might use: slave, assistant (Claude describes itself, incidentally, not as ‘merely’ a tool but as an intelligent assistant), partner, co-worker, contractor, etc all suggest more agency and intention than generative AIs actually possess, while appliance, machine, device, etc fail to capture the creativity, tailoring, and unpredictability of the results.

Why it matters

The big problem with treating generative AIs as tools is that it overplays our own agency and underplays the creative agency of the AI. It encourages us to think of them, like actual tools, as cognitive prostheses: ways of augmenting and amplifying, but still using and preserving, human cognitive capabilities, when what we are actually doing is using theirs. It also encourages us to think the results will be more deterministic than they actually are. This is not to negate the skill needed to use prompts effectively, nor to underplay the need to understand what the prompt is acting upon. Just as the shepherd needs to know the sheepdog, the genAI user has to know how their tools will affect the medium.

Like all technologies, these strange partial brains effectively enlarge our own. All other technologies, though, embed or embody other humans’ thinking and/or our own. Though largely consisting of the compressed, expressed thoughts of millions of people, an AI’s thoughts are not human thoughts: even using the most transparent of them, we have very little access to the mechanisms behind their probabilistic deliberations. And yet nor are they independent thinking agents. Like any technology, we might think of them as cognitive extensions but, if they are, then it is as though we have undergone an extreme form of corpus callosotomy, or are experiencing something like Jaynes’s bicameral mind. Generative AIs are their own thing: an embodiment of collective intelligence as well as contributors to our own, wrapped up in a whole bunch of intentional programming and training that imbues them in part (and I find this very troubling) with the values of their creators, and in part with the sum output of the great many humans who created the data on which they are trained.

I don’t know whether this is, ultimately, a bad thing. Perhaps it is another stage in our evolution that will make us more fit to deal with the complex world and new problems in it that we collectively continue to create. Perhaps it will make us less smart, or more the same, or less creative. Perhaps it will have the opposite effects. Most likely it will involve a bit of all of that. I think it is important that we recognize it as something new in the world, though, and not just another tool.

We are (in part) our tools and they are (in part) us

Here’s a characteristically well-expressed and succinct summary of the complex nature of technologies, our relationships with them, and what that means for education, by the ever-wonderful Tim Fawns. I like it a lot, and it expresses much of what I have tried to express about the nature and value of technologies, far better than I could and in far fewer words. Some of it, though, feels like it wants to be unpacked a little further, especially the notions that there are no tools, that tools are passive, and that tools are technologies. None of what follows contradicts or negates Tim’s points, but I think it helps to reveal some of the complexities.

There are tools

Tim starts provocatively with the claim that:

There are no tools. Tools are passive, neutral. They can be picked up and put down, used to achieve human goals without changing the user (the user might change, but the change is not attributed to the tool).

I get the point about the connection between tools and technology (in fact it is very similar to one I make in the “Not just tools” section of Chapter 3 of How Education Works) and I understand where Tim is going with it (which is almost immediately to consciously sort-of contradict himself), but I think it is a bit misleading to claim there are no tools, even in the deliberately partial and over-literal sense that Tim uses the term. This is because to call something a tool is to describe a latent or actual relationship between it and an agent (be it a person, a crow, or a generative AI), not just to describe the object itself. At the point at which that relationship is instantiated it very much changes the agent: at the very least, they now have a capability that they did not have before, assuming the tool works and is used for a purpose. Figuring out how to use the tool is not just a change to the agent but a change to what the agent may become that expands the adjacent possible. And, of course, many tools are intracranial so, by definition, having them and using them changes the user. This is particularly obvious when the tool in question is a word, a concept, a model, or a theory, but it is just as true of a hammer, a whiteboard, an iPhone, or a stick picked up from the ground with some purpose in mind, because of the roles we play in them.

Tools are not (exactly) technologies

Tim goes on to claim:

Tools are really technologies. Each technology creates new possibilities for acting, seeing and organising the world.

Again, he is sort-of right and, again, not quite, because “tool” is (as he says) a relational term. When it is used, a tool is always part of a technology, because the technique needed to use it is a technology that is part of the assembly, and the assembly is the technology that matters. However, the thing that is used – the tool itself – is not necessarily a technology in its own right. A stick on the ground that might be picked up to hit something, point to something, or scratch something is simply a stick.

Tools are not neutral

Tim says:

So a hammer is not just sitting there waiting to be picked up, it is actively involved in possibility-shaping, which subtly and unsubtly entangles itself with social, cognitive, material and digital activity. A hammer brings possibilities of building and destroying, threatening and protecting, and so forth, but as part of a wider, complex activity.

I like this: by this point, Tim is telling us that there are tools and that they are not neutral, in an allusion to Culkin’s/McLuhan’s dictum that we shape our tools and thereafter our tools shape us. Every new tool changes us, for sure, and it is an active participant in cognition, not a non-existent neutral object. But our enactment of the technology in which the tool participates is what defines it as a tool, so we don’t so much shape it as we are part of the shape of it, and it is that participation that changes us. We are our tools, and our tools are us.

There is interpretive flexibility in this – a natural result of the adjacent possibles that all technologies enable – which means that any technology can be combined with others to create a new technology. An iPhone, say, can be used by anyone, including monkeys, to crack open nuts (I wonder whether that is covered by AppleCare?), but this does not make the iPhone neutral to someone who is enmeshed in the web of technologies of which the iPhone is designed to be a part. As the kind of tool (actually many tools) it is designed to be, it plays quite an active role in the orchestration: as a thing, it is not just used but using. The greater the pre-orchestration of any tool, the more its designers are co-participants in the assembled technology, and it can often be a dominant role that is anything but neutral.

Most things that we call tools (Tim uses the hammer as an example) are also technologies in their own right, regardless of their toolness: they are phenomena orchestrated with a purpose, stuff that is organized to do stuff and, though softer tools like hammers have a great many adjacent possibles that provide almost infinite interpretive flexibility, they also – as Tim suggests – have propensities that invite very particular kinds of use. A good hardware store sells at least a dozen different kinds of hammer with slightly different propensities, labelled for different uses. All demand a fair amount of skill to use them as intended. Such stores also sell nail guns, though, which reduce the amount of skill needed by automating elements of the process. While they do open up many further adjacent possibles (with chainsaws, making them mainstays of a certain kind of horror movie), and they demand their own sets of skills to use them safely, the pre-orchestration in nail guns greatly reduces many of the adjacent possibles of a manual hammer: they aren’t much good for, say, prying things open, or using as a makeshift anchor for a kayak, or propping up the lid of a tin of paint. Interestingly, nor are they much use for quite a wide range of nail-hammering tasks where delicacy or precision is needed. All of this is true because, when a nail gun is used as a nail driver, there is a smaller gap between intention and execution to be filled than there is for even the most specialized manual hammer: the creators of the nail gun have already filled a lot of it, taking quite a few choices away from the tool user. This is the essence of my distinction between hard and soft technologies, and it is exactly the point of making a device of this nature. By filling gaps, the hardness simplifies many of the complexities and makes for greater speed and consistency, which in turn makes more things possible (because we no longer have to spend so much time being part of a hammer) but, in the process, it eliminates other adjacent possibles. The gaps can be filled further still. The person using such a machine to, say, nail together boxes on a production line is not so much a tool user as a part of someone else’s tool. Their agency is so much reduced that they are just a component, albeit a relatively unreliable one.

Being tools

In an educational context, a great deal of hardening is commonplace, which simplifies the teaching process and allows things to be done at scale. This in turn allows us to do something approximating reductive science, which gives us the comforting feeling that there is some objective value in how we teach. We can, for example, look at the effects of changes to pre-specified lesson plans on SAT results, if both lesson plans and SATs are very rigid, and infer moderately consistent relationships between the two, and so we can improve the process and measure our success quite objectively. The big problem here, though, is what we do not (and cannot) examine by such approaches, such as the many other things that are learned as a result of being treated as cogs in a mechanical system, the value of learning vs the value of grades, or our places in social hierarchies in which we are forced to comply with a very particular kind of authority. SATs change us, in many less-than-savoury ways. SATs also fail to capture more than a minuscule fraction of the potentially useful learning that also (hopefully) occurred. As tools for sorting learners by levels of competence, SATs are as far from neutral as you can get, and as situated as they could possibly be. As tools for learning, or for evaluating learning, they are, to say the least, problematic, at least in part because they make the learner a part of the tool rather than a user of it. Either way, you cannot separate them from their context because, if you did, they would be a different technology. If I chose to take an SAT for fun (and I do like puzzles and quizzes, so this is not improbable) it would be a completely different technology than it is for a student, a teacher, or an administrator in an educational system. They are all, in very different ways, parts of the tool that is in part made of SATs. I would be a user of it.

All of this reinforces Tim’s main and extremely sound points, that we are embroiled in deeply intertwingled relationships with all of our technologies, and that they cannot be de-situated. I prefer the term “intertwingled” to the term “entangled” that Tim uses because, to me, “entangled” implies chaos and randomness but, though there may (formally) be chaos involved, in the sense of sensitivity to initial conditions and emergence, this is anything but random. It is an extremely complex system but it is highly self-organizing, filled with metastabilities and pockets of order, each of which acts as a further entity in the complex system from which it emerges.

It is incredibly difficult to write about the complex wholes of technological systems of this nature. I think the hardest problem of all is the massive amount of recursion it entails. We are in the realms of what Kauffman calls Kantian Wholes, in which the whole exists for and by means of the parts, and the parts exist for and by means of the whole; but we are talking about many wholes that are parts of, or that depend on, many other wholes and their parts that are wholes, and so on ad infinitum, often crossing and weaving back and forth so that we sometimes wind up with weird situations in which it seems that a whole is part of another whole that is also part of the whole that is a part of it, thanks to the fact that this is a dynamic system, filled with emergence and in a constant state of becoming. Systems don’t stay still: their narratives are cyclic, recursive, and only rarely linear. Natural language cannot easily do this justice, so it is not surprising that, in his post, Tim is essentially telling us both that tools are neutral and that they are not, that tools exist and that they do not, and that tools are technologies and that they are not. I think that I just did pretty much the same thing.

Source: There are no tools – Timbocopia