Is higher education broken? Not exactly.

What does it mean for higher education to work?

The problem with claiming (as I sometimes do) that higher education is broken and needs to be transformed is that it raises the question of what it means for higher education to work, and that depends on what you think it is for.

From the name you’d expect that higher education might be for …well… education, assuming that to be concerned with learning and teaching, but it outgrew that single purpose a very long time ago. Yes, learning and teaching still loom large, but credentialing is at least as significant (often more so) and, at least for some, so are research and various forms of service. But, depending on your perspective and context, a university or college might also or alternatively be thought of quite differently as, for example:

  • a driver of peace or prosperity in a society;
  • a creator of knowledge in the world;
  • a support for local economies;
  • training for industry;
  • a market for contract cheating;
  • a home for sports teams;
  • a sharer and preserver of cultural artifacts;
  • an incubator for the performing arts;
  • a means to get a better job;
  • a medical facility;
  • a production line for professors;
  • an enabler of social mobility;
  • a profit-/surplus-making business;
  • a political pawn;
  • a selection filter for smart people;
  • and so on, and on, and on.

You might reasonably object that you could take any one of these away apart from the teaching role and you would still be left with a recognizable educational institution and, indeed, some are possible only because of the teaching role. However, to some people, somewhere, some time, every one of those roles is the role that matters most, and might be a target for transformation.

Like every instantiated technology, a university or college is an assembly. In fact it is a huge assembly. It is part of and contains countless other assemblies, and is thoroughly, deeply entangled with a host of other systems and subsystems on which it depends and that depend on it. Everyone within it or interacting with it perceives it from a different perspective, in different ways at different times, working together or independently as mutually affective coparticipants to do whatever it is that, from each of those different perspectives, it does.

In many ways, as a whole, it thus resembles an ecosystem and, like an ecosystem, each individual part can be perceived as having a goal and a relationship with other parts, and with the whole, but the whole itself does not. I think this is probably a feature of institutions in general, and may be what distinguishes them most clearly from simple organizations and businesses.

So what?

As long as the distinct roles, from each individual’s perspective, do their jobs, this is not a problem. If you are interested, say, in getting an education then you can largely ignore everything else an educational institution does and judge it solely by whether it teaches, notwithstanding the huge complexities of knowing what that even means, let alone with what proxies to measure it.

Unfortunately, a fair number of these roles deeply and negatively impact others. For me, by far the biggest problem is that the credentialing role is fundamentally at odds with the teaching role, due to the profound negative impact of extrinsic motivation on intrinsic motivation (I’ve written a lot about this, e.g. in these slides and in How Education Works, so I won’t repeat the arguments again here). Combined with the side effects of trying to teach everyone the same thing at the same time, this results in the vast majority of our most cherished teaching and assessment methods being nothing more than ways of restoring or replacing the intrinsic motivation sucked out of students by how we teach and assess. Other big conflicts matter too, though. For instance:

  • when patents or copyrights are at stake, the business role battles with the underlying goal of increasing knowledge in the world, turning non-rival knowledge into a rivalrous commodity;
  • ditto for the insanity that is journal publishing, where the public pays us to provide our editorial and reviewing services for papers on research that they also pay for, then the journals sell the papers back to us or charge us for sharing them, making obscene profits for an increasingly trivial service;
  • the research role, which should in principle exist in a virtuous circle with teaching, is too often in competition with it and, in many institutions, teaching loses;
  • the filtering role that rewards most universities (not mine) for excluding as many students as possible is in direct conflict with a mission to bring higher forms of learning to as many people as possible, and undermines the incentive to teach well, because those carefully selected students will learn pretty well regardless of how well they are taught.

There are countless other examples like this: public vs private good, excellence vs equity, local vs global responsibilities, supporting student diversity vs economic stability, and so on. Fixing one role invariably impacts others, usually negatively. These are structural issues that will persist for as long as higher education continues to play those roles: the solutions to the problems of one role become the problems that other roles have to solve and, to a large extent, that is unavoidable.

At a micro scale the problem is even more ubiquitous. Everyone is solving problems in their own local sphere, creating problems for others in their own local spheres, whose solutions cause problems for others, and so it goes around and comes around. Every time we create a solution to one problem we give rise to other problems elsewhere. To give a few trivial and commonplace examples of issues I am trying to deal with right now:

  • I recently learned of two courses that could not be launched because the tutors for the single course that they replace would have to be rehired, losing the benefits they had gained through long service. In terms of priorities and primary roles, this implies that offering stable employment to staff matters more than teaching. That’s not the intent of any particular individual involved in the process but it’s how the system works, thanks to union agreements that solved different problems a long time ago.
  • For nearly 50 years now, our undergraduate students have had 6 months to complete a course, unless they are grant-funded (an important minority), in which case they only get 4 months because funding bodies assume universities always teach in semesters of a standardized length and demand results within that timeframe. And so we are in the process of making all contracts 4 months, knowing full well that students will be more pressured, cheating will increase, and pass rates will go down, but at least it will be fairer.
  • When we commit structures to code they are supposed to model the system but, having done so, they normally come to dictate it. For instance, all of our faculty need to be able to see the teaching sites of all of our courses (a critical part of my strategy to improve our teaching), but that access is under threat because of the cascading roles, baked into the implementation of our LMS, that determine who can do what. Those roles make it difficult and long-winded for our editors to edit our courses, because they have to be modified each time an editor uses the impersonation function that is necessary for viewing courses as they will be experienced. The obvious solution is to fix those roles, not to remove access from those who need it, but the editors lack such rights, and those who have them support other faculties with different and conflicting needs.
  • We have recently shifted to a centralized front-line support system, explicitly to deal with common difficulties students have in navigating and using our administrative systems and websites. The more obvious solution would be to make those systems work better in the first place. Instead, we employ vast numbers of people whose job it is to patch over gaps, errors, and poor design decisions made elsewhere. This reduces the pressure to fix the systems, so the need persists, except that now we have a whole load of people with jobs that would be in jeopardy if we fixed them. We employ many people whose job is to fix problems caused by how others do theirs: people dedicated to policing exam cheating, say, or to accommodating disabilities, or the aforementioned editors. There’s a fine and indistinct line between dividing a workload so that people with the right expertise do the right things, and creating a workload because people with the wrong expertise have done the wrong things.

I could easily write pages of similar examples and, if you work for a university or college, I’m sure you could too: the specific problems may be peculiar to Athabasca University, but the underlying dynamics are ubiquitous in higher education and, for that matter, most large organizations. And I’m sure that you can think of ways to deal with any of them but that’s exactly the point: fixing them is what we all do, all the time, every day, on a grand scale, and educators have been doing so for nearly 1000 years so the number of fixes to fixes to fixes to fixes is vast.  For almost any role or activity, no matter how small or how large, there is probably another role and set of activities on which it impinges, directly or otherwise.

The big problem is that, on the whole, we create counter-technologies to fix the worst of the problems and that’s a policy of despair, every counter-technology creating new problems for further counter-technologies to solve. In fact, a large part of the reason for all those many roles is precisely because counter-technologies were created to solve what probably seemed like pressing problems and, in an inevitable Faustian bargain, created the problems we now need to address. Every one of these counter-technologies increases the robustness of the whole, increasing the interdependencies, making the patterns more and more indelible so, even if we do occasionally come up with something truly different, the overall system holds together as a massive web of mutually interdependent pieces more strongly than ever.

The more things change…

For all the many structural problems, it would be a synecdochic fallacy of mistaking the part for the whole to describe higher education as broken. Sure, thanks to all those competing roles (especially credentialing) it is not particularly great at education (at least), so transformation is devoutly to be wished for but, by the most basic and essential criterion of all – survival – it is rampantly successful. In fact, it is exactly those competing and complementary roles that have sustained it, because a diverse ecosystem is a resilient ecosystem. The webs of dependencies are mutually sustaining even when, up to a well-evolved point, one is antagonistic to another.

For nearly a millennium the university and its brethren have not only survived but have now spread to almost every populated region of the world, and they continue to expand. Within my lifetime, in my country of birth, enrolments in higher education have risen from around 5% of the population to around 50%. To achieve such success, it has had to evolve: the invention of written exams, say, in the 18th Century, Humboldtian models that justified and embedded research, the adoption of flexible curricula, or the admittance of women in the 19th Century, were all huge changes. It has lost the trivium and quadrivium along the way, and diversified enormously in the range of subjects taught. The technological systems are way more advanced and varied than they were.  There are regional variations, and a few speciated niches (colleges, open universities, distance education, etc). Administratively, a lot has changed, from recruitment and enrolment to the roles of professional bodies, industry, and governments.  It is constantly evolving, for sure.

But.

The main technological features that universities acquired in the first century of their existence are still fully present, in virtually unaltered form.  Courses, classes, terms/semesters, professors, credentials, methods of teaching, organizational structures, methods of assessment, and plenty more are visibly the same species as their mediaeval forebears, and remain the central motifs of virtually all formal higher education. We may use a few more polyesters and zippers, and the gowns now come in women’s sizes but, at least once a year, many of us even dress the same, a behaviour shared with only a few other institutions like (in some countries) the legal profession or the church. On the subject of which, most universities continue to have roles like dean, chancellor, rector, provost, registrar, bursar and even the odd beadle (what even is that?) that not only reveal their ecclesiastic origins but also how little the basic entities in the system have since evolved.

If the purpose of higher education were simply to educate then we would expect it to work a lot better, and to see a whole load more variation in how it is done, especially given the wide range of technologies that can now be used to overcome the problems caused by those features, but we don’t. It’s not just the purpose that survives: it’s the form. We can radically alter a great many processes, but changing at least one or two of the central motifs themselves – which, to me, is what “transformation” must entail – is hardly ever even on the table.

Adaptation, not transformation

If the institution had a clear overriding goal then we could re-engineer it to work differently, but this is not an engineering problem: it’s an evolutionary problem. We build with what we have on what we have, a process of tinkering or bricolage that is anything but engineered. It is, though, not natural but technological evolution. In natural ecosystems massive disruption can occur when populations become isolated, or when the environment radically changes. Technological evolution emerges through recombination and assembly of parts, not genes, and the technologies of higher education have evolved to be globally connected and massively intertwingled with nearly every other part of nearly every society, making isolation virtually impossible. In nature, ecosystems can be disrupted by invasive species, parasites, and so on, but our educational systems – technologies one and all – have evolved to be great at absorbing stuff rather than competing with it, so even that path is fraught. Even something as apparently disruptive as generative AI, which is impacting almost every aspect of the system and all the systems with which it interacts, is currently reinforcing objectives-driven models of teaching, (at least in Western countries) cultural individualism, and highly traditionalist solutions to fears of cheating, like written and oral exams, at least as much as it is inspiring change.

For those of us who care about the education role, there are plenty of ways we could actually transform it if we had the power to make the necessary changes. Decoupling learning and assessment would be a good start. Not just separating teaching and tests: that would just result in teaching to the test, as we see now. The decoupling would have to be asymmetrical, so the assessed tasks would demand synthesis of many taught things. Or we could get rid of classes and courses: to a large extent, this is what (despite the name) many Connectivist MOOCs have attempted to do, and it is also the pattern behind things like the Khan Academy or Connect North’s AI Tutor Pro, not to mention traditional PhDs (at least in some countries), apprenticeship models of learning, most instructional videos on sites like YouTube, or Stack Exchange or Quora, and the bulk of student projects (like MOOCs, labelled as courses but lacking most if not all of their traditional trappings). Or we could keep courses but drop the schedules and time limits. If nothing else, imagining how things might work if we messed with those central motifs is a good way to stimulate creative use of what we have. If done at scale, such things could make a huge impact on our educational systems.

But they probably won’t.

The problem always comes back to the fact that, though (collectively) we could change the fitness landscape itself, making survival dependent on whatever we think matters most, we are unlikely to agree on what does matter most. For some, better higher education would be measured in credentials, or explicit learning outcomes, or better fits with industry needs. Others would like it to advance their personal careers or status, or to do research without a profit motive. For me, improvements would be in far harder-to-measure aspects like building safer, kinder, smarter, more creative societies. Unfortunately (for me and others who feel that way), thanks to pace layering, the ones who could shape the fitness landscape the most are governments, and they are the least likely to do so. Governments tend to prefer things that are easier to measure, quicker to show results, and most likely to keep voters voting for them and sponsors (especially from industry) sponsoring them. Increasingly, institutional mandates are measured by industry impact, which erodes some traditional aspects of higher education but reinforces the big ones, like the measurable, assessed, outcome-driven course, with its classes, its schedules, its semesters, its textbooks, its assessments, its teachers, and so on. It doesn’t have to, in principle, but, in practice, those are not the things we adapt. If radical transformation ever does occur it will therefore most likely be the result of something so disruptive that the loss of higher education would be a minor concern: devastation caused by climate change, or nuclear war, or being hit by a large asteroid, for instance. And, to be honest, I’m not even sure that would be enough.

The limited chances of success should not discourage us from tinkering, all the time, whenever we can. Evolution must happen because the world that higher education inhabits evolves so, if this is the system we are stuck with, we should make it do what we want it to do as best we can. There are usually ways to reduce dependencies, techniques to decouple antagonistic roles, strategies of simplification, approaches to parcellating the landscape (skunkworks, etc), and values-based principles for prioritizing activities that can make it more likely that the changes will be successful and persistent. However, if we have learned anything from biological studies over the past many decades, it is that you shouldn’t mess with an ecosystem. Whatever we do will put it out of balance, and self-organizing dynamics will ensure either that the balance is restored, or that it spirals out of control and breaks altogether. Either way, it will never be exactly what we planned and, on average, it will tend eventually to keep things much the same as they are, making much of it worse while it restabilizes itself.

Knowing that, though, can be useful. If every change will result in changes elsewhere, it is not enough to monitor the direct impact of an intervention: rather, we need to figure out ways of harvesting the outcomes across the system and/or, as best we are able, to model them in advance. No one has access to more than a fraction of the information needed, not least because a significant amount of it is tacit, embedded in the culture and practices of people and communities within the system. However, we can try to intentionally capture it, to tell stories, to share experiences and understandings across all those many niches. We can do what we can to make the invisible visible. We can talk. And we have technologies to help, inasmuch as we can train AIs to know our stories, ask them about the impacts of things we do, and have them point out impacts that would be difficult if not impossible for any person to see. And that, I think, is the only viable path we have. The problems we generally have to deal with are a direct result of local thinking: solutions in one space that cause problems in another. The less locally we think about such things, the greater the chances that we will avoid unwanted impacts elsewhere or, equally good, that we will cause wanted impacts. To achieve that demands openness and dialogue, channels through which we can share and communicate, and some way of compressing, parsing, and relaying all that so that sharing and communication is not the only thing we ever do. This is not an impossibly tall order but it certainly isn’t easy.

Generative vs Degenerative AI (my ICEEL 2025 keynote slides)

I gave my second keynote of the week last week (in person!) at the excellent ICEEL conference in Tokyo. Here are the slides: Generative AI vs degenerative AI: steps towards the constructive transformation of education in the digital age. The conference theme was “AI-Powered Learning: Transforming Education in the Digital Age”, so this is roughly what I talked about…

Transformation in (especially higher) education is quite difficult to achieve.  There is gradual evolution, for sure, and the occasional innovation, but the basic themes, motifs, and patterns – the stuff universities do and the ways they do it – have barely changed in nigh-on a millennium. A mediaeval professor or student would likely feel right at home in most modern institutions, now and then right down to the clothing. There are lots of path dependencies that have led to this, but a big part of the reason is down to the multiple subsystems that have evolved within education, and the vast number of supersystems in which education participates. Anything new has to thrive in an ecosystem along with countless other parts that have co-evolved together over the last thousand years. There aren’t a lot of new niches, the incumbents are very well established, and they are very deeply enmeshed.

There are several reasons that things may be different now that generative AI has joined the mix. Firstly, generative AIs are genuinely different – not tools but cognitive Santa Claus machines, a bit like appliances, a bit like partners, capable of becoming but not really the same as anything else we’ve ever created. Let’s call them metatools, manifestations of our collective intelligence and generators of it. One consequence of this is that they are really good at doing what humans can do, including teaching, and students are turning to them in droves because they already teach the explicit stuff (the measurable skills and knowledge we tend to assess, as opposed to the values, attitudes, motivational and socially connected stuff that we rarely even notice) better than most human teachers. Secondly, genAI has been highly disruptive to traditional assessment approaches: change (not necessarily positive change) must happen. Thirdly, our cognition itself is changed by this new kind of technology for better or worse, creating a hybrid intelligence we are only beginning to understand but that cannot be ignored for long without rendering education irrelevant. Finally genAI really is changing everything everywhere all at once: everyone needs to adapt to it, across the globe and at every scale, ecosystem-wide.

There are huge risks that it can (and plentiful evidence that it already does) reinforce the worst of the worst of education: simply replacing what we already do with something that hardens it further, that does the bad things more efficiently and more pervasively, and that revives obscene forms of assessment and archaic teaching practices, but without any of the saving graces and intricacies that make educational systems work despite their apparent dysfunctionality. This is the most likely outcome, sadly. If we follow this path, it ends in model collapse, not just for LLMs but for human cognition. However, just perhaps, how we respond to it could change the way we teach in good if not excellent ways. It can do so as long as human teachers are able to focus on the tacit, the relational, the social, and the immeasurable aspects of what education does rather than the objectives-led, credential-driven, instrumentalist stuff that currently drives it and that genAI can replace very efficiently, reliably, and economically. In the past, the tacit came for free when we did the explicit thing, because the explicit thing could not easily be achieved without it. When humans teach, no matter how terribly, they teach ways of being human. Now, if we want it to happen (and of course we do, because education is ultimately more about learning to be than learning to do), we need to pay considerably more deliberate attention to it.

The table below, copied from the slides, summarizes some of the ways we might productively divide the teaching role between humans and AIs:

| | Human role (e.g.) | AI role (e.g.) |
| --- | --- | --- |
| Relationships | Interacting, role modelling, expressing, reacting. | Nurturing human relationships, discussion catalyzing/summarizing. |
| Values | Establishing values through actions, discussion, and policy. | Staying out of this as much as possible! |
| Information | Helping learners to see the personal relevance, meaning, and value of what they are learning. Caring. | Helping learners to acquire the information. Providing the information. |
| Feedback | Discussing and planning, making salient, challenging. Caring. | Analyzing objective strengths and weaknesses, helping with subgoals, offering support, explaining. |
| Credentialling | Responsibility, qualitative evaluation. | Tracking progress, identifying unprespecified outcomes, discussion with human teachers. |
| Organizing | Goal setting, reacting, responding. | Scheduling, adaptive delivery, supporting, reminding. |
| Ways of being | Modelling, responding, interacting, reflecting. | Staying out of this as much as possible! |

I don’t think this is a particularly tall order but it does demand a major shift in culture, process, design, and attitude.  Achieving that from scratch would be simple. Making it happen within existing institutions without breaking them is going to be hard, and the transition is going to be complex and painful. Failing to do so, though, doesn’t bear thinking of.

Abstract

In all of its nearly 1000-year history, university education has never truly been transformed. Rather, the institution has gradually evolved in incremental steps, each step building on but almost never eliminating the last. As a result, a mediaeval professor dropped into a modern university would still find plenty that was familiar, including courses, semesters, assessments, methods of teaching and perhaps, once or twice a year, scholars dressed like him. Even such hugely disruptive innovations as the printing press or the Internet have not transformed so much as reinforced and amplified what institutions have always done. What chance, then, does generative AI have of achieving transformation, and what would such transformation look like?
In this keynote I will discuss some of the ways that, perhaps, it really is different this time: for instance, that generative AIs are the first technologies ever invented that can themselves invent new technologies; that the unprecedented rate and breadth of adoption is sufficient to disrupt stabilizing structures at every scale; that their disruption to credentialing roles may push the system past a tipping point; and that, as cognitive Santa Claus machines, they are bringing sweeping changes to our individual and collective cognition, whether we like it or not, that education cannot help but accommodate. However, complex path dependencies make it at least as likely that AI will reinforce the existing patterns of higher education as disrupt them. Already, a surge in regressive throwbacks like oral and written exams is leading us to double down on what ought to be transformed, while rendering vestigial the creative, relational, and tacit aspects of our institutions that never should be. Together, we will explore ways to avoid this fate and to bring about constructive transformation at every layer, from the individual learner to the institution itself.

Paper: Cognitive Santa Claus Machines and the Tacit Curriculum

This is my contribution to the inaugural issue of AACE’s new journal of AI-Enhanced Learning, Cognitive Santa Claus Machines and the Tacit Curriculum. If the title sounds vaguely familiar, it might be because you have seen my post offering some further thoughts on cognitive Santa Claus machines, written not long after I had submitted this paper.

The paper itself delves a bit into the theory and dynamics of genAI, cognition, and education. It draws heavily on how the theory in my last book has evolved, adding a few refinements of its own here and there, most notably in its distinction of use-as-purpose vs use-as-process. Because genAIs are not tools but cognitive Santa Claus machines, this helps to explain how the use of genAI can simultaneously enhance and diminish learning, both individually and collectively, to varying degrees that range from cognitive apocalypse to cognitive nirvana, depending on what we define learning to be, whose learning we care about, and what kind of learning gets enhanced or diminished. A fair portion of the paper is taken up with explaining why, in a traditional credentials-driven, fixed-outcomes-focused institutional context, generative AI will usually fail to enhance learning and, in many typical learning and institutional designs, may even diminish our individual (and ultimately collective) capacity to do so. As always, it is only the whole assembly that matters, especially the larger structural elements, and genAI can easily short-circuit a few of those, making the whole seem more effective (courses seem to work better, students seem to display better evidence of success) while the things that actually matter get left out of the circuit.

The conclusion describes the broad characteristics of educational paths that will tend to lead towards learning enhancement: first of all, by focusing our energies on education’s social role in building and sharing tacit knowledge; then on ways of using genAI to do more than we could do alone; and, underpinning this, on expanding our definitions of what “learning” means beyond the narrow confines of “individuals meeting measurable learning outcomes”. The devil is in the detail and there are certainly other ways to get there than by the broad paths I recommend but I think that, if we start with the assumption that our students are neither products nor consumers nor vessels for learning outcomes, but co-participants in our richly complex, ever evolving, technologically intertwingled learning communities, we probably won’t go too far wrong.

Abstract:

Every technology we create, from this sentence to the Internet, changes us but, through generative AI (genAI), we can now access a kind of cognitive Santa Claus machine that invents other technologies, so the rate of change is rising exponentially. Educators struggle to maintain a balance between sustaining pre-genAI values and skills, and using the new possibilities genAIs offer. This paper provides a conceptual lens for understanding and responding to this tension. It argues that, on the one hand, educators must acknowledge and embrace the changes genAI brings to our extended cognition while, on the other, we must valorize and double down on the tacit curriculum, through which we learn ways of being human in the world.

New open journal from AACE: AI-Enhanced Learning (with a paper from me)

The Journal of Artificial Intelligence Enhanced Learning (AIEL), a diamond open-access journal published under the auspices of AACE and distributed worldwide through LearnTechLib, has just launched its inaugural issue, which includes a paper from me (Cognitive Santa Claus Machines and the Tacit Curriculum).

This inaugural issue is a great start to what I think will come to be recognized as a leading journal in the field of AI and education. As not just an author but also an associate editor, I am naturally a little biased, but I’m very picky about the journals I work with and this one ticks all the right boxes. It is genuinely open, without fees for authors or readers. It is explicitly very multidisciplinary. The editors – Mike Searson, Theo Bastiaens and Gary Marks – are truly excellent, and prominent in the field of online and technology-enhanced learning. The publisher, AACE, is a very well-oiled, prominent, professional, and likeable organization that has been a major player in the field for over 30 years, with extensive reach into institutional libraries the world over via LearnTechLib.

And the journal has an attitude that I like very much: it’s about learning enhancement through AI, not just AI and education. This fills a huge pragmatic need in an area where many practitioners are like deer caught in the headlights when it comes to thinking about what positive things we can do with our new robot friends/overlords/interlopers, and where too much of the conversation is implicitly focused on protecting the traditional forms and structures of our mediaeval education systems and the kinds of knowledge generative AI can more easily and effectively replicate.

This first issue crosses many disciplinary boundaries and aspects of the educational endeavour with a very diverse range of reflective papers by recognized experts in many facets of AI, education, and learning.  All are ultimately optimistic about the potential for learning enhancement but few back away from the wicked problems and potential for the opposite effect.  My own paper finds a thread of hope that we might not so much reinvent as simply notice what education currently does (it’s about learning to be as much as learning to do), and that we might recognize generative AIs not as tools but as cognitive Santa Claus machines, sharing their cognitive gifts to help us collectively achieve things we could not dream of before. It has a bit of theory to back that up.

If you have influence over such things, do encourage your libraries to subscribe!

Educational technologies and the synecdochic fallacy

For a few minutes the other day I thought that I had invented a new kind of fallacy or, at least, a great term to describe it. Disappointingly, a quick search revealed that it was not only an old idea but one that has been independently invented at least twice before (Berry & Martin, 1974; Weinstock, 1981). Here is its definition from Weinstock (1981):

“a synecdochic fallacy is a deceptive, misleading, erroneous, or false notion, belief, idea, or statement where a part is substituted for a whole, a whole for a part, cause for effect, effect for cause, and so on.”

Most synecdoches (syn-NEK-doh-kees in case you were wondering – I have been getting it totally wrong for decades) are positively useful. Synecdoches make aspects of a whole more salient by focusing on the parts. No one, for instance, thinks “all hands on deck” actually means the crew should put their hands on the deck let alone that disembodied hands should crew the ship, but it does focus on an aspect of the whole that is of great interest: that there is an expectation that those hands will be used to do what hands do. Equally, synecdoches can make the parts more salient by focusing on the whole. When we say “Canada beat the USA in the finals” no one thinks that one literal country got up and thrashed the other, but it draws attention to a symbolic aspect of a hockey game that reveals one of its richer social roles. It becomes a fallacy only when we take it literally. Unfortunately, doing so is surprisingly common in research about education and educational technologies.

Technologies as synecdoches

The labels we use for technologies are very liable to be synecdochic (syn-nek-DOH-kik if you were wondering): it is almost a defining characteristic. Technologies are assemblies, and parts of assemblies, often contained by other technologies, often containing an indeterminate number of technologies that themselves consist of indeterminate numbers of technologies, that participate in richly recursive webs of further technologies with dynamic boundaries, where the interplay of process, product, structure, and use constantly shifts and shimmers. The labels we give to technologies are as much descriptions of sets of dynamic relationships as they are of objects (cognitive, physical, virtual, organizational, etc) in the world, and the boundaries we use to distinguish one from another are very, very fluid.

There is no technology that cannot be combined with different others or in different ways in order to create a different whole. Without changing or adding anything to the physical assembly a screwdriver, say, can be a paint stirrer, a pointer, a weapon, or unprestatably many other technologies, far from all of which are so easily labelled. Virtually every use of a technology is itself a technology, and it is often one that has never occurred in exactly the same way in the entire history of the universe. This sentence is one such technology: though there may be lots of sentences that are similar, the chances that anyone has ever used exactly this combination of words and punctuation before now are close to zero. Same for this post. This post has a title: that is the name of this technology, though it is a synecdoche for… what? The words it contains? Not quite, because now (literally as I write) it contains more of them but it is still this post. Is it still this post when it is syndicated? If the URL changes? Or the title? Or if I read it and turn it into a podcast? I don’t know. This sentence does not have a name, but it is no less a technology. So is your reading of it. So is much of what is involved in the sense you are making of it, and that is the technology that probably matters most right now. No one has ever made sense of anything in exactly this way, right now, the way you are doing it, and no one ever will. The technosphere is almost as awesomely complex as the biosphere and, in education, the technosphere extends deep into every learner, not just as an object of learning but as part of learning itself.

Synecdoches and educational/edtech research

Let’s say you wanted to investigate the effects of putting computers in classrooms. It seems reasonable enough: after all, it’s a big investment so you’d want to know whether it was worth it. But what do you actually learn from doing so apart from that, in this particular instance, with this particular set of orchestrations and uses, something happened? Yes, computers might have been prerequisites for it happening but so what? An infinite number of different things could have happened if you had done something else even slightly different with them, there are infinitely many other things you could have done that might have been better, and all bets would be off if the computers themselves had been different. The same is equally true for what happens in classrooms without computers. What can you predict as a result? Even if you were to find that, 100% of the time until now, computers in classrooms led to better/worse learning (whatever that might mean to you) I guarantee that I could find plenty of ways of using them to do the precise opposite. This is functionally similar to taking “all hands on deck” literally: the hands may be very salient but, without taking into account the people they are attached to and exactly what they are doing with those hands, there is little or no value in making comparisons. Averages, maybe; patterns, perhaps, as long as you can keep everything else more or less similar (e.g. a traditional formal school setting); but reliable predictions of cause and effect? No. Or anything that can usefully transfer to a different setting (e.g. unschooling or – ha – online learning)? Not at all.

Conversely, but following the same synecdochic logic, we might ask questions about the effectiveness of online and distance learning (the whole), comparing it with in-person learning. Both encompass immense numbers of wildly diverse technologies, including not just course and class technologies but things like pedagogical techniques, institutional structures, and national standards, instantiated with wildly varying degrees of skill and talent, all of which matter at least as much as the fact that it is online and at a distance. Many may matter more. This is functionally similar to taking “Canada beat the USA” literally. It did not. It remains a fallacy even if, on average, Canada (the hockey team) does win more often, or if online and distance learning is generally more effective than in-person learning, whatever that means. The problem is that such a comparison does not distinguish which of the many millions of parts of the distance or the in-person orchestration of phenomena matter and, for aforementioned and soon-to-be-mentioned reasons, it cannot.

Beyond causing physical harm – and even then with caveats – there is virtually nothing you could do or use to teach someone that, if you modified some other part of the assembly or organized the parts a little differently, could not have exactly the opposite effect the next time you do or use it. This sentence, say, will have quite different effects from the next despite using almost the exact same components. Almost components effects next the despite using different quite will sentence, say, this have the from exact. It’s a silly example and it is not difficult to argue that further components (rules of grammar, say) are sufficiently different that the comparison is flawed, but that’s exactly the point: all instantiations of educational technologies are different, in countless significant ways, each of which impacts lots of others which in turn impact others, in a complex adaptive system filled with positive and negative feedback loops, emergence, evolution, and random impacts from the systems that surround it. I didn’t actually even have to mix up the words. Had I repeated the exact same statement, its impact would have been different from the first because something else in the system had changed as a result of it: you and the sentence after. And this is just one sentence, and you are just one reader. Things get much more complex really fast.

In a nutshell, the synecdochic fallacy is why reductive research methods that serve us so well in the natural sciences are often completely inappropriate in the field of technology in general and education in particular. Natural science seeks and studies invariant phenomena but, because every use (at least in education) is a unique orchestration, technologies as they are actually enacted (i.e. the whole, including the current use) are never invariant and, even on those odd occasions that they do remain sufficiently similar for long enough to make study worthwhile, it just takes one small tweak to render useless everything we have learned about them.

All is not lost

There are lots of useful and effective kinds of research that we can do about educational technologies. Reductive science is great for identifying phenomena and what we can do with them in a technological assembly, and that can include other technologies that are parts of assemblies. It is really useful, say, to know about the properties of nuts and bolts used to build desks or computers, the performance characteristics of a database, or that students have persistent difficulties answering a particular quiz question. We can use this information to make good creative choices when changing or creating designs. Notice, though, that this is not a science of teaching or education. This is a science of parts and, if we do it with caution, their interactions with other parts. It is never going to tell us anything useful about, say, whether teaching to learning styles has any positive effect, whether direct instruction is better than problem-based learning, or whether blended learning is better than in-person or online learning, but it might help us build a better LMS or design a lesson or two more effectively, if (and only if) we use the information creatively and wisely.

Other effective methods involve the telling of rich stories that reveal phenomena of interest and reasons for or effects of decisions we made about putting them together: these can help others faced with similar situations, providing inspirations and warnings that might be very useful. If we find new ways of assembling or orchestrating the parts (we do something no one has done before) then it is really helpful to share what we have done: this helps others to invent because it expands the adjacent possible. Similarly we can look for patterns in the assembly that seem to work and that we can re-use (as parts) in other assemblies. We can sometimes come up with rules of thumb that might help us to (though never to predict that we will) build better new ones. We can share plans. We can describe reasons.

What this all boils down to is that we can and should learn a great deal that is useful about the component technologies, and we can and should seek broad patterns in the ways that they intertwingle. What we cannot do, neither in principle nor in practice, is use what we have learned to accurately predict anything specific about what happens when we put them together to support learning. It’s about improving the palette, not improving the painting. As Longo, Montévil, and Kauffman (2012) put it, in a complex system of this nature – and this applies as much to the biosphere, culture, and economics as it does to education and technology – there are no laws of entailment, just of enablement. We are firmly in the land of emergence, evolution, craft, design, and bricolage, not engineering, manufacture, and mass-production. I find this quite liberating.

 

References

Berry, K. J., & Martin, T. W. (1974). The Synecdochic Fallacy: A Challenge to Recent Research and Theory-Building in Sociology. Pacific Sociological Review, 17(2), 139–166. https://doi.org/10.2307/1388339
Longo, G., Montévil, M., & Kauffman, S. (2012). No entailing laws, but enablement in the evolution of the biosphere. Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, 1379–1392. https://doi.org/10.1145/2330784.2330946
Weinstock, S. M. (1981). Synecdochic Fallacy [Panel paper]. 67th annual meeting of the Speech Communication Association, Anaheim, California. https://www.scribd.com/document/396524982/Synecdochic-Fallacy-1981

We are (in part) our tools and they are (in part) us

Here’s a characteristically well-expressed and succinct summary of the complex nature of technologies, our relationships with them, and what that means for education by the ever-wonderful Tim Fawns. I like it a lot, and it expresses much of what I have tried to express about the nature and value of technologies, far better than I could do it and in far fewer words. Some of it, though, feels like it wants to be unpacked a little further, especially the notions that there are no tools, that tools are passive, and that tools are technologies. None of what follows contradicts or negates Tim’s points, but I think it helps to reveal some of the complexities.

There are tools

Tim starts provocatively with the claim that:

There are no tools. Tools are passive, neutral. They can be picked up and put down, used to achieve human goals without changing the user (the user might change, but the change is not attributed to the tool).

I get the point about the connection between tools and technology (in fact it is very similar to one I make in the “Not just tools” section of Chapter 3 of How Education Works) and I understand where Tim is going with it (which is almost immediately to consciously sort-of contradict himself), but I think it is a bit misleading to claim there are no tools, even in the deliberately partial and over-literal sense that Tim uses the term. This is because to call something a tool is to describe a latent or actual relationship between it and an agent (be it a person, a crow, or a generative AI), not just to describe the object itself. At the point at which that relationship is instantiated it very much changes the agent: at the very least, they now have a capability that they did not have before, assuming the tool works and is used for a purpose. Figuring out how to use the tool is not just a change to the agent but a change to what the agent may become that expands the adjacent possible. And, of course, many tools are intracranial so, by definition, having them and using them changes the user. This is particularly obvious when the tool in question is a word, a concept, a model, or a theory, but it is just as true of a hammer, a whiteboard, an iPhone, or a stick picked up from the ground with some purpose in mind, because of the roles we play in them.

Tools are not (exactly) technologies

Tim goes on to claim:

Tools are really technologies. Each technology creates new possibilities for acting, seeing and organising the world.

Again, he is sort-of right and, again, not quite, because “tool” is (as he says) a relational term. When it is used a tool is always part of a technology because the technique needed to use it is a technology that is part of the assembly, and the assembly is the technology that matters. However, the thing that is used – the tool itself – is not necessarily a technology in its own right. A stick on the ground that might be picked up to hit something, point to something, or scratch something is simply a stick.

Tools are not neutral

Tim says:

So a hammer is not just sitting there waiting to be picked up, it is actively involved in possibility-shaping, which subtly and unsubtly entangles itself with social, cognitive, material and digital activity. A hammer brings possibilities of building and destroying, threatening and protecting, and so forth, but as part of a wider, complex activity.

I like this: by this point, Tim is telling us that there are tools and that they are not neutral, in an allusion to Culkin’s/McLuhan’s dictum that we shape our tools and thereafter our tools shape us.  Every new tool changes us, for sure, and it is an active participant in cognition, not a non-existent neutral object. But our enactment of the technology in which the tool participates is what defines it as a tool, so we don’t so much shape it as we are part of the shape of it, and it is that participation that changes us. We are our tools, and our tools are us.

There is interpretive flexibility in this – a natural result of the adjacent possibles that all technologies enable – which means that any technology can be combined with others to create a new technology. An iPhone, say, can be used by anyone, including monkeys, to crack open nuts (I wonder whether that is covered by AppleCare?), but this does not make the iPhone neutral to someone who is enmeshed in the web of technologies of which the iPhone is designed to be a part. As the kind of tool (actually many tools) it is designed to be, it plays quite an active role in the orchestration: as a thing, it is not just used but using. The greater the pre-orchestration of any tool, the more its designers are co-participants in the assembled technology, and it can often be a dominant role that is anything but neutral.

Most things that we call tools (Tim uses the hammer as an example) are also technologies in their own right, regardless of their tooliness: they are phenomena orchestrated with a purpose, stuff that is organized to do stuff and, though softer tools like hammers have a great many adjacent possibles that provide almost infinite interpretive flexibility, they also – as Tim suggests – have propensities that invite very particular kinds of use. A good hardware store sells at least a dozen different kinds of hammer with slightly different propensities, labelled for different uses. All demand a fair amount of skill to use them as intended. Such stores also sell nail guns, though, that reduce the amount of skill needed by automating elements of the process. While they do open up many further adjacent possibles (with chainsaws, making them mainstays of a certain kind of horror movie), and they demand their own sets of skills to use them safely, the pre-orchestration in nail guns greatly reduces many of the adjacent possibles of a manual hammer: they aren’t much good for, say, prying things open, or using as a makeshift anchor for a kayak, or propping up the lid of a tin of paint. Interestingly, nor are they much use for quite a wide range of nail hammering tasks where delicacy or precision are needed. All of this is true because, as a nail driver, there is a smaller gap between intention and execution that needs to be filled than for even the most specialized manual hammer, due to the creators of the nail gun having already filled a lot of it, thus taking quite a few choices away from the tool user. This is the essence of my distinction between hard and soft technologies, and it is exactly the point of making a device of this nature. 
By filling gaps, the hardness simplifies many of the complexities and makes for greater speed and consistency which in turn makes more things possible (because we no longer have to spend so much time being part of a hammer) but, in the process, it eliminates other adjacent possibles. The gaps can be filled further. The person using such a machine to, say, nail together boxes on a production line is not so much a tool user as a part of someone else’s tool. Their agency is so much reduced that they are just a component, albeit a relatively unreliable component.

Being tools

In an educational context, a great deal of hardening is commonplace, which simplifies the teaching process and allows things to be done at scale. This in turn allows us to do something approximating reductive science, which gives us the comforting feeling that there is some objective value in how we teach. We can, for example, look at the effects of changes to pre-specified lesson plans on SAT results, if both lesson plans and SATs are very rigid, and infer moderately consistent relationships between the two, and so we can improve the process and measure our success quite objectively. The big problem here, though, is what we do not (and cannot) examine by such approaches, such as the many other things that are learned as a result of being treated as cogs in a mechanical system, the value of learning vs the value of grades, or our places in social hierarchies in which we are forced to comply with a very particular kind of authority. SATs change us, in many less than savoury ways. SATs also fail to capture more than a minuscule fraction of the potentially useful learning that also (hopefully) occurred. As tools for sorting learners by levels of competence, SATs are as far from neutral as you can get, and as situated as they could possibly be. As tools for learning or for evaluating learning they are, to say the least, problematic, at least in part because they make the learner a part of the tool rather than a user of it. Either way, you cannot separate them from their context because, if you did, it would be a different technology. If I chose to take a SAT for fun (and I do like puzzles and quizzes, so this is not improbable) it would be a completely different technology than for a student, or a teacher, or an administrator in an educational system. They are all, in very different ways, parts of the tool that is in part made of SATs. I would be a user of it.

All of this reinforces Tim’s main and extremely sound points, that we are embroiled in deeply intertwingled relationships with all of our technologies, and that they cannot be de-situated. I prefer the term “intertwingled” to the term “entangled” that Tim uses because, to me, “entangled” implies chaos and randomness but, though there may (formally) be chaos involved, in the sense of sensitivity to initial conditions and emergence, this is anything but random. It is an extremely complex system but it is highly self-organizing, filled with metastabilities and pockets of order, each of which acts as a further entity in the complex system from which it emerges.

It is incredibly difficult to write about the complex wholes of technological systems of this nature. I think the hardest problem of all is the massive amount of recursion it entails. We are in the realms of what Kauffman calls Kantian Wholes, in which the whole exists for and by means of the parts, and the parts exist for and by means of the whole, but we are talking about many wholes that are parts of or that depend on many other wholes and their parts that are wholes, and so on ad infinitum, often crossing and weaving back and forth so that we sometimes wind up with weird situations in which it seems that a whole is part of another whole that is also part of the whole that is a part of it, thanks to the fact that this is a dynamic system, filled with emergence and in a constant state of becoming. Systems don’t stay still: their narratives are cyclic, recursive, and only rarely linear. Natural language cannot easily do this justice, so it is not surprising that, in his post, Tim is essentially telling us both that tools are neutral and that they are not, that tools exist and that they do not, and that tools are technologies and they are not. I think that I just did pretty much the same thing.

Source: There are no tools – Timbocopia

Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research | TechTrends

The latest paper I can proudly add to my list of publications, Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research, has been published in the (unfortunately) closed journal TechTrends. Here’s a direct link to the paper that should hopefully bypass the paywall, if it has not been used too often.

I’m 16th of 47 coauthors, led by the truly wonderful Junhong Xiao, who is the primary orchestrator and mastermind behind it. This is a companion piece to our Manifesto for Teaching and Learning in a Time of Generative AI and it starts where the other paper left off, delving further into what we don’t know (or at least do not agree that we know) and, taking up most of the paper, what we might do about that lack of knowledge. I think this presents a pretty useful and wide-ranging research agenda for anyone with an interest in AI and education.

Methodologically, it emerged through a collaborative writing process among a large international group of researchers in open, digital, and online learning. It is not simply a group of people who happen to know one another: the huge group represents a rich mix of (extremely) well-established and (excellent) emerging researchers from a broad set of cultural backgrounds, covering a wide range of research interests in the field. Junhong did a great job of extracting the themes and organizing all of that into a coherent narrative.

In many ways I like this paper more than its companion piece. I think this is because, though its findings are – as the title implies – less well-defined than the first’s, I am more closely aligned with the assumptions, attitudes, and values that underpin the analysis. It grapples more firmly with the wicked problems and goes deeper into the broader, situated, human nature of the systems in which generative AI is necessarily intertwingled, skimming over the more simplistic conversations about cheating, reliability, and so on to get at some meatier but more fundamental issues that, ultimately, relate to how and why we do this education thing in the first place.

Abstract

Advocates of AI in Education (AIEd) assert that the current generation of technologies, collectively dubbed artificial intelligence, including generative artificial intelligence (GenAI), promise results that can transform our conceptions of what education looks like. Therefore, it is imperative to investigate how educators perceive GenAI and its potential use and future impact on education. Adopting the methodology of collective writing as an inquiry, this study reports on the participating educators’ perceived grey areas (i.e. issues that are unclear and/or controversial) and recommendations on future research. The grey areas reported cover decision-making on the use of GenAI, AI ethics, appropriate levels of use of GenAI in education, impact on learning and teaching, policy, data, GenAI outputs, humans in the loop and public–private partnerships. Recommended directions for future research include learning and teaching, ethical and legal implications, ownership/authorship, funding, technology, research support, AI metaphor and types of research. Each theme or subtheme is presented in the form of a statement, followed by a justification. These findings serve as a call to action to encourage a continuing debate around GenAI and to engage more educators in research. The paper concludes that unless we can ask the right questions now, we may find that, in the pursuit of greater efficiency, we have lost the very essence of what it means to educate and learn.

Reference

Xiao, J., Bozkurt, A., Nichols, M., Pazurek, A., Stracke, C. M., Bai, J. Y. H., Farrow, R., Mulligan, D., Nerantzi, C., Sharma, R. C., Singh, L., Frumin, I., Swindell, A., Honeychurch, S., Bond, M., Dron, J., Moore, S., Leng, J., van Tryon, P. J. S., … Themeli, C. (2025). Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research. TechTrends. https://doi.org/10.1007/s11528-025-01060-6

How AI works for education: an interview with me for AACE Review

Thanks to Stefanie Panke for some great questions and excellent editing in this interview with me for the AACE Review.

The content is in fact the product of two discussions, one coming from student questions at the end of a talk that I gave for the Asian University for Women just before Christmas, the other asynchronously with Stefanie herself.

Stefanie did a very good job of making sense of my rambling replies to the students, which spanned quite a few issues: some from my book, How Education Works, some concerned (mainly) with generative AI, and a little about the intersection of collective and artificial intelligence. Stefanie’s own prompts were great: they encouraged me to think a little differently, and to take some enjoyable detours along the way around the evils of learning management systems, artificially-generated music, and social media, as well as a discussion of the impact of generative AI on learning designers, thoughts on legislation to control AI, and assessment.

Here are the slides from that talk at AUW – I’ve not posted them separately because hardly any of the slides are new: the deck mostly cobbles together two recent talks, one for Contact North and the other my keynote for ICEEL ’24. The conversation afterwards was great, though, thanks to a wonderfully thoughtful and enthusiastic bunch of very smart students.

New paper: The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

I’m proud to be the 7th of 47 authors on this excellent new paper, led by the indefatigable Aras Bozkurt and featuring some of the most distinguished contemporary researchers in online, open, mobile, distance, e- and [insert almost any cognate sub-discipline here] learning, as well as a few, like me, hanging on to their coat-tails.

As the title suggests, it is a manifesto: it makes a series of statements (divided into 15 positive and 20 negative themes) about what is or what should be, and it is underpinned by a firm set of humanist pedagogical and ethical attitudes that are anything but neutral. What makes it interesting to me, though, can mostly be found in the critical insights that accompany each theme, that capture a little of the complexity of the discussions that led to them, and that add a lot of nuance. The research methodology, a modified and super-iterative Delphi design in which all participants are also authors is, I think, an incredibly powerful approach to research in the technology of education (broadly construed) that provides rigour and accountability without succumbing to science-envy.


Notwithstanding that the lion’s share of the work of leading, assembling, editing, and submitting the paper was taken on by Aras and Junhong, it was a truly collective effort, so I have very little idea what percentage of it could be described as my work. We were thinking and writing together. Being a part of that was a fantastic learning experience for many of us, one that stretched the limits of what can be done with tracked changes and comments in a Google Doc, with contributions coming in at all times of day and night, from just about every timezone, over weeks. The depth and breadth of dialogue was remarkable, as much an organic process of evolution and emergence as intelligent design, and one in which the document itself played a significant role as a participant. I felt a strong sense of belonging, not so much as part of a community but as part of a connectome.

For me, this epitomizes what learning technologies are all about. It would be difficult if not impossible to do this in an in-person setting: even if the researchers worked together on an online document, the simple fact that they met in person would utterly change the social dynamics, the pacing, and the structure. Indeed, even online, replicating this in a formal institutional context would be very difficult because of the power relationships, assessment requirements, motivational complexities and artificial schedules that formal institutions add to the assembly. This was an online-native way of learning of a sort I aspire to but seldom achieve in my own teaching.

The paper offers a foundational model or framework on which to build or situate further work, as well as providing a moderately succinct summary of a large share of the issues relating to generative AI and education as they exist today. Even if it were only ever referred to by each of its 47 authors it would get more citations than most of my papers, but the paper is highly citable in its own right, whether you agree with its statements or not. I know I am biased but, if you’re interested in the impacts of generative AI on education, I think it is a must-read.

The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., … Asino, T. I. (2024). The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis, 16(4), 487–513. https://doi.org/10.55982/openpraxis.16.4.777

Full list of authors:

  • Aras Bozkurt
  • Junhong Xiao
  • Robert Farrow
  • John Y. H. Bai
  • Chrissi Nerantzi
  • Stephanie Moore
  • Jon Dron
  • Christian M. Stracke
  • Lenandlar Singh
  • Helen Crompton
  • Apostolos Koutropoulos
  • Evgenii Terentev
  • Angelica Pazurek
  • Mark Nichols
  • Alexander M. Sidorkin
  • Eamon Costello
  • Steven Watson
  • Dónal Mulligan
  • Sarah Honeychurch
  • Charles B. Hodges
  • Mike Sharples
  • Andrew Swindell
  • Isak Frumin
  • Ahmed Tlili
  • Patricia J. Slagter van Tryon
  • Melissa Bond
  • Maha Bali
  • Jing Leng
  • Kai Zhang
  • Mutlu Cukurova
  • Thomas K. F. Chiu
  • Kyungmee Lee
  • Stefan Hrastinski
  • Manuel B. Garcia
  • Ramesh Chander Sharma
  • Bryan Alexander
  • Olaf Zawacki-Richter
  • Henk Huijser
  • Petar Jandrić
  • Chanjin Zheng
  • Peter Shea
  • Josep M. Duart
  • Chryssa Themeli
  • Anton Vorochkov
  • Sunagül Sani-Bozkurt
  • Robert L. Moore
  • Tutaleni Iita Asino

Abstract

This manifesto critically examines the unfolding integration of Generative AI (GenAI), chatbots, and algorithms into higher education, using a collective and thoughtful approach to navigate the future of teaching and learning. GenAI, while celebrated for its potential to personalize learning, enhance efficiency, and expand educational accessibility, is far from a neutral tool. Algorithms now shape human interaction, communication, and content creation, raising profound questions about human agency and biases and values embedded in their designs. As GenAI continues to evolve, we face critical challenges in maintaining human oversight, safeguarding equity, and facilitating meaningful, authentic learning experiences. This manifesto emphasizes that GenAI is not ideologically and culturally neutral. Instead, it reflects worldviews that can reinforce existing biases and marginalize diverse voices. Furthermore, as the use of GenAI reshapes education, it risks eroding essential human elements—creativity, critical thinking, and empathy—and could displace meaningful human interactions with algorithmic solutions. This manifesto calls for robust, evidence-based research and conscious decision-making to ensure that GenAI enhances, rather than diminishes, human agency and ethical responsibility in education.

Slides from my ICEEL ’24 Keynote: “No Teacher Left Behind: Surviving Transformation”

Here are the slides from my keynote at the 8th International Conference on Education and E-Learning in Tokyo yesterday. Sadly I was not actually in Tokyo for this, but the online integration was well done and there was some good audience interaction. I am also the conference chair (an honorary title) so I may be a bit biased, but I think it’s a really good conference, with an increasingly rare blend of both the tech and the pedagogical aspects of the field, and some wonderfully diverse keynotes ranging in subject matter from the hardest computer science to reflections on literature and love (thanks to its co-location with ICLLL, a literature and linguistics conference). My keynote was somewhere in between, and deliberately targeted at the conference theme, “Transformative Learning in the Digital Era: Navigating Innovation and Inclusion.”

As my starting point for the talk I introduced the concept of the technological connectome, about which I have just written a paper (currently under revision, hopefully due for publication in a forthcoming issue of the new Journal of Open, Distance, and Digital Education), which is essentially a way of talking about extended cognition from a technological rather than a cognitive perspective. From there I moved on to the adjacent possible and the exponential growth in technology that has, over the past century or so, reached such a breakneck rate of change that innovations such as generative AI, the transformation I particularly focused on (because it is topical), can transform vast swathes of culture and practice in months if not weeks. This is a bit of a problem for traditional educators, who are as unprepared as anyone else for it, but who find themselves in a system that could not be more vulnerable to the consequences. At the very least it disrupts the learning outcomes-driven, teacher-centric model of teaching that still massively dominates institutional learning the world over, both in the mockery it makes of traditional assessment practices and in the fact that generative AIs make far better teachers if all you care about are the measurable outcomes.

The solutions I presented and that formed the bulk of the talk, largely informed by the model of education presented in How Education Works, were mostly pretty traditional, emphasizing the value of community, and of passion for learning, along with caring about, respecting, and supporting learners. There were also some slightly less conventional but widely held perspectives on assessment, plus a bit of complexivist thinking about celebrating the many teachers and acknowledging the technological connectome as the means, the object and the subject of learning, but nothing Earth-shatteringly novel. I think this is as it should be. We don’t need new values and attitudes; we just need to emphasize those that are learning-positive rather than the increasingly mainstream learning-negative, outcomes-driven, externally regulated approaches that the cult of measurement imposes on us.

Post-secondary institutions have had to grapple with their learning-antagonistic role of summative assessment since not long after their inception, so this is not a new problem but, until recent decades, the two roles largely maintained an uneasy truce. A great deal of the impetus for the shift has come from expanding access to PSE. This has resulted in students who are less able, less willing, and less well-supported than their forebears, who were, on average, far more advantaged in ability, motivation, and unencumbered time simply because fewer could get in. In the past, teachers hardly needed to teach. The students were already very capable, and had few other demands on their time (like working to get through college), so they mainly needed to hang out with smart people, some of whom knew the subject and could guide them through it, telling them what to learn and whether they had been successful, along with the time and resources to support their learning. Teachers could be confident that, as long as students had the resources (libraries, lecture notes, study time, other students), they would be sufficiently driven by the need to pass the assessments and/or intrinsic interest that they could largely be left to their own devices (OK, a slight caricature, but not far off the reality).

Unfortunately, though this is no longer even close to the norm, it is still the model on which most universities are based. Most of the time, professors are still hired for their research skills, not their teaching ability, and it is relatively rare that they are expected to receive more than the most perfunctory training, let alone education, in how to teach. Those with an interest usually have opportunities to develop their skills but, if they do not, there are few consequences. Thanks to the technological connectome, the rewards and punishments of credentials continue to do the job well enough, notwithstanding the vast amounts of cheating, satisficing, student suffering, and lost love of learning that ensue. There are still plenty of teachers: students have textbooks, YouTube tutorials, other students, help sites, and ChatGPT, to name but a few, with more every day. This is probably all that is propping up a fundamentally dysfunctional system. Increasingly, the primary value of post-secondary education comes to lie in its credentialling function.

No one who wants to teach wants this, but virtually all of those who teach in universities are the ones who succeeded in retaining their love of learning for its own sake despite it, so they find it hard to understand students who don’t. Too many (though, I believe, a minority) are positively hostile to their students as a result, believing that most students are lazy, willing to cheat, or otherwise game the system, and they set up elaborate means of control and gotchas to trap them. The majority, who want the best for their students, are also to blame, seeing their purpose as being to improve grades, using “learning science” (which is like using colour theory to paint: useful, not essential) to develop methods that will, on average, do so more effectively. In fairness, though grades are not the purpose, they are not wrong about the need to teach the measurable stuff well: it does matter that students gain the skills and knowledge they set out to achieve. However, that is only part of the purpose. Mostly, education is a means to less measurable ends: forming identities, attitudes, values, ways of relating to others, ways of thinking, and ways of being. You don’t need the best teaching methods to achieve that: you just need to care, and to create environments and structures that support things like community, diversity, connection, sharing, openness, collaboration, play, and passion.

The keynote was recorded but I am not sure if or when it will be available. If it is released on a public site, I will share it here.