Generative vs Degenerative AI (my ICEEL 2025 keynote slides)

[Image: AI Santa fighting Krampus]

I gave my second keynote of the week last week (in person!) at the excellent ICEEL conference in Tokyo. Here are the slides: Generative AI vs degenerative AI: steps towards the constructive transformation of education in the digital age. The conference theme was “AI-Powered Learning: Transforming Education in the Digital Age”, so this is roughly what I talked about…

Transformation in (especially higher) education is quite difficult to achieve. There is gradual evolution, for sure, and the occasional innovation, but the basic themes, motifs, and patterns – the stuff universities do and the ways they do it – have barely changed in nigh-on a millennium. A mediaeval professor or student would likely feel right at home in most modern institutions, now and then right down to the clothing. There are lots of path dependencies that have led to this, but a big part of the reason is down to the multiple subsystems that have evolved within education, and the vast number of supersystems in which education participates. Anything new has to thrive in an ecosystem along with countless other parts that have co-evolved over the last thousand years. There aren’t a lot of new niches, the incumbents are very well established, and they are very deeply enmeshed.

There are several reasons that things may be different now that generative AI has joined the mix. Firstly, generative AIs are genuinely different – not tools but cognitive Santa Claus machines, a bit like appliances, a bit like partners, capable of becoming, but not really the same as, anything else we’ve ever created. Let’s call them metatools: manifestations of our collective intelligence and generators of it. One consequence of this is that they are really good at doing what humans can do, including teaching, and students are turning to them in droves because they already teach the explicit stuff (the measurable skills and knowledge we tend to assess, as opposed to the values, attitudes, motivational and socially connected stuff that we rarely even notice) better than most human teachers. Secondly, genAI has been highly disruptive to traditional assessment approaches: change (not necessarily positive change) must happen. Thirdly, our cognition itself is changed by this new kind of technology, for better or worse, creating a hybrid intelligence we are only beginning to understand but that cannot be ignored for long without rendering education irrelevant. Finally, genAI really is changing everything everywhere all at once: everyone needs to adapt to it, across the globe and at every scale, ecosystem-wide.

There are huge risks that it can (and plentiful evidence that it already does) reinforce the worst of the worst of education by simply replacing what we already do with something that hardens it further, that does the bad things more efficiently and more pervasively, and that revives obscene forms of assessment and archaic teaching practices, but without any of the saving graces and intricacies that make educational systems work despite their apparent dysfunctionality. This is the most likely outcome, sadly. If we follow this path, it ends in model collapse, not just for LLMs but for human cognition. However, just perhaps, how we respond to it could change the way we teach in good, even excellent, ways. It can do so as long as human teachers are able to focus on the tacit, the relational, the social, and the immeasurable aspects of what education does rather than the objectives-led, credential-driven, instrumentalist stuff that currently drives it and that genAI can replace very efficiently, reliably, and economically. In the past, the tacit came for free when we did the explicit thing, because the explicit thing could not easily be achieved without it. When humans teach, no matter how terribly, they teach ways of being human. Now, if we want it to happen (and of course we do, because education is ultimately more about learning to be than learning to do), we need to pay considerably more deliberate attention to it.

The table below, copied from the slides, summarizes some of the ways we might productively divide the teaching role between humans and AIs:

| | Human Role (e.g.) | AI role (e.g.) |
|---|---|---|
| Relationships | Interacting, role modelling, expressing, reacting. | Nurturing human relationships, discussion catalyzing/summarizing |
| Values | Establishing values through actions, discussion, and policy. | Staying out of this as much as possible! |
| Information | Helping learners to see the personal relevance, meaning, and value of what they are learning. Caring. | Helping learners to acquire the information. Providing the information. |
| Feedback | Discussing and planning, making salient, challenging. Caring. | Analyzing objective strengths and weaknesses, helping with subgoals, offering support, explaining. |
| Credentialling | Responsibility, qualitative evaluation. | Tracking progress, identifying unprespecified outcomes, discussion with human teachers. |
| Organizing | Goal setting, reacting, responding. | Scheduling, adaptive delivery, supporting, reminding. |
| Ways of being | Modelling, responding, interacting, reflecting. | Staying out of this as much as possible! |

I don’t think this is a particularly tall order, but it does demand a major shift in culture, process, design, and attitude. Achieving that from scratch would be simple. Making it happen within existing institutions without breaking them is going to be hard, and the transition is going to be complex and painful. Failing to do so, though, doesn’t bear thinking about.

Abstract

In all of its nearly 1000-year history, university education has never truly been transformed. Rather, the institution has gradually evolved in incremental steps, each step building on but almost never eliminating the last. As a result, a mediaeval professor dropped into a modern university would still find plenty that was familiar, including courses, semesters, assessments, methods of teaching and perhaps, once or twice a year, scholars dressed like him. Even such hugely disruptive innovations as the printing press or the Internet have not transformed so much as reinforced and amplified what institutions have always done. What chance, then, does generative AI have of achieving transformation, and what would such transformation look like?
In this keynote I will discuss some of the ways that, perhaps, it really is different this time: for instance, that generative AIs are the first technologies ever invented that can themselves invent new technologies; that the unprecedented rate and breadth of adoption is sufficient to disrupt stabilizing structures at every scale; that their disruption to credentialing roles may push the system past a tipping point; and that, as cognitive Santa Claus machines, they are bringing sweeping changes to our individual and collective cognition, whether we like it or not, that education cannot help but accommodate. However, complex path dependencies make it at least as likely that AI will reinforce the existing patterns of higher education as disrupt them. Already, a surge in regressive throwbacks like oral and written exams is leading us to double down on what ought to be transformed while rendering vestigial the creative, relational and tacit aspects of our institutions that never should. Together, we will explore ways to avoid this fate and to bring about constructive transformation at every layer, from the individual learner to the institution itself.

Recording and slides from my ESET 2023 keynote: Artificial humanity and human artificiality

Here are the slides from my keynote at ESET23 in Taiwan (I was online, alas, not in Taipei!).

Here’s a recording of the actual keynote.

The themes of my talk will be familiar to anyone who follows my blog or who has read my recent paper on the subject. This is about applying the coparticipation theory from How Education Works to generative AI, raising concerns about the ways it mimics the soft technique of humans, and discussing how problematic that will be if the skills it replaces atrophy or are never learned in the first place, amongst other issues.

This is the abstract:

We are participants in, not just users of, technologies. Sometimes we participate as orchestrators (for instance, when choosing words that we write) and sometimes as part of the orchestration (for instance, when spelling those words correctly). Usually, we play both roles. When we automate aspects of technologies in which we are just parts of the orchestration, it frees us up to orchestrate more, to do creative and problem-solving tasks, while our tools perform the hard, mechanical tasks better, more consistently, and faster than we could ourselves. Collectively and individually, we therefore become smarter. Generative AIs are the first of our technologies to successfully automate those soft, open-ended, creative cognitive tasks. If we lack sufficient time and/or knowledge to do what they do ourselves, they are like tireless, endlessly flexible personal assistants, expanding what we can do alone. If we cannot draw, or draw up a rental agreement, say, an AI will do it for us, so we may get on with other things. Teachers are therefore scrambling to use AIs to assist in their teaching as fast as students use AIs to assist with their assessments.

For achieving measurable learning outcomes, AIs are or will be effective teachers, opening up greater, more personalized learning opportunities, at lower cost, in ways that are superior to average human teachers. But human teachers, be they professionals, other students, or authors of websites, do more than help learners to achieve measurable outcomes. They model ways of thinking, ways of being, tacit knowledge, and values: things that make us human. Education is a preparation to participate in human cultures, not just a means of imparting economically valuable skills. What will happen as we increasingly learn those ways of being from a machine? If machines can replicate skills like drawing, reasoning, writing, and planning, will humans need to learn them at all? Are there aspects of those skills that must not atrophy, and what will happen to us at a global scale if we lose them? What parts of our cognition should we allow AIs to replace? What kinds of credentials, if any, will be needed? In this talk I will use the theory presented in my latest book, How Education Works: Teaching, Technology, and Technique, to provide a framework for exploring why, how, and for what purpose our educational institutions exist, and what the future may hold for them.

Pre-conference background reading, including the book, articles, and blog posts on generative AI and education, may be found linked from https://howeducationworks.ca