Transformation in (especially higher) education is quite difficult to achieve. There is gradual evolution, for sure, and the occasional innovation, but the basic themes, motifs, and patterns – the stuff universities do and the ways they do it – have barely changed in nigh-on a millennium. A mediaeval professor or student would likely feel right at home in most modern institutions, now and then right down to the clothing. There are lots of path dependencies that have led to this, but a big part of the reason is down to the multiple subsystems that have evolved within education, and the vast number of supersystems in which education participates. Anything new has to thrive in an ecosystem along with countless other parts that have co-evolved over the last thousand years. There aren’t a lot of new niches, the incumbents are very well established, and they are very deeply enmeshed.
There are several reasons that things may be different now that generative AI has joined the mix. Firstly, generative AIs are genuinely different – not tools but cognitive Santa Claus machines, a bit like appliances, a bit like partners, capable of becoming, but not really the same as, anything else we’ve ever created. Let’s call them metatools, manifestations of our collective intelligence and generators of it. One consequence of this is that they are really good at doing what humans can do, including teaching, and students are turning to them in droves because they already teach the explicit stuff (the measurable skills and knowledge we tend to assess, as opposed to the values, attitudes, motivational and socially connected stuff that we rarely even notice) better than most human teachers. Secondly, genAI has been highly disruptive to traditional assessment approaches: change (not necessarily positive change) must happen. Thirdly, our cognition itself is changed by this new kind of technology, for better or worse, creating a hybrid intelligence we are only beginning to understand but that cannot be ignored for long without rendering education irrelevant. Finally, genAI really is changing everything everywhere all at once: everyone needs to adapt to it, across the globe and at every scale, ecosystem-wide.
There are huge risks that it can (and plentiful evidence that it already does) reinforce the worst of the worst of education by simply replacing what we already do with something that hardens it further, that does the bad things more efficiently and more pervasively, and that revives obscene forms of assessment and archaic teaching practices, but without any of the saving graces and intricacies that make educational systems work despite their apparent dysfunctionality. This is the most likely outcome, sadly. If we follow this path, it ends in model collapse not just for LLMs but for human cognition. However, just perhaps, how we respond to it could change the way we teach in good if not excellent ways. It can do so as long as human teachers are able to focus on the tacit, the relational, the social, and the immeasurable aspects of what education does rather than the objectives-led, credential-driven, instrumentalist stuff that currently drives it and that genAI can replace very efficiently, reliably, and economically. In the past, the tacit came for free when we did the explicit thing, because the explicit thing could not easily be achieved without it. When humans teach, no matter how terribly, they teach ways of being human. Now, if we want it to happen (and of course we do, because education is ultimately more about learning to be than learning to do), we need to pay considerably more deliberate attention to it.
The table below, copied from the slides, summarizes some of the ways we might productively divide the teaching role between humans and AIs:
|  | Human Role (e.g.) | AI role (e.g.) |
| --- | --- | --- |
| Relationships | Interacting, role modelling, expressing, reacting. | Nurturing human relationships, discussion catalyzing/summarizing. |
| Values | Establishing values through actions, discussion, and policy. | Staying out of this as much as possible! |
| Information | Helping learners to see the personal relevance, meaning, and value of what they are learning. Caring. | Helping learners to acquire the information. Providing the information. |
| Feedback | Discussing and planning, making salient, challenging. Caring. | Analyzing objective strengths and weaknesses, helping with subgoals, offering support, explaining. |
| Credentialling | Responsibility, qualitative evaluation. | Tracking progress, identifying unprespecified outcomes, discussion with human teachers. |
I don’t think this is a particularly tall order, but it does demand a major shift in culture, process, design, and attitude. Achieving that from scratch would be simple. Making it happen within existing institutions without breaking them is going to be hard, and the transition is going to be complex and painful. Failing to do so, though, doesn’t bear thinking about.
Abstract
In all of its nearly 1000-year history, university education has never truly been transformed. Rather, the institution has gradually evolved in incremental steps, each step building on but almost never eliminating the last. As a result, a mediaeval professor dropped into a modern university would still find plenty that was familiar, including courses, semesters, assessments, methods of teaching and perhaps, once or twice a year, scholars dressed like him. Even such hugely disruptive innovations as the printing press or the Internet have not transformed so much as reinforced and amplified what institutions have always done. What chance, then, does generative AI have of achieving transformation, and what would such transformation look like?
In this keynote I will discuss some of the ways that, perhaps, it really is different this time: for instance, that generative AIs are the first technologies ever invented that can themselves invent new technologies; that the unprecedented rate and breadth of adoption is sufficient to disrupt stabilizing structures at every scale; that their disruption to credentialing roles may push the system past a tipping point; and that, as cognitive Santa Claus machines, they are bringing sweeping changes to our individual and collective cognition, whether we like it or not, that education cannot help but accommodate. However, complex path dependencies make it at least as likely that AI will reinforce the existing patterns of higher education as disrupt them. Already, a surge in regressive throwbacks like oral and written exams is leading us to double down on what ought to be transformed while rendering vestigial the creative, relational and tacit aspects of our institutions that never should. Together, we will explore ways to avoid this fate and to bring about constructive transformation at every layer, from the individual learner to the institution itself.
Free-to-register International online symposium, December 5th, 2024, 12-3pm PST
This is going to be an important symposium, I think.
I will be taking 3 very precious hours out of my wedding anniversary to attend, in fairness unintentionally: I did not do the timezone conversion when I submitted my paper, so I thought it was the next day. However, I have not cancelled despite the potentially dire consequences, partly because the line-up of speakers is wonderful, partly because we all use the words “collective intelligence” (CI) but we come from diverse disciplinary areas and sometimes mean very different things by them (so there will be some potentially inspiring conversations), and partly for a bigger reason that I will get to at the end of this post. You can read abstracts and most of the position papers on the symposium website.
In my own position paper I have invented the term ochlotecture (from the Classical Greek ὄχλος (ochlos), meaning something like “multitude” and τέκτων (tektōn) meaning “builder”) to describe the structures and processes of a collection of people, whether it be a small seminar group, a network of researchers, or a set of adherents to a world religion. An ochlotecture includes elements like names, physical/virtual spaces, structural hierarchies, rules, norms, mythologies, vocabularies, and purposes, as well as emergent phenomena occurring through individual and subgroup interactions, most notably the recursive cycle of information capture, processing, and (re)presentation that I think characterizes any CI. Through this lens, I can see both what is common and what distinguishes the different kinds of CI described in these position papers a bit more clearly. In fact, my own use of the term has changed a few times over the years so it helps me make sense of my own thoughts on the matter too.
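For the more computationally inclined, one way to picture the idea (a toy sketch with invented names and fields, not a formal model) is as a set of relatively stable designed elements plus a recursive loop in which contributions are captured, processed by whatever algorithms the collective embodies, and re-presented to the crowd, shaping what gets contributed next:

```python
from dataclasses import dataclass, field

@dataclass
class Ochlotecture:
    """Illustrative (invented) rendering of the definition above: the designed and
    inherited structures of a collection of people, plus the recursive
    capture -> process -> (re)present cycle from which a CI can emerge."""
    name: str
    spaces: list[str] = field(default_factory=list)           # physical/virtual spaces
    rules_and_norms: list[str] = field(default_factory=list)  # rules, norms, mythologies...
    purposes: list[str] = field(default_factory=list)
    memory: list[str] = field(default_factory=list)           # captured contributions

    def capture(self, contribution: str) -> None:
        self.memory.append(contribution)

    def process(self) -> list[str]:
        # Stand-in for whatever the collective's algorithm happens to be:
        # voting, ranking, averaging, moderating, summarizing...
        return sorted(self.memory, key=len)

    def represent(self) -> str:
        # What gets shown back to the crowd shapes what is contributed next,
        # which is what makes the cycle recursive.
        processed = self.process()
        return processed[-1] if processed else ""

seminar = Ochlotecture(name="seminar group", spaces=["classroom", "forum"],
                       rules_and_norms=["take turns"], purposes=["learn together"])
seminar.capture("a short comment")
seminar.capture("a much longer, more elaborated comment that others then build upon")
print(seminar.represent())
```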
Where I’ve come from that leads me here
I have been researching CI and education for a long time. Initially, I used the term very literally to describe something very distinct from individual intelligence, and largely independent of it. My PhD, started in 1997, was inspired by the observation that (even then) there were at least tens of thousands of very good resources (people, discussions, tutorials, references, videos, courseware, etc.) openly available on the Web to support learners in most subject areas, which could meet almost any conceivable learning need. The problem was and remains how to find the right ones. These were pre-Google times but even the good-Google of olden days (a classic application of collective intelligence as I was using the term) only showed the most implicitly popular, not those that would best meet a particular learner’s needs. As a novice teacher, I also observed that, in a typical classroom, the students’ combined knowledge and ability to seek more of it far exceeded my own. I therefore hit upon the idea of using a nature-inspired evolutionary approach to collectively discover and recommend resources, which led me very quickly into the realm of evolutionary theory and thence to the dynamics of self-organizing systems, complex adaptive systems, stigmergy, flocking, city planning, markets, and collective intelligence.
And so I became an ochlotect. I built a series of self-organizing social software systems that used stuff like social navigation (stigmergy), evolutionary, and flocking algorithms to create environments that both shaped and were shaped by the crowd. Acknowledging that “intelligence” is a problematic word, I simply called these collectives, a name inspired by Star Trek TNG’s Borg (the pre-Borg-Queen Borg, before the writers got bored or lazy). The intelligence of a “pure” collective as I conceived it back then was largely to be found in the algorithm, not the individual agents. Human stock markets are no smarter than termite mounds by this way of thinking (and they are not). I was trying to amplify the intelligence of crowds while avoiding the stupidity of mobs by creating interfaces and algorithms that made value to learners a survival characteristic. I was building systems that played some of the roles of a teacher but that were powered by collectives consisting of learners. Some years later, Mark Zuckerberg hit on the idea of doing the exact opposite, with considerably greater success, making a virtue out of systems that amplified collective stupidity, but the general principles behind both EdgeRank and my algorithms were similar.
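To give a flavour of what I mean, here is a minimal toy sketched for this post (with invented resource names and a deliberately naive fitness rule, nothing like the actual systems I built): learners leave traces on resources, those traces bias what gets recommended next, and value to learners thereby becomes a survival characteristic.

```python
import random

# Toy stigmergic/evolutionary recommender: each resource carries a "fitness"
# trace left by earlier learners; recommendations favour fitter resources, and
# ratings feed back into fitness, so value to learners becomes a survival trait.
# (Resource names and numbers here are invented for illustration.)
resources = {"tutorial_a": 1.0, "video_b": 1.0, "forum_thread_c": 1.0, "textbook_d": 1.0}

def recommend(resources, k=2):
    """Pick k resources with probability proportional to their current fitness."""
    names = list(resources)
    weights = [resources[n] for n in names]
    return random.choices(names, weights=weights, k=k)

def learner_feedback(resources, name, rating, decay=0.95):
    """Stigmergy: a learner's rating (0..1) reinforces the trace on one resource,
    while every trace slowly evaporates so stale resources can die off."""
    for n in resources:
        resources[n] *= decay
    resources[name] += rating

# A simulated cohort of learners who happen to find video_b most useful.
for _ in range(200):
    for name in recommend(resources):
        rating = 0.9 if name == "video_b" else 0.2
        learner_feedback(resources, name, rating)

print(sorted(resources.items(), key=lambda kv: -kv[1]))
# video_b accumulates the strongest trace and so dominates future recommendations.
```

Real systems need far more care about cold starts, gaming, and diversity, but the principle is the same: the environment is shaped by the crowd and shapes the crowd in turn.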
When I say that I “built” systems, though, I mean that I built the software part. I increasingly came to realize that the largest part of all of them was always the human part: what the individuals did, and the surrounding context in which they did it, including the norms, the processes, the rules, the structures, the hierarchies, and everything else that formed the ochlotecture, was intrinsic to their success or failure. Some of those human-enacted parts were as algorithmic as the software environments I provided and were no smarter than those used by termites (e.g. “click on the results from the top of the list or in bigger fonts”), but many others were designed, and played critical roles. This slightly more complex concept of CI played a major supporting role in my first book, providing a grounded basis for the design of social software systems that could support maximal learner control. In it I wound up offering a set of 10 design principles that addressed human, organizational, pedagogical, and tech factors, as well as emergent collective characteristics that were prerequisites if social software systems were to evolve to become educationally useful.
Collectives also formed a cornerstone of my work with Terry Anderson over the next decade or so, and our use of the term evolved further. In our first few papers, starting in 2007, we conflated the dynamic process with the individual agents who made it happen: for us back then, a collective was the people and processes (a sort of cross between my original definition and a social configuration the Soviets were once fond of) and so we treated a collective as somewhat akin to a group or a network. Before too long we realized that was dumb and separated these elements out, categorizing three primary social forms (the set, the net, and the group) that could blend, and from which collectives could emerge and interact, as a different kind of ochlotectural entity altogether. This led us to a formal abstract definition of collectives that continues to get the odd citation to this day. We wrote a book about social media and learning in which this abstract definition of collectives figured largely, and designed The Landing to take advantage of it (not well – it was a learning experience). It appears in my position paper, too.
Collectives have come back with a vengeance but wearing different clothes in my work of the last decade, including my most recent book. I am a little less inclined to use the word “collective” now because I have come to understand all intelligence as collective, almost all of it mediated and often enacted through technologies. Technologies are the assemblies we construct from stuff to do stuff, and the stuff that they do then forms some of the stuff from which we construct more stuff to do stuff. A single PC alone, for instance, might contain hundreds of billions of instances of technologies in its assembly. A shelf of books might contain almost as many, not just in words and letters but in the concepts, theories, and models they make. As for the processes of making them, editing them, manufacturing the paper and the ink, printing them, distributing them, reading them, and so on… it’s a massive, constantly evolving, ever-adapting, partly biological system, not far off from natural ecosystems in its complexity, and equally diverse.

Every use of a technology is also a technology, from words in your head to flying a space ship, and it becomes part of the stuff that can be organized by yourself or others. Through technique (technologies enacted intracranially), technologies are parts of us and we are parts of them, and that is what makes us smart. Collective behaviour in humans can occur without technologies but what makes it collective intelligence is a technological connectome that grows, adapts, evolves, replicates, and connects every one of us to every other one of us: most of what we think is the direct result of assembling what we and others, stretching back in time and outward in space, have created. The technological connectome continuously evolves as we connect and orchestrate the vast web of technologies in which we participate, creating assemblies that have never occurred the same way twice, maybe thousands of times every day: have you ever even brushed your teeth or eaten a mouthful of cereal exactly the same way twice, in your whole life? Every single one of us is doing this, and quite a few of those technologies magnify the effects, from words to drawing to numbers to writing to wheels to screws to ships to postal services to pedagogical methods to printing to newspapers to libraries to broadcast networks to the Internet to the World Wide Web to generative AI.

It is not just how we are able to be individually smart: it is an indivisible part of that smartness. Or stupidity. Whatever. The jury is out. Global warming, widening inequality, war, epidemics of obesity, lies, religious bigotry, famine and many other dire phenomena are a direct result of this collective “intelligence”, as much as Vancouver, the Mona Lisa, and space telescopes. Let’s just stick with “collective”.
The obligatory LLM connection and the big reason I’m attending the symposium
My position paper for this symposium wanders a bit circuitously towards a discussion of the collective nature of large language models (LLMs) and their consequent global impact on our education systems. LLMs are collectives in their own right, with algorithms that are not only orders of magnitude more complex than any of their predecessors, but that are unique to every instantiation of them, operating from and on vast datasets, presenting results to users who also feed those datasets. This is what makes them capable of very convincingly simulating both the hard (inflexible, correct) and the soft (flexible, creative) technique of humans, which is both their super-power and the cause of the biggest threat they pose. The danger is a) that they replace the need to learn the soft technique ourselves (not necessarily a disaster if we use them creatively in further assemblies) and, more worryingly, b) that we learn ways of being human from collectives that, though made of human stuff, are not human. They will in turn become parts of all the rest of the collectives in which we participate. This can and will change us. It is happening now, frighteningly fast, even faster and at a greater scale than similar changes that the Zuckerbergian style of social media has also brought about.
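To make that feedback loop a little more concrete, here is a deliberately crude, one-dimensional toy of my own (nothing remotely like a real LLM): suppose each generation of a model is fitted mostly to the previous generation’s most typical outputs. The diversity of what it can produce shrinks with startling speed.

```python
import random
import statistics

def next_generation(mu, sigma, n_samples=2000, keep_fraction=0.5):
    """One toy 'generation': sample from the current model, keep only the most
    typical outputs (a crude stand-in for the most probable text being reused
    and reshared), then fit the next model to what survived."""
    outputs = sorted(random.gauss(mu, sigma) for _ in range(n_samples))
    cut = int(n_samples * (1 - keep_fraction) / 2)
    survivors = outputs[cut:n_samples - cut]   # the middle slice only: tails are lost
    return statistics.mean(survivors), statistics.pstdev(survivors)

mu, sigma = 0.0, 1.0   # generation 0: a (pretend) diverse pool of human-made content
for g in range(1, 11):
    mu, sigma = next_generation(mu, sigma)
    print(f"generation {g}: diversity (sigma) = {sigma:.3f}")
# The spread collapses generation by generation: the unusual, the surprising,
# and the merely human vanish first, which is the intuition behind model collapse.
```

The numbers are meaningless in themselves; the point is the direction of travel.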
As educators, we should pay attention to this. Unfortunately, with their emphasis on explicit measurable outcomes, combined with the extrinsic lure of credentials, the ochlotecture of our chronically underfunded educational systems is not geared towards compensating for these tendencies. In fact, exactly the reverse. LLMs can already both teach and meet those explicit outcomes far more effectively than most humans, at a very compelling price, so, more and more, they will. Both students and teachers are replaceable components in such a system. The saving grace and/or problem is that those explicit outcomes, though they matter and are how we measure educational success, are not in fact the most important ends of education, although they are means to those ends.
The things that matter more are the human ways of thinking, of learning, and of seeing, that we learn while achieving such outcomes; the attitudes, values, connections, and relationships; our identities and the ways we learn to exist in our societies and cultures. It’s not just about doing and knowing: it’s about being, it’s about love, fear, wonder, and hunger. We don’t have to (and can’t) measure those because they all come for free when humans and the stuff they create are the means through which explicit outcomes are achieved. It’s an unavoidable tacit curriculum that underpins every kind of intentional and most unintentional learning we undertake, for better or (too often) for worse. It’s the (largely) non-technological consequence of the technologies in which we participate, and how we participate in them. Technologies don’t make us less human, on the whole: they are exactly what make us human.
We will learn such things from generative AIs, too, thanks to the soft technique they mimic so well, but what we will learn to be as a result will not be quite human. Worse, the outputs of the machines will begin to dominate their own inputs, and the rest will come from humans who have been changed by their interactions with them, like photocopies of photocopies, constantly and recursively degrading. In my position paper I argue for the need to therefore cherish the human parts of these new collectives in our education systems far more than we have before, and I suggest some ways of doing that. It matters not just to avoid model collapse in LLMs, but to prevent model collapse in the collective intelligence of the whole human race. I think that is quite important, and that’s the real reason I will spend some of my wedding anniversary talking with some very intelligent and influential people about it.