New paper: The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

I’m proud to be the 7th of 47 authors on this excellent new paper, led by the indefatigable Aras Bozkurt and featuring some of the most distinguished contemporary researchers in online, open, mobile, distance, e- and [insert almost any cognate sub-discipline here] learning, as well as a few of us hanging on their coat tails like me.

As the title suggests, it is a manifesto: it makes a series of statements (divided into 15 positive and 20 negative themes) about what is or what should be, and it is underpinned by a firm set of humanist pedagogical and ethical attitudes that are anything but neutral. What makes it interesting to me, though, can mostly be found in the critical insights that accompany each theme, that capture a little of the complexity of the discussions that led to them, and that add a lot of nuance. The research methodology, a modified and super-iterative Delphi design in which all participants are also authors, is, I think, an incredibly powerful approach to research in the technology of education (broadly construed) that provides rigour and accountability without succumbing to science-envy.


Notwithstanding the lion’s share of the work of leading, assembling, editing, and submitting the paper being taken on by Aras and Junhong, it was a truly collective effort, so I have very little idea what percentage of it could be described as my work. We were thinking and writing together. Being a part of that was a fantastic learning experience for many of us, one that stretched the limits of what can be done with tracked changes and comments in a Google Doc, with contributions coming in at all times of day and night and from just about every timezone, over weeks. The depth and breadth of dialogue was remarkable, as much an organic process of evolution and emergence as intelligent design, and one in which the document itself played a significant participant role. I felt a strong sense of belonging, not so much as part of a community but as part of a connectome.

For me, this epitomizes what learning technologies are all about. It would be difficult if not impossible to do this in an in-person setting: even if the researchers worked together on an online document, the simple fact that they met in person would utterly change the social dynamics, the pacing, and the structure. Indeed, even online, replicating this in a formal institutional context would be very difficult because of the power relationships, assessment requirements, motivational complexities and artificial schedules that formal institutions add to the assembly. This was an online-native way of learning of a sort I aspire to but seldom achieve in my own teaching.

The paper offers a foundational model or framework on which to build or situate further work, as well as providing a moderately succinct summary of a very significant percentage of the issues relating to generative AI and education as they exist today. Even if it only ever gets referred to by each of its 47 authors, this will get more citations than most of my papers, but the paper is highly citeable in its own right, whether you agree with its statements or not. I know I am biased but, if you’re interested in the impacts of generative AI on education, I think it is a must-read.

The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., … Asino, T. I. (2024). The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis, 16(4), 487–513. https://doi.org/10.55982/openpraxis.16.4.777

Full list of authors:

  • Aras Bozkurt
  • Junhong Xiao
  • Robert Farrow
  • John Y. H. Bai
  • Chrissi Nerantzi
  • Stephanie Moore
  • Jon Dron
  • Christian M. Stracke
  • Lenandlar Singh
  • Helen Crompton
  • Apostolos Koutropoulos
  • Evgenii Terentev
  • Angelica Pazurek
  • Mark Nichols
  • Alexander M. Sidorkin
  • Eamon Costello
  • Steven Watson
  • Dónal Mulligan
  • Sarah Honeychurch
  • Charles B. Hodges
  • Mike Sharples
  • Andrew Swindell
  • Isak Frumin
  • Ahmed Tlili
  • Patricia J. Slagter van Tryon
  • Melissa Bond
  • Maha Bali
  • Jing Leng
  • Kai Zhang
  • Mutlu Cukurova
  • Thomas K. F. Chiu
  • Kyungmee Lee
  • Stefan Hrastinski
  • Manuel B. Garcia
  • Ramesh Chander Sharma
  • Bryan Alexander
  • Olaf Zawacki-Richter
  • Henk Huijser
  • Petar Jandrić
  • Chanjin Zheng
  • Peter Shea
  • Josep M. Duart
  • Chryssa Themeli
  • Anton Vorochkov
  • Sunagül Sani-Bozkurt
  • Robert L. Moore
  • Tutaleni Iita Asino

Abstract

This manifesto critically examines the unfolding integration of Generative AI (GenAI), chatbots, and algorithms into higher education, using a collective and thoughtful approach to navigate the future of teaching and learning. GenAI, while celebrated for its potential to personalize learning, enhance efficiency, and expand educational accessibility, is far from a neutral tool. Algorithms now shape human interaction, communication, and content creation, raising profound questions about human agency and biases and values embedded in their designs. As GenAI continues to evolve, we face critical challenges in maintaining human oversight, safeguarding equity, and facilitating meaningful, authentic learning experiences. This manifesto emphasizes that GenAI is not ideologically and culturally neutral. Instead, it reflects worldviews that can reinforce existing biases and marginalize diverse voices. Furthermore, as the use of GenAI reshapes education, it risks eroding essential human elements—creativity, critical thinking, and empathy—and could displace meaningful human interactions with algorithmic solutions. This manifesto calls for robust, evidence-based research and conscious decision-making to ensure that GenAI enhances, rather than diminishes, human agency and ethical responsibility in education.

Slides from my ICEEL ’24 Keynote: “No Teacher Left Behind: Surviving Transformation”

Here are the slides from my keynote at the 8th International Conference on Education and E-Learning in Tokyo yesterday. Sadly, I was not actually in Tokyo for this, but the online integration was well done and there was some good audience interaction. I am also the conference chair (an honorary title) so I may be a bit biased, but I think it is a really good conference, with an increasingly rare blend of both the tech and the pedagogical aspects of the field, and some wonderfully diverse keynotes ranging in subject matter from the hardest computer science to reflections on literature and love (thanks to its collocation with ICLLL, a literature and linguistics conference). My keynote was somewhere in between, and deliberately targeted at the conference theme, “Transformative Learning in the Digital Era: Navigating Innovation and Inclusion.”

As my starting point for the talk I introduced the concept of the technological connectome, about which I have just written a paper (currently under revision, hopefully due for publication in a forthcoming issue of the new Journal of Open, Distance, and Digital Education), which is essentially a way of talking about extended cognition from a technological rather than a cognitive perspective. From there I moved on to the adjacent possible and the exponential growth in technology that has, over the past century or so, reached such a breakneck rate of change that innovations such as generative AI, the transformation I particularly focused on (because it is topical), can transform vast swathes of culture and practice in months if not weeks. This is a bit of a problem for traditional educators, who are as unprepared as anyone else for it, but who find themselves in a system that could not be more vulnerable to the consequences. At the very least it disrupts the learning-outcomes-driven, teacher-centric model of teaching that still massively dominates institutional learning the world over, both in the mockery it makes of traditional assessment practices and in the fact that generative AIs make far better teachers if all you care about are the measurable outcomes.

The solutions I presented and that formed the bulk of the talk, largely informed by the model of education presented in How Education Works, were mostly pretty traditional, emphasizing the value of community, and of passion for learning, along with caring about, respecting, and supporting learners. There were also some slightly less conventional but widely held perspectives on assessment, plus a bit of complexivist thinking about celebrating the many teachers and acknowledging the technological connectome as the means, the object and the subject of learning, but nothing Earth-shatteringly novel. I think this is as it should be. We don’t need new values and attitudes; we just need to emphasize those that are learning-positive rather than the increasingly mainstream learning-negative, outcomes-driven, externally regulated approaches that the cult of measurement imposes on us.

Post-secondary institutions have had to grapple with their learning-antagonistic role of summative assessment since not long after their inception, so this is not a new problem but, until recent decades, the two roles largely maintained an uneasy truce. A great deal of the impetus for the shift has come from expanding access to PSE. This has resulted in students who are less able, less willing, and less well-supported than their forebears, who were, on average, far more advantaged in ability, motivation, and unencumbered time, simply because fewer were able to get in. In the past, teachers hardly needed to teach. The students were already very capable and had few other demands on their time (like working to get through college), so they just needed to hang out with smart people, some of whom knew the subject and could guide them through it, helping them to know what to learn and whether they had been successful, along with the time and resources to support their learning. Teachers could be confident that, as long as students had the resources (libraries, lecture notes, study time, other students), they would be sufficiently driven by the need to pass the assessments and/or intrinsic interest that they could largely be left to their own devices (OK, a slight caricature, but not far off the reality).

Unfortunately, though this is no longer even close to the norm, it is still the model on which most universities are based. Most of the time, professors are still hired because of their research skills, not teaching ability, and it is relatively rare that they are expected to receive more than the most perfunctory training, let alone education, in how to teach. Those with an interest usually have opportunities to develop their skills but, if they do not, there are few consequences. Thanks to the technological connectome, the rewards and punishments of credentials continue to do the job well enough, notwithstanding the vast amounts of cheating, satisficing, student suffering, and lost love of learning that ensue. There are still plenty of teachers: students have textbooks, YouTube tutorials, other students, help sites, and ChatGPT, to name but a few, with more every day. This is probably all that is propping up a fundamentally dysfunctional system. Increasingly, the primary value of post-secondary education comes to lie in its credentialling function.

No one who wants to teach wants this, but virtually all of those who teach in universities are the ones who succeeded in retaining their love of learning for its own sake despite it, so they find it hard to understand students who don’t. Too many (though, I believe, a minority) are positively hostile to their students as a result, believing that most students are lazy and willing to cheat or otherwise game the system, and they set up elaborate means of control and gotchas to trap them. The majority, who want the best for their students, are also to blame, however, seeing their purpose as being to improve grades, using “learning science” (which is like using colour theory to paint: useful, not essential) to develop methods that will, on average, do so more effectively. In fairness, though grades are not the purpose, they are not wrong about the need to teach the measurable stuff well: it does matter that students achieve the skills and knowledge they set out to achieve. However, that is only part of the purpose. Mostly, education is a means to less measurable ends: of forming identities, attitudes, values, ways of relating to others, ways of thinking, and ways of being. You don’t need the best teaching methods to achieve that: you just need to care, and to create environments and structures that support stuff like community, diversity, connection, sharing, openness, collaboration, play, and passion.

The keynote was recorded but I am not sure if or when it will be available. If it is released on a public site, I will share it here.

Announcing the First International Symposium on Educating for Collective Intelligence (and some thoughts on collective intelligence)

First International Symposium on Educating for Collective Intelligence | UTS:CIC

Free-to-register International online symposium, December 5th, 2024, 12-3pm PST


This is going to be an important symposium, I think.

I will be taking 3 very precious hours out of my wedding anniversary to attend, in fairness unintentionally: I did not do the timezone conversion when I submitted my paper, so I thought it was the next day. However, I have not cancelled, despite the potentially dire consequences, partly because the line-up of speakers is wonderful, partly because we all use the words “collective intelligence” (CI) but come from diverse disciplinary areas and sometimes mean very different things by them (so there will be some potentially inspiring conversations), and partly for a bigger reason that I will get to at the end of this post. You can read abstracts and most of the position papers on the symposium website.

In my own position paper I have invented the term ochlotecture (from the Classical Greek ὄχλος (ochlos), meaning something like “multitude”, and τέκτων (tektōn), meaning “builder”) to describe the structures and processes of a collection of people, whether it be a small seminar group, a network of researchers, or a set of adherents to a world religion. An ochlotecture includes elements like names, physical/virtual spaces, structural hierarchies, rules, norms, mythologies, vocabularies, and purposes, as well as emergent phenomena occurring through individual and subgroup interactions, most notably the recursive cycle of information capture, processing, and (re)presentation that I think characterizes any CI. Through this lens, I can see both what is common and what distinguishes the different kinds of CI described in these position papers a bit more clearly. In fact, my own use of the term has changed a few times over the years, so it helps me make sense of my own thoughts on the matter too.

Where I’ve come from that leads me here

I have been researching CI and education for a long time. Initially, I used the term very literally, to describe something very distinct from individual intelligence and largely independent of it. My PhD, started in 1997, was inspired by the observation that (even then) there were at least tens of thousands of very good resources (people, discussions, tutorials, references, videos, courseware, etc.) openly available on the Web to support learners in most subject areas, enough to meet almost any conceivable learning need. The problem was, and remains, how to find the right ones. These were pre-Google times, but even the good Google of olden days (a classic application of collective intelligence as I was using the term) only showed the most implicitly popular, not those that would best meet a particular learner’s needs. As a novice teacher, I also observed that, in a typical classroom, the students’ combined knowledge, and their ability to seek more of it, far exceeded my own. I therefore hit upon the idea of using a nature-inspired evolutionary approach to collectively discover and recommend resources, which led me very quickly into the realm of evolutionary theory and thence to the dynamics of self-organizing systems, complex adaptive systems, stigmergy, flocking, city planning, markets, and collective intelligence.

And so I became an ochlotect. I built a series of self-organizing social software systems that used stuff like social navigation (stigmergy) and evolutionary and flocking algorithms to create environments that both shaped and were shaped by the crowd. Acknowledging that “intelligence” is a problematic word, I simply called these collectives, a name inspired by Star Trek TNG’s Borg (the pre-Borg-Queen Borg, before the writers got bored or lazy). The intelligence of a “pure” collective as I conceived it back then was largely to be found in the algorithm, not the individual agents. Human stock markets are no smarter than termite mounds by this way of thinking (and they are not). I was trying to amplify the intelligence of crowds while avoiding the stupidity of mobs, by creating interfaces and algorithms that made value to learners a survival characteristic. I was building systems that played some of the roles of a teacher but that were powered by collectives consisting of learners. Some years later, Mark Zuckerberg hit on the idea of doing the exact opposite, with considerably greater success, making a virtue out of systems that amplified collective stupidity, but the general principles behind both EdgeRank and my algorithms were similar.
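The general shape of such a system can be sketched in a few lines of Python. To be clear, this is only an illustrative reconstruction under my own assumptions (the `Collective` class, the decay rate, and the simple additive reinforcement are all hypothetical, not the actual algorithms of the systems I built): learner choices reinforce a resource’s weight, periodic decay applies the selection pressure, and recommendations follow the strongest trails.

```python
class Collective:
    """A toy sketch of a stigmergic recommender: a hypothetical
    reconstruction of the general idea, not any actual system."""

    def __init__(self, resources, decay=0.9):
        # Every resource starts with an equal, small "trail" weight.
        self.weights = {r: 1.0 for r in resources}
        self.decay = decay

    def visit(self, resource):
        # A learner choosing a resource strengthens its trail (stigmergy):
        # the crowd shapes the environment that later learners navigate.
        self.weights[resource] += 1.0

    def evolve(self):
        # Periodic decay is the selection pressure: resources that stop
        # attracting learners fade away, making value to learners a
        # survival characteristic.
        for r in self.weights:
            self.weights[r] *= self.decay

    def recommend(self, n=3):
        # Surface the strongest trails, which in turn attract more
        # visits: the environment shapes the crowd.
        return sorted(self.weights, key=self.weights.get, reverse=True)[:n]
```

Looping visit, evolve, and recommend over many learners closes the feedback cycle: what the collective recommends influences what gets visited, which changes what gets recommended.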

When I say that I “built” systems, though, I mean that I built the software part. I increasingly came to realize that the largest part of all of them was always the human part: what the individuals did, and the surrounding context in which they did it, including the norms, the processes, the rules, the structures, the hierarchies, and everything else that formed the ochlotecture, was intrinsic to their success or failure. Some of those human-enacted parts were as algorithmic as the software environments I provided and were no smarter than those used by termites (e.g. “click on the results from the top of the list or in bigger fonts”), but many others were designed, and played critical roles. This slightly more complex concept of CI played a major supporting role in my first book, providing a grounded basis for the design of social software systems that could support maximal learner control. In it I wound up offering a set of 10 design principles that addressed human, organizational, pedagogical, and tech factors, as well as emergent collective characteristics, that were prerequisites if social software systems were to evolve to become educationally useful.

Collectives also formed a cornerstone of my work with Terry Anderson over the next decade or so, and our use of the term evolved further. In our first few papers, starting in 2007, we conflated the dynamic process with the individual agents who made it happen: for us back then, a collective was the people and the processes (a sort of cross between my original definition and a social configuration the Soviets were once fond of), and so we treated a collective as somewhat akin to a group or a network. Before too long we realized that was dumb and separated these elements out, categorizing three primary social forms (the set, the net, and the group) that could blend, and from which collectives could emerge and interact as a different kind of ochlotectural entity altogether. This led us to a formal abstract definition of collectives that continues to get the odd citation to this day. We wrote a book about social media and learning in which this abstract definition of collectives figured largely, and designed The Landing to take advantage of it (not well – it was a learning experience). It appears in my position paper, too.

Collectives have come back with a vengeance but wearing different clothes in my work of the last decade, including my most recent book. I am a little less inclined to use the word “collective” now because I have come to understand all intelligence as collective, almost all of it mediated and often enacted through technologies. Technologies are the assemblies we construct from stuff to do stuff, and the stuff that they do then forms some of the stuff from which we construct more stuff to do stuff. A single PC alone, for instance, might contain hundreds of billions of instances of technologies in its assembly. A shelf of books might contain almost as many, not just in words and letters but in the concepts, theories, and models they make. As for the processes of making them, editing them, manufacturing the paper and the ink, printing them, distributing them, reading them, and so on… it’s a massive, constantly evolving, ever-adapting, partly biological system, not far off from natural ecosystems in its complexity, and equally diverse. Every use of a technology is also a technology, from words in your head to flying a space ship, and it becomes part of the stuff that can be organized by yourself or others. Through technique (technologies enacted intracranially), technologies are parts of us and we are parts of them, and that is what makes us smart.  Collective behaviour in humans can occur without technologies but what makes it collective intelligence is a technological connectome that grows, adapts, evolves, replicates, and connects every one of us to every other one of us: most of what we think is the direct result of assembling what we and others, stretching back in time and outward in space, have created. 
The technological connectome continuously evolves as we connect and orchestrate the vast web of technologies in which we participate, creating assemblies that have never occurred the same way twice, maybe thousands of times every day: have you ever even brushed your teeth or eaten a mouthful of cereal exactly the same way twice, in your whole life? Every single one of us is doing this, and quite a few of those technologies magnify the effects, from words to drawing to numbers to writing to wheels to screws to ships to postal services to pedagogical methods to printing to newspapers to libraries to broadcast networks to the Internet to the World Wide Web to generative AI. It is not just how we are able to be individually smart: it is an indivisible part of that smartness. Or stupidity. Whatever. The jury is out. Global warming, widening inequality, war, epidemics of obesity, lies, religious bigotry, famine, and many other dire phenomena are a direct result of this collective “intelligence”, as much as Vancouver, the Mona Lisa, and space telescopes. Let’s just stick with “collective”.

The obligatory LLM connection and the big reason I’m attending the symposium

My position paper for this symposium wanders a bit circuitously towards a discussion of the collective nature of large language models (LLMs) and their consequent global impact on our education systems. LLMs are collectives in their own right, with algorithms that are not only orders of magnitude more complex than any of their predecessors but that are unique to every instantiation of them, operating from and on vast datasets, and presenting results to users who also feed those datasets. This is what makes them capable of very convincingly simulating both the hard (inflexible, correct) and the soft (flexible, creative) technique of humans, which is both their super-power and the cause of the biggest threat they pose. The dangers are a) that they replace the need to learn soft technique ourselves (not necessarily a disaster if we use them creatively in further assemblies) and, more worryingly, b) that we learn ways of being human from collectives that, though made of human stuff, are not human. They will in turn become parts of all the rest of the collectives in which we participate. This can and will change us. It is happening now, frighteningly fast, even faster and at a greater scale than the similar changes that the Zuckerbergian style of social media has brought about.

As educators, we should pay attention to this. Unfortunately, with their emphasis on explicit measurable outcomes, combined with the extrinsic lure of credentials, the ochlotectures of our chronically underfunded educational systems are not geared towards compensating for these tendencies. In fact, exactly the reverse. LLMs can already both teach and meet those explicit outcomes far more effectively than most humans, at a very compelling price, so, more and more, they will. Both students and teachers are replaceable components in such a system. The saving grace and/or problem is that, though they matter, and though they are how we measure educational success, those explicit outcomes are not in fact the most important ends of education, albeit that they are means to those ends.

The things that matter more are the human ways of thinking, of learning, and of seeing, that we learn while achieving such outcomes; the attitudes, values, connections, and relationships; our identities and the ways we learn to exist in our societies and cultures. It’s not just about doing and knowing: it’s about being, it’s about love, fear, wonder, and hunger. We don’t have to (and can’t) measure those because they all come for free when humans and the stuff they create are the means through which explicit outcomes are achieved. It’s an unavoidable tacit curriculum that underpins every kind of intentional and most unintentional learning we undertake, for better or (too often) for worse. It’s the (largely) non-technological consequence of the technologies in which we participate, and how we participate in them. Technologies don’t make us less human, on the whole: they are exactly what make us human.

We will learn such things from generative AIs, too, thanks to the soft technique they mimic so well, but what we will learn to be as a result will not be quite human. Worse, the outputs of the machines will begin to dominate their own inputs, and the rest will come from humans who have been changed by their interactions with them, like photocopies of photocopies, constantly and recursively degrading. In my position paper I argue for the need to therefore cherish the human parts of these new collectives in our education systems far more than we have before, and I suggest some ways of doing that. It matters not just to avoid model collapse in LLMs, but to prevent model collapse in the collective intelligence of the whole human race. I think that is quite important, and that’s the real reason I will spend some of my wedding anniversary talking with some very intelligent and influential people about it.


The Second Coming

For some reason I can’t get this poem out of my head today. Again.

The Second Coming

By W.B. Yeats

Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.

Surely some revelation is at hand;
Surely the Second Coming is at hand.
The Second Coming! Hardly are those words out
When a vast image out of Spiritus Mundi
Troubles my sight: somewhere in sands of the desert
A shape with lion body and the head of a man,
A gaze blank and pitiless as the sun,
Is moving its slow thighs, while all about it
Reel shadows of the indignant desert birds.
The darkness drops again; but now I know
That twenty centuries of stony sleep
Were vexed to nightmare by a rocking cradle,
And what rough beast, its hour come round at last,
Slouches towards Bethlehem to be born?

Video and slides from my webinar, How to Be an Educational Technology: An Entangled Perspective on Teaching

For those with an interest, here are the slides from my webinar for Contact North | Contact Nord that I gave today: How to be an educational technology (warning: large download, about 32MB).

Here is a link to the video of the session.

I was invited to do this webinar because my book (How Education Works: Teaching, Technology, and Technique, briefly reviewed on the Contact North | Contact Nord site last year) was among the top 5 most viewed books of the year, so that was what the talk was about. Among the most central messages of the book, and the ones that I was trying to get across in this presentation, were:

  1. that how we do teaching matters more than what we do (“T’ain’t what you do, it’s the way that you do it”) and
  2. that we can only understand the process if we examine the whole complex assembly of teaching (very much including the technique of all who contribute to it, including learners, textbooks, and room designers) not just the individual parts.

Along the way I had a few other things to say about why that must be the case, the nature of teaching, the nature of collective cognition, and some of the profound consequences of seeing the world this way. I had fun persuading ChatGPT to illustrate the slides in a style that was not that of Richard Scarry (ChatGPT would not do that, for copyright reasons) but that was reminiscent of it, so there are lots of cute animals doing stuff with technologies on the slides.

I rushed and rambled, I sang, I fumbled and stumbled, but I think it sparked some interest and critical thinking. Even if it didn’t, some learning happened, and that is always a good thing. The conversations in the chat went too fast for me to follow but I think there were some good ones. If nothing else, though I was very nervous, I had fun, and it was lovely to notice a fair number of friends, colleagues, and even the odd relative among the audience. Thank you all who were there, and thank you anyone who catches the recording later.

How AI Teaches Its Children: slides and reflections from my keynote for AISUMMIT-2024

Late last night I gave the opening keynote at the Global AI Summit 2024, International Conference on Artificial Intelligence and Emerging Technology, hosted by Bennett University, Noida, India. My talk was online. Here are the slides: How AI Teaches Its Children. It was recorded, but I don’t know when, whether, or with whom it will be shared: if possible, I will add it to this post.

a robot teaching children in the 18th Century

For those who have been following my thoughts on generative AI there will be few surprises in my slides, and I only had half an hour, so there was not much time to go into the nuances. The title is an allusion to Pestalozzi’s 18th Century tract, How Gertrude Teaches Her Children, which has been phenomenally influential in the development of education systems around the world and continues to have impact to this day. Much of it is actually great: Pestalozzi championed very child-centric teaching approaches that leveraged the skills and passions of their teachers. He recommended methods of teaching that made full use of the creativity and idiosyncratic knowledge the teachers possessed, and that were very much concerned with helping children to develop their own interests, values, and attitudes. However, some of the ideas – and those that have ultimately been more influential – were decidedly problematic, as is succinctly summarized in this passage from page 41:

I believe it is not possible for common popular instruction to advance a step, so long as formulas of instruction are not found which make the teacher, at least in the elementary stages of knowledge, merely the mechanical tool of a method, the result of which springs from the nature of the formulas and not from the skill of the man who uses it.

This is almost the exact opposite of the central argument of my book, How Education Works, that mechanical methods are not the most important part of a soft technology such as teaching: what usually matters more is how it is done, not just what is done. You can use good methods badly and bad methods well because you are a participant in the instantiation of a technology, responsible for the complete orchestration of the parts, not just a user of them.

As usual, in the talk I applied a bit of co-participation theory to explain why I am both enthralled by and fearful of the consequences of generative AIs because they are the first technologies we have ever built that can use other technologies in ways that resemble how we use them. Previous technologies only reproduced hard technique – the explicit methods we use that make us part of the technology. Generative AIs reproduce soft technique, assembling and organizing phenomena in endlessly novel ways to act as creators of the technology. They are active, not passive participants.

Two dangers

I see two essential risks in delegating soft technique to AIs. The first is not too terrible: because we will increasingly hand over to machines creative activities we would otherwise have performed ourselves, we will not learn those skills ourselves. I mourn the potential passing of hard skills in (say) drawing, or writing, or making music, but the bigger risk is that we will lose the soft skills that come from learning them: the things we do with the hard skills, the capacity to be creative.

That said, like most technologies, generative AIs are ratchets that let us do more than we could achieve alone. In the past week, for instance, I “wrote” an app in less than a day that would have taken me many weeks without AI assistance. Though it followed a spec that I had carefully and creatively written, the process replaced the soft skills that I would have applied had I written it myself: the little creative flourishes and rabbit holes of idea-following that are inevitable in any creation process. When we create, we do so in conversation with the hard technologies available to us (including our own technique), using the affordances and constraints they provide to grasp adjacent possibles. Every word we utter or wheel we attach to an axle opens and closes opportunities for what we can do next.

With that in mind, the app that the system created was just the beginning. Having seen the adjacent possibles of the finished app, I have spent too many hours in subsequent days extending and refining it to do things that, in the past, I would not have bothered to do because they would have been too difficult. It has become part of my own extended cognition, starting higher up the tree than I would have reached alone. This has also greatly improved my own coding skills because, inevitably, after many iterations, the AI and/or I started to introduce bugs, some of which have been quite subtle and intractable. I did try to get the AI to examine the whole code (now over 2000 lines of JavaScript) and rewrite it, or at least point out the flaws, but that failed abysmally, amply illustrating both the strength of LLMs as creative participants in technologies and their limitations in being unable to do the same thing the same way twice. As a result, the AI and I have had to act as partners trying to figure out what is wrong. Often, though the AI has come up with workable ideas, its own solution has been a little dumb, but I could build on it to solve the problem better. Though I have not actually written much of the code myself, I think my creative role might have been greater than it would have been had I written every line.

Similarly for the images I used to illustrate the talk: I could not possibly have drawn them alone but, once the AI had done so, I engaged in a creative conversation to try (sometimes very unsuccessfully) to get it to reproduce what I had in mind. Often, though, it did things that sparked new ideas so, again, it became a partner in creation, sharing in my cognition and sparking my own invention. It was very much not just a tool: it was a co-worker, with different and complementary skills, and “ideas” of its own. I think this is a good thing. Yes, perhaps it is a pity that those who follow us may not be able to draw with a pen (and it is more than a little worrying to think about the training sets that future AIs will learn to draw from), but they will have new ways of being creative.

Like all learning, both these activities changed me: not just my skills, but my ways of thinking. That leads me to the bigger risk.

Learning our humanity from machines

The second risk is more troubling: that we will learn ways of being human from machines. This is because of the tacit curriculum that comes with every learning interaction. When we learn from others, whether they are actively teaching, writing textbooks, showing us, or chatting with us, we don’t just learn methods of doing things: we learn values, attitudes, ways of thinking, ways of understanding, and ways of being at the same time. So far we have only learned that kind of thing from humans (sometimes mediated through code) and it has come for free with all the other stuff, but now we are doing so from machines. Those machines are very much like us because 99% of what they are – their training sets – is what we have made, but they are not the same. Though LLMs are embodiments of our own collective intelligence, they don’t so much lack values, attitudes, ways of thinking, and so on, as have any and all of them. Every implicit value and attitude of the people whose work constituted their training set is available to them, and they can become whatever we want them to be. Interacting with them is, in this sense, very much not like interacting with something created by a human, let alone with humans more directly. They have no identity, no relationships, no purposes, no passion, no life history, and no future plans. Nothing matters to them.

To make matters worse, there is programmed and trained stuff on top of that, like their interminably cheery patience, which might not teach us great ways of interacting with others. And it will certainly affect how we interact with others, because we will spend more and more time engaged with these machines rather than with actual humans: the economic and practical benefits make this an absolute certainty. LLMs also use explicit coding to remove or massage data from the input or output, reflecting the values and cultures of their creators, for better or worse. I was giving this talk in India to a predominantly Indian audience of AI researchers, every single one of whom was making extensive use of predominantly American LLMs like ChatGPT, Gemini, or Claude, and (inevitably) learning ways of thinking and doing from them. This is way more powerful than Hollywood as an instrument of Americanization.

I am concerned about how this will change our cultures and our selves because it is happening at phenomenal and global scale, and in a world that is unprepared for the consequences, the designed parts of which assume a very different context. One of generative AI’s greatest potential benefits lies in providing “high quality” education at low cost to those who are currently denied it, but those low costs will make it increasingly compelling for everyone. However, because of those designs that assume a different context, “quality”, in this sense, relates to the achievement of explicit learning outcomes: this is Pestalozzi’s method writ large. Generative AIs are great at teaching what we want to learn – the stuff we could write down as learning objectives or intended outcomes – so, as that is the way we have designed our educational systems (and our general attitudes to learning new skills), of course we will use them for that purpose. However, that cannot be done without teaching the other stuff – the tacit curriculum – which is ultimately more important because it shapes how we are in the world, not just the skills we employ to be that way. We might not have designed our educational systems to do that, and we seldom if ever think about it when teaching ourselves or receiving training to do something, but it is perhaps education’s most important role.

By way of illustration, I find it hugely bothersome that generative AIs are being used to write children’s stories (and, increasingly, videos), and I hope you feel some unease too, because those stories – not the facts in them but the lessons they teach about things that matter – are intrinsic to children becoming who they will become. However, though perhaps of less magnitude, the same issue relates to learning everything from how to change a plug to how to philosophize: we don’t stop learning from the underlying stories behind those just because we have grown up. I fear that educators, formal or otherwise, will become victims of the McNamara Fallacy, setting our goals to achieve what is easily measurable while ignoring what cannot (easily) be measured, and so rush blindly towards subtly new ways of thinking and acting that few will even notice, until the changes are so widespread they cannot be reversed. Whether better or worse, it will very definitely be different, so it really matters that we examine and understand where this is all leading. This is the time, I believe, to reclaim and revalorize the value of things that are human, before it is too late. This is the time to recognize education (far from only formal) as being how we become who we are, individually and collectively, not just how we meet planned learning outcomes. And I think (at least hope) that we will do that. We will, I hope, value more than ever the fact that something – be it a lesson plan or a book or a screwdriver – is made by someone, or by a machine that has been explicitly programmed by someone. We will, I hope, better recognize the relationships between us that it embodies, the ways it teaches us things it does not mean to teach, and the meaning it has in our lives as a result. This might happen by itself – already there is a backlash against the bland output of countless bots – but it might not be a bad idea to help it along when we can.
This post (and my talk last night) has been one such small nudge.

Forthcoming webinar, September 24, 2024 – How to be an Educational Technology: An Entangled Perspective on Teaching

This is an announcement for an event I’ll be facilitating as part of TeachOnline’s excellent ongoing series of webinars. In it I will be discussing some of the key ideas of my open book, How Education Works, and exploring what they imply about how we should teach and, more broadly, how we should design systems of education. It will be fun. It will be educational. There may be music.

Here are the details:

Date: Tuesday, September 24, 2024

Time: 1:00 PM – 2:00 PM (Eastern Time) (find your time zone here)

Register (free of charge) for the event here


Source: How to be an Educational Technology: An Entangled Perspective on Teaching | Welcome to TeachOnline

On the importance of place

Distance learners and teachers in different kinds of space

I had the great pleasure of being invited to the Open University of the Netherlands and, later in the day, to EdLab, Maastricht University a few weeks ago, giving a slightly different talk in each place based on some of the main themes in my most recent book, How Education Works. Although I adapted my slides a little for each audience, with different titles and a few different slides adjusted to the contexts, I could probably have used either presentation interchangeably. In fact, I could as easily have used the slides from my SITE keynote on which both were quite closely based (which is why I am not sharing them here). As well as most of the same slides, I used some of the same words, many of the same examples, and several of the same anecdotes. For the most part, this was essentially the same presentation given twice. Except, of course, it really, really wasn’t. In fact, the two events could barely have been more different, and what everyone (including me) learned was significantly different in each session.

This is highly self-referential. One of the big points of the book is that it only ever makes sense to consider the entire orchestration, including the roles that learners play in making sense of all the many components of the assembly, designed for the purpose and otherwise. The slides, structure, and content did provide the theme and a certain amount of hardness, but what we (collectively) did with them led to two very different learning experiences. They shared some components and purposes, just as a car, a truck, and a bicycle share some of the same components and purposes, but the assemblies and orchestrations were quite different, leading to very different outcomes. Some of the variation was planned in advance, including an hour of conversation at the end of each presentation and a structure that encouraged dialogue at various points along the way: these were as much workshops as presentations. However, much of the variance occurred not because of any planning but because of the locations themselves. One of the rooms was a well-appointed conventional lecture theatre, the other an airy space with grouped tables and huge windows looking out on a busy and attractive campus. In the lecture theatre I essentially gave a lecture: the interactive parts were very much staged, and I had to devise ways to make them work. In the airy room, I had a conversation and had to devise ways to maintain some structure to the process, which was delightfully disrupted by the occasional passing road train and the very tangible lives of others going on outside, as well as by an innately more intimate and conversational atmosphere enabled (not entailed) by the layout. Other parts of the context mattered too: the time of day, the temperature, the different needs and interests of the audience, the fact that one occurred in the midst of planning for a major annual event, and so on. All of this had a big effect on how I and others behaved, and on what and how people learned.
From one perspective, in both talks, I was sculpting the available affordances and constraints to achieve my intended ends but, from another equally valid point of view, I was being sculpted by them. The creators and maintainers of the rooms and I were teaching partners, coparticipants in the learning process. Pedagogically, and despite the various things I did to assemble the missing parts in each, they were significantly different learning technologies.

The complexity of distance teaching

Train journeys are great contexts for uninterrupted reflection (trains teach too) so, sitting on the train on my journey back the next day, I began to reflect on what all of this means for my usual teaching practice, and made some notes on which this post is based (notebooks teach, too). I am a distance educator by trade and, as a rule, with exceptions for work-based learning, practicums, co-ops, placements, and a few other limited contexts, we distance educators rarely even acknowledge that students occupy a physical space, let alone adapt to it. We might sometimes encourage students to use things in their environments as part of a learning activity, but we rarely change our teaching on the fly as a result of the differences between those environments. As I have previously observed, the problem is exacerbated by the illusion that online systems are environments (in the sense of being providers of the context in which we learn) and that we believe we can observe what happens in them. They are not, and we cannot. They are parts of the learners’ own environments, and all we can (ethically) observe are interactions with our designed systems, not the behaviour of the learners within the spaces that they occupy. It is as hard for students to understand our context as it is for us to understand theirs, and that matters too. It makes it trickier to model ways of thinking and approaches to problem solving, for example, if the teacher occupies a different context.

This matters little for some of the harder elements of the teaching process. Information provision, resource design, planning, and at least some forms of assessment and feedback are at least as easy to do at a distance as in person. We can certainly do those and make a point of doing them well, thereby providing a little counterbalance. However, facilitation, role modelling, guidance, supporting motivation, fostering networks, monitoring of learning, responsive adaptation, and many other significant teaching roles are more complex to perform because of how little is known about learning activities within the learner’s environment. As Peter Goodyear has put it, matter matters. The more that the designated teacher can understand that, the more effective they can be in helping learners to succeed.

Because we are not so able to adapt our teaching to the context, distance learning (more accurately, distance teaching) mostly works because students are the most important teachers, and the pedagogies they add to the raw materials we provide do most of the heavy lifting. Given some shared resources and guided interactions, they are the ones who perform most of the kinds of orchestration and assembly that I added to my two talks in the Netherlands; they are the ones who both adapt and adapt to their spaces for learning. Those better able to do this in the first place tend to do better in the long run, regardless of subject interest or innate ability. This is reflected in the results. In my faculty, on average, more than 95% of our graduate students – who have already proven themselves to be successful learners and so are better able to teach themselves – succeed on any given course, in the sense of reaching the end and achieving a passing grade. 70% of our undergraduates, on the other hand, are the first in their family to take a degree. Many have taken years or even decades out of formal education, and many had poor experiences in school. On average, therefore, they typically have fewer skills in teaching themselves in an academic context (which is a big thing to learn about in and of itself) and, because we cannot adapt our teaching to what we cannot perceive, we are of little assistance either. Without the shared physical context, we can only guess and anticipate when and where they might be learning, and we seldom have the faintest idea how it occurs, save through sparse digital signals that they leave in discussion forums or submitted assignments, or coarse statistics based on web page views. It is therefore no surprise that, in a few undergraduate core courses within my faculty, success rates are less than 30%, and that (on average) only about half of all our students are successful, with rates that improve dramatically in more senior-level courses.
The vast majority of those who get to the end pass. Most who don’t succeed drop out. It doesn’t take many core courses with success rates of 30% to eliminate nearly 95% of students by the end of a program.
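To make the arithmetic concrete, here is a minimal sketch (my own illustration, not drawn from the post's data, and assuming independence between courses, which is of course a simplification) of how quickly low per-course success rates compound:

```python
def surviving_fraction(pass_rate: float, n_courses: int) -> float:
    """Fraction of a cohort still on track after n_courses core courses,
    assuming each course is passed independently with probability pass_rate."""
    return pass_rate ** n_courses

# With a 30% success rate per core course:
for n in range(1, 4):
    remaining = surviving_fraction(0.3, n)
    print(f"after {n} core course(s): {remaining:.1%} of students remain")
```

After two such courses only 9% of a cohort remains, and after three fewer than 3%, which is how a handful of low-success core courses can eliminate the large majority of students by the end of a program.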

Teaching with a context

We can better deal with this if we let go of the illusion that we can be in control and, at the same time, find better ways to stay close: to make the learning process, including the environment in which it occurs, as visible as possible. It is emphatically not about capturing digital traces and using analytics to reveal patterns. Though such techniques can have a place in helping to build a picture of how learners are responding to our deliberate acts of teaching, they are not even close to a solution for understanding learners in context. Most learning analytics and adaptive systems are McNamara Machines, blind to most of what matters. There’s a huge risk that we start by measuring the easily measurable and then wind up not just ignoring but implicitly denying that the things we cannot measure are important. Yes, such systems might help us to help students who are going to get to the end anyway to get better grades, but they tell us very little about (for instance) how students are learning, what obstacles they face, or how we could help them orchestrate their learning in the contexts in which they live. Could generative AI help with that? I think it might. In conversation, an AI agent could ask leading questions, could recommend things to do with the space, could aggregate and report back on how and where students seem to be learning. Unlike traditional adaptive systems, generative AI can play an active discovery role and make broader connections that have not been scripted. However, this is not and should not be a substitute for an actual teacher: rather, it should mediate between humans, amplifying and feeding back, not guiding or informing.

For the most part, though, I think the trick is to use pedagogical designs that are made to support flexibility: designs that encourage learners to connect with the spaces they live in and the people they share them with, that support them in understanding the impact of the environments they are in, and that, as much as possible, incorporate conduits making it likely that participants will share information about their contexts and what they are doing in them, such as reflective learning diaries, shared videos or audio, or introductory discussions intended to elicit that information. A good trick that I’ve used in the past, for example, is to ask students to send virtual postcards showing where they are and what they have been doing (nowadays a microblog post might serve a similar role). Similarly, it can be useful to start discussions that seek ideas about how to configure time and space for learning, sharing problems and solutions from the students themselves. Modelling behaviours can help: in my own communications, I try to reveal things about where I am and what I have been doing that provide some context and background story, especially when it relates to how I am changing as a result of our shared endeavours. Building social interaction opportunities into every inhabited virtual space would help a lot, making it more likely that students will share more of what they are doing and increasing awareness of both the presence and the non-presence (the difference in context) of others. Learning management systems are almost universally utter rubbish for that, typically relegating interactions to controlled areas of course sites and encouraging instrumental and ephemeral discussions that largely ignore context. We need more, more pervasively, and we need better.

None of this will replicate the rich, shared environments of in-person learning, and that is not the point. This is about acknowledging the differences in online and distance learning and building different orchestrations around them. On the whole, the independence of distance students is an extremely good thing, with great motivational benefits, not to mention convenience, much lower environmental harm, exploitable diversity, and many other valuable features that are hard to reproduce in person. When it works, it works very well. We just need to make it work better for those for whom that is not enough. To do that, we need to understand the whole assembly, not just the pieces we provide.

Sets, nets and groups revisited

Here are the slides from a talk I gave earlier today, hosted by George Siemens and his fine team of people at Human Systems. Terry Anderson helped me to put the slides together, and offered some great insights and commentary after the presentation but I am largely to blame for the presentation itself. Our brief was to talk about sets, nets and groups, the theme of our last book Teaching Crowds: learning and social media and much of our work together since 2007 but, as I was the one presenting, I bent it a little towards generative AI and my own intertwingled perspective on technologies and collective cognition, which is most fully developed (so far) in my most recent book, How Education Works: Teaching, Technology, and Technique. If you’re not familiar with our model of sets, nets, groups and collectives, there’s a brief overview on the Teaching Crowds website. It’s a little long in the tooth but I think it is still useful and will help to frame what follows.

A recreation of the famous New Yorker cartoon, “On the Internet no one knows you are a dog” – but it is a robot dog

The key new insight that appears for the first time in this presentation is that, rather than being a fundamental social form in their own right, groups consist of technological processes that make use of and help to engender/give shape to the more fundamental forms of nets and sets. At least, I think they do: I need to think and talk some more about this, at least with Terry, and work it up into a paper, but I haven’t yet thought through all the repercussions. Even back when we wrote the book I always thought of groups as technologically mediated entities but it was only when writing these slides in the light of my more recent thinking on technology that I paid much attention to the phenomena that they actually orchestrate in order to achieve their ends. Although there are non-technological prototypes – notably in the form of families – these are emergent rather than designed. The phenomena that intentional groups primarily orchestrate are those of networks and sets, which are simply configurations of humans and their relationships with one another. Modern groups – in a learning context, classes, cohorts, tutorial groups, seminar groups, and so on – are designed to fulfill more specific purposes than their natural prototypes, and they are made possible by technological inventions such as rules, roles, decision-making processes, and structural hierarchies. Essentially, the group is a purpose-driven technological overlay on top of more basic social forms. It seems natural, much as language seems natural, because it is so basic and fundamental to our existence and how everything else works in human societies, but it is an invention (or many inventions, in fact) as much as wheels and silicon chips.

Groups are among the oldest and most highly evolved of human technologies and they are incredibly important for learning, but they have a number of inherent flaws and trade-offs/Faustian bargains, notably in their effects on individual freedoms, in scalability (mainly achieved through hierarchies), in sometimes unhealthy power dynamics, and in limitations they place on roles individuals play in learning. Modern digital technologies can help to scale them a little further and refine or reify some of the rules and roles, but the basic flaws remain. However, modern digital technologies also offer other ways of enabling sets and networks of people to support one another’s learning, from blogs and mailing lists to purpose-built social networking systems, from Wikipedia and Academia.edu to Quora, in ways that can (optionally) integrate with and utilize groups but that differ in significant ways, such as in removing hierarchies, structuring through behaviour (collectives) and filtering or otherwise mediating messages. With some exceptions, however, the purposes of large-scale systems of this nature (which would provide an ideal set of phenomena to exploit) are not usually driven by a need for learning, but by a need to gain attention and profit. Facebook, Instagram, LinkedIn, X, and others of their ilk have vast networks to draw on but few mechanisms that support learning and limited checks and balances for reliability or quality when it does occur (which of course it does). Most of their algorithmic power is devoted to driving engagement, and the content and purpose of that engagement only matters insofar as it drives further engagement. Up to a point, trolls are good for them, which is seldom if ever true for learning systems. Some – Wikipedia, the Khan Academy, Slashdot, Stack Exchange, Quora, some SubReddits, and so on – achieve both engagement and intentional support for learning. 
However, they remain works in progress in the latter regard, being prone to a host of ills from filter bubbles and echo chambers to context collapse and the Matthew Effect, not to mention intentional harm by bad actors. I’ve been exploring this space for approaching 30 years now, but there remains almost as much scope for further research and development in this area as there was when I began. Though progress has been made, we have yet to figure out the right rules and structures to deal with a great many problems, and it is increasingly difficult to slot the products of our research into a bland, corporate online space dominated by a shrinking number of centralized learning management systems that continue to refine their automation of group processes and structures and, increasingly, to ignore the sets and networks on which they rely.

With that in mind, I see big potential benefits for generative AIs – the ultimate collectives – as supporters and enablers for crowds of people learning together. Generative AI provides us with the means to play with structures and adapt in hitherto impossible ways, because the algorithms that drive their adaptations are indefinitely flexible, the reified activities that form them are vast, and the people that participate in them play an active role in adjusting and forming their algorithms (not the underpinning neural nets but the emergent configurations they take). These are significant differences from traditional collectives, that tend to have one purpose and algorithm (typically complex but deterministic), such as returning search results or engaging network interactions.  I also see a great many potential risks, of which I have written fairly extensively of late, most notably in playing soft orchestral roles in the assembly that replace the need for humans to learn to play them. We tread a fine line between learning utopia and learning dystopia, especially if we try to overlay them on top of educational systems that are driven by credentials. Credentials used to signify a vast range of tacit knowledge and skills that were never measured, and (notwithstanding a long tradition of cheating) that was fine as long as nothing else could create those signals, because they were serviceable proxies. If you could pass the test or assignment, it meant that you had gone through the process and learned a lot more than what was tested. This has been eroded for some time, abetted by social media like Course Hero or Chegg that remain quite effective ways of bypassing the process for those willing to pay a nominal sum and accept the risk. 
Now that generative AI can do the same at considerably lower cost, with greater reliability and lower risk, without having gone through the process, credentials no longer make good signifiers and, anyway (playing Devil’s advocate), it remains unclear to what extent those soft, tacit skills are needed now that generative AIs can achieve them so well. I am much encouraged by the existence of Paul LeBlanc’s lab initiative, the fact that George is its chief scientist, its intent to enable human-centred learning in an age of AI, and its aspiration to reinvent education to fit. We need such endeavours. I hope they will do some great things.