Slides from my TRU TPC keynote: “It’s a technology problem: How education doesn’t work and why we shouldn’t fix it”

Here are the slides from my keynote at Thompson Rivers University’s Teaching Practices Colloquium this morning. I quite like the mediaeval theme (thanks ChatGPT), which I created to provide a constant reminder that the problems we have to solve are the direct result of decisions made 1000 years ago. There was a lot of stuff from my last book in the talk, framed in terms of Faustian Bargains, intrinsic motivation, counter technologies, and adjacent possibles. This was the abstract:

Why is it that educators feel it is necessary to motivate students to learn when love of learning is a defining characteristic of our species? Why do students disengage from education? Why do so many cheat? How can we be better teachers? What does “good teaching” even mean? And what role does technology play in all of this? Drawing on ideas, theories, and models from his book, How Education Works: Teaching, Technology, and Technique, Jon Dron will provide some answers to these and many more questions through a tale that straddles most of a millennium, during which you may encounter a mutilated monk, a man who lost a war, a robot named Claude, part of a monkey, and an unsuccessful Swiss farmer who made a Faustian bargain and changed education forever. Along the way you will learn why most educational science is pointless, why the best teaching methods fail, why the worst succeed, and why you should learn to love learning technologies. There may be singing.

I had a lot of fun – there was indeed singing, a silicone gorilla hand that turned out to be really useful, and some fun activities from which I learned stuff. I think it worked fine as a hybrid event. It was a sympathetic audience, online and in-person. TRU has a really interesting (and tension-filled, in good and bad ways) mix of online and in-person teaching practices, and I’ve met and listened to some really smart, thoughtful, reflective practitioners today. Almost all cross disciplinary boundaries – who knew you could combine culinary science and nursing? – so there’s a lot of invention going on. Unexpectedly, and far more than from a lot of bigger international conferences, I’m going to go home armed with a whole bunch of new ideas.

Understanding collective stupidity in social computing systems

Here are the slides from a talk I just gave to a group of grad students at AU in our ongoing seminar series, on the nature of collectives and ways we can use and abuse them. It’s a bit of a sprawl covering some 30-odd years of a particularly geeky, semi-philosophical branch of my research career (not much on learning and teaching in this one, but plenty of termites), winding up with very much a work in progress. I rushed through it at the end of a very long day/week/month/year/life but I hope someone may find it useful!

This is the abstract:

“Collective intelligence” (CI)  is a widely-used but fuzzy term that can mean anything from the behaviour of termites, to the ability of an organization to adapt to a changing environment, to the entire human race’s capacity to think, to the ways that our individual neurons give rise to cognition. Common to all, though, is the notion that the combined behaviours of many independent agents can lead to positive emergent changes in the behaviour of the whole and, conversely, that the behaviour of the whole leads to beneficial changes in the behaviours of the agents of which it is formed. Many social computing systems, from Facebook to Amazon, are built to enable or to take advantage of CI. Here I define social computing systems as digital systems that have no value unless they are used by at least two participants, and in which those participants play significant roles in affecting one another’s behaviour. This is a broad definition that embraces Google Search as much as email, wikis, and blogs, and in which the behaviour of humans and the surrounding structures and systems they belong to are at least as important as the algorithms and interfaces that support them.  Unfortunately, the same processes that lead to the wisdom of crowds can at least as easily result in the stupidity of mobs, including phenomena like filter bubbles and echo chambers that may be harmful in themselves or that render systems open to abuse such as trolling, disinformation campaigns, vote brigading, and successful state manipulation of elections.  If we can build better models of social computing systems, taking into account their human and contextual elements, then we stand a better chance of being able to avoid their harmful effects and using them for good.  To this end I have coined the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning  “multitude” and τέκτων (tektōn) meaning “builder”. 
In this seminar I will identify some of the main ochlotectural elements that contribute to collective intelligence, describe some of the ways it can be undermined, and explore some of the ramifications as they relate to social software design and management.

 

Published in JODDE – Learning: A technological perspective

Dron, J. (2024). Learning: A technological perspective. Journal of Open, Distance, and Digital Education, 1(2), Article 2. https://doi.org/10.25619/dpvg4687

My latest paper, Learning: A technological perspective, was published today in the (open) Journal of Open, Distance, and Digital Education.  Methodologically, it provides a connected series of (I think) reasonable and largely uncontroversial assertions about the nature of technology and, for each assertion, offers some examples of why that matters to educators. In the process it wends its way towards a view of learning that is firmly situated in the field of extended cognition (and related complexivist learning theories such as Connectivism, Rhizomatic Learning, Networks of Practice, etc), with a technological twist that is, I think, pragmatically useful and theoretically interesting. Much of it repeats ideas from How Education Works but it extends and generalizes them further into the realms of intelligence and cognition through what I describe as the technological connectome.

I wrote this paper to align with the themes of the journal so, as a result, it has a greater focus on education than on the technological connectome, but I intend to write more on the subject some time soon. The essence of the idea is that what we recognize as intelligent behaviour consists largely of intracranial technologies like words, symbols, theories, models, procedures, structures, skills, ways of doing things, and so on – our cognitive gadgets – that we largely share with others, and that exist in vastly interconnected, hugely recursive, massively layered assemblies in and beyond our heads. I invoke Reed’s Law to help explain how and why this makes our intracranial cognition so much greater than the neural networks that host it: it’s not just the neural connections but the groups and multi-scaled clusters of technological entities that emerge as a result that can then be a part of the network that embodies them, and of one another, and so on and so on. In passing, I have a vague and hard-to-express hunch that the “and so on” is at least part of the answer to the hard problem: networks that form other networks that themselves become parts of the networks that form them (rinse and repeat) seems like a potential path to self-consciousness to me. However,  the ludicrous levels of intertwingularity implied by this, not to mention an almost total absence of any idea about the underlying mechanism, ties my little mind in knots that I cannot yet and probably will never unravel.
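Reed’s Law puts rough numbers on that intuition: while the possible pairwise connections in a network of N nodes grow roughly as N², the possible subgroups grow as 2^N. This little sketch (not from the paper, just the arithmetic behind the claim) contrasts the three classic network-value laws:

```python
# Three classic estimates of how a network's potential value scales with
# its number of nodes N. The point is how quickly Reed's Law dwarfs the others.

def sarnoff(n):
    return n                  # broadcast value: proportional to audience size

def metcalfe(n):
    return n * (n - 1) // 2   # value from possible pairwise connections

def reed(n):
    return 2**n - n - 1       # value from possible subgroups of two or more

for n in (10, 20, 30):
    print(f"N={n}: Sarnoff {sarnoff(n)}, Metcalfe {metcalfe(n)}, Reed {reed(n)}")
```

Even at N=30, the subgroup count is already over a billion, which is why the group-forming capacity of a network, rather than its raw connectivity, does most of the explanatory work here.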

At least as importantly, these private intracranial technologies are in turn parts of even greater assemblies that extend into our bodies, our environments, and above all into the technologies around us, and thence into the minds of others. To a large extent it is our ability to make use of and participate in this extended technological connectome, that is both within us and beyond us, that forms the object, the subject, and the purpose of education. Our technologies as much form a part of our cognition as they enable it. We continuously shape and are shaped by them, assembling and reassembling them as we move into the adjacent possibles that result, creating further adjacent possibles every time we do, for ourselves and others. There is something incredibly awesome about that.

Abstract

This paper frames technology as a phenomenon that is inextricable from individual and collective cognition. Technologies are not “the other”, separate from us: we are parts of them and they are parts of us. We learn to be technologies as much as we learn to use them, and each use is itself a technology through which we participate both as parts and as creators of nodes in a vast technological connectome of awesome complexity. The technological connectome in turn forms a major part of what makes us, individually and collectively, smart. With that framing in mind, the paper is presented as a series of sets of observations about the nature of technology followed by examples of consequences for educators that illustrate some of the potential value of understanding technology this way, ending with an application of the model to provide actionable insights into what large language models imply for how we should teach.

How AI works for education: an interview with me for AACE Review

Thanks to Stefanie Panke for some great questions and excellent editing in this interview with me for the AACE Review.

The content is in fact the product of two discussions, one coming from student questions at the end of a talk that I gave for the Asian University for Women just before Christmas, the other asynchronously with Stefanie herself.

Stefanie did a very good job of making sense of my rambling replies to the students that spanned quite a few issues, including some from my book, How Education Works, some with (mainly) generative AI, and a little about the intersection of collective and artificial intelligence. Stefanie’s own prompts were great: they encouraged me to think a little differently, and to take some enjoyable detours along the way around the evils of learning management systems, artificially-generated music, and  social media, as well as a discussion of the impact of generative AI on learning designers, thoughts on legislation to control AI, and assessment.

Here are the slides from that talk at AUW – I’ve not posted this separately because hardly any are new: it mostly cobbles together two recent talks, one for Contact North and the other my keynote for ICEEL ’24. The conversation afterwards was great, though, thanks to a wonderfully thoughtful and enthusiastic bunch of very smart students.

The collective ochlotecture of large language models: slides from my talk at CI.edu, 2024

Here are my slides from the 1st International Symposium on Educating for Collective Intelligence, last week, here is my paper on which it was based, and here is the video of the talk itself:

You can find this, and videos of the rest of the stunning line-up of speakers, at https://www.youtube.com/playlist?list=PLcS9QDvS_uS6kGxefLFr3kFToVIvIpisn. It was an incredibly engaging and energizing event: the chat alone was a masterclass in collective intelligence that was difficult to follow at times but that was filled with rich insights and enlightening debates. The symposium site, which has all this and more, is at https://cic.uts.edu.au/events/collective-intelligence-edu-2024/

With just 10 minutes to make the case and 10 minutes for discussion, none of us were able to go into much depth in our talks. In mine I introduced the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning “multitude” and τέκτων (tektōn) meaning “builder”, to describe the structures and processes that define the stuff that gives shape and form to collections of people and their interactions. I think we need such a term because there are virtually infinite ways that such things can be configured, and the configuration makes all the difference. We blithely talk of things like groups, teams, clubs, companies, squads, and, of course, collectives, assuming that others will share an understanding of what we mean when, of course, they don’t. There were at least half a dozen quite distinct uses of the term “collective intelligence” in this symposium alone. I’m still working on a big paper on this subject that goes into some depth on the various dimensions of interest as they pertain to a wide range of social organizations but, for this talk, I was only concerned with the ochlotecture of collectives (a term I much prefer to “collective intelligence” because intelligence is such a slippery word, and collective stupidity is at least as common). From an ochlotectural perspective, these consist of a means of collecting crowd-generated information, processing it, and presenting the processed results back to the crowd. Human collective ochlotectures often contain other elements – group norms, structural hierarchies, schedules, digital media, etc – but I think those are the defining features. If I am right then large language models (LLMs) are collectives, too, because that is exactly what they do.
Unlike most other collectives, though (a collectively driven search engine like Google Search being one of a few partial exceptions) the processing is unique to each run of the cycle, generated via a prompt or similar input. This is what makes them so powerful, and it is what makes their mimicry of human soft technique so compelling.

I did eventually get around to the theme of the conference. I spent a while discussing why LLMs are troubling – the fact that we learn values, attitudes, ways of being, etc from interacting with them; the risks to our collective intelligence caused by them being part of the crowd, not just aggregators and processors of its outputs; and the potential loss of the soft, creative skills they can replace – and ended with what that implies for how we should act as educators: essentially, to focus on the tacit curriculum that has, till now, always come for free; to focus on community, because learning to be human from and with other humans is what it is all about; and to decouple credentials so as to reduce the focus on measurable outcomes that AIs can both teach and achieve better than an average human. I also suggested a couple of principles for dealing with generative AIs: to treat them as partners rather than tools, and to use them to support and nurture human connections, as ochlotects as much as parts of the ochlotecture.

I had a point to make in a short time, so the way I presented it was a bit of a caricature of my more considered views on the matter. If you want a more balanced view, and to get a bit more of the theoretical backdrop to all this, Tim Fawns’s talk (that follows mine and that will probably play automatically after it if you play the video above) says it all, with far greater erudition and lucidity, and adds a few very valuable layers of its own. Though he uses different words and explains it far better than I, his notion of entanglement closely echoes my own ideas about the nature of technology and the roles it plays in our cognition. I like the word “intertwingled” more than “entangled” because of its more positive associations and the sense of emergent order it conveys, but we mean substantially the same thing: in fact, the example he gave of a car is one that I have frequently used myself, in exactly the same way.

New paper: The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

I’m proud to be the 7th of 47 authors on this excellent new paper, led by the indefatigable Aras Bozkurt and featuring some of the most distinguished contemporary researchers in online, open, mobile, distance, e- and [insert almost any cognate sub-discipline here] learning, as well as a few of us hanging on their coat tails like me.

As the title suggests, it is a manifesto: it makes a series of statements (divided into 15 positive and 20 negative themes) about what is or what should be, and it is underpinned by a firm set of humanist pedagogical and ethical attitudes that are anything but neutral. What makes it interesting to me, though, can mostly be found in the critical insights that accompany each theme, that capture a little of the complexity of the discussions that led to them, and that add a lot of nuance. The research methodology, a modified and super-iterative Delphi design in which all participants are also authors is, I think, an incredibly powerful approach to research in the technology of education (broadly construed) that provides rigour and accountability without succumbing to science-envy.

 

Notwithstanding the lion’s share of the work of leading, assembling, editing, and submitting the paper being taken on by Aras and Junhong, it was a truly collective effort so I have very little idea about what percentage of it could be described as my work. We were thinking and writing together.  Being a part of that was a fantastic learning experience for many of us, that stretched the limits of what can be done with tracked changes and comments in a Google Doc, with contributions coming in at all times of day and night and just about every timezone, over weeks. The depth and breadth of dialogue was remarkable, as much an organic process of evolution and emergence as intelligent design, and one in which the document itself played a significant participant role. I felt a strong sense of belonging, not so much as part of a community but as part of a connectome.

For me, this epitomizes what learning technologies are all about. It would be difficult if not impossible to do this in an in-person setting: even if the researchers worked together on an online document, the simple fact that they met in person would utterly change the social dynamics, the pacing, and the structure. Indeed, even online, replicating this in a formal institutional context would be very difficult because of the power relationships, assessment requirements, motivational complexities and artificial schedules that formal institutions add to the assembly. This was an online-native way of learning of a sort I aspire to but seldom achieve in my own teaching.

The paper offers a foundational model or framework on which to build or situate further work as well as providing a moderately succinct summary of  a very significant percentage of the issues relating to generative AI and education as they exist today. Even if it only ever gets referred to by each of its 47 authors this will get more citations than most of my papers, but the paper is highly cite-able in its own right, whether you agree with its statements or not. I know I am biased but, if you’re interested in the impacts of generative AI on education, I think it is a must-read.

The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., … Asino, T. I. (2024). The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis, 16(4), 487–513. https://doi.org/10.55982/openpraxis.16.4.777

Full list of authors:

  • Aras Bozkurt
  • Junhong Xiao
  • Robert Farrow
  • John Y. H. Bai
  • Chrissi Nerantzi
  • Stephanie Moore
  • Jon Dron
  • Christian M. Stracke
  • Lenandlar Singh
  • Helen Crompton
  • Apostolos Koutropoulos
  • Evgenii Terentev
  • Angelica Pazurek
  • Mark Nichols
  • Alexander M. Sidorkin
  • Eamon Costello
  • Steven Watson
  • Dónal Mulligan
  • Sarah Honeychurch
  • Charles B. Hodges
  • Mike Sharples
  • Andrew Swindell
  • Isak Frumin
  • Ahmed Tlili
  • Patricia J. Slagter van Tryon
  • Melissa Bond
  • Maha Bali
  • Jing Leng
  • Kai Zhang
  • Mutlu Cukurova
  • Thomas K. F. Chiu
  • Kyungmee Lee
  • Stefan Hrastinski
  • Manuel B. Garcia
  • Ramesh Chander Sharma
  • Bryan Alexander
  • Olaf Zawacki-Richter
  • Henk Huijser
  • Petar Jandrić
  • Chanjin Zheng
  • Peter Shea
  • Josep M. Duart
  • Chryssa Themeli
  • Anton Vorochkov
  • Sunagül Sani-Bozkurt
  • Robert L. Moore
  • Tutaleni Iita Asino

Abstract

This manifesto critically examines the unfolding integration of Generative AI (GenAI), chatbots, and algorithms into higher education, using a collective and thoughtful approach to navigate the future of teaching and learning. GenAI, while celebrated for its potential to personalize learning, enhance efficiency, and expand educational accessibility, is far from a neutral tool. Algorithms now shape human interaction, communication, and content creation, raising profound questions about human agency and biases and values embedded in their designs. As GenAI continues to evolve, we face critical challenges in maintaining human oversight, safeguarding equity, and facilitating meaningful, authentic learning experiences. This manifesto emphasizes that GenAI is not ideologically and culturally neutral. Instead, it reflects worldviews that can reinforce existing biases and marginalize diverse voices. Furthermore, as the use of GenAI reshapes education, it risks eroding essential human elements—creativity, critical thinking, and empathy—and could displace meaningful human interactions with algorithmic solutions. This manifesto calls for robust, evidence-based research and conscious decision-making to ensure that GenAI enhances, rather than diminishes, human agency and ethical responsibility in education.

Slides from my ICEEL ’24 Keynote: “No Teacher Left Behind: Surviving Transformation”

Here are the slides from my keynote at the 8th International Conference on Education and E-Learning in Tokyo yesterday. Sadly I was not actually in Tokyo for this but the online integration was well done and there was some good audience interaction. I am also the conference chair (an honorary title) so I may be a bit biased, but I think it’s a really good conference, with an increasingly rare blend of both the tech and the pedagogical aspects of the field, and some wonderfully diverse keynotes ranging in subject matter from the hardest computer science to reflections on literature and love (thanks to its collocation with ICLLL, a literature and linguistics conference). My keynote was somewhere in between, and deliberately targeted at the conference theme, “Transformative Learning in the Digital Era: Navigating Innovation and Inclusion.”

As my starting point for the talk I introduced the concept of the technological connectome, about which I have just written a paper (currently under revision, hopefully due for publication in a forthcoming issue of the new Journal of Open, Distance, and Digital Education), which is essentially a way of talking about extended cognition from a technological rather than a cognitive perspective. From there I moved on to the adjacent possible and the exponential growth in technology that has, over the past century or so, reached such a breakneck rate of change that innovations such as generative AI, the transformation I particularly focused on (because it is topical), can transform vast swathes of culture and practice in months if not in weeks. This is a bit of a problem for traditional educators, who are as unprepared as anyone else for it, but who find themselves in a system that could not be more vulnerable to the consequences. At the very least it disrupts the learning outcomes-driven teacher-centric model of teaching that still massively dominates institutional learning the world over, both in the mockery it makes of traditional assessment practices and in the fact that generative AIs make far better teachers if all you care about are the measurable outcomes.

The solutions I presented and that formed the bulk of the talk, largely informed by the model of education presented in How Education Works, were mostly pretty traditional, emphasizing the value of community, and of passion for learning, along with caring about, respecting, and supporting learners. There were also some slightly less conventional but widely held perspectives on assessment, plus a bit of complexivist thinking about celebrating the many teachers and acknowledging the technological connectome as the means, the object and the subject of learning, but nothing Earth-shatteringly novel. I think this is as it should be. We don’t need new values and attitudes; we just need to emphasize those that are learning-positive rather than the increasingly mainstream learning-negative, outcomes-driven, externally regulated approaches that the cult of measurement imposes on us.

Post-secondary institutions have had to grapple with their learning-antagonistic role of summative assessment since not long after their inception, so this is not a new problem but, until recent decades, the two roles largely maintained an uneasy truce. A great deal of the impetus for the shift has come from expanding access to PSE. This has resulted in students who are less able, less willing, and less well-supported than their forebears, who were, on average, far more advantaged in ability, motivation, and unencumbered time simply because fewer were able to get in. In the past, teachers hardly needed to teach. The students were already very capable, and had few other demands on their time (like working to get through college), so they just needed to hang out with smart people, some of whom knew the subject and could guide them through it so that they knew what to learn and whether they had been successful, along with the time and resources to support their learning. Teachers could be confident that, as long as students had the resources (libraries, lecture notes, study time, other students), they would be sufficiently driven by the need to pass the assessments and/or intrinsic interest that they could largely be left to their own devices (OK, a slight caricature, but not far off the reality).

Unfortunately, though this is no longer even close to the norm,  it is still the model on which most universities are based.  Most of the time professors are still hired because of their research skills, not teaching ability, and it is relatively rare that they are expected to receive more than the most perfunctory training, let alone education, in how to teach. Those with an interest usually have opportunities to develop their skills but, if they do not, there are few consequences. Thanks to the technological connectome, the rewards and punishments of credentials continue to do the job well enough, notwithstanding the vast amounts of cheating, satisficing, student suffering, and lost love of learning that ensues. There are still plenty of teachers: students have textbooks, YouTube tutorials, other students, help sites, and ChatGPT, to name but a few, of which there are more every day. This is probably all that is propping up a fundamentally dysfunctional system. Increasingly, the primary value of post-secondary education comes to lie in its credentialling function.

No one who wants to teach wants this, but virtually all of those who teach in universities are the ones who succeeded in retaining their love of learning for its own sake despite it, so they find it hard to understand students who don’t. Too many (though, I believe, a minority) are positively hostile to their students as a result, believing that most students are lazy, willing to cheat, or to otherwise game the system, and they set up elaborate means of control and gotchas to trap them.  The majority who want the best for their students, however,  are also to blame, seeing their purpose as to improve grades, using “learning science” (which is like using colour theory to paint – useful, not essential) to develop methods that will, on average, do so more effectively. In fairness, though grades are not the purpose, they are not wrong about the need to teach the measurable stuff well: it does matter to achieve the skills and knowledge that students set out to achieve. However, it is only part of the purpose. Mostly, education is a means to less measurable ends; of forming identities, attitudes, values, ways of relating to others, ways of thinking, and ways of being. You don’t need the best teaching methods to achieve that: you just need to care, and to create environments and structures that support stuff like community, diversity, connection, sharing, openness, collaboration, play, and passion.

The keynote was recorded but I am not sure if or when it will be available. If it is released on a public site, I will share it here.

Announcing the First International Symposium on Educating for Collective Intelligence (and some thoughts on collective intelligence)


Free-to-register International online symposium, December 5th, 2024, 12-3pm PST


This is going to be an important symposium, I think.

I will be taking 3 very precious hours out of my wedding anniversary to attend – in fairness, unintentionally: I did not do the timezone conversion when I submitted my paper, so I thought it was the next day. However, I have not cancelled despite the potentially dire consequences, partly because the line-up of speakers is wonderful, partly because we all use the words “collective intelligence” (CI) but come from diverse disciplinary areas and sometimes mean very different things by them (so there will be some potentially inspiring conversations), and partly for a bigger reason that I will get to at the end of this post. You can read abstracts and most of the position papers on the symposium website.

In my own position paper  I have invented the term ochlotecture (from the Classical Greek ὄχλος (ochlos), meaning something like “multitude” and τέκτων (tektōn) meaning “builder”) to describe the structures and processes of a collection of people, whether it be a small seminar group, a network of researchers, or a set of adherents to a world religion. An ochlotecture includes elements like names, physical/virtual spaces, structural hierarchies, rules, norms, mythologies, vocabularies, and purposes, as well as emergent phenomena occurring through individual and subgroup interactions, most notably the recursive cycle of information capture, processing, and (re)presentation that I think characterizes any CI. Through this lens, I can see both what is common and what distinguishes the different kinds of CI described in these position papers a bit more clearly. In fact, my own use of the term has changed a few times over the years so it helps me make sense of my own thoughts on the matter too.

Where I’ve come from that leads me here

I have been researching CI and education for a long time. Initially, I used the term very literally to describe something very distinct from individual intelligence, and largely independent of it.  My PhD, started in 1997, was inspired by the observation that (even then) there were at least tens of thousands of very good resources (people, discussions, tutorials, references, videos, courseware etc) openly available on the Web to support learners in most subject areas, that could meet almost any conceivable learning need. The problem was and remains how to find the right ones. These were pre-Google times but even the good-Google of olden days (a classic application of collective intelligence as I was using the term) only showed the most implicitly popular, not those that would best meet a particular learner’s needs. As a novice teacher, I also observed that, in a typical classroom, the students’ combined knowledge and ability to seek more of it far exceeded my own.  I therefore hit upon the idea of using a nature-inspired evolutionary approach to collectively discover and recommend resources, that led me very quickly into the realm of evolutionary theory and thence to the dynamics of self-organizing systems, complex adaptive systems, stigmergy, flocking, city planning, markets, and collective intelligence.

And so I became an ochlotect. I built a series of self-organizing social software systems that used stuff like social navigation (stigmergy), evolutionary, and flocking algorithms to create environments that both shaped and were shaped by the crowd. Acknowledging that “intelligence” is a problematic word, I simply called these collectives, a name inspired by Star Trek TNG’s Borg (the pre-Borg-Queen Borg, before the writers got bored or lazy). The intelligence of a “pure” collective as I conceived it back then was largely to be found in the algorithm, not the individual agents. Human stock markets are no smarter than termite mounds by this way of thinking (and they are not). I was trying to amplify the intelligence of crowds while avoiding the stupidity of mobs by creating interfaces and algorithms that made value to learners a survival characteristic. I was building systems that played some of the roles of a teacher but that were powered by collectives consisting of learners.  Some years later, Mark Zuckerberg hit on the idea of doing the exact opposite, with considerably greater success, making a virtue out of systems that amplified collective stupidity, but the general principles behind both EdgeRank and my algorithms were similar.
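
None of my original systems’ code appears here, but the core stigmergic loop they relied on is simple enough to sketch. In this purely illustrative toy (all names hypothetical, not drawn from any real system), resources accumulate “pheromone” weight when learners find them valuable, weights evaporate over time, and recommendations are sampled in proportion to weight, so value to learners becomes a survival characteristic:

```python
import random


class StigmergicRecommender:
    """Toy sketch of a stigmergic recommender: learner actions deposit
    'pheromone' on resources, unused trails evaporate, and recommendations
    are sampled in proportion to trail strength."""

    def __init__(self, resources, evaporation=0.1):
        self.weights = {r: 1.0 for r in resources}  # all trails start equal
        self.evaporation = evaporation

    def reinforce(self, resource, reward=1.0):
        # A learner found this resource useful: deposit pheromone on it.
        self.weights[resource] += reward

    def evaporate(self):
        # Resources nobody reinforces gradually fade from view.
        for r in self.weights:
            self.weights[r] *= (1.0 - self.evaporation)

    def recommend(self, k=3):
        # Sample k distinct resources, biased towards heavier trails
        # (roulette-wheel selection without replacement).
        picks = []
        for _ in range(min(k, len(self.weights))):
            pool = [r for r in self.weights if r not in picks]
            total = sum(self.weights[r] for r in pool)
            threshold = random.uniform(0, total)
            cumulative = 0.0
            for r in pool:
                cumulative += self.weights[r]
                if cumulative >= threshold:
                    picks.append(r)
                    break
        return picks
```

In a real system the reward signal would come implicitly from learner behaviour (navigation choices, dwell time, explicit ratings) rather than from direct calls, which is what makes the process stigmergic, coordination through traces left in a shared environment, rather than a conventional hand-tuned recommender.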

When I say that I “built” systems, though, I mean that I built the software part. I came increasingly to realize that the largest part of all of them was always the human part: what the individuals did, and the surrounding context in which they did it, including the norms, the processes, the rules, the structures, the hierarchies, and everything else that formed the ochlotecture, was intrinsic to their success or failure. Some of those human-enacted parts were as algorithmic as the software environments I provided and were no smarter than those used by termites (e.g. “click on the results from the top of the list or in bigger fonts”), but many others were designed, and played critical roles. This slightly more complex concept of CI played a major supporting role in my first book, providing a grounded basis for the design of social software systems that could support maximal learner control. In it I wound up offering a set of 10 design principles that addressed human, organizational, pedagogical, and technological factors, as well as emergent collective characteristics that were prerequisites if social software systems were to evolve to become educationally useful.

Collectives also formed a cornerstone of my work with Terry Anderson over the next decade or so, and our use of the term evolved further. In our first few papers, starting in 2007, we conflated the dynamic process with the individual agents who made it happen: for us back then, a collective was the people and the processes (a sort of cross between my original definition and a social configuration the Soviets were once fond of), and so we treated a collective as somewhat akin to a group or a network. Before too long we realized that was dumb and separated these elements out, categorizing three primary social forms (the set, the net, and the group) that could blend, from which collectives, a different kind of ochlotectural entity altogether, could emerge and interact. This led us to a formal abstract definition of collectives that continues to get the odd citation to this day. We wrote a book about social media and learning in which this abstract definition of collectives figured largely, and we designed The Landing to take advantage of it (not well – it was a learning experience). It appears in my position paper, too.

Collectives have come back with a vengeance but wearing different clothes in my work of the last decade, including my most recent book. I am a little less inclined to use the word “collective” now because I have come to understand all intelligence as collective, almost all of it mediated and often enacted through technologies. Technologies are the assemblies we construct from stuff to do stuff, and the stuff that they do then forms some of the stuff from which we construct more stuff to do stuff. A single PC alone, for instance, might contain hundreds of billions of instances of technologies in its assembly. A shelf of books might contain almost as many, not just in words and letters but in the concepts, theories, and models they make. As for the processes of making them, editing them, manufacturing the paper and the ink, printing them, distributing them, reading them, and so on… it’s a massive, constantly evolving, ever-adapting, partly biological system, not far off from natural ecosystems in its complexity, and equally diverse. Every use of a technology is also a technology, from words in your head to flying a space ship, and it becomes part of the stuff that can be organized by yourself or others. Through technique (technologies enacted intracranially), technologies are parts of us and we are parts of them, and that is what makes us smart.  Collective behaviour in humans can occur without technologies but what makes it collective intelligence is a technological connectome that grows, adapts, evolves, replicates, and connects every one of us to every other one of us: most of what we think is the direct result of assembling what we and others, stretching back in time and outward in space, have created. 
The technological connectome continuously evolves as we connect and orchestrate the vast web of technologies in which we participate, creating assemblies that have never occurred the same way twice, maybe thousands of times every day: have you ever even brushed your teeth or eaten a mouthful of cereal exactly the same way twice, in your whole life? Every single one of us is doing this, and quite a few of those technologies magnify the effects, from words to drawing to numbers to writing to wheels to screws to ships to postal services to pedagogical methods to printing to newspapers to libraries to broadcast networks to the Internet to the World Wide Web to generative AI. It is not just how we are able to be individually smart: it is an indivisible part of that smartness. Or stupidity. Whatever. The jury is out. Global warming, widening inequality, war, epidemics of obesity, lies, religious bigotry, famine and many other dire phenomena are a direct result of this collective “intelligence”, as much as Vancouver, the Mona Lisa, and space telescopes. Let’s just stick with “collective”.

The obligatory LLM connection and the big reason I’m attending the symposium

My position paper for this symposium wanders a bit circuitously towards a discussion of the collective nature of large language models (LLMs) and their consequent global impact on our education systems. LLMs are collectives in their own right, with algorithms that are not only orders of magnitude more complex than any of their predecessors but unique to every instantiation of them, operating from and on vast datasets, and presenting results to users who also feed those datasets. This is what makes them capable of very convincingly simulating both the hard (inflexible, correct) and the soft (flexible, creative) technique of humans, which is both their superpower and the cause of the biggest threat they pose. The danger is a) that they replace the need to learn that soft technique ourselves (not necessarily a disaster if we use them creatively in further assemblies) and, more worryingly, b) that we learn ways of being human from collectives that, though made of human stuff, are not human. They will in turn become parts of all the rest of the collectives in which we participate. This can and will change us. It is happening now, frighteningly fast, even faster and at a greater scale than the similar changes that the Zuckerbergian style of social media has also brought about.

As educators, we should pay attention to this. Unfortunately, with its emphasis on explicit measurable outcomes combined with the extrinsic lure of credentials, the ochlotecture of our chronically underfunded educational systems is not geared towards compensating for these tendencies; in fact, exactly the reverse. LLMs can already both teach and meet those explicit outcomes far more effectively than most humans, at a very compelling price, so, more and more, they will. Both students and teachers are replaceable components in such a system. The saving grace and/or problem is that, though they matter and are how we measure educational success, those explicit outcomes are not in fact the most important ends of education, albeit that they are means to those ends.

The things that matter more are the human ways of thinking, of learning, and of seeing, that we learn while achieving such outcomes; the attitudes, values, connections, and relationships; our identities and the ways we learn to exist in our societies and cultures. It’s not just about doing and knowing: it’s about being, it’s about love, fear, wonder, and hunger. We don’t have to (and can’t) measure those because they all come for free when humans and the stuff they create are the means through which explicit outcomes are achieved. It’s an unavoidable tacit curriculum that underpins every kind of intentional and most unintentional learning we undertake, for better or (too often) for worse. It’s the (largely) non-technological consequence of the technologies in which we participate, and how we participate in them. Technologies don’t make us less human, on the whole: they are exactly what make us human.

We will learn such things from generative AIs too, thanks to the soft technique they mimic so well, but what we will learn to be as a result will not be quite human. Worse, the outputs of the machines will begin to dominate their own inputs, and the rest will come from humans who have been changed by their interactions with them, like photocopies of photocopies, constantly and recursively degrading. In my position paper I therefore argue for the need to cherish the human parts of these new collectives in our education systems far more than we have before, and I suggest some ways of doing that. It matters not just to avoid model collapse in LLMs but to prevent model collapse in the collective intelligence of the whole human race. I think that is quite important, and that’s the real reason I will spend some of my wedding anniversary talking with some very intelligent and influential people about it.

The Second Coming

For some reason I can’t get this poem out of my head today. Again.

The Second Coming

By W.B. Yeats

Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.

Surely some revelation is at hand;
Surely the Second Coming is at hand.
The Second Coming! Hardly are those words out
When a vast image out of Spiritus Mundi
Troubles my sight: somewhere in sands of the desert
A shape with lion body and the head of a man,
A gaze blank and pitiless as the sun,
Is moving its slow thighs, while all about it
Reel shadows of the indignant desert birds.
The darkness drops again; but now I know
That twenty centuries of stony sleep
Were vexed to nightmare by a rocking cradle,
And what rough beast, its hour come round at last,
Slouches towards Bethlehem to be born?

Video and slides from my webinar, How to Be an Educational Technology: An Entangled Perspective on Teaching

an entangled teacher, represented as an anthropomorphic dog wrapped in cables that hold multiple technologies around him, such as books and computers

For those with an interest, here are the slides from my webinar for Contact North | Contact Nord that I gave today: How to be an educational technology (warning: large download, about 32MB).

Here is a link to the video of the session.

I was invited to do this webinar because my book (How Education Works: Teaching, Technology, and Technique, briefly reviewed on the Contact North | Contact Nord site last year) was among the top 5 most viewed books of the year, so that was what the talk was about. Among the book’s most central messages, and the ones I was trying to get across in this presentation, were:

  1. that how we do teaching matters more than what we do (“T’ain’t what you do, it’s the way that you do it”) and
  2. that we can only understand the process if we examine the whole complex assembly of teaching (very much including the technique of all who contribute to it, including learners, textbooks, and room designers), not just the individual parts.

Along the way I had a few other things to say about why that must be the case, the nature of teaching, the nature of collective cognition, and some of the profound consequences of seeing the world this way. I had fun persuading ChatGPT to illustrate the slides in a style that was not that of Richard Scarry (ChatGPT would not do that, for copyright reasons) but that was reminiscent of it, so there are lots of cute animals doing stuff with technologies on the slides.

I rushed and rambled, I sang, I fumbled and stumbled, but I think it sparked some interest and critical thinking. Even if it didn’t, some learning happened, and that is always a good thing. The conversations in the chat went too fast for me to follow but I think there were some good ones. If nothing else, though I was very nervous, I had fun, and it was lovely to notice a fair number of friends, colleagues, and even the odd relative among the audience. Thank you all who were there, and thank you anyone who catches the recording later.