Slides from my TRU TPC keynote: “It’s a technology problem: How education doesn’t work and why we shouldn’t fix it”

[Image: a mediaeval Zoom session, with a professor holding a carrot and stick, looking puzzled and a bit surprised, presumably because Zoom was not a popular technology in mediaeval times]

Here are the slides from my keynote at Thompson Rivers University’s Teaching Practices Colloquium this morning. I quite like the mediaeval theme (thanks, ChatGPT), which I created to provide a constant reminder that the problems we have to solve are the direct result of decisions made 1000 years ago. There was a lot of stuff from my last book in the talk, framed in terms of Faustian bargains, intrinsic motivation, counter technologies, and adjacent possibles. This was the abstract:

Why is it that educators feel it is necessary to motivate students to learn when love of learning is a defining characteristic of our species? Why do students disengage from education? Why do so many cheat? How can we be better teachers? What does “good teaching” even mean? And what role does technology play in all of this? Drawing on ideas, theories, and models from his book, How Education Works: Teaching, Technology, and Technique, Jon Dron will provide some answers to these and many more questions through a tale that straddles most of a millennium, during which you may encounter a mutilated monk, a man who lost a war, a robot named Claude, part of a monkey, and an unsuccessful Swiss farmer who made a Faustian bargain and changed education forever. Along the way you will learn why most educational science is pointless, why the best teaching methods fail, why the worst succeed, and why you should learn to love learning technologies. There may be singing.

I had a lot of fun – there was indeed singing, a silicone gorilla hand that turned out to be really useful, and some fun activities from which I learned stuff. I think it worked fine as a hybrid event. It was a sympathetic audience, online and in-person. TRU has a really interesting (and tension-filled, in good and bad ways) mix of online and in-person teaching practices, and I’ve met and listened to some really smart, thoughtful, reflective practitioners today. Almost all of them cross disciplinary boundaries – who knew you could combine culinary science and nursing? – so there’s a lot of invention going on. Unexpectedly, and far more than from a lot of bigger international conferences, I’m going to go home armed with a whole bunch of new ideas.

Understanding collective stupidity in social computing systems

Here are the slides from a talk I just gave to a group of grad students at AU in our ongoing seminar series, on the nature of collectives and the ways we can use and abuse them. It’s a bit of a sprawl, covering some 30-odd years of a particularly geeky, semi-philosophical branch of my research career (not much on learning and teaching in this one, but plenty of termites) and winding up with very much a work in progress. I rushed through it at the end of a very long day/week/month/year/life, but I hope someone may find it useful!

This is the abstract:

“Collective intelligence” (CI) is a widely used but fuzzy term that can mean anything from the behaviour of termites, to the ability of an organization to adapt to a changing environment, to the entire human race’s capacity to think, to the ways that our individual neurons give rise to cognition. Common to all, though, is the notion that the combined behaviours of many independent agents can lead to positive emergent changes in the behaviour of the whole and, conversely, that the behaviour of the whole leads to beneficial changes in the behaviours of the agents of which it is formed. Many social computing systems, from Facebook to Amazon, are built to enable or to take advantage of CI. Here I define social computing systems as digital systems that have no value unless they are used by at least two participants, and in which those participants play significant roles in affecting one another’s behaviour. This is a broad definition that embraces Google Search as much as email, wikis, and blogs, and in which the behaviour of humans and the surrounding structures and systems they belong to are at least as important as the algorithms and interfaces that support them. Unfortunately, the same processes that lead to the wisdom of crowds can at least as easily result in the stupidity of mobs, including phenomena like filter bubbles and echo chambers that may be harmful in themselves or that render systems open to abuse such as trolling, disinformation campaigns, vote brigading, and successful state manipulation of elections. If we can build better models of social computing systems, taking into account their human and contextual elements, then we stand a better chance of being able to avoid their harmful effects and to use them for good. To this end I have coined the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning “multitude”, and τέκτων (tektōn), meaning “builder”. In this seminar I will identify some of the main ochlotectural elements that contribute to collective intelligence, describe some of the ways it can be undermined, and explore some of the ramifications as they relate to social software design and management.


Published in JODDE – Learning: A technological perspective

Dron, J. (2024). Learning: A technological perspective. Journal of Open, Distance, and Digital Education, 1(2), Article 2. https://doi.org/10.25619/dpvg4687

[Image: abstract representation of the technological connectome]

My latest paper, Learning: A technological perspective, was published today in the (open) Journal of Open, Distance, and Digital Education. Methodologically, it provides a connected series of (I think) reasonable and largely uncontroversial assertions about the nature of technology and, for each assertion, offers some examples of why that matters to educators. In the process it wends its way towards a view of learning that is firmly situated in the field of extended cognition (and related complexivist learning theories such as Connectivism, Rhizomatic Learning, Networks of Practice, etc), with a technological twist that is, I think, pragmatically useful and theoretically interesting. Much of it repeats ideas from How Education Works but it extends and generalizes them further into the realms of intelligence and cognition through what I describe as the technological connectome.

I wrote this paper to align with the themes of the journal, so it has a greater focus on education than on the technological connectome, but I intend to write more on the subject some time soon. The essence of the idea is that what we recognize as intelligent behaviour consists largely of intracranial technologies like words, symbols, theories, models, procedures, structures, skills, ways of doing things, and so on – our cognitive gadgets – that we largely share with others, and that exist in vastly interconnected, hugely recursive, massively layered assemblies in and beyond our heads. I invoke Reed’s Law to help explain how and why this makes our intracranial cognition so much greater than the neural networks that host it: it is not just the neural connections but the groups and multi-scaled clusters of technological entities that emerge from them, which can then become parts of the network that embodies them, and of one another, and so on and so on. In passing, I have a vague and hard-to-express hunch that the “and so on” is at least part of the answer to the hard problem: networks that form other networks that themselves become parts of the networks that form them (rinse and repeat) seems like a potential path to self-consciousness to me. However, the ludicrous levels of intertwingularity implied by this, not to mention an almost total absence of any idea about the underlying mechanism, tie my little mind in knots that I cannot yet, and probably never will, unravel.
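To give a rough sense of the scale involved, here is a minimal sketch (my illustration here, not code from the paper) of what Reed’s Law implies: in a group-forming network of N members, the number of possible subgroups grows as 2^N, dwarfing the roughly N² pairwise connections described by Metcalfe’s Law.

```python
# Illustrative comparison of network growth laws for N agents.
# Sarnoff:  N             (broadcast: value scales with audience size)
# Metcalfe: N(N-1)/2      (value scales with possible pairwise links)
# Reed:     2^N - N - 1   (value scales with possible subgroups of 2+ members)

def sarnoff(n: int) -> int:
    return n

def metcalfe(n: int) -> int:
    return n * (n - 1) // 2  # number of distinct pairs

def reed(n: int) -> int:
    # all subsets, minus the empty set and the n singletons
    return 2**n - n - 1

for n in (5, 10, 20, 40):
    print(f"N={n:>2}: Sarnoff={sarnoff(n):>2}  "
          f"Metcalfe={metcalfe(n):>3}  Reed={reed(n):,}")
```

Even at N = 40 the subgroup count runs into the trillions, which is the intuition behind the claim that the emergent clusters, not the raw connections, do most of the work.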

At least as importantly, these private intracranial technologies are in turn parts of even greater assemblies that extend into our bodies, our environments, and above all into the technologies around us, and thence into the minds of others. To a large extent it is our ability to make use of and participate in this extended technological connectome, which is both within us and beyond us, that forms the object, the subject, and the purpose of education. Our technologies form part of our cognition as much as they enable it. We continuously shape and are shaped by them, assembling and reassembling them as we move into the adjacent possibles that result, creating further adjacent possibles every time we do, for ourselves and others. There is something incredibly awesome about that.

Abstract

This paper frames technology as a phenomenon that is inextricable from individual and collective cognition. Technologies are not “the other”, separate from us: we are parts of them and they are parts of us. We learn to be technologies as much as we learn to use them, and each use is itself a technology through which we participate both as parts and as creators of nodes in a vast technological connectome of awesome complexity. The technological connectome in turn forms a major part of what makes us, individually and collectively, smart. With that framing in mind, the paper is presented as a series of sets of observations about the nature of technology followed by examples of consequences for educators that illustrate some of the potential value of understanding technology this way, ending with an application of the model to provide actionable insights into what large language models imply for how we should teach.

How AI works for education: an interview with me for AACE Review

Thanks to Stefanie Panke for some great questions and excellent editing in this interview with me for the AACE Review.

The content is in fact the product of two discussions: one came from student questions at the end of a talk that I gave for the Asian University for Women just before Christmas; the other took place asynchronously with Stefanie herself.

Stefanie did a very good job of making sense of my rambling replies to the students, which spanned quite a few issues: some from my book, How Education Works, some about (mainly) generative AI, and a little about the intersection of collective and artificial intelligence. Stefanie’s own prompts were great: they encouraged me to think a little differently, and to take some enjoyable detours along the way around the evils of learning management systems, artificially generated music, and social media, as well as a discussion of the impact of generative AI on learning designers, thoughts on legislation to control AI, and assessment.

Here are the slides from that talk at AUW – I’ve not posted them separately because hardly any of the slides are new: the talk mostly cobbles together two recent ones, one for Contact North and the other my keynote for ICEEL ’24. The conversation afterwards was great, though, thanks to a wonderfully thoughtful and enthusiastic bunch of very smart students.