Here are the slides from my keynote at Thompson Rivers University’s Teaching Practices Colloquium this morning. I quite like the mediaeval theme (thanks ChatGPT), which I created to provide a constant reminder that the problems we have to solve are the direct result of decisions made 1000 years ago. There was a lot of stuff from my last book in the talk, framed in terms of Faustian Bargains, intrinsic motivation, counter technologies, and adjacent possibles. This was the abstract:
Why is it that educators feel it is necessary to motivate students to learn when love of learning is a defining characteristic of our species? Why do students disengage from education? Why do so many cheat? How can we be better teachers? What does “good teaching” even mean? And what role does technology play in all of this? Drawing on ideas, theories, and models from his book, How Education Works: Teaching, Technology, and Technique, Jon Dron will provide some answers to these and many more questions through a tale that straddles most of a millennium, during which you may encounter a mutilated monk, a man who lost a war, a robot named Claude, part of a monkey, and an unsuccessful Swiss farmer who made a Faustian bargain and changed education forever. Along the way you will learn why most educational science is pointless, why the best teaching methods fail, why the worst succeed, and why you should learn to love learning technologies. There may be singing.
I had a lot of fun – there was indeed singing, a silicone gorilla hand that turned out to be really useful, and some fun activities from which I learned stuff. I think it worked fine as a hybrid event. It was a sympathetic audience, online and in-person. TRU has a really interesting (and tension-filled, in good and bad ways) mix of online and in-person teaching practices, and I’ve met and listened to some really smart, thoughtful, reflective practitioners today. Almost all cross disciplinary boundaries – who knew you could combine culinary science and nursing? – so there’s a lot of invention going on. Unexpectedly, and far more than from a lot of bigger international conferences, I’m going to go home armed with a whole bunch of new ideas.
Dron, J. (2024). Learning: A technological perspective. Journal of Open, Distance, and Digital Education, 1(2), Article 2. https://doi.org/10.25619/dpvg4687
My latest paper, Learning: A technological perspective, was published today in the (open) Journal of Open, Distance, and Digital Education. Methodologically, it provides a connected series of (I think) reasonable and largely uncontroversial assertions about the nature of technology and, for each assertion, offers some examples of why that matters to educators. In the process it wends its way towards a view of learning that is firmly situated in the field of extended cognition (and related complexivist learning theories such as Connectivism, Rhizomatic Learning, Networks of Practice, etc), with a technological twist that is, I think, pragmatically useful and theoretically interesting. Much of it repeats ideas from How Education Works but it extends and generalizes them further into the realms of intelligence and cognition through what I describe as the technological connectome.
I wrote this paper to align with the themes of the journal so, as a result, it has a greater focus on education than on the technological connectome, but I intend to write more on the subject some time soon. The essence of the idea is that what we recognize as intelligent behaviour consists largely of intracranial technologies like words, symbols, theories, models, procedures, structures, skills, ways of doing things, and so on – our cognitive gadgets – that we largely share with others, and that exist in vastly interconnected, hugely recursive, massively layered assemblies in and beyond our heads. I invoke Reed’s Law to help explain how and why this makes our intracranial cognition so much greater than the neural networks that host it: it’s not just the neural connections but the groups and multi-scaled clusters of technological entities that emerge as a result that can then be a part of the network that embodies them, and of one another, and so on and so on. In passing, I have a vague and hard-to-express hunch that the “and so on” is at least part of the answer to the hard problem: networks that form other networks that themselves become parts of the networks that form them (rinse and repeat) seem like a potential path to self-consciousness to me. However, the ludicrous levels of intertwingularity implied by this, not to mention an almost total absence of any idea about the underlying mechanism, tie my little mind in knots that I cannot yet and probably will never unravel.
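For readers unfamiliar with Reed’s Law, the intuition is easy to sketch numerically: the value of a broadcast network grows with its membership N (Sarnoff’s Law), the value of its pairwise connections grows as roughly N² (Metcalfe’s Law), but the number of possible subgroups that can form within it grows as 2^N. A toy comparison (my illustration of the standard formulas, not something from the paper itself):

```python
# Rough comparison of network-value laws for a network of n members.
# These give growth orders, not literal values.

def sarnoff(n):
    """Broadcast value: one-to-many, proportional to audience size."""
    return n

def metcalfe(n):
    """Number of possible pairwise connections: n choose 2."""
    return n * (n - 1) // 2

def reed(n):
    """Non-trivial subgroups: all subsets minus the empty set and singletons."""
    return 2 ** n - n - 1

for n in (10, 20, 30):
    print(f"n={n}: Sarnoff={sarnoff(n)}, Metcalfe={metcalfe(n)}, Reed={reed(n)}")
```

Even at n=30 the group-forming term dwarfs the pairwise one by seven orders of magnitude, which is the point of invoking Reed here: the clusters and groups that can emerge from a network vastly outnumber the connections that host them.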
At least as importantly, these private intracranial technologies are in turn parts of even greater assemblies that extend into our bodies, our environments, and above all into the technologies around us, and thence into the minds of others. To a large extent it is our ability to make use of and participate in this extended technological connectome, that is both within us and beyond us, that forms the object, the subject, and the purpose of education. Our technologies form as much a part of our cognition as they enable it. We continuously shape and are shaped by them, assembling and reassembling them as we move into the adjacent possibles that result, creating further adjacent possibles every time we do, for ourselves and others. There is something incredibly awesome about that.
Abstract
This paper frames technology as a phenomenon that is inextricable from individual and collective cognition. Technologies are not “the other”, separate from us: we are parts of them and they are parts of us. We learn to be technologies as much as we learn to use them, and each use is itself a technology through which we participate both as parts and as creators of nodes in a vast technological connectome of awesome complexity. The technological connectome in turn forms a major part of what makes us, individually and collectively, smart. With that framing in mind, the paper is presented as a series of sets of observations about the nature of technology followed by examples of consequences for educators that illustrate some of the potential value of understanding technology this way, ending with an application of the model to provide actionable insights into what large language models imply for how we should teach.
For those who have been following my thoughts on generative AI there will be few surprises in my slides, and I only had half an hour so there was not much time to go into the nuances. The title is an allusion to Pestalozzi’s 18th Century tract, How Gertrude Teaches Her Children, which has been phenomenally influential to the development of education systems around the world and that continues to have impact to this day. Much of it is actually great: Pestalozzi championed very child-centric teaching approaches that leveraged the skills and passions of their teachers. He recommended methods of teaching that made full use of the creativity and idiosyncratic knowledge the teachers possessed and that were very much concerned with helping children to develop their own interests, values and attitudes. However, some of the ideas – and those that have ultimately been more influential – were decidedly problematic, as is succinctly summarized in this passage on page 41:
I believe it is not possible for common popular instruction to advance a step, so long as formulas of instruction are not found which make the teacher, at least in the elementary stages of knowledge, merely the mechanical tool of a method, the result of which springs from the nature of the formulas and not from the skill of the man who uses it.
This is almost the exact opposite of the central argument of my book, How Education Works, that mechanical methods are not the most important part of a soft technology such as teaching: what usually matters more is how it is done, not just what is done. You can use good methods badly and bad methods well because you are a participant in the instantiation of a technology, responsible for the complete orchestration of the parts, not just a user of them.
As usual, in the talk I applied a bit of co-participation theory to explain why I am both enthralled by and fearful of the consequences of generative AIs because they are the first technologies we have ever built that can use other technologies in ways that resemble how we use them. Previous technologies only reproduced hard technique – the explicit methods we use that make us part of the technology. Generative AIs reproduce soft technique, assembling and organizing phenomena in endlessly novel ways to act as creators of the technology. They are active, not passive participants.
Two dangers
I see there to be two essential risks lying in the delegation of soft technique to AIs. The first is not too terrible: that, because we will increasingly delegate to machines creative activities we would otherwise have performed ourselves, we will not learn those skills ourselves. I mourn the potential passing of hard skills in (say) drawing, or writing, or making music, but the bigger risk is that we will lose the soft skills that come from learning them: the things we do with the hard skills, the capacity to be creative.
That said, like most technologies, generative AIs are ratchets that let us do more than we could achieve alone. In the past week, for instance, I “wrote” an app in less than a day that would have taken me many weeks without AI assistance. Though it followed a spec that I had carefully and creatively written, it replaced the soft skills that I would have applied had I written it myself, the little creative flourishes and rabbit holes of idea-following that are inevitable in any creation process. When we create we do so in conversation with the hard technologies available to us (including our own technique), using their affordances and constraints to grasp the adjacent possibles they provide. Every word we utter or wheel we attach to an axle opens and closes opportunities for what we can do next.
With that in mind, the app that the system created was just the beginning. Having seen the adjacent possibles of the finished app, I have spent too many hours in subsequent days extending and refining it to do things that, in the past, I would not have bothered to do because they would have been too difficult. It has become part of my own extended cognition, starting higher up the tree than I would have reached alone. This has also greatly improved my own coding skills because, inevitably, after many iterations, the AI and/or I started to introduce bugs, some of which have been quite subtle and intractable. I did try to get the AI to examine the whole code (now over 2000 lines of JavaScript) and rewrite it, or at least to point out the flaws, but that failed abysmally, amply illustrating both the strength of LLMs as creative participants in technologies and their limitations in being unable to do the same thing the same way twice. As a result, the AI and I have had to act as partners trying to figure out what is wrong. Often the AI has come up with workable ideas but its own solutions have been a little dumb; I could build on them, though, to solve the problem better. Though I have not actually created much of the code myself, I think my creative role might have been greater than it would have been had I written every line.
Similarly for the images I used to illustrate the talk: I could not possibly have drawn them alone but, once the AI had done so, I engaged in a creative conversation to try (sometimes very unsuccessfully) to get it to reproduce what I had in mind. Often, though, it did things that sparked new ideas so, again, it became a partner in creation, sharing in my cognition and sparking my own invention. It was very much not just a tool: it was a co-worker, with different and complementary skills, and “ideas” of its own. I think this is a good thing. Yes, perhaps it is a pity that those who follow us may not be able to draw with a pen (and it is more than a little worrying to think about the training sets from which future AIs will learn to draw), but they will have new ways of being creative.
Like all learning, both these activities changed me: not just my skills, but my ways of thinking. That leads me to the bigger risk.
Learning our humanity from machines
The second risk is more troubling: that we will learn ways of being human from machines. This is because of the tacit curriculum that comes with every learning interaction. When we learn from others, whether they are actively teaching, writing textbooks, showing us, or chatting with us, we don’t just learn methods of doing things: we learn values, attitudes, ways of thinking, ways of understanding, and ways of being at the same time. So far we have only learned that kind of thing from humans (sometimes mediated through code) and it has come for free with all the other stuff, but now we are doing so from machines. Those machines are very much like us because 99% of what they are – their training sets – is what we have made, but they are not the same. Though LLMs are embodiments of our own collective intelligence, they don’t so much lack values, attitudes, ways of thinking, and so on, as have any and all of them. Every implicit value and attitude of the people whose work constituted their training set is available to them, and they can become whatever we want them to be. Interacting with them is, in this sense, very much not like interacting with something created by a human, let alone with humans more directly. They have no identity, no relationships, no purposes, no passion, no life history and no future plans. Nothing matters to them.
To make matters worse, there is programmed and trained stuff on top of that, like their interminable cheery patience, which might not teach us great ways of interacting with others. And of course this will affect how we interact with others, because we will spend more and more time engaged with machines rather than with actual humans: the economic and practical benefits make this an absolute certainty. LLMs also use explicit coding to remove or massage data from the input or output, reflecting the values and cultures of their creators for better or worse. I was giving this talk in India to a predominantly Indian audience of AI researchers, every single one of whom was making extensive use of predominantly American LLMs like ChatGPT, Gemini, or Claude, and (inevitably) learning ways of thinking and doing from them. This is way more powerful than Hollywood as an instrument of Americanization.
I am concerned about how this will change our cultures and our selves because it is happening at phenomenal and global scale, and it is doing so in a world that is unprepared for the consequences, the designed parts of which assume a very different context. One of generative AI’s greatest potential benefits lies in its promise to provide “high quality” education at low cost to those who are currently denied it, but those low costs will make it increasingly compelling for everyone. However, because of those designs that assume a different context, “quality”, in this sense, relates to the achievement of explicit learning outcomes: this is Pestalozzi’s method writ large. Generative AIs are great at teaching what we want to learn – the stuff we could write down as learning objectives or intended outcomes – so, as that is the way we have designed our educational systems (and our general attitudes to learning new skills), of course we will use them for that purpose. However, that cannot be done without teaching the other stuff – the tacit curriculum – which is ultimately more important because it shapes how we are in the world, not just the skills we employ to be that way. We might not have designed our educational systems to do that, and we seldom if ever think about it when teaching ourselves or receiving training to do something, but it is perhaps education’s most important role.
By way of illustration, I find it hugely bothersome that generative AIs are being used to write children’s stories (and, increasingly, videos), and I hope you feel some unease too, because those stories – not the facts in them but the lessons about things that matter that they teach – are intrinsic to children becoming who they will become. However, though perhaps of less magnitude, the same issue relates to learning everything from how to change a plug to how to philosophize: we don’t stop learning from the underlying stories behind those just because we have grown up. I fear that educators, formal or otherwise, will become victims of the McNamara Fallacy, setting our goals to achieve what is easily measurable while ignoring what cannot (easily) be measured, and so rush blindly towards subtly new ways of thinking and acting that few will even notice, until the changes are so widespread they cannot be reversed. Whether better or worse, it will very definitely be different, so it really matters that we examine and understand where this is all leading. This is the time, I believe, to reclaim and revalorize the value of things that are human before it is too late. This is the time to recognize education (far from only formal) as being how we become who we are, individually and collectively, not just how we meet planned learning outcomes. And I think (at least hope) that we will do that. We will, I hope, value more than ever whether something – be it a lesson plan or a book or a screwdriver – is made by someone or by a machine that has been explicitly programmed by someone. We will, I hope, better recognize the relationships between us that it embodies, the ways it teaches us things it does not mean to teach, and the meaning it has in our lives as a result. This might happen by itself – already there is a backlash against the bland output of countless bots – but it might not be a bad idea to help it along when we can.
This post (and my talk last night) has been one such small nudge.
I had the great pleasure of being invited to the Open University of the Netherlands and, later in the day, to EdLab, Maastricht University a few weeks ago, giving a slightly different talk in each place based on some of the main themes in my most recent book, How Education Works. Although I adapted my slides a little for each audience, with different titles and a few different slides adjusted to the contexts, I could probably have used either presentation interchangeably. In fact, I could as easily have used the slides from my SITE keynote on which both were quite closely based (which is why I am not sharing them here). As well as most of the same slides, I used some of the same words, many of the same examples, and several of the same anecdotes. For the most part, this was essentially the same presentation given twice. Except, of course, it really, really wasn’t. In fact, the two events could barely have been more different, and what everyone (including me) learned was significantly different in each session.
This is highly self-referential. One of the big points of the book is that it only ever makes sense to consider the entire orchestration, including the roles that learners play in making sense of it all: the many components of the assembly, designed for the purpose and otherwise. The slides, structure, and content did provide the theme and a certain amount of hardness, but what we (collectively) did with them led to two very different learning experiences. They shared some components and purposes, just as a car, a truck, and a bicycle share some of the same components and purposes, but the assemblies and orchestrations were quite different, leading to very different outcomes. Some of the variation was planned in advance, including an hour of conversation at the end of each presentation and a structure that encouraged dialogue at various points along the way: these were as much workshops as presentations. However, much of the variance occurred not due to any planning but because of the locations themselves. One of the rooms was a well-appointed conventional lecture theatre, the other an airy space with grouped tables and huge windows looking out on a busy and attractive campus. In the lecture theatre I essentially gave a lecture: the interactive parts were very much staged, and I had to devise ways to make them work. In the airy room, I had a conversation and had to devise ways to maintain some structure to the process, which was delightfully disrupted by the occasional passing road train and the very tangible lives of others going on outside, as well as by an innately more intimate and conversational atmosphere enabled (not entailed) by the layout. Other parts of the context mattered too: the time of day, the temperature, the different needs and interests of the audience, the fact that one occurred in the midst of planning for a major annual event, and so on. All of this had a big effect on how I and others behaved, and on what and how people learned.
From one perspective, in both talks, I was sculpting the available affordances and constraints to achieve my intended ends but, from another equally valid point of view, I was being sculpted by them. The creators and maintainers of the rooms and I were teaching partners, co-participants in the learning process. Pedagogically, and despite the various things I did to assemble the missing parts in each, they were significantly different learning technologies.
The complexity of distance teaching
Train journeys are great contexts for uninterrupted reflection (trains teach too) so, sitting on the train on my journey back the next day, I began to reflect on what all of this means for my usual teaching practice, and made some notes on which this post is based (notebooks teach, too). I am a distance educator by trade and, as a rule, with exceptions for work-based learning, practicums, co-ops, placements, and a few other limited contexts, distance educators rarely even acknowledge that students occupy a physical space, let alone adapt to it. We might sometimes encourage students to use things in their environments as part of a learning activity, but we rarely change our teaching on the fly as a result of the differences between those environments. As I have previously observed, the problem is exacerbated by the illusion that online systems are environments (in the sense of being providers of the context in which we learn) and that we can observe what happens in them. They are not, and we cannot. They are parts of the learners’ own environments, and all we can (ethically) observe are interactions with our designed systems, not the behaviour of the learners within the spaces that they occupy. It is as hard for students to understand our context as it is for us to understand theirs, and that matters too. It makes it trickier to model ways of thinking and approaches to problem solving, for example, if the teacher occupies a different context.
This matters little for some of the harder elements of the teaching process. Information provision, resource design, planning, and at least some forms of assessment and feedback are at least as easy to do at a distance as in person. We can certainly do those and make a point of doing them well, thereby providing a little counterbalance. However, facilitation, role modelling, guidance, supporting motivation, fostering networks, monitoring of learning, responsive adaptation, and many other significant teaching roles are more complex to perform because of how little is known about learning activities within an environment. As Peter Goodyear has put it, matter matters. The more that the designated teacher can understand that, the more effective they can be in helping learners to succeed.
Because we are not so able to adapt our teaching to the context, distance learning (more accurately, distance teaching) mostly works because students are the most important teachers, and the pedagogies they add to the raw materials we provide do most of the heavy lifting. Given some shared resources and guided interactions, they are the ones who perform most of the kinds of orchestration and assembly that I added to my two talks in the Netherlands; they are the ones who both adapt and adapt to their spaces for learning. Those better able to do this in the first place tend to do better in the long run, regardless of subject interest or innate ability. This is reflected in the results. In my faculty and on average, more than 95% of our graduate students – who have already proven themselves to be successful learners and so are better able to teach themselves – succeed on any given course, in the sense of reaching the end and achieving a passing grade. 70% of our undergraduates, on the other hand, are the first in their family to take a degree. Many have taken years or even decades out of formal education, and many had poor experiences in school. On average, therefore, they typically have fewer skills in teaching themselves in an academic context (which is a big thing to learn about in and of itself) and we are not able to adapt our teaching to what we cannot perceive, so we are of little assistance either. Without the shared physical context, we can only guess and anticipate when and where they might be learning, and we seldom have the faintest idea how it occurs, save through sparse digital signals that they leave in discussion forums or submitted assignments, or coarse statistics based on web page views. In a few undergraduate core courses within my faculty it is therefore no surprise that the success rates are less than 30%, and (on average) only about half of all our students are successful, with rates that improve dramatically in more senior level courses. 
The vast majority of those who get to the end pass. Most who don’t succeed drop out. It doesn’t take many core courses with success rates of 30% to eliminate nearly 95% of students by the end of a program.
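The compounding arithmetic behind that claim is easy to check. A back-of-envelope sketch (hypothetical figures, assuming courses act as independent filters with the 30% success rate mentioned above, which real courses are not):

```python
# Compounded attrition: if each core course passes only 30% of those
# who attempt it, the surviving fraction shrinks geometrically.
def surviving_fraction(pass_rate, n_courses):
    """Fraction of students still progressing after n such courses."""
    return pass_rate ** n_courses

for n in (1, 2, 3):
    s = surviving_fraction(0.30, n)
    print(f"after {n} course(s): {s:.1%} remain, {1 - s:.1%} eliminated")
```

Two such courses eliminate 91% of a cohort and three eliminate over 97%, so “nearly 95%” falls somewhere between two and three courses in this simplified model.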
Teaching with a context
We can better deal with this if we let go of the illusion that we can be in control and, at the same time, find better ways to stay close: to make the learning process, including the environment in which it occurs, as visible as possible. It is emphatically not about capturing digital traces and using analytics to reveal patterns. Though such techniques can have a place in helping to build a picture of how learners are responding to our deliberate acts of teaching, they are not even close to a solution for understanding learners in context. Most learning analytics and adaptive systems are McNamara Machines, blind to most of what matters. There’s a huge risk that we start by measuring the easily measurable then wind up not just ignoring but implicitly denying that the things we cannot measure are important. Yes, it might help us to help students who are going to get to the end anyway to get better grades, but it tells us very little about (for instance) how they are learning, what obstacles they face, or how we could help them orchestrate their learning in the contexts in which they live. Could generative AI help with that? I think it might. In conversation, an AI agent could ask leading questions, could recommend things to do with the space, could aggregate and report back on how and where students seem to be learning. Unlike traditional adaptive systems, generative AI can play an active discovery role and make broader connections that have not been scripted. However, this is not and should not be a substitute for an actual teacher: rather, it should mediate between humans, amplifying and feeding back, not guiding or informing.
For the most part, though, I think the trick is to use pedagogical designs that are made to support flexibility, that encourage learners to connect with the spaces they live in and the people they share them with, that support them in understanding the impact of the environments they are in and that, as much as possible, incorporate conduits that make it likely that participants will share information about their contexts and what they are doing in them, such as through reflective learning diaries, shared videos or audio, or introductory discussions intended to elicit that information. A good trick that I’ve used in the past, for example, is to ask students to send virtual postcards showing where they are and what they have been doing (nowadays a microblog post might serve a similar role). Similarly, it can be useful to start discussions that seek ideas about how to configure time and space for learning, sharing problems and solutions from the students themselves. Modelling behaviours can help: in my own communications, I try to reveal things about where I am and what I have been doing that provide some context and background story, especially when it relates to how I am changing as a result of our shared endeavours. Building social interaction opportunities into every inhabited virtual space would help a lot, making it more likely that students will share more of what they are doing and increasing awareness of both the presence and the non-presence (the difference in context) of others. Learning management systems are almost universally utter rubbish for that, typically relegating interactions to controlled areas of course sites and encouraging instrumental and ephemeral discussions that largely ignore context. We need more, more pervasively, and we need better.
None of this will replicate the rich, shared environments of in-person learning, and that is not the point. This is about acknowledging the differences in online and distance learning and building different orchestrations around them. On the whole, the independence of distance students is an extremely good thing, with great motivational benefits, not to mention convenience, much lower environmental harm, exploitable diversity, and many other valuable features that are hard to reproduce in person. When it works, it works very well. We just need to make it work better for those for whom that is not enough. To do that, we need to understand the whole assembly, not just the pieces we provide.
Here are the slides from a talk I gave earlier today, hosted by George Siemens and his fine team of people at Human Systems. Terry Anderson helped me to put the slides together, and offered some great insights and commentary after the presentation but I am largely to blame for the presentation itself. Our brief was to talk about sets, nets and groups, the theme of our last book Teaching Crowds: learning and social media and much of our work together since 2007 but, as I was the one presenting, I bent it a little towards generative AI and my own intertwingled perspective on technologies and collective cognition, which is most fully developed (so far) in my most recent book, How Education Works: Teaching, Technology, and Technique. If you’re not familiar with our model of sets, nets, groups and collectives, there’s a brief overview on the Teaching Crowds website. It’s a little long in the tooth but I think it is still useful and will help to frame what follows.
A recreation of the famous New Yorker cartoon, “On the Internet no one knows you are a dog” – but it is a robot dog
The key new insight that appears for the first time in this presentation is that, rather than being a fundamental social form in their own right, groups consist of technological processes that make use of and help to engender/give shape to the more fundamental forms of nets and sets. At least, I think they do: I need to think and talk some more about this, at least with Terry, and work it up into a paper, but I haven’t yet thought through all the repercussions. Even back when we wrote the book I always thought of groups as technologically mediated entities but it was only when writing these slides in the light of my more recent thinking on technology that I paid much attention to the phenomena that they actually orchestrate in order to achieve their ends. Although there are non-technological prototypes – notably in the form of families – these are emergent rather than designed. The phenomena that intentional groups primarily orchestrate are those of networks and sets, which are simply configurations of humans and their relationships with one another. Modern groups – in a learning context, classes, cohorts, tutorial groups, seminar groups, and so on – are designed to fulfill more specific purposes than their natural prototypes, and they are made possible by technological inventions such as rules, roles, decision-making processes, and structural hierarchies. Essentially, the group is a purpose-driven technological overlay on top of more basic social forms. It seems natural, much as language seems natural, because it is so basic and fundamental to our existence and how everything else works in human societies, but it is an invention (or many inventions, in fact) as much as wheels and silicon chips.
Groups are among the oldest and most highly evolved of human technologies and they are incredibly important for learning, but they have a number of inherent flaws and trade-offs/Faustian bargains, notably in their effects on individual freedoms, in scalability (mainly achieved through hierarchies), in sometimes unhealthy power dynamics, and in limitations they place on roles individuals play in learning. Modern digital technologies can help to scale them a little further and refine or reify some of the rules and roles, but the basic flaws remain. However, modern digital technologies also offer other ways of enabling sets and networks of people to support one another’s learning, from blogs and mailing lists to purpose-built social networking systems, from Wikipedia and Academia.edu to Quora, in ways that can (optionally) integrate with and utilize groups but that differ in significant ways, such as in removing hierarchies, structuring through behaviour (collectives) and filtering or otherwise mediating messages. With some exceptions, however, the purposes of large-scale systems of this nature (which would provide an ideal set of phenomena to exploit) are not usually driven by a need for learning, but by a need to gain attention and profit. Facebook, Instagram, LinkedIn, X, and others of their ilk have vast networks to draw on but few mechanisms that support learning and limited checks and balances for reliability or quality when it does occur (which of course it does). Most of their algorithmic power is devoted to driving engagement, and the content and purpose of that engagement only matters insofar as it drives further engagement. Up to a point, trolls are good for them, which is seldom if ever true for learning systems. Some – Wikipedia, the Khan Academy, Slashdot, Stack Exchange, Quora, some SubReddits, and so on – achieve both engagement and intentional support for learning. 
However, they remain works in progress in the latter regard, being prone to a host of ills from filter bubbles and echo chambers to context collapse and the Matthew Effect, not to mention intentional harm by bad actors. I’ve been exploring this space for approaching 30 years now, but there remains almost as much scope for further research and development in this area as there was when I began. Though progress has been made, we have yet to figure out the right rules and structures to deal with a great many problems, and it is increasingly difficult to slot the products of our research into an increasingly bland, corporate online space dominated by a shrinking number of bland, centralized learning management systems that continue to refine their automation of group processes and structures and, increasingly, to ignore the sets and networks on which they rely.
With that in mind, I see big potential benefits for generative AIs – the ultimate collectives – as supporters and enablers for crowds of people learning together. Generative AI provides us with the means to play with structures and adapt in hitherto impossible ways, because the algorithms that drive their adaptations are indefinitely flexible, the reified activities that form them are vast, and the people who participate in them play an active role in adjusting and forming their algorithms (not the underpinning neural nets but the emergent configurations they take). These are significant differences from traditional collectives, which tend to have one purpose and algorithm (typically complex but deterministic), such as returning search results or driving engagement with network interactions. I also see a great many potential risks, about which I have written fairly extensively of late, most notably in their playing soft orchestral roles in the assembly that replace the need for humans to learn to play them. We tread a fine line between learning utopia and learning dystopia, especially if we try to overlay them on top of educational systems that are driven by credentials. Credentials used to signify a vast range of tacit knowledge and skills that were never measured, and (notwithstanding a long tradition of cheating) that was fine as long as nothing else could create those signals, because they were serviceable proxies. If you could pass the test or assignment, it meant that you had gone through the process and learned a lot more than what was tested. This has been eroded for some time, abetted by social media like Course Hero or Chegg that remain quite effective ways of bypassing the process for those willing to pay a nominal sum and accept the risk.
Now that generative AI can do the same at considerably lower cost, with greater reliability and lower risk, and without anyone having to go through the process, credentials no longer make good signifiers and, anyway (playing Devil’s advocate), it remains unclear to what extent those soft, tacit skills are needed now that generative AIs can achieve them so well. I am much encouraged by the existence of Paul LeBlanc’s lab initiative, the fact that George is its chief scientist, its intent to enable human-centred learning in an age of AI, and its aspiration to reinvent education to fit. We need such endeavours. I hope they will do some great things.
A Turkish university candidate was recently arrested after being caught using an AI-powered system to obtain answers to the entrance exam in real-time.
The candidate used a simple and rather obvious set-up: a camera disguised as a shirt button that was used to read the questions, a router hidden in a hollowed-out shoe linking to a stealthily concealed mobile device that queried a generative AI (likely ChatGPT-powered) that fed the answers back verbally to an in-ear Bluetooth earpiece. Constructing such a thing would take a little ingenuity but it’s not rocket science. It’s not even computer science. Anyone could do this. It would take some skill to make it work well, though, and that may be the reason this attempt went wrong. The candidate was caught as a result of their suspicious behaviour, not because anyone directly noticed the tech. I’m trying to imagine the interface: how the machine would know which question to answer (did the candidate have to point their button in the right direction?), how they dealt with dictating the answers at a usable speed (what if they needed something to be repeated? Did they have to tap a microphone a number of times?), how they managed sequence and pacing (sub-vocalization? moving in a particular way?). These are soluble problems but they are not trivial, and skill would be needed to make the whole thing seem natural.
It may take a little while for this to become a widespread commodity item (and a bit longer for exam-takers to learn to use it unobtrusively), but I’m prepared to bet that someone is working on it, if it is not already available. And, yes, exam-setters will come up with a counter-technology to address this particular threat (scanners? signal blockers? forcing students to strip naked?) but the cheats will be more ingenious, the tech will improve, and so it will go on, in an endless and unwinnable arms race.
Very few people cheat as a matter of course. This candidate was arrested – exam cheating is against the law in Turkey – for attempting to solve the problem they were required to solve, which was to pass the test, not to demonstrate their competence. The level of desperation that led to them adopting such a risky solution to the problem is hard to imagine, but it’s easy to understand how high the stakes must have seemed and how strong the incentive to succeed must have been. The fact that, in most societies, we habitually inflict such tests on both children and adults, on an unimaginably vast scale, will hopefully one day be seen as barbaric, on a par with beating children to make them behave. They are inauthentic, inaccurate, inequitable and, most absurdly of all, a primary cause of the problem they are designed to solve. We really do need to find a better solution.
Note on the post title: the student was caught so, as some have pointed out, it would be an exaggeration to say that this one case is proof that proctored exams have fallen to generative AI, but I think it is a very safe assumption that this is not a lone example. This is a landmark case because it provides the first direct evidence that this is happening in the wild, not because it is the first time it has ever happened.
Many thanks, too, to Junhong for sending me the printed version that arrived today, smelling deliciously of ink. I hardly ever read anything longer than a shopping bill on paper any more but there is something rather special about paper that digital versions entirely lack. The particular beauty of a book or journal written in a language and script that I don’t even slightly understand is that, notwithstanding the ease with which I can translate it using my phone, it largely divorces the medium from the message. Even with translation tools my name is unrecognizable to me in this: Google Lens translates it as “Jon Delong”. Although I know it contains a translation of my own words, it is really just a thing: the signs it contains mean nothing to me, in and of themselves. And it is a thing that I like, much as I like the books on my bookshelf.
I am not alone in loving paper books, a fact that owners of physical copies of my most recent book (which can be read online for free but that costs about $CAD40 on paper) have had the kindness to mention, e.g. here and here. There is something generational in this, perhaps. For those of us who grew up knowing no other reading medium than ink on paper, there is comfort in the familiar, and we have thousands (perhaps millions) of deeply associated memories in our muscles and brains connected with it, made more precious by the increasing rarity with which those memories are reinforced by actually reading them that way. But I doubt that my grandchildren, at least, will lack that attachment. While they do enjoy and enthusiastically interact with text on screens, from before they were able to accurately grasp them they have been exposed to printed books, and have loved some of them as much as I did at the same ages.
It is tempting to think that our love of paper might simply be because we don’t have decent e-readers, but I think there is more to it than that. I have some great e-readers in many sizes and types, and I do prefer some of them to read from, for sure: backlighting when I need it, robustness, flexibility, the means to see it in any size or font that works for me, the simple and precise search, the shareable highlights, the lightness of (some) devices, the different ways I can hold them, and so on, make them far more accessible. But paper has its charms, too. Most obviously, something printed on paper is a thing to own whereas, on the whole, a digital copy tends to just be a licence to read, and ownership matters. I won’t be leaving my e-books to my children. The thingness really matters in other ways, too. Paper is something to handle, something to smell. Pages and book covers have textures – I can recognize some books I know well by touch alone. It affects many senses, and is more salient as a result. It takes up room in an environment so it’s a commitment, and so it has to matter, simply because it is there; a rivalrous object competing with other rivalrous objects for limited space. Paper comes in fixed sizes that may wear down but will never change: it thus keeps its shape in our memories, too. My wife has framed occasional pages from my previously translated work, elevating them to art works, decoupled from their original context, displayed with the same lofty reverence as pages from old atlases. Interestingly, she won’t do that if it is just a printed PDF: it has to come from a published paper journal, so the provenance matters. Paper has a history and a context of its own, beyond what it contains. And paper creates its own context, filled with physical signals and landmarks that make words relative to the medium, not abstractions that can be reflowed, translated into other languages, or converted into other media (notably speech).
The result is something that is far more memorable than a reflowable e-text. Over the years I’ve written a little about this here and there, and elsewhere, including a paper on the subject (ironically, a paper that is not itself available on paper), describing an approach to making e-texts more memorable.
After reaching a slightly ridiculous peak in the mid-2000s, and largely as a result of a brutal culling that occurred when I came to Canada nearly 17 years ago, my paper book collection has now diminished enough to fit easily into a single and not particularly large free-standing IKEA shelving unit. The survivors are mostly ones I might want to refer to or read again, and losing some of them would sadden me a great deal, but I would (perhaps) run into a burning building to save only a few, including, for instance:
A dictionary from 1936, bound in leather by my father and used in countless games of Scrabble and spelling disputes when I was a boy, and that was used by my whole family to look up words at one time or another.
My original hardback copy of The Phantom Tollbooth (I have a paperback copy for lending), that remains my favourite book of all time, that was first read to me by my father, and that I have read myself many times at many ages, including to my own children.
A boxed set of the complete Chronicles of Narnia, that I chose as my school art prize when I was 18 because the family copies had become threadbare (read and abused by me and my four siblings), and that I later read to my own children. How someone with very limited artistic skill came to win the school art prize is a story for another time.
A well-worn original hardback copy of Harold and the Purple Crayon (I have a paperback copy for lending) that my father once displayed for children in his school to read, with the admonition “This is Mr Dron’s book. Please handle with care” (it was not – it was mine).
A scribble-filled, bookmark-laden copy of Kevin Kelly’s Out of Control that strongly influenced my thinking when I was researching my PhD and that still inspires me today. I can remember exactly where I sat when I made some of the margin notes.
A disintegrating copy of Storyland, given to me by my godmother in 1963 and read to me and by me for many years thereafter. There is a double value to this one because we once had two copies of this in our home: the other belonged to my wife, and was also a huge influence on her at similar ages.
These books proudly wear their history and their relationships with me and my loved ones in all their creases, coffee stains, scuffs, and tattered pages. To a greater or lesser extent, the same is true of almost all of the other physical books I have kept. They sit there as a constant reminder of their presence – their physical presence, their emotional presence, their social presence and their cognitive presence – flitting by in my peripheral vision many times a day, connecting me to thoughts and inspirations I had when I read them and, often, with people and places connected with them. None of this is true of my e-books. Nor is it quite the same for other objects of sentimental value, except perhaps (and for very similar reasons) the occasional sculpture or picture, or some musical instruments. Much as I am fond of (say) baby clothes worn by my kids or a battered teddy bear, they are little more than aides-mémoire for other times and other activities, whereas the books (and a few other objects) latently embody the experiences themselves. If I opened them again (and I sometimes do) it would not be the same experience, but it would enrich and connect with those that I already had.
I have hundreds of e-books that are available on many devices, one of which I carry with me at all times, not to mention an Everand (formerly Scribd) account with a long history, as well as a long and mostly lost history of library borrowing, and I have at least a dozen devices on which to read them, from a 4-inch e-ink reader to a 32-inch monitor and much in between, but my connection with those is far more limited and transient. It is still more limited for books that are locked to a certain duration through DRM (which is one reason DRM is the scum of the earth). When I look at my devices and open the various reading apps on them I do see a handful of book covers, usually those that I have most recently read, but that is too fleeting and volatile to have much value. And when I open them they don’t fall open on well-thumbed pages. The text is not tangibly connected with the object at all.
As well as smarter landmarks within them, better ways to make e-books more visible would help, which brings me to the real point of this post. For many years I have wanted to paper a wall or two with e-paper (preferably in colour) on which to display e-book covers, but the costs are still prohibitive. It would be fun if the covers would become battered with increasing use, showing the ones that really mattered, and maybe dust could settle on those that were never opened, though it would not have to be so skeuomorphic – fading would work, or glyphs. They could be ordered manually or by (say) reading date, title, author, or subject. Perhaps touching them or scanning a QR code could open them. I would love to get a research grant to do this but I don’t think asking for electronic wallpaper in my office would fly with most funding sources, even if I prettied it up with words like “autoethnography”, and I don’t have a strong enough case, nor can I think of a rigorous enough research methodology, to try it in a larger study with other people. Well. Maybe I will try some time. Until the costs of e-paper come down much further, it is not going to be a commercially viable product, either, though prices are now low enough that it might be possible to do it in a limited way with a poster-sized display for a (very) few thousand dollars. It could certainly be done with a large screen TV for well under $1000 but I don’t think a power-hungry glowing screen would be at all the way to go: the value would not be enough to warrant the environmental harm or energy costs, and something that emitted light would be too distracting. I do have a big monitor on my desk, though, which is already glowing away, so it would be no worse to add a background to it showing e-book covers or spines. I could easily do this as a static image or slideshow, but I’d rather have something dynamic.
It shouldn’t be too hard to extract the metadata from my list of books, swipe the images from the Web or the e-book files, and show them as a backdrop (a screensaver would be trivial). It might even be worth extending this to papers and articles I have read. I already have Pocket open most of the time, displaying web pages that I have recently read or want to read (serving a similar purpose for short-term recollection), and that could be incorporated in this. I think it would be useful, and it would not be too much work to do it – most of the important development could be done in a day or two. If anyone has done this already or feels like coding it, do get in touch!
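In case it tempts anyone, here is a minimal sketch of the metadata-extraction step, using only the Python standard library. An EPUB is just a zip archive whose OPF package document lists a manifest, so no e-book library is strictly needed; the function names here are my own invention, and real-world EPUBs are messy enough that a robust version would need more fallbacks than this.

```python
# Dig the cover image out of an EPUB. The OPF package document (located via
# META-INF/container.xml) lists a manifest; EPUB 3 flags the cover with
# properties="cover-image", while EPUB 2 instead names the cover item's id
# in a <meta name="cover" content="item-id"/> element.
import zipfile
import xml.etree.ElementTree as ET

CONTAINER_NS = "{urn:oasis:names:tc:opendocument:xmlns:container}"
OPF = "{http://www.idpf.org/2007/opf}"

def find_opf_path(zf):
    """Read container.xml to discover where the OPF package document lives."""
    root = ET.fromstring(zf.read("META-INF/container.xml"))
    return root.find(f".//{CONTAINER_NS}rootfile").attrib["full-path"]

def extract_cover(epub_path):
    """Return the raw bytes of the cover image, or None if none is declared."""
    with zipfile.ZipFile(epub_path) as zf:
        opf_path = find_opf_path(zf)
        opf_dir = opf_path.rsplit("/", 1)[0] if "/" in opf_path else ""
        opf = ET.fromstring(zf.read(opf_path))
        items = opf.findall(f"{OPF}manifest/{OPF}item")
        cover_href = None
        # EPUB 3: a manifest item carries properties="cover-image"
        for item in items:
            if "cover-image" in item.attrib.get("properties", ""):
                cover_href = item.attrib["href"]
                break
        # EPUB 2 fallback: metadata names the cover item by its id
        if cover_href is None:
            for meta in opf.iter(f"{OPF}meta"):
                if meta.attrib.get("name") == "cover":
                    wanted = meta.attrib["content"]
                    for item in items:
                        if item.attrib.get("id") == wanted:
                            cover_href = item.attrib["href"]
        if cover_href is None:
            return None
        # Manifest hrefs are relative to the OPF document's own directory
        member = f"{opf_dir}/{cover_href}" if opf_dir else cover_href
        return zf.read(member)
```

Tiling the resulting images into a backdrop or screensaver would then just be a matter of compositing them with an imaging library, ordered by whatever reading metadata one cared to track.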
A week or so ago, early (for me) on a Monday morning, Professor David Webster and I had a conversation about generative AI, which was recorded as the first of a podcast series on the topic, hosted by the University of Liverpool. Here is that podcast. In it we explore both the darker and the more optimistic aspects of genAI, in a pleasantly rambling discussion that, surprisingly, lasted for about an hour.
I hadn’t spoken with Dave since a conference in Hawaii well over a decade ago, long before we became full professors or were elevated to loftier roles in our respective institutions, but it felt like we were just continuing the conversations we had back then. The only things missing were a cold beer, swaying palm trees, and the sound of ukuleles drifting in the warm breeze. Well, that and a 6.5 earthquake that took out the power for a day and made the conference a lot more memorable than it otherwise might have been. This conversation was a lot less earth-shattering but it was just as enjoyable.
I somehow missed this when it was first posted, despite fairly avidly following OLDaily and keeping my eyes wide open for commentary on How Education Works. My only excuse is that I was travelling the day this was posted, and it was a hectic few days after that.
I’m very pleased that Stephen has some nice things to say about the book, and that he picks up on the fact that it is indeed as much about technology (and our deep, intrinsic intertwingularity with it) as it is about education. Absolutely.
I’m quite attached to the soft-hard metaphor that Stephen is lukewarm about but only, as he hints, because of what it implies about the dynamics of technology. When I started writing the book I used to talk a bit simplistically of soft and hard technologies. I still think that can be a useful distinction and there’s still plenty on the subject in the book. However, any soft technology can, in assembly, be hardened, and any hard technology can, in assembly, be softened, so it is really just another (I think slightly better) way of thinking about affordances of technologies, not about the technologies as they are assembled. For similar reasons, it is only slightly less fuzzy than existing theories of affordances, offering a framework for explaining technologies but not much that is predictive. The thing that led to the first of many rewrites of the book was my growing realization that the more important distinction is between soft and hard technique (the subset of technologies that are enacted by humans). The thing that matters most is the extent to which we are part of a pre-set (hard) orchestration, or we are the orchestrators, in any instantiation of an assembly of technologies. That is a much more precise distinction that both explains and predicts, and it is the basic distinction that (I think) is implicit in most social-constructivist models of technology in society, including Franklin’s distinction between holistic and prescriptive technologies, Boyd’s dominative and liberative technologies, Pinch & Bijker’s interpretive flexibility, and the dynamics of actor-network theory. Understanding the interplay between the rigid and the flexible in any given technology provides us with the means to control what should be controlled, to think about how we are being controlled and, if the hard components lead us down unwanted paths, ways of leaving those paths. 
And, of course, it is primarily technique (soft and hard) that education explicitly seeks to develop, so it gives us a very useful tool for understanding the complex nature of education itself.
You may have heard that the president of Athabasca University, Peter Scott, was replaced yesterday with Alex Clark, erstwhile Dean of the Faculty of Health Disciplines at AU.
This was a complete surprise to everyone at AU (apart from Alex), very much including Peter. None of the members of the executive team, including the provost, knew of it in advance. I gather that the secret was kept even from academic members of the Board of Governors: it was, it seems, presented to them as a done deal, on the day it happened. From the reactions I saw when it was announced, student board members may not even have known about it until that point. It was therefore – presumably – voted on in secret by the unholy cabal of governors who were appointed by the minister of advanced education last year, after the rest were sacked or forced to resign, and who make up the majority of the board. Essentially, Minister Nicolaides just fired our president.
The same seems to be true for the hiring of our new president. Although Alex had been a strong candidate when Peter got the job, and he is well qualified for the role, there are some serious questions to be asked about the appointment process, in which it appears that none of those voting had any involvement in the original appointment, no one asked the opinions of academics on the original hiring committee, and no one even asked the opinions of the academics on the board itself. This, like Peter’s dismissal, can only be seen as a political hire. And it is not an interim appointment, unlike that of his successor as Dean of FHD.
Peter was fired over the phone (ironic that this was done virtually by those who oppose our virtual strategy) without notice or explanation. The timing of his firing, a few days after an agreement was signed that, despite the Albertan government’s best efforts, has largely been seen by the press as a win for Peter (it was a loss, but a manageable loss), seems hardly coincidental. When all else failed, they stabbed him in the back when he was as down as anyone could be. Peter had in fact been away thanks to the sudden death of his wife, that occurred very shortly after her diagnosis with cancer at the end of last year. She had been buried abroad, 8 working days before he was fired. It is hard to imagine how he is feeling right now, but tears well up just thinking about it. All of this was well known to the board and to the minister. The moment was chosen with intent and malice. This was monstrous in the extreme.
It should have been so very different.
When Peter came to AU, not much more than a year ago, I cried tears of happiness. This was the leader we needed at the time we needed him: a brilliant, dynamic, imaginative, compassionate, principled man who had played a key role as a leader in transforming not just his prior institutions but the field of online and distance learning itself. Now, I cry tears of anger, outrage, and sadness. Peter could have transformed the university into something magnificent, and I believe he would have done so were it not for the utterly outrageous behaviour of the Albertan government. They fomented the union unrest into which Peter was thrust from the moment he arrived and then, over the last year, have outrageously and heavy-handedly directly meddled in the university’s affairs, against which Peter rightly and courageously fought. Peter’s assumption was, perhaps, that Alberta was like most of the rest of the world in recognizing academic freedoms, autonomy, and rights as sacrosanct. I don’t think he fully realized, at that point, that Alberta is not like that. It has a philistine government run by corrupt little despots, sponsored by corporations whose main activity is violence against the planet (this applies to most of the board of governors, as it happens). Going up against the Albertan government and, especially, appearing in the eyes of the world to win the fight, is like going up against a particularly nasty, stupid, and vindictive gang of playground bullies. Peter never had a chance to focus on the things he needed to focus on, because he was being pummelled on all sides by thugs the entire time he was with us.
Whatever happens next, AU will not be the university it could have been. The government has forced us to make 15% cuts this year, and we were already too close to the bone, cutting into it in places. We have already lost a good portion of the best executive team ever to lead us and we are very likely to lose more. The government-appointed governors, none of whom have the slightest understanding of our institution, have shown themselves to be nothing but lackeys for a morally bankrupt and abhorrent minister, willing to stop at nothing to achieve ends that have nothing to do with the well-being of the university. The union’s actions, that were deeply divisive and at least partly engineered by the government, continue to divide us. The half-hearted, hasty, and poorly implemented near-virtual plan (that was in progress before Peter’s arrival and that played a major role in the union strife) continues to cause major problems, most notably failing to address communication needs, so dividing us further. Perhaps most challengingly, we are half way through the biggest transformation that has ever occurred in the university’s history, from which we are unable to back away without enormous cost, but with a diminishing number of leaders and champions who can make it happen. Now we have a president who was (at least in part) chosen because of his willingness to live in Athabasca, which is a truly terrible idea about which I have written extensively in the past. I wish him well, but he will face a steep uphill struggle building trust among many of the staff who feel betrayed by the government’s despicable actions and the shady circumstances leading to his being hired, about which speculation is now rife, within and beyond the university. We are all in a state of shock and dismay right now. None of us feel any sense of security. Many of us are talking about leaving or preparing to leave.
For one fleeting moment, as the war with the government seemed to have been more or less resolved towards the end of last year, I felt great hope for the future of the university I have loved this past 15 years. My hopes are greatly diminished today. Nothing can repair all the harm that has been done. Our greatest hope now is that there will be a new government that is willing to help to reverse at least some of the damage. The Albertan elections are not far off. If you live in Alberta, don’t forget what this government has done. You could be next.
And, Peter, if you are reading this: you will be very much missed. I know that I speak on behalf of almost all of us here at AU when I say that our hearts go out to you.