Here are the slides from a talk I just gave to a group of grad students at AU in our ongoing seminar series, on the nature of collectives and ways we can use and abuse them. It’s a bit of a sprawl, covering some 30-odd years of a particularly geeky, semi-philosophical branch of my research career (not much on learning and teaching in this one, but plenty of termites) and winding up with very much a work in progress. I rushed through it at the end of a very long day/week/month/year/life, but I hope someone may find it useful!
This is the abstract:
“Collective intelligence” (CI) is a widely used but fuzzy term that can mean anything from the behaviour of termites, to the ability of an organization to adapt to a changing environment, to the entire human race’s capacity to think, to the ways that our individual neurons give rise to cognition. Common to all, though, is the notion that the combined behaviours of many independent agents can lead to positive emergent changes in the behaviour of the whole and, conversely, that the behaviour of the whole leads to beneficial changes in the behaviours of the agents of which it is formed. Many social computing systems, from Facebook to Amazon, are built to enable or to take advantage of CI. Here I define social computing systems as digital systems that have no value unless they are used by at least two participants, and in which those participants play significant roles in affecting one another’s behaviour. This is a broad definition that embraces Google Search as much as email, wikis, and blogs, and in which the behaviour of humans and the surrounding structures and systems they belong to are at least as important as the algorithms and interfaces that support them. Unfortunately, the same processes that lead to the wisdom of crowds can at least as easily result in the stupidity of mobs, including phenomena like filter bubbles and echo chambers that may be harmful in themselves or that render systems open to abuses such as trolling, disinformation campaigns, vote brigading, and successful state manipulation of elections. If we can build better models of social computing systems, taking into account their human and contextual elements, then we stand a better chance of avoiding their harmful effects and of using them for good. To this end I have coined the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning “multitude”, and τέκτων (tektōn), meaning “builder”. In this seminar I will identify some of the main ochlotectural elements that contribute to collective intelligence, describe some of the ways it can be undermined, and explore some of the ramifications as they relate to social software design and management.
I’m proud to be the 7th of 47 authors on this excellent new paper, led by the indefatigable Aras Bozkurt and featuring some of the most distinguished contemporary researchers in online, open, mobile, distance, e- and [insert almost any cognate sub-discipline here] learning, as well as a few of us hanging on their coat tails like me.
As the title suggests, it is a manifesto: it makes a series of statements (divided into 15 positive and 20 negative themes) about what is or what should be, and it is underpinned by a firm set of humanist pedagogical and ethical attitudes that are anything but neutral. What makes it interesting to me, though, can mostly be found in the critical insights that accompany each theme, which capture a little of the complexity of the discussions that led to them and add a lot of nuance. The research methodology, a modified and super-iterative Delphi design in which all participants are also authors, is, I think, an incredibly powerful approach to research in the technology of education (broadly construed), providing rigour and accountability without succumbing to science-envy.
Notwithstanding the lion’s share of the work of leading, assembling, editing, and submitting the paper being taken on by Aras and Junhong, it was a truly collective effort, so I have very little idea what percentage of it could be described as my work: we were thinking and writing together. Being a part of that was a fantastic learning experience for many of us, one that stretched the limits of what can be done with tracked changes and comments in a Google Doc, with contributions coming in at all times of day and night, from just about every timezone, over weeks. The depth and breadth of dialogue was remarkable, as much an organic process of evolution and emergence as intelligent design, and one in which the document itself played a significant participant role. I felt a strong sense of belonging, not so much as part of a community but as part of a connectome.
For me, this epitomizes what learning technologies are all about. It would be difficult, if not impossible, to do this in an in-person setting: even if the researchers worked together on an online document, the simple fact that they had met in person would utterly change the social dynamics, the pacing, and the structure. Indeed, even online, replicating this in a formal institutional context would be very difficult, because of the power relationships, assessment requirements, motivational complexities, and artificial schedules that formal institutions add to the assembly. This was an online-native way of learning of a sort I aspire to but seldom achieve in my own teaching.
The paper offers a foundational model or framework on which to build or situate further work, as well as providing a moderately succinct summary of a very significant percentage of the issues relating to generative AI and education as they exist today. Even if it were only ever cited by its own 47 authors it would get more citations than most of my papers, but the paper is highly citeable in its own right, whether you agree with its statements or not. I know I am biased but, if you’re interested in the impacts of generative AI on education, I think it is a must-read.
The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future
Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., … Asino, T. I. (2024). The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis, 16(4), 487–513. https://doi.org/10.55982/openpraxis.16.4.777
Full list of authors:
Aras Bozkurt
Junhong Xiao
Robert Farrow
John Y. H. Bai
Chrissi Nerantzi
Stephanie Moore
Jon Dron
Christian M. Stracke
Lenandlar Singh
Helen Crompton
Apostolos Koutropoulos
Evgenii Terentev
Angelica Pazurek
Mark Nichols
Alexander M. Sidorkin
Eamon Costello
Steven Watson
Dónal Mulligan
Sarah Honeychurch
Charles B. Hodges
Mike Sharples
Andrew Swindell
Isak Frumin
Ahmed Tlili
Patricia J. Slagter van Tryon
Melissa Bond
Maha Bali
Jing Leng
Kai Zhang
Mutlu Cukurova
Thomas K. F. Chiu
Kyungmee Lee
Stefan Hrastinski
Manuel B. Garcia
Ramesh Chander Sharma
Bryan Alexander
Olaf Zawacki-Richter
Henk Huijser
Petar Jandrić
Chanjin Zheng
Peter Shea
Josep M. Duart
Chryssa Themeli
Anton Vorochkov
Sunagül Sani-Bozkurt
Robert L. Moore
Tutaleni Iita Asino
Abstract
This manifesto critically examines the unfolding integration of Generative AI (GenAI), chatbots, and algorithms into higher education, using a collective and thoughtful approach to navigate the future of teaching and learning. GenAI, while celebrated for its potential to personalize learning, enhance efficiency, and expand educational accessibility, is far from a neutral tool. Algorithms now shape human interaction, communication, and content creation, raising profound questions about human agency and biases and values embedded in their designs. As GenAI continues to evolve, we face critical challenges in maintaining human oversight, safeguarding equity, and facilitating meaningful, authentic learning experiences. This manifesto emphasizes that GenAI is not ideologically and culturally neutral. Instead, it reflects worldviews that can reinforce existing biases and marginalize diverse voices. Furthermore, as the use of GenAI reshapes education, it risks eroding essential human elements—creativity, critical thinking, and empathy—and could displace meaningful human interactions with algorithmic solutions. This manifesto calls for robust, evidence-based research and conscious decision-making to ensure that GenAI enhances, rather than diminishes, human agency and ethical responsibility in education.
Free-to-register international online symposium, December 5th, 2024, 12–3pm PST
This is going to be an important symposium, I think.
I will be taking 3 very precious hours out of my wedding anniversary to attend, though in fairness that was unintentional: I did not do the timezone conversion when I submitted my paper, so I thought it was the next day. However, I have not cancelled, despite the potentially dire consequences, partly because the line-up of speakers is wonderful, partly because we all use the words “collective intelligence” (CI) but we come from diverse disciplinary areas and sometimes mean very different things by them (so there will be some potentially inspiring conversations), and partly for a bigger reason that I will get to at the end of this post. You can read abstracts and most of the position papers on the symposium website.
In my own position paper I have invented the term ochlotecture (from the Classical Greek ὄχλος (ochlos), meaning something like “multitude” and τέκτων (tektōn) meaning “builder”) to describe the structures and processes of a collection of people, whether it be a small seminar group, a network of researchers, or a set of adherents to a world religion. An ochlotecture includes elements like names, physical/virtual spaces, structural hierarchies, rules, norms, mythologies, vocabularies, and purposes, as well as emergent phenomena occurring through individual and subgroup interactions, most notably the recursive cycle of information capture, processing, and (re)presentation that I think characterizes any CI. Through this lens, I can see both what is common and what distinguishes the different kinds of CI described in these position papers a bit more clearly. In fact, my own use of the term has changed a few times over the years so it helps me make sense of my own thoughts on the matter too.
Where I’ve come from that leads me here
I have been researching CI and education for a long time. Initially, I used the term very literally, to describe something distinct from individual intelligence and largely independent of it. My PhD, started in 1997, was inspired by the observation that (even then) there were at least tens of thousands of very good resources (people, discussions, tutorials, references, videos, courseware, etc.) openly available on the Web to support learners in most subject areas, enough to meet almost any conceivable learning need. The problem was, and remains, how to find the right ones. These were pre-Google times, but even the good-Google of olden days (a classic application of collective intelligence as I was using the term) only showed the most implicitly popular resources, not those that would best meet a particular learner’s needs. As a novice teacher, I also observed that, in a typical classroom, the students’ combined knowledge, and their ability to seek more of it, far exceeded my own. I therefore hit upon the idea of using a nature-inspired evolutionary approach to collectively discover and recommend resources, which led me very quickly into the realm of evolutionary theory and thence to the dynamics of self-organizing systems, complex adaptive systems, stigmergy, flocking, city planning, markets, and collective intelligence.
And so I became an ochlotect. I built a series of self-organizing social software systems that used stuff like social navigation (stigmergy), evolutionary, and flocking algorithms to create environments that both shaped and were shaped by the crowd. Acknowledging that “intelligence” is a problematic word, I simply called these collectives, a name inspired by Star Trek TNG’s Borg (the pre-Borg-Queen Borg, before the writers got bored or lazy). The intelligence of a “pure” collective as I conceived it back then was largely to be found in the algorithm, not the individual agents. Human stock markets are no smarter than termite mounds by this way of thinking (and they are not). I was trying to amplify the intelligence of crowds while avoiding the stupidity of mobs by creating interfaces and algorithms that made value to learners a survival characteristic. I was building systems that played some of the roles of a teacher but that were powered by collectives consisting of learners. Some years later, Mark Zuckerberg hit on the idea of doing the exact opposite, with considerably greater success, making a virtue out of systems that amplified collective stupidity, but the general principles behind both EdgeRank and my algorithms were similar.
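For the curious, here is a minimal sketch in Python of the stigmergic core of that idea – an illustration of the principle, emphatically not the code of any system I actually built, with arbitrary stand-in numbers throughout: endorsements deposit weight on a resource’s trail, evaporation erodes it, and recommendations are sampled in proportion to what survives, so value to learners becomes a survival characteristic.

```python
import random

# A toy stigmergic recommender (an illustrative sketch, not the code of
# any system I actually built). Resources carry a "pheromone" weight:
# endorsements deposit weight, evaporation decays it, and recommendations
# are sampled in proportion to it, so usefulness to the crowd becomes a
# survival characteristic.

EVAPORATION = 0.95  # per-cycle decay rate (arbitrary)
DEPOSIT = 1.0       # weight added per endorsement (arbitrary)

def recommend(weights, k=3):
    """Sample k resources in proportion to their trail strength."""
    items = list(weights)
    return random.choices(items, weights=[weights[i] for i in items], k=k)

def update(weights, endorsed):
    """Evaporate every trail a little, then reinforce the endorsed ones."""
    for r in weights:
        weights[r] *= EVAPORATION
    for r in endorsed:
        weights[r] += DEPOSIT

weights = {f"resource-{n}": 1.0 for n in range(10)}
for _ in range(200):  # each pass stands in for one learner's visit
    shown = recommend(weights)
    # Stand-in for real learner feedback: resource-0 is genuinely useful,
    # so it gets endorsed more often and its trail survives evaporation.
    liked = [r for r in shown if random.random() < (0.8 if r == "resource-0" else 0.2)]
    update(weights, liked)

print(sorted(weights.items(), key=lambda kv: -kv[1])[:3])
```

The real systems layered evolutionary and flocking dynamics, and a great deal of human context, on top of loops like this one.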
When I say that I “built” systems, though, I mean that I built the software part. I came increasingly to realize that the largest part of all of them was always the human part: what the individuals did, and the surrounding context in which they did it, including the norms, the processes, the rules, the structures, the hierarchies, and everything else that formed the ochlotecture, was intrinsic to their success or failure. Some of those human-enacted parts were as algorithmic as the software environments I provided, and no smarter than those used by termites (e.g. “click on the results from the top of the list or in bigger fonts”), but many others were designed, and played critical roles. This slightly more complex concept of CI played a major supporting role in my first book, providing a grounded basis for the design of social software systems that could support maximal learner control. In it I wound up offering a set of 10 design principles addressing the human, organizational, pedagogical, and technological factors, as well as the emergent collective characteristics, that were prerequisites if social software systems were to evolve to become educationally useful.
Collectives also formed a cornerstone of my work with Terry Anderson over the next decade or so, and our use of the term evolved further. In our first few papers, starting in 2007, we conflated the dynamic process with the individual agents who made it happen: for us back then, a collective was the people and processes (a sort of cross between my original definition and a social configuration the Soviets were once fond of) and so we treated a collective as somewhat akin to a group or a network. Before too long we realized that was dumb and separated these elements out, categorizing three primary social forms (the set, the net, and the group) that could blend, and from which collectives could emerge and interact, as a different kind of ochlotectural entity altogether. This led us to a formal abstract definition of collectives that continues to get the odd citation to this day. We wrote a book about social media and learning in which this abstract definition of collectives figured largely, and designed The Landing to take advantage of it (not well – it was a learning experience). It appears in my position paper, too.
Collectives have come back with a vengeance but wearing different clothes in my work of the last decade, including my most recent book. I am a little less inclined to use the word “collective” now because I have come to understand all intelligence as collective, almost all of it mediated and often enacted through technologies. Technologies are the assemblies we construct from stuff to do stuff, and the stuff that they do then forms some of the stuff from which we construct more stuff to do stuff. A single PC alone, for instance, might contain hundreds of billions of instances of technologies in its assembly. A shelf of books might contain almost as many, not just in words and letters but in the concepts, theories, and models they make. As for the processes of making them, editing them, manufacturing the paper and the ink, printing them, distributing them, reading them, and so on… it’s a massive, constantly evolving, ever-adapting, partly biological system, not far off from natural ecosystems in its complexity, and equally diverse. Every use of a technology is also a technology, from words in your head to flying a space ship, and it becomes part of the stuff that can be organized by yourself or others. Through technique (technologies enacted intracranially), technologies are parts of us and we are parts of them, and that is what makes us smart. Collective behaviour in humans can occur without technologies but what makes it collective intelligence is a technological connectome that grows, adapts, evolves, replicates, and connects every one of us to every other one of us: most of what we think is the direct result of assembling what we and others, stretching back in time and outward in space, have created. The technological connectome continuously evolves as we connect and orchestrate the vast web of technologies in which we participate, creating assemblies that have never occurred the same way twice, maybe thousands of times every day: have you ever even brushed your teeth or eaten a mouthful of cereal exactly the same way twice, in your whole life? Every single one of us is doing this, and quite a few of those technologies magnify the effects, from words to drawing to numbers to writing to wheels to screws to ships to postal services to pedagogical methods to printing to newspapers to libraries to broadcast networks to the Internet to the World Wide Web to generative AI. It is not just how we are able to be individually smart: it is an indivisible part of that smartness. Or stupidity. Whatever. The jury is out. Global warming, widening inequality, war, epidemics of obesity, lies, religious bigotry, famine and many other dire phenomena are a direct result of this collective “intelligence”, as much as Vancouver, the Mona Lisa, and space telescopes. Let’s just stick with “collective”.
The obligatory LLM connection and the big reason I’m attending the symposium
My position paper for this symposium wanders a bit circuitously towards a discussion of the collective nature of large language models (LLMs) and their consequent global impact on our education systems. LLMs are collectives in their own right, with algorithms that are not only orders of magnitude more complex than any of their predecessors but unique to every instantiation, operating from and on vast datasets, presenting results to users who also feed those datasets. This is what makes them capable of very convincingly simulating both the hard (inflexible, correct) and the soft (flexible, creative) technique of humans, which is both their super-power and the cause of the biggest threat they pose. The danger is a) that they replace the need to learn the soft technique ourselves (not necessarily a disaster if we use them creatively in further assemblies) and, more worryingly, b) that we learn ways of being human from collectives that, though made of human stuff, are not human. They will in turn become parts of all the rest of the collectives in which we participate. This can and will change us. It is happening now, frighteningly fast, at an even greater speed and scale than the similar changes that the Zuckerbergian style of social media has brought about.
As educators, we should pay attention to this. Unfortunately, with its emphasis on explicit measurable outcomes, combined with the extrinsic lure of credentials, the ochlotecture of our chronically underfunded educational systems is not geared towards compensating for these tendencies: in fact, exactly the reverse. LLMs can already both teach and meet those explicit outcomes far more effectively than most humans, at a very compelling price, so, more and more, they will. Both students and teachers are replaceable components in such a system. The saving grace and/or the problem is that those explicit outcomes, though they matter and though they are how we measure educational success, are not in fact the most important ends of education, although they are means to those ends.
The things that matter more are the human ways of thinking, of learning, and of seeing, that we learn while achieving such outcomes; the attitudes, values, connections, and relationships; our identities and the ways we learn to exist in our societies and cultures. It’s not just about doing and knowing: it’s about being, it’s about love, fear, wonder, and hunger. We don’t have to (and can’t) measure those because they all come for free when humans and the stuff they create are the means through which explicit outcomes are achieved. It’s an unavoidable tacit curriculum that underpins every kind of intentional and most unintentional learning we undertake, for better or (too often) for worse. It’s the (largely) non-technological consequence of the technologies in which we participate, and how we participate in them. Technologies don’t make us less human, on the whole: they are exactly what make us human.
We will learn such things from generative AIs, too, thanks to the soft technique they mimic so well, but what we will learn to be as a result will not be quite human. Worse, the outputs of the machines will begin to dominate their own inputs, and the rest will come from humans who have been changed by their interactions with them, like photocopies of photocopies, constantly and recursively degrading. In my position paper I argue for the need to therefore cherish the human parts of these new collectives in our education systems far more than we have before, and I suggest some ways of doing that. It matters not just to avoid model collapse in LLMs, but to prevent model collapse in the collective intelligence of the whole human race. I think that is quite important, and that’s the real reason I will spend some of my wedding anniversary talking with some very intelligent and influential people about it.
For those with an interest, here are the slides from my webinar for Contact North | Contact Nord that I gave today: How to be an educational technology (warning: large download, about 32MB).
My main arguments were:
that how we do teaching matters more than what we do (“T’ain’t what you do, it’s the way that you do it”) and
that we can only understand the process if we examine the whole complex assembly of teaching (very much including the technique of all who contribute to it, including learners, textbooks, and room designers), not just the individual parts.
Along the way I had a few other things to say about why that must be the case, the nature of teaching, the nature of collective cognition, and some of the profound consequences of seeing the world this way. I had fun persuading ChatGPT to illustrate the slides in a style that was not that of Richard Scarry (ChatGPT would not do that, for copyright reasons) but that was reminiscent of it, so there are lots of cute animals doing stuff with technologies on the slides.
I rushed and rambled, I sang, I fumbled and stumbled, but I think it sparked some interest and critical thinking. Even if it didn’t, some learning happened, and that is always a good thing. The conversations in the chat went too fast for me to follow but I think there were some good ones. If nothing else, though I was very nervous, I had fun, and it was lovely to notice a fair number of friends, colleagues, and even the odd relative among the audience. Thank you all who were there, and thank you anyone who catches the recording later.
Here are the slides from a talk I gave earlier today, hosted by George Siemens and his fine team of people at Human Systems. Terry Anderson helped me to put the slides together, and offered some great insights and commentary after the presentation but I am largely to blame for the presentation itself. Our brief was to talk about sets, nets and groups, the theme of our last book Teaching Crowds: learning and social media and much of our work together since 2007 but, as I was the one presenting, I bent it a little towards generative AI and my own intertwingled perspective on technologies and collective cognition, which is most fully developed (so far) in my most recent book, How Education Works: Teaching, Technology, and Technique. If you’re not familiar with our model of sets, nets, groups and collectives, there’s a brief overview on the Teaching Crowds website. It’s a little long in the tooth but I think it is still useful and will help to frame what follows.
A recreation of the famous New Yorker cartoon, “On the Internet no one knows you are a dog” – but it is a robot dog
The key new insight that appears for the first time in this presentation is that, rather than being a fundamental social form in their own right, groups consist of technological processes that make use of and help to engender/give shape to the more fundamental forms of nets and sets. At least, I think they do: I need to think and talk some more about this, at least with Terry, and work it up into a paper, but I haven’t yet thought through all the repercussions. Even back when we wrote the book I always thought of groups as technologically mediated entities but it was only when writing these slides in the light of my more recent thinking on technology that I paid much attention to the phenomena that they actually orchestrate in order to achieve their ends. Although there are non-technological prototypes – notably in the form of families – these are emergent rather than designed. The phenomena that intentional groups primarily orchestrate are those of networks and sets, which are simply configurations of humans and their relationships with one another. Modern groups – in a learning context, classes, cohorts, tutorial groups, seminar groups, and so on – are designed to fulfill more specific purposes than their natural prototypes, and they are made possible by technological inventions such as rules, roles, decision-making processes, and structural hierarchies. Essentially, the group is a purpose-driven technological overlay on top of more basic social forms. It seems natural, much as language seems natural, because it is so basic and fundamental to our existence and how everything else works in human societies, but it is an invention (or many inventions, in fact) as much as wheels and silicon chips.
Groups are among the oldest and most highly evolved of human technologies and they are incredibly important for learning, but they have a number of inherent flaws and trade-offs/Faustian bargains, notably in their effects on individual freedoms, in scalability (mainly achieved through hierarchies), in sometimes unhealthy power dynamics, and in limitations they place on roles individuals play in learning. Modern digital technologies can help to scale them a little further and refine or reify some of the rules and roles, but the basic flaws remain. However, modern digital technologies also offer other ways of enabling sets and networks of people to support one another’s learning, from blogs and mailing lists to purpose-built social networking systems, from Wikipedia and Academia.edu to Quora, in ways that can (optionally) integrate with and utilize groups but that differ in significant ways, such as in removing hierarchies, structuring through behaviour (collectives) and filtering or otherwise mediating messages. With some exceptions, however, the purposes of large-scale systems of this nature (which would provide an ideal set of phenomena to exploit) are not usually driven by a need for learning, but by a need to gain attention and profit. Facebook, Instagram, LinkedIn, X, and others of their ilk have vast networks to draw on but few mechanisms that support learning and limited checks and balances for reliability or quality when it does occur (which of course it does). Most of their algorithmic power is devoted to driving engagement, and the content and purpose of that engagement only matters insofar as it drives further engagement. Up to a point, trolls are good for them, which is seldom if ever true for learning systems. Some – Wikipedia, the Khan Academy, Slashdot, Stack Exchange, Quora, some SubReddits, and so on – achieve both engagement and intentional support for learning. However, they remain works in progress in the latter regard, being prone to a host of ills from filter bubbles and echo chambers to context collapse and the Matthew Effect, not to mention intentional harm by bad actors. I’ve been exploring this space for approaching 30 years now, but there remains almost as much scope for further research and development in this area as there was when I began. Though progress has been made, we have yet to figure out the right rules and structures to deal with a great many problems, and it is increasingly difficult to slot the products of our research into an increasingly bland, corporate online space dominated by a shrinking number of bland, centralized learning management systems that continue to refine their automation of group processes and structures and, increasingly, to ignore the sets and networks on which they rely.
With that in mind, I see big potential benefits for generative AIs – the ultimate collectives – as supporters and enablers of crowds of people learning together. Generative AI provides us with the means to play with structures and adapt in hitherto impossible ways, because the algorithms that drive their adaptations are indefinitely flexible, the reified activities that form them are vast, and the people who participate in them play an active role in adjusting and forming their algorithms (not the underpinning neural nets but the emergent configurations they take). These are significant differences from traditional collectives, which tend to have a single purpose and algorithm (typically complex but deterministic), such as returning search results or driving network engagement. I also see a great many potential risks, about which I have written fairly extensively of late, most notably that of generative AIs playing soft orchestral roles in the assembly, replacing the need for humans to learn to play them. We tread a fine line between learning utopia and learning dystopia, especially if we try to overlay these tools on top of educational systems that are driven by credentials. Credentials used to signify a vast range of tacit knowledge and skills that were never measured, and (notwithstanding a long tradition of cheating) that was fine as long as nothing else could create those signals, because they were serviceable proxies: if you could pass the test or assignment, it meant that you had gone through the process and learned a lot more than what was tested. This has been eroded for some time, abetted by social media like Course Hero or Chegg that remain quite effective ways of bypassing the process for those willing to pay a nominal sum and accept the risk. Now that generative AI can meet the measured outcomes at considerably lower cost, with greater reliability and lower risk, without the learner having gone through the process, credentials no longer make good signifiers and, anyway (playing Devil’s advocate), it remains unclear to what extent those soft, tacit skills are needed now that generative AIs can achieve them so well. I am much encouraged by the existence of Paul LeBlanc’s lab initiative, the fact that George is the chief scientist for it, its intent to enable human-centred learning in an age of AI, and its aspiration to reinvent education to fit. We need such endeavours. I hope they will do some great things.
Many thanks, too, to Junhong for sending me the printed version that arrived today, smelling deliciously of ink. I hardly ever read anything longer than a shopping bill on paper any more but there is something rather special about paper that digital versions entirely lack. The particular beauty of a book or journal written in a language and script that I don’t even slightly understand is that, notwithstanding the ease with which I can translate it using my phone, it largely divorces the medium from the message. Even with translation tools my name is unrecognizable to me in this: Google Lens translates it as “Jon Delong”. Although I know it contains a translation of my own words, it is really just a thing: the signs it contains mean nothing to me, in and of themselves. And it is a thing that I like, much as I like the books on my bookshelf.
I am not alone in loving paper books, a fact that owners of physical copies of my most recent book (which can be read online for free but costs about $CAD40 on paper) have had the kindness to mention, e.g. here and here. There is something generational in this, perhaps. For those of us who grew up knowing no other reading medium than ink on paper, there is comfort in the familiar, and we have thousands (perhaps millions) of deeply associated memories in our muscles and brains connected with it, made more precious by the increasing rarity with which those memories are reinforced by actually reading that way. But I doubt that my grandchildren, at least, will lack that attachment: while they enjoy and enthusiastically interact with text on screens, they have been exposed to printed books since before they were able to accurately grasp them, and have loved some of them as much as I did at the same ages.
It is tempting to think that our love of paper might simply be because we don’t have decent e-readers, but I think there is more to it than that. I have some great e-readers in many sizes and types, and I do prefer some of them to read from, for sure: backlighting when I need it, robustness, flexibility, the means to see text in any size or font that works for me, simple and precise search, shareable highlights, the lightness of (some) devices, the different ways I can hold them, and so on, make them far more accessible. But paper has its charms, too. Most obviously, something printed on paper is a thing to own whereas, on the whole, a digital copy tends to be just a licence to read, and ownership matters: I won’t be leaving my e-books to my children. The thingness really matters in other ways, too. Paper is something to handle, something to smell. Pages and book covers have textures – I can recognize some books I know well by touch alone. It affects many senses, and is more salient as a result. It takes up room in an environment so it’s a commitment, and it has to matter simply because it is there: a rivalrous object competing with other rivalrous objects for limited space. Paper comes in fixed sizes that may wear down but will never change: it thus keeps its shape in our memories, too. My wife has framed occasional pages from my previously translated work, elevating them to art works, decoupled from their original context, displayed with the same lofty reverence as pages from old atlases. Interestingly, she won’t do that if it is just a printed PDF: it has to come from a published paper journal, so the provenance matters. Paper has a history and a context of its own, beyond what it contains. And paper creates its own context, filled with physical signals and landmarks that make words relative to the medium, not abstractions that can be reflowed, translated into other languages, or converted into other media (notably speech). The result is something that is far more memorable than a reflowable e-text. Over the years I’ve written a little about this here and there, and elsewhere, including a paper on the subject (ironically, a paper that is not available on paper, as it happens), describing an approach to making e-texts more memorable.
After reaching a slightly ridiculous peak in the mid-2000s, and largely as a result of a brutal culling that occurred when I came to Canada nearly 17 years ago, my paper book collection has now diminished enough to fit easily in a single, not particularly large, free-standing IKEA shelving unit. The survivors are mostly ones I might want to refer to or read again, and losing some of them would sadden me a great deal, but I would (perhaps) run into a burning building to save only a few, including, for instance:
A dictionary from 1936, bound in leather by my father, used in countless games of Scrabble and spelling disputes when I was a boy, and consulted by my whole family to look up words at one time or another.
My original hardback copy of The Phantom Tollbooth (I have a paperback copy for lending), which remains my favourite book of all time, which was first read to me by my father, and which I have since read many times at many ages, including to my own children.
A boxed set of the complete Chronicles of Narnia, which I chose as my school art prize when I was 18 because the family copies had become threadbare (read and abused by me and my four siblings), and which I later read to my own children. How someone with very limited artistic skill came to win the school art prize is a story for another time.
A well-worn original hardback copy of Harold and the Purple Crayon (I have a paperback copy for lending) that my father once displayed for children in his school to read, with the admonition “This is Mr Dron’s book. Please handle with care” (it was not – it was mine).
A scribble-filled, bookmark-laden copy of Kevin Kelly’s Out of Control that strongly influenced my thinking when I was researching my PhD and that still inspires me today. I can remember exactly where I sat when I made some of the margin notes.
A disintegrating copy of Storyland, given to me by my godmother in 1963 and read to me and by me for many years thereafter. There is a double value to this one because we once had two copies of this in our home: the other belonged to my wife, and was also a huge influence on her at similar ages.
These books proudly wear their history and their relationships with me and my loved ones in all their creases, coffee stains, scuffs, and tattered pages. To a greater or lesser extent, the same is true of almost all of the other physical books I have kept. They sit there as a constant reminder of their presence – their physical presence, their emotional presence, their social presence, and their cognitive presence – flitting by in my peripheral vision many times a day, connecting me to thoughts and inspirations I had when I read them and, often, to people and places connected with them. None of this is true of my e-books. Nor is it quite the same for other objects of sentimental value, except perhaps (and for very similar reasons) the occasional sculpture or picture, or some musical instruments. Much as I am fond of (say) baby clothes worn by my kids or a battered teddy bear, they are little more than aides-mémoire for other times and other activities, whereas the books (and a few other objects) latently embody the experiences themselves. If I opened them again (and I sometimes do) it would not be the same experience, but it would enrich and connect with those that I already had.
I have hundreds of e-books that are available on many devices, one of which I carry with me at all times, as well as an Everand (formerly Scribd) account with a long history and a long and mostly lost history of library borrowing, and I have at least a dozen devices on which to read them, from a 4-inch e-ink reader to a 32-inch monitor and much in between, but my connection with those is far more limited and transient. It is still more limited for books that are locked to a certain duration through DRM (which is one reason DRM is the scum of the earth). When I look at my devices and open the various reading apps on them I do see a handful of book covers, usually those that I have most recently read, but that is too fleeting and volatile to have much value. And when I open them they don’t fall open on well-thumbed pages. The text is not tangibly connected with the object at all.
As well as smarter landmarks within them, better ways to make e-books more visible would help, which brings me to the real point of this post. For many years I have wanted to paper a wall or two with e-paper (preferably in colour) on which to display e-book covers, but the costs are still prohibitive. It would be fun if the covers became battered with increasing use, showing the ones that really mattered, and maybe dust could settle on those that were never opened, though it would not have to be so skeuomorphic – fading would work, or glyphs. They could be ordered manually or by (say) reading date, title, author, or subject. Perhaps touching them or scanning a QR code could open them. I would love to get a research grant to do this but I don’t think asking for electronic wallpaper in my office would fly with most funding sources, even if I prettied it up with words like “autoethnography”, and I don’t have a strong enough case, nor can I think of a rigorous enough research methodology, to try it in a larger study with other people. Well. Maybe I will try some time. Until the costs of e-paper come down much further it is not going to be a commercially viable product, either, though prices are now low enough that it might be possible to do it in a limited way with a poster-sized display for a (very) few thousand dollars. It could certainly be done with a large-screen TV for well under $1000, but I don’t think a power-hungry glowing screen would be at all the way to go: the value would not be enough to warrant the environmental harm or energy costs, and something that emitted light would be too distracting. I do have a big monitor on my desk, though, which is already emitting light so it wouldn’t be any worse, to which I could add a background showing e-book covers or spines. I could easily do this as a static image or slideshow, but I’d rather have something dynamic. It shouldn’t be too hard to extract the metadata from my list of books, swipe the images from the Web or the e-book files, and show them as a backdrop (a screensaver would be trivial). It might even be worth extending this to papers and articles I have read. I already have Pocket open most of the time, displaying web pages that I have recently read or want to read (serving a similar purpose for short-term recollection), and that could be incorporated in this. I think it would be useful, and it would not be too much work to do it – most of the important development could be done in a day or two. If anyone has done this already or feels like coding it, do get in touch!
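To make that concrete, here is a rough sketch in Python of the sort of script I have in mind. It is an assumption-laden illustration rather than a finished tool: books.txt is an invented stand-in for real e-book library metadata, and the covers come from the Open Library covers API, which will not know every book.

```python
# A rough sketch of the wallpaper idea: fetch covers for a list of ISBNs
# (books.txt is an invented stand-in for real e-book library metadata)
# from the Open Library covers API and tile them into one image.
# Requires: pip install requests pillow
from io import BytesIO

import requests
from PIL import Image

# ?default=false makes Open Library return a 404 instead of a blank
# placeholder when it has no cover for an ISBN.
COVER_URL = "https://covers.openlibrary.org/b/isbn/{isbn}-M.jpg?default=false"
TILE = (180, 270)  # per-cover thumbnail size in pixels (arbitrary)
COLUMNS = 8

def fetch_cover(isbn):
    """Return the cover as a PIL image, or None if there isn't one."""
    resp = requests.get(COVER_URL.format(isbn=isbn), timeout=10)
    if resp.status_code == 200:
        return Image.open(BytesIO(resp.content)).convert("RGB")
    return None

covers = []
with open("books.txt") as f:  # one ISBN per line (assumed format)
    for line in f:
        cover = fetch_cover(line.strip())
        if cover:
            covers.append(cover.resize(TILE))

if covers:
    rows = -(-len(covers) // COLUMNS)  # ceiling division
    wall = Image.new("RGB", (COLUMNS * TILE[0], rows * TILE[1]), "black")
    for i, cover in enumerate(covers):
        wall.paste(cover, ((i % COLUMNS) * TILE[0], (i // COLUMNS) * TILE[1]))
    wall.save("bookshelf-wallpaper.png")  # then set as desktop background
```

A dynamic version would simply re-run something like this on a schedule, reordering, fading, or weathering covers according to reading history.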
UPDATE: the video of my talk is now available at https://www.youtube.com/watch?v=ji0jjifFXTs (slides and audio only) …
These are the slides from my opening keynote at SITE ’24 today, at Planet Hollywood in Las Vegas. The talk was based closely on some of the main ideas in How Education Works. I’d written an over-ambitious abstract promising answers to many questions and concerns, which I did just about cover, but far too broadly. As a counterbalance, therefore, I tried to keep the focus on a single message – t’ain’t what you do, it’s the way that you do it (which is the epigraph for the book) – and, because it was Vegas, I felt that I had to do a show, so I ended the session with a short ukulele version of the song of that name. I had fun, and a few people tried to sing along. The keynote conversation that followed was most enjoyable – wonderful people with wonderful ideas, and the hour allotted to it gave us time to explore all of them.
Here is that bloated abstract:
Abstract: All of us are learning technologists, teaching others through the use of technologies, be they language, whiteboards, and pencils or computers, apps, and networks. We are all part of a vast, technology-mediated cognitive web in which a cast of millions – in formal education including teachers such as textbook authors, media producers, architects, software designers, system administrators, and, above all, learners themselves – co-participates in creating an endless, richly entwined tapestry of learning. This tapestry spreads far beyond formal acts of teaching, far back in time, and far into the future, weaving in and helping to form not just the learning of individuals but the collective intelligence of the whole human race. Everyone’s learning journey both differs from and is intertwingled with that of everyone else. Education is an overwhelmingly complex and unpredictable technological system in which coarse patterns and average effects can be found but in which, except for the most rigid, invariant, minor details, accurate individual predictions cannot be made. No learner is average, and outcomes are always greater than what is intended. The beat of a butterfly’s wing in Timbuktu can radically affect the experience of a learner in Toronto. A slight variation in tone of voice can make all the difference between a life-transforming learning experience and a lifelong aversion to a subject. Beautifully crafted, research-informed teaching methods can be completely ineffective, while poor teaching, or even the absence of it, can result in profoundly effective learning. For all our efforts to understand and control it, education as a technological process is far closer to art than to engineering. What we do is usually far less significant than the idiosyncratic way that we do it, and how much we care for the subject, our students, and our craft is often far more important than the pedagogical methods we use. In this talk I will discuss what all of this implies for how we should teach, for how we understand teaching, and for how we research the massively intertwingled processes and tools of teaching. Along the way I will explain why there is no significant difference between measured outcomes of online and in-person learning, the futility of teaching to learning styles, the reason for the 2-sigma advantage of personal tuition, the surprising commonalities between behaviourist, cognitivist, and constructivist models of learning and teaching, the nature of literacies, and the failure of reductive research methods in education. It will be fun.
Here is a paper from the Asia-Pacific Journal of Teacher Education by my friend Gerald Ardito and me that presents a slightly different way of thinking about teaching and learning. We adopt a broadly complexivist stance that sees environments not as a backdrop to learning but as a rich network of dynamic, intertwingled relationships between the various parts (including parts played by people), mediated through technologies, enabling and enabled by autonomy. The model that we develop knits together a smorgasbord of theories and models, including Self-Determination Theory (SDT), Connectivism, an assortment of complexity theories, the extended version of Paulsen’s model of cooperative freedoms developed by me and Terry Anderson, Garrison & Baynton’s model of autonomy, and my own coparticipation theory, wrapping up with a bit of social network analysis of a couple of Gerald’s courses that puts it all into perspective. From Gerald’s initial draft, the paper took years of very sporadic development and went through many iterations. It seemed to take forever, but we had fun writing it. Looking afresh at the finished article, I think the diagrams might have been clearer, we might have done more to join all the dots, and we might have expressed the ideas a bit less wordily, but I am mostly pleased with the way it turned out, and I am glad to see it finally published. The good bits are all Gerald’s, but I am personally most pleased with the consolidated model of autonomy visualized in Figure 4, which connects my own and Terry Anderson’s cooperative freedoms, Garrison & Baynton’s model of autonomy, and SDT.
Reference:
Ardito, G., & Dron, J. (2024). The emergence of autonomy in intertwingled learning environments: A model of teaching and learning. Asia-Pacific Journal of Teacher Education. https://doi.org/10.1080/1359866X.2024.2325746
Since 2018, Terry Greene has been producing a wonderful series of podcast interviews with open and online learning researchers and practitioners called Getting Air. Prompted by the publication of How Education Works (Terry is also responsible for the musical version of the book, so I think he likes it), this week’s episode features an interview with me.
I probably should have been better prepared. Terry asked some probing, well-informed, and sometimes disarming questions, most of which led to me rambling more than I might have done if I’d thought about them in advance. It was fun, though, drifting through a broad range of topics from the nature of technology to music to the perils of generative AI (of course).
I hope that Terry does call his PhD dissertation “Getting rid of instructional designers”.
Since 2015 Kay Guccione and Matthew Cheeseman have been editing the wonderful Journal of Imaginary Research (tagline “Writing Without Discipline”) that, once a year, publishes fictional research abstracts by fictional researchers. Each issue has a theme, and Volume 9’s is “Deal or Dealing”. I have an abstract in it.
As well as providing some entertaining and often very funny short reads, the journal has a serious academic intent. As Guccione and Cheeseman put it,
“In producing these short, exploratory pieces, we seek to help writers establish a new relationship with writing; less driven by the demands of productivity. Writing fiction in a familiar format helps us reflect on how we can creatively communicate our research projects, and how we can find the joy of creativity in all our writing. Many of the pieces we receive, whilst fictional, have a basis in a real observation or experience; almost all take a fresh look at a problem, frustration or constraint experienced by the researchers who crafted them.”
My own contribution (well, that of Dr Dorian Faust Jr, an assistant professor in the Faculty of Arbitrary Studies at the University of New Catatonia) is one of two that investigate the economic value of a soul. Mine is less about soul-selling than it is about the misapplication of quantitative research to things that cannot be quantified, as well as offering a broader critique of systems driving academia in general. It’s the work of less than an hour and I suspect that it might not make much of a contribution to my h-index but, self-referentially, that’s not going to stop me from listing it as a journal publication for my annual performance review.