New paper: The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

I’m proud to be the 7th of 47 authors on this excellent new paper, led by the indefatigable Aras Bozkurt and featuring some of the most distinguished contemporary researchers in online, open, mobile, distance, e- and [insert almost any cognate sub-discipline here] learning, as well as a few of us, like me, hanging on their coat-tails.

As the title suggests, it is a manifesto: it makes a series of statements (divided into 15 positive and 20 negative themes) about what is or what should be, and it is underpinned by a firm set of humanist pedagogical and ethical attitudes that are anything but neutral. What makes it interesting to me, though, can mostly be found in the critical insights that accompany each theme, which capture a little of the complexity of the discussions that led to them and add a lot of nuance. The research methodology, a modified and super-iterative Delphi design in which all participants are also authors, is, I think, an incredibly powerful approach to research in the technology of education (broadly construed), providing rigour and accountability without succumbing to science-envy.


Notwithstanding that the lion’s share of the work of leading, assembling, editing, and submitting the paper was taken on by Aras and Junhong, it was a truly collective effort, so I have very little idea what percentage of it could be described as my work. We were thinking and writing together. Being a part of that was a fantastic learning experience for many of us, one that stretched the limits of what can be done with tracked changes and comments in a Google Doc, with contributions coming in at all times of day and night, from just about every timezone, over a period of weeks. The depth and breadth of dialogue was remarkable, as much an organic process of evolution and emergence as intelligent design, and one in which the document itself played a significant participant role. I felt a strong sense of belonging, not so much as part of a community but as part of a connectome.

For me, this epitomizes what learning technologies are all about. It would be difficult, if not impossible, to do this in an in-person setting: even if the researchers worked together on an online document, the simple fact that they met in person would utterly change the social dynamics, the pacing, and the structure. Indeed, even online, replicating this in a formal institutional context would be very difficult because of the power relationships, assessment requirements, motivational complexities, and artificial schedules that formal institutions add to the assembly. This was an online-native way of learning of a sort I aspire to but seldom achieve in my own teaching.

The paper offers a foundational model or framework on which to build or situate further work, as well as providing a moderately succinct summary of a very significant proportion of the issues relating to generative AI and education as they exist today. Even if it were only ever cited by each of its 47 authors, it would get more citations than most of my papers, but the paper is highly citable in its own right, whether you agree with its statements or not. I know I am biased but, if you’re interested in the impacts of generative AI on education, I think it is a must-read.

The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., … Asino, T. I. (2024). The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis, 16(4), 487–513. https://doi.org/10.55982/openpraxis.16.4.777

Full list of authors:

  • Aras Bozkurt
  • Junhong Xiao
  • Robert Farrow
  • John Y. H. Bai
  • Chrissi Nerantzi
  • Stephanie Moore
  • Jon Dron
  • Christian M. Stracke
  • Lenandlar Singh
  • Helen Crompton
  • Apostolos Koutropoulos
  • Evgenii Terentev
  • Angelica Pazurek
  • Mark Nichols
  • Alexander M. Sidorkin
  • Eamon Costello
  • Steven Watson
  • Dónal Mulligan
  • Sarah Honeychurch
  • Charles B. Hodges
  • Mike Sharples
  • Andrew Swindell
  • Isak Frumin
  • Ahmed Tlili
  • Patricia J. Slagter van Tryon
  • Melissa Bond
  • Maha Bali
  • Jing Leng
  • Kai Zhang
  • Mutlu Cukurova
  • Thomas K. F. Chiu
  • Kyungmee Lee
  • Stefan Hrastinski
  • Manuel B. Garcia
  • Ramesh Chander Sharma
  • Bryan Alexander
  • Olaf Zawacki-Richter
  • Henk Huijser
  • Petar Jandrić
  • Chanjin Zheng
  • Peter Shea
  • Josep M. Duart
  • Chryssa Themeli
  • Anton Vorochkov
  • Sunagül Sani-Bozkurt
  • Robert L. Moore
  • Tutaleni Iita Asino

Abstract

This manifesto critically examines the unfolding integration of Generative AI (GenAI), chatbots, and algorithms into higher education, using a collective and thoughtful approach to navigate the future of teaching and learning. GenAI, while celebrated for its potential to personalize learning, enhance efficiency, and expand educational accessibility, is far from a neutral tool. Algorithms now shape human interaction, communication, and content creation, raising profound questions about human agency and biases and values embedded in their designs. As GenAI continues to evolve, we face critical challenges in maintaining human oversight, safeguarding equity, and facilitating meaningful, authentic learning experiences. This manifesto emphasizes that GenAI is not ideologically and culturally neutral. Instead, it reflects worldviews that can reinforce existing biases and marginalize diverse voices. Furthermore, as the use of GenAI reshapes education, it risks eroding essential human elements—creativity, critical thinking, and empathy—and could displace meaningful human interactions with algorithmic solutions. This manifesto calls for robust, evidence-based research and conscious decision-making to ensure that GenAI enhances, rather than diminishes, human agency and ethical responsibility in education.

Slides from my ICEEL ’24 Keynote: “No Teacher Left Behind: Surviving Transformation”

Here are the slides from my keynote at the 8th International Conference on Education and E-Learning in Tokyo yesterday. Sadly, I was not actually in Tokyo for this, but the online integration was well done and there was some good audience interaction. I am also the conference chair (an honorary title) so I may be a bit biased, but I think it is a really good conference, with an increasingly rare blend of both the tech and the pedagogical aspects of the field, and some wonderfully diverse keynotes ranging in subject matter from the hardest computer science to reflections on literature and love (thanks to its collocation with ICLLL, a literature and linguistics conference). My keynote was somewhere in between, and deliberately targeted at the conference theme, “Transformative Learning in the Digital Era: Navigating Innovation and Inclusion.”

As my starting point for the talk I introduced the concept of the technological connectome, about which I have just written a paper (currently under revision, hopefully due for publication in a forthcoming issue of the new Journal of Open, Distance, and Digital Education), which is essentially a way of talking about extended cognition from a technological rather than a cognitive perspective. From there I moved on to the adjacent possible and the exponential growth in technology that has, over the past century or so, reached such a breakneck rate of change that innovations such as generative AI, the transformation I particularly focused on (because it is topical), can transform vast swathes of culture and practice in months if not in weeks. This is a bit of a problem for traditional educators, who are as unprepared as anyone else for it, but who find themselves in a system that could not be more vulnerable to the consequences. At the very least it disrupts the learning outcomes-driven teacher-centric model of teaching that still massively dominates institutional learning the world over, both in the mockery it makes of traditional assessment practices and in the fact that generative AIs make far better teachers if all you care about are the measurable outcomes.

The solutions I presented and that formed the bulk of the talk, largely informed by the model of education presented in How Education Works, were mostly pretty traditional, emphasizing the value of community, and of passion for learning, along with caring about, respecting, and supporting learners. There were also some slightly less conventional but widely held perspectives on assessment, plus a bit of complexivist thinking about celebrating the many teachers and acknowledging the technological connectome as the means, the object and the subject of learning, but nothing Earth-shatteringly novel. I think this is as it should be. We don’t need new values and attitudes; we just need to emphasize those that are learning-positive rather than the increasingly mainstream learning-negative, outcomes-driven, externally regulated approaches that the cult of measurement imposes on us.

Post-secondary institutions have had to grapple with the learning-antagonistic role of summative assessment since not long after their inception, so this is not a new problem but, until recent decades, the two roles of supporting learning and certifying it largely maintained an uneasy truce. A great deal of the impetus for the shift has come from expanding access to PSE. This has resulted in students who are less able, less willing, and less well-supported than their forebears, who were, on average, far more advantaged in ability, motivation, and unencumbered time, simply because fewer were able to get in. In the past, teachers hardly needed to teach. The students were already very capable and had few other demands on their time (like working to get through college), so they just needed to hang out with smart people, some of whom knew the subject and could guide them through it, in order to know what to learn and whether they had been successful, along with the time and resources to support their learning. Teachers could be confident that, as long as students had the resources (libraries, lecture notes, study time, other students), they would be sufficiently driven by the need to pass the assessments and/or by intrinsic interest that they could largely be left to their own devices (OK, a slight caricature, but not far off the reality).

Unfortunately, though this is no longer even close to the norm,  it is still the model on which most universities are based.  Most of the time professors are still hired because of their research skills, not teaching ability, and it is relatively rare that they are expected to receive more than the most perfunctory training, let alone education, in how to teach. Those with an interest usually have opportunities to develop their skills but, if they do not, there are few consequences. Thanks to the technological connectome, the rewards and punishments of credentials continue to do the job well enough, notwithstanding the vast amounts of cheating, satisficing, student suffering, and lost love of learning that ensues. There are still plenty of teachers: students have textbooks, YouTube tutorials, other students, help sites, and ChatGPT, to name but a few, of which there are more every day. This is probably all that is propping up a fundamentally dysfunctional system. Increasingly, the primary value of post-secondary education comes to lie in its credentialling function.

No one who wants to teach wants this, but virtually all of those who teach in universities are the ones who succeeded in retaining their love of learning for its own sake despite it, so they find it hard to understand students who don’t. Too many (though, I believe, a minority) are positively hostile to their students as a result, believing that most students are lazy, willing to cheat, or otherwise game the system, and they set up elaborate means of control and gotchas to trap them. The majority who want the best for their students, however, are also to blame, seeing their purpose as being to improve grades, using “learning science” (which is like using colour theory to paint – useful, not essential) to develop methods that will, on average, do so more effectively. In fairness, though grades are not the purpose, they are not wrong about the need to teach the measurable stuff well: it does matter that students achieve the skills and knowledge they set out to achieve. However, it is only part of the purpose. Mostly, education is a means to less measurable ends: of forming identities, attitudes, values, ways of relating to others, ways of thinking, and ways of being. You don’t need the best teaching methods to achieve that: you just need to care, and to create environments and structures that support stuff like community, diversity, connection, sharing, openness, collaboration, play, and passion.

The keynote was recorded but I am not sure if or when it will be available. If it is released on a public site, I will share it here.

Video and slides from my webinar, How to Be an Educational Technology: An Entangled Perspective on Teaching

For those with an interest, here are the slides from my webinar for Contact North | Contact Nord that I gave today: How to be an educational technology (warning: large download, about 32MB).

Here is a link to the video of the session.

I was invited to do this webinar because my book (How Education Works: Teaching, Technology, and Technique, briefly reviewed on the Contact North | Contact Nord site last year) was among the top 5 most viewed books of the year, so that was what the talk was about. Among the most central messages of the book and the ones that I was trying to get across in this presentation were:

  1. that how we do teaching matters more than what we do (“T’ain’t what you do, it’s the way that you do it”) and
  2. that we can only understand the process if we examine the whole complex assembly of teaching (very much including the technique of all who contribute to it, including learners, textbooks, and room designers) not just the individual parts.

Along the way I had a few other things to say about why that must be the case, the nature of teaching, the nature of collective cognition, and some of the profound consequences of seeing the world this way. I had fun persuading ChatGPT to illustrate the slides in a style that was not that of Richard Scarry (ChatGPT would not do that, for copyright reasons) but that was reminiscent of it, so there are lots of cute animals doing stuff with technologies on the slides.

I rushed and rambled, I sang, I fumbled and stumbled, but I think it sparked some interest and critical thinking. Even if it didn’t, some learning happened, and that is always a good thing. The conversations in the chat went too fast for me to follow but I think there were some good ones. If nothing else, though I was very nervous, I had fun, and it was lovely to notice a fair number of friends, colleagues, and even the odd relative among the audience. Thank you all who were there, and thank you anyone who catches the recording later.

How AI Teaches Its Children: slides and reflections from my keynote for AISUMMIT-2024

Late last night I gave the opening keynote at the Global AI Summit 2024, International Conference on Artificial Intelligence and Emerging Technology, hosted by Bennett University, Noida, India. My talk was online. Here are the slides: How AI Teaches Its Children. It was recorded but I don’t know when, whether, or with whom it will be shared: if possible I will add it to this post.

a robot teaching children in the 18th Century

For those who have been following my thoughts on generative AI there will be few surprises in my slides, and I only had half an hour, so there was not much time to go into the nuances. The title is an allusion to Pestalozzi’s 18th Century tract, How Gertrude Teaches Her Children, which was phenomenally influential in the development of education systems around the world and continues to have an impact to this day. Much of it is actually great: Pestalozzi championed very child-centric teaching approaches that leveraged the skills and passions of their teachers. He recommended methods of teaching that made full use of the creativity and idiosyncratic knowledge the teachers possessed and that were very much concerned with helping children to develop their own interests, values, and attitudes. However, some of the ideas – and those that have ultimately been more influential – were decidedly problematic, as is succinctly summarized in this passage on page 41:

I believe it is not possible for common popular instruction to advance a step, so long as formulas of instruction are not found which make the teacher, at least in the elementary stages of knowledge, merely the mechanical tool of a method, the result of which springs from the nature of the formulas and not from the skill of the man who uses it.

This is almost the exact opposite of the central argument of my book, How Education Works, that mechanical methods are not the most important part of a soft technology such as teaching: what usually matters more is how it is done, not just what is done. You can use good methods badly and bad methods well because you are a participant in the instantiation of a technology, responsible for the complete orchestration of the parts, not just a user of them.

As usual, in the talk I applied a bit of co-participation theory to explain why I am both enthralled by and fearful of the consequences of generative AIs because they are the first technologies we have ever built that can use other technologies in ways that resemble how we use them. Previous technologies only reproduced hard technique – the explicit methods we use that make us part of the technology. Generative AIs reproduce soft technique, assembling and organizing phenomena in endlessly novel ways to act as creators of the technology. They are active, not passive participants.

Two dangers

I see two essential risks in the delegation of soft technique to AIs. The first is not too terrible: that, because we will increasingly delegate to machines creative activities we would otherwise have performed ourselves, we will not learn those skills ourselves. I mourn the potential passing of hard skills in (say) drawing, or writing, or making music, but the bigger risk is that we will lose the soft skills that come from learning them: the things we do with the hard skills, the capacity to be creative.

That said, like most technologies, generative AIs are ratchets that let us do more than we could achieve alone. In the past week, for instance, I “wrote” an app in less than a day that would have taken me many weeks without AI assistance. Though it followed a spec that I had carefully and creatively written, it replaced the soft skills that I would have applied had I written it myself: the little creative flourishes and rabbit holes of idea-following that are inevitable in any creation process. When we create, we do so in conversation with the hard technologies available to us (including our own technique), using their affordances and constraints to grasp the adjacent possibles they provide. Every word we utter or wheel we attach to an axle opens and closes opportunities for what we can do next.

With that in mind, the app that the system created was just the beginning. Having seen the adjacent possibles of the finished app, I have spent too many hours in subsequent days extending and refining it to do things that, in the past, I would not have bothered to do because they would have been too difficult. It has become part of my own extended cognition, starting higher up the tree than I would have reached alone. This has also greatly improved my own coding skills because, inevitably, after many iterations, the AI and/or I started to introduce bugs, some of which have been quite subtle and intractable. I did try to get the AI to examine the whole code (now over 2000 lines of JavaScript) and rewrite it, or at least to point out the flaws, but that failed abysmally, amply illustrating both the strength of LLMs as creative participants in technologies and their limitation of being unable to do the same thing the same way twice. As a result, the AI and I have had to act as partners trying to figure out what is wrong. Often, though the AI has come up with workable ideas, its own solution has been a little dumb, but I could build on it to solve the problem better. Though I have not actually created much of the code myself, I think my creative role might have been greater than it would have been had I written every line.

Similarly for the images I used to illustrate the talk: I could not possibly have drawn them alone but, once the AI had done so, I engaged in a creative conversation to try (sometimes very unsuccessfully) to get it to reproduce what I had in mind. Often, though, it did things that sparked new ideas so, again, it became a partner in creation, sharing in my cognition and sparking my own invention. It was very much not just a tool: it was a co-worker, with different and complementary skills, and “ideas” of its own. I think this is a good thing. Yes, perhaps it is a pity that those who follow us may not be able to draw with a pen (and it is more than a little worrying to think about the training sets that future AIs will learn to draw from), but they will have new ways of being creative.

Like all learning, both these activities changed me: not just my skills, but my ways of thinking. That leads me to the bigger risk.

Learning our humanity from machines

The second risk is more troubling: that we will learn ways of being human from machines. This is because of the tacit curriculum that comes with every learning interaction. When we learn from others, whether they are actively teaching, writing textbooks, showing us, or chatting with us, we don’t just learn methods of doing things: we learn values, attitudes, ways of thinking, ways of understanding, and ways of being at the same time. So far we have only learned that kind of thing from humans (sometimes mediated through code) and it has come for free with all the other stuff, but now we are doing so from machines. Those machines are very much like us because 99% of what they are – their training sets – is what we have made, but they are not the same. Though LLMs are embodiments of our own collective intelligence, they don’t so much lack values, attitudes, ways of thinking, and so on, as have any and all of them. Every implicit value and attitude of the people whose work constituted their training set is available to them, and they can become whatever we want them to be. Interacting with them is, in this sense, very much not like interacting with something created by a human, let alone with humans more directly. They have no identity, no relationships, no purposes, no passion, no life history, and no future plans. Nothing matters to them.

To make matters worse, there is programmed and trained stuff on top of that, like their interminably cheery patience, which might not teach us great ways of interacting with others. And of course it will affect how we interact with others, because we will spend more and more time engaged with these machines rather than with actual humans: the economic and practical benefits make this an absolute certainty. LLMs also use explicit coding to remove or massage data from the input or output, reflecting the values and cultures of their creators, for better or worse. I was giving this talk in India to a predominantly Indian audience of AI researchers, every single one of whom was making extensive use of predominantly American LLMs like ChatGPT, Gemini, or Claude, and (inevitably) learning ways of thinking and doing from them. This is way more powerful than Hollywood as an instrument of Americanization.

I am concerned about how this will change our cultures and our selves because it is happening at phenomenal and global scale, in a world that is unprepared for the consequences, the designed parts of which assume a very different context. One of generative AI’s greatest potential benefits lies in providing “high quality” education at low cost to those who are currently denied it, but those low costs will make it increasingly compelling for everyone. However, because of those designs that assume a different context, “quality”, in this sense, relates to the achievement of explicit learning outcomes: this is Pestalozzi’s method writ large. Generative AIs are great at teaching what we want to learn – the stuff we could write down as learning objectives or intended outcomes – so, as that is the way we have designed our educational systems (and our general attitudes to learning new skills), of course we will use them for that purpose. However, that cannot be done without teaching the other stuff – the tacit curriculum – which is ultimately more important because it shapes how we are in the world, not just the skills we employ to be that way. We might not have designed our educational systems to do that, and we seldom if ever think about it when teaching ourselves or receiving training to do something, but it is perhaps education’s most important role.

By way of illustration, I find it hugely bothersome that generative AIs are being used to write children’s stories (and, increasingly, videos), and I hope you feel some unease too, because those stories – not the facts in them but the lessons they teach about things that matter – are intrinsic to children becoming who they will become. Though perhaps of less magnitude, the same issue relates to learning everything from how to change a plug to how to philosophize: we don’t stop learning from the underlying stories behind those just because we have grown up. I fear that educators, formal or otherwise, will become victims of the McNamara Fallacy, setting our goals to achieve what is easily measurable while ignoring what cannot (easily) be measured, and so rush blindly towards subtly new ways of thinking and acting that few will even notice until the changes are so widespread they cannot be reversed. Whether better or worse, it will very definitely be different, so it really matters that we examine and understand where this is all leading. This is the time, I believe, to reclaim and revalorize the value of things that are human, before it is too late. This is the time to recognize education (far from only formal) as being how we become who we are, individually and collectively, not just how we meet planned learning outcomes. And I think (at least hope) that we will do that. We will, I hope, value more than ever the fact that something – be it a lesson plan or a book or a screwdriver – is made by someone, or by a machine that has been explicitly programmed by someone. We will, I hope, better recognize the relationships between us that it embodies, the ways it teaches us things it does not mean to teach, and the meaning it has in our lives as a result. This might happen by itself – already there is a backlash against the bland output of countless bots – but it might not be a bad idea to help it along when we can. This post (and my talk last night) has been one such small nudge.

Forthcoming webinar, September 24, 2024 – How to be an Educational Technology: An Entangled Perspective on Teaching

This is an announcement for an event I’ll be facilitating as part of TeachOnline’s excellent ongoing series of webinars. In it I will be discussing some of the key ideas of my open book, How Education Works, and exploring what they imply about how we should teach and, more broadly, how we should design systems of education. It will be fun. It will be educational. There may be music.

Here are the details:

Date: Tuesday, September 24, 2024

Time: 1:00 PM – 2:00 PM (Eastern Time) (find your time zone here)

Register (free of charge) for the event here


Source: How to be an Educational Technology: An Entangled Perspective on Teaching | Welcome to TeachOnline

Sets, nets and groups revisited

Here are the slides from a talk I gave earlier today, hosted by George Siemens and his fine team of people at Human Systems. Terry Anderson helped me to put the slides together, and offered some great insights and commentary after the presentation but I am largely to blame for the presentation itself. Our brief was to talk about sets, nets and groups, the theme of our last book Teaching Crowds: learning and social media and much of our work together since 2007 but, as I was the one presenting, I bent it a little towards generative AI and my own intertwingled perspective on technologies and collective cognition, which is most fully developed (so far) in my most recent book, How Education Works: Teaching, Technology, and Technique. If you’re not familiar with our model of sets, nets, groups and collectives, there’s a brief overview on the Teaching Crowds website. It’s a little long in the tooth but I think it is still useful and will help to frame what follows.

A recreation of the famous New Yorker cartoon, “On the Internet no one knows you are a dog” – but it is a robot dog

The key new insight that appears for the first time in this presentation is that, rather than being a fundamental social form in their own right, groups consist of technological processes that make use of and help to engender/give shape to the more fundamental forms of nets and sets. At least, I think they do: I need to think and talk some more about this, at least with Terry, and work it up into a paper, but I haven’t yet thought through all the repercussions. Even back when we wrote the book I always thought of groups as technologically mediated entities but it was only when writing these slides in the light of my more recent thinking on technology that I paid much attention to the phenomena that they actually orchestrate in order to achieve their ends. Although there are non-technological prototypes – notably in the form of families – these are emergent rather than designed. The phenomena that intentional groups primarily orchestrate are those of networks and sets, which are simply configurations of humans and their relationships with one another. Modern groups – in a learning context, classes, cohorts, tutorial groups, seminar groups, and so on – are designed to fulfill more specific purposes than their natural prototypes, and they are made possible by technological inventions such as rules, roles, decision-making processes, and structural hierarchies. Essentially, the group is a purpose-driven technological overlay on top of more basic social forms. It seems natural, much as language seems natural, because it is so basic and fundamental to our existence and how everything else works in human societies, but it is an invention (or many inventions, in fact) as much as wheels and silicon chips.

Groups are among the oldest and most highly evolved of human technologies and they are incredibly important for learning, but they have a number of inherent flaws and trade-offs/Faustian bargains, notably in their effects on individual freedoms, in scalability (mainly achieved through hierarchies), in sometimes unhealthy power dynamics, and in limitations they place on roles individuals play in learning. Modern digital technologies can help to scale them a little further and refine or reify some of the rules and roles, but the basic flaws remain. However, modern digital technologies also offer other ways of enabling sets and networks of people to support one another’s learning, from blogs and mailing lists to purpose-built social networking systems, from Wikipedia and Academia.edu to Quora, in ways that can (optionally) integrate with and utilize groups but that differ in significant ways, such as in removing hierarchies, structuring through behaviour (collectives) and filtering or otherwise mediating messages. With some exceptions, however, the purposes of large-scale systems of this nature (which would provide an ideal set of phenomena to exploit) are not usually driven by a need for learning, but by a need to gain attention and profit. Facebook, Instagram, LinkedIn, X, and others of their ilk have vast networks to draw on but few mechanisms that support learning and limited checks and balances for reliability or quality when it does occur (which of course it does). Most of their algorithmic power is devoted to driving engagement, and the content and purpose of that engagement only matters insofar as it drives further engagement. Up to a point, trolls are good for them, which is seldom if ever true for learning systems. Some – Wikipedia, the Khan Academy, Slashdot, Stack Exchange, Quora, some SubReddits, and so on – achieve both engagement and intentional support for learning. However, they remain works in progress in the latter regard, being prone to a host of ills from filter bubbles and echo chambers to context collapse and the Matthew Effect, not to mention intentional harm by bad actors. I’ve been exploring this space for approaching 30 years now, but there remains almost as much scope for further research and development in this area as there was when I began. Though progress has been made, we have yet to figure out the right rules and structures to deal with a great many problems, and it is increasingly difficult to slot the products of our research into an increasingly bland, corporate online space dominated by a shrinking number of bland, centralized learning management systems that continue to refine their automation of group processes and structures and, increasingly, to ignore the sets and networks on which they rely.

With that in mind, I see big potential benefits for generative AIs – the ultimate collectives – as supporters and enablers of crowds of people learning together. Generative AI provides us with the means to play with structures and adapt in hitherto impossible ways, because the algorithms that drive their adaptations are indefinitely flexible, the reified activities that form them are vast, and the people that participate in them play an active role in adjusting and forming their algorithms (not the underpinning neural nets but the emergent configurations they take). These are significant differences from traditional collectives, which tend to have one purpose and algorithm (typically complex but deterministic), such as returning search results or driving network engagement. I also see a great many potential risks, of which I have written fairly extensively of late, most notably in their playing soft orchestral roles in the assembly that replace the need for humans to learn to play them. We tread a fine line between learning utopia and learning dystopia, especially if we try to overlay them on top of educational systems that are driven by credentials. Credentials used to signify a vast range of tacit knowledge and skills that were never measured, and (notwithstanding a long tradition of cheating) that was fine as long as nothing else could create those signals, because they were serviceable proxies: if you could pass the test or assignment, it meant that you had gone through the process and learned a lot more than what was tested. This has been eroded for some time, abetted by social media like Course Hero or Chegg that remain quite effective ways of bypassing the process for those willing to pay a nominal sum and accept the risk. Now that generative AI can do the same at considerably lower cost, with greater reliability and lower risk, without its users having gone through the process, such assessments no longer make good signifiers and, anyway (playing Devil’s advocate), it remains unclear to what extent those soft, tacit skills are needed now that generative AIs can achieve them so well. I am much encouraged by the existence of Paul LeBlanc’s lab initiative, the fact that George is its chief scientist, its intent to enable human-centred learning in an age of AI, and its aspiration to reinvent education to fit. We need such endeavours. I hope they will do some great things.

And now in Chinese: 在线学习环境:隐喻问题与系统改进 (roughly, “Online learning environments: metaphorical problems and systemic improvements”). And some thoughts on the value of printed texts.

Warm off the press, and with copious thanks and admiration to Junhong Xiao for the invitation to submit and the translation, here is my paper “The problematic metaphor of the environment in online learning” in Chinese, in the Journal of Open Learning. It is based on my OTESSA Journal paper, originally published as “On the Misappropriation of Spatial Metaphors in Online Learning” at the end of 2022 (a paper I am quite pleased with, though it has yet to receive any citations that I am aware of).

Many thanks, too, to Junhong for sending me the printed version that arrived today, smelling deliciously of ink. I hardly ever read anything longer than a shopping bill on paper any more but there is something rather special about paper that digital versions entirely lack. The particular beauty of a book or journal written in a language and script that I don’t even slightly understand is that, notwithstanding the ease with which I can translate it using my phone, it largely divorces the medium from the message. Even with translation tools my name is unrecognizable to me in this: Google Lens translates it as “Jon Delong”. Although I know it contains a translation of my own words, it is really just a thing: the signs it contains mean nothing to me, in and of themselves. And it is a thing that I like, much as I like the books on my bookshelf.

I am not alone in loving paper books, a fact that owners of physical copies of my most recent book (which can be read online for free but that costs about $CAD40 on paper) have had the kindness to mention, e.g. here and here. There is something generational in this, perhaps. For those of us who grew up knowing no other reading medium than ink on paper, there is comfort in the familiar, and we have thousands (perhaps millions) of deeply associated memories in our muscles and brains connected with it, made more precious by the increasing rarity with which those memories are reinforced by actually reading them that way. But, for the most part, I doubt that my grandchildren, at least, will lack that. While they do enjoy and enthusiastically interact with text on screens, from before they have been able to accurately grasp them they have been exposed to printed books, and have loved some of them as much as I did at the same ages.

It is tempting to think that our love of paper might simply be because we don’t have decent e-readers, but I think there is more to it than that. I have some great e-readers in many sizes and types, and I do prefer some of them to read from, for sure: backlighting when I need it, robustness, flexibility, the means to see it in any size or font that works for me, the simple and precise search, the shareable highlights, the lightness of (some) devices, the different ways I can hold them, and so on, make them far more accessible. But paper has its charms, too. Most obviously, something printed on paper is a thing to own whereas, on the whole, a digital copy tends to be just a licence to read, and ownership matters. I won’t be leaving my e-books to my children. The thingness really matters in other ways, too. Paper is something to handle, something to smell. Pages and book covers have textures – I can recognize some books I know well by touch alone. It affects many senses, and is more salient as a result. It takes up room in an environment so it’s a commitment, and so it has to matter, simply because it is there; a rivalrous object competing with other rivalrous objects for limited space. Paper comes in fixed sizes that may wear down but will never change: it thus keeps its shape in our memories, too. My wife has framed occasional pages from my previously translated work, elevating them to art works, decoupled from their original context, displayed with the same lofty reverence as pages from old atlases. Interestingly, she won’t do that if it is just a printed PDF: it has to come from a published paper journal, so the provenance matters. Paper has a history and a context of its own, beyond what it contains. And paper creates its own context, filled with physical signals and landmarks that make words relative to the medium, not abstractions that can be reflowed, translated into other languages, or converted into other media (notably speech). The result is something that is far more memorable than a reflowable e-text. Over the years I’ve written a little about this here and there, and elsewhere, including a paper on the subject (ironically, a paper that is not available on paper, as it happens), describing an approach to making e-texts more memorable.

After reaching a slightly ridiculous peak in the mid-2000s, and largely as a result of a brutal culling that occurred when I came to Canada nearly 17 years ago, my paper book collection has now diminished to fit easily in a single, not particularly large, free-standing IKEA shelving unit. The survivors are mostly ones I might want to refer to or read again, and losing some of them would sadden me a great deal, but I would (perhaps) run into a burning building to save only a few, including, for instance:

  • A dictionary from 1936, bound in leather by my father and used in countless games of Scrabble and spelling disputes when I was a boy, and that was used by my whole family to look up words at one time or another.
  • My original hardback copy of the Phantom Tollbooth (I have a paperback copy for lending), that remains my favourite book of all time, that was first read to me by my father, and that I have read myself many times at many ages, including to my own children.
  • A boxed set of the complete Chronicles of Narnia, that I chose as my school art prize when I was 18 because the family copies had become threadbare (read and abused by me and my four siblings), and that I later read to my own children. How someone with very limited artistic skill came to win the school art prize is a story for another time.
  • A well-worn original hardback copy of Harold and the Purple Crayon (I have a paperback copy for lending) that my father once displayed for children in his school to read, with the admonition “This is Mr Dron’s book. Please handle with care” (it was not – it was mine).
  • A scribble-filled, bookmark-laden copy of Kevin Kelly’s Out of Control that strongly influenced my thinking when I was researching my PhD and that still inspires me today. I can remember exactly where I sat when I made some of the margin notes.
  • A disintegrating copy of Storyland, given to me by my godmother in 1963 and read to me and by me for many years thereafter. There is a double value to this one because we once had two copies of this in our home: the other belonged to my wife, and was also a huge influence on her at similar ages.

These books proudly wear their history and their relationships with me and my loved ones in all their creases, coffee stains, scuffs, and tattered pages. To a greater or lesser extent, the same is true of almost all of the other physical books I have kept. They sit there as a constant reminder of their presence – their physical presence, their emotional presence, their social presence and their cognitive presence – flitting by in my peripheral vision many times a day, connecting me to thoughts and inspirations I had when I read them and, often, with people and places connected with them. None of this is true of my e-books. Nor is it quite the same for other objects of sentimental value, except perhaps (and for very similar reasons) the occasional sculpture or picture, or some musical instruments. Much as I am fond of (say) baby clothes worn by my kids or a battered teddy bear, they are little more than aides memoires for other times and other activities, whereas the books (and a few other objects) latently embody the experiences themselves. If I opened them again (and I sometimes do) it would not be the same experience, but it would enrich and connect with those that I already had.

I have hundreds of e-books that are available on many devices, one of which I carry with me at all times, not to mention an Everand (formerly Scribd) account with a long history and a long, mostly lost history of library borrowing, and I have at least a dozen devices on which to read them, from a 4 inch e-ink reader to a 32 inch monitor and much in between, but my connection with those is far more limited and transient. It is still more limited for books that are locked to a certain duration through DRM (which is one reason DRM is the scum of the earth). When I look at my devices and open the various reading apps on them I do see a handful of book covers, usually those that I have most recently read, but that is too fleeting and volatile to have much value. And when I open them they don’t fall open on well-thumbed pages. The text is not tangibly connected with the object at all.

As well as smarter landmarks within them, better ways to make e-books more visible would help, which brings me to the real point of this post. For many years I have wanted to paper a wall or two with e-paper (preferably in colour) on which to display e-book covers, but the costs are still prohibitive. It would be fun if the covers would become battered with increasing use, showing the ones that really mattered, and maybe dust could settle on those that were never opened, though it would not have to be so skeuomorphic – fading would work, or glyphs. They could be ordered manually or by (say) reading date, title, author, or subject. Perhaps touching them or scanning a QR code could open them. I would love to get a research grant to do this but I don’t think asking for electronic wallpaper in my office would fly with most funding sources, even if I prettied it up with words like “autoethnography”, and I don’t have a strong enough case, nor can I think of a rigorous enough research methodology to try it in a larger study with other people. Well. Maybe I will try some time. Until the costs of e-paper come down much further, it is not going to be a commercially viable product, either, though prices are now low enough that it might be possible to do it in a limited way with a poster-sized display for a (very) few thousand dollars. It could certainly be done with a large screen TV for well under $1000 but I don’t think a power-hungry glowing screen would be at all the way to go: the value would not be enough to warrant the environmental harm or energy costs, and something that emitted light would be too distracting. I do have a big monitor on my desk, though, which is already glowing anyway, so it wouldn’t be any worse to add a background showing e-book covers or spines. I could easily do this as a static image or slideshow, but I’d rather have something dynamic. It shouldn’t be too hard to extract the metadata from my list of books, swipe the images from the Web or the e-book files, and show them as a backdrop (a screensaver would be trivial), as in the sketch below. It might even be worth extending this to papers and articles I have read. I already have Pocket open most of the time, displaying web pages that I have recently read or want to read (serving a similar purpose for short-term recollection), and that could be incorporated in this. I think it would be useful, and it would not be too much work to do it – most of the important development could be done in a day or two. If anyone has done this already or feels like coding it, do get in touch!
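By way of a proof of concept, the sort of thing I have in mind could start as small as the following sketch: a stdlib-only Python script that reads a hypothetical books.csv (the title and isbn columns are my assumed format), points each cover image at Open Library’s public covers API, and writes a static HTML “book wall” that could serve as a backdrop or screensaver. It is an illustrative first pass under those assumptions, not the dynamic, wear-and-dust display described above.

```python
# A first-pass sketch (not the dynamic display described above): build a static
# HTML "book wall" from a CSV of books. Assumes a hypothetical books.csv with
# "title" and "isbn" columns, and uses Open Library's public covers API.
import csv
import html

COVER_URL = "https://covers.openlibrary.org/b/isbn/{isbn}-M.jpg"

def build_wall(csv_path: str = "books.csv", out_path: str = "bookwall.html") -> None:
    cells = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            isbn = row.get("isbn", "").strip()
            title = html.escape(row.get("title", "Untitled"))
            if not isbn:
                continue  # no ISBN means no cover lookup; skip for now
            cells.append(
                f'<figure><img src="{COVER_URL.format(isbn=isbn)}" alt="{title}" loading="lazy">'
                f"<figcaption>{title}</figcaption></figure>"
            )
    # One self-contained page: a responsive grid of covers with captions.
    page = (
        "<!doctype html><meta charset='utf-8'><title>Book wall</title>"
        "<style>"
        "body{display:grid;grid-template-columns:repeat(auto-fill,minmax(120px,1fr));"
        "gap:8px;background:#222;margin:8px}"
        "figure{margin:0}img{width:100%}"
        "figcaption{font:10px sans-serif;color:#ccc;text-align:center}"
        "</style>"
        + "".join(cells)
    )
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(page)

if __name__ == "__main__":
    build_wall()
```

Point a full-screen browser (or anything that can render a local HTML file as a desktop background) at the output; ordering by reading date, fading the never-opened, and opening a book from its cover would all be refinements on the same skeleton.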

Slides from my SITE keynote, 2024: The Intertwingled Teacher

The Intertwingled Teacher 

UPDATE:  the video of my talk is now available at https://www.youtube.com/watch?v=ji0jjifFXTs  (slides and audio only) …

These are the slides from my opening keynote at SITE ’24 today, at Planet Hollywood in Las Vegas. The talk was based closely on some of the main ideas in How Education Works. I’d written an over-ambitious abstract promising answers to many questions and concerns, which I did just about cover, but far too broadly. For counterbalance, therefore, I tried to keep the focus on a single message – t’ain’t what you do, it’s the way that you do it (which is the epigraph for the book) – and, because it was Vegas, I felt that I had to do a show, so I ended the session with a short ukulele version of the song of that name. I had fun, and a few people tried to sing along. The keynote conversation that followed was most enjoyable – wonderful people with wonderful ideas, and the hour allotted to it gave us time to explore all of them.

Here is that bloated abstract:

Abstract: All of us are learning technologists, teaching others through the use of technologies, be they language, white boards, and pencils or computers, apps, and networks. We are all part of a vast, technology-mediated cognitive web in which a cast of millions – in formal education including teachers such as textbook authors, media producers, architects, software designers, system administrators, and, above all, learners themselves – co-participates in creating an endless, richly entwined tapestry of learning. This tapestry spreads far beyond formal acts of teaching, far back in time, and far into the future, weaving in and helping to form not just the learning of individuals but the collective intelligence of the whole human race. Everyone’s learning journey both differs from and is intertwingled with that of everyone else. Education is an overwhelmingly complex and unpredictable technological system in which coarse patterns and average effects can be found but about which, except in the most rigid, invariant, minor details, accurate individual predictions cannot be made. No learner is average, and outcomes are always greater than what is intended. The beat of a butterfly’s wing in Timbuktu can radically affect the experience of a learner in Toronto. A slight variation in tone of voice can make all the difference between a life-transforming learning experience and a lifelong aversion to a subject. Beautifully crafted, research-informed teaching methods can be completely ineffective, while poor teaching, or even the absence of it, can result in profoundly effective learning. For all our efforts to understand and control it, education as a technological process is far closer to art than to engineering. What we do is usually far less significant than the idiosyncratic way that we do it, and how much we care for the subject, our students, and our craft is often far more important than the pedagogical methods we use. In this talk I will discuss what all of this implies for how we should teach, for how we understand teaching, and for how we research the massively intertwingled processes and tools of teaching. Along the way I will explain why there is no significant difference between measured outcomes of online and in-person learning, the futility of teaching to learning styles, the reason for the 2-sigma advantage of personal tuition, the surprising commonalities between behaviourist, cognitivist, and constructivist models of learning and teaching, the nature of literacies, and the failure of reductive research methods in education. It will be fun.

New article from Gerald Ardito and me – The emergence of autonomy in intertwingled learning environments: a model of teaching and learning

Here is a paper from the Asia-Pacific Journal of Teacher Education by my friend Gerald Ardito and me that presents a slightly different way of thinking about teaching and learning. We adopt a broadly complexivist stance that sees environments not as a backdrop to learning but as a rich network of dynamic, intertwingled relationships between the various parts (including parts played by people), mediated through technologies, enabling and enabled by autonomy. The model that we develop knits together a smorgasbord of theories and models, including Self-Determination Theory (SDT), Connectivism, an assortment of complexity theories, the extended version of Paulsen’s model of cooperative freedoms developed by me and Terry Anderson, Garrison & Baynton’s model of autonomy, and my own coparticipation theory, wrapping up with a bit of social network analysis of a couple of Gerald’s courses that puts it all into perspective. From Gerald’s initial draft the paper took years of very sporadic development and went through many iterations. It seemed to take forever, but we had fun writing it. Looking afresh at the finished article, I think the diagrams might have been clearer, we might have done more to join all the dots, and we might have expressed the ideas a bit less wordily, but I am mostly pleased with the way it turned out, and I am glad to see it finally published. The good bits are all Gerald’s, but I am personally most pleased with the consolidated model of autonomy visualized in figure 4, which connects my own and Terry Anderson’s cooperative freedoms, Garrison & Baynton’s model of autonomy, and SDT.

combining cooperative freedoms, autonomy, and SDT

Reference:

Ardito, G., & Dron, J. (2024). The emergence of autonomy in intertwingled learning environments: A model of teaching and learning. Asia-Pacific Journal of Teacher Education. https://doi.org/10.1080/1359866X.2024.2325746