And now in Chinese: 在线学习环境:隐喻问题与系统改进 (Online learning environments: metaphorical problems and systemic improvements). And some thoughts on the value of printed texts.

Warm off the press, and with copious thanks and admiration to Junhong Xiao for the invitation to submit and the translation, here is my paper “The problematic metaphor of the environment in online learning” in Chinese, in the Journal of Open Learning. It is based on my OTESSA Journal paper, originally published as “On the Misappropriation of Spatial Metaphors in Online Learning” at the end of 2022 (a paper I am quite pleased with, though it has yet to receive any citations that I am aware of).

Many thanks, too, to Junhong for sending me the printed version that arrived today, smelling deliciously of ink. I hardly ever read anything longer than a shopping bill on paper any more but there is something rather special about paper that digital versions entirely lack. The particular beauty of a book or journal written in a language and script that I don’t even slightly understand is that, notwithstanding the ease with which I can translate it using my phone, it largely divorces the medium from the message. Even with translation tools my name is unrecognizable to me in this: Google Lens translates it as “Jon Delong”. Although I know it contains a translation of my own words, it is really just a thing: the signs it contains mean nothing to me, in and of themselves. And it is a thing that I like, much as I like the books on my bookshelf.

I am not alone in loving paper books, a fact that owners of physical copies of my most recent book (which can be read online for free but costs about $CAD40 on paper) have had the kindness to mention, e.g. here and here. There is something generational in this, perhaps. For those of us who grew up knowing no other reading medium than ink on paper, there is comfort in the familiar, and we have thousands (perhaps millions) of deeply associated memories in our muscles and brains connected with it, made more precious by the increasing rarity with which those memories are reinforced by actually reading that way. For the most part, though, I doubt that my grandchildren, at least, will lack that connection. While they do enjoy and enthusiastically interact with text on screens, they have been exposed to printed books since before they could accurately grasp them, and have loved some of them as much as I did at the same ages.

It is tempting to think that our love of paper might simply be because we don’t have decent e-readers, but I think there is more to it than that. I have some great e-readers in many sizes and types, and I do prefer some of them to read from, for sure: backlighting when I need it, robustness, flexibility, the means to see it in any size or font that works for me, the simple and precise search, the shareable highlights, the lightness of (some) devices, the different ways I can hold them, and so on, make them far more accessible. But paper has its charms, too. Most obviously, something printed on paper is a thing to own whereas, on the whole, a digital copy tends to be just a licence to read, and ownership matters. I won’t be leaving my e-books to my children. The thingness really matters in other ways, too. Paper is something to handle, something to smell. Pages and book covers have textures – I can recognize some books I know well by touch alone. It affects many senses, and is more salient as a result. It takes up room in an environment so it’s a commitment, and so it has to matter, simply because it is there; a rivalrous object competing with other rivalrous objects for limited space. Paper comes in fixed sizes that may wear down but will never change: it thus keeps its shape in our memories, too. My wife has framed occasional pages from my previously translated work, elevating them to art works, decoupled from their original context, displayed with the same lofty reverence as pages from old atlases. Interestingly, she won’t do that if it is just a printed PDF: it has to come from a published paper journal, so the provenance matters. Paper has a history and a context of its own, beyond what it contains. And paper creates its own context, filled with physical signals and landmarks that make words relative to the medium, not abstractions that can be reflowed, translated into other languages, or converted into other media (notably speech). The result is something that is far more memorable than a reflowable e-text. Over the years I’ve written a little about this here and there, and elsewhere, including a paper on the subject (ironically, not itself available on paper), describing an approach to making e-texts more memorable.

After reaching a slightly ridiculous peak in the mid-2000s, and largely as a result of a brutal culling that occurred when I came to Canada nearly 17 years ago, my paper book collection has now diminished enough to fit easily in a single, not particularly large, free-standing IKEA shelving unit. The survivors are mostly ones I might want to refer to or read again, and losing some of them would sadden me a great deal, but I would (perhaps) run into a burning building to save only a few, including, for instance:

  • A dictionary from 1936, bound in leather by my father, that was used in countless games of Scrabble and spelling disputes when I was a boy, and that my whole family used to look up words at one time or another.
  • My original hardback copy of The Phantom Tollbooth (I have a paperback copy for lending), that remains my favourite book of all time, that was first read to me by my father, and that I have read myself many times at many ages, including to my own children.
  • A boxed set of the complete Chronicles of Narnia, that I chose as my school art prize when I was 18 because the family copies had become threadbare (read and abused by me and my four siblings), and that I later read to my own children. How someone with very limited artistic skill came to win the school art prize is a story for another time.
  • A well-worn original hardback copy of Harold and the Purple Crayon (I have a paperback copy for lending) that my father once displayed for children in his school to read, with the admonition “This is Mr Dron’s book. Please handle with care” (it was not – it was mine).
  • A scribble-filled, bookmark-laden copy of Kevin Kelly’s Out of Control that strongly influenced my thinking when I was researching my PhD and that still inspires me today. I can remember exactly where I sat when I made some of the margin notes.
  • A disintegrating copy of Storyland, given to me by my godmother in 1963 and read to me and by me for many years thereafter. There is a double value to this one because we once had two copies in our home: the other belonged to my wife, and was also a huge influence on her at similar ages.

These books proudly wear their history and their relationships with me and my loved ones in all their creases, coffee stains, scuffs, and tattered pages. To a greater or lesser extent, the same is true of almost all of the other physical books I have kept. They sit there as a constant reminder of their presence – their physical presence, their emotional presence, their social presence and their cognitive presence – flitting by in my peripheral vision many times a day, connecting me to thoughts and inspirations I had when I read them and, often, with people and places connected with them. None of this is true of my e-books. Nor is it quite the same for other objects of sentimental value, except perhaps (and for very similar reasons) the occasional sculpture or picture, or some musical instruments. Much as I am fond of (say) baby clothes worn by my kids or a battered teddy bear, they are little more than aides-mémoire for other times and other activities, whereas the books (and a few other objects) latently embody the experiences themselves. If I opened them again (and I sometimes do) it would not be the same experience, but it would enrich and connect with those that I already had.

I have hundreds of e-books that are available on many devices, one of which I carry with me at all times, an Everand (formerly Scribd) account with a long history, and a long and mostly lost history of library borrowing, and I have at least a dozen devices on which to read them, from a 4 inch e-ink reader to a 32 inch monitor and much in between, but my connection with those is far more limited and transient. It is still more limited for books that are locked to a certain duration through DRM (which is one reason DRM is the scum of the earth). When I look at my devices and open the various reading apps on them I do see a handful of book covers, usually those that I have most recently read, but that is too fleeting and volatile to have much value. And when I open them they don’t fall open on well-thumbed pages. The text is not tangibly connected with the object at all.

As well as smarter landmarks within them, better ways to make e-books more visible would help, which brings me to the real point of this post. For many years I have wanted to paper a wall or two with e-paper (preferably in colour) on which to display e-book covers, but the costs are still prohibitive. It would be fun if the covers became battered with increasing use, showing which ones really mattered, and maybe dust could settle on those that were never opened, though it would not have to be so skeuomorphic: fading would work, or glyphs. They could be ordered manually or by (say) reading date, title, author, or subject. Perhaps touching them or scanning a QR code could open them. I would love to get a research grant to do this but I don’t think asking for electronic wallpaper in my office would fly with most funding sources, even if I prettied it up with words like “autoethnography”, and I don’t have a strong enough case, nor can I think of a rigorous enough research methodology, to try it in a larger study with other people. Well. Maybe I will try some time.

Until the costs of e-paper come down much further it is not going to be a commercially viable product, either, though prices are now low enough that it might be possible to do it in a limited way with a poster-sized display for a (very) few thousand dollars. It could certainly be done with a large screen TV for well under $1000, but I don’t think a power-hungry glowing screen would be the way to go: the value would not be enough to warrant the environmental harm or energy costs, and something that emitted light would be too distracting. I do have a big monitor on my desk, though, which is already glowing away, so a backdrop showing e-book covers or spines would make it no worse. I could easily do this as a static image or slideshow, but I’d rather have something dynamic. It shouldn’t be too hard to extract the metadata from my list of books, swipe the cover images from the Web or from the e-book files themselves, and show them as a backdrop (a screensaver would be trivial). It might even be worth extending this to papers and articles I have read. I already have Pocket open most of the time, displaying web pages that I have recently read or want to read (serving a similar purpose for short-term recollection), and that could be incorporated too. I think it would be useful, and it would not be too much work to do it – most of the important development could be done in a day or two. If anyone has done this already or feels like coding it, do get in touch!
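In case anyone is tempted, here is a minimal, untested sketch in Python (using the Pillow imaging library) of the sort of thing I have in mind as a static starting point: it crawls a folder of DRM-free EPUBs (which are just zip files with a manifest), makes a best-effort guess at each book’s cover image, and tiles the covers into a single backdrop image. Everything here is illustrative rather than definitive – the folder name, grid size, and cover-guessing heuristic are placeholder assumptions, EPUB conventions vary, and the dynamic battering and fading would be another project entirely.

```python
"""Sketch: tile e-book covers from a folder of EPUBs into one backdrop image.

Assumes DRM-free EPUBs; the cover-guessing heuristic is best-effort only.
"""
import io
import zipfile
from pathlib import Path
from xml.etree import ElementTree

from PIL import Image  # pip install Pillow

BOOKS = Path.home() / "Books"   # hypothetical library folder
OUT = Path("backdrop.png")
THUMB, COLS = (200, 300), 8     # cover size in pixels, and grid width


def find_cover(epub_path: Path):
    """Best-effort: return whatever the OPF manifest marks as a cover image."""
    try:
        with zipfile.ZipFile(epub_path) as z:
            # container.xml points at the OPF package document.
            ns = {"c": "urn:oasis:names:tc:opendocument:xmlns:container"}
            container = ElementTree.fromstring(z.read("META-INF/container.xml"))
            opf_name = container.find(".//c:rootfile", ns).get("full-path")
            opf = ElementTree.fromstring(z.read(opf_name))
            opf_dir = str(Path(opf_name).parent)
            ns = {"o": "http://www.idpf.org/2007/opf"}
            for item in opf.findall(".//o:manifest/o:item", ns):
                # Conventions vary: look for "cover" in the id or properties.
                hint = (item.get("id", "") + item.get("properties", "")).lower()
                if "cover" in hint and "image" in item.get("media-type", ""):
                    href = item.get("href")
                    path = href if opf_dir == "." else f"{opf_dir}/{href}"
                    return Image.open(io.BytesIO(z.read(path))).convert("RGB")
    except Exception:
        pass  # malformed or DRM-locked book: just skip it
    return None


covers = [c for f in sorted(BOOKS.glob("*.epub")) if (c := find_cover(f))]
rows = -(-len(covers) // COLS)  # ceiling division
wall = Image.new("RGB", (COLS * THUMB[0], max(rows, 1) * THUMB[1]), "black")
for i, cover in enumerate(covers):
    cover.thumbnail(THUMB)  # shrink in place, preserving aspect ratio
    wall.paste(cover, ((i % COLS) * THUMB[0], (i // COLS) * THUMB[1]))
wall.save(OUT)
```

Pointing the desktop wallpaper or a trivial screensaver at the resulting image, and re-running the script whenever the library changes, would get most of the way there; ordering by reading date or author, and fading the covers of unopened books, would be straightforward extensions.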

Slides from my SITE keynote, 2024: The Intertwingled Teacher


These are the slides from my opening keynote at SITE ‘24 today, at Planet Hollywood in Las Vegas. The talk was based closely on some of the main ideas in How Education Works. I’d written an over-ambitious abstract promising answers to many questions and concerns, all of which I did just about cover, but far too broadly. To counterbalance that, I tried to keep the focus on a single message – t’aint what you do, it’s the way that you do it (which is the epigraph for the book) – and, because it was Vegas, I felt that I had to do a show, so I ended the session with a short ukulele version of the song of that name. I had fun, and a few people tried to sing along. The keynote conversation that followed was most enjoyable – wonderful people with wonderful ideas, and the hour allotted to it gave us time to explore all of them.

Here is that bloated abstract:

Abstract: All of us are learning technologists, teaching others through the use of technologies, be they language, white boards, and pencils or computers, apps, and networks. We are all part of a vast, technology-mediated cognitive web in which a cast of millions – in formal education including teachers such as textbook authors, media producers, architects, software designers, system administrators, and, above all, learners themselves – co-participates in creating an endless, richly entwined tapestry of learning. This tapestry spreads far beyond formal acts of teaching, far back in time, and far into the future, weaving in and helping to form not just the learning of individuals but the collective intelligence of the whole human race. Everyone’s learning journey both differs from and is intertwingled with that of everyone else. Education is an overwhelmingly complex and unpredictable technological system in which coarse patterns and average effects can be found but in which, except for the most rigid, invariant, minor details, accurate predictions about individuals cannot be made. No learner is average, and outcomes are always greater than what is intended. The beat of a butterfly’s wing in Timbuktu can radically affect the experience of a learner in Toronto. A slight variation in tone of voice can make all the difference between a life-transforming learning experience and a lifelong aversion to a subject. Beautifully crafted, research-informed teaching methods can be completely ineffective, while poor teaching, or even the absence of it, can result in profoundly effective learning. For all our efforts to understand and control it, education as a technological process is far closer to art than to engineering. What we do is usually far less significant than the idiosyncratic way that we do it, and how much we care for the subject, our students, and our craft is often far more important than the pedagogical methods we use. In this talk I will discuss what all of this implies for how we should teach, for how we understand teaching, and for how we research the massively intertwingled processes and tools of teaching. Along the way I will explain why there is no significant difference between measured outcomes of online and in-person learning, the futility of teaching to learning styles, the reason for the 2-sigma advantage of personal tuition, the surprising commonalities between behaviourist, cognitivist, and constructivist models of learning and teaching, the nature of literacies, and the failure of reductive research methods in education. It will be fun.

New article from Gerald Ardito and me – The emergence of autonomy in intertwingled learning environments: a model of teaching and learning

Here is a paper from the Asia-Pacific Journal of Teacher Education by my friend Gerald Ardito and me that presents a slightly different way of thinking about teaching and learning. We adopt a broadly complexivist stance that sees environments not as a backdrop to learning but as a rich network of dynamic, intertwingled relationships between the various parts (including parts played by people), mediated through technologies, enabling and enabled by autonomy. The model that we develop knits together a smorgasbord of theories and models, including Self-Determination Theory (SDT), Connectivism, an assortment of complexity theories, the extended version of Paulsen’s model of cooperative freedoms developed by me and Terry Anderson, Garrison & Baynton’s model of autonomy, and my own coparticipation theory, wrapping up with a bit of social network analysis of a couple of Gerald’s courses that puts it all into perspective. From Gerald’s initial draft the paper took years of very sporadic development and went through many iterations. It seemed to take forever, but we had fun writing it. Looking afresh at the finished article, I think the diagrams might have been clearer, we might have done more to join all the dots, and we might have expressed the ideas a bit less wordily, but I am mostly pleased with the way it turned out, and I am glad to see it finally published. The good bits are all Gerald’s, but I am personally most pleased with the consolidated model of autonomy visualized in figure 4, which connects my own and Terry Anderson’s cooperative freedoms, Garrison & Baynton’s model of autonomy, and SDT.

combining cooperative freedoms, autonomy, and SDT

Reference:

Gerald Ardito & Jon Dron (2024) The emergence of autonomy in intertwingled learning environments: a model of teaching and learning, Asia-Pacific Journal of Teacher Education, DOI: 10.1080/1359866X.2024.2325746

▶ I got air: interview with Terry Greene

Since 2018, Terry Greene has been producing a wonderful series of podcast interviews with open and online learning researchers and practitioners called Getting Air. Prompted by the publication of How Education Works (Terry is also responsible for the musical version of the book, so I think he likes it), this week’s episode features an interview with me.

I probably should have been better prepared. Terry asked some probing, well-informed, and sometimes disarming questions, most of which led to me rambling more than I might have done if I’d thought about them in advance. It was fun, though, drifting through a broad range of topics from the nature of technology to music to the perils of generative AI (of course).

I hope that Terry does call his PhD dissertation “Getting rid of instructional designers”.

Journal of Imaginary Research, Volume 9 (including a piece by me)

Since 2015 Kay Guccione and Matthew Cheeseman have been editing the wonderful Journal of Imaginary Research (tagline “Writing Without Discipline”) that, once a year, publishes fictional research abstracts by fictional researchers. Each issue has a theme, and Volume 9’s is “Deal or Dealing”.  I have an abstract in it.

As well as providing some entertaining and often very funny short reads, there is a serious academic intent behind all of this. As Guccione and Cheeseman put it,

In producing these short, exploratory pieces, we seek to help writers establish a new relationship with writing; less driven by the demands of productivity. Writing fiction in a familiar format helps us reflect on how we can creatively communicate our research projects, and how we can find the joy of creativity in all our writing. Many of the pieces we receive, whilst fictional, have a basis in a real observation or experience; almost all take a fresh look at a problem, frustration or constraint experienced by the researchers who crafted them.

My own contribution (well, that of Dr Dorian Faust Jr, an assistant professor in the Faculty of Arbitrary Studies at the University of New Catatonia) is one of two that investigate the economic value of a soul. Mine is less about soul-selling than it is about the misapplication of quantitative research to things that cannot be quantified, as well as offering a broader critique of systems driving academia in general. It’s the work of less than an hour and I suspect that it might not make much of a contribution to my h-index but, self-referentially, that’s not going to stop me from listing it as a journal publication for my annual performance review.

Stories that matter and stories that don’t: some thoughts on appropriate teaching roles for generative AIs

Well, this was definitely going to happen.

The system discussed in this Wired article is a bot (not available to the general public) that takes characters from the absurdly popular Bluey cartoon series and uses ChatGPT+ to create personalized bedtime stories involving them for its creator’s children. This is something anyone could do – it doesn’t take a prompt-wizard or a specialized bot. You could easily make any reasonably proficient LLM incorporate your child’s interests, friends, family, and characteristics and churn out a decent enough story from it. With copyright-free material you could make the writing style and scenes very similar to the original. A little editorial control may be needed here and there but I think that, with a smart enough prompt, it would do a fairly good, average sort of job, at least as readable as what an average human might produce, in a fraction of the time. I find this hugely problematic, though, and not for the reasons given in the article, albeit that there are certainly some legal and ethical concerns, especially around copyright and privacy, as well as the potential for generating dubious, disturbing, or otherwise poor content.

Why stories matter

The thing that bothers me most about this is not the quality of the stories but the quality of the relationship between the author and the reader (or listener).  Stories are the most human of artifacts, the ways that we create and express meaning, no matter how banal. They act as hooks that bind us together, whether invented by a parent or shared across whole cultures. They are a big part of how we learn and establish our relationships with the world and with one another. They are glimpses into how another person thinks and feels: they teach us what it means to be human, in all its rich diversity. They reflect the best and the worst of us, and they teach us about what matters.

My children were in part formed by the stories I made up or read to them 30 or more years ago, and it matters that none were made by machines. The language that I used, the ways that I wove in people and things that were meaningful to them, the attitudes I expressed, the love that went into them, all mattered.  I wish I’d recorded one or two, or jotted down the plots of at least some of the very many Lemmie the Suicidal Lemming stories that were a particular favourite. These were not as dark as they sound – Lemmie was a cheerful creature who just happened to be prone to putting himself in life-threatening situations, usually as a result of following others. Now that they have children of their own, both my kids have deliciously dark but fundamentally compassionate senses of humour and a fierce independence that I’d like to think may, in small part, be a result of such tales.

The books I (or, as they grew, we, and then they) chose probably mattered more. Some had been read to me by my own parents and at least a couple were read to them by their own parents. Like my children, I learned to read very young, largely because my imagination was fired by those stories, and fired by how much they mattered to my parents and siblings. As much as the people around me, the people who wrote and inhabited the books I listened to and later read made me who I am, and taught me much of what I still know today – not just facts to recall in a pub quiz but ways of thinking and understanding the world, and not just because of the values they shared but because of my responses to them, that increasingly challenged those values. Unlike AI-generated tales, these were shared cultural artifacts, read by vast numbers of people, creating a shared cultural context, values, and meanings that helped to sustain and unite the society I lived in. You may not have read many of the same books I read as a middle class boy growing up in 1960s Britain but, even if you are not of my generation or cultural background, you might have read (or seen video adaptations of) one or more children’s works by A.A. Milne, Enid Blyton, C.S. Lewis, J.R.R. Tolkien, Hans Christian Andersen, Charles Dickens, Lewis Carroll, Kenneth Grahame, Rev. W. Awdry, T.S. Eliot, the Brothers Grimm, Norton Juster, Edward Lear, Hugh Lofting, Dr. Seuss, and so on. That matters, and it matters that I can still name them. These were real authors with attitudes, beliefs, ideas, and styles unlike any other. They were products and producers of the times and places they lived in. Many of their attitudes and values are, looking back, troublesome, and that was true even then. So many racist and sexist stereotypes and assumptions, so many false beliefs, so many values and attitudes that had no place in the 1960s, let alone now. And that was good, because it introduced me to a diversity of ways of being and thinking, and allowed me to compare them with my own values and those of other authors, and it prepared me for changes to come because I had noticed the differences between their context and mine, and questioned the reasons.

With careful prompting, generative AIs are already capable of producing work of similar quality and originality to fan fiction or corporate franchise output around the characters and themes of these and many other creative works, and maybe there is a place for that. It couldn’t be much worse than (say) the welter of appallingly sickly, anodyne, Americanized, cookie-cutter, committee-written Thomas the Tank Engine stories that my grandchildren get to watch and read, that bear as little resemblance to Rev. W. Awdry’s sublimely stuffy Railway Stories as Star Wars. It would soften the sting when kids reach the end of a much loved series, perhaps. And, while it is a novelty, a personalized story might be very appealing, albeit that there is something rather distasteful about making a child feel special with the unconscious output of a machine to which nothing matters. But this is not just about value to individuals, living with the histories and habits we have acquired in pre-AI times. This is something that is happening at a ubiquitous and massive scale, everywhere. When this is no longer a novelty but the norm it will change us, and change our societies, in ways that make me shiver. I fear that mass-individualization will in fact be mass-blandification, a myriad of pale shadows that neither challenge nor offend, that shut down rather than open up debate, that reinforce norms that never change and are never challenged (because who else will have read them?), that look back rather than forward, that teach us average ways of thinking, that learn what we like and enclose us in our own private filter bubble, keeping us from evolving, that only surprise us when they go wrong. This is in the nature of generative AIs because all they have to learn from is our own deliberate outputs and, increasingly, the outputs of prior generative AIs, not from any kind of lived experience. They are averaging mirrors whose warped distortions can convince us they are true reflections. Introducing AI-generated stories to very young children, at scale, seems to me to be an awful gamble with very high stakes for their futures. We are performing uncontrolled experiments with stuff that forms minds, values, attitudes, expectations, and meanings that these kids will carry with them for the rest of their lives, and there is at least some reason to suspect that the harm may be greater than the good, both on an individual and a societal level. At the very least, there is a need for a large amount of editorial control, but how many parents of young children have the time or the energy for that?

That said…

Generating, not consuming output

I do see great value in working with and supporting the kids in creating the prompts for those stories themselves. While the technology is moving too fast for these evanescent skills to be describable as generative AI literacies, the techniques they learn and discoveries they make while doing so may help them to understand the strengths and limitations of the tools as they continue to develop, and the outputs will matter more because they contributed to creating them. Plus, it is a great fun way to learn. My nearly 7-year-old grandchild, with the help of their father, has enjoyed and learned a lot from creating images with DALL-E, for instance, and has been doing so long enough to see massive improvements in its capabilities, so has learned some great meta-lessons about the nature of technological evolution too. This has not stopped them from developing their own artistic skills, including with the help of iPads and AI-assisted drawing tools, which offer excellent points of comparison and affordances to reflect on the differences. It has given them critical insight into the nature of the output and the processes that led to it, and it has challenged them to bend the machine to do what they want it to do. This kind of mindful use of the tools as complementary partners, rather than consumption of their products, makes sense to me.

I think the lessons carry forward to adult learning, too. I have huge misgivings about giving generative AIs a didactic role, for the same reasons that having them tell stories to children worries me. However, they can be great teachers for those who make use of them to create output, rather than being targets of the output they have created. For instance, I have been really enjoying using ChatGPT+ to help me write an Elgg plugin over the past few weeks, intended to deal with a couple of show-stopping bugs in an upgrade to the Landing that I had been struggling with for about 3 years, on and (mostly) off. I had come to see the problems as intractable, especially as a fair number of far smarter Elgg developers than I had looked at them and failed to see where the problems lay. ChatGPT+ let me try out a lot more ideas than even a large team of developers would have been able to come up with alone, and it took care of some of the mundane repetitive work that made the process slow. Though none of it was bad, little of its code was particularly good: it made up stuff, omitted stuff, and did things inefficiently. It was really good, though, at putting in explanatory comments and documenting what it was doing. This was great, because the things I had to do to fix the flaws taught me a lot more than I would have learned had they been perfect solutions. Nearly always, it was good enough and well-documented enough to set me on the right path, but the ways it failed drove me to look at source documentation, query the underlying database (now knowing what to look for), follow conversations on GitHub, and examine human-created plugins, from which I learned a lot more and got further inspiration about what to ask the LLM to do next. Because it made different mistakes each time, it helped me to slowly develop a clearer model of how things should really work, so I got better and better at solving the problems myself, meanwhile learning a whole raft of useful tricks from the code that worked and at least as much from figuring out why it didn’t. It was very iterative: each attempt sparked ideas for the next. It gave me just enough scaffolding to help me do what I could not do alone. About half way through, I discovered the cause of the problem – a single changed word in the 150,000+ lines of code in the core engine, intended to better suit the new notification system, which resulted in the existing 20m+ notification messages in the system failing to display correctly. This gave me ideas for some better prompts, the results of which taught me more. As a result, I am now a better Elgg coder than I was when I began, and I have a solution to a problem that has held up vital improvements to an ailing site used by more than 16,000 people for many years (though there are still a few hurdles to overcome before it reaches the production site).

Filling the right gaps

The final solution actually uses no code from ChatGPT+ at all, but it would not have been possible to get to that point without it. The skills it provided were different to and complementary to my own, and I think that is the critical point. To play an effective teaching role, a teacher has to leave the right kind of gaps for the learner to fill. If they are too large or too small, the learner learns little or nothing. The to and fro between me and the machine, and the ease with which I could try out different ideas, eventually led to those gaps being just the right size so that, instead of being an overwhelming problem, it became an achievable challenge. And that is the story that matters here.

The same is true of the stories that inspire: they leave the right sized gaps for the reader or listener to fill with their own imaginations while providing sufficient scaffolding to guide them, surprise them, or support them on the journey. We are participants in the stories, not passive recipients of them, much as I was a participant in the development of the Elgg plugin and, similarly, we learn through that participation. But there is a crucial difference. While I was learning the mechanical skills of coding from this process (as well as independently developing the soft skills to use them well), the listener to or reader of a story is learning the social, cultural, and emotional skills of being human (as well as, potentially, absorbing a few hard facts and the skills of telling their own stories). A story can be seen as a kind of machine in its own right: one that is designed to make us think and feel in ways that matter to the author. And that, in a nutshell, is why a story produced by a generative AI is such a problematic idea for the reader, but the use of a generative AI to help produce that story can be such a good idea for the writer.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21680600/stories-that-matter-and-stories-that-dont-some-thoughts-on-appropriate-teaching-roles-for-generative-ais

Published in Digital – The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education

A month or two ago I shared a “warts-and-all” preprint of this paper on the risks of educational uses of generative AIs. The revised, open-access published version, The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education is now available in the Journal Digital.

The process has been a little fraught. Two reviewers really liked the paper and suggested minimal but worthwhile changes. One quite liked it but had a few reasonable suggestions for improvements that mostly helped to make the paper better. The fourth, though, was bothersome in many ways, and clearly wanted me to write a completely different paper. Despite this, I did most of what they asked, even though some of the changes, in my opinion, made the paper a bit worse. However, I drew the line when they demanded (without giving any reason) that I should refer to 8 very mediocre, forgettable, cookie-cutter computer science papers which, on closer inspection, had all clearly been written by the reviewer or their team. The big problem I had with this was not so much the poor quality of the papers, nor even the blatant nepotism/self-promotion of the demand, but the fact that none were in any conceivable way relevant to mine, apart from being about AI: they were about algorithm-tweaking, mostly in the context of traffic movements in cities. It was as ridiculous as a reviewer of a work on Elizabethan literature requiring the author to refer to papers on slightly more efficient manufacturing processes for staples. Though it is normal and acceptable for reviewers to suggest reference to their own papers when it would clearly lead to improvements, this was an utterly shameless abuse of power on a scale and of a kind that I have never seen before. I politely refused, making it clear that I was on to their game but not directly calling them out on it.

In retrospect, I slightly regret not calling them out. For a grizzled old researcher like me, who could probably find another publisher without too much hassle, it doesn’t matter much if I upset a reviewer enough to make them reject my paper. However, for early-career researchers stuck in the publish-or-perish cycle, it would be very much harder to say no. This kind of behaviour is harmful to the author, the publisher, the reader, and the collective intelligence of the human race. The fact that the reviewer was so desperate to get a few more citations for their own team, with so little regard for quality or relevance, seems to me a poor reflection on them and their institution but, more so, a damning indictment of a broken system of academic publishing, and of the reward systems driving academic promotion and recognition. I do blame the reviewer, but I understand the pressures they might have been under to do such a blatantly immoral thing.

As it happens, my paper has more than a thing or two to say about this kind of McNamara phenomenon, whereby the means used to measure success in a system come to define and warp its purpose, because it is among the main reasons that generative AIs pose such a threat. It is easy to forget that the ways we establish goals and measure success in educational systems are no more than signals of a much more complex phenomenon with far more expansive goals, concerned with helping humans to be, individually and in their cultures and societies, as much as with helping them to do particular things. Generative AIs are great at both generating and displaying those signals – better than most humans, in many cases – but that’s all they do: the signals signify nothing. For well-defined tasks with well-defined goals they provide a lot of opportunities for cost-saving, quality improvement, and efficiency and, in many occupations, that can be really useful. If you want to quickly generate some high-quality advertising copy, the intent of which is to sell a product, then it makes good sense to use a generative AI. Not so much in education, though, where it is too easy to forget that learning objectives, learning outcomes, grades, credentials, and so on are not the purposes of learning but just means for and signals of achieving them.

Though there are other big reasons to be very concerned about using generative AIs in education, some of which I explore in the paper, this particular problem is not so much with the AIs themselves as with the technological systems into which they are, piecemeal, inserted. It’s a problem with thinking locally, not globally; of focusing on one part of the technology assembly without acknowledging its role in the whole. Generative AIs could, right now and with little assistance, perform almost every measurable task in an educational system, from (for students) producing essays and exam answers to (for teachers) writing activities and assignments, or acting as personal tutors. They could do so better than most people. If that is all that matters to us then we might as well remove the teachers and the students from the system because, quite frankly, they only get in the way. This absurd outcome is more or less exactly the end game that will occur, though, if we don’t rethink (or double down on existing rethinking of) how education should work and what it is for, beyond the signals that we usually use to evaluate success or intent. Just thinking of ways to use generative AIs to improve our teaching is well-meaning, but it risks destroying the woods by focusing on the trees. We really need to step back a bit and think of why we bother in the first place.

For more on this, and for my tentative partial solutions to these and other related problems, do read the paper!

Abstract and citation

This paper analyzes the ways that the widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. Methodologically, the paper applies a theoretical model and grounded argument to present a case that GAIs are different in kind from all previous technologies. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing these soft and hard techniques in humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain; the very things that technologies enabled us to do can now be done by the technologies themselves. Because they replace things that learners have to do in order to learn and that teachers must do in order to teach, the consequences for what, how, and even whether learning occurs are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs. Its distinctive contributions include a novel means of understanding the distinctive differences between GAIs and all other technologies, a characterization of the nature of generative AIs as collectives (forms of collective intelligence), reasons to avoid the use of GAIs to replace teachers, and a theoretically grounded framework to guide adoption of generative AIs in education.

Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21104429/published-in-digital-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

▶ How Education Works, the audio book: now with beats

My book has been set to music!

Many thanks to Terry Greene for converting How Education Works into the second in his inspired series of podcasts, EZ Learning – Audio Books with Beats. There’s a total of 15 episodes that can be listened to online, subscribed to with your preferred podcast app, or downloaded for later listening, read by a computer-generated voice and accompanied by some cool, soothing beats.

Terry chose a deep North American voice for the reader and Eaters In Coffeeshops Mix 1 by Eaters to accompany my book. I reckon it works really well. It’s bizarre, at first – the soothing robotic voice introduces weird pauses, mispronunciations, and curious emphases, and there are occasional voice parts in the music that can be slightly distracting – but you soon get used to it if you relax into the rhythm, and it leads to the odd serendipitous emphasis that enhances rather than detracts from the text. Oddly, in some ways it almost feels more human as a result. Though it can be a bit disconcerting at times and there’s a fair chance of being lulled to sleep by the gentle rhythm, I have a hunch that the addition of music might make it easier to remember passages from it, for reasons discussed in a paper I wrote with Rory McGreal, Vive Kumar, and Jennifer Davies a year or so ago.

I have been slowly and painfully working on a manually performed audiobook of How Education Works but it is taking much longer than expected thanks to living on the flight path of a surprising number of float planes, being in a city built on a rain forest with a noisy gutter outside my window, having two very vocal cats, and so on, not to mention not having a lot of free time to work on it, so I am very pleased that Terry has done this. I won’t stop working on the human-read version – I think this fills a different and very complementary niche – but it’s great to have something to point people towards when they ask for an audio version.

The first season of Audio Books with Beats, appearing in the feed after the podcasts for my book chapters, was another AU Press book, Terry Anderson’s Theory and Practice of Online Learning, which is also well worth a listen – those chapters follow directly from mine in the list of episodes. I hope and expect there will be more seasons to come so, if you are reading this some time after it was posted, you may need to scroll down through other podcasts until you reach the How Education Works episodes. In case they are hard to find, here is a list of direct links:

Acknowledgements, Prologue, introduction

Chapter 1: A Handful of Anecdotes About Elephants

Chapter 2:  A Handful of Observations About Elephants

Part I: All About Technology

Chapter 3: Organizing Stuff to Do Stuff

Chapter 4: How Technologies Work

Chapter 5: Participation and Technique

Part II: Education as a Technological System

Chapter 6: A Co-Participation Model of Teaching

Chapter 7: Theories of Teaching

Chapter 8: Technique, Expertise, and Literacy

Part III: Applying the Co-Participation Model

Chapter 9: Revealing Elephants

Chapter 10: How Education Works

Epilogue

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20936998/%E2%96%B6-how-education-works-the-audio-book-now-with-beats

Recording and slides from my ESET 2023 keynote: Artificial humanity and human artificiality

Here are the slides from my keynote at ESET23 in Taiwan (I was online, alas, not in Taipei!).

Here’s a recording of the actual keynote.

The themes of my talk will be familiar to anyone who follows my blog or who has read my recent paper on the subject. This is about applying the coparticipation theory from How Education Works to generative AI, raising concerns about the ways it mimics the soft technique of humans, and discussing how problematic that will be if the skills it replaces atrophy or are never learned in the first place, amongst other issues.

This is the abstract:

We are participants in, not just users of technologies. Sometimes we participate as orchestrators (for instance, when choosing words that we write) and sometimes as part of the orchestration (for instance, when spelling those words correctly). Usually, we play both roles.  When we automate aspects of technologies in which we are just parts of the orchestration, it frees us up to be able to orchestrate more, to do creative and problem-solving tasks, while our tools perform the hard, mechanical tasks better, more consistently, and faster than we could ourselves. Collectively and individually, we therefore become smarter. Generative AIs are the first of our technologies to successfully automate those soft, open-ended, creative cognitive tasks. If we lack sufficient time and/or knowledge to do what they do ourselves, they are like tireless, endlessly flexible personal assistants, expanding what we can do alone. If we cannot draw, or draw up a rental agreement, say, an AI will do it for us, so we may get on with other things. Teachers are therefore scrambling to use AIs to assist in their teaching as fast as students use AIs to assist with their assessments.

For achieving measurable learning outcomes, AIs are or will be effective teachers, opening up greater learning opportunities that are more personalized, at lower cost, in ways that are superior to average human teachers.  But human teachers, be they professionals, other students, or authors of websites, do more than help learners to achieve measurable outcomes. They model ways of thinking, ways of being, tacit knowledge, and values: things that make us human. Education is a preparation to participate in human cultures, not just a means of imparting economically valuable skills. What will happen as we increasingly learn those ways of being from a machine? If machines can replicate skills like drawing, reasoning, writing, and planning, will humans need to learn them at all? Are there aspects of those skills that must not atrophy, and what will happen to us at a global scale if we lose them? What parts of our cognition should we allow AIs to replace? What kinds of credentials, if any, will be needed? In this talk I will use the theory presented in my latest book, How Education Works: Teaching, Technology, and Technique to provide a framework for exploring why, how, and for what purpose our educational institutions exist, and what the future may hold for them.

Pre-conference background reading, including the book, articles, and blog posts on generative AI and education may be found linked from https://howeducationworks.ca

Preprint – The human nature of generative AIs and the technological nature of humanity: implications for education

Here is a preprint of a paper I just submitted to MDPI’s Digital journal that applies the co-participation model that underpins How Education Works (and a number of my papers over the last few years) to generative AIs (GAIs). I don’t know whether it will be accepted and, even if it is, it is very likely that some changes will be required. This is a warts-and-all raw first submission. It’s fairly long (around 10,000 words).

The central observation around which the paper revolves is that, for the first time in the history of technology, recent generations of GAIs automate (or at least appear to automate) the soft technique that has, till now, been the sole domain of humans. Up until now, every technology we have ever created, be it physically instantiated, cognitive, organizational, structural, or conceptual, has left all of the soft part of the orchestration to human beings.

The fact that GAIs replicate the soft stuff is a matter for some concern when they start to play a role in education, mainly because:

  • the skills they replace may atrophy or never be learned in the first place. This is not even slightly like replacing the hard skills of handwriting or arithmetic: we are talking about skills like creativity, problem-solving, critical inquiry, design, and so on. We’re talking about the stuff that GAIs are trained with.
  • the AIs themselves are an amalgam, an embodiment of our collective intelligence, not actual people. You can spin up any kind of persona you like and discard it just as easily. Much of the crucially important hidden/tacit curriculum of education is concerned with relationships, identity, ways of thinking, ways of being, ways of working and playing with others. It’s about learning to be human in a human society. It is therefore quite problematic to delegate how we learn to be human to a machine with (literally and figuratively) no skin in the game, trained on a bunch of signals signifying nothing but more signals.

On the other hand, to not use them in educational systems would be as stupid as to not use writing. These technologies are now parts of our extended cognition, intertwingled with our collective intelligence as much as any other technology, so of course they must be integrated in our educational systems. The big questions are not about whether we should embrace them but how, and what soft skills they might replace that we wish to preserve or develop. I hope that we will value real humans and their inventions more, rather than less, though I fear that, as long as we retain the main structural features of our education systems without significant adjustments to how they work, we will no longer care, and we may lose some of our capacity for caring.

I suggest a few ways we might avert some of the greatest risks by, for instance, treating them as partners/contractors/team members rather than tools, by avoiding methods of “personalization” that simply reinforce existing power imbalances and pedagogies designed for better indoctrination, by using them to help connect us and support human relationships, by doing what we can to reduce extrinsic drivers, by decoupling learning and credentials, and by doubling down on the social aspects of learning. There is also an undeniable explosion in adjacent possibles, leading to new skills to learn, new ways to be creative, and new possibilities for opening up education to more people. The potential paths we might take from now on are unprestatable and multifarious but, once we start down them, resulting path dependencies may lead us into great calamity at least as easily as they may expand our potential. We need to make wise decisions now, while we still have the wisdom to make them.

MDPI invited me to submit this article free of their normal article processing charge (APC). The fact that I accepted is therefore very much not an endorsement of APCs, though I respect MDPI’s willingness to accommodate those who find payment difficult, the good editorial services they provide, and the fact that all they publish is open. I was not previously familiar with the Digital journal itself. It has been publishing 4 articles a year since 2021, mostly offering a mix of reports on application designs and literature reviews. The quality seems good.

Abstract

This paper applies a theoretical model to analyze the ways that widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing the soft and hard techniques of humans to participate in the technologies and thus the collective intelligence of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain: the very things that technologies enabled us to do can now be done by the technologies themselves. The consequences for what, how, and even whether we learn are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20512771/preprint-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education