Stories that matter and stories that don’t: some thoughts on appropriate teaching roles for generative AIs

Well, this was definitely going to happen.

The system discussed in this Wired article is a bot (not available to the general public) that uses ChatGPT+ to take characters from the absurdly popular Bluey cartoon series and create personalized bedtime stories involving them for its creator’s children. This is something anyone could do – it doesn’t take a prompt wizard or a specialized bot. You could easily make any reasonably proficient LLM incorporate your child’s interests, friends, family, and characteristics, and churn out a decent enough story from them. With copyright-free material you could make the writing style and scenes very similar to the original. A little editorial control might be needed here and there but I think that, with a smart enough prompt, it would do a fairly good, average sort of job, at least as readable as what an average human might produce, in a fraction of the time. I find this hugely problematic, though, and not for the reasons given in the article, although there are certainly legal and ethical concerns, especially around copyright and privacy, as well as the potential for generating dubious, disturbing, or otherwise poor content.
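To be clear about just how low the bar is, the following is a minimal sketch of the sort of thing such a bot might do, written in Python against the OpenAI API. The model name, the child’s details, and the prompt wording are all my own illustrative placeholders, not anything from the system described in the article, and any reasonably capable LLM and client library would do much the same job.

    # Minimal sketch of a personalized bedtime-story generator.
    # Assumes the openai package is installed and OPENAI_API_KEY is set;
    # the model name and all child details are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    child = {
        "name": "Alex",
        "age": 5,
        "interests": "dinosaurs and swimming",
        "friends": "Sam and Priya",
    }

    prompt = (
        f"Write a gentle five-minute bedtime story for {child['name']}, "
        f"aged {child['age']}, who loves {child['interests']} and whose "
        f"best friends are {child['friends']}. Use simple, warm language "
        "and end with everyone falling asleep."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any reasonably proficient model
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)

That really is all there is to it, which is rather the point: the barrier is not technical skill but judgement about whether it should be done at all.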

Why stories matter

The thing that bothers me most about this is not the quality of the stories but the quality of the relationship between the author and the reader (or listener).  Stories are the most human of artifacts, the ways that we create and express meaning, no matter how banal. They act as hooks that bind us together, whether invented by a parent or shared across whole cultures. They are a big part of how we learn and establish our relationships with the world and with one another. They are glimpses into how another person thinks and feels: they teach us what it means to be human, in all its rich diversity. They reflect the best and the worst of us, and they teach us about what matters.

My children were in part formed by the stories I made up or read to them 30 or more years ago, and it matters that none were made by machines. The language that I used, the ways that I wove in people and things that were meaningful to them, the attitudes I expressed, the love that went into them, all mattered.  I wish I’d recorded one or two, or jotted down the plots of at least some of the very many Lemmie the Suicidal Lemming stories that were a particular favourite. These were not as dark as they sound – Lemmie was a cheerful creature who just happened to be prone to putting himself in life-threatening situations, usually as a result of following others. Now that they have children of their own, both my kids have deliciously dark but fundamentally compassionate senses of humour and a fierce independence that I’d like to think may, in small part, be a result of such tales.

The books I (or, as they grew, we, and then they) chose probably mattered more. Some had been read to me by my own parents and at least a couple were read to them by their own parents. Like my children, I learned to read very young, largely because my imagination was fired by those stories, and fired by how much they mattered to my parents and siblings. As much as the people around me, the people who wrote and inhabited the books I listened to and later read made me who I am, and taught me much of what I still know today – not just facts to recall in a pub quiz but ways of thinking and understanding the world, and not just because of the values they shared but because of my responses to them, which increasingly challenged those values. Unlike AI-generated tales, these were shared cultural artifacts, read by vast numbers of people, creating a shared cultural context, values, and meanings that helped to sustain and unite the society I lived in. You may not have read many of the same books I read as a middle-class boy growing up in 1960s Britain but, even if you are not of my generation or cultural background, you might have read (or seen video adaptations of) one or more children’s works by A.A. Milne, Enid Blyton, C.S. Lewis, J.R.R. Tolkien, Hans Christian Andersen, Charles Dickens, Lewis Carroll, Kenneth Grahame, Rev. W. Awdry, T.S. Eliot, the Brothers Grimm, Norton Juster, Edward Lear, Hugh Lofting, Dr. Seuss, and so on. That matters, and it matters that I can still name them. These were real authors with attitudes, beliefs, ideas, and styles unlike any other. They were products and producers of the times and places they lived in. Many of their attitudes and values are, looking back, troublesome, and that was true even then. So many racist and sexist stereotypes and assumptions, so many false beliefs, so many values and attitudes that had no place in the 1960s, let alone now. And that was good, because it introduced me to a diversity of ways of being and thinking, and allowed me to compare them with my own values and those of other authors, and it prepared me for changes to come, because I had noticed the differences between their context and mine, and questioned the reasons.

With careful prompting, generative AIs are already capable of producing work of similar quality and originality to fan fiction or corporate franchise output around the characters and themes of these and many other creative works, and maybe there is a place for that. It couldn’t be much worse than (say) the welter of appallingly sickly, anodyne, Americanized, cookie-cutter, committee-written Thomas the Tank Engine stories that my grandchildren get to watch and read, which bear as little resemblance to Rev. W. Awdry’s sublimely stuffy Railway Series as Star Wars does. It would soften the sting when kids reach the end of a much-loved series, perhaps. And, while it is a novelty, a personalized story might be very appealing, though there is something rather distasteful about making a child feel special with the unconscious output of a machine to which nothing matters.

But this is not just about value to individuals living with the histories and habits we acquired in pre-AI times. This is happening at a ubiquitous and massive scale, everywhere. When it is no longer a novelty but the norm it will change us, and change our societies, in ways that make me shiver. I fear that mass-individualization will in fact be mass-blandification: a myriad of pale shadows that neither challenge nor offend, that shut down rather than open up debate, that reinforce norms that never change and are never challenged (because who else will have read them?), that look back rather than forward, that teach us average ways of thinking, that learn what we like and enclose us in our own private filter bubbles, keeping us from evolving, that only surprise us when they go wrong. This is in the nature of generative AIs, because all they have to learn from is our own deliberate outputs and, increasingly, the outputs of prior generative AIs, not any kind of lived experience. They are averaging mirrors whose warped distortions can convince us they are true reflections. Introducing AI-generated stories to very young children, at scale, seems to me an awful gamble with very high stakes for their futures. We are performing uncontrolled experiments with stuff that forms minds, values, attitudes, expectations, and meanings that these kids will carry with them for the rest of their lives, and there is at least some reason to suspect that the harm may be greater than the good, at both an individual and a societal level. At the very least, a large amount of editorial control is needed, but how many parents of young children have the time or the energy for that?

That said…

Generating, not consuming output

I do see great value in working with and supporting the kids in creating the prompts for those stories themselves. While the technology is moving too fast for these evanescent skills to be described as generative AI literacies, the techniques they learn and the discoveries they make while doing so may help them to understand the strengths and limitations of the tools as those tools continue to develop, and the outputs will matter more because they contributed to creating them. Plus, it is great fun as a way to learn. My nearly 7-year-old grandchild, with the help of their father, has enjoyed and learned a lot from creating images with DALL-E, for instance, and has been doing so long enough to see massive improvements in its capabilities, so has learned some great meta-lessons about the nature of technological evolution too. This has not stopped them from developing their own artistic skills, including with the help of iPads and AI-assisted drawing tools, which offer excellent points of comparison and affordances to reflect on the differences. It has given them critical insight into the nature of the output and the processes that led to it, and it has challenged them to bend the machine to do what they want it to do. This kind of mindful use of the tools as complementary partners, rather than consumption of their products, makes sense to me.

I think the lessons carry forward to adult learning, too. I have huge misgivings about giving generative AIs a didactic role, for the same reasons that having them tell stories to children worries me. However, they can be great teachers for those who use them to create output, rather than being targets of the output they have created. For instance, I have been really enjoying using ChatGPT+ to help me write an Elgg plugin over the past few weeks, intended to deal with a couple of show-stopping bugs in an upgrade to the Landing that I had been struggling with for about three years, on and (mostly) off. I had come to see the problems as intractable, especially as a fair number of far smarter Elgg developers than I had looked at them and failed to see where the problems lay. ChatGPT+ let me try out a lot more ideas than even a large team of developers would have been able to come up with alone, and it took care of some of the mundane, repetitive work that made the process slow.

Though none of it was bad, little of its code was particularly good: it made up stuff, omitted stuff, and did things inefficiently. It was really good, though, at putting in explanatory comments and documenting what it was doing. This was great, because the things I had to do to fix the flaws taught me a lot more than I would have learned had they been perfect solutions. Nearly always, it was good enough and well-documented enough to set me on the right path, but the ways it failed drove me to look at source documentation, query the underlying database (now knowing what to look for), follow conversations on GitHub, and examine human-created plugins, from which I learned a lot more and got further inspiration about what to ask the LLM to do next. Because it made different mistakes each time, it helped me to slowly develop a clearer model of how things should really work, so I got better and better at solving the problems myself, meanwhile learning a whole raft of useful tricks from the code that worked and at least as much from figuring out why it didn’t. It was very iterative: each attempt sparked ideas for the next. It gave me just enough scaffolding to help me do what I could not do alone.

About halfway through, I discovered the cause of the problem: a single changed word in the 150,000+ lines of code in the core engine, intended to better suit the new notification system, but which resulted in the existing 20m+ notification messages in the system failing to display correctly. This gave me ideas for some better prompts, the results of which taught me more. As a result, I am now a better Elgg coder than I was when I began, and I have a solution to a problem that has for many years held up vital improvements to an ailing site used by more than 16,000 people (though there are still a few hurdles to overcome before it reaches the production site).

Filling the right gaps

The final solution actually uses no code from ChatGPT+ at all, but it would not have been possible to get to that point without it. The skills it provided were different from, and complementary to, my own, and I think that is the critical point. To play an effective teaching role, a teacher has to leave the right kind of gaps for the learner to fill. If the gaps are too large or too small, the learner learns little or nothing. The to and fro between me and the machine, and the ease with which I could try out different ideas, eventually led to gaps that were just the right size so that, instead of being an overwhelming problem, it became an achievable challenge. And that is the story that matters here.

The same is true of the stories that inspire: they leave the right-sized gaps for readers or listeners to fill with their own imaginations, while providing sufficient scaffolding to guide them, surprise them, or support them on the journey. We are participants in the stories, not passive recipients of them, much as I was a participant in the development of the Elgg plugin, and, similarly, we learn through that participation. But there is a crucial difference. While I was learning the mechanical skills of coding from this process (as well as independently developing the soft skills to use them well), the listener to or reader of a story is learning the social, cultural, and emotional skills of being human (as well as, potentially, absorbing a few hard facts and the skills of telling their own stories). A story can be seen as a kind of machine in its own right: one that is designed to make us think and feel in ways that matter to the author. And that, in a nutshell, is why a story produced by a generative AI is such a problematic idea for the reader, but the use of a generative AI to help produce that story can be such a good idea for the writer.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21680600/stories-that-matter-and-stories-that-dont-some-thoughts-on-appropriate-teaching-roles-for-generative-ais

Technology, Teaching, and the Many Distances of Distance Learning | Journal of Open, Flexible and Distance Learning

I am pleased to announce my latest paper, published openly in the Journal of Open, Flexible and Distance Learning, which has long been one of my favourite distance and ed tech journals.

The paper starts with an abbreviated argument about the technological nature of education drawn from my forthcoming book, How Education Works, zooming in on the distributed teaching aspect of that, leading to a conclusion that the notion of “distance” as a measure of the relationship between a learner and their teacher/institution is not very useful when there might be countless teachers at countless distances involved.

I go on to explore a number of alternative ways we might conceptualize distance, some familiar, some less so, not so much because I think they are any better than (say) transactional distance, but to draw attention to the complexity, fuzziness, and fragility of the concept. However, I find some of them quite appealing: I am particularly pleased with the idea of inverting the various presences in the Community of Inquiry model (and extensions of it). Teaching, cognitive, and social (and emotional and agency) distances and presences essentially measure the same things in the same way, but the shift in perspective subtly changes the narratives we might build around them. I could probably write a paper on each kind of distance I describe, but each gets only a paragraph or two, because what it is all leading towards is an idea that I think has more useful legs: technological distance.

I’m still developing this idea, and have just submitted another paper that tries to unpack it a bit more, so don’t expect something fully formed just yet – I welcome discussion and debate on its value, meaning, and usefulness. Basically, technological distance is a measure of the gaps left between the technologies (including cognitive tools in learners’ own minds, what teachers orchestrate, textbooks, digital tools, etc.) that the learner has to fill in order to learn something. This is not just about the subject matter – it is about the mill (how we learn) as well as the grist (what we learn). There are lots of ways to reduce that distance, many of which are good for learning, but some of which undermine it by effectively providing what Dave Cormier delightfully describes as autotune for knowledge: the technologies provide the knowledge so that learners don’t have to engage with or connect it themselves. This is not always a bad thing – architects may not need drafting skills if they are only ever going to use CAD, memorization of easily discovered facts might not always be essential, and we will most likely see ubiquitous generative AI as part of our toolset now and in the future – but choosing what to learn is one reason teachers (who- or whatever they are) can be useful. Effective teaching is about making the right things soft so that the process itself teaches. However, as what needs to be soft is different for every person on the planet, we need to make learning (our own or others’) visible in order to know what that is. It’s not science – it’s technology. That means that invention, surprise, creativity, passion, and many other situated things matter.

My paper is nicely juxtaposed in the journal with one from Simon Paul Atkinson, which addresses definitions of “open”, “distance”, and “flexible” – funnily enough, the topic that was my first idea when I was invited to submit my paper. If you read both, I think you’ll see that Simon and I see the issue quite differently, but his is a fine paper making some excellent points.

Abstract

The “distance” in “distance learning”, however it is defined, normally refers to a gap between a learner and their teacher(s), typically in a formal context. In this paper I take a slightly different view. The paper begins with an argument that teaching is fundamentally a technological process. It is, though, a vastly complex, massively distributed technology in which the most important parts are enacted idiosyncratically by vast numbers of people, both present and distant in time and space, who not only use technologies but also participate creatively in their enactment. Through the techniques we use, we are co-participants in not just technologies but the learning of ourselves and others, and hence in the collective intelligence of those around us and, ultimately, that of our species. We are all teachers. There is therefore not one distance between learner and teacher in any act of deliberate learning, but many. I go on to speculate on alternative ways of understanding distance in terms of the physical, temporal, structural, agency, social, emotional, cognitive, cultural, pedagogical, and technological gaps that may exist between learners and their many teachers. And I conclude with some broad suggestions about ways to reduce these many distances.


Originally posted at: https://landing.athabascau.ca/bookmarks/view/17293757/my-latest-paper-technology-teaching-and-the-many-distances-of-distance-learning-journal-of-open-flexible-and-distance-learning

Petition · Athabasca University – Oppose direct political interference in universities · Change.org

https://www.change.org/p/athabasca-university-oppose-direct-political-interference-in-universities

I, like many staff and students, have been deeply shaken and outraged by recent events at Athabasca University. This is a petition by me and Simon Buckingham Shum, of the University of Technology Sydney, Australia, to protest the blatant interference by the Alberta government in the affairs of AU over the past year, which culminated in the firing of its president, Professor Peter Scott, without reason or notice. Even prior to this, the actions of the Alberta government had been described by Glen Jones (Professor of Higher Education, University of Toronto) as “the most egregious political interference in a public university in Canada in more than 100 years”. This was an assault on our university, an assault on the very notion of a public university, and it sets a disturbing precedent that cannot stand unopposed.

We invite you to view this brief summary, and consider signing this petition to signal your concern. Please feel more than free to pass this on to anyone and everyone – it is an international petition that has already been signed by many, both within and beyond the AU community.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/17102318/petition-%C2%B7-athabasca-university-oppose-direct-political-interference-in-universities-%C2%B7-changeorg

Hot off the press: Handbook of Open, Distance and Digital Education (open access)

https://link.springer.com/referencework/10.1007/978-981-19-2080-6

This might be the most important book in the field of open, distance, and digital education to be published this decade. Congratulations to Olaf Zawacki-Richter and Insung Jung, the editors, as well as to all the section editors, for assembling a truly remarkable compendium of pretty much everything anyone would need to know on the subject. It includes chapters written by a very high proportion of the most well-known and influential researchers and practitioners on the planet, as well as a few lesser-known folk along for the ride like me (I have a couple of chapters, both co-written with Terry Anderson, who is one of those top researchers). Athabasca University makes a pretty good showing in the list of authors and in the works referenced. In keeping with the subject matter, it is published by Springer as an open access volume, but even the hardcover version is remarkably good value (US$60) for something of this size.

The book is divided into six broad sections (plus an introduction), each of which is a decent book in itself, covering the following topics:

  • History, Theory and Research
  • Global Perspectives and Internationalization
  • Organization, Leadership and Change
  • Infrastructure, Quality Assurance and Support Systems
  • Learners, Teachers, Media and Technology
  • Design, Delivery, and Assessment

There’s no way I’m likely to read all of its 1400+ pages in the near future, but there is so much in it from so many remarkable people that it is going to be a point of reference for me for years to come. I’m really going to enjoy dipping into this.

If you’re interested, the chapters that Terry and I wrote are on Pedagogical Paradigms in Open and Distance Education and Informal Learning in Digital Contexts. A special shoutout to Junhong Xiao for all his help with these.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/16584686/hot-off-the-press-handbook-of-open-distance-and-digital-education-open-access