Stories that matter and stories that don’t: some thoughts on appropriate teaching roles for generative AIs

Well, this was definitely going to happen.

The system discussed in this Wired article is a bot (not available to the general public) that uses ChatGPT+ to take characters from the absurdly popular Bluey cartoon series and create personalized bedtime stories involving them for its creator’s children. This is something anyone could do – it doesn’t take a prompt-wizard or a specialized bot. You could easily make any reasonably proficient LLM incorporate your child’s interests, friends, family, and characteristics and churn out a decent enough story from them. With copyright-free material you could make the writing style and scenes very similar to the original. A little editorial control may be needed here and there but I think that, with a smart enough prompt, it would do a fairly good, average sort of a job, at least as readable as what an average human might produce, in a fraction of the time. I find this hugely problematic, though, and not for the reasons given in the article, although there are certainly legal and ethical concerns, especially around copyright and privacy, as well as the potential for generating dubious, disturbing, or otherwise poor content.
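To be concrete about how little machinery this takes: a sketch like the following is essentially all such a “bot” amounts to. Every name, interest, and phrase here is a made-up example, and the actual call to an LLM is deliberately left out, since any capable chat model would do.

```python
# A minimal sketch of a personalized bedtime-story prompt builder.
# All names and details below are hypothetical examples; the prompt
# string would simply be sent to whichever LLM the parent prefers.

def build_story_prompt(child_name, age, interests, characters):
    """Assemble a single prompt asking an LLM for a personalized story."""
    return (
        f"Write a gentle five-minute bedtime story for {child_name}, "
        f"aged {age}. Weave in their interests ({', '.join(interests)}) "
        f"and include the characters {', '.join(characters)}. "
        "Keep the vocabulary simple and end on a calm, reassuring note."
    )

prompt = build_story_prompt(
    child_name="Robin",  # hypothetical child
    age=5,
    interests=["dinosaurs", "the beach"],
    characters=["a friendly dragon"],
)
print(prompt)
```

That one string, plus a generic chat endpoint, is the whole trick: the personalization is nothing more than filling slots in a template.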

Why stories matter

The thing that bothers me most about this is not the quality of the stories but the quality of the relationship between the author and the reader (or listener).  Stories are the most human of artifacts, the ways that we create and express meaning, no matter how banal. They act as hooks that bind us together, whether invented by a parent or shared across whole cultures. They are a big part of how we learn and establish our relationships with the world and with one another. They are glimpses into how another person thinks and feels: they teach us what it means to be human, in all its rich diversity. They reflect the best and the worst of us, and they teach us about what matters.

My children were in part formed by the stories I made up or read to them 30 or more years ago, and it matters that none were made by machines. The language that I used, the ways that I wove in people and things that were meaningful to them, the attitudes I expressed, the love that went into them, all mattered.  I wish I’d recorded one or two, or jotted down the plots of at least some of the very many Lemmie the Suicidal Lemming stories that were a particular favourite. These were not as dark as they sound – Lemmie was a cheerful creature who just happened to be prone to putting himself in life-threatening situations, usually as a result of following others. Now that they have children of their own, both my kids have deliciously dark but fundamentally compassionate senses of humour and a fierce independence that I’d like to think may, in small part, be a result of such tales.

The books I (or, as they grew, we, and then they) chose probably mattered more. Some had been read to me by my own parents and at least a couple were read to them by their own parents. Like my children, I learned to read very young, largely because my imagination was fired by those stories, and fired by how much they mattered to my parents and siblings. As much as the people around me, the people who wrote and inhabited the books I listened to and later read made me who I am, and taught me much of what I still know today – not just facts to recall in a pub quiz but ways of thinking and understanding the world, and not just because of the values they shared but because of my responses to them, which increasingly challenged those values. Unlike AI-generated tales, these were shared cultural artifacts, read by vast numbers of people, creating a shared cultural context, values, and meanings that helped to sustain and unite the society I lived in. You may not have read many of the same books I read as a middle-class boy growing up in 1960s Britain but, even if you are not of my generation or cultural background, you might have read (or seen video adaptations of) one or more children’s works by A.A. Milne, Enid Blyton, C.S. Lewis, J.R.R. Tolkien, Hans Christian Andersen, Charles Dickens, Lewis Carroll, Kenneth Grahame, Rev. W. Awdry, T.S. Eliot, the Brothers Grimm, Norton Juster, Edward Lear, Hugh Lofting, Dr. Seuss, and so on. That matters, and it matters that I can still name them. These were real authors with attitudes, beliefs, ideas, and styles unlike any other. They were products and producers of the times and places they lived in. Many of their attitudes and values are, looking back, troublesome, and that was true even then. So many racist and sexist stereotypes and assumptions, so many false beliefs, so many values and attitudes that had no place in the 1960s, let alone now.
And that was good, because it introduced me to a diversity of ways of being and thinking, and allowed me to compare them with my own values and those of other authors, and it prepared me for changes to come because I had noticed the differences between their context and mine, and questioned the reasons.

With careful prompting, generative AIs are already capable of producing work of similar quality and originality to fan fiction or corporate franchise output around the characters and themes of these and many other creative works, and maybe there is a place for that. It couldn’t be much worse than (say) the welter of appallingly sickly, anodyne, Americanized, cookie-cutter, committee-written Thomas the Tank Engine stories that my grandchildren get to watch and read, which bear as little resemblance to Rev. W. Awdry’s sublimely stuffy Railway Stories as Star Wars does. It would soften the sting when kids reach the end of a much-loved series, perhaps. And, while it is a novelty, a personalized story might be very appealing, though there is something rather distasteful about making a child feel special with the unconscious output of a machine to which nothing matters. But this is not just about value to individuals, living with the histories and habits we have acquired in pre-AI times. This is something that is happening at a ubiquitous and massive scale, everywhere. When this is no longer a novelty but the norm, it will change us, and change our societies, in ways that make me shiver. I fear that mass-individualization will in fact be mass-blandification: a myriad of pale shadows that neither challenge nor offend, that shut down rather than open up debate, that reinforce norms that never change and are never challenged (because who else will have read them?), that look back rather than forward, that teach us average ways of thinking, that learn what we like and enclose us in our own private filter bubbles, keeping us from evolving, that only surprise us when they go wrong. This is in the nature of generative AIs because all they have to learn from is our own deliberate outputs and, increasingly, the outputs of prior generative AIs, not from any kind of lived experience. They are averaging mirrors whose warped distortions can convince us they are true reflections.
Introducing AI-generated stories to very young children, at scale, seems to me to be an awful gamble with very high stakes for their futures. We are performing uncontrolled experiments with stuff that forms minds, values, attitudes, expectations, and meanings that these kids will carry with them for the rest of their lives, and there is at least some reason to suspect that the harm may be greater than the good, both on an individual and a societal level. At the very least, there is a need for a large amount of editorial control, but how many parents of young children have the time or the energy for that?

That said…

Generating, not consuming output

I do see great value in working with and supporting the kids in creating the prompts for those stories themselves. While the technology is moving too fast for these evanescent skills to be describable as generative AI literacies, the techniques they learn and discoveries they make while doing so may help them to understand the strengths and limitations of the tools as they continue to develop, and the outputs will matter more because they contributed to creating them. Plus, it is great fun as a way to learn. My nearly 7-year-old grandchild, with the help of their father, has enjoyed and learned a lot from creating images with DALL-E, for instance, and has been doing so long enough to see massive improvements in its capabilities, so has learned some great meta-lessons about the nature of technological evolution too. This has not stopped them from developing their own artistic skills, including with the help of iPads and AI-assisted drawing tools, which offer excellent points of comparison and affordances to reflect on the differences. It has given them critical insight into the nature of the output and the processes that led to it, and it has challenged them to bend the machine to do what they want it to do. This kind of mindful use of the tools as complementary partners, rather than consumption of their products, makes sense to me.

I think the lessons carry forward to adult learning, too. I have huge misgivings about giving generative AIs a didactic role, for the same reasons that having them tell stories to children worries me. However, they can be great teachers for those who make use of them to create output, rather than being targets of the output they have created. For instance, I have been really enjoying using ChatGPT+ to help me write an Elgg plugin over the past few weeks, intended to deal with a couple of show-stopping bugs in an upgrade to the Landing that I had been struggling with for about 3 years, on and (mostly) off. I had come to see the problems as intractable, especially as a fair number of far smarter Elgg developers than I had looked at them and failed to see where the problems lay. ChatGPT+ let me try out a lot more ideas than even a large team of developers would have been able to come up with alone, and it took care of some of the mundane repetitive work that made the process slow. Though none of it was bad, little of its code was particularly good: it made up stuff, omitted stuff, and did things inefficiently. It was really good, though, at putting in explanatory comments and documenting what it was doing. This was great, because the things I had to do to fix the flaws taught me a lot more than I would have learned had they been perfect solutions. Nearly always, it was good enough and well-documented enough to set me on the right path, but the ways it failed drove me to look at source documentation, query the underlying database (now knowing what to look for), follow conversations on GitHub, and examine human-created plugins, from which I learned a lot more and got further inspiration about what to ask the LLM to do next.
Because it made different mistakes each time, it helped me to slowly develop a clearer model of how it should really have happened, so I got better and better at solving the problems myself, meanwhile learning a whole raft of useful tricks from the code that worked and at least as much from figuring out why it didn’t. It was very iterative: each attempt sparked ideas for the next attempt. It gave me just enough scaffolding to help me do what I could not do alone. About halfway through I discovered the cause of the problem – a single changed word in the 150,000+ lines of code in the core engine that was intended to better suit the new notification system, but that resulted in the existing 20m+ notification messages in the system failing to display correctly. This gave me ideas for some better prompts, the results of which taught me more. As a result, I am now a better Elgg coder than I was when I began, and I have a solution to a problem that has held up vital improvements to an ailing site used by more than 16,000 people for many years (though there are still a few hurdles to overcome before it reaches the production site).

Filling the right gaps

The final solution actually uses no code from ChatGPT+ at all, but it would not have been possible to get to that point without it. The skills it provided were different from and complementary to my own, and I think that is the critical point. To play an effective teaching role, a teacher has to leave the right kind of gaps for the learner to fill. If they are too large or too small, the learner learns little or nothing. The to and fro between me and the machine, and the ease with which I could try out different ideas, eventually led to those gaps being just the right size so that, instead of being an overwhelming problem, it became an achievable challenge. And that is the story that matters here.

The same is true of the stories that inspire: they leave the right sized gaps for the reader or listener to fill with their own imaginations while providing sufficient scaffolding to guide them, surprise them, or support them on the journey. We are participants in the stories, not passive recipients of them, much as I was a participant in the development of the Elgg plugin and, similarly, we learn through that participation. But there is a crucial difference. While I was learning the mechanical skills of coding from this process (as well as independently developing the soft skills to use them well), the listener to or reader of a story is learning the social, cultural, and emotional skills of being human (as well as, potentially, absorbing a few hard facts and the skills of telling their own stories). A story can be seen as a kind of machine in its own right: one that is designed to make us think and feel in ways that matter to the author. And that, in a nutshell, is why a story produced by a generative AI is such a problematic idea for the reader, but the use of a generative AI to help produce that story can be such a good idea for the writer.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21680600/stories-that-matter-and-stories-that-dont-some-thoughts-on-appropriate-teaching-roles-for-generative-ais

Published in Digital – The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education

A month or two ago I shared a “warts-and-all” preprint of this paper on the risks of educational uses of generative AIs. The revised, open-access published version, The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education is now available in the Journal Digital.

The process has been a little fraught. Two reviewers really liked the paper and suggested minimal but worthwhile changes. One quite liked it but had a few reasonable suggestions for improvements that mostly helped to make the paper better. The fourth, though, was bothersome in many ways, and clearly wanted me to write a completely different paper altogether. Despite this, I did most of what they asked, even though some of the changes, in my opinion, made the paper a bit worse. However, I drew the line at the point that they demanded (without giving any reason) that I should refer to 8 very mediocre, forgettable, cookie-cutter computer science papers which, on closer inspection, had all clearly been written by the reviewer or their team. The big problem I had with this was not so much the poor quality of the papers, nor even the blatant nepotism/self-promotion of the demand, but the fact that none were in any conceivable way relevant to mine, apart from being about AI: they were about algorithm-tweaking, mostly in the context of traffic movements in cities. It was as ridiculous as a reviewer of a work on Elizabethan literature requiring the author to refer to papers on slightly more efficient manufacturing processes for staples. Though it is normal and acceptable for reviewers to suggest reference to their own papers when it would clearly lead to improvements, this was an utterly shameless abuse of power on a scale and of a kind that I have never seen before. I politely refused, making it clear that I was on to their game but not directly calling them out on it.

In retrospect, I slightly regret not calling them out. For a grizzled old researcher like me who could probably find another publisher without too much hassle, it doesn’t matter much if I upset a reviewer enough to make them reject my paper. However, for early-career researchers stuck in the publish-or-perish cycle, it would be very much harder to say no. This kind of behaviour is harmful for the author, the publisher, the reader, and the collective intelligence of the human race. The fact that the reviewer was so desperate to get a few more citations for their own team with so little regard for quality or relevance seems to me to be a poor reflection on them and their institution but, more so, a damning indictment of a broken system of academic publishing, and of the reward systems driving academic promotion and recognition. I do blame the reviewer, but I understand the pressures they might have been under to do such a blatantly immoral thing.

As it happens, my paper has more than a thing or two to say about this kind of McNamara phenomenon, whereby the means used to measure success in a system become its purpose and warp it, because it is among the main reasons that generative AIs pose such a threat. It is easy to forget that the ways we establish goals and measure success in educational systems are no more than signals of a much more complex phenomenon with far more expansive goals that are concerned with helping humans to be, individually and in their cultures and societies, as much as with helping them to do particular things. Generative AIs are great at both generating and displaying those signals – better than most humans in many cases – but that’s all they do: the signals signify nothing. For well-defined tasks with well-defined goals they provide a lot of opportunities for cost-saving, quality improvement, and efficiency and, in many occupations, that can be really useful. If you want to quickly generate some high quality advertising copy, the intent of which is to sell a product, then it makes good sense to use a generative AI. Not so much in education, though, where it is too easy to forget that learning objectives, learning outcomes, grades, credentials, and so on are not the purposes of learning but just means for and signals of achieving them.

Though there are other big reasons to be very concerned about using generative AIs in education, some of which I explore in the paper, this particular problem is not so much with the AIs themselves as with the technological systems into which they are, piecemeal, inserted. It’s a problem with thinking locally, not globally; of focusing on one part of the technology assembly without acknowledging its role in the whole. Generative AIs could, right now and with little assistance, perform almost every measurable task in an educational system from (for students) producing essays and exam answers, to (for teachers) writing activities and assignments, or acting as personal tutors. They could do so better than most people. If that is all that matters to us then we might as well remove the teachers and the students from the system because, quite frankly, they only get in the way. This absurd outcome is more or less exactly the end game that will occur, though, if we don’t rethink (or double down on existing rethinking of) how education should work and what it is for, beyond the signals that we usually use to evaluate success or intent. Just thinking of ways to use generative AIs to improve our teaching is well-meaning, but it risks destroying the woods by focusing on the trees. We really need to step back a bit and think of why we bother in the first place.

For more on this, and for my tentative partial solutions to these and other related problems, do read the paper!

Abstract and citation

This paper analyzes the ways that the widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. Methodologically, the paper applies a theoretical model and grounded argument to present a case that GAIs are different in kind from all previous technologies. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing these soft and hard techniques in humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain; the very things that technologies enabled us to do can now be done by the technologies themselves. Because they replace things that learners have to do in order to learn and that teachers must do in order to teach, the consequences for what, how, and even whether learning occurs are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs. 
Its distinctive contributions include a novel means of understanding the distinctive differences between GAIs and all other technologies, a characterization of the nature of generative AIs as collectives (forms of collective intelligence), reasons to avoid the use of GAIs to replace teachers, and a theoretically grounded framework to guide adoption of generative AIs in education.

Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21104429/published-in-digital-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

10 minute chats on Generative AI – a great series, now including an interview with me

This is a great series of brief interviews between Tim Fawns and an assortment of educators and researchers from across the world on the subject of generative AI and its impact on learning and teaching.

The latest (tenth in the series) is with me.

Tim asked us all to come up with 3 key statements beforehand that he used to structure the interviews. I only realized that I had to do this on the day of the interview so mine are not very well thought-through, but there follows a summary of very roughly what I would have said about each if my wits were sharper. The reality was, of course, not quite like this. I meandered around a few other ideas and we ran out of time, but I think this captures the gist of what I actually wanted to convey:

Key statement 1: Most academics are afraid of AIs being used by students to cheat. I am afraid of AIs being used by teachers to cheat.

For much the same reasons that many of us balk at students using, say, ChatGPT to write part or all of their essays or code, I think we should be concerned when teachers use it to replace or supplement their teaching, whether it be for writing course outlines, assessing student work, or acting as intelligent tutors (to name but a few common uses).  The main thing that bothers me is that human teachers (including other learners, authors, and many more) do not simply help learners to achieve specified learning outcomes. In the process, they model ways of thinking, values, attitudes, feelings, and a host of other hard-to-measure tacit and implicit phenomena that relate to ways of being, ways of interacting, ways of responding, and ways of connecting with others. There can be huge value in seeing the world through another’s eyes, of interacting with them, adapting your responses, seeing how they adapt to yours, and so on. This is a critical part of how we learn the soft stuff, the ways of doing things, the meaning, the social value, the connections with our own motivations, and so on. In short, education is as much about being a human being, living in human communities, as it is about learning facts and skills. Even when we are not interacting but, say, simply reading a book, we are learning not just the contents but the ways the contents are presented, the quirks, the passions, the ways the authors think of their readers, their implicit beliefs, and so on.

While a generative AI can mimic this pretty well, it is by nature a kind of average, a blurry reconstruction mashed up from countless examples of the work of real humans. It is human-like, not human. It can mimic a wide assortment of nearly-humans without identity, without purpose, without persistence, without skin in the game. As things currently stand (though this will change) it is also likely to be pretty bland – good enough, but not great.

It might be argued that this is better than nothing at all, or that it augments rather than replaces human teachers, or it helps with relatively mundane chores, or it provides personalized support and efficiencies in learning hard skills, or it allows teachers to focus on those human aspects, or even that using a generative AI is a good way of learning in itself. Right now and in the near future, this may be true because we are in a system on the verge of disruption, not yet in the thick of it, and we come to it with all our existing skills and structures intact. My concern is what happens as it scales and becomes ubiquitous; as the bean-counting focus on efficiencies that relate solely to measurable outcomes increasingly crowds out the time spent with other humans; as the generative AIs feed on one another, becoming more and more divorced from their human originals; as the skills of teaching that are replaced by AIs atrophy in the next generation; as time we spend with one another is replaced with time spent with not-quite-human simulacra; as the AIs themselves become more and more a part of our cognitive apparatus in both what is learned and how we learn it. There are Monkeys’ Paws all the way down the line: for everything that might be improved, there are at least as many things that can and will get worse.

Key statement 2: We and our technologies are inherently intertwingled so it makes no more sense to exclude AIs from the classroom than it would to exclude, say, books or writing. The big questions are about what we need to keep.

Our cognition is fundamentally intertwingled with the technologies that we use, both physical and cognitive, and those technologies are intertwingled with one another, and that’s how our collective intelligence emerges. For all the vital human aspects mentioned above, a significant part of the educational process is concerned with building cognitive gadgets that enable us to participate in the technologies of our cultures, from poetry and long division to power stations and web design. Through that participation our cognition is highly distributed, and our intelligence is fundamentally collective. Now that generative AIs are part of that, it would be crazy to exclude them from classrooms or from their use in assessments. It does, however, raise more than a few questions about what cognitive activities we still need to keep for ourselves.

Technologies expand or augment what we can do unaided. Writing, say, allows us (among other things) to extend our memories. This creates many adjacent possibles, including sharing them with others, and allowing us to construct more complex ideas using scaffolding that would be very difficult to construct on our own because our memories are not that great.

Central to the nature of writing is that, as with most technologies, we don’t just use it but we participate in its enactment, performing part of the orchestration ourselves (for instance, we choose what words and ideas we write – the soft stuff), but also being part of its orchestration (e.g. we must typically spell words and use grammar sufficiently uniformly that others can understand them – the hard stuff).

In the past, we used to do nearly all of that writing by hand. Handwriting was a hard skill that had to be learned well enough that others could read what we had written, a process that typically required years of training and practice, demanding mastery of a wide range of technical proficiencies from spelling and punctuation to manual dexterity and the ability to sharpen a quill/fill a fountain pen/insert a cartridge, etc. To an increasingly large extent we have now offloaded many of those hard skills, first to typewriters and now to computers. While some of the soft aspects of handwriting have been lost – the cognitive processes that affect how we write and how we think, the expressiveness of the never-perfect ways we write letters on a page, etc. – this was a sensible thing to do. From a functional perspective, text produced by a computer is far more consistent, far more readable, far more adaptable, far more reusable, and far more easily communicated. Why should we devote so much effort and time to learning to be part of a machine when a machine can do that part for us, and do it better?

Something that can free us from having to act as an inflexible machine seems, by and large, like a good thing. If we don’t have to do it ourselves then we can spend more time and effort on what we do, how we do it, the soft stuff, the creative stuff, the problem-solving stuff, and so on. It allows us to be more capable, to reach further, to communicate more clearly. There are some really big issues relating to the ways that the constraints of handwriting (such as the relative difficulty of making corrections, the physicality of the movements, and the ways our brains are changed by it) result in different ways of thinking, some of which may be very valuable. But, as Postman wrote, all technologies are Faustian bargains involving losses and harms as well as gains and benefits. A technology that thrives is usually (at least in the short term) one in which the gains are perceived to outweigh the losses. And, even when largely replaced, old technologies seldom if ever die, so it is usually possible to retrieve what is lost, at least until the skills atrophy, components are no longer made, or they are designed to die (old printers with chip-protected cartridges that are no longer made, for instance).

What is fundamentally different about generative AIs, however, is that they allow us to offload exactly the soft, creative, problem solving aspects of our cognition, that technologies normally support and expand, to a machine. They provide extremely good pastiches of human thought and creativity that can act well enough to be considered as drop-in replacements. In many cases, they can do so a lot better – from the point of view of someone seeing only the outputs – than an average human. An AI image generator can draw a great deal better than me, for instance. But, given that these machines are now part of our extended, intertwingled minds, what is left for us? What parts of our minds should they or will they replace? How can we use them without losing the capacity to do at least some of the things they do better or as well as us? What happens if we lack those cognitive gadgets we never installed in our minds because AIs did it for us? This is not the same as, say, not knowing how to make a bow and arrow or write in cuneiform. Even when atrophied, such skills can be recovered. This is the stuff that we learn the other stuff for. It is especially important in the field of education which, traditionally at least, has been deeply concerned with cultivating the hard skills largely if not solely so that we can use them creatively, socially and productively once they are learned. If the machines are doing that for us, what is our role? This is not (yet) Kurzweil’s singularity, the moment when machines exceed our own intelligence and start to develop on their own, but it is the (drawn-out, fragmented) moment that machines have become capable of participating in soft, creative technologies on at least equal footing to humans. That matters. This leads to my final key statement.

Key statement 3: AIs create countless new adjacent possible empty niches. They can augment what we can do, but we need to go full-on Amish when deciding whether they should replace what we already do.

Every new creation in the world opens up new and inherently unprestatable adjacent possible empty niches for further creation, not just in how it can be used as part of new assemblies but in how it connects with those that already exist. It’s the exponential dynamic ratchet underlying natural evolution as much as technology, and it is what results in the complexity of the universe. The rapid acceleration in use and complexity of generative AIs – itself enabled by the adjacent possibles of the already highly disruptive Internet – that we have seen over the past couple of years has resulted in a positive explosion of new adjacent possibles, in turn spawning others, and so on, at a hitherto unprecedented scale and speed.
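To get a feel for this dynamic, one existing formalization of the adjacent possible is the TAP ("theory of the adjacent possible") recurrence developed by Kauffman and colleagues, in which, at each step, a small fraction of the possible combinations of existing items becomes new items. The following Python sketch is purely illustrative – the parameter values are arbitrary assumptions of mine, not anything from this post or the TAP literature:

```python
from math import comb

def tap_growth(m0=10, alpha=0.1, steps=15, max_i=4):
    """Simplified TAP recurrence: M(t+1) = M(t) + sum over i of
    alpha**i * C(M(t), i), truncated at combinations of size max_i.
    Each new item enlarges the pool of possible future combinations,
    so growth feeds back on itself."""
    m = float(m0)
    history = [m]
    for _ in range(steps):
        # a tiny fraction (alpha**i) of each size-i combination of
        # existing items turns into a genuinely new item
        m += sum(alpha**i * comb(int(m), i) for i in range(2, max_i + 1))
        history.append(m)
    return history

history = tap_growth()
# early increments are tiny; later ones are orders of magnitude larger
```

Run with these (assumed) parameters, the increments stay small for many steps and then take off – growth that is slow for a long time and then faster than exponential, which is roughly the pattern of explosive, self-amplifying possibility described above.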

This is exactly what we should expect in an exponentially growing system. It makes it increasingly difficult to predict what will happen next, what skills, attitudes, and values we will need to deal with it, or how we will be affected by it. As the number of possible scenarios increases at the same exponential rate, and the time between major changes gets ever shorter, patterns of thinking, ways of doing things, skills we need, and the very structures of our societies must change in unpredictable ways, too. Occupations, including in education, are already being massively disrupted, for better and for worse. Deeply embedded systems, from assessment for credentials to the mass media, are suddenly and catastrophically breaking. Legislation, regulations, resistance from groups of affected individuals, and other checks and balances may slightly alter the rate of change, but likely not enough to matter. Education serves both a stabilizing and a generative role in society, but educators are at least as unprepared and at least as disrupted as anyone else. We don’t – in fact we cannot – know what kind of world we are preparing our students for, and the generative technologies that now form part of our cognition are changing faster than we can follow. Any AI literacies we develop will be obsolete in the blink of an eye. And, remember, generative AIs are not just replacing hard skills. They are replacing the soft ones, the things that we use our hard skills to accomplish.

This is why I believe we would do well to heed the example of the Amish, who (contrary to popular belief) are not opposed to modern technologies. Within their communities, they debate and discuss the merits and disadvantages of any available technology, considering the ways in which it might affect or conflict with their values, adopting only those agreed to be, on balance, good, and only doing so in ways that accord with those values. Different communities make different choices according to their contexts and needs. In order to do likewise, we have to have values in the first place. But what are the values that matter in education?

With a few exceptions (laws and regulations being the main ones) technologies do not determine how we will act but, through the ways they integrate with our shared cognition, existing technologies, and practices, they have a lot of momentum and, unchecked, generative AIs will inherit the values associated with what currently exists. In educational systems that are increasingly regulated by government mandates that focus on nothing but their economic contributions to industry, where success or failure is measured solely by proxy criteria like predetermined outcomes of learning and enrolments, where a millennium of path dependencies still embodies patterns of teacher control and indoctrination that worked for mediaeval monks and skillsets that suited the demands of factory owners during the industrial revolution, this will not end well. Now seems the time we most need to reassert and double down on the human, the social, the cultural, the societal, the personal, and the tacit value of our institutions. This is the time to talk about those values, locally and globally. This is the time to examine what matters, what we care about, what we must not lose, and why we must not lose it. Tomorrow it will be too late. I think this is a time of great risk but it is also a time of great opportunity, a chance to reflect on and examine the value and nature of education itself. Some of us have been wanting to have these conversations for decades.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20146256/10-minute-chats-on-generative-ai-a-great-series-now-including-an-interview-with-me

Research, Writing, and Creative Process in Open and Distance Education: Tales from the Field | Open Book Publishers

Research, Writing, and Creative Process in Open and Distance Education: Tales from the Field is a great new book about how researchers in the field of open, online, and distance education go about their writing, and about the advice they would offer to newcomers to the field. More than that, it is about the process of writing in general, containing stories, recommendations, methods, tricks, and principles that pretty much anyone who writes, from students to experienced authors, would find useful and interesting. It is published as an open book (with a very open CC-BY-NC licence) that is free to read or download, as well as to purchase in paper form.

OK, full disclosure, I am a bit biased. I have a chapter in it, and many of the rest are by friends and acquaintances. The editor and author of one of the chapters is Dianne Conrad, the foreword is by Terry Anderson, and the list of authors includes some of the most luminous, widely cited names in the field, with a wealth of experience and many thousands of publications between them. The full list includes David Starr-Glass, Pamela Ryan, Junhong Xiao, Jennifer Roberts, Aras Bozkurt, Catherine Cronin, Randy Garrison, Tony Bates, Mark Nichols, Marguerite Koole (with Michael Cottrell, Janet Okoko & Kristine Dreaver-Charles), and Paul Prinsloo.

Apart from being a really good idea that fills a really important gap in the market, what I love most about the book is the diversity of the chapters. There’s everything from practical advice on how to structure an effective paper, to meandering reflective streams of consciousness that read like poetry, to academic discussions of identity and culture. It contains a lot of great stories that present a rich variety of approaches and processes, offering far from uniform suggestions about how best to write or why it is worth doing in the first place. Though the contributors are all researchers in the field of open and distance learning, nearly all of us started out on very different career paths, so we come at it with a wide range of disciplinary, epistemological and stylistic frameworks. Dianne has done a great job of weaving all of these different perspectives together into a coherent tapestry, not just a simple collection of essays.

The diversity is also a direct result of the instructions Dianne sent with the original proposal, which provides a pretty good description of the general approach and content that you will find in the book:

I am asking colleagues, as researchers, scholars, teachers, and writers in our field (ODL), to reflect on and write about your research/writing process, including topics such as:

  *   Your background and training as a scholar

  *   Your scholarly interests

  *   Why you research/write

  *   How you research/write

  *   What philosophies guide your work?

  *   Conflicts?  Barriers?

  *   Mentors, opportunities

  *   Reflections, insights, sorrows

  *   Advice, takeaways

  *   Anything else you feel is relevant

The “personal stuff,” as listed above, should serve as jump-off points to scholarly issues; that is, this isn’t intended to be a memoir or even a full-on reflective. Use the opportunity to reflect on your own work as a lead-in/up to the scholarly issues you want to address/promote/explore.

The aim of the book is to inform hesitant scholars, new scholars, and fledgling/nervous writers of our time-tested processes; and to spread awareness of the behind-the-curtain work involved in publishing and “being heard.”

My own chapter (Chapter 3, On being written) starts with rather a lot of sailing metaphors that tack around the ways that writing participates in my cognition and connects us, moves back to the land with a slight clunk and some geeky practical advice about my approach to notetaking and the roles of the tools that I use for the purpose, thence saunters on to the value of academic blogging and how I feel about it, and finally to a conclusion that frames the rest in something akin to a broader theory of complexity and cognition. All of it draws heavily from themes and theories explored in my recently published (also open) book, How Education Works: Teaching, Technology, and Technique. For all the stretched metaphors, meandering sidetracks, and clunky continuity I’m quite pleased with how it came out.

Most of the other chapters are better structured and organized, and most have more direct advice on the process (from start to finish), but they all tell rich, personal, and enlightening stories that are fascinating to read, especially if you know the people writing them or are familiar with their work. However, while the context, framing, and some of the advice is specific to the field of open and distance learning, the vast majority of lessons and advice are about academic writing in general. Whatever field you identify with, if you ever have to write anything then there’s probably something in it for you.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/19868519/research-writing-and-creative-process-in-open-and-distance-education-tales-from-the-field-open-book-publishers

View of Speculative Futures on ChatGPT and Generative Artificial Intelligence (AI): A Collective Reflection from the Educational Landscape

This is a remarkable paper, published in the Asian Journal of Distance Education, written by 35 remarkable people from all over the world and me. It was led by the remarkable Aras Bozkurt, who pulled all 36 of us together and wrote much of it in the midst of personal tragedy and the aftermath of a devastating earthquake. The research methodology was fantastic: Aras got each of us to write two 500-word pieces of speculative fiction, presenting positive and negative futures for generative AI in education. The themes that emerged from them were then condensed in the conventional part of the paper, which we worked on together using Google Docs. It took less than 50 days from the initial invitation on January 22 to the publication of the paper. As Eamon Costello put it, “It felt like being in a flash mob of top scholars.” At 130 pages it is more of a book than a paper, but most of it consists of those stories/poems/plays, many of which are great stories in their own right. They make good bedtime reading.

Abstract

While ChatGPT has recently become very popular, AI has a long history and philosophy. This paper intends to explore the promises and pitfalls of the Generative Pre-trained Transformer (GPT) AI and potentially future technologies by adopting a speculative methodology. Speculative future narratives with a specific focus on educational contexts are provided in an attempt to identify emerging themes and discuss their implications for education in the 21st century. Affordances of (using) AI in Education (AIEd) and possible adverse effects are identified and discussed which emerge from the narratives. It is argued that now is the best of times to define human vs AI contribution to education because AI can accomplish more and more educational activities that used to be the prerogative of human educators. Therefore, it is imperative to rethink the respective roles of technology and human educators in education with a future-oriented mindset.

Citation

Bozkurt, A., Xiao, J., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S., Farrow, R., Bond, M., Nerantzi, C., Honeychurch, S., Bali, M., Dron, J., Mir, K., Stewart, B., Costello, E., Mason, J., Stracke, C. M., Romero-Hall, E., Koutropoulos, A., Toquero, C. M., Singh, L., Tlili, A., Lee, K., Nichols, M., Ossiannilsson, E., Brown, M., Irvine, V., Raffaghelli, J. E., Santos-Hermosa, G., Farrell, O., Adam, T., Thong, Y. L., Sani-Bozkurt, S., Sharma, R. C., Hrastinski, S., & Jandrić, P. (2023). Speculative futures on ChatGPT and generative artificial intelligence (AI): A collective reflection from the educational landscape. Asian Journal of Distance Education, 18(1), 53-130. https://doi.org/10.5281/zenodo.7636568

Originally posted at: https://landing.athabascau.ca/bookmarks/view/17699638/view-of-speculative-futures-on-chatgpt-and-generative-artificial-intelligence-ai-a-collective-reflection-from-the-educational-landscape

Technology, Teaching, and the Many Distances of Distance Learning | Journal of Open, Flexible and Distance Learning

I am pleased to announce my latest paper, published openly in the Journal of Open, Flexible and Distance Learning, which has long been one of my favourite distance and ed tech journals.

The paper starts with an abbreviated argument about the technological nature of education drawn from my forthcoming book, How Education Works, zooming in on the distributed teaching aspect of that, leading to a conclusion that the notion of “distance” as a measure of the relationship between a learner and their teacher/institution is not very useful when there might be countless teachers at countless distances involved.

I go on to explore a number of alternative ways we might conceptualize distance, some familiar, some less so, not so much because I think they are any better than (say) transactional distance, but to draw attention to the complexity, fuzziness, and fragility of the concept. However, I find some of them quite appealing: I am particularly pleased with the idea of inverting the various presences in the Community of Inquiry model (and extensions of it). Teaching, cognitive, and social (and emotional and agency) distances and presences essentially measure the same things in the same way, but the shift in perspective subtly changes the narratives we might build around them. I could probably write a paper on each kind of distance I provide, but each gets a paragraph or two because what it is all leading towards is an idea that I think has some more useful legs: technological distance.

I’m still developing this idea, and have just submitted another paper that tries to unpack it a bit more, so don’t expect something fully-formed just yet – I welcome discussion and debate on its value, meaning, and usefulness. Basically, technological distance is a measure of the gaps left between the technologies (including cognitive tools in learners’ own minds, what teachers orchestrate, textbooks, digital tools, etc, etc) that the learner has to fill in order to learn something. This is not just about the subject matter – it’s about the mill (how we learn) as well as the grist (what we learn). There are lots of ways to reduce that distance, many of which are good for learning, but some of which undermine it by effectively providing what Dave Cormier delightfully describes as autotune for knowledge. The technologies provide the knowledge so learners don’t have to engage with or connect it themselves. This is not always a bad thing – architects may not need drafting skills, for instance, if they are only ever going to use CAD, memorization of facts easily discovered might not always be essential, and we will most likely see ubiquitous generative AI as part of our toolset now and in the future – but choosing what to learn is one reason teachers (who/whatever they are) can be useful. Effective teaching is about making the right things soft so that the process itself teaches. However, as what needs to be soft is different for every person on the planet, we need to make learning (of ourselves or others) visible in order to know that. It’s not science – it’s technology. That means that invention, surprise, creativity, passion, and many other situated things matter.

My paper is nicely juxtaposed in the journal with one from Simon Paul Atkinson, which addresses definitions of “open”, “distance” and “flexible” that, funnily enough, was my first idea for a topic when I was invited to submit my paper. If you read both, I think you’ll see that Simon and I might see the issue quite differently, but his is a fine paper making some excellent points.

Abstract

The “distance” in “distance learning”, however it is defined, normally refers to a gap between a learner and their teacher(s), typically in a formal context. In this paper I take a slightly different view. The paper begins with an argument that teaching is fundamentally a technological process. It is, though, a vastly complex, massively distributed technology in which the most important parts are enacted idiosyncratically by vast numbers of people, both present and distant in time and space, who not only use technologies but also participate creatively in their enactment. Through the techniques we use we are co-participants in not just technologies but the learning of ourselves and others, and hence in the collective intelligence of those around us and, ultimately, that of our species. We are all teachers. There is therefore not one distance between learner and teacher in any act of deliberate learning – but many. I go on to speculate on alternative ways of understanding distance in terms of the physical, temporal, structural, agency, social, emotional, cognitive, cultural, pedagogical, and technological gaps that may exist between learners and their many teachers. And I conclude with some broad suggestions about ways to reduce these many distances.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/17293757/my-latest-paper-technology-teaching-and-the-many-distances-of-distance-learning-journal-of-open-flexible-and-distance-learning

Petition · Athabasca University – Oppose direct political interference in universities · Change.org

https://www.change.org/p/athabasca-university-oppose-direct-political-interference-in-universities

I, like many staff and students, have been deeply shaken and outraged by recent events at Athabasca University. This is a petition by me and Simon Buckingham Shum, of the University of Technology Sydney, Australia, to protest the blatant interference by the Albertan government in the affairs of AU over the past year, which culminated in the firing of its president, Professor Peter Scott, without reason or notice. Even prior to this, the actions of the Albertan government had been described by Glen Jones (Professor of Higher Education, University of Toronto) as “the most egregious political interference in a public university in Canada in more than 100 years”. This was an assault on our university, an assault on the very notion of a public university, and it sets a disturbing precedent that cannot stand unopposed.

We invite you to view this brief summary, and consider signing this petition to signal your concern. Please feel more than free to pass this on to anyone and everyone – it is an international petition that has already been signed by many, both within and beyond the AU community.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/17102318/petition-%C2%B7-athabasca-university-oppose-direct-political-interference-in-universities-%C2%B7-changeorg

Proceedings of The Open/Technology in Education, Society, and Scholarship Association Conference, 2022 (and call for proposals for this year’s conference, due January 31)

https://conference.otessa.org/index.php/conference/issue/view/3

These are the proceedings of OTESSA ’22. There’s a good mix of research/theory and practice papers, including one from me, Rory McGreal, Vive Kumar, and Jennifer Davies arising from our work on trying to use digital landmarks to make e-texts more memorable.

It was a great conference, held entirely online but at least as engaging and with as many opportunities for networking, personal interaction, and community building (including musical and dance sessions) as many that I’ve attended held in person. Kudos to the organizers.

This year’s conference will be held both in Toronto and online, from May 27-June 2. The in-person/blended part of the conference is from May 29-31, the rest is online. The deadline for proposals is January 31st, which is dauntingly close. However, only 250-500 words are needed for a research-oriented or practice-oriented proposal. If you wish to publish as well, you can submit a proceeding file (1000-2000 words – or media) now or at any later date. Here’s the link for submissions.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/16754483/proceedings-of-the-opentechnology-in-education-society-and-scholarship-association-conference-2022-and-call-for-proposals-for-this-years-conference-due-january-31

Hot off the press: Handbook of Open, Distance and Digital Education (open access)

https://link.springer.com/referencework/10.1007/978-981-19-2080-6

This might be the most important book in the field of open, distance, and digital education to be published this decade. Congratulations to Olaf Zawacki-Richter and Insung Jung, the editors, as well as to all the section editors, for assembling a truly remarkable compendium of pretty much everything anyone would need to know on the subject. It includes chapters written by a very high proportion of the most well-known and influential researchers and practitioners on the planet as well as a few lesser-known folk along for the ride like me (I have a couple of chapters, both cowritten with Terry Anderson, who is one of those top researchers). Athabasca University makes a pretty good showing in the list of authors and in works referenced. In keeping with the subject matter, it is published by Springer as an open access volume, but even the hardcover version is remarkably good value (US$60) for something of this size.

The book is divided into six broad sections (plus an introduction), each of which is a decent book in itself, covering the following topics:

  • History, Theory and Research,
  • Global Perspectives and Internationalization,
  • Organization, Leadership and Change,
  • Infrastructure, Quality Assurance and Support Systems,
  • Learners, Teachers, Media and Technology, and
  • Design, Delivery, and Assessment

There’s no way I’m likely to read all of its 1400+ pages in the near future, but there is so much in it from so many remarkable people that it is going to be a point of reference for me for years to come. I’m really going to enjoy dipping into this.

If you’re interested, the chapters that Terry and I wrote are on Pedagogical Paradigms in Open and Distance Education and Informal Learning in Digital Contexts. A special shoutout to Junhong Xiao for all his help with these.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/16584686/hot-off-the-press-handbook-of-open-distance-and-digital-education-open-access

Slides from my ICEEL 22 Keynote, November 20, 2022

ICEEL 22 keynote

Here are the slides (11.2MB PDF) from my opening keynote yesterday at the 6th International Conference on Education and E-Learning, held online, hosted this year in Japan. In it I discussed a few of the ideas from my forthcoming book, How Education Works: Teaching, Technology, and Technique, and some of their consequences.

Title: It ain’t what you do, it’s the way that you do it, that’s what gets results

Abstract: In an educational system, no teacher ever teaches alone. Students teach themselves and, more often than not, teach one another. Textbook authors and illustrators, designers of open educational resources, creators of curricula, and so on play obvious teaching roles. However, beyond those obvious teachers there are always many others, from legislators to software architects, from professional bodies to furniture manufacturers. All of these teachers matter, not just in what they do but in how they do it: the techniques matter at least as much as the tools and methods. The resulting complex collective teacher is deeply situated and, for any given learner, inherently unpredictable in its effects. In this talk I will provide a theoretical model to explain how these many teachers may work together or in opposition, how educational systems evolve, and the nature of learning technologies. Along the way I will use the model to explain why there is and can be no significant difference between outcomes for online and in-person teaching, why teaching to perceived learning styles is doomed to fail, why small group tutoring will always (on average) be better than classroom teaching, and why quantitative research methods have little value in educational research.