Stories that matter and stories that don’t: some thoughts on appropriate teaching roles for generative AIs

Well, this was definitely going to happen.

The system discussed in this Wired article is a bot (not available to the general public) that uses ChatGPT+ to create personalized bedtime stories for its creator’s children, featuring characters from the absurdly popular Bluey cartoon series. This is something anyone could do – it doesn’t take a prompt wizard or a specialized bot. You could easily make any reasonably proficient LLM incorporate your child’s interests, friends, family, and characteristics and churn out a decent enough story from them. With copyright-free material you could make the writing style and scenes very similar to the original. A little editorial control may be needed here and there but I think that, with a smart enough prompt, it would do a fairly good, average sort of job, at least as readable as what an average human might produce, in a fraction of the time. I find this hugely problematic, though, and not primarily for the reasons given in the article, though there are certainly some legal and ethical concerns, especially around copyright and privacy, as well as the potential for generating dubious, disturbing, or otherwise poor content.

Why stories matter

The thing that bothers me most about this is not the quality of the stories but the quality of the relationship between the author and the reader (or listener).  Stories are the most human of artifacts, the ways that we create and express meaning, no matter how banal. They act as hooks that bind us together, whether invented by a parent or shared across whole cultures. They are a big part of how we learn and establish our relationships with the world and with one another. They are glimpses into how another person thinks and feels: they teach us what it means to be human, in all its rich diversity. They reflect the best and the worst of us, and they teach us about what matters.

My children were in part formed by the stories I made up or read to them 30 or more years ago, and it matters that none were made by machines. The language that I used, the ways that I wove in people and things that were meaningful to them, the attitudes I expressed, the love that went into them, all mattered.  I wish I’d recorded one or two, or jotted down the plots of at least some of the very many Lemmie the Suicidal Lemming stories that were a particular favourite. These were not as dark as they sound – Lemmie was a cheerful creature who just happened to be prone to putting himself in life-threatening situations, usually as a result of following others. Now that they have children of their own, both my kids have deliciously dark but fundamentally compassionate senses of humour and a fierce independence that I’d like to think may, in small part, be a result of such tales.

The books I (or, as they grew, we, and then they) chose probably mattered more. Some had been read to me by my own parents and at least a couple were read to them by their own parents. Like my children, I learned to read very young, largely because my imagination was fired by those stories, and fired by how much they mattered to my parents and siblings. As much as the people around me, the people who wrote and inhabited the books I listened to and later read made me who I am, and taught me much of what I still know today – not just facts to recall in a pub quiz but ways of thinking and understanding the world, and not just because of the values they shared but because of my responses to them, which increasingly challenged those values. Unlike AI-generated tales, these were shared cultural artifacts, read by vast numbers of people, creating a shared cultural context, values, and meanings that helped to sustain and unite the society I lived in. You may not have read many of the same books I read as a middle-class boy growing up in 1960s Britain but, even if you are not of my generation or cultural background, you might have read (or seen video adaptations of) one or more children’s works by A.A. Milne, Enid Blyton, C.S. Lewis, J.R.R. Tolkien, Hans Christian Andersen, Charles Dickens, Lewis Carroll, Kenneth Grahame, Rev. W. Awdry, T.S. Eliot, the Brothers Grimm, Norton Juster, Edward Lear, Hugh Lofting, Dr. Seuss, and so on. That matters, and it matters that I can still name them. These were real authors with attitudes, beliefs, ideas, and styles unlike any other. They were products and producers of the times and places they lived in. Many of their attitudes and values are, looking back, troublesome, and that was true even then. So many racist and sexist stereotypes and assumptions, so many false beliefs, so many values and attitudes that had no place in the 1960s, let alone now. And that was good, because it introduced me to a diversity of ways of being and thinking, and allowed me to compare them with my own values and those of other authors, and it prepared me for changes to come because I had noticed the differences between their context and mine, and questioned the reasons.

With careful prompting, generative AIs are already capable of producing work of similar quality and originality to fan fiction or corporate franchise output around the characters and themes of these and many other creative works, and maybe there is a place for that. It couldn’t be much worse than (say) the welter of appallingly sickly, anodyne, Americanized, cookie-cutter, committee-written Thomas the Tank Engine stories that my grandchildren get to watch and read, which bear as little resemblance to Rev. W. Awdry’s sublimely stuffy Railway Stories as Star Wars does. It would soften the sting when kids reach the end of a much-loved series, perhaps. And, while it is a novelty, a personalized story might be very appealing, albeit that there is something rather distasteful about making a child feel special with the unconscious output of a machine to which nothing matters. But this is not just about value to individuals, living with the histories and habits we have acquired in pre-AI times. This is something that is happening at a ubiquitous and massive scale, everywhere. When this is no longer a novelty but the norm, it will change us, and change our societies, in ways that make me shiver. I fear that mass-individualization will in fact be mass-blandification: a myriad of pale shadows that neither challenge nor offend, that shut down rather than open up debate, that reinforce norms that never change and are never challenged (because who else will have read them?), that look back rather than forward, that teach us average ways of thinking, that learn what we like and enclose us in our own private filter bubbles, keeping us from evolving, that only surprise us when they go wrong. This is in the nature of generative AIs, because all they have to learn from is our own deliberate outputs and, increasingly, the outputs of prior generative AIs, not from any kind of lived experience. They are averaging mirrors whose warped distortions can convince us they are true reflections. Introducing AI-generated stories to very young children, at scale, seems to me to be an awful gamble with very high stakes for their futures. We are performing uncontrolled experiments with stuff that forms minds, values, attitudes, expectations, and meanings that these kids will carry with them for the rest of their lives, and there is at least some reason to suspect that the harm may be greater than the good, both on an individual and a societal level. At the very least, there is a need for a large amount of editorial control, but how many parents of young children have the time or the energy for that?

That said…

Generating, not consuming output

I do see great value in working with and supporting the kids in creating the prompts for those stories themselves. While the technology is moving too fast for these evanescent skills to be describable as generative AI literacies, the techniques they learn and the discoveries they make while doing so may help them to understand the strengths and limitations of the tools as they continue to develop, and the outputs will matter more because they contributed to creating them. Plus, it is great fun as a way to learn. My nearly 7-year-old grandchild, with the help of their father, has enjoyed and learned a lot from creating images with DALL-E, for instance, and has been doing so long enough to see massive improvements in its capabilities, so has learned some great meta-lessons about the nature of technological evolution too. This has not stopped them from developing their own artistic skills, including with the help of iPads and AI-assisted drawing tools, which offer excellent points of comparison and affordances to reflect on the differences. It has given them critical insight into the nature of the output and the processes that led to it, and it has challenged them to bend the machine to do what they want it to do. This kind of mindful use of the tools as complementary partners, rather than consumption of their products, makes sense to me.

I think the lessons carry forward to adult learning, too. I have huge misgivings about giving generative AIs a didactic role, for the same reasons that having them tell stories to children worries me. However, they can be great teachers for those who make use of them to create output, rather than being targets of the output they have created. For instance, I have been really enjoying using ChatGPT+ to help me write an Elgg plugin over the past few weeks, intended to deal with a couple of show-stopping bugs in an upgrade to the Landing that I had been struggling with for about 3 years, on and (mostly) off. I had come to see the problems as intractable, especially as a fair number of far smarter Elgg developers than I had looked at them and failed to see where the problems lay. ChatGPT+ let me try out far more ideas than even a large team of developers could have come up with alone, and it took care of some of the mundane repetitive work that made the process slow. Though none of it was bad, little of its code was particularly good: it made up stuff, omitted stuff, and did things inefficiently. It was really good, though, at putting in explanatory comments and documenting what it was doing. This was great, because the things I had to do to fix the flaws taught me a lot more than I would have learned had they been perfect solutions. Nearly always, it was good enough and well-documented enough to set me on the right path, but the ways it failed drove me to look at source documentation, query the underlying database (now knowing what to look for), follow conversations on GitHub, and examine human-created plugins, from which I learned a lot more and got further inspiration about what to ask the LLM to do next. Because it made different mistakes each time, it helped me to slowly develop a clearer model of how it should really have happened, so I got better and better at solving the problems myself, meanwhile learning a whole raft of useful tricks from the code that worked and at least as much from figuring out why it didn’t. It was very iterative: each attempt sparked ideas for the next. It gave me just enough scaffolding to help me do what I could not do alone. About halfway through, I discovered the cause of the problem – a single changed word in the 150,000+ lines of code in the core engine, intended to better suit the new notification system, but resulting in the existing 20m+ notification messages in the system failing to display correctly. This gave me ideas for some better prompts, the results of which taught me more. As a result, I am now a better Elgg coder than I was when I began, and I have a solution to a problem that has for many years held up vital improvements to an ailing site used by more than 16,000 people (though there are still a few hurdles to overcome before it reaches the production site).

Filling the right gaps

The final solution actually uses no code from ChatGPT+ at all, but it would not have been possible to get to that point without it. The skills it provided were different to and complementary to my own, and I think that is the critical point. To play an effective teaching role, a teacher has to leave the right kind of gaps for the learner to fill. If they are too large or too small, the learner learns little or nothing. The to and fro between me and the machine, and the ease with which I could try out different ideas, eventually led to those gaps being just the right size so that, instead of being an overwhelming problem, it became an achievable challenge. And that is the story that matters here.

The same is true of the stories that inspire: they leave the right sized gaps for the reader or listener to fill with their own imaginations while providing sufficient scaffolding to guide them, surprise them, or support them on the journey. We are participants in the stories, not passive recipients of them, much as I was a participant in the development of the Elgg plugin and, similarly, we learn through that participation. But there is a crucial difference. While I was learning the mechanical skills of coding from this process (as well as independently developing the soft skills to use them well), the listener to or reader of a story is learning the social, cultural, and emotional skills of being human (as well as, potentially, absorbing a few hard facts and the skills of telling their own stories). A story can be seen as a kind of machine in its own right: one that is designed to make us think and feel in ways that matter to the author. And that, in a nutshell, is why a story produced by a generative AI is such a problematic idea for the reader, but the use of a generative AI to help produce that story can be such a good idea for the writer.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21680600/stories-that-matter-and-stories-that-dont-some-thoughts-on-appropriate-teaching-roles-for-generative-ais

▶ How Education Works, the audio book: now with beats

My book has been set to music!

Many thanks to Terry Greene for converting How Education Works into the second in his inspired series of podcasts, EZ Learning – Audio Books with Beats. There’s a total of 15 episodes that can be listened to online, subscribed to with your preferred podcast app, or downloaded for later listening, read by a computer-generated voice and accompanied by some cool, soothing beats.

Terry chose a deep North American voice for the reader and Eaters In Coffeeshops Mix 1 by Eaters to accompany my book. I reckon it works really well. It’s bizarre, at first – the soothing robotic voice introduces weird pauses, mispronunciations, and curious emphases, and there are occasional voice parts in the music that can be slightly distracting – but you soon get used to it if you relax into the rhythm, and it leads to the odd serendipitous emphasis that enhances rather than detracts from the text. Oddly, in some ways it almost feels more human as a result. Though it can be a bit disconcerting at times and there’s a fair chance of being lulled to sleep by the gentle rhythm, I have a hunch that the addition of music might make it easier to remember passages from it, for reasons discussed in a paper I wrote with Rory McGreal, Vive Kumar, and Jennifer Davies a year or so ago.

I have been slowly and painfully working on a manually performed audiobook of How Education Works, but it is taking much longer than expected thanks to living on the flight path of a surprising number of float planes, being in a city built on a rain forest with a noisy gutter outside my window, having two very vocal cats, and so on – not to mention not having a lot of free time to work on it – so I am very pleased that Terry has done this. I won’t stop working on the human-read version – I think this fills a different and very complementary niche – but it’s great to have something to point people towards when they ask for an audio version.

The first season of Audio Books with Beats, appearing in the feed after the podcasts for my book chapters, was another AU Press book, Terry Anderson’s Theory and Practice of Online Learning, which is also well worth a listen – those chapters follow directly from mine in the list of episodes. I hope and expect there will be more seasons to come so, if you are reading this some time after it was posted, you may need to scroll down through other podcasts until you reach the How Education Works episodes. In case they are hard to find, here’s a list of direct links to the episodes.

Acknowledgements, Prologue, Introduction

Chapter 1: A Handful of Anecdotes About Elephants

Chapter 2: A Handful of Observations About Elephants

Part 1: All About Technology

Chapter 3: Organizing Stuff to Do Stuff

Chapter 4: How Technologies Work

Chapter 5: Participation and Technique

Part II: Education as a Technological System

Chapter 6: A Co-Participation Model of Teaching

Chapter 7: Theories of Teaching

Chapter 8: Technique, Expertise, and Literacy

Part III: Applying the Co-Participation Model

Chapter 9: Revealing Elephants

Chapter 10: How Education Works

Epilogue

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20936998/%E2%96%B6-how-education-works-the-audio-book-now-with-beats

Preprint – The human nature of generative AIs and the technological nature of humanity: implications for education

Here is a preprint of a paper I just submitted to MDPI’s Digital journal that applies the co-participation model that underpins How Education Works (and a number of my papers over the last few years) to generative AIs (GAIs). I don’t know whether it will be accepted and, even if it is, it is very likely that some changes will be required. This is a warts-and-all raw first submission. It’s fairly long (around 10,000 words).

The central observation around which the paper revolves is that, for the first time in the history of technology, recent generations of GAIs automate (or at least appear to automate) the soft technique that has, until now, been the sole domain of humans. Every technology we have ever created, be it physically instantiated, cognitive, organizational, structural, or conceptual, has left all of the soft part of the orchestration to human beings.

The fact that GAIs replicate the soft stuff is a matter for some concern when they start to play a role in education, mainly because:

  • the skills they replace may atrophy or never be learned in the first place. This is not even slightly like replacing the hard skills of handwriting or arithmetic: we are talking about skills like creativity, problem-solving, critical inquiry, design, and so on. We’re talking about the stuff that GAIs are trained with.
  • the AIs themselves are an amalgam, an embodiment of our collective intelligence, not actual people. You can spin up any kind of persona you like and discard it just as easily. Much of the crucially important hidden/tacit curriculum of education is concerned with relationships, identity, ways of thinking, ways of being, ways of working and playing with others. It’s about learning to be human in a human society. It is therefore quite problematic to delegate how we learn to be human to a machine with (literally and figuratively) no skin in the game, trained on a bunch of signals signifying nothing but more signals.

On the other hand, to not use them in educational systems would be as stupid as to not use writing. These technologies are now parts of our extended cognition, intertwingled with our collective intelligence as much as any other technology, so of course they must be integrated into our educational systems. The big questions are not about whether we should embrace them but how, and which of the soft skills they might replace we wish to preserve or develop. I hope that we will value real humans and their inventions more, rather than less, though I fear that, as long as we retain the main structural features of our education systems without significant adjustments to how they work, we will no longer care, and we may lose some of our capacity for caring.

I suggest a few ways we might avert some of the greatest risks by, for instance, treating them as partners/contractors/team members rather than tools, by avoiding methods of “personalization” that simply reinforce existing power imbalances and pedagogies designed for better indoctrination, by using them to help connect us and support human relationships, by doing what we can to reduce extrinsic drivers, by decoupling learning and credentials, and by doubling down on the social aspects of learning. There is also an undeniable explosion in adjacent possibles, leading to new skills to learn, new ways to be creative, and new possibilities for opening up education to more people. The potential paths we might take from now on are unprestatable and multifarious but, once we start down them, resulting path dependencies may lead us into great calamity at least as easily as they may expand our potential. We need to make wise decisions now, while we still have the wisdom to make them.

MDPI invited me to submit this article free of their normal article processing charge (APC). The fact that I accepted is therefore very much not an endorsement of APCs, though I respect MDPI’s willingness to accommodate those who find payment difficult, the good editorial services they provide, and the fact that all they publish is open. I was not previously familiar with the Digital journal itself. It has been publishing 4 articles a year since 2021, mostly offering a mix of reports on application designs and literature reviews. The quality seems good.

Abstract

This paper applies a theoretical model to analyze the ways that widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use, by explaining the nature of human participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing the soft and hard techniques of humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain: the very things that technologies enabled us to do can now be done by the technologies themselves. The consequences for what, how, and even whether we learn are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20512771/preprint-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

Research, Writing, and Creative Process in Open and Distance Education: Tales from the Field | Open Book Publishers

Research, Writing, and Creative Process in Open and Distance Education: Tales from the Field is a great new book about how researchers in the field of open, online, and distance education go about writing, and about their advice to newcomers to the field. More than that, it is about the process of writing in general, containing stories, recommendations, methods, tricks, and principles that pretty much anyone who writes, from students to experienced authors, would find useful and interesting. It is published as an open book (with a very open CC BY-NC licence) that is free to read or download as well as to purchase in paper form.

OK, full disclosure, I am a bit biased. I have a chapter in it, and many of the rest are by friends and acquaintances. The editor and author of one of the chapters is Dianne Conrad, the foreword is by Terry Anderson, and the list of authors includes some of the most luminous, widely cited names in the field, with a wealth of experience and many thousands of publications between them. The full list includes David Starr-Glass, Pamela Ryan, Junhong Xiao, Jennifer Roberts, Aras Bozkurt, Catherine Cronin, Randy Garrison, Tony Bates, Mark Nichols, Marguerite Koole (with Michael Cottrell, Janet Okoko & Kristine Dreaver-Charles), and Paul Prinsloo.

Apart from being a really good idea that fills a really important gap in the market, what I love most about the book is the diversity of the chapters. There’s everything from practical advice on how to structure an effective paper, to meandering reflective streams of consciousness that read like poetry, to academic discussions of identity and culture. It contains a lot of great stories that present a rich variety of approaches and processes, offering far from uniform suggestions about how best to write or why it is worth doing in the first place. Though the contributors are all researchers in the field of open and distance learning, nearly all of us started out on very different career paths, so we come at it with a wide range of disciplinary, epistemological and stylistic frameworks. Dianne has done a great job of weaving all of these different perspectives together into a coherent tapestry, not just a simple collection of essays.

The diversity is also a direct result of the instructions Dianne sent with the original proposal, which provides a pretty good description of the general approach and content that you will find in the book:

I am asking colleagues, as researchers, scholars, teachers, and writers in our field (ODL), to reflect on and write about your research/writing process, including topics such as:

  *   Your background and training as a scholar

  *   Your scholarly interests

  *   Why you research/write

  *   How you research/write

  *   What philosophies guide your work?

  *   Conflicts?  Barriers?

  *   Mentors, opportunities

  *   Reflections, insights, sorrows

  *   Advice, takeaways

  *   Anything else you feel is relevant

The “personal stuff,” as listed above, should serve as jump-off points to scholarly issues; that is, this isn’t intended to be a memoir or even a full-on reflective. Use the opportunity to reflect on your own work as a lead-in/up to the scholarly issues you want to address/promote/explore.

The aim of the book is to inform hesitant scholars, new scholars, and fledgling/nervous writers of our time-tested processes; and to spread awareness of the behind-the-curtain work involved in publishing and “being heard.”

My own chapter (Chapter 3, On being written) starts with rather a lot of sailing metaphors that tack around the ways that writing participates in my cognition and connects us, moves back to the land with a slight clunk and some geeky practical advice about my approach to notetaking and the roles of the tools that I use for the purpose, thence saunters on to the value of academic blogging and how I feel about it, and finally to a conclusion that frames the rest in something akin to a broader theory of complexity and cognition. All of it draws heavily from themes and theories explored in my recently published (also open) book, How Education Works: Teaching, Technology, and Technique. For all the stretched metaphors, meandering sidetracks, and clunky continuity I’m quite pleased with how it came out.

Most of the other chapters are better structured and organized, and most have more direct advice on the process (from start to finish), but they all tell rich, personal, and enlightening stories that are fascinating to read, especially if you know the people writing them or are familiar with their work. However, while the context, framing, and some of the advice are specific to the field of open and distance learning, the vast majority of the lessons and advice are about academic writing in general. Whatever field you identify with, if you ever have to write anything then there’s probably something in it for you.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/19868519/research-writing-and-creative-process-in-open-and-distance-education-tales-from-the-field-open-book-publishers

The artificial curriculum

“Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings” by Simone Grassini is a well-researched, concise but comprehensive overview of the state of play for generative AI (GAI) in education. It gives a very good overview of current uses, by faculty and students, and provides a thoughtful discussion of issues and concerns arising. It addresses technical, ethical, and pragmatic concerns across a broad spectrum. If you want a great summary of where we are now, with tons of research-informed suggestions as to what to do about it, this is a very worthwhile read.

However, underpinning much of the discussion is an implied (and I suspect unintentional) assumption that education is primarily concerned with achieving and measuring explicit specified outcomes. This is particularly obvious in the discussions of ways GAIs can “assist” with instruction. I have a problem with that.

There has been an increasing trend in recent decades towards the mechanization of education: modularizing rather than integrating, measuring what can be easily measured, creating efficiencies, focusing on an end goal of feeding industry, and so on. It has resulted in a classic case of the McNamara Fallacy, which starts with a laudable goal of measuring success, as much as we are able, and ends with that measure defining success, to the exclusion of anything we do not or cannot measure. Learning becomes the achievement of measured outcomes.

It is true that consistent, measurable, hard techniques must be learned to achieve almost anything in life, and that it takes sustained effort and study to achieve most of them, effort that educators can and should help with. Measurable learning outcomes and what we do with them matter. However, the more profound and, I believe, the more important ends of education, regardless of the subject, are concerned with ways of being in the world, with other humans. It is the tacit curriculum that ultimately matters more: how education affects the attitudes, the values, the ways we can adapt, how we can create, how we make connections, pursue our dreams, live fulfilling lives, engage with our fellow humans as parts of cultures and societies.

By definition, the tacit curriculum cannot be meaningfully expressed in learning outcomes or measured on a uniform scale. It can be expressed only obliquely, if it can be expressed at all, in words. It is largely emergent and relational, expressed in how we are, interacting with one another, not as measurable functions that describe what we can do. It is complex, situated, and idiosyncratic. It is about learning to be human, not achieving credentials.

Returning to the topic of AI: learning to be human from a blurry JPEG of the web, or autotune for knowledge, seems to me to be a very bad idea indeed, especially given that models will increasingly be trained on the output of earlier models.

The real difficulty that teachers face is not that students solve the problems set to them using large language models, but that in so doing they bypass the process, thus avoiding the tacit learning outcomes we cannot or choose not to measure. And the real difficulty that those students face is that, in delegating the teaching process to an AI, their teachers are bypassing the teaching process, thus failing to support the learning of those tacit outcomes or, at best, providing an averaged-out caricature of them. If we heedlessly continue along this path, it will wind up with machines teaching machines, with humans largely playing the roles of cogs and switches in them.

Some might argue that, if the machines do a good enough job of mimicry, then it really doesn’t matter that they happen to be statistical models with no feelings, no intentions, no connection, and no agency. I disagree. Just as it makes a difference whether a painting ascribed to Picasso is a fake or not, or whether a letter is faxed or delivered through the post, or whether this particular guitar was played by John Lennon, it matters that real humans are on each side of a learning transaction. It means something different for an artifact to have been created by another human, even if the form of the exchange, in words or whatever, is the same. Current large language models have flaws, confidently spout falsehoods, fail to remember previous exchanges, and so on, so they are easy targets for criticism. However, I think it will be even worse when AIs are “better” teachers. When they seem endlessly tireless, patient, respectful, and responsive; when the help they give is unerringly accurate, personal, and targeted; when they accurately draw on knowledge no one human could ever possess, they will not be modelling human behaviour. The best-case scenario is that they will not be teaching students how to be, they will just be teaching them how to do, and that human teachers will provide the necessary tacit curriculum to support the human side of learning. However, the two are inseparable, so that is not particularly likely. The worst scenarios are that they will be teaching students how to be machines, or how to be an average human (with significant biases introduced by their training), or both.

And, frankly, if AIs are doing such a good job of it then they are the ones who should be doing whatever it is that they are training students to do, not the students. This will most certainly happen: it already is (witness the current actors’ and screenwriters’ strikes). For all the disruption that results, it’s not necessarily a bad thing, because it increases the adjacent possible for everyone in so many ways. That’s why the illustration to this post was made to my instructions by Midjourney, not drawn by me. It does a much better job of it than I could do.

In a rational world we would not simply incorporate AI into teaching as we have always taught. It makes no more sense to let it replace teachers than it does to let it replace students. We really need to rethink what and why we are teaching in the first place. Unfortunately, such reinvention is rarely if ever how technology works. Technology evolves by assembly with and in the context of other technology, which is how come we have inherited mediaeval solutions to indoctrination as a fundamental mainstay of all modern education (there’s a lot more about such things in my book, How Education Works if you want to know more about that). The upshot will be that, as we integrate rather than reinvent, we will keep on doing what we have always done, with a few changes to topics, a few adjustments in how we assess, and a few “efficiencies”, but we will barely notice that everything has changed because students will still be achieving the same kinds of measured outcomes.

I am not much persuaded by most apocalyptic visions of the potential threat of AI. I don’t think that AI is particularly likely to lead to the world ending with a bang, though it is true that more powerful tools do make it more likely that evil people will wield them. Artificial General Intelligence, though, especially anything resembling consciousness, is very little closer today than it was 50 years ago and most attempts to achieve it are barking in the wrong forest, let alone up the wrong tree. The more likely and more troubling scenario is that, as it embraces GAIs but fails to change how everything is done, the world will end with a whimper, a blandification, a leisurely death like that of lobsters in water coming slowly to a boil. The sad thing is that, by then, with our continued focus on just those things we measure, we may not even notice it is happening. The sadder thing still is that, perhaps, it already is happening.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/19390937/the-artificial-curriculum

Look what just arrived on my doorstep! #howeducationworks from @au_press is now available in print and e-book formats

Photo of hard copies of How Education Works

Hard copies and e-book versions of How Education Works are now available, and they are starting to turn up in bookstores. The recommended retail price is CAD$40 but Amazon is selling the Kindle version for a bit less.

Here are a few outlets that are selling it (or order it from your local independent bookstore!):

AU Press (CA)

Barnes & Noble (US)

Blackwells (UK)

Amazon (CA)

Amazon (JP)

University of Chicago Press (US)

Indigo (CA)

Booktopia (AU)

For those wanting to try before they buy or who cannot afford/do not want the paper or e-book versions, you can read it for free online, or download a PDF of the whole book.

The publishers see this as mainly targeted at professional teachers and educational researchers, but those are far from the only audiences I had in mind as I was writing it. Apart from anything else, one of the central claims of the book is that literally everyone is a teacher.  But it’s as much a book about the nature of technology as it is about education, and as much about the nature of knowledge as it is about how that knowledge is acquired. If you’re interested in how we come to know stuff, how technologies work, or how to think about what makes us (individually and collectively) smart, there’s something in the book for you. It’s a work of philosophy as much as it is a book of practical advice, and it’s about a way of thinking and being at least as much as it is about the formal practice of education. That said, it certainly does contain some ideas and recommendations that do have practical value for educators and educational researchers. There’s just more to it than that.

I cannot begin to express how pleased I am that, after more than 10 years of intermittent work, I finally have the finished article in my hands. I hope you get a chance to read it, in whatever format works for you! I’ll end this post with a quote, that happens to be the final paragraph of the book…

“If this book has helped you, however slightly, to think about what you know and how you have come to know it a little differently, then it has been a successful learning technology. In fact, even if you hold to all of your previous beliefs and this book has challenged you to defend them, then it has worked just fine too. Even if you disagreed with or misunderstood everything that I said, and even if you disliked the way that I presented it, it might still have been an effective learning technology, even though the learning that I hoped for did not come about. But I am not the one who matters the most here. This is layer upon layer of technology, and in some sense, for some technology, it has done what that technology should do. The book has conveyed words that, even if not understood as I intended them to be, even if not accepted, even if rabidly disagreed with, have done something for your learning. You are a different person now from the person you were when you started reading this book because everything that we do changes us. I do not know how it has changed you, but your mind is not the same as it was before, and ultimately the collectives in which you participate will not be the same either. The technology of print production, a spoken word, a pattern of pixels on a screen, or dots on a braille reader has, I hope, enabled you, at least on occasion, to think, criticize, acknowledge, recognize, synthesize, and react in ways that might have some value in consolidating or extending or even changing what you already know. As a result of bits and bytes flowing over an ether from my fingertips to whatever this page might be to you, knowledge (however obscure or counter to my intentions) has been created in the world, and learning has happened. For all the complexities and issues that emerge from that simple fact, one thing is absolutely certain: this is good.”


A decade of unwriting: the life history of "How Education Works"

How Education Works book cover
About 10 years ago I submitted the first draft of a book called “How Learning Technologies Work” to AU Press. The title was a nod to David Byrne’s wonderful book, “How Music Works”, which is about much more than just music, just as mine was about much more than learning technologies.

Pulling together ideas I had been thinking about for a few years, the book had taken me only a few months to write, mostly at the tail end of my sabbatical. I was quite pleased with it. The internal reviewers were positive too, though they suggested a number of sensible revisions, including clarifying some confusing arguments and a bit of restructuring. Also, in the interests of marketing, they recommended a change to the title because, though it accurately described the book’s contents, I was not using “learning technologies” in its mainstream sense at all (for me, poetry, pedagogies, and prayer are as much technologies as pots, potentiometers, and practices), so it would appeal to only a small subset of its intended audience. They were also a bit concerned that it would be hard to find an audience for it even if it had a better title because it was at least as much a book about the nature of technology as it was a book about learning, so it would fall between two possible markets, potentially appealing to neither.

A few months later, I had written a new revision that addressed most of the reviewers’ recommendations and concerns, though it still lacked a good title. I could have submitted it then. However, in the process of disentangling those confusing arguments, I had realized that the soft/hard technology distinction on which much of the book rested was far less well-defined than I had imagined, and that some of the conclusions that I had drawn from it were just plain wrong. The more I thought about it, the less happy I felt.

And so began the first of a series of substantial rewrites. However, my teaching load was very high, and I had lots of other stuff to do, so progress was slow. I was still rewriting it when I unwisely became Chair of my department in 2016, which almost brought the whole project to a halt for another 3 years. Despite that, by the time my tenure as Chair ended, the book had grown to around double its original (not insubstantial) length, and the theory was starting to look coherent, though I had yet to make the final leap that made sense of it all.

By 2019, as I started another sabbatical, I had decided to split the book into two. I put the stuff that seemed useful for practitioners into a new book,  “Education: an owner’s manual”, leaving the explanatory and predictive theory in its own book, now grandiosely titled “How Education Works”, and worked on both simultaneously. Each grew to a few hundred pages.

Neither worked particularly well. It was really difficult to keep the theory out of the practical book, and the theoretical work was horribly dry without the stories and examples to make sense of it. The theory, though, at last made sense, albeit that I struggled (and failed) to give it a catchy name. The solution was infuriatingly obvious. In all my talks on the subject my catchphrase from the start had been “’tain’t what you do, it’s the way that you do it, that’s what gets results” (it’s the epigraph for the book), so it was always implicit that softness and hardness are not characteristics of all technologies, as such, nor even of their assemblies, but of the ways that we participate in their orchestration. Essentially, what matters is technique: the roles we play as parts of the orchestration or orchestrators of it. That’s where the magic happens.

But now I had two mediocre books that were going nowhere. Fearing I was about to wind up with two unfinished and/or unsellable books, about halfway through my sabbatical I brutally slashed over half the chapters from both, pasted the remains together, and spent much of the time I had left filling in the cracks in the resulting bricolage.

I finally submitted “How Education Works: Teaching, Technology, and Technique” in the closing hours of 2020, accompanied by a new proposal because, though it shared a theme and a few words with the original, it was a very different book.

Along the way I had written over a million words, only around a tenth of which made it into what I sent to AU Press. I had spent the vast majority of my authoring time unwriting rather than writing the book and, with each word I wrote or unwrote, the book had written me, as much as I had written it. The book is as much a part of my cognition as a product of it.

And now, at last, it can be part of yours.

30 months after it was submitted – I won’t go into the reasons other than to say it has been very frustrating – the book is finally available as a free PDF download or to read on the Web. If all goes to plan, the paper and e-book versions should arrive June 27th, 2023, and can be pre-ordered now.

It is still a book about technology at least as much as it is about education (very broadly defined), albeit that it is now firmly situated in the latter. It has to be both because among the central points I’m making are that we are part-technology and technology is part-us, that cognition is (in part) technology and technology is (in part) cognition, and that education is a fundamentally technological and thus fundamentally human activity. It’s all one complex, hugely distributed, recursive intertwingularity in which we and our technological creations are all co-participants in the cognition and learning of ourselves and one another.

During the 30 months AU Press has had the book I have noticed a thousand different ways it could be improved, and I don’t love all of the edits made to it along the way (by me and others), but I reckon it does what I want it to do, and 10 years is long enough.

It’s time to start another.

A few places you can buy the book

AU Press (CA)

Barnes & Noble (US)

Blackwells (UK)

Amazon (CA)

Amazon (JP)

University of Chicago Press (US)

Indigo (CA)

Booktopia (AU)

Technology, Teaching, and the Many Distances of Distance Learning | Journal of Open, Flexible and Distance Learning

I am pleased to announce my latest paper, published openly in the Journal of Open, Flexible and Distance Learning, which has long been one of my favourite distance and ed tech journals.

The paper starts with an abbreviated argument about the technological nature of education drawn from my forthcoming book, How Education Works, zooming in on the distributed teaching aspect of that, leading to a conclusion that the notion of “distance” as a measure of the relationship between a learner and their teacher/institution is not very useful when there might be countless teachers at countless distances involved.

I go on to explore a number of alternative ways we might conceptualize distance, some familiar, some less so, not so much because I think they are any better than (say) transactional distance, but to draw attention to the complexity, fuzziness, and fragility of the concept. However, I find some of them quite appealing: I am particularly pleased with the idea of inverting the various presences in the Community of Inquiry model (and extensions of it). Teaching, cognitive, and social (and emotional and agency) distances and presences essentially measure the same things in the same way, but the shift in perspective subtly changes the narratives we might build around them. I could probably write a paper on each kind of distance I describe, but each gets a paragraph or two, because what it is all leading towards is an idea that I think has some more useful legs: technological distance.

I’m still developing this idea, and have just submitted another paper that tries to unpack it a bit more, so don’t expect something fully formed just yet – I welcome discussion and debate on its value, meaning, and usefulness. Basically, technological distance is a measure of the gaps left between the technologies (including cognitive tools in learners’ own minds, what teachers orchestrate, textbooks, digital tools, etc.) that the learner has to fill in order to learn something. This is not just about the subject matter – it’s about the mill (how we learn) as well as the grist (what we learn). There are lots of ways to reduce that distance, many of which are good for learning, but some of which undermine it by effectively providing what Dave Cormier delightfully describes as autotune for knowledge. The technologies provide the knowledge so learners don’t have to engage with or connect it themselves. This is not always a bad thing – architects may not need drafting skills, for instance, if they are only ever going to use CAD; memorization of easily discovered facts might not always be essential; and we will most likely see ubiquitous generative AI as part of our toolset now and in the future – but choosing what to learn is one reason teachers (who/whatever they are) can be useful. Effective teaching is about making the right things soft so that the process itself teaches. However, as what needs to be soft is different for every person on the planet, we need to make learning (of ourselves or others) visible in order to know what that is. It’s not science – it’s technology. That means that invention, surprise, creativity, passion, and many other situated things matter.

My paper is nicely juxtaposed in the journal with one from Simon Paul Atkinson, which addresses definitions of “open”, “distance”, and “flexible” – funnily enough, my first idea for a topic when I was invited to submit my paper. If you read both, I think you’ll see that Simon and I might see the issue quite differently, but his is a fine paper making some excellent points.

Abstract

The “distance” in “distance learning”, however it is defined, normally refers to a gap between a learner and their teacher(s), typically in a formal context. In this paper I take a slightly different view. The paper begins with an argument that teaching is fundamentally a technological process. It is, though, a vastly complex, massively distributed technology in which the most important parts are enacted idiosyncratically by vast numbers of people, both present and distant in time and space, who not only use technologies but also participate creatively in their enactment. Through the techniques we use, we are co-participants in not just technologies but the learning of ourselves and others, and hence in the collective intelligence of those around us and, ultimately, that of our species. We are all teachers. There is therefore not one distance between learner and teacher in any act of deliberate learning, but many. I go on to speculate on alternative ways of understanding distance in terms of the physical, temporal, structural, agency, social, emotional, cognitive, cultural, pedagogical, and technological gaps that may exist between learners and their many teachers. And I conclude with some broad suggestions about ways to reduce these many distances.


Originally posted at: https://landing.athabascau.ca/bookmarks/view/17293757/my-latest-paper-technology-teaching-and-the-many-distances-of-distance-learning-journal-of-open-flexible-and-distance-learning

Petition · Athabasca University – Oppose direct political interference in universities · Change.org

https://www.change.org/p/athabasca-university-oppose-direct-political-interference-in-universities

I, like many staff and students, have been deeply shaken and outraged by recent events at Athabasca University. This is a petition by me and Simon Buckingham Shum, of the University of Technology Sydney, Australia, to protest the blatant interference by the Albertan government in the affairs of AU over the past year, which culminated in the firing of its president, Professor Peter Scott, without reason or notice. Even prior to this, the actions of the Albertan government had been described by Glen Jones (Professor of Higher Education, University of Toronto) as “the most egregious political interference in a public university in Canada in more than 100 years”. This was an assault on our university, an assault on the very notion of a public university, and it sets a disturbing precedent that cannot stand unopposed.

We invite you to view this brief summary, and consider signing this petition to signal your concern. Please feel more than free to pass this on to anyone and everyone – it is an international petition that has already been signed by many, both within and beyond the AU community.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/17102318/petition-%C2%B7-athabasca-university-oppose-direct-political-interference-in-universities-%C2%B7-changeorg

Hot off the press: Handbook of Open, Distance and Digital Education (open access)

https://link.springer.com/referencework/10.1007/978-981-19-2080-6

This might be the most important book in the field of open, distance, and digital education to be published this decade. Congratulations to Olaf Zawacki-Richter and Insung Jung, the editors, as well as to all the section editors, for assembling a truly remarkable compendium of pretty much everything anyone would need to know on the subject. It includes chapters written by a very high proportion of the most well-known and influential researchers and practitioners on the planet, as well as a few lesser-known folk along for the ride like me (I have a couple of chapters, both co-written with Terry Anderson, who is one of those top researchers). Athabasca University makes a pretty good showing in the list of authors and in works referenced. In keeping with the subject matter, it is published by Springer as an open access volume, but even the hardcover version is remarkably good value (US$60) for something of this size.

The book is divided into six broad sections (plus an introduction), each of which is a decent book in itself, covering the following topics:

  • History, Theory and Research,
  • Global Perspectives and Internationalization,
  • Organization, Leadership and Change,
  • Infrastructure, Quality Assurance and Support Systems,
  • Learners, Teachers, Media and Technology, and
  • Design, Delivery, and Assessment

There’s no way I’m likely to read all of its 1400+ pages in the near future, but there is so much in it from so many remarkable people that it is going to be a point of reference for me for years to come. I’m really going to enjoy dipping into this.

If you’re interested, the chapters that Terry and I wrote are on Pedagogical Paradigms in Open and Distance Education and Informal Learning in Digital Contexts. A special shoutout to Junhong Xiao for all his help with these.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/16584686/hot-off-the-press-handbook-of-open-distance-and-digital-education-open-access