The importance of a good opening line

This post asks the question,

How does the order of questions in a test affect how well students do?

The answer is “significantly.”

The post points to a paywalled study that shows, fairly conclusively, that starting with simpler questions in a typical academic quiz (on average) improves the overall results and, in particular, the chances of getting to the end of a quiz at all. The study includes both an experimental field study using a low-stakes quiz and a large-scale correlational study using a PISA dataset. Some of the effect sizes are quite large: about a 50% increase in non-completions for the hard-to-easy condition compared with the easy-to-hard condition, and about a 25% increase in time on task for the easy-to-hard condition, suggesting students stick at it more when they have gained confidence earlier on. The increase in marks for the easy-to-hard condition compared with the hard-to-easy condition is more modest when non-completions are excluded, but enough to make the difference between a pass and a fail for many students.

I kind of knew this already but would not have expected it to make such a big difference. It is a good reminder that, of course, objective tests are not objective. A quiz is a kind of interactive story with a very definite beginning, middle, and end, and it makes a big difference which parts of the story happen when, especially the beginning. Quizzes are like all kinds of learning experience: scaffolding helps, confidence matters, and motivation is central. You can definitely put someone off reading a story if it has a bad first paragraph. Attitude makes all the difference in the world, which is one very good reason that such tests, and written exams in general, are so unfair and so weak at discriminating capability, and why I have always done unreasonably well in such things: I generally relish the challenge. The authors reckon that adaptive quizzes might be one answer, and would especially benefit weaker students by ramping up the difficulty slowly, but warn that they may make things worse for more competent students, who would experience the more difficult questions sooner. That resonates with my experience, too.

I don’t give marks for quizzes in any of my own courses and I allow students to try them as often as they wish but, even so, I have probably caused motivational harm by randomizing formative questions. I’m going to stop doing that in future. Designated teachers are never the sole authors of any educational story but, whenever they exert control, their contributions can certainly matter, at small scales and large. I wonder, how many people have had their whole lives changed for the worse by a bad opening line?

Source: It’s a question of order – 3-Star learning experiences


And now in Chinese: 在线学习环境:隐喻问题与系统改进 (roughly: online learning environments, problems of metaphor, and systemic improvement). And some thoughts on the value of printed texts.

Warm off the press, and with copious thanks and admiration to Junhong Xiao for the invitation to submit and the translation, here is my paper “The problematic metaphor of the environment in online learning” in Chinese, in the Journal of Open Learning. It is based on my OTESSA Journal paper, originally published as “On the Misappropriation of Spatial Metaphors in Online Learning” at the end of 2022 (a paper I am quite pleased with, though it has yet to receive any citations that I am aware of).

Many thanks, too, to Junhong for sending me the printed version that arrived today, smelling deliciously of ink. I hardly ever read anything longer than a shopping bill on paper any more but there is something rather special about paper that digital versions entirely lack. The particular beauty of a book or journal written in a language and script that I don’t even slightly understand is that, notwithstanding the ease with which I can translate it using my phone, it largely divorces the medium from the message. Even with translation tools my name is unrecognizable to me in this: Google Lens translates it as “Jon Delong”. Although I know it contains a translation of my own words, it is really just a thing: the signs it contains mean nothing to me, in and of themselves. And it is a thing that I like, much as I like the books on my bookshelf.

I am not alone in loving paper books, a fact that owners of physical copies of my most recent book (which can be read online for free but costs about $CAD40 on paper) have had the kindness to mention, e.g. here and here. There is something generational in this, perhaps. For those of us who grew up knowing no other reading medium than ink on paper, there is comfort in the familiar, and we have thousands (perhaps millions) of deeply associated memories in our muscles and brains connected with it, made more precious by the increasing rarity with which those memories are reinforced by actually reading that way. But I doubt that my grandchildren, at least, will wholly lack that connection. While they do enjoy and enthusiastically interact with text on screens, they have been exposed to printed books since before they were able to grasp them accurately, and have loved some of them as much as I did at the same ages.

It is tempting to think that our love of paper might simply be because we don’t have decent e-readers, but I think there is more to it than that. I have some great e-readers in many sizes and types, and I do prefer some of them to read from, for sure: backlighting when I need it, robustness, flexibility, the means to see it in any size or font that works for me, the simple and precise search, the shareable highlights, the lightness of (some) devices, the different ways I can hold them, and so on, make them far more accessible. But paper has its charms, too. Most obviously, something printed on paper is a thing to own whereas, on the whole, a digital copy tends to just be a licence to read, and ownership matters. I won’t be leaving my e-books to my children. The thingness really matters in other ways, too. Paper is something to handle, something to smell. Pages and book covers have textures – I can recognize some books I know well by touch alone. It affects many senses, and is more salient as a result. It takes up room in an environment so it’s a commitment, and so it has to matter, simply because it is there; a rivalrous object competing with other rivalrous objects for limited space. Paper comes in fixed sizes that may wear down but will never change: it thus keeps its shape in our memories, too. My wife has framed occasional pages from my previously translated work, elevating them to art works, decoupled from their original context, displayed with the same lofty reverence as pages from old atlases. Interestingly, she won’t do that if it is just a printed PDF: it has to come from a published paper journal, so the provenance matters. Paper has a history and a context of its own, beyond what it contains. And paper creates its own context, filled with physical signals and landmarks that make words relative to the medium, not abstractions that can be reflowed, translated into other languages, or converted into other media (notably speech).
The result is something that is far more memorable than a reflowable e-text. Over the years I’ve written a little about this here and there, and elsewhere, including a paper on the subject (ironically, a paper that is not available on paper, as it happens), describing an approach to making e-texts more memorable.

After reaching a slightly ridiculous peak in the mid-2000s, and largely as a result of a brutal culling that occurred when I came to Canada nearly 17 years ago, my paper book collection has now diminished to fit easily into a single, not particularly large, free-standing IKEA shelving unit. The survivors are mostly ones I might want to refer to or read again, and losing some of them would sadden me a great deal, but I would (perhaps) run into a burning building to save only a few, including, for instance:

  • A dictionary from 1936, bound in leather by my father and used in countless games of Scrabble and spelling disputes when I was a boy, and that was used by my whole family to look up words at one time or another.
  • My original hardback copy of The Phantom Tollbooth (I have a paperback copy for lending), that remains my favourite book of all time, that was first read to me by my father, and that I have read myself many times at many ages, including to my own children.
  • A boxed set of the complete Chronicles of Narnia, that I chose as my school art prize when I was 18 because the family copies had become threadbare (read and abused by me and my four siblings), and that I later read to my own children. How someone with very limited artistic skill came to win the school art prize is a story for another time.
  • A well-worn original hardback copy of Harold and the Purple Crayon (I have a paperback copy for lending) that my father once displayed for children in his school to read, with the admonition “This is Mr Dron’s book. Please handle with care” (it was not – it was mine).
  • A scribble-filled, bookmark-laden copy of Kevin Kelly’s Out of Control that strongly influenced my thinking when I was researching my PhD and that still inspires me today. I can remember exactly where I sat when I made some of the margin notes.
  • A disintegrating copy of Storyland, given to me by my godmother in 1963 and read to me and by me for many years thereafter. There is a double value to this one because we once had two copies of this in our home: the other belonged to my wife, and was also a huge influence on her at similar ages.

These books proudly wear their history and their relationships with me and my loved ones in all their creases, coffee stains, scuffs, and tattered pages. To a greater or lesser extent, the same is true of almost all of the other physical books I have kept. They sit there as a constant reminder of their presence – their physical presence, their emotional presence, their social presence, and their cognitive presence – flitting by in my peripheral vision many times a day, connecting me to thoughts and inspirations I had when I read them and, often, to people and places connected with them. None of this is true of my e-books. Nor is it quite the same for other objects of sentimental value, except perhaps (and for very similar reasons) the occasional sculpture or picture, or some musical instruments. Much as I am fond of (say) baby clothes worn by my kids or a battered teddy bear, they are little more than aides-mémoire for other times and other activities, whereas the books (and a few other objects) latently embody the experiences themselves. If I opened them again (and I sometimes do) it would not be the same experience, but it would enrich and connect with those that I already had.

I have hundreds of e-books that are available on many devices, one of which I carry with me at all times, as well as an Everand (formerly Scribd) account with a long history, not to mention a long and mostly lost history of library borrowing, and I have at least a dozen devices on which to read them, from a 4-inch e-ink reader to a 32-inch monitor and much in between, but my connection with those is far more limited and transient. It is still more limited for books that are locked to a certain duration through DRM (which is one reason they are the scum of the earth). When I look at my devices and open the various reading apps on them I do see a handful of book covers, usually those that I have most recently read, but that is too fleeting and volatile to have much value. And when I open them they don’t fall open on well-thumbed pages. The text is not tangibly connected with the object at all.

As well as smarter landmarks within them, better ways to make e-books more visible would help, which brings me to the real point of this post. For many years I have wanted to paper a wall or two with e-paper (preferably in colour) on which to display e-book covers, but the costs are still prohibitive. It would be fun if the covers would become battered with increasing use, showing the ones that really mattered, and maybe dust could settle on those that were never opened, though it would not have to be so skeuomorphic – fading would work, or glyphs. They could be ordered manually or by (say) reading date, title, author, or subject. Perhaps touching them or scanning a QR code could open them. I would love to get a research grant to do this but I don’t think asking for electronic wallpaper in my office would fly with most funding sources, even if I prettied it up with words like “autoethnography”, and I don’t have a strong enough case, nor can I think of a rigorous enough research methodology, to try it in a larger study with other people. Well. Maybe I will try some time. Until the costs of e-paper come down much further, it is not going to be a commercially viable product, either, though prices are now low enough that it might be possible to do it in a limited way with a poster-sized display for a (very) few thousand dollars. It could certainly be done with a large-screen TV for well under $1000, but I don’t think a power-hungry glowing screen would be at all the way to go: the value would not be enough to warrant the environmental harm or energy costs, and something that emitted light would be too distracting. I do have a big monitor on my desk, though, which is already glowing away, so a background showing e-book covers or spines would make things no worse. I could easily do this as a static image or slideshow, but I’d rather have something dynamic.
It shouldn’t be too hard to extract the metadata from my list of books, swipe the images from the Web or the e-book files, and show them as a backdrop (a screensaver would be trivial). It might even be worth extending this to papers and articles I have read. I already have Pocket open most of the time, displaying web pages that I have recently read or want to read (serving a similar purpose for short-term recollection), and that could be incorporated in this. I think it would be useful, and it would not be too much work to do it – most of the important development could be done in a day or two. If anyone has done this already or feels like coding it, do get in touch!

Slides from my SITE keynote, 2024: The Intertwingled Teacher

The Intertwingled Teacher 

UPDATE: the video of my talk is now available at https://www.youtube.com/watch?v=ji0jjifFXTs (slides and audio only) …

These are the slides from my opening keynote at SITE ’24 today, at Planet Hollywood in Las Vegas. The talk was based closely on some of the main ideas in How Education Works. I’d written an over-ambitious abstract promising answers to many questions and concerns, which I did just about cover, but far too broadly. For counterbalance, therefore, I tried to keep the focus on a single message – t’aint what you do, it’s the way that you do it (which is the epigraph for the book) – and, because it was Vegas, I felt that I had to do a show, so I ended the session with a short ukulele version of the song of that name. I had fun, and a few people tried to sing along. The keynote conversation that followed was most enjoyable – wonderful people with wonderful ideas, and the hour allotted to it gave us time to explore all of them.

Here is that bloated abstract:

Abstract: All of us are learning technologists, teaching others through the use of technologies, be they language, whiteboards, and pencils or computers, apps, and networks. We are all part of a vast, technology-mediated cognitive web in which a cast of millions – in formal education including teachers such as textbook authors, media producers, architects, software designers, system administrators, and, above all, learners themselves – co-participates in creating an endless, richly entwined tapestry of learning. This tapestry spreads far beyond formal acts of teaching, far back in time, and far into the future, weaving in and helping to form not just the learning of individuals but the collective intelligence of the whole human race. Everyone’s learning journey both differs from and is intertwingled with that of everyone else. Education is an overwhelmingly complex and unpredictable technological system in which coarse patterns and average effects can be found but in which, except in the most rigid, invariant, minor details, accurate individual predictions cannot be made. No learner is average, and outcomes are always greater than what is intended. The beat of a butterfly’s wing in Timbuktu can radically affect the experience of a learner in Toronto. A slight variation in tone of voice can make all the difference between a life-transforming learning experience and a lifelong aversion to a subject. Beautifully crafted, research-informed teaching methods can be completely ineffective, while poor teaching, or even the absence of it, can result in profoundly effective learning. For all our efforts to understand and control it, education as a technological process is far closer to art than to engineering. What we do is usually far less significant than the idiosyncratic way that we do it, and how much we care for the subject, our students, and our craft is often far more important than the pedagogical methods we use.
In this talk I will discuss what all of this implies for how we should teach, for how we understand teaching, and for how we research the massively intertwingled processes and tools of teaching. Along the way I will explain why there is no significant difference between measured outcomes of online or in-person learning, the futility of teaching to learning styles, the reason for the 2-sigma advantage of personal tuition, the surprising commonalities between behaviourist, cognitivist, and constructivist models of learning and teaching, the nature of literacies, and the failure of reductive research methods in education. It will be fun.

New article from Gerald Ardito and me – The emergence of autonomy in intertwingled learning environments: a model of teaching and learning

Here is a paper from the Asia-Pacific Journal of Teacher Education by my friend Gerald Ardito and me that presents a slightly different way of thinking about teaching and learning. We adopt a broadly complexivist stance that sees environments not as a backdrop to learning but as a rich network of dynamic, intertwingled relationships between the various parts (including parts played by people), mediated through technologies, enabling and enabled by autonomy. The model that we develop knits together a smorgasbord of theories and models, including Self-Determination Theory (SDT), Connectivism, an assortment of complexity theories, the extended version of Paulsen’s model of cooperative freedoms developed by me and Terry Anderson, Garrison & Baynton’s model of autonomy, and my own coparticipation theory, wrapping up with a bit of social network analysis of a couple of Gerald’s courses that puts it all into perspective. From Gerald’s initial draft the paper took years of very sporadic development and went through many iterations. It seemed to take forever, but we had fun writing it. Looking afresh at the finished article, I think the diagrams might have been clearer, we might have done more to join all the dots, and we might have expressed the ideas a bit less wordily, but I am mostly pleased with the way it turned out, and I am glad to see it finally published. The good bits are all Gerald’s, but I am personally most pleased with the consolidated model of autonomy visualized in figure 4, which connects my own and Terry Anderson’s cooperative freedoms, Garrison & Baynton’s model of autonomy, and SDT.

combining cooperative freedoms, autonomy, and SDT

Reference:

Gerald Ardito & Jon Dron (2024) The emergence of autonomy in intertwingled learning environments: a model of teaching and learning, Asia-Pacific Journal of Teacher Education, DOI: 10.1080/1359866X.2024.2325746

▶ I got air: interview with Terry Greene

Since 2018, Terry Greene has been producing a wonderful series of podcast interviews with open and online learning researchers and practitioners called Getting Air. Prompted by the publication of How Education Works (Terry is also responsible for the musical version of the book, so I think he likes it), this week’s episode features an interview with me.

I probably should have been better prepared. Terry asked some probing, well-informed, and sometimes disarming questions, most of which led to me rambling more than I might have done if I’d thought about them in advance. It was fun, though, drifting through a broad range of topics from the nature of technology to music to the perils of generative AI (of course).

I hope that Terry does call his PhD dissertation “Getting rid of instructional designers”.

Educational ends and means: McNamara’s Fallacy and the coming robot apocalypse (presentation for TAMK)

These are the slides that I used for my talk with a delightful group of educational leadership students from TAMK University of Applied Sciences in Tampere, Finland at (for me) a somewhat ungodly hour Wednesday night/Thursday morning after a long day. If you were in attendance, sorry for any bleariness on my part. If not, or if you just want to re-live the moment, here is the video of the session (thanks Mark!)

The brief that I was given was to talk about what generative AI means for education and, if you have been following any of my reflections on this topic then you’ll already have a pretty good idea of what kinds of issues I raised about that. My real agenda, though, was not so much to talk about generative AI as to reflect on the nature and roles of education and educational systems because, like all technologies, the technology that matters in any given situation is the enacted whole rather than any of its assembled parts. My concerns about uses of generative AI in education are not due to inherent issues with generative AIs (plentiful though those may be) but to inherent issues with educational systems that come to the fore when you mash the two together at a grand scale.

The crux of this argument is that, as long as we think of the central purposes of education as being the attainment of measurable learning outcomes or the achievement of credentials, especially if the focus is on training people for a hypothetical workplace, the long-term societal effects of inserting generative AIs into the teaching process are likely to be dystopian. That’s where Robert McNamara comes into the picture. The McNamara Fallacy is what happens when you pick an aspect of a system to measure, usually because it is easy, and then you use that measure to define success, choosing to ignore or to treat as irrelevant anything that cannot be measured. It gets its name from Robert McNamara, US Secretary of Defense during the Vietnam war, who famously measured who was winning by body count, which is probably among the main reasons that the US lost the war.

My concern is that measurable learning outcomes (still less the credentials that signify having achieved them) are not the ends that matter most. They are, rather, means to far more complex, situated, personal and social ends that lead to happy, safe, productive societies and richer lives for those within them. While it does play an important role in developing skills and knowledge, education is thus more fundamentally concerned with developing values, attitudes, ways of thinking, ways of seeing, ways of relating to others, ways of understanding and knowing what matters to ourselves and others, and finding how we fit into the social, cultural, technological, and physical worlds that we inhabit. These critical social, cultural, technological, and personal roles have always been implicit in our educational systems but, at least in in-person institutions, they have seldom needed to be made explicit because they are inherent in the structures and processes that have evolved over many centuries to meet this need. This is why naive attempts to simply replicate the in-person learning experience online usually fail: they replicate the intentional teaching activities but neglect to cater for the vast amounts of learning that occur simply due to being in a space with other people, and all that emerges as a result of that. It is for much the same reasons that simply inserting generative AI into existing educational structures and systems is so dangerous.

If we choose to measure the success or failure of an educational system by the extent to which learners achieve explicit learning outcomes and credentials, then the case for using generative AIs to teach is extremely compelling. Already, they are far more knowledgeable, far more patient, far more objective, far better able to adapt their teaching to support individual student learning, and far, far cheaper than human teachers. They will get better. Much better. As long as we focus only on the easily measurable outcomes and the extrinsic targets, simple economics combined with their measurably greater effectiveness means that generative AIs will increasingly replace teachers in the majority of teaching roles.  That would not be so bad – as Arthur C. Clarke observed, any teacher that can be replaced by a machine should be – were it not for all the other more important roles that education plays, and that it will continue to play, except that now we will be learning those ways of being human from things that are not human and that, in more or less subtle ways, do not behave like humans. If this occurs at scale – as it is bound to do – the consequences for future generations may not be great. And, for the most part, the AIs will be better able to achieve those learning outcomes themselves – what is distinctive about them is that they are, like us, tool users, not simply tools – so why bother teaching fallible, inconsistent, unreliable humans to achieve them? In fact, why bother with humans at all? There are, almost certainly, already large numbers of instances in which at least part of the teaching process is generated by an AI and where generative AIs are used by students to create work that is assessed by AIs.

It doesn’t have to be this way. We can choose to recognize the more important roles of our educational systems and redesign them accordingly, as many educational thinkers have been recommending for considerably more than a century. I provide a few thoughts on that in the last few slides that are far from revolutionary but that’s really the point: we don’t need much novel thinking about how to accommodate generative AI into our existing systems. We just need to make those systems work the way we have known they should work for a very long time.

Download the slides | Watch the video

Presentation – Generative AIs in Learning & Teaching: the Case Against

Here are the slides from my presentation at AU’s Lunch ‘n’ Learn session today. The presentation itself took 20 minutes and was followed by a wonderfully lively and thoughtful conversation for another 40 minutes, though it was only scheduled for half an hour. Thanks to all who attended for a very enjoyable discussion!

The arguments made in this were mostly derived from my recent paper on the subject (Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020) but, despite the title, my point was not to reject the use of generative AIs at all. The central message I was hoping to get across was a simpler and more important one: to encourage attendees to think about what education is for, and what we would like it to be. As the slides suggest, I believe that is only partially to do with the objectives and outcomes we set out to achieve, that it is nothing much at all to do with the products of the system such as grades and credentials, and that a focus on those mechanical aspects of the system often creates obstacles to achieving it. Beyond those easily measured things, education is about the values, beliefs, attitudes, relationships, and development of humans and their societies. It’s about ways of being, not just the capacity to do stuff. It’s about developing humans, not (just) developing skills. My hope is that the disruptions caused by generative AIs are encouraging us to think like the Amish, and to place greater value on the things we cannot measure. These are good conversations to have.

Published in Digital – The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education

A month or two ago I shared a “warts-and-all” preprint of this paper on the risks of educational uses of generative AIs. The revised, open-access published version, The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education is now available in the Journal Digital.

The process has been a little fraught. Two reviewers really liked the paper and suggested minimal but worthwhile changes. One quite liked it but had a few reasonable suggestions for improvements that mostly helped to make the paper better. The fourth, though, was bothersome in many ways, and clearly wanted me to write a completely different paper altogether. Despite this, I did most of what they asked, even though some of the changes, in my opinion, made the paper a bit worse. However, I drew the line at the point that they demanded (without giving any reason) that I should refer to 8 very mediocre, forgettable, cookie cutter computer science papers which, on closer inspection, had all clearly been written by the reviewer or their team. The big problem I had with this was not so much the poor quality of the papers, nor even the blatant nepotism/self-promotion of the demand, but the fact that none were in any conceivable way relevant to mine, apart from being about AI: they were about algorithm-tweaking, mostly in the context of traffic movements in cities.  It was as ridiculous as a reviewer of a work on Elizabethan literature requiring the author to refer to papers on slightly more efficient manufacturing processes for staples. Though it is normal and acceptable for reviewers to suggest reference to their own papers when it would clearly lead to improvements, this was an utterly shameless abuse of power of a scale and kind that I have never seen before. I politely refused, making it clear that I was on to their game but not directly calling them out on it.

In retrospect, I slightly regret not calling them out. For a grizzled old researcher like me who could probably find another publisher without too much hassle, it doesn’t matter much if I upset a reviewer enough to make them reject my paper. However, for early-career researchers stuck in the publish-or-perish cycle, it would be very much harder to say no. This kind of behaviour is harmful for the author, the publisher, the reader, and the collective intelligence of the human race. The fact that the reviewer was so desperate to get a few more citations for their own team with so little regard for quality or relevance seems to me to be a poor reflection on them and their institution but, more so, a damning indictment of a broken system of academic publishing, and of the reward systems driving academic promotion and recognition. I do blame the reviewer, but I understand the pressures they might have been under to do such a blatantly immoral thing.

As it happens, my paper has more than a thing or two to say about this kind of McNamara phenomenon, whereby the means used to measure success in a system come to define, and to warp, its purpose, because it is among the main reasons that generative AIs pose such a threat. It is easy to forget that the ways we establish goals and measure success in educational systems are no more than signals of a much more complex phenomenon with far more expansive goals that are concerned with helping humans to be, individually and in their cultures and societies, as much as with helping them to do particular things. Generative AIs are great at both generating and displaying those signals – better than most humans in many cases – but that’s all they do: the signals signify nothing. For well-defined tasks with well-defined goals they provide a lot of opportunities for cost-saving, quality improvement, and efficiency and, in many occupations, that can be really useful. If you want to quickly generate some high-quality advertising copy, the intent of which is to sell a product, then it makes good sense to use a generative AI. Not so much in education, though, where it is too easy to forget that learning objectives, learning outcomes, grades, credentials, and so on are not the purposes of learning but just means for and signals of achieving them.

Though there are other big reasons to be very concerned about using generative AIs in education, some of which I explore in the paper, this particular problem lies not so much with the AIs themselves as with the technological systems into which they are, piecemeal, inserted. It’s a problem of thinking locally rather than globally; of focusing on one part of the technology assembly without acknowledging its role in the whole. Generative AIs could, right now and with little assistance, perform almost every measurable task in an educational system, from (for students) producing essays and exam answers to (for teachers) writing activities and assignments, or acting as personal tutors. They could do so better than most people. If that is all that matters to us then we might as well remove the teachers and the students from the system because, quite frankly, they only get in the way. This absurd outcome is more or less exactly the end game that will occur, though, if we don’t rethink (or double down on existing rethinking of) how education should work and what it is for, beyond the signals we usually use to evaluate success or intent. Just thinking of ways to use generative AIs to improve our teaching is well-meaning, but it risks destroying the woods by focusing on the trees. We really need to step back and think about why we bother in the first place.

For more on this, and for my tentative partial solutions to these and other related problems, do read the paper!

Abstract and citation

This paper analyzes the ways that the widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. Methodologically, the paper applies a theoretical model and grounded argument to present a case that GAIs are different in kind from all previous technologies. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing these soft and hard techniques in humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain; the very things that technologies enabled us to do can now be done by the technologies themselves. Because they replace things that learners have to do in order to learn and that teachers must do in order to teach, the consequences for what, how, and even whether learning occurs are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs. 
Its distinctive contributions include a novel means of understanding the distinctive differences between GAIs and all other technologies, a characterization of the nature of generative AIs as collectives (forms of collective intelligence), reasons to avoid the use of GAIs to replace teachers, and a theoretically grounded framework to guide adoption of generative AIs in education.

Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21104429/published-in-digital-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

▶ How Education Works, the audio book: now with beats

My book has been set to music!

Many thanks to Terry Greene for converting How Education Works into the second in his inspired series of podcasts, EZ Learning – Audio Books with Beats. There’s a total of 15 episodes that can be listened to online, subscribed to with your preferred podcast app, or downloaded for later listening, read by a computer-generated voice and accompanied by some cool, soothing beats.

Terry chose a deep North American voice for the reader and Eaters In Coffeeshops Mix 1 by Eaters to accompany my book. I reckon it works really well. It’s bizarre, at first – the soothing robotic voice introduces weird pauses, mispronunciations, and curious emphases, and there are occasional voice parts in the music that can be slightly distracting – but you soon get used to it if you relax into the rhythm, and it leads to the odd serendipitous emphasis that enhances rather than detracts from the text. Oddly, in some ways it almost feels more human as a result. Though it can be a bit disconcerting at times and there’s a fair chance of being lulled to sleep by the gentle rhythm, I have a hunch that the addition of music might make it easier to remember passages from it, for reasons discussed in a paper I wrote with Rory McGreal, Vive Kumar, and Jennifer Davies a year or so ago.

I have been slowly and painfully working on a manually performed audiobook of How Education Works, but it is taking much longer than expected: I live on the flight path of a surprising number of float planes, in a city built on a rain forest, with a noisy gutter outside my window and two very vocal cats, and so on, not to mention not having a lot of free time to work on it. So I am very pleased that Terry has done this. I won’t stop working on the human-read version – I think this fills a different and very complementary niche – but it’s great to have something to point people towards when they ask for an audio version.

The first season of Audio Books with Beats, appearing in the feed after the podcasts for my book chapters, was another AU Press book, Terry Anderson’s Theory and Practice of Online Learning, which is also well worth a listen – those chapters follow directly from mine in the list of episodes. I hope and expect there will be more seasons to come so, if you are reading this some time after it was posted, you may need to scroll down through other podcasts until you reach the How Education Works episodes. In case it’s hard to find, here’s a list of direct links to the episodes.

Acknowledgements, Prologue, Introduction

Chapter 1: A Handful of Anecdotes About Elephants

Chapter 2: A Handful of Observations About Elephants

Part I: All About Technology

Chapter 3: Organizing Stuff to Do Stuff

Chapter 4: How Technologies Work

Chapter 5: Participation and Technique

Part II: Education as a Technological System

Chapter 6: A Co-Participation Model of Teaching

Chapter 7: Theories of Teaching

Chapter 8: Technique, Expertise, and Literacy

Part III: Applying the Co-Participation Model

Chapter 9: Revealing Elephants

Chapter 10: How Education Works

Epilogue

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20936998/%E2%96%B6-how-education-works-the-audio-book-now-with-beats

Recording and slides from my ESET 2023 keynote: Artificial humanity and human artificiality

Here are the slides from my keynote at ESET23 in Taiwan (I was online, alas, not in Taipei!).

Here’s a recording of the actual keynote.

The themes of my talk will be familiar to anyone who follows my blog or who has read my recent paper on the subject. It is about applying the coparticipation theory from How Education Works to generative AI, raising concerns about the ways it mimics the soft technique of humans, and discussing how problematic that will be if the skills it replaces atrophy or are never learned in the first place, amongst other issues.

This is the abstract:

We are participants in, not just users of technologies. Sometimes we participate as orchestrators (for instance, when choosing words that we write) and sometimes as part of the orchestration (for instance, when spelling those words correctly). Usually, we play both roles.  When we automate aspects of technologies in which we are just parts of the orchestration, it frees us up to be able to orchestrate more, to do creative and problem-solving tasks, while our tools perform the hard, mechanical tasks better, more consistently, and faster than we could ourselves. Collectively and individually, we therefore become smarter. Generative AIs are the first of our technologies to successfully automate those soft, open-ended, creative cognitive tasks. If we lack sufficient time and/or knowledge to do what they do ourselves, they are like tireless, endlessly flexible personal assistants, expanding what we can do alone. If we cannot draw, or draw up a rental agreement, say, an AI will do it for us, so we may get on with other things. Teachers are therefore scrambling to use AIs to assist in their teaching as fast as students use AIs to assist with their assessments.

For achieving measurable learning outcomes, AIs are or will be effective teachers, opening up greater learning opportunities that are more personalized, at lower cost, in ways that are superior to average human teachers.  But human teachers, be they professionals, other students, or authors of websites, do more than help learners to achieve measurable outcomes. They model ways of thinking, ways of being, tacit knowledge, and values: things that make us human. Education is a preparation to participate in human cultures, not just a means of imparting economically valuable skills. What will happen as we increasingly learn those ways of being from a machine? If machines can replicate skills like drawing, reasoning, writing, and planning, will humans need to learn them at all? Are there aspects of those skills that must not atrophy, and what will happen to us at a global scale if we lose them? What parts of our cognition should we allow AIs to replace? What kinds of credentials, if any, will be needed? In this talk I will use the theory presented in my latest book, How Education Works: Teaching, Technology, and Technique to provide a framework for exploring why, how, and for what purpose our educational institutions exist, and what the future may hold for them.

Pre-conference background reading, including the book, articles, and blog posts on generative AI and education may be found linked from https://howeducationworks.ca