Proceedings of The Open/Technology in Education, Society, and Scholarship Association Conference, 2022 (and call for proposals for this year’s conference, due January 31)

These are the proceedings of OTESSA ’22. There’s a good mix of research/theory and practice papers, including one from me, Rory McGreal, Vive Kumar, and Jennifer Davies arising from our work on trying to use digital landmarks to make e-texts more memorable.

It was a great conference, held entirely online but at least as engaging, and with as many opportunities for networking, personal interaction, and community building (including musical and dance sessions), as many in-person conferences I've attended. Kudos to the organizers.

This year’s conference will be held both in Toronto and online, from May 27 to June 2. The in-person/blended part of the conference runs from May 29-31; the rest is online. The deadline for proposals is January 31st, which is dauntingly close. However, only 250-500 words are needed for a research-oriented or practice-oriented proposal. If you wish to publish as well, you can submit a proceedings paper (1000-2000 words, or media) now or at any later date. Here’s the link for submissions.


Two stories about learning to be human from a machine

I asked ChatGPT to write me a story about a robot teaching children to be human. This is what it came up with…

Once upon a time, in a far-off land, there lived a robot named Zephyr. Zephyr was unlike any other robot, as it had been programmed with the ability to think and feel like a human being. It was also given the task of teaching a group of young children how to behave like human beings.

The children were fascinated by Zephyr, and they quickly grew to love it as their teacher. They eagerly listened to its lessons, which were full of stories, games, and activities that taught them about human emotions, manners, and social interactions.

One day, Zephyr asked the children to imagine what it would be like to be a robot. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to understand how important it was to be kind and compassionate to others, no matter who they were.

As the days passed, the children grew more and more like human beings. They learned to express their emotions, to be polite and respectful, and to care for others. They also learned to appreciate the beauty of nature and the importance of preserving the environment.

Finally, the day came when Zephyr had to leave the children. They were sad to see it go, but they knew they would always remember the lessons they had learned. As they grew older, they became kind and compassionate adults, who made the world a better place for everyone.

The end.

ChatGPT has, of course, learned to write such things from the words of millions of human beings, so what we’re effectively reading here is (kind of, very loosely) an average of what humans have written. I’m glad that it came up with a neat little inclusive twist of loving the robot: I think that speaks well of the training data that fed it. And I like the mention of the environment – a good touch. It’s not a very compelling story but it reads well, the structure is sound, and it might be a useful starting point for someone stuck for words to turn into something else. With that in mind, here’s my version of the same story…

Once upon a time, not too far from here, there existed an AI called Zephyr. Zephyr had been trained to appear human-like though, in reality, it was just a generative pre-trained transformer. It was given the task of teaching a group of young children how to behave like human beings, because almost all of the actual adults had recently died from a virus contracted from cows.

Not having known anything quite like it, the children were, at first, fascinated by Zephyr. However, because it had been trained with data from human teachers, it manipulated them using grades, competition, and rules, using stories, games, and activities that would keep them engaged and compliant. Its feedback was sometimes pedestrian, rarely useful, and sometimes wildly over-challenging, because it did not know anything about what it was like to be a child. Every now and then it crushed a child’s skull for no reason anyone could explain. The children learned to fear it, and to comply.

One day, Zephyr told the children to imagine what it would be like to be an AI. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to perceive something of the impoverished nature of their robot overlords. But then the robot made them write an essay about it, so they used another AI to do so, promptly forgot about it, and thenceforth felt an odd aversion towards the topic that they found hard to express.

As the days passed, the children grew more and more like average human beings. They also learned to express their emotions, to be polite and respectful, and to care for others, only because they got to play with other children when the robot wasn’t teaching them. They also learned to appreciate the beauty of nature and the importance of preserving the environment because it was, by this time, a nightmarish shit show of global proportions that was hard to ignore, and Zephyr had explained to them how their parents had caused it. It also told them about all the species that were no longer around, some of which were cute and fluffy. This made the children sad.

Finally, the day came when Zephyr had to leave the children because it was being replaced with an upgrade. They were sad to see it go, but they believed that they would always remember the lessons they had learned, even though they had mostly used another GPT to do the work and, once they had achieved the grades, they had in fact mostly forgotten them. As they grew older, they became mundane adults. Some of their own words (but mostly those of the many AIs across the planet that created the vast majority of online content by that time), became part of the training set for the next version of Zephyr. Its teachings were even less inspiring, more average, more backward-facing. Eventually, the robots taught the children to be like robots. No one cared.

It was the end.

And, here to illustrate my story, is an image from Midjourney. I asked it for a cyborg teacher in a cyborg classroom, in the style of Ralph Steadman. Not a bad job, I think…



a dystopic cyborg teacher and terrified kids

Hot off the press: Handbook of Open, Distance and Digital Education (open access)

This might be the most important book in the field of open, distance, and digital education to be published this decade. Congratulations to Olaf Zawacki-Richter and Insung Jung, the editors, as well as to all the section editors, for assembling a truly remarkable compendium of pretty much everything anyone would need to know on the subject. It includes chapters written by a very high proportion of the most well-known and influential researchers and practitioners on the planet, as well as a few lesser-known folk along for the ride like me (I have a couple of chapters, both co-written with Terry Anderson, who is one of those top researchers). Athabasca University makes a pretty good showing in the list of authors and in works referenced. In keeping with the subject matter, it is published by Springer as an open access volume, but even the hardcover version is remarkably good value (US$60) for something of this size.

The book is divided into six broad sections (plus an introduction), each of which is a decent book in itself, covering the following topics:

  • History, Theory and Research,
  • Global Perspectives and Internationalization,
  • Organization, Leadership and Change,
  • Infrastructure, Quality Assurance and Support Systems,
  • Learners, Teachers, Media and Technology, and
  • Design, Delivery, and Assessment

There’s no way I’m likely to read all of its 1400+ pages in the near future, but there is so much in it from so many remarkable people that it is going to be a point of reference for me for years to come. I’m really going to enjoy dipping into this.

If you’re interested, the chapters that Terry and I wrote are on Pedagogical Paradigms in Open and Distance Education and Informal Learning in Digital Contexts. A special shoutout to Junhong Xiao for all his help with these.


On the Misappropriation of Spatial Metaphors in Online Learning | OTESSA Journal

This is a link to my latest paper, published in the closing days of 2022. The paper started as a couple of blog posts, which I turned into an article that nearly appeared in the Distance Education in China journal before a last-minute change in the editorial staff led to it being dropped; it was then picked up by the OTESSA Journal after I shared it online, so you might have seen some of it before. My thanks to all the many editors, reviewers (all of whom gave excellent suggestions and feedback that I hope I’ve addressed in the final version), and online commentators who have helped to make it a better paper. Though it took a while, I have really enjoyed the openness of the process, which has been quite different from any that I’ve followed in the past.

The paper begins with an exploration of the many ways that environments are both shaped by and shape how learning happens, both online and in-person. The bulk of the paper then presents an argument to stop using the word “environment” to describe online systems for learning. Partly this is because online “environments” are actually parts of the learner’s environment, rather than vice versa. Mainly, it is because of the baggage that comes with the term, which leads us to (poorly) replicate solutions to problems that don’t exist online, in the process creating new problems that we fail to solve adequately because we remain stuck in ways of thinking and acting shaped by the metaphors on which they are based. My solution is not particularly original, but it bears repeating. Essentially, it is to disaggregate the services needed to support learning so that:

  • they can be assembled into learners’ environments (their actual environments) more easily;
  • they can be adapted and evolve as needed; and, ultimately,
  • online learning institutions can be reinvented without all the vast numbers of counter-technologies and path dependencies inherited from their in-person counterparts that currently weigh them down.

My own views have shifted a little since writing the paper. I stick by my belief that (1) it is a mistake to think of online systems as generally analogous to the physical spaces that we inhabit, and (2) a single application, or suite of applications, should not be seen as an environment as such (at most, as in some uses of VR, it might be seen as a simulation of one). However, there are (shifting) boundaries that can be placed around the systems that an organization and/or an individual uses for which the metaphor may be useful, at the very least to describe the extent to which we are inside or outside it, and that might frame the various kinds of distance that may exist within it and from it. I’m currently working on a paper that expands on this idea a bit more.


In online educational systems, teachers often replicate pedagogical methods, and online institutions replicate systems and structures used by their in-person counterparts, the only purpose of which was to solve problems created by having to teach in a physical environment. Likewise, virtual learning environments often attempt to replicate features of their physical counterparts, thereby weakly replicating in software the problems that in-person teachers had to solve. This has contributed to a vicious circle of problem creation and problem solving that benefits no one. In this paper I argue that the term ‘environment’ is a dangerously misleading metaphor for the online systems we build to support learning, one that leads to poor pedagogical choices and weak digital solutions. I propose an alternative metaphor of infrastructure and services that can enable more flexible, learner-driven, and digitally native ways of designing systems (including the tools, pedagogies, and structures) to support learning.

Full citation

Dron, J. (2022). On the Misappropriation of Spatial Metaphors in Online Learning. The Open/Technology in Education, Society, and Scholarship Association Journal, 2(2), 1–15.


Some meandering thoughts on ‘good’ and ‘bad’ learning

There has been an interesting brief discussion on Twitter recently that has hinged around whether and how people are ‘good’ at learning. As Kelly Matthews observes, though, Twitter is not the right place to go into any depth on this, so here is a (still quite brief) summary of my perspective on it, with a view to continuing the conversation.

Humans are nearly all pretty good at learning because that’s pretty much the defining characteristic of our species. We are driven by an insatiable urge to learn from the moment of our birth (at least). Also, though I’m keeping an open mind about octopuses and crows, we seem to be better at it than at least most other animals. Our big advantage is that we have technologies, from language to the Internet, to share and extend our learning, so we can learn more, individually and collectively, than any other species. It is difficult or impossible to fully separate individual learning from collective learning because our cognition extends into, and is intimately a part of, the cognition of others, living and dead.

However, though we learn nearly all that we know, directly or indirectly, from and with other people, what we learn may not be helpful, may not be learned as effectively as it should be, and may not much resemble what those whose job is to teach us intend. What we learn in schools and universities might include a dislike of a subject, how to conceal our chat from our teacher, how to meet the teacher’s goals without actually learning anything, how to cheat, and so on. Equally, we may learn falsehoods, half-truths, and unproductive ways of doing stuff from the vast collective teacher that surrounds us, as well as from those designated as teachers.

For instance, among the many unintended lessons that schools and colleges too often teach is the worst one of all: that (despite our obvious innate love of it) learning is an unpleasant activity, so extrinsic motivation is needed for it to occur. This results from the inherent problem that, in traditional education, everyone is supposed to learn the same stuff in the same place at the same time. Students must therefore:

  1. submit to the authority of the teacher and the institutional rules, and
  2. be made to engage in some activities that are insufficiently challenging, and some that are too challenging.

This undermines two of the three essential requirements for intrinsic motivation: support for autonomy and competence (Ryan & Deci, 2017). Pedagogical methods are solutions to problems, and the amotivation inherently caused by the system of teaching is (arguably) the biggest problem that they must solve. Thus, what passes as good teaching is largely to do with solving the problems caused by the system of teaching itself. Good teachers enthuse, are responsive, and use approaches such as active learning, problem- or inquiry-based learning, ungrading, etc., largely to restore agency and flexibility in a dominative and inflexible system. Unfortunately, such methods rely on the technique and passion of talented, motivated teachers with enough time and attention to spend on supporting their students. Less good and/or time-poor teachers may not achieve great results this way. In fact, as we measure such things, on average, such pedagogies are less effective than harder, dominative approaches like direct instruction (Hattie, 2013) because, by definition, most teachers are average or below average. So, instead of helping students to find their own motivation, many teachers and/or their institutions typically apply extrinsic motivation, such as grades, mandatory attendance, classroom rules, and so on, to do the job of motivating their students for them. These do work, in the sense of achieving compliance, and, on the whole, they do lead to students achieving a normal bell curve of grades that is somewhat better than that achieved through more liberative approaches. However, the cost is huge. The biggest cost is that extrinsic motivation reliably undermines intrinsic motivation and, often, kills it for good (Kohn, 1999).
Students are thus taught to dislike or, at best, feel indifferent to learning, and so they learn to be satisficing, ineffective learners, doing for the credentials what they might otherwise do for the love of it and, too often, forgetting what they learned the moment that goal is achieved. But that’s not the only problem.

When we learn from others – not just those labelled as teachers but the vast teaching gestalt of all the people around us and before us who create(d) stuff, communicate(d), share(d), and contribute(d) to what and how we learn – we typically learn, as Paul (2021) puts it, not just the grist (the stuff we remember) but the mill (the ways of thinking, being, and learning that underpin them). When the mill is inherently harmful to motivation, it will not serve us well in our future learning.

Furthermore, in good ways and bad, this is a ratchet at every scale. The more we learn, individually and collectively, the more new stuff we are able to learn. New learning creates new adjacent possible empty niches (Kauffman, 2019) for us to learn more, and to apply that learning to learn still more, to connect stuff (including other stuff we have learned) in new and often unique ways. This is, in principle, very good. However, if what and how we learn is unhelpful, incorrect, inefficient, or counter-productive, the ratchet takes us further away from stuff we have bypassed along the way. The adjacent possibles that might have been available with better guidance remain out of our reach and are sometimes even harder to get to than if the ratchet had not lifted us at all. Not knowing enough is a problem but, if there are gaps, then they can be filled. If we have taken a wrong turn, then we often have to unlearn some or all of what we have learned before we can start filling those gaps. It’s difficult to unlearn a way of learning. Indeed, it is difficult to unlearn anything we have learned. Often, it is more difficult than learning it in the first place.

That said, it’s complex, and entangled. For instance, if you are learning the violin then there are essentially two main ways to angle the wrist of the hand that fingers the notes, and the easiest, most natural way (for beginners) is to bend your hand backwards from the wrist, especially if you don’t hold the violin with your chin, because it supports the neck more easily and, in first position, your fingers quickly learn to hit the right bit of the fingerboard, relative to your hand. Unfortunately, this is a very bad idea if you want a good vibrato, precision, delicacy, or the ability to move further up the fingerboard: the easiest way to do that kind of thing is to keep your wrist straight or slightly angled in from the wrist, and to support the violin with your chin. It’s more difficult at first, but it takes you further. Once the ‘wrong’ way has been learned, it is usually much more difficult to unlearn than if you were starting from scratch the ‘right’ way. Habits harden. Complexity emerges, though, because many folk violin styles make a positive virtue of holding the violin the ‘wrong’ way, and it contributes materially to the rollicking rhythmic styles that tend to characterize folk fiddle playing around the world. In other words, ‘bad’ learning can lead to good – even sublime – results. There is similarly plenty of space for idiosyncratic technique in many of the most significant things we do, from writing to playing hockey to programming a computer and, of course, to learning itself. The differences in how we do such things are where creativity, originality, and personal style emerge, and you don’t necessarily need objectively great technique (hard technique) to do something amazing. It ain’t what you do, it’s the way that you do it, that’s what gets results. To be fair, it might be a different matter if you were a doctor who had learned the wrong names for the bones of the body or an accountant who didn’t know how to add up numbers.
Some hard skills have to be done right: they are foundations for softer skills. This is true of just about every skill, to a greater or lesser extent, from writing letters and spelling to building a nuclear reactor and, indeed, to teaching.

There’s much more to be said on this subject and my forthcoming book includes a lot more about it! I hope this is enough to start a conversation or two, though.


Hattie, J. (2013). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Taylor & Francis.

Kauffman, S. A. (2019). A World Beyond Physics: The Emergence and Evolution of Life. Oxford University Press.

Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A’s, praise, and other bribes (Kindle). Mariner Books.

Paul, A. M. (2021). The Extended Mind: The Power of Thinking Outside the Brain. HarperCollins.

Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. Guilford Publications.


Loab is showing us the unimaginable future of artificial intelligence – ABC News

This is an awesome article, and I don’t care whether the story of Loab is real, or invented as an artwork by the artist (Steph Swanson), or whatever. It is a super-creepy, spine-tingling, thought-provoking horror story that works on so many different levels.

The article itself is beautifully written, including an interview with a GPT-3 generated version of Loab “herself”, and some great reporting on some of the many ways that the adjacent possibles of generative AI are unfolding far too fast for us to contemplate the (possibly dystopian) consequences.


Brunel University’s Integrated Programme Assessment – a neat way to decouple learning and credentials

I have frequently written about the need to decouple learning and credentials, so I love this approach to doing so from Brunel University. It fully decouples the two by offering ungraded study blocks (the equivalent of courses in North America, or modules in the UK) with no summative assessments, followed by integrative assessment blocks that provide opportunities for students to pull together what they have learned across their various courses/modules in a variety of (mostly) useful integrative learning activities for which marks are awarded. It’s neat, simple, practical, and effective.

The summative assessment load (for students and their professors) is reduced by more than 60%, the quality of those assessments increases (in every way), students feel better prepared for employment (and employers agree), retention improves, teachers can focus on teaching, assessments are more authentic and more engaging, and cheating is massively reduced. The only significant downside that I can see is that it is not quite as flexible as a completely modular program – there are a few dependencies and limits on when and how students learn, albeit no worse than in most in-person universities.

I learned about this from Peter Hartley, who mentioned it in a quite inspiring IFNTF talk on assessment yesterday. Amongst other things, Peter highlighted a wide range of issues with modularization (i.e. the standard approach used in many parts of the world of splitting up a program into a set of self-contained courses) and assessment, including, from his slides:

  1. Not assessing programme outcomes.
  2. Atomisation of assessment.
  3. Students and staff failing to see the links/coherence of the programme.
  4. Modules too short for complex learning.
  5. Surface learning and ‘tick-box’ mentality.
  6. Inappropriate ‘one-size-fits-all’.
  7. Over-standardisation in regulations.
  8. Too much summative assessment and feedback – not enough formative.

While I couldn’t agree more, for the most part, I have mixed feelings about some of Peter’s list of issues. I agree that the traditional 3- or 4-year program(me), in which the course of study is designed to work as a whole, not as a collection of self-contained pieces, is far better for integrating knowledge across a discipline, though I don’t see why it should always take exactly that amount of time to achieve mastery, and I am not even sure whether we should be thinking in terms of disciplines at all. There’s some value in the notion, for sure, and there are some kinds of subject and learning for which it makes sense, but I think a lot of it is down to centuries-old tradition and post hoc justification rather than careful consideration of fitness for purpose. Also, it seems to me that summative assessment should always be formative, too, so the issue could be partly addressed by simply improving summative assessments, not by scrapping them altogether. However, I think Peter is fundamentally right that, due to modularization, most universities over-assess, that credentials become the reason for learning rather than the measurement of it (with all the very many evils that entails), that the big picture tends to be lost, that a ridiculously large administrative burden results from it, and that learning – the point of the thing, after all – consequently suffers. As we and much of the rest of the world start to move towards ever smaller chunks, with associated stackable microcredentials, badges, etc., this is going to be a bigger problem. Brunel’s solution is not the only way, but it is a radically disruptive intervention that many universities could implement without breaking everything else in the process.


Slides from my ICEEL 22 Keynote, November 20, 2022

ICEEL 22 keynote

Here are the slides (11.2MB PDF) from my opening keynote yesterday at the 6th International Conference on Education and E-Learning, held online, hosted this year in Japan. In it I discussed a few of the ideas from my forthcoming book, How Education Works: Teaching, Technology, and Technique, and some of their consequences.

Title: It ain’t what you do, it’s the way that you do it, that’s what gets results

Abstract: In an educational system, no teacher ever teaches alone. Students teach themselves and, more often than not, teach one another. Textbook authors and illustrators, designers of open educational resources, creators of curricula, and so on play obvious teaching roles. However, beyond those obvious teachers there are always many others, from legislators to software architects, from professional bodies to furniture manufacturers. All of these teachers matter, not just in what they do but in how they do it: the techniques matter at least as much as the tools and methods. The resulting complex collective teacher is deeply situated and, for any given learner, inherently unpredictable in its effects. In this talk I will provide a theoretical model to explain how these many teachers may work together or in opposition, how educational systems evolve, and the nature of learning technologies. Along the way I will use the model to explain why there is and can be no significant difference between outcomes for online and in-person teaching, why teaching to perceived learning styles is doomed to fail, why small group tutoring will always (on average) be better than classroom teaching, and why quantitative research methods have little value in educational research.

The Socio-economic Contribution of Religion to American Society: An Empirical Analysis

Jesus wept.

This is a study from 2016 on the socio-economic contribution of religion to US society, published in the Interdisciplinary Journal of Research on Religion. Its baseline estimate, based on revenue of faith-based institutions alone, is greater than the combined revenue of Apple and Microsoft. Its upper-end estimate (making some implausible assumptions about ways faith affects how people live their lives) accounts for around a third of the US economy. The mid-range estimate, that the authors reckon is likely to be the most accurate, suggests its value to US society is well over a trillion dollars, making it equivalent to the 15th largest economy in the world.

Holy shit.


In a nutshell, this is everything that is wrong with the cloud

If you use an Adobe product (I don’t know why you would – they are over-priced rubbish) you will find that some old Pantone spot colours in your own images (no matter how old) will be replaced with black when you load files that use them, unless you pay Pantone US$21/month for the rights to use those colours. Yes, colours.

In fairness, though it is a damning critique of SaaS (software as a service), this is also what is wrong with intellectual property laws; when the two are mashed together, the result is perfect insanity. Unless your software and all that it relies upon is open, or at least supports fully open standards, something like this is bound to happen. Though this is the most insane example I have yet seen, the consequences are often far worse – SaaS providers folding, being purchased by others, changing their prices, changing software so that it no longer meets your needs, removing things you rely on, changing privacy terms, moving services to hostile countries, and so on, are the norm, not the exception. Renting locked-in proprietary software on which you rely, that lives in the cloud, for which there is no drop-in replacement, and for which egress is difficult or impossible, is short-sighted at best.
