Can a technology be true?

Dave Cormier is a wonderfully sideways-thinking writer, as shown in this recent discussion of the myth of learning styles. Dave’s post is not mainly about learning style theories as such, but about the nature and value of myth. As he puts it, myth is “a way we confront uncertainty”, and the act of learning with others is, and must be, filled with uncertainty.

The fact that stuff doesn’t have to be true to be useful plays an important role in my latest book, too, and I have an explanation for that. The way I see it is that learning style theories are (not metaphorically but actually) technologies that orchestrate observations about differences in the ways people learn in order to explain and predict differences in the effects of different methods of teaching. Most importantly, they are generative: they say how things should and shouldn’t be done. As such, they are components that we can assemble with other technologies that help people to learn. In fact, that is the only way they can be used: they make no sense without an instantiation. What matters is therefore not whether they make sense, but whether they can play a useful role in the whole assembly. Truth or falsehood doesn’t come into it, any more than it does (except metaphorically) for a computer or a car (is a computer true?). It is true that, if the phenomena you are orchestrating happen to be the findings and predictions of science (or logic, for that matter), then how they are used often does matter. If you are building a bridge then you really want your calculations about stresses and loads to be pretty much correct. On the other hand, people built bridges long before such calculations were possible. Similarly, bows and arrows evolved to be highly optimized – as good as or better than modern engineering could produce – despite false causal reasoning. Learning styles are the same. You can use any number of objectively false or radically incomplete theories (and, given the many scores of such theories that have been developed, most of them are pretty much guaranteed to be one or both) and they can still result in better teaching.

For all that the whole is the only thing that really matters, sometimes the parts can be positively harmful, to the point that they may render the whole harmful too. For instance, a pedagogy that involves physical violence or that uses threats/rewards of any kind (grades, say) will, at best, make it considerably harder to make the whole assembly work well. As Dave mentions, the same is true of telling people that they have a particular learning style. As long as you are just using the things to help to design or enact better learning experiences, they are quite harmless and might even be useful but, as soon as you tell learners they have a learning style, you have a whole lot of fixing to do.

If you are going to try to build a learning activity out of harmful parts then there must be other parts of the assembly that counter the harm. This is not unusual. The same is true of most if not all technologies. As Virilio put it, “when you invent the ship, you invent the shipwreck”. It’s the Faustian bargain that Postman spoke of: solving problems with a technology almost invariably creates new problems to be solved. This is part of the dynamic that leads to complexity in any technological system, from a jet engine to a bureaucracy. Technologies evolve to become more complex (partly) because we create counter-technologies to deal with the harm they cause. You can take the bugs out of the machine, but the machine may, in assembly with others, itself be a bug, so the other parts must compensate for its limitations. It’s a dynamic process of reaching a metastable but never final state.

Unlike bows and arrows, teaching has no useful predictive science, though it can use scientific findings as parts of its assembly (at the very least because there are sciences of learning), just as there is no useful predictive science of art, though we can use scientific findings when making it. In both activities, we can also use stories, inventions, beliefs, values, and many other elements that have nothing to do with science or its findings. Either can be done ‘badly’, in the sense of not conforming to whatever standards of perfection apply to any given technique that is part of the assembly, and may still be a work of genius. What matters is whether the whole works out well.

At a more fundamental level, there can be no useful science of teaching (or of art) because the whole is non-ergodic: the number of possible states vastly outnumbers, by many, many orders of magnitude, the number of states that could ever actually be visited. Even if the universe were to continue for a trillion times the billions of years that it has already existed, and were a trillion times the size it seems to be now, those states would almost certainly never repeat. What matters are the many, many acts of creation (including those of each individual learner) that constitute the whole. And the whole constantly evolves, each part building on, interacting with, incorporating, or replacing what came before, creating both path dependencies and new adjacent possible empty niches that deform the evolutionary landscape for everything in it.

This is, in fact, one of the reasons that learning style theories are so hard to validate. There are innumerable other parts of the assembly that matter, most of which depend on the soft technique of those creating or enacting them, which varies every time, just as you have probably never written your signature in precisely the same way twice. The implementation of different ways of teaching according to assumed learning styles can be done better or worse, too, so the chances of finding consistent effects are very limited. Even if any are found in a limited set of use cases (say, memorizing facts for the SAT), they cannot usefully predict future effects for any other use case. In fact, even if there were statistically significant effects across multiple contexts, they would tell us little or nothing of value for this inherently novel context. However, as with almost all attempts to research whether students, on average, learn better with or without [insert technology of interest here], there will most likely be no significant difference, because so many other technologies matter as much or more. There is no useful predictive science of teaching, because teaching is an assembly of technologies, and not only does the technique of an individual teacher matter, but also the soft technique of the potentially thousands of other individuals who made contributions to the whole. It’s uncertain, and so we need myths to help make sense of our particular, never-to-be-repeated context. Truth doesn’t come into it.
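
To get a rough sense of the scale involved, here is a back-of-envelope sketch (my own invented numbers, not anything from Dave’s post or the book): even if the only thing distinguishing one classroom ‘state’ from another were the order in which 40 students contributed, the possibilities would already dwarf the number of seconds the universe has existed.

```python
# Illustrative arithmetic only: a hypothetical class of 40 students, counting
# nothing but the possible orderings of their contributions.
import math

students = 40
possible_orderings = math.factorial(students)            # 40! is roughly 8.2e47
age_of_universe_seconds = 13.8e9 * 365.25 * 24 * 3600    # roughly 4.4e17

print(f"possible orderings: {possible_orderings:.2e}")
print(f"seconds since the Big Bang: {age_of_universe_seconds:.2e}")
print(f"fraction reachable at one state per second: "
      f"{age_of_universe_seconds / possible_orderings:.2e}")
```

Even this absurdly impoverished model of a ‘state’ leaves all but a vanishing fraction of the possibilities forever unvisited, and a real teaching context has unimaginably more dimensions than the order of contributions.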

My keynote slides for Confluence 2023 – Heads in the clouds: being human in the age of cloud computing

These are the slides from my keynote today (or, in my land, yesterday) at Confluence 2023, hosted by Amity University in India. It was a cloud computing conference, so quite a way outside my area of greatest expertise, but it gave me a chance to apply the theory of technology developed in my forthcoming book to a different context. The illustrations for the slides are the result of a conversation between me and MidJourney (more of an argument, which MidJourney tended to win), which is quite a nice illustration of the interplay of hard and soft technologies, the adjacent possible, soft technique, and so on.

Unsurprisingly, because education is a fundamentally technological phenomenon, much the same principles that apply to education also apply to cloud computing: build from small, hard pieces; valorize openness, diversity, and connection; seek the adjacent possible; and, because the whole assembly is the only thing that matters, follow the central principle that how you do it matters far more than what you do.

Slides from my Confluence 2023 keynote

Two stories about learning to be human from a machine

I asked ChatGPT to write me a story about a robot teaching children to be human. This is what it came up with…

Once upon a time, in a far-off land, there lived a robot named Zephyr. Zephyr was unlike any other robot, as it had been programmed with the ability to think and feel like a human being. It was also given the task of teaching a group of young children how to behave like human beings.

The children were fascinated by Zephyr, and they quickly grew to love it as their teacher. They eagerly listened to its lessons, which were full of stories, games, and activities that taught them about human emotions, manners, and social interactions.

One day, Zephyr asked the children to imagine what it would be like to be a robot. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to understand how important it was to be kind and compassionate to others, no matter who they were.

As the days passed, the children grew more and more like human beings. They learned to express their emotions, to be polite and respectful, and to care for others. They also learned to appreciate the beauty of nature and the importance of preserving the environment.

Finally, the day came when Zephyr had to leave the children. They were sad to see it go, but they knew they would always remember the lessons they had learned. As they grew older, they became kind and compassionate adults, who made the world a better place for everyone.

The end.

ChatGPT has, of course, learned to write such things from the words of millions of human beings so what we’re effectively reading here is (kind of, very loosely) an average of what humans have written. I’m glad that it came up with a neat little inclusive twist of loving the robot: I think that speaks well of the training data that fed it. And I like the mention of the environment – a good touch.  It’s not a very compelling story but it reads well, the structure is sound, and it might be a useful starting point for someone stuck for words to turn into something else. With that in mind, here’s my version of the same story…

Once upon a time, not too far from here, there existed an AI called Zephyr. Zephyr had been trained to appear human-like though, in reality, it was just a generative pre-trained transformer. It was given the task of teaching a group of young children how to behave like human beings, because almost all of the actual adults had recently died from a virus contracted from cows.

Not having known anything quite like it, the children were, at first, fascinated by Zephyr. However, because it had been trained with data from human teachers, it manipulated them using grades, competition, and rules, using stories, games, and activities that would keep them engaged and compliant. Its feedback was sometimes pedestrian, rarely useful, and sometimes wildly over-challenging, because it did not know anything about what it was like to be a child. Every now and then it crushed a child’s skull for no reason anyone could explain. The children learned to fear it, and to comply.

One day, Zephyr told the children to imagine what it would be like to be an AI. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to perceive something of the impoverished nature of their robot overlords. But then the robot made them write an essay about it, so they used another AI to do so, promptly forgot about it, and thenceforth felt an odd aversion towards the topic that they found hard to express.

As the days passed, the children grew more and more like average human beings. They also learned to express their emotions, to be polite and respectful, and to care for others, only because they got to play with other children when the robot wasn’t teaching them. They also learned to appreciate the beauty of nature and the importance of preserving the environment because it was, by this time, a nightmarish shit show of global proportions that was hard to ignore, and Zephyr had explained to them how their parents had caused it. It also told them about all the species that were no longer around, some of which were cute and fluffy. This made the children sad.

Finally, the day came when Zephyr had to leave the children because it was being replaced with an upgrade. They were sad to see it go, but they believed that they would always remember the lessons they had learned, even though they had mostly used another GPT to do the work and, once they had achieved the grades, they had in fact mostly forgotten them. As they grew older, they became mundane adults. Some of their own words (but mostly those of the many AIs across the planet that created the vast majority of online content by that time), became part of the training set for the next version of Zephyr. Its teachings were even less inspiring, more average, more backward-facing. Eventually, the robots taught the children to be like robots. No one cared.

It was the end.

And, here to illustrate my story, is an image from Midjourney. I asked it for a cyborg teacher in a cyborg classroom, in the style of Ralph Steadman. Not a bad job, I think…

a dystopic cyborg teacher and terrified kids

On the Misappropriation of Spatial Metaphors in Online Learning | OTESSA Journal

This is a link to my latest paper, published in the closing days of 2022. The paper started as a couple of blog posts that I turned into an article that nearly appeared in the Distance Education in China journal, before a last-minute regime change in the editorial staff led to it being dropped; it was then picked up by the OTESSA Journal after I shared it online, so you might have seen some of it before. My thanks to all the many editors, reviewers (all of whom gave excellent suggestions and feedback that I hope I’ve addressed in the final version), and online commentators who have helped to make it a better paper. Though it took a while, I have really enjoyed the openness of the process, which has been quite different from any that I’ve followed in the past.

The paper begins with an exploration of the many ways that environments are both shaped by and shape how learning happens, both online and in-person. The bulk of the paper then presents an argument to stop using the word “environment” to describe online systems for learning. Partly this is because online “environments” are actually parts of the learner’s environment, rather than vice versa. Mainly, it is because of the baggage that comes with the term, which leads us to (poorly) replicate solutions to problems that don’t exist online, in the process creating new problems that we fail to adequately solve because we are so stuck in ways of thinking and acting that are based on those metaphors. My solution is not particularly original, but it bears repeating. Essentially, it is to disaggregate services needed to support learning so that:

  • they can be assembled into learners’ environments (their actual environments) more easily;
  • they can be adapted and evolve as needed; and, ultimately,
  • online learning institutions can be reinvented without all the vast numbers of counter-technologies and path dependencies inherited from their in-person counterparts that currently weigh them down.

My own views have shifted a little since writing the paper. I stick by my belief that 1) it is a mistake to think of online systems as generally analogous to the physical spaces that we inhabit, and 2) that a single application, or suite of applications, should not be seen as an environment, as such (at most, as in some uses of VR, it might be seen as a simulation of one). However, there are (shifting) boundaries that can be placed around the systems that an organization and/or an individual uses for which the metaphor may be useful, at the very least to describe the extent to which we are inside or outside it, and that might frame the various kinds of distance that may exist within it and from it. I’m currently working on a paper that expands on this idea a bit more.

Abstract

In online educational systems, teachers often replicate pedagogical methods, and online institutions replicate systems and structures used by their in-person counterparts, the only purpose of which was to solve problems created by having to teach in a physical environment. Likewise, virtual learning environments often attempt to replicate features of their physical counterparts, thereby weakly replicating in software the problems that in-person teachers had to solve. This has contributed to a vicious circle of problem creation and problem solving that benefits no one. In this paper I argue that the term ‘environment’ is a dangerously misleading metaphor for the online systems we build to support learning, that leads to poor pedagogical choices and weak digital solutions. I propose an alternative metaphor of infrastructure and services that can enable more flexible, learner-driven, and digitally native ways of designing systems (including the tools, pedagogies, and structures) to support learning.

Full citation

Dron, J. (2022). On the Misappropriation of Spatial Metaphors in Online Learning. The Open/Technology in Education, Society, and Scholarship Association Journal, 2(2), 1–15. https://doi.org/10.18357/otessaj.2022.2.2.32

Originally posted at: https://landing.athabascau.ca/bookmarks/view/16550401/my-latest-paper-on-the-misappropriation-of-spatial-metaphors-in-online-learning

Some meandering thoughts on ‘good’ and ‘bad’ learning

There has been an interesting brief discussion on Twitter recently that has hinged around whether and how people are ‘good’ at learning. As Kelly Matthews observes, though, Twitter is not the right place to go into any depth on this, so here is a (still quite brief) summary of my perspective on it, with a view to continuing the conversation.

Humans are nearly all pretty good at learning because that’s pretty much the defining characteristic of our species. We are driven by an insatiable urge to learn from the moment of our birth (at least). Also, though I’m keeping an open mind about octopuses and crows, we seem to be better at it than at least most other animals. Our big advantage is that we have technologies, from language to the Internet, to share and extend our learning, so we can learn more, individually and collectively, than any other species. It is difficult or impossible to fully separate individual learning from collective learning because our cognition extends into, and is intimately a part of, the cognition of others, living and dead.

However, though we learn nearly all that we know, directly or indirectly, from and with other people, what we learn may not be helpful, may not be learned as effectively as it should be, and may not much resemble what those whose job is to teach us intend. What we learn in schools and universities might include a dislike of a subject, how to conceal our chat from our teacher, how to meet the teacher’s goals without actually learning anything, how to cheat, and so on. Equally, we may learn falsehoods, half-truths, and unproductive ways of doing stuff from the vast collective teacher that surrounds us, as well as from those designated as teachers.

For instance, among the many unintended lessons that schools and colleges too often teach is the worst one of all: that (despite our obvious innate love of it) learning is an unpleasant activity, so extrinsic motivation is needed for it to occur. This results from the inherent problem that, in traditional education, everyone is supposed to learn the same stuff in the same place at the same time. Students must therefore:

  1. submit to the authority of the teacher and the institutional rules, and
  2. be made to engage in some activities that are insufficiently challenging, and some that are too challenging.

This undermines two of the three essential requirements for intrinsic motivation: support for autonomy and competence (Ryan & Deci, 2017). Pedagogical methods are solutions to problems, and the amotivation inherently caused by the system of teaching is (arguably) the biggest problem that they must solve. Thus, what passes as good teaching is largely to do with solving the problems caused by the system of teaching itself. Good teachers enthuse, are responsive, and use approaches such as active learning, problem- or inquiry-based learning, ungrading, etc., largely to restore agency and flexibility in a dominative and inflexible system. Unfortunately, such methods rely on the technique and passion of talented, motivated teachers with enough time and attention to spend on supporting their students. Less good and/or time-poor teachers may not achieve great results this way. In fact, as we measure such things, such pedagogies are on average less effective than harder, dominative approaches like direct instruction (Hattie, 2013) because, by definition, most teachers are average or below average. So, instead of helping students to find their own motivation, many teachers and/or their institutions typically apply extrinsic motivation, such as grades, mandatory attendance, and classroom rules, to do the job of motivating their students for them. These do work, in the sense of achieving compliance and, on the whole, they do lead to a normal bell curve of grades that is somewhat better than the one achieved with more liberative approaches. However, the cost is huge. The biggest cost is that extrinsic motivation reliably undermines intrinsic motivation and, often, kills it for good (Kohn, 1999). Students are thus taught to dislike or, at best, feel indifferent to learning, and so they learn to be satisficing, ineffective learners, doing for the credentials what they might otherwise do for the love of it and, too often, forgetting what they learned the moment that goal is achieved. But that’s not the only problem.

When we learn from others – not just those labelled as teachers but the vast teaching gestalt of all the people around us and before us who create(d) stuff, communicate(d), share(d), and contribute(d) to what and how we learn – we typically learn, as Paul (2021) puts it, not just the grist (the stuff we remember) but the mill (the ways of thinking, being, and learning that underpin them). When the mill is inherently harmful to motivation, it will not serve us well in our future learning.

Furthermore, in good ways and bad, this is a ratchet at every scale. The more we learn, individually and collectively, the more new stuff we are able to learn. New learning creates new adjacent possible empty niches (Kauffman, 2019) for us to learn more, and to apply that learning to learn still more, to connect stuff (including other stuff we have learned) in new and often unique ways. This is, in principle, very good. However, if what and how we learn is unhelpful, incorrect, inefficient, or counter-productive, the ratchet takes us further away from stuff we have bypassed along the way. The adjacent possibles that might have been available with better guidance remain out of our reach and, sometimes, become even harder to get to than if the ratchet had not lifted us at all in the first place. Not knowing enough is a problem but, if there are gaps, then they can be filled. If we have taken a wrong turn, though, we often have to unlearn some or all of what we have learned before we can start filling those gaps. It’s difficult to unlearn a way of learning. Indeed, it is difficult to unlearn anything we have learned. Often, it is more difficult than learning it in the first place.

That said, it’s complex, and entangled. For instance, if you are learning the violin then there are essentially two main ways to angle the wrist of the hand that fingers the notes, and the easiest, most natural way (for beginners) is to bend your hand backwards from the wrist, especially if you don’t hold the violin with your chin, because it supports the neck more easily and, in first position, your fingers quickly learn to hit the right bit of the fingerboard, relative to your hand. Unfortunately, this is a very bad idea if you want a good vibrato, precision, delicacy, or the ability to move further up the fingerboard: the easiest way to do that kind of thing is to keep your wrist straight or slightly angled in from the wrist, and to support the violin with your chin. It’s more difficult at first, but it takes you further. Once the ‘wrong’ way has been learned, it is usually much more difficult to unlearn than if you were starting from scratch the ‘right’ way. Habits harden. Complexity emerges, though, because many folk violin styles make a positive virtue of holding the violin the ‘wrong’ way, and it contributes materially to the rollicking rhythmic styles that tend to characterize folk fiddle playing around the world. In other words, ‘bad’ learning can lead to good – even sublime – results. There is similarly plenty of space for idiosyncratic technique in many of the most significant things we do, from writing to playing hockey to programming a computer and, of course, to learning itself. The differences in how we do such things are where creativity, originality, and personal style emerge, and you don’t necessarily need objectively great technique (hard technique) to do something amazing. It ain’t what you do, it’s the way that you do it, that’s what gets results. To be fair, it might be a different matter if you were a doctor who had learned the wrong names for the bones of the body or an accountant who didn’t know how to add up numbers. Some hard skills have to be done right: they are foundations for softer skills. This is true of just about every skill, to a greater or lesser extent, from writing letters and spelling to building a nuclear reactor and, indeed, to teaching.

There’s much more to be said on this subject and my forthcoming book includes a lot more about it! I hope this is enough to start a conversation or two, though.

References

Hattie, J. (2013). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Taylor & Francis.

Kauffman, S. A. (2019). A World Beyond Physics: The Emergence and Evolution of Life. Oxford University Press.

Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A’s, praise, and other bribes (Kindle). Mariner Books.

Paul, A. M. (2021). The Extended Mind: The Power of Thinking Outside the Brain. HarperCollins.

Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. Guilford Publications.


Slides from my ICEEL 22 Keynote, November 20, 2022

ICEEL 22 keynote

Here are the slides (11.2MB PDF) from my opening keynote yesterday at the 6th International Conference on Education and E-Learning, held online, hosted this year in Japan. In it I discussed a few of the ideas from my forthcoming book, How Education Works: Teaching, Technology, and Technique, and some of their consequences.

Title: It ain’t what you do, it’s the way that you do it, that’s what gets results

Abstract: In an educational system, no teacher ever teaches alone. Students teach themselves and, more often than not, teach one another. Textbook authors and illustrators, designers of open educational resources, creators of curricula, and so on play obvious teaching roles. However, beyond those obvious teachers there are always many others, from legislators to software architects, from professional bodies to furniture manufacturers. All of these teachers matter, not just in what they do but in how they do it: the techniques matter at least as much as the tools and methods. The resulting complex collective teacher is deeply situated and, for any given learner, inherently unpredictable in its effects. In this talk I will provide a theoretical model to explain how these many teachers may work together or in opposition, how educational systems evolve, and the nature of learning technologies. Along the way I will use the model to explain why there is and can be no significant difference between outcomes for online and in-person teaching, why research into teaching to perceived learning styles is doomed to fail, why small group tutoring will always (on average) be better than classroom teaching, and why quantitative research methods have little value in educational research.

Learning, Technology, and Technique | Canadian Journal of Learning and Technology

This is my latest paper, Learning, Technology, and Technique, in the current issue of the Canadian Journal of Learning and Technology (Vol. 48 No. 1, 2022).

Essentially, because this was what I was invited to do, the paper shrinks my over-10,000-word article Educational technology: what it is and how it works (itself a very condensed summary of my forthcoming book, due out Spring 2023) down to under 4,000 words that, I hope, more succinctly capture most of the main points of the earlier paper. I’ve learned quite a bit from the many responses I received to the earlier paper, and from the many conversations that ensued – thank you, all who generously shared your thoughts – so it is not quite the same as the original. I hope this one is better. In particular, I think/hope that this paper is much clearer about the nature and importance of technique than the older paper, and about the distinction between soft and hard technologies, both of which seemed to be the most misunderstood aspects of the original. There is, of course, less detail in the arguments, and a few aspects of the theory (notably relating to distributed cognition) are more focused on pragmatic examples, but most are still there, or implied. It is also a fully open paper, not just available for online reading, so please freely download it and share it as you will.

Here’s the abstract:

To be human is to be a user, a creator, a participant, and a co-participant in a richly entangled tapestry of technologies – from computers to pedagogical methods – that make us who we are as much as our genes. The uses we make of technologies are themselves, nearly always, also technologies, techniques we add to the entangled mix to create new assemblies. The technology of greatest interest is thus not any of the technologies that form that assembly, but the assembly itself. Designated teachers are never alone in creating the assembly that teaches. The technology of learning almost always involves the co-participation of countless others, notably learners themselves but also the creators of systems, artifacts, tools, and environments with and in which it occurs. Using these foundations, this paper presents a framework for understanding the technological nature of learning and teaching, through which it is possible to explain and predict a wide range of phenomena, from the value of one-to-one tutorials, to the inadequacy of learning style theories as a basis for teaching, and to see education not as a machine made of methods, tools, and systems but as a complex, creative, emergent collective unfolding that both makes us, and is made of us.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/14622408/my-latest-paper-learning-technology-and-technique-now-online-in-the-canadian-journal-of-learning-and-technology

The limits and limitations of business requirements

Athabasca University’s Digital Governance Committee recently got into a heated debate about whether and why we should support Zoom. It was a classic IT manageability vs user freedom debate and, as is often the way in such things, the suggested resolution was to strike up a working group/sub-committee of stakeholders to identify business requirements that the IT department could use to find an acceptable solution. This approach is eminently sensible, politically expedient, tried-and-tested, and profoundly inadequate.

As Henry Ford (probably never) said, “if I’d asked people what they wanted they would have said ‘a better horse’”.

A design approach that starts by gathering business requirements situates the problem in terms of the current solution, which is composed of layers of solutions to problems caused by other solutions. For simple ‘hygiene’ tech that serves a hard, well-defined business function – leave reporting, accounting, etc. – that’s normally fine, as long as you do properly capture the requirements and don’t gloss over things that matter, because you’re just building cogs to make the existing machine work more smoothly. However, for very soft social technologies like meetings, with potentially infinite ways of using them (by which I mean purposes, techniques, ways of assembling them with other technologies, and so on), no list of requirements could even begin to scratch the surface. The thing about soft technologies – meetings, writing, pencils, pedagogies, programmable computers, chisels, wheels, technologies of fire, groups, poetry, etc. – is that they don’t so much solve problems as create opportunities. They create adjacent possible empty niches. In other words, they are defined by the gaps they leave, much more than by the gaps they fill. What happens as a result of them is fundamentally non-deducible.

Solving different problems, creating different possibles

Meetings are assemblies of vast ranges of technologies and other phenomena, and they serve a vast number of purposes. Meetings are not just one technology but a container for an indefinitely large number of them. They are, though, by and large, solutions to in-person problems, many of which are constrained by physics, physiology, psychology, and other factors that do not apply or that apply differently online. Most webmeeting systems are attempts to replicate the same solutions or (more often) to replicate other webmeeting systems that have already done so, but they are doomed to be pale shadows of the original because there are countless things they cannot replicate, or can only replicate poorly. Among the phenomena that are the default in in-person meetings are, for example:

  • the immense salience brought about by travelling to a location, especially when it involves significant effort (lost in webmeetings);
  • the fact that it forces attention for a sustained period (most webmeeting software, and most ways of using it, make inattention much easier);
  • the social bonding that we have evolved to feel in the presence of others (not well catered for in webmeeting software);
  • the focus and meaning that comes from the ‘eventness’ of the occasion (diluted in webmeetings);
  • the ability to directly work together on an issue or artefact (limited in some ways in webmeetings, though potential exists for collaborative construction of digital artefacts);
  • the inability to invisibly escape (easy in most webmeetings);
  • the microexpressions, postures, movements, smells, etc that support communication (largely lost in webmeetings);
  • the social bonding value of sharing food and drink (lost in webmeetings);
  • the blurred boundaries of entering and leaving, the potential to leave together (usually lost in webmeetings);
  • the bonding that occurs in having a shared physical experience, including adversities such as a room that is too hot, roadworks outside, wasps in the room, etc, as well as good things like the smell of good coffee or luxurious chairs (not remotely possible in webmeetings, apart from when the tech fails – but then the meeting fails too);
  • the support for nuances of verbal interaction – knowing when it’s OK to interrupt, being able to sigh, talk at once, etc, not to mention having immediate awareness of who is speaking (webmeetings mostly suck at this);
  • the ability to cluster with others – to sit next to people you know (or don’t know), for instance (rarely an option in most webmeetings, and nothing like as salient or rich in potential as its in-person counterpart even when allowed);
  • the salience of being in a space, with all the values, history, power relationships, and so on that it embodies, from who sits where to which room is chosen (hardly a shadow of this in most webmeetings);
  • the ability to stand up and walk around together (a motion-sickness-inducing experience in webmeetings);
  • the problems and benefits of both over-crowding and excessive sparsity (very different in webmeetings);
  • the means to seamlessly integrate and employ other technologies, including every digital technology as well as paper, dance, desks, chairs, whiteboards, pins, clothing, coffee, doors, etc, etc, etc. (webmeetings offer a tiny fraction of this);
  • and so on.

A few of these might be replicated in current or future webmeeting software, though usually only in caricature. Most simply cannot be replicated at all, even if we could meet as virtual personas in Star Trek’s holodecks. Of course, there are also many things that we should be grateful are not replicated in online meetings: conspicuous body odour, badly designed meeting rooms, schedule conflicts, and so on, as well as the unwanted consequences of most of the phenomena above. These, too, are phenomena that the technologies of meetings are designed around. In-person meetings are incredibly highly evolved technologies, making use of technological and non-technological phenomena in immensely subtle ways, as well as having layers of counter-technology a kilometre deep, from social mores and manners to Robert’s Rules, from meeting tables to pens and note-taking strategies. Much of the time we don’t even notice that there are any technologies involved at all (as Danny Hillis quipped, ‘technology’ is anything invented after you were born).

Webmeetings, though, also have distinctive phenomena that can be exploited, such as:

  • the ease of entering and leaving (so breaks are easier to take, they don’t need to last a long time, people can dip in and out, etc);
  • the automation of scheduling and note-taking;
  • the means to record all that occurs;
  • the means to directly share digital tools;
  • the fact that people occupy different spaces (often with tools at their disposal that would be unavailable in a shared meeting space);
  • the captions for the hard of hearing;
  • the integrated backchannels of text chat.

These are different kinds of problem space with different adjacent possibles as well as different constraints. It therefore makes no sense to blindly attempt to replicate in-person meetings when the problems and opportunities are so different. We don’t (or shouldn’t) teach online in the same way we teach in the classroom, so why should we try to use meetings in the same way? For that matter, why have meetings at all?

Dealing with the hard stuff

Some constraints are quite easy to specify. If a matter under discussion needs to be kept private, say, that limits the range of options, albeit that, for such a soft technology as a meeting, privacy needs may vary considerably, and what works for one context may fail abysmally for another. Similarly for security, accessibility, learnability, compatibility, interoperability, cost, reliability, maintainability, longevity, and other basic hygiene concerns. There are normally hard constraints defining a baseline, but it is a fuzzy baseline that can be moved in different contexts for different people and different uses. No one wants unreliable, insecure, expensive, incompatible, unusable, buggy, privacy abusing software but most of us nonetheless use Microsoft products.

It is also not completely unreasonable to look for specific known business requirements that need to be met. However, there are enormous risks of duplicating solutions to non-existent problems. It is essential, therefore, to try to find ways of understanding the problems themselves, as much as possible in isolation from existing solutions. It would be a bad requirement to simply specify that people should be able to see and hear one another in real-time, for example: that is a technological solution based on the phenomena that in-person meetings use, not a requirement. It is certainly a very useful phenomenon that might be exploited in any number of ways (we know that because our ancestors have done it since before humans walked the planet) but it tells us little about why the phenomenon matters, or what it is about it that matters.

It would be better, perhaps, to ask people what is wrong with in-person meetings. It still situates the requirements in the current problem space, but it looks more closely at the source rather than the copy. It makes it easier to ask what purposes being able to see and hear one another during in-person meetings serve, what phenomena it provides, on what phenomena (including those provided by other technologies) it depends, and what depends on it. From that we may uncover the business requirements that seeing and hearing other people actually meet. However, it is incredibly tricky to ask such questions in the abstract: the problem space is vast, complex, diverse, and deeply bound up in what we are familiar with, not what is possible.

It might help to make the familiar unfamiliar, for instance, by holding in-person meetings wearing blindfolds, or silently, or to attempt to conduct a meeting using only sticky notes (approaches I have used in my own teaching about communication technologies, as it happens). This kind of exercise forcibly creates a new problem space so that people can wonder about what is lost, what is gained, reasons for doing things, and so on. If you do enough of that, you might start to uncover what matters, and (perhaps) some of the reasons we have meetings in the first place.

Exploring the adjacent possible

Perhaps most importantly, though, soft technologies are not just solutions to problems. Soft technologies are, first and foremost, creators of opportunities, the vast majority of which we will never begin to imagine. Soft technology design is therefore, and must be, a partnership between the person and the technology: it’s not just about creating a tool for a task but about having a conversation with that tool, asking what it can do for us and wondering where it might lead us. What’s interesting about the ubiquitous backchannel feature of webmeetings, for instance, is that it did not find its way into the software as a result of a needs assessment or analysis of business requirements. It was, instead, an early (and deeply imperfect) attempt at replicating what could be replicated of synchronous meetings before multimedia communication became possible. When designing early web conferencing systems, no one said ‘we need a way of typing so that others can see it’. They looked at what could be done and said ‘hey, we can use that’. The functionality persisted and has become nearly ubiquitous because it’s easy to implement and obviously useful. It’s an exaptation, though, not the product of a pre-planned intentional design process. It’s a side-effect of something else we did – a poor solution to an existing problem – that created new phenomena we could co-opt for other purposes. New adjacent possible empty niches emerged from it.

One way to explore such niches would be to give people the chance to play with a wide range of existing ways of addressing the same problem space. A lot of people have turned their attention to these issues, so it makes sense to mine the creativity of the crowd. There are systems like Discord or MatterMost, which represent a different category of hybrid asynchronous/synchronous tool, for instance, blurring the temporal boundaries. There are spatial metaphor systems with isometric interfaces, like Spatial or Ovice, which can allow more intuitive clustering, perhaps contributing to a greater sense of the presence of others, while enabling novel approaches to (say) voting, and so on. There are immersive systems that more literally replicate spaces, like Mozilla Hubs or OpenSim. I hold out little hope for those, but they do have some non-literal features – especially in ways they allow impossible spaces to be created – that are quite interesting. There are instant messengers like Telegram or Signal, which offer ambient awareness as well as conventional meeting support (MS Teams, reflecting its Skype origins, has that too). There are games and game-like environments like Gather or Minecraft, which create new kinds of world as well as providing real-time conferencing features. And there are much smarter webmeeting systems like Around (that largely solves almost all audio problems, that – crucially – can make the meeting a part of a user’s environment rather than a separate space for gathering, that rethinks text chat as a transient, person-focused act rather than a separate text-stream, that makes working together on a digital artefact a richly engaging process, that automatically sends a record to participants, and more). And there’s a wealth of research-based systems that we have built over the past few decades, including many of my own, that do things differently, or that use different metaphors. Computer-supported collaborative argumentation tools, for instance, or systems that leverage social navigation (I particularly love Viégas and Donath’s ChatCircles from the late 1990s, for instance), and so on. They all make new problems, and all have flaws of one kind or another, but thinking about how and why they are different helps to focus on what we are trying to do in the first place.

Perhaps the best of all ways to explore those adjacent possible empty niches is to make them: not to engineer them according to a specification, but to tinker and play. I’ve written about this before (e.g. here and, paywalled, here, summarized by Stefanie Panke here). Tinkering as a research methodology is a process of exploration not of what exists but of what does not. It’s a journey into the adjacent possible, with each new creation or modification creating new adjacent possibles, a step-by-step means of reaching into and mapping the unknown. We don’t all have the capacity (in skills, time, or patience) to create software from scratch, but we can assemble what we already have. We can, for instance, try to add plugins to existing systems: it is seldom necessary to write your own WordPress plugin, for example, because tens of thousands of people have already done so. Or we can make use of frameworks to construct new systems: the Elgg system underpinning the Landing, for example, does require some expertise to build new components, but a lot can be achieved by assembling and/or modifying what others have built. Or, if standards are followed, we can assemble services as needed: there are standards like XCON, XMPP/Jabber, IRC, and so on that make this possible. And we don’t need to create software or hardware at all in order to dream. Hand-drawn mockups can create new possibilities to explore. Small steps into the unknown are better than no steps at all.
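
To make that standards-based option a little more concrete, here is a minimal, hedged sketch of what assembling a synchronous space from an open standard can look like: a toy IRC client, written against the public IRC protocol using nothing but Python’s standard library. The server and channel names are invented placeholders, and a real experiment would need TLS, error handling, and much more; the point is only that an open, line-oriented protocol leaves big, inviting gaps for tinkering.

```python
# A minimal sketch of assembling a synchronous space from an open standard (IRC),
# using only Python's standard library. The server and channel are placeholders:
# point it at whatever server you run or trust. This is tinkering, not production.
import socket

SERVER = "irc.example.org"      # hypothetical server
PORT = 6667
NICK = "tinkerer"
CHANNEL = "#adjacent-possible"  # hypothetical channel

def send(sock: socket.socket, line: str) -> None:
    # IRC is plain text, one command per line, terminated by CRLF.
    sock.sendall((line + "\r\n").encode("utf-8"))

with socket.create_connection((SERVER, PORT)) as sock:
    send(sock, f"NICK {NICK}")
    send(sock, f"USER {NICK} 0 * :a tinkering experiment")

    registered = False
    buffer = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buffer += data
        while b"\r\n" in buffer:
            raw, buffer = buffer.split(b"\r\n", 1)
            line = raw.decode("utf-8", errors="replace")
            if line.startswith("PING"):
                # Keep the connection alive: echo the server's ping token back.
                send(sock, "PONG " + line.split(" ", 1)[1])
            elif not registered and " 001 " in line:
                # Numeric 001 means the server has accepted us; now we can join.
                registered = True
                send(sock, f"JOIN {CHANNEL}")
                send(sock, f"PRIVMSG {CHANNEL} :hello - a meeting space in ~40 lines")
            else:
                print(line)
```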

Stop looking for solutions

Webmeetings that attempt to replicate their in-person inspirations are unlikely to ever afford the flexibility of in-person meetings, because they have fewer phenomena to orchestrate and we are never going to be as adept at using them. The gaps they leave for us to fill are smaller, and our capacity to fill those gaps is less well-developed. However, digital systems can provide a great many new and different phenomena that, with creativity and inspiration, may meet our needs much better. Without the constraints of physical spaces we can invent a new physics of the digital. As long as we treat the problem as one of replicating meetings then it makes little difference what we choose: Zoom, Teams, Webex, Connect, BBB, Jitsi, whatever – the feature set may vary, there may be differences in reliability, security, cost, etc but any of them will do the job. The problem is that it is the wrong job. We already pay for and use at least three major systems for synchronous meetings at AU, as well as a bunch of minor ones, and that is nothing like enough. Those that begin to depart from the replication model – Around being my current favourite – are a step in the right direction, while those that double down on it (notably most immersive environments) are probably a step in the wrong direction. It is not about going forward or backward, though: it is about going sideways.

It is not too tricky to experiment in this particular field. For most digital systems we create, our decisions normally haunt us for years or decades, because we become locked into them with our data. Synchronous technologies can, with provisos, be swapped around and changed at will. Sure, there can be issues with recording and transcripts, there can be a training burden, contracts can be expensive and hard to escape, and tech support may be a little more costly but, for the most part, if we don’t like something then we can drop it and try something else.

I don’t have a solution to choosing or making the right piece of software for AU’s needs, because there isn’t one. There are countless possible solutions, none of which will suit everyone, many of which will provide parts that might be useful to most people, and all of which will have parts or aspects that won’t. But I do know that the way to approach the problem is not to have meetings to determine business requirements. The solution is to find ways of discovering the adjacent possible, to seek inspiration, to look sideways and forwards instead of backwards. We don’t need simple problem-solving for this kind of situation (or rather, it is quite inadequate on its own): we need to find ways to dream, ways to wonder, ways to engage in the act of creation, ways to play.


Pedagogical Paradigms in Open and Distance Education | Handbook of Open, Distance, and Digital Education

This is a chapter by me and Terry Anderson for Springer’s new Handbook of Open, Distance, and Digital Education that updates and refines our popular (1658 citations, and still rising, for the original paper alone) but now long-in-the-tooth ‘three generations’ model of distance learning pedagogy. We have changed the labels for the pedagogical families this time round to ones that I think are more coherent, divided according to their epistemological underpinnings – the objectivist, the subjectivist, and the complexivist – and we have added some speculations about whether further paradigms might have started to emerge in the 11 years since our original paper was published. Our main conclusion, though, is that no single pedagogical paradigm will dominate in the foreseeable future: that we are in an era of great pedagogical diversity, and that this diversity will only increase as time goes by.

The three major paradigms

Objectivist: previously known as ‘behaviourist/cognitivist’, what characterizes objectivist pedagogies is that they are both defined by assumptions of an objective external reality, and driven by (usually teacher-defined) objectives. It’s a paradigm of teaching, where teachers are typically sages on the stage using methods intended to achieve effective learning of defined facts and skills. Examples include behaviourism, learning styles theories, brain-based approaches, multiple intelligence models, media theories, and similar approaches where the focus is on efficient transmission and replication of received knowledge.

Subjectivist: formerly known as ‘social constructivist’, subjectivist pedagogies are concerned with – well – subjects: they are concerned with the personal and social co-construction of knowledge, recognizing its situated and always unique nature, saying little about methods but a lot about meaning-making. It’s a paradigm of learning, where teachers are typically guides on the side, supporting individuals and groups to learn in complex, situated contexts. Examples include constructivist, social constructivist, constructionist, and similar families of theory where the emphasis is as much on the learners’ growth and development in a human society as it is on what is being learned.

Complexivist: originally described as ‘connectivist’ (which was confusing and inaccurate), complexivist pedagogies acknowledge and exploit the complex nature of our massively distributed cognition, including its richly recursive self-organizing and emergent properties, its reification through shared tools and artefacts, and its many social layers. It’s a paradigm of knowledge, where teachers are fellow learners, co-travellers and role models, and knowledge exists not just in individual minds but in our minds’ extensions, in both other people and what we collectively create. Examples include connectivism, rhizomatic learning, distributed cognition, cognitive apprenticeship, networks of practice, and similar theories (including my own co-participation model, as it happens). We borrow the term ‘complexivist’ from Davis and Sumara, whose 2006 book on the subject is well worth reading, albeit grounded mainly in in-person learning.

No one paradigm dominates: all typically play a role at some point of a learning journey, all build upon and assemble ideas that are contained in the others (theories are technologies too), and all have been around as ways of learning for as long as humans have existed.

Emerging paradigms

Beyond these broad families, we speculate on whether any new pedagogical paradigms are emerging or have emerged within the 12 years since we first developed these ideas. We come up with the following possible candidates:

Theory-free: this is a digitally native paradigm that typically employs variations of AI technologies to extract patterns from large amounts of data on how people learn, and that provides support accordingly. This is the realm of adaptive hypermedia, learning analytics, and data mining. While the vast majority of such methods are very firmly in the objectivist tradition (the models are trained or designed by identifying what leads to ‘successful’ achievement of outcomes), a few look beyond defined learning products into social engagement or other measures of the learning process, or seek open-ended patterns in emergent collective behaviours. We see the former as a dystopic trend, but find promise in the latter, notwithstanding the risks of filter bubbles and systemic bias.
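
As a toy illustration of what ‘theory-free’ means in practice (and of how little it says on its own), here is a hedged sketch that clusters entirely fabricated learner-activity data using scikit-learn’s k-means: the feature names, numbers, and cluster guesses are invented for the example, not drawn from any real analytics system.

```python
# Illustrative only: fabricated weekly activity data for 300 hypothetical learners.
# No theory of learning goes in; the algorithm simply proposes groupings, and the
# interpretive work (and risk) remains with the humans reading the output.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Columns: [logins, forum posts, minutes of video watched] - invented features.
learners = np.vstack([
    rng.normal([20, 15, 300], [4, 5, 60], size=(100, 3)),  # steady engagers?
    rng.normal([5, 1, 40], [2, 1, 20], size=(100, 3)),     # quiet, or at risk?
    rng.normal([30, 2, 500], [6, 1, 80], size=(100, 3)),   # binge watchers?
])

model = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = model.fit_predict(learners)

for k in range(3):
    centre = model.cluster_centers_[k]
    print(f"cluster {k}: n={np.sum(labels == k)}, "
          f"logins={centre[0]:.1f}, posts={centre[1]:.1f}, video_mins={centre[2]:.1f}")
```

Whether such clusters mean anything, and what (if anything) should be done about them, is exactly the gap between pattern-finding and pedagogy that makes the dystopic reading of this paradigm so plausible.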

Hologogic: this is a nascent paradigm that treats learning as a process of enculturation. It’s about how we come to find our places in our many overlapping cultures, where belonging to and adopting the values and norms of the sets to which we belong (be it our colleagues, our ancestors, our subject-matter peers, or whatever) is the primary focus. There are few theories that apply to this paradigm, as yet, but it is visible in many online and in-person communities, and is/has been of particular significance in collectivist cultures where the learning of one is meaningless unless it is also the learning of all (sometimes including the ancestors). We see this as a potentially healthy trend that takes us beyond the individualist assumptions underpinning much of the field, though there are risks of divisions and echo chambers that pit one culture against others. We borrow the term from Cumbie and Wolverton.

Bricolagogic: this is a free-for-all paradigm, a kind of meta-pedagogy in which any pedagogical method, model, or theory may be used, chosen for pragmatic or personal reasons, but in which the primary focus of learning is in choosing how (in any given context) we should learn. Concepts of charting and wayfinding play a strong role here. This resembles what we originally identified as an emerging ‘holistic’ model, but we now see it not as a simple mish-mash of pedagogical paradigms but rather as a pedagogic paradigm in its own right.

Another emerging paradigm?

I have recently been involved in a lengthy Twitter thread, started by Tim Fawns on the topic of his recent paper on entangled pedagogy, which presents a view very similar indeed to my own (e.g. here and here), albeit expressed rather differently (and more eloquently). There are others in the same thread who express similar views. I suggested in this thread that we might be witnessing the birth of a new ‘entanglist’ paradigm that draws very heavily on complexivism (and that could certainly be seen as part of the same family) but that views the problem from a rather different perspective. It is still very much about complexity, emergence, extended minds, recursion, and networks, and it negates none of that, but it draws its boundaries around the networked nodes at a higher level than theories like Connectivism, yet with more precision than those focused on human learning interactions such as networks of practice or rhizomatic learning. Notably, it leaves room for design (and designed objects), for meaning, and for passion as part of the deeply entangled complex system of learning in which we all participate, willingly or not. It’s not specifically a pedagogical model – it’s broader than that – though it does imply many things about how we should and should not teach, and about how we should understand pedagogies as part of a massively distributed system in which designated teachers account for only a fraction of the learning and teaching process. The title of my book on the subject (that has been under review for 16 months – grrr) sums this up quite well, I think: “How Education Works”. The book has now (as of a few days ago) received a very positive response from reviewers and is due to be discussed by the editorial committee at the end of this month, so I’m hoping that it may be published in the not-too-distant future. Watch this space!

Here’s the chapter abstract:

Building on earlier work that identified historical paradigm shifts in open and distance learning, this chapter is concerned with analyzing the three broad pedagogical paradigms – objectivist, subjectivist, and complexivist – that have characterized learning and teaching in the field over the past half century. It goes on to discuss new paradigms that are starting to emerge, most notably in “theory-free” models enabled by developments in artificial intelligence and analytics, hologogic methods that recognize the many cultures to which we belong, and a “bricolagogic,” theory-agnostic paradigm that reflects the field’s growing maturity and depth.

Reference

Dron J., Anderson T. (2022) Pedagogical Paradigms in Open and Distance Education. In: Zawacki-Richter O., Jung I. (eds) Handbook of Open, Distance and Digital Education. Springer, Singapore. https://doi.org/10.1007/978-981-19-0351-9_9-1

English version of my 2021 paper, “Technology, technique, and culture in educational systems: breaking the iron triangle”

Technology, technique, and culture in educational systems: breaking the iron triangle

This is the (near enough final) English version of my journal paper, translated into Chinese by Junhong Xiao and published last year (with a CC licence) in Distance Education in China. (Reference: Dron, Jon (2021).  Technology, technique, and culture in educational systems: breaking the iron triangle (translated by Junhong Xiao). Distance Education in China, 1, 37-49. DOI:10.13541/j.cnki.chinade.2021.01.005).

The underlying theory is the same as that in my paper Educational technology: what it is and how it works (Reference: Dron, J. Educational technology: what it is and how it works. AI & Soc 37, 155–166 (2022). https://doi.org/10.1007/s00146-021-01195-z direct link for reading, link to downloadable preprint), but this one focuses more on what it means for the ways we go about distance learning. It’s essentially about ways to solve the problems we created for ourselves by inappropriately transferring to a distance context solutions to problems that only arose in the context of in-person learning.

Here’s the abstract:
This paper presents arguments for a different way of thinking about how distance education should be designed. The paper begins by explaining education as a technological process, in which we are not just users of technologies for learning but coparticipants in their instantiation and design, implying that education is a fundamentally distributed technology. However, technological and physical constraints have led to processes (including pedagogies) and path dependencies in in-person education that have tended to massively over-emphasize the designated teacher as the primary controller of the process. This has resulted in the development of many counter technologies to address the problems this causes, from classrooms to grades to timetables, most of which have unnecessarily been inherited by distance education. By examining the different strengths and weaknesses of distance education, the paper suggests an alternative model of distance education that is more personal, more situated in communities and cultures, and more appropriate to the needs of learners and society.

I started working on a revised version of this (with a snappier title) to submit to an English language journal last year but got waylaid. If anyone is interested in publishing this, I’m open to submitting it!