Preprint – The human nature of generative AIs and the technological nature of humanity: implications for education

Here is a preprint of a paper I just submitted to MDPI’s Digital journal that applies the co-participation model that underpins How Education Works (and a number of my papers over the last few years) to generative AIs (GAIs). I don’t know whether it will be accepted and, even if it is, it is very likely that some changes will be required. This is a warts-and-all raw first submission. It’s fairly long (around 10,000 words).

The central observation around which the paper revolves is that, for the first time in the history of technology, recent generations of GAIs automate (or at least appear to automate) the soft technique that has, till now, been the sole domain of humans. Every technology we have previously created, be it physically instantiated, cognitive, organizational, structural, or conceptual, has left all of the soft parts of the orchestration to human beings.

The fact that GAIs replicate the soft stuff is a matter for some concern when they start to play a role in education, mainly because:

  • the skills they replace may atrophy or never be learned in the first place. This is not even slightly like replacing the hard skills of handwriting or arithmetic: we are talking about skills like creativity, problem-solving, critical inquiry, design, and so on. We’re talking about the very stuff that GAIs are trained on.
  • the AIs themselves are an amalgam, an embodiment of our collective intelligence, not actual people. You can spin up any kind of persona you like and discard it just as easily. Much of the crucially important hidden/tacit curriculum of education is concerned with relationships, identity, ways of thinking, ways of being, ways of working and playing with others. It’s about learning to be human in a human society. It is therefore quite problematic to delegate how we learn to be human to a machine with (literally and figuratively) no skin in the game, trained on a bunch of signals signifying nothing but more signals.

On the other hand, to not use them in educational systems would be as stupid as to not use writing. These technologies are now parts of our extended cognition, intertwingled with our collective intelligence as much as any other technology, so of course they must be integrated into our educational systems. The big questions are not about whether we should embrace them but how, and which soft skills they might replace that we wish to preserve or develop. I hope that we will value real humans and their inventions more, rather than less, though I fear that, as long as we retain the main structural features of our education systems without significant adjustments to how they work, we will no longer care, and we may lose some of our capacity for caring.

I suggest a few ways we might avert some of the greatest risks by, for instance, treating them as partners/contractors/team members rather than tools, by avoiding methods of “personalization” that simply reinforce existing power imbalances and pedagogies designed for better indoctrination, by using them to help connect us and support human relationships, by doing what we can to reduce extrinsic drivers, by decoupling learning and credentials, and by doubling down on the social aspects of learning. There is also an undeniable explosion in adjacent possibles, leading to new skills to learn, new ways to be creative, and new possibilities for opening up education to more people. The potential paths we might take from now on are unprestatable and multifarious but, once we start down them, resulting path dependencies may lead us into great calamity at least as easily as they may expand our potential. We need to make wise decisions now, while we still have the wisdom to make them.

MDPI invited me to submit this article free of their normal article processing charge (APC). The fact that I accepted is therefore very much not an endorsement of APCs, though I respect MDPI’s willingness to accommodate those who find payment difficult, the good editorial services they provide, and the fact that everything they publish is open. I was not previously familiar with the Digital journal itself. It has been publishing four issues a year since 2021, mostly offering a mix of reports on application designs and literature reviews. The quality seems good.

Abstract

This paper applies a theoretical model to analyze the ways that widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. The model extends Brian Arthur’s insight that technologies are orchestrations of phenomena to our use, explaining the nature of human participation in their enactment, whether as part of the orchestration (hard technique, in which our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing the soft and hard techniques of humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft technique that, until now, was humanity’s sole domain: the very things that technologies enabled us to do can now be done by the technologies themselves. The consequences for what, how, and even whether we learn are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20512771/preprint-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

10 minute chats on Generative AI – a great series, now including an interview with me

This is a great series of brief interviews between Tim Fawns and an assortment of educators and researchers from across the world on the subject of generative AI and its impact on learning and teaching.

The latest (tenth in the series) is with me.

Tim asked us all to come up with 3 key statements beforehand that he used to structure the interviews. I only realized that I had to do this on the day of the interview, so mine are not very well thought through, but what follows is a rough summary of what I would have said about each had my wits been sharper. The reality was, of course, not quite like this: I meandered around a few other ideas and we ran out of time, but I think this captures the gist of what I actually wanted to convey:

Key statement 1: Most academics are afraid of AIs being used by students to cheat. I am afraid of AIs being used by teachers to cheat.

For much the same reasons that many of us balk at students using, say, ChatGPT to write part or all of their essays or code, I think we should be concerned when teachers use it to replace or supplement their teaching, whether it be for writing course outlines, assessing student work, or acting as intelligent tutors (to name but a few common uses). The main thing that bothers me is that human teachers (including other learners, authors, and many more) do not simply help learners to achieve specified learning outcomes. In the process, they model ways of thinking, values, attitudes, feelings, and a host of other hard-to-measure tacit and implicit phenomena that relate to ways of being, ways of interacting, ways of responding, and ways of connecting with others. There can be huge value in seeing the world through another’s eyes, interacting with them, adapting your responses, seeing how they adapt to yours, and so on. This is a critical part of how we learn the soft stuff: the ways of doing things, the meaning, the social value, the connections with our own motivations, and so on. In short, education is as much about being a human being, living in human communities, as it is about learning facts and skills. Even when we are not interacting but, say, simply reading a book, we are learning not just the contents but the ways the contents are presented, the quirks, the passions, the ways the authors think of their readers, their implicit beliefs, and so on.

While a generative AI can mimic this pretty well, it is by nature a kind of average, a blurry reconstruction mashed up from countless examples of the work of real humans. It is human-like, not human. It can mimic a wide assortment of nearly-humans without identity, without purpose, without persistence, without skin in the game. As things currently stand (though this will change) it is also likely to be pretty bland – good enough, but not great.

It might be argued that this is better than nothing at all, or that it augments rather than replaces human teachers, or that it helps with relatively mundane chores, or that it provides personalized support and efficiencies in learning hard skills, or that it allows teachers to focus on those human aspects, or even that using a generative AI is a good way of learning in itself. Right now and in the near future, this may be true, because we are in a system on the verge of disruption, not yet in the thick of it, and we come to it with all our existing skills and structures intact. My concern is what happens as it scales and becomes ubiquitous; as the bean-counting focus on efficiencies that relate solely to measurable outcomes increasingly crowds out the time spent with other humans; as the generative AIs feed on one another, becoming more and more divorced from their human originals; as the skills of teaching that are replaced by AIs atrophy in the next generation; as the time we spend with one another is replaced with time spent with not-quite-human simulacra; as the AIs themselves become more and more a part of our cognitive apparatus in both what is learned and how we learn it. There are monkey’s paws all the way down the line: for everything that might be improved, there are at least as many things that can and will get worse.

Key statement 2: We and our technologies are inherently intertwingled so it makes no more sense to exclude AIs from the classroom than it would to exclude, say, books or writing. The big questions are about what we need to keep.

Our cognition is fundamentally intertwingled with the technologies that we use, both physical and cognitive, and those technologies are intertwingled with one another, and that’s how our collective intelligence emerges. For all the vital human aspects mentioned above, a significant part of the educational process is concerned with building cognitive gadgets that enable us to participate in the technologies of our cultures, from poetry and long division to power stations and web design. Through that participation our cognition is highly distributed, and our intelligence is fundamentally collective. Now that generative AIs are part of that, it would be crazy to exclude them from classrooms or to prohibit their use in assessments. It does, however, raise more than a few questions about what cognitive activities we still need to keep for ourselves.

Technologies expand or augment what we can do unaided. Writing, say, allows us (among other things) to extend our memories. This creates many adjacent possibles, including sharing those memories with others, and constructing more complex ideas using scaffolding that would be very difficult to build on our own because our memories are not that great.

Central to the nature of writing is that, as with most technologies, we don’t just use it but we participate in its enactment, performing part of the orchestration ourselves (for instance, we choose what words and ideas we write – the soft stuff), but also being part of its orchestration (e.g. we must typically spell words and use grammar sufficiently uniformly that others can understand them – the hard stuff).

In the past, we used to do nearly all of that writing by hand. Handwriting was a hard skill that had to be learned well enough that others could read what we had written, a process that typically required years of training and practice, demanding mastery of a wide range of technical proficiencies from spelling and punctuation to manual dexterity and the ability to sharpen a quill/fill a fountain pen/insert a cartridge, etc. To an increasingly large extent we have now offloaded many of those hard skills, first to typewriters and now to computers. While some of the soft aspects of handwriting have been lost – the cognitive processes that affect how we write and how we think, the expressiveness of the never-perfect ways we write letters on a page, etc – this was a sensible thing to do. From a functional perspective, text produced by a computer is far more consistent, far more readable, far more adaptable, far more reusable, and far more easily communicated. Why should we devote so much effort and time to learning to be part of a machine when a machine can do that part for us, and do it better?

Something that can free us from having to act as an inflexible machine seems, by and large, like a good thing. If we don’t have to do it ourselves then we can spend more time and effort on what we do and how we do it: the soft stuff, the creative stuff, the problem-solving stuff, and so on. It allows us to be more capable, to reach further, to communicate more clearly. There are some really big issues relating to the ways that the constraints of handwriting – the relative difficulty of making corrections, the physicality of the movements, the ways our brains are changed by the practice – result in different ways of thinking, some of which may be very valuable. But, as Postman wrote, all technologies are Faustian bargains, involving losses and harms as well as gains and benefits. A technology that thrives is usually (at least in the short term) one in which the gains are perceived to outweigh the losses. And, even when largely replaced, old technologies seldom if ever die, so it is usually possible to retrieve what is lost, at least until the skills atrophy, components are no longer made, or the technologies are designed to die (old printers with chip-protected cartridges that are no longer made, for instance).

What is fundamentally different about generative AIs, however, is that they allow us to offload exactly the soft, creative, problem-solving aspects of our cognition that technologies normally support and expand to a machine. They provide extremely good pastiches of human thought and creativity that can act well enough to be considered drop-in replacements. In many cases, they can do so a lot better – from the point of view of someone seeing only the outputs – than an average human. An AI image generator can draw a great deal better than me, for instance. But, given that these machines are now part of our extended, intertwingled minds, what is left for us? What parts of our minds should they or will they replace? How can we use them without losing the capacity to do at least some of the things they do better than or as well as us? What happens if we lack those cognitive gadgets we never installed in our minds because AIs did it for us? This is not the same as, say, not knowing how to make a bow and arrow or write in cuneiform. Even when atrophied, such skills can be recovered. This is the stuff that we learn the other stuff for. It is especially important in the field of education which, traditionally at least, has been deeply concerned with cultivating the hard skills largely if not solely so that we can use them creatively, socially, and productively once they are learned. If the machines are doing that for us, what is our role? This is not (yet) Kurzweil’s singularity, the moment when machines exceed our own intelligence and start to develop on their own, but it is the (drawn-out, fragmented) moment that machines have become capable of participating in soft, creative technologies on an at least equal footing with humans. That matters. This leads to my final key statement.

Key statement 3: AIs create countless new adjacent possible empty niches. They can augment what we can do, but we need to go full-on Amish when deciding whether they should replace what we already do.

Every new creation in the world opens up new and inherently unprestatable adjacent possible empty niches for further creation, not just in how it can be used as part of new assemblies but in how it connects with those that already exist. It’s the exponential dynamic ratchet underlying natural evolution as much as technology, and it is what results in the complexity of the universe. The rapid acceleration in use and complexity of generative AIs – itself enabled by the adjacent possibles of the already highly disruptive Internet – that we have seen over the past couple of years has resulted in a positive explosion of new adjacent possibles, in turn spawning others, and so on, at a hitherto unprecedented scale and speed.

This is exactly what we should expect in an exponentially growing system. It makes it increasingly difficult to predict what will happen next, or what skills, attitudes, and values we will need to deal with it, or how we will be affected by it. As the number of possible scenarios increases at the same exponential rate, and the time between major changes gets ever shorter, patterns of thinking, ways of doing things, skills we need, and the very structures of our societies must change in unpredictable ways, too. Occupations, including in education, are already being massively disrupted, for better and for worse. Deeply embedded systems, from assessment for credentials to the mass media, are suddenly and catastrophically breaking. Legislation, regulations, resistance from groups of affected individuals, and other checks and balances may slightly alter the rate of change, but likely not enough to matter. Education serves both a stabilizing and a generative role in society, but educators are at least as unprepared and at least as disrupted as anyone else. We don’t – in fact we cannot – know what kind of world we are preparing our students for, and the generative technologies that now form part of our cognition are changing faster than we can follow. Any AI literacies we develop will be obsolete in the blink of an eye. And, remember, generative AIs are not just replacing hard skills. They are replacing the soft ones, the things that we use our hard skills to accomplish.

This is why I believe we would do well to heed the example of the Amish, who (contrary to popular belief) are not opposed to modern technologies but, in their communities, debate and discuss the merits and disadvantages of any technology that is available, considering the ways in which it might affect or conflict with their values, only adopting those agreed to be, on balance, good, and only doing so in ways that accord with those values. Different communities make different choices according to their contexts and needs. In order to do that, we have to have values in the first place. But what are the values that matter in education?

With a few exceptions (laws and regulations being the main ones) technologies do not determine how we will act but, through the ways they integrate with our shared cognition, existing technologies, and practices, they have a lot of momentum and, unchecked, generative AIs will inherit the values associated with what currently exists. In educational systems that are increasingly regulated by government mandates that focus on nothing but their economic contributions to industry, where success or failure is measured solely by proxy criteria like predetermined outcomes of learning and enrolments, where a millennium of path dependencies still embodies patterns of teacher control and indoctrination that worked for mediaeval monks and skillsets that suited the demands of factory owners during the industrial revolution, this will not end well. Now seems the time we most need to reassert and double down on the human, the social, the cultural, the societal, the personal, and the tacit value of our institutions. This is the time to talk about those values, locally and globally. This is the time to examine what matters, what we care about, what we must not lose, and why we must not lose it. Tomorrow it will be too late. I think this is a time of great risk but it is also a time of great opportunity, a chance to reflect on and examine the value and nature of education itself. Some of us have been wanting to have these conversations for decades.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20146256/10-minute-chats-on-generative-ai-a-great-series-now-including-an-interview-with-me

My keynote slides for Confluence 2023 – Heads in the clouds: being human in the age of cloud computing

These are the slides from my keynote today (or, in my land, yesterday) at Confluence 2023, hosted by Amity University in India. It was a cloud computing conference, so quite a way outside my area of greatest expertise, but it gave me a chance to apply the theory of technology developed in my forthcoming book to a different context. The illustrations for the slides are the result of a conversation between me and MidJourney (more of an argument, which MidJourney tended to win), which is quite a nice illustration of the interplay of hard and soft technologies, the adjacent possible, soft technique, and so on.

Unsurprisingly, because education is a fundamentally technological phenomenon, much the same principles that apply to education also apply to cloud computing, such as: build from small, hard pieces; valorize openness, diversity, and connection; seek the adjacent possible; and remember that the whole assembly is the only thing that matters – hence the central principle that how you do it matters far more than what you do.

Slides from my Confluence 2023 keynote

Learning, Technology, and Technique | Canadian Journal of Learning and Technology

This is my latest paper, Learning, Technology, and Technique, in the current issue of the Canadian Journal of Learning and Technology (Vol. 48 No. 1, 2022).

Essentially, because this was what I was invited to do, the paper shrinks my over-10,000-word article Educational technology: what it is and how it works (itself a very condensed summary of my forthcoming book, due out Spring 2023) down to under 4,000 words that, I hope, more succinctly capture most of the main points of the earlier paper. I’ve learned quite a bit from the many responses to the earlier paper I received, and from the many conversations that ensued – thank you, all who generously shared their thoughts – so it is not quite the same as the original. I hope this one is better. In particular, I think/hope that this paper is much clearer about the nature and importance of technique than the older paper, and about the distinction between soft and hard technologies, both of which seemed to be the most misunderstood aspects of the original. There is, of course, less detail in the arguments, and a few aspects of the theory (notably relating to distributed cognition) are more focused on pragmatic examples, but most are still there, or implied. It is also a fully open paper, not just available for online reading, so please freely download it, and share it as you will.

Here’s the abstract:

To be human is to be a user, a creator, a participant, and a co-participant in a richly entangled tapestry of technologies – from computers to pedagogical methods – that make us who we are as much as our genes. The uses we make of technologies are themselves, nearly always, also technologies, techniques we add to the entangled mix to create new assemblies. The technology of greatest interest is thus not any of the technologies that form that assembly, but the assembly itself. Designated teachers are never alone in creating the assembly that teaches. The technology of learning almost always involves the co-participation of countless others, notably learners themselves but also the creators of systems, artifacts, tools, and environments with and in which it occurs. Using these foundations, this paper presents a framework for understanding the technological nature of learning and teaching, through which it is possible to explain and predict a wide range of phenomena, from the value of one-to-one tutorials, to the inadequacy of learning style theories as a basis for teaching, and to see education not as a machine made of methods, tools, and systems but as a complex, creative, emergent collective unfolding that both makes us, and is made of us.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/14622408/my-latest-paper-learning-technology-and-technique-now-online-in-the-canadian-journal-of-learning-and-technology

The limits and limitations of business requirements

Athabasca University’s Digital Governance Committee recently got into a heated debate about whether and why we should support Zoom. It was a classic IT-manageability-versus-user-freedom debate and, as is often the way in such things, the suggested resolution was to strike a working group/sub-committee of stakeholders to identify business requirements that the IT department could use to find an acceptable solution. This approach is eminently sensible, politically expedient, tried-and-tested, and profoundly inadequate.

As Henry Ford (probably never) said, “if I’d asked people what they wanted they would have said ‘a better horse’”.

A design approach that starts by gathering business requirements situates the problem in terms of the current solution, which is composed of layers of solutions to problems caused by other solutions. For simple ‘hygiene’ tech that serves a hard, well-defined business function – leave reporting, accounting, etc – as long as you do properly capture the requirements and don’t gloss over things that matter, that’s normally fine, because you’re just building cogs to make the existing machine work more smoothly. However, for very soft social technologies like meetings, with potentially infinite ways of using them (by which I mean purposes, techniques, ways of assembling them with other technologies, and so on), no list of requirements could even begin to scratch the surface. The thing about soft technologies – meetings, writing, pencils, pedagogies, programmable computers, chisels, wheels, technologies of fire, groups, poetry, etc – is that they don’t so much solve problems as create opportunities. They create adjacent possible empty niches. In other words, they are defined by the gaps they leave, much more than the gaps they fill. What happens as a result of them is fundamentally non-deducible.

Solving different problems, creating different possibles

Meetings are assemblies of vast ranges of technologies and other phenomena, and they serve a vast number of purposes. Meetings are not just one technology but a container for an indefinitely large number of them. They are, though, by and large, solutions to in-person problems, many of which are constrained by physics, physiology, psychology, and other factors that do not apply or that apply differently online. Most webmeeting systems are attempts to replicate the same solutions or (more often) to replicate other webmeeting systems that have already done so, but they are doomed to be pale shadows of the original because there are countless things they cannot replicate, or can only replicate poorly. Among the phenomena that are the default in in-person meetings are, for example:

  • the immense salience brought about by travelling to a location, especially when it involves significant effort (lost in webmeetings);
  • the fact that it forces attention for a sustained period (most webmeeting software, and most ways of using it, make inattention much easier);
  • the social bonding that we have evolved to feel in the presence of others (not well catered for in webmeeting software);
  • the focus and meaning that comes from the ‘eventness’ of the occasion (diluted in webmeetings);
  • the ability to directly work together on an issue or artefact (limited in some ways in webmeetings, though potential exists for collaborative construction of digital artefacts);
  • the inability to invisibly escape (easy in most webmeetings);
  • the microexpressions, postures, movements, smells, etc that support communication (largely lost in webmeetings);
  • the social bonding value of sharing food and drink (lost in webmeetings);
  • the blurred boundaries of entering and leaving, the potential to leave together (usually lost in webmeetings);
  • the bonding that occurs in having a shared physical experience, including adversities such as a room that is too hot, roadworks outside, wasps in the room, etc, as well as good things like the smell of good coffee or luxurious chairs (not remotely possible in webmeetings, apart from when the tech fails – but then the meeting fails too);
  • the support for nuances of verbal interaction – knowing when it’s OK to interrupt, being able to sigh, talk at once, etc, not to mention having immediate awareness of who is speaking (webmeetings mostly suck at this);
  • the ability to cluster with others – to sit next to people you know (or don’t know), for instance (rarely an option in most webmeetings, and nothing like as salient or rich in potential as its in-person counterpart even when allowed);
  • the salience of being in a space, with all the values, history, power relationships, and so on that it embodies, from who sits where to which room is chosen (hardly a shadow of this in most webmeetings);
  • the ability to stand up and walk around together (a motion-sickness-inducing experience in webmeetings);
  • the problems and benefits of both over-crowding and excessive sparsity (very different in webmeetings);
  • the means to seamlessly integrate and employ other technologies, including every digital technology as well as paper, dance, desks, chairs, whiteboards, pins, clothing, coffee, doors, etc, etc, etc. (webmeetings offer a tiny fraction of this);
  • and so on.

A few of these might be replicated in current or future webmeeting software, though usually only in caricature. Most simply cannot be replicated at all, even if we could meet as virtual personas in Star Trek’s holodecks. Of course there are also many things that we should be grateful are not replicated in online meetings: conspicuous body odour, badly designed meeting rooms, schedule conflicts, and so on, as well as the unwanted consequences of most of the phenomena above. These, too, are phenomena that the technologies of meetings are designed around. In-person meetings are incredibly highly evolved technologies, making use of technological and non-technological phenomena in immensely subtle ways, as well as having layers of counter-technology a kilometre deep, from social mores and manners to Robert’s Rules, from meeting tables to pens and note-taking strategies. Much of the time we don’t even notice that there are any technologies involved at all (as Alan Kay quipped, ‘technology’ is anything invented after you were born).

Webmeetings, though, also have distinctive phenomena that can be exploited, such as:

  • the ease of entering and leaving (so breaks are easier to take, they don’t need to last a long time, people can dip in and out, etc);
  • the automation of scheduling and note-taking;
  • the means to record all that occurs;
  • the means to directly share digital tools;
  • the fact that people occupy different spaces (often with tools at their disposal that would be unavailable in a shared meeting space);
  • the captions for the hard of hearing;
  • the integrated backchannels of text chat.

These are different kinds of problem space with different adjacent possibles as well as different constraints. It therefore makes no sense to blindly attempt to replicate in-person meetings when the problems and opportunities are so different. We don’t (or shouldn’t) teach online in the same way we teach in the classroom, so why should we try to use meetings in the same way? For that matter, why have meetings at all?

Dealing with the hard stuff

Some constraints are quite easy to specify. If a matter under discussion needs to be kept private, say, that limits the range of options, albeit that, for such a soft technology as a meeting, privacy needs may vary considerably, and what works for one context may fail abysmally for another. Similarly for security, accessibility, learnability, compatibility, interoperability, cost, reliability, maintainability, longevity, and other basic hygiene concerns. There are normally hard constraints defining a baseline, but it is a fuzzy baseline that can be moved in different contexts for different people and different uses. No one wants unreliable, insecure, expensive, incompatible, unusable, buggy, privacy-abusing software but most of us nonetheless use Microsoft products.

It is also not completely unreasonable to look for specific known business requirements that need to be met. However, there are enormous risks of duplicating solutions to non-existent problems. It is essential, therefore, to try to find ways of understanding the problems themselves, as much as possible in isolation from existing solutions. It would be a bad requirement to simply specify that people should be able to see and hear one another in real-time, for example: that is a technological solution based on the phenomena that in-person meetings use, not a requirement. It is certainly a very useful phenomenon that might be exploited in any number of ways (we know that because our ancestors have done it since before humans walked the planet) but it tells us little about why the phenomenon matters, or what it is about it that matters.

It would be better, perhaps, to ask people what is wrong with in-person meetings. It still situates the requirements in the current problem space, but it looks more closely at the source rather than the copy. It makes it easier to ask what purposes are served by being able to see and hear one another during in-person meetings, what phenomena that ability provides, on what phenomena (including those provided by other technologies) it depends, and what depends on it. From that we may uncover the business requirements that seeing and hearing other people actually meet. However, it is incredibly tricky to ask such questions in the abstract: the problem space is vast, complex, diverse, and deeply bound up in what we are familiar with, not what is possible.

It might help to make the familiar unfamiliar, for instance, by holding in-person meetings wearing blindfolds, or silently, or to attempt to conduct a meeting using only sticky notes (approaches I have used in my own teaching about communication technologies, as it happens). This kind of exercise forcibly creates a new problem space so that people can wonder about what is lost, what is gained, reasons for doing things, and so on. If you do enough of that, you might start to uncover what matters, and (perhaps) some of the reasons we have meetings in the first place.

Exploring the adjacent possible

Perhaps most importantly, though, soft technologies are not just solutions to problems. Soft technologies are, first and foremost, creators of opportunities, the vast majority of which we will never begin to imagine. Soft technology design is therefore, and must be, a partnership between the person and the technology: it’s not just about creating a tool for a task but about having a conversation with that tool, asking what it can do for us and wondering where it might lead us. What’s interesting about the ubiquitous backchannel feature of webmeetings, for instance, is that it did not find its way into the software as a result of a needs assessment or analysis of business requirements. It was, instead, an early (and deeply imperfect) attempt at replicating what could be replicated of synchronous meetings before multimedia communication became possible. When designing early web conferencing systems, no one said ‘we need a way of typing so that others can see it’. They looked at what could be done and said ‘hey, we can use that’. The functionality persisted and has become nearly ubiquitous because it’s easy to implement and obviously useful. It’s an exaptation, though, not the product of a pre-planned intentional design process. It’s a side-effect of something else we did – a poor solution to an existing problem – that created new phenomena we could co-opt for other purposes. New adjacent possible empty niches emerged from it.

One way to explore such niches would be to give people the chance to play with a wide range of existing ways of addressing the same problem space. A lot of people have turned their attention to these issues, so it makes sense to mine the creativity of the crowd. There are systems like Discord or MatterMost that represent a different category of hybrid asynchronous/synchronous tool, for instance, blurring the temporal boundaries. There are spatial metaphor systems with isometric interfaces, like Spatial or Ovice, which can allow more intuitive clustering, perhaps contributing to a greater sense of the presence of others, while enabling novel approaches to (say) voting, and so on. There are immersive systems that more literally replicate spaces, like Mozilla Hubs or OpenSim. I hold out little hope for those, but they do have some non-literal features – especially in the ways they allow impossible spaces to be created – that are quite interesting. There are instant messengers like Telegram or Signal that offer ambient awareness as well as conventional meeting support (MS Teams, reflecting its Skype origins, has that too). There are games and game-like environments like Gather or Minecraft that create new kinds of world as well as providing real-time conferencing features. And there are much smarter webmeeting systems like Around (which largely solves almost all audio problems; which – crucially – can make the meeting a part of a user’s environment rather than a separate space for gathering; which rethinks text chat as a transient, person-focused act rather than a separate text stream; which makes working together on a digital artefact a richly engaging process; which automatically sends a record to participants; and more). And there is a wealth of research-based systems that we have built over the past few decades, including many of my own, that do things differently, or that use different metaphors: computer-supported collaborative argumentation tools, for instance, or systems that leverage social navigation (I particularly love Viégas and Donath’s ChatCircles from the late 1990s), and so on. They all create new problems, and all have flaws of one kind or another, but thinking about how and why they are different helps to focus on what we are trying to do in the first place.

Perhaps the best of all ways to explore those adjacent possible empty niches is to make them: not to engineer them according to a specification, but to tinker and play. I’ve written about this before (e.g. here and, paywalled, here, summarized by Stefanie Panke here). Tinkering as a research methodology is a process of exploration not of what exists but of what does not. It’s a journey into the adjacent possible, with each new creation or modification creating new adjacent possibles, a step-by-step means of reaching into and mapping the unknown. We don’t all have the capacity (in skills, time, or patience) to create software from scratch, but we can assemble what we already have. We can, for instance, try to add plugins to existing systems: it is seldom necessary to write your own WordPress plugin, for example, because tens of thousands of people have already done so. Or we can make use of frameworks to construct new systems: the Elgg system underpinning the Landing, for example, does require some expertise to build new components, but a lot can be achieved by assembling and/or modifying what others have built. Or, if standards are followed, we can assemble services as needed: there are standards like xcon, XMPP, Jabber, IRC, and so on that make this possible (see the sketch below). And we don’t need to create software or hardware at all in order to dream. Hand-drawn mockups can create new possibilities to explore. Small steps into the unknown are better than no steps at all.
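To make that a little more concrete, here is a minimal sketch of what assembling a service from an open standard might look like, using IRC because it is the simplest of those listed: a toy client built from nothing but Python’s standard library. The server, nickname, and channel names are hypothetical placeholders, and a real client would need rather more error handling; the point is only how small a step into the adjacent possible can be when the protocol is open.

```python
import socket

HOST, PORT = "irc.example.org", 6667    # hypothetical server, standard IRC port
NICK, CHANNEL = "tinkerer", "#sandbox"  # hypothetical nickname and channel

def send(sock, line):
    # IRC (RFC 2812) is just lines of text terminated by CRLF
    sock.sendall((line + "\r\n").encode("utf-8"))

with socket.create_connection((HOST, PORT)) as sock:
    send(sock, f"NICK {NICK}")                                  # register a nickname...
    send(sock, f"USER {NICK} 0 * :A tinkered-together client")  # ...and a user
    buffer = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break                               # server closed the connection
        buffer += data
        *lines, buffer = buffer.split(b"\r\n")  # keep any partial line for later
        for raw in lines:
            line = raw.decode("utf-8", errors="replace")
            if line.startswith("PING"):         # answer keep-alives or be dropped
                send(sock, "PONG" + line[4:])
            elif " 001 " in line:               # numeric 001 = welcome: registration done
                send(sock, f"JOIN {CHANNEL}")
                send(sock, f"PRIVMSG {CHANNEL} :Hello from a hand-assembled client")
            else:
                print(line)                     # everything else: just watch the traffic
```

Everything beyond this – logging, presence, bridging to other tools – is an invitation to play rather than a requirement to specify.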

Stop looking for solutions

Webmeetings that attempt to replicate their in-person inspirations are unlikely to ever afford the flexibility of in-person meetings, because they have fewer phenomena to orchestrate and we are never going to be as adept at using them. The gaps they leave for us to fill are smaller, and our capacity to fill those gaps is less well-developed. However, digital systems can provide a great many new and different phenomena that, with creativity and inspiration, may meet our needs much better. Without the constraints of physical spaces we can invent a new physics of the digital. As long as we treat the problem as one of replicating meetings then it makes little difference what we choose: Zoom, Teams, Webex, Connect, BBB, Jitsi, whatever – the feature set may vary, there may be differences in reliability, security, cost, etc but any of them will do the job. The problem is that it is the wrong job. We already pay for and use at least three major systems for synchronous meetings at AU, as well as a bunch of minor ones, and that is nothing like enough. Those that begin to depart from the replication model – Around being my current favourite – are a step in the right direction, while those that double down on it (notably most immersive environments) are probably a step in the wrong direction. It is not about going forward or backward, though: it is about going sideways.

It is not too tricky to experiment in this particular field. For most digital systems we create, our decisions normally haunt us for years or decades, because we become locked into them with our data. Synchronous technologies can, with provisos, be swapped around and changed at will. Sure, there can be issues with recording and transcripts, there can be a training burden, contracts can be expensive and hard to escape, and tech support may be a little more costly but, for the most part, if we don’t like something then we can drop it and try something else.

I don’t have a solution to choosing or making the right piece of software for AU’s needs, because there isn’t one. There are countless possible solutions, none of which will suit everyone, many of which will provide parts that might be useful to most people, and all of which will have parts or aspects that won’t. But I do know that the way to approach the problem is not to have meetings to determine business requirements. The solution is to find ways of discovering the adjacent possible, to seek inspiration, to look sideways and forwards instead of backwards. We don’t need simple problem-solving for this kind of situation (or rather, it is quite inadequate on its own): we need to find ways to dream, ways to wonder, ways to engage in the act of creation, ways to play.


Mediaeval Teaching in the Digital Age (slides from my keynote at Oxford Brookes University, May 26, 2021)


These are the slides from my keynote today at the Oxford Brookes “Theorizing the Virtual” School of Education Research Conference. As theorizing the virtual is pretty much my thing, I was keen to be a part of this! It was an ungodly hour of the day for me (2am kickoff) but it was worth staying up for. It was a great bunch of attendees who really got into the spirit of the thing and kept me wide awake. I wish I could hang around for the rest of it but, on the bright side, at least I’m up at the right time to see the Super Flower Blood Moon (though it’s looking cloudy, darn it).  In this talk I dwelt on a few of the notable differences between online and in-person teaching. This is the abstract…

Pedagogical methods (ways of teaching) are solutions to problems of helping people to learn, in a context filled with economic, physical, temporal, legal, moral, social, political, technological, and organizational constraints. In mediaeval times books were rare and unaffordable, and experts’ time was precious and limited, so lectures were a pragmatic solution, but they in turn created more problems. Counter-technologies such as classes, classrooms, behavioural rules and norms, courses, terms, curricula, timetables and assignment deadlines were devised to solve those problems, then methods of teaching (pedagogies) were in turn invented to solve problems these counter-technologies caused, notably including:
· people who might not want (or be able) to be there at that time,
· people who were bored and
· people who were confused.
Better pedagogies supported learner needs for autonomy and competence, or helped learners find relevance to their own goals, values, and interests. They exploited physical closeness for support, role-modelling, inspiration, belongingness and so on. However, increasingly many relied on extrinsic motivators, like classroom discipline, grades and credentials to coerce students to learn. Extrinsic motivation achieves compliance, but it makes the reward or avoidance of the punishment the goal, persistently and often permanently crowding out intrinsic motivation. Intelligent students respond with instrumental approaches, satisficing, or cheating. Learning seldom persists; love of the subject is subdued; learners learn to learn in ineffective ways. More layers of counter-technologies are needed to limit the damage, and so it goes on.
Online, the constraints are very different, and its native forms are the motivational inverse of in-person learning. An online teacher cannot control every moment of a learner’s time, and learners can use the freedoms they gain to take the time they need, when they need it, to learn and to reflect, without the constraints of scheduled classroom hours and deadlines. However, more effort is usually needed to support their needs for relatedness. Unfortunately, many online teachers try (or are required) to re-establish the control they had in the classroom through grading or the promise of credentials, recreating the mediaeval problems that would otherwise not exist, using tools like learning management systems that were designed (poorly) to replicate in-person teaching functions. These are solutions to the problems caused by counter-technologies, not to problems of learning.
There are better ways, and that’s what this session is about.


Educational technology: what it is and how it works | AI & Society

https://rdcu.be/ch1tl

This is a link to my latest paper in the journal AI & Society. You can read it in a web browser from there, but it is not directly downloadable. A preprint of the submitted version (with some small differences and uncorrected errors here and there, notably in citations) can be downloaded from https://auspace.athabascau.ca/handle/2149/3653. The published version should be downloadable for free by ResearchGate members.

This is a long paper (about 10,000 words) that summarizes some of the central elements of the theoretical model of learning, teaching, and technology developed in my recently submitted book (still awaiting review) and that gives a few examples of its application. For instance, it explains:

  • why, on average, researchers find no significant difference between learning with and without tech.
  • why learning styles theories are a) inherently unprovable, b) not important even if they were, and c) a really bad idea in any case.
  • why bad teaching sometimes works (and, conversely, why good teaching sometimes fails).
  • why replication studies cannot be done for most educational interventions (and, for the small subset that are susceptible to reductive study, all you can prove is that your technology works as intended, not whether it does anything useful).

Abstract

This theoretical paper elucidates the nature of educational technology and, in the process, sheds light on a number of phenomena in educational systems, from the no-significant-difference phenomenon to the singular lack of replication in studies of educational technologies. Its central thesis is that we are not just users of technologies but coparticipants in them. Our participant roles may range from pressing power switches to designing digital learning systems to performing calculations in our heads. Some technologies may demand our participation only in order to enact fixed, predesigned orchestrations correctly. Other technologies leave gaps that we can or must fill with novel orchestrations that we may perform more or less well. Most are a mix of the two, and the mix varies according to context, participant, and use. This participative orchestration is highly distributed: in educational systems, coparticipants include the learner, the teacher, and many others, from textbook authors to LMS programmers, as well as the tools and methods they use and create. From this perspective, all learners and teachers are educational technologists. The technologies of education are seen to be deeply, fundamentally, and irreducibly human, complex, situated, and social in their constitution, their form, and their purpose, and as ungeneralizable in their effects as the choice of paintbrush is to the production of great art.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/8692242/my-latest-paper-educational-technology-what-it-is-and-how-it-works

Technology, technique, and teaching

These are the slides from my recent talk with students studying the philosophy of education at Pace University.

This is a mashup of various talks I have given in recent years, with a little new stuff drawn from my in-progress book. It starts with a discussion of the nature of technology, and the distinction between hard and soft technologies that sees relative hardness as the amount of pre-orchestration in a technology (be it a machine or a legal system or whatever). I observe that pedagogical methods (‘pedagogies’ for short) are soft technologies to those who are applying them, if not to those on the receiving end. It is implied (though I forgot to mention it explicitly) that hard technologies are always more structurally significant than soft ones: they frame what is possible.

All technologies are assemblies and, in education, the pedagogies applied by learners are always the most important parts of those assemblies. However, in traditional in-person classrooms, learners are (by default) highly controlled due to the nature of physics – the need to get a bunch of people together in one place at one time, scarcity of resources, the limits of human voice and hearing, etc – and the consequent power relationships and organizational constraints that occur. The classroom thus becomes the environment that frames the entire experience, which is very different from what are inaccurately described as online learning environments (which are just parts of a learner’s environment).

Because of physical constraints, the traditional classroom context is inherently very bad for intrinsic motivation. It leads to learners who don’t necessarily want to be there, having to do things they don’t necessarily want to do, often being either bored or confused. By far the most common solution to that problem is to apply externally regulated extrinsic motivation, such as grades, punishments for non-attendance, rules of classroom behaviour, and so on. This just makes matters much worse, and makes the reward (or the avoidance of punishment) the purpose of learning. Intelligent responses to this situation include cheating, short-term memorization strategies, satisficing, and agreeing with the teacher. It’s really bad for learning. Such issues are not at all surprising: all technologies create as well as solve problems, so we need to create counter technologies to deal with them. Thus, what we normally recognize as good pedagogy is, for the most part, a set of solutions to the problems created by the constraints of in-person teaching, to bring back the love of learning that is destroyed by the basic set-up. A lot of good teaching is therefore to do with supporting at least better, more internally regulated forms of extrinsic motivation.

Because pedagogies are soft technologies, skill is needed to use them well. Harder pedagogies, such as Direct Instruction, which are more prescriptive of method, tend (on average) to work better than softer pedagogies such as problem-based learning, because most teachers tend towards being pretty average: that’s implicit in the term, after all. Lack of skill can be compensated for through the application of a standard set of methods that only need to be done correctly in order to work. Because such methods can also work for good teachers as well as the merely average or bad, their average effectiveness is, of course, high. Softer pedagogical methods such as active learning, problem-based learning, inquiry-based learning, and so on rely heavily on passionate, dedicated, skilled, time-rich teachers and so, on average, tend to be less successful. However, when done well, they outstrip more prescriptive methods by a large margin, and lead to richer, more expansive outcomes that go far beyond those specified in a syllabus or test. Softer technologies, by definition, allow for greater creativity, flexibility, adaptability, and so on than harder technologies, but they are therefore more difficult to use well. There is no such thing as a purely hard or purely soft technology, though: all exist on a spectrum. Because all pedagogies are relatively soft technologies, even those that are quite prescriptive, almost any pedagogical method can work if it is done well: clunky, ugly, weak pedagogies used by a fantastic teacher can lead to great, persistent, enthusiastic learning. As Hattie observes, almost everything works – at least, that’s true of most things that are reported on in educational research studies :-). But (and this is the central message of my book, the consequences of which are profound) it ain’t what you do, it’s the way that you do it, that’s what gets results.

Problems can occur, though, when we use the same methods that work in person in a different context for which they were not designed. Online learning is by far the most dominant mode of learning on the planet (for those with an Internet connection – there are some big social, political, economic, and equity issues here). Google, YouTube, Wikipedia, Reddit, StackExchange, Quora, etc, etc, etc, not to mention email, social networking sites, and so on, are central to how most of us in the online world learn anything nowadays. The weird thing about online education (in the institutional sense) is that it is far less obviously dominant, and tends to be viewed in a far less favourable light when offered as an option. Given the choice, and without other constraints, most students would rather learn in-person than online. At least in part, this is because those of us working in formal online education continue to apply pedagogies and organizational methods that solved problems in in-person classrooms, especially with regard to teacher control: the rewards and punishments of grades, fixed-length courses, strictly controlled pathways, and so on are solutions to problems that do not exist, or that exist in very different forms, for online learners, whose learning environment is never entirely controlled by a teacher.

The final section of the presentation is concerned with what – in very broad terms – native distance pedagogies might look like. Distance pedagogies need to acknowledge the inherently greater freedoms of distance learners and the inherently distributed nature of distance learning. Truly learner-centric teaching does not seek to control but to support, and to acknowledge the massively distributed nature of the activity, in which everyone (including emergent collective and networked forms arising from their interactions) is part of the gestalt teacher, and each learner is – from their perspective – the most important part of all of that. To emphasize that none of this is exactly new (apart from the massive scale of connection, which does matter a lot), I include a slide of Leonardo’s to-do list, which describes much the same kinds of activity as those required of modern learners and teachers.

For those seeking more detail, I list a few of what Terry Anderson and I described as ‘Connectivist-generation’ pedagogical models. These are far more applicable to native online learning than earlier pedagogical generations that were invented for an in-person context. In my book I am now describing this new, digitally native generation as ‘complexivist’ pedagogies, which I think is a more accurate and less confusing name. It also acknowledges that many theories and models in the family (such as John Seely Brown’s distributed cognitive apprenticeship) predate Connectivism itself. The term comes from Davis and Sumara’s 2006 book, ‘Complexity and Education’, which is a great read that deserves more attention than it received when it was published.

Slides: Technology, technique and teaching