Preprint – The human nature of generative AIs and the technological nature of humanity: implications for education

Here is a preprint of a paper I just submitted to MDPI’s Digital journal that applies the co-participation model that underpins How Education Works (and a number of my papers over the last few years) to generative AIs (GAIs). I don’t know whether it will be accepted and, even if it is, it is very likely that some changes will be required. This is a warts-and-all raw first submission. It’s fairly long (around 10,000 words).

The central observation around which the paper revolves is that, for the first time in the history of technology, recent generations of GAIs automate (or at least appear to automate) the soft technique that has, until now, been the sole domain of humans. Every technology we have ever created, be it physically instantiated, cognitive, organizational, structural, or conceptual, has left all of the soft part of the orchestration to human beings.

The fact that GAIs replicate the soft stuff is a matter for some concern when they start to play a role in education, mainly because:

  • the skills they replace may atrophy or never be learned in the first place. This is not even slightly like replacing the hard skills of handwriting or arithmetic: we are talking about skills like creativity, problem-solving, critical inquiry, design, and so on. We’re talking about the very stuff that GAIs are trained on.
  • the AIs themselves are an amalgam, an embodiment of our collective intelligence, not actual people. You can spin up any kind of persona you like and discard it just as easily. Much of the crucially important hidden/tacit curriculum of education is concerned with relationships, identity, ways of thinking, ways of being, ways of working and playing with others. It’s about learning to be human in a human society. It is therefore quite problematic to delegate how we learn to be human to a machine with (literally and figuratively) no skin in the game, trained on a bunch of signals signifying nothing but more signals.

On the other hand, to not use them in educational systems would be as stupid as to not use writing. These technologies are now parts of our extended cognition, intertwingled with our collective intelligence as much as any other technology, so of course they must be integrated in our educational systems. The big questions are not about whether we should embrace them but how, and what soft skills they might replace that we wish to preserve or develop. I hope that we will value real humans and their inventions more, rather than less, though I fear that, as long as we retain the main structural features of our education systems without significant adjustments to how they work, we will no longer care, and we may lose some of our capacity for caring.

I suggest a few ways we might avert some of the greatest risks by, for instance, treating them as partners/contractors/team members rather than tools, by avoiding methods of “personalization” that simply reinforce existing power imbalances and pedagogies designed for better indoctrination, by using them to help connect us and support human relationships, by doing what we can to reduce extrinsic drivers, by decoupling learning and credentials, and by doubling down on the social aspects of learning. There is also an undeniable explosion in adjacent possibles, leading to new skills to learn, new ways to be creative, and new possibilities for opening up education to more people. The potential paths we might take from now on are unprestatable and multifarious but, once we start down them, resulting path dependencies may lead us into great calamity at least as easily as they may expand our potential. We need to make wise decisions now, while we still have the wisdom to make them.

MDPI invited me to submit this article free of their normal article processing charge (APC). The fact that I accepted is therefore very much not an endorsement of APCs, though I respect MDPI’s willingness to accommodate those who find payment difficult, the good editorial services they provide, and the fact that everything they publish is open access. I was not previously familiar with the Digital journal itself. It has been publishing four issues a year since 2021, mostly offering a mix of reports on application designs and literature reviews. The quality seems good.

Abstract

This paper applies a theoretical model to analyze the ways that widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use, by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing the soft and hard techniques of humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain: the very things that technologies enabled us to do can now be done by the technologies themselves. The consequences for what, how, and even whether we learn are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20512771/preprint-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

I am a professional learner, employed as a Full Professor and Associate Dean, Learning & Assessment, at Athabasca University, where I research lots of things broadly in the area of learning and technology, and I teach mainly in the School of Computing & Information Systems. I am a proud Canadian, though I was born in the UK. I am married, with two grown-up children, and three growing-up grandchildren. We all live in beautiful Vancouver.

7 Comments on Preprint – The human nature of generative AIs and the technological nature of humanity: implications for education

  1. The claims for GAI are, I suggest, a little overstated. Computers have been playing chess at grandmaster level for a couple of decades.

    Much of education is designed and delivered not by individual people, but by teams, following rules. It hardly matters if an AI system designed the course, or it was designed by a large team of people, according to a set of rules written by another group of people, and delivered by a third team, none of whom the student has ever spoken to, and many of whom have never spoken to each other.

    Even if you are in a classroom face to face with a human instructor, it may not be a very human experience, if there are several other students there, and the instructor is following a script they are not permitted to change.

    1. Jon Dron says:

      Chess-playing computers don’t also write poetry. But, indeed, the seeds of the current generation have been sprouting for a long time.

      I agree that there can be much that is machine-like about existing education systems although, as in the way I suggest we might approach the use of GAIs, each person has a different role (at least, that’s the idea) and, more importantly, every choice, every word, every image, every animation, every video is made by an actual human being. That matters, for reasons articulated at greater length in the paper.

      Script-following is, of course, the work of the devil but, even then (as I mention in my book), there are usually cracks where the light can get in (expression, passion, detours, etc.) and, for a novice teacher stepping into a void, in an emergency it may occasionally be better than the alternatives. As always, it ain’t what you do but the way that you do it that matters most, and all but the worst technologies can usually be bent to good purpose, given a talented human being who cares enough to make it happen. Unfortunately, on average, teachers are average or below average, which is why I suggest in the book that we should work more on developing better teachers, rather than better ways of teaching. But it is all complex, situated, and intertwingled, and dealing with dullness in teaching is part of the tacit curriculum, part of what enables us to grow up as humans in a human society, part of the stuff that it would be unwise to delegate to GAIs. GAIs may be above-average teachers (tireless, prompt, supportive, adaptive, etc.) if all we measure are signals of meeting the specified outcomes, but that is only a part of what education is doing. And that’s the problem.

      1. While I am formally certified to make training videos (from back in the days of videotape), I have not made or narrated a training video myself in the last four years. Instead, a synthetic voice narrates them from my script, accompanied by algorithm-selected stock footage or PowerPoint slides. I use a synthetic voice with an Australian male accent, which sounds remarkably like me, if a little robotic. I can give the system the script, and it makes a video. If I make a change to the script, I recreate the entire video. This is much quicker and easier than editing video. The system I used in 2019 provided automatically selected stock footage synchronized with the narration: https://blog.highereducationwhisperer.com/2019/08/learning-to-reflect-video.html The system I use now just works from the PowerPoint slides: https://blog.highereducationwhisperer.com/2021/07/tools-for-creating-educational-videos.html

        Some of my colleagues argue that I should produce the videos myself. But they then spend hours of unpaid overtime producing videos, or just don’t do it. I suspect this will be the dilemma with many AI tools: they will do the job cheaply and quickly, but not exceptionally well.

        1. Jon Dron says:

          I won’t repeat my answer to this on LinkedIn but, yes, I agree: that’s the economics of it, and there are trade-offs. I’ve been struggling with the same dilemma as I try to produce an audio version of my book. Something like Speechify can imitate me well enough to sound more or less like me, but it’s not me: it emphasizes different things in different ways, the “smile” in my voice doesn’t appear in the right places, it doesn’t pause as I would pause, it uses different pronunciations of unusual words, etc. I may, though, satisfice on it, because I don’t have the time. This is going to happen at an unimaginably vast scale and, in many cases, the AIs can do a much better job than all but the most talented humans, so the temptation will be (and is) very strong. We need some principles, rules of thumb, frameworks, etc. for making decisions about when and how to do this, which is what my paper begins to do.
