[Image: self-portrait of ChatGPT, showing an androgynous human face overlaid with circuits]

Presentation – Generative AIs in Learning & Teaching: the Case Against

Here are the slides from my presentation at AU’s Lunch ‘n’ Learn session today. The presentation itself took 20 minutes and was followed by a wonderfully lively and thoughtful conversation for another 40 minutes, though it was only scheduled for half an hour. Thanks to all who attended for a very enjoyable discussion!

The arguments made in this were mostly derived from my recent paper on the subject (Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020) but, despite the title, my point was not to reject the use of generative AIs at all. The central message I was hoping to get across was a simpler and more important one: to encourage attendees to think about what education is for, and what we would like it to be. As the slides suggest, I believe that is only partially to do with the objectives and outcomes we set out to achieve, that it is nothing much at all to do with the products of the system such as grades and credentials, and that a focus on those mechanical aspects of the system often creates obstacles to achieving it. Beyond those easily measured things, education is about the values, beliefs, attitudes, relationships, and development of humans and their societies. It’s about ways of being, not just the capacity to do stuff. It’s about developing humans, not (just) developing skills. My hope is that the disruptions caused by generative AIs are encouraging us to think like the Amish, and to place greater value on the things we cannot measure. These are good conversations to have.

I am a professional learner, employed as a Full Professor and Associate Dean, Learning & Assessment, at Athabasca University, where I research lots of things broadly in the area of learning and technology, and I teach mainly in the School of Computing & Information Systems. I am a proud Canadian, though I was born in the UK. I am married, with two grown-up children, and three growing-up grandchildren. We all live in beautiful Vancouver.

2 Comments on Presentation – Generative AIs in Learning & Teaching: the Case Against

  1. Generative AI is new, so it seems threatening. Education uses older technology, which was controversial when new, but is now familiar and accepted. One example is writing. There was a time when teaching was by word of mouth. The idea that knowledge could be transmitted mechanically seemed very threatening. But people got used to written texts, and they will get used to AI too.

    For vocational programs, education exists so that the community can be assured that graduates have the skills and knowledge needed to do a job. When training professionals, this is partly about values, beliefs, attitudes, relationships, development & society. However, in the end it is about what can be assessed. One day I might have to justify my marking to a coroner, when one of my graduates makes a mistake that kills someone.

    As it happens, I am sitting in on the accreditation committee of the Australian Computer Society. The committee is painstakingly going through applications from universities to have their degrees accredited.

    This might all sound unimportant, but recently Australia’s second-largest telecommunications company had an outage of several hours. This cut off emergency calls to ambulance, police, and fire services. It also stopped operations in hospitals. It appears there was a problem with a routine software upgrade. There are now multiple inquiries into what went wrong. I spent a week talking to the media about what went wrong and what might be done to prevent it happening again.

    1. Jon Dron says:

      The big point I’m making in these slides is that the latest generations of generative AI are not like writing, in one very important way. Writing provides a scaffold and an extension of our minds that we can use to achieve what we want to achieve, whether it be a creative, idiosyncratic expression of an idea, a formulation of a problem, an aid to memory, or whatever. GenAIs do the achieving for us. They are the first technologies we have ever built that effectively simulate the soft, creative, idiosyncratic ways we use cognitive technologies like language. There are certainly creative ways we can make use of that, and skills that are needed to get the most out of them, so I am not suggesting for a moment that they will do all of our thinking for us. However, the fact that they don’t just augment but replace activities for which writing is used – including problem solving, idea generation, creative expression, programming, and designing – is, in an educational system, problematic, because those are the very things education is concerned with developing. GenAIs don’t just help you to achieve learning outcomes: they can achieve them for you.

      It is good news that they can do this if you are using them to achieve a further clearly definable goal such as fixing a cistern, writing an app, discovering patterns in data, and so on. It is potentially good news if the intent is to acquire hard skills of the sort you describe, albeit that we might question whether such skills are needed if AIs can do them for us. However, training of this kind does more than provide skills. It develops ways of thinking, ways of relating to others, connections, attitudes, and values. Education is more explicit about this aspect of learning than training, but training always educates and education always trains – it’s just a question of degree (as it were).

      This in turn means that we are learning these things from something that isn’t human and that doesn’t participate in human societies. It has no interest, no lived experience, no personality, no values, no goals, no passion, no sense of humour. It can, however, simulate (and be persuaded to simulate) all of those things – it is a tireless slave that can “become” any person you want it to be, including yourself. This is a worrisome role model. Right now, as most of our education comes from and with other humans, the impact is slight, but simple economics means it will soon be ubiquitous. Unless we make some careful choices now, this will be the primary means through which we learn at least a significant fraction of what we intentionally learn. And it will come with unintentional stuff, because learning always does. We never only learn the intended outcomes. We hardly ever even notice the other stuff, and it is difficult (perhaps impossible) to accurately measure. We might not even notice when it has gone. This is why I reckon the McNamara Fallacy might be our biggest problem.

      To make things worse, these things are doing the creative work for us – that is precisely what makes them so useful – so our own abilities to do that work will atrophy or never be learned at all. Yes, we will learn new ways to be creative and, no, they won’t be the only sources of knowledge and skills, but they will inevitably play larger and larger roles, each time taking a little something away. Cognitively speaking, writing does more than mirror spoken words, drawing does more than share visual ideas, programming does more than provide instructions to a machine, and a video is more than a record of an event. Socrates was right to question what was lost through the invention of writing: we do lose something when we offload the work to the written word, notwithstanding the vastly greater gains. But this is not just one skill we are talking about. This is every cognitive skill we have, all at once.

      To make things worse, most of what future genAIs will have to learn from is the output of previous generations (of people and genAIs) and of people who have learned ways of being from genAIs. Rinse and repeat.

      I think there are lots of ways to avoid the worst of these perils. It is vital to remember that AIs are evolving at a rapid rate, and we will evolve with them, so what is true today won’t be true tomorrow. There is no doubt that this will change us in ways we mostly won’t be able to predict. However, the things I mention almost certainly will happen, to a greater or lesser extent. That’s why we need to be having these conversations now, before the machines are too intertwingled with the process to be extracted from it. My hope is that we will value what is left to humans even more than we do today, and that this will be a catalyst to help fix things that should have been fixed in our educational systems long ago. But, for that to happen, we need to be thinking about what we actually do value, and what we want our educational systems to be. And that’s why I am sharing slides like this.
