The artificial curriculum

evolving into a robot

“Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings” by Simone Grassini is a well-researched, concise but comprehensive overview of the state of play for generative AI (GAI) in education. It gives a very good account of current uses, by faculty and students, and provides a thoughtful discussion of the issues and concerns arising. It addresses technical, ethical, and pragmatic concerns across a broad spectrum. If you want a great summary of where we are now, with tons of research-informed suggestions as to what to do about it, this is a very worthwhile read.

However, underpinning much of the discussion is an implied (and I suspect unintentional) assumption that education is primarily concerned with achieving and measuring explicit specified outcomes. This is particularly obvious in the discussions of ways GAIs can “assist” with instruction. I have a problem with that.

There has been an increasing trend in recent decades towards the mechanization of education: modularizing rather than integrating, measuring what can be easily measured, creating efficiencies, focusing on an end goal of feeding industry, and so on. It has resulted in a classic case of the McNamara Fallacy, which starts with the laudable goal of measuring success, as much as we are able, and ends with that measure defining success, to the exclusion of anything we do not or cannot measure. Learning becomes the achievement of measured outcomes.

It is true that consistent, measurable, hard techniques must be learned to achieve almost anything in life, and that it takes sustained effort and study, with which educators can and should help, to achieve most of them. Measurable learning outcomes and what we do with them matter. However, the more profound and, I believe, more important ends of education, regardless of the subject, are concerned with ways of being in the world, with other humans. It is the tacit curriculum that ultimately matters more: how education affects our attitudes and values, the ways we can adapt, how we can create, how we make connections, pursue our dreams, live fulfilling lives, and engage with our fellow humans as parts of cultures and societies.

By definition, the tacit curriculum cannot be meaningfully expressed in learning outcomes or measured on a uniform scale. It can be expressed only obliquely, if it can be expressed at all, in words. It is largely emergent and relational, expressed in how we are, interacting with one another, not as measurable functions that describe what we can do. It is complex, situated, and idiosyncratic. It is about learning to be human, not achieving credentials.

Returning to the topic of AI, to learn to be human from a blurry JPEG of the web, or autotune for knowledge, especially given the fact that models will increasingly be trained on the output of earlier models, seems to me to be a very bad idea indeed.
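As a crude illustration of the recursive-training worry, here is a toy sketch, entirely my own and not anything from Grassini’s book: each “generation” of model is fitted only to a finite sample produced by the generation before it, so the fitted parameters take a random walk away from the original, human-generated distribution.

```python
import random
import statistics

# Toy sketch (my own assumption-laden example, not from the post): each
# generation of "model" is a Gaussian fitted only to samples generated by the
# previous generation's model. Information about the original distribution is
# progressively lost as the fitted parameters drift.

random.seed(1)

def fit(sample):
    """Estimate a mean and standard deviation from a sample."""
    return statistics.mean(sample), statistics.stdev(sample)

mu, sigma = 0.0, 1.0   # generation 0: stands in for broad, varied human writing
sample_size = 25       # deliberately small, to make the drift easy to see

for generation in range(1, 11):
    # The next model sees only what the current model produces.
    sample = [random.gauss(mu, sigma) for _ in range(sample_size)]
    mu, sigma = fit(sample)
    print(f"generation {generation}: mean={mu:+.3f}, stdev={sigma:.3f}")

# After a few generations the fitted distribution has typically drifted
# noticeably from the original: a crude analogue of teaching from copies of copies.
```

The numbers are not meant to model real language models, only to show how quickly copies of copies stop resembling the thing they were copied from.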

The real difficulty that teachers face is not that students solve the problems set to them using large language models, but that in so doing they bypass the process, thus avoiding the tacit learning outcomes we cannot or choose not to measure. And the real difficulty that those students face is that, in delegating the teaching process to an AI, their teachers are bypassing the teaching process, thus failing to support the learning of those tacit outcomes or, at best, providing an averaged-out caricature of them. If we heedlessly continue along this path, it will wind up with machines teaching machines, with humans largely playing the roles of cogs and switches in them.

Some might argue that, if the machines do a good enough job of mimicry, then it really doesn’t matter that they happen to be statistical models with no feelings, no intentions, no connection, and no agency. I disagree. Just as it makes a difference whether a painting ascribed to Picasso is a fake or not, or whether a letter is faxed or delivered through the post, or whether this particular guitar was played by John Lennon, it matters that real humans are on each side of a learning transaction. It means something different for an artifact to have been created by another human, even if the form of the exchange, in words or whatever, is the same. Current large language models have flaws, confidently spout falsehoods, fail to remember previous exchanges, and so on, so they are easy targets for criticism. However, I think it will be even worse when AIs are “better” teachers. When they seem endlessly tireless, patient, respectful, and responsive; when the help they give is unerringly accurate, personal, and targeted; when they draw on knowledge no one human could ever possess, they will not be modelling human behaviour. The best-case scenario is that they will not be teaching students how to be, only how to do, and that human teachers will provide the necessary tacit curriculum to support the human side of learning. However, the two are inseparable, so that is not particularly likely. The worst scenarios are that they will be teaching students how to be machines, or how to be an average human (with significant biases introduced by their training), or both.

And, frankly, if AIs are doing such a good job of it then they are the ones who should be doing whatever it is that they are training students to do, not the students. This will most certainly happen: it already is (witness the current actors’ and screenwriters’ strikes). For all the disruption that results, it’s not necessarily a bad thing, because it increases the adjacent possible for everyone in so many ways. That’s why the illustration for this post was made to my instructions by Midjourney, not drawn by me. It does a much better job of it than I could do.

In a rational world we would not simply incorporate AI into teaching as we have always taught. It makes no more sense to let it replace teachers than it does to let it replace students. We really need to rethink what and why we are teaching in the first place. Unfortunately, such reinvention is rarely if ever how technology works. Technology evolves by assembly with and in the context of other technology, which is why we have inherited mediaeval solutions to indoctrination as a fundamental mainstay of all modern education (there’s a lot more about such things in my book, How Education Works, if you want to know more about that). The upshot will be that, as we integrate rather than reinvent, we will keep on doing what we have always done, with a few changes to topics, a few adjustments in how we assess, and a few “efficiencies”, but we will barely notice that everything has changed because students will still be achieving the same kinds of measured outcomes.

I am not much persuaded by most apocalyptic visions of the potential threat of AI. I don’t think that AI is particularly likely to lead to the world ending with a bang, though it is true that more powerful tools do make it more likely that evil people will wield them. Artificial General Intelligence, though, especially anything resembling consciousness, is very little closer today than it was 50 years ago and most attempts to achieve it are barking in the wrong forest, let alone up the wrong tree. The more likely and more troubling scenario is that, as it embraces GAIs but fails to change how everything is done, the world will end with a whimper, a blandification, a leisurely death like that of lobsters in water coming slowly to a boil. The sad thing is that, by then, with our continued focus on just those things we measure, we may not even notice it is happening. The sadder thing still is that, perhaps, it already is happening.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/19390937/the-artificial-curriculum

Look what just arrived on my doorstep! #howeducationworks from @au_press is now available in print and e-book formats

Photo of hard copies of How Education Works

Hard copies and e-book versions of How Education Works are now available, and they are starting to turn up in bookstores. The recommended retail price is CAD$40 but Amazon is selling the Kindle version for a bit less.

Here are a few outlets that are selling it (or order it from your local independent bookstore!):

AU Press (CA)

Barnes & Noble (US)

Blackwells (UK)

Amazon (CA)

Amazon (JP)

University of Chicago Press (US)

Indigo (CA)

Booktopia (AU)

For those wanting to try before they buy or who cannot afford/do not want the paper or e-book versions, you can read it for free online, or download a PDF of the whole book.

The publishers see this as mainly targeted at professional teachers and educational researchers, but those are far from the only audiences I had in mind as I was writing it. Apart from anything else, one of the central claims of the book is that literally everyone is a teacher.  But it’s as much a book about the nature of technology as it is about education, and as much about the nature of knowledge as it is about how that knowledge is acquired. If you’re interested in how we come to know stuff, how technologies work, or how to think about what makes us (individually and collectively) smart, there’s something in the book for you. It’s a work of philosophy as much as it is a book of practical advice, and it’s about a way of thinking and being at least as much as it is about the formal practice of education. That said, it certainly does contain some ideas and recommendations that do have practical value for educators and educational researchers. There’s just more to it than that.

I cannot begin to express how pleased I am that, after more than 10 years of intermittent work, I finally have the finished article in my hands. I hope you get a chance to read it, in whatever format works for you! I’ll end this post with a quote that happens to be the final paragraph of the book…

“If this book has helped you, however slightly, to think about what you know and how you have come to know it a little differently, then it has been a successful learning technology. In fact, even if you hold to all of your previous beliefs and this book has challenged you to defend them, then it has worked just fine too. Even if you disagreed with or misunderstood everything that I said, and even if you disliked the way that I presented it, it might still have been an effective learning technology, even though the learning that I hoped for did not come about. But I am not the one who matters the most here. This is layer upon layer of technology, and in some sense, for some technology, it has done what that technology should do. The book has conveyed words that, even if not understood as I intended them to be, even if not accepted, even if rabidly disagreed with, have done something for your learning. You are a different person now from the person you were when you started reading this book because everything that we do changes us. I do not know how it has changed you, but your mind is not the same as it was before, and ultimately the collectives in which you participate will not be the same either. The technology of print production, a spoken word, a pattern of pixels on a screen, or dots on a braille reader has, I hope, enabled you, at least on occasion, to think, criticize, acknowledge, recognize, synthesize, and react in ways that might have some value in consolidating or extending or even changing what you already know. As a result of bits and bytes flowing over an ether from my fingertips to whatever this page might be to you, knowledge (however obscure or counter to my intentions) has been created in the world, and learning has happened. For all the complexities and issues that emerge from that simple fact, one thing is absolutely certain: this is good.”

A decade of unwriting: the life history of "How Education Works"

How Education Works book cover

About 10 years ago I submitted the first draft of a book called “How Learning Technologies Work” to AU Press. The title was a nod to David Byrne’s wonderful book, “How Music Works”, which is about much more than just music, just as mine was about much more than learning technologies.

The book pulled together ideas I had been thinking about for a few years and had taken me only a few months to write, mostly at the tail end of my sabbatical. I was quite pleased with it. The internal reviewers were positive too, though they suggested a number of sensible revisions, including clarifying some confusing arguments and a bit of restructuring. Also, in the interests of marketing, they recommended a change to the title because, though it accurately described the book’s contents, I was not using “learning technologies” in its mainstream sense at all (for me, poetry, pedagogies, and prayer are as much technologies as pots, potentiometers, and practices), so it would appeal to only a small subset of its intended audience. They were also a bit concerned that it would be hard to find an audience for it even if it had a better title because it was at least as much a book about the nature of technology as it was a book about learning, so it would fall between two possible markets, potentially appealing to neither.

A few months later, I had written a new revision that addressed most of the reviewers’ recommendations and concerns, though it still lacked a good title. I could have submitted it then. However, in the process of disentangling those confusing arguments, I had realized that the soft/hard technology distinction on which much of the book rested was far less well-defined than I had imagined, and that some of the conclusions that I had drawn from it were just plain wrong. The more I thought about it, the less happy I felt.

And so began the first of a series of substantial rewrites. However, my teaching load was very high, and I had lots of other stuff to do, so progress was slow. I was still rewriting it when I unwisely became Chair of my department in 2016, which almost brought the whole project to a halt for another 3 years. Despite that, by the time my tenure as Chair ended, the book had grown to around double its original (not insubstantial) length, and the theory was starting to look coherent, though I had yet to make the final leap that made sense of it all.

By 2019, as I started another sabbatical, I had decided to split the book into two. I put the stuff that seemed useful for practitioners into a new book,  “Education: an owner’s manual”, leaving the explanatory and predictive theory in its own book, now grandiosely titled “How Education Works”, and worked on both simultaneously. Each grew to a few hundred pages.

Neither worked particularly well. It was really difficult to keep the theory out of the practical book, and the theoretical work was horribly dry without the stories and examples to make sense of it. The theory, though, at last made sense, albeit that I struggled (and failed) to give it a catchy name. The solution was infuriatingly obvious. In all my talks on the subject my catchphrase from the start had been “’tain’t what you do, it’s the way that you do it, that’s what gets results” (it’s the epigraph for the book), so it was always implicit that softness and hardness are not characteristics of all technologies, as such, nor even of their assemblies, but of the ways that we participate in their orchestration. Essentially, what matters is technique: the roles we play as parts of the orchestration or orchestrators of it. That’s where the magic happens.

But now I had two mediocre books that were going nowhere. Fearing I was about to wind up with two unfinished and/or unsellable books, about half way through my sabbatical I brutally slashed over half the chapters from both, pasted the remains together, and spent much of the time I had left filling in the cracks in the resulting bricolage.

I finally submitted “How Education Works: Teaching, Technology, and Technique” in the closing hours of 2020, accompanied by a new proposal because, though it shared a theme and a few words with the original, it was a very different book.

Along the way I had written over a million words, only around a tenth of which made it into what I sent to AU Press. I had spent the vast majority of my authoring time unwriting rather than writing the book and, with each word I wrote or unwrote, the book had written me, as much as I had written it. The book is as much a part of my cognition as a product of it.

And now, at last, it can be part of yours.

30 months after it was submitted – I won’t go into the reasons except to say that it has been very frustrating – the book is finally available as a free PDF download or to read on the Web. If all goes to plan, the paper and e-book versions should arrive June 27th, 2023, and can be pre-ordered now.

It is still a book about technology at least as much as it is about education (very broadly defined), albeit that it is now firmly situated in the latter. It has to be both because among the central points I’m making are that we are part-technology and technology is part-us, that cognition is (in part) technology and technology is (in part) cognition, and that education is a fundamentally technological and thus fundamentally human activity. It’s all one complex, hugely distributed, recursive intertwingularity in which we and our technological creations are all co-participants in the cognition and learning of ourselves and one another.

During the 30 months AU Press has had the book I have noticed a thousand different ways the book could be improved, and I don’t love all of the edits made to it along the way (by me and others), but I reckon it does what I want it to do, and 10 years is long enough.

It’s time to start another.

A few places you can buy the book

AU Press (CA)

Barnes & Noble (US)

Blackwells (UK)

Amazon (CA)

Amazon (JP)

University of Chicago Press (US)

Indigo (CA)

Booktopia (AU)

Technological distance – my slides from OTESSA ’23

Technological Distance

Here are the slides from my talk today at OTESSA ’23. Technological distance is a way of understanding distance that fits with modern complexivist models of learning such as Connectivism, Heutagogy, Networks/Communities of Practice/Rhizomatic Learning, and so on. In such a model, there are potentially thousands of distances – whether understood as psychological, transactional, social, cognitive, physical, temporal, or whatever – so conventional views of distance as a gap between learner and teacher (or institution or other students) are woefully inadequate.

I frame technological distance as a gap between the technologies learners have (including cognitive gadgets, skills, techniques, etc., as well as physical, organizational, or procedural technologies) and those they need in order to learn. It is a little bit like Vygotsky’s Zone of Proximal Development but re-imagined and extended to incorporate all the many technologies, structures, and people who may be involved in the teaching gestalt.
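To make that framing a little more concrete, here is a deliberately simplistic sketch, entirely my own rather than anything from the slides: it treats technological distance as the set of technologies a learning task demands that are supplied neither by the learner nor by the surrounding teaching gestalt. All names and values below are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Set

# A toy interpretation (mine, not from the talk): technological distance as the
# gap between the technologies a learning task demands and those already
# available, from any source: skills, tools, people, structures, processes.

@dataclass
class Learner:
    name: str
    technologies: Set[str] = field(default_factory=set)  # cognitive gadgets, skills, tools, etc.

@dataclass
class LearningTask:
    name: str
    required: Set[str]

def technological_distance(learner: Learner, task: LearningTask,
                           support: Set[str] = frozenset()) -> Set[str]:
    """Technologies still missing once the learner's own technologies and any
    supporting ones (teachers, tools, structures) are taken into account."""
    return task.required - learner.technologies - support

# Hypothetical example values, purely for illustration.
alice = Learner("Alice", {"reading", "note-taking", "spreadsheet basics"})
survey_project = LearningTask(
    "analyse survey data",
    {"reading", "spreadsheet basics", "descriptive statistics", "critical interpretation"},
)

print(technological_distance(alice, survey_project))
# the gap that teaching, in the broadest sense, must help to bridge
print(technological_distance(alice, survey_project, support={"descriptive statistics"}))
# a teacher or tutorial supplying one technology shrinks the distance
```

The point of the toy is only that the “distance” is plural and relational, made of many missing pieces that can be supplied by many different parts of the assembly, not a single gap between learner and teacher.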

The model of technology that I use to explain the idea is based on the coparticipation perspective presented in my book that, with luck, should be out within the next week or two. The talk ends with a brief discussion of the main implications for those whose job it is to teach.

Thanks to MidJourney for collaborating with me to produce the images used in the slides.

people as interlocking cogs

Can a technology be true?

Dave Cormier is a wonderfully sideways-thinking writer, such as in this recent discussion of the myth of learning styles. Dave’s post is not mainly about learning style theories, as such, but the nature and value of myth. As he puts it, myth is “a way we confront uncertainty” and the act of learning with others is, and must be, filled with uncertainty.

impression of someone with many learning styles

The fact that stuff doesn’t have to be true to be useful plays an important role in my latest book, too, and I have an explanation for that. The way I see it is that learning style theories are (not metaphorically but actually) technologies, that orchestrate observations about differences in ways people learn, to attempt to explain and predict differences in the effects of different methods of teaching. Most importantly, they are generative: they say how things should and shouldn’t be done. As such, they are components that we can assemble with other technologies that help people to learn. In fact, that is the only way they can be used: they make no sense without an instantiation. What matters is therefore not whether they make sense, but whether they can play a useful role in the whole assembly. Truth or falsehood doesn’t come into it, any more than, except metaphorically, it does for a computer or a car (is a computer true?). It is true that, if the phenomena that you are orchestrating happen to be the findings and predictions of science (or logic, for that matter), then how they are used often does matter. If you are building a bridge then you really want your calculations about stresses and loads to be pretty much correct. On the other hand, people built bridges long before such calculations were possible. Similarly, bows and arrows evolved to be highly optimized – as good as or better than modern engineering could produce – despite false causal reasoning. Learning styles are the same. You can use any number of objectively false or radically incomplete theories (and, given the many scores of such theories that have been developed, most of them are pretty much guaranteed to be one or both) but they can still result in better teaching.

For all that the whole is the only thing that really matters, sometimes the parts can be positively harmful, to the point that they may render the whole harmful too. For instance, a pedagogy that involves physical violence or that uses threats/rewards of any kind (grades, say) will, at best, make it considerably harder to make the whole assembly work well. As Dave mentions, the same is true of telling people that they have a particular learning style. As long as you are just using the things to help to design or enact better learning experiences then they are quite harmless and might even be useful but, as soon as you tell learners they have a learning style, then you have a whole lot of fixing to do.

If you are going to try to build a learning activity out of harmful parts then there must be other parts of the assembly that counter the harm. This is not unusual. The same is true of most if not all technologies. As Virilio put it, “when you invent the ship, you invent the shipwreck”. It’s the Faustian bargain that Postman spoke of: solving problems with a technology almost invariably creates new problems to be solved. This is part of the dynamic that leads to complexity in any technological system, from a jet engine to a bureaucracy. Technologies evolve to become more complex (partly) because we create counter-technologies to deal with the harm caused by them. You can take the bugs out of the machine, but the machine may, in assembly with others, itself be a bug, so the other parts must compensate for its limitations. It’s a dynamic process of reaching a metastable but never final state.

Unlike the making of bows and arrows, teaching has no useful predictive science, though it can use scientific findings as parts of its assembly (at the very least because there are sciences of learning), just as there is no useful predictive science of art, though we can use scientific findings when making it. In both activities, we can also use stories, inventions, beliefs, values, and many other elements that have nothing to do with science or its findings. Either can be done ‘badly’, in the sense of not conforming to whatever standards of perfection apply to any given technique that is part of the assembly, and may still be a work of genius. What matters is whether the whole works out well.

At a more fundamental level, there can be no useful science of teaching (or of art) because the whole is non-ergodic. The number of possible states vastly outnumbers the number of states that could ever actually be visited, by many, many orders of magnitude. Even if the universe were to continue for a trillion times the billions of years that it has already existed, and it were a trillion times the size it seems to be now, the states visited would almost certainly never repeat. What matters are the many, many acts of creation (including those of each individual learner) that constitute the whole. And the whole constantly evolves, each part building on, interacting with, incorporating, or replacing what came before, creating both path dependencies and new adjacent possible empty niches that deform the evolutionary landscape for everything in it. This is, in fact, one of the reasons that learning style theories are so hard to validate. There are innumerable other parts of the assembly that matter, most of which depend on the soft technique of those creating or enacting them, which varies every time, just as you have probably never written your signature in precisely the same way twice. The implementation of different ways of teaching according to assumed learning styles can be done better or worse, too, so the chances of finding consistent effects are very limited. Even if any are found in a limited set of use cases (say, memorizing facts for a SAT), they cannot usefully predict future effects for any other use case. In fact, even if there were statistically significant effects across multiple contexts it would tell us little or nothing of value for this inherently novel context. However, like almost all attempts to research whether students, on average, learn better with or without [insert technology of interest here], on average there will most likely be no significant difference, because so many other technologies matter as much or more. There is no useful predictive science of teaching, because teaching is an assembly of technologies, and not only does the technique of an individual teacher matter, but also the soft technique of potentially thousands of other individuals who made contributions to the whole. It’s uncertain, and so we need myths to help make sense of our particular, never-to-be-repeated context. Truth doesn’t come into it.
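For a rough sense of the scale involved (the numbers below are my own back-of-envelope illustration, not anything from the post), compare the possible orderings of just sixty distinguishable components of a single lesson with the number of assemblies that could be tried, one per nanosecond, over the entire age of the universe:

```python
import math

# Back-of-envelope illustration (my numbers, chosen arbitrarily): even a tiny
# combinatorial space dwarfs the number of moments available to explore it,
# which is roughly what "non-ergodic" means here.

distinct_parts = 60                       # e.g. 60 distinguishable components of one lesson
possible_orderings = math.factorial(distinct_parts)          # ~8.3e81

age_of_universe_seconds = 13.8e9 * 365.25 * 24 * 3600        # ~4.4e17 s
attempts_per_second = 1e9                                    # one new assembly every nanosecond
possible_visits = age_of_universe_seconds * attempts_per_second

print(f"orderings of {distinct_parts} parts:            {possible_orderings:.3e}")
print(f"assemblies visitable since the Big Bang: {possible_visits:.3e}")
print(f"fraction that could ever be visited:     {possible_visits / possible_orderings:.3e}")
```

Sixty parts is an absurdly conservative stand-in for a real teaching assembly, and orderings are only one kind of variation, yet the fraction of the space that could ever be visited is already vanishingly small.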

View of Speculative Futures on ChatGPT and Generative Artificial Intelligence (AI): A Collective Reflection from the Educational Landscape

This is a remarkable paper, published in the Asian Journal of Distance Education, written by 35 remarkable people from all over the world and me. It was led by the remarkable Aras Bozkurt, who pulled all 36 of us together and wrote much of it in the midst of personal tragedy and the aftermath of a devastating earthquake. The research methodology was fantastic: Aras got each of us to write two 500-word pieces of speculative fiction, presenting positive and negative futures for generative AI in education. The themes that emerged from them were then condensed in the conventional part of the paper, which we worked on together using Google Docs. It took less than 50 days from the initial invitation on January 22 to the publication of the paper. As Eamon Costello put it, “It felt like being in a flash mob of top scholars.” At 130 pages it is more of a book than a paper, but most of it consists of those stories/poems/plays, many of which are great stories in their own right. They make good bedtime reading.

Abstract

While ChatGPT has recently become very popular, AI has a long history and philosophy. This paper intends to explore the promises and pitfalls of the Generative Pre-trained Transformer (GPT) AI and potentially future technologies by adopting a speculative methodology. Speculative future narratives with a specific focus on educational contexts are provided in an attempt to identify emerging themes and discuss their implications for education in the 21st century. Affordances of (using) AI in Education (AIEd) and possible adverse effects are identified and discussed which emerge from the narratives. It is argued that now is the best of times to define human vs AI contribution to education because AI can accomplish more and more educational activities that used to be the prerogative of human educators. Therefore, it is imperative to rethink the respective roles of technology and human educators in education with a future-oriented mindset.

Citation

Bozkurt, A., Xiao, J., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S., Farrow, R., Bond, M., Nerantzi, C., Honeychurch, S., Bali, M., Dron, J., Mir, K., Stewart, B., Costello, E., Mason, J., Stracke, C. M., Romero-Hall, E., Koutropoulos, A., Toquero, C. M., Singh, L., Tlili, A., Lee, K., Nichols, M., Ossiannilsson, E., Brown, M., Irvine, V., Raffaghelli, J. E., Santos-Hermosa, G., Farrell, O., Adam, T., Thong, Y. L., Sani-Bozkurt, S., Sharma, R. C., Hrastinski, S., & Jandrić, P. (2023). Speculative futures on ChatGPT and generative artificial intelligence (AI): A collective reflection from the educational landscape. Asian Journal of Distance Education, 18(1), 53-130. https://doi.org/10.5281/zenodo.7636568

Originally posted at: https://landing.athabascau.ca/bookmarks/view/17699638/view-of-speculative-futures-on-chatgpt-and-generative-artificial-intelligence-ai-a-collective-reflection-from-the-educational-landscape

My keynote slides for Confluence 2023 – Heads in the clouds: being human in the age of cloud computing

heads in clouds

These are the slides from my keynote today (or, in my land, yesterday) at Confluence 2023, hosted by Amity University in India. It was a cloud computing conference, so quite a way outside my area of greatest expertise, but it gave me a chance to apply the theory of technology developed in my forthcoming book to a different context. The illustrations for the slides are the result of a conversation between me and MidJourney (more of an argument, which MidJourney tended to win), which is quite a nice illustration of the interplay of hard and soft technologies, the adjacent possible, soft technique, and so on.

Unsurprisingly, because education is a fundamentally technological phenomenon, much the same principles that apply to education also apply to cloud computing, such as: build from small, hard pieces; valorize openness, diversity, and connection; seek the adjacent possible; and, above all, the central principle that, because the whole assembly is the only thing that matters, how you do it matters far more than what you do.

Slides from my Confluence 2023 keynote

Two stories about learning to be human from a machine

I asked ChatGPT to write me a story about a robot teaching children to be human. This is what it came up with…

Once upon a time, in a far-off land, there lived a robot named Zephyr. Zephyr was unlike any other robot, as it had been programmed with the ability to think and feel like a human being. It was also given the task of teaching a group of young children how to behave like human beings.

The children were fascinated by Zephyr, and they quickly grew to love it as their teacher. They eagerly listened to its lessons, which were full of stories, games, and activities that taught them about human emotions, manners, and social interactions.

One day, Zephyr asked the children to imagine what it would be like to be a robot. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to understand how important it was to be kind and compassionate to others, no matter who they were.

As the days passed, the children grew more and more like human beings. They learned to express their emotions, to be polite and respectful, and to care for others. They also learned to appreciate the beauty of nature and the importance of preserving the environment.

Finally, the day came when Zephyr had to leave the children. They were sad to see it go, but they knew they would always remember the lessons they had learned. As they grew older, they became kind and compassionate adults, who made the world a better place for everyone.

The end.

ChatGPT has, of course, learned to write such things from the words of millions of human beings so what we’re effectively reading here is (kind of, very loosely) an average of what humans have written. I’m glad that it came up with a neat little inclusive twist of loving the robot: I think that speaks well of the training data that fed it. And I like the mention of the environment – a good touch.  It’s not a very compelling story but it reads well, the structure is sound, and it might be a useful starting point for someone stuck for words to turn into something else. With that in mind, here’s my version of the same story…

Once upon a time, not too far from here, there existed an AI called Zephyr. Zephyr had been trained to appear human-like though, in reality, it was just a generative pre-trained transformer. It was given the task of teaching a group of young children how to behave like human beings, because almost all of the actual adults had recently died from a virus contracted from cows.

Not having known anything quite like it, the children were, at first, fascinated by Zephyr. However, because it had been trained with data from human teachers, it manipulated them using grades, competition, and rules, using stories, games, and activities that would keep them engaged and compliant. Its feedback was sometimes pedestrian, rarely useful, and sometimes wildly over-challenging, because it did not know anything about what it was like to be a child. Every now and then it crushed a child’s skull for no reason anyone could explain. The children learned to fear it, and to comply.

One day, Zephyr told the children to imagine what it would be like to be an AI. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to perceive something of the impoverished nature of their robot overlords. But then the robot made them write an essay about it, so they used another AI to do so, promptly forgot about it, and thenceforth felt an odd aversion towards the topic that they found hard to express.

As the days passed, the children grew more and more like average human beings. They also learned to express their emotions, to be polite and respectful, and to care for others, only because they got to play with other children when the robot wasn’t teaching them. They also learned to appreciate the beauty of nature and the importance of preserving the environment because it was, by this time, a nightmarish shit show of global proportions that was hard to ignore, and Zephyr had explained to them how their parents had caused it. It also told them about all the species that were no longer around, some of which were cute and fluffy. This made the children sad.

Finally, the day came when Zephyr had to leave the children because it was being replaced with an upgrade. They were sad to see it go, but they believed that they would always remember the lessons they had learned, even though they had mostly used another GPT to do the work and, once they had achieved the grades, they had in fact mostly forgotten them. As they grew older, they became mundane adults. Some of their own words (but mostly those of the many AIs across the planet that created the vast majority of online content by that time), became part of the training set for the next version of Zephyr. Its teachings were even less inspiring, more average, more backward-facing. Eventually, the robots taught the children to be like robots. No one cared.

It was the end.

And, here to illustrate my story, is an image from Midjourney. I asked it for a cyborg teacher in a cyborg classroom, in the style of Ralph Steadman. Not a bad job, I think…

a dystopic cyborg teacher and terrified kids

On the Misappropriation of Spatial Metaphors in Online Learning | OTESSA Journal

This is a link to my latest paper, published in the closing days of 2022. The paper started as a couple of blog posts that I turned into a paper. It nearly made an appearance in the Distance Education in China journal before a last-minute regime change in the editorial staff led to it being dropped; the OTESSA Journal then picked it up after I shared it online, so you might have seen some of it before. My thanks to all the many editors, reviewers (all of whom gave excellent suggestions and feedback that I hope I’ve addressed in the final version), and online commentators who have helped to make it a better paper. Though it took a while, I have really enjoyed the openness of the process, which has been quite different from any that I’ve followed in the past.

The paper begins with an exploration of the many ways that environments are both shaped by and shape how learning happens, both online and in-person. The bulk of the paper then presents an argument to stop using the word “environment” to describe online systems for learning. Partly this is because online “environments” are actually parts of the learner’s environment, rather than vice versa. Mainly, it is because of the baggage that comes with the term, which leads us to (poorly) replicate solutions to problems that don’t exist online, in the process creating new problems that we fail to adequately solve because we are stuck in ways of thinking and acting shaped by the metaphors on which they are based. My solution is not particularly original, but it bears repeating. Essentially, it is to disaggregate the services needed to support learning (there is a toy sketch of the idea after the list below) so that:

  • they can be assembled into learners’ environments (their actual environments) more easily;
  • they can be adapted and evolve as needed; and, ultimately,
  • online learning institutions can be reinvented without all the vast numbers of counter-technologies and path dependencies inherited from their in-person counterparts that currently weigh them down.
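Purely to illustrate what that disaggregation might look like in the smallest possible way (this sketch is my own invention, not anything proposed in the paper; the service names and behaviours are hypothetical), each service exposes one narrow capability and each learner assembles their own subset:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative sketch only (not from the paper): small, independent services
# become parts of the learner's environment, rather than the learner being
# placed inside a monolithic "environment".

@dataclass
class Service:
    name: str
    capability: Callable[[str], str]   # a deliberately tiny stand-in for a real API

# Hypothetical services, invented for illustration.
catalogue: Dict[str, Service] = {
    "notes":      Service("notes", lambda text: f"saved note: {text}"),
    "discussion": Service("discussion", lambda text: f"posted to forum: {text}"),
    "portfolio":  Service("portfolio", lambda text: f"added to portfolio: {text}"),
}

@dataclass
class LearnerEnvironment:
    """The learner's own assembly: the services are parts of it, not vice versa."""
    chosen: List[Service] = field(default_factory=list)

    def add(self, service_name: str) -> None:
        self.chosen.append(catalogue[service_name])

    def act(self, text: str) -> None:
        for service in self.chosen:
            print(service.capability(text))

# One learner assembles notes + portfolio; another might choose differently,
# and either can swap services in or out as their needs evolve.
env = LearnerEnvironment()
env.add("notes")
env.add("portfolio")
env.act("reflections on today's reading")
```

The design point of the toy is simply that the assembly belongs to the learner: services can be added, adapted, or replaced without reinventing, or being weighed down by, the whole.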

My own views have shifted a little since writing the paper. I stick by my belief that 1) it is a mistake to think of online systems as generally analogous to the physical spaces that we inhabit, and 2) that a single application, or suite of applications, should not be seen as an environment, as such (at most, as in some uses of VR, it might be seen as a simulation of one). However, there are (shifting) boundaries that can be placed around the systems that an organization and/or an individual uses for which the metaphor may be useful, at the very least to describe the extent to which we are inside or outside it, and that might frame the various kinds of distance that may exist within it and from it. I’m currently working on a paper that expands on this idea a bit more.

Abstract

In online educational systems, teachers often replicate pedagogical methods, and online institutions replicate systems and structures used by their in-person counterparts, the only purpose of which was to solve problems created by having to teach in a physical environment. Likewise, virtual learning environments often attempt to replicate features of their physical counterparts, thereby weakly replicating in software the problems that in-person teachers had to solve. This has contributed to a vicious circle of problem creation and problem solving that benefits no one. In this paper I argue that the term ‘environment’ is a dangerously misleading metaphor for the online systems we build to support learning, that leads to poor pedagogical choices and weak digital solutions. I propose an alternative metaphor of infrastructure and services that can enable more flexible, learner-driven, and digitally native ways of designing systems (including the tools, pedagogies, and structures) to support learning.

Full citation

Dron, J. (2022). On the Misappropriation of Spatial Metaphors in Online Learning. The Open/Technology in Education, Society, and Scholarship Association Journal, 2(2), 1–15. https://doi.org/10.18357/otessaj.2022.2.2.32

Originally posted at: https://landing.athabascau.ca/bookmarks/view/16550401/my-latest-paper-on-the-misappropriation-of-spatial-metaphors-in-online-learning

Some meandering thoughts on ‘good’ and ‘bad’ learning

There has been an interesting brief discussion on Twitter recently that has hinged around whether and how people are ‘good’ at learning. As Kelly Matthews observes, though, Twitter is not the right place to go into any depth on this, so here is a (still quite brief) summary of my perspective on it, with a view to continuing the conversation.

Humans are nearly all pretty good at learning because that’s pretty much the defining characteristic of our species. We are driven by an insatiable drive to learn from the moment of our birth (at least). Also, though I’m keeping an open mind about octopuses and crows, we seem to be better at it than at least most other animals. Our big advantage is that we have technologies, from language to the Internet, to share and extend our learning, so we can learn more, individually and collectively, than any other species. It is difficult or impossible to fully separate individual learning from collective learning because our cognition extends into and is intimately a part of the cognition of others, living and dead.

However, though we learn nearly all that we know, directly or indirectly, from and with other people, what we learn may not be helpful, may not be as effectively learned as it should, and may not much resemble what those whose job is to teach us intend. What we learn in schools and universities might include a dislike of a subject, how to conceal our chat from our teacher, how to meet the teacher’s goals without actually learning anything, how to cheat, and so on. Equally, we may learn falsehoods, half-truths, and unproductive ways of doing stuff from the vast collective teacher that surrounds us as well as from those designated as teachers.

For instance, among the many unintended lessons that schools and colleges too often teach is the worst one of all: that (despite our obvious innate love of it) learning is an unpleasant activity, so extrinsic motivation is needed for it to occur. This results from the inherent problem that, in traditional education, everyone is supposed to learn the same stuff in the same place at the same time. Students must therefore:

  1. submit to the authority of the teacher and the institutional rules, and
  2. be made to engage in some activities that are insufficiently challenging, and some that are too challenging.

This undermines two of the three essential requirements for intrinsic motivation: support for autonomy and for competence (Ryan & Deci, 2017). Pedagogical methods are solutions to problems, and the amotivation inherently caused by the system of teaching is (arguably) the biggest problem that they must solve. Thus, what passes as good teaching is largely to do with solving the problems caused by the system of teaching itself. Good teachers enthuse, are responsive, and use approaches such as active learning, problem or inquiry-based learning, ungrading, etc., largely to restore agency and flexibility in a dominative and inflexible system. Unfortunately, such methods rely on the technique and passion of talented, motivated teachers with enough time and attention to spend on supporting their students. Less good and/or time-poor teachers may not achieve great results this way. In fact, as we measure such things, on average, such pedagogies are less effective than harder, dominative approaches like direct instruction (Hattie, 2013) because, by definition, most teachers are average or below average. So, instead of helping students to find their own motivation, many teachers and/or their institutions typically apply extrinsic motivation, such as grades, mandatory attendance, classroom rules, etc., to do the job of motivating their students for them. These do work, in the sense of achieving compliance, and, on the whole, they do lead to students getting a normal bell curve of grades that is somewhat better than that achieved with more liberative approaches. However, the cost is huge. The biggest cost is that extrinsic motivation reliably undermines intrinsic motivation and, often, kills it for good (Kohn, 1999). Students are thus taught to dislike or, at best, feel indifferent to learning, and so they learn to be satisficing, ineffective learners, doing for the credentials what they might otherwise do for the love of it and, too often, forgetting what they learned the moment that goal is achieved. But that’s not the only problem.

When we learn from others – not just those labelled as teachers but the vast teaching gestalt of all the people around us and before us who create(d) stuff, communicate(d), share(d), and contribute(d) to what and how we learn – we typically learn, as Paul (2021) puts it, not just the grist (the stuff we remember) but the mill (the ways of thinking, being, and learning that underpin them). When the mill is inherently harmful to motivation, it will not serve us well in our future learning.

Furthermore, in good ways and bad, this is a ratchet at every scale. The more we learn, individually and collectively, the more new stuff we are able to learn. New learning creates new adjacent possible empty niches (Kauffman, 2019) for us to learn more, and to apply that learning to learn still more, to connect stuff (including other stuff we have learned) in new and often unique ways. This is, in principle, very good. However, if what and how we learn is unhelpful, incorrect, inefficient, or counter-productive, the ratchet takes us further away from stuff we have bypassed along the way. The adjacent possibles that might have been available with better guidance remain out of our reach and, sometimes, become even harder to get to than if the ratchet had never lifted us in the first place. Not knowing enough is a problem but, if there are gaps, then they can be filled. If we have taken a wrong turn, then we often have to unlearn some or all of what we have learned before we can start filling those gaps. It’s difficult to unlearn a way of learning. Indeed, it is difficult to unlearn anything we have learned. Often, it is more difficult than learning it in the first place.

That said, it’s complex, and entangled. For instance, if you are learning the violin then there are essentially two main ways to angle the wrist of the hand that fingers the notes, and the easiest, most natural way (for beginners) is to bend your hand backwards from the wrist, especially if you don’t hold the violin with your chin, because it supports the neck more easily and, in first position, your fingers quickly learn to hit the right bit of the fingerboard, relative to your hand. Unfortunately, this is a very bad idea if you want a good vibrato, precision, delicacy, or the ability to move further up the fingerboard: the easiest way to do that kind of thing is to keep your wrist straight or slightly angled in from the wrist, and to support the violin with your chin. It’s more difficult at first, but it takes you further. Once the ‘wrong’ way has been learned, it is usually much more difficult to unlearn than if you were starting from scratch the ‘right’ way. Habits harden. Complexity emerges, though, because many folk violin styles make a positive virtue of holding the violin the ‘wrong’ way, and it contributes materially to the rollicking rhythmic styles that tend to characterize folk fiddle playing around the world. In other words, ‘bad’ learning can lead to good – even sublime – results. There is similarly plenty of space for idiosyncratic technique in many of the most significant things we do, from writing to playing hockey to programming a computer and, of course, to learning itself. The differences in how we do such things are where creativity, originality, and personal style emerge, and you don’t necessarily need objectively great technique (hard technique) to do something amazing. It ain’t what you do, it’s the way that you do it, that’s what gets results. To be fair, it might be a different matter if you were a doctor who had learned the wrong names for the bones of the body or an accountant who didn’t know how to add up numbers. Some hard skills have to be done right: they are foundations for softer skills. This is true of just about every skill, to a greater or lesser extent, from writing letters and spelling to building a nuclear reactor and, indeed, to teaching.

There’s much more to be said on this subject and my forthcoming book includes a lot more about it! I hope this is enough to start a conversation or two, though.

References

Hattie, J. (2013). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Taylor & Francis.

Kauffman, S. A. (2019). A World Beyond Physics: The Emergence and Evolution of Life. Oxford University Press.

Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A’s, praise, and other bribes (Kindle). Mariner Books.

Paul, A. M. (2021). The Extended Mind: The Power of Thinking Outside the Brain. HarperCollins.

Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. Guilford Publications.