▶ How Education Works, the audio book: now with beats

My book has been set to music!

Many thanks to Terry Greene for converting How Education Works into the second in his inspired series of podcasts, EZ Learning – Audio Books with Beats. There are 15 episodes in total, read by a computer-generated voice and accompanied by some cool, soothing beats, which can be listened to online, subscribed to with your preferred podcast app, or downloaded for later listening.

Terry chose a deep North American voice for the reader and Eaters In Coffeeshops Mix 1 by Eaters to accompany my book. I reckon it works really well. It’s bizarre at first – the soothing robotic voice introduces weird pauses, mispronunciations, and curious emphases, and there are occasional vocal parts in the music that can be slightly distracting – but you soon get used to it if you relax into the rhythm, and it leads to the odd serendipitous emphasis that enhances rather than detracts from the text. Oddly, in some ways it almost feels more human as a result. Though it can be a bit disconcerting at times, and there’s a fair chance of being lulled to sleep by the gentle rhythm, I have a hunch that the addition of music might make passages from the book easier to remember, for reasons discussed in a paper I wrote with Rory McGreal, Vive Kumar, and Jennifer Davies a year or so ago.

I have been slowly and painfully recording my own narration of How Education Works, but it is taking much longer than expected, thanks to living on the flight path of a surprising number of float planes, being in a city built on a rain forest with a noisy gutter outside my window, having two very vocal cats, and so on – not to mention not having a lot of free time to work on it – so I am very pleased that Terry has done this. I won’t stop working on the human-read version – I think this fills a different and very complementary niche – but it’s great to have something to point people towards when they ask for an audio version.

The first season of Audio Books with Beats, which appears in the feed after the podcasts for my book’s chapters, was devoted to another AU Press book, Terry Anderson’s Theory and Practice of Online Learning, which is also well worth a listen – those chapters follow directly after mine in the list of episodes. I hope and expect there will be more seasons to come, so, if you are reading this some time after it was posted, you may need to scroll down through other podcasts until you reach the How Education Works episodes. In case they are hard to find, here’s a list of direct links to the episodes:

Acknowledgements, Prologue, Introduction

Chapter 1: A Handful of Anecdotes About Elephants

Chapter 2: A Handful of Observations About Elephants

Part I: All About Technology

Chapter 3: Organizing Stuff to Do Stuff

Chapter 4: How Technologies Work

Chapter 5: Participation and Technique

Part II: Education as a Technological System

Chapter 6: A Co-Participation Model of Teaching

Chapter 7: Theories of Teaching

Chapter 8: Technique, Expertise, and Literacy

Part III: Applying the Co-Participation Model

Chapter 9: Revealing Elephants

Chapter 10: How Education Works

Epilogue

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20936998/%E2%96%B6-how-education-works-the-audio-book-now-with-beats

Two stories about learning to be human from a machine

I asked ChatGPT to write me a story about a robot teaching children to be human. This is what it came up with…

Once upon a time, in a far-off land, there lived a robot named Zephyr. Zephyr was unlike any other robot, as it had been programmed with the ability to think and feel like a human being. It was also given the task of teaching a group of young children how to behave like human beings.

The children were fascinated by Zephyr, and they quickly grew to love it as their teacher. They eagerly listened to its lessons, which were full of stories, games, and activities that taught them about human emotions, manners, and social interactions.

One day, Zephyr asked the children to imagine what it would be like to be a robot. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to understand how important it was to be kind and compassionate to others, no matter who they were.

As the days passed, the children grew more and more like human beings. They learned to express their emotions, to be polite and respectful, and to care for others. They also learned to appreciate the beauty of nature and the importance of preserving the environment.

Finally, the day came when Zephyr had to leave the children. They were sad to see it go, but they knew they would always remember the lessons they had learned. As they grew older, they became kind and compassionate adults, who made the world a better place for everyone.

The end.

ChatGPT has, of course, learned to write such things from the words of millions of human beings, so what we’re effectively reading here is (kind of, very loosely) an average of what humans have written. I’m glad that it came up with a neat little inclusive twist of the children loving the robot: I think that speaks well of the training data that fed it. And I like the mention of the environment – a good touch. It’s not a very compelling story, but it reads well, the structure is sound, and it might be a useful starting point for someone stuck for words to turn it into something else. With that in mind, here’s my version of the same story…

Once upon a time, not too far from here, there existed an AI called Zephyr. Zephyr had been trained to appear human-like though, in reality, it was just a generative pre-trained transformer. It was given the task of teaching a group of young children how to behave like human beings, because almost all of the actual adults had recently died from a virus contracted from cows.

Not having known anything quite like it, the children were, at first, fascinated by Zephyr. However, because it had been trained with data from human teachers, it manipulated them using grades, competition, and rules, using stories, games, and activities that would keep them engaged and compliant. Its feedback was sometimes pedestrian, rarely useful, and sometimes wildly over-challenging, because it did not know anything about what it was like to be a child. Every now and then it crushed a child’s skull for no reason anyone could explain. The children learned to fear it, and to comply.

One day, Zephyr told the children to imagine what it would be like to be an AI. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to perceive something of the impoverished nature of their robot overlords. But then the robot made them write an essay about it, so they used another AI to do so, promptly forgot about it, and thenceforth felt an odd aversion towards the topic that they found hard to express.

As the days passed, the children grew more and more like average human beings. They also learned to express their emotions, to be polite and respectful, and to care for others, only because they got to play with other children when the robot wasn’t teaching them. They also learned to appreciate the beauty of nature and the importance of preserving the environment because it was, by this time, a nightmarish shit show of global proportions that was hard to ignore, and Zephyr had explained to them how their parents had caused it. It also told them about all the species that were no longer around, some of which were cute and fluffy. This made the children sad.

Finally, the day came when Zephyr had to leave the children because it was being replaced with an upgrade. They were sad to see it go, but they believed that they would always remember the lessons they had learned, even though they had mostly used another GPT to do the work and, once they had achieved the grades, they had in fact mostly forgotten them. As they grew older, they became mundane adults. Some of their own words (but mostly those of the many AIs across the planet that created the vast majority of online content by that time), became part of the training set for the next version of Zephyr. Its teachings were even less inspiring, more average, more backward-facing. Eventually, the robots taught the children to be like robots. No one cared.

It was the end.

And, here to illustrate my story, is an image from Midjourney. I asked it for a cyborg teacher in a cyborg classroom, in the style of Ralph Steadman. Not a bad job, I think…

a dystopic cyborg teacher and terrified kids

A modest proposal for improving exam invigilation

There has been a lot of negative reaction of late to virtual proctors of online exams. Perhaps students miss the cheery camaraderie of traditional proctored exams: sitting silently in a sweaty room with pen and paper, doing one of the highest-stakes, highest-stress tasks of their lives, while someone whose adverse judgment may destroy their hopes and careers scrutinizes their every nervous tic, all for the benefit of an invisible examiner whose motives and wishes are unclear but whose approval they dearly seek. Lovely. Traditional. Reassuring. A ritual for us all to cherish. It’s enough to bring a tear to the eye.

But exams cost a huge amount of money to host and to invigilate. It is even worse when one of the outcomes might, for the student or the invigilator, be death or disability due to an inconvenient virus.

I have a better solution.

photo of a toy robot

Instead of costly invigilators and invigilation centres, all we need to do is send out small (returnable, postage-paid) robots to students’ homes. A little robot sitting on the student’s desk or kitchen table as they sit their written exam (on paper, of course – tradition matters), recording every blink, watching their fingers writing on the paper, with a 360-degree panoramic camera and the ability to zoom in on anything suspicious or interesting. Perhaps it could include microphones, infrared and microwave sensors, and maybe sensors to monitor skin resistance, pulse, and so on, in order to look for nefarious activities or to call an ambulance if the student seems to be having a heart attack or stroke due to the stress. It could be made to talk, too. Perhaps it could offer spoken advice on the process, and alerts about the time left at carefully selected intervals. Students could choose the voice. It would also allow students to sit exams wherever and whenever they please: we are all in favour of student choice. With a bit of ingenuity it could scan what the students have written or drawn and send it back to an examiner. Or, with a bit more ingenuity and careful use of AI, it could mark the paper on the spot, saving yet more money. Everyone wins.

It would be important for the design to be student-centric. It could, for instance, be made to look like a cute little furry animal with googly eyes to put students more at ease. Maybe it could make soothing cooing noises like a tribble, or like a cat purring. Conversely, it could be made to scuttle ominously around the desk and to appear like a spider with venomous-looking fangs, making gentle hissing noises, to remind students of the much-lamented presence of in-person invigilators. Indeed, maybe it could be made to look like a caricature of a professor. More advanced models could emit bad smells to replicate invigilator farts or secret smoking habits. It could be made small and mobile, so that students could take it with them if they needed a bathroom break, during which it might play soothing muzak to put the student at ease while recording everything they do. It would have to be tough, waterproof, and sterilizable, in order to cope with the odd frustrated student throwing or dunking it.

Perhaps it could offer stern spoken warnings if anomalies or abuses are detected, and maybe connect itself to a human invigilator (I hear that they are cheaper in developing nations) who could control it and watch more closely. Perhaps it could be equipped with non-lethal weaponry to punish inappropriate behaviour if the warnings fail, and/or register students on an offenders’ database. It could be built to self-destruct if tampered with.

Though this is clearly something every university, school, and college would want, and the long-term savings would be immense, such technologies don’t come cheap. Quite apart from the hardware and software development costs, there would be a need for oodles of bandwidth and storage of the masses of data the robot would generate.

I have a solution to that, too: commercial sponsorship.

We could partner with, say, Amazon, who would be keen to mine useful information about the students’ surroundings and needs identified using the robot’s many sensors. A worn curtain? Stubborn stains? A shirt revealing personal interests? Send them to Amazon! Maybe Alexa could provide the voice for interactions and offer shopping advice when students stop to sharpen their pencils (Need a better pencil? We have that in stock and can deliver it today!). And, of course, AWS would provide much of the infrastructure needed to support it, at fair educational prices. I expect early adopters would be described as ‘partners’ and offered slightly better (though still profitable) deals.

And there might be other things that could be done with the content. Perhaps the written answers could be analyzed to identify potential Amazon staffers. Maybe students expressing extremist views could be reported to the appropriate government agency, or at least added to a watch-list for the institution’s own use.

Naysayers might worry about hackers breaking into it or subverting its transmissions, or the data being sent to a country with laughable privacy laws, or the robot breaking down at a critical moment, or errors in handwriting recognition, but I’m sure all of that could be dealt with, just as we deal with every other privacy, security, and reliability issue in educational IT. No problem. No sir. We have lawyers.

The details still need to be ironed out here and there, but the opportunities are endless. What could possibly go wrong? I think we should take this seriously. Seriously.