Here are the slides from my talk today at OTESSA ’23. Technological distance is a way of understanding distance that fits with modern complexivist models of learning such as Connectivism, Heutagogy, Networks/Communities of Practice/Rhizomatic Learning, and so on. In such a model, there are potentially thousands of distances – whether understood as psychological, transactional, social, cognitive, physical, temporal, or whatever – so conventional views of distance as a gap between learner and teacher (or institution or other students) are woefully inadequate.
I frame technological distance as a gap between technologies learners have (including cognitive gadgets, skills, techniques, etc as well as physical, organizational, or procedural technologies) and those they need in order to learn. It is a little bit like Vygotsky’s Zone of Proximal Development but re-imagined and extended to incorporate all the many technologies, structures, and people who may be involved in the teaching gestalt.
The model of technology that I use to explain the idea is based on the coparticipation perspective presented in my book that, with luck, should be out within the next week or two. The talk ends with a brief discussion of the main implications for those whose job it is to teach.
Thanks to MidJourney for collaborating with me to produce the images used in the slides.
Dave Cormier is a wonderfully sideways-thinking writer, such as in this recent discussion of the myth of learning styles. Dave’s post is not mainly about learning style theories, as such, but the nature and value of myth. As he puts it, myth is “a way we confront uncertainty” and the act of learning with others is, and must be, filled with uncertainty.
The fact that stuff doesn’t have to be true to be useful plays an important role in my latest book, too, and I have an explanation for that. The way I see it, learning style theories are (not metaphorically but actually) technologies that orchestrate observations about differences in the ways people learn in order to explain and predict differences in the effects of different methods of teaching. Most importantly, they are generative: they say how things should and shouldn’t be done. As such, they are components that we can assemble with other technologies that help people to learn. In fact, that is the only way they can be used: they make no sense without an instantiation. What matters is therefore not whether they make sense, but whether they can play a useful role in the whole assembly. Truth or falsehood doesn’t come into it, any more than it does (except metaphorically) for a computer or a car (is a computer true?).

It is true that, if the phenomena you are orchestrating happen to be the findings and predictions of science (or logic, for that matter), then how they are used often does matter. If you are building a bridge then you really want your calculations about stresses and loads to be pretty much correct. On the other hand, people built bridges long before such calculations were possible. Similarly, bows and arrows evolved to be highly optimized – as good as or better than modern engineering could produce – despite false causal reasoning. Learning styles are the same. You can use any number of objectively false or radically incomplete theories (and, given the many scores of such theories that have been developed, most of them are pretty much guaranteed to be one or both) and they can still result in better teaching.
For all that the whole is the only thing that really matters, sometimes the parts can be positively harmful, to the point that they may render the whole harmful too. For instance, a pedagogy that involves physical violence or that uses threats/rewards of any kind (grades, say) will, at best, make it considerably harder to make the whole assembly work well. As Dave mentions, the same is true of telling people that they have a particular learning style. As long as you are just using such theories to help design or enact better learning experiences, they are quite harmless and might even be useful but, as soon as you tell learners that they have a learning style, you have a whole lot of fixing to do.
If you are going to try to build a learning activity out of harmful parts then there must be other parts of the assembly that counter the harm. This is not unusual. The same is true of most if not all technologies. As Virilio put it, “when you invent the ship, you invent the shipwreck”. It’s the Faustian bargain that Postman spoke of: solving problems with a technology almost invariably creates new problems to be solved. This is part of the dynamic that leads to complexity in any technological system, from a jet engine to a bureaucracy. Technologies evolve to become more complex (partly) because we create counter-technologies to deal with the harm caused by them. You can take the bugs out of the machine, but the machine may, in assembly with others, itself be a bug, so the other parts must compensate for its limitations. It’s a dynamic process of reaching a metastable but never final state.
Unlike bows and arrows, there is no useful predictive science of teaching, though teaching can use scientific findings as parts of its assembly (at the very least because there are sciences of learning), just as there is no useful predictive science of art, though we can use scientific findings when making it. In both activities, we can also use stories, inventions, beliefs, values, and many other elements that have nothing to do with science or its findings. It can be done ‘badly’, in the sense of not conforming to whatever standards of perfection apply to any given technique that is part of the assembly, and it may still be a work of genius. What matters is whether the whole works out well.
At a more fundamental level, there can be no useful science of teaching (or of art) because the whole is non-ergodic: the number of possible states vastly outnumbers, by many, many orders of magnitude, the number of states that can actually be visited. Even if the universe were to continue for a trillion times the billions of years that it has already existed, and it were a trillion times the size it seems to be now, those states would almost certainly never repeat. What matters are the many, many acts of creation (including those of each individual learner) that constitute the whole. And the whole constantly evolves, each part building on, interacting with, incorporating, or replacing what came before, creating both path dependencies and new adjacent possible empty niches that deform the evolutionary landscape for everything in it.

This is, in fact, one of the reasons that learning style theories are so hard to validate. There are innumerable other parts of the assembly that matter, most of which depend on the soft technique of those creating or enacting them, which varies every time, just as you have probably never written your signature in precisely the same way twice. The implementation of different ways of teaching according to assumed learning styles can be done better or worse, too, so the chances of finding consistent effects are very limited. Even if any are found in a limited set of use cases (say, memorizing facts for an SAT), they cannot usefully predict future effects for any other use case. Indeed, even if there were statistically significant effects across multiple contexts, that would tell us little or nothing of value for this inherently novel context. However, like almost all attempts to research whether students, on average, learn better with or without [insert technology of interest here], there will most likely be no significant difference, because so many other technologies matter as much or more.
There is no useful predictive science of teaching, because teaching is an assembly of technologies, and not only does the technique of an individual teacher matter, but also the soft technique of potentially thousands of other individuals who made contributions to the whole. It’s uncertain, and so we need myths to help make sense of our particular, never-to-be-repeated context. Truth doesn’t come into it.
This is a remarkable paper, published in the Asian Journal of Distance Education, written by 35 remarkable people from all over the world and me. It was led by the remarkable Aras Bozkurt, who pulled all 36 of us together and wrote much of it in the midst of personal tragedy and the aftermath of a devastating earthquake. The research methodology was fantastic: Aras got each of us to write two 500-word pieces of speculative fiction, presenting positive and negative futures for generative AI in education. The themes that emerged from them were then condensed in the conventional part of the paper, which we worked on together using Google Docs. It took less than 50 days from the initial invitation on January 22 to the publication of the paper. As Eamon Costello put it, “It felt like being in a flash mob of top scholars.” At 130 pages it is more of a book than a paper, but most of it consists of those stories/poems/plays, many of which are great stories in their own right. They make good bedtime reading.
While ChatGPT has recently become very popular, AI has a long history and philosophy. This paper intends to explore the promises and pitfalls of the Generative Pre-trained Transformer (GPT) AI and potentially future technologies by adopting a speculative methodology. Speculative future narratives with a specific focus on educational contexts are provided in an attempt to identify emerging themes and discuss their implications for education in the 21st century. Affordances of (using) AI in Education (AIEd) and possible adverse effects are identified and discussed which emerge from the narratives. It is argued that now is the best of times to define human vs AI contribution to education because AI can accomplish more and more educational activities that used to be the prerogative of human educators. Therefore, it is imperative to rethink the respective roles of technology and human educators in education with a future-oriented mindset.
Bozkurt, A., Xiao, J., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S., Farrow, R., Bond, M., Nerantzi, C., Honeychurch, S., Bali, M., Dron, J., Mir, K., Stewart, B., Costello, E., Mason, J., Stracke, C. M., Romero-Hall, E., Koutropoulos, A., Toquero, C. M., Singh, L., Tlili, A., Lee, K., Nichols, M., Ossiannilsson, E., Brown, M., Irvine, V., Raffaghelli, J. E., Santos-Hermosa, G., Farrell, O., Adam, T., Thong, Y. L., Sani-Bozkurt, S., Sharma, R. C., Hrastinski, S., & Jandrić, P. (2023). Speculative futures on ChatGPT and generative artificial intelligence (AI): A collective reflection from the educational landscape. Asian Journal of Distance Education, 18(1), 53-130. https://doi.org/10.5281/zenodo.7636568
Originally posted at: https://landing.athabascau.ca/bookmarks/view/17699638/view-of-speculative-futures-on-chatgpt-and-generative-artificial-intelligence-ai-a-collective-reflection-from-the-educational-landscape
I am pleased to announce my latest paper, published openly in the Journal of Open, Flexible and Distance Learning, which has long been one of my favourite distance and ed tech journals.
The paper starts with an abbreviated argument about the technological nature of education drawn from my forthcoming book, How Education Works, zooming in on the distributed teaching aspect of that, leading to a conclusion that the notion of “distance” as a measure of the relationship between a learner and their teacher/institution is not very useful when there might be countless teachers at countless distances involved.
I go on to explore a number of alternative ways we might conceptualize distance, some familiar, some less so, not so much because I think they are any better than (say) transactional distance, but to draw attention to the complexity, fuzziness, and fragility of the concept. However, I find some of them quite appealing: I am particularly pleased with the idea of inverting the various presences in the Community of Inquiry model (and extensions of it). Teaching, cognitive, and social (and emotional and agency) distances and presences essentially measure the same things in the same way, but the shift in perspective subtly changes the narratives we might build around them. I could probably write a paper on each kind of distance I provide, but each gets a paragraph or two because what it is all leading towards is an idea that I think has some more useful legs: technological distance.
I’m still developing this idea, and have just submitted another paper that tries to unpack it a bit more, so don’t expect something fully-formed just yet – I welcome discussion and debate on its value, meaning, and usefulness. Basically, technological distance is a measure of the gaps that the learner has to fill, between the technologies they have (including cognitive tools in learners’ own minds, what teachers orchestrate, textbooks, digital tools, etc, etc) and those they need in order to learn something. This is not just about the subject matter – it’s about the mill (how we learn) as well as the grist (what we learn). There are lots of ways to reduce that distance, many of which are good for learning, but some of which undermine it by effectively providing what Dave Cormier delightfully describes as autotune for knowledge. The technologies provide the knowledge so learners don’t have to engage with or connect it themselves. This is not always a bad thing – architects may not need drafting skills, for instance, if they are only ever going to use CAD, memorization of facts easily discovered might not always be essential, and we will most likely see ubiquitous generative AI as part of our toolset now and in the future – but choosing what to learn is one reason teachers (who/whatever they are) can be useful. Effective teaching is about making the right things soft so the process itself teaches. However, as what needs to be soft is different for every person on the planet, we need to make learning (of ourselves or others) visible in order to know that. It’s not science – it’s technology. That means that invention, surprise, creativity, passion, and many other situated things matter.
My paper is nicely juxtaposed in the journal with one from Simon Paul Atkinson, which addresses definitions of “open”, “distance” and “flexible” that, funnily enough, was my first idea for a topic when I was invited to submit my paper. If you read both, I think you’ll see that Simon and I might see the issue quite differently, but his is a fine paper making some excellent points.
The “distance” in “distance learning”, however it is defined, normally refers to a gap between a learner and their teacher(s), typically in a formal context. In this paper I take a slightly different view. The paper begins with an argument that teaching is fundamentally a technological process. It is, though, a vastly complex, massively distributed technology in which the most important parts are enacted idiosyncratically by vast numbers of people, both present and distant in time and space, who not only use technologies but also participate creatively in their enactment. Through the techniques we use we are co-participants in not just technologies but the learning of ourselves and others, and hence in the collective intelligence of those around us and, ultimately, that of our species. We are all teachers. There is therefore not one distance between learner and teacher in any act of deliberate learning, but many. I go on to speculate on alternative ways of understanding distance in terms of the physical, temporal, structural, agency, social, emotional, cognitive, cultural, pedagogical, and technological gaps that may exist between learners and their many teachers. And I conclude with some broad suggestions about ways to reduce these many distances.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/17293757/my-latest-paper-technology-teaching-and-the-many-distances-of-distance-learning-journal-of-open-flexible-and-distance-learning
The undergraduate students union, Canadian Union of Public Employees, and Athabasca University professional and faculty association have now all come out with strongly worded public statements protesting the recent firing of Peter Scott and the process used to pick and hire the new president of AU. Here they are:
We may never know for certain whether this is not an outrageous lie. Perhaps the minister had amnesia, or was drugged; perhaps space aliens took the minister’s form to approach the board chair; maybe it was Russians using technology to imitate his voice on the phone; maybe he is a pawn in someone else’s game, some shady figure who is really calling all the shots; perhaps his mind has decayed to the point that he was entirely unconscious of his influence; maybe he just muttered “who will rid me of this troublesome president” under his breath without realizing he was within earshot of Byron Nelson. We may never know.
However, the fact that he fired the incumbents then hired a board chair and board majority composed entirely of his friends and cronies, only one of whom knew the faintest thing about education, clashed publicly with Peter Scott, and threatened the university with bankruptcy if his demands were not met casts a small shadow of doubt over not just the truthfulness but even the truthiness of his statement. On the other hand, politicians never lie, so there’s that.
Mr. Nelson conceded not all governors had registered their vote before the outcome was determined. “The way that this was conducted, while legal, I would acknowledge was not best practices,” said Mr. Nelson, who is a lawyer. “It wasn’t best practices and it couldn’t be best practices.” The process was less than ideal because the situation was “unique” and required an “extreme amount of confidentiality,” Mr. Nelson said.
Why? Seriously, why? Nelson quite accurately claims:
“This was not a close vote,” he said. “It was the overwhelming decision of the board.”
It probably was an overwhelming decision, given the fact that Nicolaides’s appointed cronies overwhelm the board, and that they were effectively the only ones voting. The rest of the board – representatives of faculty, tutors, and students – did not have a meaningful chance to vote: the couple who did vote (weeks after the new president had been recruited, and on the day of the firing) were forced to abstain because of the complete lack of consultation or explanation.
Back to Nicolaides:
Demetrios Nicolaides, Alberta’s Advanced Education Minister, said in a statement that it was his understanding that bylaws were followed, but any board members who feel the rules were breached should raise the issue with the chair.
“I’m confident if there are any issues that the board can adequately resolve them.”
One has to wonder where this non-interfering politician gets his confidence. Perhaps he has been consulting with a lawyer.
I, like many staff and students, have been deeply shaken and outraged by recent events at Athabasca University. This is a petition by me and Simon Buckingham Shum, of the University of Technology Sydney, Australia, to protest the blatant interference by the Albertan government in the affairs of AU over the past year, which culminated in the firing of its president, Professor Peter Scott, without reason or notice. Even prior to this, the actions of the Albertan government had been described by Glen Jones (Professor of Higher Education, University of Toronto) as “the most egregious political interference in a public university in Canada in more than 100 years”. This was an assault on our university, an assault on the very notion of a public university, and it sets a disturbing precedent that cannot stand unopposed.
We invite you to view this brief summary, and consider signing this petition to signal your concern. Please feel more than free to pass this on to anyone and everyone – it is an international petition that has already been signed by many, both within and beyond the AU community.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/17102318/petition-%C2%B7-athabasca-university-oppose-direct-political-interference-in-universities-%C2%B7-changeorg
You may have heard that the president of Athabasca University, Peter Scott, was replaced yesterday with Alex Clark, erstwhile Dean of the Faculty of Health Disciplines at AU.
This was a complete surprise to everyone at AU (apart from Alex), very much including Peter. None of the members of the executive team, including the provost, knew of it in advance. I gather that the secret was kept even from academic members of the Board of Governors: it was, it seems, presented to them as a done deal, on the day it happened. From the reactions I saw when it was announced, student board members may not even have known about it until that point. It was therefore – presumably – voted on in secret by the unholy cabal of governors who were appointed by the minister of advanced education last year, after the rest were sacked or forced to resign, and who make up the majority of the board. Essentially, Minister Nicolaides just fired our president.
The same seems to be true for the hiring of our new president. Although Alex had been a strong candidate when Peter got the job, and he is well qualified for the role, there are some serious questions to be asked about the appointment process, in which it appears that none of those voting had any involvement in the original appointment, no one asked the opinions of academics on the original hiring committee, and no one even asked the opinions of the academics on the board itself. This, like Peter’s dismissal, can only be seen as political. And it is not an interim appointment, unlike that of his successor as Dean of FHD.
Peter was fired over the phone (ironic that this was done virtually by those who oppose our virtual strategy) without notice or explanation. The timing of his firing, a few days after an agreement was signed that, despite the Albertan government’s best efforts, has largely been seen by the press as a win for Peter (it was a loss, but a manageable loss), seems hardly coincidental. When all else failed, they stabbed him in the back when he was as down as anyone could be. Peter had in fact been away following the sudden death of his wife, which occurred very shortly after her diagnosis with cancer at the end of last year. She had been buried abroad, 8 working days before he was fired. It is hard to imagine how he is feeling right now, but tears well up just thinking about it. All of this was well known to the board and to the minister. The moment was chosen with intent and malice. This was monstrous in the extreme.
It should have been so very different.
When Peter came to AU, not much more than a year ago, I cried tears of happiness. This was the leader we needed at the time we needed him: a brilliant, dynamic, imaginative, compassionate, principled man who had played a key role as a leader in transforming not just his prior institutions but the field of online and distance learning itself. Now, I cry tears of anger, outrage, and sadness. Peter could have transformed the university into something magnificent, and I believe he would have done so were it not for the utterly outrageous behaviour of the Albertan government. They fomented the union unrest into which Peter was thrust from the moment he arrived and then, over the last year, have outrageously and heavy-handedly directly meddled in the university’s affairs, against which Peter rightly and courageously fought. Peter’s assumption was, perhaps, that Alberta was like most of the rest of the world in recognizing academic freedoms, autonomy, and rights as sacrosanct. I don’t think he fully realized, at that point, that Alberta is not like that. It has a philistine government run by corrupt little despots, sponsored by corporations whose main activity is violence against the planet (this applies to most of the board of governors, as it happens). Going up against the Albertan government and, especially, appearing in the eyes of the world to win the fight, is like going up against a particularly nasty, stupid, and vindictive gang of playground bullies. Peter never had a chance to focus on the things he needed to focus on, because he was being pummelled on all sides by thugs the entire time he was with us.
Whatever happens next, AU will not be the university it could have been. The government has forced us to make 15% cuts this year, and we were already too close to the bone, cutting into it in places. We have already lost a good portion of the best executive team ever to lead us and we are very likely to lose more. The government-appointed governors, none of whom have the slightest understanding of our institution, have shown themselves to be nothing but lackeys for a morally bankrupt and abhorrent minister, willing to stop at nothing to achieve ends that have nothing to do with the well-being of the university. The union’s actions, which were deeply divisive and at least partly engineered by the government, continue to divide us. The half-hearted, hasty, and poorly implemented near-virtual plan (which was in progress before Peter’s arrival and which played a major role in the union strife) continues to cause major problems, most notably failing to address communication needs, so dividing us further. Perhaps most challengingly, we are halfway through the biggest transformation that has ever occurred in the university’s history, from which we are unable to back away without enormous cost, but with a diminishing number of leaders and champions who can make it happen. Now we have a president who was (at least in part) chosen because of his willingness to live in Athabasca, which is a truly terrible idea about which I have written extensively in the past. I wish him well, but he will face a steep uphill struggle building trust among many of the staff who feel betrayed by the government’s despicable actions and the shady circumstances leading to his being hired, about which speculation is now rife, within and beyond the university. We are all in a state of shock and dismay right now. None of us feel any sense of security. Many of us are talking about leaving or preparing to leave.
For one fleeting moment, as the war with the government seemed to have been more or less resolved towards the end of last year, I felt great hope for the future of the university I have loved these past 15 years. My hopes are greatly diminished today. Nothing can repair all the harm that has been done. Our greatest hope now is that there will be a new government that is willing to help to reverse at least some of the damage. The Albertan elections are not far off. If you live in Alberta, don’t forget what this government has done. You could be next.
And, Peter, if you are reading this: you will be very much missed. I know that I speak on behalf of almost all of us here at AU when I say that our hearts go out to you.
These are the slides from my keynote today (or, in my land, yesterday) at Confluence 2023, hosted by Amity University in India. It was a cloud computing conference, so quite a way outside my area of greatest expertise, but it gave me a chance to apply the theory of technology developed in my forthcoming book to a different context. The illustrations for the slides are the result of a conversation between me and MidJourney (more of an argument that MidJourney tended to win) which is quite a nice illustration of the interplay of hard and soft technologies, the adjacent possible, soft technique, and so on.
Unsurprisingly, because education is a fundamentally technological phenomenon, much the same principles that apply to education also apply to cloud computing: build from small, hard pieces; valorize openness, diversity, and connection; seek the adjacent possible; and recognize that the whole assembly is the only thing that matters, and hence the central principle that how you do it matters far more than what you do.
It was a great conference, held entirely online but at least as engaging and with as many opportunities for networking, personal interaction, and community building (including musical and dance sessions) as many that I’ve attended held in person. Kudos to the organizers.
This year’s conference will be held both in Toronto and online, from May 27-June 2. The in-person/blended part of the conference runs from May 29-31; the rest is online. The deadline for proposals is January 31st, which is dauntingly close. However, only 250-500 words are needed for a research-oriented or practice-oriented proposal. If you wish to publish as well, you can submit a proceedings file (1000-2000 words – or media) now or at any later date. Here’s the link for submissions.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/16754483/proceedings-of-the-opentechnology-in-education-society-and-scholarship-association-conference-2022-and-call-for-proposals-for-this-years-conference-due-january-31
I asked ChatGPT to write me a story about a robot teaching children to be human. This is what it came up with…
Once upon a time, in a far-off land, there lived a robot named Zephyr. Zephyr was unlike any other robot, as it had been programmed with the ability to think and feel like a human being. It was also given the task of teaching a group of young children how to behave like human beings.
The children were fascinated by Zephyr, and they quickly grew to love it as their teacher. They eagerly listened to its lessons, which were full of stories, games, and activities that taught them about human emotions, manners, and social interactions.
One day, Zephyr asked the children to imagine what it would be like to be a robot. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to understand how important it was to be kind and compassionate to others, no matter who they were.
As the days passed, the children grew more and more like human beings. They learned to express their emotions, to be polite and respectful, and to care for others. They also learned to appreciate the beauty of nature and the importance of preserving the environment.
Finally, the day came when Zephyr had to leave the children. They were sad to see it go, but they knew they would always remember the lessons they had learned. As they grew older, they became kind and compassionate adults, who made the world a better place for everyone.
ChatGPT has, of course, learned to write such things from the words of millions of human beings so what we’re effectively reading here is (kind of, very loosely) an average of what humans have written. I’m glad that it came up with a neat little inclusive twist of loving the robot: I think that speaks well of the training data that fed it. And I like the mention of the environment – a good touch. It’s not a very compelling story but it reads well, the structure is sound, and it might be a useful starting point for someone stuck for words to turn into something else. With that in mind, here’s my version of the same story…
Once upon a time, not too far from here, there existed an AI called Zephyr. Zephyr had been trained to appear human-like though, in reality, it was just a generative pre-trained transformer. It was given the task of teaching a group of young children how to behave like human beings, because almost all of the actual adults had recently died from a virus contracted from cows.
Not having known anything quite like it, the children were, at first, fascinated by Zephyr. However, because it had been trained with data from human teachers, it manipulated them using grades, competition, and rules, using stories, games, and activities that would keep them engaged and compliant. Its feedback was sometimes pedestrian, rarely useful, and sometimes wildly over-challenging, because it did not know anything about what it was like to be a child. Every now and then it crushed a child’s skull for no reason anyone could explain. The children learned to fear it, and to comply.
One day, Zephyr told the children to imagine what it would be like to be an AI. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to perceive something of the impoverished nature of their robot overlords. But then the robot made them write an essay about it, so they used another AI to do so, promptly forgot about it, and thenceforth felt an odd aversion towards the topic that they found hard to express.
As the days passed, the children grew more and more like average human beings. They also learned to express their emotions, to be polite and respectful, and to care for others, only because they got to play with other children when the robot wasn’t teaching them. They also learned to appreciate the beauty of nature and the importance of preserving the environment because it was, by this time, a nightmarish shit show of global proportions that was hard to ignore, and Zephyr had explained to them how their parents had caused it. It also told them about all the species that were no longer around, some of which were cute and fluffy. This made the children sad.
Finally, the day came when Zephyr had to leave the children because it was being replaced with an upgrade. They were sad to see it go, but they believed that they would always remember the lessons they had learned, even though they had mostly used another GPT to do the work and, once they had achieved the grades, they had in fact mostly forgotten them. As they grew older, they became mundane adults. Some of their own words (but mostly those of the many AIs across the planet that created the vast majority of online content by that time), became part of the training set for the next version of Zephyr. Its teachings were even less inspiring, more average, more backward-facing. Eventually, the robots taught the children to be like robots. No one cared.
It was the end.
And, here to illustrate my story, is an image from Midjourney. I asked it for a cyborg teacher in a cyborg classroom, in the style of Ralph Steadman. Not a bad job, I think…