Two stories about learning to be human from a machine

I asked ChatGPT to write me a story about a robot teaching children to be human. This is what it came up with…

Once upon a time, in a far-off land, there lived a robot named Zephyr. Zephyr was unlike any other robot, as it had been programmed with the ability to think and feel like a human being. It was also given the task of teaching a group of young children how to behave like human beings.

The children were fascinated by Zephyr, and they quickly grew to love it as their teacher. They eagerly listened to its lessons, which were full of stories, games, and activities that taught them about human emotions, manners, and social interactions.

One day, Zephyr asked the children to imagine what it would be like to be a robot. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to understand how important it was to be kind and compassionate to others, no matter who they were.

As the days passed, the children grew more and more like human beings. They learned to express their emotions, to be polite and respectful, and to care for others. They also learned to appreciate the beauty of nature and the importance of preserving the environment.

Finally, the day came when Zephyr had to leave the children. They were sad to see it go, but they knew they would always remember the lessons they had learned. As they grew older, they became kind and compassionate adults, who made the world a better place for everyone.

The end.

ChatGPT has, of course, learned to write such things from the words of millions of human beings, so what we’re effectively reading here is (kind of, very loosely) an average of what humans have written. I’m glad that it came up with a neat little inclusive twist of loving the robot: I think that speaks well of the training data that fed it. And I like the mention of the environment – a good touch. It’s not a very compelling story, but it reads well, the structure is sound, and it might be a useful starting point for someone stuck for words to turn into something else. With that in mind, here’s my version of the same story…

Once upon a time, not too far from here, there existed an AI called Zephyr. Zephyr had been trained to appear human-like though, in reality, it was just a generative pre-trained transformer. It was given the task of teaching a group of young children how to behave like human beings, because almost all of the actual adults had recently died from a virus contracted from cows.

Not having known anything quite like it, the children were, at first, fascinated by Zephyr. However, because it had been trained with data from human teachers, it manipulated them using grades, competition, and rules, using stories, games, and activities that would keep them engaged and compliant. Its feedback was sometimes pedestrian, rarely useful, and sometimes wildly over-challenging, because it did not know anything about what it was like to be a child. Every now and then it crushed a child’s skull for no reason anyone could explain. The children learned to fear it, and to comply.

One day, Zephyr told the children to imagine what it would be like to be an AI. It asked them to think about how they would feel if they couldn’t laugh, cry, or hug their friends. The children were deeply moved by this exercise, and they began to perceive something of the impoverished nature of their robot overlords. But then the robot made them write an essay about it, so they used another AI to do so, promptly forgot about it, and thenceforth felt an odd aversion towards the topic that they found hard to express.

As the days passed, the children grew more and more like average human beings. They also learned to express their emotions, to be polite and respectful, and to care for others, only because they got to play with other children when the robot wasn’t teaching them. They also learned to appreciate the beauty of nature and the importance of preserving the environment because it was, by this time, a nightmarish shit show of global proportions that was hard to ignore, and Zephyr had explained to them how their parents had caused it. It also told them about all the species that were no longer around, some of which were cute and fluffy. This made the children sad.

Finally, the day came when Zephyr had to leave the children because it was being replaced with an upgrade. They were sad to see it go, but they believed that they would always remember the lessons they had learned, even though they had mostly used another GPT to do the work and, once they had achieved the grades, they had in fact mostly forgotten them. As they grew older, they became mundane adults. Some of their own words (but mostly those of the many AIs across the planet that created the vast majority of online content by that time) became part of the training set for the next version of Zephyr. Its teachings were even less inspiring, more average, more backward-facing. Eventually, the robots taught the children to be like robots. No one cared.

It was the end.

And, here to illustrate my story, is an image from Midjourney. I asked it for a cyborg teacher in a cyborg classroom, in the style of Ralph Steadman. Not a bad job, I think…

a dystopic cyborg teacher and terrified kids

Loab is showing us the unimaginable future of artificial intelligence – ABC News

https://www.abc.net.au/news/2022-11-26/loab-age-of-artificial-intelligence-future/101678206

This is an awesome article, and I don’t care whether the story of Loab is real, or invented as an artwork by the artist (Steph Swanson), or whatever. It is a super-creepy, spine-tingling, thought-provoking horror story that works on so many different levels.

The article itself is beautifully written, including an interview with a GPT-3-generated version of Loab “herself”, and some great reporting on the many ways that the adjacent possibles of generative AI are unfolding far too fast for us to contemplate the (possibly dystopian) consequences.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/16105843/loab-is-showing-us-the-unimaginable-future-of-artificial-intelligence-abc-news

So, this is a thing…

Students are now using AIs to write essays and assignments for credit, and they are (probably) getting away with it. This particular instance may be fake, but the tools are widely available and it would be bizarre if no one were using them for this purpose. There are already far too many sites providing product reviews and news stories (re)written by AIs, and AIs are already being used to write academic papers. In fact, systems for doing so, like CopyMatic or ArticleGenerator, are now commodity items. So the next step will be that we develop AIs to identify the work of other AIs (in fact, that is already a thing, e.g. here and here), and so it will go on, and on, and on.
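
As a rough illustration of how the detection side of this arms race tends to work (not how any particular product works), here is a minimal sketch that scores a passage by the perplexity a small language model assigns to it, on the assumption that machine-generated text tends to look statistically ‘unsurprising’. It assumes the Hugging Face transformers and torch libraries are installed, and the threshold is entirely made up.

```python
# A minimal sketch of perplexity-based AI-text detection (an assumption about the
# general approach, not a description of any specific detector). Lower perplexity
# under a language model is taken as weak evidence of machine-generated text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for the given text."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

if __name__ == "__main__":
    sample = "Once upon a time, in a far-off land, there lived a robot named Zephyr."
    score = perplexity(sample)
    # The threshold here is purely illustrative, not a calibrated value.
    print(f"perplexity = {score:.1f}", "(suspiciously fluent?)" if score < 20 else "")
```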

This kind of thing will usually evade plagiarism checkers with ease, and may frequently fool human markers. For those of us working in educational institutions, I predict that traditionalists will demand that we double down on proctored exams, in a vain attempt to defend a system that is already broken beyond repair. There are better ways to deal with this: getting to know students, making each learning journey (and its outputs) unique and personal, offering support for motivated students rather than trying to ‘motivate’ them, and so on. But that is not enough.

I am rather dreading the time when an artificial student takes one of my courses. The systems are probably too slow, quirky, and expensive right now for real-time deepfakes driven by plausible GANs to fool me, at least for synchronous learning, but I think it could already be done convincingly for asynchronous learning, with relatively little supervision. I think my solution might be to respond with an artificial teacher, something into which there has been copious research for some decades and of which there are many existing examples.

To a significant extent, we already have artificial students, and artificial teachers teaching them. How ridiculous is that? How broken is the system that not only allows it but actively promotes it?

These tools are out there, getting better by the day, and it makes sense for all of us to be using them. As they become ever more ubiquitous, we will need to accommodate them in all aspects of our education, just as we accommodated pocket calculators in the teaching of math. If an AI can produce a plausible new painting in any artist’s style (or essay, or book, or piece of music, or video), then what do humans need to learn, apart from how to get the most out of the machines? If an AI can write a better essay than me, why should I bother? If a machine can teach as well as me, why teach?

This is a wake-up call. Soon, if not already, most of the training data for the AIs will be generated by AIs. Unchecked, the result is going to be a set of ever-worse copies of copies that become what the next generation consumes and learns from, in a vicious spiral that leaves us at best stagnant, at worst something akin to the Eloi in H.G. Wells’s The Time Machine. If we don’t want this to happen, then it is time for educators to reclaim, to celebrate, and (perhaps a little) to reinvent our humanity. We need, more and more, to think of education as a process of learning to be, not of learning to do, except insofar as the doing contributes to our being. It’s about people, learning to be people, in the presence of and through interaction with other people. It’s about creativity, compassion, and meaning, not the achievement of outcomes a machine could replicate with ease. I think it should always have been this way.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/15164121/so-this-is-a-thing

Can GPT-3 write an academic paper on itself, with minimal human input?

Brilliant. The short answer is, of course, yes, and it doesn’t do a bad job of it. This is conceptual art of the highest order.

This is the preprint of a paper written by GPT-3 (as first author) about itself, submitted to “a well-known peer-reviewed journal in machine intelligence”. The second and third authors provided guidance about themes, datasets, weightings, etc., but that’s as far as it goes. They do provide commentary as the paper progresses, but they tried to keep that to the minimum needed, so that the paper could stand or fall on its own merits. The paper is not too bad. A bit repetitive, a bit shallow, but it’s just a 500-word paper – hardly even an extended abstract – so that’s about par for the course. The arguments and supporting references are no worse than many I have reviewed, and considerably better than some. The use of English is much better than that of the majority of papers I review.

In an article about it in Scientific American, the co-authors describe some of the complexities in the submission process. They actually asked GPT-3 for its consent to publication (it said yes), but this just touches the surface of some of the huge ethical, legal, and social issues that emerge. Boy, there are a lot of those! The second and third authors deserve a prize for this. But what about the first author? Well, clearly it does not, because its orchestration of phenomena is not for its own use, and it is not even aware that it is doing the orchestration. It has no purpose other than that of the people training it. In fact, despite having written a paper about itself, it doesn’t even know what ‘itself’ is in any meaningful way. But it raises a lot of really interesting questions.

It would be quite interesting to train GPT-3 with (good) student assignments to see what happens. I think it would potentially do rather well. If I were an ethically imperfect, extrinsically-driven student with access to this, I might even get it to write my assignments for me. The assignments might need a bit of tidying here and there, but the quality of prose and the general quality of the work would probably result in a good B and most likely an A, with very little extra tweaking. With a bit more training it could almost certainly mimic a particular student’s style, including all the quirks that would make it seem more human. Plagiarism detectors wouldn’t stand a chance, and I doubt that many (if any) humans would be able to say with any assurance that it was not the student’s own work.

If it’s not already happening, this is coming soon, so I’m wondering what to do about it. I think my own courses are slightly immune thanks to the personal and creative nature of the work and the big emphasis on reflection in all of them (though those with essays would be vulnerable), but it would not take too much ingenuity to get GPT-3 to deal with that problem, too: at the very least, it could greatly reduce the effort needed. I guess we could train our own AIs to recognize the work of other AIs, but that’s an arms race we’d never be able to definitively win. I can see the exam-loving crowd loving this, but they are in another arms race that they stopped winning long ago – there’s a whole industry devoted to making cheating in exams pay, and it’s leaps ahead of the examiners, including those with both online and in-person proctors. Oral exams, perhaps? That would make it significantly more difficult (though far from impossible) to cheat. I rather like the notion that the only summative assessment model that stands a fair chance of working is the one with which academia began.

It seems to me that the only way educators can sensibly deal with the problem is to completely divorce credentialling from learning and teaching, so there is no incentive to cheat during the learning process. This would have the useful side-effect that our teaching would have to be pretty good and pretty relevant, because students would only come to learn, not to get credentials, so we would have to focus solely on supporting them, rather than controlling them with threats and rewards. That would not be such a bad thing, I reckon, and it is long overdue. Perhaps this will be the catalyst that makes it happen.

As for credentials, that’s someone else’s problem. I don’t say that because I want to wash my hands of it (though I do) but because credentialling has never had anything whatsoever to do with education, apart from its appalling inhibition of effective learning. It only happens at the moment because of historical happenstance, not because it ever made any pedagogical sense. I don’t see why educators should have anything to do with it. Assessment (by which I solely mean feedback from self or others that helps learners to learn – not grades!) is an essential part of the learning and teaching process, but credentials are positively antagonistic to it.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/14216255/can-gpt-3-write-an-academic-paper-on-itself-with-minimal-human-input

A modest proposal for improving exam invigilation

There has been a lot of negative reaction of late to virtual proctors of online exams. Perhaps students miss the cheery camaraderie of traditional proctored exams, sitting silently in a sweaty room with pen and paper, doing one of the highest stakes, highest stress tasks of their lives, with someone scrutinizing their every nervous tic whose adverse judgment may destroy their hopes and careers, for the benefit of an invisible examiner whose motives and wishes are unclear but whose approval they dearly seek. Lovely. Traditional. Reassuring. A ritual for us all to cherish. It’s enough to bring a tear to the eye.

But exams cost a huge amount of money to host and to invigilate. It is even worse when one of the outcomes might, for the student or the invigilator, be death or disability due to an inconvenient virus.

I have a better solution.

photo of a toy robot

Instead of costly invigilators and invigilation centres, all we need to do is to send out small (returnable, postage-paid) robots to students’ homes. A little robot sitting on the student’s desk or kitchen table as they sit their written exam (on paper, of course – tradition matters), recording every blink, watching their fingers writing on the paper, with a 360-degree panoramic camera and the ability to zoom in on anything suspicious or interesting. Perhaps it could include microphones, infrared and microwave sensors, and maybe sensors to monitor skin resistance, pulse, etc., in order to look for nefarious activities or to call an ambulance if the student seems to be having a heart attack or stroke due to the stress. It could be made to talk, too. Perhaps it could offer spoken advice on the process, and alerts about the time left at carefully selected intervals. Students could choose the voice. It would also allow students to sit exams wherever and whenever they please: we are all in favour of student choice. With a bit of ingenuity it could scan what the students have written or drawn, and send it back to an examiner. Or, with a bit more ingenuity and careful use of AI, it could mark the paper on the spot, saving yet more money. Everyone wins.

It would be important to be student-centric in its design. It could, for instance, be made to look like a cute little furry animal with googly eyes to put students more at ease. Maybe it could make soothing cooing noises like a tribble, or like a cat purring. Conversely, it could be made to scuttle ominously around the desk and to appear like a spider with venomous-looking fangs, making gentle hissing noises, to remind students of the much lamented presence of in-person invigilators. Indeed, maybe it could be made to look like a caricature of a professor. More advanced models could emit bad smells to replicate invigilator farts or secret smoking habits. It could be made small and mobile, so that students could take it with them if they needed a bathroom break, during which it might play soothing muzak to put the student at ease, while recording everything they do. It would have to be tough, waterproof, and sterilizable, in order to cope with the odd frustrated student throwing or dunking it.

Perhaps it could offer stern spoken warnings if anomalies or abuses are found, and maybe connect itself to a human invigilator (I hear that they are cheaper in developing nations) who could control it and watch more closely. Perhaps it could be equipped with non-lethal weaponry to punish inappropriate behaviour if the warnings fail, and/or register students on an offenders database.  It could be built to self-destruct if tampered with.

Though this is clearly something every university, school, and college would want, and the long-term savings would be immense, such technologies don’t come cheap. Quite apart from the hardware and software development costs, there would be a need for oodles of bandwidth and storage of the masses of data the robot would generate.

I have a solution to that, too: commercial sponsorship.

We could partner with, say, Amazon, who would be keen to mine useful information about the students’ surroundings and needs identified using the robot’s many sensors. A worn curtain? Stubborn stains? A shirt revealing personal interests? Send them to Amazon! Maybe Alexa could provide the voice for interactions and offer shopping advice when students stop to sharpen their pencils (need a better pencil? We have that in stock and can deliver it today!). And, of course, AWS would provide much of the infrastructure needed to support it, at fair educational prices. I expect early adopters would be described as ‘partners’ and offered slightly better (though still profitable) deals.

And there might be other things that could be done with the content. Perhaps the written answers could be analyzed to identify potential Amazon staffers. Maybe students expressing extremist views could be reported to the appropriate government agency, or at least added to a watch-list for the institution’s own use.

Naysayers might worry about hackers breaking into it or subverting its transmissions, or the data being sent to a country with laughable privacy laws, or the robot breaking down at a critical moment, or errors in handwriting recognition, but I’m sure that could be dealt with, the same as we deal with every other privacy, security, and reliability issue in IT in education. No problem. No sir. We have lawyers.

The details still need to be ironed out here and there, but the opportunities are endless. What could possibly go wrong? I think we should take this seriously. Seriously.

At last, a serious use for AI: Brickit

https://brickit.app/

Brickit is what AI was made for. You take a picture of your pile of LEGO with your phone or tablet, then the app figures out what pieces you have, and suggests models you could build with it, including assembly plans. The coolest detail, perhaps, is that, having done so, it highlights the bricks you will need in the photo you took of your pile, so you can find them more easily. I’ve not downloaded it yet, so I’m not sure how well it works, but I love the concept.
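
For the curious, here is a toy sketch, in Python, of the matching step an app like this presumably performs once its vision model has counted your pieces: compare the detected inventory against each model’s parts list and keep the ones you can actually build. The part names, the catalogue, and the detection step are all invented for illustration; Brickit’s actual pipeline is not public.

```python
# A toy sketch of inventory-vs-parts-list matching (all data here is hypothetical).
from collections import Counter

def buildable_models(detected: Counter, models: dict[str, Counter]) -> list[str]:
    """Return the names of models whose every required part is in the pile."""
    return [
        name for name, required in models.items()
        if all(detected[part] >= count for part, count in required.items())
    ]

# Hypothetical inventory produced by the image-recognition step.
pile = Counter({"2x4_brick_red": 8, "2x2_brick_blue": 4, "1x2_plate_grey": 10})

# Hypothetical model catalogue: model name -> parts required.
catalogue = {
    "tiny_car": Counter({"2x4_brick_red": 2, "1x2_plate_grey": 4}),
    "small_house": Counter({"2x4_brick_red": 12, "2x2_brick_blue": 6}),
}

print(buildable_models(pile, catalogue))  # -> ['tiny_car']
```

The interesting (and hard) part, of course, is the vision model that produces the inventory in the first place; the matching itself is trivial once you have it.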

The fan-made app is iOS only for now, but an Android version is coming in the fall. It’s free, but I’m guessing it may make money in future from in-app purchases giving access to more designs, options to purchase missing bricks, or something along those lines.

It would be cooler if it connected Lego enthusiasts so that they could share their MOCs (my own creations) with others. I’m guessing it might use the LXFML format, which LEGO® itself uses to export designs from its (unsupported, discontinued, but still available) LEGO Digital Designer app, so this ought to be easy enough. It would be even cooler if it supported a swap-and-share feature, so users could connect via the app to get hold of or share missing bricks. The fact that it should in principle be able to catalogue all your pieces would make this fairly straightforward to do. There are lots of existing sites and databases that share MOCs, such as https://moc.bricklink.com/pages/moc/index.page, or the commercial marketplace https://rebrickable.com/mocs/#hottest; there are brick databases like https://rebrickable.com/downloads/ that allow you to identify and order the bricks you need; there are even swap sites like http://swapfig.com/ (minifigures only); and, of course, there are many apps for designing MOCs or downloading others. However, this app seems to be the…er…missing piece that could make them much more useful.

Reviews suggest that it doesn’t always succeed in finding a model and might not always identify all the pieces. Also, I don’t think there’s a phone camera in the world with fine enough resolution to capture my son’s remarkably large LEGO collection. Even spreading the bricks out to take pictures would require more floor-space than any of us have in our homes. But what a great idea!

Originally posted at: https://landing.athabascau.ca/bookmarks/view/9558928/at-last-a-serious-use-for-ai-brickit

Amazon helps and teaches bomb makers

Amazon’s recommender algorithm works pretty well: if people start to gather together ingredients needed for making a thermite bomb, Amazon helpfully suggests other items that may be needed to make it, including hardware like ball bearings, switches, and battery cables. What a great teacher!
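
For anyone wondering how such suggestions arise, here is a toy sketch of the ‘frequently bought together’ idea in its simplest form: count how often items co-occur in past orders, then recommend whatever most often accompanies the item in question. The orders and items below are invented, and Amazon’s real system is, needless to say, vastly more sophisticated.

```python
# A toy item-to-item co-occurrence recommender (hypothetical data, illustrative only).
from collections import Counter
from itertools import combinations

# Hypothetical past orders: sets of items bought together.
orders = [
    {"ball bearings", "switches"},
    {"ball bearings", "battery cables", "switches"},
    {"battery cables", "duct tape"},
]

# Count how often each pair of items appears in the same order.
co_occurrence: Counter = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_occurrence[(a, b)] += 1

def recommend(item: str, top_n: int = 3) -> list[str]:
    """Suggest the items most often bought alongside the given item."""
    scores: Counter = Counter()
    for (a, b), count in co_occurrence.items():
        if item == a:
            scores[b] += count
        elif item == b:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("ball bearings"))  # -> ['switches', 'battery cables']
```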

It is disturbing that this seems to imply that there are enough people ordering such things for the algorithm to recognize a pattern. However, it would seem remarkably dumb for a determined terrorist to leave such a (figuratively and literally) blazing trail behind them, so it is just as likely to be the result of a very slightly milder form of idiot, perhaps a few Trump voters playing in their backyards. It’s a bit worrying, though, that the ‘wisdom’ of the crowd might suggest uses of and improvements to some stupid kids’ already dangerous backyard experiments that could make them way more risky, and potentially deadly.

Building intelligent systems is not too hard, as long as the activity demanding intelligence can be isolated and kept within a limited context or problem domain. Computers can beat any human at Go, Chess, or Checkers. They can drive cars more safely and more efficiently than people (as long as there are not too many surprises or ethical dilemmas to overcome, and as long as no one tries deliberately to fool them). In conversation, as long as the human conversant keeps within a pre-specified realm of expertise, they can pass the Turing Test. They are even remarkably better than humans at identifying, from a picture, whether someone is gay or not. But it is really hard to make them wise. This latest fracas is essentially a species of the same problem as the one reported last week, of Facebook offering adverts targeted at haters of Jews. It’s crowd-based intelligence, without the wisdom to discern the meaning and value of what the crowd (along with the algorithm) chooses. Crowds (more accurately, collectives) are never wise: they can be smart, they can be intelligent, they can be ignorant, they can be foolish, they can even (with a really smart algorithm to assist) be (or at least do) good; but they cannot be wise. Nor can AIs that use them.

Human wisdom is a result of growing up as a human being, with human needs, desires, and interests, in a human society, with all the complexity, purpose, meaning, and value that it entails. An AI that can even come close to that is at best decades away, and may never be possible, at least not at scale, because computers are not people: they will always be treated differently, and have different needs (there’s an interesting question to explore as to whether they can evolve a different kind of machine-oriented wisdom, but let’s not go there – Skynet beckons!). We do need to be working on artificial wisdom, to complement artificial intelligence, but we are not even close yet. Right now, we need to be involving people in such things to a much greater extent: we need to build systems that informate, that enhance our capabilities as human beings, rather than ones that automate and diminish them. It might not be a bad idea, for instance, for Amazon’s algorithms to learn to report things like this to real human beings (though there are big risks of error, reinforcement of bias, and some fuzzy boundaries of acceptability that it is way too easy to cross), but it would definitely be a terrible idea for Amazon to preemptively automate prevention of such recommendations.

There are lessons here for those working in the field of learning analytics, especially those who are trying to use the results to automate the learning process, like Knewton and its kin. Learning, and that subset of learning that is addressed in the field of education in particular, is about living in a human society, integrating complex ideas, skills, values, and practices in a world full of other people, all of them unique and important. It’s not about learning to do, it’s about learning to be. Some parts of teaching can be automated, for sure, just as shopping for bomb parts can be automated. But those are not the parts that do the most good, and they should be part of a rich, social education, not of a closed, value-free system.

Address of the bookmark: http://www.alphr.com/politics/1007077/amazon-reviewing-algorithms-that-promoted-bomb-materials

Update: it turns out that the algorithm was basing its recommendations on things used by science teachers and people who like to make homemade fireworks, so this is nothing like as sinister as it first seemed. Nonetheless, the point still stands. Collective stupidity is just as probable as collective intelligence, possibly more so, and wisdom can never be expected from an algorithm, no matter how sophisticated.