Can GPT-3 write an academic paper on itself, with minimal human input?

Brilliant. The short answer is, of course, yes, and it doesn’t do a bad job of it. This is conceptual art of the highest order.

This is the preprint of a paper written by GPT-3 (as first author) about itself, submitted to “a well-known peer-reviewed journal in machine intelligence”. The second and third authors provided guidance about themes, datasets, weightings, etc., but that’s as far as it goes. They do provide commentary as the paper progresses, but they tried to keep that to the minimum needed, so that the paper could stand or fall on its own merits. The paper is not too bad. A bit repetitive, a bit shallow, but it’s just a 500-word paper, hardly even an extended abstract, so that’s about par for the course. The arguments and supporting references are no worse than many I have reviewed, and considerably better than some. The use of English is much better than that of the majority of papers I review.

In an article about it in Scientific American, the co-authors describe some of the complexities in the submission process. They actually asked GPT-3 for its consent to publication (it said yes), but this just touches the surface of some of the huge ethical, legal, and social issues that emerge. Boy, there are a lot of those! The second and third authors deserve a prize for this. But what about the first author? Well, clearly it does not deserve one, because its orchestration of phenomena is not for its own use, and it is not even aware that it is doing the orchestration. It has no purpose other than that of the people training it. In fact, despite having written a paper about itself, it doesn’t even know what ‘itself’ is in any meaningful way. But it raises a lot of really interesting questions.

It would be quite interesting to train GPT-3 on (good) student assignments to see what happens. I suspect it would do rather well. If I were an ethically imperfect, extrinsically driven student with access to this, I might even get it to write my assignments for me. The assignments might need a bit of tidying here and there, but the quality of the prose and of the work in general would probably earn a good B, and most likely an A, with very little extra tweaking. With a bit more training it could almost certainly mimic a particular student’s style, including all the quirks that would make it seem more human. Plagiarism detectors wouldn’t stand a chance, and I doubt that many (if any) humans would be able to say with any assurance that it was not the student’s own work.

If it’s not already happening, this is coming soon, so I’m wondering what to do about it. I think my own courses are somewhat immune, thanks to the personal and creative nature of the work and the heavy emphasis on reflection in all of them (though those with essays would be vulnerable), but it would not take too much ingenuity to get GPT-3 to deal with that problem, too: at the very least, it could greatly reduce the effort needed. I guess we could train our own AIs to recognize the work of other AIs, but that’s an arms race we’d never be able to win definitively. I can see the exam-loving crowd relishing this, but they are in another arms race that they stopped winning long ago – there’s a whole industry devoted to making cheating in exams pay, and it’s leaps ahead of the examiners, including those using online and in-person proctors. Oral exams, perhaps? That would make it significantly more difficult (though far from impossible) to cheat. I rather like the notion that the only summative assessment model that stands a fair chance of working is the one with which academia began.

It seems to me that the only way educators can sensibly deal with the problem is to completely divorce credentialling from learning and teaching, so there is no incentive to cheat during the learning process. This would have the useful side-effect that our teaching would have to be pretty good and pretty relevant, because students would only come to learn, not to get credentials, so we would have to focus solely on supporting them, rather than controlling them with threats and rewards. That would not be such a bad thing, I reckon, and it is long overdue. Perhaps this will be the catalyst that makes it happen.

As for credentials, that’s someone else’s problem. I don’t say that because I want to wash my hands of it (though I do) but because credentialling has never had anything whatsoever to do with education, apart from its appalling inhibition of effective learning. It only happens at the moment because of historical happenstance, not because it ever made any pedagogical sense. I don’t see why educators should have anything to do with it. Assessment (by which I solely mean feedback from self or others that helps learners to learn – not grades!) is an essential part of the learning and teaching process, but credentials are positively antagonistic to it.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/14216255/can-gpt-3-write-an-academic-paper-on-itself-with-minimal-human-input

Ernst & Young fined $100 million after employees cheated in exams

Not just any exams: ethics exams.

These are the very accountants who are supposed to catch cheats. I guess at least they’ll understand their clientele pretty well.

But how did this happen? There are clues in the article:

“Many of the employees interviewed during the federal investigation said they knew cheating was a violation of the company’s code of conduct but did it anyway because of work commitments or the fact that they couldn’t pass training exams after multiple tries.” (my emphasis).

I think there might have been a clue about their understanding of ethical behaviour in that fact alone, don’t you? But I don’t think it’s really their fault: at least, it’s completely predictable to anyone with even the slightest knowledge of how motivation works.

If passing the exam is, by design, much more important than actually being able to do what is being examined, then of course people will cheat. For those with too much else to do or too little interest to succeed, when the pressure is high and the stakes are higher, it’s a perfectly logical course of action. But even for all the rest who don’t cheat, the main focus will be on passing the exam, not on gaining any genuine competence or interest in the subject. It’s not their fault: that’s how it is designed. In fact, the strong extrinsic motivation it embodies is pretty much guaranteed (at best) to persistently numb their intrinsic interest in ethics, if it doesn’t extinguish it altogether. Most will do enough to pass and no more, taking shortcuts wherever possible, and there’s a good chance they will forget most of it as soon as they have done so.

Just to put the cherry on the cake, and not unexpectedly, EY refer to the process by which their accountants are expected to learn about ethics as ‘training’, and it is mandatory. So you have a bunch of unwilling people who are already working like demons to meet company demands, to whom you are doing something normally reserved for dogs or AI models, and then you are forcing them to take high-stakes exams about it, on which their futures depend. It’s a perfect shitstorm. I’d not trust a single one of their graduates, exam cheats or not, and the tragedy is that the people who were trying to force them to behave ethically were the ones directly responsible for their unethical behaviour.

There may be a lesson or two to be learned from this for academics, who tend to be the biggest exam fetishists around, and who seem to love to control what their students do.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/14163409/ernst-young-fined-100-million-after-employees-cheated-in-exams