Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research | TechTrends

The latest paper I can proudly add to my list of publications, Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research, has been published in the (unfortunately) closed journal TechTrends. Here’s a direct link to the paper that should hopefully bypass the paywall, if it has not been used too often.

I’m 16th of 47 coauthors, led by the truly wonderful Junhong Xiao, who is the primary orchestrator and mastermind behind it. This is a companion piece to our Manifesto for Teaching and Learning in a Time of Generative AI and it starts where the other paper left off, delving further into what we don’t know (or at least do not agree that we know) and, taking up most of the paper, what we might do about that lack of knowledge. I think this presents a pretty useful and wide-ranging research agenda for anyone with an interest in AI and education.

Methodologically, it emerged through a collaborative writing process among a large international group of researchers in open, digital, and online learning. It’s not a random sample of people who happen to know one another: the huge group represents a rich mix of (extremely) well-established and (excellent) emerging researchers from a broad set of cultural backgrounds, covering a wide range of research interests in the field. Junhong did a great job of extracting the themes and organizing all of that into a coherent narrative.

In many ways I like this paper more than its companion piece. I think this is because, though its findings are – as the title implies – less well-defined than those of the first, I am more closely aligned with the underlying assumptions, attitudes, and values that underpin the analysis. It grapples more firmly with the wicked problems and it goes deeper into the broader, situated, human nature of the systems in which generative AI is necessarily intertwingled, skimming over the more simplistic conversations about cheating, reliability, and so on to get at some meatier but more fundamental issues that, ultimately, relate to how and why we do this education thing in the first place.

Abstract

Advocates of AI in Education (AIEd) assert that the current generation of technologies, collectively dubbed artificial intelligence, including generative artificial intelligence (GenAI), promise results that can transform our conceptions of what education looks like. Therefore, it is imperative to investigate how educators perceive GenAI and its potential use and future impact on education. Adopting the methodology of collective writing as an inquiry, this study reports on the participating educators’ perceived grey areas (i.e. issues that are unclear and/or controversial) and recommendations on future research. The grey areas reported cover decision-making on the use of GenAI, AI ethics, appropriate levels of use of GenAI in education, impact on learning and teaching, policy, data, GenAI outputs, humans in the loop and public–private partnerships. Recommended directions for future research include learning and teaching, ethical and legal implications, ownership/authorship, funding, technology, research support, AI metaphor and types of research. Each theme or subtheme is presented in the form of a statement, followed by a justification. These findings serve as a call to action to encourage a continuing debate around GenAI and to engage more educators in research. The paper concludes that unless we can ask the right questions now, we may find that, in the pursuit of greater efficiency, we have lost the very essence of what it means to educate and learn.

Reference

Xiao, J., Bozkurt, A., Nichols, M., Pazurek, A., Stracke, C. M., Bai, J. Y. H., Farrow, R., Mulligan, D., Nerantzi, C., Sharma, R. C., Singh, L., Frumin, I., Swindell, A., Honeychurch, S., Bond, M., Dron, J., Moore, S., Leng, J., van Tryon, P. J. S., … Themeli, C. (2025). Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research. TechTrends. https://doi.org/10.1007/s11528-025-01060-6

Can GPT-3 write an academic paper on itself, with minimal human input?

Brilliant. The short answer is, of course, yes, and it doesn’t do a bad job of it. This is conceptual art of the highest order.

This is the preprint of a paper written by GPT-3 (as first author) about itself, submitted to “a well-known peer-reviewed journal in machine intelligence”. The second and third authors provided guidance about themes, datasets, weightings, etc., but that’s as far as it goes. They do provide commentary as the paper progresses, but they tried to keep that as minimal as possible, so that the paper could stand or fall on its own merits. The paper is not too bad. A bit repetitive, a bit shallow, but it’s just a 500-word paper – hardly even an extended abstract – so that’s about par for the course. The arguments and supporting references are no worse than many I have reviewed, and considerably better than some. The use of English is much better than that of the majority of papers I review.

In an article about it in Scientific American, the co-authors describe some of the complexities in the submission process. They actually asked GPT-3 for its consent to publication (it said yes), but this just touches the surface of some of the huge ethical, legal, and social issues that emerge. Boy, there are a lot of those! The second and third authors deserve a prize for this. But what about the first author? Well, clearly it does not deserve one, because its orchestration of phenomena is not for its own use, and it is not even aware that it is doing the orchestration. It has no purpose other than that of the people training it. In fact, despite having written a paper about itself, it doesn’t even know what ‘itself’ is in any meaningful way. But it raises a lot of really interesting questions.

It would be quite interesting to train GPT-3 on (good) student assignments to see what happens. I think it would potentially do rather well. If I were an ethically imperfect, extrinsically driven student with access to this, I might even get it to write my assignments for me. The assignments might need a bit of tidying here and there, but the quality of the prose and the general quality of the work would probably earn a good B as-is, and most likely an A with very little extra tweaking. With a bit more training it could almost certainly mimic a particular student’s style, including all the quirks that would make it seem more human. Plagiarism detectors wouldn’t stand a chance, and I doubt that many (if any) humans would be able to say with any assurance that it was not the student’s own work.
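For what it’s worth, the kind of training I’m imagining here is not exotic. Below is a minimal, untested sketch using the OpenAI Python library of the time (pre-1.0) and a hypothetical assignments.jsonl file of prompt/completion pairs built from assignment questions and good student answers; the file name, base model, and prompts are all illustrative assumptions, not a recipe I have run.

```python
# A rough, untested sketch of fine-tuning GPT-3 (davinci) on past student
# assignments, using the OpenAI Python library of the time (pre-1.0 API).
# "assignments.jsonl" is a hypothetical file of {"prompt": ..., "completion": ...}
# pairs built from assignment questions and (good) student answers.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Upload the training data.
training_file = openai.File.create(
    file=open("assignments.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on the base davinci model.
job = openai.FineTune.create(
    training_file=training_file.id,
    model="davinci",
)
print("Fine-tune job started:", job.id)

# Once the job finishes, the resulting model can be asked to "write like" the
# students whose work it was trained on (the model name below is a placeholder).
# response = openai.Completion.create(
#     model="davinci:ft-your-org-2022-06-30-12-00-00",
#     prompt="Write a 1500-word reflective essay on ...",
#     max_tokens=2048,
# )
```

Nothing in that sketch is difficult: the hard part would be assembling and formatting the assignments, not running the job, which is rather the point.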

If it’s not already happening, this is coming soon, so I’m wondering what to do about it. I think my own courses are somewhat immune thanks to the personal and creative nature of the work and the big emphasis on reflection in all of them (though those with essays would be vulnerable), but it would not take too much ingenuity to get GPT-3 to deal with that problem, too: at least, it could greatly reduce the effort needed. I guess we could train our own AIs to recognize the work of other AIs (a crude sketch of what that might look like follows below), but that’s an arms race we’d never be able to definitively win. I can see the exam-loving crowd loving this, but they are in another arms race that they stopped winning long ago – there’s a whole industry devoted to making cheating in exams pay, and it’s leaps ahead of the examiners, whether the exams are proctored online or in person. Oral exams, perhaps? That would make it significantly more difficult (though far from impossible) to cheat. I rather like the notion that the only summative assessment model that stands a fair chance of working is the one with which academia began.
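To make the “AI to catch AI” idea concrete, here is a deliberately crude sketch, assuming scikit-learn and two hypothetical folders (essays/human and essays/ai) of labelled examples; a serious detector would need far more than bag-of-words features, and the labelled data is precisely what keeps moving.

```python
# A deliberately crude sketch of training a classifier to separate
# human-written from machine-generated essays. Assumes two hypothetical
# folders of labelled examples, one essay per .txt file.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

human_texts = [p.read_text() for p in Path("essays/human").glob("*.txt")]
ai_texts = [p.read_text() for p in Path("essays/ai").glob("*.txt")]

texts = human_texts + ai_texts
labels = [0] * len(human_texts) + [1] * len(ai_texts)  # 1 = machine-generated

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42
)

# TF-IDF features plus logistic regression: about the simplest detector possible.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=50_000),
    LogisticRegression(max_iter=1000),
)
detector.fit(X_train, y_train)

print("Held-out accuracy:", detector.score(X_test, y_test))
print("P(machine-written):",
      detector.predict_proba(["A suspiciously fluent essay ..."])[0][1])
```

Of course, the moment generators improve (or are tuned against detectors like this), the two classes converge and the classifier’s edge evaporates, which is why I don’t think this is a race anyone wins.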

It seems to me that the only way educators can sensibly deal with the problem is to completely divorce credentialling from learning and teaching, so there is no incentive to cheat during the learning process. This would have the useful side-effect that our teaching would have to be pretty good and pretty relevant, because students would only come to learn, not to get credentials, so we would have to focus solely on supporting them, rather than controlling them with threats and rewards. That would not be such a bad thing, I reckon, and it is long overdue. Perhaps this will be the catalyst that makes it happen.

As for credentials, that’s someone else’s problem. I don’t say that because I want to wash my hands of it (though I do) but because credentialling has never had anything whatsoever to do with education apart from in its appalling inhibition of effective learning. It only happens at the moment because of historical happenstance, not because it ever made any pedagogical sense. I don’t see why educators should have anything to do with it. Assessment (by which I solely mean feedback from self or others that helps learners to learn – not grades!) is an essential part of the learning and teaching process, but credentials are positively antagonistic to it.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/14216255/can-gpt-3-write-an-academic-paper-on-itself-with-minimal-human-input

Ernst & Young fined $100 million after employees cheated in exams

Not just any exams: ethics exams.

These are the very accountants who are supposed to catch cheats. I guess at least they’ll understand their clientele pretty well.

But how did this happen? There are clues in the article:

“Many of the employees interviewed during the federal investigation said they knew cheating was a violation of the company’s code of conduct but did it anyway because of work commitments or the fact that they couldn’t pass training exams after multiple tries.” (my emphasis).

I think there might have been a clue about their understanding of ethical behaviour in that fact alone, don’t you? But I don’t think it’s really their fault: at least, it’s completely predictable to anyone with even the slightest knowledge of how motivation works.

If passing the exam is, by design, much more important than actually being able to do what is being examined, then of course people will cheat. For those with too much else to do or too little interest to succeed, when the pressure is high and the stakes are higher, it’s a perfectly logical course of action. But, even for all the rest who don’t cheat, the main focus for them will be on passing the exam, not on gaining any genuine competence or interest in the subject. It’s not their fault: that’s how it is designed. In fact, the strong extrinsic motivation it embodies is pretty much guaranteed to (at best) persistently numb their intrinsic interest in ethics, if it doesn’t extinguish it altogether. Most will do enough to pass and no more, taking shortcuts wherever possible, and there’s a good chance they will forget most of it as soon as they have done so.

Just to put the cherry on the pie, and not unexpectedly, EY refer to the process by which their accountants are expected to learn about ethics as ‘training’ and it is mandatory. So you have a bunch of unwilling people who are already working like demons to meet company demands, to whom you are doing something normally reserved for dogs or AI models, and then you are forcing them to take high-stakes exams about it, on which their futures depend. It’s a perfect shit storm. I’d not trust a single one of their graduates, exam cheats or not, and the tragedy is that the people who were trying to force them to behave ethically were the ones directly responsible for their unethical behaviour.

There may be a lesson or two to be learned from this for academics, who tend to be the biggest exam fetishists around, and who seem to love to control what their students do.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/14163409/ernst-young-fined-100-million-after-employees-cheated-in-exams