Published in Digital – The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education

A month or two ago I shared a “warts-and-all” preprint of this paper on the risks of educational uses of generative AIs. The revised, open-access published version, The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education is now available in the Journal Digital.

The process has been a little fraught. Two reviewers really liked the paper and suggested minimal but worthwhile changes. A third quite liked it and had a few reasonable suggestions that mostly helped to make the paper better. The fourth, though, was bothersome in many ways, and clearly wanted me to write a completely different paper. Despite this, I did most of what they asked, even though some of the changes, in my opinion, made the paper a bit worse. However, I drew the line when they demanded (without giving any reason) that I refer to eight mediocre, forgettable, cookie-cutter computer science papers which, on closer inspection, had all clearly been written by the reviewer or their team. The big problem I had with this was not so much the poor quality of the papers, nor even the blatant nepotism/self-promotion of the demand, but the fact that none was in any conceivable way relevant to mine, apart from being about AI: they were about algorithm-tweaking, mostly in the context of traffic movements in cities. It was as ridiculous as a reviewer of a work on Elizabethan literature requiring the author to refer to papers on slightly more efficient manufacturing processes for staples. Though it is normal and acceptable for reviewers to suggest references to their own papers when doing so would clearly lead to improvements, this was an utterly shameless abuse of power, on a scale and of a kind that I have never seen before. I politely refused, making it clear that I was on to their game without directly calling them out on it.

In retrospect, I slightly regret not calling them out. For a grizzled old researcher like me, who could probably find another publisher without too much hassle, it doesn’t matter much if I upset a reviewer enough to make them reject my paper. For early-career researchers stuck in the publish-or-perish cycle, though, it would be much harder to say no. This kind of behaviour harms the author, the publisher, the reader, and the collective intelligence of the human race. That the reviewer was so desperate to get a few more citations for their own team, with so little regard for quality or relevance, reflects poorly on them and their institution but is, more so, a damning indictment of a broken system of academic publishing, and of the reward systems driving academic promotion and recognition. I do blame the reviewer, but I understand the pressures they might have been under to do such a blatantly immoral thing.

As it happens, my paper has more than a thing or two to say about this kind of McNamara phenomenon, whereby the means used to measure success in a system become its purpose and warp it, because that is among the main reasons generative AIs pose such a threat. It is easy to forget that the ways we establish goals and measure success in educational systems are no more than signals of a much more complex phenomenon with far more expansive goals: goals concerned with helping humans to be, individually and in their cultures and societies, as much as with helping them to do particular things. Generative AIs are great at both generating and displaying those signals – better than most humans, in many cases – but that’s all they do: the signals signify nothing. For well-defined tasks with well-defined goals they offer plenty of opportunities for cost-saving, quality improvement, and efficiency, and in many occupations that can be really useful. If you want to quickly generate some high-quality advertising copy whose intent is to sell a product, it makes good sense to use a generative AI. Not so much in education, though, where it is too easy to forget that learning objectives, learning outcomes, grades, credentials, and so on are not the purposes of learning but just means for, and signals of, achieving them.

Though there are other big reasons to be very concerned about using generative AIs in education, some of which I explore in the paper, this particular problem lies not so much with the AIs themselves as with the technological systems into which they are, piecemeal, inserted. It’s a problem of thinking locally, not globally; of focusing on one part of the technology assembly without acknowledging its role in the whole. Generative AIs could, right now and with little assistance, perform almost every measurable task in an educational system, from (for students) producing essays and exam answers, to (for teachers) writing activities and assignments, or acting as personal tutors. They could do so better than most people. If that is all that matters to us then we might as well remove the teachers and the students from the system because, quite frankly, they only get in the way. This absurd outcome is more or less exactly the end game that will occur, though, if we don’t rethink (or double down on existing rethinking of) how education should work and what it is for, beyond the signals we usually use to evaluate success or intent. Just thinking of ways to use generative AIs to improve our teaching is well-meaning, but it risks destroying the woods by focusing on the trees. We really need to step back and think about why we bother in the first place.

For more on this, and for my tentative partial solutions to these and other related problems, do read the paper!

Abstract and citation

This paper analyzes the ways that the widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. Methodologically, the paper applies a theoretical model and grounded argument to present a case that GAIs are different in kind from all previous technologies. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing these soft and hard techniques in humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain; the very things that technologies enabled us to do can now be done by the technologies themselves. Because they replace things that learners have to do in order to learn and that teachers must do in order to teach, the consequences for what, how, and even whether learning occurs are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs. 
Its distinctive contributions include a novel means of understanding the distinctive differences between GAIs and all other technologies, a characterization of the nature of generative AIs as collectives (forms of collective intelligence), reasons to avoid the use of GAIs to replace teachers, and a theoretically grounded framework to guide adoption of generative AIs in education.

Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21104429/published-in-digital-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

Some thoughts for Ada Lovelace Day

This Scientific American article tells the tale of one of the genesis stories of complexity science, this one from 1952, describing what, until relatively recently, was known as the Fermi-Pasta-Ulam (FPU) problem (or ‘paradox’, though it is not in fact a paradox). It is now more commonly known as the Fermi-Pasta-Ulam-Tsingou (FPUT) problem, in recognition of the fact that it was only discovered thanks to the extraordinary work of Mary Tsingou, who wrote the programs that revealed what was, to Fermi, Pasta, and Ulam, a very unexpected result.

The team was attempting to simulate what happens to energy as it moves around atoms connected by chemical bonds. This is a classic non-linear problem that cannot be observed directly and cannot be solved by conventional reductive means (notwithstanding recent work that reveals statistical patterns in complex systems like urban travel patterns). It has to be implemented as a simulation in order to see what happens. Fermi, Pasta, and Ulam expected that, with enough iterations, the system would reveal itself to be ergodic: that, given long enough, every state of a given energy would be visited an equal number of times. Instead, thanks to Mary Tsingou’s work, they found that it was non-ergodic. Weird stuff happened that could not be predicted. It was chaotic.
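For the curious, the kind of system the team was simulating can be sketched in a few lines of modern Python. This is purely an illustrative reconstruction of the so-called α-model – a chain of unit masses joined by springs with a small quadratic nonlinearity – and emphatically not Tsingou’s MANIAC program; the parameter values here are arbitrary choices for demonstration:

```python
import math

# FPUT alpha-model: N moving masses on a line, fixed ends, unit springs
# plus a small quadratic ("alpha") nonlinearity in each bond.
N = 32          # number of moving masses (illustrative choice)
alpha = 0.25    # nonlinear coupling strength (illustrative choice)
dt = 0.05       # time step for the integrator (illustrative choice)

# Start with all the energy in the lowest normal mode: a single sine wave.
# The arrays include the two fixed endpoints, which never move.
x = [math.sin(math.pi * i / (N + 1)) for i in range(N + 2)]
v = [0.0] * (N + 2)

def accel(xs):
    """Force on each interior mass: linear spring term + alpha term."""
    a = [0.0] * (N + 2)
    for i in range(1, N + 1):
        lin = xs[i + 1] + xs[i - 1] - 2 * xs[i]
        nonlin = alpha * ((xs[i + 1] - xs[i]) ** 2 - (xs[i] - xs[i - 1]) ** 2)
        a[i] = lin + nonlin
    return a

def step(xs, vs):
    """One velocity-Verlet step; the endpoints stay fixed (zero force)."""
    a = accel(xs)
    vs = [vi + 0.5 * dt * ai for vi, ai in zip(vs, a)]
    xs = [xi + dt * vi for xi, vi in zip(xs, vs)]
    a = accel(xs)
    vs = [vi + 0.5 * dt * ai for vi, ai in zip(vs, a)]
    return xs, vs

for _ in range(1000):
    x, v = step(x, v)
```

Run something like this for long enough, tracking how energy moves between the normal modes, and instead of spreading evenly across the modes (as an ergodic system would) nearly all of the energy periodically returns to the initial mode – the strange recurrence that so surprised the team.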

The discovery was, in fact, accidental. Initial results had shown the expected regularities; then, one day, they left the program running for longer than usual and, instead of the recurring periodic patterns seen initially, it suddenly went haywire. It wasn’t a bug in the code. It was a phase transition, perhaps the first unequivocal demonstration of deterministic chaos. Though Fermi died and the paper was not actually published until nearly a decade later, it is hard to overstate the importance of this ‘accidental’ discovery that deterministic systems are not necessarily ergodic. As Stuart Kauffman puts it, ‘non-ergodicity gives us history’. Weather is non-ergodic. Evolution is non-ergodic. Learning is non-ergodic. We are non-ergodic. The universe is non-ergodic. Though there are other strands to the story that predate this work, more than anything else this marks the birth of a whole new kind of science – the science of complexity – that seeks to deal with the 90% or more of phenomena that matter to us, and that reductive science cannot begin to handle.

Here’s a bit of Tsingou’s work on the program, written for the MANIAC computer:

Mary Tsingou's original algorithm design, drawn in freehand

It was not until 2008 that Tsingou’s contribution was fully recognized. In the original paper she was thanked in a footnote but not acknowledged as a co-author. It is possible that, had it been published right away, she might have received proper credit. However, it is at least as possible that she might not. The reasons for this are a mix of endemic sexism and (relatedly) the low esteem accorded to computation at the time.

The relationship between these two factors runs deep. The word ‘computer’ originally referred to a job title. As scientists in the 19th century amassed vast amounts of data that needed processing, there was far too much for any individual to handle. They figured out that tasks could be broken up into smaller pieces and farmed out in parallel to humans who could do the necessary rote arithmetic. Because women were much cheaper to hire, and computing was seen as a relatively unskilled (albeit gruelling and cognitively demanding) role, computing became a predominantly female occupation. From the 19th century into the mid-20th century, all-women teams worked on astronomical data, artillery trajectories, and similar tasks, often performing extremely complex mathematical calculations requiring great precision and endurance, always for far less pay than they deserved or than a man would receive. Computers were victims of systematic gender discrimination from the very beginning.

The FPUT problem, however, is one that doesn’t lend itself to chunking and parallel computation: the output of one iteration of the computation is needed before you can calculate the next. Farming it out to human computers simply wouldn’t work. For work of this kind, you have to have a machine or it would take decades to come up with a solution.

In the first decade or so after digital computers were invented, significant mathematical skill was needed to operate them. Because women had long been exploited as human computers, there was, luckily enough, a large workforce of women with advanced math skills whose manual work was being made obsolete at the same time, so women played a significant role in the dawn of the industry. Mary Tsingou was not alone in making great contributions to the field.

By the 1970s that had changed a lot, not in a good way, but numbers slowly grew again until around the mid-1980s (a terrible decade in so many ways) when things abruptly changed for the worse.

graph showing the huge drop in women in IT from the 1980s onwards

Whether this was due to armies of parents buying PCs for their (male) children thanks to aggressive marketing to that sector, or to highly selective media coverage, or to the increasing recognition of the value of computing skills in the job market reinforcing traditional gender disparities, or to something else entirely (it is in fact complex, with vast self-reinforcing feedback loops all the way down the line), the end result was a massive fall in the number of women in the field. Today, fewer than 17% of computer science students are women, while the representation of women in most other scientific and technical fields has grown considerably.

There’s a weirder problem at work here, though, because (roughly – this is an educated guess) fewer than 1% of computer science graduates ever wind up doing any computer science, unless they choose a career in academia (in which case the figure rises to the very low single figures), and very few of them ever do more mathematics than an average greengrocer. What we teach in universities has wildly diverged from the skills actually needed in most computing occupations, at an even sharper rate than the decline of women in the trade. We continue to teach it in ways that would have made sense in the 1950s, when it could not be done without a deep understanding of mathematics and the science behind digital computation, even though neither has much, if any, use for more than a minute fraction of our students once they get out into the real world. Sure, we have broadened our curriculum to include many other aspects of the field, but we don’t let students study them unless they also learn the (largely unnecessary in most occupations) science and math (a subject that suffers even lower rates of non-male participation than computing). Thinking of modern computing as a branch of mathematics is a bit like treating poetry as a branch of linguistics or grammar, and thinking of modern computing as a science is a bit like treating painting as a branch of chemistry. It’s not so much that women have left computing but that computing – as a taught subject – has left women.

Computing professionals are creative problem solvers, designers, architects, managers, musicians, writers, networkers, business people, artists, social organizers, builders, makers, teachers, or dreamers. The main thing they have in common is that they work with computers. Some of them are programmers. A few (mostly those involved in designing machines and compilers) do real computer science. A few more do math, though rarely at more than middle-school level, unless they are working on the cutting edge of a few areas like graphics, AI, or data science (in which case the libraries and similar tools that would render it unnecessary have not yet been invented). The vast majority of computing professionals are using the outputs of this small elite’s work, not reinventing it. It is not surprising that there is enormous diversity in the field of computing, because computers are universal machines, universal media, and universal environments, so they encompass the bulk of human endeavour. That’s what makes them so much fun. If you are a computing professional you can work with anyone, and you can get involved in anything that involves computers, which is to say almost everything. And they are quite interesting in and of themselves, partly because they straddle so many boundaries, and ideas and tools from one area can spark ideas and spawn tools in another.

If you consider the uses of computer applications in many fields, from architecture or design to medicine or media to art or music, there is a far more equal gender distribution. Computing is embedded almost everywhere, and it mostly demands very different skills in each of its uses. There are some consistent gaps that computing students could fill or, better, that computing profs could teach in the contexts in which they are used. Better use could be made of computers across the board with just a little programming or other technical skill. Unfortunately, those who create, maintain, and manage computers and their applications tend mainly to come out of computer science programs (at least in North America and some other parts of the world), so many are ill-prepared to participate in all that richness, and computing profs tend to stick to teaching in computer science programs, so the rest of the world has to figure out for itself the things they could help with.

I think it is about time that we relegated computer science to a minor (not unimportant) stream and got back into the real world – the one with women in it. There’s still a pressing need to bring more women into that minor stream: we need inspirations like Mary Tsingou, we could do worse than preferentially hiring more non-male professors, and we desperately need to shift the discriminatory culture surrounding (especially) mathematics but, if we can at least teach in a way that better represents the richness and diversity of the computing profession itself, it would be a good start.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/10624709/some-thoughts-for-ada-lovelace-day

Are experienced online teachers best-placed to help in-person teachers cope with suddenly having to teach online? Maybe not.

I recently downloaded What Teacher Educators Should Have Learned From 2020. This is an open edited book, freely downloadable from the AACE site, for teachers of teachers whose lives were disrupted by the sudden move to emergency remote teaching over the past year or so. I’ve only skimmed the contents and read a couple of the chapters, but my first impressions are positive. Edited by Richard Ferdig and Kristine Pytash, it springs from the very active and engaged AACE SITE community, which is a good indicator of expertise and experience. It seems well organized into three main sections:

  1. Social and Emotional Learning for Teacher Education
  2. Online Teaching and Learning for Teacher Education
  3. eXtended Reality (XR) for Teacher Education

I like the up-front emphasis on social and emotional aspects, addressing things like belongingness, compassion, and community, mainly from theoretical/model-oriented perspectives, and the other sections seem wisely chosen to meet practitioner needs. The chapters adopt a standardized structure:

  • Introduction
  • What We Know
  • Lessons Learned for Research
  • Lessons Learned for Practice
  • What You Should Read
  • References

Again, this seems pretty sensible, maintaining a good focus on actionable knowledge and practical steps to be taken. It’s not quite a textbook, but it’s a useful teach-yourself resource with good coverage. I look forward to dipping into it a bit more deeply. I expect to find some good ideas, good practices, and good theoretical models to support my teaching and my understanding of the issues. And I’m really pleased that it is being released as an open publication: well done, AACE, for making this openly available.

But I do wonder a little about who else will read this.

Comfort zones and uncomfortable zones

The other day I was chatting with a neighbour who teaches a traditional hard science subject at one of the local universities, and who was venting about the problems of teaching via Zoom. He knew that I had a bit of interest and experience in this area, so he asked whether I had any advice. I started to suggest some ways of rethinking it as a pedagogical opportunity, but he was not impressed. Even something as low-threshold and straightforward as flipping the classroom, or focusing on what students do rather than what he has to tell them, was a step too far. He patiently explained that he has classes with hundreds of students and fixed topics that they need to learn, and he really didn’t see it as desirable or even possible to depart from his well-tried lecture format. At the least, it would be too much work, and he didn’t have the time for it. I did try to push back on that a bit, and I may have mentioned the overwhelming body of research suggesting this might not be a wise move, but he was pretty clear and firm about it. What he actually wanted was for someone to make (or tell him how to make) the digital technology as easy and as comfortably familiar as the lecture theatre, which would somehow make the students as engaged as he perceived them normally to be in his lectures, without notably changing how he taught. The problem was the darn technology, not the teaching. I bit my tongue at this point. I eventually came up with a platitude or two about trying to find different ways to make learning visible, about explicitly showing that he cares, about taking time to listen, about modelling the behaviour he wanted to see, about using the chat to good advantage, and about how motivation differs online and off, but I don’t think it helped. I suspect that the only things that really resonated with him were suggestions about how to get the most out of a webcam and a recommendation to buy a better microphone.

Within the context in which he usually teaches, he is probably a very good teacher. He’s a likeable person who clearly cares a lot about his students, he knows a lot about his subject, and he knows how to make it appealing within the situation that he normally works. His courses, as he described them, are very conventional, relying a lot on the structure given to them by the industry-driven curriculum and the university’s processes, norms, and structures, and he fills his role in all that admirably. I think he is pretty typical of the vast majority of teachers. They’re good at what they do, comfortable with how they do it, and they just want the technology to accommodate them continuing to do so without unnecessary obstacles.

Unfortunately, technology doesn’t work that way.

The main reason it doesn’t work is very simple: technologies (including pedagogies) affect one another in complex and recursive ways, so (with some trivial exceptions) you can’t change one element – especially a large one – and expect the rest to work as they did before. It’s simple, intuitive, and obvious, but unless you are already well immersed in both systems theory and educational theory, really taking it to heart and understanding how it must affect your practice demands a pretty big shift in weltanschauung, which is not the kind of thing I was keen to set in motion while on my way to the store in the midst of a busy day.

To make matters worse, even if teachers do acknowledge the need to change, their assumption that things will eventually (maybe soon) return to normal means that they are – reasonably enough –  not willing and probably not able to invest a lot of time into it. A big part of the reason for this is that, thanks to the aforementioned interdependencies, they are probably running round like blue-arsed flies just trying to keep things together, and filling their time with fixing the things that inevitably break in the process. Systems thrive on this kind of self-healing feedback loop. I guess teachers figure that, if they can work out how to tread water until the pandemic has run its course, it will be OK in the end.

If only.

Why in-person education works

The hallmark technologies (mandatory lectures, assignments, grades, exams, etc, etc) of in-person teaching are worse than awful but, just as a talented musician can make beautiful noises with limited technical knowledge and sub-standard instruments, so there are countless teachers who use atrocious methods in dreadful contexts but who successfully lead their students to learn. As long as the technologies are soft and flexible enough to allow them to paper over the cracks of bad tools and methods with good technique, talent, and passion, it works well enough for enough people enough of the time and can (with enough talent and passion) even be inspiring.

It would not work at all, though, without the massive machinery that surrounds it.

An institution (including its systems, structures, and tools) is itself designed to teach, no matter how bad the teachers are within it. The opportunities for students to learn from and with others around them, including other students, professors, support staff, administrators, and so on; the supporting technologies, including rules, physical spaces, structures, furnishings, and tools; the common rooms, the hallways, the smokers’ areas (best classrooms ever), the lecture theatres, the bars and the coffee shops; the timetables that make students physically travel to a location together (and thus massively increase salience); the notices on the walls; the clubs and societies; the librarians, the libraries, the students reading and writing within those libraries, echoing and amplifying the culture of learning that pervades them; the student dorms and shared kitchens where even more learning happens; the parties; even the awful extrinsic motivation of grades, teacher power, and norms and rules of behaviour that emerged in the first place due to the profound motivational shortcomings of in-person teaching. All of this and more conspires to support a basic level of at least mediocre (but good enough) learning, whether or not teachers teach well. It’s a massively distributed technology enacted by many coparticipants, of which designated teachers are just a part, and in which students are the lead actors among a cast of thousands. Online, those thousands are often largely invisible. At best, their presence tends to be highly filtered, channeled, or muted.

Why in-person methods don’t transfer well online

When most of that massive complex machinery is suddenly removed, leaving nothing but a generic interface better suited to remote business meetings than learning or, much worse, some awful approximation of all the evil, hard, disempowering technologies of traditional teaching wrapped around Zoom, or nightmarishly inhuman online proctoring systems, much of the teaching (in the broadest sense) disappears with it. Teaching in an institution is not just what teachers do. It’s the work of a community; of all the structures the community creates and uses; of the written and unwritten rules; of the tacit knowledge imparted by engagement in a space made for learning; of the massive preparation of schooling and the intricate loops that connect it with the rest of society; of attitudes and cultures that are shaped and reinforced by all the rest.  It’s no wonder that teachers attempting to transfer small (but the most visible) parts of that technology online struggle with it. They need to fill the ever-widening gaps left when most of the comfortable support structures of in-person institutions that made it possible in the first place are either gone or mutated into something lean and hungry. It can be done, but it is really hard work.

More abstractly, a big part of the problem with this transfer-what-used-to-work-in-person approach is that it is a technology-first approach to the problem that focuses on one technology rather than the whole. The technology of choice in this case happens to be a set of pedagogical methods, but it is no different in principle than picking a digital tool and letting that decide how you will teach. Neither makes much sense. All the technologies in the assembly – including pedagogies, digital tools, regulations, designs, and structures – have to work together. No single technology has precedence, beyond the one that results from assembling the rest. To make matters worse, what-used-to-work-in-person pedagogies were situated solutions to the problems of teaching in physical classrooms, not universally applicable methods of teaching. Though there are some similarities here and there, the problems of teaching online are not at all the same as those of in-person teaching so of course the solutions are different. Simply transferring in-person pedagogies to an online context is much like using the paddles from a kayak to power a bicycle. You might move, but you won’t move far, you won’t move fast, you won’t move where you want to go, and it is quite likely to end in injury to yourself or others.

Such problems have, to a large extent, been adequately solved by teachers and institutions that work primarily online. Online institutions and organizations have infrastructure, processes, rules, tools, cultures, and norms that have evolved to work together, starting with the baseline assumption that little or none of the physical stuff will ever be available. Anything that didn’t work never made it to first base, or has not survived. Those that have been around a while might not be perfect, but they have ironed out most of the kinks and filled in most of the gaps. Most of my work, and that of my smarter peers, begins in this different context. In fact, in my case, it mainly involves savagely critiquing that context and figuring out ways to improve it, so it is yet another step removed from where in-person teachers are now.

OK, maybe I could offer a little advice or, at least, a metaphor

Roughly 20 years ago I did share a similar context. Working in an in-person university, I had to lead a team of novice online teachers from geographically dispersed colleges to create and teach a blended program with 28 new online courses. We built the whole thing in 6 months from start to finish, including the formal evaluations and approvals process. I could share some generic lessons from what I discovered then, the main one being to put most of the effort into learning to teach online, not into designing course materials. Put dialogue and community first, not structure. For instance, make the first thing students see in the LMS the discussion, not your notes or slides, and use the discussion to share content and guide the process. However, I’d mostly feel like the driver of a Model T Ford trying to teach someone to drive a Tesla. Technologies have changed, I have changed, my memory is unreliable.

In fact, I haven’t driven a car of any description in years. What I normally do now is, metaphorically, much closer to riding a bicycle, which I happen to do and enjoy a lot in real life too. A bike is a really smart, well-adapted, appropriate, versatile, maintainable, sustainable soft technology for getting around. The journey tends to be much more healthy and enjoyable, traffic jams don’t bother you, you can go all sorts of places cars cannot reach, and you can much more easily stop wherever you like along the way to explore what interests you. You can pretty much guarantee that you will arrive when and where you planned to arrive, give or take a few minutes. In the city, it’s often the fastest way to get around, once you factor in parking and so on. It’s very liberating. It is true that more effort is needed to get from A to B, bad weather can be a pain, and it would not be the fastest or most comfortable way to reach the other side of the continent: sometimes, alternative forms of transport are definitely worth taking, and I’m not against them when it’s appropriate to use them. And the bike I normally ride does have a little electric motor in one of the wheels that helps push me up hills (not much, but enough), but it doesn’t interfere with the joy (or most of the effort) of riding. I have learned that low-threshold, adaptable, resilient systems are often much smarter in many ways than high-tech platforms because they are part-human. They can take on your own smartness and creativity in ways no amount of automation can match. This is true of online learning tools as much as it is of bicycles. Blogs, wikis, email, discussion forums, and so on often beat the pants off learning management systems, commercial teaching platforms, learning analytics tools, and AI chatbots for many advanced pedagogical methods, because they can become what you want them to be, rather than what the designer thought you wanted, and they can go anywhere, without constraint.
Of course, the flip side is that they take more effort, sometimes take more time, and (without enormous care) can make it harder for all concerned to do things that are automated and streamlined in more highly engineered tools, so they might not always be the best option in all circumstances, any more than a bike is the best way to get up a snowy mountain or to cross an ocean.

Why you shouldn’t listen to my advice

It’s sad but true that most of what I would really like to say on the subject of online learning won’t help teachers on the ground right now, and it is actually worse than the help their peers could give them because what I really want to tell them is to change everything and to see the world completely differently. That’s pretty threatening, especially in these already vulnerable times, and not much use if you have a class to teach tomorrow morning.

The AACE book is more grounded in where in-person teachers are now. The chapter “We Need to Help Teachers Withstand Public Criticism as They Learn to Teach Online”, for example, delves into the issues well, in accessible ways that derive from a clear understanding of the context.  However, the book cannot help but be an implicit (and, often, explicit) critique of how teachers currently teach: that’s implied in the title, and in the chapter structures.  If you’re already interested enough in the subject and willing enough to change how you teach that you are reading this book in the first place, then this is great. You are 90% of the way there already, and you are ready to learn those lessons. One of the positive sides of emergency remote teaching has been that it has encouraged some teachers to reflect on their teaching practices and purposes, in ways that will probably continue to be beneficial if and when they return to in-person teaching. They will enjoy this book, and they may be the intended audience. But they are not the ones that really need it.

I would quite like to see (though maybe not to read) a different kind of book containing advice from beginners. Maybe it would have a title something like ‘What I learned in 2020’ or ‘How I survived Zoom.’ Emergency remote teachers might be more inclined to listen to the people who didn’t know the ‘right’ ways of doing things when the crisis began, who really didn’t want to change, who maybe resented the imposition, but who found ways to work through it from where they were then, rather than where the experts think (or know) they should be aiming now. It would no doubt annoy me and other distance learning researchers because, from the perspective of recognized good practice, much of it would probably be terrible but, unlike what we have to offer, it would actually be useful. A few chapters in the AACE book are grounded in concrete experience of this nature, but even they wind up saying what should have happened, framing the solutions in the existing discourse of the distance learning discipline. Most chapters consist of advice from experts who already knew the answers before the pandemic started. It is telling that the word ‘should’ occurs a lot more frequently than it should. This is not a criticism of the authors or editors of the book: the book is clear from the start that it is going to be a critique of current practice and a practical guidebook to the territory, and most of the advice I’ve seen in it so far makes a lot of sense. It’s just not likely to reach those who have no wish to change, not just in their practices but in their fundamental attitudes to teaching. Sadly, that’s also true of this post which, I think, is therefore more of an explanation of why I’ve been staring into the headlights for most of the pandemic, rather than a serious attempt to help those in need.
I hope there’s some value in that because it feels weird to be a (slight, minor, still-learning) expert in the field with very strong opinions about how online learning should work, but to have nothing useful to say on the subject at the one time it ought to have the most impact.

Read the book:

Ferdig, R.E. & Pytash, K.E. (2021). What Teacher Educators Should Have Learned From 2020. Association for the Advancement of Computing in Education (AACE). Retrieved March 22, 2021 from https://www.learntechlib.org/primary/p/219088/.

From Representation to Emergence: Complexity's challenge to the epistemology of schooling – Osberg – 2008 – Educational Philosophy and Theory – Wiley Online Library

This is my second post for today on the subject of boundaries and complex systems (yes, I am writing a paper!), this time pointing to a paper by Osberg, Biesta and Cilliers from 2008 that applies the concepts to knowledge and education. It’s a fascinating paper, drawing a theory of knowledge out of complex systems that the authors rather deftly fit with Dewey’s transactional realism and (far less compellingly) a bit of deconstructionism.

I think this sits very firmly within the connectivist family of theories (Stephen Downes may disagree!) albeit from a slightly different perspective. The context is the realm of complex (mostly complex adaptive) systems but the notion of knowledge as an emergent and shifting phenomenon born of engagement – a process, not a product – and the significance of the connected whole in both enabling and embodying it all is firmly in the connectivist tradition. It’s a slightly different perspective but one that is well-grounded in theory and comes to quite a similar conclusion, aptly put:

education (becoming educated) is no longer about understanding a finished  universe, or even about participating in a finished and stable universe. It is the result, rather, of participating in the creation of an unfinished universe.

The authors begin by defining what they describe as a ‘representational’ or ‘spatial’ epistemology that underpins most education. This is not quite as simplistic as it sounds – they include models and theories in this, at least. Their point is that education takes people out of ‘real life’ and therefore must rely on a means to represent ‘real life’ in order to do its job properly. I think this is pushing it a bit: yes, that is true of a fair amount of intentional teaching, but a lot goes on in education systems that is unintentional, or emerges as a by-product of interaction, or happens in playgrounds, cafes, or common rooms, that is very different and is not merely incidental to the process but quite critical to it. To pretend that educational systems are nothing but the explicit things we intentionally do to people is, I think deliberately, creating a bit of a straw man. The authors acknowledge much the same point: I guess it is done to distinguish this from their solution, which is an ‘emergentist’ epistemology.

The really interesting stuff for me comes from Cilliers’s contribution (I’m guessing) on boundaries, which makes the simple and obvious point that complex systems (as opposed to complicated ones) are inherently incompressible, so any model we make of them is inaccurate: in leaving out the tiniest thing we make it impossible to make deterministic predictions, save in that we can create boundaries to focus on particular aspects we might care about and come up with probabilistic inferences (e.g. predicting the weather). Those boundaries are thus, of necessity, created (or, more accurately, negotiated), not discovered. They are value-laden. Thus:

“…models and theories that reduce the world to a system of rules or laws cannot be understood as pure representations of a universe that exists independently, but should rather be understood as valuable but provisional and temporary tools by means of which we constantly re-negotiate our understanding of and being in the world”

They go on…

We need boundaries around our regularities before we can model or theorise them, before we can find their rules of operation, because rules make sense only in terms of boundaries. The point is that the setting of the boundary creates the condition of possibility for a rule or a law to exist. When a boundary is not naturally given, as is the case with natural complex systems, the rules that we ‘discover’ also cannot be understood as naturally given. Rules and ‘laws’ are not ‘real’ features of the systems we theorise about. Theories that attempt to reduce complexity to a system of rules or laws, like our models which do precisely this, therefore cannot be understood as pictures of reality.

So, the rules that we find are pragmatic ones – they are tools, rather than pictures of reality, that help us to renegotiate our world and the meaning we make in and of it:

“From this perspective, knowledge is not about ‘the world’ as such, it is not about truth; rather, it is about what we can do in the world, how we can change it. One could say ‘acquiring’ knowledge does not ‘solve’ problems for us: it creates problems for us to solve.”

At this point they come round to Dewey, whose transactional model is not about finding out about the world but leads to a constantly emerging and ever renegotiated state of being.

“…in acting, we create knowledge, and in creating knowledge, we learn to act in different ways and in acting in different ways we bring about new knowledge which changes our world, which causes us to act differently, and so on, unendingly. There is no final truth of the matter, only increasingly diverse ways of interacting in a world that is becoming increasingly complex.”

One of the more significant aspects of this, that is not dwelt on anything like enough in this paper but that forms a consistent subtext, is that this is a fundamentally social pursuit. This is a complex system not just of individuals negotiating an active relationship with the world, but of people doing it together, as part of a complex system that drives its own adaptation, at every scale and within every (overlapping, interpenetrating) boundary.

They continue with an, I think, unsuccessful attempt to align this perspective with postmodernist/poststructuralist/deconstructionist theory, claiming that Dillon’s differentiation between the radical relationality of complexity and poststructuralist theorists is illusory, because a complex system is always in a state of becoming without being, so it is much the same kind of thing. Whether or not this is true, I don’t think it adds anything significant to the arguments.

The paper rushes to a rather unsatisfactory conclusion – at last hitting the promised topic of the title – about the role of this emergentist epistemology in schooling:

Acquisition is no longer the name of the game …. This means questions about what to present in the curriculum and whether these things should be directly presented or should be represented (such that children may acquire knowledge of these things most efficiently or effectively) are no longer relevant as curricular questions. While content is important, the curriculum is less concerned with what content is presented and how, and more with the idea that content is engaged with and responded to …. Here the content that is engaged is not pre-given, but emerges from the educative situation itself. With this conception of knowledge and the world, the curriculum becomes a tool for the emergence of new worlds rather than a tool for stabilisation and replication

This follows quite naturally and makes sense, but it diminishes the significance of a pretty obvious elephant in the room, which is that the educational institution itself is one of those boundaried systems, playing a huge role in and of itself, not to mention in its interactions with other boundaried systems, regardless of the processes enacted within its boundaries. I think this is symptomatic of a big gap that the paper very much implies but barely attempts to address: the processes, structures, rules, tools, objects, content (whatever that is!), media, and a host of other things involved are themselves part of those complex systems. Knowledge is indeed a dynamic process, a state of becoming or of being, but it incorporates a great many things, only a limited number of which are in the minds of individuals. It’s not about people learning – it’s about that whole, massive, complex adaptive system itself.

Address of the bookmark: http://onlinelibrary.wiley.com/doi/10.1111/j.1469-5812.2007.00407.x/abstract