Essentially, because this was what I was invited to do, the paper condenses the more than 10,000 words of my article Educational technology: what it is and how it works (itself a very condensed summary of my forthcoming book, due out Spring 2023) into under 4,000 words that, I hope, more succinctly capture most of the main points of the earlier paper. I’ve learned quite a bit from the many responses I received to the earlier paper, and from the many conversations that ensued – thank you, all who generously shared your thoughts – so it is not quite the same as the original. I hope this one is better. In particular, I think/hope that this paper is much clearer than the older paper about the nature and importance of technique, and about the distinction between soft and hard technologies, both of which seemed to be the most misunderstood aspects of the original. There is, of course, less detail in the arguments, and a few aspects of the theory (notably relating to distributed cognition) are more focused on pragmatic examples, but most are still there, or implied. It is also a fully open paper, not just available for online reading, so please freely download it and share it as you will.
Here’s the abstract:
To be human is to be a user, a creator, a participant, and a co-participant in a richly entangled tapestry of technologies – from computers to pedagogical methods – that make us who we are as much as our genes. The uses we make of technologies are themselves, nearly always, also technologies, techniques we add to the entangled mix to create new assemblies. The technology of greatest interest is thus not any of the technologies that form that assembly, but the assembly itself. Designated teachers are never alone in creating the assembly that teaches. The technology of learning almost always involves the co-participation of countless others, notably learners themselves but also the creators of systems, artifacts, tools, and environments with and in which it occurs. Using these foundations, this paper presents a framework for understanding the technological nature of learning and teaching, through which it is possible to explain and predict a wide range of phenomena, from the value of one-to-one tutorials, to the inadequacy of learning style theories as a basis for teaching, and to see education not as a machine made of methods, tools, and systems but as a complex, creative, emergent collective unfolding that both makes us, and is made of us.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/14622408/my-latest-paper-learning-technology-and-technique-now-online-in-the-canadian-journal-of-learning-and-technology
This is very kind! I am sorry for all the very, very, very bad thoughts I have been thinking about you and your party. So, all we had to do was ask, eh?
We currently (in ballpark terms) have about 300 staff in Athabasca out of a total of roughly 1,200 overall. You want 65% of us to live there. So, what we need is:
ongoing funding to pay the salaries of 1500 new staff;
good, diverse, well-paid jobs for their families (yes, we have families);
support for building new homes to house the new staff;
computers, software, cloud services, high speed reliable internet to the town (not the rubbish we have now) for those new staff;
extra buildings to house them on the campus, including canteens, leisure facilities, etc;
regular, frequent transit links to the town of Athabasca.
We’ll let you off paying for 8 of those staff if you let our execs live wherever the hell they want. Maybe you could re-use the absurdly overblown presidential accommodation to house a family or six.
This is just a guess, but I think that, in total, such assistance might just about raise the government funding per student that you currently so generously provide us to around 70-80% of what you currently give to other Albertan universities.
It’s still a damn fool place to put a university so you’d better be prepared to offer some much better incentives for those you are forcing to live there. Higher pay, of course, maybe a free vehicle (electric, of course – you wouldn’t want to increase the outrageously high environmental impact of this proposal even more, would you?). If you expect us to do proper research, attracting international and national partners and research students, we will need at least a good rail link to the nearest international airport (you could have one built at Athabasca, perhaps? Imagine the additional benefits to Northern communities! Who wouldn’t want to fly to Athabasca rather than, say, Edmonton or Calgary?). You should probably improve and better maintain the road into town so that it stops killing and injuring our colleagues. We really don’t like that aspect of the job. It puts people off working there.
So, at the end of it, with all these additional expenses, you might have to put us nearly on par for per-student funding with the rest of Alberta’s comprehensive research universities. On the bright side, you’ll not have to pay for all the lawsuits and payouts for constructive dismissal, nor the humiliation of having destroyed one of the world’s finest universities, and I bet it would win you a ton of votes.
Thank you for the offer. Over to you.
P.S. And please, please, please would you just stop it with the micromanaging? It would save us all much unnecessary work and pain. More savings there.
P.P.S. And please stop talking about “not reinventing the school’s mandate but simply trying to reverse the trend away from it” by the way. You’re just lending fuel to the popular misconception that there are liars, damned liars, and politicians. I suppose you mean the mandate forced on us against our will 40 years ago that made the president and half the faculty resign? The one that was rescinded decades ago because it was completely unworkable for a university hoping to hire top quality researchers, teachers, tutors, professional staff, and administrators? That one?
P.P.P.S. ‘An ultimatum (/ˌʌltɪˈmeɪtəm/; Latin for ‘the last one’) is a demand whose fulfillment is requested in a specified period of time and which is backed up by a threat to be followed through in case of noncompliance’. Sound familiar?
For anyone else reading this…
Wherever you live, please make your views known by contacting the Minister, Demetrios Nicolaides, at email@example.com, or comment on social media, by tagging @demetriosnAB on Twitter, #abpse, #abpoli. Blog about it, write to the press about it, lobby outside the gates of the Albertan legislature, tell your friends, whatever: make a fuss.
This video from Peter Scott, president of Athabasca University, is a clear, eloquent, and passionate plea to save our university and the education of its students from imminent destruction at the hands of a brutal, self-serving, short-sighted government. Please watch it. Please act on it, in any way you can, if only to share it on your preferred social media. If we don’t stop this, Athabasca University as we know it will be no more.
If you don’t have time to watch the 12 minute video, in brief, this is the gist of it…
The Albertan government has unilaterally, without consultation with any stakeholders, demanded that:
we move about 500 of our staff (nearly half of the workforce), including the entire executive team, to the town of Athabasca by 2024-2025, to work there in-person;
we focus our efforts solely on Albertan students*;
we drop the near-virtual working policy on which we have worked and invested for many years and on which our future depends.
They have demanded that we agree to this, and to have a plan in place, by the end of next month, otherwise they will withdraw our funding. This would bankrupt us.
Right now, we are a world leader in online and distance education. The majority of our students live outside Alberta, so we are the nearest thing to a national university that Canada has. As the only fully open and distance university in Canada, we provide opportunities for many across the country who would otherwise be unable to get a decent education – people in rural or remote areas, those serving abroad, indigenous people, prisoners, and many more who would find it difficult or impossible to enrol in a conventional university are welcome here. Over a third of our graduates are the first in their families to have achieved a degree. We have a remarkably high percentage of the finest distance and online researchers in the world, which is only possible because they are allowed to live and work where they choose. And we are half-way through the process of reinventing ourselves, with a visionary plan and a sustainable business model that will allow us to serve better, and to serve many more, which relies entirely on being near-virtual. Over half of our staff – including virtually all faculty and tutors – have lived and worked at a distance for about 20 years. Most of the rest now happily do so. Fewer than 10% currently work in-person. We walk the talk. We know, intimately and first-hand, the struggles that our students face working online.
I love this university and what it stands for. I love its open mission, its kick-ass research that punches far above its weight, its wonderful staff, its radical, caring vision, and its amazing, awesome students. We are something unique and precious, at least in Canada and perhaps in the world. If we let this happen, all of that will go. If we accept the directive, then at least half the faculty and most of our exceptional executive team will resign, the quality of whatever staff remain will fall through the floor, the few students that are left will suffer, and the costs of moving will send us deep into the red. Our open mission itself – the thing that most defines us – is under threat. If we reject it, we will lose a quarter of our budget and go bust. Either way, if the Albertan government persists with this insane, brutish plan, we are doomed. If anything survived at the end of it – which would be possible, even in part, only if the hostile government provided very large amounts of funding that I am fairly sure it is unwilling to provide – it would be a shrunken, irrelevant, sub-standard shadow of what it is now. The first order of business should therefore be to do all that we can to stop the government from forcing this absurd, devastatingly harmful mandate upon us.
Whoever you are, wherever you are, please help Athabasca University fight this threat to its survival. If you live in Alberta, please vote this atrocious, oil-addled, self-serving government out of office. Wherever you live, please make your views known by contacting the Minister, Demetrios Nicolaides, at firstname.lastname@example.org, or comment on social media, by tagging @demetriosnAB on Twitter, #abpse, #abpoli. Blog about it, write to the press about it, lobby outside the gates of the Albertan legislature, make a fuss.
And, if you happen to be a politician with sway in your province or in the federal government, or maybe someone who runs another university that is seeking to expand significantly further into online learning, we have a beautiful, already near-virtual, thriving, forward-looking university with a highly talented workforce (no re-housing needed, limited need for physical space, business processes and digital infrastructure already established) that would love to find some better custodians for its crucial mission.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/14559190/we-need-help-athabasca-university-is-facing-an-existential-threat-from-the-government-of-alberta
*Addendum and point of clarification as this has been misunderstood by a couple of readers: this is required by the Albertan government as a change to our central mission. To the best of my knowledge it does not explicitly mandate that we cannot accept students from elsewhere into our programs, though it is a major change in emphasis that would have many adverse impacts, big and small, on what, how and to whom we teach.
Athabasca University’s Digital Governance Committee recently got into a heated debate about whether and why we should support Zoom. It was a classic IT-manageability-versus-user-freedom debate and, as is often the way in such things, the suggested resolution was to strike a working group/sub-committee of stakeholders to identify business requirements that the IT department could use to find an acceptable solution. This approach is eminently sensible, politically expedient, tried-and-tested, and profoundly inadequate.
As Henry Ford (probably never) said, “if I’d asked people what they wanted they would have said ‘a better horse'”.
A design approach that starts by gathering business requirements situates the problem in terms of the current solution, which is comprised of layers of solutions to problems caused by other solutions. For simple ‘hygiene’ tech that serves a hard, well-defined business function – leave reporting, accounting, etc – as long as you do properly capture the requirements and don’t gloss over things that matter, that’s normally fine, because you’re just building cogs to make the existing machine work more smoothly. However, for very soft social technologies like meetings, with potentially infinite ways of using them (by which I mean purposes, techniques, ways of assembling them with other technologies, and so on), no list of requirements could even begin to scratch at the surface. The thing about soft technologies – meetings, writing, pencils, pedagogies, programmable computers, chisels, wheels, technologies of fire, groups, poetry, etc – is that they don’t so much solve problems as they create opportunities. They create adjacent possible empty niches. In other words, they are defined by the gaps they leave, much more than the gaps they fill. What happens as a result of them is fundamentally non-deducible.
Solving different problems, creating different possibles
Meetings are assemblies of vast ranges of technologies and other phenomena, and they serve a vast number of purposes. Meetings are not just one technology but a container for an indefinitely large number of them. They are, though, by and large, solutions to in-person problems, many of which are constrained by physics, physiology, psychology, and other factors that do not apply or that apply differently online. Most webmeeting systems are attempts to replicate the same solutions or (more often) to replicate other webmeeting systems that have already done so, but they are doomed to be pale shadows of the original because there are countless things they cannot replicate, or can only replicate poorly. Among the phenomena that are the default in in-person meetings are, for example:
the immense salience brought about by travelling to a location, especially when it involves significant effort (lost in webmeetings);
the fact that it forces attention for a sustained period (most webmeeting software, and most ways of using it, make inattention much easier);
the social bonding that we have evolved to feel in the presence of others (not well catered for in webmeeting software);
the focus and meaning that comes from the ‘eventness’ of the occasion (diluted in webmeetings);
the ability to directly work together on an issue or artefact (limited in some ways in webmeetings, though potential exists for collaborative construction of digital artefacts);
the inability to invisibly escape (easy in most webmeetings);
the microexpressions, postures, movements, smells, etc that support communication (largely lost in webmeetings);
the social bonding value of sharing food and drink (lost in webmeetings);
the blurred boundaries of entering and leaving, the potential to leave together (usually lost in webmeetings);
the bonding that occurs in having a shared physical experience, including adversities such as a room that is too hot, roadworks outside, wasps in the room, etc, as well as good things like the smell of good coffee or luxurious chairs (not remotely possible in webmeetings, apart from when the tech fails – but then the meeting fails too);
the support for nuances of verbal interaction – knowing when it’s OK to interrupt, being able to sigh, talk at once, etc, not to mention having immediate awareness of who is speaking (webmeetings mostly suck at this);
the ability to cluster with others – to sit next to people you know (or don’t know), for instance (rarely an option in most webmeetings, and nothing like as salient or rich in potential as its in-person counterpart even when allowed);
the salience of being in a space, with all the values, history, power relationships, and so on that it embodies, from who sits where to which room is chosen (hardly a shadow of this in most webmeetings);
the ability to stand up and walk around together (a motion-sickness-inducing experience in webmeetings);
the problems and benefits of both over-crowding and excessive sparsity (very different in webmeetings);
the means to seamlessly integrate and employ other technologies, including every digital technology as well as paper, dance, desks, chairs, whiteboards, pins, clothing, coffee, doors, etc, etc, etc. (webmeetings offer a tiny fraction of this);
and so on.
A few of these might be replicated in current or future webmeeting software, though usually only in caricature. Most simply cannot be replicated at all, even if we could meet as virtual personas in Star Trek’s holodecks. Of course, there are also many things that we should be grateful are not replicated in online meetings: conspicuous body odour, badly designed meeting rooms, schedule conflicts, and so on, as well as the unwanted consequences of most of the phenomena above. These, too, are phenomena that the technologies of meetings are designed around. In-person meetings are incredibly highly evolved technologies, making use of technological and non-technological phenomena in immensely subtle ways, as well as having layers of counter-technology a kilometre deep, from social mores and manners to Robert’s Rules of Order, from meeting tables to pens and note-taking strategies. Much of the time we don’t even notice that there are any technologies involved at all (as Danny Hillis quipped, ‘technology’ is anything invented after you were born).
Webmeetings, though, also have distinctive phenomena that can be exploited, such as:
the ease of entering and leaving (so breaks are easier to take, they don’t need to last a long time, people can dip in and out, etc);
the automation of scheduling and note-taking;
the means to record all that occurs;
the means to directly share digital tools;
the fact that people occupy different spaces (often with tools at their disposal that would be unavailable in a shared meeting space);
the captions for the hard of hearing;
the integrated backchannels of text chat.
These are different kinds of problem space with different adjacent possibles as well as different constraints. It therefore makes no sense to blindly attempt to replicate in-person meetings when the problems and opportunities are so different. We don’t (or shouldn’t) teach online in the same way we teach in the classroom, so why should we try to use meetings in the same way? For that matter, why have meetings at all?
Dealing with the hard stuff
Some constraints are quite easy to specify. If a matter under discussion needs to be kept private, say, that limits the range of options, albeit that, for such a soft technology as a meeting, privacy needs may vary considerably, and what works for one context may fail abysmally for another. Similarly for security, accessibility, learnability, compatibility, interoperability, cost, reliability, maintainability, longevity, and other basic hygiene concerns. There are normally hard constraints defining a baseline, but it is a fuzzy baseline that can be moved in different contexts for different people and different uses. No one wants unreliable, insecure, expensive, incompatible, unusable, buggy, privacy abusing software but most of us nonetheless use Microsoft products.
It is also not completely unreasonable to look for specific known business requirements that need to be met. However, there are enormous risks of duplicating solutions to non-existent problems. It is essential, therefore, to try to find ways of understanding the problems themselves, as much as possible in isolation from existing solutions. It would be a bad requirement to simply specify that people should be able to see and hear one another in real-time, for example: that is a technological solution based on the phenomena that in-person meetings use, not a requirement. It is certainly a very useful phenomenon that might be exploited in any number of ways (we know that because our ancestors have done it since before humans walked the planet) but it tells us little about why the phenomenon matters, or what it is about it that matters.
It would be better, perhaps, to ask people what is wrong with in-person meetings. It still situates the requirements in the current problem space, but it looks more closely at the source rather than the copy. It makes it easier to ask what purposes being able to see and hear one another during in-person meetings serve, what phenomena it provides, on what phenomena (including those provided by other technologies) it depends, and what depends on it. From that we may uncover the business requirements that seeing and hearing other people actually meet. However, it is incredibly tricky to ask such questions in the abstract: the problem space is vast, complex, diverse, and deeply bound up in what we are familiar with, not what is possible.
It might help to make the familiar unfamiliar, for instance, by holding in-person meetings wearing blindfolds, or silently, or to attempt to conduct a meeting using only sticky notes (approaches I have used in my own teaching about communication technologies, as it happens). This kind of exercise forcibly creates a new problem space so that people can wonder about what is lost, what is gained, reasons for doing things, and so on. If you do enough of that, you might start to uncover what matters, and (perhaps) some of the reasons we have meetings in the first place.
Exploring the adjacent possible
Perhaps most importantly, though, soft technologies are not just solutions to problems. Soft technologies are, first and foremost, creators of opportunities, the vast majority of which we will never begin to imagine. Soft technology design is therefore, and must be, a partnership between the person and the technology: it’s not just about creating a tool for a task but about having a conversation with that tool, asking what it can do for us and wondering where it might lead us. What’s interesting about the ubiquitous backchannel feature of webmeetings, for instance, is that it did not find its way into the software as a result of a needs assessment or analysis of business requirements. It was, instead, an early (and deeply imperfect) attempt at replicating what could be replicated of synchronous meetings before multimedia communication became possible. When designing early web conferencing systems, no one said ‘we need a way of typing so that others can see it’. They looked at what could be done and said ‘hey, we can use that’. The functionality persisted and has become nearly ubiquitous because it’s easy to implement and obviously useful. It’s an exaptation, though, not the product of a pre-planned intentional design process. It’s a side-effect of something else we did – a poor solution to an existing problem – that created new phenomena we could co-opt for other purposes. New adjacent possible empty niches emerged from it.
One way to explore such niches would be to give people the chance to play with a wide range of existing ways of addressing the same problem space. A lot of people have turned their attention to these issues, so it makes sense to mine the creativity of the crowd. There are systems like Discord or MatterMost, that represent a different category of hybrid asynchronous/synchronous tool, for instance, blurring the temporal boundaries. There are spatial metaphor systems with isometric interfaces like Spatial, or Ovice, which can allow more intuitive clustering, perhaps contributing to a greater sense of the presence of others, while enabling novel approaches to (say) voting, and so on. There are immersive systems that more literally replicate spaces, like Mozilla Hubs or OpenSim. I hold out little hope for those, but they do have some non-literal features – especially in ways they allow impossible spaces to be created – that are quite interesting. There are instant messengers like Telegram or Signal, that offer ambient awareness as well as conventional meeting support (MS Teams, reflecting its Skype origins, has that too). There are games and game-like environments like Gather or Minecraft, that create new kinds of world as well as providing real-time conferencing features. And there are much smarter webmeeting systems like Around (that largely solves almost all audio problems, that – crucially – can make the meeting a part of a user’s environment rather than a separate space for gathering, that rethinks text chat as a transient, person-focused act rather than a separate text-stream, that makes working together on a digital artefact a richly engaging process, that automatically sends a record to participants, and more). And there’s a wealth of research-based systems that we have built over the past few decades, including many of my own, that do things differently, or that use different metaphors. 
These include computer-supported collaborative argumentation tools, for instance, and systems that leverage social navigation (I particularly love Viégas and Donath’s ChatCircles from the late 1990s), and so on. They all create new problems, and all have flaws of one kind or another, but thinking about how and why they are different helps to focus on what we are trying to do in the first place.
Perhaps the best of all ways to explore those adjacent possible empty niches is to make them: not to engineer them according to a specification, but to tinker and play. I’ve written about this before (e.g. here and, paywalled, here, summarized by Stefanie Panke here). Tinkering as a research methodology is a process of exploration not of what exists but of what does not. It’s a journey into the adjacent possible, with each new creation or modification creating new adjacent possibles, a step-by-step means of reaching into and mapping the unknown. We don’t all have the capacity (in skills, time, or patience) to create software from scratch, but we can assemble what we already have. We can, for instance, try to add plugins to existing systems: it is seldom necessary to write your own WordPress plugin, for example, because tens of thousands of people have already done so. Or we can make use of frameworks to construct new systems: the Elgg system underpinning the Landing, for example, does require some expertise to build new components, but a lot can be achieved by assembling and/or modifying what others have built. Or, if standards are followed, we can assemble services as needed: standards like XCON, XMPP/Jabber, and IRC make this possible. And we don’t need to create software or hardware at all in order to dream. Hand-drawn mockups can create new possibilities to explore. Small steps into the unknown are better than no steps at all.
Stop looking for solutions
Webmeetings that attempt to replicate their in-person inspirations are unlikely to ever afford the flexibility of in-person meetings, because they have fewer phenomena to orchestrate and we are never going to be as adept at using them. The gaps they leave for us to fill are smaller, and our capacity to fill those gaps is less well-developed. However, digital systems can provide a great many new and different phenomena that, with creativity and inspiration, may meet our needs much better. Without the constraints of physical spaces we can invent a new physics of the digital. As long as we treat the problem as one of replicating meetings then it makes little difference what we choose: Zoom, Teams, Webex, Connect, BBB, Jitsi, whatever – the feature set may vary, there may be differences in reliability, security, cost, etc but any of them will do the job. The problem is that it is the wrong job. We already pay for and use at least three major systems for synchronous meetings at AU, as well as a bunch of minor ones, and that is nothing like enough. Those that begin to depart from the replication model – Around being my current favourite – are a step in the right direction, while those that double down on it (notably most immersive environments) are probably a step in the wrong direction. It is not about going forward or backward, though: it is about going sideways.
It is not too tricky to experiment in this particular field. For most digital systems we create, our decisions normally haunt us for years or decades, because we become locked in to them with our data. Synchronous technologies can, with provisos, be swapped around and changed at will. Sure, there can be issues with recording and transcripts, there can be a training burden, contracts can be expensive and hard to escape, and tech support may be a little more costly but, for the most part, if we don’t like something then we can drop it and try something else.
I don’t have a solution to choosing or making the right piece of software for AU’s needs, because there isn’t one. There are countless possible solutions, none of which will suit everyone, many of which will provide parts that might be useful to most people, and all of which will have parts or aspects that won’t. But I do know that the way to approach the problem is not to have meetings to determine business requirements. The solution is to find ways of discovering the adjacent possible, to seek inspiration, to look sideways and forwards instead of backwards. We don’t need simple problem-solving for this kind of situation (or rather, it is quite inadequate on its own): we need to find ways to dream, ways to wonder, ways to engage in the act of creation, ways to play.
Brilliant. The short answer is, of course, yes, and it doesn’t do a bad job of it. This is conceptual art of the highest order.
This is the preprint of a paper written by GPT-3 (as first author) about itself, submitted to “a well-known peer-reviewed journal in machine intelligence”. The second and third authors provided guidance about themes, datasets, weightings, etc, but that’s as far as it goes. They do provide commentary as the paper progresses, but they tried to keep that as minimal as needed, so that the paper could stand or fall on its own merits. The paper is not too bad. A bit repetitive, a bit shallow, but it’s just a 500-word paper – hardly even an extended abstract – so that’s about par for the course. The arguments and supporting references are no worse than many I have reviewed, and considerably better than some. The use of English is much better than that of the majority of papers I review.
In an article about it in Scientific American the co-authors describe some of the complexities in the submission process. They actually asked GPT-3 about its consent to publication (it said yes), but this just touches the surface of some of the huge ethical, legal, and social issues that emerge. Boy there are a lot of those! The second and third authors deserve a prize for this. But what about the first author? Well, clearly it does not, because its orchestration of phenomena is not for its own use, and it is not even aware that it is doing the orchestration. It has no purpose other than that of the people training it. In fact, despite having written a paper about itself, it doesn’t even know what ‘itself’ is in any meaningful way. But it raises a lot of really interesting questions.
It would be quite interesting to train GPT-3 with (good) student assignments to see what happens. I think it would potentially do rather well. If I were an ethically imperfect, extrinsically-driven student with access to this, I might even get it to write my assignments for me. The assignments might need a bit of tidying here and there, but the quality of prose and the general quality of the work would probably result in a good B and most likely an A, with very little extra tweaking. With a bit more training it could almost certainly mimic a particular student’s style, including all the quirks that would make it seem more human. Plagiarism detectors wouldn’t stand a chance, and I doubt that many (if any) humans would be able to say with any assurance that it was not the student’s own work.
If it’s not already happening, this is coming soon, so I’m wondering what to do about it. I think my own courses are slightly immune thanks to the personal and creative nature of the work and big emphasis on reflection in all of them (though those with essays would be vulnerable), but it would not take too much ingenuity to get GPT-3 to deal with that problem, too: at least, it could greatly reduce the effort needed. I guess we could train our own AIs to recognize the work of other AIs, but that’s an arms race we’d never be able to definitively win. I can see the exam-loving crowd loving this, but they are in another arms race that they stopped winning long ago – there’s a whole industry devoted to making cheating in exams pay, and it’s leaps ahead of the examiners, including those with both online and in-person proctors. Oral exams, perhaps? That would make it significantly more difficult (though far from impossible) to cheat. I rather like the notion that the only summative assessment model that stands a fair chance of working is the one with which academia began.
It seems to me that the only way educators can sensibly deal with the problem is to completely divorce credentialling from learning and teaching, so there is no incentive to cheat during the learning process. This would have the useful side-effect that our teaching would have to be pretty good and pretty relevant, because students would only come to learn, not to get credentials, so we would have to focus solely on supporting them, rather than controlling them with threats and rewards. That would not be such a bad thing, I reckon, and it is long overdue. Perhaps this will be the catalyst that makes it happen.
As for credentials, that’s someone else’s problem. I don’t say that because I want to wash my hands of it (though I do) but because credentialling has never had anything whatsoever to do with education apart from in its appalling inhibition of effective learning. It only happens at the moment because of historical happenstance, not because it ever made any pedagogical sense. I don’t see why educators should have anything to do with it. Assessment (by which I solely mean feedback from self or others that helps learners to learn – not grades!) is an essential part of the learning and teaching process, but credentials are positively antagonistic to it.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/14216255/can-gpt-3-write-an-academic-paper-on-itself-with-minimal-human-input
Anne-Marie Scott joins a long line of weary edtech illuminati who have recently expressed sadness and disillusion about life, the universe, and, in particular, the edtech industry (she has plans to do something about that – good plans – but her weariness is palpable). One of the finest antidotes to it all, Audrey Watters, has pretty much given up on trying to do anything about it. Even the usually-optimistic Tony Bates has lost his cool over it (specifically the exploitation of data harvested about students, including children, by cloud-based tools, which I predicted would be a growing issue a while back).
Personally, I burned out long ago, and the remaining embers are barely glowing. My desire to change the world is undiminished, and I still have some ideas that I don’t think anyone else has tried before, but the means, the time, the energy, and (too often) the will left me years ago. I lost most of my passion for most of edtech research long, long ago: so much rehashing of things that we’ve done again and again, so little change except for the worse, so many mistakes being made over and over on ever larger scales, so little that’s good getting the exposure it needs, too much that’s awful being over-exposed. The emergency responses to the pandemic just depressed me further, and my own university is devoting pretty much all of its energy and resources to reinventing its infrastructure, leaving little space for my quirky brand of toy making (though the Landing is very slowly, in fits and starts, beginning to get the attention it deserved 10 years ago). But I will not go gentle into that good night. Not yet.
Online learning (e-learning, edtech, technology-enhanced learning, etc), by its nature, has a strong propensity to do ‘human’ badly, which is a pity because education is about very little else than being human with other humans. Edtech (and almost everyone who creates it) wants to control, to measure, to collect, to impose order on disorder. Even its most organic, volatile, social spaces are filled with instrumentality, on the part of both people and machines. Much of the time, human actions are input for the algorithms that seek to control them. Machines try to make automata inside us in their own distorted image. We become what we behold, and what we behold becomes what it has made us, a spiralling loop toward mediocre grey, mirrors reflecting mirrors till all the light has gone. And the machines, in turn, are cogs in machines, that are cogs in machines, each one turning the next, grinding their gears, oblivious to our humanity, black-boxing what we once did ourselves in uniform, impenetrable digital containers, where efficiency is a measure of what can be measured, and of little or nothing that actually matters.
Back in the 1990s and early 2000s, those of us working in the still-fresh online learning field hoped we’d change the educational establishment but, instead, the educational establishment changed us. It took our monkey-paw rainbows of wishes, chewed them up, and spat them back at us in trademarked beige. It threw away what it didn’t need to reinforce its mediaeval mission, and made what was left into a cyborg prosthesis, an automated monk, each part like the next: efficient, sterile, bland, each human interaction with it a data point, each person a vessel for implementing its measured objectives, ignoring what it couldn’t measure as though it wasn’t there. In the process of putting mediaeval pedagogies online, we lost most of what made them (nearly) work, and amplified the things that make them fail, creating machines (pedagogical and digital) that attempted to control learners more than ever before.
Personalized learning depersonalizes the person. The tools provide a more efficient means of making people who are more the same, as near as possible cookie-cutter images replicating the machine’s pre-programmed domain model in learners’ brains. Increasingly, too, we are learning to be human from machines that learned to be like us from the caricatured curated facades we presented to others in the simplifying mirror of cyberspace. More and more of those facades that are mined by the machines are now, themselves, created by machines. They will be what the next generation learns from, and we in turn will learn from them. Like photocopies of photocopies, the subtle gradations and details will merge and disappear and, with them, our humanity. It’s already happening. Meanwhile, outside the educational machine, we are herded like sheep into further centralized machines that use the psychology of drug pushers to feed us ever more concentrated, meme-worthy, disposable content, that do the thinking for us so that we don’t have to, that automate values that serve no one but their shareholders, that blend truth, lies, beauty, and degradation into an undifferentiated slurry of cognitive pink slime we swallow like addicts, numbing our minds to what makes them distinct. Edtech is learning from that model, replicating it, amplifying it. ‘Content’ made of bite-size video lectures and pop quizzes, reinforced by adaptive models, vies for pole position in charts of online learning products. These are not the products of a diseased imagination. They are the products of one that has atrophied.
This is not what we intended. This is not what we imagined. This is not what we wanted. Sucked into a bigger machine, scaled up, our inventions turned against us. Willingly, half-wittedly, we became what we are not. We became parts in someone else’s machine.
How can we, again, become who we are? How can we become more than we are? How can the edtech community find its soul again? Perhaps, for example:
By revering the idiosyncratic, the messy, the unformed, the newly forming;
By being part of the process, not makers of the product;
By supporting each personal technique, not replicating impersonal methods;
By embracing the complex, weird, fuzzy mystery, not analyzing, not averaging, not simplifying;
By appreciating, not measuring;
By playing for the joy of the game, not playing to win;
By tinkering, not engineering;
By opening, not closing;
By daydreaming about what could be, not solving problems;
By embracing, not rejecting;
By making machines for humans, not adapting humans to be parts of machines;
By connecting people, not collecting data about them;
By owning the machine, not renting someone else’s machine;
By sharing, not containing;
By enabling, not controlling;
By following the learners, not leading them;
By looking through the screen, not at it;
By doing with, not doing at one another;
By drinking from the living stream, unfiltered and unflavoured;
By finding softness, not imposing hardness;
By asking why, who, and where, not what, how, and when;
By making learning, not just what is learned, visible;
By making learners visible (if they want);
By loving the small, the personal, the trivial, the bright seams of gold;
By being – and staying – beginners;
By grasping the end of the long tail;
By living on the boundaries, and tearing down the barriers;
By rejecting the central and the centralizing;
By engaging with the local, the specific, the situated, the social;
By knowing we learn in a place, caring we are in it, and cherishing who we share it with;
By searching for the cracks and filling them with light;
By doing the dangerous things;
By breaking things;
By feeling wonder.
We must make playgrounds, not production lines. We must embrace the logic of the poem, not the logic of the program. We must see one another in all our multifaceted strangeness, not just in our self-curated surfaces. We must celebrate and nurture the diversity, the eccentricities, the desires, the fears, the things that make us who we are, that make us more than we were, together and as individuals. The things we do not and, often, cannot measure.
These are the very accountants who are supposed to catch cheats. I guess at least they’ll understand their clientele pretty well.
But how did this happen? There are clues in the article:
“Many of the employees interviewed during the federal investigation said they knew cheating was a violation of the company’s code of conduct but did it anyway because of work commitments or the fact that they couldn’t pass training exams after multiple tries.” (my emphasis).
I think there might have been a clue about their understanding of ethical behaviour in that fact alone, don’t you? But I don’t think it’s really their fault: at least, it’s completely predictable to anyone with even the slightest knowledge of how motivation works.
If passing the exam is, by design, much more important than actually being able to do what is being examined, then of course people will cheat. For those with too much else to do or too little interest to succeed, when the pressure is high and the stakes are higher, it’s a perfectly logical course of action. But, even for all the rest who don’t cheat, the main focus for them will be on passing the exam, not on gaining any genuine competence or interest in the subject. It’s not their fault: that’s how it is designed. In fact, the strong extrinsic motivation it embodies is pretty much guaranteed to (at best) persistently numb their intrinsic interest in ethics, if it doesn’t extinguish it altogether. Most will do enough to pass and no more, taking shortcuts wherever possible, and there’s a good chance they will forget most of it as soon as they have done so.
Just to put the cherry on the cake, and not unexpectedly, EY refer to the process by which their accountants are expected to learn about ethics as ‘training’ and it is mandatory. So you have a bunch of unwilling people who are already working like demons to meet company demands, to whom you are doing something normally reserved for dogs or AI models, and then you are forcing them to take high-stakes exams about it, on which their futures depend. It’s a perfect shit storm. I’d not trust a single one of their graduates, exam cheats or not, and the tragedy is that the people who were trying to force them to behave ethically were the ones directly responsible for their unethical behaviour.
There may be a lesson or two to be learned from this for academics, who tend to be the biggest exam fetishists around, and who seem to love to control what their students do.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/14163409/ernst-young-fined-100-million-after-employees-cheated-in-exams
In the convocation prayer offered by Elder Maria Campbell each year for Athabasca University graduands, she asks for blessing that their journeys be “rich, gentle, and challenging”. I can’t think of a more perfect wish than this. Each word transforms and deepens the other two. It’s truly beautiful. Every time I hear those words (or, technically, read them – they are actually spoken in Cree) they tumble together in my head for days. I am reminded of these lines (that are about music, but that seem perfectly apt here) from Robert Browning’s Abt Vogler:
And I know not if, save in this, such gift be allowed to man
That out of three sounds he frame, not a fourth sound, but a star.
On this graduation day I wish all our departing students rich, gentle, and challenging lives, and (as Maria Campbell goes on to say, gently acknowledging troubles to come) that the roads they travel are not too bumpy.
These are the slides from my invited talk at the 11th International Conference on Education and Management Innovation (ICEMI 2022), June 11th. The talk went down well – at least, I was invited to repeat the performance at a workshop (where I gave a very similar presentation today – if you’ve seen one, you probably know the content of the other!) and to give a keynote later in the year.
It’s about how methods of teaching that solve problems for in-person teachers don’t apply online, and it provides a bit of advice on online-native approaches. I’ve talked quite a bit about this over the past decade so there’s not much new in it apart from minor refinements, though I have put a greater emphasis on what goes on outside the classroom in physical institutions because I’m increasingly thinking that this matters way more than we normally acknowledge. Notably, I discuss the ways that physical institutional structures and regulations provide significant teaching functions of their own, meaning that in-person teachers can be absolute rubbish or (in some subject areas or topics) even fail to turn up, and students can still learn pretty well. This helps to explain the bizarre phenomenon that, across much of in-person academia, professors and lecturers are not expected to learn how to teach (and many never do).
Here’s the abstract…
In-person educational institutions teach at least as much as the individual teachers they employ. Students are taken out of their own environments and into that of the institution, signalling intent to learn. The physical environment is built for pervasive learning, from common rooms, to corridors, to campus cafes; students see one another learning, share learning conversations, learn from one another. Even the act of walking from classroom to classroom makes events within them more salient. Structures such as courses, timetables, semesters, and classes solve problems of teaching efficiently within the constraints of time and space but impose great constraints on how teaching occurs, and create multiple new pedagogical and management problems of their own. The institution’s regulations, expectations, and norms play a strong pedagogical role in determining how and when learning occurs. Combined with other entrenched systems and tools like credentials, textbooks, libraries, and curricula, a great deal of the teaching process occurs regardless of teachers. What we most readily recognize as ‘good’ teaching overcomes the problems caused by these in-person environments, and exploits their affordances.
Online institutions have radically different problems to solve, and radically different affordances to exploit, so it makes no sense to teach or manage the learning process in the same ways. Online, students do not inhabit the environment of the institution: the institution inhabits the environment of the student. It is just one small part of the student’s physical and virtual space, shared with billions of other potential teachers (formal or not) who are a click, a touch, or a glance away. The institution is just a service, not the environment in which learning occurs. The student picks the time, the space, the pace, and virtually all the surrounding supports of the learning process. Teachers cannot actively control any of this, except through the use of rewards, punishments, and the promise of credentials, that force compliance but that are antagonistic to effective or meaningful learning. In this talk, I will discuss the implications of this inverted dynamic for pedagogy, motivation, digital system design, and organizational structures & systems for online learning.
Having spent a while researching the literature on ways that visual landmarks and other text enhancements (and deliberate obfuscations) affect comprehension and recall, I am a little sceptical about the underlying theory for this patented product, which is based on the assumption that we read better if the first chunks of each word are bolded, like this. The primary foundation appears to be a 1980 paper that uses gaze duration/eye fixation to predict readability of text. The Bionic Reading product creates artificial fixation points at the start of each word, so the theory seems to be that we can read faster, and recall more, as our eyes are guided through the text. I don’t see any mention of any other research on the Bionic Reading site that supports its claims apart from the 1980 paper, but (ironically) maybe that’s because I missed it.
The assumptions may be a bit over-simplistic: we don’t read everything the same way, there are differences in ways that different people read, subject matter matters, intent matters, and so do many other factors. I found that I could grasp the meaning of the sample plain text that they provide on the home page far quicker than I could the bionic text equivalent: it was a small enough chunk that I could absorb the gist of it in a second or so, whereas I had to read the bionic text word by word in order to understand it, which took several seconds. Familiarity matters, though: there are recognition mechanisms at work here, both in making unadorned text easier to grasp (for me), and in learning to read the bionic text. I suspect that, after a while, the (possible) benefits would diminish as we learn to recognize whole words more easily in their modified form. It makes me wonder whether the benefit is similar to that of making a font more difficult to read, for which there is some (contested) evidence that it can improve recall. When we have to try harder to read the text, for some but not all kinds and lengths of text, we tend to recall more. In fact, anything that makes it more likely for us to read something word by word – as long as the flow is not lost – can aid comprehension and recall, under some circumstances.
The product is interesting, though. It provides an API that can be called to convert any text to bionic text, for use (in principle) in any app. It might make an interesting variation on the ways we modify text in our Landmarks application (for which I claim prior art, having written about this in 2012). Landmarks is intended to make chunks of e-text more recognizable, especially when text reflows, so it isn’t trying to compete in the same territory. However, the ways that the Bionic Reading app makes passages of text more distinctive from one another might play a useful part in overcoming the big problem with most e-texts: that everything looks pretty much the same, and there are very few navigational cues, so it’s harder to remember what you read and where you read it.
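The basic word-by-word transformation described above is easy to approximate. Here is a minimal sketch in Python, assuming markdown-style bolding and an arbitrary 40% “fixation” fraction – an illustration of the general idea only, not Bionic Reading’s actual patented algorithm or API:

```python
import re

def bionic(text, fraction=0.4):
    """Bold roughly the first `fraction` of each word, a rough
    imitation of the 'artificial fixation point' idea.
    (Hypothetical sketch, not Bionic Reading's real algorithm.)"""
    def bold_word(match):
        word = match.group(0)
        n = max(1, round(len(word) * fraction))  # always bold at least one letter
        return f"**{word[:n]}**{word[n:]}"
    # Transform alphabetic runs only; punctuation and spacing pass through untouched
    return re.sub(r"[A-Za-z]+", bold_word, text)

print(bionic("Bionic reading adds fixation points"))
# → **Bi**onic **rea**ding **ad**ds **fix**ation **po**ints
```

Even a toy version like this makes the trade-off visible: the fixation fraction is a single global parameter, whereas real reading behaviour presumably varies with word familiarity, reader, and context.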
Originally posted at: https://landing.athabascau.ca/bookmarks/view/13620551/interesting-product-bionic-reading