English version of my 2021 paper, “Technology, technique, and culture in educational systems: breaking the iron triangle”

Technology, technique, and culture in educational systems: breaking the iron triangle

This is the (near enough final) English version of my journal paper, translated into Chinese by Junhong Xiao and published last year (with a CC licence) in Distance Education in China. (Reference: Dron, Jon (2021).  Technology, technique, and culture in educational systems: breaking the iron triangle (translated by Junhong Xiao). Distance Education in China, 1, 37-49. DOI:10.13541/j.cnki.chinade.2021.01.005).

The underlying theory is the same as that in my paper Educational technology: what it is and how it works (Reference: Dron, J. Educational technology: what it is and how it works. AI & Soc 37, 155–166 (2022). https://doi.org/10.1007/s00146-021-01195-z; a downloadable preprint is also available) but this one focuses more on what it means for the ways we go about distance learning. It’s essentially about ways to solve the problems we created for ourselves by inappropriately transferring solutions to the problems of in-person learning into a distance context.

Here’s the abstract:
This paper presents arguments for a different way of thinking about how distance education should be designed. The paper begins by explaining education as a technological process, in which we are not just users of technologies for learning but coparticipants in their instantiation and design, implying that education is a fundamentally distributed technology. However, technological and physical constraints have led to processes (including pedagogies) and path dependencies in in-person education that have tended to massively over-emphasize the designated teacher as the primary controller of the process. This has resulted in the development of many counter-technologies to address the problems this causes, from classrooms to grades to timetables, most of which have unnecessarily been inherited by distance education. By examining the different strengths and weaknesses of distance education, the paper suggests an alternative model of distance education that is more personal, more situated in communities and cultures, and more appropriate to the needs of learners and society.

I started working on a revised version of this (with a snappier title) to submit to an English language journal last year but got waylaid. If anyone is interested in publishing this, I’m open to submitting it!

A few thoughts on learning management systems, and on integrated learning environments and their implementation

Why do we build digital learning systems to mimic classrooms?

It is understandable that, when we teach in person, we have to occupy and make different uses of the same or similar environments like classrooms, labs, workshops, lecture theatres, and offices. There are huge financial, physical, and organizational constraints on making the environment fit the task, so it would be madness to build a whole new classroom every time we wished to run a different class.

Online, we could build anything we like

But why do we do the same when we teach online? There are countless tools available and, if none are suitable, it is not too hard to build them or modify them to suit our needs. Once they are built, moving between them just takes a tap of a screen or the click of a mouse. Heck, you can even occupy several of them at once if you have a decent monitor or more than one device.

So why don’t we do this?

Here are a few of the more obvious reasons that using the perfect app for the context of study rarely happens:

  • Teachers’ lack of knowledge of the options (it takes time and effort to discover what’s available).
  • Teachers’ lack of skill in using them (most interesting tools have a learning curve, which is all the steeper the softer and more diverse the toolset, so most teachers don’t even know how to make the most of what they already have).
  • Lack of time and/or money for development (a real-life application is what it contains, not just the shell that contains it, and it is not always as easy to take existing stuff and put it in a new tool as it might be in a physical space).
  • Costs and difficulties in management (each tool adds costs in managing faults, configuration, accounting for use, performance, and security).
  • Cognitive load involved for learners in adapting to the metaphors, signposts, and methods needed to use the tool itself.

All of these are a direct consequence of the very diversity that would make us want to use different apps in the first place. This is a classic Faustian bargain in which the technology does what we want, and in the process creates new problems to solve.  Every virtual system invents at least some of the dynamics of how people and things interact with it and within it. In effect, every app has its own physics. That makes them harder to find out about, harder to learn, harder to develop, costlier to manage, and more difficult to navigate than the static, fixed facilities found in particular physical locations. They are all different, there are few if any universals, and any universal today may become a conditional tomorrow. Gravity doesn’t necessarily work the same way in virtual systems.

And so we get learning management systems

The learning management system (LMS) kind of deals with all of these problems: poorly, harmfully, boringly, and painfully, but it does deal with them. Currently, most of the teaching at Athabasca University is through the open source Moodle LMS, lightly modified by us because our needs are not quite like others (self-pacing and all that). But Moodle is not special: in terms of what it does and how it does it, it is not significantly different from any other mainstream LMS – Blackboard, Brightspace, Canvas, Sakai, whatever.

Almost every LMS essentially automates the functions, though not exactly the form, of traditional classrooms. In other parts of the world people prefer the term ‘managed learning environment’ (MLE) for such things. The LMS/MLE is the dominant representative of a larger category of systems usually described as virtual learning environments (VLEs), which also includes things like MOOs (multi-user dungeons, object-oriented), immersive learning environments, and simpler web-based teaching systems that replicate aspects of classrooms, such as Google Classroom or Microsoft’s gnarly bundle of hastily repurposed rubbish for teaching that I’m not sure even has a name yet. Notice the spatial metaphors in many of these names.

Little boxes made of ticky tacky

The people who originally designed LMSs back in the 90s (I did so myself) based their designs on the functions and entities found in a traditional university because that was their context, and that was where they had to fit. Metaphorically, an LMS or MLE is a big university building with rather uniform classrooms, with perhaps a yard where you can camp out with a few other systems (plugins, LTI hooks, etc) that conform to its requirements and that are allowed in to classrooms when invited, and a few doors and gateways (mainly hyperlinks) linking it circuitously or in jury-rigged fashion to other similarly weakly connected buildings (e.g. places to register, places to seek support, places to talk to an advisor, places to complain, places to find books, and so on). It doesn’t have metaphorical corridors, halls, common rooms, canteens, yards, libraries or any of the other things that normally make up a physical university. You rarely get to even be aware of other classrooms beyond those you are in. Some people (me in a past life) might give classrooms cute names like ‘the learning cafe’ but it’s still just another classroom. You teleport from one classroom to the next because what happens in corridors (really a big lot of incredibly important pedagogically useful stuff, as it happens) is not perceived by the designers as a useful classroom function to be automated or perhaps, more charitably, they just couldn’t figure out how to automate that.

Reified roles

It’s a very controlled environment where everyone has a programmatically enforced role (mostly reflecting traditional educational roles) that may vary according to the room, but that is far less fluid than roles in physical spaces. There are strong hierarchies, and limited opportunities for moving between them. Some of those hierarchies are new: the system administrator, for instance, has way more power than anyone in a physical university to determine how learning happens, like an architect with the power to move walls, change the decor, add extensions, and so on, at will. The programmers of the system are almost god-like in their command of its physics. But the ways that they give teachers (or learning designers, or administrators) control, as designers, directors, and regulators of the classroom, are perhaps the most pernicious. In a classroom a teacher may lead (and, by default, usually does). In an LMS, a teacher (or someone playing that role) must lead. The teacher sees things that students cannot, and controls things that the students may not. A teacher configures the space, and determines with some precision how it will be used. With a lot of effort and risk, it can be made to behave differently, but it almost never is.

Functions are everything

An LMS is typically built along functional lines, and those functions are mostly based on loose, superficial observations of what teachers and students seem to do in physical classrooms. The metaphorical classrooms are weird, because they are structured by teaching (seldom learning) function rather than along pedagogical lines: for instance, if you want to talk with someone, you normally need to go to a separate enclosed area inside the classroom or leave a note on the teacher’s desk. Same if you want to take a test, or share your work with others. Another function, another space. Some have many little rooms for different things. Lectures are either literally that (video recordings) or (more usefully, from a learning perspective), text and images to be read on screen, based on the assumption that the only function of lectures is information transmission (it is so very, very much not – that’s its least useful and least effective role). There’s seldom a chance to even put up your hand to question something. Notices can usually only be pinned on the wall by teachers. Classroom timetables are embodied in software because of course you need a rigid and unforgiving timetable in a medium that sells itself on enabling learning anywhere, any time. Some, including Moodle, will allow you to break up the content differently, but it’s still another timetable; just a timetable without dates. It’s still the teacher who sets the order, pacing and content.

Robot overlords

It’s a high-tech classroom. There are often robots there that are programmed to make you behave in ways determined by those higher in the hierarchy (sometimes teachers, sometimes administrators, sometimes the programmers of the software). For instance, they might act as gatekeepers that prevent you from moving on to the next section before completing the current one, or they might prevent you submitting work before or after a specified date. They might mark your work. There are surveillance cameras everywhere, recording your every move, often only accessible to those with more powerful roles (though sometimes a robot or two might give you a filtered view of it).

Beginnings and ends

You can’t usually go back and visit when your course is over because someone decided it would be a good idea to set opening and closing enrolment dates and assumed that, when they were done, the learning was done (which of course it never is – it keeps on evolving long after explicit teaching and testing occurred). Again, it’s because physical classes are scheduled and terms come to an end because they must be, not because it makes pedagogical sense. And, like almost everything, you can override this default, but hardly anyone ever does, because it brings back those Faustian bargains, especially in manageability.

Dull caricatures of physical spaces

Basically, the LMS is an automated set of metaphorical classrooms that hardens many of the undesirable by-products of educational systems in software in brain-dead ways that have little to do with how best to teach, and that stretch the spatial metaphors that inform it beyond breaking point. Each bit of automation and each navigational decision hardens pedagogical choices. For all the cozy metaphors, programmers invent rather than replicate physics, in the process warping reality in ways that do no good and much harm. Classrooms solved problems of physics for in-person teaching and form part of a much larger structure that has evolved to teach reasonably well (including corridors, common rooms, canteens, and libraries, as it happens). Their more visible functions are only a part of that and, arguably, not the main part. There is much pedagogy embedded in the ways that physical universities, whether by accident or design, have evolved over centuries to support learning in every quadrangle and nook of a coffee shop. LMSs just focus on a limited subset of teaching roles, and empower the teacher in ways that caricature their already excessive dominance in the classroom (which only occurred because it had to, thanks to physics and the constraints it imposed).

LMSs are crap, but they contain recognizable semblances of their physical counterparts and just enough configurability and flexibility to more or less work as teaching tools, a bit, for everyone, almost no matter what their level of digital proficiency might be. They more or less solve the Faustian bargains listed earlier, but they do so by stifling what we wanted and should have been able to do in the first place with online tools, in the process creating new and quite horrific problems, as well as demolishing most of what makes physical universities work in the first place. It never has been true that virtual learning environments are learning environments – they are only ever parts of them – and there are places to escape from them, such as the Landing, other virtual systems, or even just plain old email, but then all those Faustian bargains come back to haunt us again. There has to be a better way.

Beyond the LMS

Cognisant of the issues, Athabasca University is now some way down the path to developing its own distinctive solutions to these problems, in a multi-year multi-million-dollar initiative known as (following the spatial metaphor) the Integrated Learning Environment (ILE). The ILE is not an application. It is an umbrella term for a lot of different, usually independent systems working together as one. Though some of the most interesting opportunities are still only loosely imagined, perhaps because they cause problems that are fiendishly hard to solve (e.g. how can we integrate systems that we build ourselves without creating risks for the rest of the ILE, and what happens when they need to be maintained?), a lot of progress is being made on the non-teaching foundations on which the rest depends (student admin systems, support tools, procedures, etc), as well as on the most visible and perhaps the biggest of its parts, Brightspace, a proprietary commercial LMS that is meant to replace Moodle, for no obvious pedagogical or technical reasons (it’s no better). It might make economic sense. I don’t know, but I do know that open source software typically costs a fair bit to own, albeit because of the things that make it a much better idea (freedom, flexibility, ownership, etc). There is probably a fair bit of time and money being spent with Desire2Learn (makers of Brightspace) on the things that we spent a fair bit of time and money on many years ago to make Moodle a bit less classroom-like. The choice no doubt has something to do with how reliably and easily it can be made to work with some of the other proprietary commercial systems that someone has decided will make up the ILE. It bothers me greatly that we are not trying hard to choose open source solutions, for reasons that will become clearer in the rest of this post. However (pedagogically speaking), all the mainstream LMSs are much of a muchness, making the same mistakes as one another in very similar ways, so it probably won’t wreck too much of what we already do within Moodle. But, on its own, it won’t move us much further forward and we could do it better. That’s what the ILE is supposed to do – to make the LMS just a part of a much larger teaching environment, intimately connected with the rest of what the university does for or with students, and extensible with new and better ways of learning, teaching, and assessing learning.

Lego bricks make poor metaphors

When we were first imagining the ILE, though the approach was admirably participative, engaging much of the university community, I was very worried by the things we were encouraged to focus on. It was all about the functionality, the usability, the design, the tools, the pedagogies, the business systems that supported them. Those things matter, for sure, and should not be ignored, but they should and will change and grow all the time: in fact, part of the point of building this thing is to do just that. Using the city metaphor, pretty much all that we (collectively) considered were the spaces (the rooms, mainly), and the stuff that goes on inside them, much like LMS designers thought of universities as just collections of classrooms in which teaching functions were performed. Space and stuff are, not coincidentally, exactly what Stewart Brand identified long ago as inevitably being the fastest-changing, most volatile parts of any town or city (after site, structure, skin, and services). I’ve written a fair bit on the universality of this principle across all systems. It’s a solid structural principle that applies as much to ecosystems and educational systems as to cities. As Brand observes himself, drawing from O’Neill et al (1986), the larger, slower-changing elements of any system affect the smaller, faster-changing ones more than vice versa. This is for much the same reasons that path dependencies set in. It’s about the prior providing the context for what follows. Flexible things have to fit into the gaps left by less flexible, older, pre-existing things. In physical spaces, of course these tend to be bigger and/or slower, but the same is true in virtual spaces, where size seldom matters that much, but hardness (inflexibility, brittleness) really does. Though lip service was paid to the word ‘integrated’ in our discussions, I had the strong feeling that the kind of integration we had in mind was that of a Lego set. In fact, I think we were aiming to find a ‘Lego Athabasca University’ set, with assembly instructions and a picture on the box. The vendors who came to talk with us made much of how effectively they could do that, rather than how effectively they could make it possible for others to do that.

Metaphors matter. Lego bricks have to fit together tightly, in pre-specified ways, especially if you are following a plan. If you want to move them around, you have to dismantle a bit of the structure to fit them in. It’s difficult to integrate things that are not bricks, or that are made by different toy companies to work in different ways. At best you get what Brand calls ‘magazine architecture’, or ‘no road’ architecture, beautiful, fit for purpose, intricate and solid, but slow to learn. Lego is not a terrible way to build, compared with buying everything pre-assembled, but it could be improved.

Signals and boundaries

Drawing inspiration from John Holland’s brilliant last work, Signals & Boundaries, I tried to make the case that, instead, we should be focusing on the boundaries (the interfaces between the buildings and the rest of the city), and the signals that pass between them (the people, the messages, etc, the forms they take and how they move around). In Brand’s terms, I wanted us to be thinking about skin and services, and perhaps even structure, though site – Athabasca University – was a given. Though a few people nodded in agreement, I think it mainly fell on deaf ears. We wanted oven-ready solutions, not the infrastructure to enable those solutions. Though the city metaphor works well, because we are talking about human constructions, others would result in similar ways of thinking: cells in bodies, organisms in ecosystems, brains, termite mounds, and so on. All are organized by boundaries (at many levels of hierarchy) and the signals that pass between them.

The Lego set metaphor – whether deliberately or not – seems to have prevailed for now. A lot of old buildings are being slated for demolition and a lot of new virtual buildings are now being erected as part of this development, many of them chosen not because of problems with existing buildings but so that they can more easily connect together and live in the same cloud. This will very likely work, for now, but it is not cheap and it is not flexible, especially given the fact that most of it is not open so, like a rental property, we are not allowed to fix things, add utilities, change the walls, etc, and we are wholly dependent on the landlords being nice to us and each other (knowing that some – ahem, Microsoft – have a long history of abusing their tenants). Those buildings will age. We will find them cramped. Some will age faster than others, and will have to be modified to keep up, perhaps at high cost. Companies renting them might go out of business or change their terms so we might have to demolish the buildings and rent/make new ones. We will be annoyed at how they do things, usually without asking us. We will hate the landlords who dictate what we can do and how we can do it, and who will keep upping the rent while not doing what we ask. We will want more, and the only way to get it will be to build extensions, buy new brick sets, if it is not enough to pay someone to remodel the interiors (and it won’t be). Of course, because most of the big structural elements will not be open source, we will not be able to do that ourselves.

What the ILE really should be

The ILE is, I think, poorly named, because it should not be an environment at all. Following the building metaphor, the ILE is (or should be) more like the system that connects a lot of buildings, bringing them together into a coherent, safe, livable community. It’s infrastructure and services; it is the roads, the traffic signals, the doors, the sidewalks, the water pipes, the waste pipes, the electricity, the network cables; it is the services – fire, police, schools, traffic control, etc; it is all the many rules, standards, norms and regulations that make them work together to help make an environment in which people can live, work, play, and grow. It’s part of the environment – the part that makes it work – but it is not the environment itself. The environment itself is Athabasca University, not just the tools, processes, and systems that support its functions. That includes, most importantly, the people who are part of the university, or who are visitors to it, who are not just users of the environment or dwellers in its walls, but who are or should be the most significant and visible parts of it, just as trees are part of the environment of forests, not users of the forest. Those people live in physical as well as other virtual environments (social media, Word documents, websites, etc) that the ILE can connect together too, to make them a part of it, so the spatial metaphor gets weird at this point. The ILE makes environmental boundaries fuzzy, permeable, and shifting. It’s not an ILE, it’s an ILI – an integrated learning infrastructure.

If we focused on the connections and interfaces, and on how information and processes need to pass across them, and if we thought hard about the nature of those signals, then we could build a system that is resilient, that adapts, that lasts, that grows, that evolves, with parts that we can seamlessly replace or improve because the interfaces – the building facades, the mains pipes, the junction boxes, etc – will mostly stay the same, evolving slowly as they should. This is about strategy, not planning: a way of thinking about systems rather than a sequence of things to do.
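To make this a little more concrete, here is a minimal sketch, in Python, of what designing to a boundary rather than to a building might look like. Everything in it is hypothetical: the event names, fields, and services are invented for illustration, not anything actually specified for the ILE. The point it illustrates is that the slow-changing part should be the contract that crosses the boundary, not the systems on either side of it.

```python
# Purely illustrative sketch of boundary-first design. All names here
# (LearningEvent, EventConsumer, PortfolioService, etc.) are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Protocol


@dataclass(frozen=True)
class LearningEvent:
    """The 'signal' that crosses the boundary: a small, stable, documented contract."""
    learner_id: str
    activity: str        # e.g. "posted-to-forum", "submitted-essay"
    source_system: str   # whichever 'building' the event came from
    occurred_at: datetime


class EventConsumer(Protocol):
    """The 'boundary': anything that can accept the signal, however it is built inside."""
    def consume(self, event: LearningEvent) -> None: ...


class PortfolioService:
    def consume(self, event: LearningEvent) -> None:
        print(f"Adding {event.activity} by {event.learner_id} to their portfolio")


class AnalyticsService:
    def consume(self, event: LearningEvent) -> None:
        print(f"Logging {event.activity} from {event.source_system} for later analysis")


def publish(event: LearningEvent, consumers: list[EventConsumer]) -> None:
    """Infrastructure: carries signals across boundaries, knowing nothing of internals."""
    for consumer in consumers:
        consumer.consume(event)


if __name__ == "__main__":
    event = LearningEvent("learner-42", "posted-to-forum", "some-lms",
                          datetime.now(timezone.utc))
    # Either consumer could be swapped for something better tomorrow, without
    # touching anything else, as long as the event contract stays stable.
    publish(event, [PortfolioService(), AnalyticsService()])
```

The particular design does not matter; what matters is where the stability lives. The contract evolves slowly and deliberately, like a building facade or a mains pipe, while the services behind it can be replaced as often as we like.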

Some of the key people involved in the process realize this. They are talking about standards, protocols, and projects to build interfaces between systems, and imagining future needs, though they are inevitably distracted by the process of renting Lego bricks, so I am not sure how much they will be able to stay focused on that. I hope they prevail over those who think they are building a set of classrooms and tightly connected admin offices out of self-contained interlocking bricks because our future depends on getting it right. We are aiming to grow. It just takes one critical piece in the Lego building to fail to support that, and the rest falls apart like a… well, like a pile of bricks.

References

Brand, S. (1997). How buildings learn. Phoenix Illustrated. https://www.penguinrandomhouse.ca/books/320919/how-buildings-learn-by-stewart-brand/9780140139969

Holland, J. H. (2012). Signals and Boundaries: Building Blocks for Complex Adaptive Systems. MIT Press.  https://mitpress.mit.edu/books/signals-and-boundaries

O’Neill, R. V., DeAngelis, D. L., Waide, J. B., & Allen, T. F. H. (1986). A Hierarchical Concept of Ecosystems. Princeton University Press. http://www.gbv.de/dms/bs/toc/025157787.pdf

Postman, N. (1998). Five things we need to know about technological change. Talk delivered in Denver, Colorado, March 28, 1998. https://student.cs.uwaterloo.ca/~cs492/papers/neil-postman–five-things.html

Mediaeval Teaching in the Digital Age (slides from my keynote at Oxford Brookes University, May 26, 2021)


These are the slides from my keynote today at the Oxford Brookes “Theorizing the Virtual” School of Education Research Conference. As theorizing the virtual is pretty much my thing, I was keen to be a part of this! It was an ungodly hour of the day for me (2am kickoff) but it was worth staying up for. It was a great bunch of attendees who really got into the spirit of the thing and kept me wide awake. I wish I could hang around for the rest of it but, on the bright side, at least I’m up at the right time to see the Super Flower Blood Moon (though it’s looking cloudy, darn it).  In this talk I dwelt on a few of the notable differences between online and in-person teaching. This is the abstract…

Pedagogical methods (ways of teaching) are solutions to problems of helping people to learn, in a context filled with economic, physical, temporal, legal, moral, social, political, technological, and organizational constraints. In mediaeval times books were rare and unaffordable, and experts’ time was precious and limited, so lectures were a pragmatic solution, but they in turn created more problems. Counter-technologies such as classes, classrooms, behavioural rules and norms, courses, terms, curricula, timetables and assignment deadlines were devised to solve those problems, then methods of teaching (pedagogies) were in turn invented to solve problems these counter-technologies caused, notably including:
· people who might not want (or be able) to be there at that time,
· people who were bored and
· people who were confused.
Better pedagogies supported learner needs for autonomy and competence, or helped learners find relevance to their own goals, values, and interests. They exploited physical closeness for support, role-modelling, inspiration, belongingness and so on. However, increasingly many relied on extrinsic motivators, like classroom discipline, grades and credentials to coerce students to learn. Extrinsic motivation achieves compliance, but it makes the reward or avoidance of the punishment the goal, persistently and often permanently crowding out intrinsic motivation. Intelligent students respond with instrumental approaches, satisficing, or cheating. Learning seldom persists; love of the subject is subdued; learners learn to learn in ineffective ways. More layers of counter-technologies are needed to limit the damage, and so it goes on.
Online, the constraints are very different, and its native forms are the motivational inverse of in-person learning. An online teacher cannot control every moment of a learner’s time, and learners can use the freedoms they gain to take the time they need, when they need it, to learn and to reflect, without the constraints of scheduled classroom hours and deadlines. However, more effort is usually needed to support their needs for relatedness. Unfortunately, many online teachers try (or are required) to re-establish the control they had in the classroom through grading or the promise of credentials, recreating the mediaeval problems that would otherwise not exist, using tools like learning management systems that were designed (poorly) to replicate in-person teaching functions. These are solutions to the problems caused by counter-technologies, not to problems of learning.
There are better ways, and that’s what this session is about.


Educational technology: what it is and how it works | AI & Society

https://rdcu.be/ch1tl

This is a link to my latest paper in the journal AI & Society. You can read it in a web browser from there, but it is not directly downloadable. A preprint of the submitted version (some small differences and uncorrected errors here and there, notably in citations) can be downloaded from https://auspace.athabascau.ca/handle/2149/3653. The published version should be downloadable for free by Researchgate members.

This is a long paper (about 10,000 words) that summarizes some of the central elements of the theoretical model of learning, teaching and technology developed in my recently submitted book (still awaiting review), and that gives a few examples of its application. For instance, it explains:

  • why, on average, researchers find no significant difference between learning with and without tech.
  • why learning styles theories are a) inherently unprovable, b) not important even if they were, and c) a really bad idea in any case.
  • why bad teaching sometimes works (and, conversely, why good teaching sometimes fails).
  • why replication studies cannot be done for most educational interventions (and, for the small subset that are susceptible to reductive study, all you can prove is that your technology works as intended, not whether it does anything useful).

Abstract

This theoretical paper elucidates the nature of educational technology and, in the process, sheds light on a number of phenomena in educational systems, from the no-significant-difference phenomenon to the singular lack of replication in studies of educational technologies.  Its central thesis is that we are not just users of technologies but coparticipants in them. Our participant roles may range from pressing power switches to designing digital learning systems to performing calculations in our heads. Some technologies may demand our participation only in order to enact fixed, predesigned orchestrations correctly. Other technologies leave gaps that we can or must fill with novel orchestrations, that we may perform more or less well. Most are a mix of the two, and the mix varies according to context, participant, and use. This participative orchestration is highly distributed: in educational systems, coparticipants include the learner, the teacher, and many others, from textbook authors to LMS programmers, as well as the tools and methods they use and create.  From this perspective,  all learners and teachers are educational technologists. The technologies of education are seen to be deeply, fundamentally, and irreducibly human, complex, situated and social in their constitution, their form, and their purpose, and as ungeneralizable in their effects as the choice of paintbrush is to the production of great art.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/8692242/my-latest-paper-educational-technology-what-it-is-and-how-it-works

My keynote slides from Confluence 2021 – STEAM engines: on building and testing the machines in our students’ minds


These are my slides for my keynote talk at the IEEE 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence-2021), hosted by Amity University, India, 28th January 2021. Technically it was 27th January here in Vancouver when I started, but 28th January when I finished. I hate timezones.

The talk winds up being about how to be a (mainly online) teacher in science, technology, engineering, and mathematics (STEM) – not how to teach, as such – but it gets to the point circuitously through discussing some aspects of the nature of technology, using a subset of my coparticipation model. In (very) brief, the idea behind it is that ‘technology’ means organizing stuff to do stuff (any stuff), and we are not just users but participants in that organization, either playing our roles correctly (hard technologies) or organizing stuff ourselves (soft technologies). Almost always, thanks to the fact that almost all technologies are assemblies of and with other technologies, it is a mix of the two. In the technologies of learning there are many coparticipants, all playing roles, soft or hard or both. The designated teacher is only one of these, of varying significance.

The talk dwelt on the technological nature of teaching itself, and on the technological nature of the results of teaching. Teaching (as a distributed process) can usefully be seen as a process of building technologies in learners’ minds, some hard (training), some soft (teaching). These technologies can, like all technologies, be assembled together or with others, so our minds are both enacted and extended through technologies with one another and with the constructed world around us.

In STEM subjects there is a tendency to focus a lot more on building hard technologies than on soft technologies, because there tends to be a lot of hard stuff to learn before you can do anything much at all. There are many other subjects like this, including one of the biggest, language learning. The same is actually true in softer disciplines but students tend to come equipped with a lot of the basic hard stuff – especially language, debating skills, etc – already, so a really big part of the machine already exists. However, in STEM just as much as in the liberal arts (the ‘A’ in STEAM), it is actually the soft technologies – what we do with those hard machines in our minds, the soft technologies we assemble with them – that matter, personally, in the workplace, and in our social lives. Also, from a motivational perspective it is normally a really bad idea to force people to learn a lot of hard stuff without them actually having a personal need or desire to do so. Training people in the hard stuff without using it in a soft, personally/socially relevant and meaningful context is a recipe for failure, though the fact that hard skills and knowledge can be accurately measured means that assessments of them tend to create an illusion of success. ‘Success’, though, just means that the hard machine works as intended, not that it actually does anything useful.

Avoiding this chicken and egg problem – the need for hard skills before you can do anything, but the uselessness of them in isolation – is not difficult. In fact, it is how we learn to speak, and many other things. It means letting go of the notion that teachers control everything, embracing the distributed nature of teaching, and designing ways of learning that support autonomy, achievable challenge, and relatedness. To do this means making learning (not just its products) visible, creating a culture and tools for sharing, and designing in support processes to help learners overcome obstacles. Basically, from a designated teacher’s perspective, it’s about letting go and staying close. It’s much the same as how we bring up our kids, as it happens.

It was an odd session, a lecture with no direct interaction. In itself, this would not be a great learning experience for anyone. However – and this is one of my big points – it is the assembly that matters, not the individual components, and I was not the one doing that assembly. Seen as a component of learning, attended without coercion or extrinsic goals, my little lecture is something that can be assembled to make something quite useful.

How distance changes everything: slides from my keynote at the University of Ottawa

These are the slides from my keynote at the University of Ottawa’s “Scaffolding a Transformative Transition to Distance and Online Learning” symposium today. In the presentation I discussed why distance learning really is different from in-person learning, focusing primarily on the fact that they are the motivational inverse of one another. In-person teaching methods evolved in response to the particular constraints and boundaries imposed by physics, and consist of many inventions – pedagogical and otherwise – that are counter-technologies designed to cope with the consequences of teaching in a classroom, a lot of which are not altogether wise. Many of those constraints do not exist online, and yet we continue to do very similar things, especially those that control and dictate what students should do, as well as when, and how they should do it. This makes no sense, and is actually antagonistic to the natural flow of online learning. I provided a few simple ideas and prompts for thinking about how to go more with the flow.

The presentation was only 20 minutes of a lively and inspiring hour-long session, which was fantastic fun and provided me with many interesting questions and a chance to expand further on the ideas.

uottawa2020HowDistanceChangesEverything

Technology, technique, and teaching

These are the slides from my recent talk with students studying the philosophy of education at Pace University.

This is a mashup of various talks I have given in recent years, with a little new stuff drawn from my in-progress book. It starts with a discussion of the nature of technology, and the distinction between hard and soft technologies that sees relative hardness as the amount of pre-orchestration in a technology (be it a machine or a legal system or whatever). I observe that pedagogical methods (‘pedagogies’ for short) are soft technologies to those who are applying them, if not to those on the receiving end. It is implied (though I forgot to mention it explicitly) that hard technologies are always more structurally significant than soft ones: they frame what is possible.

All technologies are assemblies, and (in education), the pedagogies applied by learners are always the most important parts of those assemblies. However, in traditional in-person classrooms, learners are (by default) highly controlled due to the nature of physics – the need to get a bunch of people together in one place at one time, scarcity of resources,  the limits of human voice and hearing, etc – and the consequent power relationships and organizational constraints that occur.  The classroom thus becomes the environment that frames the entire experience, which is very different from what are inaccurately described as online learning environments (which are just parts of a learner’s environment).

Because of physical constraints, the traditional classroom context is inherently very bad for intrinsic motivation. It leads to learners who don’t necessarily want to be there, having to do things they don’t necessarily want to do, often being either bored or confused. By far the most common solution to that problem is to apply externally regulated extrinsic motivation, such as grades, punishments for non-attendance, rules of classroom behaviour, and so on. This just makes matters much worse, and makes the reward (or the avoidance of punishment) the purpose of learning. Intelligent responses to this situation include cheating, short-term memorization strategies, satisficing, and agreeing with the teacher. It’s really bad for learning. Such issues are not at all surprising: all technologies create as well as solve problems, so we need to create counter technologies to deal with them. Thus, what we normally recognize as good pedagogy is, for the most part, a set of solutions to the problems created by the constraints of in-person teaching, to bring back the love of learning that is destroyed by the basic set-up. A lot of good teaching is therefore to do with supporting at least better, more internally regulated forms of extrinsic motivation.

Because pedagogies are soft technologies, skill is needed to use them well. Harder pedagogies, such as Direct Instruction, that are more prescriptive of method tend (on average) to work better than softer pedagogies such as problem-based learning, because most teachers tend towards being pretty average: that’s implicit in the term, after all. Lack of skill can be compensated for through the application of a standard set of methods that only need to be done correctly in order to work. Because such methods can also work for good teachers as well as the merely average or bad, their average effectiveness is, of course, high. Softer pedagogical methods such as active learning, problem-based learning, inquiry-based learning, and so on rely heavily on passionate, dedicated, skilled, time-rich teachers and so, on average, tend to be less successful. However, when done well, they outstrip more prescriptive methods by a large margin, and lead to richer, more expansive outcomes that go far beyond those specified in a syllabus or test. Softer technologies, by definition, allow for greater creativity, flexibility, adaptability, and so on than harder technologies but are therefore difficult to implement. There is no such thing as a purely hard or purely soft technology, though, and all exist on a spectrum. Because all pedagogies are relatively soft technologies, even those that are quite prescriptive, almost any pedagogical method can work if it is done well: clunky, ugly, weak pedagogies used by a fantastic teacher can lead to great, persistent, enthusiastic learning. As Hattie observes, almost everything works – at least, that’s true of most things that are reported on in educational research studies :-). But (and this is the central message of my book, the consequences of which are profound) it ain’t what you do, it’s the way that you do it, that’s what gets results.

Problems can occur, though, when we use the same methods that work in person in a different context for which they were not designed. Online learning is by far the most dominant mode of learning (for those with an Internet connection – some big social, political, economic, and equity issues here) on the planet. Google, YouTube, Wikipedia, Reddit, StackExchange, Quora, etc, etc, etc, not to mention email, social networking sites, and so on, are central to how most of us in the online world learn anything nowadays. The weird thing about online education (in the institutional sense) is that online learning is far less obviously dominant, and tends to be viewed in a far less favourable light when offered as an option. Given the choice, and without other constraints, most students would rather learn in-person than online. At least in part, this is due to the fact that those of us working in formal online education continue to apply pedagogies and organizational methods that solved problems in in-person classrooms, especially with regard to teacher control: the rewards and punishments of grades, fixed length courses, strictly controlled pathways, and so on are solutions to problems that do not exist or that exist in very different forms for online learners, whose learning environment is never entirely controlled by a teacher.

The final section of the presentation is concerned with what – in very broad terms – native distance pedagogies might look like. Distance pedagogies need to acknowledge the inherently greater freedoms of distance learners and the inherently distributed nature of distance learning. Truly learner-centric teaching does not seek to control, but to support, and to acknowledge the massively distributed nature of the activity, in which everyone (including emergent collective and networked forms arising from their interactions) is part of the gestalt teacher, and each learner is – from their perspective – the most important part of all of that. To emphasize that none of this is exactly new (apart from the massive scale of connection, which does matter a lot), I include a slide of Leonardo’s to-do list that describes much the same kinds of activity as those that are needed of modern learners and teachers.

For those seeking more detail, I list a few of what Terry Anderson and I described as ‘Connectivist-generation’ pedagogical models. These are far more applicable to native online learning than earlier pedagogical generations that were invented for an in-person context. In my book I am now describing this new, digitally native generation as ‘complexivist’ pedagogies, which I think is a more accurate and less confusing name. It also acknowledges that many theories and models in the family (such as John Seely Brown’s distributed cognitive apprenticeship) predate Connectivism itself. The term comes from Davis and Sumara’s 2006 book, ‘Complexity and Education’, which is a great read that deserves more attention than it received when it was published.

Slides: Technology, technique and teaching

Small talk, big implications

An article from Quartz with some good links to studies showing the very many benefits of interacting with others, even at a very superficial level. I particularly like the report of a study showing the (quite strong) cognitive benefits of small talk.

It’s all solid stuff that supports much of what I and many others have written about the value of belongingness and social interaction in learning but, like much research in fields such as psychology, education, sociology, and so on, it makes some seemingly innocuous but fundamentally wrong assertions of fact. For instance:

“Those who were instructed to strike up a conversation with someone new on public transport or with their cab driver reported a more positive commute experience than those instructed to sit in silence.”

What, all of them? That seems either unbelievably improbable, or the result of a flawed methodology, or a sign of way too small a sample size. The paper itself is inaccessibly paywalled so I don’t know for sure, but I suspect this is actually just a sloppy description of the findings. It is not the result of bad reporting in the Quartz article, though: it is precisely what the abstract of the paper itself actually claims. The researchers make several similar claims like “Those who were instructed to strike up a hypothetical conversation with a stranger said they expected a negative experience as opposed to just sitting alone.” Again – all of them? If that were true, no one would ever talk to strangers (which anyone that has ever stood in a line-up in Canada knows to be not just false but Trumpishly false), so this is either a very atypical group or a very misleading statement about group members’ behaviours. The findings are likely, on average, correct for the groups studied, but that’s not the way it is written.

The article is filled with similarly dubious quotes from distinguished researchers and, worse, pronouncements about what we should do as a result. Often the error is subtly couched in (accurate but misleadingly ambiguous) phrasing like “The group that engaged in friendly small talk performed better in the tests.” I don’t think it is odd to carelessly read that as ‘all of the individuals in the group performed better than all of those in the other groups’, rather than that, ‘on average, the collective group entity performed better than another collective group entity’, which is what was actually meant (and that is far less interesting). From there it is an easy – but dangerously wrong – step to claim that ‘if you engage in small talk then you will experience cognitive gains.’ It’s natural to want to extrapolate a general law from averaged behaviours, and in some domains (where experimental anomalies can be compellingly explained) it makes sense, but it’s wrong in most cases, especially when applied to complex systems like, say, anything involving the behaviour of people.
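To see how much hangs on that distinction, here is a tiny, entirely synthetic illustration in Python (the numbers are invented, not drawn from the study): a ‘small talk’ group can have a clearly higher average score even though a large minority of the individuals in it score below the control group’s average.

```python
# Synthetic illustration: a group-level average difference can coexist with
# many individuals for whom the effect runs the other way. Numbers are made up.
import random
from statistics import mean

random.seed(1)

# Hypothetical test scores: the 'small talk' group is better on average...
control = [random.gauss(100, 15) for _ in range(1000)]
small_talk = [random.gauss(105, 15) for _ in range(1000)]

control_mean = mean(control)
print(f"Control mean:    {control_mean:.1f}")
print(f"Small-talk mean: {mean(small_talk):.1f}")

# ...and yet plenty of individuals in the 'better' group score below the
# control group's average: the average is not a law about individuals.
below = sum(score < control_mean for score in small_talk)
print(f"{below} of {len(small_talk)} small-talkers scored below the control average")
```

Nothing about the group difference tells you what will happen to any particular person, which is exactly the step that headlines, and too often abstracts, invite us to take.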

It’s a problem because, like most in my profession, I regularly use such findings to guide my own teaching. On average, results are likely (but far from certain) to be better than if I did not use them, but definitely not for everyone, and certainly not every time.  Students do tend to benefit from engagement with other students, sure. It’s a fair heuristic, but there are exceptions, at least sometimes. And the exceptions aren’t just a statistical anomaly. These are real people we are talking about, not average people. When I do teaching well – nothing like enough of the time –  I try to make it possible for those that aren’t average to do their own thing without penalty. I try to be aware of differences and cater for them. I try to enable those that wish it to personalize their own learning. I do this because I’ve never in my entire life knowingly met an average person.

Unfortunately, our educational systems really don’t help me in my mission because they are pretty much geared to cater for someone that probably doesn’t exist. That said, the good news is that there is a general trend towards personalized learning that figures largely in most institutional plans. The bad news is that (as Alfie Kohn brilliantly observes) what is normally meant by ‘personalized’ in such plans is not its traditional definition at all, but instead ‘learning that is customized (normally by machines) for students in order that they should more effectively meet our requirements.’  In case we might have forgotten, personalization is something done by people, not to people. 

Further reading: Todd Rose’s ‘End of Average‘ is a great primer on how to avoid the average-to-the-particular trap and many other errors, including why learning styles, personality types, and a lot of other things many people believe to be true are utterly ungrounded, along with some really interesting discussion of how to improve our educational systems (amongst other things). I was gripped from start to finish and keep referring back to it a year or two on.

Address of the bookmark: https://qz.com/1134958/small-talks-positive-benefits-outweigh-your-fear-of-being-awkward/

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2849927/small-talk-big-implications

This was actually accepted for an IEEE conference and then published

I invite you to draw your own conclusions about this paywalled paper and the amount of quality control and editorial input that goes into IEEE publications nowadays. Here’s the abstract, which is one of the more coherent passages in the paper:

Abstract—The momentum contemplate evaluates the relationship among online social recreations and the e-learning utilization by look at the impact of social, subjective and teaching nearness on e-learning use between female understudies by method for playing on the web social diversions. This study utilizes an exploratory research plan, comfort test procedure. The outcomes propose that all scales are basically related with E- learning use. It is found that E-learning uses is emphatically tremendous and has a direct related with social nearness. The relationship between E-learning use and psychological nearness has a decidedly strong enormous connection; in like manner, the relationship between E-learning use and teaching nearness has an emphatically strong colossal connection. The disclosures inferred that the characteristic of online social amusements; both intellectual and teaching nearness impact E-learning utilization.

There’s not enough research about female understudies. I’m glad that someone is filling that gap. It’s well worth what otherwise appear to be the subscription fees IEEE is charging (US$33 in case you were wondering).

Address of the bookmark: http://ieeexplore.ieee.org/document/8052647/

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2760723/this-was-actually-accepted-for-an-ieee-conference-and-then-published

Cocktails and educational research

A lot of progress has been made in medicine in recent years through the application of cocktails of drugs. Those used to combat AIDS are perhaps the most well-known, but there are many other applications of the technique to everything from lung cancer to Hodgkin’s lymphoma. The logic is simple. Different drugs attack different vulnerabilities in the pathogens etc they seek to kill. Though evolution means that some bacteria, viruses or cancers are likely to be adapted to escape one attack, the more different attacks you make, the less likely it will be that any will survive.

Unfortunately, combinatorial complexity means this is not simply a question of throwing a bunch of the best drugs of each type together and gaining their benefits additively. I have recently been reading John H. Miller’s ‘A crude look at the whole: the science of complex systems in business, life and society’, which is, so far, excellent, and that addresses this and many other problems in complexity science. Miller uses the nice analogy of fashion to help explain the problem: if you simply choose the most fashionable belt, the trendiest shoes, the latest greatest shirt, the snappiest hat, etc, the chances of walking out with the most fashionable outfit by combining them together are virtually zero. In fact, there’s a very strong chance that you will wind up looking pretty awful. It is not easily susceptible to reductive science because the variables all affect one another deeply. If your shirt doesn’t go with your shoes, it doesn’t matter how good either are separately. The same is true of drugs. You can’t simply pick those that are best on their own without understanding how they all work together. Not only may they not additively combine, they may often have highly negative effects, or may prevent one another being effective, or may behave differently in a different sequence, or in different relative concentrations. To make matters worse, side effects multiply as well as therapeutic benefits so, at the very least, you want to aim for the smallest number of compounds in the cocktail that you can get away with. Even were the effects of combining drugs positive, it would be premature to believe that it is the best possible solution unless you have actually tried them all. And therein lies the rub, because there are really a great many ways to combine them.

Miller and colleagues have been using the ideas behind simulated annealing to create faster, better ways to discover working cocktails of drugs. They started with 19 drugs which, a small bit of math shows, could be combined in 2 to the power of 19 different ways – about half a million possible combinations (not counting sequencing or relative strength issues). As only 20 such combinations could be tested each week, the chances of finding an effective, let alone the best combination, were slim within any reasonable timeframe. Simplifying a bit, rather than attempting to cover the entire range of possibilities, their approach finds a local optimum within one locale by picking a point and iterating variations from there until the best combination is found for that patch of the fitness landscape. It then checks another locale and repeats the process, and iterates until they have covered a large enough portion of the fitness landscape to be confident of having found at least a good solution: they have at least several peaks to compare. This also lets them follow up on hunches and to use educated guesses to speed up the search. It seems pretty effective, at least when compared with alternatives that attempt a theory-driven intentional design (too many non-independent variables), and is certainly vastly superior to methodically trying every alternative, inasmuch as it is actually possible to do this within acceptable timescales.

The central trick is to deliberately go downhill on the fitness landscape, rather than following an uphill route of continuous improvement all the time, which may simply get you to the top of an anthill rather than the peak of Everest. Miller very effectively shows that this uphill-only approach is the fundamental error committed by followers of the Six Sigma approach to management, an iterative method of process improvement originally invented to reduce errors in the manufacturing process: it may work well in a manufacturing context with a small number of variables to play with in a fixed and well-known landscape, but it is much worse than useless when applied in a creative industry like, say, education, because the chances that we are climbing a mountain and not an anthill are slim to negligible. In fact, the same is true even in manufacturing: if you are just making something inherently weak as good as it can be, it is still weak. There are lessons here for those that work hard to make our educational systems work better. For instance, attempts to make examination processes more reliable are doomed to fail because it’s exams that are the problem, not the processes used to run them. As I finish this while listening to a talk on learning analytics, I see dozens of such examples: most of the analytics tools described are designed to make the various parts of the educational machine work ‘better’, i.e. (for the most part) to help ensure that students’ behaviour complies with teachers’ intent. Of course, the only reason such compliance was ever needed was for efficient use of teaching resources, not because it is good for learning. Anthills.
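For the algorithmically inclined, here is a toy sketch in Python of the general idea. It is not Miller and colleagues’ actual method, and the scoring function is pure invention standing in for the slow weekly lab tests, but it shows the two moves that matter: sometimes accepting a worse cocktail so the search can climb down off an anthill, and restarting from new random points so that several peaks can be compared.

```python
# Toy simulated annealing over on/off combinations of 19 drugs. The scoring
# function is invented for illustration; in reality every evaluation would be
# a slow, expensive lab test, which is the whole point of searching cleverly.
import math
import random

random.seed(42)
N_DRUGS = 19
SOLO = [random.uniform(0.0, 1.0) for _ in range(N_DRUGS)]
PAIRWISE = {(i, j): random.uniform(-1.0, 1.0)
            for i in range(N_DRUGS) for j in range(i + 1, N_DRUGS)}


def effectiveness(cocktail: tuple) -> float:
    """Hypothetical score: solo effects plus pairwise interactions, minus a cost per drug."""
    score = sum(SOLO[i] for i in range(N_DRUGS) if cocktail[i])
    score += sum(w for (i, j), w in PAIRWISE.items() if cocktail[i] and cocktail[j])
    return score - 0.5 * sum(cocktail)  # side effects multiply too: penalize big cocktails


def anneal(steps: int = 2000, temp: float = 2.0, cooling: float = 0.995):
    """One run: wander one patch of the fitness landscape, sometimes going downhill."""
    current = tuple(random.randint(0, 1) for _ in range(N_DRUGS))
    score = effectiveness(current)
    best, best_score = current, score
    for _ in range(steps):
        flip = random.randrange(N_DRUGS)  # propose a small change: toggle one drug
        candidate = current[:flip] + (1 - current[flip],) + current[flip + 1:]
        cand_score = effectiveness(candidate)
        # Always accept improvements; accept worse cocktails with a probability
        # that shrinks as the temperature cools -- the deliberate downhill moves.
        if cand_score > score or random.random() < math.exp((cand_score - score) / temp):
            current, score = candidate, cand_score
            if score > best_score:
                best, best_score = current, score
        temp *= cooling
    return best_score, best


# Restart from several random points and compare the peaks, rather than
# trusting the first anthill the search happens to climb.
best_score, best_cocktail = max(anneal() for _ in range(5))
print(f"Best score found: {best_score:.2f}, using drugs "
      f"{[i for i, on in enumerate(best_cocktail) if on]}")
```

It is that shape of search – local iteration, deliberate downhill moves, multiple restarts – rather than any of the details, that makes the approach so much more tractable than testing all half a million combinations, and that might conceivably transfer to educational research.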

This way of thinking seems to me to have potentially interesting applications in educational research. We who work in the area are faced with an irreducibly large number of recombinable, mutually affecting variables that make any ethical attempt to do experimental research on effectiveness (however we choose to measure that – so many anthills here) impossible. That doesn’t stop a lot of people from doing it, and from telling us about p-values that prove their point in more or less scrupulous studies, but such studies are – not to put too fine a point on it – almost always completely pointless. At best, they might be telling us something useful about a single, non-replicable anthill, from which we might draw a lesson or two for our own context. But even a single omitted word in a lecture or a small change in inflection, let alone the impossibly vast range of design, contextual, historical and human factors, can have a substantial effect on learning outcomes and effectiveness for any given individual at any given time. We are always dealing with far more than 2 to the power of 19 possible mutually interacting combinations in real educational contexts. For even the simplest of research designs in a realistic educational context, the number of possible combinations of relevant variables is likely closer to 2 to the power of 100 (in base 10, that’s 1,267,650,600,228,229,401,496,703,205,376); a back-of-envelope calculation of what that would mean for exhaustive testing follows the quotation below. To make matters worse, the effects we are looking for may sometimes not be apparent for decades (having recombined and interacted with countless others along the way) and, for anything beyond trivial reductive experiments that would tell us nothing really useful, experiments could seldom be run at a rate of more than a handful per semester, let alone 20 per week. This is a very good reason to do a lot more qualitative research, seeking meanings, connections, values and stories rather than trying to prove our approaches with experimental results. Education is more comparable to psychology than to medicine, and it suffers the same central problem – that the general does not transfer to the specific – as well as a whole bunch of related problems that Smedslund recently and coherently summarized. The article is paywalled, but Smedslund’s abstract states his main points succinctly:

“The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project.”

You could substitute ‘education’ for ‘psychology’ throughout, and it would be just as true. But it gets worse, because education is as much about technology and design as it is about states of mind and behaviour, so it is orders of magnitude more complex than psychology. The potential for the invention of new ways of teaching and new states of learning is essentially infinite. Reductive science thus has a very limited role in educational research, at least as it has hitherto been done.
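Just to put that 2 to the power of 100 figure into perspective, here is my own back-of-envelope arithmetic (not drawn from any of the sources cited), borrowing the 20-tests-per-week rate from Miller’s drug example:

```python
drug_combos = 2 ** 19     # 524,288: searchable, with cleverness, at 20 tests per week
edu_combos = 2 ** 100     # 1,267,650,600,228,229,401,496,703,205,376

tests_per_week = 20
years_of_exhaustive_testing = edu_combos / tests_per_week / 52
print(f"{years_of_exhaustive_testing:.1e} years")   # roughly 1.2e+27 years
```

For comparison, the universe is only around 1.4 times 10 to the power of 10 years old.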

But what if we took the lessons of simulated annealing to heart? I recently bookmarked an approach to more reliable research suggested by the Christensen Institute that might provide a relevant methodology. The idea behind this is (again, simplifying a bit) to do the experimental work, then to sweep the normal results to one side and concentrate on the outliers, performing iterations of conjectures and experiments on an ever more diverse and precise range of samples until a richer, fuller picture results. Although it would be painstaking and long-winded, it is a good idea. But one cycle of this is a bit like a single iteration of Miller’s simulated annealing approach: a means of reaching the top of one peak in the fitness landscape, which may still be a low-lying one. If, having done that, we jumbled up the variables and repeated the process starting somewhere else, we might stand a chance of climbing some higher anthills and, perhaps, over time we might even hit a mountain and begin to have something that looks like a true science of education, in which we could make reasonable predictions that do not rely on vague generalizations. It would either take a terribly long time (which might itself preclude it because, by the time we had finished researching, the discipline would have moved somewhere else) or run into some notable ethical boundaries (you can’t deliberately mis-teach someone), but it seems more plausible than most existing techniques, if a reductive science of education is what we seek.
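To show how that outlier-chasing cycle maps onto the restart idea, here is a deliberately crude, runnable caricature. The hidden ‘ground truth’, the thresholds, and the notion of a ‘study’ as noisy sampling are all invented for illustration; this is not the Christensen Institute’s method, nor a serious research design:

```python
import random

# A runnable caricature: probe, set aside the typical results, chase the
# outliers, then jumble things up and restart somewhere else.

random.seed(2)
TRUE_EFFECT = {"novice": 0.4, "expert": -0.3, "other": 0.05}  # hidden, context-dependent

def run_study(sample):
    """Pretend experiment: a noisy measurement of the effect for each participant."""
    return [(group, TRUE_EFFECT[group] + random.gauss(0, 0.1)) for group in sample]

def one_locale(sample, cycles=4):
    """Iterate conjectures within one locale: each cycle keeps only the groups whose
    results diverge from the running average and studies them more closely."""
    findings = {}
    for _ in range(cycles):
        results = run_study(sample)
        mean = sum(x for _, x in results) / len(results)
        outliers = [g for g, x in results if abs(x - mean) > 0.2]
        if not outliers:
            break                     # nothing anomalous left to explain in this locale
        findings[tuple(sorted(set(outliers)))] = round(mean, 2)
        sample = outliers             # the next cycle concentrates on the anomalies
    return findings

# The restart step: jumble the sample and begin again elsewhere, so that several
# peaks can be compared rather than settling for the first anthill found.
for _ in range(3):
    sample = random.choices(list(TRUE_EFFECT), k=30)
    print(one_locale(sample))
```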

To be frank, I am not convinced it is worth the trouble. It seems to me that education is far closer as a discipline to art and design than it is to psychology, let alone to physics. Sure, there is a lot of important and useful stuff to be learned about how we learn: no doubt about that at all, and a simulated annealing approach might speed up that kind of research. Painters need to know what paints do, too. But between that and prescribing how we should therefore teach lies a chasm that reductive science cannot, in principle or in practice, cross. This doesn’t mean that we cannot know anything: it just means it is a different kind of knowledge from the kind that reductive science can provide. We are dealing with emergent phenomena in complex systems that are ontologically and epistemologically different from the parts of which they consist. So, yes, knowledge of the parts is valuable, but we can no more predict how best to teach or learn from those parts than we can predict the shape and function of the heart from knowledge of the organelles in its constituent cells. But knowledge of the cocktails that result – that might be useful.