Educational machines and how they work

These are my presentation slides from a talk I gave today for the AU Research Centre, on the nature of technologies and why that is really interesting from a learning perspective. A clear understanding of the nature of technologies, especially the ways that we coparticipate in them in the highly distributed teaching that occurs in educational systems, helps to explain why bad teaching or even no teaching at all can sometimes work better than good teaching (depending on what we mean by ‘bad’, ‘good’, and ‘works’). It also helps to explain the origins of the no-significant-difference phenomenon, why Bloom’s 2-sigma challenge can never be met, why learning styles research makes zero sense, and why we might need to rethink the value and purpose of reductive research in education (all it can ever tell us is whether the machine is working as intended, and it is not and cannot be generalizable beyond that).

A video capture of the talk itself is available at (link).

Self-referentially, I should note that I didn’t realize that, in this capture, the video of me would appear in the bottom corner of the screen, roughly the size of a very small postage stamp, nor that the chat alongside the presentation would not be captured. Those who were there hopefully got to see me gesticulating and showing off things to do with chopsticks a bit better than this, and they definitely got to see and participate in the chat. Basically, in many ways, they got a different technology altogether from the one you will see in the video, and not just because they were there and able to interact. This relates closely to one of the big points that I am making: Microsoft Teams is only part of a whole, and the ways it works within that whole both affect and are affected by the rest of the parts. The whole, how the parts relate to the whole, and how the whole relates to the parts are what matters, not any of the parts in isolation.

Original file on the Landing

Opinion: Let’s admit it – online education is a pale shadow of the real thing


This is an opinion piece from a UoT philosophy professor, Mark Kingwell, published in the Globe and Mail. Its arguments (such as they are) are an extremely poor advertisement for his discipline or for the effectiveness of learning philosophy. This is my best attempt to reconstruct the argument he is making:

P1: education cannot properly occur without a college culture (no supporting evidence given. The existence of Athabasca University proves this to be false)

P2: Socratic engagement cannot occur apart from in-person (no supporting evidence given. Thousands of online philosophy courses doing just that prove this to be false)

P3: being there is better than being online (no supporting evidence given. This is just an opinion. My own opinion is that it depends entirely on what you do and how you do it. Sometimes it is better to shop from Amazon, sometimes it isn’t. I’ve had awful in-person experiences and wonderful online experiences, and vice versa)

P4: online learning cannot replicate the in-person experience (no supporting evidence given apart from a misquote of McLuhan, though I agree that this, at least, is true)

Therefore: if it is provided online, education (or at least the teaching of philosophy) cannot work. At least this conclusion more or less follows from the flawed premises, give or take the odd missing premise, and notwithstanding the fact that ‘education’ is not well defined, apart from circularly.

This is palpable nonsense. The only argument that has any weight whatsoever is that online learning cannot replicate in-person learning, albeit not for the reasons he provides. This is absolutely correct. It doesn’t, and it shouldn’t. It’s a different (though overlapping) set of orchestrations of a different (though overlapping) set of phenomena. It should no more be the same as in-person learning than driving a car should be the same as riding a horse. It is fair to say that, if you apply exactly the same techniques to driving a car as you would to riding a horse, the best and most likely thing that can happen is nothing. If by some miraculous accident you managed to get in the car by trying to mount it, and if for some reason it started moving (maybe you kicked the handbrake?), the results would not be pretty.

There is a potentially interesting though undeveloped argument to be made about the importance of college culture. I could not agree more that the processes of teaching that a professor manages are only a tiny fraction of what leads to learning and that the vast majority of learning in colleges (or any other teaching institution, especially online) does occur outside the classroom and beyond the purview of any professor. In fact, much of the time, much of it occurs long after a course is over, sometimes years later. I agree that the way colleges create safe and vibrant communities that support students’ growth and development is very valuable, especially in the context of straight-out-of-school kids who need to unlearn the dependencies that have been imposed on them by years of coercive schooling. The scaffolding it provides is great. It is a somewhat damning indictment of a teacher’s teaching, though, that this is the university’s main source of value, don’t you think? And is it really the only way to do this? And, if it is, why not make it available to everyone, rather than the few that you deem worthy of it?

As much as anything this sounds to me like an anguished cry for help from someone who is out of his depth, lost, and unable to understand how to change. So, here’s some advice for Professor Kingwell and anyone else suffering the same existential angst. Let go of the idea that teaching is something that you do to students. Think of it instead as a process of helping students to learn. Question your assumptions. Don’t try to approximate real seminars and lectures. They were poor (in the case of lectures, exceedingly poor) technologies in the first place that were only necessary because of physics and the need for medieval monks to indoctrinate as many people as possible in the absence of affordable books. Imagine what the advantages for students might be of learning in situ, of being able to take time to think about the answers, of being able to make use of and connect with the vast sources of knowledge (especially including other people) that are available online, of integrating their learning in their own lives and communities. Don’t forget that they have their own interests, physical contexts and social circles. Remember that you are only a part of their environment, not the controller of it. If you don’t think the skills of debate can be developed online, visit some of the discussions at r/philosophy on Reddit. You will, of course, despair of the poor quality of most of the arguments and the shallowness of many of the replies but isn’t that a wonderful opportunity? How can you make it better? Do you see glimmers of intelligent argument there that could be developed, with your help? Online learning is not and should not be the same as in-person learning. But it can be richer, more meaningful, more relevant, and more respectful of learners’ individuality and autonomy precisely because it does not suffer the same constraints and path dependencies of the old ways.


There is another argument in this opinion piece that I have ignored, which does have empirical support and coherence, and it goes like this:

Students who have chosen in-person learning rather than online learning prefer in-person learning to online learning. Those dreaming spires and quads that he mentions, not to mention football and bars, and above all the general ‘college experience’, including the potential to drop out of the rest of society for a few years, probably have a great deal to do with this. Many current students resent paying the same over-inflated fees for what they rightly perceive to be a less valuable experience.

Therefore colleges will suffer from loss of revenue, and smaller, less successful ones may close.

That’s a fair argument, despite the obvious sampling bias. There’s no doubt that the skills and toolsets needed for effective online learning are far beyond the capabilities of most professors in most universities and colleges, as this plea for help reveals. In fact, even dedicated online institutions like Athabasca struggle a great deal to manage, with funding models that completely fail to address the realities and very unevenly spread pedagogical skills.

The current crisis certainly will massively disrupt education as we know it, much as it will disrupt most industries and institutions. There’s a lot of beautiful real estate that is not going to be well used, though it is not well used as it is, with most buildings and classrooms unused most of the time in most institutions, so this is only a matter of degree, as it were. These are interesting times. But that’s no reason to say that it won’t be as good. If we choose to make it better, it can be better.

Originally posted at:

Distributed Teaching

I forgot to share this when it first came out at the end of last year. This is my contribution to Springer’s Encyclopedia of Teacher Education, a brief (3000 word) article that gives a broad overview of the main ways in which the role of a teacher is (always) spread across many individuals, as well as a little general advice about ways that designated teachers might make use of this knowledge. Being an encyclopaedia article, there’s nothing particularly earth-shattering in it, but I think it captures the essence of most perspectives on how the act of teaching is shared among us, from deliberate collaborations through to socially distributed cognition and collectives, and the conclusion very gently hints at what are actually quite significant consequences of this perspective for how teachers should teach (tl;dr: embrace the crowd, don’t fight it).

Unfortunately it is paywalled (I was invited to submit this by a friend), but there’s a preprint available at which is fairly close to the final version.

Original citation:

Dron J. (2020) Distributed Teaching. In: Peters M. (eds) Encyclopedia of Teacher Education. Springer, Singapore

Originally posted at:

The most comprehensive, accurate, and up-to-date source of information on COVID-19 research

This site tracks and sorts registered COVID-19 trials, harvested from multiple reliable global databases as well as other promising research not included in such sources, mined and pre-filtered using AI techniques. The studies are all manually reviewed by two humans for validity, reliability, methods, etc., and checked for duplication. They also regularize and standardize the language and data to make studies more easily and directly comparable. The result is then fed into an easily searchable online database. This provides what is essentially a super-fast and flexible way of conducting something akin to a systematic review (better, in some ways), and the results can be freely used by anyone interested in the current state of the research (good export facilities too). I rather like the idea that it is becoming a means for researchers themselves to connect with one another and coordinate research. For laypeople, it’s a brilliant way to check the true state of research without the sensationalism or cherry picking of politicians or regular/social media. You can easily set filter and sorting conditions, and there are links to all the original data and papers (many of which are not paywalled). As of today, the site tracks 590 trials, but the number is growing all the time. The site and its features are still evolving, too. It has been built by researchers who did not have a wealth of web design expertise before this started, but you’d hardly know it: they have done a great job of getting it up and running and making it really usable and responsive.

You can read more about it in The Lancet.  I highly recommend the associated interview. The sound quality of the podcast is not great, but the interview is terrific, and it explains much more of the process, implications, and uses than the article itself. Some great reflections on the relative value of different kinds of data and many of the seldom stated complexities of scientific trials in general, including political and social issues, not to mention the immense promise of analytics approaches to greatly increase what we can learn from existing trials. Fascinating stuff.

Full disclosure: I am the very proud father of the second author of the paper and interviewee in the podcast, who is working punishing hours day and night to make this happen. In the fight against the pandemic he is only one among many heroes, but this one happens to be mine.

Originally posted at:

Does technology lead to improved learning? (tl;dr: it's a meaningless question)

There have been (at least) tens of thousands of comparative studies of the effects of ‘technology’ on learning performed over the past hundred years or so. Though some have been slightly more specific (the effects of computers, online learning, whiteboards, eportfolios, etc.) and some more sensible authors use the term ‘tech’ to distinguish things with flashing lights from technologies in general, nowadays it is pretty common just to use the term ‘technology’ as though we all know what the authors mean. We don’t. And neither do they.

It makes no more sense to ask whether (say) computers have a positive or negative effect on learning than to ask whether (say) pedagogies have a positive or negative effect on learning. Pedagogies (methods and principles of learning and teaching) are at least as much technologies as computers and their uses and forms are similarly diverse. Some work better than others, sometimes, in some contexts, for some people. All are soft technologies that demand we act as coparticipants in their orchestration, not just users of them. This means that we have to add stuff to them in order that they work. None do anything of interest by themselves – they must be orchestrated with (usually many) other tools, methods, structures, and so on in order to do anything at all. All can be orchestrated well (assuming we know what ‘well’ really means, and we seldom really do) or badly.

It is instructive to wonder why it is that, as far as I know, no one has yet tried to investigate the effects of transistors, or screws, or words, or cables on learning, even though they are an essential part of most technologies that we do see fit to research and are certainly prerequisite parts of many educational interventions. The answer is, I hope, obvious: we would be looking at the wrong level of detail. We would be examining a part of the assembly that is probably not materially significant to learning success, albeit that, without them, we would not have other technologies that interest us more. Transistors enable computers, but they do not entail them.

Likewise computers and pedagogies enable learning, but do not entail it (for more on enablement vs entailment, see Longo et al, 2012 or, for a fuller treatment, Kauffman, 2019). True, pedagogies and computers may orchestrate many more phenomena for us, and some of those orchestrations may have more consistent and partly causal effects on whether an intervention works than screws and cables but, without considering the entire specific assembly of which they are a part, those effects are no more generalizably relevant to whether learning is effective or not than the effects of words or transistors.

Technologies enable (or sometimes disable) a range of phenomena, but only rarely do they generalizably entail a fixed set of outcomes and, if they do, there are almost always ways that we can assemble them with other technologies that alter those outcomes. In the case of something as complex as education, which always involves thousands and usually millions of technological components assembled with one another by a vast number of people, not just the teacher, every part affects every other. It is irreducibly complex, not just complicated. There are butterfly’s-wing effects to consider – a single injudicious expletive, say, or even a smile can transform the effectiveness or otherwise of teaching. There’s emergence, too. A story is not just a collection of words, a lesson is not just a bunch of pedagogical methods, a learning community is not just a collection of people. And all of these things – parts and emergent or designed combinations of parts – interact with one another to lead to deterministic but unprestatable consequences (Kauffman, 2019).

Of course, any specific technology applied in a specific context can and will entail specific and (if hard enough) potentially repeatable outcomes. Hard technologies will do the same thing every time, as long as they work. I press the switch, the light comes on. But even for such a simple, hard technology, you cannot from that generalize that every time any switch is pressed a light will come on, even if you, without warrant, assume that the technology works as intended, because it can always be assembled with other phenomena, including those provided by other technologies, that alter its effects. I press many switches every day that do not turn on lights and, sometimes, even when I press a light switch the light does not come on (those that are assembled with smart switches, for instance). Soft technologies like computers, pedagogies, words, cables, and transistors are always assembled with other phenomena. They are incomplete, and do not do anything of interest at all without an indefinitely large number of things and processes that we add to them, or to which we add them, each subtly or less subtly different from the rest. Here’s an example using the soft technology of language:

  • There are countless ways I could say this.
  • There are infinitely many ways to make this point.
  • Wow, what a lot of ways to say the same thing!
  • I could say this in a vast number of ways.
  • There are indefinitely many ways to communicate the meaning of what I wish to express.
  • I could state this in a shitload of ways.
  • And so on, ad infinitum.

This is one tiny part of one tiny technology (this post). Imagine this variability multiplied by the very many people, tools, methods, techniques, content, and structures that go into even a typical lesson, let alone a course. And that is disregarding the countless other factors and technologies that affect learning, from institutional regulations to interesting news stories or conversations on a bus.
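To make the earlier light-switch point a little more concrete, here is a toy sketch in code. It is entirely my own illustration (none of the class names comes from any real system): the same hard component behaves identically everywhere, yet the outcome of pressing it depends on everything else it is assembled with.

```python
# Toy sketch: the same hard component, assembled two different ways,
# produces two different outcomes. All names here are invented for
# illustration; this models no real system.

class Switch:
    """A hard technology: pressing it reliably toggles its own state."""
    def __init__(self):
        self.on = False

    def press(self):
        self.on = not self.on
        return self.on

class Lamp:
    """Assembly 1: the switch directly controls the light."""
    def __init__(self, switch):
        self.switch = switch

    def light_is_on(self):
        return self.switch.on

class SmartLamp:
    """Assembly 2: the same switch, mediated by a controller with a
    night mode -- the switch alone no longer entails light."""
    def __init__(self, switch, night_mode=False):
        self.switch = switch
        self.night_mode = night_mode

    def light_is_on(self):
        return self.switch.on and not self.night_mode

switch = Switch()
switch.press()  # the switch itself behaves identically in both assemblies

plain = Lamp(switch)
smart = SmartLamp(switch, night_mode=True)

print(plain.light_is_on())  # True: this assembly entails light
print(smart.light_is_on())  # False: same component, different outcome
```

The point is not the code but the boundary: asking ‘what does the switch do?’ has no general answer, because the answer belongs to the assembly, not to the part.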

Reductive scientific methods like randomized controlled trials and null hypothesis significance testing can tell us things that might be useful to us as designers and enactors of teaching. We can, say, find out some fairly consistent things about how people learn (as natural phenomena), and we can find out useful things about how well different specific parts compare with one another in a particular kind of assembly when they are supposed to do the same job (nails vs screws, for instance). But these are just phenomena that we can use as part of an assembly, not prescriptions for successful learning. The question of whether any given type of technology affects learning is meaningless. Of course it does, in the specific, because we are using it to help enable learning. But it only does so in an orchestrated assembly with countless others, and that orchestration is and must always be substantially different from any other. So, please, let’s all stop pretending that educational technologies (including pedagogical methods) can be researched in the same reductive ways as natural phenomena, as generalizable laws of entailment. They cannot.


Arthur, W. B. (2009). The Nature of Technology: what it is and how it evolves (Kindle ed.). New York, USA: Free Press. (Arthur’s definition of technology as the orchestration of phenomena for some purpose, and his insights into how technologies evolve through assembly, underpins the above)

Kauffman, S. A. (2019). A World Beyond Physics: The Emergence and Evolution of Life. Oxford University Press.

Longo, G., Montévil, M., & Kauffman, S. (2012). No entailing laws, but enablement in the evolution of the biosphere. Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, Philadelphia, Pennsylvania, USA. Full text available at


Bananas as educational technologies

One of my most memorable learning experiences, one that has served me well for decades and that I actually recall most days of my life, occurred during a teacher training session early in my teaching career. We had been set the task of giving a two-minute lecture on something central to our discipline. Most of us did what we could with a slide or two and a narrative to match, in a predictably pedestrian way. I remember none of them, not even my own, apart from one. One teacher (his name was Philippe), who taught sports nutrition, just drew a picture of a banana. My memory is hazy on whether he also used an actual banana as a prop: I’d like to think he did. For the next two minutes, he then repeated ‘have a banana’ many times, interspersed with some useful facts about its nutritional value and the contexts in which we might do so. I forget most of those useful facts, though I do recall that it has a lot of good nutrients and is easy to digest. My main takeaway was that, if we are in a hurry in the morning, we should not skip breakfast but should eat a banana, because it will keep us going well enough to function for some time, and is superior to coffee as a means of making us alert. His delivery was wonderful: he was enthusiastic, he smiled, we laughed, and he repeated the motif ‘have a banana!’ in many different and entertaining ways, with many interesting and varied emphases. I have had (at least) a banana for breakfast most days of my life since then and, almost every time I reach for one, I remember Philippe’s presentation. How’s that for teaching effectiveness?

But what has this got to do with educational technologies? Well, just about everything.

As far as I know, up until now, no one has ever written an article about bananas as educational technologies. This is probably because, apart from instances like the one above where bananas are the topic, or a part of the topic being taught, bananas are not particularly useful educational technologies. You could, at a stretch, use one to point at something on a whiteboard, as a prop to encourage creative thinking, or as an anchor for a discussion. You could ask students to write a poem on it, or calculate its volume, or design a bag for it. There may in fact be hundreds of distinct ways to use bananas as an educational technology if you really set your mind to it. Try it – it’s fun! Notice what you are doing when you do this, though. The banana does provide some phenomena that you can make use of, so there are some affordances and constraints on what you can do, but what makes it an educational technology is what you add to it yourself. Notwithstanding its many possible uses in education, on balance, I think we can all agree that the banana is not a significant educational technology.

Parts and pieces

Here are some other things that are more obviously technological in themselves, but that are not normally seen as educational technologies either:

  • screws
  • nails
  • nuts and bolts
  • glue

Like bananas, there are probably many ways to use them in your teaching but, unless they are either the subject of the teaching or necessary components of a skill that is being learned (e.g. some crafts, engineering, arts, etc) I think we can all agree that none of these is a significant educational technology in itself. However, there is one important difference. Unlike bananas, these technologies can and do play very significant roles in almost all education, whether online or in-person. Without them and their ilk, all of our educational systems would, quite literally, fall apart. However, to call them educational technologies would make little sense because we are putting the boundaries around the wrong parts of the assembly. It is not the nuts and bolts but what we do with them, and all the other things with which they are assembled, that matters most. This is exactly like the case of the banana.

Bigger pieces

This is interesting because there are other things that some people do consider to be sufficiently important educational technologies that they get large amounts of funding to perform large-scale educational research on them, about which exactly the same things could be said: computers, say. There is really a lot of research about computers in classrooms. And yet metastudies tend to conclude that, on average, computers have little effect on learning. This is not surprising. It is for exactly the same reason that nuts and glue, on average, have little effect on learning. The researchers are choosing the wrong boundaries for their investigations.
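A toy simulation makes the statistical point vivid (this is entirely my own illustration, not an analysis of any real metastudy): if each study actually measures the effect of a whole assembly rather than of the computer itself, and assemblies vary from study to study, then large positive and negative effects can average out to roughly nothing.

```python
import random

random.seed(42)

# Toy model: each 'study' reports an effect size that really belongs to
# the whole assembly (pedagogy, context, learners, and so on), which
# differs from study to study. Some assemblies help, some hinder.
def study_effect():
    # Assumed distribution for illustration only: centred on zero,
    # with plenty of spread between assemblies.
    return random.gauss(mu=0.0, sigma=0.8)

effects = [study_effect() for _ in range(10_000)]

mean_effect = sum(effects) / len(effects)
large_effects = sum(1 for e in effects if abs(e) > 0.5)

print(f"average effect across studies: {mean_effect:.3f}")  # close to zero
print(f"studies with a large effect:   {large_effects}")    # yet thousands are large
```

The average says almost nothing, even though individual studies found substantial effects in both directions. The boundary of measurement, not the computer, drives the result.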

The purpose of a computer is to compute. Very few people find this of much value as an end in itself, and I think it would be less useful than a banana to most teachers. In fact, with the exception of some heavily math-oriented and/or computer science subjects, it is of virtually no interest to anyone.

The ends to which the computing they perform is put are another matter altogether. But those are no more the effect of the computer than the computer is the effect of the nuts and bolts that hold it together. Sure, these (or something like them) are necessary components, but they are not causes of whatever it is we do with them. What makes computers useful as educational technologies is, exactly like the case of the banana, what we add to them.

It is not the computer itself, but other things with which it is assembled such as interface hardware, software and (above all) other surrounding processes – notably the pedagogical methods – that can (but on average won’t) turn it into an educational technology. There are potentially infinite numbers of these, or there would be if we had infinite time and energy to enact them. Computers have the edge on bananas and, for that matter, nuts and bolts because they can and usually must embody processes, structures, and behaviours. They allow us to create and use far more diverse and far more complex phenomena than nuts, bolts, and bananas. Some – in fact, many – of those processes and structures may be pedagogically interesting in themselves. That’s what makes them interesting, but it does not make them educational technologies. What can make them educational technologies are the things we add, not the machines in themselves.

This is generalizable to all technologies used for educational purposes. There are hierarchies of importance, of course. Desks, classrooms, chairs, whiteboards and (yes) computers are more interesting than screws, nails, nuts, bolts, and glue because they orchestrate more phenomena to more specific uses: they create different constraints and affordances, some of which can significantly affect the ways that learning happens. A lecture theatre, say, tends to encourage the use of lectures. It is orchestrating quite a few phenomena that have a distinct pedagogical purpose, making it a quite significant participant in the learning and teaching process. But it and all these things, in turn, are utterly useless as educational technologies until they are assembled with a great many other technologies, such as (very non-exhaustively and rather arbitrarily):

  • pedagogical methods,
  • language,
  • drawing,
  • timetables,
  • curricula,
  • terms,
  • classes,
  • courses,
  • classroom rules,
  • pencils and paper,
  • software,
  • textbooks,
  • whiteboard markers,
  • and so on.

None of these parts has much educational value on its own. Even something as unequivocally an educational technology as a pedagogical method is useless without all the rest, and changes to any of the parts may have substantial impacts on the whole. Furthermore, without the participation of learners who are applying their own pedagogical methods, it would be utterly useless, even in assembly with everything else. Every educational event – even those we apparently perform alone – involves the coparticipation of countless others, whether directly or not.

The point of all this is that, if you are an educational researcher or a teacher investigating your own teaching, it makes no sense at all to consider any generic technology in isolation from all the rest of the assembly. You can and usually should consider specific instances of most if not all those technologies when designing and performing an educational intervention, but they are interesting only insofar as they contribute, in relationship to one another, to the whole.

And this is not the end of it. Just as you must assemble many pieces in order to create an educational technology, what you have assembled must in turn be assembled by learners – along with plenty of other things like what they know already, other inputs from the environment, from one another, the effects of things they do, their own pedagogical methods, and so on – in order to achieve the goals they seek. Your own teaching is as much a component of that assembly as any other. You, the learners, the makers of tools, inventors of methods, and a cast of thousands are coparticipants in a gestalt process of education.

This is one of the main reasons that reductive approaches to educational research that attempt to isolate the effects of a single technology – be it a method of teaching, a device, a piece of software, an assessment technique, or whatever – with the intent of generalizing some statement about it cannot ever work. The only times they have any value at all are when all the technologies in question are so hard, inflexible, and replicable, and the uses to which they are put are so completely fixed, well defined, and measurable that you are, in effect, considering a single specific technology in a single specific context. But, if you can specify the processes and purposes with that level of exactitude then you are simply checking that a particular machine works as it is designed to work. That’s interesting if you want to use that precise machine in an almost identical context, or you want to develop the machine itself further. But it is not generalizable, and you should never claim that it is. It is just part of a particular story. If you want to tell a story then other methods, from narrative descriptions to rich case studies to grounded theory, are usually much more useful.

Why Pioneer Neurosurgeon Wilder Penfield Said the Mind Is More Than the Brain

I had not come across exactly this argument for mind-brain dualism before, though it resembles some going back to antiquity in its basic assumptions. It’s an interesting idea, proposed by Wilder Penfield, a neurosurgeon working in the first half of the 20th Century. The three foundations for his arguments were:

  1. despite hundreds of thousands of stimulations of patients’ brains under neurosurgery, not one ever stimulated the intellect: no one ever did calculus as a result of brain stimulation.
  2. when people have seizures caused by problems in the brain, all sorts of body movements occur, but there are no intellectual seizures. No one ever had a calculus seizure.
  3. though he could stimulate people to move arms etc, the patients always knew it was him doing it. He was never able to stimulate the will. He could not make them believe they were the cause of the movement.

His belief was, therefore, that the mind (the will and the intellect – logic, abstract reasoning, etc) cannot arise from the brain because, if it did, there would be at least some way to stimulate it by prodding the brain. Apparently there are others who still share his belief.

I’ve not investigated how the arguments have developed since then, nor whether anyone has succeeded where Penfield failed, but it seems to me to be a poor line of reductive deductive reasoning. It is fairly reasonable to assume, without recourse to magic and based on what we know of complex adaptive systems, that the mind is an emergent phenomenon that does not exist in one place in the brain, but that occurs through the interaction of billions of simpler elements, and clusters of elements, all recursively affecting one another, most likely at many hierarchical levels and boundaries. There are many things that behave differently as a whole than in their parts: an atom of a cell is not a cell, the cells of hearts are not hearts, a heart is not a body, a body is not a society, and so on. The fact that small parts of the brain can be stimulated to produce measurable psychological and physical effects does not mean that all brain-based phenomena have to work that way. Stimulating an area of the brain as an attempt to evoke the mind is no more sensible than buying a can of beans as an attempt to evoke the economy.


The makers of the game complaining about the people playing it

“I think as long as we have education, we’re going to have people who are going to try and game the system and we just have to keep up with them,” said Deb Eerkes, the university’s director of student conduct and accountability.

(40 University of Alberta computing science students caught cheating, CBC News, March 4, 2020)

This is stuff and nonsense. Dangerous, cynical, subversive, appalling nonsense. There are lots of different definitions of ‘education’ but, as far as I know, not a single one of them includes grading and sorting learners. Education is supposed, above all else and non-negotiably, to be a system for learning. If you instead treat it as a system for grading, then of course rational students will take the shortest safe path to attain the best grades possible, whether or not that involves learning. Cheating is rarely if ever a very safe path but, if the stakes are set high enough and achieving success is out of their reach for whatever reason, then it is a calculated risk that some will always take. In fact, most will: almost all studies of the phenomenon across the world show that more than half of all students cheat at some point (Jurdi et al, 2011), and some studies show rates over 80% (Ma et al, 2013). These people are not gaming the system. They are playing the game as it is designed to be played. It doesn’t help that we almost always force them to learn things that they neither want nor need to learn, at times when they are not ready, willing, or able to do so. And when I say ‘learn’ I mean it in the same sense as we learn the number of our hotel room when we stay there. When there is no longer a need for it (the grade has been attained), we have no use for it any more and, as often as not, promptly forget.

When cheating is so widespread and ubiquitous, the fault is clearly with the educational system, not the cheaters. A system that is designed to teach people, but that makes it a fundamental part of its design that some of them must fail to be taught, is fundamentally broken. There are not many other technologies that are actually designed to fail so consistently and so spectacularly. Imagine the same design approach being used for, say, cars or nuclear power stations. Of course, some immoral manufacturers do rely on built-in obsolescence, many cripple parts of their products’ functionality in order to sell more of them, and so on. But these are not failures when viewed as ways of making money for the manufacturers: the failure is only in their users’ understanding of their primary purpose. It is also true that, with the best will in the world, almost all technologies do, sooner or later, fail, but (with a few exceptions, like some artworks) that is not what they are normally designed to do. That’s just entropy doing its thing. Indeed, unless something actively inputs significant energy into a system to maintain it and adapt it to its changing context, every system will eventually fail. But that’s not what is happening here. Education is actually designed to fail.

As long as education is treated as a sorting machine, students will use counter-technologies to address its shortcomings, and educators will use counter-technologies to counter those counter-technologies, in an ever-escalating arms race that makes everyone the loser.

Here are a few (of many) ways we can improve this situation, even within the context of a system designed to fail:

  • build the system so that students can try and try again until they have actually learned what they seek to learn. If at all possible, even if it means charging more for the service, do not force them to keep to your timetable for this.
  • give them control over what they learn, and how. By all means let them delegate control to you (or anyone else) if they wish, but always let them take it back when they want or need to do so.
  • do not give grades: they destroy intrinsic motivation. Give feedback that helps students to improve. If grades are mandated by the system, only ever use two: A, and incomplete (Kohn, 1999, p.208). If that is impossible, at the very least allow students to participate in grading, let them choose at least some of the criteria, give them ownership of the process.
  • discover the outcomes that have actually occurred, rather than measure the extent to which students meet the outcomes we say they should meet. Students always learn more than we teach. Celebrate it. Outcome harvesting (Wilson-Grau & Britt, 2012) is a promising approach for this.
  • celebrate achievement. Do not punish failure to achieve. When grading, seek evidence of learning, not evidence of failure to learn. When there are failures to learn, treat them as opportunities to improve, not reasons to reject.
  • celebrate re-use. Everything builds on everything else; no one does anything alone. Let people ‘cheat’, authentically, as all of us ‘cheat’ when we use ideas and chunks of stuff other people have created, but make cheating pointless or counter-productive in achieving a grade. A simple way to do that is to make learning personal (not personalized) so that it is both relevant to student interests and needs (so intrinsically motivating), and always unique to them (so difficult to copy from elsewhere). It also helps to celebrate intelligent (properly ascribed) re-use. Don’t ask students to reinvent wheels, but encourage them to use wheels well.
  • make learning visible. Build sharing into the structure of the process. This is both motivating and the many eyes that result make cheating far more likely to be discovered. If ‘face’ is what matters to your students, then design the system so that they must show it.
  • build community. People tend to try much harder when they know that what they create will be seen by others that they care about.

I could go on indefinitely: there are countless ways to avoid or at least reduce the harms of grading, not one of which requires coercion, punishment, or harm. The main point, though, is that educational systems are technologies for learning, not for grading. If we can spin some useful awards (not rewards) out of that then that’s good, but it should not, in the process, subvert the whole point of having the things in the first place.


Jurdi, R., Hage, H. S., & Chow, H. P. H. (2011). Academic Dishonesty in the Canadian Classroom: Behaviours of a Sample of University Students. Canadian Journal of Higher Education, 41(3).

Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A’s, praise, and other bribes. Mariner Books.

Ma, Y., McCabe, D., & Liu, R. (2013). Students’ Academic Cheating in Chinese Universities: Prevalence, Influencing Factors, and Proposed Action. Journal of Academic Ethics, 11(3), 169-184. doi:10.1007/s10805-013-9186-7

Wilson-Grau, R., & Britt, H. (2012). Outcome harvesting. Cairo: Ford Foundation. http://www.managingforimpact.org/sites/default/files/resource/outome_harvesting_brief_final_2012-05-2-1.pdf


Obsolescence and decay

Koristka camera

All technologies require an input of energy – to be actively maintained – or they will eventually drift towards entropy. Pyramids turn to sand, unused words die, poems must be reproduced to survive, bicycles rust. Even apparently fixed digital technologies rely on physical substrates and an input of power to be instantiated at all. A more interesting reason for their decay, though, is that virtually no technologies exist in isolation: virtually all participate in, and/or are participated in by, other technologies, whether human-instantiated or mechanical. All are assemblies, and all exist in an ecosystem that affects them and which they affect. If parts of that system change, then the technologies that depend on them may cease to function even though nothing about those technologies has, in itself, altered.

Would a (film) camera for which film is no longer available still be a camera? It seems odd to think of it as anything else. However, it is also a bit odd to think of it as a camera, given that it must be inherent to the definition of a camera that it can take photos. It is not (quite) simply that, in the absence of film, it doesn’t work. A camera that doesn’t take photos because the shutter has jammed or the lens is missing is still a camera: it’s just a broken camera, or an incomplete camera. That’s not so obviously the case here. You could rightly claim that the object was designed to be a camera, thereby making the definition depend on the intent of its manufacturer. The fact that it used to be perfectly functional as a camera reinforces that opinion. Despite the fact that it cannot take pictures, nothing about it – as a self-contained object – has changed. We could therefore simply say that it is still a camera, just one that is obsolete, and that obsolescence is just another way that cameras can fail to work. This particular case of obsolescence is so similar to that of the missing lens that it might, however, make more sense to think of it as an instance of exactly the same thing. Indeed, someone might one day make film for it and, being pedantic, it is almost certainly possible to cut up a larger format film and insert it, at which point no one would disagree that it is a camera, so this is a reasonable way to think about it. We can reasonably claim that it is still a camera, but that it is currently incomplete.

Notice what we are doing here, though. In effect, we are supposing that a full description of a camera – i.e. a device to take photos – must include its film, or at least some other means of capturing an image, such as a CCD. But, if you agree to that, where do you stop? What if the only film that the camera can take demands processing that is no longer available? What if it is a digital camera that creates images that no software can render? That’s not impossible. Imagine (and someone almost certainly will) a DRM’d format that relies on a subscription model for the software used to display it, and that the company that provides that subscription goes out of business. In some countries, breaking DRM is illegal, so there would be no legal way to view your own pictures if that were the case. It would, effectively, be the same as the case of a camera designed to have no shutter release, which (I would strongly argue) would not be a camera at all because (by design) it cannot take pictures. The bigger point that I am trying to make, though, is that the boundaries that we normally choose when identifying an object as a camera are, in fact, quite fuzzy. It does not feel natural to think of a camera as necessarily including its film, let alone also including the means of processing that film, but it fails to meet a common-sense definition of the term without those features.

A great many – perhaps most – of our technologies have fuzzy boundaries of this nature, and it is possible to come up with countless examples like this. A train made for a track gauge that no longer exists, clothing made in a size that fits no living person, printers for which cartridges are no longer available, cars that fail to meet emissions standards, electrical devices that take batteries that are no longer made, and so on. In each case, the thing we tend to identify as a specific technology no longer does what it should, despite nothing having changed about it, and so it is difficult to maintain that it is the same technology as it was when it was created unless we include in our definition the rest of the assembly that makes it work. One particularly significant field in which this matters a great deal is in computing. The problem occurs in every aspect of computing: disk formats for which no disk drives exist, programs written for operating systems that are no longer available, games made for consoles that cannot be found, and so on. In a modern networked environment, there are so many dependencies all the way down the line that virtually no technology can ever be considered in isolation. The same phenomenon can happen at a specific level too. I am currently struggling to transfer my websites to a different technology because the company providing my server is retiring it. There’s nothing about my sites that has changed, though I am having to make a surprising number of changes just to keep them operational on the new system. Is a website that is not on the web still a website?

Whatever we think about whether it remains the same technology, a digital technology that no longer does what the most essential definition of that technology claims it must has effectively died, even though its physical (digital) form might persist unchanged, because its boundaries are not simply its lines of code. This both stems from and leads to the fact that technologies tend to evolve to ever greater complexity, something especially obvious in the case of networked digital technologies, because parts of the multiple overlapping systems in which they must participate are in ever-shifting flux. Operating systems, standards, protocols, hardware, malware, drivers, network infrastructure, and so on can and do stop otherwise-unchanged technologies from working as intended, pretty consistently, all the time. Each technology affects others, and is affected by them. A digital technology that does not adapt eventually dies, even though (just like the camera) its form persists unchanged: it exists only in relation to a world that becomes increasingly complex thanks to the nature of the beast.

All species of technology evolve to become more complex, for many reasons, such as:

  • the adjacent possibles that they open up, inviting elaboration,
  • the fact that we figure out better ways to make them work,
  • the fact that their context of use changes and they must adapt to it,
  • the fact other technologies with which they are assembled adapt and change,
  • the fact that there is an ever-expanding range of counter-technologies needed to address their inevitable ill effects (what Postman described as the Faustian bargain of technology), which in turn create a need for further counter-technologies to curb the ill effects of those counter-technologies,
  • the layers of changes and fixes we must apply to forestall their drift into entropy.

The same is true of most individual technologies of any complexity, ie. those that consist of many interacting parts and that interact with the world around them. They adapt because they must – internal and external pressures see to that – and, almost always, this involves adding rather than taking away parts of the assembly. This is true of ecosystems and even individual organisms, and the underlying evolutionary dynamic is essentially the same. Interestingly, it is the fundamental dynamic of learning, in the sense of an entity adapting to an environment, which in turn changes that environment, requiring other entities within that environment to adapt in turn, which then demands further adaptation to the ever shifting state of the system around it. This occurs at every scale, and every boundary. Evolution is a ratchet: at any one point different paths might have been taken but, once they have been taken, they provide the foundations for what comes next. This is how massive complexity emerges from simple, random-ish beginnings. Everything builds on everything else, becoming intricately interwoven with the whole. We can view the parts in isolation, but we cannot understand them properly unless we view them in relation to the things that they are connected with.

Amongst other interesting consequences of this dynamic, the more evolved technologies become, the more they tend to be composed of counter-technologies. Some large and well-evolved technologies – transport systems, education systems, legal systems, universities, computer systems, etc – may consist of hardly anything but counter-technologies that are so deeply embedded that we hardly notice them any more. The parts that actually do the jobs we expect of them are a small fraction of the whole. The complex interlinking between counter-technologies starts to provide foundations on which further technologies build, and often feeds back into the evolutionary path, changing the things that they were originally designed to counter, leading to further counter-technologies to cater for those changes.

To give a massively over-simplified but illustrative example:

Technology: books.

Problem caused: cost.

Counter-technology: lectures.

Problem caused: need to get people in one place at one time.

Counter-technology: timetables.

Problem caused: motivation to attend.

Counter-technology: rewards and punishments.

Problem caused: extrinsic motivation kills intrinsic motivation.

Counter-technology: pedagogies that seek to re-enthuse learners.

Problem caused: education comes to be seen as essential to future employment but how do you know that it has been accomplished?

Counter-technology: exams provide the means to evaluate educational effectiveness.

Problem caused: extrinsic motivation kills intrinsic motivation.

Counter-technology: cheating provides a quicker way to pass exams.

And so on.

I could throw in countless other technologies and counter-technologies that evolved as a result to muddy the picture, including libraries, loan systems, fines, courses, curricula, semesters, printing presses, lecture theatres, desks, blackboards, examinations, credentials, plagiarism tools, anti-plagiarism tools, faculties, universities, teaching colleges, textbooks, teaching unions, online learning, administrative systems, sabbaticals, and much much more. The end result is the hugely complex, ever shifting, ever evolving mess that is our educational systems, and all their dependent technologies and all the technologies on which they depend that we see today. This is a massively complex system of interdependent parts, all of which demand the input of energy and deliberate maintenance to survive. Changing one part shifts others, that in turn shift others, all the way down the line and back again. Some are harder and less flexible than others – and so have more effect on the overall assembly – but all contribute to change.

We have a natural tendency to focus on the immediate, the local, and the things we can affect most easily. Indeed, no one in the entire world can hope to glimpse more than a caricature of the bigger picture and, being a complex system, we cannot hope to predict much beyond the direct effects of what we do, in the context that we do them. This is true at every scale, from teaching a lesson in a classroom to setting educational policies for a nation. The effects of any given educational intervention are inherently unknowable in advance, whatever we can say about average effects. Sorry, educational researchers who think they have a solution – that’s just how it is. Anyone that claims otherwise is a charlatan or a fool. It doesn’t mean that we cannot predict the immediate future (good teachers can be fairly consistently effective), but it does mean that we cannot generalize what they do to achieve it.

One thing that might help us to get out of this mess would be, for every change we make, to think more carefully about what it is a counter-technology for, and at least to glance at what the counter-technologies we are countering are themselves counter-technologies for. It might just be that some of the problems they solve afford greater opportunities for change than the consequences of them that we are trying to cope with. We cannot hope to know everything that leads to success – teaching is inherently distributed and inherently determined by its context – but we can examine our practice to find out at least some of the things that lead us to do what we do. It might make more sense to change those things than to adapt what we do to their effects.


I hate change, especially when it is inflicted upon me

For at least the past 5 or 6 years I have been hosting the websites I care most about, including this one, with a good-value, mostly reliable provider (OVH) that has servers in Canada. I don’t dislike the company and I’m still paying them, though the value isn’t feeling so great right now, because they are soon to retire their old VPS solution on which my sites are hosted, forcing me to either leave them or ‘upgrade’ to one of their new plans. Of course, the cheapest plan that can fit what I already have is more expensive than the old one. If I had the time, I might look for an alternative, but Canada is not well served by companies that provide cheap, reliable virtual private servers. There’s no way I’m moving my sites to US hosting (guys, stop letting rich corporations decide your laws for you, or at least elect someone to your presidency who’s not a dead ringer for the antichrist). I do have servers elsewhere but I live here, and I like Canada more than any other country.

My new hosting plan might be a bit better than the old one in some ways but worse in others. I am now paying $15/month instead of $10 for something I didn’t need to be improved, and that is mostly not much better than it was. I have lost a day or two of my own time to migration already (with just one site mostly migrated), and expect to lose more as I migrate more sites, not to mention significant downtime when I (inevitably) mess things up, especially because, of course, I am ‘fixing’ a few things in the process. In fairness, OVH have given me 6 months of ‘free’ hosting by way of compensation but, given the amount of work I need to put into it and the increased cost over the long term, it’s not a good deal for me.

I do understand why things must change. You cannot run the same old servers forever because things decay, and complexity (in management, especially) inevitably increases. This is true of all technologies, from languages to bureaucracies, from vehicles to software. But this seems like a sneaky way to impose a price hike, rather than an inevitable need. More to the point, if I need to change the technologies my sites run on, I want to be the one who makes those choices, and I want to choose exactly when I make them. That’s precisely why I put up with the pain and hassle of managing my ‘own’ servers. Well, that and the fact that I figure a computing professor ought to have a rough idea about real-world computing, and having my own server does mean I can help out friends and family from time to time.

Way back in time I used to run servers for a living so, though the pace of change (in me and the technologies I use) makes it more difficult to keep up than it used to be, I am not too scared of doing the hard stuff. I really like the control that managing a whole server gives me over everything. If it breaks, it’s my fault, but it’s also my responsibility when it works. I’ve always told myself that, worst case, all I need to do is to zip up the sites and move them, lock, stock, and barrel, somewhere else, so I am not beholden to proprietary tools and APIs, nor do I have much complexity to worry about when things need to change. I’ve also always known that this belief is over-simplistic and overly optimistic, but I’ve tried to brush that under the carpet because it’s only a problem when it becomes a problem. Now it’s a problem.
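That “zip it up and move it” fallback is at least cheap to rehearse. A minimal sketch, using a throwaway demo directory rather than my actual server layout (all paths here are illustrative):

```shell
# Rehearse the backup on a demo site directory (paths are hypothetical).
SITE_DIR="${SITE_DIR:-/tmp/demo-site}"
mkdir -p "$SITE_DIR"
echo "<html>hello</html>" > "$SITE_DIR/index.html"

# If the site has a database, dump it into the site directory first, e.g.:
# mysqldump example_db > "$SITE_DIR/example_db.sql"

BACKUP="/tmp/$(basename "$SITE_DIR")-$(date +%Y%m%d).tar.gz"
tar -czf "$BACKUP" -C "$(dirname "$SITE_DIR")" "$(basename "$SITE_DIR")"

# Always verify the archive is readable before trusting it.
tar -tzf "$BACKUP"
```

The point of the rehearsal is the last line: an archive you have never listed or restored is a hope, not a backup.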

On the bright side, I have steadfastly avoided cloud alternatives because they lock you in, in countless ways, eventually making you nothing but a reluctant cash cow for the cloud providers. This would have been many times worse if I had picked a cloud solution. I have one small server to worry about rather than dozens of proprietary services, and everything on it is open and standardized. But path dependencies can lock you in too. Though I rarely make substantial changes – that way madness lies – I’ve made quite a surprising number of small decisions about the system over the past few years that, on the whole, I have mostly documented but that, en masse, are more than a slight pain to deal with. This site was down for hours today, for instance, while I struggled to figure out why it had suddenly decided that it knew nothing about SSL any more, which it turned out was due to a change in the Let’s Encrypt certificates (that had to be regenerated for the new site) and some messiness with permissions that didn’t quite work the same way on the new servers (my bad for choosing this time to upgrade the operating system, but it was a job that needed doing), combined with some automation that wanted to change server configuration files that I expected to configure myself. This kind of process can reveal digital decay that you might not have noticed happening, too. Right now, for example, there appear to be about 50 empty files sitting in my media folder for reasons that I am unsure of, that were almost certainly there on the old server. I think they may be harmless, but I am bothered that there might be something that is not working that I have migrated over, that might cause more problems in future. More hours of tedious effort ahead.
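For what it is worth, hunting down those mysterious zero-byte files is a one-liner with `find`. A sketch, using a demo directory with stand-in files (the real media folder path would differ):

```shell
# Find zero-byte files in a media folder (demo directory with stand-in files).
MEDIA_DIR="${MEDIA_DIR:-/tmp/demo-media}"
mkdir -p "$MEDIA_DIR"
touch "$MEDIA_DIR/empty-one.jpg" "$MEDIA_DIR/empty-two.jpg"  # stand-ins for the mystery files
echo "real content" > "$MEDIA_DIR/photo.jpg"                 # a normal, non-empty file

find "$MEDIA_DIR" -type f -empty | wc -l   # how many zero-byte files
find "$MEDIA_DIR" -type f -empty           # which ones they are
```

Moving the results into a quarantine directory, rather than deleting them outright, keeps them around in case something turns out to depend on them after all.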

The main thing that all this highlights to me, though, is something I too often try to ignore: that I do not own what I think I own any more. This is my site, but someone else has determined that it should change. All technologies tend towards entropy, be they poems, words, pyramids, or bicycles. They persist only through an active infusion of energy. I suppose I should therefore feel no worse about this than when a drain gets blocked or a lock needs replacing, but I do feel upset, because this is something I was paying someone else to deal with, and because there is absolutely nothing I could have done (or at least nothing that would not have been much more hassle) to prevent it. I have many similar ‘lifetime’ services that are equally tenuous, ‘lifetime’ referring only to the precarious lifespan of the company in its current state, before it chooses to change its policies or gets acquired by someone else, or simply goes out of business. A few of the main things I have learned through having too many such things are:

  • to keep it simple: small, easily replaceable services trump big, highly functional systems every single time.
  • to always maintain alternatives. Even if OVH had gone belly-up, I still have mirrors on lesser sites that would keep me going in a worst case scenario, though it would have been harder work and less efficient to have gone down that path.
  • don’t trust any company, ever. They are not people so, even if they are lovely now, there is no guarantee that they will be next year, or tomorrow. And their purpose is to survive, and probably to make money, not to please you. You can trust people, but you cannot trust machines.
  • this is even true of the companies you work for. Much as I love my university, its needs and purposes only partially coincide with mine. The days of the Landing, for instance, a system into which I have poured much energy for well over 10 years, are very likely numbered, though I have no idea whether that means it has months or years left to live. Not my call, and not the call of any one individual (though someone will eventually sign its death warrant). With luck and concerted effort, it will evolve into something more wonderful but that’s not the point. Companies are not human, and they don’t think like humans.
  • if possible, stick with whatever defaults the software comes with or, at least, make sure that all changes are made in as few places as possible. It’s an awful pain to have to find the tweaks you made when you move it to a new system unless they are all in one easy-to-find place.
  • open standards are critical. There’s no point in getting great functionality if it relies on the goodwill of a company to maintain it, except where the value is unequivocally transient. I don’t much mind a trustworthy agent handling my spam filtering or web conferencing, for instance, though I’d not trust one to handle my instant messaging or site hosting, unless they are using standards that others are using. Open source solutions do die, and do lose support, but they are always there when you need them, and it is always possible to migrate, even if the costs may be high.

This site is now running on the new system, with a slightly different operating system and a few upgrades here and there. It might even be a little faster than the last version, eventually. I (as it turns out) wisely chose Linux and open source web software, so it continues to work, more or less as it did before, notwithstanding the odd major problem. If this had been a Windows or even a Mac site, though, it would have been dead long ago.

I have a bit of work to do on the styling here and there – I’m not sure quite what became of the main menu and (for aforementioned reasons) am reluctant to mess around with the CSS. If you happen to know me, or even if you don’t but can figure out how to deal with the anti-spam stuff in the comments section of this page, do tell me if you spot anything unusual.

Finally, if I’ve screwed up the syndication then you will probably not be reading this anyway. I’ve already had to kill the (weak) Facebook integration in order to make it work at all, though that’s a good riddance and I’m happy to see it go. Twitter might be another matter, though. Another set of proprietary APIs and, potentially, another fun problem to deal with tomorrow.

Addendum: so it turns out that I cannot save anything I write here. Darn. I thought it might be a simple problem with rewrite rules but that’s not it. When you read this, I will have found a solution (and it will probably be obvious, in retrospect) but it is making me tear my hair out right now.

Addendum to addendum: so I did screw up the syndication, and it was a simple problem with rewrite rules. After installing the good old fashioned WordPress editor everything seemed fine, but I soon discovered that the permalinks were failing too, so (though it successfully auto-posted to Twitter) links I had shared to this post were failing. All the signs pointed to a problem with Apache redirection, but all my settings were quadruple-checked correct. After a couple of hours of fruitless hacking,  I realized that the settings were quadruple-checked correct for the wrong domain name (, which actually redirects here to, but that is still running on the old site so not working properly yet). Doh. I had even documented this, but failed to pay attention to my own notes. It’s a classic technology path-dependency leading to increased complexity of exactly the kind that I refer to in my post. The history of it is that I used to use as my home page, and that’s how the site was originally set up, but I chose to switch to a few years ago because it seemed more appropriate and, rather than move the site itself to a new directory, I just changed everything in its database to use the domain name instead. Because I had shared the old site with many people, I set up a simple HTTP redirect from to point to this one, and had retained the virtual host on the server for this purpose. All perfectly logical, and just a small choice along the way, but with repercussions that have just taken up a lot of my time. I hope that I have remembered to reset everything after all the hacks I tried, but I probably haven’t. This is how digital decay sets in.
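The trap is easy to fall into because the redirect layer looks just like part of the live site’s configuration. A sketch of the general shape of such a setup, with example.com standing in for the real (unnamed) domains and with purely illustrative paths:

```apache
# Old virtual host: kept only to redirect visitors to the current domain.
<VirtualHost *:80>
    ServerName old.example.com
    # A blanket redirect; any rewrite rules carefully "fixed" here have no
    # effect on the live site, which is served by the vhost below.
    Redirect permanent / https://www.example.com/
</VirtualHost>

# Current virtual host: this is where rewrite settings actually matter.
<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot /var/www/blog
    <Directory /var/www/blog>
        AllowOverride All   # lets WordPress's .htaccess permalink rules apply
    </Directory>
</VirtualHost>
```

Running `apachectl -S`, which lists which virtual host answers for each server name, would have shortcut the quadruple-checking considerably.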