Obsolescence and decay

Koristka camera

All technologies require an input of energy – to be actively maintained – or they will eventually drift towards entropy. Pyramids turn to sand, unused words die, poems must be reproduced to survive, bicycles rust. Even apparently fixed digital technologies rely on physical substrates and an input of power to be instantiated at all. A more interesting reason for their decay, though, is that virtually no technologies exist in isolation: virtually all participate in, and/or are participated in by, other technologies, whether human-instantiated or mechanical. All are assemblies, and all exist in an ecosystem that affects them and that they affect. If parts of that system change, then the technologies that depend on them may cease to function even though nothing about those technologies has, in itself, altered.

Would a (film) camera for which film is no longer available still be a camera? It seems odd to think of it as anything else. However, it is also a bit odd to think of it as a camera, given that it must be inherent to the definition of a camera that it can take photos. It is not (quite) simply that, in the absence of film, it doesn’t work. A camera that doesn’t take photos because the shutter has jammed or the lens is missing is still a camera: it’s just a broken camera, or an incomplete camera. That’s not so obviously the case here. You could rightly claim that the object was designed to be a camera, thereby making the definition depend on the intent of its manufacturer. The fact that it used to be perfectly functional as a camera reinforces that opinion. Despite the fact that it cannot take pictures, nothing about it – as a self-contained object – has changed. We could simply say that it is therefore still a camera, just one that is obsolete, and that obsolescence is just another way that cameras can fail to work. This particular case of obsolescence is so similar to that of the missing lens, however, that it might make more sense to think of it as an instance of exactly the same thing. Indeed, someone might one day make film for it again and, pedantically speaking, it is almost certainly possible to cut up a larger-format film and insert it, at which point no one would disagree that it is a camera, so this is a reasonable way to think about it. We can reasonably claim that it is still a camera, but that it is currently incomplete.

Notice what we are doing here, though. In effect, we are supposing that a full description of a camera – ie. a device to take photos – must include its film, or at least some other means of capturing an image, such as a CCD. But, if you agree to that, where do you stop? What if the only film that the camera can take demands processing that is no longer available? What if it is a digital camera that creates images that no software can render? That’s not impossible. Imagine (and someone almost certainly will) a DRM’d format that relies on a subscription model for the software used to display it, and that the company providing that subscription goes out of business. In some countries, breaking DRM is illegal, so there would be no legal way to view your own pictures. It would, effectively, be the same case as that of a camera designed to have no shutter release, which (I would strongly argue) would not be a camera at all because (by design) it cannot take pictures. The bigger point that I am trying to make, though, is that the boundaries we normally choose when identifying an object as a camera are, in fact, quite fuzzy. It does not feel natural to think of a camera as necessarily including its film, let alone also the means of processing that film, but without those things it fails to meet a common-sense definition of the term.

A great many – perhaps most – of our technologies have fuzzy boundaries of this nature, and it is possible to come up with countless examples: a train made for a track gauge that no longer exists, clothing made in a size that fits no living person, printers for which cartridges are no longer available, cars that fail to meet emissions standards, electrical devices that take batteries that are no longer made, and so on. In each case, the thing we tend to identify as a specific technology no longer does what it should, despite nothing having changed about it, and so it is difficult to maintain that it is the same technology it was when it was created unless we include in our definition the rest of the assembly that makes it work. One field in which this matters a great deal is computing, where the problem occurs at every level: disk formats for which no disk drives exist, programs written for operating systems that are no longer available, games made for consoles that can no longer be found, and so on. In a modern networked environment, there are so many dependencies all the way down the line that virtually no technology can ever be considered in isolation. The same phenomenon can happen at a personal level too. I am currently struggling to transfer my websites to a different technology because the company providing my server is retiring it. Nothing about my sites has changed, yet I am having to make a surprising number of changes just to keep them operational on the new system. Is a website that is not on the web still a website?
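This dependency problem can be sketched in a few lines of code. The following is a minimal illustration, not a model of any real system – all the technology names and the `works` function are my own inventions – showing how retiring one remote part of an assembly breaks a technology that has not itself changed at all:

```python
# Each technology lists the other technologies it depends on.
# The names here are purely illustrative.
DEPENDS_ON = {
    "photograph": ["camera", "film", "film processing"],
    "camera": [],
    "film": ["film factory"],
    "film processing": ["processing lab"],
    "film factory": [],
    "processing lab": [],
}

def works(tech, available, deps=DEPENDS_ON):
    """A technology works only if it is still available and every part
    of its assembly, all the way down, also works."""
    if tech not in available:
        return False
    return all(works(d, available, deps) for d in deps.get(tech, []))

everything = set(DEPENDS_ON)
print(works("photograph", everything))                     # True

# Retire the film factory: the camera is untouched, yet no photograph
# can be made. The breakage lives in the assembly, not the object.
print(works("photograph", everything - {"film factory"}))  # False
print(works("camera", everything - {"film factory"}))      # True
```

The point of the sketch is that the failure is nowhere inside the "camera" node itself: it only appears when you trace the whole assembly.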

Whatever we think about whether it remains the same technology, if it does not do what the most essential definition of that technology says it must, then it has, in a meaningful sense, ceased to be that technology, even though its physical (digital) form might persist unchanged. This is because its boundaries are not simply its lines of code. This both stems from and leads to the fact that technologies tend to evolve towards ever greater complexity. It is especially obvious in the case of networked digital technologies, because parts of the multiple overlapping systems in which they must participate are in ever-shifting flux. Operating systems, standards, protocols, hardware, malware, drivers, network infrastructure, and so on can and do stop otherwise-unchanged technologies from working as intended, consistently and continually. Each technology affects others, and is affected by them. A digital technology that does not adapt eventually dies, even though (just like the camera) its physical form persists unchanged. It exists only in relation to a world that becomes ever more complex thanks to the nature of the beast.

All species of technology evolve to become more complex, for many reasons, such as:

  • the adjacent possibles that they open up, inviting elaboration,
  • the fact that we figure out better ways to make them work,
  • the fact that their context of use changes and they must adapt to it,
  • the fact that other technologies with which they are assembled adapt and change,
  • the fact that there is an ever-expanding range of counter-technologies needed to address their inevitable ill effects (what Postman described as the Faustian Bargain of technology), which in turn create a need for further counter-technologies to curb the ill effects of those counter-technologies,
  • the layers of changes and fixes we must apply to forestall their drift into entropy.

The same is true of most individual technologies of any complexity, ie. those that consist of many interacting parts and that interact with the world around them. They adapt because they must – internal and external pressures see to that – and, almost always, this involves adding rather than taking away parts of the assembly. This is true of ecosystems and even individual organisms, and the underlying evolutionary dynamic is essentially the same. Interestingly, it is the fundamental dynamic of learning, in the sense of an entity adapting to an environment, which in turn changes that environment, requiring other entities within that environment to adapt in turn, which then demands further adaptation to the ever-shifting state of the system. This occurs at every scale and at every boundary. Evolution is a ratchet: at any one point different paths might have been taken but, once taken, they provide the foundations for what comes next. This is how massive complexity emerges from simple, random-ish beginnings. Everything builds on everything else, becoming intricately interwoven with the whole. We can view the parts in isolation, but we cannot understand them properly unless we view them in relation to the things with which they are connected.

Amongst other interesting consequences of this dynamic, the more evolved technologies become, the more they tend to be composed of counter-technologies. Some large and well-evolved technologies – transport systems, education systems, legal systems, universities, computer systems, etc – may consist of hardly anything but counter-technologies, so deeply embedded that we hardly notice them any more. The parts that actually do the jobs we expect of them are a small fraction of the whole. The complex interlinking between counter-technologies starts to provide foundations on which further technologies build, and often feeds back into the evolutionary path, changing the things the counter-technologies were originally designed to counter and so leading to yet further counter-technologies to cater for those changes.

To give a massively over-simplified but illustrative example:

Technology: books.

Problem caused: cost.

Counter-technology: lectures.

Problem caused: need to get people in one place at one time.

Counter-technology: timetables.

Problem caused: motivation to attend.

Counter-technology: rewards and punishments.

Problem caused: extrinsic motivation kills intrinsic motivation.

Counter-technology: pedagogies that seek to re-enthuse learners.

Problem caused: education comes to be seen as essential to future employment but how do you know that it has been accomplished?

Counter-technology: exams provide the means to evaluate educational effectiveness.

Problem caused: extrinsic motivation kills intrinsic motivation.

Solution: cheating provides a quicker way to pass exams.

And so on.
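The ratchet in the chain above can even be rendered as a toy data structure. The pairs below are lifted from the example; the structure, names, and the idea of accumulating an "assembly" are mine, purely for illustration:

```python
# Each step: (counter-technology introduced, problem it then causes).
# The pairs follow the book-to-cheating chain in the text.
chain = [
    ("books", "cost"),
    ("lectures", "need to get people in one place at one time"),
    ("timetables", "motivation to attend"),
    ("rewards and punishments", "extrinsic motivation kills intrinsic motivation"),
    ("re-enthusing pedagogies", "how do we know education has been accomplished?"),
    ("exams", "extrinsic motivation kills intrinsic motivation"),
    ("cheating", "and so on"),
]

assembly = []
for counter_tech, problem_caused in chain:
    # Nothing is ever removed: each counter-technology joins the assembly
    # and bequeaths a new problem to the next step.
    assembly.append(counter_tech)
    print(f"{counter_tech} -> causes: {problem_caused}")

print(len(assembly))  # 7 interdependent parts, from one starting technology
```

The only output that matters here is the last line: the assembly monotonically grows, which is the ratchet.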

To muddy the picture further, I could throw in countless other technologies and counter-technologies that evolved as a result, including libraries, loan systems, fines, courses, curricula, semesters, printing presses, lecture theatres, desks, blackboards, examinations, credentials, plagiarism tools, anti-plagiarism tools, faculties, universities, teaching colleges, textbooks, teaching unions, online learning, administrative systems, sabbaticals, and much, much more. The end result is the hugely complex, ever-shifting, ever-evolving mess that is our educational systems today, together with all their dependent technologies and all the technologies on which they depend. This is a massively complex system of interdependent parts, all of which demand the input of energy and deliberate maintenance to survive. Changing one part shifts others, which in turn shift others, all the way down the line and back again. Some parts are harder and less flexible than others – and so have more effect on the overall assembly – but all contribute to change.

We have a natural tendency to focus on the immediate, the local, and the things we can affect most easily. Indeed, no one in the entire world can hope to glimpse more than a caricature of the bigger picture and, because this is a complex system, we cannot hope to predict much beyond the direct effects of what we do, in the contexts in which we do it. This is true at every scale, from teaching a lesson in a classroom to setting educational policies for a nation. The effects of any given educational intervention are inherently unknowable in advance, whatever we can say about average effects. Sorry, educational researchers who think they have a solution – that’s just how it is. Anyone who claims otherwise is a charlatan or a fool. It doesn’t mean that we cannot predict the immediate future (good teachers can be fairly consistently effective), but it does mean that we cannot generalize what they do to achieve it.

One thing that might help us to get out of this mess would be, for every change we make, to think more carefully about what it is a counter-technology for, and at least to glance at what the counter-technologies we are countering are themselves counter-technologies for. It might just be that some of the problems they solve afford greater opportunities for change than the consequences we are currently trying to cope with. We cannot hope to know everything that leads to success – teaching is inherently distributed and inherently determined by its context – but we can examine our practice to find out at least some of the things that lead us to do what we do. It might make more sense to change those things than to adapt what we do to their effects.


I am a professional learner, employed as a Full Professor and Associate Dean, Learning & Assessment, at Athabasca University, where I research lots of things broadly in the area of learning and technology, and I teach mainly in the School of Computing & Information Systems. I am a proud Canadian, though I was born in the UK. I am married, with two grown-up children, and three growing-up grandchildren. We all live in beautiful Vancouver.
