Does technology lead to improved learning? (tl;dr: it's a meaningless question)

Students using computers, public domain, https://www.flickr.com/photos/internetarchivebookimages/19758917473/

There have been (at least) tens of thousands of comparative studies of the effects of ‘technology’ on learning performed over the past hundred years or so. Though some have been slightly more specific (the effects of computers, online learning, whiteboards, eportfolios, etc.), and some more sensible authors use the term ‘tech’ to distinguish things with flashing lights from technologies in general, nowadays it is pretty common to just use the term ‘technology’ as though we all know what the authors mean. We don’t. And neither do they.

It makes no more sense to ask whether (say) computers have a positive or negative effect on learning than to ask whether (say) pedagogies have a positive or negative effect on learning. Pedagogies (methods and principles of learning and teaching) are at least as much technologies as computers and their uses and forms are similarly diverse. Some work better than others, sometimes, in some contexts, for some people. All are soft technologies that demand we act as coparticipants in their orchestration, not just users of them. This means that we have to add stuff to them in order that they work. None do anything of interest by themselves – they must be orchestrated with (usually many) other tools, methods, structures, and so on in order to do anything at all. All can be orchestrated well (assuming we know what ‘well’ really means, and we seldom really do) or badly.

It is instructive to wonder why it is that, as far as I know, no one has yet tried to investigate the effects of transistors, or screws, or words, or cables on learning, even though they are an essential part of most technologies that we do see fit to research and are certainly prerequisite parts of many educational interventions. The answer is, I hope, obvious: we would be looking at the wrong level of detail. We would be examining a part of the assembly that is probably not materially significant to learning success, albeit that, without them, we would not have other technologies that interest us more. Transistors enable computers, but they do not entail them.

Likewise computers and pedagogies enable learning, but do not entail it (for more on enablement vs entailment, see Longo et al, 2012 or, for a fuller treatment, Kauffman, 2019). True, pedagogies and computers may orchestrate many more phenomena for us, and some of those orchestrations may have more consistent and partly causal effects on whether an intervention works than screws and cables but, without considering the entire specific assembly of which they are a part, those effects are no more generalizably relevant to whether learning is effective or not than the effects of words or transistors.

Technologies enable (or sometimes disable) a range of phenomena, but only rarely do they generalizably entail a fixed set of outcomes and, if they do, there are almost always ways that we can assemble them with other technologies that alter those outcomes. In the case of something as complex as education, which always involves thousands and usually millions of technological components assembled with one another by a vast number of people, not just the teacher, every part affects every other. It is irreducibly complex, not just complicated. There are butterfly’s wing effects to consider – a single injudicious expletive, say, or even a smile can transform the effectiveness or otherwise of teaching. There’s emergence, too. A story is not just a collection of words, a lesson is not just a bunch of pedagogical methods, a learning community is not just a collection of people. And all of these things – parts and emergent or designed combinations of parts – interact with one another to lead to deterministic but unprestatable consequences (Kauffman, 2019).

Of course, any specific technology applied in a specific context can and will entail specific and (if hard enough) potentially repeatable outcomes. Hard technologies will do the same thing every time, as long as they work. I press the switch, the light comes on. But even for such a simple, hard technology, you cannot from that generalize that every time any switch is pressed a light will come on, even if you, without warrant, assume that the technology works as intended, because it can always be assembled with other phenomena, including those provided by other technologies, that alter its effects. I press many switches every day that do not turn on lights and, sometimes, even when I press a light switch the light does not come on (those that are assembled with smart switches, for instance). Soft technologies like computers, pedagogies, words, cables, and transistors are always assembled with other phenomena. They are incomplete, and do not do anything of interest at all without an indefinitely large number of things and processes that we add to them, or to which we add them, each subtly or less subtly different from the rest. Here’s an example using the soft technology of language:

  • There are countless ways I could say this.
  • There are infinitely many ways to make this point.
  • Wow, what a lot of ways to say the same thing!
  • I could say this in a vast number of ways.
  • There are indefinitely many ways to communicate the meaning of what I wish to express.
  • I could state this in a shitload of ways.
  • And so on, ad infinitum.

This is one tiny part of one tiny technology (this post). Imagine this variability multiplied by the very many people, tools, methods, techniques, content, and structures that go into even a typical lesson, let alone a course. And that is disregarding the countless other factors and technologies that affect learning, from institutional regulations to interesting news stories or conversations on a bus.

Reductive scientific methods like randomized controlled trials and null hypothesis significance testing can tell us things that might be useful to us as designers and enactors of teaching. We can, say, find out some fairly consistent things about how people learn (as natural phenomena), and we can find out useful things about how well different specific parts compare with one another in a particular kind of assembly when they are supposed to do the same job (nails vs screws, for instance). But these are just phenomena that we can use as part of an assembly, not prescriptions for successful learning. The question of whether any given type of technology affects learning is meaningless. Of course it does, in the specific, because we are using it to help enable learning. But it only does so in an orchestrated assembly with countless others, and that orchestration is and must always be substantially different from any other. So, please, let’s all stop pretending that educational technologies (including pedagogical methods) can be researched in the same reductive ways as natural phenomena, as generalizable laws of entailment. They cannot.

References

Arthur, W. B. (2009). The Nature of Technology: what it is and how it evolves (Kindle ed.). New York, USA: Free Press. (Arthur’s definition of technology as the orchestration of phenomena for some purpose, and his insights into how technologies evolve through assembly, underpin the above)

Kauffman, S. A. (2019). A World Beyond Physics: The Emergence and Evolution of Life. Oxford University Press.

Longo, G., Montévil, M., & Kauffman, S. (2012). No entailing laws, but enablement in the evolution of the biosphere. Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, Philadelphia, Pennsylvania, USA. Full text available at https://dl.acm.org/doi/pdf/10.1145/2330784.2330946


Bananas as educational technologies

  Banana Water Slide banana statue, Virginia Beach, Virginia

  One of my most memorable learning experiences, one that has served me well for decades and that I actually recall most days of my life, occurred during a teacher training session early in my teaching career. We had been set the task of giving a two-minute lecture on something central to our discipline. Most of us did what we could with a slide or two and a narrative to match, in a predictably pedestrian way. I remember none of them, not even my own, apart from one. One teacher (his name was Philippe), who taught sports nutrition, just drew a picture of a banana. My memory is hazy on whether he also used an actual banana as a prop: I’d like to think he did. For the next two minutes, he then repeated ‘have a banana’ many times, interspersed with some useful facts about its nutritional value and the contexts in which we might do so. I forget most of those useful facts, though I do recall that it has a lot of good nutrients and is easy to digest. My main takeaway was that, if we are in a hurry in the morning, we should not skip breakfast but eat a banana, because it will keep us going well enough to function for some time, and it is superior to coffee as a means of making us alert. His delivery was wonderful: he was enthusiastic, he smiled, we laughed, and he repeated the motif ‘have a banana!’ in many different and entertaining ways, with many interesting and varied emphases. I have had (at least) a banana for breakfast most days of my life since then and, almost every time I reach for one, I remember Philippe’s presentation. How’s that for teaching effectiveness?

But what has this got to do with educational technologies? Well, just about everything.

As far as I know, up until now, no one has ever written an article about bananas as educational technologies. This is probably because, apart from instances like the one above where bananas are the topic, or a part of the topic being taught, bananas are not particularly useful educational technologies. You could, at a stretch, use one to point at something on a whiteboard, as a prop to encourage creative thinking, or as an anchor for a discussion. You could ask students to write a poem on it, or calculate its volume, or design a bag for it. There may in fact be hundreds of distinct ways to use bananas as an educational technology if you really set your mind to it. Try it – it’s fun! Notice what you are doing when you do this, though. The banana does provide some phenomena that you can make use of, so there are some affordances and constraints on what you can do, but what makes it an educational technology is what you add to it yourself. Notwithstanding its many possible uses in education, on balance, I think we can all agree that the banana is not a significant educational technology.

Parts and pieces

Here are some other things that are more obviously technological in themselves, but that are not normally seen as educational technologies either:

  • screws
  • nails
  • nuts and bolts
  • glue

Like bananas, there are probably many ways to use them in your teaching but, unless they are either the subject of the teaching or necessary components of a skill that is being learned (e.g. some crafts, engineering, arts, etc) I think we can all agree that none of these is a significant educational technology in itself. However, there is one important difference. Unlike bananas, these technologies can and do play very significant roles in almost all education, whether online or in-person. Without them and their ilk, all of our educational systems would, quite literally, fall apart. However, to call them educational technologies would make little sense because we are putting the boundaries around the wrong parts of the assembly. It is not the nuts and bolts but what we do with them, and all the other things with which they are assembled, that matters most. This is exactly like the case of the banana.

Bigger pieces

This is interesting because there are other things that some people do consider to be sufficiently important educational technologies that they get large amounts of funding to perform large-scale educational research on them, about which exactly the same things could be said: computers, say. There is really a lot of research about computers in classrooms. And yet metastudies tend to conclude that, on average, computers have little effect on learning. This is not surprising. It is for exactly the same reason that nuts and glue, on average, have little effect on learning. The researchers are choosing the wrong boundaries for their investigations.

The purpose of a computer is to compute. Very few people find this of much value as an end in itself, and I think it would be less useful than a banana to most teachers. In fact, with the exception of some heavily math-oriented and/or computer science subjects, it is of virtually no interest to anyone.

The ends to which the computing they perform is put are another matter altogether. But those are no more the effect of the computer than the computer is the effect of the nuts and bolts that hold it together. Sure, these (or something like them) are necessary components, but they are not causes of whatever it is we do with them. What makes computers useful as educational technologies is, exactly like the case of the banana, what we add to them.

It is not the computer itself, but other things with which it is assembled such as interface hardware, software and (above all) other surrounding processes – notably the pedagogical methods – that can (but on average won’t) turn it into an educational technology. There are potentially infinite numbers of these, or there would be if we had infinite time and energy to enact them. Computers have the edge on bananas and, for that matter, nuts and bolts because they can and usually must embody processes, structures, and behaviours. They allow us to create and use far more diverse and far more complex phenomena than nuts, bolts, and bananas. Some – in fact, many – of those processes and structures may be pedagogically interesting in themselves. That’s what makes them interesting, but it does not make them educational technologies. What can make them educational technologies are the things we add, not the machines in themselves.

This is generalizable to all technologies used for educational purposes. There are hierarchies of importance, of course. Desks, classrooms, chairs, whiteboards and (yes) computers are more interesting than screws, nails, nuts, bolts, and glue because they orchestrate more phenomena to more specific uses: they create different constraints and affordances, some of which can significantly affect the ways that learning happens. A lecture theatre, say, tends to encourage the use of lectures. It is orchestrating quite a few phenomena that have a distinct pedagogical purpose, making it a quite significant participant in the learning and teaching process. But it and all these things, in turn, are utterly useless as educational technologies until they are assembled with a great many other technologies, such as (very non exhaustively and rather arbitrarily):

  • pedagogical methods,
  • language,
  • drawing,
  • timetables,
  • curricula,
  • terms,
  • classes,
  • courses,
  • classroom rules,
  • pencils and paper,
  • software,
  • textbooks,
  • whiteboard markers,
  • and so on.

None of these parts has much educational value on its own. Even something as unequivocally an educational technology as a pedagogical method is useless without all the rest, and changes to any of the parts may have substantial impacts on the whole. Furthermore, without the participation of learners who are applying their own pedagogical methods, it would be utterly useless, even in assembly with everything else. Every educational event – even those we apparently perform alone – involves the coparticipation of countless others, whether directly or not.

The point of all this is that, if you are an educational researcher or a teacher investigating your own teaching, it makes no sense at all to consider any generic technology in isolation from all the rest of the assembly. You can and usually should consider specific instances of most if not all those technologies when designing and performing an educational intervention, but they are interesting only insofar as they contribute, in relationship to one another, to the whole.

And this is not the end of it. Just as you must assemble many pieces in order to create an educational technology, what you have assembled must in turn be assembled by learners – along with plenty of other things like what they know already, other inputs from the environment, from one another, the effects of things they do, their own pedagogical methods, and so on – in order to achieve the goals they seek. Your own teaching is as much a component of that assembly as any other. You, the learners, the makers of tools, inventors of methods, and a cast of thousands are coparticipants in a gestalt process of education.

This is one of the main reasons that reductive approaches to educational research that attempt to isolate the effects of a single technology – be it a method of teaching, a device, a piece of software, an assessment technique, or whatever – with the intent of generalizing some statement about it cannot ever work. The only times they have any value at all are when all the technologies in question are so hard, inflexible, and replicable, and the uses to which they are put are so completely fixed, well defined, and measurable that you are, in effect, considering a single specific technology in a single specific context. But, if you can specify the processes and purposes with that level of exactitude then you are simply checking that a particular machine works as it is designed to work. That’s interesting if you want to use that precise machine in an almost identical context, or you want to develop the machine itself further. But it is not generalizable, and you should never claim that it is. It is just part of a particular story. If you want to tell a story then other methods, from narrative descriptions to rich case studies to grounded theory, are usually much more useful.

Why Pioneer Neurosurgeon Wilder Penfield Said the Mind Is More Than the Brain

https://mindmatters.ai/2020/02/why-pioneer-neurosurgeon-wilder-penfield-said-the-mind-is-more-than-the-brain/

I had not come across exactly this argument for mind-brain dualism before, though it resembles some going back to antiquity in its basic assumptions. It’s an interesting idea, proposed by Wilder Penfield, a neurosurgeon working mainly in the middle decades of the 20th century. The three foundations for his arguments were:

  1. despite hundreds of thousands of stimulations of patients’ brains under neurosurgery, not one ever stimulated the intellect: no one ever did calculus as a result of brain stimulation.
  2. when people have seizures caused by problems in the brain, all sorts of body movements occur, but there are no intellectual seizures. No one ever had a calculus seizure.
  3. though he could stimulate people to move arms etc, the patients always knew it was him doing it. He was never able to stimulate the will. He could not make them believe they were the cause of the movement.

His belief was, therefore, that the mind (the will and the intellect – logic, abstract reasoning, etc) cannot arise from the brain because, if it did, there would be at least some way to stimulate it by prodding the brain. Apparently there are others who still share his belief.

I’ve not investigated how the arguments have developed since then, nor whether anyone has succeeded where Penfield failed, but it seems to me to be a poor line of reductive deductive reasoning. It is fairly reasonable to assume, without recourse to magic and based on what we know of complex adaptive systems, that the mind is an emergent phenomenon that does not exist in one place in the brain, but that occurs through the interaction of billions of simpler elements, and clusters of elements, all recursively affecting one another, most likely at many hierarchical levels and boundaries. There are many things that behave differently as a whole than in their parts: an atom of a cell is not a cell, the cells of hearts are not hearts, a heart is not a body, a body is not society, and so on. The fact that small parts of the brain can be stimulated to produce measurable psychological and physical effects does not mean that all brain-based phenomena have to work that way. Stimulating an area of the brain as an attempt to evoke the mind is no more sensible than buying a can of beans as an attempt to evoke the economy.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/5481069/why-pioneer-neurosurgeon-wilder-penfield-said-the-mind-is-more-than-the-brain

The makers of the game complaining about the people playing it

“I think as long as we have education, we’re going to have people who are going to try and game the system and we just have to keep up with them,” said Deb Eerkes, the university’s director of student conduct and accountability.

(40 University of Alberta computing science students caught cheating, CBC News, March 4, 2020)

This is stuff and nonsense. Dangerous, cynical, subversive, appalling nonsense. There are lots of different definitions of ‘education’ but, as far as I know, not a single one of them includes grading and sorting learners. Education is supposed, above all else and non-negotiably, to be a system for learning. If you instead treat it as a system for grading, then of course rational students will take the shortest safe path to attain the best grades possible, whether or not that involves learning. Cheating is rarely if ever a very safe path but, if the stakes are set high enough and achieving success is out of their reach for whatever reason, then it is a calculated risk that some will always take. In fact, most will. Almost all studies of the phenomenon across the world show more than half of all students do so at some point (Jurdi et al., 2011), and some studies show rates over 80% (Ma et al., 2013). These people are not gaming the system. They are playing the game as it is designed to be played. It doesn’t help that we almost always force them to learn things that they neither want nor need to learn, at times when they are not ready, willing, or able to do so. And when I say ‘learn’ I mean that in the same sense as we learn the room number of our hotel room when we stay there. When there is no longer a need for it (the grade has been attained) then we have no use for it any more and, as often as not, promptly forget.

When cheating is so widespread and ubiquitous, the fault is clearly with the educational system, not the cheaters. A system that is designed to teach people but makes it a fundamental part of its design that some of them must fail to be taught is fundamentally broken. There are not many other technologies that are actually designed to consistently fail in such a spectacular way. Imagine the same design approach being used for, say, cars or nuclear power stations. Of course, some immoral manufacturers do rely on built-in obsolescence, many cripple parts of their products’ functionality in order to sell more of them, and so on. But these are not failures when viewed as ways of making money for the manufacturers; it’s just a failure of their users to understand their primary purpose. It is also true that, with the best will in the world, almost all technologies do, sooner or later, fail, but (with a few exceptions like some artworks) that is not what they are normally designed to do. That’s just entropy doing its thing. Indeed, unless something actively inputs significant energy into a system to maintain it and adapt it to its changing context, every system will eventually fail. That’s not what is happening here. Education is actually designed to fail.

As long as education is treated as a sorting machine, students will use counter-technologies to address its shortcomings, and educators will use counter-technologies to counter those counter-technologies, in an ever-escalating arms race that makes everyone the loser.

Here are a few (of many) ways we can improve this situation, even within the context of a system designed to fail:

  • build the system so that students can try and try again until they have actually learned what they seek to learn. If at all possible, even if it means charging more for the service, do not force them to keep to your timetable for this.
  • give them control over what they learn, and how. By all means let them delegate control to you (or anyone else) if they wish, but always let them take it back when they want or need to do so.
  • do not give grades: they destroy intrinsic motivation. Give feedback that helps students to improve. If grades are mandated by the system, only ever use two: A, and incomplete (Kohn, 1999, p.208). If that is impossible, at the very least allow students to participate in grading, let them choose at least some of the criteria, give them ownership of the process.
  • discover the outcomes that have actually occurred, rather than measure the extent to which students meet the outcomes we say they should meet. Students always learn more than we teach. Celebrate it. Outcome harvesting (Wilson-Grau & Britt, 2012) is a promising approach for this.
  • celebrate achievement. Do not punish failure to achieve. When grading, seek evidence of learning, not evidence of failure to learn. When there are failures to learn, treat them as opportunities to improve, not reasons to reject.
  • celebrate re-use. Everything builds on everything else; no one does anything alone. Let people ‘cheat’, authentically, as all of us ‘cheat’ when we use ideas and chunks of stuff other people have created, but make cheating pointless or counter-productive in achieving a grade. A simple way to do that is to make learning personal (not personalized) so that it is both relevant to student interests and needs (so intrinsically motivating), and always unique to them (so difficult to copy from elsewhere). It also helps to celebrate intelligent (properly attributed) re-use. Don’t ask students to reinvent wheels, but encourage them to use wheels well.
  • make learning visible. Build sharing into the structure of the process. This is motivating in itself, and the many eyes that result make cheating far more likely to be discovered. If ‘face’ is what matters to your students, then design the system so that they must show it.
  • build community. People tend to try much harder when they know that what they create will be seen by others that they care about.

I could go on indefinitely: there are countless ways to avoid or at least reduce the harms of grading, not one of which requires coercion, punishment, or harm. The main point, though, is that educational systems are technologies for learning, not for grading. If we can spin some useful awards (not rewards) out of that then that’s good, but it should not, in the process, subvert the whole point of having the things in the first place.

References

Jurdi, R., Hage, H. S., & Chow, H. P. H. (2011). Academic Dishonesty in the Canadian Classroom: Behaviours of a Sample of University Students. Canadian Journal of Higher Education, 41(3).

Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A’s, praise, and other bribes. Mariner Books.

Ma, Y., McCabe, D., & Liu, R. (2013). Students’ Academic Cheating in Chinese Universities: Prevalence, Influencing Factors, and Proposed Action. Journal of Academic Ethics, 11(3), 169-184. doi:10.1007/s10805-013-9186-7

Wilson-Grau, R., & Britt, H. (2012). Outcome harvesting. Cairo: Ford Foundation. http://www.managingforimpact.org/sites/default/files/resource/outome_harvesting_brief_final_2012-05-2-1.pdf

Originally posted at: https://landing.athabascau.ca/bookmarks/view/5466481/the-makers-of-the-game-complaining-about-the-people-playing-it

Obsolescence and decay

Koristka camera

All technologies require an input of energy – to be actively maintained – or they will eventually drift towards entropy. Pyramids turn to sand, unused words die, poems must be reproduced to survive, bicycles rust. Even apparently fixed digital technologies rely on physical substrates and an input of power to be instantiated at all. A more interesting reason for their decay, though, is that virtually no technologies exist in isolation: virtually all participate in, and/or are participated in by, other technologies, whether human-instantiated or mechanical. All are assemblies, and all exist in an ecosystem that affects them and which they affect. If parts of that system change, then the technologies that depend on them may cease to function, even though nothing about those technologies has, in itself, altered.

Would a (film) camera for which film is no longer available still be a camera? It seems odd to think of it as anything else. However, it is also a bit odd to think of it as a camera, given that it must be inherent to the definition of a camera that it can take photos. It is not (quite) simply that, in the absence of film, it doesn’t work. A camera that doesn’t take photos because the shutter has jammed or the lens is missing is still a camera: it’s just a broken camera, or an incomplete camera. That’s not so obviously the case here. You could rightly claim that the object was designed to be a camera, thereby making the definition depend on the intent of its manufacturer. The fact that it used to be perfectly functional as a camera reinforces that opinion. Despite the fact that it cannot take pictures, nothing about it – as a self-contained object – has changed. We could therefore simply say that it is still a camera, just one that is obsolete, and that obsolescence is just another way that cameras can fail to work. This particular case of obsolescence is so similar to that of the missing lens that it might, however, make more sense to think of it as an instance of exactly the same thing. Indeed, someone might one day make a film for it and, being pedantic, it is almost certainly possible to cut up a larger format film and insert it, at which point no one would disagree that it is a camera, so this is a reasonable way to think about it. We can reasonably claim that it is still a camera, but that it is currently incomplete.

Notice what we are doing here, though. In effect, we are supposing that a full description of a camera – i.e. a device to take photos – must include its film, or at least some other means of capturing an image, such as a CCD. But, if you agree to that, where do you stop? What if the only film that the camera can take demands processing that is no longer available? What if it is a digital camera that creates images that no software can render? That’s not impossible. Imagine (and someone almost certainly will) a DRM’d format that relies on a subscription model for the software used to display it, and that the company that provides that subscription goes out of business. In some countries, breaking DRM is illegal, so there would be no legal way to view your own pictures if that were the case. It would, effectively, be the same case as that of a camera designed to have no shutter release, which (I would strongly argue) would not be a camera at all because (by design) it cannot take pictures. The bigger point that I am trying to make, though, is that the boundaries that we normally choose when identifying an object as a camera are, in fact, quite fuzzy. It does not feel natural to think of a camera as necessarily including its film, let alone also including the means of processing that film, but it fails to meet a common-sense definition of the term without those features.

A great many – perhaps most – of our technologies have fuzzy boundaries of this nature, and it is possible to come up with countless examples like this. A train made for a track gauge that no longer exists, clothing made in a size that fits no living person, printers for which cartridges are no longer available, cars that fail to meet emissions standards, electrical devices that take batteries that are no longer made, and so on. In each case, the thing we tend to identify as a specific technology no longer does what it should, despite nothing having changed about it, and so it is difficult to maintain that it is the same technology as it was when it was created unless we include in our definition the rest of the assembly that makes it work. One particularly significant field in which this matters a great deal is in computing. The problem occurs in every aspect of computing: disk formats for which no disk drives exist, programs written for operating systems that are no longer available, games made for consoles that cannot be found, and so on. In a modern networked environment, there are so many dependencies all the way down the line that virtually no technology can ever be considered in isolation. The same phenomenon can happen at a specific level too. I am currently struggling to transfer my websites to a different technology because the company providing my server is retiring it. There’s nothing about my sites that has changed, though I am having to make a surprising number of changes just to keep them operational on the new system. Is a website that is not on the web still a website?
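
To make the computing case concrete, here is a tiny, hedged illustration of my own (not an example from anything above): the snippet below runs under any current Python 3 interpreter, but the older form shown in the comment was a perfectly good program for many years before the language moved on and made it a syntax error, without a byte of the original file ever changing.

```python
# Under Python 2, this program was written as:   print "Hello, world"
# That older file is unchanged wherever it still sits on disk, yet every
# modern interpreter now rejects it outright: the program 'died' because
# the ecosystem around it moved on, not because anything in it altered.
print("Hello, world")
```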

Whatever we decide about whether it remains the same technology, if it no longer does what the most essential definition of that technology says it must, then it has, to all intents and purposes, failed. This is because its boundaries are not simply its lines of code. This both stems from and leads to the fact that technologies tend to evolve to ever greater complexity. It is especially obvious in the case of networked digital technologies, because parts of the multiple overlapping systems in which they must participate are in ever-shifting flux. Operating systems, standards, protocols, hardware, malware, drivers, network infrastructure, and so on can and do stop otherwise-unchanged technologies from working as intended, pretty consistently, all the time. Each technology affects others, and is affected by them. A digital technology that does not adapt eventually dies, even though (just like the camera) its physical (digital) form persists unchanged. It exists only in relation to a world that becomes increasingly complex thanks to the nature of the beast.

All species of technology evolve to become more complex, for many reasons, such as:

  • the adjacent possibles that they open up, inviting elaboration,
  • the fact that we figure out better ways to make them work,
  • the fact that their context of use changes and they must adapt to it,
  • the fact other technologies with which they are assembled adapt and change,
  • the fact that there is an ever-expanding range of counter-technologies needed to address their inevitable ill effects (what Postman described as the Faustian Bargain of technology),  which in turn create a need for further counter-technologies to curb the ill effects of the counter technologies,
  • the layers of changes and fixes we must apply to forestall their drift into entropy.

The same is true of most individual technologies of any complexity, ie. those that consist of many interacting parts and that interact with the world around them. They adapt because they must – internal and external pressures see to that – and, almost always, this involves adding rather than taking away parts of the assembly. This is true of ecosystems and even individual organisms, and the underlying evolutionary dynamic is essentially the same. Interestingly, it is the fundamental dynamic of learning, in the sense of an entity adapting to an environment, which in turn changes that environment, requiring other entities within that environment to adapt in turn, which then demands further adaptation to the ever shifting state of the system around it. This occurs at every scale, and every boundary. Evolution is a ratchet: at any one point different paths might have been taken but, once they have been taken, they provide the foundations for what comes next. This is how massive complexity emerges from simple, random-ish beginnings. Everything builds on everything else, becoming intricately interwoven with the whole. We can view the parts in isolation, but we cannot understand them properly unless we view them in relation to the things that they are connected with.

Amongst other interesting consequences of this dynamic, the more evolved technologies become, the more they tend to be comprised of counter-technologies. Some large and well-evolved technologies – transport systems, education systems, legal systems, universities, computer systems, etc – may consist of hardly anything but counter-technologies, that are so deeply embedded we hardly notice them any more. The parts that actually do the jobs we expect of them are a small fraction of the whole. The complex interlinking between counter-technologies starts to provide foundations on which further technologies build, and often feed back into the evolutionary path, changing the things that they were originally designed to counter, leading to further counter-technologies to cater for those changes. 

To give a massively over-simplified but illustrative example:

Technology: books.

Problem caused: cost.

Counter-technology: lectures.

Problem caused: need to get people in one place at one time.

Counter-technology: timetables.

Problem caused: motivation to attend.

Counter-technology: rewards and punishments.

Problem caused: extrinsic motivation kills intrinsic motivation.

Counter-technology: pedagogies that seek to re-enthuse learners.

Problem caused: education comes to be seen as essential to future employment but how do you know that it has been accomplished?

Counter-technology: exams provide the means to evaluate educational effectiveness.

Problem caused: extrinsic motivation kills intrinsic motivation.

Solution: cheating provides a quicker way to pass exams.

And so on.

I could throw in countless other technologies and counter-technologies that evolved as a result to muddy the picture, including libraries, loan systems, fines, courses, curricula, semesters, printing presses, lecture theatres, desks, blackboards, examinations, credentials, plagiarism tools, anti-plagiarism tools, faculties, universities, teaching colleges, textbooks, teaching unions, online learning, administrative systems, sabbaticals, and much, much more. The end result is the hugely complex, ever-shifting, ever-evolving mess that is our educational systems as we see them today, with all their dependent technologies and all the technologies on which they depend. This is a massively complex system of interdependent parts, all of which demand the input of energy and deliberate maintenance to survive. Changing one part shifts others, which in turn shift others, all the way down the line and back again. Some are harder and less flexible than others – and so have more effect on the overall assembly – but all contribute to change.

We have a natural tendency to focus on the immediate, the local, and the things we can affect most easily. Indeed, no one in the entire world can hope to glimpse more than a caricature of the bigger picture and, being a complex system, we cannot hope to predict much beyond the direct effects of what we do, in the context that we do them. This is true at every scale, from teaching a lesson in a classroom to setting educational policies for a nation. The effects of any given educational intervention are inherently unknowable in advance, whatever we can say about average effects. Sorry, educational researchers who think they have a solution – that’s just how it is. Anyone that claims otherwise is a charlatan or a fool. It doesn’t mean that we cannot predict the immediate future (good teachers can be fairly consistently effective), but it does mean that we cannot generalize what they do to achieve it.

One thing that might help us to get out of this mess would be, for every change we make, to think more carefully about what it is a counter-technology for, and at least to glance at what the counter-technologies we are countering are themselves counter-technologies for. It might just be that some of the problems they solve offer greater opportunities for change than the consequences we are currently trying to cope with. We cannot hope to know everything that leads to success – teaching is inherently distributed and inherently determined by its context – but we can examine our practice to find out at least some of the things that lead us to do what we do. It might make more sense to change those things than to adapt what we do to their effects.


A simple phishing scam

If you receive an unexpected email from what you might, at first glance, assume to be me, especially if it is in atrocious English, don’t reply to it until you have looked very closely at the sender’s email address and have thought very carefully about whether I would (in a million years) ask you for whatever help it wants from you.

Being on sabbatical, my AU inbox has been delightfully uncrowded of late, so I rarely look at it until I’ve got a decent amount of work done most days, and occasionally skip checking it altogether, but a Skype alert from a colleague made me visit it in a hurry a couple of days back. I found a deluge of messages from many of my colleagues in SCIS, mostly telling me my identity had been stolen (it hadn’t), though a few asked if I really needed money, or wanted my groceries to be picked up. This would be surprising, given that I live about 1,000 km away from most of them. All had received messages in poorly written English purporting to be from me, and at least a couple of them had replied. One – whose cell number was included in his sig – got a phishing text almost immediately, again claiming to be from me: this was a highly directed and malicious attack.

The three simple tricks that made it somewhat believable were:

  1. the fraudsters had created a (real) Gmail account using the username jondathabascauca. This is particularly sneaky inasmuch as Gmail allows you to insert arbitrary dots into the name part of your email address, so they turned this into jond.athabascau.ca@gmail.com, which was sufficiently similar to the real thing to fool the unwary (see the sketch after this list).

  2. the crooks simply copied and pasted the first part of my official AU page as a sig, which is pretty odd when you look at it closely because it included a plain text version of the links to different sections on the actual page (they were not very careful, and probably didn’t speak English well enough to notice), but it again looked enough like a real sig to fool someone glancing at it quickly in the midst of a busy morning.

  3. they (apparently) only sent the phishing emails to other people listed on the same departmental bio pages, rightly assuming that all recipients would know me and so would be more likely to respond. The fact that the page still (inaccurately) lists me as school Chair probably means I was deliberately singled out.
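
For anyone curious about the mechanics of the first trick, here is a minimal sketch of my own (not anything the attackers, Gmail, or AU’s mail filters actually use): Gmail delivers any dotted variant of a username to the same inbox, so one crude way to reason about such an address is to compare its dot-stripped canonical form with what it is pretending to be.

```python
def normalize_gmail(address: str) -> str:
    """Return a canonical form of a Gmail address: Gmail ignores dots in the local part."""
    local, _, domain = address.partition("@")
    if domain.lower() in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local.lower()}@{domain.lower()}"

# The impersonator's address from this post, with the dots removed, is just the bare username...
print(normalize_gmail("jond.athabascau.ca@gmail.com"))  # -> jondathabascauca@gmail.com
# ...which merely *looks* like an athabascau.ca address; it has nothing to do with that domain.
```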

As far as I know they have not extended the attacks further than to my colleagues in SCIS, but I doubt that this is the end of it. If they do think I am still the Chair of the school, it might occur to them that chairs tend to be known outside their schools too.

This is not identity theft – I have experienced the real thing over the past year and, trust me, it is far more unpleasant than this – and it’s certainly not hacking. It’s just crude impersonation that relies on human fallibility and inattention to detail, that uses nothing but public information from our website to commit good old fashioned fraud. Nonetheless, and though I was not an intended victim, I still feel a bit violated by the whole thing. It’s mostly just my foolish pride – I don’t so much resent the attackers as the fact that some of the recipients jumped to the conclusion that I had been hacked, and that some even thought the emails were from me. If it were a real hack, I’d feel a lot worse in many ways, but at least I’d be able to do something about it to try to fix the problem. All that I can do about this kind of attack is to get someone else to make sure the mail filters filter them out, but that’s just a local workaround, not a solution.

We do have a team at AU that deals with such things (if you have an AU account and are affected, send suspicious emails to phishing@athabascau.ca), so this particular scam should have been stopped in its tracks, but do tell me if you get a weird email from ‘me’.

What is it to be Bayesian? The (pretty simple) math modelling behind a Big Data buzzword | Aeon Videos

This is a great little (16 minute) video that intuitively explains Bayesian probability from a variety of perspectives, but especially in visual (geometric) terms. Very useful for pretty much anyone – this is a critical thinking skill that applies in many contexts – but especially for researchers or programmers struggling with the idea.
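
As a taste of the kind of reasoning the video covers, here is a minimal worked example of Bayes’ rule (a sketch of mine, with numbers invented purely for illustration, not anything from the video): even a fairly accurate test for a rare condition yields mostly false positives, which is exactly the sort of counter-intuitive result that makes this a critical thinking skill.

```python
def posterior(prior: float, sensitivity: float, specificity: float) -> float:
    """P(condition | positive test), computed via Bayes' rule."""
    p_pos_given_cond = sensitivity
    p_pos_given_not = 1.0 - specificity          # false positive rate
    p_pos = p_pos_given_cond * prior + p_pos_given_not * (1.0 - prior)
    return (p_pos_given_cond * prior) / p_pos

# A rare condition (1% prevalence) and a fairly good test (95% sensitive,
# 90% specific) still gives only about a 9% chance that a positive result
# means the condition is actually present.
print(round(posterior(prior=0.01, sensitivity=0.95, specificity=0.90), 3))  # ~0.088
```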

Originally posted at: https://landing.athabascau.ca/bookmarks/view/5278878/what-is-it-to-be-bayesian-the-pretty-simple-math-modelling-behind-a-big-data-buzzword-aeon-videos

Is China really the educational powerhouse that the PISA rankings suggest? (tl;dr: not even close)

Administered by the OECD, PISA is basically a set of tests, adapted to each country, that attempt to measure educational performance across a range of skills in order to rank educational systems around the world. The rankings really matter to many countries, and help to determine educational policies across the planet, being especially impactful when countries don’t do well. Often, a low PISA score triggers educational reform (not always ending well), but sometimes countries just stop playing the game. India, for instance, dropped out a decade ago after coming second from the bottom, complaining of lack of adaptation to the Indian context (which is totally fair – India is incredibly diverse, so one measure absolutely does not fit all) though it will be back again next year. There are many reasons to dislike PISA, but the one I want to highlight here is Goodhart’s Law that, when a measure becomes a target, it ceases to be a good measure.

This article – a report on an interview with Andreas Schleicher, OECD Director of Education and Skills (a very smart fellow) – provides some useful food for thought. Though it focuses on China as a case in point, the interview is not so much about China’s ‘success’ as it is about PISA and its limitations in general. Among Schleicher’s more interesting insights is the fact that China’s test results came solely from its four most highly developed and economically successful provinces. These are very unrepresentative of the whole. In fact, China replaced Guangdong in its submission this time round because it was blamed for poorer performance last time, suggesting that the Chinese government’s involvement with PISA is far more concerned with appearing effective on the International stage – on presenting a facade – than on actually improving learning. PISA is a test for countries, and some are quite happy to cheat on the test.

In fact, the biggest contributing factor to test results is, of course and as always, economic. Schleicher notes that, worldwide, the 10% most socioeconomically advantaged students have, for at least 10 years, consistently outperformed the 10% most disadvantaged students in reading by 141 score points, which equates to approximately three years’ worth of schooling. It is not news that by far the most productive way to improve the effectiveness of educational systems would be to diminish wealth inequalities. It is, though, worth noting that schools play a relatively small role in Chinese education, especially among more prosperous families, with vast amounts of (paid, private) tuition occurring outside schools. Similar extracurricular tuition patterns occur in several of the other highest ranking PISA countries, such as South Korea, Singapore, Japan, Hong Kong, and Taiwan. It is significant that, in these countries, test scores are extremely important in almost every way – economically, culturally, socially, and more – so there is a lot of teaching focused on test results at the expense of almost everything else.

It is also notable – and almost certainly a direct consequence of tests’ importance – that over 80% of Chinese students admit to cheating, which might be more than a minor contributor to the good results. In fairness, cheating rates for the US and Canada are also not too far short of that, correctly implying a serious endemic malaise with our educational systems worldwide (Goodhart’s Law, again), so this is just a relatively slight difference of degree, not of kind. Given the large amount of time spent learning outside school, the high levels of cheating, and the cherry-picking of top performing provinces, the implications are that, far from having a world-leading education system, teaching in China is actually really awful, on average. Among the things that can be gleaned from PISA results are that China performs very badly on productivity (points per hour of learning), and ranks 8th from bottom on life satisfaction for students. It is essentially a failure, by any reasonable measure. The PISA ranking is not quite a fiction but it is close. At least in the case of the high overall placing of China, it certainly fails to correctly measure the effectiveness of the educational system, if results are taken at face value.

There appear to be two distinct patterns among those countries that consistently achieve high PISA results, and they appear to divide along broadly cultural lines. The first group includes the likes of China, South Korea, Japan, and Taiwan (all quite notable examples of what Hofstede describes as collectivist cultures), with high levels of out-of-school tuition, a strong educational emphasis on test scores, and great personal penalties for failure. These countries seem to achieve their high ranking by a very strong focus on passing the tests, with high penalties for failure and great significance for success. As a consequence, their educational systems cannot be seen as standalone causes but, rather, as creators of problems that have to be overcome by other means (most notably in the form of extra-curricular assistance that funds a booming personal tuition economy).

Standard-bearers for the other main pattern are Finland and Estonia, as well as Switzerland and Canada (though the latter two devolve educational responsibility to canton/province, so they are less consistently successful in the rankings). In Hofstede’s terms, these are more individualist societies. In this group, test scores (slightly) tend to be seen as a measure of only one of several consequences of teaching, rather than being the primary motivation for doing it. I am certainly culturally biased, but I cannot help but think this is a better way of going about the process: education is for society, much more than for the individual, and certainly not for economic gain, so it must be understood across many dimensions of value. Whether they agree with me or not, I am almost certain that most educators everywhere would like to think that education is about much more than achieving good test scores. It is only a matter of degree, though. Education in all countries I am aware of relies on extrinsic motivation, and there are large pockets of excellence in the first group and large pockets of awfulness in the second. Averages are a stupid way to evaluate a whole country’s educational system, and they conceal great diversity. The boundaries are also blurred. Estonia, for instance, which is singled out in the article as a success story due to its rapid rise through the rankings, actually also makes extensive use of extra tuition in the form of ‘long day groups’ that take place in schools after curricular instruction. Estonia is no worse than most other countries in this regard, and in some ways superior, because such long day groups take the place of at least some of the homework that is widely required in many countries, despite a singular lack of evidence that (on average) it has more than a tiny effect on learning. At least Estonia’s approach involves a modicum of good education theory and evidence to support it.

Overall, I think the main thing that is revealed by the PISA process is that average test scores are, for the most part, an extremely poor means of comparing education systems. Given that it is useful for a government to know how its policies are working, there does need to be some way for it to observe how schools are doing, but it would seem more sensible to rely on trained inspectors reviewing schools, their teaching, the work of children, etc., than on test scores. At the very least they should be considering signs of happiness, motivation, community, and social achievement at least as much as academic achievement. However, Goodhart’s Law would cause its usual harm if such things became the dominant measures of success, and more than the lightest of inspections would normally cause more harm than good. I experienced something not too far removed from this (in the form of OFSTED inspections) in the UK as a parent and school governor back in the 1990s. The results were not pretty. For about a year leading up to an inspection, teachers’ workloads were massively strained by the need to report on everything, students suffered, and resentments piled up; everyone suffered. Though OFSTED reports did sometimes lead to improvements in particularly bad schools, the effects on the vast majority of schools (and especially on teachers) were disastrous, often radically disrupting work, increasing stress levels beyond reasonable bounds, and leading to more than a few resignations and early retirements from the best, most dedicated teachers, who could barely cope with the workloads at the best of times. They were forced to become bureaucrats, which is a role to which teachers tend to be very poorly suited. It was (and, I believe, may still be) beyond stupid, despite best intentions.

What is really needed is something more collegial, that is focused on improvement rather than judgment, that celebrates and builds on success rather than amplifying failure, and where everyone involved in the process benefits and no one suffers. The whole point (as far as I understand it) is to improve what we do, not to blame those who fail. Appreciative Inquiry is a good start. Simple things like peer observation (with no penalties, no judgments, just formative commentary) can be more than adequate for the most part at a local level, and are beneficial to both observer and observed. Maybe – if someone thinks it necessary – inspectors (volunteers, perhaps, from the teaching profession) could look at samples of student work from further afield with a similarly positive, formative attitude. It might not provide numbers to compare but, if there were enough of a culture of sharing across the whole sector, and if inspectors came from across the geographical and cultural spectrum, it ought to be good enough to improve practice, and to spread good ideas around, so the intent would be achieved. Governments could receive reports on what actually matters – that things are getting better – rather than on what does not (that things are bad, according to some unreliable measurement that compares nothing of any real value to educators, students, or society). Teaching is a deeply soft technology that cannot be reductively simplified to a relationship of entailment. It can, though, as a lived, creative, social process, be improved. This should be the goal of all teachers, and of all those who can influence the process, including governments. PISA only achieves such results in a tiny minority of extreme cases. For the most part, it actively militates against them because it substitutes test scores for education in all its rich complexity. Test scores are not even a passable proxy for it. They are a gross distortion, an abomination that can trivially be turned to evil, self-serving purposes without in any way improving learning. Schleicher fully understands this. I wish that the people whom his organization serves did too.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/5209267/is-china-really-the-educational-powerhouse-that-the-pisa-rankings-suggest-tldr-not-even-close

Excellent news: Twitter Makes A Bet On Protocols Over Platforms.

Well, this is good news! Of course, the road to Hell is paved with good intentions and there is much that could go wrong in between plan and execution, but it seems that Twitter is recommitting itself to openness, standards, and the use of protocols for a federated social Web (see also https://twitter.com/biz/status/1204784388107636737 and https://twitter.com/jack/status/1204766078468911106 for the announcements by Twitter’s founders). It is a bit worrying that Twitter wants to help invent a new protocol when there are plenty of established ones that already exist (ActivityPub, OpenSocial, FOAF, XMPP, OStatus, OpenID, OAuth, PubSubHubbub, Zot, Diaspora, etc, etc). Also, there is already a pretty serviceable Twitter competitor in the form of Mastodon, which does most of what they seem to want to do. However, the fact that they are thinking about protocols rather than platforms at all is very heartening. The world needs much, much more of this.

Twitter, as it evolved in its first couple of years, was brilliant. What made it great was that it could act as a highly efficient social bookmarking system, *plus* commentary, *plus* folksonomy, *plus* instant messaging, *plus* social networking, all through one incredibly simple, flexible, open field. It was, in part, a descendant of social bookmarking systems that people like me developed in the 90s, but there were no predetermined fields for URLs (you could have more than one, or none at all); there were no predetermined categories; the tags (#hashtags) were trivially easy to include, without separate fields (this is what makes it highly supportive of social sets, in which the topic matters more than the person); and it had the lowest-threshold social networking (especially through @mentions), again without the need for separate fields. It was a single small text box that did everything, and that could be used to share more or less anything with more or less anyone but, thanks to its size, was primarily used to connect to other things. Part of what made this so cool is that #hashtags and @mentions were not designed into Twitter at the start, but emerged memetically from practice: the system evolved (at first) through a collective design process. Twitter’s implementation of such things in software ingeniously used automation to make the overall system even softer and more flexible than it was before. It was generous in what it shared, too, so a flourishing ecosystem grew around it, at least for the first few years. You could use pretty much any Twitter data to which you had access in any way you liked. It was a very simple, very powerful component, a tool rather than an environment or platform. In retrospect, I wish we had used Twitter as a model when developing the Landing, rather than the kitchen-sink approach that we settled for.
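
As a purely illustrative aside (this is not Twitter’s actual code), the little sketch below shows why a single free-form field could carry all of those roles at once: URLs, #hashtags, and @mentions are just textual conventions, so the structure can be recovered from the one text box after the fact rather than demanded up front through separate fields. The handle and URL in the example are invented.

```python
import re

def parse_post(text: str) -> dict:
    """Pull the emergent structure (links, tags, mentions) out of one free-form post.

    An illustrative sketch only, not Twitter's real parsing rules.
    """
    return {
        "urls": re.findall(r"https?://\S+", text),   # social bookmarking
        "hashtags": re.findall(r"#(\w+)", text),     # folksonomy
        "mentions": re.findall(r"@(\w+)", text),     # social networking
        "text": text,                                # commentary / messaging
    }

# A made-up example: one small text box carrying a link, two tags, and a mention.
print(parse_post("Worth a read via @example_user https://example.com/article #edtech #softtech"))
```

Because none of that structure is imposed in advance, conventions like hashtags could emerge from use first and be softened into software later.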

Twitter is widely viewed as a competitor to Facebook – increasingly even by the company itself – though it was (and still is, to an extent) a very different animal. Facebook has tried to emulate all of Twitter’s features as a subset of its own horrible, evil mess, but completely misses the point. The strength of Twitter is that it (still) does one simple thing very well: it is primarily a hub that makes the rest of the Web more connected, rather than (like Facebook) sucking everything into itself. However, that one simple thing is as soft and open to countless, unprestatable uses as an elastic band, a screwdriver, or good old-fashioned email. Jack Dorsey’s announcement of the new move is itself a classic example of this, building a long-form announcement from a thread of short tweets. Beyond simply connecting stuff, people have used it to write novels, coordinate social protests, conduct personal conversations, influence elections, and do thousands of other things. It is a very soft, very human-driven tool.

For a few years it was very open, and it seemed to be getting more so, but it lost its way after that and became much more the self-contained platform we see today, pulling a lot of features into its core, closing off many ways of connecting with and using it, and increasingly hardening things that should have stayed soft, notably in its algorithmic placing and sorting of tweets. Though its old character limit was frustrating at times, it was actually a very good idea to set such severe boundaries, because they ensured that Twitter remained a connecting hub rather than a self-contained site. The new, higher character limit is still somewhat constraining, but it makes longer-form conversations increasingly possible – especially when combined with the easy upload of video, files, images, etc – thus drawing people to stay more at the hub, rather than to visit the things that it connects. It has become more and more a social media platform: increasingly isolated, increasingly its own bubble, increasingly driven by the popularity contests and narcissism amplifiers that seldom end well. Twitter’s announcement, I hope, marks a reversal of this pattern. I hope (though don’t expect) that they get the Mastodon gang on board. I will watch with great interest, whatever happens.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/5140548/excellent-news-twitter-makes-a-bet-on-protocols-over-platforms

E-Learn 2019 presentation – X-literacies: beyond digital literacy

Here are my slides from E-Learn 2019, in New Orleans. The presentation was about the nature of technologies and their roles in communities (groups, networks, sets, whatever), their highly situated nature, and their deep intertwingling with culture. In general, it argues that literacies (as opposed to skills, knowledge, etc) might most productively and usefully be seen as the hard techniques needed to operate the technologies that any given culture requires. As well as clarifying the term and using it in the same manner as the original term “literacy”, this implies there may be an indefinitely large range of literacies, because we are all members of an indefinitely large number of overlapping cultures. All sorts of possibilities and issues emerge from this perspective.

Abstract: Dozens, if not hundreds, of literacies have been identified by academic researchers, from digital- to musical- to health- to network-literacy, as well as combinatorial terms like new-, multi-, 21st Century-, and media-literacy. Proponents seek ways to support the acquisition of such literacies but, if they are to be successful, we must first agree on what we mean by ‘literacy’. Unfortunately, the term is used in many inconsistent and incompatible ways, from simple lists of skills to broad characteristics or tendencies that are either ubiquitous or meaninglessly vague. I argue that ‘literacy’ is most usefully thought of as the set of learned techniques needed to participate in the technologies of a given culture. Through use and application of a culture’s techniques, increasing literacy also leads to increasing knowledge of the associated facts and adoption of the values that come with that culture. Literacy is thus contextually situated, mutates over time as a culture and its technologies evolve, and participates in that co-evolution. As well as subsuming and eliminating much of the confusion caused by the proliferation of x-literacies, this opens the door to more accurately recognizing the literacies that we wish to use, promote, and teach for any given individual or group.