Asus Flip C234 Chromebook review

I’ve been thinking for some time that I need to investigate Chromebooks – at least, ever since ChromeOS added the means to run Android and Linux apps alongside Chrome web apps. I decided to get one recently because I was going on a camping trip during which I’d be required to do some work, and the (ridiculously many) machines I already had were all some combination of too limited, too unreliable, too fragile, too heavy, too power-hungry, too buggy, or too expensive to risk in a muddy campsite. A Chromebook seemed like a good compromise. I wanted one that was fairly cheap, had a good battery life, was tough, could be used as a tablet, and that was not too storage-limited, but otherwise I wasn’t too fussy. One of the nice things about Chromebooks is that, notwithstanding differences in hardware, they are all pretty much the same.

After a bit of investigation, I noticed that an Asus C234 Flip with an 11.6″ screen was available at BestBuy for about $400, which seemed reasonable enough, based on the advertised specs, and even more reasonable when they lopped $60 off the price for Labour Day. Very unusually, though, the specs on the site were literally all that I had to go on. Though there are lots of Flip models, the C234 is completely unknown, apparently even to Asus (at least on its websites), let alone to reviewers on the web, which is why I am writing this! There’s no manual with it, not even on the machine itself, just a generic leaflet. Following the QR code on the base of the machine leads to a generic not-found page on the Asus site. Because it looked identical to the better-known Flip C214, I thought BestBuy must have made a labelling mistake, but the model number is clearly printed in two places on the base. Despite the label it is, in fact, as I had guessed and eventually confirmed by circuitous means, identified by Asus themselves as a Flip C214MA, albeit with 64GB of storage rather than the more common 32GB and a very slightly upgraded Intel Celeron N4020 CPU instead of an N4000. This model lacks the option of a stylus that is available for many C214 models (pity – that seemed very nice). It was not quite the cheapest Chromebook to fit the bill, but I trust Asus a lot and have never regretted buying one of their machines over 20 years or more of doing so quite frequently. They really know how to design and build a great computer, and they don’t make stupid compromises even on their cheapest machines, be they PCs, tablets, phones, or netbooks. Great company in a sea of dross.

Hardware overview

The C234 comes with only 4GB RAM, which means it can get decidedly sluggish when running more than a handful of small apps, especially when some of them are running under Linux, but it is adequate for simple requirements like word processing, light photo editing, audio recording, web browsing, email, webinars, etc: just the use cases I had in mind, in fact. The 64GB of storage is far less than I’d prefer but, I calculated, should be fine for storing apps. I assumed (wrongly) that any data I’d need locally could be kept on the 256GB SDXC card that I bought to go with it, so I was – foolishly – not too concerned. It turns out that Android apps running under ChromeOS that can save data to the SD card are few and far between, and ChromeOS itself is barely aware of the possibility, although, of course, most apps can read files from just about anywhere, so it is not useless. Unfortunately, the apps that do not support it include most video streaming services and Scribd (which is my main source of ebooks, magazines, and audiobooks) – in other words, the ones that actually eat up most space. The physical SD slot is neat: the card is very easy to insert and difficult (but not too difficult) to remove, so it is not going to pop out unexpectedly.

The computer has two full-spec USB-C ports that can be used for charging the device (45W PD, and not a watt less), as well as for video, external storage, and all the usual USB goodness. It has one USB-A 3.0 socket, and a 1/8″ combo mic/headphone socket that can take cellphone headsets or dedicated speakers/microphones. The wifi and bluetooth are both reasonably modern and mainstream: adequate for everything I currently own, but maybe not for everything I might buy next year. Wifi reception is very good (better than my Macbook Pro), but there is no wifi 6. There’s no cellular modem, which is a pity, but I have a separate device to handle that. There is a plastic tab where a stylus is normally stored but, buyer beware, if the detailed model number doesn’t end in ‘S’ then it does not and cannot support a stylus: no upgrade path is available, as far as I can tell. It does have a Kensington lock slot, which I guess reflects how it might be used in some schools where students have to share machines. Going back to the days when I used to manage university computer labs, I would have really liked these machines: they are very manageable. A Kensington lock isn’t going to stop a skilled thief for more than a couple of seconds but, as part of a security management strategy, it fits well.

The battery life is very good. It can easily manage 11-12 hours between charges from its 50Wh battery, and could almost certainly do at least a couple more hours if you were not stretching its capabilities or using the screen on full brightness (I’m lazy and my eyesight is getting worse, so I tend to do both). It charges pretty quickly: I seldom run it down completely, so the longest I have needed to leave it plugged in, after letting it drop below 20%, has been a couple of hours. It uncomplainingly charges from any sufficiently powerful USB-C charger.

As a laptop the Flip feels light in the hand (it weighs in at a little over a kilogram) but, as a tablet, it is pretty heavy and unwieldy, and the keyboard cannot be detached. This is a fair compromise. Most of the time I use it as a laptop so I’d rather have a decent keyboard and a battery that lasts, but it is not something you’d want to hold for too long in the kinds of orientation you might hold an iPad or e-reader. Its 360-degree hinge lets the screen fold to any intermediate angle, so it doesn’t need a separate stand if you want to perch it on something, which is handy in a tent: while camping, I used it both in (appropriately) tented orientation and wrapped over a big tent pocket so that it was held in place by its own keyboard.

Video and audio

The touch screen is OK. At 1366×768 resolution and a meagre 135 pixels per inch (1366×768 works out to about 1567 pixels across the 11.6″ diagonal), it is not even full HD, let alone a retina display. It is perfectly adequate for my poor eyesight, though: fairly bright, with acceptable but not great viewing angles, very sharp, and not glossy (I hate glossy screens). I’d much rather have longer battery life than a stunning display so this is fine for me. Viewing straight-on, I can still read what’s on the screen in bright sunshine and, though it lacks a sensor to auto-adjust the brightness, it does have an automatic night-time mode (which reddens and dims the display) that can be configured to kick in at sunset, and there are keyboard keys to adjust brightness. The generic integrated Intel GPU works, but that’s all I can say of it. I’d certainly not recommend it for playing graphics-intensive games or using Photoshop, and don’t even think about VR or compiling big programs because it ain’t going to happen.

The speakers, though, are ridiculously quiet: even pumped up to full volume, a little rain on the tent made them inaudible, and they are quite tinny. I’m guessing that this may have a bit to do with its target audience of schoolkids – a lack of volume might be a good thing in a classroom. The speakers are down-facing, so the sound benefits a little from sitting on a table or desk, but not a lot. The headphone volume is fine and it plays nicely with bluetooth speakers. It has a surprisingly large array of 5 microphones, scattered quite widely, that do a pretty good job of echo cancellation and noise reduction, providing remarkably good sound quality (though not exactly a Blue Yeti).

It has two cameras: one 5MP device conventionally placed above the screen when used in laptop mode, the other on the same surface as the keyboard, in the bottom right corner when typing, which is weird until you remember it can be used in tablet mode, when it becomes a rear-facing camera. Both cameras are very poor and the rear-facing one is appalling (not even 1MP). They do the job for video conferencing, but not much else. That’s fine by me: I seldom need to take photos with my notebook/tablet and, if I want better quality, it handles a Logitech webcam very happily.

Input devices

The keyboard is a touch smaller than average, so it takes a bit of getting used to if you have been mostly using a full-sized keyboard, but it is quite usable, with plenty of travel in the keys and, though each keypress is tactile enough that you know you have pressed it, it is not clicky. It is even resistant to spilt drinks or a spot or two of rain. Having killed a couple of machines this way over the past thirty years or so (once by sneezing), I wish all keyboards had this feature. The only things I dislike about it are that it is not backlit (I really miss that) and that the Return key is far too small, bunched up with a load of symbol keys and easily missed. Apart from that, it is easy to touch type and I’d say it is marginally better than the keyboard on my Macbook Pro (2019 model). The keys are marked for ChromeOS, so they are a bit fussy and it can be hard to identify which of the many quote marks are the ones you want, because they are slightly differently mapped in ChromeOS, Android, and Linux. On the other hand, I’m not at all fond of ChromeOS’s slightly unusual keyboard shortcuts, so it’s nice that the keys tell you what they can do, even though it can be misleading at times.

The multi-touch screen works well with fingers, though it could be far more responsive when used with a capacitive stylus: the slow speed of the machine really shows here. Unless you draw or write really slowly, you are going to get broken lines, whether using native Chrome apps, Android, or Linux. I find it virtually unusable this way.

The touchpad is buttonless and fine – it just works as you would expect, and its conservative size makes it far less likely to be accidentally pressed than the gigantic glass monstrosity on my Macbook Pro. I really don’t get the point of large touchpads positioned exactly where you are going to touch them with your hand when typing.

There is no fingerprint reader or face recognition, though it mostly does unlock seamlessly when it recognizes my phone. It feels quite archaic to have to enter a password nowadays. You can get dongles that add fingerprint recognition and that work with Chromebooks, but that is not really very convenient.

Build

The machine is made to be used by schoolkids, so it is built to suffer. The shell of the Flip is mostly made of very sturdy plastic. And I do mean sturdy. The edges are rubberised, which feels nice and offers quite a bit of protection. Asus claim it can be dropped onto a hard floor from desk height, and that the pleasingly textured covering hides and prevents scratches and dents. It certainly feels very sturdy, and the texture feels reassuring in the hand, with a good grip so that you are not so likely to drop it. It doesn’t pick up fingerprints as badly as my metal-bodied or conventional plastic machines. Asus say that the 360-degree hinges should survive 50,000 openings and closings, and that the ports should survive at least 5,000 plug insertions. I believe them: everything about it feels well made and substantial. You can stack 30kg on top of it without it flinching. For the most part it doesn’t need its own case: I felt no serious worries throwing it into a rucksack, albeit that it is neither dust nor water resistant (except under the keyboard). Asus build it to the American military’s MIL-STD-810G spec, which sounds impressive, though it should be noted that this is not so much a particular measure of toughness as a quality control standard to ensure that it will survive the normal uses it is designed for. It’s not made for battlefields, boating, or mountaineering, but it is made to survive 11-year-olds, and that’s not bad.

It’s not unattractive, but nor is it going to be a design classic. It is just a typical, old-fashioned, fairly nondescript and innocuous small laptop, unlikely to attract thieves to the same extent as, say, a Microsoft Surface or Macbook Pro. It has good old-fashioned wide bezels. I realize this is seldom considered a feature nowadays, but it is really good for holding it in tablet mode and helps to distinguish the screen from the background. It feels comfortable and familiar. In appearance, it is in fact highly reminiscent of my ancient Asus M5N laptop from 2004, which still runs Linux just fine, albeit without a working battery, with only 768MB of RAM and, only since recently, with a slightly unreliable DVD drive – Asus really does make machines that last.

The machine is fanless, so it is quite silent: I love that. Anything that moves inside a computer will break, eventually, and fans can be incredibly annoying even when they do work, especially after a while, when dust builds up and operating system updates put more stress on the processor. If things do break, the device has a removable panel on the base, which you can detach using a single standard Phillips screwdriver, and Asus even thoughtfully provide a little thumbnail slot to prise it up. Through this you can access important stuff like storage and RAM, and the whole machine has a modular design that makes every major component easily replaceable – so refreshing after the nightmares of trying to do any maintenance on an Apple device. Inside is the dual-core Celeron N4020 mentioned earlier, which can be pushed up to 2800 MHz – an old and well-tried CPU design that is not going to win any performance prizes but that does the job pretty well. From my tech support days, I would be a bit bothered leaving this with young and inquisitive kids – they really like to see how things work by doing things that would make them not work. I lost a couple of lab machines to a class of kids who discovered the 240/110V switch on the back of old PCs.

It does feel very sluggish at the best of times after using a Macbook Pro – apps can take ages to load, and there can be quite a long pause before it even registers a touch or a keypress when it is running an app or two already – but it is less than a tenth of the price, so I can’t complain too much about that. It happily runs a full-blown DBMS and web server, which addresses most of my development needs, though I’d not be keen on running a full VM on the device, or compiling a big program.

Included software

There are no Asus apps, docs, or customizations included. It is pure, bare-bones, unadulterated ChromeOS, without even a default Asus page to encourage registration. This is really surprising. Eventually I found the MyAsus (phone) app for Android on Google’s Play Store, which is awful but which at last – when I entered the serial number to register the machine – told me what it actually was, so I could go and find a manual for it. The manual contains no surprises and little information I couldn’t figure out for myself, but it is reassuring to have one, and very peculiar that it was not included with the machine. This makes me suspect that BestBuy might have bought up a batch of machines that were originally intended for a (large) organization that had laid down requirements for a bare-bones machine. This might explain why it is not listed on the Asus site.

ChromeOS

I may write more about ChromeOS at some later date – the main reason I got this device was to find out more about it – but I’ll give a very brief overview of my first impressions now. ChromeOS is very clever, though typical of Google’s software in being a little clunky and making the computer itself a little too visible: Android suffered such issues in a big way until quite recently, and Android phones still feel more like old-fashioned desktop computers than iPhones or even Tizen devices.

Given that it is primarily built to run Chrome apps, it is surprisingly good at running Android apps – even VPNs – though the integration is not 100% perfect: you can occasionally run into trouble passing parameters from a Chrome app to Android, for instance, some Android apps are unhappy about running on a laptop screen, and not all understand the SD card very well. Chrome apps run happily without a network, so you are not tied to the network as much as with other thin-client alternatives like WebOS.

It also does a really good job of running and integrating Linux apps. They all run in a Debian Linux container, so a few aspects of the underlying machine are unavailable and it can be a little complex when you want to use files from other apps or peripherals, but it is otherwise fully featured and close to native Linux in performance. The icons for Linux apps appear in the standard launcher like any other app and, though there is a little delay when launching the first Linux app, while the container starts up, the rest load quickly once one is running. You do need a bit of Linux skill to use it well – command-line use of apt is non-negotiable, at least to install any apps, and integrating with the Android and ChromeOS file systems can be a little clunky. Linux is still a geek option, but it makes the machine many times more useful than it would otherwise be. There’s virtually nothing I’d want to do with the machine that is constrained by software, though the hardware creates a few brick walls.
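
To give a flavour of what that command-line use of apt means in practice, here is a minimal sketch (wrapped in Python only because I need some language to write it in; the equivalent two shell commands work just as well). It assumes the default Debian container, in which the user has passwordless sudo, and the package name is only an illustration:

    import subprocess

    def apt_install(package: str) -> None:
        # Refresh the package index, then install the package non-interactively.
        # Both commands run inside the ChromeOS Linux (Debian) container.
        subprocess.run(["sudo", "apt", "update"], check=True)
        subprocess.run(["sudo", "apt", "install", "-y", package], check=True)

    apt_install("gimp")  # "gimp" is just an example package, not a recommendation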

Integration between the three operating systems is remarkably good altogether but the seams show now and then, such as in requiring at least two apps for basic settings (ChromeOS and Android), with a handful of settings only being available via the Chrome browser, or in not passing clipboard contents to the Linux terminal command line (though you can install an x-terminal that works fine). I’ve hit quite a few small problems with the occasional app, and a few Android apps don’t run properly at all (most likely due to screen size issues rather than integration issues) but overall it works really well. In fact, almost too well – I have a few of the same apps in both ChromeOS and Android versions so sometimes I fail to notice that I am using the glitchier one until it is too late.

Despite its solid Linux foundations, it is not super-stable, and it crashes in odd ways when you stretch it a little, especially when reconnecting on a different network, but it is stable enough for the standard uses most people would run into, and it reboots really quickly. Even in the few weeks I’ve had it, it seems to have become more stable, so this is a moving target.

Updates come thick and fast, but it is a little worrying that Google’s long-term commitment to ChromeOS seems (like most of their offerings) shaky: support for Chrome apps is due to end at some point soon, and there are some doubts about whether it will continue to offer long-term support for web apps in general, though Android and Linux support makes that a lot less worrying than it might be. The worst case would be to wipe most traces of ChromeOS and simply partition the machine for Linux, which would not be a bad end-of-life option at all.

The biggest caveat, though, is that you really need to sell your soul (or at least more of your data than is healthy) to Google to use this. Without a Google account I don’t think it would work at all or, at the very least, it would be severely crippled. I trust Google more than I trust most other big conglomerates – not because they are nice but because their business model doesn’t depend on directly selling my data to others – but I do not love their fondness for knowing everything about me, nor that they insist on keeping my data in a banana republic run by a reality TV show host. As much as possible, the apps I use are Google-free, but it is virtually impossible to avoid using the Chrome browser that runs many apps, even though a friendlier alternative like Vivaldi would work just as well, if Google allowed it. In fairness, it is less privacy-abusive than Windows, and more open about it. MacOS is not great either, but Apple are fiercely aggressive in protecting your data and don’t use it for anything more than selling you more Apple goodies. Linux or BSD are really the only viable options if you really want to control your data or genuinely own your software nowadays.

Conclusions

This was a great little machine for camping. Though water and dust were a concern, the low price meant I wasn’t too worried about treating it roughly. It was small and light, and it performed well enough on every task that I threw at it. It’s neither a great laptop nor a great tablet, but the fact that it performs both roles sufficiently well, without the ugliness and hassles of Windows or the limitations of single-OS machines, is very impressive.

Since returning from camping I have found myself using the machine a lot more than I thought I might. My Macbook Pro is pretty portable and its battery life is not too bad, but it is normally plugged into a big monitor and a whole bunch of disk drives, so I can’t just pick it up to move around the house or down to the boat without a lot of unplugging and, above all, disk ejection (which, thanks to Apple’s increasingly awful implementation of background indexing, which has got significantly worse with every recent release of OSX, can often be an exercise in deep frustration). I therefore rarely do so unless I know I will be away from the desk for a while. I love that I can just pick the Flip up and use it almost instantly, and I only need to charge it once every couple of days, even when I use it a lot. I still far prefer my Macbook Pro for anything serious or demanding, my iPad or phone for reading news, messaging, drawing, etc, and a dedicated ebook reader for reading books, but the fact that this machine can perform all of those tasks reasonably well is useful enough that it is fast becoming my default mobile device for anything a cellphone doesn’t handle well, such as writing anything of any length, like this review (which was written entirely on the Flip).

In summary, the whole thing is a bit of a weird hybrid that shows its seams a little too often, but that can do most things any tablet or PC can do, and then some. It does a much better job than Windows of combining a ‘real’ PC with a tablet-style device, mainly because (thanks to Android) it does the tablet far better than any Windows PC and, thanks to Linux, it is almost as flexible as a PC (though, bearing in mind that Windows now does Linux reasonably well, it is not quite in the same league). The low spec of the machine does create a few brick walls: I am not going to be running any VMs on it, nor any graphics-intensive, memory-intensive, or CPU-intensive tasks but, for well over 90% of my day-to-day computing needs, it works just fine.

I’m now left wondering whether it might be worthwhile to invest in one of the top-of-the-line Google Chromebooks to cater for my more advanced requirements. They are beautiful devices that address nearly all the hardware limitations of the C234 very well, that are at least a match for mid-to-high-end Windows and Mac machines in performance and flexibility, and that come at a price to match: really not cheap. But I don’t think either I or ChromeOS is quite ready for that yet. MacOS beats ChromeOS hands down in terms of usability, speed, reliability, consistency, and flexibility, despite Apple’s deeply tedious efforts to lock MacOS down in recent years (trying to triple-boot to MacOS, Windows, and Linux is an exercise in frustration nowadays) and despite not offering a touch screen option. If Apple goes further down the path of assuming all users are idiots then I might change my mind but, for now, its own operating system is still the best available, and a Mac runs Windows and Linux better than almost any equivalently priced generic PC. I would very seriously consider a high-end Chromebook, though, as an alternative to a Windows PC. It is inherently more secure, far less hassle to maintain, and lets you get on with what you want to do much faster than any Windows machine. Unless you really need a bit of hardware or software that only runs under Windows – and there are very few of those nowadays – I can think of few reasons to prefer Windows.

Where to buy (currently advertised at CAD$409): https://www.bestbuy.ca/en-ca/product/asus-flip-c234-11-6-touchscreen-2-in-1-chromebook-intel-celeron-n4020-64gb-emmc-4gb-ram-chrome-os/14690262

Letting go and staying close: presentation to GMR Institute of Technology, India, August 2020


Here are my slides from a presentation I gave last week to GMR Institute of Technology, Rajam, in Srikakulam District, Andhra Pradesh State, India. I gave the presentation from a car parked in a campsite in the midst of British Columbia, surrounded by mountains and lakes and forest, taking advantage of a surprisingly decent 4G connection via an iPad. It was sadly not interactive, but I hope that those present learned something useful (even if, as my presentation emphasized, they did not learn what I intended to teach).

The gist of it is that, when teaching online, we need to let go, because the in-person power we have in a classroom simply isn’t there. There are other consequences too: the need to build community, to demonstrate caring, to accept and value the context of the learner, and to accept and value the very many teachers that learners will encounter apart from us.

A novel approach to protecting academic freedom of speech: allow it, but do not allow it to be heard

The faculty and professional staff union at Athabasca University, AUFA (the Athabasca University Faculty Association), has two mailing lists: one used for announcements from its exec committee, and one for discussions between its members. Given that most of us have barely any physical contact with one another at the best of times, and that there are no other technologies likely to reach even a fraction of all staff involved in teaching and research (the Landing AUFA group, for instance, has only about 40 out of a few hundred potential members), the latter is the primary vehicle through which we, as a community of practice, communicate, share ideas and news, and engage in discussions that help to establish our collective identity. It’s a classic online learning community using a very low-threshold, simple, universally accessible technology.

For a few days over the past week there had been a debate on the discussion list about a contentious issue pitting academic freedom against the needs and rights of transgender people. As too often happens when the rights of disadvantaged minorities are involved, the conversation was getting toxic, culminating in a couple of faculty members directly and very unprofessionally abusing another, telling him to shut up and to stop displaying his ignorance. This is not behaviour worthy of anyone, let alone teachers (of all people), and something had to be done about it. At this point the obvious solution would have been for the managers of the list to discuss these abuses individually with those members, and/or for the individuals themselves to reflect on and apologize for their behaviour, and/or to open up a debate on the list about acceptable norms and approaches to de-escalating situations like this. Sadly, that’s not how the list managers responded. Very suddenly, and without any prior warning or discussion whatsoever, the union executive committee shut the entire discussion list down indefinitely, mercilessly nuking it with the following terse and uninformative message posted to the announcement list:

“Dear AUFA members,

Until further notice, AUFA is suspending the AUFA discussions list serv for review of harmful language and due to a high volume of complaints.”

Shocked by this baldly authoritarian response, I immediately sent a strong message of protest, tempered with recommendations about what would have been an appropriate approach to managing the problem, and suggestions about ways to move forward with alternative methods and tools in future. I received no reply. One long day later, however, the following message was posted to the announcement list:

“Dear AUFA members,

I want to update you on the situation with the AUFA discussions list serv.

AUFA is committed to protecting Academic Freedom. AUFA is equally committed to protecting Human Rights. AUFA did not make the decision to suspend the list serv lightly. As the entity legally responsible for the listserv, AUFA has an obligation to ensure the safety of its members.

The AUFA executive had a lengthy discussion about the purpose and usefulness of the AUFA listserv and is actively considering alternative methods and forums by which members might communicate with each other in the near future.”

That’s it. That’s the whole message. Clearly they did not discuss this with the people who were actually affected, or with those who had been abusive, and they certainly didn’t talk about it with the rest of us. The message itself is remarkably uninformative, raising far more questions than it answers. It reads to me as ‘you have been naughty children and we have decided to send you to your room to think about it’. But I think they must have been following a different discussion than the one I saw because, though there was certainly some unprofessional nastiness and some unsubtle arguments expressed (that were becoming far more refined as the discussion progressed – that’s how free and open debate is supposed to work), I did not spot any human rights abuses during the discussion, and the only abuse of academic freedom I could see was the decision to shut down the list itself. Removing the possibility of speech altogether is certainly a non-traditional approach to protecting freedom of speech.

Notice, too, that in both messages there is a synecdochal conflation of ‘AUFA’ and ‘the AUFA executive committee’. I’m pretty sure that, as a member of AUFA, I would know whether I had been part of such a decision. That’s a bit like a teacher shutting down an online course because someone was rude, then claiming that the class shut it down. It’s a subtle way of abnegating responsibility, suggesting that some technological entity did something when, in fact, it was done by very real and fully responsible people. AUFA did not do this, and AUFA did not make these decisions. A small group of actual, real human beings did it, all by themselves.

I sent a strongly worded (but respectful) response to that one too.

Who owns this?

I think it is clear that the mailing list is not owned by the union executive committee. They are custodians of it, stewards who run it on the behalf of everyone in the union. Shutting it down denies the members of the union their primary means of connection and debate, including debate about this very issue. The message is quite misleading about the AUFA exec’s responsibilities, too: though they do need to be attentive to illegal behaviours, they are not legally responsible for what other people say on the listserv. In fact, the explicit or implicit legal protections afforded to providers of such services are fundamental to allowing much of the Internet to work at all. This is why there is so much outrage and protest against Trump’s efforts to remove such protections in the US right now. And there are lots of ways of handling the problem, from direct personal communication to public debate to the establishment of rules or a social contract to calling in the police. Going nuclear on the service does not fulfill that responsibility at all; it simply evades it.

It is absolutely fair to claim that the list managers have a responsibility to union members to help maintain a non-abusive, safe, supportive online community. However, shutting down the thing they have an obligation to preserve is not just a neglect of that responsibility but the worst and most harmful way they could possibly attempt to fulfill it. It is like protecting an endangered animal by shooting it.

Ironically, the final message posted on the now-dead discussion list ended with the line:

“One thing I vowed to myself… is that I would never let anyone stop me from saying what I have to say.”

Well, that kept like milk.

I feel incensed, abused, and suddenly incredibly isolated from my university and my colleagues. My sense of loss is tangible and intense. It’s lucky that I do have other channels, like this one, to vent my frustration and to bring this to a broader audience. I hope this message gets to at least a few of those who, like me, are feeling cut off and disempowered and, if they have not done so already, that they loudly voice their concerns to those responsible.

Moving on

Unfortunately, though very low-threshold and accessible to all, listservs are not great tools for hosting contentious debates. They are extremely soft technologies, which means that, on the positive side, they are flexible and easy to adopt, but also that a great deal of additional process must be added manually by their participants to make them work: distinguishing threads, choosing which to attend to, tracking conversations, managing archived messages, and using appropriate subject lines, to name but a few.

Listservs are poor tools for achieving consensus and poor tools for argument. The push nature of the technology means it can be very intrusive but, equally, the fact that we control our mail filters means that it can be completely shut down and ignored, without other participants having any knowledge that their messages are falling on deaf ears. It’s a technology that allows everyone to shout at the same time, so it’s unsurprising that it is fertile ground for misunderstandings, confusion, high emotions, and people who forget that they are talking to other people. The very simplicity that makes them so easy to engage with also makes it easier to forget the humans behind the messages. Unless individuals have taken pains to share things about themselves in their messages, there are not even pictures and profiles to serve as a reminder. Though web archives may be available, they are rarely if ever open for continued dialogue: though, in principle, one could reply to a message from months or years ago, that virtually never happens. This means that people tend to rush to get their message across before the list moves on to some other topic, with all the risks that entails. It kind of has to be that way: because of the push nature of the medium, if conversations were to persist then multiple parallel discussions would rapidly overwhelm everyone’s inbox and attention.

For all these reasons and more, as anyone who has ever tried to do so will be painfully aware, managing a mailing list used for open discussion, especially one (like this) that lacks a clear mandate, contract, or terms of engagement, takes a lot of manual effort, a fair bit of ingenuity, and a lot of careful attention. When things get out of hand, those who run the list need to take active, timely, creative measures to defuse them. It’s hard but necessary work that demands sensitivity, a forgiving nature, a willingness to accept abuse with very little chance of being thanked for your efforts and, often, willingness and availability to work far outside a normal working day (this, as it happens, is also true of many approaches to online teaching). Unfortunately, no one in our union leadership seems willing or able to take on such management. If that’s the case, the solution is not to shut the list down. The solution is to pass it on to someone else who can and will moderate it more caringly, perhaps to put some more resources into managing it and, perhaps, to look participatively into rules, norms, and other tools and procedures that might do the job better.

Moving further on

There are hundreds and maybe thousands of tools and methods that can better (or at least differently) support this kind of debate than a listserv. Even the humble threaded forum at least allows such discussions to be segmented and, for those upset by them, ignored. Some allow for threads or people to be (from an individual’s perspective) muted, and many allow forum owners to close discussions in a particular thread without killing the whole thing. Some go beyond crude threads, allowing richer cross-linking between messages and discussions. Some offer authoring help, like in-line searching of previous messages, direct linking to sources, or simple AI to warn when sentiments appear to run high. Many tools allow for simple tricks like karma points, thumbs up, and other low-threshold ways of signalling agreement or disagreement, in a manner that shows collective sentiment without a high commitment or fear of reprisal, and that also signals whether a topic is interesting to the crowd without relying on a deluge of messages to show it. Some offer means to reach decisions, from simple votes to computer-supported collaborative argumentation tools. Many allow for profiles and other signals of social presence that make the humans behind the messages more visible and salient. Some (blogs, say, like this one) allow for more focused, subscribable discussions on specific themes that are managed and owned by the creator of the original post, and that are not as ephemeral as mailing lists. Some offer other tools like persistent shared bookmarks or filesharing that help to organize resources related to themes of debate. Some have recommender systems that show related posts and thus help to situate discussions, and to support connections back to previous discussions. Many have persistence so that learning is reified and searchable, not lost in a stream of thousands of other emails. Some allow for scheduling and time-limited discussions.

Equally, there are lots of process models for reaching consensus on social norms and acceptable behaviours, as well as ways of dealing with issues when they arise. Skills can be developed in stewardship and moderation so that problems are defused before they become severe, or do not arise in the first place thanks to careful specification of ground rules or structuring of the process. There are plenty of books and papers on the subject (this is my favourite, especially now that it is free) that delve into great detail. There are ways of taking an holistic approach that takes into account the larger social ecosystem to (for instance) help to build social capital, use different tools for different functions, and so on.

All of these technologies, including process models, methods, and procedures, come with plentiful gotchas – Faustian bargains and monkeys’ paws that can easily cause more problems than they solve and that will never be ideal for all – so this is not a set of decisions that should be entered into lightly or without extensive consultation, participation, and analysis, and it should always be thought of as an ongoing process, never a finished solution. Clearly, it eventually needs to be done. In the meantime, if a listserv is all we have, then we should at least manage it properly. It is not acceptable to simply nuke the only tool we have, even if it is a weak one.

I do realize that union leadership is an extremely hard and often thankless job and, though I frequently feel very critical of things they do on my behalf, especially when they adopt an archaic ‘us vs them’ vocabulary, I am thankful that they do it. I very seldom voice my adverse opinions because I know they are trying to do their best for everyone, I am certainly not willing to take on the enormous commitments involved myself and, without their hard work and principled actions (notwithstanding the occasions when they actively make things worse), we would, on average, be in a far worse place than we are today. However, the union leadership’s response to this has been outrageously authoritarian, disproportionate, insensitive, and deeply harmful, in direct opposition to everything a union should stand for. If this is a reflection of their values then they have neither my trust nor my support.


Postscript

Eventually, after nearly two days, I received a one-line personal reply to my original complaint, telling me that the suspension of the list is temporary (this may be news to others in the union, who have not been told this: you heard it here first, folks!) and that they will, at some unspecified point, be seeking input from members on communication preferences (not consultation, note, or participation, just input). No timelines were given. I am not satisfied with this.

Does technology lead to improved learning? (tl;dr: it's a meaningless question)

[Image: students using computers, public domain: https://www.flickr.com/photos/internetarchivebookimages/19758917473/]

There have been (at least) tens of thousands of comparative studies on the effects of ‘technology’ on learning performed over the past hundred years or so. Though some have been slightly more specific (the effects of computers, online learning, whiteboards, eportfolios, etc) and some more sensible authors use the term ‘tech’ to distinguish things with flashing lights from technologies in general, nowadays it is pretty common just to use the term ‘technology’ as though we all know what the authors mean. We don’t. And neither do they.

It makes no more sense to ask whether (say) computers have a positive or negative effect on learning than to ask whether (say) pedagogies have a positive or negative effect on learning. Pedagogies (methods and principles of learning and teaching) are at least as much technologies as computers are, and their uses and forms are similarly diverse. Some work better than others, sometimes, in some contexts, for some people. All are soft technologies that demand we act as coparticipants in their orchestration, not just users of them. This means that we have to add stuff to them in order that they work. None do anything of interest by themselves – they must be orchestrated with (usually many) other tools, methods, structures, and so on in order to do anything at all. All can be orchestrated well (assuming we know what ‘well’ really means, and we seldom do) or badly.

It is instructive to wonder why it is that, as far as I know, no one has yet tried to investigate the effects of transistors, or screws, or words, or cables on learning, even though they are an essential part of most technologies that we do see fit to research and are certainly prerequisite parts of many educational interventions. The answer is, I hope, obvious: we would be looking at the wrong level of detail. We would be examining a part of the assembly that is probably not materially significant to learning success, albeit that, without them, we would not have other technologies that interest us more. Transistors enable computers, but they do not entail them.

Likewise, computers and pedagogies enable learning, but do not entail it (for more on enablement vs entailment, see Longo et al., 2012 or, for a fuller treatment, Kauffman, 2019). True, pedagogies and computers may orchestrate many more phenomena for us, and some of those orchestrations may have more consistent and partly causal effects on whether an intervention works than screws and cables do but, without considering the entire specific assembly of which they are a part, those effects are no more generalizably relevant to whether learning is effective than the effects of words or transistors.

Technologies enable (or sometimes disable) a range of phenomena, but only rarely do they generalizably entail a fixed set of outcomes and, if they do, there are almost always ways that we can assemble them with other technologies that alter those outcomes. In the case of something as complex as education, which always involves thousands and usually millions of technological components assembled with one another by a vast number of people, not just the teacher, every part affects every other. It is irreducibly complex, not just complicated. There are butterfly’s-wing effects to consider – a single injudicious expletive, say, or even a smile can transform the effectiveness or otherwise of teaching. There’s emergence, too. A story is not just a collection of words, a lesson is not just a bunch of pedagogical methods, a learning community is not just a collection of people. And all of these things – parts and emergent or designed combinations of parts – interact with one another to lead to deterministic but unprestatable consequences (Kauffman, 2019).

Of course, any specific technology applied in a specific context can and will entail specific and (if hard enough) potentially repeatable outcomes. Hard technologies will do the same thing every time, as long as they work. I press the switch, the light comes on. But even for such a simple, hard technology, you cannot from that generalize that every time any switch is pressed a light will come on, even if you, without warrant, assume that the technology works as intended, because it can always be assembled with other phenomena, including those provided by other technologies, that alter its effects. I press many switches every day that do not turn on lights and, sometimes, even when I press a light switch the light does not come on (those that are assembled with smart switches, for instance). Soft technologies like computers, pedagogies, words, cables, and transistors are always assembled with other phenomena. They are incomplete, and do not do anything of interest at all without an indefinitely large number of things and processes that we add to them, or to which we add them, each subtly or less subtly different from the rest. Here’s an example using the soft technology of language:

  • There are countless ways I could say this.
  • There are infinitely many ways to make this point.
  • Wow, what a lot of ways to say the same thing!
  • I could say this in a vast number of ways.
  • There are indefinitely many ways to communicate the meaning of what I wish to express.
  • I could state this in a shitload of ways.
  • And so on, ad infinitum.

This is one tiny part of one tiny technology (this post). Imagine this variability multiplied by the very many people, tools, methods, techniques, content, and structures that go into even a typical lesson, let alone a course. And that is disregarding the countless other factors and technologies that affect learning, from institutional regulations to interesting news stories or conversations on a bus.
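
Returning to the light switch for a moment: the difference between enablement and entailment is easy to make concrete in code. The following is a toy sketch of my own (in Python; an illustration, not a formal model from the references below). The press is identical every time – that is the hard, reliable part – but what the switch is assembled with determines what actually happens:

    from typing import Callable

    class Switch:
        """A hard technology: every press reliably emits the same signal."""
        def __init__(self, assembly: Callable[[], str]):
            # What the switch is assembled with determines what a press entails.
            self.assembly = assembly

        def press(self) -> str:
            return self.assembly()

    bedside_lamp = Switch(lambda: "the light comes on")
    smart_lamp = Switch(lambda: "nothing happens: the smart hub's schedule vetoes it")
    doorbell = Switch(lambda: "ding-dong")

    for switch in (bedside_lamp, smart_lamp, doorbell):
        print(switch.press())  # identical presses, different entailed outcomes

The switch enables all of these outcomes; it entails none of them until it is assembled with everything else.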

Reductive scientific methods like randomized controlled trials and null hypothesis significance testing can tell us things that might be useful to us as designers and enactors of teaching. We can, say, find out some fairly consistent things about how people learn (as natural phenomena), and we can find out useful things about how well different specific parts compare with one another in a particular kind of assembly when they are supposed to do the same job (nails vs screws, for instance). But these are just phenomena that we can use as parts of an assembly, not prescriptions for successful learning. The question of whether any given type of technology affects learning is meaningless. Of course it does, in the specific, because we are using it to help enable learning. But it only does so in an orchestrated assembly with countless others, and that orchestration is and must always be substantially different from any other. So, please, let’s all stop pretending that educational technologies (including pedagogical methods) can be researched in the same reductive ways as natural phenomena, as generalizable laws of entailment. They cannot.

References

Arthur, W. B. (2009). The Nature of Technology: what it is and how it evolves (Kindle ed.). New York, USA: Free Press. (Arthur’s definition of technology as the orchestration of phenomena for some purpose, and his insights into how technologies evolve through assembly, underpin the above.)

Kauffman, S. A. (2019). A World Beyond Physics: The Emergence and Evolution of Life. Oxford University Press.

Longo, G., Montévil, M., & Kauffman, S. (2012). No entailing laws, but enablement in the evolution of the biosphere. In Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, Philadelphia, Pennsylvania, USA. Full text available at https://dl.acm.org/doi/pdf/10.1145/2330784.2330946


Bananas as educational technologies

[Image: Banana Water Slide banana statue, Virginia Beach, Virginia]

One of my most memorable learning experiences, one that has served me well for decades and that I actually recall most days of my life, occurred during a teacher training session early in my teaching career. We had been set the task of giving a two-minute lecture on something central to our discipline. Most of us did what we could with a slide or two and a narrative to match, in a predictably pedestrian way. I remember none of them, not even my own, apart from one. One teacher (his name was Philippe), who taught sports nutrition, just drew a picture of a banana. My memory is hazy on whether he also used an actual banana as a prop: I’d like to think he did. For the next two minutes, he repeated ‘have a banana’ many times, interspersed with some useful facts about its nutritional value and the contexts in which we might do so. I forget most of those useful facts, though I do recall that a banana has a lot of good nutrients and is easy to digest. My main takeaway was that, if we are in a hurry in the morning, we should not skip breakfast but eat a banana, because it will keep us going well enough to function for some time, and is superior to coffee as a means of making you alert. His delivery was wonderful: he was enthusiastic, he smiled, we laughed, and he repeated the motif ‘have a banana!’ in many different and entertaining ways, with many interesting and varied emphases. I have had (at least) a banana for breakfast most days of my life since then and, almost every time I reach for one, I remember Philippe’s presentation. How’s that for teaching effectiveness?

But what has this got to do with educational technologies? Well, just about everything.

As far as I know, up until now, no one has ever written an article about bananas as educational technologies. This is probably because, apart from instances like the one above where bananas are the topic, or a part of the topic being taught, bananas are not particularly useful educational technologies. You could, at a stretch, use one to point at something on a whiteboard, as a prop to encourage creative thinking, or as an anchor for a discussion. You could ask students to write a poem on it, or calculate its volume, or design a bag for it. There may in fact be hundreds of distinct ways to use bananas as an educational technology if you really set your mind to it. Try it – it’s fun! Notice what you are doing when you do this, though. The banana does provide some phenomena that you can make use of, so there are some affordances and constraints on what you can do, but what makes it an educational technology is what you add to it yourself. Notwithstanding its many possible uses in education, on balance, I think we can all agree that the banana is not a significant educational technology.

Parts and pieces

Here are some other things that are more obviously technological in themselves, but that are not normally seen as educational technologies either:

  • screws
  • nails
  • nuts and bolts
  • glue

As with bananas, there are probably many ways to use them in your teaching but, unless they are either the subject of the teaching or necessary components of a skill that is being learned (e.g. some crafts, engineering, arts, etc), I think we can all agree that none of these is a significant educational technology in itself. However, there is one important difference. Unlike bananas, these technologies can and do play very significant roles in almost all education, whether online or in person. Without them and their ilk, all of our educational systems would, quite literally, fall apart. However, to call them educational technologies would make little sense, because we would be putting the boundaries around the wrong parts of the assembly. It is not the nuts and bolts but what we do with them, and all the other things with which they are assembled, that matters most. This is exactly like the case of the banana.

Bigger pieces

This is interesting because exactly the same things could be said about other things that some people do consider to be sufficiently important educational technologies that they attract large amounts of funding for large-scale educational research: computers, say. There is a really large body of research about computers in classrooms. And yet meta-studies tend to conclude that, on average, computers have little effect on learning. This is not surprising. It is for exactly the same reason that nuts and glue, on average, have little effect on learning. The researchers are choosing the wrong boundaries for their investigations.

The purpose of a computer is to compute. Very few people find this of much value as an end in itself, and I think it would be less useful than a banana to most teachers. In fact, with the exception of some heavily math-oriented and/or computer science subjects, it is of virtually no interest to anyone.

The ends to which the computing they perform is put are another matter altogether. But those ends are no more the effect of the computer than the computer is the effect of the nuts and bolts that hold it together. Sure, these (or something like them) are necessary components, but they are not causes of whatever it is we do with them. What makes computers useful as educational technologies is, exactly as in the case of the banana, what we add to them.

It is not the computer itself, but other things with which it is assembled such as interface hardware, software and (above all) other surrounding processes – notably the pedagogical methods – that can (but on average won’t) turn it into an educational technology. There are potentially infinite numbers of these, or there would be if we had infinite time and energy to enact them. Computers have the edge on bananas and, for that matter, nuts and bolts because they can and usually must embody processes, structures, and behaviours. They allow us to create and use far more diverse and far more complex phenomena than nuts, bolts, and bananas. Some – in fact, many – of those processes and structures may be pedagogically interesting in themselves. That’s what makes them interesting, but it does not make them educational technologies. What can make them educational technologies are the things we add, not the machines in themselves.

This is generalizable to all technologies used for educational purposes. There are hierarchies of importance, of course. Desks, classrooms, chairs, whiteboards and (yes) computers are more interesting than screws, nails, nuts, bolts, and glue because they orchestrate more phenomena to more specific uses: they create different constraints and affordances, some of which can significantly affect the ways that learning happens. A lecture theatre, say, tends to encourage the use of lectures. It orchestrates quite a few phenomena that have a distinct pedagogical purpose, making it a quite significant participant in the learning and teaching process. But it and all these things, in turn, are utterly useless as educational technologies until they are assembled with a great many other technologies, such as (very non-exhaustively and rather arbitrarily):

  • pedagogical methods,
  • language,
  • drawing,
  • timetables,
  • curricula,
  • terms,
  • classes,
  • courses,
  • classroom rules,
  • pencils and paper,
  • software,
  • textbooks,
  • whiteboard markers,
  • and so on.

None of these parts has much educational value on its own. Even something as unequivocally an educational technology as a pedagogical method is useless without all the rest, and changes to any of the parts may have substantial impacts on the whole. Furthermore, without the participation of learners who are applying their own pedagogical methods, it would be utterly useless, even in assembly with everything else. Every educational event – even those we apparently perform alone – involves the coparticipation of countless others, whether directly or not.

The point of all this is that, if you are an educational researcher or a teacher investigating your own teaching, it makes no sense at all to consider any generic technology in isolation from all the rest of the assembly. You can and usually should consider specific instances of most if not all those technologies when designing and performing an educational intervention, but they are interesting only insofar as they contribute, in relationship to one another, to the whole.

And this is not the end of it. Just as you must assemble many pieces in order to create an educational technology, what you have assembled must in turn be assembled by learners – along with plenty of other things like what they know already, other inputs from the environment, from one another, the effects of things they do, their own pedagogical methods, and so on – in order to achieve the goals they seek. Your own teaching is as much a component of that assembly as any other. You, the learners, the makers of tools, inventors of methods, and a cast of thousands are coparticipants in a gestalt process of education.

This is one of the main reasons that reductive approaches to educational research that attempt to isolate the effects of a single technology – be it a method of teaching, a device, a piece of software, an assessment technique, or whatever – with the intent of generalizing some statement about it cannot ever work. The only times they have any value at all are when all the technologies in question are so hard, inflexible, and replicable, and the uses to which they are put are so completely fixed, well defined, and measurable that you are, in effect, considering a single specific technology in a single specific context. But, if you can specify the processes and purposes with that level of exactitude then you are simply checking that a particular machine works as it is designed to work. That’s interesting if you want to use that precise machine in an almost identical context, or you want to develop the machine itself further. But it is not generalizable, and you should never claim that it is. It is just part of a particular story. If you want to tell a story then other methods, from narrative descriptions to rich case studies to grounded theory, are usually much more useful.

Obsolescence and decay

[Image: Koristka camera]

All technologies require an input of energy – to be actively maintained – or they will eventually drift towards entropy. Pyramids turn to sand, unused words die, poems must be reproduced to survive, bicycles rust. Even apparently fixed digital technologies rely on physical substrates and an input of power to be instantiated at all. A more interesting reason for their decay, though, is that virtually no technologies exist in isolation: virtually all participate in, and/or are participated in by, other technologies, whether human-instantiated or mechanical. All are assemblies, and all exist in an ecosystem that affects them and which they affect. If parts of that system change, then the technologies that depend on them may cease to function even though nothing about those technologies has, in itself, altered.

Would a (film) camera for which film is no longer available still be a camera? It seems odd to think of it as anything else. However, it is also a bit odd to think of it as a camera, given that it must be inherent to the definition of a camera that it can take photos. It is not (quite) simply that, in the absence of film, it doesn’t work. A camera that doesn’t take photos because the shutter has jammed or the lens is missing is still a camera: it’s just a broken camera, or an incomplete camera. That’s not so obviously the case here. You could rightly claim that the object was designed to be a camera, thereby making the definition depend on the intent of its manufacturer. The fact that it used to be perfectly functional as a camera reinforces that opinion. Despite the fact that it cannot take pictures, nothing about it – as a self-contained object – has changed. We could simply say that it is therefore still a camera, just one that is obsolete, and that obsolescence is just another way that cameras can fail to work. This particular case of obsolescence is so similar to that of the missing lens that it might, however, make more sense to think of it as an instance of exactly the same thing. Indeed, someone might one day make a film for it and, being pedantic, it is almost certainly possible to cut up a larger-format film and insert it, at which point no one would disagree that it is a camera, so this is a reasonable way to think about it. We can reasonably claim that it is still a camera, but that it is currently incomplete.

Notice what we are doing here, though. In effect, we are supposing that a full description of a camera – ie. a device to take photos – must include its film, or at least some other means of capturing an image, such as a CCD. But, if you agree to that, where do you stop? What if the only film that the camera can take demands processing that is no longer available? What if it is a digital camera that creates images that no software can render? That’s not impossible. Imagine (and someone almost certainly will) a DRM’d format that relies on a subscription model for the software used to display it, and that the company that provides that subscription goes out of business. In some countries, breaking DRM is illegal, so there would be no legal way to view your own pictures if that were the case. It would, effectively, be the same case as that of a camera designed to have no shutter release, which (I would strongly argue) would not be a camera at all because (by design) it cannot take pictures. The bigger point that I am trying to make, though, is that the boundaries that we normally choose when identifying an object as a camera are, in fact, quite fuzzy. It does not feel natural to think of a camera as necessarily including its film, let alone also including the means of processing that film, but it fails to meet a common-sense definition of the term without those features.

A great many – perhaps most – of our technologies have fuzzy boundaries of this nature, and it is possible to come up with countless examples like this. A train made for a track gauge that no longer exists, clothing made in a size that fits no living person, printers for which cartridges are no longer available, cars that fail to meet emissions standards, electrical devices that take batteries that are no longer made, and so on. In each case, the thing we tend to identify as a specific technology no longer does what it should, despite nothing having changed about it, and so it is difficult to maintain that it is the same technology as it was when it was created unless we include in our definition the rest of the assembly that makes it work. One field in which this matters a great deal is computing. The problem occurs in every aspect of it: disk formats for which no disk drives exist, programs written for operating systems that are no longer available, games made for consoles that cannot be found, and so on. In a modern networked environment, there are so many dependencies all the way down the line that virtually no technology can ever be considered in isolation. The same phenomenon can happen at a specific level too. I am currently struggling to transfer my websites to a different technology because the company providing my server is retiring it. Nothing about my sites has changed, yet I am having to make a surprising number of changes just to keep them operational on the new system. Is a website that is not on the web still a website?

Whatever we think about whether it remains the same technology, if it no longer does what the most essential definition of that technology demands, it is effectively dead: its boundaries are not simply its lines of code. This both stems from and leads to the fact that technologies tend to evolve to ever greater complexity. It is especially obvious in the case of networked digital technologies, because parts of the multiple overlapping systems in which they must participate are in an ever-shifting flux. Operating systems, standards, protocols, hardware, malware, drivers, network infrastructure, and so on can and do stop otherwise-unchanged technologies from working as intended, pretty consistently, all the time. Each technology affects others, and is affected by them. A digital technology that does not adapt eventually dies, even though (just like the camera) its physical (digital) form persists unchanged. It exists only in relation to a world that becomes increasingly complex thanks to the nature of the beast.

All species of technology evolve to become more complex, for many reasons, such as:

  • the adjacent possibles that they open up, inviting elaboration,
  • the fact that we figure out better ways to make them work,
  • the fact that their context of use changes and they must adapt to it,
  • the fact that other technologies with which they are assembled adapt and change,
  • the fact that there is an ever-expanding range of counter-technologies needed to address their inevitable ill effects (what Postman described as the Faustian Bargain of technology), which in turn creates a need for further counter-technologies to curb the ill effects of the counter-technologies,
  • the layers of changes and fixes we must apply to forestall their drift into entropy.

The same is true of most individual technologies of any complexity, ie. those that consist of many interacting parts and that interact with the world around them. They adapt because they must – internal and external pressures see to that – and, almost always, this involves adding rather than taking away parts of the assembly. This is true of ecosystems and even individual organisms, and the underlying evolutionary dynamic is essentially the same. Interestingly, it is the fundamental dynamic of learning, in the sense of an entity adapting to an environment, which in turn changes that environment, requiring other entities within that environment to adapt in turn, which then demands further adaptation to the ever shifting state of the system around it. This occurs at every scale, and every boundary. Evolution is a ratchet: at any one point different paths might have been taken but, once they have been taken, they provide the foundations for what comes next. This is how massive complexity emerges from simple, random-ish beginnings. Everything builds on everything else, becoming intricately interwoven with the whole. We can view the parts in isolation, but we cannot understand them properly unless we view them in relation to the things that they are connected with.

Amongst other interesting consequences of this dynamic, the more evolved technologies become, the more they tend to be comprised of counter-technologies. Some large and well-evolved technologies – transport systems, education systems, legal systems, universities, computer systems, etc – may consist of hardly anything but counter-technologies, so deeply embedded that we hardly notice them any more. The parts that actually do the jobs we expect of them are a small fraction of the whole. The complex interlinking between counter-technologies starts to provide foundations on which further technologies build, and often feeds back into the evolutionary path, changing the things that the counter-technologies were originally designed to counter, and so leading to further counter-technologies to cater for those changes.

To give a massively over-simplified but illustrative example:

Technology: books.

Problem caused: cost.

Counter-technology: lectures.

Problem caused: need to get people in one place at one time.

Counter-technology: timetables.

Problem caused: motivation to attend.

Counter-technology: rewards and punishments.

Problem caused: extrinsic motivation kills intrinsic motivation.

Counter-technology: pedagogies that seek to re-enthuse learners.

Problem caused: education comes to be seen as essential to future employment but how do you know that it has been accomplished?

Counter-technology: exams provide the means to evaluate educational effectiveness.

Problem caused: extrinsic motivation kills intrinsic motivation.

Solution: cheating provides a quicker way to pass exams.

And so on.

I could throw in countless other technologies and counter-technologies that evolved as a result to muddy the picture, including libraries, loan systems, fines, courses, curricula, semesters, printing presses, lecture theatres, desks, blackboards, examinations, credentials, plagiarism tools, anti-plagiarism tools, faculties, universities, teaching colleges, textbooks, teaching unions, online learning, administrative systems, sabbaticals, and much much more. The end result is the hugely complex, ever-shifting, ever-evolving mess of an educational system that we see today, with all its dependent technologies and all the technologies on which it depends. This is a massively complex system of interdependent parts, all of which demand the input of energy and deliberate maintenance to survive. Changing one part shifts others, which in turn shift others, all the way down the line and back again. Some are harder and less flexible than others – and so have more effect on the overall assembly – but all contribute to change.

We have a natural tendency to focus on the immediate, the local, and the things we can affect most easily. Indeed, no one in the entire world can hope to glimpse more than a caricature of the bigger picture and, because education is a complex system, we cannot hope to predict much beyond the direct effects of what we do, in the contexts in which we do them. This is true at every scale, from teaching a lesson in a classroom to setting educational policies for a nation. The effects of any given educational intervention are inherently unknowable in advance, whatever we can say about average effects. Sorry, educational researchers who think they have a solution – that’s just how it is. Anyone who claims otherwise is a charlatan or a fool. It doesn’t mean that we cannot predict the immediate future (good teachers can be fairly consistently effective), but it does mean that we cannot generalize what they do to achieve it.

One thing that might help us to get out of this mess would be, for every change we make, to think more carefully about what it is a counter-technology for, and at least to glance at what the counter-technologies we are countering are themselves counter-technologies for. It might just be that some of the problems they solve would afford greater opportunities for change than the consequences we are trying to cope with. We cannot hope to know everything that leads to success – teaching is inherently distributed and inherently determined by its context – but we can examine our practice to find out at least some of the things that lead us to do what we do. It might make more sense to change those things than to adapt what we do to their effects.


A simple phishing scam

If you receive an unexpected email from what you might, at first glance, assume to be me, especially if it is in atrocious English, don’t reply to it until you have looked very closely at the sender’s email address and have thought very carefully about whether I would (in a million years) ask you for whatever help it wants from you.

Being on sabbatical, my AU inbox has been delightfully uncrowded of late, so I rarely look at it until I’ve got a decent amount of work done most days, and occasionally skip checking it altogether, but a Skype alert from a colleague made me visit it in a hurry a couple of days back. I found a deluge of messages from many of my colleagues in SCIS, mostly telling me my identity had been stolen (it hadn’t), though a few asked if I really needed money, or wanted my groceries to be picked up. This would be surprising, given that I live about 1000km away from most of them. All had received messages in poorly written English purporting to be from me, and at least a couple of them had replied. One – whose cell number was included in his sig – got a phishing text almost immediately, again claiming to be from me: this was a highly directed and malicious attack.

The three simple tricks that made it somewhat believable were:

  1. the fraudsters had created a (real) Gmail account using the username jondathabascauca. This is particularly sneaky inasmuch as Gmail allows you to insert arbitrary dots into the name part of your email address, so they turned this into jond.athabascau.ca@gmail.com, which was sufficiently similar to the real thing to fool the unwary (see the sketch after this list).

  2. the crooks simply copied and pasted the first part of my official AU page as a sig, which is pretty odd when you look at it closely because it includes a plain-text version of the links to different sections of the actual page (they were not very careful, and probably didn’t speak English well enough to notice), but it again looks enough like a real sig to fool someone glancing at it quickly in the midst of a busy morning.

  3. they (apparently) only sent the phishing emails to other people listed on the same departmental bio pages, rightly assuming that all recipients would know me and so would be more likely to respond. The fact that the page still (inaccurately) lists me as school Chair probably means I was deliberately singled out.
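
For the technically curious, here is a minimal sketch in Python of why the dot trick works and how to see through it. The function name is my own invention, not part of any standard library: it simply applies Gmail’s documented address rules (dots and +tags in the local part are ignored) before trusting what you see.

```python
# Minimal sketch: Gmail treats every dotted spelling of a local part as
# the same account, so normalizing an address exposes the impersonation.

def normalize_gmail(address: str) -> str:
    """Canonicalize a Gmail address: lowercase it, then strip dots and
    any +tag suffix from the local part (Gmail ignores both)."""
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0].replace(".", "")
    return f"{local}@{domain}"

# The dotted address used in the scam collapses back to the registered
# username, which plainly has nothing to do with an @athabascau.ca account:
print(normalize_gmail("Jond.Athabascau.Ca@gmail.com"))
# -> jondathabascauca@gmail.com
```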

As far as I know they have not extended the attacks further than to my colleagues in SCIS, but I doubt that this is the end of it. If they do think I am still the Chair of the school, it might occur to them that chairs tend to be known outside their schools too.

This is not identity theft – I have experienced the real thing over the past year and, trust me, it is far more unpleasant than this – and it’s certainly not hacking. It’s just crude impersonation that relies on human fallibility and inattention to detail, and that uses nothing but public information from our website to commit good old-fashioned fraud. Nonetheless, and though I was not an intended victim, I still feel a bit violated by the whole thing. It’s mostly just my foolish pride – I don’t so much resent the attackers as the fact that some of the recipients jumped to the conclusion that I had been hacked, and that some even thought the emails were from me. If it were a real hack, I’d feel a lot worse in many ways, but at least I’d be able to do something about it to try to fix the problem. All that I can do about this kind of attack is to get someone else to make sure the mail filters filter them out, but that’s just a local workaround, not a solution.

We do have a team at AU that deals with such things (if you have an AU account and are affected, send suspicious emails to phishing@athabascau.ca), so this particular scam should have been stopped in its tracks, but do tell me if you get a weird email from ‘me’.

E-Learn 2019 presentation – X-literacies: beyond digital literacy

Here are my slides from E-Learn 2019, in New Orleans. The presentation was about the nature of technologies and their roles in communities (groups, networks, sets, whatever), their highly situated nature, and their deep intertwingling with culture. In general it is an argument that literacies (as opposed to skills, knowledge, etc) might most productively and usefully be seen as the hard techniques needed to operate the technologies that are required for any given culture. As well as clarifying the term and using it in the same manner as the original term “literacy”, this implies there may be an indefinitely large range of literacies because we are all members of an indefinitely large number of overlapping cultures. All sorts of possibilities and issues emerge from this perspective.

Abstract: Dozens, if not hundreds, of literacies have been identified by academic researchers, from digital- to musical- to health- to network- literacy, as well as combinatorial terms like new-, multi-, 21st Century-, and media-literacy. Proponents seek ways to support the acquisition of such literacies but, if they are to be successful, we must first agree what we mean by ‘literacy’. Unfortunately, the term is used in many inconsistent and incompatible ways, from simple lists of skills to broad characteristics or tendencies that are either ubiquitous or meaninglessly vague. I argue that ‘literacy’ is most usefully thought of as the set of learned techniques needed to participate in the technologies of a given culture. Through use and application of a culture’s techniques, increasing literacy also leads to increasing knowledge of the associated facts and adoption of the values that come with that culture. Literacy is thus contextually situated, mutates over time as a culture and its technologies evolve, and participates in that co-evolution. As well as subsuming and eliminating much of the confusion caused by the proliferation of x-literacies, this opens the door to more accurately recognizing the literacies that we wish to use, promote and teach for any given individual or group.


Signals, boundaries, and change: how to evolve an information system, and how not to evolve it

[Image: primitive cell development]

For most organizations there tend to be three main reasons to implement an information system:

  1. to do things the organization couldn’t do before
  2. to improve things the organization already does (e.g. to make them more efficient/cheaper/better quality/faster/more reliable/etc)
  3. to meet essential demands (e.g. legislation, keep existing apps working, etc)

There are other reasons (political, aesthetic, reputational, moral, corruption/bribery/kickbacks, familiarity, etc) but I reckon those are the main ones that matter. They are all very good reasons.

Costs and debts

With each IT solution there will always be costs, both initial and ongoing. Because we are talking about technology, and all technologies evolve to greater complexity over time, the ongoing costs will inevitably escalate. It’s not optional. This is what is commonly described as ‘technical debt’, but that is a horrible misnomer. It is not a debt, but the price we pay for the solutions we need. If we don’t pay it, our IT systems decay and die, starved of their connections with the evolving business and global systems around them. It’s no more of a debt than the need to eat or receive medical care is a debt for living.

Thinking locally, not globally

When money needs to be saved in an organization, senior executives tend to look at the inevitably burgeoning cost of IT and see it as ripe for pruning. IT managers thus tend to be placed under extreme pressure to ‘save’ costs. They might often be relieved about that, because they are almost certainly struggling to maintain the customized apps already, unless they have carefully planned for those increased costs over years (few do). Sensibly (from their own local perspective, given what they have been charged with doing), they therefore tend to strip out customizations, then shift to baseline applications and/or cloud-based services that offer financial savings or, at least, predictable costs, giving the illusion of control. Often, they wind up firing, repurposing, or not renewing contracts for development staff, support staff, and others with deep knowledge of the old tools and systems. This keeps the budget in check, so they achieve the goals set for them.

Unfortunately, assuming that the organization continues to need to do what it has been doing up to that point, the unavoidable consequence is that things that computers used to do are now done by people in the workforce instead. When made to perform hard mechanical tasks that computers can and should do, people are invariably far more fallible, slow, inconsistent, and inefficient. Far more. They tend to be reluctant, too. To make things worse, these mundane repetitive tasks take time, and crowd out other, more important things that people need to do, such as the things they were hired for. People tend to get tired, angry, and frustrated when made to do mechanical things over which they have little agency, which reduces productivity much further than simply the time lost in doing them. To make matters even worse, there is inevitably going to be a significant learning curve, during which staff try to figure out how to do the work of machines. This tends to lead to inflated training budgets (usually involving training sessions that, as decades of research show, are rarely very effective and that have to be repeated), time to read documentation, and more time taken out of the working day. Creativity, ingenuity, innovation, problem-solving, and interaction with others all suffer. The organization as a whole consequently winds up losing many times more (usually by orders of magnitude) than it saved on IT costs, though the IT budget now looks healthy again so it is often deemed to be a success. This is like taking the wheels off a car then proudly pointing to the savings in fuel that result. Unfortunately, such general malaises seldom appear in budget reports, and are rarely accounted for at all, because they get lost in the work that everyone is doing. Often, the only visible signs that it has happened are that the organization just gets slower, less efficient, less creative, more prone to mistakes, and less happy. Things start to break, people start to leave, sick days multiply. The reputation of the organization begins to suffer.
 
This is usually the point at which more radical, large-scale changes to the organization are proposed, again usually driven by senior management who (unless they listen very carefully to what the workforce is telling them) may well attribute the problems they are seeing to the wrong causes, such as external competition. A common approach to the problem is to impose more austerity, thus delivering the killing blow to an already demoralized workforce. That’s an almost guaranteed disaster. Another common way to tackle it is to take greater risks – made all the riskier by having just converted creative, problem-solving, inquisitive workers into cogs in the machine – in the hope of opening up new sources of revenue or different goals. When done under pressure, that seldom ends well, though at least it has some chance of success, unlike austerity. This vicious cycle is hard to escape from. I don’t know of any really effective way to deal with it once it has happened.

Thinking in systems

The way to avoid it in the first place is not to kill off and directly replace custom IT solutions with baseline alternatives. There are very good reasons for almost all of those customizations, and those reasons have almost certainly not gone away: all those I mentioned at the start of the post don’t suddenly cease to apply. It is therefore positively stupid to simply remove them without an extremely deep, multifaceted analysis of how they are used and who uses them, and even then only with enormous conservatism and care. However, you probably still want to get rid of them eventually anyway because, as well as being an ever-increasing cost, they have probably become increasingly out of line with how the organization and the world around it are evolving. Unless there has been a steady increase in investment in new IT staff (too rare), so much time is probably now spent keeping old systems going that there is no time to work on improvements or new initiatives. Unless more money can be put into maintaining them (a hard sell, though important to try), the trick is not to slash and burn, and definitely not to replace old customized apps with something different and less well-tailored, but to gently evolve towards whatever long-term solution seems sensible, using techniques such as those I describe below. This has a significant cost, too, but it’s not usually as high, and it can be spread over a much longer period.
 

For example…

If you wish to move away from reliance on a heavily customized learning management system to a more flexible and adaptive learning ecosystem made of more manageable pieces, the trick is to, first of all, build connectors into and out of your old system (if they do not already exist), to expose as many discrete services as possible, and then to make use of plugin hooks (or similar) to seamlessly replace existing functions with new ones. The same may well need to be done with the new system, if it does not already work that way. This is the most expensive part, because it normally demands development time, and what is developed will have to be maintained, but it’s worth it. What you are doing, at an abstract level, is creating boundaries around parts that can be treated as distinct (functions, components, objects, services, etc) and making sure that the signals that pass between them can be understood in the same way by subsystems on either side of the boundary.
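
To make that first step concrete, here is a minimal sketch (all names are hypothetical, and the legacy lookup is stubbed) of exposing one discrete function of an old system as a service behind a stable boundary:

```python
# A thin service wrapper around one piece of a legacy system. Consumers
# depend only on the URL and JSON shape, never on how the old system
# produces the data behind it.

from flask import Flask, jsonify  # pip install flask

app = Flask(__name__)

def legacy_class_list(course_id: str) -> list[str]:
    # Stand-in for the custom code buried in the old LMS; in reality
    # this would call its internals or query its database.
    return ["s001", "s002", "s003"]

@app.route("/api/v1/courses/<course_id>/classlist")
def classlist(course_id: str):
    # The boundary: a stable, documented signal in and out.
    return jsonify({"course": course_id, "students": legacy_class_list(course_id)})

if __name__ == "__main__":
    app.run(port=8080)
```

Once a handful of functions are exposed this way, either side of each boundary can be reworked or replaced without the other noticing.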

Open industry standards (APIs, protocols, etc) are almost essential here, because apps at both sides of the boundary need to speak the same language. Proprietary APIs are risky: you do not want to start doing this then have a vendor decide to change its API or its terms and conditions. It’s particularly dangerous to do this with proprietary cloud-based services, where you don’t have any control whatsoever over APIs or backends, and where sudden changes (sometimes without even a notification that they are happening) are commonplace. It’s fine to use containers or virtual machines in the cloud – they can be replaced with alternatives if things go wrong, and can be treated much like applications hosted locally – and it’s fine to use services with very well defined boundaries, with standards-based APIs to channel the signals. It is also fine to build your own, as long as you control both sides of the boundary, though maintenance costs will tend to be higher. It is not fine to use whole proprietary applications or services in the cloud because you cannot simply replace them with alternatives, and changes are not under your control. Ideally, both old and new systems should be open source so that you are not bound to one provider, you can make any changes you need (if necessary), and you can rely on having ongoing access to older versions if things change too fast.
 
Having done this, you have two main ways to evolve, which you can choose between according to need:

  1.  to gradually phase in the new tools you want and phase out the old ones you don’t want in the old system until, like the ship of Theseus, you have replaced the entire thing (see the sketch after this list). This lets you retain your customizations and existing investments (especially in knowledge of those systems) for the longest time, because you can replace the parts that do not rely on them before tackling those that do. Meanwhile, those same fresh tools can start to make their appearance in whatever other new systems you are trying to build, and you can make a graceful, planned transition as and when you are ready. This is particularly useful if there is a great deal of content and learning already embedded in the system, which is invariably the case with LMSs. It means people can mostly continue to work the way they’ve always worked, while slowly learning about and transitioning to a new way of working.
  2.  to make use of some services provided by the old system to power the new one. For instance, if you have a well-established means of generating class lists or collecting assessment data that involves a lot of custom code, you can offer that as a service from the old tool to your new tool, rather than reimplementing it afresh straight away or requiring users to manually replace the custom functions with fallible human work. Eventually, once the time is right to move and you can afford it, you can then simply replace it with a different service, with virtually no disruption to anyone. This is better when you want a clean break, especially useful when the new system does things that the original could not do, though it still normally allows simultaneous operation for a while if needed, as well as the option to fall back to the old system in the event of a disaster.
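
As a concrete illustration of the first option, here is a minimal sketch (URLs and tool names entirely invented) of the thin routing layer that makes a ship-of-Theseus migration possible: tools move from the old platform to the new one by editing a single table, while users keep a single entry point.

```python
# Route each named tool to whichever system currently owns it. The
# MIGRATED set grows one entry at a time, as each tool is reimplemented
# in the new system and verified.

OLD_BACKEND = "https://lms-legacy.example.edu"
NEW_BACKEND = "https://learning.example.edu"

MIGRATED = {"calendar", "messaging"}

def backend_for(tool: str) -> str:
    """Return the base URL of the system that serves this tool."""
    return NEW_BACKEND if tool in MIGRATED else OLD_BACKEND

for tool in ("calendar", "gradebook", "discussions"):
    print(f"{tool:12} -> {backend_for(tool)}")
# calendar     -> https://learning.example.edu
# gradebook    -> https://lms-legacy.example.edu
# discussions  -> https://lms-legacy.example.edu
```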

There are other hybrid alternatives, such as setting up other systems to link both, so that the systems do not interact directly but via a common intermediary. In the case of an LMS migration, this might be a learning record store (LRS) or student record system, for instance. The general principle, though, is to keep part or all of the old system running simultaneously for however long it is needed, parcellating its tools and services, while slowly transitioning to the new. Of course, this does imply extra cost in the short term, because you now have to manage at least two systems instead of one. However, by phasing it this way you greatly reduce risk, spread costs over a timeframe that you control, and allow for changes in direction (including reversal) along the way, which is always useful. The huge costs you save are those that are hidden from conventional accounting – the time, motivation, and morale of the workforce that uses the system. As a useful bonus, this service-oriented approach to building your systems also allows you to insert other new tools and implement other new ideas with a greatly diminished level of risk, with fewer recurring costs, and without the one-time investment of having to deal with your whole monolithic codebase and data. This is great if you want to experiment with innovations at scale. Once you have properly modularized your system, you can grow it and change it by a process of assembly. It often allows you to offer more control to end users, too: for instance, in our LMS example you might allow individuals to choose between different approaches to a discussion forum, or content presentation, or to insert a research-based component without so many of the risks (security, performance, reliability, etc) normally associated with implementing less well-managed code.
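
To give a flavour of the intermediary approach, here is a minimal sketch, assuming a hypothetical LRS endpoint and credentials, of old and new systems both emitting the same standard xAPI signal to a shared learning record store, so that neither needs to talk to the other directly:

```python
# Both systems write 'completed' statements to a common xAPI LRS.
# Endpoint, credentials, and activity IDs are invented for this sketch.

import requests  # pip install requests

LRS_ENDPOINT = "https://lrs.example.edu/xapi/statements"
LRS_AUTH = ("lrs_key", "lrs_secret")

def record_completion(actor_email: str, activity_id: str) -> None:
    """Send a standard xAPI 'completed' statement to the LRS."""
    statement = {
        "actor": {"mbox": f"mailto:{actor_email}"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": "completed"},
        },
        "object": {"id": activity_id},
    }
    response = requests.post(
        LRS_ENDPOINT,
        json=statement,
        auth=LRS_AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    response.raise_for_status()

# The legacy LMS and its replacement can both call this with their own IDs:
record_completion("student@example.edu", "https://lms-legacy.example.edu/unit/1")
```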

Signals and boundaries

In essence, this is all about signals and boundaries. The idea is to identify – and, if they don’t exist, create – boundaries between distinct parts of systems, then to focus all your management efforts on the signals that pass across them. As long as the signals remain the same on both sides, what lies on either side of a boundary can be isolated and replaced when needed. This happens to be the way that natural systems mainly evolve too, from organisms to ecosystems. It has done good service for a billion years or so.
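
To make that concrete, here is a minimal sketch (names invented, continuing the class-list example from earlier) of a boundary expressed as an explicit interface: as long as both implementations honour the same signals, either can be swapped in without consumers noticing.

```python
# The boundary is the Protocol; the signals are its argument and return
# types. Anything that honours them can sit behind the boundary.

from typing import Protocol

class ClassListProvider(Protocol):
    def class_list(self, course_id: str) -> list[str]: ...

class LegacyProvider:
    def class_list(self, course_id: str) -> list[str]:
        # imagine a query against the old, heavily customized system
        return ["s001", "s002"]

class CloudProvider:
    def class_list(self, course_id: str) -> list[str]:
        # imagine a standards-based API call to the replacement system
        return ["s001", "s002"]

def enrol_report(provider: ClassListProvider, course_id: str) -> str:
    # Depends only on the boundary, so either provider will do.
    return f"{course_id}: {len(provider.class_list(course_id))} students"

print(enrol_report(LegacyProvider(), "COMP650"))
print(enrol_report(CloudProvider(), "COMP650"))
```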


Education for life or Education for work? Reflections on the RBC Future Skills Report

Tony Bates extensively referenced this report from the Royal Bank of Canada on Canadian employer demands for skills over the next few years, in his characteristically perceptive keynote at CNIE 2019 last week (it’s also referred to in his most recent blog post). It’s an interesting read. Central to its many findings and recommendations are that the Canadian education system is inadequately designed to cope with these demands and that it needs to change. The report played a big role in Tony’s talk, though his thoughts on appropriate responses to that problem were independently valid in and of themselves, and not all were in perfect alignment with the report.

[Image: Tony Bates at CNIE 2019]

The 43-page manifesto (including several pages of not very informative graphics) combines some research findings with copious examples to illustrate its discoveries, and various calls to action based on them. Not surprisingly, I guess, for a document intended to ignite change, it is often rather hard to tell in any detail how the research itself was conducted. The methodology section is mainly on page 33, but it doesn’t give much more than a broad outline of how the main clustering was performed and the general approach to discovering information. It seems that a lot of work went into it, but it is hard to tell how that work was conducted.

A novel (-ish) finding: skillset clusters

Perhaps the most distinctive and interesting research discovery in the report is a predictive/descriptive model of skillsets needed in the workplace. By correlating occupations from the federal NOC (National Occupational Classification) with a US Labor Department dataset (O*NET) the researchers abstracted and identified six distinct clusters of skillsets, the possessors of which they characterize as:

  • solvers (engineers, architects, big data analysts, etc)
  • providers (vets, musicians, bloggers, etc)
  • facilitators (graphic designers, admin assistants, Uber drivers, etc)
  • technicians (electricians, carpenters, drone assemblers, etc)
  • crafters (fishermen, bakers, couriers, etc)
  • doers (greenhouse workers, cleaners, machine-learning trainers, etc)

From this, they make the interesting, if mainly anecdotally supported, assertion that there are clusters of occupations across which these skills can be more easily transferred. For instance, they reckon, a dental assistant is not too far removed from a graphic designer because both are high on the facilitator spectrum (emotional intelligence needed). They do make the disclaimer that, of course, other skills are needed and someone with little visual appreciation might not be a great graphic designer despite being a skilled facilitator. They also note that, with training, education, apprenticeship models, etc, it is perfectly possible to move from one cluster to another, and that many jobs require two or more anyway (mine certainly needs high levels of all six). They also note that social skills are critical, and are equally important in all occupations. So, even if their central supposition is true, it might not be very significant.

There is a somewhat intuitive appeal to this, though I see enormous overlap between all of the clusters and find some of the exemplars and descriptions of the clusters weirdly misplaced: in what sense is a carpenter not a crafter, or a graphic designer not a provider, or an electrician not a solver, for instance? It treads perilously close to the borders of x-literacies – some variants of which come up with quite similar categories – or learning style theories, in its desperate efforts to slot the world into manageable niches regardless of whether there is any point to doing so. The worst of these is the ‘doers’ category, which seems to be a lightly veiled euphemism for ‘unskilled’ (which, as they rightly point out, relates to jobs that are mostly under a great deal of threat). ‘Doing’ is definitely ripe for transfer between jobs because mindless work in any occupation needs pretty much the same lack of skill. My sense is that, though it might be possible to see rough patterns in the data, the categories are mostly very fuzzy and blurred, and could easily be used to label people in very unhelpful ways. It’s interesting from a big picture perspective, but, when you’re applying it to individual human beings, this kind of labelling can be positively dangerous. It could easily lead to a species of the same general-to-specific thinking that caused the death of many airplane pilots prior to the 1950s, until the (obvious but far-reaching) discovery that there is no such thing as an average-sized pilot. You can classify people into all sorts of types, but it is wrong to make any further assumptions about them because you have done so. This is the fundamental mistake made by learning style theorists: you can certainly identify distinct learner types or preferences but that makes no difference whatsoever to how you should actually teach people.

Education as a feeder for the job market

Perhaps the most significant and maybe controversial findings, though, are those leading more directly to recommendations to the educational and training sector, with a very strong emphasis on preparedness for careers ahead. One big thing bothers me in all of this. I am 100% in favour of shifting the emphasis of educational institutions from knowledge acquisition to more fundamental and transferable capabilities: on that, the researchers of this report hit the nail on the head. However, I don’t think that the education system should be thought of, primarily, as a feeder for industry or preparation for the workplace. Sure, it’s definitely one important role for education, but I don’t think it’s the dominant one, and it’s very dangerous indeed to make that its main focus to the exclusion of the rest. Education is about learning to be a human in the context of a society; it’s about learning to be part of that culture and at least some of its subcultures (and, ideally, about understanding different cultures). It’s a huge binding force, it’s what makes us smart, individually and collectively, and it is by no means limited to things we learn in institutions or organizations. Given their huge role in shaping how we understand the world, at the very least media (including social media) should, I think, be included whenever we talk of education. In fact, as Tony noted, the shift away from institutional education is rapid and on a vast scale, bringing many huge benefits, as well as great risks. Outside the institutions designed for the purpose, education is often haphazard, highly prone to abuse, susceptible to mob behaviours, and sometimes deeply harmful (Trump, Brexit, etc being only the most visible tips of a deep malaise). We need better ways of dealing with that, which is an issue that has informed much of my research. But education (whether institutional or otherwise) is for life, not for work.

I believe that education is (and should be) at least partly concerned with passing on what we know, who we have been, who we are, how we behave, what we value, what we share, how we differ, what drives us, how we matter to one another. That is how it becomes a force for societal continuity and cohesion, which is perhaps its most important role (though formal education’s incidental value to the economy, especially through schools, as a means to enable parents to work cannot be overlooked). This doesn’t have to exclude preparation for work: in fact, it cannot. It is also about preparing people to live in a culture (or cultures), and to continue to learn and develop productively throughout their lives, evolving and enhancing that culture, which cannot be divorced from the tools and technologies (including rituals, norms, rules, methods, artefacts, roles, behaviours, etc) of which the cultures largely consist, including work. Of course we need to be aware of, and incorporate into our teaching, some of the skills and knowledge needed to perform jobs, because that’s part of what makes us who we are. Equally, we need to be pushing the boundaries of knowledge ever outwards to create new tools and technologies (including those of the arts, the humanities, the crafts, literature, and so on, as well as of sciences and devices) because that’s how we evolve. Some – only some – of that will have value to the economy. And we want to nurture creativity, empathy, social skills, communication skills, problem-solving skills, self-management skills, and all those many other things that make our culture what it is and that allow us to operate productively within it, that also happen to be useful workplace skills.

But human beings are also much more than their jobs. We need to know how we are governed, the tools needed to manage our lives, the structures of society. We need to understand the complexities of ethical decisions. We need to understand systems, in all their richness. We need to nurture our love of arts, sports, entertainment, family life, the outdoors, the natural and built environment, fine (and not fine) dining, being with friends, talking, thinking, creating stuff, appreciating stuff, and so on. We need to develop taste (of which Hume eloquently wrote hundreds of years ago). We need to learn to live together. We need to learn to be better people. Such things are (I think) more who we are, and more what our educational systems should focus on, than our productive roles in an economy. The things we value most are, for the most part, seldom our economic contributions to the wealth of our nation, and the wealth of a nation should never be measured in economic terms. Even those few who love money the most usually love the power it brings even more, and that’s not the same thing as economic prosperity for society. In fact, it is often the very opposite.

I’m not saying economic prosperity is unimportant, by any means: it’s often a prerequisite for much of the rest, and sometimes (though far from consistently) a proxy marker for them. And I’m not saying that there is no innate value in the process of achieving economic prosperity: many jobs are critical to sustaining that quality of life that I reckon matters most, and many jobs actually involve doing the very things we love most. All of this is really important, and educational systems should cater for it. It’s just that future employment should not be thought of as the main purpose driving education systems.

Unfortunately, much of our teaching actually is heavily influenced by the demands of students to be employable, heavily reinforced on all sides by employers, families, and governments, and that tends to lead to a focus on topics, technical skillsets, and subject knowledge, not so much to the exclusion of all the rest, but as the primary framing for it. For instance, HT to Stu Berry and Terry Anderson for drawing my attention to the mandates set by the BC government for its post secondary institutions, that are a litany of shame, horribly focused on driving economic prosperity and feeding industry, to the exclusion of almost anything else (including learning and teaching, or research for the sake of it, or things that enrich us as human beings rather than cogs in an economic machine). This report seems to take the primary role of education as a driver of economic prosperity as just such a given. I guess, being produced by a bank, that’s not too surprising, but it’s worth viewing it with that bias in mind.

And now the good news

What is heartwarming about this report, though, is that employers seem to want (or think they will want) more or less exactly those things that also enrich our society and our personal lives. Look at this fascinating breakdown of the skills employers think they will need in the future (Tony used this in his slides):

[Image: Projected skills demands, from the RBC future skills report]


There’s a potential bias due to the research methodology, which I suspect encouraged participants to focus on more general skills, but it’s really interesting to see what comes in the first half of the list and what dwindles into unimportance at the end.

Topping the list are active listening, speaking, critical thinking, comprehension, monitoring, social perceptiveness, coordination, time management, judgement and decision-making, active learning, service orientation, complex problem solving, writing, instructing, persuasion, learning strategies, and so on. These mostly quite abstract skills (in some cases propensities, albeit propensities that can be cultivated) can only emerge within a context, and it is not only possible but necessary to cultivate them in almost any educational intervention in any subject area, so it is not as though they are being ignored in our educational systems. More on that soon. What’s interesting to me is that they are the human things, the things that give us value regardless of economic value. I find it slightly disconcerting that ethical or aesthetic sensibilities didn’t make the list and there’s a surprising lack of mention of physical and mental health but, on the whole, these are life skills more than just work skills.

Conventional education can and often does cultivate these skills. I am pleased to brag that, as a largely unintentional side-effect of what I think teaching in my fields should be about, these are all things I aim to cultivate in my own teaching, often to the virtual exclusion of almost everything else. Sometimes I have worried (a little) that I don’t have very high technical expectations of my students. For instance, my advanced graduate level course in information management provides technical skills in database design and analysis that are, for the most part, not far above high-school level (albeit that many students go far beyond that); my graduate level social computing course demands no programming skills at all (technically, they are optional); my undergraduate introduction to web programming course sometimes leads to limited programming skills that would fail to get them a passing grade in a basic computer science course (though they typically pass mine). However (and it’s a huge HOWEVER) they have a far greater chance to acquire far more of those skills that I believe matter, and (gratifyingly) employers seem to want, than those who focus only on mastery of the tools and techniques. My web programming students produce sites that people might actually want to visit, and they develop a vast range of reflective, critical thinking, complex problem-solving, active learning, judgment, persuasion, social perceptiveness and other skills that are at the top of the list. My information management students get all that, and a deep understanding of the complex, social, situated nature of the information management role, with some notable systems analysis skills (not so much the formal tools, but the ways of understanding and thinking in systems). My social computing students get all that, and come away with deep insights into how the systems and environments we build affect our interactions with one another, and they can be fluent, effective users and managers of such things. All of the successful ones develop social and communication skills, appropriate to the field. Above all, my target is to help students to love learning about the subjects of my courses enough to continue to learn more. For me, a mark of successful teaching is not so much that students have acquired a set of skills and knowledge in a domain but that they can, and actually want to, continue to do so, and that they have learned to think in the right ways to successfully accomplish that. If they have those skills, then it is not that difficult to figure out specific technical skillsets as and when needed. Conveniently, and not because I planned it that way, that happens to be what employers want too.

Employers don’t (much) want science or programming skills: so what?

Even more interesting, perhaps, than the skills employers do want are the skills they do not want, from Operation Monitoring onwards in the list, that are often the primary focus of many of our courses. Ignoring the real nuts and bolts stuff at the very bottom like installation, repairing, maintenance, selection (more on that in a minute), it is fascinating that skills in science, programming, and technology design are hardly wanted at all by most companies, but are massively over-represented in our teaching. The writers of the report do offer the proviso that it is not impossible that new domains will emerge that demand exactly these skills but, right now and for the foreseeable future, that’s not what matters much to most organizations. This doesn’t surprise me at all. It has long been clear that the demand for people that create the foundations is, of course, going to be vastly smaller than the demand for people that build upon them, let alone the vastly greater numbers that make use of what has been built upon them. It’s not that those skills are useless – that’s a million miles from the truth – but that there is a very limited job market for them. Again, I need to emphasize that educators should not be driven by job markets: there is great value in knowing this kind of thing regardless of our ability to apply it directly in our jobs. On the other hand, nor should we be driven by a determination to teach all there is to know about foundations, when what interests people (and employers, as it happens) is what can be done with them. And, in fact, even those building such foundations desperately need to know that too, or the foundations will be elegant but useless.

Importantly, those ‘foundational’ skills are actually often anything but, because the emergent structures that arise from them obey utterly different rules to the pieces of which they are made. Knowing how a cell works tells you nothing whatsoever about the function of a heart, let alone how you should behave towards others, because different laws and principles apply at different levels of organization. A sociologist, say, really doesn’t need to know much about brain science, even though our brains probably contribute a lot to our social systems, because it’s the wrong foundation, at the wrong level of detail. Similarly, there is not a lot of value in knowing how CPUs work if your job is to build a website, or a database system supporting organizational processes (it’s not useless, but it’s not very useful so, given limited resources, it makes little sense to focus on it). For almost all occupations (paid or otherwise) that make use of science and technology, it matters vastly more to understand the context of use, at the level of detail that matters, than it does to understand the underlying substructures. This is even true of scientists and technologists themselves: for most scientists, social and business skills will have a far greater effect on their success than fundamental scientific knowledge.

But, if students are interested in the underlying principles and technologies on which their systems are based, then of course they should have freedom and support to learn more about them. It’s really interesting stuff, irrespective of market demand. It enriches us. Equally, they should be supported in discovering gothic literature, social psychology, the philosophy of art, the principles of graphic design, wine making, and anything else that matters to them. Education is about learning to be, not just learning to do.
Nothing of what we learn is wasted or irrelevant. It all contributes to making us creative, engaged, mutually supportive human beings.

With that in mind, I do wonder a bit about some of the skills at the bottom of the list. It seems to me that all of the bottom four demand – and presuppose – just about all of those in the top 12. At least, they do if they are done well. Similarly for a few others trailing the pack. It is odd that operation monitoring is not much desired, though monitoring is. It is strange that troubleshooting is low in the ranks, but problem-solving is high. You cannot troubleshoot without solving problems. It’s fundamental. I guess it speaks to the idea of transferability and the loss of specificity in roles. My guess is that, in answering the questions of the researchers, employers were hedging their bets a bit and not assuming that specific existing job roles will be needed. But conventional teachers could, with some justification, observe that their students are already acquiring the higher-level, more important skills, through doing the low-level stuff that employers don’t want as much. Though I have no sympathy at all with our collective desire to impose this on our students, I would certainly defend our teaching of things that employers don’t want, at least partly because (in the process) we are actually teaching far more. I would equally defend even the teaching of Latin or ancient Greek (as long as these are chosen by students, never when they are mandated) because the bulk of what students learn is never the skill we claim to be teaching. It’s much like what the late, wonderful, and much lamented Randy Pausch called a head fake – to be teaching one thing of secondary importance while primarily teaching another deeper lesson – except that rather too many teachers tend to be as deceived as their students as to the real purpose and outcomes of their teaching.

Automation and outsourcing

As the report suggests, it may also be that those skills lower in the ranking tend to be things that can often be outsourced, including (sooner or later) to machines. It’s not so much that the jobs will not be needed, but that they can be either automated or concentrated in an external service provider, reducing the overall job market for them. Yes, this is true. However, again, the methodology may have played a large role in coming to this conclusion. There is a tendency, of which we are all somewhat guilty, to look at current patterns of change (in this case the trend towards automation and outsourcing) and to assume that they will persist into the future. I’m not so sure.

Outsourcing

Take the stampede to move to the cloud, for instance, which is a clear underlying assumption in at least the undervaluing of programming. We’ve had phases of outsourcing several times before over the past 50 or 60 years of computing history. Cloud outsourcing is only new to the extent that the infrastructure to support it is much cheaper and more well-established than it was in earlier cycles, and there are smarter technologies available, including many that benefit from scale (e.g. AI, big data). We are currently probably at or near peak Cloud but, even if it has yet to peak, it is just a trend. It might last a little longer than the previous generations (which, of course, never actually went away – it’s just an issue of relative dominance) but it suffers from most of the problems that brought previous outsourcing hype cycles to an end. The loss of in-house knowledge, the dangers of proprietary lock-in, the surrender of control to another entity that has a different (and, inevitably, at some point conflicting) agenda, and so on, are all counter-forces that hold outsourcing in check. History and common sense suggest that there will eventually be a reversal of the trend and, indeed, we are seeing it here and there already, with the emergence of private clouds, regional/vertical cloud layers, hybrid clouds, and so on. Big issues of privacy and security are already high on the agendas of many organizations, with an increasing number of governments starting to catch up with legislation that heavily restricts unfettered growth of (especially) US-based hosting, with all the very many very bad implications for privacy that entails. Increasingly, businesses are realizing that they have lost the organizational knowledge and intelligence to effectively control their own systems: decisions that used to be informed by experts are now made by middle-managers with insufficient detailed understanding of the complexities, who are easy prey for cloud companies willing to exploit their ignorance. Equally, they are liable to be outflanked by those who can adapt faster and less uniformly, inasmuch as everyone gets the same tools in the Cloud so there is less to differentiate one user of it from the next. OK, I know that is a sweeping generalization – there are many ways to use cloud resources that do not rely on standard tools and services. We don’t have to buy in to the proprietary SaaS rubbish, and can simply move servers to containers and VMs while retaining control, but the cloud companies are persuasive and keen to lure us in, with offers of reduced costs, higher reliability, and increased, scalable performance that are very enticing to stressed, underfunded CIOs with immediate targets to meet. Right now, cloud providers are riding high and making ridiculously large profits on it, but the same was true of IBM (and its lesser competitors) in the 60s and 70s. They were brought down (though never fully replaced) by a paradigm change that was, for the most part, a direct reaction to the aforementioned problems, plus a few that are less troublesome nowadays, like performance and cost of leased lines. I strongly suspect something similar will happen again in a few years.

Automation and the end of all things we value

Automation – especially through the increased adoption of AI techniques – may be a different matter. It is hard to see that becoming less disruptive, even though the reality is and will be much more mundane than the hype, and there will be backlashes. However, I greatly fear that we have a lot of real stupidity yet to come in this. Take education, for instance. Many people whose opinions I otherwise respect are guilty of thinking that teachers can, to a meaningful extent, be replaced by chatbots. They are horribly misguided but, unfortunately, people are already doing it, and claiming success, not just in teaching but in fooling students into believing they are being taught by a real teacher.

You can indeed help people to pass tests through the use of such tools. However, the only thing that tests prove about learning is that you have learned to pass them. That’s not what education is for. As I’ve already suggested, education is really not much to do with the stuff we think we teach. It is about being and becoming human. If we learn to be human from what are, in fact, really very dumb machines with no understanding whatsoever of the words they speak, no caring for us, no awareness of the broader context of what they teach, no values to speak of at all, we will lower the bar for artificial intelligence because we will become so much dumber ourselves. It will be like being taught by an unusually tireless and creepily supportive (because why would you train a system to be otherwise?) person. We should not care for them, and that matters, because caring (both ways) is critical to the relationship that makes learning with others meaningful. But it will be even worse if and when we do start caring for them (remember the Tamagotchi?). When we start caring for soulless machines (I don’t mean ‘soul’ in a religious or transcendent sense), when it starts to matter to us that we are pleasing them, we will learn to look at one another in the same way and, in the process, lose our own souls. A machine, even one that fools us into believing it is human, makes a very poor role model. Sure, let them handle helpdesk enquiries (and pass them on if they cannot help), let them supplement our real human interactions with useful hints and suggestions, let them support us in the tasks we have to perform, let them mark our tests to double-check we are being consistent: they are good at that kind of thing, and will get better. But please, please, please don’t let them replace teachers.

I am afraid of AI, not because I am bothered by the likelihood of an AGI (artificial general intelligence) superseding our dominant role on the planet: we have at least decades to think about that, and we can and will augment ourselves with dumb-but-sufficient AI to counteract any potential ill effects. The worst outcome of AI in the foreseeable future is that we devalue ourselves, that we mistake the semblance of humanity for humanity itself, that machines will become our role models. We may even think they are better than us, because they will have fewer human foibles and a tireless, on-demand semblance of caring that we will mistake for being human (a bit like obsequious serving staff seeking tips in a restaurant, but creepier, less transparent, and infinitely patient). Real humans will disappoint us. Bots will be trained to be what their programmers perceive as the best of us, even though we have no more than the glimmerings of an idea of what ‘best’ actually means (philosophers continue to struggle with this after thousands of years, and few programmers have studied philosophy at even a basic level). That way the end of humanity lies: slowly, insidiously, barely noticeably at first. Not with a bang but with an Alicebot.

Arthur C. Clarke delightfully claimed that any teacher who could be replaced by a machine should be. I fear that we are not smart enough to realize that it is, in fact, very easy to ‘successfully’ replace a teacher with a machine if you don’t understand the teacher’s true role in the educational machine and don’t make massive changes to it. As long as we think of education as the achievement of pre-specified outcomes that we measure using primitive tools like standardized tests, exams, and other inauthentic metrics, chatbots will quite easily supersede us, despite their inadequacies. It is way too easy to mistake the weirdly evolved educational system that we are part of for education itself: we already do so in countless ways. Learning management systems, for instance, are not designed for learning: they are designed to replicate mediaeval classrooms, with all the trimmings, yet they have been embraced by nearly all institutions because they fit the system. AI bots will fit even better. If we do intend to go down this path (and many are doing so already) then please let’s think of these bots as supplemental, first-line support, and please let’s make it abundantly clear that they are limited, fixed-purpose mechanisms: not substitutes but supplements that can free us from trivial tasks to let us concentrate on being more human.

Co-ops and placements

The report makes a lot of recommendations, most of which make sense – e.g. lifelong support for learning from governments, a focus on softer, more flexible skills, a focus on adaptability, etc. Notable among these is the suggestion, as one of its calls to action, that all PSE students should engage in some form of meaningful work-integrated learning placement during their studies. This is something we have been talking about offering to our computing program students at Athabasca University for some time, though demand is low because a large majority of our students are already working while studying, and it is a logistical nightmare to do this across the whole of Canada and much of the rest of the globe. Though some AU programs embed it (nursing, for instance), I’m not sure we will ever get round to it in computing. I do very much agree that co-ops and placements are typically a good idea for (at least) vocationally oriented students in conventional in-person institutions. I supervised a great many of these (for computing students) at my former university and observed the extremely positive effects they usually had, especially on those taking the more humanistic computing programs like information systems, applied computing, computer studies, and so on. When they came back from their sandwich year (UK terminology), students were nearly always far wiser, far more motivated, and far more capable of studying than the relatively few who skipped the opportunity. Sometimes they were radically transformed – I saw borderline-fail students turn into top performers more than once – and, apart from when things fell apart (not common, but not unheard of), the placement was nearly always worth more to them than the previous couple of years of traditional teaching. It was expensive and disruptive to run, demanding a lot from all academic staff and especially from those who had to organize it all, but it was worth it.

But just because it works in conventional institutions doesn’t mean that it’s a good idea: it’s a technological solution that works because conventional institutions don’t. Let’s step back from this for a moment. Learning in an authentic context, when it is meaningful and relevant to clear and pressing needs, surrounded by all the complexities of real life (notwithstanding that education should buffer some of that, and make the steps less risky or painful), in a community of practice, is a really good idea. Apprenticeship models have thousands of years of successful implementation to prove their worth, and that’s essentially what co-ops and placements achieve, albeit only in a limited (typically 3-month to 1-year) timeframe. It’s even a good idea when the study area and working practices do not coincide, because it allows many more connections to be made in both aspects of life. But why not extend that to all (or almost all) of the process? To an extent, this is what we at Athabasca already do, although it tends to be more the default context than something we take intentional advantage of. Again, my courses are an exception – most of mine (and all, to some extent) rely on students having a meaningful context of their own, and give opportunities to integrate work or other interests with study by default. In fact, one of the biggest problems I face in my teaching arises on those rare occasions when students don’t have sufficient aspects of work or leisure that engage them (e.g. prisoners or visiting students from other universities), or work in contexts that cannot be used (e.g. defence workers).

I have seen it work in in-person contexts, too: the Teaching Company Scheme in the UK, which later became Knowledge Transfer Partnerships, has been hugely successful over several decades, marrying workplace learning with academic input, usually leading to a highly personalized MSc or MA while offering great benefits to lecturers, employers, and students alike. Such projects are fun, but resource-intensive, to supervise. Largely for this reason, in the past it might have been hard to scale this below graduate levels of learning, but modern technologies – shared workspaces, blogs, portfolio management tools, rich realtime meeting tools, etc. – and a more advanced understanding of ways to identify and record competencies make it far more feasible. It seems to me that what we want is not co-ops or placements but a robust (and, ideally, publicly funded) approach to integrating academic and in-context learning. Already, a lot of my graduate students and a few undergraduates are funded by their employers, working on our courses at the same time as doing their existing jobs, which seems to benefit all concerned, so there’s clearly a demand. And it’s not just an option for vocational learning. Though (working in computing) much of my teaching has a vocational grounding, if not a vocational focus, I have come across students elsewhere across the university who are doing far less obviously job-related studies with the support of their employers. In fact, it is often a much better idea for students to learn things that are not directly applicable to their workplace, because the boundary-crossing it entails improves a vast range of the most important skills identified in the RBC report – creativity, communication, critical thinking, problem solving, judgement, listening, reading, and so on. Good employers see the value in that.

Conclusions

Though this is a long post, I have only cherry-picked a few of the many interesting issues that emerge from the report. I think, though, that there are some consistent general themes in my reactions to it:

1: it’s not about money

Firstly, the notion that educational systems should be primarily thought of as feeders for industry is dangerous nonsense. Our educational systems are preparation for life (in society and its cultures), and work is only a part of that. Preparedness for work is better seen as a side effect of education, not its purpose. And education is definitely not the best vehicle for driving economic prosperity. The teaching profession is almost entirely populated by extremely smart, capable people who (especially in relation to their qualifications) earn relatively little money. To cap it all, we often work longer hours, in poorer conditions, than many of our similarly capable industry colleagues. Though a fair living wage is, of course, very important to us, and we get justly upset when offered unfair wages or worsening conditions, we don’t work for pay: we are paid for our work. Notwithstanding that a lack of money is a very bad thing indeed and should be avoided like the plague, we choose this profession precisely because we think there are some things – common things – that are much more important than money (this may also partly account for a liberal bias in the profession, though it also helps that the average IQ of teachers is a bit above the norm). And, whether explicitly or otherwise, this is inevitably part of what we teach. Education is not primarily about learning a set of skills and facts: it’s about learning to be, and the examples that teachers set, the ways they model roles, cannot help but come laden with their own values. Even if we scrupulously tried to avoid it, the fact of our existence serves as a prime example of people who put money relatively low on their list of priorities. If we have an influence (and I hope we do) we therefore encourage people to value things other than a large wage packet. So, if you are going to college or school in the hope of learning to make loads of money, you’re probably making the wrong choice. Find a rich person instead and learn from them.

2: it is about integrating education and the rest of our lives

Despite its relentless focus on improving the economy, I think this report is fundamentally right in most of the suggestions it makes about education, though it doesn’t go far enough. It is not so much that we should focus on job-related skills (whatever they might be) but that we should integrate education with and throughout our lives. The notion of taking someone out of their life context and inflicting on them a bunch of knowledge-acquisition tasks with inauthentic, teacher-led criteria for success, not to mention subjugating them to teacher control over all that they do, is plain dumb. There may be odd occasions where retreating from and separating education from the world is worthwhile, but they are few and far between, and can be catered for on an individual-needs basis.

Our educational processes evolved in a very different context, where the primary intent was to teach dogma to the many by the few, and where physical constraints (the rarity of books and reading skills, the limited availability of scholars, the limits of physical spaces) made lectures in dedicated spaces appropriate solutions to those particular technical problems. Later, education evolved to focus more on creating a pliant and capable workforce to meet the needs of employers and the military, which happened to fit fairly well with the one-to-many, top-down-control models devised to teach divinity and the like. Though those days are mostly over, we still retain strong echoes of these roles in much of our structure and processes – our pedagogies are still deeply rooted in the need to learn specific stuff, dictated and directed by others, in this weird, artificial context. Somewhere along the way (in part because higher education, at least, was formerly a scarce commodity) we turned into filters and gatekeepers for employment purposes.

But, today, we are trying to solve different problems. Modern education has tended to tread a shifting path between supporting individual development and improving our societies: these should be mutually supportive roles, though different educational systems tend to put more emphasis on one than the other. With that in mind, it no longer makes sense to routinely (in fact, almost universally) take people out of their physical, social, or work context to learn stuff. There are times when it helps or may even be necessary: when we need access to expensive shared resources (that mediaeval problem again), for instance, or when we need to work with in-person communities (it is hard to teach acting unless you have an opportunity to act with other actors, for example), or when it would be notably dangerous to practice in the real world (though virtual simulations can help). But, on the whole, we learn far better in a real-world context, where we can put our learning directly into useful practice, where it has value to us and those around us. Community matters immensely – for learning, for motivation, for diversity of ideas, for belonging, for connection – and one of the greatest values of traditional education is that it provides a ready-made social context. We should not throw the baby out with the bathwater, and it is important to sustain such communities, online or in-person. But that need not be, and should never be, the only social context, and it does not need to be the main social context for learning. Pleasingly, in his own excellent keynote at CNIE, our president Neil Fassina made some very similar points. I think that Athabasca is well on course towards a much brighter future.

3: what we teach is not what you learn

Finally, the whole education system (especially in higher education) is one gigantic head fake. By and large, the subjects we teach are of relatively minor significance. We teach ways of thinking, we teach values, we teach a few facts and skills, but mainly we teach a way of being. For all that, what you actually learn is something else entirely, and it is different from what every one of your co-learners learns, because 1) you are your own main and most important teacher and 2) you are surrounded by others (in person, in the artefacts they create, online) who also teach you. We need to embrace that far more than we typically do. We need to acknowledge and celebrate the differences in every single learner, not teach stuff at them in the vain belief that what we have to tell them matters more than what they want to learn, or that somehow (contrary to all evidence) everyone arrives and leaves knowing the same stuff. We’ve got to stop rewarding compliance and punishing non-compliance.

What you learn changes you. It makes you able to see things differently, do things differently, make new connections. Anything you learn. There is no such thing as useless learning. It is, though, certainly possible to learn harmful things – misconceptions, falsehoods, blind beliefs, and so on – so the most important skill is to distinguish those from the things that are helpful (not necessarily true – helpful). On the whole, I don’t like approaches to teaching that make you learn stuff faster (though they can be very useful when solving some kinds of problem) because they devalue the journey. I like approaches that help you learn better: deeper, more connected, more transformative.

This doesn’t mean that the RBC report is wrong in criticizing our current educational systems, but it is wrong to believe that the answer is to stop (or reduce) teaching the stuff that employers don’t think they need. Learners should learn whatever they want or need to learn, whenever they need to do so, and educational institutions (collectively) should support that. But that doesn’t mean teachers should teach only what learners (or employers, or governments) think they should teach, because 1) we always teach more than that, whether we want to or not, and it all has value, and 2) none of these entities are our customers. The heartbreaking thing is that some of the lessons most of us unintentionally teach – from mindless capitulation to authority, to the terrible approaches to learning nurtured by exams, to the truly awful beliefs that people do not like, or are not able, to learn certain subjects or skills – are firmly in the harmful category. It does mean that we need to be more aware of the hidden lessons, and of what our students are actually learning from them. We need to design our teaching in ways that allow students to make it relevant and meaningful in their lives. We need to design it so that every student can apply their learning to things that matter to them; we need to help them to reflect and connect, to adopt approaches, attitudes, and values that they can use constantly throughout their lives, in the workplace or not. We need to help them to see what they have learned in a broader social context, to pay it forward and spread their learning contagiously, both in and out of the classroom (or wherever they are doing their learning). We need to be partners and collaborators in learning, not providers. If we do that then, even if we are teaching COBOL, Italian Renaissance poetry, or some other ‘useless’ subject, we will be doing what employers seem to want and need. More importantly, we will be enriching lives, whether or not we make people fiscally richer.