Little monsters and big waves

 

Pokémon at Auschwitz (image)

Some amazing stories have been emerging lately about Pokémon GO, from people wandering through live broadcasts in search of monsters, to muggers luring victims, to discoveries of dead bodies, to monsters in art galleries and museums, to people throwing phones to try to capture Pokémon, to it overtaking Facebook in engagement (by a mile), to cafes going from empty to full in a day thanks to one little monster, to people entering closed zoo enclosures and indulging in multiple other dangerous behaviours (including falling off a cliff), to uses of Pokémon to raise money for charity, to applause for its mental and physical health benefits, to the saving of 27 (real) animals, to religious edicts from more than one religion to avoid it, to cheating boyfriends being found out by following their Pokémon GO tracks.

And so on.

Of all of them, my current favourite is the story of the curators of Auschwitz having to ask people not to play the game within its bounds. It’s kind of poetic: people are finding fictional monsters and playing games with them in a memorial that is there, more than anything, to remind us of real monsters. We shall soon see a lot more and a lot wilder clashes between reality and augmented reality, and a lot more unexpected consequences, some great, some not. Lives will be lost, lives will be changed. There will be life-affirming acts, there will be absurdities, there will be great joy, there will be great sadness. As business models emerge, from buttons to sponsorship to advertising to trading to training, there will be a lot of money being made in a vast, almost instant ecosystem. Above all, there will be many surprises. So many adjacent possibles are suddenly emerging.

AR (augmented reality) has been on the brink of this breakthrough moment for a decade or so. I did not guess that, when it finally happened, it would explode in less than a week, but here it is. Some might quibble about whether Pokémon GO is actually AR as such (it overlays rather than augments reality), but, if there were once a more precise definition of AR, there isn’t any more. There are now countless millions who are inhabiting a digitally augmented physical space, very visibly sharing the same consensual hallucinations, and they are calling it AR. It’s not that it’s anything new. Not at all. It’s the sheer scale of it. The walls of the dam are broken and the flood has begun.

This is an incredibly exciting moment for anyone with the slightest interest in digital technologies or their effects on society. The fact that it is ‘just’ a game only makes it all the more remarkable. For some, this seems like just another passing fad: bigger than most, a bit more interesting, but just a fad. Perhaps so. I don’t care. For me, it seems like we are witnessing a sudden, irreversible, and massive global shift in our perceptions of the nature of digital systems, of the ways that we can use them, and of what they mean in our lives. This is, with only a slight hint of hyperbole, about to change almost everything.

Aside: it’s not VR, by the way

Zuckerberg and an audience wearing Samsung Gears (Facebook image)

There has been a lot of hype of late around AR’s geekier cousin, VR (virtual reality), notably relating to Oculus, HTC Vive, and PlayStation VR, but I’m not much enthused. VR has moved only incrementally since the early 90s, and the same problems we saw back then persist in almost exactly the same form now, just with more dots. It’s cool, but I don’t find the experience really that much more immersive than it was in the early 90s, once you get over the initial wowness of the far higher fidelity. There are a few big niches for it (hard-core gaming, simulation, remote presence, etc), and that’s great. But, for most of us, its impact will (in its current forms) not come close to that of PCs, smartphones, tablets, TVs or even games consoles. Something that cuts us off from the real world so completely, especially while it is so conspicuously physically engulfing our heads in big tech, cannot replace very much of what we currently do with computers, and only adds a little to what we can already do without it. Notwithstanding its great value in supporting shared immersive spaces, the new ways it gives us to play with others, and its great potential in games and education, it is not just asocial, it is antisocial. Great big tethered headsets (and even untethered low-res ones) are inherently isolating. We also have a long way to go towards finding a good way to move around in virtual spaces: this hasn’t changed much for the better since the early 90s, despite much innovation. And that’s not to mention the ludicrous amounts of computing power needed for it by today’s standards: my son’s HTC Vive requires a small power station to keep it going, and it blows hot air like a noisy fan heater. It is not helped by the relative difficulty of creating high fidelity interactive virtual environments, nor by vertigo issues. It’s cool, it’s fun, but this is still, with a few exceptions, geek territory. Its big moment will come, but not quite yet, and not as a separate technology: it will be just one of the features that comes for free with AR.

Bigger waves

AR, on the whole, is the opposite of isolating. You can still look into the eyes of others when you are in AR, and participate not just in the world around you, but in an enriched and more social version of it. A lot of the fun of Pokémon GO involves interacting with others, often strangers, and it involves real-world encounters, not avatars. More interestingly, AR is not just a standalone technology: as we start to use more integrated technologies like heads-up displays (HUDs) and projectors, it will eventually envelop VR too, along with screen-based technologies like PCs, smartphones, TVs, e-readers, and tablets, as well as a fair number of standalone smart devices like the Amazon Echo (though the Internet of Things will integrate interestingly with it). It has been possible to replace screens with glasses for a long time (devices between $100 and $200 abound) but, till now, there has been little point apart from privacy, curiosity, and geek cred. They have offered less convenience than cellphones, and a lot of (literal and figurative) headaches: they are either tethered or have tiny battery lives, they are uncomfortable, fragile, and awkward to use, high resolution versions cost a lot, and most are as isolating as VR. Perhaps most of all, as long as they remain a tiny niche product, there are some serious social obstacles to wearing HUDs in public. That is all about to change. They are about to become mainstream.

The fact that AR can be done right now with no more than a cellphone is cool, and has been for a few years, but it will get much cooler as the hardware for HUDs becomes better and more widespread and, most importantly, as more people share the augmented space. The scale is what makes the Pokémon GO phenomenon so significant, even though it is currently mostly a cellphone and GO Plus thing. It matters because, apart from being really interesting in its own right, enough people will soon want hardware to match, and that will make it worth going into serious mass production. At that point it gets really interesting, because lots of people will be wearing HUD AR devices.

Google’s large-scale Glass experiment was getting there (and it’s not over yet), but it was mostly viewed with mild curiosity and a lot of suspicion. Why would any normal person want to look like the Borg? What were the wearers doing with those very visible cameras? What were they hiding? Why bother? The tiny minority that wore them were outsiders, weirdos, geeks, a little creepy. But things have moved on: the use cases have suddenly become very compelling, enough (I think) to overcome the stigma. The potentially interesting Microsoft HoloLens, the incredibly interesting Magic Leap, and the rest (Meta 1, Recon Jet, Moverio, etc, etc) that are queueing up on the sidelines are nearly here. Apparently, Pokémon GO with a HoloLens might be quite special. Apple’s rumoured foray into the field might be very interesting. Samsung’s contact-lens camera system is still a twinkling in Samsung’s eye, but it and many things even more amazing are coming soon. Further off, as nanotech develops and direct neural interfaces become available, the possibilities are (hopefully not literally) mind blowing.

What this all adds up to is that, as more of us start to use such devices, the computer as an object, even in its ubiquitous small smartphone or smartwatch form, will increasingly disappear. Tools like wearables and smart digital assistants have barely even arrived yet, but their end is palpably nigh. Why bother with a smart watch when you can project anything you wish on your wrist (or anywhere else, for that matter)? Why bother with having to find a device when you are wearing any device you can imagine? Why take out a phone to look for Pokémon? Why look at a screen when you can wear a dozen of them, anywhere, any size, adopting any posture you like? It will be great for ergonomics. This is pretty disruptive: whole industries are going to shrink, perhaps even disappear.

The end of the computer

Futurologists and sci-fi authors once imagined a future filled with screens, computers, smartphones and visible tech. That’s not how it will be at all. Sure, old technologies never die, so these separate boxes won’t disappear altogether, and there’s still plenty of time left for innovation in such things, and vast profits still to be made in them as this revolution begins. There may be a decade or two of growth left for these endangered technologies. But the mainstream future of digital technologies is much more human, much more connected, much more social, much more embedded, and much less visible. The future is AR. The whirring big boxes and things with flashing lights that eat our space, our environment, our attention and our lives will, if they exist at all, be hidden in well-managed farms of servers, or in cupboards and walls. This will greatly reduce our environmental impact, the mountains of waste, the ugliness of our built spaces. I, for one, will be glad to see the disappearance of TV sets, of mountains of wires on my desk, of the stacks of tablets, cellphones, robots, PCs, and e-readers that litter my desktop, cupboards and basement. OK, I’m a bit geeky. But most of our homes and workplaces are shrines to screens and wiring. It’s ugly, it’s incredibly wasteful, it’s inhibiting. Though smartness will be embedded everywhere, in our clothing, our furniture, our buildings, our food, the visible interface will appear on displays that play only in or on our heads, and in or on the heads of those around us, in one massive shared hyperreality, a blend of physical and virtual that we all participate in, perhaps sharing the same virtual space, perhaps a different one, perhaps one physical space, perhaps more. At the start, we will wear geeky goggles, visors and visible high tech, but this will just be an intermediate phase. Pretty soon they will start to look cool, as designers with less of a Star Trek mentality step in. Before long, they will be no more weird than ordinary glasses. Later, they will almost vanish. The end point is virtual invisibility, and virtual ubiquity.

AR at scale

Pokémon GO has barely scratched the surface of this adjacent possible, but it has given us our first tantalizing glimpses of the unimaginably vast realms of potential that emerge once enough people hook into the digitally augmented world and start doing things together in it. To take one of the most boringly familiar examples, will we still visit cinemas when we all have cinema-like fidelity in devices on or in our heads? Maybe. There’s a great deal to be said for doing things together in a physical space, as Pokémon GO shows us with a vengeance. But, though we might be looking at the ‘same’ screen, in the same place, there will be no need to project it. Anywhere can become a cinema just as anywhere can be a home for a Pokémon. Anywhere can become an office. Any space can turn into what we want it to be. My office, as I type this, is my boat. This is cool, but I am isolated from my co-workers and students, channeling all communication with them through the confined boundaries of a screen. AR can remove those boundaries, if I wish. I could be sitting here with friends and colleagues, each in their own spaces or together, ‘sitting’ in the cockpit with me or bobbing on the water. I could be teaching, with students seeing what I see, following my every move, and vice versa. When my outboard motor needs fixing (it often does), I could see it with a schematic overlay, or receive direct instruction from a skilled mechanic: the opportunities for the service industry, from plumbing to university professoring, are huge. I could replay events where they happened, including historical events that I was not there to see, things that never happened, things that could happen in the future, what-if scenarios, things that are microscopically small, things that are unimaginably huge, and so on. This is a pretty old idea with many mature existing implementations (e.g. here, here, here and here). Till now they have been isolated phenomena, and most are a bit clunky. As this becomes accepted as mainstream, it will cascade into everything. Forget rose-tinted spectacles: the world can be whatever I want it to become. In fact, this could be literally true, not just virtually: I could draw objects in the space they will eventually occupy (such virtual sculpture apps already exist for VR), then 3D print them.

Just think of the possibilities for existing media. Right now I find it useful to work on multiple monitors because the boundaries of one screen are insufficient to keep everything where I need it at once. With AR, I can have dozens of them or (much more interestingly) forget the ‘screen’ metaphor altogether and work as fluidly as I like with text, video, audio and more, all the while remaining as aware of the rest of my environment, and the people in it, as I wish. Computers, including cellphones, isolate: they draw us into them, draw our gaze away from the world around us. AR integrates with that world, and integrates us with it, enhancing both physical and virtual space, enhancing us. We are and have only ever been intelligent as a collective, our intelligence embedded in one another and in the technologies we share. Suddenly, so much more of that can be instantly available to us. This is seriously social technology, though there will be some intriguing and messy interpersonal problems when each of us might be engaged in a private virtual world while outwardly engaging in another. There are countless ways this could (and will) play out badly.

Or what about a really old technology? I now have hundreds of e-books that sit forgotten, imprisoned inside that little screen, viewable a page at a time or listed in chunks that fit the dimensions of the device. Bookshelves – constant reminders of what we have read and augmenters of our intellects – remain one of the major advantages of p-books, as does their physicality, which reveals context, not just text. With AR, I will be able to see my whole library (and other libraries and bookstores, if I wish), sort it instantly, filter it, seek ideas and phrases, flick through books as though they were physical objects, or view them as a scroll, or one large sheet of virtual paper, or countless other visualizations that massively surpass physical books as media that contribute to my understanding of the text. Forget large format books for images: they can be 20 metres tall if we want them to be. I’ll be able to fling pages, passages, etc onto the wall or leave them hovering in the air, shuffle them, rearrange them, connect them. I’ll be able to make them disappear all at once, and reappear in the same form when I need them again. The limits are those of the imagination, not the boundaries of physical space. We will no doubt start by skeuomorphically incorporating what we already know but, as the adjacent possibles unfold, there will be no end to the creative potential to go far, far beyond that. This is one of the most boring uses of AR I can think of, but it is still beyond magical.

We will, surprisingly soon, continuously inhabit multiple worlds – those of others, those others invent, those that are abstract, those that blend media, those that change what we perceive, those that describe it, those that explain it, those that enhance it, those we assemble or create for ourselves. We will see the world through one another’s eyes, see into one another’s imaginations, engage in multiple overlapping spaces that are part real, part illusion, and we will do so with others, collocated and remote, seamlessly, continuously. Our devices will decorate our walls, analyze our diets, check our health. Our devices won’t forget things, will remember faces, birthdays, life events, connections. We may all have eidetic memories, if that is what we want. While cellphones make our lives more dangerous, these devices will make them safer, warning us when we are about to step into the path of an oncoming truck as we monitor our messages and news. As smartness is embedded in the objects around us, our HUDs will interact with them: no more lost shirts, no guessing the temperature of our roasts, no forgetting to turn off lights. We will gain new senses – seeing in the dark, even through walls, will become commonplace. We will, perhaps, sense small fluctuations in skin temperature to help us better understand what people are feeling. Those of us with visual impairment (most of us) will be able to zoom in, magnify, have text read to us, or delve deeper through QR codes or their successors. Much of what we need to know now will be unnecessary (though we will still enjoy discovering it, as much as we enjoy discovering monsters) but our ability to connect it will grow exponentially. We won’t be taking devices out of our pockets to do that, nor sitting in front of brightly lit screens.

We will very likely become very dependent on these ubiquitous, barely visible devices, these prostheses for the mind. We may rarely take them off. Not all of this will be good. Not by a mile. When technologies change us, as they tend to do, many of those changes will be negative. When they change us a lot, there will be a lot of negatives: lots of new problems they create as well as solve, lots of aggregations and integrations that will cause unforeseen woes. This video at vimeo.com/166807261 shows a nightmare vision of what this might be like, but it doesn’t need to be a nightmare: we will need to learn to tame it, to control it, to use it wisely. Ad blockers will work in this space too.

What comes next

AR has been in the offing for some time, but mainly as futuristic research in labs, half-baked experimental products like Google Glass, or ‘hey wow’ technologies like Layar, Aurasma, Google Translate, etc. Google, Facebook, Apple, Microsoft, Sony, Amazon, all the big players, as well as many thousands of startups, are already scrabbling frantically to get into this space, and to find ways to use what they already have to better effect. I suspect they are looking at the Pokémon GO phenomenon with a mix of awe, respect, and avarice (and, in Google’s case, perhaps a hint of regret). Formerly niche products like Google Tango or Structure Sensor are going to find themselves a lot more in the spotlight as the value of being able to accurately map physical space around us becomes ever greater. Smarter ways of interacting, like this at www.youtube.com/watch?v=UA_HZVmmY84, will sprout like weeds.

People are going to pay much more attention to existing tools and wonder how they can become more social, more integrated, more fluid, less clunky. We are going to need standards: isolated apps are quite cool, but the big possibilities occur when we are able to mash them up, integrate them, allow them to share space with one another. It would be really useful if there were an equivalent of the World Wide Web for the augmented world: a means of addressing not just coordinates but surfaces, objects, products, trees, buildings, etc, that any application could hook into, that is distributed and open, not held by those that control the APIs. We need spatial and categorical hyperlinks between things that exist in physical and virtual space. I fear that, instead, we may see more of the evils of closed APIs controlled by organizations like Facebook, Google, Apple, Microsoft, Amazon, and their kin. Hopefully they will realise that they will get bigger benefits from expanding the ecosystem (I think Google might get this first) but there is a good chance that short-termist greed will get the upper hand instead. The web had virgin, non-commercial ground in which to flourish before the bad people got there. I am not sure that such a space exists any more, and that’s sad. Perhaps HTML 6 will extend into physical space. That might work. Every space, every product, every plant, every animal, every person, addressable via a URL.
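To make the idea a little more concrete, here is a minimal sketch (in Python, and entirely hypothetical: the ar:// scheme, the anchor fields and the example URLs are invented for illustration, not any existing standard) of what an open, addressable anchor for physical things might look like, with spatial and categorical hyperlinks that any application could resolve:

```python
# A toy model of an open "web of things and places": every physical thing gets
# a stable, URL-like address plus spatial coordinates and typed links to other
# anchors, which any app could hook into without going through a closed API.
from dataclasses import dataclass, field

@dataclass
class ARAnchor:
    url: str                  # e.g. "ar://vancouver/seawall/bench-42" (hypothetical scheme)
    lat: float                # WGS84 coordinates of the anchored thing
    lon: float
    elevation_m: float = 0.0
    kind: str = "object"      # categorical type: "building", "plant", "product", "person", ...
    links: list = field(default_factory=list)  # spatial/categorical hyperlinks to other anchors

def link(a: ARAnchor, b: ARAnchor, relation: str) -> None:
    """Record a typed hyperlink between two anchored things."""
    a.links.append((relation, b.url))

# Usage: any application could attach its own layer to the same shared anchors.
bench = ARAnchor("ar://vancouver/seawall/bench-42", 49.2827, -123.1207, kind="object")
tree = ARAnchor("ar://vancouver/seawall/maple-7", 49.2828, -123.1210, kind="plant")
link(bench, tree, "shaded_by")
print(bench.links)
```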

There will be ever more innovations in battery and other power/power-saving technologies, display technologies and usability: the abysmal battery life of current devices, in particular, will soon be very irritating. There will likely be a lot of turf wars as different cloud services compete for user populations, different standards and APIs compete for apps, and different devices compete for customers. There will be many acquisitions. Privacy, already a major issue, will take a pounding, as new ways of invading it proliferate. What happens when Google sees all that you see? Measures your room with millimetre accuracy? Tracks every moment of your waking life? What happens when security services tap in? Or hackers? Or advertisers? There will be pushback and resistance, much of it justified. New forms of DRM will struggle to contain what needs to be free: ownership of digital objects will be hotly contested. New business models (personalized posters anyone? in situ personal assistants? digital objects for the home? mashup museums and galleries?) will enrage us, inform us, amuse us, enthrall us. Facebook, temporarily wrong-footed in its ill-considered efforts to promote Oculus, will come back with a vengeance and find countless new ways to exploit us (if you think it is bad now, imagine what it will be like when it tracks our real-world social networks). The owners of the maps and the mapped data will become rich: Niantic is right now sitting on a diamond as big as the Ritz. We must be prepared for new forms of commerce, new sources of income, new ways of learning, new ways of understanding, new ways of communicating, new notions of knowledge, new tools, new standards, new paradigms, new institutions, new major players, new forms of exploitation, new crimes, new intrusions, new dangers, new social problems we can so far barely dream of. It will certainly take years, not months, for all of this to happen, though it is worth remembering that network effects kick in fast: Pokémon GO took only a few days. It is coming, significant parts of it are already here, and we need to be preparing for it now. Though the seeds have been germinating for many years, they have germinated in relatively isolated pockets. This simple game has opened up the whole ecosystem.

Pokéducation

I guess, being an edtech blogger, I should say a bit more about the effects of Pokémon GO on education, but that’s mostly for another post, and much of it is implied in what I have written so far. There have been plenty of uses of AR in conventional education so far, and there will no doubt be thousands of ways that people use Pokémon GO in their teaching (some great adjacent possibles in locative, gamified learning), as well as ways to use the countless mutated purpose-built forms that will appear any moment now, and that will be fun, though not earth-shattering. I have, for instance, been struggling to find useful ways to use geocaching in my teaching (of computing etc) for over a decade, but it was always too complex to manage, given that my students are mostly pretty sparsely spread across the globe: basically, I don’t have the resources to populate enough geocaches. The kind of mega-scale mapping that Niantic has successfully accomplished could now make this possible, if they open up the ecosystem. However, most uses of AR will, at first, simply extend the status quo, letting us do better what we have always done, much of which we only needed to do because of physics. The real disruption, the result of the fact that we can overcome physics, will take a while longer, and will depend on the ubiquity of more integrated, seamlessly networked forms of AR. When the environment is smart, the kind of intelligence we need to make use of it is quite different from most of what our educational systems are geared up to provide. When connection between the virtual and physical is ubiquitous, fluid and high fidelity, we don’t need to limit ourselves to conventional boundaries of classes, courses, subjects and schools. We don’t need to learn today what we will only use in 20 years’ time. We can do it now. Networked computers made this possible. AR makes it inevitable. I will have more to say about this.

This is going to change things. Lots of things.

 

Cocktails and educational research

A lot of progress has been made in medicine in recent years through the application of cocktails of drugs. Those used to combat AIDS are perhaps the best known, but there are many other applications of the technique to everything from lung cancer to Hodgkin’s lymphoma. The logic is simple. Different drugs attack different vulnerabilities in the pathogens (etc.) that they seek to kill. Though evolution means that some bacteria, viruses or cancers are likely to be adapted to escape one attack, the more different attacks you make, the less likely it is that any will survive.

Simulated learning (image)

Unfortunately, combinatorial complexity means this is not simply a question of throwing a bunch of the best drugs of each type together and gaining their benefits additively. I have recently been reading John H. Miller’s ‘A crude look at the whole: the science of complex systems in business, life and society’, which is, so far, excellent, and which addresses this and many other problems in complexity science. Miller uses the nice analogy of fashion to help explain the problem: if you simply choose the most fashionable belt, the trendiest shoes, the latest greatest shirt, the snappiest hat, etc, the chances of walking out with the most fashionable outfit by combining them together are virtually zero. In fact, there’s a very strong chance that you will wind up looking pretty awful. The problem is not easily susceptible to reductive science because the variables all affect one another deeply. If your shirt doesn’t go with your shoes, it doesn’t matter how good either is separately. The same is true of drugs. You can’t simply pick those that are best on their own without understanding how they all work together. Not only may they not combine additively, they may often have highly negative effects, or may prevent one another being effective, or may behave differently in a different sequence, or in different relative concentrations. To make matters worse, side effects multiply as well as therapeutic benefits so, at the very least, you want to aim for the smallest number of compounds in the cocktail that you can get away with. Even were the effects of combining drugs positive, it would be premature to believe that the result is the best possible solution unless you have actually tried them all. And therein lies the rub, because there are really a great many ways to combine them.

Miller and colleagues have been using the ideas behind simulated annealing to create faster, better ways to discover working cocktails of drugs. They started with 19 drugs which, a small bit of math shows, could be combined in 2 to the power of 19 different ways – about half a million possible combinations (not counting sequencing or relative strength issues). As only 20 such combinations could be tested each week, the chances of finding an effective, let alone the best combination, were slim within any reasonable timeframe. Simplifying a bit, rather than attempting to cover the entire range of possibilities, their approach finds a local optimum within one locale by picking a point and iterating variations from there until the best combination is found for that patch of the fitness landscape. It then checks another locale and repeats the process, and iterates until they have covered a large enough portion of the fitness landscape to be confident of having found at least a good solution: they have at least several peaks to compare. This also lets them follow up on hunches and to use educated guesses to speed up the search. It seems pretty effective, at least when compared with alternatives that attempt a theory-driven intentional design (too many non-independent variables), and is certainly vastly superior to methodically trying every alternative, inasmuch as it is actually possible to do this within acceptable timescales.
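For the flavour of the thing, here is a minimal sketch (in Python) of that kind of restart-and-anneal search over 19-drug cocktails. It is not Miller's actual algorithm: the scoring function, parameters and penalties are invented stand-ins for the real weekly lab assays.

```python
# Simulated-annealing-style search over drug cocktails represented as 19-bit
# tuples, restarted from several random corners of the fitness landscape.
import math
import random

N_DRUGS = 19

def score(combo):
    """Placeholder fitness: a deterministic fake effect, with a mild penalty per drug."""
    rng = random.Random(hash(combo))
    return rng.random() - 0.05 * sum(combo)

def anneal(start, steps=20, temp=0.2, cooling=0.9):
    """Flip one drug at a time; sometimes accept worse cocktails (i.e. go downhill)."""
    current, current_score = start, score(start)
    best, best_score = current, current_score
    for _ in range(steps):
        i = random.randrange(N_DRUGS)
        candidate = current[:i] + (1 - current[i],) + current[i + 1:]
        s = score(candidate)
        # Always accept improvements; accept worse moves with a probability that
        # shrinks as the temperature cools, which lets the search escape anthills.
        if s > current_score or random.random() < math.exp((s - current_score) / temp):
            current, current_score = candidate, s
            if s > best_score:
                best, best_score = candidate, s
        temp *= cooling
    return best, best_score

def search(restarts=10):
    """Collect peaks from several different locales, then compare them."""
    starts = [tuple(random.randint(0, 1) for _ in range(N_DRUGS)) for _ in range(restarts)]
    return max((anneal(s) for s in starts), key=lambda p: p[1])

print(search())
```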

The central trick is to deliberately go downhill on the fitness landscape at times, rather than always following an uphill route of continuous improvement, which may simply get you to the top of an anthill rather than the peak of Everest. Miller very effectively shows that this is the fundamental error committed by followers of the Six Sigma approach to management, an iterative method of process improvement originally invented to reduce errors in the manufacturing process: it may work well in a manufacturing context with a small number of variables to play with in a fixed and well-known landscape, but it is much worse than useless when applied in a creative industry like, say, education, because the chances that we are climbing a mountain and not an anthill are slim to negligible. In fact, the same is true even in manufacturing: if you are just making something inherently weak as good as it can be, it is still weak. There are lessons here for those that work hard to make our educational systems work better. For instance, attempts to make examination processes more reliable are doomed to fail because it’s exams that are the problem, not the processes used to run them. As I finish this while listening to a talk on learning analytics, I see dozens of such examples: most of the analytics tools described are designed to make the various parts of the educational machine work ‘better’, i.e. (for the most part) to help ensure that students’ behaviour complies with teachers’ intent. Of course, the only reason such compliance was ever needed was for efficient use of teaching resources, not because it is good for learning. Anthills.

This way of thinking seems to me to have potentially interesting applications in educational research. We who work in the area are faced with an irreducibly large number of recombinable and mutually affective variables that make any ethical attempt to do experimental research on effectiveness (however we choose to measure that – so many anthills here) impossible. It doesn’t stop a lot of people doing it, and telling us about p-values that prove their point in more or less scrupulous studies, but such studies are – not to put too fine a point on it – almost always completely pointless. At best, they might be telling us something useful about a single, non-replicable anthill, from which we might draw a lesson or two for our own context. But even a single omitted word in a lecture, a small change in inflection, let alone an impossibly vast range of design, contextual, historical and human factors, can have a substantial effect on learning outcomes and effectiveness for any given individual at any given time. We are always dealing with a lot more than 2 to the power of 19 possible mutually interacting combinations in real educational contexts. For even the simplest of research designs in a realistic educational context, the number of possible combinations of relevant variables is more likely closer to 2 to the power of 100 (in base 10, that’s 1,267,650,600,228,229,401,496,703,205,376). To make matters worse, the effects we are looking for may sometimes not be apparent for decades (having recombined and interacted with countless others along the way) and, for anything beyond trivial reductive experiments that would tell us nothing really useful, could seldom be done at a rate of more than a handful per semester, let alone 20 per week. This is a very good reason to do a lot more qualitative research, seeking meanings, connections, values and stories rather than trying to prove our approaches using experimental results. Education is more comparable to psychology than medicine and suffers the same central problem – that the general does not transfer to the specific – as well as a whole bunch of related problems that Smedslund recently coherently summarized. The article is paywalled, but Smedslund’s abstract states his main points succinctly:

“The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project.”

You could simply substitute ‘education’ for ‘psychology’ in this, and it would read the same. But it gets worse, because education is as much about technology and design as it is about states of mind and behaviour, so it is orders of magnitude more complex than psychology. The potential for invention of new ways of teaching and new states of learning is essentially infinite. Reductive science thus has a very limited role in educational research, at least as it has hitherto been done.

But what if we took the lessons of simulated annealing to heart? I recently bookmarked an approach to more reliable research suggested by the Christensen Institute that might provide a relevant methodology. The idea behind this is (again, simplifying a bit) to do the experimental stuff, then to sweep the normal results to one side and concentrate on the outliers, performing iterations of conjectures and experiments on an ever more diverse and precise range of samples until a richer, fuller picture results. Although it would be painstaking and long-winded, it is a good idea. But one cycle of this is a bit like a single iteration of Miller’s simulated annealing approach, a means to reach the top of one peak in the fitness landscape, which may still be a low-lying peak. However, if, having done that, we jumbled up the variables again and repeated it starting in a different place, we might stand a chance of climbing some higher anthills and, perhaps, over time we might even hit a mountain and begin to have something that looks like a true science of education, in which we might make some reasonable predictions that do not rely on vague generalizations. It would either take a terribly long time (which itself might preclude it because, by the time we had finished researching, the discipline will have moved somewhere else) or would hit some notable ethical boundaries (you can’t deliberately mis-teach someone), but it seems more plausible than most existing techniques, if a reductive science of education is what we seek.

To be frank, I am not convinced it is worth the trouble. It seems to me that education is far closer as a discipline to art and design than it is to psychology, let alone to physics. Sure, there is a lot of important and useful stuff to be learned about how we learn: no doubt about that at all, and a simulated annealing approach might speed up that kind of research. Painters need to know what paints do too. But from there to prescribing how we should therefore teach spans a big chasm that reductive science cannot, in principle or practice, cross. This doesn’t mean that we cannot know anything: it just means it’s a different kind of knowledge than reductive science can provide. We are dealing with emergent phenomena in complex systems that are ontologically and epistemologically different from the parts of which they consist. So, yes, knowledge of the parts is valuable, but we can no more predict how best to teach or learn from those parts than we can predict the shape and function of the heart from knowledge of cellular organelles in its constituent cells. But knowledge of the cocktails that result – that might be useful.

 

 

Oh yes, that's why I left

St George Cross (Wikipedia)

England is a weird, sad, angry little country, where there is now unequivocal evidence that over half the population – mainly the older ones – believe that experts know nothing, and that foreigners (as well as millions of people born there with darker than average skins) are evil. England is a place filled with drunkenness and random violence, where it’s not safe to pass a crowd of teenagers – let alone a crowd of football supporters – on a street corner, where you cannot hang Xmas decorations outside for fear of losing them, where your class still defines you forever, where whinging is a way of life, where kindness is viewed with suspicion, where barbed wire fences protect schools from outsiders (or vice versa – hard to fathom), where fuckin‘ is a punctuation mark to underline what follows, not an independent word. It’s a nation filled with fierce and inhospitable people, as Horace once said, and it always has been. For all the people and places that I love and miss there, for all its very many good people and slowly vanishing places that are not at all like that, for all its dark and delicious humour, its eccentricity, its diversity, its cheeky irreverence, its feistiness, its relentless creativity, its excellent beer, its pork pies and its pickled onions, all of which I miss, that’s why I was glad to leave it.

It saddens and maddens me to see the country of my birth killing or, at least, seriously maiming itself in such a spectacularly and wilfully ignorant way, taking the United Kingdom, and possibly even the EU itself with it, as well as causing injury to much of the world, including Canada. England is a country-sized suicide bomber. Hopefully this mob insanity will eventually be a catalyst for positive change, if not in England or Wales then at least elsewhere. Until today I opposed Scottish independence, because nationalism is almost uniformly awful and the last thing we need in the world is more separatism, but it is far better to be part of something big and expansive like the EU than an unwilling partner in something small in soul and mind like the UK. Maybe Ireland will unify and come together in Europe. Perhaps Gibraltar too. Maybe Europe, largely freed of the burden of supporting and catering for the small-minded needs of my cantankerous homeland, will rise to new heights. I hope so, but it’s a crying shame that England won’t be a part of that. 

I am proud, though, of my home city, Brighton, the place where English people who don’t want to live in England live. About 70% of Brightonians voted to stay in the EU. Today I am proudly Brightonian, proudly European, but ashamed to be English. 

 

 

Humpback whale in English Bay

Damn it, I didn’t bring my big camera. The camera in my phone does not do this justice…

Humpback whale in English Bay

There is something genuinely awesome – in the original sense of the word – about being out on the water in a boat that is smaller than the creature swimming next to you. The humpback whale swam around us for about 40 minutes before moving on. Somewhere between 10 and 20 seals hung around nearby hoping for some left-overs, as did a small flock of seagulls. We tried to keep our distance (unlike a couple of boats) but the whale was quite happy to swim around us.

Whale

Learning and the Kardashians

As I am preparing for a talk next week on the future of online learning, and writing a bit in a paper about the same kind of thing, I am pleased to see another timely publication in a long line of excellent Pew reports on American life, this time focusing on lifelong learning, which is hugely relevant to what I will be speaking and writing about. As I need to think a bit more on this topic anyway, this seems like a good opportunity for reflection.

Findings of the report

Before moving on to my reflections, there are a few things that particularly stand out for me. For instance:

  • 74% of Americans have engaged in some deliberate personal learning (as measured by the researchers) over the past year, though only 16% have taken an online course.
  • 73% consider themselves to be lifelong learners.

This makes me worry greatly about the more than a quarter of Americans who have done no such thing and who do not consider themselves to be lifelong learners. It is hard to understand how one could be human and not consider oneself a learner, but the study’s design likely shaped the kind of answers it received. I will have more to say on that. It is also interesting that courses play such a small role. More on that later too.

I am fascinated by the motivations of the subjects of the study:

  • 80% of personal learners say they pursued knowledge in an area of personal interest because they wanted to learn something that would help them make their life more interesting and full.
  • 64% say they wanted to learn something that would allow them to help others more effectively.
  • 60% say they had some extra time on their hands to pursue their interests.
  • 36% say they wanted to turn a hobby into something that generates income.
  • 33% say they wanted to learn things that would help them keep up with the schoolwork of their children, grandchildren or other kids in their lives.

This accords better with my understanding of human beings. People love to learn, and learning has huge social value in both process and product. It is notable that far fewer of the study’s subjects have extrinsic than intrinsic motivation, and it appears that, for the vast majority, the extrinsic driver is at most a catalyst for them to do something that is intrinsically fulfilling. This is reinforced in the following graphs, which are a terrific confirmation of the predictions of self-determination theory (SDT):

the value of educational experiences to learners in the US

As we already know from SDT, the value of learning is fundamentally about achieving competence as a good thing in itself, deeply social in purpose and value, and highly concerned with being in (or gaining) control: in brief, competence, relatedness and autonomy support. This is exactly what we see here. It is noteworthy that, though advancement in occupations matters to professional learners, there is no mention of money nor of qualifications in any of this. This accords with the fact that only 16% of those in the study took courses, given that courses tend to lead to formal or less formal credentials. It is very unfortunate that institutional learning has become so concerned with courses and credentialing that all of these very good reasons for learning are crowded out. Much of the time, people in institutions learn in order to get the qualification, not for the pleasure that is so profoundly obvious in these findings. The luckiest ones get both. Most are not so lucky. More than a few get neither fulfillment nor credentials.

Matthew Effects: the rich get richer

The survey finds very strong links between existing education, prosperity and culture, and lifelong learning. Furthermore, the digital divide is, at least by some measures, widening:

As a rule, those adults with more education, household incomes and internet-connecting technologies are more likely to be participants in today’s educational ecosystem and to use information technology to navigate the world.

This is not too surprising – it’s pretty much there in the definition – but the Matthew Effect is in full swing here:

For personal learning, 87% of those with college degrees or more (throughout this report adults with college degrees or more refers to anyone who has at least a bachelor’s degree) have done such an activity in the past year, compared with 60% among those with high school degrees or less. For professional learning, about three quarters (72%) of employed adults with at least college degrees have engaged in some sort of job-related training in the past year, while half (49%) of employed adults with high school degrees or less have done this.

Those that have learned to learn, and to see the value in it, learn more. They probably have more time and resources for it:

Among those with a smartphone and a home broadband connection (just over half the population), 82% have done some personal learning activity in the past year. For the remaining adults (those with just one of these connection devices or neither of them), 64% have done personal learning in the past year.

It is interesting that technology appears to have quite a large effect on learning. This is causal, not just a correlation. It’s not the tools, per se, but the adjacent possible that the tools bring. Basically, the tools can support learning or not but, if you don’t have the tools, the opportunity never arises. Those that claim technology has no effect on learning are simply wrong, but what is significant here is that it is not the teachers, but the learners, that make this so. There may be some very faint and equivocal glimmer of truth in the belief that technology does not normally do much to improve teaching, but it sure does a lot to improve learning.

Being America, a land of conspicuous inequality, the report shows that there are also strong divisions along ethnic lines, with African Americans and Hispanics considerably less likely to have engaged in personal learning, and somewhat less likely to have engaged in professional learning. The report is less clear whether this is a socio-economic issue or a more broadly cultural concern. I’m guessing a bit of both. When a social system separates particular groups, for whatever reason (and ethnicity is a deeply stupid reason), then patterns of behaviour are likely to cluster. As always, diversity (and the celebration of diversity) is much to be wished for here. We are wisest when we are exposed to and open to diverse views, values and opinions.

Finally, an opportunity for distance institutions like Athabasca University. Some of the notable preference for face-to-face learning (81% to 54%) is almost certainly down to lack of awareness of digital learning methods:

Noteworthy majorities of Americans say they are “not too” or “not at all” aware of these things:

  • Distance learning – 61% of adults have little or no awareness of this concept.
  • The Khan Academy, which provides video lessons for students on key concepts in things such as math, science, the humanities and languages – 79% of adults do not have much awareness of it.
  • Massive open online courses (MOOCs) that are now being offered by universities and companies – 80% of adults do not have much awareness of these.
  • Digital badges that can certify if someone has mastered an idea or a skill – 83% of adults do not have much awareness of these.


It seems we have not been particularly smart about getting the message out! That’s a huge and untapped population of people who do not even know that our methods of teaching exist, let alone that we ourselves do. At least some of those appear to be educated people with a thirst for knowledge.

Learning and the Kardashians

A lot of the inequalities demonstrated in the Pew report are deeply worrying and endemic. It seems to me that, as well as trying to address that imbalance directly, we in education should give a bit more thought to how we might embed productive learning more deeply into all our interactions, rather than just concentrating on making courses and tutorials in educational systems. While some of this embedding can be addressed with deliberate intent – popular channels, celebrity scientists and artists, accessible and appealing museums and galleries, subsidies for Internet access, libraries, etc – a lot of this is about system design. It’s about building tools and environments where critical and reflective engagement is part of the fabric of the system.

With that in mind, I think it is important to note a strong methodological bias in these findings. Significantly, they rely on self-reporting of deliberate learning activities that are largely defined by the researchers. There’s a strong bias towards things like courses, tutorials, guides, workshops, conferences and clubs that are explicitly designed to support learning. It is worth observing that most learning is not designed and not intentional (including in formal education). Almost every act of communication involves at least a hint of learning and, especially for interactive media such as Internet or Mobile technologies, the percentage of time spent learning in the process is normally significant. Almost all reading, watching and dialogue involves learning. We might not recognize it as such, but every time we learn of Bieber’s latest exploits, or Trump’s latest vileness, or our friend’s new puppy, we are deeply engaged in acts of learning. It is not just (and rarely most importantly) about the content of what is learned, but the ways of being that such learning engenders. Our values, beliefs and attitudes are deeply dependent on our interactions with others, mediated or not, and what we perceive of the world around us (especially the people and their creations within it). What we choose to observe or communicate changes us. Often, we engage critically with what we read or watch or talk about. Even simple learning from observation is not just about copying but about interpreting and constructing. Internet technologies, in particular, have massively increased the quantity and breadth of such observation and communication. Most of what we know is not learned deliberately but emerges through our interactions with other people and the world around us. Most of what even traditional teachers teach is not the content of what they teach but the ways of being and thinking that go along with it.

To suggest or imply, therefore, that lack of deliberate learning through conventional channels means that no learning is happening is deeply mistaken, and somewhat dangerous, because it ignores all but the visible tip of the iceberg. By far the biggest opportunities for education lie not in the stuff that we educators currently do for a job, but in embedding learning in the everyday; in designing pedagogies that are not pedagogies; in creating architectures where learning can thrive rather than in deliberately leading people in directions we think they should go. It is possibly sad but definitely true that the Kardashians are far better teachers with far greater reach than most professional teachers, apart from (maybe) celebrities like David Attenborough, Randall Munroe, David Suzuki or Neil deGrasse Tyson. What the Kardashians teach might seem to have little value and, arguably, might have negative value, but it should not be discounted as irrelevant learning. Nor, for that matter, should what we learn (directly and indirectly) from politicians, musicians and sports stars. The shapers of our emerging global society are many and varied, and I would be hesitant to suggest, snobbishly, that the reflective, critical, synthetic, analytic and creative skills that professional teachers try to support should have a monopoly over the emotional, social, value-forming ways of thinking that other contributors to society provide in greater measure.

Boundaries and education

Personally, I think the things we try to formally teach (not so much the content as the reflective, critical, synthetic, analytic and creative skills) matter a great deal. Taught well, they directly and demonstrably lead to better, healthier, richer, more creative, more caring, more productive societies, where people can look more critically on the likes of Trump and the Kardashians, with greater perspicacity, with greater creativity, and with more kindness to and understanding of those that think differently. But they also lead to a lot of things that are not so healthy, especially in their institutionalized control-freakery and cataleptic attitudes to change. Educational institutions have done and continue to do a lot of good but, if we really want to bring about a better, more educated world, there is a very good chance that they are no longer the ideal platform for it, and definitely far from the only one.

In my talk next week I will be exploring the ways that physical boundaries, notably of time and place, have deeply influenced how we go about the process of education. Almost all of our pedagogies are predicated on the assumption that a number of people need to gather in a particular place at a particular time, with associated structures, rules and processes to support that. Teachers are a scarce resource, classrooms are rival goods, and schedules matter. So we invented classes, courses, timetables, and methods of managing them. This in turn inevitably demands that people learn things they don’t need to learn, that they may be unable or unwilling to learn, at times that may not suit them, under conditions that greatly restrict their autonomy. All in all, despite good support for relatedness, this is terrible for motivation, and it crowds out almost all the great benefits that are reported on in the Pew study. One-to-one learning works much better because it largely avoids those constraints but is, for all but a few, economically unviable. Voluntary attendance at learning activities when needed (much of what is reported on in the study) is also good, but not well catered for in our educational systems, which need to adopt tight schedules and lack much flexibility. Thus, much of our pedagogical practice and almost all of our educational system is designed to overcome or reduce the demotivating side-effects of simple physics. All too often, and all too often institutionalized, the solution is to fall back on primitive behaviourist models of motivation that do a great deal more harm than good. Such physics seldom if ever applies online, where boundaries are inherently fuzzy, metaphorical, fluid and malleable. However, most of us still adopt substantially the same pedagogies, and we pointlessly (or worse) attempt to fit our teaching into systems that were designed for and with different boundaries. We even build tools like learning management systems that embody them, saving them from extinction and perhaps even magnifying them (it’s often easier to see what is going on in a live classroom than within the confines of an LMS). And, having done so, we cement the demotivational effects by controlling learners through grades and certificates, rewarding and punishing with Skinnerian efficiency. It’s no surprise that, when you take such things away, MOOC completion rates average no more than 15%, though they are improving, thanks mainly to better self-selection and to the increasing use of real reward and punishment through more recognized credentials (often becoming significantly less open in the process).

Shifting boundaries and open spaces

Though online boundaries are different, there are lessons to be drawn from the built environment. I am incredibly lucky to live in Vancouver, where public art, information and hey-wow architecture and design are everywhere to be seen. It is hard to look anywhere without being informed, delighted or provoked in useful ways, from the shapes of leaves immortalized on the sidewalks to street art and poetry on the walls. Our cognition is fundamentally distributed, and the richness of the spaces around us, virtual or physical, contributes considerably to how and what we know, as well as our values and behaviours. Even simple separation of space can make a huge difference. It took a while after coming here to realize what the main difference was between schools here and in the UK: fences. In the UK, a school is normally enclosed by tall fences that both keep people out and keep children in. Around the school along the sea wall from me there are no such barriers, and children play at break-time in the parks and playgrounds outside. It’s still very safe – many eyes see to that, as well as a culture of trust – but it makes all the difference in the world to the meaning of the space, especially to the children but also to the community around them. Such little things make big differences. Part of the value of that is, again, diversity: being exposed to different stimuli and people is always a good thing, and another of Vancouver’s immense strengths. The area around the school is a wonderful mix of expensive luxury waterfront property and cheap but attractive and well cared-for community housing: unless you happen to know that red roofs signify community housing, you would be very unlikely to spot the difference. Messing with boundaries and celebrating diversity is, of course, a big part of the thinking behind the Landing. It’s a space where boundaries are deliberately softened, where learning can be visible and shared, but which is still safe and where everyone is accountable. Simply opening up the space is enough to bring about greater and different learning, and a different attitude towards it.

Openness alone is not enough, though. Far too many public forums and comment areas (e.g. most newspaper sites) that are quite open are filled with vitriol, inanity and stupidity. Sure, a lot of learning happens, but mostly not in a productive or useful way, at least from my biased perspective and that of a lot of people that are turned off by it. I am guessing that this might well be what would happen if fences around UK schools were torn down without considering the surrounding community and environment. Community makes a huge difference: though I am sure they have to indulge in a bit of judicious pruning and moderation, when I read blogs by people like, say, Stephen Downes, George Siemens, Terry Anderson, or David Wiley, I see almost nothing but intelligent dialogue from those that comment, because those with an interest in the area have shared concerns and contested but concordant values. Well, perhaps the dialogue is not always intelligent, but at least it is always a learning dialogue. The downside of that is, of course, a relative lack of diversity in the communities that read their work.

So, environment matters too, and often helps to shape the community. For instance, I am still much smitten, after nearly two decades, by the model of SlashDot, which shapes learning dialogues through a combination of smart algorithms and, most importantly, the actions and interactions of people using the system. The best of these dialogues is more than a match for any textbook or classroom, and the worst are not too bad: anything else evolves away. The algorithms are complex and it takes skill to get the most out of them, so it is way too geeky to be of general use, but it shows the general methods and principles that might underlie a system that makes knowledge grow and learning happen simply by shaping the space of interaction, giving individuals the tools to filter and form the space, and providing a space to gather. Less sophisticated/effective but more generally usable tools of this nature include Reddit and StackExchange, which combine ratings and karma information to allow the community to shape what the community sees. While both are flawed and neither is infallible, the combination of human organization and machine filtering generally makes both quite useful for a wide range of topics. I am also much encouraged by how Wikipedia has evolved: its more deliberate structuring and guidance of the flow means it involves higher maintenance than more obviously collectively guided tools, but it is incredibly successful at supporting and spreading useful knowledge (including about the Kardashians). The approach of each of these systems to diversity is a little like that of the Vancouver City planners: to design for it. There are places where communities meet and interact but there is also parcellation, with signals of their boundaries but no significant barriers, that supports the growth of a supportive culture (at least in places – there are, of course, some areas that thrive on discord), and that makes trust visible.
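To make the mechanism a little more concrete, here is a minimal sketch of community-shaped filtering. It is not SlashDot’s actual moderation system, and every name, score and tag in it is invented; the point is only that the community supplies the ratings while each reader chooses the threshold and filters applied to their own view:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str
    score: int = 1                              # set by community moderation
    tags: list = field(default_factory=list)    # e.g. 'insightful', 'funny', 'troll'

def visible(comments, threshold=2, exclude_tags=()):
    """Return the comments this particular reader has chosen to see.

    The community sets the scores and tags; the reader sets the threshold
    and exclusions, so the space is shaped by both at once.
    """
    return [c for c in comments
            if c.score >= threshold and not any(t in exclude_tags for t in c.tags)]

thread = [
    Comment("ada", "A thoughtful critique of the paper", score=5, tags=["insightful"]),
    Comment("bob", "First post!", score=-1, tags=["offtopic"]),
    Comment("cyn", "A joke that half the room enjoyed", score=3, tags=["funny"]),
]

print([c.author for c in visible(thread, threshold=4)])                            # ['ada']
print([c.author for c in visible(thread, threshold=0, exclude_tags=("funny",))])   # ['ada']
print([c.author for c in visible(thread, threshold=-5)])                           # everyone
```

Nothing is deleted by the system itself: poorly rated contributions simply fade from most readers’ views, which is what lets the worst dialogues evolve away without central censorship.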

There are potential opportunities for analytics tools, collaborative filters, and similar forms of data-driven algorithmic approaches here too. Such methods come with enormous risks, mostly due to the insatiable desire of programmers to control what other people do: to erect new boundaries. Even when done with good intentions, they can have harmful effects. Almost the last thing we need in such spaces is filter bubbles and echo chambers, but such approaches can embed and reinforce patterns and attitudes simply by doing their job, building boundaries that are all the more dangerous because they are invisible and unmentioned. The absolute last thing we need is machines to make decisions for us based on what a programmer has decided is best for us or, just as bad, using criteria over which we have no say. There are huge risks of designing new boundaries that are just as controlling and just as demotivating as the ones they replace. I don’t resent Amazon’s recommendations of what I might like to read next at all, for example, especially when it tells me why it is making those recommendations, because it does nothing to enforce those recommendations and learns when I disagree. I do resent Netflix limiting what it shows me that I might want to watch, though: this reduces my autonomy. I greatly dislike learning analytics tools that tell me how well I am meeting someone else’s goals, but I approve of those that help me to define and reach my own. I am happy for Google Search to suggest relevant sites I might want to visit, as long as it continues to show me those it is less impressed with, but I am deeply unhappy that Facebook shows me a tiny percentage of posts I might like to see. I love that clicking a word or phrase in an e-book will give me a definition and a link to Wikipedia or Google Search. I hate that clicking a help link will tell me what someone else thinks I need to know (especially when the nugget I need is hidden in a lengthy video that gives me no clues about where to find it). What all of this boils down to is support for the fundamental drivers found in the Pew report: autonomy, relatedness and competence. Take away any one of those, and you take away the love of learning. But, with care, scrutability, and attention to supporting human needs, such systems can be expansive and liberating.
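As a concrete illustration of the difference between suggesting and enforcing, here is a toy sketch of a scrutable recommender. The items, topics and weights are all invented and this is nobody’s real API; the point is that it ranks rather than hides, it explains its reasons, and the criteria belong to the learner to inspect and change:

```python
# A toy 'scrutable' recommender: it suggests, explains why, and the learner can
# inspect and override the interest weights it uses. Nothing is hidden from
# view -- items are ranked, never removed.

interests = {"distributed cognition": 0.9, "learning analytics": 0.4, "VR": 0.1}

resources = [
    {"title": "Paper on distributed cognition in classrooms",
     "topics": ["distributed cognition"]},
    {"title": "Vendor webinar on VR headsets",
     "topics": ["VR"]},
    {"title": "Critique of predictive learning analytics",
     "topics": ["learning analytics", "distributed cognition"]},
]

def recommend(resources, interests):
    """Rank every resource by the learner's own declared interests."""
    ranked = []
    for r in resources:
        score = sum(interests.get(t, 0.0) for t in r["topics"])
        reasons = [t for t in r["topics"] if interests.get(t, 0.0) > 0]
        ranked.append((score, r["title"], reasons))
    return sorted(ranked, reverse=True)

for score, title, reasons in recommend(resources, interests):
    print(f"{score:.1f}  {title}  (because you care about: {', '.join(reasons)})")

# The learner disagrees, edits the weights, and the ordering changes to match.
interests["VR"] = 1.0
```

The contrast with an unscrutable filter lies in the last two lines: the criteria are visible and editable, and nothing the learner might want is silently withheld.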

In conclusion

For now, most of the new systems we use to replace the formal process of teaching show promise, but they have numerous weaknesses that formal teaching overcomes: concerns about reliability, trust, safety, efficiency, and the effects of deliberate malice are well founded, and there are big issues of control and autonomy to address. But it seems to me that, as we start to dismantle the boundaries of traditional educational practice, the opportunities to extend and improve learning through reinvention of our learning spaces online are (virtually!) limitless, while we reached a state of near stasis in physically located learning many hundreds of years ago. Sure, there have been incremental improvements here and there, but they have been uneven at best, and it is possible to see examples of great pedagogies being used thousands of years ago that are barely, if at all, improved upon today. It’s all down to physics.

Footnote

I wouldn’t know a Kardashian if one kicked me in the face and, until just now, I had little idea about what they were apart from being a family that is known across the Internet for nothing more substantial than their own celebrity. For quite a long time I actually thought the headlines and post titles about them were about a fictitious race from Star Trek. What’s quite interesting about that is that I had learned what little I knew on the subject without, until just now, any intention of doing so. I found out a bit more just now by way of fact checking, through Wikipedia, but it seems that what I already knew was pretty much accurate. Education happens whether we seek it or not. It would be good if that education were more valuable more of the time.

Three ways to save distance universities

Today brings another bit of bad news for a distance education institution, with TELUQ’s future looking uncertain, though it is good to see that its importance and contribution are also recognized, and it is a long way from dead yet. Though rumours of Athabasca University’s own demise – resulting mainly from our acting president’s message that has widely been construed as a suicide note to the world – are greatly exaggerated, and have been repudiated by the acting president himself, similar issues are reflected here and at the Open University, UK, which has lost a quarter of its students over the past five years. I have heard informal whispers from Europe that the OUNL is in similarly dire straits, though I have no references to support that and it might just be hearsay – I’d welcome any news on that.

We are all institutions that were established within a very few years of one another (AU and OU-UK within months of each other), at a time when there were no viable higher education alternatives for students who lacked formal qualifications, were stuck in a location without a university, were in full-time employment, or for whatever reason could not or would not attend a physical institution.

Moving on 40-50 years, times have changed dramatically but, fundamentally, we have not. Sure, we have mostly dropped the archaic technologies that we used when we were founded, but paper course packs and associated processes and pedagogies lurk deep within our organizational DNA even if the objects themselves are mostly a memory. Sure, we have, collectively, been leaders and prime movers in establishing the research, the pedagogies and the technologies of distance education that are now widespread in most physical universities, but it is notable that most of our innovative practices have been taken up more widely elsewhere than in our own institutions. And there are lots of alternatives elsewhere nowadays, from MOOCs to the massive growth of distance courses on face-to-face campuses, and much else besides.

Competition is only one of many reasons for the peril distance institutions are now in. It is odd, at first glance, that we have reached this point: we were first past the post for decades and, thanks to our relative independence from physical infrastructure and our research leadership, we should have been more agile in adapting to what, from the early 90s, has clearly been a rapidly changing educational and technological landscape – one to which we ought to have been perfectly suited. But there are some critical structural flaws in our design that have held us back. All of the open universities of this era originally adopted an industrial design model, based heavily on the work of people like Otto Peters and Charles Wedemeyer, who talked of independent learning but actually meant anything but when it came to teaching. This was essential in pre-Internet times, because communication was too slow and cumbersome to do anything else, both pedagogically and in business processes. But it had systemic consequences.

We have been, and to a large extent remain, driven by process in all that we do. We were designed primarily as machines for higher education, not as communities of scholars. Just as we structured our teaching, so we structured our organizations and, as transactional distance theory suggests, the result was less dialogue, especially in places like AU that had a distributed workforce. We have inherited a culture of process and structure, and consequent sluggish change. This has been improving in places thanks to things like the Landing at AU and similar initiatives elsewhere, but not fast enough and, certainly at AU and, I gather, also in our sister institutions, there have been steps backwards as well as forwards. At AU we have, of late, made some very poor ICT choices and undergone retrograde organizational restructuring that actually increases, rather than reduces, the amount of structure and process, and that reduces the potential for the spread of knowledge and dialogue. Meanwhile, thanks to our traditional course model, with its lack of feedback loops, we have till now mainly designed our teaching around quality assurance, not quality control: courses can take years to prepare and tend to be pretty well written but, for the majority, their success is measured by meaningless proxies that tell us little or nothing about their true impact and effectiveness. Though there are plenty of exceptions, too few courses use pedagogies, processes and other technologies that allow us to know our students and gain a deep understanding of their concerns and interests.

Three things that could save open and distance universities from irrelevance

Given the imminent peril that open and distance universities appear to be finding themselves in, the solution is not to tweak what we have or to seek even more efficiencies in processes that are no longer relevant. Now is the time for a little bit of reinvention: not much. All of what is needed already exists in pockets. We have learned a lot – far more than our physical counterparts – about the challenges of distance learning and many of us have discovered ways of doing it that work. And, for all the path dependencies that claw at us, we do have innate organizational agility, so change is not impossible. More to the point, it is worth doing: distance education has innate advantages that physically co-present education (there must be a better term!) cannot hope to match.

At least part of the solution lies firstly in capitalizing on and enhancing the natural benefits that distance learning brings, notably in terms of freedom. Secondly, it lies in reducing as many of its disadvantages as we can.

Distance learning is all about freedom, but we have inherited two things from our physical forebears that unnecessarily constrain that: fixed-length courses, and accreditation umbilically linked to teaching. We need to rid ourselves of fixed-length courses, and disaggregate learning from assessment, so that learners can choose to work on things that really matter to them and gain accreditation for what they know rather than what we choose to teach. Right now, a course is like one of those cable TV packages that contains one or two channels you actually want and a whole load that you do not. The tightly bound assessments force students to bow to our needs, not theirs, which is awful for motivation and retention. This means that those with prior knowledge are bored, those who find it difficult are over-pressured, and the point of learning becomes not skill acquisition but credit acquisition. This in turn reinforces an unhealthy power relationship that only ever had any point in the first place because of the constraints of teaching in physical classrooms, and that is ultimately demotivating (extrinsically motivating) for all concerned.

This is ridiculous when we do not have such constraints – lack of need for teacher control (unless students want it, of course – but that’s the point: students can choose) is one of the key ways that distance learning is inherently better than classroom learning. Classroom teachers need control. Indeed, it is almost impossible to teach a class effectively without it, notwithstanding a lot of tricks and techniques that can somewhat limit the damage for those that hate sticks and carrots. At the very least, teachers need to get people in one place at one time, and to organize behaviour once everyone is there. We do not.

We need better tailored learning, and to support many different ways of doing it. Smaller chunks would help a lot – the equivalent of unbundling channels on a cable TV package – but, really, courses should be no bigger or smaller than they need to be for the purpose. Only rarely is that 15 weeks/100 hours, or whatever standard size universities choose to use. We do it for reasons that are solely related to organizational convenience and that emerged only because of the need to schedule students, teachers, and classrooms in physical spaces. Some students may need no tuition at all – all adult learners come with some knowledge, and some bring a lot. Some may need more than we currently give. We need to recognize and accommodate all that diversity. One of the most effective ways to handle our accreditation role under such circumstances is to have separate assessment of learning, unrelated to the course in any direct way. Our challenge and PLAR processes at AU are almost ready for that already, so it’s not an impossible shift. The other effective way to handle accreditation when we no longer control the inputs and outputs is to negotiate learning outcomes with the students through personalized learning contracts. There are plenty of models for such competency-based, andragogic ways of doing things: we would not be the first, by any means, and already run quite a few courses and processes that allow for it.

The second part of the solution lies in reducing or even removing the relative disadvantages of distance education. The largest of these by far is social isolation and its side-effects, notably on motivation. We need to build a richer, more connected community, to employ pedagogies that take advantage of the fact that we actually have about 40,000 students passing through every year at AU (OU-UK has many more, despite its losses), and to better support our teachers and researchers in engaging with one another and/or learning from one another. In too many of our courses and programs, students may never even be aware of others, let alone benefit from learning with them. This does not imply that we should always force our students to collaborate (or force them to do anything) and it certainly doesn’t mean we should do truly stupid things like give marks for discussion contributions, but it does mean creating ubiquitous opportunities to engage, and making others (and their learning) more visible in the process. This matters as much to staff as it does to students. The Landing is a partial technological solution (or support for a solution) to that problem, but it does not go nearly far enough and is not as deeply embedded as it should be. Such opportunities to engage and to be aware of others should be everywhere in our virtual space, not on a separate site that only about a quarter of staff and students visit. And, of course, it only really makes sense if we adapt the ways we support learning to match, not just in our deliberate teaching but in our attitudes to sharing, engaging and connecting.

There are lots of other things that could be done – whole books can be and have been written about that – but these three simple changes would be sufficient, I think, to bring about profound positive change throughout the entire system:

  1. valorizing and enabling the social,
  2. variable-length courses and lessons, and
  3. disaggregating assessment from learning.

Physical universities would equally benefit from all of these but, apart from in their social affordances (that are certainly great, if sometimes under-utilized), have far less innate ability to support them. I think that means that distance universities still have a place at the vanguard of change.

It has long annoyed me that distance education is seen by many as a poor cousin to face-to-face learning. In some cases and in some ways, sure, physical co-presence gives an edge. But, in others, especially in terms of freedom – pedagogical and personal freedom, not just in terms of space, pace and place – distance education can be notably superior. To achieve its potential, it just needs to throw off the final shackles it inherited from its ancestor.

On the value of awards

The week before last was a bit of a gold-star week for me. Firstly, I received Athabasca University’s Craig Cunningham Memorial Award for Teaching Excellence. Secondly, Jisc named me one of the 50 top social media influencers in UK higher education (I was eligible because, though I don’t live in the UK any more, I still maintain strong informal and formal ties). It’s always nice to have one’s ego stroked, and mine was purring like a satisfied kitten for some time: the accompanying photo of one of my kittens gives a rough rendition of my state of mind. Also, I am very thankful to those that nominated and supported me: thank you all! Nonetheless, I have somewhat mixed feelings about both of these. Partly, it’s just because of embarrassment and a general sense of lack of worthiness. I know from intimate personal experience that I am at the very least as awful as I am great. Equally, I am acutely aware that there are very many people who do things far better than me in many significant ways in both areas, and who did not receive an award for it. But there’s more to my discomfort than that. In this post I am mostly going to focus on the teaching award, but some of these issues relate to being on the list of UK social media influencers too.

The teaching crowd vs the teaching star

The teaching award bothers me, mainly, because no teacher is or should be a stand-alone prima donna or primo uomo, least of all in a highly distributed teaching environment like that at Athabasca. At AU, and to an only slightly lesser extent elsewhere, teaching is always the work of a team, always the result of a much larger community than just that team, and never, ever, the sole domain of one individual. Students (especially), administrators, technicians, learning designers, editors, graphic artists, fellow academics, tutors, textbook authors, Wikipedia editors, Facebook friends and the collectively generated processes and culture that make the university what it is, are at the very least as significant as any one person. To give one person an award for what we all do together therefore just doesn’t make much sense. It’s particularly ironic that I should get a teaching award in the light of a great deal of my work, which for more than 15 years has been about just that – how crowds and systems teach. The individual we label as a ‘teacher’ is just a part of a much larger teaching gestalt and need not be its star. It is true that the charismatic inspirers and/or visible innovators and/or empathetic carers do tend to be the teachers we most remember and are the ones that we tend to nominate for awards. But they also tend to be, for much the same reasons,  the worst teachers for some people: love ’em or hate ’em, there’s not much in between. Truly great teachers, including all those that make up the gestalt, often disappear into the background. My friend and mentor Richard Mitchell wanted a t-shirt slogan for education conferences that summed it up nicely: ‘shut up and let them learn’ (I don’t know if he ever had it made). The point is that it should never be about teachers teaching: it’s always about learners learning, and there are many ways to support that, most of the best of which are driven by the learners, not the teachers. Teachers that do that well are not always the ones that get the awards.

Competition vs caring

I was a bit disconcerted to learn on the day of the award ceremony that my faculty has been competitively pushing its staff for these awards over a period of years so, for some, this was less about celebrating excellence than winning. I don’t think academia needs to be nor should it be gamified: it has far more than enough of that already. If these contests were simple games with clear rules that made winning and losing unequivocal and fair, I would be fine with it. But, outside such a clearly game-like context, competition is not good for motivation – whether you are a winner or a loser – and it is often destructive to communities. Like performance-related pay and grades (deeply flawed ideas), it can all too easily make the award into the goal, which takes away the love of the activity itself as well as shaping how we perform it. This can very easily turn into a bit of behaviourist nonsense that can drive action in the short term but weaken interest in the long term. It is fundamentally unfair, too, which can cause unnecessary tension and divisions in a community that, by its nature, needs to work together to a common goal that everyone plays an important part in reaching. Giving an award is also an expression of power: a bit of behavioural shaping done to us, not with us, the use of award committees and panels notwithstanding. At the AU awards ceremony our leaders told us how proud they were of us. They meant this very kindly, and were simply following a traditional pattern and doing the right thing for the ritual purposes of the event, but it’s not a good idea. Sure, feel pride to be part of a great learning community, show interest in what we do, care about what we do together. Yes, by all means, celebrate the good things we have done, all of us, but not that we, as individuals, are therefore good. That’s too much like patting a dog on the head for behaving the way we want him to behave.

A better way?

What really made my ego purr was not the award itself but reading the generous things kind colleagues and students wrote about me in support of the nomination. Those brought tears to my eyes, and that is what I am really grateful for. So, rather than giving one person an award, which seems a bit arbitrary and divisive, I think it would make far more sense for us all to regularly nominate at least one other person for acclaim, and to scrap giving an actual award or, if we must have one, to give it to everyone or a large group. The really valuable part, from a personal perspective, is not the award as such but the kindness and affirmation from friends, students and colleagues. It’s also really nice to give such acclaim. Everyone’s a winner.

The value of awards

For all my misgivings, I think that awards do have real value, especially to those that are not in the competition themselves. Awards are good ways to make concrete the values that we (or, at least, the givers of the awards) deem to be significant. By giving an award for teaching, AU is signalling the importance of teaching to its employees and to the rest of the world, and that’s a message worth sending. Similarly for Jisc: its influential position means that it got a lot of attention not just for the contest but, more significantly, for the criteria for success in that contest. That is really valuable. Social media activities are seldom given much weight when deciding on promotions or research excellence in academia, but they should be. By far the most significant measure of success in academia is whether our work increases the knowledge in the world, whether through research or teaching or dialogue, and social media are a great means of doing that. The most popular of my papers and books have been read by a few thousand people, and most have been read by far fewer than that. My biggest keynotes have addressed fewer than a thousand people, and some conference papers have reached no more than a few dozen readers and attendees. Some of my blog posts and shared bookmarks have had tens of thousands of readers, and most are read by thousands. There are different measures of quality for such things, for sure: most of my posts are far more like presentations intended to spark ideas than rigorous papers and books, and I doubt that any have ever been cited in academic literature. But, though they do not rival peer-reviewed papers, they are still useful, I think, for exactly the reasons it is useful to attend conference presentations and, in the same way, each one is an opportunity to interact directly. Blog posts themselves may not always have much academic clout compared with peer-reviewed papers but, sometimes, the dialogue that develops around them can become an incredibly significant artefact in itself, much like the glosses on mediaeval manuscripts, entering depths that can put most peer review to shame. Perhaps the Jisc list will catalyze further social media activity among those who feel that their time is better spent publishing work in journals with high impact factors and low readership. Perhaps it will encourage those outsiders to investigate what those of us who care about such things are sharing. Perhaps it will act as a pre-filter to help them to find stuff worth reading. Perhaps it will inspire innovative uses of the tools and spread good practices. Perhaps it is a good thing simply to assert that there is a community that we are part of. Awards can be catalysts for change, builders of community, and organizers of values. That’s good.

There is, too, some value in recognizing the value of people and what they do for whatever reason. I find it odd that, as well as awards for specific activities, AU gives long service awards. That rather implies that staying here might have been an achievement in itself, which further implies that it might have been a chore to stick it out for so long. That’s not a good message – I’m here because I want to be here, not because I feel I must – and it is made worse by adding a reward for it. To be fair it is, quite literally, a token reward, of a few dollars to spend in the AU store and a pin. But, as carrots go, that’s likely worse than no carrot at all: it sends both a message that it is an extrinsic reward – akin to a payment – and that we are not worth very much. I reckon a bit of applause and a hand-shake is more than enough acknowledgement without muddying the waters with cold hard cash. As a ritual, though, celebrating the simple fact of our continuing community is very worthwhile. Not only is it an opportunity to meet and eat with colleagues in person – a rare thing at AU – it’s an affirmation of the value of the community itself. We need such rituals and celebrations of togetherness.

And that is, I think, the most profound value of awards in general. They are, arguably, counter-productive as ways to drive good practice or encourage better behaviour in those that compete for them. But the ceremonies associated with them and the shared values that they represent bind all of us. They symbolize what we endeavour to be, they signal the values that we cherish, they exclude those outside the community and thus contribute to the community’s internal cohesion, albeit at a potential cost of competition. On balance, for all the complexities and risks, that’s not a bad thing.

A waste of time

A while back I wrote a blog post about the apparent waste of time involved in things like reading email, loading web pages, etc. At the end of the post I suggested that the simplistic measure of time as money that I was using should be viewed with great suspicion, though it is precisely the kind of measure that we routinely use. This post is mostly about why we should be suspicious.

But first, my basic initial argument, restated and stripped to its bones, is simple. According to the vacation request form that I have to fill in (and fill in again after taking the vacation), an Athabasca University working day is 7 hours, or 25,200 seconds, long. There are about 1,200 employees at Athabasca University so, if each employee could save 21 seconds in a day (25,200/1,200 = 21), it would be like getting another employee. Equally, every time we do something that loses everyone 21 seconds a day for no good reason, the overall effect is the same as firing someone. I observed then that we have lately adopted a lot of ICT systems that waste a great deal more than that. Since then, things have been getting worse. We are about to move to an Office 365 system, for instance, that I am guessing will cost us the time of at least 5 people, maybe more, compared with our current aged Zimbra suite. It’s not rocket science: a minute of everyone’s day is easily accounted for in loading time alone, which, from what I have checked, seems to be roughly 20 seconds longer than in the old system, and most people will load it many times a day. At the start, it will take way more than that, what with training, migration, confusion and all and, if my experience of Microsoft’s Exchange system is anything to go by, it is going to carry on sapping minutes out of everyone’s day for the foreseeable future thanks to poor design and buggy implementation. So far, so depressing.
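For what it is worth, the back-of-the-envelope arithmetic is easy to play with. The 7-hour day and the roughly 1,200 employees are the figures given above, and the minute per day is my loading-time estimate; training, migration and general confusion would come on top of it:

```python
# Rough arithmetic only: how much daily per-person time loss adds up to the
# equivalent of a full-time employee across the whole organization.

SECONDS_PER_WORKING_DAY = 7 * 60 * 60    # 25,200 seconds in an AU working day
HEADCOUNT = 1200

def fte_lost(seconds_per_person_per_day, headcount=HEADCOUNT):
    """Organization-wide daily loss, expressed in full-time equivalents."""
    return seconds_per_person_per_day * headcount / SECONDS_PER_WORKING_DAY

print(fte_lost(21))   # 1.0  -- 21 seconds each is one employee gone
print(fte_lost(60))   # ~2.9 -- an extra minute of loading time per person per day
```

On those numbers, loading time alone accounts for nearly three full-time equivalents, which is why a guess of at least five people, once training and migration are added, does not seem far-fetched.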

But what have we actually lost?

The simplistic assumption that time is money has a little merit when tasks are routine and mechanical. If you are producing widgets then time spent not producing widgets equates directly to widgets lost, so money is lost for every second spent doing something else. Even that notion is a bit suspect, though, inasmuch as there are normally diminishing returns on working more. Even if a task requires only the slightest hint of skill or judgement, the correlation between time and money is a long, long way from linear. Far more often than not, productivity is lower if you insist on uninterrupted working or longer hours than it would be if you insisted on regular breaks and shorter hours.  At the other end of the spectrum, it is also true, even in the most creative and open occupations, that it is possible to spend so much time doing something else that you never get round to the thing that you claim to be doing, though it is very hard to pin down the actual break-even point. For instance, a poet might spend 23.5 out of every 24 hours not actually writing poetry and that might be absolutely fine. On the other hand, if a professor spends a similar amount of time not marking student work there will probably be words. For most occupations, there’s a happy balance.

But what about those enforced breaks caused by waiting for computers to do something, or playing a mechanical role in a bureaucratic system, or reading an ‘irrelevant’ all-staff email? These are the ones that relate most directly to my original point, and all are quite different cases, so I will take each in turn, as each is illustrative of some of the different ways time and value are strangely connected.

Waiting for the machine

As I wait for machines to do something I have from time to time tried to calculate the time I ‘lose’ to them. As well as time waiting for them to boot up, open a web page, open an application, convert a video or save a document, this includes various kinds of futzing, such as organizing emails or files, backing up a machine, updating the operating system, fixing things that are broken, installing tools, shuffling widgets, plugging and unplugging peripherals, and so on. On average, given that almost my entire working life is mediated through a computer, I reckon that an hour or more of every day is taken up with such things. Some days are better than others, but some are much worse. I sometimes lose whole days to this. Fixing servers can take much more. Because I work in computing and find the mental exercise valuable, futzing is not exactly ‘lost’ time for me, especially as (done well) it can save time later on. Nor, for that matter, is time spent waiting for things to happen. I don’t stop thinking simply because the machine is busy. In fact, it can often have exactly the opposite effect. I actually make a very deliberate point of setting aside time to daydream throughout my working day because that’s a crucial part of the creative, analytic and synthetic process. Enforced moments of inactivity thus do a useful job for me, like little inverted alarm clocks reminding me when to dream. Slow machines (up to a point) do not waste time – they simply create time for other activities but, as ever, there is a happy balance.

Bacn

Bacn is a bit like spam except that it consists of emails that you have chosen or are obliged to receive. Like spam, though, it is impersonal, often irrelevant, and usually annoying. Those things from mailing lists you sometimes pay attention to, calls for conference papers that might be interesting, notifications from social media systems (like the Landing) that have the odd gem, offers from stores you have shopped at, or messages to all-staff mailing lists that are occasionally very important but that are mostly not – I get a lot of bacn. Those ‘irrelevant’ all-staff emails are particularly interesting examples. They are actually very far from irrelevant even though they may have no direct value to the work that I am doing, because they are part of the structure of the organization. They are signals passing around the synapses of the organizational brain that help give its members a sense of belonging to something bigger, even if the particular signals themselves might rarely fire their particular synapses. Every one is an invitation to be a potential contributor to that bigger thing. They are the cloth that is woven of the interactions of an organization, that helps to define the boundaries of that organization and reflect back its patterns and values. The same is true of social media notifications: I only glance at the vast majority but, just now and then, I pick up something very useful and, maybe once every day or two, I may contribute to the flow myself. The flow is part of my extended brain, like an extra sense that keeps me informed about the zeitgeist of my communities and social networks and that makes me a part of them. Time spent dealing with such things is time spent situating myself in the sets, networks and groups that I belong to. Organizations, especially those that are largely online, that are seeking to reduce bacn had better beware that they don’t lose all that salty goodness, because bacn is a thin web that binds us. Especially in a distributed organization, if you lose bacn, you lose the limbic system of the organization or even, in some cases, its nervous system. Organizations are not made of processes; they are made of people, and those people have to connect, have to belong. Bacn supports belonging and connection. But, of course, it can go too far. It is always worth remembering that 21 seconds of bacn is another person’s time gone (for a large company, it might be a second or less) and that person might have been doing something really productive with all of that lost time. But to get rid of bacn makes no more sense than to get rid of brain cells because they don’t address your current needs. An organization, not just its members, has to think and feel, and bacn is part of that thinking and feeling. As ever, though, there is a happy balance.

Being a cog

I’ve saved this one till last because it is not like the others. Being a cog is about the kind of thing that requires individuals to do the work of a machine. For instance, leave-reporting systems that require you to calculate how much leave you have left, how many hours there are in a day, or which days are public holidays (yes, we have one of those). Or systems for reclaiming expenses that require you to know the accounting codes, tax rates, accounting regulations, and approvers for expenses (yes, we have one of those too). Or customer relationship management systems that bombard you with demands that actually have nothing to do with you or that you have already dealt with (yes – we have one of those as well). Or that demand that you record the number of minutes spent using a machine that is perfectly capable of recording those minutes itself (yup). This is real work that demands concentration and attention, but it does nothing to help with thinking or social cohesion and does nothing to help the organization grow or adapt. In fact, precisely the opposite. It is a highly demotivating drain on time and energy that saps the life out of an organization, a minute or two at a time. No one benefits from having to do work that machines can do faster, more accurately and more reliably (we used to have one of those). It is plain common sense that investing in someone who can build and maintain better cogs is a lot more efficient and effective than trying (and failing) to train everyone to act exactly like a cog. This is one of those tragedies of hierarchically managed systems. Our ICT department has been set the task of saving money and its managers only control their own staff and systems, so the only place they can make ‘savings’ is in getting rid of the support burden of making and managing cogs. I bet that looks great on paper – they can probably claim to have saved hundreds of thousands or even millions of dollars although, actually, they have not only wasted tens of millions of dollars, but they have probably set the organization on a suicide run. But they could as easily have gone the other way, and it might have been just as bad. Over-zealous cog-making is harmful, both because ICT departments have a worrisome tendency to over-do it (I cannot have assignments with no marks, for example, if I wish to enter them into our records system, which I have to do because otherwise the cog that pays tutors will not turn) and because systems change, which means many of the cogs inside them have to change too, and it is not just the devil’s work but an accounting nightmare to get them all to change at the right time. Well-designed ICT systems make it easy to take out a cog or some other sub-assembly and replace it, and they use tools that make cog production fast and simple. Poorly designed systems without such flexibility enslave their users, just as much as those that have to submit to cog-retraining are enslaved when their systems change. As ever, there is a happy balance.
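Purely as a hypothetical illustration of the kind of cog-work a machine should be doing for us, here is a sketch of a leave calculation. The holiday dates and the 20-day entitlement are invented, and a real system would pull them from a calendar and an HR record rather than asking the person filling in the form to know them:

```python
from datetime import date, timedelta

# Hypothetical example: count the working days in a requested leave period,
# skipping weekends and public holidays, and report what remains of the
# entitlement -- exactly the arithmetic our forms currently ask humans to do.

PUBLIC_HOLIDAYS = {date(2016, 7, 1), date(2016, 8, 1), date(2016, 9, 5)}  # invented list

def working_days(start, end):
    """Working days from start to end inclusive, excluding weekends and holidays."""
    day, count = start, 0
    while day <= end:
        if day.weekday() < 5 and day not in PUBLIC_HOLIDAYS:
            count += 1
        day += timedelta(days=1)
    return count

entitlement = 20   # days of annual leave (also invented)
requested = working_days(date(2016, 8, 1), date(2016, 8, 12))
print(requested)                # 9 -- August 1st falls on a holiday in this example
print(entitlement - requested)  # 11 days left, with no mental arithmetic required
```

A few dozen lines like these, maintained in one place, are cheaper than training twelve hundred people to perform the same calculation badly.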

Wasting time?

I’m not sure that time is ever lost – it is just spent doing other things. It can certainly be wasted, though, if those other things do not make a positive difference. But it is complicated. Here are just a few of the things I have done today – not a typical day, but few of them are:

  • reading/responding to emails from staff, students and others: roughly 2.5 hours
  • writing a foreword for a book: roughly 2 hours
  • writing this post: roughly 1 hour
  • walking: roughly 45 minutes
  • making/consuming food and drink: about 30 minutes
  • reading/ making notes on books and papers: roughly 1 hour
  • replying to interview questions: approximately 45 minutes
  • checking my boat didn’t die in the rainstorm: roughly half an hour
  • cleaning and tidying: maybe half an hour
  • writing a book: about 20-30 minutes
  • replying to student posts: roughly 1 hour
  • marking: roughly 1 hour
  • waiting for computers: perhaps half an hour
  • grooming/washing/etc: maybe half an hour
  • checking/listening to the news and weather: roughly 45 minutes
  • taking an afternoon nap: about half an hour
  • Skyping: roughly 15 minutes
  • deleting spam from the Elgg community site: about 10 minutes
  • drying a wet dog: about 5 minutes
  • serious thinking: roughly 12 hours

There are still a couple of hours left of my day before I read a book and eventually go to sleep. Maybe I’ll catch a movie while reading some news after preparing some more food. Maybe I’ll play some guitar or try to get the hang of the sansula one more time. With a bit of luck I might get to chat with my wife (who has been out all day but would normally figure in the list quite a bit). But I hope you get the drift. I don’t think it makes much sense to measure anyone’s life in minutes spent on activities, except for the worst things they do. Time may be worth measuring and accounting for when it is spent doing the things that make us less than human, but it would be better to not do such things in the first place. I have put off responding to the CRM system today and only spent a few minutes checking admin systems in general because, hell, it’s Monday and I have had other things to do. It is all about achieving a happy balance.

The LMS as a paywall

I was writing about openness in education in a chapter I am struggling with today, and had just read Tony Bates’s comments on iQualify, an awful cloud rental service offering a monolithic locked-in throwback that just makes me exclaim, in horror, ‘Oh good grief! Seriously?’ And it got me thinking.

Learning management systems, as implemented in academia, are basically paywalls. You don’t get in unless you pay your fees. So why not pick up on what publishers infamously already do and allow people to pay per use? In a self-paced model like that used at Athabasca it makes perfect sense, and most of the infrastructure – role-based, time-based access and so on – and of course the content already exist. Not every student needs 6 months of access or the trimmings of a whole course but, especially for those taking a challenge route (just the assessment), it would often be useful to have access to a course for a little while in order to get a sense of what the expectations might be, the scope of the content, and the norms and standards employed. On occasion, it might even be a good idea to interact with others. Perhaps we could sell daily, weekly or monthly passes. Or we could maybe do it at a finer level of granularity too/instead: a different pass for different topics, or for different components like forums, quizzes or assignment marking. Following the publishers’ lead, such passes might together cost 10 or 20 times the total cost of simply subscribing to a whole course if every option were purchased, but students could strategically pick the parts they actually need, so reducing their own overall costs.

This idea is, of course, stupid. This is not because it doesn’t make economic and practical sense: it totally does, notwithstanding the management, technical and administrative complexity it entails. It is stupid because it flips education on its head. It makes chunks of learning into profit centres rather than the stuff of life. It makes education into a product rather than celebrating its role as an agent of personal and societal growth. It reduces the rich, intricately interwoven fabric of the educational experience to a set of instrumentally-driven isolated events and activities. It draws attention to accreditation as the be-all and end-all of the process. It is aggressively antisocial, purpose-built to reduce the chances of forming a vibrant learning community. This is beginning to sound eerily familiar. Is that not exactly what, in too high a percentage of our courses, we are doing already?

If we and other universities are to survive and thrive, the solution is not to treat courses and accreditation as products or services. The ongoing value of a university is to catalyze the production and preservation of knowledge: that is what we are here for, that is what makes us worth having. Courses are just tools that support that process, though they are far from the only ones, while accreditation is not even that: it’s just a byproduct, effluent from the educational process that happens to have some practical societal value (albeit at enormous cost to learning). In physical universities there are vast numbers of alternatives that support the richer purpose of creating and sustaining knowledge: cafes, quads, hallways, common rooms, societies, clubs, open lectures, libraries, smoking areas, student accommodation, sports centres, theatres, workshops, studios, research labs and so on. Everywhere you go you are confronted with learning opportunities and people to learn with and from, and the taught courses are just part of the mix, often only a small part. At least, that is true in a slightly idealized world – sadly, the vast majority of physical universities are as stupidly focused on the tools as we are, so those benefits are an afterthought rather than the main thing to celebrate, and are often the first things to suffer when cuts come along. Online, such beyond-the-course opportunities are few and far between: the Landing is (of course) built with exactly that concern in mind, but there’s precious little sign of it anywhere else at AU, one of the most advanced online universities in the world. The nearest thing most students get to it is the odd Facebook group or Twitter interaction, which seems an awful waste to me, though a fascinating phenomenon that blurs the lines between the institution and the broader community.

It is already possible to take a high-quality course for free in almost any subject that interests you and, more damagingly, there will soon be sources of accreditation that are as prestigious as those awarded by universities but orders of magnitude cheaper, not to mention compellingly cut-price options from universities that can leverage their size and economies of scale (and, perhaps, cheap labour) to out-price the rest of us. Competing on these grounds makes no sense for a publicly funded institution whose role is not to be an accreditation mill but to preserve, critique, observe, transform and support society as a whole. We need to celebrate and cultivate the iceberg, not just its visible tip. Our true value is not in our courses but in our people (staff and students) and the learning community that they create.

Niggles about NGDLEs – lessons from ELF

Malcolm Brown has responded to Tony Bates and me in an Educause guest post in which he defends the concept of the NGDLE and expands a bit on the purposes behind it. This does help to clarify the intent although, as I mentioned in my earlier post, I am quite firmly in favour of the idea, so I am already converted on the main points. I don’t mind the Lego metaphor if it works, but I do think we should concentrate more on the connections than on the pieces. I also see that it is fairly agnostic to pedagogy, at least in principle. And I totally agree that we desperately need to build more flexible, assemblable systems along these lines if we are to enable effective teaching, management of the learning process and, much much more importantly, if we are to support effective learning. Something like the proposed environment (more of an ecosystem, I’d say) is crucial if we want to move on.

But…

It has been done before, over ten years ago in the form of ELF, in much more depth and detail and with large government and standards bodies supporting it, and it is important to learn the lessons of what was ultimately a failed initiative. Well – maybe not failed, but certainly severely stalled. Parts persist and have become absorbed, but the real value of it was as a model for building tools for learning, and that model is still not as widespread as it should be. The fact that the Educause initiative describes itself as ‘next generation’ is perhaps the most damning evidence of its failure.

Why ELF ‘failed’

I was not a part of nor close to the ELF project but, as an outsider, I suspect that it suffered from four major and interconnected problems:

  1. It was very technically driven and framed in the language of ICTs, not educators or learners. Requirements from educators were gathered in many ways, with workshops, working groups and a highly distributed team of experts in the UK, Australia, the US, Canada, the Netherlands and New Zealand (it was a very large project). Some of the central players had a very deep understanding of the pedagogical and organizational needs of not just learners but organizations that support them, and several were pioneers in personal learning environments (PLEs) that went way beyond the institution. But the focus was always on building the technical infrastructure – indeed, it had to be, in order to operationalize it. For those outside the field, who had not reflected deeply on the reasons this was necessary, it likely just seemed like a bunch of techies playing with computers. It was hard to get the message across.
  2. It was far too over-ambitious, perhaps bolstered by the large amounts of funding and support from several governments and large professional bodies. The e-learning framework was just one of several strands like e-science, e-libraries and so on, that went to make up the e-framework. After a while, it simply became the e-framework and, though conceptually wonderful, in practical terms it was attempting far too much in one fell swoop. It became so broad, complex and fuzzy that it collapsed under its own weight. It was not helped by commercial interests that were keen to keep things as proprietary and closed as they could get away with. Big players were not really on board with the idea of letting thousands of small players enter their locked-in markets, which was one of the avowed intents behind it. So, when government funding fizzled out, there was no one to take up such a huge banner. A few small flags might have been way more successful.
  3. It was too centralized (oddly, given its aggressively decentralized intent and the care taken to attempt to avoid that). With the best of intent, developers built over-engineered standards relying on web service architectures that the rest of the world was abandoning because they were too clunky, insufficiently agile and much too troublesome to implement. I am reminded, when reading many of the documents that were produced at the time, of the ISO OSI network standards of the late 80s that took decades to reach maturity through ornate webs of committees and working groups, were beautifully and carefully engineered, and that were thoroughly and completely trounced by the lighter, looser, more evolved, more distributed TCP/IP standards that are now pretty much ubiquitous. For large complex systems, evolution beats carefully designed engineering every single time.
  4. The fact that it was created by educators whose framing was entirely within the existing system meant that most of the pieces that claimed to relate to e-learning (as opposed to generic services) had nothing to do with learning at all, but were representative of institutional roles and structures: marking, grading, tracking, course management, resource management, course validation, curriculum, reporting and so on. None of this has anything to do with learning and, as I have argued on many occasions elsewhere, may often be antagonistic to learning. While there were also components that were actually about learning, they tended to be framed in the context of existing educational systems (writing lessons, creating formal portfolios, sequencing of course content, etc). Though very much built to support things like PLEs as well as institutional environments, the focus was the institution far more than the learner.

As far as I can tell, any implementation of the proposed NGDLE is going to run into exactly the same problems. Though the components described are contemporary and the odd bit of vocabulary has evolved a bit, all of them can be found in the original ELF model and the approach to achieving it seems pretty much the same. Moreover, though the proposed architecture is flexible enough to support pretty much anything – as was ELF – there is a tacit assumption that this is about education as we know it, updated to support the processes and methods that have been developed since (and often in response to) the heinous mistakes we made when we designed the LMSs that dominate education today. This is not surprising – if you ask a bunch of experts for ideas you will get their expertise, but you will not get much in the way of invention or new ideas. The methodology is therefore almost guaranteed to miss the next big thing. Those ideas may come up but they will be smoothed out in an averaging process and dissenting models will not become part of the creed. This is what I mean when I criticize it as a view from the inside.

Much better than the LMS

If implemented, a NGDLE will undoubtedly be better than any LMS, with which there are manifold problems. In the first place, LMSs are uniformly patterned on mediaeval educational systems, with all their ecclesiastic origins, power structures and rituals intact. This is crazy, and actually reinforces a lot of things we should not be doing in the first place, like courses, intimately bound assessment and accreditation, and laughably absurd attempts to exert teacher control, without the slightest consideration of the fact that pedagogies determined by the physics of spaces in which we lock doors and keep learners controlled for an hour or two at a time make no sense whatsoever in online learning. In the second place, centralized systems have to maintain an uneasy and seldom great balance between catering to every need and remaining usably simple. This inevitably leads to compromises, from the small (e.g. minor formatting annoyances in discussion forums) to the large (e.g. embedded roles or units of granularity that make everything a course). While customization options can soften this a little, centralized systems are structurally flawed by their very nature. I have discussed such things in some depth elsewhere, including in both my published books. Suffice to say, the LMS shapes us in its own image, and its own image is authoritarian, teacher-controlled and archaic. So, a system that componentizes things so that we can disaggregate any or all of it, provide local control (for teachers and other learners as well as institutions and administrators) and allow creative assemblies is devoutly to be wished for. Such a system architecture can support everything from the traditional authoritarian model to the loosest of personal learning environments, and much in between.

Conclusion

NGDLE is a misnomer. We have already seen that generation come and go. But, as a broad blueprint for where we should be going and what we should be doing now, both ELF and NGDLE provide patterns that we should be using and thinking about whenever we implement online learning tools and content and, for that, I welcome it. I am particularly appreciative that NGDLE provides reinvigorated support for approaches that I have been pushing for over a decade but that ICT departments and even faculty resist implacably. It’s great to be able to point to the product of so many experts and say ‘look, I am not a crank: this is a mainstream idea’. We need a sea-change in how we think of learning technologies and such initiatives are an important part of creating the culture and ethos that lets this happen. For that I totally applaud this initiative.

In practical terms, I don’t think much of this will come from the top down, apart from the development of lightweight, non-prescriptive standards and the norming of the concepts behind it. Of current standards, I think TinCan is hopeful, though I am a bit concerned that it is becoming over-ornate as it develops. LTI is a good idea, sufficiently mature, and light enough to be usable but, again, in its new iteration it is aiming higher than might be wise. Caliper is OK, but also showing signs of excessive ambition. Open Badges are great, but I gather they are becoming less lightweight in their latest incarnation. We need more of such things, not more elaborate versions of them. Unfortunately, the nature of technology is that it always evolves towards increasing complexity. It would be much better if we stuck with small, working pieces and assembled those together rather than constantly embellishing good working tools. Unix provides a good model for that, with tools that have worked more or less identically for decades but that constantly gain new value in recombination.
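To make the ‘lightweight’ point about TinCan concrete, this is roughly what a minimal statement looks like: an actor, a verb and an object, and nothing more. The learner, email address and activity URL here are invented; the verb URI is one of the standard ADL examples:

```python
import json

# A minimal TinCan/xAPI-style statement: someone did something, somewhere.
# Everything beyond this three-part core is optional elaboration -- which is
# exactly where the creeping ornateness tends to appear.

statement = {
    "actor": {"name": "A. Learner", "mbox": "mailto:a.learner@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/activities/unit-3-discussion",
               "definition": {"name": {"en-US": "Unit 3 discussion"}}},
}

print(json.dumps(statement, indent=2))   # what would be posted to a Learning Record Store
```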

Footnote: what became of ELF?

It is quite hard to find information about ELF today. It seems (as an outsider) that the project just ground to a halt rather than being deliberately killed. There were lots of exemplar projects, lots of hooks and plenty of small systems built that applied the idea and the standards, many of which are still in use today, but it never achieved traction. If you want to find out more, here is a small reading list:

http://www.elframework.org/ – the main site (the link to the later e-framework site leads to a broken page)

http://www.elframework.org/projects.html  – some of the relevant projects ELF incorporated.

https://web.archive.org/web/20061112235250/http://www.jisc.ac.uk/uploaded_documents/Altilab04-ELF.pd – good, brief overview from 2004 of what it involved and how it fitted together

 https://web.archive.org/web/20110522062036/http://www.jisc.ac.uk/uploaded_documents/AltilabServiceOrientedFrameworks.pdf  – spooky: this is about ‘Next Generation E-Learning Environments’ rather than digital ones. But, though framed in more technical language, the ideas are the same as NGDLE.

http://www.webarchive.org.uk/wayback/archive/20110621221935/http://www.elearning.ac.uk/features/nontechguide2 – a slightly less technical variant (links to part 1, which explains web services for non-technical people)

See also https://web.archive.org/web/20090330220421/http://www.elframework.org/general/requirements/scenarios/Scenario%20Apparatus%20UK%205%20(manchester%20lipsig).doc and https://web.archive.org/web/20090330220553/http://www.elframework.org/general/requirements/use_cases/EcSIGusecases.zip, a set of scenarios and use cases that are eerily similar to those proposed for NGDLE.

If anyone has any information about what became of ELF, or documents that describe its demise, or details of any ongoing work, I’d be delighted to learn more!