Delightful compendium from Bryan Alexander. I particularly like:
Analytics, n. pl. “The use of numbers to confirm existing prejudices, and the design of complex systems to generate these numbers.”
Big data, n. pl. 1. When ordinary surveillance just isn’t enough.
Failure, n. 1. A temporary practice educators encourage in students, which schools then ruthlessly, publicly, and permanently punish.
Forum, n. 1. Social Darwinism using 1980s technology.
World Wide Web, n. A strange new technology, the reality of which can be fended off or ignored through the LMS, proprietary databases, non-linking mobile apps, and judicious use of login requirements.
Of all of them, my current favourite is the story of the curators of Auschwitz having to ask people not to play the game within its bounds. It’s kind of poetic: people are finding fictional monsters and playing games with them in a memorial that is there, more than anything, to remind us of real monsters. We shall soon see a lot more and a lot wilder clashes between reality and augmented reality, and a lot more unexpected consequences, some great, some not. Lives will be lost, lives will be changed. There will be life-affirming acts, there will be absurdities, there will be great joy, there will be great sadness. As business models emerge, from buttons to sponsorship to advertising to trading to training, there will be a lot of money being made in a vast, almost instant ecosystem. Above all, there will be many surprises. So many adjacent possibles are suddenly emerging.
AR (augmented reality) has been on the brink of this breakthrough moment for a decade or so. I did not guess that it would explode in less than a week when it finally happened, but here it is. Some might quibble about whether Pokémon GO is actually AR as such (it overlays rather than augments reality), but, if there were once a more precise definition of AR, there isn’t any more. There are now countless millions that are inhabiting a digitally augmented physical space, very visibly sharing the same consensual hallucinations, and they are calling it AR. It’s not that it’s anything new. Not at all. It’s the sheer scale of it. The walls of the dam are broken and the flood has begun.
This is an incredibly exciting moment for anyone with the slightest interest in digital technologies or their effects on society. The fact that it is ‘just’ a game only makes it all the more remarkable. For some, this seems like just another passing fad: bigger than most, a bit more interesting, but just a fad. Perhaps so. I don’t care. For me, it seems like we are witnessing a sudden, irreversible, and massive global shift in our perceptions of the nature of digital systems, of the ways that we can use them, and of what they mean in our lives. This is, with only a slight hint of hyperbole, about to change almost everything.
Aside: it’s not VR, by the way
There has been a lot of hype of late around AR’s geekier cousin, VR (virtual reality), notably relating to Oculus, HTC Vive, and PlayStation VR, but I’m not much enthused. VR has moved only incrementally since the early 90s and the same problems we saw back then persist in almost exactly the same form now, just with more dots. It’s cool, but I don’t find the experience really that much more immersive than it was in the early 90s, once you get over the initial wowness of the far higher fidelity. There are a few big niches for it (hardcore gaming, simulation, remote presence, etc), and that’s great. But, for most of us, its impact will (in its current forms) not come close to that of PCs, smartphones, tablets, TVs or even games consoles. Something that cuts us off from the real world so completely, especially while it is so conspicuously physically engulfing our heads in big tech, cannot replace very much of what we currently do with computers, and only adds a little to what we can already do without it. Notwithstanding its great value in supporting shared immersive spaces, the new ways it gives us to play with others, and its great potential in games and education, it is not just asocial, it is antisocial. Great big tethered headsets (and even untethered low-res ones) are inherently isolating. We also have a long way to go towards finding a good way to move around in virtual spaces. This hasn’t changed much for the better since the early 90s, despite much innovation. And that’s not to mention the ludicrous amounts of computing power needed for it by today’s standards: my son’s HTC Vive requires a small power station to keep it going, and it blows hot air like a noisy fan heater. It is not helped by the relative difficulty of creating high-fidelity interactive virtual environments, nor by vertigo issues. It’s cool, it’s fun, but this is still, with a few exceptions, geek territory. Its big moment will come, but not quite yet, and not as a separate technology: it will be just one of the features that comes for free with AR.
Bigger waves
AR, on the whole, is the opposite of isolating. You can still look into the eyes of others when you are in AR, and participate not just in the world around you, but in an enriched and more social version of it. A lot of the fun of Pokémon GO involves interacting with others, often strangers, and it involves real-world encounters, not avatars. More interestingly, AR is not just a standalone technology: as we start to use more integrated technologies like heads-up displays (HUDs) and projectors, it will eventually envelop VR too, along with screen-based technologies like PCs, smartphones, TVs, e-readers, and tablets, as well as a fair number of standalone smart devices like the Amazon Echo (though the Internet of Things will integrate interestingly with it). It has been possible to replace screens with glasses for a long time (devices between $100 and $200 abound) but, till now, there has been little point apart from privacy, curiosity, and geek cred. They have offered less convenience than cellphones, and a lot of (literal and figurative) headaches. They are either tethered or have tiny battery lives, they are uncomfortable, they are fragile, they are awkward to use, high-resolution versions cost a lot, most are as isolating as VR and, perhaps most of all, as long as they remain a tiny niche product there are some serious social obstacles to wearing HUDs in public. That is all about to change. They are about to become mainstream.
The fact that AR can be done right now with no more than a cellphone is cool, and has been for a few years, but it will get much cooler as the hardware for HUDs becomes better and more widespread and, most importantly, as more people share the augmented space. The scale is what makes the Pokémon GO phenomenon so significant, even though it is currently mostly a cellphone and GO Plus thing. It matters because, apart from being really interesting in its own right, it means that, before long, enough people will want hardware to match, and that will make it worth going into serious mass production. At that point it gets really interesting, because lots of people will be wearing HUD AR devices.
Google’s large-scale Glass experiment was getting there (and it’s not over yet), but it was mostly viewed with mild curiosity and a lot of suspicion. Why would any normal person want to look like the Borg? What were the wearers doing with those very visible cameras? What were they hiding? Why bother? The tiny minority that wore them were outsiders, weirdos, geeks, a little creepy. But things have moved on: the use cases have suddenly become very compelling, enough (I think) to overcome the stigma. The potentially interesting Microsoft HoloLens, the incredibly interesting Magic Leap, and the rest (Meta 1, Recon Jet, Moverio, etc, etc) that are queueing up on the sidelines are nearly here. Apparently, Pokémon GO with a HoloLens might be quite special. Apple’s rumoured foray into the field might be very interesting. Samsung’s contact-lens camera system is still just a twinkle in its maker’s eye, but it and many things even more amazing are coming soon. Further off, as nanotech develops and direct neural interfaces become available, the possibilities are (hopefully not literally) mind-blowing.
What this all adds up to is that, as more of us start to use such devices, the computer as an object, even in its ubiquitous small smartphone or smartwatch form, will increasingly disappear. Tools like wearables and smart digital assistants have barely even arrived yet, but their end is palpably nigh. Why bother with a smartwatch when you can project anything you wish on your wrist (or anywhere else, for that matter)? Why bother having to find a device when you are wearing any device you can imagine? Why take out a phone to look for Pokémon? Why look at a screen when you can wear a dozen of them, anywhere, any size, adopting any posture you like? It will be great for ergonomics. This is pretty disruptive: whole industries are going to shrink, perhaps even disappear.
The end of the computer
Futurologists and sci-fi authors once imagined a future filled with screens, computers, smartphones and visible tech. That’s not how it will be at all. Sure, old technologies never die, so these separate boxes won’t disappear altogether, and there’s still plenty of time left for innovation in such things, and vast profits still to be made in them as this revolution begins. There may be a decade or two of growth left for these endangered technologies. But the mainstream future of digital technologies is much more human, much more connected, much more social, much more embedded, and much less visible. The future is AR. The whirring big boxes and things with flashing lights that eat our space, our environment, our attention and our lives will, if they exist at all, be hidden in well-managed farms of servers, or in cupboards and walls. This will greatly reduce our environmental impact, the mountains of waste, the ugliness of our built spaces. I, for one, will be glad to see the disappearance of TV sets, of mountains of wires on my desk, of the stacks of tablets, cellphones, robots, PCs, and e-readers that litter my desktop, cupboards and basement. OK, I’m a bit geeky. But most of our homes and workplaces are shrines to screens and wiring. It’s ugly, it’s incredibly wasteful, it’s inhibiting. Though smartness will be embedded everywhere, in our clothing, our furniture, our buildings, our food, the visible interface will appear on displays that play only in or on our heads, and in or on the heads of those around us, in one massive shared hyperreality, a blend of physical and virtual that we all participate in, perhaps sharing the same virtual space, perhaps a different one, perhaps one physical space, perhaps more. At the start, we will wear geeky goggles, visors and visible high tech, but this will just be an intermediate phase. Pretty soon they will start to look cool, as designers with less of a Star Trek mentality step in. Before long, they will be no weirder than ordinary glasses. Later, they will almost vanish. The end point is virtual invisibility, and virtual ubiquity.
AR at scale
Pokémon GO has barely scratched the surface of this adjacent possible, but it has given us our first tantalizing glimpses of the unimaginably vast realms of potential that emerge once enough people hook into the digitally augmented world and start doing things together in it. To take one of the most boringly familiar examples, will we still visit cinemas when we all have cinema-like fidelity in devices on or in our heads? Maybe. There’s a great deal to be said for doing things together in a physical space, as Pokémon GO shows us with a vengeance. But, though we might be looking at the ‘same’ screen, in the same place, there will be no need to project it. Anywhere can become a cinema just as anywhere can be a home for a Pokémon. Anywhere can become an office. Any space can turn into what we want it to be. My office, as I type this, is my boat. This is cool, but I am isolated from my co-workers and students, channeling all communication with them through the confined boundaries of a screen. AR can remove those boundaries, if I wish. I could be sitting here with friends and colleagues, each in their own spaces or together, ‘sitting’ in the cockpit with me or bobbing on the water. I could be teaching, with students seeing what I see, following my every move, and vice versa. When my outboard motor needs fixing (it often does), I could see it with a schematic overlay, or receive direct instruction from a skilled mechanic: the opportunities for the service industry, from plumbing to university professoring, are huge. I could replay events where they happened, including historical events that I was not there to see, things that never happened, things that could happen in the future, what-if scenarios, things that are microscopically small, things that are unimaginably huge, and so on. This is a pretty old idea with many mature existing implementations but, till now, they have been isolated phenomena, and most are a bit clunky. As this becomes mainstream, it will cascade into everything. Forget rose-tinted spectacles: the world can be whatever I want it to become. In fact, this could be literally true, not just virtually: I could draw objects in the space they will eventually occupy (such virtual sculpture apps already exist for VR), then 3D print them.
Just think of the possibilities for existing media. Right now I find it useful to work on multiple monitors because the boundaries of one screen are insufficient to keep everything where I need it at once. With AR, I can have dozens of them or (much more interestingly) forget the ‘screen’ metaphor altogether and work as fluidly as I like with text, video, audio and more, all the while as aware of the rest of my environment, and the people in it, as I wish. Computers, including cellphones, isolate: they draw us into them, draw our gaze away from the world around us. AR integrates with that world, and integrates us with it, enhancing both physical and virtual space, enhancing us. We are and have only ever been intelligent as a collective, our intelligence embedded in one another and in the technologies we share. Suddenly, so much more of that can be instantly available to us. This is seriously social technology, albeit that there will be some intriguing and messy interpersonal problems when each of us might be engaged in a private virtual world while outwardly engaging in another. There are countless ways this could (and will) play out badly.
Or what about a really old technology? I now have hundreds of e-books that sit forgotten, imprisoned inside that little screen, viewable a page at a time or listed in chunks that fit the dimensions of the device. Bookshelves – constant reminders of what we have read and augmenters of our intellects – remain one of the major advantages of p-books, as does their physicality that reveals context, not just text. With AR, I will be able to see my whole library (and other libraries and bookstores, if I wish), sort it instantly, filter it, seek ideas and phrases, flick through books as though they were physical objects, or view them as a scroll, or one large sheet of virtual paper, or countless other visualizations that massively surpass physical books as media that contribute to my understanding of the text. Forget large-format books for images: they can be 20 metres tall if we want them to be. I’ll be able to fling pages, passages, etc onto the wall or leave them hovering in the air, shuffle them, rearrange them, connect them. I’ll be able to make them disappear all at once, and reappear in the same form when I need them again. The limits are those of the imagination, not the boundaries of physical space. We will no doubt start by skeuomorphically incorporating what we already know but, as the adjacent possibles unfold, there will be no end to the creative potential to go far, far beyond that. This is one of the most boring uses of AR I can think of, but it is still beyond magical.
We will, surprisingly soon, continuously inhabit multiple worlds – those of others, those others invent, those that are abstract, those that blend media, those that change what we perceive, those that describe it, those that explain it, those that enhance it, those we assemble or create for ourselves. We will see the world through one another’s eyes, see into one another’s imaginations, engage in multiple overlapping spaces that are part real, part illusion, and we will do so with others, collocated and remote, seamlessly, continuously. Our devices will decorate our walls, analyze our diets, check our health. Our devices won’t forget things, will remember faces, birthdays, life events, connections. We may all have eidetic memories, if that is what we want. While cellphones make our lives more dangerous, these devices will make them safer, warning us when we are about to step into the path of an oncoming truck as we monitor our messages and news. As smartness is embedded in the objects around us, our HUDs will interact with them: no more lost shirts, no guessing the temperature of our roasts, no forgetting to turn off lights. We will gain new senses – seeing in the dark, even through walls, will become commonplace. We will, perhaps, sense small fluctuations in skin temperature to help us better understand what people are feeling. Those of us with visual impairment (most of us) will be able to zoom in, magnify, have text read to us, or delve deeper through QR codes or their successors. Much of what we need to know now will be unnecessary (though we will still enjoy discovering it, as much as we enjoy discovering monsters) but our ability to connect it will grow exponentially. We won’t be taking devices out of our pockets to do that, nor sitting in front of brightly lit screens.
We will very likely become very dependent on these ubiquitous, barely visible devices, these prostheses for the mind. We may rarely take them off. Not all of this will be good. Not by a mile. When technologies change us, as they tend to do, many of those changes tend to be negative. When they change us a lot, there will be a lot of negatives, lots of new problems they create as well as solve, lots of aggregations and integrations that will cause unforeseen woes. This video at vimeo.com/166807261 shows a nightmare vision of what this might be like, but it doesn’t need to be a nightmare: we will need to learn to tame it, to control it, to use it wisely. Ad blockers will work in this space too.
What comes next
AR has been in the offing for some time, but mainly as futuristic research in labs, half-baked experimental products like Google Glass, or ‘hey wow’ technologies like Layar, Aurasma, Google Translate, etc. Google, Facebook, Apple, Microsoft, Sony, Amazon, all the big players, as well as many thousands of startups, are already scrabbling frantically to get into this space, and to find ways to use what they already have to better effect. I suspect they are looking at the Pokémon GO phenomenon with a mix of awe, respect, and avarice (and, in Google’s case, perhaps a hint of regret). Formerly niche products like Google Tango or Structure Sensor are going to find themselves a lot more in the spotlight as the value of being able to accurately map physical space around us becomes ever greater. Smarter ways of interacting, like this at www.youtube.com/watch?v=UA_HZVmmY84, will sprout like weeds.
People are going to pay much more attention to existing tools and wonder how they can become more social, more integrated, more fluid, less clunky. We are going to need standards: isolated apps are quite cool, but the big possibilities occur when we are able to mash them up, integrate them, allow them to share space with one another. It would be really useful if there were an equivalent of the World Wide Web for the augmented world: a means of addressing not just coordinates but surfaces, objects, products, trees, buildings, etc, that any application could hook into, that is distributed and open, not held by those that control the APIs. We need spatial and categorical hyperlinks between things that exist in physical and virtual space. I fear that, instead, we may see more of the evils of closed APIs controlled by organizations like Facebook, Google, Apple, Microsoft, Amazon, and their kin. Hopefully they will realise that they will get bigger benefits from expanding the ecosystem (I think Google might get this first) but there is a good chance that short-termist greed will get the upper hand instead. The web had virgin, non-commercial ground in which to flourish before the bad people got there. I am not sure that such a space exists any more, and that’s sad. Perhaps HTML 6 will extend into physical space. That might work. Every space, every product, every plant, every animal, every person, addressable via a URL.
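To make that a little more concrete, here is a minimal sketch of my own (not any real or proposed standard: the “phys://” scheme, the names and the fields are all invented purely for illustration) of what an openly addressable record for a physical thing, with spatial and categorical hyperlinks to other things, might look like:

```python
# A purely hypothetical sketch: neither the "phys://" scheme nor any of these
# names exist as real standards. The point is only to show the shape of an open,
# web-like way of addressing and hyperlinking things in physical space.
from dataclasses import dataclass, field


@dataclass
class SpatialAnchor:
    """An addressable thing in physical space: a building, a product, a tree."""
    url: str                     # e.g. "phys://example/campus/library/front-door"
    lat: float                   # WGS84 latitude
    lon: float                   # WGS84 longitude
    elevation_m: float = 0.0     # metres above the reference ellipsoid
    category: str = "unknown"    # categorical hook, e.g. "building:entrance"
    links: list[str] = field(default_factory=list)   # hyperlinks to other anchors


def link(a: SpatialAnchor, b: SpatialAnchor) -> None:
    """Create a two-way spatial hyperlink between two anchors."""
    a.links.append(b.url)
    b.links.append(a.url)


if __name__ == "__main__":
    door = SpatialAnchor("phys://example/campus/library/front-door",
                         54.7190, -113.2860, category="building:entrance")
    kiosk = SpatialAnchor("phys://example/campus/library/help-kiosk",
                          54.7191, -113.2861, category="service:information")
    link(door, kiosk)
    print(door.links)   # ['phys://example/campus/library/help-kiosk']
```

The details don’t matter; what matters is the shape: anything, from a shelf to a statue, could be resolved, linked to, and annotated by any application, without having to go through a single company’s API.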
There will be ever more innovations in battery and other power-saving technologies, display technologies and usability: the abysmal battery life of current devices, in particular, will soon be very irritating. There will likely be a lot of turf wars as different cloud services compete for user populations, different standards and APIs compete for apps, and different devices compete for customers. There will be many acquisitions. Privacy, already a major issue, will take a pounding, as new ways of invading it proliferate. What happens when Google sees all that you see? Measures your room with millimetre accuracy? Tracks every moment of your waking life? What happens when security services tap in? Or hackers? Or advertisers? There will be pushback and resistance, much of it justified. New forms of DRM will struggle to contain what needs to be free: ownership of digital objects will be hotly contested. New business models (personalized posters anyone? in situ personal assistants? digital objects for the home? mashup museums and galleries?) will enrage us, inform us, amuse us, enthrall us. Facebook, temporarily wrong-footed in its ill-considered efforts to promote Oculus, will come back with a vengeance and find countless new ways to exploit us (if you think it is bad now, imagine what it will be like when it tracks our real-world social networks). The owners of the maps and the mapped data will become rich: Niantic is right now sitting on a diamond as big as the Ritz. We must be prepared for new forms of commerce, new sources of income, new ways of learning, new ways of understanding, new ways of communicating, new notions of knowledge, new tools, new standards, new paradigms, new institutions, new major players, new forms of exploitation, new crimes, new intrusions, new dangers, new social problems we can so far barely dream of. It will certainly take years, not months, for all of this to happen, though it is worth remembering that network effects kick in fast: the Pokémon GO explosion took only a few days. It is coming, significant parts of it are already here, and we need to be preparing for it now. Though the seeds have been germinating for many years, they have germinated in relatively isolated pockets. This simple game has opened up the whole ecosystem.
Pokéducation
I guess, being an edtech blogger, I should say a bit more about the effects of Pokémon GO on education, but that’s mostly for another post, and much of it is implied in what I have written so far. There have been plenty of uses of AR in conventional education so far, and there will no doubt be thousands of ways that people use Pokémon GO in their teaching (some great adjacent possibles in locative, gamified learning), as well as ways to use the countless mutated purpose-built forms that will appear any moment now, and that will be fun, though not earth-shattering. I have, for instance, been struggling to find useful ways to use geocaching in my teaching (of computing etc) for over a decade, but it was always too complex to manage, given that my students are mostly pretty sparsely spread across the globe: basically, I don’t have the resources to populate enough geocaches. The kind of mega-scale mapping that Niantic has successfully accomplished could now make this possible, if they open up the ecosystem. However, most uses of AR will, at first, simply extend the status quo, letting us do better what we have always done, and what we only needed to do because of physics. The real disruption, the result of the fact that we can overcome physics, will take a while longer, and will depend on the ubiquity of more integrated, seamlessly networked forms of AR. When the environment is smart, the kind of intelligence we need to make use of it is quite different from most of what our educational systems are geared up to provide. When connection between the virtual and physical is ubiquitous, fluid and high fidelity, we don’t need to limit ourselves to conventional boundaries of classes, courses, subjects and schools. We don’t need to learn today what we will only use in 20 years’ time. We can do it now. Networked computers made this possible. AR makes it inevitable. I will have more to say about this.
Interesting reflections in Scientific American on morbid curiosity – that we are driven by our curiosity, sometimes even when we actually know that there is a strong likelihood it will hurt us. In the article, as the title implies, this is portrayed as a bad thing. I disagree.
“The drive to discover is deeply ingrained in humans, on par with the basic drives for food or sex, says Christopher Hsee of the University of Chicago, a co-author of the paper. Curiosity is often considered a good instinct—it can lead to new scientific advances, for instance—but sometimes such inquiry can backfire. “The insight that curiosity can drive you to do self-destructive things is a profound one,” says George Loewenstein, a professor of economics and psychology at Carnegie Mellon University who has pioneered the scientific study of curiosity.”
This is not exactly a novel insight, nor a profound one: we even have a popular proverb for it that I mention to my cats on an almost daily basis. They don’t listen.
There is a strong relationship between curiosity and the desire for competence: a need to know how things work, how to do something we cannot yet do, why things are the way they are, where our limits lie, how to become more capable of acting in the world. From an evolutionary perspective we are curious with a purpose. It allows us to make effective use of our environment, to become competent within it. This is really good for survival so, of course, it is selected for. That it sometimes drives us to do things that harm us is actually a very positive feature, as long as it is balanced with a sufficient level of caution and the harm it causes is not too great. It helps us to know what to avoid, as well as what is useful to us. It also helps us to be more adaptable to bad things that we cannot avoid. It makes us more flexible, and lets us both know and extend our limits.
The first experiment described here involved people playing with pens even knowing that some were novelty items that would give them an electric shock. I’m not sure why the researchers mixed in some harmless pens, because even when pain is an absolute certainty, curiosity can drive us to experience it. I have long used electrostatic zappers that are designed to alleviate the itch in mosquito bites by administering a sharp and slightly painful shock to the skin. I have yet to meet a single child and have met very few adults that did not want to try it out on their own skin, regardless of whether they had any bites, in the full and certain knowledge that it would hurt. This is described in the article as self-destructive curiosity, but I don’t think that’s right at all. If subjects had been convincingly warned that some pens would kill or maim them, then I am quite certain that very few would have played with them (some might, of course – evolution thrives on variation and, in some environments, high-risk strategies might pay off). But being curious about what kind of pain it might cause is really just a way of discovering or achieving competence, of discovering how we cope with this kind of shock, of testing hypotheses about ourselves and the environment, as well as finding out whether such joke pens actually work as advertised. This is potentially useful information: it will make you less likely to be a victim of a practical joke, or perhaps inspire you to perform one more effectively. Either way, it’s probably not a big deal in the grand scheme of things but, then again, very few learning experiences are. The value is more about how we integrate and connect such experiences.
The article describes another experiment in which participants were encouraged to predict their feelings after being shown an unpleasant image. Those so primed were less likely to choose to see it. Again, this makes sense in the light of what we already know. We are curious with a purpose – to learn – so, if we reflect a bit on what we have already learned, then it might dull our curiosity to experience something bad again. That’s potentially useful. I’m not sure that it is always a good thing, though. I happen to like, say, some horror movies that disgust me, or comedies that rely on discomfort for their humour. In fact, the anticipation of fear or disgust is often one of the main things that drives their plots and keeps my eyes glued to them. If the zombie apocalypse comes, I will be totally prepared. It also prepares me better for things that are going to really upset me. Likewise for funfair rides, sailing on a breezy day, exercising until it hurts, eating hot chili, or struggling with difficult deadlines.
So while, yes, we absolutely should learn from experience, we also need to remember that it can lead us into fixed ways of thinking that can, when conditions change, be less adaptable and adaptive. There is an ever-shifting balance between fear and curiosity that we need to embrace, perhaps especially when curiosity leads to the likelihood of something unpleasant (though not too unpleasant) happening. And, even when the danger is great, there are also risks that are sometimes worth taking. ‘What if..?’ is one of the most powerful phrases in any language.
I love the slogan that Audrey Watters has chosen for her new branding. As she puts it:
“I wanted my work to both highlight the longstanding relationship between behaviorism and testing – built into the ideology and the infrastructure since ed-tech’s origins in the early twentieth century – and to remind people that there are also alternatives to treating students like animals to be trained.”
An article in Neuroscience News about a recent (paywalled – grr) brain-scan study of teenagers, predictably finding that having your photos liked on social media sparks off a lot of brain activity, notably in areas associated with reward, as well as social activity and visual attention. So far, so-so, and a bit odd that this is what Neuroscience News chose to focus on, because that’s only a small subsection of the study and by far the least interesting part. What’s really interesting to me about the study is that the researchers mainly investigated the effects of existing likes (or, as they put it, ‘quantifiable social endorsements’) on whether teens liked a photo, scanning their brains as they did so. As countless other studies (including mine) have suggested, not just for teens, the effects were significant. As many studies have previously shown, photos endorsed by peers – even strangers – are a great deal more likely to be liked, regardless of their content. The researchers actually faked the likes and noted that the effect was the same whether showing ‘neutral’ content or risky behaviours like smoking and drinking. Unlike most existing studies, the researchers feel confident in describing this in terms of peer approval and conformity, thanks to the brain scans. As the abstract puts it:
“Viewing photos with many (compared with few) likes was associated with greater activity in neural regions implicated in reward processing, social cognition, imitation, and attention.”
The paper itself is a bit fuzzy about which areas are activated under which conditions: not being adept at reading brain scans, I am still unsure about whether social cognition played a similarly important role when seeing likes of one’s own photos compared with others liked by many people, though there are clearly some significant differences between the two. This bothers me a bit because, within the discussion of the study itself, they say:
“Adolescents model appropriate behavior and interests through the images they post (behavioral display) and reinforce peers’ behavior through the provision of likes (behavioral reinforcement). Unlike offline forms of peer influence, however, quantifiable social endorsement is straightforward, unambiguous, and, as the name suggests, purely quantitative.”
I don’t think this is a full explanation, as it is confounded by the instrument used. An alternative plausible explanation is that, when unsure of our own judgement, we use other cues (which, in this case, can only ever come from other people thanks to the design of the system) to help make up our minds. A similar effect would have been observed using other cues such as, for example, list position or size, with no reference to how many others had liked the photos or not. Most of us (at least, most that don’t know how Google works) do not see the ordering of Google Search results as social endorsement, though that is exactly what it is, but list position is incredibly influential in our choice of links to click and, presumably, our neural responses to such items on the page. It would be interesting to further explore the extent to which the perception of value comes from the fact that it is liked by peers as opposed to the fact that the system itself (a proxy expert) is highlighting an image as important. My suspicion is that there might be a quantifiable social effect, at least in some subjects, but it might not be as large as that shown here. There’s very good evidence that subjects scanned much-liked photos with greater care, which accords with other studies in the area, though it does not necessarily correlate with greater social conformity. As ever, we look for patterns and highlights to help guide our behaviours – we do not and cannot treat all data as equal.
There’s a lot of really interesting stuff in this apart from that, though. I am particularly interested in the activation of the frontal gyrus, previously associated with imitation, when looking at much-liked photos. This is highly significant in the transmission of memes as well as in social learning generally.
Unsurprisingly, when you use averages to make decisions about actions concerning individual people, those decisions reinforce existing biases. This is exactly the basis of bigotry, racism, sexism and a host of other well-known evils, so programming such bias into analytics software is beyond a bad idea. This article describes how algorithmic systems are used to help make decisions about things like bail and sentencing in courts. Though race is not explicitly taken into account, correlates like poverty and acquaintance with people that have police records are included. In a perfectly vicious circle, the system reinforces biases over time. To make matters worse, this particular system uses secret algorithms, so there is no accountability and not much of a feedback loop to improve them if they are in error.
This matters to educators because this is very similar to what much learning analytics does too (there are exceptions, especially when used solely for research purposes). It looks at past activity, however that is measured, compares it to more or less discriminatory averages or similar aggregates of other learners’ past activity, and then attempts to guide future behaviour of individuals (teachers or students) based on the differences. This latter step is where things can go badly wrong, but there would be little point in doing it otherwise. The better examples inform rather than adapt, allowing a human intermediary to make decisions, but that’s exactly what the algorithmic risk assessment described in the article does too and it is just as risky. The worst examples attempt to directly guide learners, sometimes adapting content to suit their perceived needs. This is a terribly dangerous idea.
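To make the mechanism concrete, here is a toy simulation of my own (not from the article, and not any real system): the algorithm never sees the protected attribute, only a correlated proxy plus its own past decisions, yet the feedback loop steadily manufactures the ‘evidence’ that one group is riskier:

```python
# Toy illustration of proxy bias plus a feedback loop. No real data or system;
# the groups, the proxy, and the scoring rule are all invented for illustration.
import random

random.seed(1)
N = 10_000

people = []
for _ in range(N):
    group = random.choice(["A", "B"])   # protected attribute: never shown to the model
    # proxy (e.g. neighbourhood poverty) correlates with group, not with behaviour
    proxy = random.gauss(1.0 if group == "B" else 0.0, 1.0)
    people.append({"group": group, "proxy": proxy, "record": 0})

def risk_score(p):
    # The "model": past record plus the proxy. No protected attribute in sight.
    return 0.6 * p["record"] + 0.4 * p["proxy"]

for year in range(5):
    cutoff = sorted(risk_score(p) for p in people)[int(0.8 * N)]   # flag the top 20% each year
    for p in people:
        if risk_score(p) >= cutoff:
            p["record"] += 1   # being flagged creates a record, which raises next year's score

for g in ("A", "B"):
    members = [p for p in people if p["group"] == g]
    flagged = sum(p["record"] > 0 for p in members)
    print(f"group {g}: {flagged / len(members):.0%} ever flagged")
# Nobody in this simulation "behaves" at all, yet group B ends up flagged far more often.
```

Swap ‘record’ for flags, grades or interventions and the same loop is what a learning analytics system risks building, especially when its algorithms are secret and there is no external check on the data it feeds itself.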
An interesting proposal from Horn & Fisher that fills in one of the most gaping holes in conventional quantitative research in education (specifically randomized controlled trials, but also less rigorous efforts like A/B testing etc) by explicitly looking at the differences in those that do not fit the average curve – the ones that do not benefit, or that benefit to an unusual degree: the outliers. As the authors say:
“… the ability to predict what works, for which students, in what circumstances, will be crucial for building effective, personalized-learning environments. The current education research paradigm, however, stops short of offering this predictive power and gets stuck measuring average student and sub-group outcomes and drawing conclusions based on correlations, with little insight into the discrete, particular contexts and causal factors that yield student success or failure. Those observations that do move toward a causal understanding often stop short of helping understand why a given intervention or methodology works in certain circumstances, but not in others.”
I have mixed feelings about this. Yes, this process of iterative refinement is a much better idea than simply looking at improvements in averages (with no clear causal links), and they are entirely right to critique those that use such methods, but:
a) I don’t think it will ever succeed in the way it hopes, because every context is significantly different and this is a complex design problem, where even minuscule differences can have huge effects. Learning never happens the same way twice. Though much improved on what it replaces, it is still trying to make sense through tools of reductive materialism, whereas what we are dealing with, and what the authors’ critique implies, is a different kind of problem. Seeking this kind of answer is like seeking the formula for painting a masterpiece. It’s only ever partially (at best) about methodologies and techniques, and it is always possible to invent new ones that change everything.
b) It relies on the assumption that we know exactly what we are looking for: that what we seek to measure is the thing that matters. It might be exactly what is needed for personalized education (where you find better ways to make students behave the way you want them to behave) but exactly the opposite for personal education (where every case is different, where education is seen as changing the whole person in unfathomably rich and complex ways).
That said, I welcome any attempt to stop the absurdity of intervening in ways that benefit the (virtually non-existent) average student, and to focus instead on each individual student. This is a step in the right direction.
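For what it’s worth, here is a made-up numeric illustration (the numbers are entirely invented) of why that matters: an intervention can look like a clear win on average while actively harming a sizeable subgroup, which is precisely the kind of outlier an averages-only analysis never sees:

```python
# Invented test-score gains under the same imaginary intervention: it suits one
# kind of learner and harms another, but the overall average still looks good.
from statistics import mean

gains_group_x = [8, 10, 9, 11, 12, 10]   # learners the intervention suits
gains_group_y = [-4, -6, -5, -3]         # learners it actively harms

all_gains = gains_group_x + gains_group_y
print(f"average gain: {mean(all_gains):+.1f}")       # +4.2 -> reads as a success
print(f"group X mean: {mean(gains_group_x):+.1f}")   # +10.0
print(f"group Y mean: {mean(gains_group_y):+.1f}")   # -4.5 -> invisible in the average
```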
This article is based on a flawed initial premise: that universities are there to provide skills for the marketplace. From that perspective, as the writer, Jonathan Munk, suggests, there’s a gap both between what universities generally support and what employers generally need, and between students’ and employers’ perceptions of the skills graduates actually possess. If we assume that the purpose of universities is to churn out market-ready workers with employer-friendly skills, they are indeed singularly failing and will likely continue to do so. As Munk rightly notes:
“… universities have no incentive to change; the reward system for professors incentivizes research over students’ career success, and the hundreds of years of institutional tradition will likely inhibit any chance of change. By expecting higher education to take on closing the skills gap, we’re asking an old, comfortable dog to do new tricks. It will not happen.”
Actually quite a lot of us, and even quite a few governments (USA notwithstanding), are pretty keen on the teaching side of things, but Munk’s analysis is substantially correct and, in principle, I’m quite comfortable with that. There are far better, cheaper and faster ways to get most marketable job skills than to follow a university program, and providing such skills is not why we exist. This is not to say that we should not do such things. For pedagogical and pragmatic reasons, I am keen to make it possible for students to gain useful workplace skills from my courses, but it has little to do with the job market. It’s mainly because it makes the job of teaching easier, leads to more motivated students, and keeps me on my toes having to stay in touch with the industry in my particular subject area. Without that, I would not have the enthusiasm needed to build or sustain a learning community, I would be seen as uninterested in the subject, and what I’d teach would be perceived as less relevant, and would thus be less motivating. That’s also why, in principle, combining teaching and research is a great idea, especially in strongly non-vocational subjects that don’t actually have a marketplace. But, if it made more sense to teach computing with a 50-year-old language and a machine that should be in a museum, I would do so at the drop of a hat. It matters far more to me that students develop the intellectual tools to be effective lifelong learners, develop values and patterns of thinking that are commensurate with both a healthy society and personal happiness, become part of a network of learners in the area, engage with the community/network of practice, and see bigger pictures beyond the current shiny things that attract attention like moths to a flame. This focus on being, rather than specific skills, is good for the student, I hope, but it is mainly good for everyone. Our customer is neither the student nor the employer: it’s our society. If we do our jobs right then we both stabilize and destabilize societies, feeding them with people that are equipped to think, to create, to participate, reflectively, critically, and ethically: to make a difference. We also help to feed societies with ideas, theories, models and even the occasional artefact that make life better and richer for all, though, to be honest, I’m not sure we do so in the most cost-effective ways. However, we do provide an open space with freedom to explore things that have no obvious economic value, without the constraints or agendas of the commercial world, nor those of dangerously partisan or ill-informed philanthropists (Zuckerberg, Gates – I’m thinking of you). We are a social good. At least, that’s the plan – most of us don’t quite live up to our own high expectations. But we do try. The article acknowledges this role:
“Colleges and universities in the U.S. were established to provide rich experiences and knowledge to their students to help them contribute to society and improve their social standing.”
Politely ignoring the US-centricity of this claim and its mild inaccuracy, I’d go a bit further: in the olden days, it was also about weeding out the lower achievers and/or, in many countries (the US was again a notable offender), those too poor to get in. Universities were (and most, AU being a noble and rare exception, still are) a filter that makes the job of recruiters easier by removing the chaff from the wheat before we even get to them, and then again when we give out the credits: that’s the employment advantage. It’s very seldom (directly) because of our teaching. We’re just big expensive sieves, from that perspective. However, the article goes on to say:
“But in the 1930s, with millions out of work, the perceived role of the university shifted away from cultural perspective to developing specific trades. Over time, going to college began to represent improved career prospects. That perception persists today. A survey from 2015 found the top three reasons people chose to go to college were:
improved employment opportunities
make more money
get a good job”
I’m glad that Munk correctly uses the term ‘perception’, because this is not a good reason to go to a university. The good job is a side-effect, not the purpose, and it is becoming less important with each passing year. Partly this is due to market saturation and degree inflation, partly due to better alternatives becoming more widespread, especially thanks to the Internet. One of the ugliest narratives of modern times is that the student should pay for their education because they will earn more money as a result. Utter nonsense. They will earn more money because they would have earned more money anyway, even if universities had never existed. The whole point of that filtering is that it tends to favour those that are smarter and thus more likely to earn more. In fact, were it not for the use of university qualifications as a pre-filter that would exclude them from a (large but dwindling) number of jobs, they would have earned far more money by going straight into the workforce. I should observe in passing that open universities like AU are not entirely immune from this role. Though they do little filtering for ability on entry, AU and other open universities nonetheless act as filters inasmuch as those that are self-motivated enough to handle the rigours of a distance-taught university program while otherwise engaged, usually while working, are far better candidates for most jobs than those who simply went to a university because that was the natural next step. A very high proportion of our students that make it to the end do so with flying colours, because those that survive are incredibly good survivors. I’ve seen the quality of work that comes out of this place and been able to compare it with that from the best of traditional universities: our students win hands down, almost every time. The only time I have seen anything like as good was in Delhi, where 30 students were selected for a program each year from over 3,000 fully qualified applicants (i.e. those with top grades from their schools). This despite, or perhaps because of, the fact that computing students had to sit an entrance exam that, bizarrely and along with other irrelevances, required them to know about Brownian motion in gases. I have yet to come across a single computing role where such knowledge was needed. Interestingly, they were not required to know about poetry, art, or music, though I have certainly come across computing roles where appreciation of such things would have been of far greater value.
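To put the filtering argument in concrete terms, here is a toy simulation (the model and every number in it are invented): if the same underlying ability drives both getting a degree and earning more, a large raw ‘degree premium’ shows up even though, in this model, the degree contributes nothing to earnings at all:

```python
# Invented model of the filter effect: ability determines both degree attainment
# and earnings; the degree itself never enters the earnings formula.
import random
from statistics import mean

random.seed(7)

people = []
for _ in range(50_000):
    ability = random.gauss(0, 1)
    has_degree = ability + random.gauss(0, 1) > 0.5                  # the "filter" selects for ability
    earnings = 40_000 + 15_000 * ability + random.gauss(0, 5_000)    # note: no degree term at all
    people.append((has_degree, ability, earnings))

grads = [e for d, a, e in people if d]
others = [e for d, a, e in people if not d]
print(f"raw 'degree premium': {mean(grads) - mean(others):,.0f}")    # large, despite no causal effect

# Compare like with like: people of very similar underlying ability.
band = [(d, e) for d, a, e in people if abs(a) < 0.1]
g = [e for d, e in band if d]
n = [e for d, e in band if not d]
print(f"premium among similar-ability people: {mean(g) - mean(n):,.0f}")  # a small fraction of the raw premium
```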
Why this article is right
If it were just about job-ready skills like, in computing, the latest frameworks, languages and systems, the lack of job-readiness would not bother me in the slightest. However, as the article goes on to say, it is not just the ‘technical’ (in the loosest sense) skills that are the problem. The article mentions, as key employer concerns, critical thinking, creativity, and oral and written communication skills. These are things that we should very much be supporting and helping students to develop, however we perceive our other roles. In fact, though the communication stuff is mainly a technical skillset, creativity and problem-solving are pretty much what it is all about, so if students lack these things, we are failing even by our own esoteric criteria.
I do see a tension here, and a systematic error in our teaching. A goodly part of it is down to a misplaced belief that we are teaching stuff, rather than teaching a way of being. A lot of courses focus on a set of teacher-specified outcomes, and on accreditation of those set outcomes, and treat the student as (at best) input for processing or (at worst) a customer for a certificate. When the process is turned into a mechanism for outputting people with certificates, with fixed outcomes and criteria, the process itself loses all value. ‘We become what we behold’ as McLuhan put it: if that’s how we see it, that’s how it will be. This is a vicious circle. Any mechanism that churns students out faster or more efficiently will do. In fact, a lot of discussion and design in our universities is around doing exactly that. For example, the latest trend in personalization (a field, incidentally, that has been around for decades) is largely based on that premise: there is stuff to learn, and personalization will help you to learn it faster, better and cheaper than before. As a useful by-product, it might keep you on target (our target, not yours). But one thing it will mostly not do is support the development of critical thinking, nor will it support the diversity, freedom and interconnection needed for creative thinking. Furthermore, it is mostly anything but social, so it also reduces capacity to develop those valuable social communication skills. This is not true of all attempts at personalization, but it is true of a lot of them, especially those with most traction. The massive prevalence of cheating is directly attributable to the same incorrect perception: if cheating is the shortest path to the goal (especially if accompanied by a usually-unwarranted confidence in avoiding detection) then of course quite a few people will take it. The trouble is, it’s the wrong goal. Education is a game that is won through playing it well, not through scoring.
The ‘stuff’ has only ever been raw material, a medium and context for the really important ways of being, doing and thinking that universities are mostly about. When the stuff becomes the purpose, the purpose is lost. So, universities are trying and, inevitably, failing to be what employers want, and in the process failing to do what they are actually designed to do in the first place. It strikes me that everyone would be happier if we just tried to get back to doing what we do best. Teaching should be personal, not personalized. Skills should be a path to growth, not to employment. Remembered facts should be the material, not the product. Community should be a reason for teaching, not a means by which it occurs. Universities should be places we learn to be, not places we be to learn. They should be purveyors of value, not of credentials.
Thanks to Gerald Ardito for pointing this one out to me. It’s about the growing use of libraries for learning circles, where groups of learners get together locally to study, in this case around MOOCs provided via P2PU. Librarians – rarely subject-matter experts – organize these groups and provide support for the process, but most of the learning engagement is peer-to-peer. As the article notes, the process is quite similar to that of a book club.
As the article suggests, such learning circles are popping up all over the place, not just in libraries. Indeed, the Landing has been used by our students to arrange quite similar study-buddy groups at AU, albeit with less formal organization and intent, and not always working on the same courses together. Though there are benefits to be had from co-constructing knowledge together, people do not necessarily need to be working on the same thing. Simply being there to support, enthuse, or inspire one another is often enough to bring real benefits. There are two models, both of which work. The first, as in the case of these learning circles, is to use central coordination online, with local communities working on the same things at roughly the same times. The second is distributed the other way round, with the local communities providing the centre, but with individuals working online in different contexts.
This blurring between local and online is a growing and significant trend. It somewhat resembles the pattern of business and innovation centres that bring together people from many companies etc, working remotely from their own organizations in a shared local space. Doing different things in physical spaces shared with other people helps to overcome many of the issues of isolation experienced by online workers and learners, especially in terms of motivation, without the need to move everyone in an organization (be it a university, a class, or a company) into the same physical location. It adds economies of scale, too, allowing the use of shared resources (e.g. printers, 3D printers, heating, conferencing facilities, etc), and reduces environmentally and psychologically costly issues around commuting and relocating. Moreover, decoupling location and work while supporting physical community brings all the benefits of diversity that, in a traditional organization or classroom, tend to get lost. Working online does not and should not interfere with local connection with real human beings, and this is a great way to support our need to be with other people, and the value that we get from being with them. From the perspective of the environment, our local communities, our psychological well-being, our relationships, our creativity, and our bank balances, local communities and remote working, or remote communities and local working, both seem far more sensible, at least for many occupations and many kinds of learning.
The article reports completion rates of 45-55%, which is at least an order of magnitude greater than the norm for MOOCs, although it would be unwise to read too much into that because of the self-selection bias inherent in this: it might well be that those who were sufficiently interested to make the effort to visit the libraries would be those that would persist anyway. However, theory and experience both suggest that the benefits of getting together at one place and time should lead to far greater motivation to persist. Going somewhere with other people at a particular time to do something is, after all, pretty much the only significant value in most lectures. This is just a more cost-effective, learning-effective, human way of doing that.
A nice interview in AUSU’s Voice Magazine – continued at https://www.voicemagazine.org/articles/featuredisplay.php?ART=11372 – with SCIS’s own Maiga Chang, describing his teaching and research. Maiga’s bubbly enthusiasm comes through strongly in this, and his responses are filled with great insights. I particularly like (in the second part of the interview) his thoughts on what makes Athabasca University so distinctive, and its value in the future of learning:
“What are the benefits of teaching at AU compared to traditional universities? There are differences. They are different from traditional university and AU because we are almost purely online as a university. We teach students with a lot of help from technology. So, in that case, I would say that teaching at AU that we are the pioneers of teaching students with technology, artificial intelligence applications, learning analytics – everything. I would say that this kind of teaching and learning should be the future. As you know, some people start to work on full time jobs after K-12 and some of them go to university for another four years, which means they only learn in traditional classroom or in traditional setting for 12 to 16, maybe 18 years.
How long will you live? How long will you need to learn? You will need to learn for your whole life. When you graduate from high school and university, you cannot go back to university unless you want to quit a job when you want to learn once again. You will need another way of doing life-long learning.
AU gives us the opportunity to create a kind of smart learning environment. So if we can use our research results to make a smarter learning environment, then we can provide students with more personalized learning experiences, which can make them learn more efficient, and learn the things that they really need and want to see on their own way and own pace. That is another good thing for students, I would say, teaching at AU.
What do you think are the strengths of learning at AU? This is the future. Like the students right now in high school and in primary school, you can ask them. They are trying to use mobile devices to learn. Also, as you know, they will post something on their Facebook or their blog. That is the future. As a parent, around 50% of students at AU have family, even children. When they learn at AU, they are adapting to the future of learning, and, in that case, when their child or children have a question. In my upbringing, I could not ask questions of my parents about using Facebook, but right now, you can, because people use Facebook. Now when you’re taking an AU course, you are sometimes asked to make a video, put it on YouTube, and then you can teach your children, your child.
One more thing is very important. It is self-regulated learning skill. It is very important for everyone because it helps you efficiently learn, or digest, or plan your goal. When you learn with AU, you will learn that kind of skills. You can teach your child and children, and other family members.”
Great stuff! I have one comment to add on a small part of this: I am firmly with Alfie Kohn and, more recently and in similar vein, Stephen Downes on the side of ‘personal’ rather than ‘personalized’. Personalized learning does have a place in the rich tapestry of tools and methods to help with meeting a range of learning needs, but it is very important that personalization is not something done to learners. Too often, it is the antithesis of self-direction, too often it reinforces and automates teacher control, too often it is isolating and individually focused, too often it sacrifices caring, breadth and serendipity in the service of efficiency, and that efficiency is too often narrowly defined in terms of teacher goals. Knowing Maiga, and seeing what else he talks about in this interview, I’m pretty sure that’s not what he means here! Personal learning means focusing on what learners need, want, find exciting, interesting, challenging, problematic or mind-expanding. It is inherently and deeply a social activity supported by and engaged with others, and it is, at the same time, inherently a celebration of diversity and individuality. For some skills – mechanical foundations for example, or as controllable advisory input – personalization can contribute to that, but it should never usurp the personal.