TEL MOOC from Athabasca University

Starts today…

Course Description

Teachers who want to learn more about teaching with technology will find this Massive Open Online Course (MOOC), Introduction to Technology-Enabled Learning (TEL), informative and engaging. Using up-to-date learning design and simple, accessible technology, the course runs on an easy-to-use learning platform available via the Internet. The course is designed for teachers who want to build on their knowledge and practice in teaching and learning with technology. It will run over five weeks and requires approximately three to five hours of time each week. Designed to accommodate teachers’ busy schedules, the course offers flexibility with options for learning the content. You will learn from readings, videos, discussions with other participants and instructors, meaningful exercises, quizzes and short assignments. Certification is available for those who wish to complete all required exercises and quizzes.

Address of the bookmark:

Original page

Alfie Kohn: "It’s bad news if students are motivated to get A’s" – YouTube

A nice one-minute summary of Alfie Kohn’s case against grades.

There’s a great deal more Kohn has to say on the subject that is worth reading, including an interview.

From that interview, this captures the essence of the case pretty well:

“The research suggests three consistent effects of giving students grades – or leading them to focus on what grade they’ll get. First, their interest in the learning itself is diminished. Second, they come to prefer easier tasks – not because they’re lazy, but because they’re rational. After all, if the point is to get an A, your odds are better if you avoid taking intellectual risks. Third, students tend to think in a more superficial fashion – and to forget what they learned more quickly – when grades are involved.

To put it positively, students who are lucky enough to be in schools (or classrooms) where they don’t get letter or number grades are more likely to want to continue exploring whatever they’re learning, more likely to want to challenge themselves, and more likely to think deeply. The evidence on all of these effects is very clear, and it seems to apply to students of all ages.

As far as I can tell, there are absolutely no benefits of giving grades to balance against these three powerful negative consequences – except that doing so is familiar to us and doesn’t take much effort.”


Note: if this video shows up as a blank space in your browser, then your security settings are preventing embedding of untrusted content in a trusted page. This video is totally trustworthy, so look for the alert to override it, typically near the address bar in your browser.


The cost of admission to the unlearning zone

[Image: a dull classroom (public domain)]

I describe some of what I do as ‘unteaching’, so I find this highly critical article by Miss Smith – The Unlearning Zone – interesting. Miss Smith dislikes the terms ‘unteaching’ and ‘unlearning’ for some well-expressed aesthetic and practical reasons: as she puts it, they are terms “that would not be out of place in a particularly self-satisfied piece of poststructuralist literary analysis circa 1994.” I partially agree. However, she also seems equally unenamoured with what she thinks they stand for. I disagree with her profoundly on this so, as she claims to be new to these terms, here is my attempt to explain a little about what I mean by them, why I think they are a useful part of the educators’ lexicon, and why they are crucially important for learners’ development in general.

First the terms…

Yes, ‘unteaching’ is an ugly neologism and it doesn’t really make sense: that’s part of the appeal of using it – a bit of cognitive dissonance can be useful for drawing attention to something. However, it is totally true that someone who is untaught is just someone who has not (yet) been taught, so ‘unteaching’, seen in that light, is at best pointless, at worst self-contradictory. On the other hand, it does seem to follow pretty naturally from ‘unlearning’ which, contrary to Miss Smith’s assertion, has been in common use for centuries and makes perfect sense. Have you ever had to unlearn bad habits? Me too.

As I understand it, ‘unteach’ is to ‘teach’ as ‘undo’ is to ‘do’.  Unteaching is still teaching, just as undoing is still doing, and unlearning is still learning. Perhaps deteaching would be a better term. Whatever we choose to call it, unteaching is concerned with intentionally dismantling the taught belief that teaching is about exerting power over learners to teach, and replacing it with the attitude that teachers are there to empower learners to learn. This is not a particularly radical idea. It is what all teachers should do anyway, I reckon. But it is worth drawing attention to it as a distinct activity because it runs counter to the tide, and the problem it addresses is virtually ubiquitous in education up to, and sometimes at, doctoral level.

Traditional teaching of the sort Miss Smith seems to defend in her critique does a lot more than teach a subject, skill, or way of thinking. It teaches that learning is a chore that is not valuable in and of itself, that learners must be forced to do it for some other purpose, often someone else’s purpose. It teaches that teaching is something done to students by a teacher: at its worst, it teaches that teaching is telling; at best, that teaching involves telling someone to do something. It’s not that (many) teachers deliberately seek these outcomes, but that they are the most likely lessons to be learned, because they are the ones that are repeated most often. The need for unteaching arises because traditional teaching, in addition to whatever it intends to teach, teaches some terrible lessons about learning and the role of teaching in that process that must be unlearned.

What is unteaching?

Miss Smith claims that unteaching means “open plan classes, unstructured lessons and bean bags.” That’s not the way I see it at all. Unlike traditional teaching, with its timetables, lesson plans, learning objectives, and uniform tests, unteaching does not have its own technologies and methods, though it does, for sure, tend to be a precursor to connectivist, social constructivist, constructionist, and other more learner-centred ways of thinking about the learning process, which may sometimes be used as part of the process of unteaching itself. Such methods, models, and attitudes emerge fairly naturally when you stop forcing people to do your bidding. However, they are just as capable of being used in a controlling way as the worst of instructivist methods: reports on such interventions that include words like ‘students must…’, ‘I make my students…’ or (less blatantly) ‘students (do X)’ far outnumber all others, and that is the very opposite of unteaching. The specific technologies (including pedagogies as much as open-plan classrooms and beanbags) are not the point. Lectures, drill-and-practice and other instructivist methods are absolutely fine, as long as:

  1. they at least attempt to do the job that students want or need,
  2. they are willingly and deliberately chosen by students,
  3. students are well-informed enough to make those choices, and
  4. students can choose to learn otherwise at any time.

No matter how cool and groovy your problem-based, inquiry-based, active methods might be, if they are imposed on students (especially with the use of threats for non-compliance and rewards for compliance – e.g. qualifications, grades, etc) then it is not unteaching at all: it’s just another way of doing the same kind of teaching that caused the problem in the first place. But if students have control – and ‘control’ includes being able to delegate control to someone else who can scaffold, advise, assist, instruct, direct, and help them when needed, as well as being able to take it back whenever they wish – then such methods can be very useful. So can lectures. To all those educational researchers who object to lectures, I ask whether they have ever found them valuable in a conference (and, if not, why did they go to a conference in the first place?). It’s not the pedagogy of lectures that is at fault. It’s the requirement to attend them and the accompanying expectation that people are going to learn what you are teaching as a result. That’s, simply put, empirically wrong. It doesn’t mean that lecturees learn nothing. Far from it. But what you teach and what they learn are different kinds of animal.

Problems with unteaching

It’s really easy to be a bad unteacher – I think that is what Miss Smith is railing against, and it’s a fair criticism. I’m often pretty bad at it myself, though I have had a few successes along the way too. Unteaching and, especially, the pedagogies that result from having done unteaching, are far more likely to go wrong, and they take a lot more emotional, intellectual, and social effort than traditional teaching because they don’t come pre-assembled. They have no convenient structures and processes in place to do the teaching for you.  Traditional teaching ‘works’ even when it doesn’t. If you throw someone into a school system, with all its attendant rewards, punishments, timetables, rules and curricula, and if you give them the odd textbook and assessment along the way, then most students will wind up learning something like what is intended to be taught by the system, no matter how awful the teachers might be. In such a system, students will rarely learn well, rarely persistently, rarely passionately, seldom kindly, and the love of learning will have been squashed out of many of them along the way (survivors often become academics and teachers themselves). But they will mostly pass tests at the end of it. With a bit of luck many might even have gained a bit of useful knowledge or skill, albeit that much will be not just wasted and forgotten as easily as a hotel room number when your stay is over, but actively disliked by the end of it. And, of course, they will have learned dependent ways of learning that will serve them poorly outside institutional systems.

To make things far worse, those very structures that assist the traditional teacher (grades, compulsory attendance, fixed outcomes, concept of failure, etc) are deeply antagonistic to unteaching and are exactly why it is needed in the first place. Unteachers face a huge upstream struggle against an overwhelming tide that threatens to drown passionate learning every inch of the way. The results of unteaching can be hard to defend within a traditional educational system because, by conventional measures, it is often inefficient and time-consuming. But conventional measures only make sense when you are trying to make everyone do the same things, through the same means, with the same ends, measured by and in order to meet the same criteria. That’s precisely the problem.

The final nail in unteaching’s coffin is that it is applied very unevenly across the educational system, so every freedom it brings is counterbalanced by a mass of reiterated antagonistic lessons from other courses and programs. Every time we unteach someone, two others reteach them.  Ideally, we should design educational systems that are friendlier to and more supportive of learner autonomy, and that are (above all else) respectful of learners as human beings. In K-12 teaching there are plenty of models to draw from, including Summerhill, Steiner (AKA Waldorf) schools, Montessori schools, Experiential Learning Schools etc. Few are even close to perfect, but most are at least no worse than their conventional counterparts, and they start with an attitude of respect for the children rather than a desire to make them conform. That alone makes them worthwhile. There are even some regional systems, such as those found in Finland or (recently) British Columbia, that are heading broadly in the right direction. In universities and colleges there are plenty of working models, from Oxford tutorials to Cambridge supervisions, to traditional theses and projects, to independent study courses and programs, to competency-based programs, to PLAR/APEL portfolios, and much more. It is not a new idea at all. There is copious literature and many theoretical models that have stood the test of time, from andragogy to communities of practice, through to teachings from Freire, Illich, Dewey and even (a bit quirkily) Vygotsky. Furthermore, generically and innately, most distance and e-learning unteaches better than its p-learning counterparts because teachers cannot exert the same level of control and students must learn to learn independently. 
Sadly, much of it is spoiled by coercing students with grades, thereby providing the worst of both worlds: students are forced to behave as the teacher demands in their terminal behaviours but, without physical copresence, receive less of the guidance and emotional/social support that might empower them in the process. Much of my own research and teaching is concerned with inverting that dynamic – increasing empowerment and social support through online learning, while decreasing coercion. I’d like to believe that my institution, Athabasca University, is largely dedicated to the same goal, though we mostly have a way to go before we get it right.

Why it matters

Unteaching is to a large extent concerned with helping learners – including adult learners – to get back to the point at which most children start their school careers – driven by curiosity, personal interest, social value, joy, and delight – a state that is schooled out of them over years of being taught dependency. Once misconceptions about what education is for, what teachers do, and how we learn have been removed, teaching can happen much more effectively: supporting, nurturing, inspiring, challenging, responding, and so on, but not controlling, not making students do things they are not ready to do for reasons that mean little to them and have even less to do with what they are learning.

However, though it is an immensely valuable terminal outcome, improved learning is perhaps not the biggest reason for unteaching. The real issue is moral: it’s simply the right thing to do. The greatest value is that students are far more likely to have been treated with the respect, care, and honour that all human beings deserve along the way. Not ‘care’ of the sort you would give to a dog when you train it to be obedient and well behaved. Care of the sort that recognizes and valorizes autonomy and diversity, that respects individuals, that cherishes their creativity and passion, that sees learners as ends in themselves, not products or (perish the thought) customers. That’s a lesson worth teaching, a way of being that is worth modelling. If that demands more effort, if it is more fallible, and if it means that fewer students pass your tests, then I’m OK with that. That’s the price of admission to the unlearning zone.


Understanding the response to financial and non-financial incentives in education: Field experimental evidence using high-stakes assessments

What they did

This is a report by Simon Burgess, Robert Metcalfe, and Sally Sadoff on a large-scale study conducted in the UK on the effects of financial and non-financial incentives on GCSE scores (GCSEs are UK qualifications usually taken around age 16, usually involving exams), involving over 10,000 students in 63 schools who were given cash or ‘non-financial incentives’. ‘Non-financial incentives’ did not stretch as far as a pat on the back or encouragement given by caring teachers – this was about giving tickets for appealing events. The rewards were given not for getting good results but for particular behaviours the researchers felt should be useful proxies for effective study: specifically, attendance, conduct, homework, and classwork. None of the incentives were huge rewards to those already possessing plenty of creature comforts but, for poorer students, they might have seemed substantial. Effectiveness of the intervention was measured in terminal grades. The researchers were very thorough and very careful to observe limitations and concerns. It is as close to an experimental design as you can get in a messy real-world educational intervention, with numbers that are sufficient and diverse enough to make justifiable empirical claims about the generalizability of the results.

What they found

Rewards had little effect on average marks overall, and it made little difference whether the rewards were financial or not. However, in high-risk groups (poor students, immigrants, etc.) there was a substantial improvement in GCSE results for those given rewards, compared with the control groups.

My thoughts

The only thing that does surprise me a little is that so little effect was seen overall, but I hypothesize that the reward/punishment conditions are so extreme already among GCSE students that it made little difference to add any more to the mix.  The only ones that might be affected would be those for whom the extrinsic motivation is not already strong enough. There is also a possibility that the demotivating effects for some were balanced out by the compliance effects for others: averages are incredibly dangerous things, and this study is big on averages.

What makes me sad is that there appears to be no sense of surprise or moral outrage about this basic premise in this report.

[Image: dogs being whipped, from Jack London’s ‘Call of the Wild’]

It appears reasonable at first glance: who would not want kids to be more successful in their exams? When my own kids had to do this sort of thing I would have been very keen on something that would improve their chances of success, and would be especially keen on something that appears to help to reduce systemic inequalities. But this is not about helping students to learn or improving education: this is completely and utterly about enforcing compliance and improving exam results. The fact that there might be a perceived benefit to the victims is a red herring: it’s like saying that hitting dogs harder is good for the dogs because it makes them behave better than hitting them gently. The point is that we should not be hitting them at all. It’s not just morally wrong, it doesn’t even work very well, and it only continues to work at all if you keep hitting them. It teaches students that the end matters more than the process, that learning is inherently undesirable and should only be done when there is a promise of a reward or threat of punishment, and that they are not in charge of it.

The inevitable result of increasing rewards (or punishments – they are functionally equivalent) is to further quench any love of learning that might be left at this point in their school careers, to reinforce harmful beliefs about how to learn, and to put students off, for life, subjects they might have loved under other circumstances. In years to come, people will look back on barbaric practices like this much as we now look back on the slave trade or the denial of rights to women before emancipation.

Studies like this make me feel a bit sick. 



Udacity Partners with IBM, Amazon for Artificial Intelligence 'Degree'

Udacity is now valued at over $1b. This seems a long way from the dream of open (libre and free) learning of the early MOOC pioneers (pre-Thrun):

“Earlier this year, Udacity’s revenue from Nanodegrees was growing nearly 30% month over month and the initiative is profitable, according to Thrun. According to one source, Udacity was on track to make $24 million this year. Udacity also just became a unicorn—a startup valued at or above $1 billion—in its most recent $105 million funding round in 2015.”

This should also be a wake-up call to universities that believe their value is measurable by the employability of their graduates. Udacity has commitments from huge companies like IBM, BMW, Tata and others to accept its nanodegree graduates. Nanodegrees are becoming a serious currency in the job market, at lower cost and higher productivity than anything universities can match, with all the notable benefits of online delivery and timeframes that make lifelong learning of up-to-date competencies a reality, not an aspiration. If we don’t adapt to this then universities are, if not dead in the water, definitely at risk of becoming less relevant.

I recently posted a response to Dave Cormier’s question about the goals of education in which I suggested that our educational institutions play an important sustaining and generative role in cultures – not just in large-scale societal culture, but in the myriad overlapping and contained cultures within societies. Though I have reservations about the risks of government involvement in education, I am a little fearful but also a little intrigued about what happens when private organizations start to make a substantial contribution to that role. There have always been a few such cases, and that has always been a useful thing. Having a few alternatives nipping around your heels and introducing fresh ideas helps to keep an ecosystem from stagnating. But this is big-scale stuff, and it’s part of a trend that worries me. We are already seeing extremely large contributions to traditional education from private donors like the Gates and Zuckerberg foundations that reinforce dreadful, misguided beliefs about what education is, or what it is for. With big funding, these become self-fulfilling beliefs. As long as we can sustain diversity then I think it is not a bad thing, but the massive influence of a few (even well-meaning) individuals with the spending power of nations is very, very dangerous.


‘Rote learning, not play, is essential for a child’s education’ – seriously?

An interesting observation…

Helen Abadzi, an expert in cognitive psychology and neuroscience who was formerly an education specialist at the World Bank, said that pupils who “overlearn” and repeatedly practise tasks, such as mental arithmetic, free up their working memory for more “higher order” analytical thinking.

Yes, they do, good point. We should not forget that. Unfortunately, she goes way beyond her field of expertise and explicitly picks on Sir Ken Robinson in the process…

“Go out and play, well sure – but is that going to teach me mental math so I can go to a store and instantly make a decision about what is the best offer to buy?” she said.

I cannot be certain but, as far as I know, and although he has made the occasional wild assertion, Sir Ken has never for one moment suggested that overlearning should be avoided. In fact, that’s rather obvious from the examples he gives in what the article acknowledges is the most popular TED talk of all time. I’ve yet to meet a good ballerina who has not practiced until it hurt. When you get into the flow of something and truly play, rote learning is exactly what you do. I have practiced my guitar until my fingers bled. Indeed, for each of my many interests in life, I have practiced again, again, and again, doing it until I get it right (or at least right enough). I’m doing it right now. I am fairly certain that you have done the same. To suggest that play does not involve an incredible amount of gruelling repetition and rote learning (particularly valuable when done from different angles, in different contexts, and with different purposes, a point Abadzi fails to highlight but I am sure understands) is bizarre. Even my cats do it. It is even more bizarre to leap from suggesting that overlearning is necessary to a wildly wrong and completely unsubstantiated statement like:

People may not like methods like direct instruction – “repeat after me” – but they help students to remember over the long term. A class of children sitting and listening is viewed as a negative thing, yet lecturing is highly effective for brief periods.

Where the hell did that come from? A scientist should be ashamed of such unsupported and unsupportable tripe. It does not follow from the premises. We need to practice, so extrinsic motivation is needed to make students learn? And play is not essential? Seriously? Such idiocy needs to be stamped on, stamped out, and stamped out hard. This is a good case study in why neuroscience is inadequate as a means to explain learning, and is completely inadequate as a means to explain education.

In the interests of fairness, I should note that brief lectures (and, actually, even long lectures) can indeed lead to effective learning, albeit not necessarily of what is being lectured about and only when they are actually interesting. The problem is not lectures per se, but the fact that people are forced to attend them, and that they are expected to learn what the lecturer intends to teach.

Activity trackers flop without cash motivation – Futurity

Another from the annals of unnecessary and possibly harmful research on motivation. Unsurprisingly, fitness trackers do nothing for motivation and, even less surprisingly, if you offer a reward then people do exercise more, but are significantly less active when the reward is taken away…

…at the end of twelve months, six months after the incentives were removed, this group showed poorer step outcomes than the tracker only group, suggesting that removing the incentives may have demotivated these individuals and caused them to do worse than had the incentives never been offered.

This effect has been demonstrated countless times. Giving rewards reliably kills intrinsic motivation. When will we ever learn?

One interesting take-away is that (whether or not the subjects took more steps) there were no noticeable improvements in health outcomes across the entire experimental group. Perhaps this is because 6 months is not long enough to register the minor improvements involved, or maybe the instrument for measuring improved outcomes was too coarse. More likely, and as I have previously observed, subjects probably did things to increase their step count at the expense of other healthy activities like cycling etc. 

What is education for?

Dave Cormier, in typically excellent form, reflects on the differences between education and learning in his latest post. I very much agree with pretty much everything he writes here. This extract condenses the central point that, I think, matters more than any other:

“Learning is a constant. It is what humans do. They don’t, ever, learn exactly what you want them to learn in your education system. They may learn to remember that 7+5=12 as my children are currently being taught to do by rote, but they also ‘learn’ that math is really boring. We drive them to memorise so their tests will be higher, but is it worth the tradeoff? Is a high score on addition worth “math is boring?”

This is crucial: it is impossible to live and not to learn. Failure to learn is not an option. What matters is what we learn and how we learn it. The thing is, as Dave puts it:

“Education is a totally different beast than learning. Learning is a thing a person does. Education is something a society does to its citizens. When we think about what we want to do with ‘education’ suddenly we need to start thinking about what we as a society think is important for our citizens to know. There was a time, in an previous democracy, where learning how to interact in your democracy was the most important part of an education system. When i look through my twitter account now I start to think that learning to live and thrive with difference without hate and fear might be a nice thing for an education system to be for.”

My take on this

I have written here and there about the deep intertwingled relationship between education and indoctrination (e.g., most recently, here). Most of its early formal incarnations were, and a majority of them still are, concerned with passing on doctrine, often of a religious, quasi-religious, or political nature. To do that also requires the inculcation of values, and the acquisition of literacies (by my definition, the set of hard, human-enacted technologies needed to engage with a given culture, be that culture big or small). The balance between indoctrination, inculcation and literacy acquisition has shifted over the years and varies according to culture, context, and level, but education remains, at its heart, a process for helping learners learn to be in a given society or subset of it. This remains true even at the highest levels of terminal degrees: PhDs are almost never about the research topic so much as they are about learning to be an academic, a researcher, someone who understands and lives the norms, values and beliefs of the academic research community in which their discipline resides. To speak the language of a discipline. It is best to speak multiple languages, of course. One of the reasons I am a huge fan of crossing disciplinary boundaries is that it slightly disrupts that process by letting us compare, contrast, and pick between the values of different cultures, but such blurring is usually relatively minor. Hard-core physicists share much in common with the softest literary theorists. Much has been written about the quality of ‘graduateness’, typically with some further intent in mind (e.g. employability), but what the term really refers to is a gestalt of ways of thinking, behaving, and believing that have what Wittgenstein thought of as family likenesses. No single thing or cluster of things typifies a graduate, but there are common features spread between them. We are all part of the same family.

Education has a lot to do with replication and stability but it is, and must always have been, at least as much about being able to adapt and change that society. While, in days gone by, it might have been enough to use education as a means to produce submissive workers, soldiers, and priests, and to leave it to higher echelons to manage change (and manage their underlings), it would be nice to think that we have gone beyond that now. In fact, we must go beyond that now, if we are to survive as a species and as a planet. Our world is too complex for hierarchical management alone.

I believe that education must be both replicative and generative. It must valorize challenge to beliefs and diversity as much as it preserves wisdom and uniformity. It must support both individual needs and social needs, the needs of people and the needs of the planet, the needs of all the societies within and intersecting with its society. This balance between order and chaos is about sustaining evolution. Evolution happens on the edge of chaos, not in chaos itself (the Red Queen Regime), and not in order (the Stalinist Regime). This is not about design so much as it is about the rules of change in a diverse complex adaptive system. The ever burgeoning adjacent possible means that our societies, as much as ecosystems, can do nothing but evolve to ever greater complexity, ever greater interdependence but, equally, ever greater independence, ever greater diversity. We are not just one global society, we are billions of them, overlapping, cross-cutting, independent, interdependent. And there is not just one educational system that needs to change. There are millions of them, millions of pieces of them, and more of them arriving all the time. We don’t need to change Education: that’s too simplistic and would, inevitably, just replace one set of mistakes with another. We need to change educations.


A Devil’s Dictionary of Educational Technology – Medium

Delightful compendium from Bryan Alexander. I particularly like:

Analytics, n. pl. “The use of numbers to confirm existing prejudices, and the design of complex systems to generate these numbers.”

Big data, n. pl. 1. When ordinary surveillance just isn’t enough.

Failure, n. 1. A temporary practice educators encourage in students, which schools then ruthlessly, publicly, and permanently punish.

Forum, n. 1. Social Darwinism using 1980s technology.

World Wide Web, n. A strange new technology, the reality of which can be fended off or ignored through the LMS, proprietary databases, non-linking mobile apps, and judicious use of login requirements.



Address of the bookmark:

Little monsters and big waves


Pokémon at Auschwitz

Some amazing stories have been emerging lately about Pokémon GO, from people wandering through live broadcasts in search of monsters, to lurings of mugging victims, to discoveries of dead bodies, to monsters in art galleries and museums, to people throwing phones to try to capture Pokémon, to it overtaking Facebook in engagement (by a mile), to cafes going from empty to full in a day thanks to one little monster, to people entering closed zoo enclosures and multiple other dangerous behaviours (including falling off a cliff), to uses of Pokémon to raise money for charity, to applause for its mental and physical health benefits, to the saving of 27 (real) animals, to religious edicts to avoid it from more than one religion, to cheating boyfriends being found out by following Pokémon GO tracks.

And so on.

Of all of them, my current favourite is the story of the curators of Auschwitz having to ask people not to play the game within its bounds. It’s kind of poetic: people are finding fictional monsters and playing games with them in a memorial that is there, more than anything, to remind us of real monsters. We shall soon see a lot more and a lot wilder clashes between reality and augmented reality, and a lot more unexpected consequences, some great, some not. Lives will be lost, lives will be changed. There will be life affirming acts, there will be absurdities, there will be great joy, there will be great sadness. As business models emerge, from buttons to sponsorship to advertising to trading to training, there will be a lot of money being made in a vast, almost instant ecosystem. Above all, there will be many surprises. So many adjacent possibles are suddenly emerging.

AR (augmented reality) has been on the brink of this breakthrough moment for a decade or so. I did not guess that it would explode in less than a week when it finally happened, but here it is. Some might quibble about whether Pokémon GO is actually AR as such (it overlays rather than augments reality), but, if there were once a more precise definition of AR, there isn’t any more. There are now countless millions that are inhabiting a digitally augmented physical space, very visibly sharing the same consensual hallucinations, and they are calling it AR. It’s not that it’s anything new. Not at all. It’s the sheer scale of it.  The walls of the dam are broken and the flood has begun.

This is an incredibly exciting moment for anyone with the slightest interest in digital technologies or their effects on society. The fact that it is ‘just’ a game just makes it all the more remarkable. For some, this seems like just another passing fad: bigger than most, a bit more interesting, but just a fad. Perhaps so. I don’t care. For me, it seems like we are witnessing a sudden, irreversible, and massive global shift in our perceptions of the nature of digital systems, of the ways that we can use them, and of what they mean in our lives. This is, with only a slight hint of hyperbole, about to change almost everything.

Aside: it’s not VR, by the way

Zuckerberg and an audience wearing Samsung Gears (Facebook image)

There has been a lot of hype of late around AR’s geekier cousin, VR (virtual reality), notably relating to Oculus, HTC Vive, and PlayStation VR, but I’m not much enthused. VR has moved only incrementally since the early 90s and the same problems we saw back then persist in almost exactly the same form now, just with more dots. It’s cool, but I don’t find the experience is really that much more immersive than it was in the early 90s, once you get over the initial wowness of the far higher fidelity. There are a few big niches for it (hard-core gaming, simulation, remote presence, etc.), and that’s great. But, for most of us, its impact will (in its current forms) not come close to that of PCs, smartphones, tablets, TVs or even games consoles. Something that cuts us off from the real world so completely, especially while it is so conspicuously physically engulfing our heads in big tech, cannot replace very much of what we currently do with computers, and only adds a little to what we can already do without it. Notwithstanding its great value in supporting shared immersive spaces, the new ways it gives us to play with others, and its great potential in games and education, it is not just asocial, it is antisocial. Great big tethered headsets (and even untethered low-res ones) are inherently isolating. We also have a long way to go towards finding a good way to move around in virtual spaces. This hasn’t changed much for the better since the early 90s, despite much innovation. And that’s not to mention the ludicrous amounts of computing power needed for it by today’s standards: my son’s HTC Vive requires a small power station to keep it going, and it blows hot air like a noisy fan heater. It is not helped by the relative difficulty of creating high-fidelity interactive virtual environments, nor by vertigo issues. It’s cool, it’s fun, but this is still, with a few exceptions, geek territory.
Its big moment will come, but not quite yet, and not as a separate technology: it will be just one of the features that comes for free with AR.

Bigger waves

AR, on the whole, is the opposite of isolating. You can still look into the eyes of others when you are in AR, and participate not just in the world around you, but in an enriched and more social version of it. A lot of the fun of Pokémon GO involves interacting with others, often strangers, and it involves real-world encounters, not avatars. More interestingly, AR is not just a standalone technology: as we start to use more integrated technologies like heads-up displays (HUDs) and projectors, it will eventually envelop VR too, along with screen-based technologies like PCs, smartphones, TVs, e-readers, and tablets, and a fair number of standalone smart devices like the Amazon Echo (though the Internet of Things will integrate interestingly with it). It has been possible to replace screens with glasses for a long time (devices between $100 and $200 abound) but, till now, there has been little point apart from privacy, curiosity, and geek cred. They have offered less convenience than cellphones, and a lot of (literal and figurative) headaches. They are either tethered or have tiny battery lives, they are uncomfortable, they are fragile, they are awkward to use, high-resolution versions cost a lot, most are as isolating as VR and, as long as they are a tiny niche product, perhaps most of all, there are some serious social obstacles to wearing HUDs in public. That is all about to change. They are about to become mainstream.

The fact that AR can be done right now with no more than a cellphone is cool and it has been for a few years, but it will get much cooler as the hardware for HUDs becomes better, more widespread and, most importantly, more people share the augmented space. The scale is what makes the Pokémon GO phenomenon so significant, even though it is currently mostly a cellphone and GO Plus thing. It matters because, apart from being really interesting in its own right, soon, enough people will want hardware to match, and that will make it worth going into serious mass production. At that point it gets really interesting, because lots of people will be wearing HUD AR devices.

Google’s large-scale Glass experiment was getting there (and it’s not over yet), but it was mostly viewed with mild curiosity and a lot of suspicion. Why would any normal person want to look like the Borg? What were the wearers doing with those very visible cameras? What were they hiding? Why bother? The tiny minority that wore them were outsiders, weirdos, geeks, a little creepy. But things have moved on: the use cases have suddenly become very compelling, enough (I think) to overcome the stigma. The potentially interesting Microsoft HoloLens, the incredibly interesting Magic Leap, and the rest (Meta 1, Recon Jet, Moverio, etc.) that are waiting in the wings are nearly here. Apparently, Pokémon GO with a HoloLens might be quite special. Apple’s rumoured foray into the field might be very interesting. Samsung’s contact-lens camera system is still a twinkling in Samsung’s eye, but it and many things even more amazing are coming soon. Further off, as nanotech develops and direct neural interfaces become available, the possibilities are (hopefully not literally) mind-blowing.

What this all adds up to is that, as more of us start to use such devices, the computer as an object, even in its ubiquitous small smartphone or smartwatch form, will increasingly disappear. Tools like wearables and smart digital assistants have barely even arrived yet, but their end is palpably nigh. Why bother with a smartwatch when you can project anything you wish on your wrist (or anywhere else, for that matter)? Why bother with having to find a device when you are wearing any device you can imagine? Why take out a phone to look for Pokémon? Why look at a screen when you can wear a dozen of them, anywhere, any size, adopting any posture you like? It will be great for ergonomics. This is pretty disruptive: whole industries are going to shrink, perhaps even disappear.

The end of the computer

Futurologists and scifi authors once imagined a future filled with screens, computers, smartphones and visible tech. That’s not how it will be at all. Sure, old technologies never die, so these separate boxes won’t disappear altogether, and there’s still plenty of time left for innovation in such things, and vast profits still to be made in them as this revolution begins. There may be a decade or two of growth left for these endangered technologies. But the mainstream future of digital technologies is much more human, much more connected, much more social, much more embedded, and much less visible. The future is AR. The whirring big boxes and things with flashing lights that eat our space, our environment, our attention and our lives will, if they exist at all, be hidden in well-managed farms of servers, or in cupboards and walls. This will greatly reduce our environmental impact, the mountains of waste, the ugliness of our built spaces. I, for one, will be glad to see the disappearance of TV sets, of mountains of wires on my desk, of the stacks of tablets, cellphones, robots, PCs, and e-readers that litter my desktop, cupboards and basement. OK, I’m a bit geeky. But most of our homes and workplaces are shrines to screens and wiring. It’s ugly, it’s incredibly wasteful, it’s inhibiting. Though smartness will be embedded everywhere, in our clothing, our furniture, our buildings, our food, the visible interface will appear on displays that play only in or on our heads, and in or on the heads of those around us, in one massive shared hyperreality, a blend of physical and virtual that we all participate in, perhaps sharing the same virtual space, perhaps a different one, perhaps one physical space, perhaps more. At the start, we will wear geeky goggles, visors and visible high tech, but this will just be an intermediate phase. Pretty soon they will start to look cool, as designers with less of a Star Trek mentality step in.
Before long, they will be no more weird than ordinary glasses. Later, they will almost vanish. The end point is virtual invisibility, and virtual ubiquity.

AR at scale

Pokémon GO has barely scratched the surface of this adjacent possible, but it has given us our first tantalizing glimpses of the unimaginably vast realms of potential that emerge once enough people hook into the digitally augmented world and start doing things together in it. To take one of the most boringly familiar examples, will we still visit cinemas when we all have cinema-like fidelity in devices on or in our heads? Maybe. There’s a great deal to be said for doing things together in a physical space, as Pokémon GO shows us with a vengeance. But, though we might be looking at the ‘same’ screen, in the same place, there will be no need to project it. Anywhere can become a cinema just as anywhere can be a home for a Pokémon. Anywhere can become an office. Any space can turn into what we want it to be. My office, as I type this, is my boat. This is cool, but I am isolated from my co-workers and students, channeling all communication with them through the confined boundaries of a screen. AR can remove those boundaries, if I wish. I could be sitting here with friends and colleagues, each in their own spaces or together, ‘sitting’ in the cockpit with me or bobbing on the water. I could be teaching, with students seeing what I see, following my every move, and vice versa. When my outboard motor needs fixing (it often does) I could see it with a schematic overlay, or receive direct instruction from a skilled mechanic: the opportunities for the service industry, from plumbing to university professoring, are huge. I could replay events where they happened, including historical events that I was not there to see, things that never happened, things that could happen in the future, what-if scenarios, things that are microscopically small, things that are unimaginably huge, and so on. This is a pretty old idea with many mature existing implementations (e.g. here, here, here and here). Till now they have been isolated phenomena, and most are a bit clunky. 
As this is accepted as the mainstream, it will cascade into everything. Forget rose-tinted spectacles: the world can be whatever I want it to become. In fact, this could be literally true, not just virtually: I could draw objects in the space they will eventually occupy (such virtual sculpture apps already exist for VR), then 3D print them.

Just think of the possibilities for existing media. Right now I find it useful to work on multiple monitors because the boundaries of one screen are insufficient to keep everything where I need it at once. With AR, I can have dozens of them or (much more interestingly) forget the ‘screen’ metaphor altogether and work as fluidly as I like with text, video, audio and more, all the while as aware of the rest of my environment, and the people in it, as I wish. Computers, including cellphones, isolate: they draw us into them, draw our gaze away from the world around us. AR integrates with that world, and integrates us with it, enhancing both physical and virtual space, enhancing us. We are and have only ever been intelligent as a collective, our intelligence embedded in one another and in the technologies we share. Suddenly, so much more of that can be instantly available to us. This is seriously social technology, albeit one where there will be some intriguing and messy interpersonal problems when each of us might be engaged in a private virtual world while outwardly engaging in another. There are countless ways this could (and will) play out badly.

Or what about a really old technology? I now have hundreds of e-books that sit forgotten, imprisoned inside that little screen, viewable a page at a time or listed in chunks that fit the dimensions of the device. Bookshelves – constant reminders of what we have read and augmenters of our intellects – remain one of the major advantages of p-books, as does their physicality, which reveals context, not just text. With AR, I will be able to see my whole library (and other libraries and bookstores, if I wish), sort it instantly, filter it, seek ideas and phrases, flick through books as though they were physical objects, or view them as a scroll, or one large sheet of virtual paper, or countless other visualizations that massively surpass physical books as media that contribute to my understanding of the text. Forget large-format books for images: they can be 20 metres tall if we want them to be. I’ll be able to fling pages, passages, etc. onto the wall or leave them hovering in the air, shuffle them, rearrange them, connect them. I’ll be able to make them disappear all at once, and reappear in the same form when I need them again. The limits are those of the imagination, not the boundaries of physical space. We will no doubt start by skeuomorphically incorporating what we already know but, as the adjacent possibles unfold, there will be no end to the creative potential to go far, far beyond that. This is one of the most boring uses of AR I can think of, but it is still beyond magical.

We will, surprisingly soon, continuously inhabit multiple worlds – those of others, those others invent, those that are abstract, those that blend media, those that change what we perceive, those that describe it, those that explain it, those that enhance it, those we assemble or create for ourselves. We will see the world through one another’s eyes, see into one another’s imaginations, engage in multiple overlapping spaces that are part real, part illusion, and we will do so with others, collocated and remote, seamlessly, continuously. Our devices will decorate our walls, analyze our diets, check our health. Our devices won’t forget things, will remember faces, birthdays, life events, connections. We may all have eidetic memories, if that is what we want. While cellphones make our lives more dangerous, these devices will make them safer, warning us when we are about to step into the path of an oncoming truck as we monitor our messages and news. As smartness is embedded in the objects around us, our HUDs will interact with them: no more lost shirts, no guessing the temperature of our roasts, no forgetting to turn off lights. We will gain new senses – seeing in the dark, even through walls, will become commonplace. We will, perhaps, sense small fluctuations in skin temperature to help us better understand what people are feeling. Those of us with visual impairment (most of us) will be able to zoom in, magnify, have text read to us, or delve deeper through QR codes or their successors. Much of what we need to know now will be unnecessary (though we will still enjoy discovering it, as much as we enjoy discovering monsters) but our ability to connect it will grow exponentially. We won’t be taking devices out of our pockets to do that, nor sitting in front of brightly lit screens.

We will very likely become very dependent on these ubiquitous, barely visible devices, these prostheses for the mind. We may rarely take them off. Not all of this will be good. Not by a mile. When technologies change us, as they tend to do, many of those changes tend to be negative. When they change us a lot, there will be a lot of negatives, lots of new problems they create as well as solve, lots of aggregations and integrations that will cause unforeseen woes. This video shows a nightmare vision of what this might be like, but it doesn’t need to be a nightmare: we will need to learn to tame it, to control it, to use it wisely. Ad blockers will work in this space too.

What comes next

AR has been in the offing for some time, but mainly as futuristic research in labs, half-baked experimental products like Google Glass, or ‘hey wow’ technologies like Layar, Aurasma, Google Translate, etc. Google, Facebook, Apple, Microsoft, Sony, Amazon, all the big players, as well as many thousands of startups, are already scrabbling frantically to get into this space, and to find ways to use what they already have to better effect. I suspect they are looking at the Pokémon GO phenomenon with a mix of awe, respect, and avarice (and, in Google’s case, perhaps a hint of regret). Formerly niche products like Google Tango or Structure Sensor are going to find themselves a lot more in the spotlight as the value of being able to accurately map the physical space around us becomes ever greater. Smarter ways of interacting, like this, will sprout like weeds.

People are going to pay much more attention to existing tools and wonder how they can become more social, more integrated, more fluid, less clunky. We are going to need standards: isolated apps are quite cool, but the big possibilities occur when we are able to mash them up, integrate them, allow them to share space with one another. It would be really useful if there were an equivalent of the World Wide Web for the augmented world: a means of addressing not just coordinates but surfaces, objects, products, trees, buildings, etc, that any application could hook into, that is distributed and open, not held by those that control the APIs. We need spatial and categorical hyperlinks between things that exist in physical and virtual space. I fear that, instead, we may see more of the evils of closed APIs controlled by organizations like Facebook, Google, Apple, Microsoft, Amazon, and their kin. Hopefully they will realise that they will get bigger benefits from expanding the ecosystem (I think Google might get this first) but there is a good chance that short-termist greed will get the upper hand instead. The web had virgin, non-commercial ground in which to flourish before the bad people got there. I am not sure that such a space exists any more, and that’s sad. Perhaps HTML 6 will extend into physical space. That might work. Every space, every product, every plant, every animal, every person, addressable via a URL.
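To make the idea of spatial hyperlinks slightly more concrete, here is a purely illustrative sketch in Python. The `arw://` scheme, its `geo/<lat>,<lon>,<alt>` path format, and the `category:name` object identifiers are all invented for this post – no such standard exists – but something of this shape, openly specified rather than locked behind a proprietary API, is what an addressable augmented world would need:

```python
from dataclasses import dataclass
from typing import Optional
from urllib.parse import urlparse, parse_qs

@dataclass
class SpatialLink:
    """A hypothetical hyperlink into physical space."""
    lat: float                 # latitude in decimal degrees
    lon: float                 # longitude in decimal degrees
    alt: float                 # altitude in metres
    object_id: Optional[str]   # optional 'category:name' anchored at the coordinates

def parse_spatial_link(uri: str) -> SpatialLink:
    """Parse an invented 'arw://' spatial hyperlink of the form:

        arw://geo/<lat>,<lon>,<alt>?object=<category>:<name>
    """
    parsed = urlparse(uri)
    if parsed.scheme != "arw":
        raise ValueError(f"not a spatial link: {uri}")
    # The path carries the coordinates; the query names a thing at that spot.
    lat, lon, alt = (float(part) for part in parsed.path.lstrip("/").split(","))
    object_id = parse_qs(parsed.query).get("object", [None])[0]
    return SpatialLink(lat, lon, alt, object_id)

# A link to a (hypothetical) named building at a street address in Vancouver:
link = parse_spatial_link("arw://geo/49.2827,-123.1207,20?object=building:library")
print(link.lat, link.object_id)
```

The design point of the sketch, such as it is, is that the link names both a coordinate and a categorized object, so that any application – not just the one that minted the identifier – could resolve either part.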

There will be ever more innovations in battery or other power/power-saving technologies, display technologies and usability: the abysmal battery life of current devices, in particular, will soon be very irritating. There will likely be a lot of turf wars as different cloud services compete for user populations, different standards and APIs compete for apps, and different devices compete for customers. There will be many acquisitions. Privacy, already a major issue, will take a pounding, as new ways of invading it proliferate. What happens when Google sees all that you see? Measures your room with millimetre accuracy? Tracks every moment of your waking life? What happens when security services tap in? Or hackers? Or advertisers? There will be pushback and resistance, much of it justified. New forms of DRM will struggle to contain what needs to be free: ownership of digital objects will be hotly contested. New business models (personalized posters, anyone? in situ personal assistants? digital objects for the home? mashup museums and galleries?) will enrage us, inform us, amuse us, enthrall us. Facebook, temporarily wrong-footed in its ill-considered efforts to promote Oculus, will come back with a vengeance and find countless new ways to exploit us (if you think it is bad now, imagine what it will be like when it tracks our real-world social networks). The owners of the maps and the mapped data will become rich: Niantic is right now sitting on a diamond as big as the Ritz. We must be prepared for new forms of commerce, new sources of income, new ways of learning, new ways of understanding, new ways of communicating, new notions of knowledge, new tools, new standards, new paradigms, new institutions, new major players, new forms of exploitation, new crimes, new intrusions, new dangers, new social problems we can so far barely dream of.
It will certainly take years, not months, for all of this to happen, though it is worth remembering that network effects kick in fast: Pokémon GO only took a few days. It is coming, significant parts of it are already here, and we need to be preparing for it now. Though the seeds have been germinating for many years, they have germinated in relatively isolated pockets. This simple game has opened up the whole ecosystem.


I guess, being an edtech blogger, I should say a bit more about the effects of Pokémon GO on education, but that’s mostly for another post, and much of it is implied in what I have written so far. There have been plenty of uses of AR in conventional education so far, and there will no doubt be thousands of ways that people use Pokémon GO in their teaching (some great adjacent possibles in locative, gamified learning), as well as ways to use the countless mutated purpose-built forms that will appear any moment now, and that will be fun, though not earth-shattering. I have, for instance, been struggling to find useful ways to use geocaching in my teaching (of computing etc.) for over a decade, but it was always too complex to manage, given that my students are mostly pretty sparsely spread across the globe: basically, I don’t have the resources to populate enough geocaches. The kind of mega-scale mapping that Niantic has successfully accomplished could now make this possible, if they open up the ecosystem. However, most uses of AR will, at first, simply extend the status quo, letting us do better what we have always done, much of which we only needed to do because of physics. The real disruption, the result of the fact that we can overcome physics, will take a while longer, and will depend on the ubiquity of more integrated, seamlessly networked forms of AR. When the environment is smart, the kind of intelligence we need to make use of it is quite different from most of what our educational systems are geared up to provide. When connection between the virtual and physical is ubiquitous, fluid and high-fidelity, we don’t need to limit ourselves to conventional boundaries of classes, courses, subjects and schools. We don’t need to learn today what we will only use in 20 years’ time. We can do it now. Networked computers made this possible. AR makes it inevitable. I will have more to say about this.

This is going to change things. Lots of things.