Little monsters and big waves


[Image: Pokémon at Auschwitz]

Some amazing stories have been emerging lately about Pokémon GO, from people wandering through live broadcasts in search of monsters, to lurings of mugging victims, to discoveries of dead bodies, to monsters in art galleries and museums, to people throwing phones to try to capture Pokémon, to it overtaking Facebook in engagement (by a mile), to cafes going from empty to full in a day thanks to one little monster, to people entering closed zoo enclosures and multiple other dangerous behaviours (including falling off a cliff), to uses of Pokémon to raise money for charity, to applause for its mental and physical health benefits, to the saving of 27 (real) animals, to religious edicts to avoid it from more than one religion, to cheating boyfriends being found out by following Pokémon GO tracks.

And so on.

Of all of them, my current favourite is the story of the curators of Auschwitz having to ask people not to play the game within its bounds. It’s kind of poetic: people are finding fictional monsters and playing games with them in a memorial that is there, more than anything, to remind us of real monsters. We shall soon see a lot more and a lot wilder clashes between reality and augmented reality, and a lot more unexpected consequences, some great, some not. Lives will be lost, lives will be changed. There will be life affirming acts, there will be absurdities, there will be great joy, there will be great sadness. As business models emerge, from buttons to sponsorship to advertising to trading to training, there will be a lot of money being made in a vast, almost instant ecosystem. Above all, there will be many surprises. So many adjacent possibles are suddenly emerging.

AR (augmented reality) has been on the brink of this breakthrough moment for a decade or so. I did not guess that it would explode in less than a week when it finally happened, but here it is. Some might quibble about whether Pokémon GO is actually AR as such (it overlays rather than augments reality), but, if there were once a more precise definition of AR, there isn’t any more. There are now countless millions inhabiting a digitally augmented physical space, very visibly sharing the same consensual hallucinations, and they are calling it AR. It’s not that it’s anything new. Not at all. It’s the sheer scale of it. The walls of the dam are broken and the flood has begun.

This is an incredibly exciting moment for anyone with the slightest interest in digital technologies or their effects on society. The fact that it is ‘just’ a game makes it all the more remarkable. For some, this seems like just another passing fad: bigger than most, a bit more interesting, but just a fad. Perhaps so. I don’t care. For me, it seems like we are witnessing a sudden, irreversible, and massive global shift in our perceptions of the nature of digital systems, of the ways that we can use them, and of what they mean in our lives. This is, with only a slight hint of hyperbole, about to change almost everything.

Aside: it’s not VR, by the way

[Image: Zuckerberg and an audience wearing Samsung Gears (Facebook image)]

There has been a lot of hype of late around AR’s geekier cousin, VR (virtual reality), notably relating to Oculus, HTC Vive, and Playstation VR, but I’m not much enthused. VR has moved only incrementally since the early 90s and the same problems we saw back then persist in almost exactly the same form now, just with more dots. It’s cool, but I don’t find the experience is really that much more immersive than it was in the early 90s, once you get over the initial wowness of the far higher fidelity. There are a few big niches for it (hard core gaming, simulation, remote presence, etc), and that’s great. But, for most of us, its impact will (in its current forms) not come close to that of PCs, smartphones, tablets, TVs or even games consoles. Something that cuts us off from the real world so completely, especially while it is so conspicuously physically engulfing our heads in big tech, cannot replace very much of what we currently do with computers, and only adds a little to what we can already do without it. Notwithstanding its great value in supporting shared immersive spaces, the new ways it gives us to play with others, and its great potential in games and education, it is not just asocial, it is antisocial. Great big tethered headsets (and even untethered low-res ones) are inherently isolating. We also have a long way to go towards finding a good way to move around in virtual spaces. This hasn’t changed much for the better since the early 90s, despite much innovation. And that’s not to mention the ludicrous amounts of computing power needed for it by today’s standards: my son’s HTC Vive requires a small power station to keep it going, and it blows hot air like a noisy fan heater. It is not helped by the relative difficulty of creating high fidelity interactive virtual environments, nor by vertigo issues. It’s cool, it’s fun, but this is still, with a few exceptions, geek territory. Its big moment will come, but not quite yet, and not as a separate technology: it will be just one of the features that comes for free with AR.

Bigger waves

AR, on the whole, is the opposite of isolating. You can still look into the eyes of others when you are in AR, and participate not just in the world around you, but in an enriched and more social version of it. A lot of the fun of Pokémon GO involves interacting with others, often strangers, and it involves real-world encounters, not avatars. More interestingly, AR is not just a standalone technology: as we start to use more integrated technologies like heads-up displays (HUDs) and projectors, it will eventually envelop VR too, along with screen-based technologies like PCs, smartphones, TVs, e-readers, and tablets, and a fair number of standalone smart devices like the Amazon Echo (though the Internet of Things will integrate interestingly with it). It has been possible to replace screens with glasses for a long time (devices between $100 and $200 abound) but, till now, there has been little point apart from privacy, curiosity, and geek cred. They have offered less convenience than cellphones, and a lot of (literal and figurative) headaches. They are either tethered or have tiny battery lives, they are uncomfortable, they are fragile, they are awkward to use, high-resolution versions cost a lot, most are as isolating as VR and, perhaps most of all, as long as they remain a tiny niche product there are serious social obstacles to wearing HUDs in public. That is all about to change. They are about to become mainstream.

The fact that AR can be done right now with no more than a cellphone is cool, and has been for a few years, but it will get much cooler as the hardware for HUDs becomes better, more widespread and, most importantly, as more people share the augmented space. The scale is what makes the Pokémon GO phenomenon so significant, even though it is currently mostly a cellphone and GO Plus thing. It matters because, apart from being really interesting in its own right, before long enough people will want hardware to match, and that will make it worth going into serious mass production. At that point it gets really interesting, because lots of people will be wearing HUD AR devices.

Google’s large-scale Glass experiment was getting there (and it’s not over yet), but it was mostly viewed with mild curiosity and a lot of suspicion. Why would any normal person want to look like the Borg? What were the wearers doing with those very visible cameras? What were they hiding? Why bother? The tiny minority that wore them were outsiders, weirdos, geeks, a little creepy. But things have moved on: the use cases have suddenly become very compelling, enough (I think) to overcome the stigma. The potentially interesting Microsoft HoloLens, the incredibly interesting Magic Leap, and the rest (Meta 1, Recon Jet, Moverio, etc, etc) that are queueing up on the sidelines are nearly here. Apparently, Pokémon GO with a HoloLens might be quite special. Apple’s rumoured foray into the field might be very interesting. Samsung’s contact-lens camera system is still a twinkle in its eye, but it and many things even more amazing are coming soon. Further off, as nanotech develops and direct neural interfaces become available, the possibilities are (hopefully not literally) mind blowing.

What this all adds up to is that, as more of us start to use such devices, the computer as an object, even in its ubiquitous small smartphone or smartwatch form, will increasingly disappear. Tools like wearables and smart digital assistants have barely even arrived yet, but their end is palpably nigh. Why bother with a smart watch when you can project anything you wish on your wrist (or anywhere else, for that matter)? Why bother with having to find a device when you are wearing any device you can imagine? Why take out a phone to look for Pokémon? Why look at a screen when you can wear a dozen of them, anywhere, any size, adopting any posture you like? It will be great for ergonomics. This is pretty disruptive: whole industries are going to shrink, perhaps even disappear.

The end of the computer

Futurologists and sci-fi authors once imagined a future filled with screens, computers, smartphones and visible tech. That’s not how it will be at all. Sure, old technologies never die so these separate boxes won’t disappear altogether, and there’s still plenty of time left for innovation in such things, and vast profits still to be made in them as this revolution begins. There may be a decade or two of growth left for these endangered technologies. But the mainstream future of digital technologies is much more human, much more connected, much more social, much more embedded, and much less visible. The future is AR. The whirring big boxes and things with flashing lights that eat our space, our environment, our attention and our lives will, if they exist at all, be hidden in well-managed farms of servers, or in cupboards and walls. This will greatly reduce our environmental impact, the mountains of waste, the ugliness of our built spaces. I, for one, will be glad to see the disappearance of TV sets, of mountains of wires on my desk, of the stacks of tablets, cellphones, robots, PCs, and e-readers that litter my desktop, cupboards and basement. OK, I’m a bit geeky. But most of our homes and workplaces are shrines to screens and wiring. It’s ugly, it’s incredibly wasteful, it’s inhibiting. Though smartness will be embedded everywhere, in our clothing, our furniture, our buildings, our food, the visible interface will appear on displays that play only in or on our heads, and in or on the heads of those around us, in one massive shared hyperreality, a blend of physical and virtual that we all participate in, perhaps sharing the same virtual space, perhaps a different one, perhaps one physical space, perhaps more. At the start, we will wear geeky goggles, visors and visible high tech, but this will just be an intermediate phase. Pretty soon they will start to look cool, as designers with less of a Star Trek mentality step in. Before long, they will be no weirder than ordinary glasses. Later, they will almost vanish. The end point is virtual invisibility, and virtual ubiquity.

AR at scale

Pokémon GO has barely scratched the surface of this adjacent possible, but it has given us our first tantalizing glimpses of the unimaginably vast realms of potential that emerge once enough people hook into the digitally augmented world and start doing things together in it. To take one of the most boringly familiar examples, will we still visit cinemas when we all have cinema-like fidelity in devices on or in our heads? Maybe. There’s a great deal to be said for doing things together in a physical space, as Pokémon GO shows us with a vengeance. But, though we might be looking at the ‘same’ screen, in the same place, there will be no need to project it. Anywhere can become a cinema just as anywhere can be a home for a Pokémon. Anywhere can become an office. Any space can turn into what we want it to be. My office, as I type this, is my boat. This is cool, but I am isolated from my co-workers and students, channeling all communication with them through the confined boundaries of a screen. AR can remove those boundaries, if I wish. I could be sitting here with friends and colleagues, each in their own spaces or together, ‘sitting’ in the cockpit with me or bobbing on the water. I could be teaching, with students seeing what I see, following my every move, and vice versa. When my outboard motor needs fixing (it often does) I could see it with a schematic overlay, or receive direct instruction from a skilled mechanic: the opportunities for the service industry, from plumbing to university professoring, are huge. I could replay events where they happened, including historical events that I was not there to see, things that never happened, things that could happen in the future, what-if scenarios, things that are microscopically small, things that are unimaginably huge, and so on. This is a pretty old idea with many mature existing implementations (e.g. here, here, here and here). Till now they have been isolated phenomena, and most are a bit clunky. As this is accepted as the mainstream, it will cascade into everything. Forget rose-tinted spectacles: the world can be whatever I want it to become. In fact, this could be literally true, not just virtually: I could draw objects in the space they will eventually occupy (such virtual sculpture apps already exist for VR), then 3D print them.

Just think of the possibilities for existing media. Right now I find it useful to work on multiple monitors because the boundaries of one screen are insufficient to keep everything where I need it at once. With AR, I can have dozens of them or (much more interestingly) forget the ‘screen’ metaphor altogether and work as fluidly as I like with text, video, audio and more, all the while as aware of the rest of my environment, and the people in it, as I wish. Computers, including cellphones, isolate: they draw us into them, draw our gaze away from the world around us. AR integrates with that world, and integrates us with it, enhancing both physical and virtual space, enhancing us. We are and have only ever been intelligent as a collective, our intelligence embedded in one another and in the technologies we share. Suddenly, so much more of that can be instantly available to us. This is seriously social technology, albeit with some intriguing and messy interpersonal problems when each of us may be engaged in a private virtual world while outwardly inhabiting another. There are countless ways this could (and will) play out badly.

Or what about a really old technology? I now have hundreds of e-books that sit forgotten, imprisoned inside that little screen, viewable a page at a time or listed in chunks that fit the dimensions of the device. Bookshelves – constant reminders of what we have read and augmenters of our intellects – remain one of the major advantages of p-books, as does their physicality that reveals context, not just text. With AR, I will be able to see my whole library (and other libraries and bookstores, if I wish), sort it instantly, filter it, seek ideas and phrases, flick through books as though they were physical objects, or view them as a scroll, or one large sheet of virtual paper, or countless other visualizations that massively surpass physical books as media that contribute to my understanding of the text. Forget large format books for images: they can be 20 metres tall if we want them to be. I’ll be able to fling pages, passages, etc onto the wall or leave them hovering in the air, shuffle them, rearrange them, connect them. I’ll be able to make them disappear all at once, and reappear in the same form when I need them again. The limits are those of the imagination, not the boundaries of physical space. We will no doubt start by skeuomorphically incorporating what we already know but, as the adjacent possibles unfold, there will be no end to the creative potential to go far, far beyond that. This is one of the most boring uses of AR I can think of, but it is still beyond magical.

We will, surprisingly soon, continuously inhabit multiple worlds – those of others, those others invent, those that are abstract, those that blend media, those that change what we perceive, those that describe it, those that explain it, those that enhance it, those we assemble or create for ourselves. We will see the world through one another’s eyes, see into one another’s imaginations, engage in multiple overlapping spaces that are part real, part illusion, and we will do so with others, collocated and remote, seamlessly, continuously. Our devices will decorate our walls, analyze our diets, check our health. Our devices won’t forget things, will remember faces, birthdays, life events, connections. We may all have eidetic memories, if that is what we want. While cellphones make our lives more dangerous, these devices will make them safer, warning us when we are about to step into the path of an oncoming truck as we monitor our messages and news. As smartness is embedded in the objects around us, our HUDs will interact with them: no more lost shirts, no guessing the temperature of our roasts, no forgetting to turn off lights. We will gain new senses – seeing in the dark, even through walls, will become commonplace. We will, perhaps, sense small fluctuations in skin temperature to help us better understand what people are feeling. Those of us with visual impairment (most of us) will be able to zoom in, magnify, have text read to us, or delve deeper through QR codes or their successors. Much of what we need to know now will be unnecessary (though we will still enjoy discovering it, as much as we enjoy discovering monsters) but our ability to connect it will grow exponentially. We won’t be taking devices out of our pockets to do that, nor sitting in front of brightly lit screens.

We will very likely become very dependent on these ubiquitous, barely visible devices, these prostheses for the mind. We may rarely take them off. Not all of this will be good. Not by a mile. When technologies change us, as they tend to do, many of those changes will be negative. When they change us a lot, there will be a lot of negatives, lots of new problems they create as well as solve, lots of aggregations and integrations that will cause unforeseen woes. This video at vimeo.com/166807261 shows a nightmare vision of what this might be like, but it doesn’t need to be a nightmare: we will need to learn to tame it, to control it, to use it wisely. Ad blockers will work in this space too.

What comes next

AR has been in the offing for some time, but mainly as futuristic research in labs, half-baked experimental products like Google Glass, or ‘hey wow’ technologies like Layar, Aurasma, Google Translate, etc. Google, Facebook, Apple, Microsoft, Sony, Amazon, all the big players, as well as many thousands of startups, are already scrabbling frantically to get into this space, and to find ways to use what they already have to better effect. I suspect they are looking at the Pokémon GO phenomenon with a mix of awe, respect, and avarice (and, in Google’s case, perhaps a hint of regret). Formerly niche products like Google Tango or Structure Sensor are going to find themselves a lot more in the spotlight as the value of being able to accurately map physical space around us becomes ever greater. Smarter ways of interacting, like this at www.youtube.com/watch?v=UA_HZVmmY84, will sprout like weeds.

People are going to pay much more attention to existing tools and wonder how they can become more social, more integrated, more fluid, less clunky. We are going to need standards: isolated apps are quite cool, but the big possibilities occur when we are able to mash them up, integrate them, allow them to share space with one another. It would be really useful if there were an equivalent of the World Wide Web for the augmented world: a means of addressing not just coordinates but surfaces, objects, products, trees, buildings, etc, that any application could hook into, that is distributed and open, not held by those that control the APIs. We need spatial and categorical hyperlinks between things that exist in physical and virtual space. I fear that, instead, we may see more of the evils of closed APIs controlled by organizations like Facebook, Google, Apple, Microsoft, Amazon, and their kin. Hopefully they will realise that they will get bigger benefits from expanding the ecosystem (I think Google might get this first) but there is a good chance that short-termist greed will get the upper hand instead. The web had virgin, non-commercial ground in which to flourish before the bad people got there. I am not sure that such a space exists any more, and that’s sad. Perhaps HTML 6 will extend into physical space. That might work. Every space, every product, every plant, every animal, every person, addressable via a URL.
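To make the idea of an open addressing layer a little more concrete, here is a purely speculative sketch in Python of what a record in such a registry might look like, together with the kind of proximity query any AR app could run against it. Every name in it – the ‘phys://’ scheme, the fields, the functions – is invented for illustration; no such standard exists.

```python
import math
from dataclasses import dataclass, field

@dataclass
class PhysicalAnchor:
    """One record in a hypothetical open registry of addressable physical things.
    All field names and the 'phys://' scheme are invented for illustration only."""
    uri: str                                    # e.g. "phys://example/park/oak-17"
    lat: float
    lon: float
    kind: str = "object"                        # building, surface, product, plant, person...
    links: list = field(default_factory=list)   # spatial/categorical hyperlinks to other URIs

def nearby(registry, lat, lon, radius_m=50.0):
    """The sort of open query any AR app might run: what is addressable around me?
    Uses a crude flat-earth distance approximation, adequate over tens of metres."""
    def distance_m(a):
        dx = (a.lon - lon) * 111_320 * math.cos(math.radians(lat))
        dy = (a.lat - lat) * 111_320
        return math.hypot(dx, dy)
    return [a for a in registry if distance_m(a) <= radius_m]

registry = [
    PhysicalAnchor("phys://example/park/oak-17", 53.5201, -113.5224, kind="plant",
                   links=["phys://example/park", "https://en.wikipedia.org/wiki/Oak"]),
    PhysicalAnchor("phys://example/park/bench-2", 53.5202, -113.5223, kind="object"),
]
print([a.uri for a in nearby(registry, 53.5201, -113.5224)])
```

The point of the sketch is simply that the registry is open and queryable by anything, rather than locked behind one company’s API.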

There will be ever more innovations in batteries and other power and power-saving technologies, display technologies and usability: the abysmal battery life of current devices, in particular, will soon be very irritating. There will likely be a lot of turf wars as different cloud services compete for user populations, different standards and APIs compete for apps, and different devices compete for customers. There will be many acquisitions. Privacy, already a major issue, will take a pounding, as new ways of invading it proliferate. What happens when Google sees all that you see? Measures your room with millimetre accuracy? Tracks every moment of your waking life? What happens when security services tap in? Or hackers? Or advertisers? There will be pushback and resistance, much of it justified. New forms of DRM will struggle to contain what needs to be free: ownership of digital objects will be hotly contested. New business models (personalized posters anyone? in situ personal assistants? digital objects for the home? mashup museums and galleries?) will enrage us, inform us, amuse us, enthrall us. Facebook, temporarily wrong-footed in its ill-considered efforts to promote Oculus, will come back with a vengeance and find countless new ways to exploit us (if you think it is bad now, imagine what it will be like when it tracks our real-world social networks). The owners of the maps and the mapped data will become rich: Niantic is right now sitting on a diamond as big as the Ritz. We must be prepared for new forms of commerce, new sources of income, new ways of learning, new ways of understanding, new ways of communicating, new notions of knowledge, new tools, new standards, new paradigms, new institutions, new major players, new forms of exploitation, new crimes, new intrusions, new dangers, new social problems we can so far barely dream of. It will certainly take years, not months, for all of this to happen, though it is worth remembering that network effects kick in fast: Pokémon GO took only a few days. It is coming, significant parts of it are already here, and we need to be preparing for it now. Though the seeds have been germinating for many years, they have germinated in relatively isolated pockets. This simple game has opened up the whole ecosystem.

Pokéducation

I guess, being an edtech blogger, I should say a bit more about the effects of Pokémon GO on education, but that’s mostly for another post, and much of it is implied in what I have written so far. There have been plenty of uses of AR in conventional education so far, and there will no doubt be thousands of ways that people use Pokémon GO in their teaching (some great adjacent possibles in locative, gamified learning), as well as ways to use the countless mutated purpose-built forms that will appear any moment now, and that will be fun, though not earth shattering. I have, for instance, been struggling to find useful ways to use geocaching in my teaching (of computing etc) for over a decade, but it was always too complex to manage, given that my students are mostly pretty sparsely spread across the globe: basically, I don’t have the resources to populate enough geocaches. The kind of mega-scale mapping that Niantic has successfully accomplished could now make this possible, if they open up the ecosystem. However, most uses of AR will, at first, simply extend the status quo, letting us do better what we have always done, much of which we only needed to do because of physics. The real disruption, the result of the fact we can overcome physics, will take a while longer, and will depend on the ubiquity of more integrated, seamlessly networked forms of AR. When the environment is smart, the kind of intelligence we need to make use of it is quite different from most of what our educational systems are geared up to provide. When connection between the virtual and physical is ubiquitous, fluid and high fidelity, we don’t need to limit ourselves to conventional boundaries of classes, courses, subjects and schools. We don’t need to learn today what we will only use in 20 years’ time. We can do it now. Networked computers made this possible. AR makes it inevitable. I will have more to say about this.

This is going to change things. Lots of things.


Curiosity Is Not Intrinsically Good

Interesting reflections in Scientific American on morbid curiosity – that we are driven by our curiosity, sometimes even when we actually know that there is a strong likelihood it will hurt us. In the article, as the title implies, this is portrayed as a bad thing. I disagree.

“The drive to discover is deeply ingrained in humans, on par with the basic drives for food or sex, says Christopher Hsee of the University of Chicago, a co-author of the paper. Curiosity is often considered a good instinct—it can lead to new scientific advances, for instance—but sometimes such inquiry can backfire. “The insight that curiosity can drive you to do self-destructive things is a profound one,” says George Loewenstein, a professor of economics and psychology at Carnegie Mellon University who has pioneered the scientific study of curiosity.”

[Image: Bub in a box]

This is not exactly a novel insight, nor a profound one: we even have a popular proverb for it that I mention to my cats on an almost daily basis. They don’t listen.

There is a strong relationship between curiosity and the desire for competence: a need to know how things work, how to do something we cannot yet do, why things are the way they are, where our limits lie, how to become more capable of acting in the world. From an evolutionary perspective we are curious with a purpose. It allows us to make effective use of our environment, to become competent within it. This is really good for survival so, of course, it is selected for. That it sometimes drives us to do things that harm us is actually a very positive feature, as long as it is balanced with a sufficient level of caution and the harm it causes is not too great. It helps us to know what to avoid, as well as what is useful to us. It also helps us to be more adaptable to bad things that we cannot avoid. It makes us more flexible, and lets us both know and extend our limits.

The first experiment described here involved people playing with pens even knowing that some were novelty items that would give them an electric shock. I’m not sure why the researchers mixed in some harmless pens, because, even when pain is an absolute certainty, curiosity can drive us to experience it. I have long used electrostatic zappers that are designed to alleviate the itch in mosquito bites by administering a sharp and slightly painful shock to the skin. I have yet to meet a single child, and have met very few adults, who did not want to try it out on their own skin, regardless of whether they had any bites, in the full and certain knowledge that it would hurt. This is described in the article as self-destructive curiosity but I don’t think that’s right at all. If subjects had been convincingly warned that some pens would kill or maim them, then I am quite certain that very few would have played with them (some might, of course – evolution thrives on variation and, in some environments, high-risk strategies might pay off). But being curious about what kind of pain it might cause is really just a way of discovering or achieving competence, of discovering how we cope with this kind of shock, of testing hypotheses about ourselves and the environment, as well as finding out whether such joke pens actually work as advertised. This is potentially useful information: it will make you less likely to be a victim of a practical joke, or perhaps inspire you to perform one more effectively. Either way, it’s probably not a big thing in the grand scheme of things but, then again, very few learning experiences are. The value is more about how we integrate and connect such experiences.

The article describes another experiment in which participants were encouraged to predict how they would feel after viewing an unpleasant image; those so primed were less likely to choose to see it. Again, this makes sense in the light of what we already know. We are curious with a purpose – to learn – so, if we reflect a bit on what we have already learned, then it might dull our curiosity to experience something bad again. That’s potentially useful. I’m not sure that it is always a good thing, though. I happen to like, say, some horror movies that disgust me, or comedies that rely on discomfort for their humour. In fact, the anticipation of fear or disgust is often one of the main things that drives their plots and keeps my eyes glued to them. If the zombie apocalypse comes, I will be totally prepared. It also prepares me better for things that are going to really upset me. Likewise for funfair rides, sailing on a breezy day, exercising until it hurts, eating hot chili, or struggling with difficult deadlines.

So while, yes, we absolutely should learn from experience, we also need to remember that it can lead us into fixed ways of thinking that, when conditions change, leave us less adaptable. There is an ever-shifting balance between fear and curiosity that we need to embrace, perhaps especially when curiosity leads to the likelihood of something unpleasant (though not too unpleasant) happening. And, even when the danger is great, there are also risks that are sometimes worth taking. ‘What if..?’ is one of the most powerful phrases in any language.

Address of the bookmark: http://www.scientificamerican.com/article/curiosity-is-not-intrinsically-good/

Cocktails and educational research

A lot of progress has been made in medicine in recent years through the application of cocktails of drugs. Those used to combat AIDS are perhaps the best known, but there are many other applications of the technique to everything from lung cancer to Hodgkin’s lymphoma. The logic is simple. Different drugs attack different vulnerabilities in the pathogens, cancers, etc, that they seek to kill. Though evolution means that some bacteria, viruses or cancers are likely to be adapted to escape one attack, the more different attacks you make, the less likely it will be that any will survive.

[Image: Simulated learning]

Unfortunately, combinatorial complexity means this is not simply a question of throwing a bunch of the best drugs of each type together and gaining their benefits additively. I have recently been reading John H. Miller’s ‘A crude look at the whole: the science of complex systems in business, life and society’, which is, so far, excellent, and which addresses this and many other problems in complexity science. Miller uses the nice analogy of fashion to help explain the problem: if you simply choose the most fashionable belt, the trendiest shoes, the latest greatest shirt, the snappiest hat, etc, the chances of walking out with the most fashionable outfit by combining them together are virtually zero. In fact, there’s a very strong chance that you will wind up looking pretty awful. It is not easily susceptible to reductive science because the variables all affect one another deeply. If your shirt doesn’t go with your shoes, it doesn’t matter how good either is separately. The same is true of drugs. You can’t simply pick those that are best on their own without understanding how they all work together. Not only may they not additively combine, they may often have highly negative effects, or may prevent one another being effective, or may behave differently in a different sequence, or in different relative concentrations. To make matters worse, side effects multiply as well as therapeutic benefits so, at the very least, you want to aim for the smallest number of compounds in the cocktail that you can get away with. Even were the effects of combining drugs positive, it would be premature to believe that it is the best possible solution unless you have actually tried them all. And therein lies the rub, because there are really a great many ways to combine them.

Miller and colleagues have been using the ideas behind simulated annealing to create faster, better ways to discover working cocktails of drugs. They started with 19 drugs which, a small bit of math shows, could be combined in 2 to the power of 19 different ways – about half a million possible combinations (not counting sequencing or relative strength issues). As only 20 such combinations could be tested each week, the chances of finding an effective, let alone the best, combination were slim within any reasonable timeframe. Simplifying a bit, rather than attempting to cover the entire range of possibilities, their approach finds a local optimum within one locale by picking a point and iterating variations from there until the best combination is found for that patch of the fitness landscape. It then checks another locale and repeats the process, iterating until they have covered a large enough portion of the fitness landscape to be confident of having found at least a good solution: they have at least several peaks to compare. This also lets them follow up on hunches and use educated guesses to speed up the search. It seems pretty effective, at least when compared with alternatives that attempt a theory-driven intentional design (too many non-independent variables), and is certainly vastly superior to methodically trying every alternative, insofar as that is even possible within acceptable timescales.
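For a flavour of how such a search works, here is a minimal sketch in Python of annealing-style search with random restarts over binary drug combinations. The fitness function is a made-up toy standing in for the weekly lab assays, and none of this is Miller’s actual code; it just illustrates the general idea of flipping one drug at a time, occasionally accepting a worse cocktail, and restarting from a different locale.

```python
import math
import random

N_DRUGS = 19                      # as in the study: 2**19 = 524,288 possible cocktails

def fitness(cocktail):
    """Stand-in for the weekly lab assay (the expensive step: ~20 tests per week).
    A deterministic toy landscape; a real assay result would replace this."""
    rng = random.Random(hash(cocktail))
    return rng.random() * sum(cocktail) - 0.04 * sum(cocktail) ** 2

def anneal(start, steps=500, temp=1.0, cooling=0.99):
    """Explore one locale: flip one drug at a time, sometimes accepting a worse
    cocktail (more often while the temperature is high) to escape anthills."""
    current, current_score = start, fitness(start)
    best, best_score = current, current_score
    for _ in range(steps):
        i = random.randrange(N_DRUGS)
        trial = list(current)
        trial[i] = 1 - trial[i]                   # add or remove drug i
        trial = tuple(trial)
        trial_score = fitness(trial)
        delta = trial_score - current_score
        if delta > 0 or random.random() < math.exp(delta / temp):
            current, current_score = trial, trial_score
            if current_score > best_score:
                best, best_score = current, current_score
        temp *= cooling                           # cool down: fewer downhill moves later
    return best, best_score

def search(restarts=10):
    """Repeat from several random starting cocktails and keep the best peak found."""
    best, best_score = None, float("-inf")
    for _ in range(restarts):
        start = tuple(random.randint(0, 1) for _ in range(N_DRUGS))
        peak, score = anneal(start)
        if score > best_score:
            best, best_score = peak, score
    return best, best_score

if __name__ == "__main__":
    cocktail, score = search()
    print("drugs included:", [i for i, used in enumerate(cocktail) if used])
```

Each restart corresponds to checking a new locale; the occasional acceptance of a worse cocktail is what distinguishes this from pure hill climbing, and is the point taken up in the next paragraph.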

The central trick is to deliberately go downhill on the fitness landscape, rather than following an uphill route of continuous improvement all the time, which may simply get you to the top of an anthill rather than the peak of Everest. Miller very effectively shows that this is the fundamental error committed by followers of the Six-Sigma approach to management, an iterative method of process improvement originally invented to reduce errors in the manufacturing process: it may work well in a manufacturing context with a small number of variables to play with in a fixed and well-known landscape, but it is much worse than useless when applied in a creative industry like, say, education, because the chances that we are climbing a mountain and not an anthill are slim to negligible. In fact, the same is true even in manufacturing: if you are just making something inherently weak as good as it can be, it is still weak. There are lessons here for those that work hard to make our educational systems work better. For instance, attempts to make examination processes more reliable are doomed to fail because it’s exams that are the problem, not the processes used to run them. As I finish this while listening to a talk on learning analytics, I see dozens of such examples: most of the analytics tools described are designed to make the various parts of the educational machine work ‘better’, i.e. (for the most part) to help ensure that students’ behaviour complies with teachers’ intent. Of course, the only reason such compliance was ever needed was for efficient use of teaching resources, not because it is good for learning. Anthills.

This way of thinking seems to me to have potentially interesting applications in educational research. We who work in the area are faced with an irreducibly large number of recombinable and mutually affecting variables that make any ethical attempt to do experimental research on effectiveness (however we choose to measure that – so many anthills here) impossible. It doesn’t stop a lot of people doing it, and telling us about p-values that prove their point in more or less scrupulous studies, but they are – not to put too fine a point on it – almost always completely pointless. At best, they might be telling us something useful about a single, non-replicable anthill, from which we might draw a lesson or two for our own context. But even a single omitted word in a lecture, a small change in inflection, let alone an impossibly vast range of design, contextual, historical and human factors, can have a substantial effect on learning outcomes and effectiveness for any given individual at any given time. We are always dealing with a lot more than 2 to the power of 19 possible mutually interacting combinations in real educational contexts. For even the simplest of research designs in a realistic educational context, the number of possible combinations of relevant variables is more likely closer to 2 to the power of 100 (in base 10 that’s 1,267,650,600,228,229,401,496,703,205,376). To make matters worse, the effects we are looking for may sometimes not be apparent for decades (having recombined and interacted with countless others along the way) and, for anything beyond trivial reductive experiments that would tell us nothing really useful, could seldom be done at a rate of more than a handful per semester, let alone 20 per week. This is a very good reason to do a lot more qualitative research, seeking meanings, connections, values and stories rather than trying to prove our approaches using experimental results. Education is more comparable to psychology than medicine and suffers the same central problem, that the general does not transfer to the specific, as well as a whole bunch of related problems that Smedslund recently coherently summarized. The article is paywalled, but Smedslund’s abstract states his main points succinctly:

“The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project.”

You could simply substitute ‘education’ for ‘psychology’ in this, and it would read the same. But it gets worse, because education is as much about technology and design as it is about states of mind and behaviour, so it is orders of magnitude more complex than psychology. The potential for invention of new ways of teaching and new states of learning is essentially infinite. Reductive science thus has a very limited role in educational research, at least as it has hitherto been done.

But what if we took the lessons of simulated annealing to heart? I recently bookmarked an approach to more reliable research suggested by the Christensen Institute that might provide a relevant methodology. The idea behind this is (again, simplifying a bit) to do the experimental stuff, then to sweep the normal results to one side and concentrate on the outliers, performing iterations of conjectures and experiments on an ever more diverse and precise range of samples until a richer, fuller picture results. Although it would be painstaking and longwinded, it is a good idea. But one cycle of this is a bit like a single iteration of Miller’s simulated annealing approach, a means to reach the top of one peak in the fitness landscape, that may still be a low-lying peak. However, if, having done that, we jumbled up the variables again and repeated it starting in a different place, we might stand a chance of climbing some higher anthills and, perhaps, over time we might even hit a mountain and begin to have something that looks like a true science of education, in which we might make some reasonable predictions that do not rely on vague generalizations. It would either take a terribly long time (which itself might preclude it because, by the time we had finished researching, the discipline would have moved somewhere else) or would hit some notable ethical boundaries (you can’t deliberately mis-teach someone), but it seems more plausible than most existing techniques, if a reductive science of education is what we seek.

To be frank, I am not convinced it is worth the trouble. It seems to me that education is far closer as a discipline to art and design than it is to psychology, let alone to physics. Sure, there is a lot of important and useful stuff to be learned about how we learn: no doubt about that at all, and a simulated annealing approach might speed up that kind of research. Painters need to know what paints do too. But from there to prescribing how we should therefore teach spans a big chasm that reductive science cannot, in principle or practice, cross. This doesn’t mean that we cannot know anything: it just means it’s a different kind of knowledge than reductive science can provide. We are dealing with emergent phenomena in complex systems that are ontologically and epistemologically different from the parts of which they consist. So, yes, knowledge of the parts is valuable, but we can no more predict how best to teach or learn from those parts than we can predict the shape and function of the heart from knowledge of cellular organelles in its constituent cells. But knowledge of the cocktails that result – that might be useful.


Oh yes, that's why I left

[Image: St George Cross (Wikipedia)]

England is a weird, sad, angry little country, where there is now unequivocal evidence that over half the population – mainly the older ones – believe that experts know nothing, and that foreigners (as well as millions of people born there with darker than average skins) are evil. England is a place filled with drunkenness and random violence, where it’s not safe to pass a crowd of teenagers – let alone a crowd of football supporters – on a street corner, where you cannot hang Xmas decorations outside for fear of losing them, where your class still defines you forever, where whinging is a way of life, where kindness is viewed with suspicion, where barbed wire fences protect schools from outsiders (or vice versa – hard to fathom), where fuckin’ is a punctuation mark to underline what follows, not an independent word. It’s a nation filled with fierce and inhospitable people, as Horace once said, and it always has been. For all the people and places that I love and miss there, for all its very many good people and slowly vanishing places that are not at all like that, for all its dark and delicious humour, its eccentricity, its diversity, its cheeky irreverence, its feistiness, its relentless creativity, its excellent beer, its pork pies and its pickled onions, all of which I miss, that’s why I was glad to leave it.

It saddens and maddens me to see the country of my birth killing or, at least, seriously maiming itself in such a spectacularly and wilfully ignorant way, taking the United Kingdom, and possibly even the EU itself, with it, as well as causing injury to much of the world, including Canada. England is a country-sized suicide bomber. Hopefully this mob insanity will eventually be a catalyst for positive change, if not in England or Wales then at least elsewhere. Until today I opposed Scottish independence, because nationalism is almost uniformly awful and the last thing we need in the world is more separatism, but it is far better to be part of something big and expansive like the EU than an unwilling partner in something small in soul and mind like the UK. Maybe Ireland will unify and come together in Europe. Perhaps Gibraltar too. Maybe Europe, largely freed of the burden of supporting and catering for the small-minded needs of my cantankerous homeland, will rise to new heights. I hope so, but it’s a crying shame that England won’t be a part of that.

I am proud, though, of my home city, Brighton, the place where English people who don’t want to live in England live. About 70% of Brightonians voted to stay in the EU. Today I am proudly Brightonian, proudly European, but ashamed to be English. 


Be less pigeon

I love the slogan that Audrey Watters has chosen for her new branding:

Be less pigeon

As she puts it…

“I wanted my work to both highlight the longstanding relationship between behaviorism and testing – built into the ideology and the infrastructure since ed-tech’s origins in the early twentieth century – and to remind people that there are also alternatives to treating students like animals to be trained.”

Absolutely.

Address of the bookmark: http://hackeducation.com/2016/06/08/pigeons

Can The Sims Show Us That We’re Inherently Good or Evil?

As it turns out, yes.

[Image: temptations to be unkind]

The good news is that we are intuitively altruistic. This doesn’t necessarily mean we are born that way. This is probably learned behaviour that co-evolves with that of those around us. The hypothesis on which this research is based (with good grounding) is that we learn through repeated interactions to behave kindly to others. At least, by far the majority of us. A few jerks (as the researchers discovered) are not intuitively generous and everyone behaves selfishly or unkindly sometimes. This is mainly because there are such jerks around, though sometimes because the perceived rewards for being a jerk might outweigh the costs. Indeed, in almost all moral decisions, we tend to weigh benefits against harm, and it is virtually impossible to do anything at all without at least some harm being caused in some way, so the nicest of us are jerks to at least some people. It might upset the person who gave you a beautiful scarf that you wrecked it while saving a drowning child, for instance. Donating to a charity might reduce the motivation of governments to intervene in humanitarian crises. Letting a car change lanes in front of you slows everyone in the queue behind you. Very many acts of kindness have costs to others. But, on the whole, we tend towards kindness, if only as an attitude. There is plentiful empirical evidence that this is true, some of which is referred to in the article. The researchers sought an explanation at a systemic, evolutionary level.

The researchers developed a simulation of a Prisoners’ Dilemma scenario. Traditional variants on the game make use of rational agents that weigh up defection and cooperation over time in deciding whether or not to defect, using a variety of different rules (the most effective of which is usually the simplest, ‘tit-for-tat’). Their twist was to allow agents to behave ‘intuitively’ under some circumstances. Some agents were intuitively selfish, some not. In predominantly multiple-round games, “the winning agents defaulted to cooperating but deliberated if the price was right and switched to betrayal if they found they were in a one-shot game.” In predominantly one-shot games – not the norm in human societies – the always-cooperative agents died out completely. Selfish agents that deliberated did not do well in any scenario. As ever, ubiquitous selfish behaviour in a many-round game means that everyone loses, especially the selfish players. So wary cooperation is a winning strategy when most other people are kind and, since it benefits everyone, it is a winning strategy for societies, favoured by evolution. The explanation, they suggest, is that:

“when your default is to betray, the benefits of deliberating—seeing a chance to cooperate—are uncertain, depending on what your partner does. With each partner questioning the other, and each partner factoring in the partner’s questioning of oneself, the suspicion compounds until there’s zero perceived benefit to deliberating. If your default is to cooperate, however, the benefits of deliberating—occasionally acting selfishly—accrue no matter what your partner does, and therefore deliberation makes more sense.”

This accords with our natural inclinations. As Rand, one of the researchers, puts it: “It feels good to be nice—unless the other person is a jerk. And then it feels good to be mean.” If there are no rewards for being a jerk under any circumstances, or the rewards for being kind are greater, then perhaps we can all learn to be a bit nicer.
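The underlying dynamic can be illustrated with a drastically simplified toy simulation (my own sketch, not the researchers’ model, which evolves its agents and their deliberation thresholds): three fixed agent types play a mix of one-shot and repeated Prisoner’s Dilemma games, and their average payoffs are compared.

```python
import random
from collections import defaultdict

# Payoffs for the row player: temptation > reward > punishment > sucker
R, S, T, P = 3, 0, 5, 1
ROUNDS = 10          # length of a repeated game
DELIB_COST = 0.5     # price of pausing to work out which kind of game you are in
P_REPEATED = 0.9     # most encounters in a community involve people you will meet again

TYPES = ["always_cooperate", "intuitive_cooperator", "intuitive_defector"]

def payoff(me, other):
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(me, other)]

def one_shot(agent):
    """Move and deliberation cost in a one-shot game. The intuitive cooperator
    pays to deliberate and defects once it sees there is no future to protect."""
    if agent == "always_cooperate":
        return "C", 0.0
    if agent == "intuitive_cooperator":
        return "D", DELIB_COST
    return "D", 0.0

def repeated(a, b):
    """Repeated game in which both players reciprocate (tit-for-tat),
    seeded by their intuitive default move."""
    move_a = "D" if a == "intuitive_defector" else "C"
    move_b = "D" if b == "intuitive_defector" else "C"
    score_a = score_b = 0.0
    for _ in range(ROUNDS):
        score_a += payoff(move_a, move_b)
        score_b += payoff(move_b, move_a)
        move_a, move_b = move_b, move_a           # each copies the other's last move
    return score_a, score_b

def simulate(trials=20000):
    totals, counts = defaultdict(float), defaultdict(int)
    for _ in range(trials):
        a, b = random.choice(TYPES), random.choice(TYPES)
        if random.random() < P_REPEATED:
            sa, sb = repeated(a, b)
        else:
            ma, ca = one_shot(a)
            mb, cb = one_shot(b)
            sa, sb = payoff(ma, mb) - ca, payoff(mb, ma) - cb
        totals[a] += sa
        counts[a] += 1
        totals[b] += sb
        counts[b] += 1
    return {t: round(totals[t] / counts[t], 2) for t in TYPES}

print(simulate())
```

With most encounters repeated, the intuitive cooperators who pay to deliberate in one-shot games come out slightly ahead of unconditional cooperators, while intuitive defectors trail both: the same qualitative pattern the researchers report.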

The really good news is that, because such behaviour is learned, selfish behaviour can be modified and intuitive responses can change. In experiments, the researchers have demonstrated that this can occur within less than half an hour, albeit in a very limited and artificial single context. The researchers suggest that, in situations that reward back-stabbing and ladder-climbing (the norm in corporate culture), all it should take is a little top-down intervention such as bonuses and recognition for helpful behaviour in order to set a cultural change in motion that will ultimately become self-sustaining. I’m not totally convinced by that – extrinsic reward does not make lessons stick and the learning is lost the moment the reward is taken away. However, because cooperation is inherently better for everyone than selfishness, perhaps those who are driven by such things might realize that the extrinsic rewards they crave are far better achieved through altruism than through selfishness, as long as most people are acting that way most of the time, and this might be a way to help create such a culture. Getting rid of divisive and counter-productive extrinsic motivation, such as performance-related pay, might be a better (or at least complementary) long-term approach.

Address of the bookmark: http://nautil.us/issue/37/currents/selfishness-is-learned

This is the Teenage Brain on Social Media

An article in Neuroscience News about a recent (paywalled – grr) brain-scan study of teenagers, predictably finding that having your photos liked on social media sparks off a lot of brain activity, notably in areas associated with reward, as well as social activity and visual attention. So far, so-so, and a bit odd that this is what Neuroscience News chose to focus on, because that’s only a small subsection of the study and by far the least interesting part. What’s really interesting to me about the study is that the researchers mainly investigated the effects of existing likes (or, as they put it, ‘quantifiable social endorsements’) on whether teens liked a photo, and scanned their brains while doing so. As countless other studies (including mine) have suggested, not just for teens, the effects were significant. As many studies have previously shown, photos endorsed by peers – even strangers – are a great deal more likely to be liked, regardless of their content. The researchers actually faked the likes and noted that the effect was the same whether showing ‘neutral’ content or risky behaviours like smoking and drinking. Unlike most existing studies, the researchers feel confident to describe this in terms of peer-approval and conformity, thanks to the brain scans. As the abstract puts it:

“Viewing photos with many (compared with few) likes was associated with greater activity in neural regions implicated in reward processing, social cognition, imitation, and attention.”

The paper itself is a bit fuzzy about which areas are activated under which conditions: not being adept at reading brain scans, I am still unsure about whether social cognition played a similarly important role when seeing likes of one’s own photos compared with others liked by many people, though there are clearly some significant differences between the two. This bothers me a bit because, within the discussion of the study itself, they say:

“Adolescents model appropriate behavior and interests through the images they post (behavioral display) and reinforce peers’ behavior through the provision of likes (behavioral reinforcement). Unlike offline forms of peer influence, however, quantifiable social endorsement is straightforward, unambiguous, and, as the name suggests, purely quantitative.”

I don’t think this is a full explanation as it is confounded by the instrument used. An alternative plausible explanation is that, when unsure of our own judgement, we use other cues (which, in this case, can only ever come from other people thanks to the design of the system) to help make up our minds. A similar effect would have been observed using other cues such as, for example, list position or size, with no reference to how many others had liked the photos or not. Most of us (at least, most who don’t know how Google works) do not see the ordering of Google Search results as social endorsement, though that is exactly what it is, but list position is incredibly influential in our choice of links to click and, presumably, our neural responses to such items on the page. It would be interesting to further explore the extent to which the perception of value comes from the fact that it is liked by peers as opposed to the fact that the system itself (a proxy expert) is highlighting an image as important. My suspicion is that there might be a quantifiable social effect, at least in some subjects, but it might not be as large as that shown here. There’s very good evidence that subjects scanned much-liked photos with greater care, which accords with other studies in the area, though it does not necessarily correlate with greater social conformity. As ever, we look for patterns and highlights to help guide our behaviours – we do not and cannot treat all data as equal.

There’s a lot of really interesting stuff in this apart from that, though. I am particularly interested in the activation of the frontal gyrus, previously associated with imitation, when looking at much-liked photos. This is highly significant in the transmission of memes as well as in social learning generally.

Address of the bookmark: http://neurosciencenews.com/nucleus-accumbens-social-media-4348/

Bigotry and learning analytics

Unsurprisingly, when you use averages to make decisions about actions concerning individual people, those decisions reinforce biases. This is exactly the basis of bigotry, racism, sexism and a host of other well-known evils, so programming such bias into analytics software is beyond a bad idea. This article describes how algorithmic systems are used to help make decisions about things like bail and sentencing in courts. Though race is not explicitly taken into account, correlates like poverty and acquaintance with people who have police records are included. In a perfectly vicious circle, the system reinforces biases over time. To make matters worse, this particular system uses secret algorithms, so there is no accountability and not much of a feedback loop to improve them if they are in error.

This matters to educators because much learning analytics does something very similar (there are exceptions, especially when analytics are used solely for research purposes). It looks at past activity, however that is measured, compares it to more or less discriminatory averages or similar aggregates of other learners’ past activity, and then attempts to guide the future behaviour of individuals (teachers or students) based on the differences. This latter step is where things can go badly wrong, but there would be little point in doing it otherwise. The better examples inform rather than adapt, allowing a human intermediary to make decisions, but that’s exactly what the algorithmic risk assessment described in the article does too, and it is just as risky. The worst examples attempt to directly guide learners, sometimes adapting content to suit their perceived needs. This is a terribly dangerous idea.
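To see how such a vicious circle can sustain itself, here is a deliberately crude toy simulation in Python – my own illustration, not a model of the system described in the article or of any real analytics product. Two groups of identical learners are measured by a system whose ‘at risk’ flag itself slightly worsens outcomes, so the flag never goes away.

```python
import random

# A crude toy, not any real analytics product: two groups of learners with
# IDENTICAL underlying ability. The system flags a group as "at risk" whenever
# its measured failure rate exceeds a threshold, basing that measurement on last
# term's data. Being flagged carries a small real cost (less benefit of the
# doubt, more gatekeeping, lowered expectations), which keeps the measured
# failure rate high enough for the group to stay flagged: the bias sustains itself.

TRUE_FAIL = 0.30          # the same underlying failure rate for everyone
FLAG_PENALTY = 0.05       # extra failures caused by being treated as "at risk"
THRESHOLD = 0.32
COHORT = 5000
measured = {"A": 0.30, "B": 0.33}   # group B starts with slightly worse historical data

rng = random.Random(42)
for term in range(1, 9):
    for group in ("A", "B"):
        flagged = measured[group] > THRESHOLD
        p_fail = TRUE_FAIL + (FLAG_PENALTY if flagged else 0.0)
        failures = sum(rng.random() < p_fail for _ in range(COHORT))
        measured[group] = failures / COHORT
    print(f"term {term}: measured failure A={measured['A']:.3f}, B={measured['B']:.3f}")

# Group B's data keep 'confirming' the system's suspicion term after term, even
# though the only real difference between the groups is how the system treats them.
```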

Address of the bookmark: http://boingboing.net/2016/05/24/algorithmic-risk-assessment-h.html