Here’s Why Public Wifi is a Public Health Hazard

A nice clear and very graphic explanation of why wifi, especially public wifi, is a very dangerous thing to use. And no, it has nothing whatsoever to do with radiation – if that worries you, and it absolutely shouldn’t, you should be a lot more worried about your TV or radio and positively scared stiff by cellphones, heat lamps and electric stoves. Or light, for that matter. Dangerous stuff, light. 

But, back to the article, most of the more frightening issues it illustrates can be dealt with using a good VPN, use of secure sites (like this one) and very careful attention to what you are clicking and what you are sharing. Others, especially those involving man-in-the-middle attacks and password cracking, can be much trickier to deal with. 
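To make the ‘secure sites’ point concrete, here is a minimal Python sketch (mine, not from the article) that opens a TLS connection and checks the server’s certificate; example.com is just a placeholder host. Verifying who you are actually talking to is what stops a machine-in-the-middle on the same hotspot from silently impersonating a site, whereas plain, unencrypted HTTP traffic can be read or rewritten by anyone sharing the network. A VPN extends the same idea to all of your traffic rather than one site at a time.

    import socket
    import ssl

    HOST = "example.com"   # placeholder: any HTTPS site you actually use

    ctx = ssl.create_default_context()   # verifies the certificate chain and hostname
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Negotiated:", tls.version())
            subject = dict(item for rdn in tls.getpeercert()["subject"] for item in rdn)
            print("Certificate issued to:", subject.get("commonName"))
            # On a hotspot running a simple man-in-the-middle with a forged
            # certificate, wrap_socket() raises ssl.SSLCertVerificationError
            # rather than silently handing your traffic to the attacker.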

If you are worried by this, and you absolutely should be if any of your devices uses wifi, including your home system, then there are numerous articles that will help you put some basic safeguards in place, such as:

  • http://www.forbes.com/sites/amadoudiallo/2014/03/04/hackers-love-public-wi-fi-but-you-can-make-it-safe/ (good basic advice, but does not address some of the issues raised here)
  • http://www.gizmag.com/how-to-stay-secure-on-public-wireless-hotspots/28694/ (a little more complex but a little better informed and offering a little more protection)
  • http://www.watchguard.com/infocenter/editorial/27061.asp (for the geeks or those with a serious interest – a more detailed pair of articles on how wifi evil twins work and what can be done to avoid them, as well as other risks)

If you’ve not thought much about such things, now is a good time.

Address of the bookmark: https://medium.com/matter/heres-why-public-wifi-is-a-public-health-hazard-dd5b8dcb55e6

x-literacies

There is an ever-growing assortment of x-literacies. Here are just a few that have entered the realms of academic discourse:

  • Computer literacy
  • Internet literacy
  • Digital literacy
  • Information literacy
  • Network literacy
  • Technology literacy
  • Critical literacy
  • Health literacy
  • Ecological literacy
  • Systems literacy
  • Statistical literacy
  • New literacies
  • Multimedia literacy
  • Media literacy
  • Visual literacy
  • Music literacy
  • Spatial literacy
  • Physical literacy
  • Legal literacy
  • Scientific literacy
  • Transliteracy
  • Multiliteracy
  • Metamedia literacy

This list is a small subset of x-literacies: if there is some generic thing that people do that demands a set of skills, there is probably a literacy that someone has invented to match.  I’ll be arguing in this post that the majority of these x-literacies miss the point, because they focus on tools and technologies more than the reasons and contexts for using them. 

The confusion starts with the name. ‘Literacy’, literally, means the ability to read and write, so, strictly speaking, most other ‘literacies’ are not literacies at all. We might just as meaningfully talk about ‘multinumeracy’ or ‘digital numeracy’ as ‘multiliteracy’ or ‘digital literacy’ and, for some (e.g. ‘statistical literacy’), ‘numeracy’ would actually make far more sense. But that’s fine – words shift in meaning all the time and leave their origins behind. It is not too hard to see how the term might evolve, without bending the meaning too much, to relate to the ability to use not just text but any kind of symbol system. That sometimes makes sense – visual, media or musical literacy, for example, might benefit from this extension of meaning. But most of the literacies I list above have at best only a partial relationship to symbol systems. I think what really appeals to their inventors is that describing a set of skills as ‘x-literacy’ makes ‘x’ seem more important than just a set of skills. They bask in the reflected glory of reading and writing, which actually are awfully important.

I’m OK with a bit of bigging up, though. The trouble is that prefixing ‘literacy’ with something else infects how we see the thing. It has certainly led to many silly educational initiatives with poorly defined goals and badly considered outcomes. This is because, all too often, it draws far too much attention to the technology and skills, and far too little to their application in a specific culture. This context-sensitive application (as I shall argue below) is actually what makes it ‘literacy’, as opposed to ‘skill’, and is in fact what makes literacy important.

So this is my rough-draft attempt to unravel the confusion so that at least I can understand it – it’s a bit of sense-making for me. Perhaps you will find it useful too. Some of this is not far off the underpinnings of the multiliteracy camp (albeit with notably different conclusions) and one of my main conclusions will be very similar to what many others have concluded too: that literacy spans many skills, tools and modalities, and is highly contextualized to a given culture at a given time. 

Culture and technology

When they pass a certain level of size and complexity, societies need more than language, ritual, stories, structures and laws passed by word of mouth (mostly things that demand physical co-presence) in order to function. They need tools to manage the complexity, to distribute cognition, replicate patterns, preserve structures, build new ones, pass ideas around, and to bind a dispersed society together. Since the invention of printing, most of the tools that play this role have been based on the technologies of text, which makes reading and writing fundamental to participation in a modern society and its numerous cultures and subcultures.

To be literate has, till recently, simply meant that you can do text. There may also be some suggestion of abilities that relate to deciphering, analyzing, synthesizing and appreciating text: these are at least the product of literacy if not a part of it, and they are among the main reasons we need literacy. But the central point here is that people who are literate, in the traditional sense, are simply able to operate the technology of writing, whether as consumers, producers or both. The reason this is ‘literacy’ rather than simply a skillset like any other is that text manipulation is a prerequisite for people to participate in their culture. It lets them draw on accumulated knowledge, add to it, and be able to operate the social and organizational machinery. At its most basic, this is a pragmatic need: from filling in forms and writing letters to reading signs, labels on food, news, books, contracts and so on. Beyond that, it is also a means to disseminate ideas, challenges, and creative thought in a society. It is furthermore a fundamental technology for learning, arguably second only to language itself in importance. More than that, it is a technology to think with and extend our thinking far beyond what we could manage without such assistance. It lets us offload and enhance our cognition. This remains true, despite multiple other media vying for our attention, most of which incorporate text as well as other forms. I could not do what I am doing right now without text because it is scaffolding and extending the ideas I started with. Other media and modalities can in some contexts achieve this end too and, for some purposes, might even do it better. But only text does it so sweepingly across multiple cultures, and nothing but text has such power and efficiency. In all but the most limited of cultures, text performs culture, and text makes culture: not all of it, by any means, but enough to matter more than most other learned technology skills.

Other ways to perform culture

There have for countless millennia been many other media and tools for cultural transmission and coordination, including many from way before the invention of writing. Paintings, drawings, sculpture, dance, music, rituals, maps, architecture, furniture, transport systems, sport, games, roads, numbers, icons, clothing, design, money, jewellery, weapons, decoration, litany, laws, myths, drama, boats, screwdrivers, door-knobs and many, many more technologies, serve (often amongst their other functions) as repositories of cognition, belief, structure and process. They are not just the signs of a culture: they play an active role in its embodiment and enactment. But text, maybe hand in hand with number, holds a special place because of its immense flexibility and ubiquitous application. Someone else can make roads or paintings or door-knobs and everyone else can benefit without needing such skills – this is one of the great benefits of distributed labour. But almost everyone needs skill in text, or at least needs to be close to someone with it. It is far from the only fruit but everyone needs it, just to participate in the cultures of a society.

Cultures and technologies

There are many senses in which we might consider technology and culture to be virtually synonymous. Both are, as Ursula Franklin puts it, ‘the way things are done around here’. Both concern process, structure and purpose. However, I think that there are many significant things about cultures  – attitudes, frames of mind, beliefs, ways of seeing, values, ideologies, for instance – that may be nurtured or enacted by technology, but that are quite distinct from it. Such things are not technological inventions – they are the consequence, precursors and shapers of inventions. Cultures may, however, be ostensively defined by technologies even if they are not functionally identical with them. Archeologists, sociologists and historians do it all the time. Things like language, clothing, architecture, tools, laws and so on are typically used to distinguish one culture from another.

One of the notable things about technologies is that they tend to evolve towards both increasing complexity and increasing specialization. This is a simple dynamic of the adjacent possible. The more we add, the more we are able to add, the more combinations and the more new possibilities that were unavailable to us before reveal themselves, so the more we diversify, subdivide, concatenate and invent. Thus it goes on ad infinitum (or at least ad singularum). Technologies tend to continuously change and evolve, in the absence of unusual forces or events that stop them. Of course, there are countless ways that technologies, notably in the form of religions, can slow this down or reverse it, as well as catastrophes that may be extrinsic or that may result from a particularly poor choice of technologies (over-cultivation of the land, development of oil-dependency, nuclear power, etc). There are also many technologies that play a stabilizing rather than a disruptive role (education systems, for example). Overall, however, viewed globally, in large cultures, the rate of technological change increases, with ever more rapid lifecycles and lifespans.  This means that skills in using technologies are increasingly deictic and increasingly short-lived or, if they survive, increasingly marginalized. In other words, they relate specifically to contexts outside of which they have different or no meaning, and those contexts keep changing thanks to the ever-expanding adjacent possible. Skills and techniques become redundant as contexts change and cultures evolve. That’s a slight over-simplification, but the broad pattern is relentless.
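A back-of-the-envelope illustration of that combinatorial dynamic, with made-up numbers: the count of ways to combine existing components grows far faster than the number of components, so each addition enlarges the space of possible next inventions.

    from math import comb

    for n in [5, 10, 20, 40]:
        pairs = comb(n, 2)             # possible two-component combinations
        assemblies = 2 ** n - n - 1    # possible assemblies of two or more components
        print(f"{n:>2} components -> {pairs:>4} pairings, {assemblies} larger assemblies")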

Towards a broader definition of ‘literacy’

Literal literacy is the ability to use a particular technology (text) to learn from, interact with and add to our various different cultures. The label implies more than just reading and writing: to be literate implies that, as a consequence of reading and writing, stuff has been and will be read – not just reading primers, but books, news, reports and other cultural artefacts. In the recent past, text was about the most significant way (after talking and showing) that cultural knowledge was disseminated. In recent decades, there have been plentiful other channels, including movies, radio, TV, websites, multimedia and so on. It was only natural that people would see the significance of this and begin to talk about different kinds of literacy, because these media were playing a very similar cultural role to reading and writing. The trouble is that, in doing so, the focus shifted from the cultural role to the technology itself. At its most absurd, it resulted in terms like ‘computer literacy’ that led to initiatives that were largely focused on building technical skills messily divorced from the cultures they were supporting and of little or no relevance to being an active member of such a culture.

So here’s a tentative (re)definition of ‘literacy’ that restores the focus: literacy is the prerequisite set of technological skills needed for participation in a culture.  And, of course, we are all members of many cultures. There are other things that matter in a culture apart from technological skills, such as (for example) a playful spirit, honesty, caring for others, good judgement, curiosity, ethical sensibility, as well as an ability to interpret, synthesize, classify, analyze, remix, create and seek within the cultural context. These are probably more important foundations of most cultures than the tools and techniques used to enact them. But, though traits like these can certainly be nurtured, inculcated, encouraged, shown, practiced, learned and improved, they are not literacies. These are the values and valued traits in a culture, not the skills needed to be a part of it, though there is an intimate iterative relationship between the two. In passing, I think it is those traits and others like them that education is really aimed at developing: the rest, the literacy part, is transient and supportive. We don’t have values and propensities in order to achieve literacy. We learn most of them at least partly through the use of literacies, and literacies are there to support them and let them flourish, to provide mechanisms through which they can be exercised.

My suggestion is that, rather than defining a literacy in terms of its technologies, we should define it in terms of the particular culture it supports. If a culture exists, then there is a literacy for it, which is comprised of a set of skills needed to participate in that culture. There is literacy for being a Canadian, but there is equally literacy for being part of the learning technologies community (and for each of its many subcultures), being a researcher, a molecular scientist, a member of a family or of a local chess club. There is literacy for every culture we belong to. Some technological skillsets cross multiple cultures, and some are basic to them. The first of these is nearly always language. Most cultures, no matter how trivial and constrained, have their own vocabularies and acceptable/expected forms of language but, apart from cases where languages are actually a culturally distinguishing factor (e.g. many nations or tribes) they tend to inherit most of the language they use from a super-culture they are a part of. Reading and writing are equally obvious examples of skills that cross multiple cultures, as are numeracy skills. This is why they matter so much – they are foundational. Beyond that, different technologies and consequent skills may matter as much or more in different cultures. In a religious culture these might include the rules, rituals, principles, mythologies and artefacts that define the religion. In a city culture they could include knowledge of bylaws, transit systems, road layouts, map-reading, zones, and norms. In an academic culture it might relate to (for instance) methodologies, corpora, accepted tenets, writing conventions, dress standards, pedagogies, as well as the particular tools and methods relating to the subject matter. In combination, these skills are what makes someone in a given culture literate in that culture.

For instance

Is there such a thing as computer literacy? I’d say hardly at all. In fact, it makes little sense at all to think in those terms. It’s a bit like claiming there is pen literacy, table literacy or wall literacy.  But there might be computing literacy, inasmuch as there may be a culture of computing. In fact, once upon a time, when dinosaurs roamed the earth and people who used computers had to program them themselves, it might have been a pretty important culture that any people who wished to use computers for any purpose at all would need to at least dip their toes in and, most likely, become a part of. That culture is still very much there but it is not a prerequisite of owning a computer that one needs to be a part of it any more – computing culture is now the preserve of a relatively tiny band of geeks who are dwarfed in number by those that simply use computers. The average North American home has dozens of computers, but few of their users need to or want to be part of a computing culture. They just want to operate their TVs, drive their cars, use their phones, take photos, browse the Web, play the keyboard, etc. This is as it should be. Those in a computing culture are undoubtedly still an important tiny band who do important things that affect the rest of the world a lot, but they are just another twig at the end of a branch of the cultural tree, not the large stem that they once were. Within what is left of that computing culture there are a lot of overlapping computing sub-cultures: engineers, bricoleurs, hardware freaks, software specialists, interaction designers, server managers, programmers, object-oriented programmers, PHP enthusiasts, iOS/Mac users, Android/Windows users, big-endians, little-endians. Each sub-culture has its own literacy, its own language, its own technologies on which it is founded, as well as many shared commonalities and cross-cutting concerns. 

Is there such a thing as ‘digital literacy’? Hardly. There is no significant distinctive thing that is digital culture, so there is no such thing as digital literacy. Again, like computing culture, once upon a time, there probably was such a thing and it might have mattered. I recall a point near the start of the 1990s, as we started to build web servers, connect Gopher servers, use email and participate in Usenet Newsgroups, at which it really did seem that we were participating in a new culture, with its own evolving values, its own technologies, its own methods, rules, and ethics. This has almost entirely evaporated now. That culture has in part been absorbed and diffused, in part branched into subcultures. Being ‘digital’ is no longer a way of defining a culture that we are a part of, no longer a way of being. Unless you are one of the very few that has not in the last decade or so bought a telephone, a TV, a washing machine, a stove, or one of countless other digital devices, you are ‘digital’. And, if there were such a thing as a digital culture, you would almost certainly be a part of it if you are reading this. This is too tenuous a thing – it has nothing to bind it apart from the use of digital devices that are almost entirely ubiquitous, at least in first world cultures, and that are too diverse to bind a culture together. There are, as a result, insufficient shared values to make it meaningful any more. It is, however, still possible to be anti-digital. Some digital luddites (I mean this non-pejoratively to refer to anyone who deliberately eschews digital technologies) do very much have cultures and probably have their own literacies. And there might well be literacies that relate to specific digital technologies and subsets of them. Twitter has a culture, for instance, that implies rules, norms, behaviours, language and methods that anyone participating should probably know. The same may be (and at some point certainly was) true of Facebook, but I think that is less obvious now.

Network culture is probably still a thing, but it is already fading in much the same way that digital culture has already faded, with ubiquity, diversity and specialization each taking bites out of it. We have seen network culture norms develop and spread. New vocabularies have been developed with subtle nuances (LOL, ROFL, LMFAO) that often branch into meanings that may only be deciphered by a few sub-cultures but that may subsequently spread into other cultures (TIL, RT, TLDR, LPT).   We have had to learn new skills, figuring out how to negotiate privacy, filter bubbles, trolls, griefing, effective tagging, filtering, sorting, unfriending and friending, and much much more, in order to participate in a social network culture, one that is (for now) still a bit distinct from other cultures. But that culture has already diversified, spread, diffused, and it is getting more diffuse every day. As it becomes larger and more diverse it ceases to be a relevant means of identifying people, and it ceases to be something we can identify with.

Much of the reason for network culture’s retreat is technological. It was enabled by an assembly of technologies and spawned new ones (norms, conventions, languages, etc) but, as they evolve, other technologies will render it irrelevant. Technologies often help to establish cultures and may even form their foundation but, as they and the cultures co-develop, the technologies that helped build those cultures stop being definitional of them. Partly this results from diffusion, as ways of thinking creep back into the broader super-culture and as more and more diverse cultures spread into it. Partly it is because new technologies take their place and diversify into niches. Partly it is because, rather than us learning to use technologies, they learn to use us. This sounds creepier than it really is: what I mean is that individual inventors see the adjacent possibles and grab them, so technologies change and, in many cases, become embedded, replacing our manual roles in them with pre-orchestrated equivalents. Take, for example, a trivial thing like emoticons, images built from arbitrary text characters, that take some of the role of phatic communication in text communication – like this :-). Emoticons are increasingly being replaced by standardized emojis, like this 🙂. Bizarrely, there are now social networks based on emoji that use no text at all. I am intrigued by the kind of culture that this will entail or support but the significant point here is that what we used to have to orchestrate ourselves is now orchestrated in the machine. Consequently, the context changes, problems are solved, and new problems emerge, often as a direct result of the solution. Like, how on earth do you communicate effectively with nothing but emojis 😕?
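A tiny, purely illustrative Python sketch of that shift: an emoticon is something we assemble ourselves out of ordinary characters, while an emoji is a single standardized codepoint whose appearance is orchestrated by whatever platform renders it.

    import unicodedata

    emoticon = ":-)"
    emoji = "\U0001F642"   # 🙂

    print([hex(ord(c)) for c in emoticon])   # three plain ASCII characters we arranged ourselves
    print(hex(ord(emoji)))                   # one standardized codepoint: 0x1f642
    print(unicodedata.name(emoji))           # 'SLIGHTLY SMILING FACE'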

Where do we go from here? 

Rather than constantly sub-divide literacies into ever more absurd niches named for the tools to which they relate, or attempt to find bridging competences or values that underlie them and call those multiliteracies (or whatever), I propose that we should think of a literacy as being a highly situated set of skills that enable us to play a role as an operator in any given social machine, as creators and/or consumers of a culture – any culture and every culture. The specificity we choose should be determined by the culture that interests us, not by any predetermined formula. Each subculture has its own language, tools, methods, and signs, and each comes with a set of shared (often contested) attitudes, beliefs, values and passions, that both drive and are driven by the technologies they use. As a result, each has its own history, which branches from the histories of other subcultures, helping to make it more distinct. This chain of path dependencies helps to reinforce a culture and emphasize its differences. It can also lead to its demise.

In most if not all cases, literacy is an assembly of skills and techniques, not a single skill. ‘Literacy’ is thus simply a label for the essential skills and techniques needed to actively participate in a given culture. Such a culture may be big or small. It may span millennia or centuries but it may span only decades, years or (maybe) months or even weeks or days. It may span continents or exist only in a single room. I have, for example, been involved with courses, workshops and conferences that have evolved their own fleeting cultures, or at least something prototypical of one. In my former job I shared an office with a set of colleagues that developed a slightly different culture from that of the office next door. Of course, the vast majority of our culture was shared because we performed similar roles in the same department in the same organization, the same country, the same field, the same language, the same ethos. But there were differences that might, in some contexts and for some purposes, be important. For most contexts, they were probably not.

Researching literacies 

Assuming that we know what culture we are looking at, identifying literacy in any given culture is simply (well…simply-ish) a question of looking at the technologies that are used in that culture. While technology use is far from a complete definition of a culture, what makes it distinct from another may be described in terms of its technologies, including its rules, tools, methods, language, techniques, practices, standards and structures. This seems a straightforward way of thinking about it, if a little circular. We identify cultures by their technology uses, and define literacy by technology use in a culture. I don’t think this apparent circularity is a major issue, however, as this is an iterative process of discovery: we may start with coarse differentiators that distinguish one culture from another but, as we examine them more closely, will almost certainly find others, or find further differentiators that indicate subcultures. A range of methods and methodologies may be used here, from grounded theory to ethnography, from discourse analysis to Delphi methods, simple observation, questionnaires, interviews, focus groups, and so on. If we want to know about literacy in a culture, we have to discover what technologies are foundational in that culture.

Most of the cultures we belong to are subcultures of one or more others, while some straddle borders between different and otherwise potentially unrelated cultures. Some skills that partially constitute a given literacy will cross many other cultural boundaries. Almost all will involve language, most will involve reading and writing, many will involve number, lots will involve visual expression, quite a few will involve more or less specific skills using machines (particularly software running on computers, some of which may be common). The ability to create will usually trump the ability to consume although, in some cultures, prosumption may be a defining or overwhelmingly common characteristic (those that emerge in social networks, for instance).

This all implies that a first concern when researching literacy for a given culture is to identify that culture in the first place, and decide why it is of interest. While this may in some cases be obvious, there may often be subcultures and cross-cultural concerns that could make it more complex to define. One way to help separate out different cultures is to look at the skills, terminology, technologies, implicit and explicit rules, norms, and patterns of technology use in the subset of people that we are looking at. If there are patterns of differences, then there is a good chance that we have identified a cultural divide of some kind. A little more easily, we can also look at why people are excluded from a culture and seek to discover the things people need to learn to become a part of it – to look at the things that distinguish an outsider from an insider and how people transition from one to the other.

For example, the literacy for the culture of a country is almost entirely defined by invention. Countries are technologies, first and foremost. They have legislated (if often disputed) borders and boundaries, laws, norms, language, ways of doing things, patterns, establishments, and institutions that are almost entirely enshrined in technology. It is dead easy to spot this particular culture and mostly simple enough to figure out who is not in it and, normally, what they need to do to become a part of it. To be literate in the context of a country is to have the tools to be able to know and to actively interact with the technologies that define it. To give a simple example, although it is quite possible to be Canadian with only a limited grasp of English and/or French, part of what it means to be literate in Canadian culture is to speak one or (ideally) both languages. Other languages are a bonus, but those two are foundational. It is also possible to see similar patterns in religious cultures, academic cultures, sports cultures, sailing cultures and so on. We can see it in subcultures – for example, goths and hipsters are easily identified by a set of technologies that they use and create, because many of them are visible and definitional.  It gets trickier once we try to find subcultures of such easily identified sets but, on the whole, different technologies mark different cultures.

What makes all this technical detail worth knowing is not that different sets of people use different tools but that there are consequences of doing so. Technologies have a deep impact on attitudes, values, beliefs and relationships between people. In turn these values and beliefs equally impact the technologies that are used, developed, and valued. This is what matters and this is what is worth investigating. This is the kind of knowledge that is needed in order to effect change, whether to improve literacy within a culture or to change the culture itself. For example, imagine a university that runs on highly prescriptive processes and a reward structure based on awards for performance. You may not have to look far to find an example. Such a university might be dysfunctional on many counts, either because of lack of literacy in the technologies or because the technologies themselves are poorly considered (or both). One way to improve this would be to ensure that all its members are able to operate the processes and gain awards. This would be to improve literacy within the culture and would, consequently, reinforce it and sustain it. This might be very bad news if the surrounding context changes, making it significantly harder to adapt and change to new demands, but it would be an improvement by some measures. Another, not necessarily conflicting, approach would be to change or eliminate some of the processes, and get rid of or change the nature of rewards for performance: to modify the machinery that drives the culture. This would change the culture and thus change the literacy needed to operate within it. It might do unexpected things, especially as the existing attitudes and values may be at odds with the new culture: people within it would be literate in things that are not relevant or useful any more, while not having literacy needed to operate the new tools and structures. Much existing work surrounding x-literacies fails to clearly make this crucial distinction. By focusing largely on the technological requirements and ignoring the culture, we may reinforce things that are useless, redundant or possibly harmful. For instance, multimedia literacy might be great, sure. But for what and for whom? And in what forms? Different skillsets are needed in different contexts, and will have different value in different cultures.

To conclude

I have proposed that we should define literacy as the skills needed to operate the technologies that underpin a particular culture. While some of those skills are common to many cultures, the precise set and the form they take are likely different in almost every culture, and cultures evolve all the time so no literacy is forever. I think this is a potentially useful perspective.

We cannot sensibly define a set of skills or propensities without reference to the culture that they support, and we should expect differences in literacies both between different cultures and across time and space in any given culture. We can ask meaningful questions about the literacy of (say) people who use Twitter for learning and research as opposed to that needed by people who only use Twitter to stay in touch with one another. We can look at different literacies for people who are Canadian, people who are in schools, people of a particular religion, people who like a particular sport, people who research learning technologies, people in a particular office, people who live in Edmonton, not to mention their intersections and their subsets. By looking at literacy as simply a set of skills needed for a given culture we can gain considerable insight into the nature of that culture and its values. As a result, we can start to think more carefully about which skills are important, whether we want to simply support the acquisition of those skills, or whether we want to transform the culture itself.

This is just my little bit of sense making. I have very probably trodden territory that is very familiar to a lot of people who research such things with more rigour, and I doubt very much that any of it is at all original. But I have been bothered by this issue for a while and it now seems a little clearer to me what I think about this. I hope it has encouraged you to think about what you think too. Feel free to share your thoughts in the comment box!

Dining with an overweight person makes you eat more

It looks like one mechanism for the already observed spread of obesity through social networks may be extremely simple: people tend to eat more when dining with people who are fatter. Thanks to an ingeniously simple experimental design, this paper shows that it’s not due to any difference in the fatter people’s behaviour. It’s solely due to their size. Interesting.

The study deliberately used eating companions, making this a clear network effect in which people are influenced by those with whom they share a reciprocal connection. I’d be intrigued to discover whether it would make any difference if the fatter people (wearing body prostheses) were simply strangers sitting in the same restaurant, not eating together. I’d hypothesise that the effect would still show up, probably more weakly, but that it might be proportional to the number of people who appeared to be obese. In fact, I am guessing it would probably be more complex than that: for instance, that we might be more influenced by those that we thought were more like us or that we took more of a shine to. If so, this would be more of a set than a network effect. It would be not unlike flocking behaviour in birds: until quite recently it was thought that birds flocked due to a simple network effect that spread from neighbour to neighbour but, as it turns out, they are simply counting the birds nearby that are behaving in a particular way, and going with the majority. Memes may work the same way.
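Here is a minimal sketch (mine, not from the paper) of the distinction being drawn here, with entirely made-up behaviours: ‘network’ influence flows along a specific tie, such as a dining companion, while the flocking-style ‘set’ rule simply counts whoever happens to be nearby and goes with the majority.

    from collections import Counter

    def network_influence(current, companion):
        """Network effect: copy the behaviour of the one person you are tied to."""
        return companion

    def quorum_influence(current, nearby):
        """Set effect: go with whatever the majority of nearby people are doing;
        stick with your current behaviour if there is no clear majority."""
        if not nearby:
            return current
        behaviour, count = Counter(nearby).most_common(1)[0]
        return behaviour if count > len(nearby) / 2 else current

    # One diner influenced by a companion versus by a roomful of strangers:
    print(network_influence("eat normally", companion="eat more"))
    print(quorum_influence("eat normally", nearby=["eat more", "eat more", "eat normally"]))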

This is about as far from intentional communication as it can get – it’s not even a behaviour that is being copied here but some imagined and possibly inaccurate belief about someone’s past behaviour – and yet the effects may be quite profound and, spread through a society, might have massive large-scale effects that spill over into many different aspects of many people’s lives, affecting everything from population health to the economy. It’s one of the reasons that schools and universities are a good idea, quite apart from, and independently of, any intentional teaching that might or might not be having an effect. When you see people around you behaving in a particular way, you are more likely to behave similarly. If it seems normal to be actively learning, there’s a much greater chance that you will do so too. Behaviours (even imagined ones) are highly infectious.

Address of the bookmark: http://blogs.discovermagazine.com/seriouslyscience/2014/09/22/dining-overweight-person-makes-others-eat/

StudentLife: Assessing Mental Health, Academic Performance and Behavioral Trends of College Students using Smartphones

A totally fascinating study of students conducted using a massive amount of automatically collected data from smartphones along with other data collected from other systems and via surveys to come up with a large set of correlations relating to everything from mood to GPA. This would win a top paper award in any conference I can think of.

Too much to summarize here, and many more questions emerging from it than it answers, but this should keep a load of researchers busy for years to come. I’m certainly going to be picking this over carefully now that I’ve read it through once. I highly recommend that anyone involved in education (staff or students) should read this! But it should be read with great care and with all critical faculties on full alert. This was a very specific group of students in a very specific context and it would be highly dangerous and irresponsible to extrapolate any generalizations at all from any of this, though I bet some people will. There are lots of things that warrant further investigation – active students were happier and did better but lack of activity, especially at night, seems correlated with higher GPAs, for example, and there are some big fuzzy areas in the sampling that involved a lot of interpretation that was unlikely to be particularly accurate much of the time. The finding that I find particularly appealing is the discovery that classroom attendance had no correlation with academic performance at all: I almost laughed out loud at this one. As always, however, it’s not what but how that matters. This suggests to me that someone really needs to work on their classroom activities rather than that classroom teaching does no good, and I would really like to know a lot more about the students who skipped classes before even drawing conclusions from this small dataset, let alone more broadly. The other big issues here surround the need for careful interpretation and more qualitative data to explore causes: all this shows is correlations, some of which seem to imply obvious things (e.g. students that study rather than party tend to get better grades but they tend to be lonelier) but many of which are more complex and should be considered in context and at a whole systems level.

The anonymized dataset is available for downloading.
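For anyone tempted to pick it over, here is a minimal pandas sketch of the kind of secondary analysis the dataset invites; the file and column names are placeholders rather than the real layout, and, as noted above, a correlation on its own says nothing about causes.

    import pandas as pd

    # Hypothetical per-student summary table derived from the released data
    df = pd.read_csv("studentlife_summary.csv")

    # e.g. how strongly does a night-time activity measure track GPA?
    r = df["night_activity_minutes"].corr(df["gpa"], method="pearson")
    rho = df["night_activity_minutes"].corr(df["gpa"], method="spearman")
    print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")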

Address of the bookmark: http://studentlife.cs.dartmouth.edu/studentlife.pdf

Professor forces students to buy his own $200 textbook

This article is purportedly about the very unsurprising discovery that students who can’t afford textbooks are downloading them illegally, even for ethics classes. Shocking! Not. However, the thing that really shocks me about this article is the example given of the professor demanding that his students purchase his own $200 etextbook. Piracy seems a pretty minor crime compared with this apparently outrageous, blatant, extortionate abuse of power.


Address of the bookmark: http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/09/17/more-students-are-illegally-downloading-college-textbooks-for-free/

The Serious Limitation of Rote Memorisation You Probably Don't Know About (And It's Undermining Learning)

Report on an interesting study showing how rote learning of some things results in increasingly creative interpretations of what we have tried to learn, which means it actually gets in the way of remembering accurately, even though more details are recalled. The researchers note that this is not an issue with simple memorization of numbers, words, etc, but it can be an issue where more complex and relational things need to be recalled – the report mentions understanding the solar system as an example and the researchers used recollection of things in pictures for their testing. In such cases, repetition means more things are remembered, but more things are remembered wrong. I’m wondering whether this affects different kinds of rote memorization, such as the muscle memory used when playing a musical instrument, or learning lines in a song or a play. I’m guessing these are more akin to simple recollections of words because they are a linear sequence, whereas the ways we perceive pictures rely on us choosing where to focus.

Address of the bookmark: http://www.opencolleges.edu.au/informed/news/limitation-of-rote-learning/

Teaching Crowds: Learning and Social Media

The free PDF preview of the new book by me and Terry Anderson is now available from the AU Press website. It is a complete and unabridged version of the paper book. It’s excellent value!

The book is about both how to teach crowds and how crowds can teach us, particularly at a distance and especially with the aid of social software.

For the sake of your health we do not recommend trying to read the whole thing in PDF format unless you have a very big and high resolution tablet or e-reader, or are unusually comfortable reading from a computer screen, but the PDF file is not a bad way to get a flavour of the thing, skip-read it, and/or to find or copy passages within it. You can also download individual chapters and sections if you wish. 

The paper and epub versions should be available for sale at the end of September, 2014, at a very reasonable price. 

Address of the bookmark: http://www.aupress.ca/index.php/books/120235

Researching things that don't exist

With the end of my sabbatical approaching fast, I am still tinkering with a research methodology based on tinkering (or the synonymous bricolage, to make it sound more academic). Tinkering is an approach to design that involves making things out of what we find around us, rather than through an engineered, designed process. This is relatively seldom seen as a valid approach to design (though there are strong arguments to be made for it), let alone to research, though it underpins much invention and discovery. Tinkering is, by definition, a step into the unknown, and research is generally concerned with knowing the unknown (or at least clarifying, confirming or denying the partly- or tentatively-known). This is not a direct path, however.

Research can take many forms but, typically and I think essentially, the sort that we do in academia is a process of discovery, rather than one of invention. This is there in the name – ‘recherche’ (the origin of the term) means to go about seeking, which implies there is something to be found. The word ‘discovery’ suggests that there is something that exists that can be discovered, whereas inventions, by definition, do not yet exist, so they are never exactly discovered as such.

While we can seldom substitute ‘invention’ for ‘discovery’, the borders are blurry. Did Maxwell discover his equations or did he invent them? What he discovered was something about the order of the universe, which his (invented) equations express, but the equations formed an essential and inextricable part of that discovery. R&D labs get around the problem by simply using two terms so that you know they are using both. The distinction is similarly blurry in art: an artwork is normally not, at least in a traditional sense, research because, for most art, it is a form of invention rather than discovery. But sculptors often talk of discovering a form in stone or wood. And, even for the most mundane of paintings or drawings, artists are in a dialogue with their media and with what they have created, each stroke building on and being influenced by those that came before. A relative of mine recently ran an exhibition of works based on the forms suggested by blots of ink and water, which illustrates this in sharper relief than most, and I do rather like these paintings from Bradley Messer that follow the forms of wood grain. Such artists discover as much as they create and, like Maxwell’s equations, their art is an expression of their discovery, not the discovery itself, though the art is equally a means of making that discovery. Discovery is even more obvious in ‘found’ art such as that of some of the Dadaists (Duchamp’s Fountain, for example), though the ‘art’ part of it is arguably still the invention, not the discovered object itself. And, as Dombois observes, there are some very important ways research and art can connect: research can inform art and be about art, and art can be about research, can support research and can arise from it. Dombois also believes art can be a means of performing research. Komar and Melamid’s ‘most-wanted paintings’ project is a good example of art not only being informed by research but itself being a form of research. Their paintings resulted from research into what ‘the people’ wanted in their paintings. The paintings themselves challenge what collective taste means, and the value of it, changing how we know and make use of such information. And the artwork itself is the research, of which the paintings are just a part.

Inventions (including art works) use discoveries and, from our inventions, we can make discoveries (including discoveries about our inventions). Invention makes it possible to make novel discovery, but the research is that discovery, not the inventions that lead to it. Research perceived as invention means discovering not what is there but what is not there, which is a little bizarre. More accurately, perhaps, it is seeking to discover what is latently there. It is about discovering possible futures. But even this is a bit strange, inasmuch as latent possibilities are, in many cases, infinite. I don’t think it counts as discovery if you are picking a few pieces from a limitless range of possibilities. It is creation that depends entirely on what you put into it, not on something that can be discovered in that infinity. But, perhaps, the discovery of patterns and regularities in that infinite potential palette is the research. This is because those infinite possibilities are maybe not as infinite as they seem. They are at the very least constrained by what came before, as well as by a wide range of structural constraints that we impose, or have imposed upon us. What is nice about tinkering is that, because it is concerned with using things around us, the forms we work on already have such patterns and constraints. 

Tinkering is concerned with exploring the adjacent possible. It is about looking at the things around you (which, in Internet space, means practically everywhere) and finding new ways to put them together to do new things. These new things can then, themselves, create new adjacent possibles, and so it goes on. Beyond invention, tinkering is a tool for making new discoveries. It is a way of having a conversation with objects in which the tinkerer manipulates the objects and the objects in turn suggest ways of putting them together. It can inspire new ways of thinking. We discover what our creations reveal. Writing (such as this) is a classic example of this process. The process of writing is not one of recording thoughts so much as it is one of making new ones. We scaffold our thoughts with the words we write, pulling ourselves up by our own bootstraps as we do so in order to build further thoughts and connections.

The construction of all technologies works the same way, though it is often hidden behind walls of abstraction and deliberate design. If, rather than design-then-build, we simply tinker, then the abstraction falls away. The paths we go down are unknown and unknowable in advance, because the process of construction leads to new ideas, new concepts, new possibilities that only become visible as we build. Technologies are (all) tools to think with at least as much as they are tools to perform the tasks we build them for, and tinkering is perhaps the purest way of building them. And this is what makes tinkering a process of discovery. The focus is not on what we build, but on what we discover as a direct result of doing so – both process and product. Tinkering is a scaffold for discovery, not discovery itself. This begins to feel like something that could underpin a methodology.

With this in mind, here is an evolving set of considerations and guidelines for tinkering-based research that have occurred to me as I go along.

Exploring the possible

To be able to explore the adjacent possible, it is first necessary to explore the possible. In fact, it is necessary to be immersed in the possible. At a simple level, this is because the bigger your pile of junk, the more chances there are of finding interesting pieces and interesting combinations. But there are other sub-aspects of this that matter as much: the nature of the pile of junk, the skills to assemble the junk, and immersion in the problem space.

1) The pile of junk

Tinkering has to start with something – some tools, some pieces, some methods, some principles, some patterns. It is important that these are as diverse as possible, on the whole. If you just have a pile of engine parts then the chances are you are going to make another engine although, with a tinker-space containing sufficiently diverse patterns, you might make something else. There is a store near me that sells clocks, lights and other household objects made from bits of old electrical equipment and machinery, and it is wonderful. Similarly, some of the finest blues musicians can make infinite complexity out of just three chords and a (loosely) pentatonic scale. But having diverse objects, methods, patterns and principles certainly makes it easier than just having a subset of it all.

It is important that the majority of the junk is relatively complex and self-contained in itself – that it does something on its own, that it is already an assembly of something. Doing bricolage with nothing but raw materials is virtually impossible – they are too soft (in a technology sense). You have to start with something, otherwise the adjacent possible is way too far away and what is close is way too boring. The chances are that, unless you have a brilliant novel idea (which is a whole other territory and very rare) you will wind up making something that already exists and has probably been done better. This is still scrabbling around in the realms of the possible. The whole point is to start with something and assemble it with something else to make it better, in order to do something that has never been done before. That’s what makes it possible to discover new things. Of course, the complexity does not need to be in physical objects: you might have well-assembled theories, models, patterns, belief systems, aesthetic sensibilities and so on that could be and probably will be part of the assembly. And, since we are not just talking about physical objects but methods, principles, patterns etc, this means you need to immerse yourself in the process – to do it, read about it, talk about it, try it. 

2) The tools of assembly

It is not enough to have a great tinker-space full of bits and pieces. You need tools to assemble them. Not just physical tools, but conceptual tools, skills, abilities, etc. You can buy, make, beg, borrow or steal the tools, but skills to use them take time to develop. Of course, one of the time-honoured and useful ways to do that is to tinker, so this works pretty well. Again, this is about immersion. You cannot gain skills unless you apply them, reflect on them, and apply them again, in a never-ending cycle.

There is a flip side to this though. If you get to be too skillful then you start to ignore things that you have discovered to be irrelevant, and irrelevant things aren’t always as irrelevant as they seem. They are only irrelevant to the path you have chosen to tread. Treading multiple paths is essential so, once you become too much of an expert, it is probably time to learn new skills. It is hard to know when you are too much of an expert. Often, the clue is that someone with no idea about the area suggests something and you laughingly tell them it cannot be done. Of course it can. This is technology. It’s about invention. You are just too smart to know it.

Being driven by your tools (including skills) is essential and a vital part of the methodology – it’s how the adjacent possible reveals itself. But it’s a balance. Sometimes you go past an adjacent possible on your way and then leave it so far behind that you forget it is there at all. It sometimes takes a beginner to see things that experts believe are not there. It can be done in all sorts of ways. For example, I know someone who, because he does not want to be trapped by his own expertise, constantly retunes his guitar to new tunings, partly to make discoveries through serendipity, partly to be a constant amateur. But, of course, a lot of his existing knowledge is reusable in the new context. You do not (and cannot) leave expertise behind when learning new things – you always bring your existing baggage. This is good – it’s more junk to play with. The trick is to have a ton of it and to keep on adding to it.

3) The problem space

While simply playing with pieces can get you to some interesting places, once you start to see the possibilities, tinkering soon becomes a problem-solving process and, as you follow a lead, the problem becomes more and more defined, almost always adding new problems with each one solved. Being immersed in a problem space is crucial, which tends to make tinkering a personal activity, not one that lends itself well to formally constructed groups. Scratching your own itch is a pretty good way to get started on the tinkering process because, having scratched one itch, you always find more or, at least, notice other itches as you do so.

If you are scratching someone else’s itch then it can be too constraining. You are just solving a known problem, which seldom gets you far beyond the possible and, if it does, your obligations to the other person make it harder for you to follow the seam of gold that you have just discovered along the way, which is really the point of it. It’s the unknown problems, the ones that only emerge as we cross the border of the adjacent possible, that matter here. Again, though, this is a balance. A little constraint can help to sustain a focus, and doing something that is not your own idea can spark serendipitous ideas that turn out to be good.

Just because it is not really a team process doesn’t mean that other people are not important to it. Talking with others, exchanging ideas, gaining inspiration, receiving critique, seeing the world through different eyes – all this is very good. And it can also be great to work closely with a small number of others, particularly in pairs – XP relies on this for its success. A small number of people do not need to be bogged down with process, schedules, targets, and other things that get in the way of effective tinkering; they can inspire one another, spot more solutions, and sustain motivation when the going gets rough.

The structural space

One of the points of bricolage is that it is structured from the bottom up, not the top down. Just because it is bottom-up structure does not mean it is not structure. This is a classic example of shaping our tools and our tools shaping us (as McLuhan put it), or shaping our dwellings while our dwellings shape our lives (as Churchill put it a couple of decades earlier). Tinkering starts with forms that influence what we do with them, and what we do with them influences what we do next – our creations and discoveries become the raw material for further creations and discoveries. Though rejecting deliberate structured design processes, I have toyed with and tried things like prototyping, mock-ups and sketches of designs, but I have come to the opinion that they get in the way – they abstract the design too much. What matters in bricolage is picking up pieces and putting them together. Anything beyond vague ideas and principles is too top-down. You are no longer talking with the space but with a map of the space, which is not the same thing at all.

Efficiency

One of the big problems with tinkering is that it tends to lead to highly inefficient design, from an engineering perspective. Part of the reason for that is that path dependencies set in early on. A bad decision early can seriously constrain what you do later. One has only to look at our higher education systems, the result of massively distributed large scale tinkering over nearly a thousand years, to see the dangers here. The vast majority of what we continue to do today is mediaeval in origin and, in a lot of cases, has survived unscathed, albeit assembled with a few other things along the way.

Building from existing pieces can limit the damage – at least you don’t have to pull everything apart if it turns out that it is not a fruitful path. It is also very helpful to start with something like Lego, which is designed to be fitted together this way. Most of my work during my sabbatical has involved programming using the Elgg framework, which is very elegantly designed so that, as long as you follow the guidelines, it naturally forms into at least a decent outline structure. On the other hand, as I have found to my cost, it is easy to put enough work into something that it becomes very discouraging when you have to start again. As the example of educational systems shows, some blocks are so foundational and deeply linked with everything else that they affect everything that follows and simply cannot be removed without breaking everything.

Working together

Tinkering is quite hard to do in teams, except as sounding boards for reflection on a process already in motion. It is instructive to visit LegoLand to see how it can work, though. In the play spaces of LegoLand one sees kids (and more than a few adults) working alone on building things, but they are doing so in a very social space. They talk about what they are doing, see what others are doing and, sometimes, put their bits of assemblies together, making bigger and more complex artefacts. We can see similar processes at work in GitHub, a site where programmers, often working alone, post projects that others can fork and, through pull requests, return in modified form to their originators or others, with or without knowing them or interacting with them in any other way. It’s a wonderful evolutionary tinker-space. If programs are reasonably modular, people can work on different pieces independently, which can then be assembled and reassembled by others. Inspiration, support, patterns of thinking and problem solving, as well as code, flow through the system. The tinkering of others becomes a part of your own tinker-space. It’s a learning space – a space where people learn but also a space that learns. The fundamental social forms for tinkering are not traditional, purpose-driven, structured and scheduled teams (groups), but networks and, more predominantly, sets of people connected by nothing but shared interest and a shared space in which to tinker.

Planning

As well as resulting in inefficient systems, tinkering is not easy to plan. At the start, one never knows much more than the broad goal (which may change, or may not even be there at all) and the next steps. You can build very big systems by tinkering (back to education again, but let’s go large on this and think of the whole of Gaia) but it is very hard to do so with a fixed purpose in mind and harder still to do so to a schedule. At best, you might be able to roughly identify the kind of task and look to historical data to get some statistical approximation of how long it might take for something useful to emerge.

A corollary of the difficulty of planning (indeed, of the fact that it is counter-productive to do so) is that it is very easy to be thrown off track. Other things, especially those that involve other people who rely on you, can very quickly divert the endeavour. At the very least, time has to be set aside to tinker and, come hell or high water, that time should be used. Tinkering often involves following tenuous threads and keeping many balls in the air at once (mixing metaphors is a good form of tinkering) so distractions are anathema to the effective tinkerer. That said, coming up for a breath of air can remind you of other items in the tinker-chest that may inspire or provoke new ways of assembling things. It is a balance.

Evolution, not design

Naive creationists have in the past suggested that the improbability of finding something as complex as even a watch, let alone the massively more complex mechanisms of the simplest of organisms, means that there must be an intelligent designer. This is sillier than silly. Evolution works by a ratchet, each adaptation providing the basis for the next, with some neat possibilities emerging from combinatorial complexity as well. Given enough time and a suitable mechanism, exponentially increasingly complex systems are not just possible but overwhelmingly probable. In fact, it would be vastly more difficult to explain their absence than their existence. But they are not the result of a plan. Likewise for tinkering with technologies. If you take two complex things and put them together, there is a better than fair chance that you will wind up with something more complex that probably does more than you imagined or intended when you stuck them together. And, though maybe there is a little less chance of disaster than in the random-ish recombinations of natural evolution, the potential for the unexpected increases with the complexity. Most unexpected things are not beneficial – the bugs in every large piece of software attest to that, as do most of my attempts at physical tinkering over the course of my lifetime. However, now and then, some can lead to more actual possibles. The adjacent possible is what might happen next but, in many cases, changes simply come with baggage. Gould calls these exaptations – they are not adaptations as such, but side-effects or consequences of adaptation. Gould uses the example of the spandrels of San Marco to illustrate this point, showing how the structure of the cathedral of San Marco, with its dome sitting on rounded arches, unintentionally but usefully created spaces where they met that proved to be the perfect place to put images of saints – in fact, they seem made for them. But they are not – the spaces are just a by-product of the design that were co-opted by the builders of the cathedral for a useful purpose. A lot of systems work that way. It is the nature of their assembly to create both constraints and affordances, with path dependencies and patterns early on deeply defining later growth and change. Effective tinkering involves using such spandrels, and that means having to think about what you have built. Thinking deeply.

The Reflection Space

Just tinkering can be fun but, to make it a useful research process, it should involve more than just invention. It should also involve discovery. It is essential, therefore, that the process is seen as one of reflective dialogue with the creations we make. Reflection is not just part of an iterative cycle – it is embedded deeply and inextricably throughout the process. Only if we are able to think constructively about what we are doing, as well as what we have done, can this generate ideas, models, principles and foundations for further development. It is part of the dialogue with the objects (physical, conceptual, etc.) that we produce and, perhaps even more importantly, it is the real research output of the tinkering process. Reflection is the point at which we discover rather than just invent. In part it is to think about meaning and consequence, in part to discover the inevitable exaptations, in part to spot the next adjacent possible. This is not a simple collaboration. Much of the time we argue with the objects we create – they want to be one way but we want them to be another and, from that tension, we co-create something new.

We need to build stories and rich pictures as much as we need to build technologies. Indeed, it doesn’t really matter that much if we fail to produce any useful artefact through tinkering, as long as the stories have value.  From those stories spin ideas, inspirations, and repeatable patterns. Stories allow us to critique what we have done and learn from it, to see it in a broader context and, perhaps, to discover different contexts where the ideas might apply. And, of course, these stories should be shared, whether with a few friends or the world, creating further feedback loops as well as spreading around what we have discovered.

Stories don’t have to be in words. Pictures are equally useful, often more so, and, often most useful of all, our interactions with our creations can tell a story too. This is obviously the case in things like games, Arduino projects or interactive site development, but it is just as true of making things like furniture, accessories and most of the things that can be made or enhanced with Sugru.

Here are two brief stories that I hope begin to reveal a little of what I mean.

A short illustrative story

Early in my sabbatical I wrote one Elgg plugin that, as it emerged, I was very pleased with, because it scratched an itch that I have had for a long time. It allowed anyone to tag anything, and for duplicate tags used by different people to be displayed as a tag cloud instead of the normal list of tags that comes with a post. This was an assembly of many ideas, and was a conversation with the Elgg framework, which provided a lot of the structure and form of what I wanted to achieve. In doing it, I was learning how to program in Elgg but, in shaping Elgg, I was also teaching it about the theories that I had developed over many years. If it had worked, it would have given me a chance to test those theories, and the results would probably have led to some refinements, but that was really a secondary phase of the research process and not the one that I was focusing on.

Before any other human being got to use the system, the research process was shaping and refining the ideas. With each stage of development I was making discoveries. A big one was the per-post tag cloud. My initial idea had simply been to allow people to tag one another’s posts. This would have been very useful in two main ways. Firstly, it would give people the chance to meaningfully bookmark things they had found interesting. Rather than the typical approach of putting bookmarks into organized hierarchies, tags could be used to apply faceted categorizations, allowing posts to cross hierarchical boundaries easily and enabling faceted classification of the things people found interesting. Secondly, the tags would be available to others, allowing social construction of an ontology-like thing, better search, a more organized site. Tags are already very useful things but, in Elgg, they are applied by post authors and there are not enough of them for strong patterns to develop on their own in any but quite large systems. One of the first things I realized was that this meant the same tag might be used for the same post more than once. It was hard to miss, in fact, because what I saw when I ran the program was multiple tags for each post – the system I had assembled was shouting at me. Having built a tag cloud system in the 1990s, before I even knew the word ‘tag’ let alone ‘tag cloud’, I was primed to spot the opportunity for a tag cloud, which is a neat way to give shape and meaning to a social space. Individually, tags categorize into binary categories. Collectively, they become fuzzy and scalar – an individual post can be more of one tag than another, not because some individual has decided so, but because a crowd has decided so. This is more than a folksonomy. It is a kind of collaborative recommender system, a means to help people recognize not just whether something is good or bad but in what ways it is good or bad. Already, I was thinking of my PhD work, which involved fuzzy tags I called ‘qualities’ (e.g. ‘good for beginners’, ‘comprehensive’, ‘detailed’, etc.) that allowed users of my CoFIND system not just to categorize but to rate posts, on multiple pedagogical dimensions. Higher tag weight is an implicit proxy for saying that, in the context of what is described by this tag, the post has been recommended. As I write this (writing is great tinkering – this is the power of reflection) I realize that I could explicitly separate such tags from Elgg’s native tags, which might be a neat way to overcome the limitations of the system I wrote about 15 years ago, which was a good idea but very hard to use. Anyway…
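
To make that a little more concrete, here is a toy sketch – plain PHP with invented data and names, not the plugin’s actual code – of how duplicate tags applied by different people become the scalar, crowd-weighted categorizations described above:

<?php
// A toy illustration: each entry is a hypothetical (tagger, post, tag) record;
// the weight of a tag on a post is how many different people applied it.

$taggings = [
    ['tagger' => 'alice', 'post' => 42, 'tag' => 'good for beginners'],
    ['tagger' => 'bob',   'post' => 42, 'tag' => 'good for beginners'],
    ['tagger' => 'carol', 'post' => 42, 'tag' => 'detailed'],
    ['tagger' => 'bob',   'post' => 99, 'tag' => 'detailed'],
];

function tag_cloud_for_post(array $taggings, $post)
{
    $weights = [];
    $seen = [];   // so the same person tagging the same post twice counts once
    foreach ($taggings as $t) {
        if ($t['post'] !== $post) {
            continue;
        }
        $key = $t['tagger'] . '|' . $t['tag'];
        if (isset($seen[$key])) {
            continue;
        }
        $seen[$key] = true;
        $weights[$t['tag']] = ($weights[$t['tag']] ?? 0) + 1;
    }
    arsort($weights);   // heaviest (most agreed-upon) tags first
    return $weights;
}

print_r(tag_cloud_for_post($taggings, 42));
// ['good for beginners' => 2, 'detailed' => 1]: post 42 is *more*
// 'good for beginners' than 'detailed' because more people said so.

The weights are exactly the implicit recommendations described above: a post becomes ‘more’ of one quality than another because a crowd, not an individual, has decided so.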

It worked like a dream, exactly as I had planned, up to the point that I tried to allow people to see the things they had tagged, which was pretty central to the idea and without which the whole thing was pretty pointless: it is highly improbable that individuals would see great value in tagging things unless they could use those tags to find and organize stuff on the site. As it turns out, the Elgg developers never thought tags might be used this way, so the owner of a tag is not recorded in the system. The person who tags a post is just assumed to be the owner of the post. I’m not a great Elgg developer (which is why I did not realize this till it was too late) but I do know the one cardinal rule – you never, ever, ever mess with the core code or the data model. There was nothing I could do except start again, almost completely from scratch. That was a lot of work – weeks of effort. It was not entirely wasted – I learned a lot in the process and that was the central purpose of it all. But it was very discouraging. Since then, as I have become more immersed in Elgg, my skills have improved. I think I can now see roughly how this could be made to work. The reason I know this is because I have been tinkering with other things and, in the process, found a lightweight way of using relationships to link individuals and objects that, in the ways that matter, can behave much like tags. Now that I have the germ of an idea about how to make this pedagogically powerful, hopefully I will have time to do that.
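
To show the underlying data-model issue, here is another sketch – invented structures, not Elgg’s API – contrasting tags that are simply attached to a post with tags modelled as owned triples, which is roughly what the relationship-based approach mentioned above makes possible:

<?php
// Native-style tags: strings attached to the post, with no record of who added them.
$post = ['guid' => 42, 'owner' => 'alice', 'tags' => ['elgg', 'tinkering']];

// Owned tags: each one records who applied it, to what, much like the
// lightweight relationships mentioned above.
$owned_tags = [
    ['tagger' => 'bob',   'tag' => 'tinkering', 'target' => 42],
    ['tagger' => 'carol', 'tag' => 'bricolage', 'target' => 42],
    ['tagger' => 'bob',   'tag' => 'bricolage', 'target' => 99],
];

// "Show me everything I have tagged" becomes a trivial query.
function things_tagged_by(array $owned_tags, $tagger)
{
    $found = [];
    foreach ($owned_tags as $t) {
        if ($t['tagger'] === $tagger) {
            $found[$t['target']][] = $t['tag'];
        }
    }
    return $found;
}

print_r(things_tagged_by($owned_tags, 'bob'));
// [42 => ['tinkering'], 99 => ['bricolage']]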

Another illustrative story

One of my little sabbatical projects (which actually turned out to be about the biggest, and it’s not over yet) was to build an OpenBadge plugin. This was actually prompted by and written for someone else. I would not have thought of it as a good itch to scratch because I happen to know something about badges and something about learning and, from what I have seen, badges (as implemented so far) are at best of mixed value in learning. In the vast majority of instances where I have seen them used, they are at the very best as demotivating as they are motivating. Much of the time it is worse than that: they turn into extrinsic proxies that divert motivation away from learning almost entirely. They embed power structures and create divisions. From a learning perspective, they are a pretty bad idea. On the plus side, they are a very neat way to do credentials, which is great if that is what you are aiming for, opening up the potential for much more interesting separation of teaching and accreditation, diverse learning paths, and distributed learning, so I don’t hate them. In fact, I quite like them. But their pedagogical risks mean that I don’t love them enough to have even considered writing a plugin that implements them.

Despite reservations, I said I would do it. It didn’t seem like a big task because I reckoned I could just lightly modify one of a couple of existing (non-open) badge plugins that had already been written for Elgg. I also happened to have some parts lying around – my pedagogical principles, the Elgg framework, the Mozilla OpenBadge standard documentation, various code snippets for implementing OpenBadges – that I could throw together. Putting these pieces together made me realize early on that social badging could be a good idea that might help overcome several of my objections to their usual implementations. Because of the nature of Elgg, the obvious way to build such a plugin was to let anyone make a badge and anyone award one, making use of Elgg’s native fine-grained, bottom-up permissions. This meant that the usual power relationships implied in badging would not be such a problem. This was an interesting start.

Because Elgg has no roles in its design (apart from a single admin role for the site builder and manager), and so no explicit teaching roles, this could have been tricky from a trust perspective – although its network features would mean you could trust awards by people you know, how would you trust an award from someone you don’t know and who is not playing a traditional teacher role in a power hierarchy? Even with the native Elgg option to ‘recommend’ a badge (so more people could assert its validity) this could become chaotic. But my principles told me that teacher control is a bad thing, so I was not about to add a teacher role.

After tossing this idea around for a few minutes, I came up with the idea of inheritable badges – in other words, a badge could be configured so that you could only award it if you had received it yourself. In an instant, this began to look very plausible. If you could trace the badge to someone you trust (e.g. a teacher, a friend, or someone you know is trustworthy), which is exactly what Elgg would make possible by default, then you could trust anyone else who had awarded the badge to at least have the competence that the badge signifies, and so be more likely to be able to recognize it accurately in someone else. This was neat – it meant that accreditation could be distributed across a network of strangers (as in a MOOC) without the usual difficulties of the blind accrediting the blind that tend to afflict peer assessment methods in such contexts. Better still, this is a great way to signify and gain social capital, and to build deeper and richer bonds in a community of strangers. It is, I think, among the first scalable approaches to accreditation in a connectivist context, though I have not looked too deeply into the literature, so stand to be corrected.

Later, as I tinkered and became immersed in the problem, thinking how it would be used, I added a further option to let a badge creator specify a prerequisite award (any arbitrarily chosen badge) that must be held before the badge could be awarded. As well as allowing more flexibility than simple inheritance, this meant that you could introduce roles by the back door if you wished, by allowing someone to award a ‘teacher’ badge or similar, and only allowing people holding that badge to make awards of other badges. I then realized this was a generalization of the inheritance feature, so I got rid of inheritance and simply added the option to make the prerequisite the current badge itself. It is worth noting that this was quite difficult to do – had I planned it from the start, it would have been trivial, but I had to unpick what I had done as well as build it afresh.
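
The rule itself is simple enough to sketch. This is an illustration with invented names, not the plugin’s code: a badge may name a prerequisite badge that the would-be awarder must hold, and making a badge its own prerequisite reproduces the original ‘inheritable’ behaviour.

<?php
$badges = [
    'teacher'    => ['prerequisite' => null],          // anyone may award it
    'php-basics' => ['prerequisite' => 'php-basics'],  // inheritable: must hold it yourself
    'assessor'   => ['prerequisite' => 'teacher'],     // a role by the back door
];

// Who currently holds what (in the real plugin this would come from award objects).
$holdings = [
    'alice' => ['teacher', 'php-basics'],
    'bob'   => ['php-basics'],
];

function can_award(array $badges, array $holdings, $awarder, $badge)
{
    $prereq = $badges[$badge]['prerequisite'];
    if ($prereq === null) {
        return true;   // no prerequisite: open awarding
    }
    return in_array($prereq, $holdings[$awarder] ?? [], true);
}

var_dump(can_award($badges, $holdings, 'bob',   'php-basics')); // true: Bob holds it
var_dump(can_award($badges, $holdings, 'bob',   'assessor'));   // false: not a 'teacher'
var_dump(can_award($badges, $holdings, 'alice', 'assessor'));   // true

Because every award can be traced back along this chain to someone you already trust, the check above is what lets accreditation spread through a network of strangers without the blind accrediting the blind.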

Social badging, peer assessment, scalable viral accreditation, social capital, motivation – this was looking cool. Furthermore, tinkering with an existing framework suggested other cool things. By default, it was a lot easier to build this if people could award badges to themselves. The logical next step would have been to prevent them from doing this but, as I saw it working, I realized self-badging was a very good idea! It bothered me for a moment that it might be a bit confusing, not to mention appear narcissistic, if people started awarding themselves badges. However, Elgg posts can be private, so people giving themselves badges would not have to show them to others. But they could, and that could be useful. They could make a learning contract with someone else or a group of people, and allow them to observe, thus not only improving motivation and honesty, but also building bonding social capital. So, people could set goals for themselves and award themselves badges when they accomplished them, and do so in a safe social context that they would be in control of. It might be useful in many self-directed learning contexts.

These were not ideas that simply flowed in my head from start to finish: they came about as a direct result of dialogue with what I was creating, and they could only have done so because I already had ideas and principles about things like portfolios, learning contracts and social learning floating around in my toolkit, ready to be assembled. I did include an admin option to turn off self-awarding at a system level, in case anyone disagreed with me and because I could imagine contexts where it might get out of hand. I even (a little reluctantly) made it possible to limit badge awarding to admins only, so that there could be a ‘root’ badge or two that would provide the source of all accreditation and awarding. Even then, it could still be a far more social approach to accreditation than most, making expertise not just something that is awarded with an extrinsic badge, but also something that gives real power to its holder to play an important role in a learning community.

This is not exactly what my sponsors asked for: they wanted automation, so that an administrator could set some criteria and the system would automatically award badges when those criteria had been met. Although I reckon my social solution meets the demand for scalability that lay at the heart of that request, I realized that, with some effort, I could assemble all of this with a karma point plugin that I happened to have in my virtual toolshed, in order to enable automated badge awarding for things like posting blogs, etc. Because there was no obvious object for which such an award could be given, as it could relate to any arbitrary range of activities, I made the user’s own profile the object providing the evidence. Again, this was just assembling what was there – it was an adjacent possible, so I took it. I could, had I not been lazy, have generated a page displaying all of the evidence, but I did not (though I still might – it is an adjacent possible that might be worth exploring). And so, of course, it is now possible to award a badge to a user, rather than for a specific post, which, though not normally a good idea from a motivation perspective, could have a range of uses, especially when assembled with the tabbed profile we built earlier (what I refer to in academic writings as a ‘context switcher’ and which can be used as a highly flexible portfolio system).
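
For what it is worth, the automated option boils down to something like the following rough sketch (all names invented; the real thing builds on Elgg and the karma point plugin): an admin sets a points threshold for a badge, activity earns points, and the badge is awarded when the threshold is reached, with the user’s profile standing in as the evidence object.

<?php
$criteria = ['blogger' => 10];              // e.g. 10 blog posts earns the badge
$points   = ['alice' => 12, 'bob' => 4];    // accrued karma-style points

function automatic_awards(array $criteria, array $points)
{
    $awards = [];
    foreach ($points as $user => $score) {
        foreach ($criteria as $badge => $threshold) {
            if ($score >= $threshold) {
                $awards[] = [
                    'badge'    => $badge,
                    'holder'   => $user,
                    'evidence' => "/profile/$user",   // the profile stands in as evidence
                ];
            }
        }
    }
    return $awards;
}

print_r(automatic_awards($criteria, $points));
// Alice earns the 'blogger' badge; Bob does not (yet).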

These are just a sample of many conversations I had with the tools and objects that were available to me. I influenced them, they influenced me. There were plenty of others – exaptations like my discovery that the design I had opted for, which made awards and badges separate objects, gave me a way of making awards persistent, so badge owners could not sneakily change them afterwards, thus enhancing trust in the system. Or that the Elgg permissions model made it very simple to reliably assert ownership, which is very important if you are going to distribute accreditation over multiple sites and systems. Or that the fact that it turned out to be an incredibly complex task to make it all work in an Elgg Group context was a blessing, because I therefore looked for alternatives and found that the prerequisite functionality does the job at least as well, and much more elegantly. Or that the Elgg views system made it possible to fairly easily create OpenBadge assertions for use on other sites. The list goes on.
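
The first of those exaptations is easy to illustrate in the abstract (again a sketch with invented names, not the plugin’s code): because the award is a separate object, it can freeze a copy of the badge’s details at the moment it is made, so later edits to the badge cannot quietly change what was awarded.

<?php
$badge = [
    'guid'        => 7,
    'name'        => 'PHP basics',
    'description' => 'Can write and debug a simple PHP script',
];

function make_award(array $badge, $awarder, $recipient)
{
    return [
        'badge_guid'  => $badge['guid'],
        'awarder'     => $awarder,
        'recipient'   => $recipient,
        'awarded_on'  => date('c'),
        // A frozen copy of what the badge said when the award was made.
        'name'        => $badge['name'],
        'description' => $badge['description'],
    ];
}

$award = make_award($badge, 'alice', 'bob');

// Even if the badge's owner later rewrites the badge...
$badge['description'] = 'Something much grander';

// ...the award still records what was actually recognized at the time.
echo $award['description'] . PHP_EOL;   // "Can write and debug a simple PHP script"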

It was not all wonderful, though. Sometimes the conversation got weird. My plan to start with an existing badge plugin quickly bit the dust. It turns out that the badge plugins that were available were both of the kind I hate – they awarded badges to individuals, not for specific competences. To add insult to injury, they could be awarded only by the administrator, either automatically through accrued points or manually. This was exactly the kind of power structure that I wanted to get away from. From an architectural perspective, making these flawed plugins work the way I wished would have been much harder than writing the plugin from scratch. However, in the spirit of tinkering, I didn’t start completely from scratch. I looked around for a plugin that would do some of the difficult stuff for me. After playing with a few, I opted for the standard Elgg Files plugin, because that ought to have made light work of storing and organizing the badge images. In retrospect, maybe not the best plan, but it was a starting point. After a while I realized I had deleted or not used 90% of the original plugin, which was more effort than it was worth. I also got stuck in a path dependency again, when I wanted to add multiple prerequisites (i.e. you could specify more than one badge as a prerequisite): by that time, my ingenious single-prerequisite model was so firmly embedded that it would have taken more than a solid week to change it. I did not have the energy, or the time. And, relatedly, my limited Elgg skills and lack of forward planning meant that I did not always divide the code into neatly reusable chunks. This still continues to cause me trouble as I try to make the OpenBadge feature work. Reflecting on such issues is useful – I now know that multiple inheritance makes sense for this kind of system, which would not have occurred to me if I hadn’t built a system with a single-prerequisite data model. And I have a better idea about what kind of modularity works best in an Elgg system.

Surfing the adjacent possible

Like all stories worthy of the name, my examples are highly selective and probably contain elements of fiction in some of the details of the process. Distance in time and space changes memories, so I cannot promise that everything happened in the order and manner presented here – it was certainly a lot more complicated, messy and detailed than I have described it to be. I think this fictionalizing is crucial, though. Objective reporting is exactly not what is needed in a bricolage process. It is the sense-making that matters, not religious adherence to standards of objectivity. What matters are the things we notice, the things we reflect on and the things we consider to be important. Those are the discoveries.

This is a brief and condensed set of ten of the main principles that I think matter in effective tinkering for research:

  1. do not design – just build
  2. start with pieces that are fully formed
  3. surround yourself with both quantity and diversity in tools, materials, methods, and perspectives
  4. dabble hard – gain skills, but be suspicious of expertise
  5. look for exaptations and surf the adjacent possible
  6. avoid schedules and goals, but make time and space for tinkering, and include time for daydreaming
  7. do not fear dismantling and starting afresh
  8. beware of teams, but cultivate networks: seek people, not processes
  9. talk with your creations and listen to what they have to say
  10. reflect, and tell stories about your reflections, especially to others

As I read these ideas it strikes me that this is the very antithesis of how research, at least in the fields I work in, is normally done, and that it would be extremely hard to get a grant for this. With a deliberate lack of process control, no clear budgets, and no clear goals, this is not the kind of thing grant awarders normally relish. Whatever. It is still worth doing.

Tinkering as a research methodology offers a lot – it is a generative process of discovery that builds ideas and connections as much as it builds objects that are interesting or useful. It is far from being a random process, but it is unpredictable. That is why it is interesting. I think that some aspects of it resemble a systematic literature review: the discovery and selection of appropriate pieces to assemble, in particular, is something that can be systematized to some extent and, just as in a literature review, once you start with a few pieces, other pieces fall naturally into place. It is very closely related to design-based research and action research, with their formal cycles and iterative processes, although the iteration cycle in tinkering is far finer grained, it is not as rigid in its requirements, and it deliberately avoids the kind of abstractions that such methodologies thrive on. It might be a subspecies, though. It definitely resembles and can benefit from soft systems methodologies, because it is the antithesis of hard systems design. Rich pictures have a useful role to play, in particular, though not at the early stages where they are used in soft systems methods. And, unlike soft systems, the system isn’t the goal.

Finally, tinkering is not a solution to everything. It is a means of generating knowledge. On the whole, if the products are worthwhile, then they should probably feed into a better engineered system. Note, however, that this is not prototyping. Though products of tinkering may sometimes play the role of a prototype at a later stage in a product cycle, the point of the process is not to produce a working model of something yet to come. That would imply that we know what we are looking for and, to a large extent, how we will go about achieving it. The point is to make discoveries. 

This is not finished yet. It might just turn out to be a lazy way to do research or, perhaps, just another name for something that is already well pinned down. It certainly lacks rigour but, since the purpose is generative, I am not too concerned about that, as long as it works to produce new knowledge. I tinker on, still surfing the adjacent possible.

Have we all been duped by the Myers-Briggs test?

Not all of us, no.

But, if you reckon there is any validity at all to personality tests, learning styles and all such pseudo-scientific hokum, the answer is ‘yes’: you have been duped. This brief, digestible article presents a small sample of the compelling evidence.

Address of the bookmark: http://fortune.com/2013/05/15/have-we-all-been-duped-by-the-myers-briggs-test/

Three glimpses of a fascinating future

I’d normally post these three links as separate bookmarks but all three, which have popped up in the last few days, share a common theme that is worth noting:

http://singularityhub.com/2014/09/04/experimental-rat-brain-fighter-pilot-may-yield-insights-into-how-the-brain-works/

In this, a neural network made out of the brain cells of a rat is trained to fly a flight simulator.

http://news.sky.com/story/1329954/world-first-as-message-sent-from-brain-to-brain

In this, signals are transmitted directly from one brain to another, using non-invasive technologies (well – if you can call a large cap covered in sensors and cables ‘non-invasive’!)

http://singularityhub.com/2014/09/03/neuromodulation-2-0-new-developments-in-brain-implants-super-soldiers-and-the-treatment-of-chronic-disease/

This reports on a DARPA neuromodulation/neuroaugmentation project to embed tiny electronic devices in brains to (amongst other things) cure brain diseases and conditions, augment brain function and interface with the outside world (including, presumably, other brains). This article contains an awesome paragraph:

“What makes all of this so much more interesting is the fact that, unlike all the other systems of the body, which tend to reject implants, the nervous system is incorporative—meaning it’s almost custom-designed to handle these technologies. In other words, the nervous system is like your desktop computer— as long as you have the right cables, you can hook up just about any peripheral device you want.”

I’m both hugely excited and deeply nervous about these developments and others like them. This is serious brain hacking. Artificial intelligence is nothing like as interesting as augmented intelligence and these experiments show different ways this is beginning to happen. It’s a glimpse into an awe-inspiring future where such things gain sophistication and ubiquity. The potential for brain cracking, manipulation, neuro-digital divides, identity breakdown, privacy intrusion, large-scale population monitoring and control, spying, mass insanity and so on is huge and scary, as is the potential for things to go horribly wrong in so many new and extraordinary ways. But I would be one of the first to sign up for things like augmenting my feeble brain with the knowledge of billions (and maybe giving some of my knowledge back in return), getting to see the world through someone else’s eyes or even just being able to communicate instantly, silently and unambiguously with loved ones wherever they might be. This is transhumanity writ large, a cyborg future where anything might happen. Smartphones, televisions, the web, social media, all the visible trappings of our information and communication technologies that we know now, might very suddenly become amusing antiques, laughably quaint, redundant and irrelevant. A world wide web of humans and machines (biological and otherwise), making global consciousness (of a kind, at least) a reality. It is hard but fascinating to imagine what the future of learning and knowledge might be in the kind of super-connected scenario that this implies. At the very least, it would disrupt our educational systems beyond anything that has ever come before! From the huge to the trivial, everything would change. What would networked humans (not metaphorically, not through symbolic intermediaries, but literally, in real time) be like? What would it be like to be part of that network? In what new ways would we know one another, and how would our attitudes to one another change? Where would our identities begin and end? What would happen if we connected our pets? What would be the effects of a large solar flare that wiped out electronic devices and communication once we had grown used to it all? Everything blurs, everything connects. So very, very cool. So very, very frightening.