Automated Collaborative Filtering and Semantic Transports – draft 0.72

I had to look up this article by the late Sasha Chislenko for a paper I was reviewing today, and I am delighted that it is still available at its original URL, though Chislenko himself died in 2000. I’ve bookmarked the page on various systems dating back to 1997 but I don’t think I’ve ever done so on this site, so here it is, still open to the world. Chislenko was writing in public way before it was fashionable and, I think, probably before the first blogs – this is still, and sadly always will be, a work in progress.

This particular page was one of a handful of articles that deeply influenced my early research and set me on a course I’m still pursuing to this day. Back in 1997, as I started my PhD, I had conceived of and started to build a web-based tagging and bookmark sharing system to gather learner-generated recommendations of resources and people so that the crowd could teach itself. It seemed like a common sense idea but I was not aware of anything else like it (this was long before del.icio.us, and Slashdot was just a babe in arms), so I was looking for related work and then I found this. It depressed me a little that my idea was not quite as novel as I had hoped, but this article knocked me for six then and it continues to impress me now. It’s still great reading, though many of the suggestions and hopes/fears expressed in it are so commonplace that we seldom give them a second thought any more.

This, along with a special issue of the Communications of the ACM released the same year, was my first introduction to collaborative filtering, the technology that would soon sit behind Amazon and, later, everything from Google Search to Netflix and eBay. It gave a name to what I was doing and to the system I was building, which was consequently christened ‘CoFIND’ (Collaborative Filter in N-Dimensions).
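In case the term is unfamiliar, the basic machinery of collaborative filtering is simple: predict what someone might value from the recorded ratings of people whose tastes have historically resembled theirs. The little Python sketch below shows a minimal user-based version using cosine similarity over a toy set of ratings; it is purely illustrative (the users, items, numbers and weighting scheme are invented for the example), not Chislenko’s formulation and certainly not the CoFIND algorithm which, as its name suggests, rated resources along more than one dimension.

```python
import math

# Toy ratings (user -> {item: rating}); the users, items and numbers are invented.
ratings = {
    "ann": {"article_a": 5, "article_b": 3, "article_c": 4},
    "bob": {"article_a": 4, "article_b": 1, "article_d": 5},
    "cat": {"article_b": 2, "article_c": 5, "article_d": 4},
}

def cosine_similarity(u, v):
    """Cosine similarity between two users' rating vectors (dot product over co-rated items)."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(target, ratings, top_n=3):
    """Score items the target hasn't rated, weighting each neighbour's ratings by similarity."""
    scores, weights = {}, {}
    for other, their_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine_similarity(ratings[target], their_ratings)
        if sim <= 0:
            continue
        for item, rating in their_ratings.items():
            if item in ratings[target]:
                continue  # only recommend items the target hasn't already rated
            scores[item] = scores.get(item, 0.0) + sim * rating
            weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(((score / weights[item], item) for item, score in scores.items()),
                    reverse=True)
    return [(item, round(predicted, 2)) for predicted, item in ranked[:top_n]]

print(recommend("ann", ratings))  # prints something like [('article_d', 4.48)]
```

Item-based variants, matrix factorization and the hybrid approaches used by the big commercial systems are considerably more sophisticated, but the underlying principle – that the crowd’s recorded judgements can teach each of its members – is the one that excited me back in 1997.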

Chislenko was a visionary who foresaw many of the developments over the past couple of decades and, as importantly, understood many of their potential consequences.  More of his work is available at http://www.lucifer.com/~sasha/articles/ – just a small sample of his astonishing range, most of it incomplete notes and random ideas, but packed with inspiration and surprisingly accurate prediction. He died far too young.

Address of the bookmark: http://www.lucifer.com/~sasha/articles/ACF.html

Constructivism versus objectivism: Implications for interaction, course design, and evaluation in distance education.

I’d not come across this (2000) article from Vrasidas till now, more’s the pity, because it is one of the clearest papers I have read on the distinction between objectivist (behaviourist/cognitivist)  and constructivist/social-constructivist approaches to teaching. It wasn’t new by any means even 15 years ago, but it provides an excellent overview of the schism (both real and perceived) between objectivism and constructivism and, in many ways, presages a lot of the debate that has gone on since surrounding the strengths, weaknesses and novelty of connectivist approaches. Also contains some good practical hints about how to design learning activities.

Address of the bookmark: http://vrasidas.intercol.edu/continuum.pdf

Instructional quality of Massive Open Online Courses (MOOCs)

This is a very interesting, if (I will argue) flawed, paper by Margaryan, Bianco and Littlejohn using a Course Scan instrument to examine the instructional design qualities of 76 randomly selected MOOCs (26 cMOOCs and 50 xMOOCs – the imbalance was caused by difficulties finding suitable cMOOCs). The conclusions drawn are that very few MOOCs, if any, show much evidence of sound instructional design strategies. In fact they are, according to the authors, almost all an instructional designer’s worst nightmare, on at least some dimensions.  
I like this paper but I have some fairly serious concerns with the way this study was conducted, which means a very large pinch of salt is needed when considering its conclusions. The central problem lies in the use of prescriptive criteria to identify ‘good’ instructional design practice, and then using them as quantitative measures of things that are deemed essential to any completed course design. 

Doubtful criteria 

It starts reasonably well. Margaryan et al use David Merrill’s well-accepted abstracted principles for instructional design to identify kinds of activities that should be there in any course and that, being somewhat derived from a variety of models and theories, are pretty reasonable: problem centricity, activation of prior learning, expert demonstration, application and integration. However, the chinks begin to show even here, as it is not always essential that all of these are explicitly contained within a course itself, even though consideration of them may be needed in the design process – for example, in an apprenticeship model, integration might be a natural part of learners’ lives, while in an open ‘by negotiated outcome’ course (e.g. a typical European PhD) the problems may be inherent in the context. But, as a fair approximation of what activities should be in most conventional taught courses, it’s not bad at all, even though it might show some courses as ‘bad’ when they are in fact ‘good’. 
The authors also add five more criteria abstracted from literature relating rather loosely to ‘resources’, including: expert feedback; differentiation (i.e. personalization); collaboration; authentic resources; and use of collective knowledge (i.e. cooperative sharing). These are far more contentious, with the exception of feedback, which almost all would agree should be considered in some form in any learning design (and which is a process thing anyway, not a resource issue). However, even this does not always need to be the expert feedback that the authors demand: automated feedback (which is, to be fair, a kind of ossified expert feedback, at least when done right), peer feedback or, best of all, intrinsic feedback can often be at least as good in most learning contexts. Intrinsic feedback (e.g. when learning to ride a bike, falling off it or succeeding to stay upright) is almost always better than any expert feedback, albeit that it can be enhanced by expert advice. None of the rest of these ‘resources’ criteria are essential to an effective learning design. They can be very useful, for sure, although it depends a great deal on context and how it is done, and there are often many other things that may matter as much or more in a design, like including support for reflection, for example, or scope for caring or passion to be displayed, or design to ensure personal relevance. It is worth noting that Merrill observes that, beyond the areas of broad agreement (which I reckon are somewhat shoehorned to fit), there is much more in other instructional design models that demands further research and that may be equally if not more important than those identified as common.

It ain’t what you do…

Like all things in education, it ain’t what you do but how you do it that makes all the difference, and it is all massively dependent on subject, context, learners and many other things. Prescriptive measures of instructional design quality like these make no sense when applied post-hoc because they ignore all this. They are very reasonable starting frameworks for a designer that encourage focus on things that matter and can make a big difference in the design process, but real life learning designs have to take the entire context into account and can (and often should) be done differently. Learning design (I shudder at the word ‘instructional’ because it implies so many unhealthy assumptions and attitudes) is a creative and situated activity. It makes no more sense to prescribe what kinds of activities and resources should be in a course than it does to prescribe how paintings should be composed. Yes, a few basics like golden ratios, rules of thirds, colour theory, etc can help the novice painter produce something acceptable, but the fact that a painting disobeys these ‘rules’ does not make it a bad painting: sometimes, quite the opposite. Some of the finest teaching I have ever seen or partaken of has used the most appalling instructional design techniques, by any theoretical measure.

Over-rigid assumptions and requirements

One of the biggest troubles with such general-purpose abstractions is that they make some very strong prior assumptions about what a course is going to be like and the context of delivery. Thanks to their closer resemblance to traditional courses (from which it should be clearly noted that the design criteria are derived) this is, to an extent, fair-ish for xMOOCs. But, even in the case of xMOOCs, the demand that collaboration, say, must occur is a step too far: as decades of distance learning research has shown (and Athabasca University proved for decades), great learning can happen without it and, while cooperative sharing is pragmatic and cost-effective, it is not essential in every course. Yes, these things are often a very good idea. No, they are not essential. Terry Anderson’s well-verified (and possibly self-confirming, though none the worse for it) theorem of interaction equivalency  makes this pretty clear.

cMOOCs are not xMOOCs

Prescriptive criteria as a tool for evaluation make no sense whatsoever in a cMOOC context. This is made worse because the traditional model is carried to extremes in this paper, to the extent that the authors bemoan the lack of clear learning outcomes. This doesn’t naturally fall out from the design principles at all, so I don’t understand why learning outcomes are even mentioned: it seems an arbitrary criterion that has no validity or justification beyond the fact that they are typically used in university teaching. As teacher-prescribed learning outcomes are anathema to Connectivism, it is very surprising indeed that the cMOOCs actually scored higher than the xMOOCs on this metric, which makes me wonder whether the means of differentiation were sufficiently rigorous. A MOOC that genuinely followed Connectivist principles would not provide learning outcomes at all: foci and themes, for sure, but not ‘at the end of this course you will be able to x’. And, anyway, as a lot of research and debate has shown, learning outcomes are of far greater value to teachers and instructional designers than they are to learners, for whom they may, if not handled with great care, actually get in the way of effective learning. It’s a process thing – helpful for creating courses, almost useless for taking them. The same problem occurs in the use of course organization in the criteria – cMOOC content is organized bottom-up by learners, so it is not very surprising that they lack careful top-down planning, and that is part of the point.

Apparently, some cMOOCs are not cMOOCs either

As well as concerns about the means of differentiating courses and the metrics used, I am also concerned with how they were applied. It is surprising that there was even a single cMOOC that didn’t incorporate use of ‘collective knowledge’ (the authors’ term for cooperative sharing and knowledge construction) because, without that, it simply isn’t a cMOOC: it’s there in the definition of Connectivism. As for differentiation, part of the point of cMOOCs is that learning happens through the network which, by definition, means people are getting different options or paths, and choosing those that suit their needs. The big point in both cases is that the teacher-designed course does not contain the content in a cMOOC: beyond the process support needed to build and sustain a network, any content that may be provided by the facilitators of such a course is just a catalyst for network formation and a centre around which activity flows and learner-generated content and activity is created. With that in mind it is worth pointing out that problem-centricity in learning design is an expression of teacher control which, again, is anathema to how cMOOCs work. Assuming that a cMOOC succeeds in connecting and mobilizing a network, it is all but certain that a great deal of problem-based and inquiry-based learning will be going on as people post, others respond, and issues become problematized. Moreover, the problems and issues will be relevant and meaningful to learners in ways that no pre-designed course can ever be. The content of a cMOOC is largely learner-generated so of course a problem focus is often simply not there in static materials supplied by people running it. cMOOCs do not tell learners what to do or how to do it, beyond very broad process support which is needed to help those networks to accrete. It would therefore be more than a little weird if they adhered to instructional design principles derived from teacher-led face-to-face courses in their designed content because, if they did, they would not be cMOOCs.

Of course, it is perfectly reasonable to criticize cMOOCs as a matter of principle on these grounds: given that (depending on the network) few will know much about learning and how to support it, one of the big problems with connectivist methods is that of getting lost in social space, with insufficient structure or guidance to suit all learning needs, insufficient feedback, inefficient paths and so on. I’d have some sympathy with such an argument, but it is not fair to judge cMOOCs on criteria that their instigators would reject in the first place and that they are actively avoiding. It’s like criticizing cheese for not being chalky enough.

It’s still a good paper though

For all that I find the conclusions of this paper highly arguable and the methods open to serious criticism, it does provide an interesting portrait of MOOCs through an unconventional lens. We need more research along these lines because, though the conclusions are debatable, what is revealed in the process is a much richer picture of the kinds of things that are and are not happening in MOOCs. These are fine researchers who have told an old story in a new way, and it is enlightening stuff that is worth reading.
 
As an aside, we also need better editors and reviewers for papers like this: little tell-tales like the fact that ‘cMOOC’ gets to be defined as ‘constructivist MOOC’ at one point (I’m sure it’s just a slip of the keyboard as the authors are well aware of what they are writing about) and more typos than you might expect in a published paper suggest that not quite enough effort went into quality control at the editorial end. I note too that this is a closed journal: you’d think that they might offer better value for the money that they cream off for their services.

Address of the bookmark: http://www.sciencedirect.com/science/article/pii/S036013151400178X

Multiple types of motives don't multiply the motivation of West Point cadets

Interesting study analysing the relationship between internal vs instrumental (the authors’ take on intrinsic vs extrinsic) motivation, as revealed in entry questionnaires for West Point cadets, and long-term success in army careers. As you might expect, those with intrinsic motivation significantly outperformed those with extrinsic motivation on every measure.

What is particularly interesting, however, is that extrinsic motivation crowded out the intrinsic in those with mixed motivations. Having both extrinsic and intrinsic motivation is no better than having extrinsic motivation on its own, which is to say it is virtually useless. In other words, as we already know from hundreds of experiments and studies over shorter periods but herein demonstrated for periods of over a decade, extrinsic motivation kills intrinsic motivation. This is further proof that the use of rewards (like grades, performance-related pay, and service awards) in the hope that they will motivate people is an incredibly dumb idea because they actively demotivate. 

Address of the bookmark: http://m.pnas.org/content/111/30/10990.full

Zombie Skinner returns from the dead: an educational horror story

A dark tale for the halloween season

It was a dark and stormy night when Phil deMuth, an investment advisor, sat down to pen this article for Forbes. His voodoo incantations would raise from the dead the ghastly zombified remains of that behaviourist dogmatist, the appropriately named Skinner, and, together, their nightmarish vision would turn the nation’s young into mindless zombies: apathetic, disenfranchised, undead and unthinking fodder suited only to sustaining the ghost of an industrial past. It is because of idiotic but superficially plausible ideas like the ones in this skillfully written article that I have to struggle to unteach, to try (and often fail) to help students learn how to love learning again after years of having it beaten out of them. So, in case anyone is persuaded by the slick but outrageously wrong arguments of the article, or is one of the many educators who actually make use of this claptrap, this post is meant as a small antidote to the zombie plague.

A good beginning

DeMuth’s article starts very well. The first six paragraphs present a fine and impassioned analysis of the failings of popular educational technologies and methods that strikes a well-aimed blow at the heart of many of the problems in existing educational systems, the atrociousness of traditional methods, and the unwitting replication of harmful and outmoded ways of teaching in poorly designed MOOCs and the misuse of Khan tutorials (he actually attacks Khan tutorials themselves but, had he thought to check out a few rather than lump them with the rest, he would have discovered that they actually closely conform to his vision). deMuth might also have attacked much of university e-learning on the same grounds. I could not agree more. DeMuth is absolutely right to bemoan the crazy transmission model underlying much of education and the appallingness of monolithic and intimidating exams and tests, not to mention the foolishness of replicating old and weak methods in new and shiny tools. His motives are unimpeachable and he makes a very strong, eloquently argued case. His solution, though, is not to change that system but to make it do the (wrong) job more efficiently. Here I take issue.

And then the horror starts

The remainder of the article is entertainingly and slickly written but the worse for that, because it carries a very dangerous message indeed.  In brief, it spends a while self-referentially demonstrating the value of programmed learning, then winds up by asking for programmers/behaviourist psychologists to produce modern equivalents of Skinner’s Teaching Machine (see illustration) so that the transmission model can work better and kids can pass more tests.

Behaviourism revisited

B. F. Skinner’s Teaching Machine

I’m amazed that anyone still thinks radical behaviourism, as espoused by Skinner, has any value whatsoever. I guess that some people learned this stuff before it was soundly discredited or, as I suspect was the case for deMuth, discovered it in passing without looking into what the rest of the world thought about it. Alas, such beliefs do still persist. Indeed, we see behaviourist shortcuts all too regularly in education and industry to this day, even though hardly anyone who has followed any of the research over the past 40 years or so would find it at all acceptable.

For those unfamiliar with Skinner’s radical behaviourist model, in brief, it was meant to apply a reductionist scientific method to discovering how animals (including people) learn. Recognizing that internal cognitive processes are hard to observe (and, in Skinner’s radical version of the theory, are themselves simply a consequence of external conditioning), the ‘behaviour’ part of the name is a reflection of the fact that this is what behaviourists concentrated on and, in Skinner’s case, the only thing that counted. Skinner only allowed for stimuli and responses that could be observed and measured – nothing else mattered. The brain, for Skinnerian behaviourists, was a black box about which they needed to know nothing apart from the effects of particular inputs and the observable outputs. They performed interventions and observed their effects on behaviour. Based on these observations (not uncommonly starting and sometimes ending with experiments on animals), those who tried to apply these methods in teaching sought to work out how to teach better, without ever having to make any assumptions about what was going on inside people’s minds. It’s a laudable goal, if quixotic and utterly misguided. One big trouble with it (though far from the only one) is that it ignores our minds’ own inputs, which are often a great deal more significant than any external stimuli and which always modify them, in unpredictable ways, with complex effects that often fail to emerge until long after the stimuli have gone. We now know that reductionist methods simply don’t work in this context. Skinner lacked the framework of complexity and chaos theories that demonstrate the theoretical and practical impossibility of predicting even such simple causes and effects as the motion of a double pendulum, so it is perhaps a little forgivable that he remained lost in a reductionist paradigm. We also now know that the operant conditioning methods that Skinner espoused are relatively ineffective in the short term and highly ineffective in the long term, so behaviourism fails to achieve much even on its own terms. Again, Skinner could not really be blamed for misunderstanding the significance of his results or their long-term weaknesses because such research was in its infancy while he was still alive.

So why did anyone ever believe in this stuff?

To a limited extent, behaviourism works. Among radical behaviourist ‘discoveries’, in large part guided by experiments in which Skinner was able to train animals like pigeons little by little to perform complex tasks, is the one focused on in this article: that small chunked lessons, with immediate feedback, allowing the subject to take it at his or her own pace, can reliably lead to learning. Up to a point this is absolutely true, especially in the ‘spaced’ form in which Skinner actually presented it rather than the simplistic caricature demonstrated in the article. For some kinds of rote learning, the effects of which are easily observed, small chunks and immediate feedback are a very good idea indeed. There are good reasons for this that cognitivist psychologists and constructivist thinkers had also hit upon long before Skinner came on the scene. Although not the archetypal behaviourist way of doing things, it works particularly well if that feedback is innate to the task rather than extrinsically imposed. For instance, staying upright on a bicycle, being able to recite lines in a play, being able to play a piece of music, building a program that does what it should, or writing a satisfying piece of work, are immediate forms of feedback that are intrinsic to the process. This is the form that deMuth uses in the article: he self-referentially demonstrates the effectiveness of the approach by leaving ever larger chunks out of the key terms it employs. In doing this he is actually relying on a cognitivist model of what motivates us (in this case, achievable challenges) rather than a purely behaviourist model of reward and punishment, so it’s not the greatest example of the effectiveness of behaviourism. He is not exactly an expert in the field. Behaviourists also hit on a few other good tricks, more by luck than design. It is absolutely true that putting people in control of the pace of their own learning works very well, both for obvious common-sense reasons (we don’t all learn the same things at the same speed) and for motivation: a sense of being in control is central to intrinsic motivation. This is not a behaviourist notion, but it happens to be true.

The most problematic outcome of behaviourist thinking, which follows from its wilful ignorance of internal motivations and stimuli, lies in the use of rewards and punishments to drive learning, using extrinsic motivation as though we were all pigeons. The big trouble with the reward/punishment idea is that extrinsic motivation actually eliminates intrinsic motivation, which means that a reward/punishment model is positively harmful to effective learning. There have been countless studies and experiments from Deci, Ryan, Kohn and very many others that show this. I am quite taken by a recent paper on the subject, which rather neatly shows these effects using 10,000 West Point cadets tracked over 14 years, summarizes some of the classic research quite well, and adds its own compelling evidence. By nature humans love to learn and enjoy achievable challenges but, if you beat or reward that love out of them for long enough, they will stop wanting to do so. The big lesson that we learn from extrinsic rewards and punishments is that learning is done to gain rewards that have no connection with that learning, or (just as bad) to avoid punishment. We also, in passing, learn that the purveyors of those rewards and punishments have power over us.

If it actually worked then even this ugly power trip might be worth it, though I have strong reservations about the ultimate value of teaching people to bow down to authority figures without question. Unfortunately, many studies have demonstrated unequivocally that, though such methods may result in short-term gains that may be sufficient (if not particularly efficient) to pass the big sticks and carrots of exams designed to test behaviour, learning this way does not persist, especially if no attention is paid to meaning, value and connections between things. This accords with common sense and experience. If we are taught that the value of what we are learning is to pass a test or get a grade then, once we have achieved that, it is perfectly natural to promptly forget it. It’s much like remembering your hotel room number: very important as long as you stay there, completely irrelevant when you leave, and therefore promptly forgotten.

Learning that persists is learning that we can continue to use, that relates to our goals, the things we want or need to do, that relates to our social context, that we can apply and that has meaning and value to us because of who we are, where we come from, what we want to do, the communities that bind us, and who we want to be. For this kind of learning, self-pacing, small chunks of increasing complexity and fast feedback can be extremely useful tools (if far from being the only ones) but the point is that it cannot be done effectively in isolation and especially not under the control of someone else. Values and meaning are not part of the behaviourist vocabulary, nor can they be usefully described by it, but they are exactly what deMuth’s reviled educators, against the odds and against the flow of a system that is designed to work in total opposition to them, are trying to foster. At least, the good ones are doing that. Too many of us are buckled down by the system that thwarts us by standardizing learning, trying to make us teach the same things in the same temporal and physical/virtual space, at the same time, over fixed periods, without any thought for the reasons it might be worthwhile to people or their individual and unique needs. It is no wonder that education has one of the highest dropout rates of any profession. DeMuth rightly attacks education but wrongly attacks educators.

The failure of educational systems

As long as we have abominations like the core curriculum, obligatory courses with defined objectives, or coarse-grained programs that ignore individual needs, that make it a requirement to learn a specified body of facts and skills regardless of their personal value or interest to us, this will ever be so. Unless we can devise ways of doing education that will be meaningful, applicable and valuable to the individuals that are learning, without extrinsic rewards or punishments, we have failed. If we teach students that the purpose of learning is to pass tests (or receive some other extrinsic reward or punishment), we will have doubly failed, because we will have made it harder for them to learn anything ever again and, in all probability, will have fostered an aversion to what might otherwise have been an important and interesting thing if it were learned at the right time. Skinnerian teaching machines deployed without addressing these fundamental problems will simply reinforce the same old patterns, making things worse, far worse, than before.

When used to support personally and socially meaningful goals, some behaviourist methods can have limited value, though none of those methods are unique to behaviourism and most come with important provisos and modifiers. Practice can be very good for acquiring a wide range of skills, especially when interleaved and spaced (those studying for exams or learning to play a musical instrument would do well to take note of this), learning things when, how and at what pace we wish to learn them is crucial, and we do need to take things a little at a time. Some such skills are foundational and, once learned, can become self-sustaining and supportive of intrinsically motivated learning: reading and writing, for example, or arithmetic. But behaviourism is not right just because it made a few hits. DeMuth wants to improve literacy (good) but he seeks to improve it through behaviourist methods (very bad) and measure it by standardized tests (very, very bad). This is a bit like assuming that the purpose of the army is to kill people, and therefore providing all soldiers with nuclear weapons. It is putting the cart before the horse. 

The purpose of education is not to pass tests but, along with sustaining some cultural continuity, to help people both to learn and to continue to learn. Behaviourist methods may achieve short-term testing goals but are singularly poor at fostering long-term learning and are positively antagonistic to lifelong learning. They encourage a dependent and submissive attitude and stamp on critical or creative thought. We should let them rest in peace.

BOOK: Teaching Crowds: Learning and Social Media

About the Book

Within the rapidly expanding field of educational technology, learners and educators must confront a seemingly overwhelming selection of tools designed to deliver and facilitate both online and blended learning. Many of these tools assume that learning is configured and delivered in closed contexts, through learning management systems (LMS). However, while traditional “classroom” learning is by no means obsolete, networked learning is in the ascendant. A foundational method in online and blended education, as well as the most common means of informal and self-directed learning, networked learning is rapidly becoming the dominant mode of teaching as well as learning.

In Teaching Crowds, Dron and Anderson introduce a new model for understanding and exploiting the pedagogical potential of Web-based technologies, one that rests on connections — on networks and collectives — rather than on separations. Recognizing that online learning both demands and affords new models of teaching and learning, the authors show how learners can engage with social media platforms to create an unbounded field of emergent connections. These connections empower learners, allowing them to draw from one another’s expertise to formulate and fulfill their own educational goals. In an increasingly networked world, developing such skills will, they argue, better prepare students to become self-directed, lifelong learners.

 

Address of the bookmark: http://www.aupress.ca/index.php/books/120235

Transactional distance and new media literacies

Moore’s theory of transactional distance describes the communications and psychological gulf between learner and teacher in a distance education setting. The theory was formulated in a correspondence era of distance learning and matured in an era where discussion forums and virtual learning environments reduced transactional distance in a closed-group setting that enabled interactions akin to those in a traditional classroom. In recent years the growth of social networking and social interest sites has led to social forms that fit less easily in these traditional formal models of teaching and learning. When the “teacher” is distributed through the network or is an anonymous agent in a set or is an emergent actor formed by collective intelligence, transactional distance becomes a more complex variable. Evolved social literacies are mutated by new social forms and require us to establish new or modified ways of thinking about learning and teaching. In this missive we explore the notion of transactional distance and the kinds of social literacy that are required for or that emerge from network, set, and collective modes of social engagement. We discuss issues such as preferential attachment, confirmation bias, and trust and describe social literacies needed to cope with them.

Address of the bookmark: http://www.mitpressjournals.org/doi/abs/10.1162/IJLM_a_00104#.VEwtAYcfTEI

Agoraphobia and the modern learner

Abstract: Read/write social technologies enable rich pedagogies that centre on sharing and constructing content but have two notable weaknesses. Firstly, beyond the safe, nurturing environment of closed groups, students participating in more or less public network- or set-oriented communities may be insecure in their knowledge and skills, leading to resistance to disclosure. Secondly, it is hard to know who and what to trust in an open environment where others may be equally unskilled or, sometimes, malevolent. We present partial solutions to these problems through the use of collective intelligence, discretionary disclosure controls and mindful design.

Address of the bookmark: http://www-jime.open.ac.uk/jime/article/viewArticle/2014-03/html

Learning in an introductory physics MOOC: All cohorts learn equally, including an on-campus class | Colvin | The International Review of Research in Open and Distance Learning

Thanks to Tony Bates for pointing to and providing a fine review of this interesting article which shows evidence of learning gain in people who were taking an xMOOC.

I have little to add to Tony’s comments apart from mentioning the very obvious elephant in this room: the sampling was skewed by the fact that it considered only the small fraction – considerably less than 10% – of the MOOC’s original population who actually got close to finishing it. It is not too surprising that most of those who had the substantial motivation demanded to finish the course (a large percentage of whom were very experienced learners in related fields) actually did pretty well. What it does not tell us is whether, say, a decent open textbook might have been equally or more effective for these manifestly highly motivated and proficient students. If so, it might not be a particularly cost-effective way of learning.

The study does compare the performance of a remedial class of students (i.e. students who had failed an initial course) who received plentiful further face-to-face support with that of the voluntarily subscribed online students. But the authors rightly note that it would be foolish to read anything into any of the differences found, including the fact that the campus-based students seemed to gain nothing from additional remedial tuition (they may be overly pessimistic about that: without that remedial effort, they might have done even worse), because the demographics and motivations of these students were a million miles removed from the rest of the cohort. Chalk and cheese.

One other interesting thing that is worth highlighting: this is one in a long line of articles focusing on interventions that, when looked at closely, suggest that people who spend more time learning learn more. I suspect that a lot of the value of this and indeed many courses comes from being given permission to learn (or, for the campus students, being made to do so) along with having a few signposts to show the way, a community to learn with, and a schedule to follow. Note that almost none of this has anything to do with how well or how badly a specific course is designed or implemented: it is in the nature of the beast itself. Systems teach as much as teachers. The example of the campus-based students suggests that this may not always be enough although, sadly, the article doesn’t compare the time on task for this group with the rest. It may well be that, despite an extra 4 hours in class each week, they still spent less time actually learning. In fact, given the prima facie case that these students had already mostly demonstrated a lack of interest and/or ability in the subject, even that tutorial time may not have been dedicated learning time.

A small niggle: the comparison with in-class learning on different courses conducted by Hake in a 1998 study, which is mentioned a couple of times in the article, is quite spurious. There is a world of difference between predominantly extrinsically motivated classroom-bound students and those doing it because, self-evidently, they actually want to do it. If you were to extract the most motivated 10% of any class you might see rather different learning patterns too. The nearest comparison that would make a little sense here is with the remedial campus-bound students though, for the aforementioned reasons, that would not be quite fair either.

Little or none of this is news to the researchers, who in their conclusion carefully write:

“Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course (but over 50% of students making a serious attempt at the second weekly test received certificates) and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.”

This is a good and fair account of reasons to be wary of these results. What it boils down to is that there are almost no firm conclusions to be drawn from them about MOOCs in general, save that people taking them sometimes learn something or, at least, are able to pass tests about them. This is also true of most people that read Wikipedia articles.

For all that, the paper is very well written, the interventions are well-described (and include some useful statistics, like the fact that 95% of the small number that attempted more than 50% of the questions went on to gain a certificate), the research methods are excellent, the analysis is very well conducted, and, in combination with others that I hope will follow, this very good paper should contribute a little to a larger body of future work from which more solid conclusions can be drawn. As Tony says, we need more studies like this.

Address of the bookmark: http://www.irrodl.org/index.php/irrodl/article/view/1902/3009

x-literacies

There is an ever-growing assortment of x-literacies. Here are just a few that have entered the realms of academic discourse:

  • Computer literacy
  • Internet literacy
  • Digital literacy
  • Information literacy
  • Network literacy
  • Technology literacy
  • Critical literacy
  • Health literacy
  • Ecological literacy
  • Systems literacy
  • Statistical literacy
  • New literacies
  • Multimedia literacy
  • Media literacy
  • Visual literacy
  • Music literacy
  • Spatial literacy
  • Physical literacy
  • Legal literacy
  • Scientific literacy
  • Transliteracy
  • Multiliteracy
  • Metamedia literacy

This list is a small subset of x-literacies: if there is some generic thing that people do that demands a set of skills, there is probably a literacy that someone has invented to match.  I’ll be arguing in this post that the majority of these x-literacies miss the point, because they focus on tools and technologies more than the reasons and contexts for using them. 

The confusion starts with the name. ‘Literacy’, literally, means the ability to read and write, so most other literacies are not. We might just as meaningfully talk about ‘multinumeracy’ or ‘digital numeracy’ as ‘multiliteracy’ or ‘digital literacy’ and, for some (e.g. ‘statistical literacy’), ‘numeracy’ would actually make far more sense. But that’s fine – words shift in meaning all the time and leave their origins behind. It is not too hard to see how the term might evolve, without bending the meaning too much, to relate to the ability to use not just text but any kind of symbol system. That sometimes makes sense – visual, media or musical literacy, for example, might benefit from this extension of meaning. But most of the literacies I list above have at best only a partial relationship to symbol systems. I think what really appeals to their inventors is that describing a set of skills as ‘x-literacy’ makes ‘x’ seem more important than just a set of skills. They bask in the reflected glory of reading and writing, which actually are awfully important. 

I’m OK with a bit of bigging up, though. The trouble is that prefixing ‘literacy’ with something else infects how we see the thing. It has certainly led to many silly educational initiatives with poorly defined goals and badly considered outcomes. This is because, all too often, it draws attention far too much to the technology and skills, and far too far away from its application in a specific culture. This context-sensitive application (as I shall argue below) is actually what makes it ‘literacy’, as opposed to ‘skill’, and is in fact what makes literacy important.

So this is my rough-draft attempt to unravel the confusion so that at least I can understand it – it’s a bit of sense-making for me. Perhaps you will find it useful too. Some of this is not far off the underpinnings of the multiliteracy camp (albeit with notably different conclusions) and one of my main conclusions will be very similar to what many others have concluded too: that literacy spans many skills, tools and modalities, and is highly contextualized to a given culture at a given time. 

Culture and technology

When they pass a certain level of size and complexity, societies need more than language, ritual, stories, structures and laws passed by word of mouth (mostly things that demand physical co-presence) in order to function. They need tools to manage the complexity, to distribute cognition, replicate patterns, preserve structures, build new ones, pass ideas around, and to bind a dispersed society together. Since the invention of printing, most of the tools that play this role have been based on the technologies of text, which makes reading and writing fundamental to participation in a modern society and its numerous cultures and subcultures.

To be literate has, till recently, simply meant that you can do text. There may also be some suggestion of an ability to use text in ways that relate to deciphering, analyzing, synthesizing and appreciating: these are at least the products of literacy, if not a part of it, and they are among the main reasons we need literacy. But the central point here is that people who are literate, in the traditional sense, are simply able to operate the technology of writing, whether as consumers, producers or both. The reason this is ‘literacy’, rather than simply a skillset like any other, is that text manipulation is a prerequisite for people to participate in their culture. It lets them draw on accumulated knowledge, add to it, and operate the social and organizational machinery. At its most basic, this is a pragmatic need: from filling in forms and writing letters to reading signs, labels on food, news, books, contracts and so on. Beyond that, it is also a means to disseminate ideas, challenges, and creative thought in a society. It is furthermore a fundamental technology for learning, arguably second only to language itself in importance. More than that, it is a technology to think with and extend our thinking far beyond what we could manage without such assistance. It lets us offload and enhance our cognition. This remains true, despite multiple other media vying for our attention, most of which incorporate text as well as other forms. I could not do what I am doing right now without text because it is scaffolding and extending the ideas I started with. Other media and modalities can in some contexts achieve this end too and, for some purposes, might even do it better. But only text does it so sweepingly across multiple cultures, and nothing but text has such power and efficiency. In all but the most limited of cultures, text performs culture, and text makes culture: not all of it, by any means, but enough to matter more than most other learned technology skills.

Other ways to perform culture

There have for countless millennia been many other media and tools for cultural transmission and coordination, including many from way before the invention of writing. Paintings, drawings, sculpture, dance, music, rituals, maps, architecture, furniture, transport systems, sport, games, roads, numbers, icons, clothing, design, money, jewellery, weapons, decoration, litany, laws, myths, drama, boats, screwdrivers, door-knobs and many, many more technologies serve (often amongst their other functions) as repositories of cognition, belief, structure and process. They are not just the signs of a culture: they play an active role in its embodiment and enactment. But text, maybe hand in hand with number, holds a special place because of its immense flexibility and ubiquitous application. Someone else can make roads or paintings or door-knobs and everyone else can benefit without needing such skills – this is one of the great benefits of distributed labour. But almost everyone needs skill in text, or at least needs to be close to someone with it. It is far from the only fruit but everyone needs it, just to participate in the cultures of a society.

Cultures and technologies

There are many senses in which we might consider technology and culture to be virtually synonymous. Both are, as Ursula Franklin puts it, ‘the way things are done around here’. Both concern process, structure and purpose. However, I think that there are many significant things about cultures  – attitudes, frames of mind, beliefs, ways of seeing, values, ideologies, for instance – that may be nurtured or enacted by technology, but that are quite distinct from it. Such things are not technological inventions – they are the consequence, precursors and shapers of inventions. Cultures may, however, be ostensively defined by technologies even if they are not functionally identical with them. Archeologists, sociologists and historians do it all the time. Things like language, clothing, architecture, tools, laws and so on are typically used to distinguish one culture from another.

One of the notable things about technologies is that they tend to evolve towards both increasing complexity and increasing specialization. This is a simple dynamic of the adjacent possible. The more we add, the more we are able to add, the more combinations and the more new possibilities that were unavailable to us before reveal themselves, so the more we diversify, subdivide, concatenate and invent. Thus it goes on ad infinitum (or at least ad singularum). Technologies tend to continuously change and evolve, in the absence of unusual forces or events that stop them. Of course, there are countless ways that technologies, notably in the form of religions, can slow this down or reverse it, as well as catastrophes that may be extrinsic or that may result from a particularly poor choice of technologies (over-cultivation of the land, development of oil-dependency, nuclear power, etc). There are also many technologies that play a stabilizing rather than a disruptive role (education systems, for example). Overall, however, viewed globally, in large cultures, the rate of technological change increases, with ever more rapid lifecycles and lifespans.  This means that skills in using technologies are increasingly deictic and increasingly short-lived or, if they survive, increasingly marginalized. In other words, they relate specifically to contexts outside of which they have different or no meaning, and those contexts keep changing thanks to the ever-expanding adjacent possible. Skills and techniques become redundant as contexts change and cultures evolve. That’s a slight over-simplification, but the broad pattern is relentless.

Towards a broader definition of ‘literacy’

Literal literacy is the ability to use a particular technology (text) in order to learn from, interact with and add to our various different cultures. The label implies more than just reading and writing: to be literate implies that, as a consequence of reading and writing, stuff has been and will be read – not just reading primers, but books, news, reports and other cultural artefacts. In the recent past, text was about the most significant way (after talking and showing) that cultural knowledge was disseminated. In recent decades, there have been plentiful other channels, including movies, radio, TV, websites, multimedia and so on. It was only natural that people would see the significance of this and begin to talk about different kinds of literacy, because these media were playing a very similar cultural role to reading and writing. The trouble is that, in doing so, the focus shifted from the cultural role to the technology itself. At its most absurd, it resulted in terms like ‘computer literacy’ that led to initiatives that were largely focused on building technical skills messily divorced from the cultures they were supporting and of little or no relevance to being an active member of such a culture.

So here’s a tentative (re)definition of ‘literacy’ that restores the focus: literacy is the prerequisite set of technological skills needed for participation in a culture.  And, of course, we are all members of many cultures. There are other things that matter in a culture apart from technological skills, such as (for example) a playful spirit, honesty, caring for others, good judgement, curiosity, ethical sensibility, as well as an ability to interpret, synthesize, classify, analyze, remix, create and seek within the cultural context. These are probably more important foundations of most cultures than the tools and techniques used to enact them. But, though traits like these can certainly be nurtured, inculcated, encouraged, shown, practiced, learned and improved, they are not literacies. These are the values and valued traits in a culture, not the skills needed to be a part of it, though there is an intimate iterative relationship between the two. In passing, I think it is those traits and others like them that education is really aimed at developing: the rest, the literacy part, is transient and supportive. We don’t have values and propensities in order to achieve literacy. We learn most of them at least partly through the use of literacies, and literacies are there to support them and let them flourish, to provide mechanisms through which they can be exercised.

My suggestion is that, rather than defining a literacy in terms of its technologies, we should define it in terms of the particular culture it supports. If a culture exists, then there is a literacy for it, which is comprised of a set of skills needed to participate in that culture. There is literacy for being a Canadian, but there is equally literacy for being part of the learning technologies community (and for each of its many subcultures), being a researcher, a molecular scientist, a member of a family or of a local chess club. There is literacy for every culture we belong to. Some technological skillsets cross multiple cultures, and some are basic to them. The first of these is nearly always language. Most cultures, no matter how trivial and constrained, have their own vocabularies and acceptable/expected forms of language but, apart from cases where languages are actually a culturally distinguishing factor (e.g. many nations or tribes) they tend to inherit most of the language they use from a super-culture they are a part of. Reading and writing are equally obvious examples of skills that cross multiple cultures, as are numeracy skills. This is why they matter so much – they are foundational. Beyond that, different technologies and consequent skills may matter as much or more in different cultures. In a religious culture these might include the rules, rituals, principles, mythologies and artefacts that define the religion. In a city culture they could include knowledge of bylaws, transit systems, road layouts, map-reading, zones, and norms. In an academic culture it might relate to (for instance) methodologies, corpora, accepted tenets, writing conventions, dress standards, pedagogies, as well as the particular tools and methods relating to the subject matter. In combination, these skills are what makes someone in a given culture literate in that culture.

For instance

Is there such a thing as computer literacy? I’d say hardly at all. In fact, it makes little sense at all to think in those terms. It’s a bit like claiming there is pen literacy, table literacy or wall literacy.  But there might be computing literacy, inasmuch as there may be a culture of computing. In fact, once upon a time, when dinosaurs roamed the earth and people who used computers had to program them themselves, it might have been a pretty important culture that any people who wished to use computers for any purpose at all would need to at least dip their toes in and, most likely, become a part of. That culture is still very much there but it is not a prerequisite of owning a computer that one needs to be a part of it any more – computing culture is now the preserve of a relatively tiny band of geeks who are dwarfed in number by those that simply use computers. The average North American home has dozens of computers, but few of their users need to or want to be part of a computing culture. They just want to operate their TVs, drive their cars, use their phones, take photos, browse the Web, play the keyboard, etc. This is as it should be. Those in a computing culture are undoubtedly still an important tiny band who do important things that affect the rest of the world a lot, but they are just another twig at the end of a branch of the cultural tree, not the large stem that they once were. Within what is left of that computing culture there are a lot of overlapping computing sub-cultures: engineers, bricoleurs, hardware freaks, software specialists, interaction designers, server managers, programmers, object-oriented programmers, PHP enthusiasts, iOS/Mac users, Android/Windows users, big-endians, little-endians. Each sub-culture has its own literacy, its own language, its own technologies on which it is founded, as well as many shared commonalities and cross-cutting concerns. 

Is there such a thing as ‘digital literacy’? Hardly. There is no significant distinctive thing that is digital culture, so there is no such thing as digital literacy. Again, like computing culture, once upon a time, there probably was such a thing and it might have mattered. I recall a point near the start of the 1990s, as we started to build web servers, connect Gopher servers, use email and participate in Usenet Newsgroups, at which it really did seem that we were participating in a new culture, with its own evolving values, its own technologies, its own methods, rules, and ethics. This has almost entirely evaporated now. That culture has in part been absorbed and diffused, in part branched into subcultures. Being ‘digital’ is no longer a way of defining a culture that we are a part of, no longer a way of being. Unless you are one of the very few that has not in the last decade or so bought a telephone, a TV, a washing machine, a stove, or one of countless other digital devices, you are ‘digital’. And, if there were such a thing as a digital culture, you would almost certainly be a part of it if you are reading this. This is too tenuous a thing – it has nothing to bind it apart from the use of digital devices that are almost entirely ubiquitous, at least in first world cultures, and that are too diverse to bind a culture together. There are, as a result, insufficient shared values to make it meaningful any more. It is, however, still possible to be anti-digital. Some digital luddites (I mean this non-pejoratively, to refer to anyone who deliberately eschews digital technologies) do very much have cultures and probably have their own literacies. And there might well be literacies that relate to specific digital technologies and subsets of them. Twitter has a culture, for instance, that implies rules, norms, behaviours, language and methods that anyone participating should probably know. The same may be (and at some point certainly was) true of Facebook, but I think that is less obvious now.

Network culture is probably still a thing, but it is fading in much the same way that digital culture has faded, with ubiquity, diversity and specialization each taking bites out of it. We have seen network culture norms develop and spread. New vocabularies have emerged with subtle nuances (LOL, ROFL, LMFAO) that often branch into meanings decipherable only by a few sub-cultures but that may subsequently spread into others (TIL, RT, TLDR, LPT). We have had to learn new skills – figuring out how to negotiate privacy, filter bubbles, trolls, griefing, effective tagging, filtering, sorting, unfriending and friending, and much, much more – in order to participate in a social network culture, one that is (for now) still a bit distinct from other cultures. But that culture has already diversified, spread and diffused, and it is getting more diffuse every day. As it becomes larger and more diverse it ceases to be a relevant means of identifying people, and it ceases to be something we can identify with.

Much of the reason for network culture’s retreat is technological. It was enabled by an assembly of technologies and spawned new ones (norms, conventions, languages, etc.) but, as these evolve, other technologies will render it irrelevant. Technologies often help to establish cultures and may even form their foundation but, as they and the cultures co-develop, the technologies that helped build those cultures stop being definitional of them. Partly this results from diffusion, as ways of thinking creep back into the broader super-culture and as more and more diverse cultures spread into it. Partly it is because new technologies take their place and diversify into niches. Partly it is because, rather than us learning to use technologies, they learn to use us. This sounds creepier than it really is: what I mean is that individual inventors see the adjacent possibles and grab them, so technologies change and, in many cases, become embedded, replacing our manual roles in them with pre-orchestrated equivalents. Take, for example, a trivial thing like emoticons, pictures built from arbitrary text characters that take on some of the role of phatic communication in text – like this :-). Emoticons are increasingly being replaced by standardized emojis, like this 🙂. Bizarrely, there are now social networks based on emoji that use no text at all. I am intrigued by the kind of culture that this will entail or support, but the significant point here is that what we used to have to orchestrate ourselves is now orchestrated in the machine. Consequently, the context changes, problems are solved, and new problems emerge, often as a direct result of the solution. Like, how on earth do you communicate effectively with nothing but emojis 😕?

Where do we go from here? 

Rather than constantly sub-divide literacies into ever more absurdly named niches christened for the tools to which they relate, or attempt to find bridging competences or values that underlie them and call those multiliteracies (or whatever), I propose that we should think of a literacy as a highly situated set of skills that enables us to play a role as an operator in any given social machine, as creators and/or consumers of a culture – any culture and every culture. The specificity we choose should be determined by the culture that interests us, not by any predetermined formula. Each subculture has its own language, tools, methods and signs, and each comes with a set of shared (often contested) attitudes, beliefs, values and passions that both drive and are driven by the technologies it uses. As a result, each has its own history, branching from the histories of other subcultures, which helps to make it more distinct. This chain of path dependencies helps to reinforce a culture and emphasize its differences. It can also lead to its demise.

In most if not all cases, literacy is an assembly of skills and techniques, not a single skill. ‘Literacy’ is thus simply a label for the essential skills and techniques needed to participate actively in a given culture. Such a culture may be big or small. It may span millennia or centuries, or only decades, years, months or even weeks or days. It may span continents or exist only in a single room. I have, for example, been involved with courses, workshops and conferences that have evolved their own fleeting cultures, or at least something prototypical of one. In my former job I shared an office with a set of colleagues who developed a slightly different culture from that of the office next door. Of course, the vast majority of our culture was shared, because we performed similar roles in the same department, the same organization, the same country and the same field, with the same language and the same ethos. But there were differences that might, in some contexts and for some purposes, be important. For most contexts, they were probably not.

Researching literacies 

Assuming that we know what culture we are looking at, identifying literacy in any given culture is simply (well… simply-ish) a question of looking at the technologies that are used in that culture. While technology use is far from a complete definition of a culture, what makes one culture distinct from another may be described in terms of its technologies, including its rules, tools, methods, language, techniques, practices, standards and structures. This is a straightforward way of thinking about it, if a little circular: we identify cultures by their technology uses, and define literacy by technology use in a culture. I don’t think this apparent circularity is a major issue, however, as this is an iterative process of discovery: we may start with coarse differentiators that distinguish one culture from another but, as we examine them more closely, we will almost certainly find others, or find further differentiators that indicate subcultures. A range of methods and methodologies may be used here, from grounded theory to ethnography, from discourse analysis to Delphi methods, simple observation, questionnaires, interviews, focus groups, and so on. If we want to know about literacy in a culture, we have to discover what technologies are foundational in that culture.

Most of the cultures we belong to are subcultures of one or more others, while some straddle borders between different and otherwise potentially unrelated cultures. Some skills that partially constitute a given literacy will cross many other cultural boundaries. Almost all will involve language, most will involve reading and writing, many will involve number, lots will involve visual expression, and quite a few will involve more or less specific skills using machines (particularly software running on computers, some of which may be common across cultures). The ability to create will usually trump the ability to consume although, in some cultures, prosumption may be a defining or overwhelmingly common characteristic (those that emerge in social networks, for instance).

This all implies that the first concern when researching literacy for a given culture is to identify that culture and to decide why it is of interest. While this may in some cases be obvious, there may often be subcultures and cross-cultural concerns that make it more complex to define. One way to help separate out different cultures is to look at the skills, terminology, technologies, implicit and explicit rules, norms, and patterns of technology use in the subset of people that we are looking at. If there are patterns of differences, then there is a good chance that we have identified a cultural divide of some kind. A little more easily, we can also look at why people are excluded from a culture and seek to discover what they need to learn to become a part of it – at what distinguishes an outsider from an insider and how people transition from one to the other.
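As a purely illustrative aside, here is a minimal sketch in Python of one crude way such ‘patterns of differences’ in technology use might be surfaced from interview or survey notes. Everything in it – the respondents, the snippets, the marker terms – is invented for the sake of illustration; real research would of course rely on the richer methods mentioned above rather than naive term-counting.

# A minimal, illustrative sketch: compare how often candidate 'marker'
# technologies (tools, terms, norms) appear in notes from two groups of
# respondents, and flag the ones whose usage differs most.
# All data below is invented purely for illustration.

from collections import Counter

def term_rates(notes, terms):
    """Proportion of respondents whose notes mention each term."""
    counts = Counter()
    for note in notes:
        text = note.lower()
        for term in terms:
            if term in text:
                counts[term] += 1
    return {term: counts[term] / len(notes) for term in terms}

# Hypothetical interview snippets from two sets of people.
group_a = [
    "uses twitter hashtags and an rss reader daily, tags everything",
    "active on twitter, curates bookmarks with tags, follows blogs by rss",
    "shares links on twitter, complains about the algorithmically sorted feed",
]
group_b = [
    "keeps in touch on facebook, mostly photos and family groups",
    "uses facebook messenger and likes posts, rarely shares links",
    "checks facebook events, otherwise just email",
]

markers = ["twitter", "facebook", "rss", "tags", "email"]

rates_a = term_rates(group_a, markers)
rates_b = term_rates(group_b, markers)

# Large gaps in usage rates hint at a cultural divide worth probing further.
for term in markers:
    gap = rates_a[term] - rates_b[term]
    if abs(gap) >= 0.5:
        print(f"{term}: group A {rates_a[term]:.0%} vs group B {rates_b[term]:.0%}")

Nothing about this toy example is meant to be definitive: it simply shows that once we choose candidate markers of a culture, checking whether their use actually divides one group from another is a tractable, iterative exercise.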

For example, the literacy for the culture of a country is almost entirely defined by invention. Countries are technologies, first and foremost. They have legislated (if often disputed) borders and boundaries, laws, norms, language, ways of doing things, patterns, establishments and institutions that are almost entirely enshrined in technology. It is dead easy to spot this particular culture and mostly simple enough to figure out who is not in it and, normally, what they need to do to become a part of it. To be literate in the context of a country is to have the tools to know and to interact actively with the technologies that define it. To give a simple example, although it is quite possible to be Canadian with only a limited grasp of English and/or French, part of what it means to be literate in Canadian culture is to speak one or (ideally) both of those languages. Other languages are a bonus, but those two are foundational. Similar patterns can be seen in religious cultures, academic cultures, sports cultures, sailing cultures and so on. We can see it in subcultures too – for example, goths and hipsters are easily identified by the set of technologies that they use and create, because many of those technologies are visible and definitional. It gets trickier once we try to find subcultures of such easily identified sets but, on the whole, different technologies mark different cultures.

What makes all this technical detail worth knowing is not that different sets of people use different tools but that there are consequences of doing so. Technologies have a deep impact on attitudes, values, beliefs and relationships between people and, in turn, those values and beliefs shape the technologies that are used, developed and valued. This is what matters and this is what is worth investigating: it is the kind of knowledge that is needed in order to effect change, whether to improve literacy within a culture or to change the culture itself. For example, imagine a university that runs on highly prescriptive processes and a reward structure based on awards for performance. You may not have to look far to find an example. Such a university might be dysfunctional on many counts, either because of a lack of literacy in its technologies or because the technologies themselves are poorly considered (or both). One way to improve this would be to ensure that all its members are able to operate the processes and gain the awards. This would be to improve literacy within the culture and would, consequently, reinforce and sustain it. That might be very bad news if the surrounding context changes, making it significantly harder to adapt to new demands, but it would be an improvement by some measures. Another, not necessarily conflicting, approach would be to change or eliminate some of the processes and to get rid of, or change the nature of, the rewards for performance: to modify the machinery that drives the culture. This would change the culture and thus change the literacy needed to operate within it. It might do unexpected things, especially as existing attitudes and values may be at odds with the new culture: people within it would be literate in things that are no longer relevant or useful, while lacking the literacy needed to operate the new tools and structures. Much existing work on x-literacies fails to make this crucial distinction clearly. By focusing largely on the technological requirements and ignoring the culture, we may reinforce things that are useless, redundant or possibly harmful. For instance, multimedia literacy might be great, sure. But for what and for whom? And in what forms? Different skillsets are needed in different contexts, and will have different value in different cultures.

To conclude

I have proposed that we should define literacy as the skills needed to operate the technologies that underpin a particular culture. While some of those skills are common to many cultures, the precise set and the form they take are likely to differ in almost every culture, and cultures evolve all the time, so no literacy is forever. I think this is a potentially useful perspective.

We cannot sensibly define a set of skills or propensities without reference to the culture that they support, and we should expect differences in literacies both between different cultures and across time and space within any given culture. We can ask meaningful questions about literacy in the culture of (say) people who use Twitter for learning and research, as opposed to the literacy needed by people who only use Twitter to stay in touch with one another. We can look at different literacies for people who are Canadian, people who are in schools, people of a particular religion, people who like a particular sport, people who research learning technologies, people in a particular office, people who live in Edmonton, not to mention their intersections and their subsets. By looking at literacy as simply the set of skills needed for a given culture, we can gain considerable insight into the nature of that culture and its values. As a result, we can start to think more carefully about which skills are important, and about whether we want simply to support the acquisition of those skills or to transform the culture itself.

This is just my little bit of sense-making. I have probably trodden territory that is very familiar to people who research such things with more rigour, and I doubt very much that any of it is at all original. But I have been bothered by this issue for a while and it now seems a little clearer to me what I think about it. I hope it has encouraged you to think about what you think too. Feel free to share your thoughts in the comment box!