Analogue Literacies (2011)

ABSTRACT: The continuous co-evolution of digital technologies and the skills needed to use them makes the concept of ‘digital literacy’ a slippery and moving target. Tools in themselves do not technologies make: it is the combination of phenomena, tools and purposes which, in a never-ending and always accelerating dance, constantly shifts what Stuart Kauffman calls the ‘adjacent possible’ to enable new and unforeseeable trajectories, both good and bad. Traditional literacies are based on an assumption that skills are transferable and capable of improvement in incremental steps, that we can become experts in their application. Digital competencies, on the other hand, may (with some limited exceptions) become outmoded, unnecessary and defunct, sometimes in weeks or months rather than years, as the pace of technological change moves the goalposts as soon as we reach them. Often, a new generation of digital technologies will render our hard-earned skills redundant almost as soon as we have attained them, meanwhile opening out new vistas of adjacent possibilities that demand the acquisition of new competencies. The so-called ‘digital generation’ is no more immune to this effect than older generations, as witnessed by their enthusiastic but unreflective tendencies to embrace social media without regard to the consequences of persistent digital identity and emerging norms of privacy and public disclosure. In this paper I argue for a different way of thinking about digital literacy that is based on a richer understanding of technologies, following W. Brian Arthur, as assemblies of other technologies, both soft and hard, human and machine. I suggest that the need for literacy should not be focused on the hard, digital media but on the soft, malleable edges of the adjacent possible that each new technological/social/human assembly provides.

Address of the bookmark: http://www.code.ouj.ac.jp/sympo-2011/pdf/1_Jon_Dron_11.pdf

The Blog and the Borg: a Collective Approach to E-Learning (2003)

This paper describes the use of tools and procedures to encourage reflective learning in a blended-learning postgraduate course. Its ethos encourages self-organized collaborative learning with little taught theoretical content. Students use a variety of Internet-based communication technologies and reflect on their experiences in an online learning diary or “blog.” The course is successful, but its limited theoretical foundations, together with technical and organizational problems caused by its blended delivery mode, have led to student anxiety and have affected learning. The problems have been overcome through structural and methodological changes, sometimes at the expense of compromising the course’s ethos. A new solution is proposed combining the use of blogs with CoFIND, a kind of “group mind amplifier,” leading to a technologically enhanced variant of Kolb’s learning cycle that may serve as an informative model for other technology-assisted courses.


Address of the bookmark: http://www.editlib.org/p/14972

On the stupidity of mobs (2006)

This paper explores the implications of social navigation used to assist online learning communities. It presents an experiment in social navigation employing a treasure map, comparing the behavior of users provided with social navigation cues and the behavior of those with no such cues. The experiment suggests that social navigation may cause poor decision-making in its users in two distinct ways. Some users may follow the actions of others (even poor ones), while others may actively try to behave differently. Neither strategy is useful at all times. The paper goes on to discuss approaches to limiting the dangers of such systems. 

Address of the bookmark: http://www.iadis.net/dl/final_uploads/200602C035.pdf

PhD Thesis: Achieving self-organisation in network-based learning environments (2002)

Link to my thesis

This thesis is an investigation into how to exploit the unique features of computer networks (notably the Internet) to support self-organising groups of adult learners.

The structures of systems influence the behaviour of their parts whilst those structures are in turn influenced by their parts’ interactions. Effects of structural hierarchies in popular systems of education may lead to poor learning experiences for some students. An alternative way of organising such systems is to decentralise control and to allow a structure to emerge from the combined actions of learners: a self-organised learning environment.

The functionality of a teacher often has the largest effect on the dynamics of an educational system. This is therefore a good place to concentrate efforts to encourage emergent structures to develop in adult education. The thesis attempts to classify what that functionality consists of, abstracting the roles a teacher may perform.

The Internet (especially the World Wide Web) has more of a network than a hierarchical structure and, being a virtual space, provides relatively virgin ground on which a less centralised model of educational organisation might develop. The thesis considers how self-organised learning may arise in existing Internet-based environments. It identifies a key weakness of existing systems: their failure to adequately address the varied and ever-changing needs of learners.

A number of studies performed as part of this investigation centre on the construction of a series of software products explicitly aimed at enabling the self-organisation of learners. They achieve this through the adaptation and evolution of metadata at different structural levels, thereby dynamically adapting to learners’ needs as those needs develop.

The thesis concludes with a set of guiding principles for those seeking to build self-organisation into learning environments.


The Filter Bubble

Eli Pariser’s blog, with links to his good TED talk and brilliant book of the same name. The book provides a relentlessly well-written and hard-to-fault argument about the dangers of confirmation bias, as well as an even-handed discussion of the many benefits of personalization. The site offers a cut-down version of the book’s advice, aimed at maintaining the value of filters (which are useful and essential tools to deal with infoglut) without losing the essential diversity that filter bubbles can and do destroy. Excellent stuff, packed with fascinating and well-researched information, quotable sentences and thought-provoking analysis.

Address of the bookmark: http://www.thefilterbubble.com/

Should Newspapers Give Readers the Power to Hide News They Don’t Want to See? – Rebecca J. Rosen – The Atlantic

The writer of this brief article seems broadly in favour of giving readers the power to self-curate.

The systemic effects of doing so are, however, a little risky. Confirmation bias is a powerful force on the Internet and filter bubbles are widespread enough as it is. We need to encounter other beliefs, other interests and other ideas apart from ones we have already settled on, if we are to grow and learn. The rise of social networking and the stigmergic effects of ubiquitous algorithms like PageRank and EdgeRank are already causing enough trouble without site owners contributing to the phenomenon.

On the other hand, there is little value in insisting that, for a single site, people should read things they do not want to read. Apart from anything else, if their choices are unconstrained, they will simply visit other sites if they don’t like what they find. If you are in control of a site then it is better, perhaps, to let people select what they want from your site than to have them select nothing at all and go somewhere else.

In design terms, the Web is part of and a major contributor to a self-organising system, a massive range of overlapping, intersecting and connecting ecosystems. If it were one flat savanna, whether as a result of confirmation bias or a lack of differentiation, evolution would slow down or stop. Luckily, neither extreme is possible – we simply cannot pay attention to all things, nor can we completely divorce ourselves from the things we do not want to see. It is neither an unstructured featureless space nor a set of isolated islands that never connect.

We need parcellated spaces for evolution to happen, but we need isthmuses, bridges, and breaks in barriers for good things to seep through. Self-curation is fine and, to a large extent, unstoppable: even in the days when I used dead trees for my news, I would skip not just articles but whole sections that did not interest me. Attention is a valuable and scarce commodity and, no matter how curious we may be, we don’t have time or capacity to give it all to all things. However, we need to make room for serendipitous channels, seepages and signposts to remind us that there are more things in Heaven and Earth, Horatio, than are dreamt of in our philosophies. Yes, of course we should give people control over what they see and make it easy for them to filter things how they like. But we should also make deliberate holes in those filters, to remind people that their filters can and probably should change from time to time, to provide signposts to what they are missing and to encourage them to explore new islands and territories. We should design for serendipity.
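
For what it’s worth, designing for serendipity need not be technically hard. The sketch below is purely illustrative and my own (it is not drawn from the article): it takes a personalised ranking and deliberately punches a few holes in it, filling them with items drawn from outside the reader’s declared interests. The function name, the fraction and the example data are all invented.

```python
import random

def blend_with_serendipity(personalized, outside_pool, fraction=0.1, seed=None):
    """Return a feed that is mostly personalized but deliberately leaky.

    personalized : items ranked for this reader, best first
    outside_pool : items the reader's filters would normally exclude
    fraction     : share of slots given over to serendipitous items
    """
    rng = random.Random(seed)
    feed = list(personalized)
    n_holes = max(1, int(len(feed) * fraction))
    outsiders = rng.sample(outside_pool, min(n_holes, len(outside_pool)))
    for item in outsiders:
        # Punch holes at random positions rather than burying the
        # unfamiliar items at the bottom of the feed.
        feed.insert(rng.randrange(len(feed) + 1), item)
    return feed

# Example: ten tailored stories blended with one or two stories from
# sections the reader normally hides.
feed = blend_with_serendipity(
    [f"tailored story {i}" for i in range(10)],
    ["world news story", "science story", "arts story"],
    fraction=0.15,
    seed=42,
)
```

The interesting design questions are not in the code but in how visible, how frequent and how explainable those deliberate holes should be.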

Address of the bookmark: http://www.theatlantic.com/technology/archive/2012/07/should-newspapers-give-readers-the-power-to-hide-news-they-dont-want-to-see/260409/

Project Tin Can

The good folk at SCORM appear to have not only a really good idea but a spec, an API and a whole bunch of example applications that are working right now. I’ve been deeply sceptical of the reusable-learning-object approach since the 1990s. It’s a train wreck that SCORM has played a large role in perpetuating, at huge cost relative to actual gains (excluding a few large-scale military applications and some similarly inward-facing initiatives). The move away from this to the more flexible notion of open educational resources has been a positive one, on the whole. But this is a very different and much more interesting ballgame altogether that leaves the limited pedagogies and poorly conceived metadata standards of the older SCORM standing.

In essence, Tin Can is a spec for capturing actors, verbs and objects (sounds spookily familiar) or, more simply, a way of saying, in machine-legible form, ‘I did this’. ‘I’, ‘did’ and ‘this’ are all very interestingly definable, flexible and mashable. The focus is on activities, not just content, and it puts the LMS in its rightful place as a management tool, not a learning environment (it is treated as a learning record store), though people can continue to use the LMS for teaching and content delivery if they really want. For everything from portfolios to formal quizzes, from social tagging to personal learning apps like Tappestry, the spec supports an open and interoperable world of technologies and tools to support learning.
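
To make that a little more concrete, here is a minimal sketch (in Python) of what sending one of those ‘I did this’ statements to a learning record store might look like. The actor/verb/object structure follows the spec’s published examples, but the names, email address, activity identifier, LRS endpoint and credentials below are all invented for illustration, and a real store would expect proper authentication.

```python
import json
import requests  # third-party HTTP library

# A minimal actor-verb-object statement: "Jon experienced an activity".
# Field names follow the Tin Can / xAPI statement structure; the activity
# IRI and LRS endpoint are purely illustrative.
statement = {
    "actor": {
        "name": "Jon Dron",
        "mbox": "mailto:jond@example.com",  # hypothetical address
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "https://example.com/activities/landing-discussion",  # hypothetical activity IRI
        "definition": {"name": {"en-US": "A discussion on the Landing"}},
    },
}

# POST the statement to a learning record store (endpoint and auth are made up).
response = requests.post(
    "https://lrs.example.com/xapi/statements",
    data=json.dumps(statement),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.0",  # later versions of the spec require this
    },
    auth=("lrs_user", "lrs_password"),
)
print(response.status_code)
```

The plumbing is trivial; the interesting part is that actor, verb and object can each point to anything with an identifier, which is what makes the statements so flexible and mashable.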

It’s not the first initiative of this kind by any means, but it has heavyweight industry muscle behind it, is open, and seems flexible, simple and elegant. More importantly, it makes pedagogical and practical sense, which the previous focus on RLOs never did. I don’t know enough about the technology yet to give a full review, and there are clearly a few things that are not quite there yet, but the road map is clear and the vision is a good one. I’m keen to add support for Tin Can to the Landing, both as a client and an endpoint if possible, though there are a few other pieces that must be in place before it becomes really useful in AU, so I think we can take our time to make sure we get it right. Moodle and Mahara, at least, also need to play this game if it is to have a big impact. But there is already work in progress on those platforms to support Tin Can, so it looks like that would not be a major obstacle. 

Address of the bookmark: http://scorm.com/tincan/

Good News, Everyone! Your Twitter Engagement Level Might Be As High As 0.46% – AllTwitter

Thought-provoking if rather minimally scientific mini-study on levels of engagement (measured as responses to tweets as a proportion of follower count) within the social network facet of Twitter, which suggests that 1% would be an astonishingly good engagement level, though responses from 0.1% of followers would be reasonably good. On that measure, the headline rate of 0.46% means a tweet to 1,000 followers drawing four or five responses would be doing very well indeed. The logic is impeccable even if the figures are slightly anecdotal.

This suggests to me that we need to pay much more attention to modelling networks in four dimensions, especially those where timeliness is unusually significant: we need to attend far more to pace and dynamics. Even when using a system that sends email alerts or IMs, or one on which we spend a significant amount of time, most of us do not spend all of our time responding to posts, even though it may often feel that way, and different kinds of social networking system work at different speeds. So our networks are continuously and burstily expanding and contracting, not the fixed and concretised things that we tend to model when doing more basic forms of social network analysis.
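
As a purely illustrative sketch (the interaction log, names and window size below are invented), even something as crude as counting a person’s distinct contacts within a sliding time window makes the point: the ‘same’ network can look quite different from one window to the next, which a single static graph hides entirely.

```python
from datetime import datetime, timedelta

# Hypothetical interaction log: (timestamp, person_a, person_b).
interactions = [
    (datetime(2012, 7, 2, 9, 15), "me", "alice"),
    (datetime(2012, 7, 2, 9, 40), "me", "bob"),
    (datetime(2012, 7, 3, 22, 5), "carol", "me"),
    (datetime(2012, 7, 9, 8, 30), "me", "alice"),
]

def active_degree(log, person, window=timedelta(days=2)):
    """Count distinct contacts of `person` within each sliding window.

    A static analysis would report one degree for the whole log; slicing
    by time shows the network expanding and contracting.
    """
    starts = sorted({t for t, a, b in log if person in (a, b)})
    snapshots = {}
    for start in starts:
        end = start + window
        contacts = {a if b == person else b
                    for t, a, b in log
                    if start <= t < end and person in (a, b)}
        snapshots[start] = len(contacts)
    return snapshots

print(active_degree(interactions, "me"))
```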

Many of us (me included) are deliberately limiting the time we spend in response mode because, many years ago, trying to stay constantly connected became life-destroying. I have been operating a policy of non-responsiveness outside office hours for some time and try very hard not to look at the torrential flow over the weekend or in the morning until I have at least reached a state of mild equilibrium. I do quite frequently break my own rules and make exceptions for those in my close social circles (automatically flagged and channeled) but, despite that, the consequences include a morning mailbox of a couple of hundred messages that typically take a couple of hours even to organise, let alone respond to. This in turn means that, even with a lot of intelligent mail filtering that bundles messages into different folders before I even start, I miss things pretty often. Flagged messages wind up lost in a sea of flags. And that’s just the ones that I’ve recognised as important. Throw in holidays of even a day or two and it becomes impossible to keep track. Several people I know (e.g. the ever-wonderful Erik Duval) have reacted by simply auto-responding out of hours to say that messages sent at certain periods will be deleted unread. Seen in this light, unless we have superhuman powers of attention, or strong filters on what and who we choose to see, it is amazing that there is any engagement at all.

Address of the bookmark: http://www.mediabistro.com/alltwitter/twitter-engagement-levels_b7765

Shirky: Group as User: Flaming and the Design of Social Software

An old (2004) article from Clay Shirky, rediscovered serendipitously as I was reviving a long-dead research system I built (CoFIND – a personal instance of which is visible again after many years of absence at http://cofind.jondron.net/, including all my old bookmarks for that instance from 2004 to 2007). In this perceptive article, Shirky explores a range of methods used to deal with flaming, including a few that we have considered for use on the Landing.

His description of Slashdot’s (http://slashdot.org) approach from way back then reminds me yet again how amazingly intelligently that system was designed. Slashdot is one of the oldest examples of modern social software still going strong, and it knocks spots off the likes of Facebook and Twitter in how it uses the crowd in an egalitarian and open fashion. It has never been easy to take full advantage of its brilliantly innovative methods, and its usability for beginners, never great at the best of times, has suffered a little more from its ever-increasing sophistication over the years. For those who take the time to learn its ways, though, it is the nearest thing to group intelligence out there today: adaptive, subtle and hugely creative. A well-tuned personalised Slashdot thread beats single-authored systems for learning almost every time and makes Wikipedia seem almost pitifully rigid and uninformative. Always arcane, always a nerd-only site, never destined to enter the mainstream, steadfastly focused on its mission of offering ‘news for nerds’, it is nonetheless a shining example of how to do things right.
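
From memory, and greatly simplified, the core of the Slashdot approach is that crowd-assigned scores plus a reader-chosen threshold do the filtering. The sketch below is only a caricature of that idea (the real system layers karma, randomly allocated moderation points and meta-moderation on top), and the comments, authors and scores are invented for illustration.

```python
# Each comment carries a crowd-assigned score (Slashdot's range is roughly -1 to 5).
comments = [
    {"author": "anon", "score": -1, "text": "flamebait"},
    {"author": "regular", "score": 2, "text": "useful correction"},
    {"author": "expert", "score": 5, "text": "insightful war story"},
]

def visible_comments(comments, threshold=1):
    """Hide anything the crowd has scored below the reader's chosen threshold.

    Readers tune `threshold` themselves: 0 or below shows nearly everything,
    4 or 5 shows only the most highly moderated comments.
    """
    return sorted(
        (c for c in comments if c["score"] >= threshold),
        key=lambda c: c["score"],
        reverse=True,
    )

for c in visible_comments(comments, threshold=1):
    print(f'({c["score"]:+d}) {c["author"]}: {c["text"]}')
```

The point is less the mechanism than who runs it: the readers, collectively and individually, rather than a single owner or an opaque algorithm.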

Address of the bookmark: http://shirky.com/writings/group_user.html

Scrivener

I’ve started using this to write a couple of books I’m working on and thoroughly recommend it to anyone with large amounts of writing to do. It is remarkably intuitive and natural to use, and remarkably powerful as a means of organising thoughts, keeping notes, incorporating texts and much much more, as well as providing neat distraction-free modes for actual writing. It’s not open source but pricing is very reasonable, especially if you are a student or academic – less than $40 – and it is available for Mac and Windows.

It’s primarily a tool to support the writing process, not for finished drafts. It can be used to generate pretty decent simple-ish output, but the idea is to export the results to a word-processor or desktop publisher to do final tweaks.

The only big problem I have with it is that it doesn’t neatly integrate with reference managers, reflecting its origins as a tool for authors of fiction, novels, screenplays and the like. Sure, you can insert relevant codes from things like EndNote, Papers or Zotero, then format bibliographies and so on when you export the document, but it’s clunky and unintuitive, and not at all friendly or flexible. I’m really hoping that an update with such support arrives soon, because this is going to be a real pain as I go on. On the other hand, it has great annotation and reference tools that can be pulled in to do part of that job, so it is not a complete showstopper, but it’s a major omission.

Address of the bookmark: http://www.literatureandlatte.com/scrivener.php