iProspect Social Networking User Behavior Study

http://community.brighton.ac.uk/jd29/weblog/25584.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1365

A fascinating glimpse into the behaviour of people finding their way to, and interacting on, social networking sites in 2007.

One little snippet of information that particularly interests me:

“MySpace (68%), YouTube (65%) and FaceBook (42%) are all visited by a greater percentage of the 18-24 year old user population, as well as more frequently than other age groups. They also search YouTube (81%) and MySpace (41%) for entertainment purposes more than other age groups. And finally, 18-24 year olds post comments on MySpace (56%), YouTube (31%) and Amazon (30%) more than other age groups.”

That’s a really high level of engagement. This is another little bit of evidence that there is something different about the younger generation and the ways that they expect to interact with Web content.

There are a couple of issues here that have me pondering.

Firstly, I and many others already try to use this propensity in teaching, but often without the levels of voluntary engagement seen on social sites. I think that this relative lack of participation might have a fair bit to do with ownership and motivation, though also with self-confidence: contributions on a social site are made from a position of strength (we wouldn’t comment unless we thought we knew something), whereas when learning something we are, by definition, less sure of ourselves. If we want engagement, it is therefore important to build educational activities that sit within our learners’ comfort zones.

Secondly, and more worryingly, this reminds me that formal education is about attempting to perpetuate (and, hopefully, evolve) the cultural values of academics. Unfortunately we increasingly find ourselves in situations where the values that we have learnt are no longer applicable. Like the scribes whose livelihoods were taken away by the printing press, we seek to preserve and perpetuate a way of thinking that does not compete well in an ecosystem driven by dialogue and collective cognition. We old folk are still living in the era of publication, not participation. We trundle along to conferences and subscribe to journals that perpetuate the old cycles of peer-reviewed papers not because we are sure it is the best way to do things, but because we have always done things that way. We are both drivers and driven: as academics, our jobs often depend on this.

It is like still doing all of our correspondence through the postal service when the rest of the world is using email and IM. Sure, like the scribes, what we produce may be beautiful and well-crafted. But it makes little sense to do it when the fast-moving, fast-thinking part of the world is engaged in agile, interactive, participative communication.

In my fuddy-duddy way I would not like to completely lose the intellectual rigour and richness of the old ivory towers: there will always be a need for this. On the other hand, I would like to embrace the multi-valued, sometimes chaotic, always vibrant collective, cooperative and confrontational melange of the Participative Web. This is a move from knowledge absorption to ubiquitous knowledge construction, where individual authority is just a piece in the mosaic. Sure, when looked at as a pool of information, it is unreliable, eclectic and hard to fathom. But the point is that it is not a pool of information. It is an ocean of dialogue. As academics, we need to swim there, or we will surely drown. Or, worse, we might sit on the beach bemoaning the recklessness of the young swimmers as the rising tide starts to lap around our feet.
Created: Tue, 13 May 2008 18:23:51 GMT

A History of the Social Web – Trebor Scholz

http://community.brighton.ac.uk/jd29/weblog/22806.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1362

A good history that aims to capture most things of importance in the evolution of the Internet towards a social medium. I particularly like the fact that a good deal of attention is paid to what happened in the last century: refreshing after so much collective forgetfulness.

As seems increasingly to be the case with web publications, it is a work in progress, a scholarly version of perpetual beta. This open-endedness, which invites sociability and dialogue, is far more interesting than the closed peer reviews of the past: it makes readers into real contributors and allows us all to explore multiple conceptions that we might otherwise miss (and allows a little condescending smile now and then when someone posts something stupid, reassuring us a little of our own progress along this learning path). Whether we engage or not, the simple fact that we can makes for a richer construction of knowledge. It is like Holmberg's internal didactic conversation, except that this one can be real if we wish. That potential transforms how we read: not only is it an attitudinal issue, it gives a whole new level of control over the learning process. Not sure what the author means? Ask. Want to explore an issue in more detail? Do it.
Created: Thu, 03 Apr 2008 17:19:53 GMT

Social networking sites on the decline

http://community.brighton.ac.uk/jd29/weblog/22529.html

Robert Cringely predicts the imminent and surprisingly rapid demise of the social networking phenomenon. He is, of course and as usual, right. The writing is clearly on the wall – even the evil empire of Facebook is losing users. Poor old AOL, yet again getting in at the tail end of a storm with its acquisition of Bebo. And so the phenomenon that has scratched gaping holes in my time and patience for so long is on the way out. Not soon enough, I reckon. At least, in the main. It is a twisted variant of the tragedy of the commons, played out again and again. Instead of grazing sheep on a common, it is our attention and goodwill that are being eaten away. I suffer a death by a thousand cuts as my friends and my 'friends' compete for my attention with both meaningful and meaningless communication. Email is more than enough to do that already, but the big social networking sites supply their own twist, offering mass production of demanding drivel that takes no more thought than the click of a button. What makes them brilliant is also what will kill them, as surely as the sheep on the common will kill the grass that feeds them. Sure, most offer some control over what I receive, who I receive it from and whether they are my 'friend', but social pressures make it hard to reject people without them feeling slighted.

Some are better than others: those with a clear and undiluted focus (e.g. LinkedIn) are far less annoying than the general-purpose sites. Others are built for specific communities: Elgg, in particular, springs to mind. The trouble is, Elgg is not federated to any great extent. There is simple import and even simpler export through open standards like RSS and of course HTML, but no deep intertwingling of Elgg sites.
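To make the point concrete, shallow RSS-based federation is almost trivial: pull the headlines from two sites and merge them. A minimal sketch using only the Python standard library; the feed URLs are hypothetical stand-ins for two Elgg installations.

```python
# A minimal sketch of shallow RSS 'federation': pull headlines from two
# hypothetical Elgg sites and merge them into one list. This is exactly the
# bridge-level integration described above: titles and links cross the
# boundary, but none of the deeper social structure does.
import urllib.request
import xml.etree.ElementTree as ET

FEEDS = [  # hypothetical Elgg weblog feeds, for illustration only
    "https://elgg.example.edu/weblog/rss",
    "https://community.example.org/weblog/rss",
]

def fetch_items(url):
    """Yield (title, link) pairs from a standard RSS 2.0 feed."""
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    for item in tree.iterfind(".//item"):
        yield item.findtext("title", ""), item.findtext("link", "")

merged = [entry for url in FEEDS for entry in fetch_items(url)]
for title, link in merged:
    print(f"{title} -> {link}")
```

Deep intertwingling – shared identity, shared social graphs, shared permissions – is precisely the part that no amount of feed-merging like this provides.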

The only one of the big ones that I have a lot of time for is Ning, which does what they all should do in parcellating its landscape into rich and diverse niches, almost none of which has any great value in itself but which, as members of the ecosystem, contribute to the richness of the whole and can pass on their genes (with some mutations) to others when they die. The only problem with Ning is that it is a single site, which may be its ultimate undoing. As Robert Cringely notes, the business models for these things are decidedly shaky at best. What we really need is a distributed Ning, with open APIs that offer flexibility and customisation at low cost, and trustworthy standards-based transfer of identity between systems. I have just started looking at Noserub (thanks to Brian Kelly for pointing me to this), which seems to be moving in the right direction, though it is still rather incomplete (e.g. no support for OpenSocial) and as yet pays insufficient attention to issues of trust and privacy. I don't know if it has the momentum to really succeed, but it, or something like it, is what we need if we are to build truly social networks, with the power and controllability necessary to develop rich social ecologies.

The Downside of a Good Idea

http://community.brighton.ac.uk/jd29/weblog/22107.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1361

More (slightly indirect) evidence that parcellation is needed to build rich and diverse learning environments. In essence, big, maximally connected groups solve simple well-defined problems better, but groups organised as a small-world network are far more effective for more complex issues. Not only does this resonate perfectly with one of the key principles I developed in my book, it helps to put another nail in the coffin of crazy, evil and pernicious ideas like national curricula.

Big and undifferentiated is inefficient and counter-productive. On the other hand, so is small, for different reasons. Middle-sized offers the worst of both worlds. What we need are small parcellated clusters, weakly connected.
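To see the structural claim in numbers, it helps to compare a maximally connected group with a small-world network of the same size. A minimal sketch using the networkx library's Watts-Strogatz generator; the parameter values are illustrative only, not taken from the cited study.

```python
# Compare a maximally connected group with a small-world network of the
# same size. The small-world graph keeps clustering high (local
# parcellation) with a tiny fraction of the ties, at the cost of only
# modestly longer paths (the weak connections between clusters).
import networkx as nx

n = 100
complete = nx.complete_graph(n)                              # big, maximally connected
small_world = nx.connected_watts_strogatz_graph(n, k=6, p=0.1)  # small clusters, weakly connected

for name, g in [("complete", complete), ("small-world", small_world)]:
    print(name,
          "edges:", g.number_of_edges(),
          "clustering: %.2f" % nx.average_clustering(g),
          "avg path length: %.2f" % nx.average_shortest_path_length(g))
```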
Created: Thu, 28 Feb 2008 07:52:06 GMT

Wikipedia, collectives and connectives

http://community.brighton.ac.uk/jd29/weblog/22071.html

There has been a bit of a flurry of activity lately relating to the notion of the collective, following the recent Horizon report on the future of learning. It is notable that this flurry centres on George Siemens and Stephen Downes, and my good friend Terry Anderson, all of whom are very well connected. This is great – these issues are huge. As well as making some interesting observations on the collective, George Siemens talks of his preference for connective intelligence. I love the phrase, but I think that George is using his gift for inventing brilliant memes a little dangerously here: this is not connective intelligence at all. This is a bunch of people learning together using the network. I don't think that we can use the term 'intelligence' in this context. However, we can and probably should do so when talking about collectives, because they are far more distinct actors in the system. We can talk quite intelligently about a collective, but it makes little sense to talk about a 'connective' (at least when referring to a network of people).

One of the biggest problems affecting this recent discussion is that of defining the notion of the collective. It is, unfortunately, a term that comes with a lot of baggage, not all of it useful or helpful in this recent exchange of ideas. For some people it carries bad associations with communist thinking. Not useful. Worse still, there is a rich vein of literature about collective intelligence which is largely hogwash and wishy-washy thinking, with no scientific value and weak philosophical foundations. Again, a pity. On the other hand, there is Star Trek. The writers of the Horizon report are more influenced by Star Trek than by the rest of these ideas, and this is as it should be. We are talking about the Borg here. In the world of Star Trek, the collective is an entity composed of multiple individuals connected by a vaguely described network technology, allowing them in some ways to think as one.

What is significant here is that the collective intelligence is an engine driven by algorithms and rules. Collectives in this sense use machine intelligence to amplify human intelligence or vice versa: recommender systems (e.g. Google's PageRank), automated reputation systems (e.g. Slashdot's Karma points), tag clouds (e.g. everywhere) and collaborative filters (e.g. Amazon recommendations) all fit the bill. The human element is the fuel, not the engine, of that intelligence. You could take away or replace any of the individuals without destroying the intelligence of the collective, though it might think differently. Bad rules and algorithms will lead to poor intelligence.

The 'wisdom of crowds' has an unnerving and common corollary: the 'stupidity of mobs'. Consider, for instance, the current presidential primaries, in which it has been shown that early voters have up to 20 times the influence of later voters (http://www.brown.edu/Administration/News_Bureau/2007-08/07-0 but the original article is well worth reading). This influence is, to simplify slightly, a combination of both network behaviours (e.g. the influence of friends and acquaintances) and collective behaviours (most notably the counting of votes). A few simple rules that introduced delay into the reporting of results would largely compensate for collective madness. Connective madness is harder to guard against, as memes and similar ideas spread more easily from person to person, so perhaps we need to look at approaches used to contain epidemics to prevent the spread of stupidity. Or we could just put it under the control of smart people.
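To make the fuel-versus-engine distinction concrete, here is a minimal sketch of an item-based collaborative filter of the 'people who liked this also liked' kind; the ratings data are invented. Swap any individual out of the matrix and the engine still runs; it just thinks a little differently.

```python
# A toy collaborative filter: individuals supply ratings (the fuel);
# cosine similarity between item vectors is the engine that turns those
# discrete acts into recommendations. Replace any user and the engine
# carries on regardless. All data are invented for illustration.
from math import sqrt

ratings = {  # user -> {item: rating}
    "ann":   {"book_a": 5, "book_b": 4},
    "bob":   {"book_a": 4, "book_c": 5},
    "carol": {"book_b": 5, "book_c": 4},
}

def item_vector(item):
    """The item's ratings, keyed by the users who rated it."""
    return {user: r[item] for user, r in ratings.items() if item in r}

def cosine(u, v):
    shared = set(u) & set(v)
    dot = sum(u[k] * v[k] for k in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

items = {i for r in ratings.values() for i in r}
# Items most similar to book_a: "people who liked this also liked..."
scores = sorted(((cosine(item_vector("book_a"), item_vector(i)), i)
                 for i in items if i != "book_a"), reverse=True)
print(scores)
```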

Which leads me to the problem of thinking of Wikipedia as an example of collective intelligence.

For some, including Brighton's fine Tara Brabazon and the (far less fine) Andrew Keen, Wikipedia is the work of the devil, an unreliable, uncontrollable beast sapping away the next generation's ability to reason and think, replacing depth with a shallow and messy breadth that treats Star Trek with greater reverence than Shakespeare. They are wrong for all sorts of reasons, not least of which is their mistaken premiss that what was good for us will be good for the next generation. They are also wrong because they see its use as almost identical to that of a traditional encyclopaedia, whereas it is far more of a jumping-off point, a learning tool, an entry into a subject, not a source of definitive knowledge. If our students and kids see it differently, it is our fault for not making that clear. However, the anti-Wikipedians are really wrong about its reliability too. Sure, anyone can post any old nonsense but, by and large, you have to search hard to find it. And, while soft security does play a role in keeping it that way, there is more lurking under Wikipedia's skin than some popular writers give it credit for. Their fundamental miscomprehension of its power is understandable given that some of its acolytes are equally confused about how it works.

Wikipedia is only partially a collective venture and, from most perspectives, this is not the main part. First, let's get what is collective out of the way. No one designs Wikipedia's index: it is an entirely emergent feature. In some ways, titles of articles are like tags: user-generated metadata that emerges from the bottom up. It could in fact be presented as a tag cloud, with font size related to page views rather than number of uses because, links from other pages aside, article titles tend not to be re-used (a sketch of this follows below). Other collective features include the option to see what links here, and links to what is currently interesting. This is all done by a bunch of simple collective algorithms that combine the discrete actions of individuals to provide a bottom-up structure. There are a few fairly informal rules about behaviour that also contribute. Notably, as Benkler points out in The Wealth of Networks, there is an underlying ideology of making the articles as unbiased as possible, a principle that spreads by example more than by instruction.
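As promised, a minimal sketch of that tag-cloud presentation, with invented page-view counts and the usual logarithmic scaling so that enormously popular articles don't drown out the rest.

```python
# Render article titles as a tag cloud, sized by page views rather than
# frequency of use, as suggested above. Log scaling keeps very popular
# pages from swamping the rest. The counts are invented for illustration.
from math import log

page_views = {"Borg": 90_000, "Shakespeare": 40_000, "Stigmergy": 1_200}

MIN_PT, MAX_PT = 10, 36
lo, hi = min(page_views.values()), max(page_views.values())

def font_size(views):
    """Map a view count onto the MIN_PT..MAX_PT range on a log scale."""
    scale = (log(views) - log(lo)) / (log(hi) - log(lo))
    return round(MIN_PT + scale * (MAX_PT - MIN_PT))

for title, views in sorted(page_views.items()):
    print(f'<span style="font-size:{font_size(views)}pt">{title}</span>')
```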

There is a fair bit of connective behaviour going on too. Discussion pages help to keep things on track, using network processes that rely on people either achieving consensus or at least identifying where they differ within the page. This is loose connective stuff on the whole, with small, variably committed and often transient communities forming through a shared interest in topics. There is also a certain amount of connection between the elite who are responsible for much of the content. We also know that some people are driven to contribute due to a desire for social capital. However, the network is not the biggest driver either.

The content creation itself is very much an individual activity and has very little to do with collective behaviour. Individuals decide on the length and subject, and of course they actually write the stuff. At a fine granularity, articles are made by individuals, albeit often more than one. They have many motivations. However, even they are not the primary locus of control.

Wikipedia is structurally a highly top-down system. Structure influences (and sometimes determines) behaviour. Large, slow-moving structural features create the context for what can happen there. There are many examples of top-down hierarchical control in Wikipedia: for instance, the featured content, the application of automated algorithms to identify poorly cited or contentious articles, the alphanumeric format of the index, and the fact that Jimmy Wales and his crew of administrators have ultimate control, which is exercised regularly and often. And let's not forget the structure of the system itself, most significantly its interaction design, functions and operations (including those notable by their absence) and interface, which has a great role to play in determining the forms that emerge. The use of logins, and the structure of lists, featured content, glossaries, portals, timelines and even a hint of Dewey and the Library of Congress, not to mention the numerous automated systems that check for references, reliability and so on, all make this a highly controlled venture. It is a different kind of control from what we might find in an old-fashioned encyclopaedia, but it is control all the same. For instance, in the previously linked article in Wikipedia on the Borg, we see…

If this is not top-down control, I don't know what is. Sure, laws are rarely enforced, but there is a great deal of persuasion going on. And this stuff is everywhere. All of which contributes greatly to the reliability and effectiveness of Wikipedia, but relatively little of which has much to do with collective or connective intelligence. Wikipedia's designers, managers and administrators are like intellectual dairy farmers, milking the herd to provide us all with a good, sustaining slug of high-quality knowledge.

Digg, Wikipedia, and the myth of Web 2.0 democracy. – By Chris Wilson – Slate Magazine

http://community.brighton.ac.uk/jd29/weblog/21996.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1360

Yet another article discussing the (less than surprising) fact that social sites such as Digg, Wikipedia and Slashdot are not purely crowd-driven applications but rely on small cliques, rules and algorithms to succeed. The top-down vs bottom-up issue appears to be the flavour of the year.

What I find interesting about many of the examples given is that they are instances of what Terry Anderson and I have been calling 'the collective'. It is the combination of individual (not always explicitly connected) acts with algorithms or rules that gives these systems their power. A crowd left to its own devices is typically dumb, for all sorts of structural reasons such as the Matthew Principle, the effects of priority and unbridled stigmergy. It is only when explicit mechanisms are in place, such as delay, evolutionary filtering, reputation systems and parcellating algorithms, that the crowd becomes smart.
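As an entirely hypothetical illustration of two of those mechanisms, the sketch below aggregates votes with reputation weighting and a delay window, so that fresh votes cannot snowball into premature consensus. All names and numbers are invented.

```python
# A toy illustration of two of the mechanisms named above: reputation
# weighting and delay. Votes cast inside the delay window are counted but
# not yet revealed, damping the early-voter snowball; each vote is
# weighted by the voter's reputation. All data are invented.
from dataclasses import dataclass

DELAY_HOURS = 24  # votes younger than this are held back from the total

@dataclass
class Vote:
    voter: str
    value: int        # +1 or -1
    hours_ago: float

reputation = {"ann": 2.0, "bob": 1.0, "carol": 0.5}  # invented reputations

votes = [Vote("ann", +1, 30), Vote("bob", -1, 26), Vote("carol", +1, 2)]

def visible_score(votes):
    """Weighted score over votes old enough to have cleared the delay."""
    return sum(reputation.get(v.voter, 1.0) * v.value
               for v in votes if v.hours_ago >= DELAY_HOURS)

print(visible_score(votes))  # carol's fresh vote is held back: 2.0 - 1.0 = 1.0
```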
Created: Sat, 23 Feb 2008 23:20:41 GMT

Video: Clay Shirky on Love, Internet Style

http://community.brighton.ac.uk/jd29/weblog/21991.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1359

The brilliant Clay Shirky explaining how social software, and communication and coordination tools in general, help to make love a renewable building material, and how Perl is like a 1300-year-old Shinto shrine, aggregating caring into something stable and long-lasting. As usual, he is so right.
Created: Fri, 22 Feb 2008 21:22:08 GMT

Like ants, humans are easily led – Telegraph

http://community.brighton.ac.uk/jd29/weblog/21863.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1358

Reporting findings from Utrecht that not only do people tend to follow the leader (nothing new here), but that they will repeat sub-optimal paths even when informed of alternative routes. It seems that mob stupidity sticks! This has some interesting potential implications for allowing the crowd to teach itself using social navigation: even if the path is palpably wrong, it may get reinforced.
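A toy simulation makes the worry vivid: give the worse of two paths a small head start and simple follow-the-crowd reinforcement will usually lock it in. All of the numbers below are invented for illustration.

```python
# A toy model of social-navigation lock-in: walkers choose between a short
# and a long path in proportion to prior traffic. Seeding the *long* path
# with a few early walkers is usually enough for the crowd to reinforce
# the worse choice indefinitely. All numbers are invented.
import random

random.seed(1)
traffic = {"short": 1, "long": 5}  # the bad path got there first

for _ in range(1000):
    total = traffic["short"] + traffic["long"]
    choice = "short" if random.random() < traffic["short"] / total else "long"
    traffic[choice] += 1

print(traffic)  # the palpably worse path typically ends up dominant
```

This is essentially a Pólya urn: early advantage compounds, regardless of which path is actually better.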
Created: Sun, 17 Feb 2008 20:05:23 GMT

Donald Clark on OpenLearn (or is it LearningSpace?)

http://community.brighton.ac.uk/jd29/weblog/21851.html

http://donaldclarkplanb.blogspot.com/2008/02/openlearn-another-document-dump

Donald turns his attention to the UKOU's attempt at open courseware. It is sobering reading. Despite the investment of millions, the result is less than stellar, not least because of the embarrassing course materials (which, incidentally, they should allow the community to contribute to and improve). This is a pity in many ways. The OU has done an interesting job of integrating some of the wonderful social tools it has been developing over the past few years (everything from collaborative knowledge maps to webinars to geographical presence indicators to vlogging, not to mention tag clouds and discussion forums) and it ought to be great – this has the makings of a self-organising learning environment. Maybe it will get better as more people use it – it was a bit disappointing to find no discussion, no knowledge maps and no other people present in any of the courses that I looked at – but I doubt it, at least in its current form.

The tools are great and the presentation is (mostly) fine, but there is something missing. I think it is a problem of integration. This is not so much a mash-up or a blend as an assembly. The tools are linked very loosely and, with a couple of exceptions, don't adjust to the context, so you can be looking at a course on computer security but seeing users of the whole site. Or you can click the Flashmeeting link and see a list of recordings of all presentations, not just those that relate to where you are. Or chat with people who may have quite different needs and interests. While it is important to have bridges and isthmuses between distinct ecosystems, this site provides nothing but bridges. I think they have entirely failed to achieve proper parcellation.
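The fix is conceptually small. A purely hypothetical sketch of what context-sensitive integration might look like: every tool is queried through the learner's current course context rather than site-wide.

```python
# A hypothetical sketch of the missing parcellation: presence and
# recordings are filtered through the learner's current context (here, a
# course id) instead of being shown site-wide. Names and data are invented.
site_presence = [
    {"user": "ann", "course": "computer-security"},
    {"user": "bob", "course": "romantic-poetry"},
]
recordings = [
    {"title": "Firewalls webinar", "course": "computer-security"},
    {"title": "Keats seminar", "course": "romantic-poetry"},
]

def in_context(items, course):
    """Show only the slice of a site-wide tool relevant to this course."""
    return [i for i in items if i["course"] == course]

course = "computer-security"
print(in_context(site_presence, course))  # only fellow security students
print(in_context(recordings, course))     # only recordings for this course
```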

The site feels very raw, fresh and unfinished. Hopefully these problems will go away as they start to think more about what all these wonderful tools are for. Unfortunately, because it is not very useful yet, I think that it is fairly likely that many people will not bother to come back.

Kevin Kelly — The Bottom is Not Enough

http://community.brighton.ac.uk/jd29/weblog/21840.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1357

I love Kevin Kelly. He has been one of the most consistently inspiring writers that I know of for decades. In this article he starts to explore the balance of top-down and bottom-up needed to take advantage of the hive mind.

“pure unadulterated dumb mobs is the easiest, perhaps least interesting new space in the entire constellation of possibilities. More potent, more unknown, are the many other combinations of everyone and someone.”

This is great, but it seems to me that we have never seen a pure hive mind. Even the most bottom-up of social systems (say, Google Search) is a combination of top-down algorithms and bottom-up control. As KK says, Wikipedia is far from purely crowd-driven. Not only is there the elite that he highlights, there are also engineered processes and a host of automated systems that help to keep the encyclopaedia more or less on track. But he is right – discovering balances of top-down and bottom-up that work will be one of the most important research challenges from now on. In fact, it has been since the first social systems started to emerge in the 1990s. It is only recently that we have started to notice.
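Since Google Search is the example, PageRank itself illustrates the mix nicely: a rigid top-down iteration rule fuelled entirely by bottom-up linking decisions. A minimal power-iteration sketch over an invented four-page web.

```python
# PageRank in miniature: the links (bottom-up, made by individual authors)
# are the fuel; the damping factor and iteration rule (top-down, fixed by
# the designers) are the engine. The four-page web below is invented.
links = {  # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

DAMPING = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # power iteration towards the fixed point
    new = {p: (1 - DAMPING) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += DAMPING * rank[p] / len(outs)
    rank = new

print({p: round(rank[p], 3) for p in sorted(rank)})
```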
Created: Sat, 16 Feb 2008 04:55:03 GMT