Social networking sites on the decline

http://community.brighton.ac.uk/jd29/weblog/22529.html

Robert Cringely predicts the imminent and surprisingly rapid demise of the social networking phenomenon. He is, of course and as usual, right. The writing is clearly on the wall – even the evil empire of Facebook is losing users. Poor old AOL, yet again getting in at the tail end of a storm with its acquisition of Bebo. And so the phenomenon that has scratched gaping holes in my time and patience for so long is on the way out. Not soon enough, I reckon. At least, in the main.

It is a twisted variant of the tragedy of the commons, played out again and again. Instead of grazing sheep on a common, it is our attention and good will that are being eaten away. I suffer a death of a thousand knives as my friends and my 'friends' compete for my attention with both meaningful and meaningless communication. Email is more than enough to do that already, but the big social networking sites supply their own twist, offering mass-production of demanding drivel that takes no more thought than the click of a button. What makes them brilliant is also what will kill them, as surely as the sheep on the common will kill the grass that feeds them. Sure, most offer some control over what I receive, who I receive it from and whether they are my 'friend', but social pressures make it hard to reject people without them feeling slighted.

Some are better than others. Those with a clear and undiluted focus (e.g. LinkedIn) are far less annoying than the general-purpose sites. Others are built for specific communities: Elgg, in particular, springs to mind. The trouble is, Elgg is not federated to any great extent. There is simple import and even simpler export through open standards like RSS and of course HTML, but no deep intertwingling of Elgg sites.

The only one of the big ones that I have a lot of time for is Ning, which does what they all should do: parcellating its landscape into rich and diverse niches, almost none of which has any great value in itself but each of which, as a member of the ecosystem, contributes to the richness of the whole and can pass on its genes (with some mutations) to others when it dies. The only problem with Ning is that it is a single site, which may be its ultimate undoing. As Robert Cringely notes, the business models for these things are decidedly shaky at best. What we really need is a distributed Ning, with open APIs that offer flexibility and customisation at low cost, and trustworthy standards-based transfer of identity between systems. I have just started looking at Noserub (thanks to Brian Kelly for pointing me to this), which seems to be moving in the right direction, though it is still rather incomplete (e.g. no support for OpenSocial) and as yet pays insufficient attention to issues of trust and privacy. I don't know if it has the momentum to really succeed, but it, or something like it, is what we need if we are to build truly social networks, with the power and controllability necessary to develop rich social ecologies.
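By way of illustration only: the 'shallow' federation we already have is little more than feed aggregation. Here is a minimal sketch, assuming placeholder feed URLs and the widely used feedparser library, of pulling activity from several independent sites into a single stream. What it leaves out (portable identity, trust and privacy) is precisely the hard part.

```python
# A toy aggregator: pull activity from several independent sites via plain RSS/Atom,
# roughly the shallow federation that already works today.
# The feed URLs are placeholders, not real endpoints.
import feedparser  # pip install feedparser

FEEDS = [
    "https://example-elgg-site.org/activity/rss",
    "https://example-ning-network.org/feed/atom",
]

def aggregate(feeds):
    items = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            items.append({
                "source": url,
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
            })
    return items

if __name__ == "__main__":
    for item in aggregate(FEEDS):
        print(item["source"], "->", item["title"])
```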

The Downside of a Good Idea

http://community.brighton.ac.uk/jd29/weblog/22107.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1361

More (slightly indirect) evidence that parcellation is needed to build rich and diverse learning environments. In essence, big, maximally connected groups solve simple well-defined problems better, but groups organised as a small-world network are far more effective for more complex issues. Not only does this resonate perfectly with one of the key principles I developed in my book, it helps to put another nail in the coffin of crazy, evil and pernicious ideas like national curricula.

Big and undifferentiated is inefficient and counter-productive. On the other hand, so is small, for different reasons. Middle-sized offers the worst of both worlds. What we need are small, parcellated clusters, weakly connected.
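The contrast is easy to see with a few lines of code. A toy sketch (my own illustration, not the study's actual model) using the networkx library to compare a maximally connected group with a Watts-Strogatz small-world graph: the small-world structure keeps paths short and clusters tight with a tiny fraction of the connections.

```python
# Compare a maximally connected group with a weakly connected cluster structure.
# Toy parameters; illustrative only.
import networkx as nx

N = 60
complete = nx.complete_graph(N)                                  # everyone connected to everyone
small_world = nx.connected_watts_strogatz_graph(N, k=6, p=0.1)   # tight clusters plus a few long-range ties

for name, g in [("complete", complete), ("small-world", small_world)]:
    print(name,
          "| edges:", g.number_of_edges(),
          "| avg path length:", round(nx.average_shortest_path_length(g), 2),
          "| clustering:", round(nx.average_clustering(g), 2))
```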
Created:Thu, 28 Feb 2008 07:52:06 GMT

Wikipedia, collectives and connectives

http://community.brighton.ac.uk/jd29/weblog/22071.html

There has been a bit of a flurry of activity lately relating to the notion of the collective, following a recent Horizon report on the future of learning. It is notable that this flurry centres on George Siemens and Stephen Downes, and my good friend Terry Anderson, who are all very well connected. This is great – these issues are huge. As well as making some interesting observations on the collective, George Siemens talks of his preference for connective intelligence. I love the phrase, but I think that George is using his gift for inventing brilliant memes a little dangerously here: this is not connective intelligence at all. This is a bunch of people learning together using the network. I don't think that we can use the term 'intelligence' in this context. However, we can and probably should do so when talking about collectives, because they are far more distinct actors in the system. We can talk quite intelligently about a collective, but it makes little sense to talk about a 'connective' (at least when referring to a network of people).

One of the biggest problems affecting this recent discussion is that of defining the notion of collective. It is, unfortunately, a term that comes with a lot of baggage, not all of it useful or helpful in this recent exchange of ideas. For some people it comes with bad associations with communist thinking. Not useful. Worse still, there is a rich vein of literature about collective intelligence which is largely hogwash and wishy-washy thinking with no scientific value and weak philosophical foundations. Again, a pity.

On the other hand, there is Star Trek. The writers of the Horizon report are more influenced by Star Trek than by the rest of these ideas, and this is as it should be. We are talking about the Borg here. In the world of Star Trek, the collective is an entity composed of multiple individuals but connected by a vaguely described network technology, allowing them in some ways to think as one. What is significant here is that the collective intelligence is an engine driven by algorithms and rules. Collectives in this sense use machine intelligence to amplify human intelligence or vice versa: recommender systems (e.g. Google's PageRank), automated reputation systems (e.g. Slashdot's Karma points), tag clouds (e.g. everywhere) and collaborative filters (e.g. Amazon recommendations) all fit the bill. The human element is the fuel, not the engine of that intelligence. You could take away or replace any of the individuals without destroying the intelligence of the collective, though it might think differently. Bad rules and algorithms will lead to poor intelligence.

The 'wisdom of crowds' has an unnerving and common corollary: the 'stupidity of mobs'. Consider, for instance, the current presidential primaries, in which it has been shown that early voters have up to 20 times the influence of later voters (see http://www.brown.edu/Administration/News_Bureau/2007-08/07-0 though the original article is well worth reading). This influence is, to simplify slightly, a combination of network behaviours (e.g. the influence of friends and acquaintances) and collective behaviours (most notably the counting of votes). A few simple rules that would introduce delay to the reporting of results would largely compensate for collective madness. Connective madness is harder to guard against, as memes and similar ideas are spread more easily from person to person, so perhaps we need to look at approaches to containing epidemics to prevent the spread of stupidity. Or we could just put it under the control of smart people.
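To make the 'delay' rule concrete, here is a minimal sketch of the sort of mechanism I have in mind (my own toy example, not a model of any real primary): votes are recorded continuously, but no running totals are published until an embargo window has passed, so early votes cannot snowball into later ones.

```python
# A toy "delay" rule: collect votes continuously but publish totals only after an
# embargo window has closed, so early votes cannot influence later ones.
import time
from collections import Counter

class DelayedTally:
    def __init__(self, embargo_seconds):
        self.embargo = embargo_seconds
        self.opened_at = time.time()
        self.votes = Counter()

    def vote(self, choice):
        self.votes[choice] += 1          # every vote is always recorded...

    def results(self):
        if time.time() - self.opened_at < self.embargo:
            return None                  # ...but nothing is reported until the window closes
        return dict(self.votes)
```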

Which leads me to the problem of thinking of Wikipedia as an example of collective intelligence.

For some, including Brighton's fine Tara Brabazon and the (far less fine) Andrew Keen, Wikipedia is the work of the devil, an unreliable uncontrollable beast sapping away the next generation's ability to reason and think, replacing depth with a shallow and messy breadth that treats Star Trek with greater reverence than Shakespeare. They are wrong for all sorts of reasons, not least of which is their mistaken premiss that what was good for us will be good for the next generation. They are also wrong because they see its use as almost identical to that of a traditional encyclopaedia, whereas it is far more of a jumping-off point, a learning tool, an entry into a subject, not a source of definitive knowledge. If our students and kids see it differently, it is our fault for not making that clear. However, the anti-Wikipedians are really wrong about its reliability too. Sure, anyone can post any old nonsense but, by and large, you have to search hard to find it. And, while soft security does play a role in keeping it that way, there is more lurking under Wikipedia's skin than some popular writers give it credit for. Their fundamental miscomprehension of its power is understandable given that some of its acolytes are equally confused about how it works.

Wikipedia is only partially a collective venture and, from most perspectives, this is not the main part. First, let's get what is collective out of the way. No one designs Wikipedia's index: it is an entirely emergent feature. In some ways, titles of articles are like tags: user-generated metadata that emerge from the bottom up. It could in fact be presented as a tag cloud, with font size related to page views rather than numbers of uses, because, apart from links from other pages, article titles tend not to be re-used. Other collective features include an option to see what links here, and links to what is currently interesting. This is all done by a bunch of simple collective algorithms that combine the discrete actions of individuals to provide a bottom-up structure. There are a few fairly informal rules about behaviour that also contribute. Notably, as Benkler points out in The Wealth of Networks, there is an underlying ideology of making the articles as unbiased as possible, a principle that spreads by example more than by instruction.
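That tag-cloud idea is trivial to sketch: scale each title's font size by its page views. The numbers below are invented for illustration.

```python
# Scale each article title's font size by its page views (hypothetical figures).
def tag_cloud(page_views, min_px=10, max_px=36):
    lo, hi = min(page_views.values()), max(page_views.values())
    span = (hi - lo) or 1
    return {title: round(min_px + (views - lo) / span * (max_px - min_px))
            for title, views in page_views.items()}

print(tag_cloud({"Borg": 90_000, "Hamlet": 40_000, "Stigmergy": 2_000}))
# -> {'Borg': 36, 'Hamlet': 21, 'Stigmergy': 10}
```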

There is a fair bit of connective behaviour going on too. Discussion pages help to keep things on track, using network processes that rely on people either achieving consensus or at least identifying where they differ within the page. This is loose connective stuff on the whole, with small, variably committed and often transient communities forming through a shared interest in topics. There is also a certain amount of connection between the elite who are responsible for much of the content. We also know that some people are driven to contribute due to a desire for social capital. However, the network is not the biggest driver either.

The content creation itself is very much an individual activity and has very little to do with collective behaviour. Individuals decide on the length and subject, and of course they actually write the stuff.  At a fine granularity, articles are made by individuals, albeit often more than one. They have many motivations. However, even they are not the primary locus of control.

Wikipedia is structurally a highly top-down system. Structure influences (and sometimes determines) behaviour. Large, slow-moving structural features create the context for what can happen there. There are many examples of top-down hierarchical control in Wikipedia: for instance, the featured content, the application of automated algorithms to identify poorly cited or contentious articles, the alphanumeric format for the index, and the fact that Jimmy Wales and his crew of administrators have ultimate control, which is exercised regularly and often. And let's not forget the structure of the system itself, most significantly its interaction design, functions & operations (including those notable by their absence) and interface, which has a great role to play in determining the forms that emerge. The use of logins, and the structure of lists, featured content, glossaries, portals, timelines and even a hint of Dewey and the Library of Congress, not to mention the numerous automated systems that check for references, reliability and so on, all make this a highly controlled venture. It is a different kind of control than we might find in an old-fashioned encyclopaedia, but it is control all the same. For instance, in the previously linked article in Wikipedia on the Borg, we see…

If this is not top-down control, I don't know what is. Sure, laws are rarely enforced, but there is a great deal of persuasion going on. And this stuff is everywhere. All of which contributes greatly to the reliability and effectiveness of Wikipedia, but relatively little of which has much to do with collective or connective intelligence. Wikipedia's designers, managers and administrators are like intellectual dairy farmers, milking the herd to provide us all with a good, sustaining slug of high-quality knowledge.
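As an aside, the 'automated systems that check for references' mentioned above are easy to imagine in miniature. A toy sketch, my own illustration rather than anything Wikipedia actually runs: flag any chunk of wiki markup whose citation density falls below a threshold.

```python
# Flag wiki-markup paragraphs whose citation density is below a threshold.
# Purely illustrative; not Wikipedia's actual tooling.
import re

def flag_poorly_cited(paragraphs, min_refs_per_100_words=1.0):
    flagged = []
    for i, text in enumerate(paragraphs):
        words = max(len(text.split()), 1)
        refs = len(re.findall(r"<ref[\s>]", text))   # count <ref> citation tags
        if refs / words * 100 < min_refs_per_100_words:
            flagged.append(i)
    return flagged
```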

Digg, Wikipedia, and the myth of Web 2.0 democracy. – By Chris Wilson – Slate Magazine

http://community.brighton.ac.uk/jd29/weblog/21996.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1360

Yet another article discussing the (less than surprising) fact that social sites such as Digg, Wikipedia and Slashdot are not purely crowd-driven applications but rely on small cliques, rules and algorithms to succeed. The top-down vs bottom-up issue appears to be the flavour of the year.

What I find interesting about many of the examples given is that they are instances of what Terry Anderson and I have been calling ‘the collective’. It is the combination of individual (not always explicitly connected) acts with algorithms or rules that gives these systems their power. A crowd left to its own devices is typically dumb, for all sorts of structural reasons such as the Matthew Effect, the effects of priority and unbridled stigmergy. It is only when explicit mechanisms are in place, such as delay, evolutionary filtering, reputation systems and parcellating algorithms, that the crowd becomes smart.
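One of those mechanisms in miniature: reputation-weighted voting, in which a vote counts in proportion to the voter's track record. This is a sketch of the general idea, not Digg's or Slashdot's actual algorithm, and the names and numbers are invented.

```python
# Reputation-weighted voting: each vote is scaled by the voter's reputation.
def score(item_votes, reputation, default_rep=1.0):
    """item_votes: list of (voter, +1 or -1); reputation: voter -> weight."""
    return sum(v * reputation.get(voter, default_rep) for voter, v in item_votes)

votes = [("alice", +1), ("bob", +1), ("mallory", -1)]
rep = {"alice": 2.0, "bob": 1.0, "mallory": 0.2}
print(score(votes, rep))   # 2.0 + 1.0 - 0.2 = 2.8
```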
Created:Sat, 23 Feb 2008 23:20:41 GMT

Video: Clay Shirky on Love, Internet Style

http://community.brighton.ac.uk/jd29/weblog/21991.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1359

The brilliant Clay Shirky explaining how social software, and communication/coordination tools in general, help to make love a renewable building material, and how Perl is like a 1,300-year-old Shinto shrine, aggregating caring into something stable and long-lasting. As usual, he is so right.
Created:Fri, 22 Feb 2008 21:22:08 GMT

Like ants, humans are easily led – Telegraph

http://community.brighton.ac.uk/jd29/weblog/21863.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1358

Reporting findings from Utrecht that not only do people tend to follow the leader (nothing new here), but they will even repeat sub-optimal paths when informed of alternative routes. It seems that mob stupidity sticks! This has some interesting potential implications for allowing the crowd to teach itself using social navigation: even if the path is palpably wrong it may get reinforced.
Created:Sun, 17 Feb 2008 20:05:23 GMT

Donald Clark on OpenLearn (or is it LearningSpace?)

http://community.brighton.ac.uk/jd29/weblog/21851.html

http://donaldclarkplanb.blogspot.com/2008/02/openlearn-another-document-dump

Donald turns his attention to the UKOU's attempt at open courseware. It is sobering reading. Despite the investment of millions, the result is less than stellar, not least because of the embarrassing course materials (which, incidentally, they should allow the community to contribute to and improve). This is a pity in many ways. The OU has done an interesting job of integrating some of the wonderful social tools it has been developing over the past few years (everything from collaborative knowledge maps to webinars to geographical presence indicators to vlogging, not to mention tag clouds and discussion forums) and it ought to be great – this has the makings of a self-organising learning environment. Maybe it will get better as more people use it – it was a bit disappointing to find no discussion, no knowledge maps, no other people present in any of the courses that I looked at – but I doubt it, at least in its current form.

The tools are great and the presentation is (mostly) fine, but there is something missing. I think it is a problem of integration. This is not so much a mash-up or a blend as an assembly. The tools are linked very loosely and, with a couple of exceptions, don't adjust to the context, so you can be looking at a course on computer security but seeing users of the whole site. Or you can click the Flashmeeting link and see a list of recordings of all presentations, not just those that relate to where you are. Or chat with people who may have quite different needs and interests. While it is important to have bridges and isthmuses between distinct ecosystems, this site provides nothing but bridges. I think they have entirely failed to achieve proper parcellation.
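The fix is conceptually simple, even if the engineering is not: scope each tool's view by the learner's current context. A minimal sketch under an entirely hypothetical data model (nothing to do with OpenLearn's real internals):

```python
# Show only the activity that belongs to the learner's current course context.
# Hypothetical data model; course ids and events are invented.
def visible_activity(activity_stream, course_id):
    return [item for item in activity_stream if item.get("course") == course_id]

stream = [
    {"course": "security-101", "user": "ann",  "what": "posted to the forum"},
    {"course": "geology-101",  "user": "bob",  "what": "joined a Flashmeeting"},
    {"course": "security-101", "user": "carl", "what": "edited a knowledge map"},
]
print(visible_activity(stream, "security-101"))   # only the two security-course items
```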

The site feels very raw, fresh and unfinished. Hopefully these problems will go away as they start to think more about what all these wonderful tools are for. Unfortunately, because it is not very useful yet, I think that it is fairly likely that many people will not bother to come back.

Kevin Kelly — The Bottom is Not Enough

http://community.brighton.ac.uk/jd29/weblog/21840.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1357

I love Kevin Kelly. He has been one of the most consistently inspiring writers that I know of for decades. In this article he starts to explore the balance of top-down and bottom-up needed to take advantage of the hive mind.

“pure unadulterated dumb mobs is the easiest, perhaps least interesting new space in the entire constellation of possibilities. More potent, more unknown, are the many other combinations of everyone and someone.”

This is great, but it seems to me that we have never seen a pure hive mind. Even the most bottom-up of social systems (say, Google Search) is a combination of top-down algorithms and bottom-up control. As KK says, Wikipedia is far from purely crowd-driven. Not only is there the elite that he highlights, there are also engineered processes and a host of automated systems that help to keep the encyclopaedia more or less on track. But he is right – discovering balances of top-down and bottom-up that work will be one of the most important research challenges from now on. In fact, it has been since the first social systems started to emerge in the 1990s. It is only recently that we have started to notice.
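The Google example is worth spelling out, since PageRank is the archetype of a collective: the links are supplied bottom-up by millions of individuals, but the intelligence lives in a top-down algorithm imposed on them. A tiny power-iteration sketch over a toy graph (toy parameters, no handling of dangling pages):

```python
# Simplified PageRank by power iteration over a toy link graph.
def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {}
        for n in nodes:
            incoming = sum(rank[m] / len(links[m]) for m in nodes if n in links[m])
            new[n] = (1 - damping) / len(nodes) + damping * incoming
        rank = new
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))   # "a" ends up with the highest rank
```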
Created:Sat, 16 Feb 2008 04:55:03 GMT

The Habits of Highly Effective Web 2.0 Sites

http://community.brighton.ac.uk/jd29/weblog/21155.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1356

Some sensible thoughts on building social sites that work, some of which mirror my own. In brief, Dion summarises these as…

* Ease of Use is the most important feature of any Web site, Web application, or program.

* Open up your data as much as possible. There is no future in hoarding data, only controlling it.

* Aggressively add feedback loops to everything. Pull out the loops that don’t seem to matter and emphasize the ones that give results.

* Continuous release cycles. The bigger the release, the more unwieldy it becomes (more dependencies, more planning, more disruption). Organic growth is the most powerful, adaptive, and resilient.

* Make your users part of your software. They are your most valuable source of content, feedback, and passion. Start understanding social architecture. Give up non-essential control. Or your users will likely go elsewhere.

* Turn your applications into platforms. An application usually has a single predetermined use while a platform is designed to be the foundation of something much bigger. Instead of getting a single type of use from your software and data, you might get hundreds or even thousands of additional uses.

* Don’t create social communities just to have them. They aren’t a checklist item. But do empower inspired users to create them.

Created:Sat, 02 Feb 2008 12:49:13 GMT

Connecting the Social Graph: Member Overlap at OpenSocial and Facebook

http://community.brighton.ac.uk/jd29/weblog/19715.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1354

An excellent little report on member overlap between social networking sites. As you might guess, it is huge: 64% of those on Facebook are also in MySpace, for instance (only 20% vice versa, though that is because MySpace is still much bigger). Unless we start finding more ways to aggregate these networks, it is likely that many will die, as it is simply too hard for most people to maintain that many profiles and logins. It looks very much as though it will be a stand-off between Facebook and MySpace, but I think that there is still a place for niches, as long as we can mash them up intelligently and the big bad networks don’t fight too hard for proprietary lock-in.
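Incidentally, those two percentages also imply the relative sizes of the networks, since the shared members are the same people counted from each side. A back-of-envelope check using only the figures quoted:

```python
# overlap = 0.64 * |Facebook| = 0.20 * |MySpace|, so |MySpace| / |Facebook| = 0.64 / 0.20
fb_share_also_on_myspace = 0.64   # fraction of Facebook members also on MySpace
ms_share_also_on_facebook = 0.20  # fraction of MySpace members also on Facebook

print(fb_share_also_on_myspace / ms_share_also_on_facebook)   # ~3.2: MySpace roughly three times the size
```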
Created:Wed, 02 Jan 2008 18:57:17 GMT