(a)social computing conference

http://community.brighton.ac.uk/jd29/weblog/50878.html

I've just spent three rewarding and exhausting days at the IEEE Social Computing Conference in Vancouver.

It was an odd experience for me as by far the majority of papers and presentations seemed to have a lot to do with computing (predominantly various forms of network analysis and visualisation, plus a fair bit on technologies of privacy and security) and very little to do with 'social'. One of the more spectacularly glaring omissions was any notable use of social technologies before, during or after the conference, apart from a few bottom-up initiatives. In fact, given that this was a computing conference, use of computers was altogether pretty dire, with the most appallingly designed registration process I have ever encountered, one that suggested its designers had never considered users, let alone followed anything like a user-centred design process. The conference website is something out of the 1990s. At least the network was fine, but that was provided by the hotel.

A few speakers asked people in the audience about their use of various social systems and it was more than slightly bizarre to be among the minority of delegates using big players like Facebook, Digg, Flickr and Twitter, let alone less popular social apps. I find it almost incomprehensible that some social software programmers can be so utterly divorced from the use of the things that they are studying and developing. Except that, as a breed, computer scientists are not known to be the most sociable of people.

Despite this gaping hole, there were some great people and some good stuff to be found there, including fine sessions from Ben Shneiderman, Bebo White, Barry Smyth, a big contingent of creative folk from MIT MediaLab, and many more. There was some fascinating research relating to the use of sensors and wearable devices, and even the mainstream of network analysis and visualisation papers, as well as those considering privacy, security and access control, held some great potential insights and discoveries. Again, however, it was depressing to see how few had performed any follow-ups or studies with real people to find out what social factors might be lurking behind the effects they were seeing in the abstracted data, or how their designs might be used by real people. A panel hosted by Jenny Preece followed up Ben Shneiderman's talk, considering the big ethical and related issues that social software engenders, which was refreshing and a necessary counterpoint to all this abstraction of humans into nodes and edges, but it stood out from the mainstream themes as a distinct oddity.

The conference certainly helped to inspire me with some ideas, refinements of ideas and issues I'd not thought about well enough before, so it was well worthwhile, but if that was 'social computing' I hate to imagine what it might be like without the 'social'!

Social software programmer/researcher wanted (Canada)

http://community.brighton.ac.uk/jd29/weblog/50755.html

Terry Anderson and I are leading a small project at Athabasca University in Alberta, Canada taking a design-based research approach to exploring ways of using social software for learning.

We need someone with a computing degree or equivalent experience to extend and improve aspects of Elgg, as well as to integrate and mash it up with other systems (specifically Moodle and the Project Wonderland immersive environment). PHP and/or Java programming experience would be useful. The post holder will also research and evaluate the effectiveness of interventions using the software so will need to be a great communicator, ideally with experience of participative approaches to design and/or qualitative and quantitative research methods.

The things we are trying to do will hopefully be of benefit to anyone in education who wants to use Elgg as their social software. We hope this will be ground-breaking work that will lead to publications etc, so it would be good for someone wanting to break into learning technologies research.

This post can be held remotely, but occasional visits to Edmonton, Alberta would be required, and Canadian residents and citizens will be considered before anyone else.

Full details are available at https://athabascau.hua.hrsmart.com/ats/js_job_details.php?reqid=469

Statistics Show Social Media Is Bigger Than You Think

http://community.brighton.ac.uk/jd29/weblog/50936.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1402

Some great ‘hey wow!’ statistics and facts about social media use of the sort one tends to see a lot in keynotes. Not all of the facts are reliable or significant but there’s a very good list of sources to verify their plausibility and, while we might quibble with the odd detail here and there, the overall message is clear: this stuff is *big*.
Created: Fri, 21 Aug 2009 23:26:00 GMT

New WebGL standard aims for 3D Web without browser plugins

http://community.brighton.ac.uk/jd29/weblog/50555.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1401

It looks like the 3D Web is nearing reality. The current generation of general-purpose immersive spaces (e.g. Second Life, There, Wonderland, OpenSim, etc) are clunky, poorly interoperable, resource-hungry monoliths that help to show the potential but are really not ready for mass adoption. These two initiatives (WebGL and O3D) should be exactly what is needed to build a truly standards-compliant and open immersive web. I recall similar arguments in the early to mid nineties about VRML and later X3D, but maybe this is the bit of the puzzle that means we get the real thing at last!
Created: Fri, 07 Aug 2009 20:47:00 GMT

MyTrybe

http://community.brighton.ac.uk/jd29/weblog/50298.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1399

A collective approach to social networking. Instead of explicit friending, the system finds people like you and clusters those into your network.

I guess the big issue is the control that the creators exert in the choice of aspects that are considered in establishing similarity. You can select styles which establish the context that is of interest to you at a given time, each of which uses a set of explicit questions to find out about you (valuable info!). I suspect it could have some potentially interesting applications in education, especially on the informal and lifelong learning front, if it were to be open-sourced. It is not so useful as a closed service like this.
Created: Wed, 29 Jul 2009 22:29:33 GMT

Twitter hype punctured by study

http://community.brighton.ac.uk/jd29/weblog/45582.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1398

Is it possible that anyone is surprised by the news that 10% of Twitter users are responsible for 90% of the tweets? Or that over half tweet less than once every 74 days? I suppose it is interesting when compared with the social network norm (10% produce 30% of the content) but it is certainly not newsworthy, nor does it show anything unexpected about Twitter. There was no hype bubble to burst.

The study’s authors suggest that this makes it a one-to-many publishing service as though that is a bad thing. Of course, it *would* be a bad thing if there were any limits on who could publish, but there are not (well, censorship issues to one side for a moment).

It is much more interesting that, in the space of a year, it grew by 1382%. That's a big number – even Facebook only grew 228% in the same period. I'd be inclined to put that down to a few factors apart from the usual variants on Metcalfe's law:

1) it’s very fast, very simple, very easy to get going – there is little investment needed in time, attention, computer power, etc

2) It exploits multiple technologies and their associated networks – not just computers but cellphones – and it fits neatly in everything from a widget to a web page.

3) Unlike phone, email or SMS, it's a push technology that doesn't usually intrude too much or demand a response – even if it distracts, for most of us the 140 character limit keeps the distraction small, even smaller than RSS feeds.

4) Media hype and prominent celeb twitterers – there's something very intimate and immediate about tweets that makes the view into someone's personal life compulsive reading with very little effort. This certainly gave a boost.

5) Perhaps most importantly, it can ride on the back of other social networks. Despite the belated efforts of those networks to compete, Twitter started by competing with no one and so was able to take full advantage of people exchanging info about their Twitter profiles via Facebook, MySpace, email and so on. Throw in a dead simple API that makes it easy to integrate with other sites, so very little reinvestment in building a new social network is needed, and it is almost surprising that it did not grow faster. It is a compelling symbiotic (or maybe slightly parasitic) organism that thrives on other networks as well as building its own.
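The growth percentages in this post are easier to compare as plain multipliers. A quick back-of-envelope sketch (the 1382% and 228% figures are the ones quoted above; the function name is just illustrative):

```python
# Convert year-on-year percentage growth into a size multiplier:
# a service that grows by g% ends the year at (1 + g/100) times its
# starting size. Percentages are the ones quoted in the post.

def growth_multiplier(percent_growth: float) -> float:
    """Return end-of-year size as a multiple of the starting size."""
    return 1 + percent_growth / 100

print(growth_multiplier(1382))  # Twitter: ~14.8x its size a year earlier
print(growth_multiplier(228))   # Facebook: ~3.3x
```

In other words, Twitter ended the year at nearly fifteen times its starting size, against Facebook's three and a bit.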

I find it interesting not so much for what it is but as an example of the way we must go forwards – to build small, open, agile, flexible, integratable services that enable a federation of networks and functionalities, building on what is already there and evolving fast. It is more than likely that Twitter will some day crash and burn or, more probably, get sucked into the genetic material of something else, but that is the nature of evolution and nothing to cry about.

Created: Fri, 12 Jun 2009 03:21:48 GMT

Gin, Television, and Social Surplus

http://community.brighton.ac.uk/jd29/weblog/42189.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1397

Clay Shirky on typically brilliant form, here talking about cognitive surplus and what we do with it.

I love his rough calculation that the whole of Wikipedia, in all its language variants and including discussions, edits, lines of code and so on, amounts to around 100 million hours of thought. Coincidentally, that is the amount of time US viewers spend watching adverts on TV every weekend. That’s a lot of cognitive surplus just ripe for engaging in participative activities. He observes that the Internet-connected world spends around a trillion hours watching TV each year. If just one percent of that time shifted towards producing and sharing on the Internet, it would be equivalent to 100 Wikipedia-sized projects per year. And, of course, that is exactly what is happening, probably at a higher rate than that.

Let’s now imagine that one percent of that one percent could be turned to replacing our current processes of higher education. That’s one Wikipedia a year. Meanwhile the Internet continues to grow at a phenomenal rate, but still slightly less than a quarter of the world’s population have access to it. That’s a lot of growth potential – even a quarter of one percent would be a whole lot of brain power. We are just at the start of this revolution and have barely scratched the surface in terms of searching, filtering, connecting, aggregating and interacting with all of that content and all of those people. Assuming other things remain fairly equal and we don’t all vaporise or vanish down a hole of recession, it is hard to see how this cannot completely change higher education as we now know it.
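Shirky's arithmetic is easy to sanity-check. A back-of-envelope sketch, using only the rough estimates quoted above:

```python
# Back-of-envelope check of the cognitive-surplus arithmetic quoted above.
# Both constants are rough estimates from Shirky's talk, not measured values.

WIKIPEDIA_HOURS = 100e6      # ~100 million hours of thought in all of Wikipedia
TV_HOURS_PER_YEAR = 1e12     # ~1 trillion hours of TV watched per year

one_percent = 0.01 * TV_HOURS_PER_YEAR
print(one_percent / WIKIPEDIA_HOURS)         # 100 Wikipedia-sized projects a year

# One percent of that one percent, redirected to higher education:
print(0.01 * one_percent / WIKIPEDIA_HOURS)  # roughly one Wikipedia a year
```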
Created: Mon, 23 Mar 2009 11:17:04 GMT

Medieval wikis and blogs

http://community.brighton.ac.uk/jd29/weblog/41639.html

I was reading Norton's 'Readings in the History of Education' the other day (love my iPhone) because I am intrigued about how and, more especially, why our current university systems came to be. The book is full of wonderful gems, but a couple of things seemed particularly interesting:

1: The glosses

Readings were given from texts that, over the centuries, were adapted through the use of glosses – essentially marginalia but, in some cases, greatly exceeding the volume of original text on the page. The glosses were often wrong, misguided or poorly thought through, yet they were read with the same emphasis as the original text.

Wikipedia on parchment?

2: the travelling scholars

Equally interesting, before universities were formed, scholars would travel to learn from masters, wherever they might be. 

"in those days the school followed the teacher, not the teacher the school."

I particularly like this excerpt from Abelard's autobiography:

 "...I betook myself to a certain wilderness previously known to
me, and there on land given to me by certain ones, with the
consent of the Bishop of the region, I constructed out of reeds
and straw a sort of oratory in the name of the Holy Trinity
where, in company with one of our clergy, I might truly chant to
the Lord: "Lo I have wandered far off, and have remained in the
wilderness."

As soon as Scholars learned this they began to gather from every
side, leaving cities and castles to dwell in the wilderness, and
in place of their spacious homes to build small tabernacles for
themselves, and in place of delicate food to live on herbs of the
fields and coarse bread, and in place of soft couches to make up
[beds of] straw and grass, and in place of tables to pile up
sods."

I feel a certain sympathy with this approach (I am in the icy wastes of the Canadian Prairies because of the remarkable people here) but the main thing that bears mention is that it was individuals, not institutions, that attracted scholars.

Oddly like blogs?

It is interesting that, like scholars of old, some of the bloggers I like most (most notably Stephen Downes but, to an extent, most of my favourites) are often concerned more with the discovery and interpretation of other texts than with creating new ones. Of course, it is all much faster, easier and more networked, but maybe a bit less impressive – John of Salisbury spent twelve years at Paris and at Chartres following his preferred masters (including Abelard), whereas twelve minutes is about all I can take of most blogs.

Why universities?

My interest in Norton's book stemmed from a concern that has bothered me for a long time that we are driven by the exigencies of the form of our buildings, the physics of everyday life and, maybe more, from the history that drove us there, to learn in ways that would make little sense if we were to start anew without such constraints.

Back then, for instance:

  • If you wanted to learn from a master, you used to have to travel to be with them. More masters, more travellers, more students. Not only that, more lawyers (oddly seen as a benefit way back then). This was very good for towns and cities and the scholars were given many privileges such as freedom from tax, their own law courts and so on, partly to encourage them to come and partly to give them the freedom to study, learn and think.
  • If you wanted to read (or, more often, hear) the great works of human knowledge (well, mainly Aristotle and the Bible) you had to go somewhere that there were copies of those books to be read. 
  • It made sense for many people to gather round a single lecturer as there were not enough books to go round.

These and maybe some other circumstances of course led to aggregation and the formation of universities that attracted more people to them like planets forming from dust or snowballs rolling down hills. Many similar contingent facts caused the formation of the strange, archaic and arcane system we use today. It was not always a direct path and ideas evolved and died along the way. Sadly the student domination of the university at Bologna was lost and we now all follow the Parisian, master-led system but, perhaps luckily, bad lecturers are no longer stoned (at least with stones).

We don't need to work that way any more (and, incidentally, it is bizarre that we reinforce this pattern with learning management systems). As we increasingly turn to learning from and with those we acknowledge as great in the online world, beyond the boundaries of universities, we are slowly reinventing the medieval pre-university system, with bells, whistles and some centuries of innovation, invention and discovery to improve things, of course. Wikipedia becomes our glosses, blogs become the reed-and-straw-built oratories and we gather round, despite the online discomfort, to listen to the wise. David Wiley has gone one step further down this road and offered personal certifications for those who attended his open course. Who needs universities?

Systems often develop more because of their history and context than because they are a good idea, and universities, with their long and relentless history, are proof of this. There have been some big changes and innovations from time to time – the Humboldtian model and the open universities are particular milestones – but they really just embellished the existing deeper models. At Athabasca University (where I mostly work), everything I teach is online and so are all the texts I use. I can teach anyone, anywhere there is an Internet connection, at any time. To maintain the traditions and processes of medieval universities is odd in the extreme and yet many of them are still there – convocations and silly gowns, deans, professors, doctors, degrees… Why? OK, I know we have to rub shoulders with the medievals and won't be accepted as serious scholars unless we do, but it is kind of crazy and a terrible waste of time, money and energy.

So why keep universities?

I think that universities do have some important roles to play still, beyond their credentialing function (one that evolved quite late in the day, incidentally). I love the fact that universities still have some of those inherited privileges from their forebears. We still need a system that gives uninterrupted space to think and freedom from the fear of getting it wrong or pursuing the ridiculous or arcane. We still need the space for young and old to discover the richness of our civilisations and their artefacts.

Is the university as we know it the best space for this, especially one that is online? I suppose one good thing is that a course at an open university like Athabasca gives people the excuse and the licence to make the space in their lives to learn, and the resources from which to do so (though why does it always have to take place in units of around 100 hours?). It would be nice if we could make more opportunities for people to also hang out with the scholars. In medieval universities, for instance, lunch was an opportunity for masters to check that students had learnt the lessons of the day, and for students to question and debate issues that arose. Maybe comments on blogs fulfil some of that role – certainly the dialogue can get quite rich around some posts (if you want to comment on mine, by the way, it's probably best to go to the version at http://me2u.athabascau.ca/elgg/jond/weblog/ which allows comments from anyone – sadly they seem to be disabled for visitors at the University of Brighton site) and Twitter, Facebook, etc can fill in some gaps here and there. However, we can go further. I think that we need to make our spaces more sociable, give much greater value and recognition to sociability, provide opportunities for serendipity and enable cross-disciplinary fertilisation. It is worth remembering what it was that the early universities were trying to achieve. It wasn't just about economies of scale and academic freedom; it was an opportunity for building knowledge, engendered by the drawing together of scholars with shared purposes and a passion for learning.


Mules and paper

http://community.brighton.ac.uk/jd29/weblog/41453.html

A colleague of mine recently asked me if I knew of any research comparing traditional paper-based correspondence courses with online courses. I replied that it was a ridiculous question to ask and that anyone hoping to get anything more than trivial and/or useless answers to it is doomed to failure. I struggled for an analogy. The best I could come up with was this: if you had to choose a form of transport, would you rather have a mule or would you prefer to be able to select one or more options from any and every form of transport yet invented, including the mule? 

There is a deeply mistaken perception that is rife in the world that the computer (or the connected computer) is a single technology, like the television. It is not. The computer is a universal tool, a universal medium and a universal environment. It can be many things, serially and at once. At its most extreme it could be every thing: I am rather charmed by the irritatingly persuasive but probably unprovable argument that, given everything we know about the universe, we are almost certainly living in a computer simulation. But let's ignore that because it probably doesn't make any difference at all as to how we should live or learn…

I wouldn't suggest that online platforms are the best solution for every distance learning need, but it seems pretty obvious that the success of any given learning opportunity depends on what you do with it. This makes comparisons very hard, unless you are grossly misusing online technologies, as different technologies afford different opportunities and there is no doubt that all but the most appalling online learning experiences will be significantly different in form, style and quality from those delivered via paper. The more technologies that are available, the greater the range of options for enabling learning to occur. Given that, arbitrarily limiting yourself to a single technology when there are thousands or millions of others available, offering an indefinitely large number of different opportunities in terms of pace, medium, level of interaction, convenience, learner control, pedagogy, engagement and so on, seems positively perverse.

There might be more sensible questions we could ask, however, that might let us make more informed decisions about which technologies to use. One of them would be 'what is the minimum effort and cost that we can get away with in the time available to provide an acceptable learning experience for the largest possible number of learners with the smallest possible up-front capital investment?' Another might be 'how can we cater for the small number of learners without broadband Internet access?' The answers to those questions might lead us towards a paper-based solution under some circumstances, though I would still often argue against paper in both those cases.

As technologies improve, I think that paper is becoming increasingly irrelevant. It is expensive, environmentally harmful, unreliable, very limited in its media capabilities, bulky, anti-social, weak in accessibility, and (funnily enough, given paper aeroplanes) inflexible.

Interestingly, I would once have used most of those arguments against computers.

I would still generally rather take paper if I were going into the wilderness, down to the beach, or getting in the bath. I would also rather start a fire with paper than my iPhone and it is much better at mopping up spilt coffee than my Mac. If I had to choose between reading a book on an old cathode ray tube and paper, I would of course choose paper. However, if the choice were between my iPhone and paper, I would and usually do choose the iPhone, unless there are big high quality images or some layout (e.g. some poetry, some tables) that would be hard to look at on the small screen, or simple technical incompatibility gets in the way. As cheap, robust, flexible, high definition, untethered and light displays become more available and projection becomes the norm even for basic mobile phones, those exceptions will be much rarer.

Research in learning technologies is hard for many, many reasons, but one of them is that it generally looks at what has been, seldom what is, and very rarely what will be. However, the research question that really interests me (and that, I suppose, distinguishes my interests from the purely educational side of e-learning) is not 'how should we learn, taking advantage of our current technologies?' but 'how should we learn, taking advantage of next year's technologies?' There's a further question about how we should try to shape those developments that is also worth asking, which does require us to look at our current technologies with a critical eye, and it certainly helps to know what we have been doing in order to figure out what we will be doing, but the essence of our enquiry has to be focused on the future.

Next year (or the year after, or the one after that…), paper will come into the equation as the best means of learning about as often as mules come into the equation as the best form of transport. It will make as much sense to use it as it does now to choose cuneiform on clay tablets as the best means of delivery for our courses. 

The only real difficulty that I can see with this is in deciding at what point it becomes economically unviable to continue to use paper as a mainstream technology. I think we are close to that point now. So let's stop making fruitless comparisons. Let's figure out what we like about paper technology and make sure that we don't lose it. Then let's move on. 


ps – my thoughts on online vs face-to-face learning are an entirely different matter. More on that another time.

pps – and, of course, it is quite valid to attempt to answer questions about issues faced by those who have always used paper in the past and who are now struggling with how to teach online.