The importance of being first…

http://community.brighton.ac.uk/jd29/weblog/19416.html

It seems that the race for presidential nomination in the US depends on more than the common sense and opinions of voters, according to Brian Knight and Nathan Schiff at Brown University. Getting in there early makes a big difference. In fact, voters in early primary states such as Iowa and New Hampshire have up to 20 times the influence of voters in later states in determining whether candidates are selected. This is startling. It is also interesting that it offers a refinement of the simple Matthew Principle (the rich get richer while the poor get poorer). People are more influenced by those who came first than by those who have most recently posted results. It appears that priority is more important than novelty, at least in presidential primaries. The reasons for this are not entirely clear: it may be that the information, being available for longer, has more time to seep in; or that there is a simple cascade (though it is hard to see how this explains the relative unimportance of recently voting states); or that the media make more of the first results, so they stick more easily. It is probably a combination of all three.
The implications for those of us trying to use the wisdom of the crowd in e-learning are profound. I have been exploring the importance of delay in harnessing crowd wisdom, and it would seem that this offers strong evidence that it is needed. If people didn't know the early results then they wouldn't be influenced (as much) and could make more independent decisions. However, the problem in an educational setting is the cold start: if we don't feed back contributions to the system right away, then contributors and latent contributors will be less inclined to contribute. We seem caught between a rock and a hard place. If we want wise crowds, we need delay, but if we want crowds in the first place, we need immediacy.

Let's imagine an educational social recommender system (say, http://ltsn.CoFIND.net) which tries to provide appropriate resources to learners as and when they need them, using mainly a combination of list priority and font size to recommend particular resources. The resources themselves are added and rated by learners. This is a clear case where priority could offer great advantages. The first resource will, a priori, be at the top of the list to begin with (and the bottom, as it happens). It will thus attract more attention than those that come later, whether or not it is better, and so is more likely to stay at the top. A number of potential solutions present themselves (rough sketches of some of the mechanisms follow the list):

  1. we introduce delay and control the learning process that surrounds it. In formal education this is not too difficult: we just tell students that they must post (ideas, resources, ratings, whatever) and that feedback will be delayed. This, incidentally, fits neatly with several principles in my book, notably those emphasising the importance of context and the significance of scale (the larger-scale institutional environment influencing the smaller scale more than vice versa). However, this is less effective in a less formal setting, as it requires significant buy-in from the learners and assumes a cohort working in sync.
  2. we layer learning experiences, providing fast feedback at first but delaying it more and more as the content grows, as well as building a natural decay into resources so that they lose weight relative to newer ones (see the decay sketch after this list). I like this approach and have tried it in CoFIND, but it is incredibly hard to tune it so that everyone gets the learning experience they want or need. Early on you get mob stupidity (which discourages people from using the system) and later some people, especially the early starters, face off-putting levels of delay and the system moves slowly. Plus, it is really easy for good things to get lost if the rate of decay is too fast. This would work better if we could discover whether the right resources are getting through and then adapt the results. However, it is not clear how we would perform this adaptation. We could of course reintroduce design (e.g. a bit of adaptive testing) but this goes against the grain. My natural inclination is to use random mutation, but when evolutionary systems compete with designed systems they are almost certainly (at first) going to do worse. People will leave, and use the less-than-optimal-but-at-least-working designed systems instead.
  3. a variation on 2) – we introduce a random element, artificially boosting some things for no particularly good reason (or, as in my systems, giving a boost for novelty); see the random-boost sketch after this list. Again, through evolutionary mechanisms, this will eventually head towards a good optimum, but in the short term it will give poor results. And it is the short term that matters – if learners can learn better elsewhere then that's where they will go, even though we might promise that it will be better in the long run if they persist.
  4. a variation on 1) – we automate some of the process, perhaps by mining things like Google PageRank, or maybe using a bit of content-based matching, or extracting links from Wikipedia, or using the conventional collaborative filtering approach to find similar users, or… the list is endless (see the collaborative filtering sketch after this list). This is pragmatic and, in any sensible system whose purpose is to help people to learn, it is the kind of thing I would do (and, with variations on the theme that tend to involve WordNet and ontologies, many people have done). But I am after something more than just a sensible system. If we really want to harness crowd wisdom, we need to find ways to make it work for us, not to cheat by reintroducing the individual designer. Making use of PageRank or Wikipedia is getting there – instead of using a single approach to crowd wisdom, we can take coarse systems that use big crowds (albeit ones that have seriously large problems with the Matthew Principle) and refine them, with inherent delay. This certainly helps to reduce the cold start problem and works nicely at a range of scales. However, while it might help with finding some of the right resources straight away, it does not begin to cope with issues at a smaller, more private scale (e.g. sorting out the useful parts of a discussion forum), and the immediate benefits are no greater than googling for the resources in the first place, so it might be hard to get buy-in.
  5. we lie. We establish a community using a different pretext and slowly encourage them to contribute to and build a more complex system. I feel mildly amused by this idea. If we can build, say, a community with shared learning interests around a discussion system of some sort, then have them incrementally build a list of resources that they make available for ranking (without showing the results), then parcellate the resources and again use blind/delayed ranking, we might have a gentle way of keeping the designer largely out of the picture. Early on it would work like a traditional learning community, but it could evolve new features as a result of crowd behaviour. To make this work effectively using crowd processes, we would have to encourage this dynamic to flow naturally within the system, rather than imposing it according to our own rules. We should provide ways for the crowd to decide that it is time to evolve, plus many different affordances according to the needs of the community: different tools, different parameters (which should be crowd-driven). Of course, we would need to use crowd processes to kill off the mutations that failed. This is beginning to sound a bit like a job for the wonderful Ning, especially now that it is using OpenSocial. We could build a Ning application that modifies itself according to the wishes of the crowd. Crowdware indeed.
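
To make the problem concrete, here is a minimal sketch (in Python, with invented names; this is not how CoFIND was actually implemented) of the kind of rating-driven ranking described above, in which crowd ratings determine both a resource's position in the list and the font size used to display it:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    title: str
    url: str
    ratings: list = field(default_factory=list)  # e.g. +1/-1 votes from learners

    @property
    def score(self):
        """Crude crowd score: the sum of all ratings so far."""
        return sum(self.ratings)

def ranked(resources):
    """Order resources by crowd score; this determines list position."""
    return sorted(resources, key=lambda r: r.score, reverse=True)

def font_size(resource, resources, smallest=10, largest=24):
    """Map a resource's score onto a font size between smallest and largest."""
    scores = [r.score for r in resources]
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return (smallest + largest) // 2
    return round(smallest + (largest - smallest) * (resource.score - lo) / (hi - lo))
```

The feedback loop is plain: whatever is added first gathers ratings first, rises up the list, attracts more attention, and so gathers still more ratings.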
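
For option 2, one way the natural decay and layering might look: each rating is timestamped and loses weight exponentially with age, so older resources have to keep earning ratings to stay visible, and the feedback delay grows as the body of content grows. The half-life and delay constants here are invented for illustration; tuning them is, as noted above, exactly the hard part.

```python
import math
import time

HALF_LIFE_DAYS = 14  # invented constant: how quickly old ratings fade

def decayed_score(ratings, now=None):
    """Sum of ratings, each weighted by its age.

    `ratings` is a list of (value, timestamp) pairs, timestamps in seconds
    since the epoch. A rating HALF_LIFE_DAYS old counts for half as much.
    """
    now = now if now is not None else time.time()
    decay = math.log(2) / (HALF_LIFE_DAYS * 86400)
    return sum(value * math.exp(-decay * (now - ts)) for value, ts in ratings)

def feedback_delay(total_contributions, minutes_per_100=60):
    """Minutes to wait before new ratings show up in the ranking.

    Fast feedback while the system is small, more and more delay as the
    body of content grows (the layering idea in option 2).
    """
    return minutes_per_100 * (total_contributions // 100)
```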
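
For option 3, the random element could be as crude as adding noise to the ranking score, together with a novelty bonus that fades with age, so that things other than the early leaders occasionally get seen. Again, the constants are arbitrary and purely illustrative:

```python
import math
import random
import time

def boosted_score(base_score, created_ts, now=None,
                  noise=0.5, novelty_bonus=2.0, novelty_half_life_days=3):
    """Crowd score plus random noise plus a novelty bonus that fades with age.

    `created_ts` is when the resource was added (seconds since the epoch).
    All constants are made up and would need tuning.
    """
    now = now if now is not None else time.time()
    age_days = (now - created_ts) / 86400
    novelty = novelty_bonus * math.exp(-math.log(2) * age_days / novelty_half_life_days)
    return base_score + random.uniform(-noise, noise) + novelty
```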
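
For option 4, a sketch of the conventional collaborative filtering part: find users whose rating patterns resemble mine and predict my interest in resources I have not yet rated. Blending in an external signal such as PageRank would just mean adding another weighted term; none of this is more than a rough illustration.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two users' rating vectors (resource -> rating)."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[r] * b[r] for r in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def predicted_rating(user, resource, all_ratings):
    """Similarity-weighted average of other users' ratings for `resource`.

    `all_ratings` maps user -> {resource: rating}. Resources the user has
    not yet rated can then be ranked by their predicted rating.
    """
    mine = all_ratings[user]
    numerator = denominator = 0.0
    for other, theirs in all_ratings.items():
        if other == user or resource not in theirs:
            continue
        similarity = cosine(mine, theirs)
        numerator += similarity * theirs[resource]
        denominator += abs(similarity)
    return numerator / denominator if denominator else 0.0
```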

I'm just rambling out loud. Must get back to some real work.

