Twitter hype punctured by study

http://community.brighton.ac.uk/jd29/weblog/45582.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1398

Is it possible that anyone is surprised by the news that 10% of Twitter users are responsible for 90% of the tweets? Or that over half tweet less than once every 74 days? I suppose it is interesting when compared with the social network norm (10% produce 30% of the content), but it is certainly not newsworthy, nor does it show anything unexpected about Twitter. There was no hype bubble to burst.

The study’s authors suggest that this makes it a one-to-many publishing service as though that is a bad thing. Of course, it *would* be a bad thing if there were any limits on who could publish, but there are not (well, censorship issues to one side for a moment).

It is much more interesting that, in the space of a year, it grew by 1382%. That’s a big number – even Facebook only grew by 228% in the same period. I’d be inclined to put that down to a few factors apart from the usual variants on Metcalfe’s law:

1) It’s very fast, very simple and very easy to get going – there is little investment needed in time, attention, computer power, etc.

2) It exploits multiple technologies and their associated networks – not just computers but cellphones – and it fits neatly in everything from a widget to a web page.

3) Unlike phone, email or SMS, it’s a push technology that doesn’t usually intrude too much or demand a response – even if it distracts, for most of us the 140-character limit keeps the distraction small, even smaller than that of RSS feeds.

4) Media hype and prominent celebrity twitterers – there’s something very intimate and immediate about tweets that makes the view into someone’s personal life compulsive reading with very little effort. This certainly gave it a boost.

5) Perhaps most importantly, it can ride on the back of other social networks. Despite their best belated efforts to compete, Twitter started by competing with no one and so was able to take full advantage of people exchanging information about their Twitter profiles via Facebook, MySpace, email, and so on. Throw in a dead simple API that makes it easy to integrate with other sites, so that very little reinvestment in building a new social network is needed, and it is almost surprising that it did not grow faster. It is a compelling symbiotic (or maybe slightly parasitic) organism that thrives on other networks as well as building its own.

I find it interesting not so much for what it is but as an example of the way we must go forwards – to build small, open, agile, flexible, integratable services that enable a federation of networks and functionalities, building on what is already there and evolving fast. It is more than likely that Twitter will some day crash and burn or, more probably, get sucked into the genetic material of something else, but that is the nature of evolution and nothing to cry about.

Created: Fri, 12 Jun 2009 03:21:48 GMT

What exams have taught me

http://community.brighton.ac.uk/jd29/weblog/45251.html

I have argued at some length on numerous occasions that exams, especially in their traditional unseen, time-limited, paper-based form, without access to books or Internet or friends, are the work of the devil and fundamentally wrong in almost every way that I can think of. They are unfair, resource-intensive, inauthentic, counter-productive, anti-educational, dispiriting, soulless products of a mechanistic age that represent an ethos we should condemn as evil.

And yet they persist.

I have been wondering why something so manifestly wrong should maintain such a hold on our educational system even though it is demonstrably anti-educational. Surely it must be more than a mean-spirited, small-minded attempt to ensure that people are who they say they are?

I think I have the answer.

Exams are so much a part of our educational system, pervading almost every subject area, that they teach a deeper, more profound set of lessons than any of the subjects to which they relate. Clearly, from their ubiquity, they must concern more important and basic things to learn than, say, maths, languages, or history. Subjects may come and subjects may go, but the forms of assessment remain startlingly constant. So, I have been thinking about what exams taught me:

  • that slow, steady, careful work is not worth the hassle – a bit of cramming (typically one to three days seemed to work for me) in a mad rush just before the event works much more effectively and saves a lot of time
  • the corollary – adrenalin is necessary to achieve anything worth achieving
  • that the most important things in life generally take around three hours to complete
  • that extrinsic motivation, the threat of punishment and the lure of reward, is more important than making what we do fun, enjoyable and intrinsically rewarding
  • that we are judged not on what we achieve or how we grow but on how well we can display our skills in an intense, improbably weird and disconcerting setting

I learnt to do exams early in life better than I learnt most of the subjects I was examined on, and have typically done far better than I deserve in such circumstances. I have learnt my lessons well in real life. I (mostly) hit deadlines with minutes to spare and seldom think about them more than a day or two in advance. I perform fairly well in adrenalin-producing circumstances. I summarise and display knowledge that I don’t really have to any great extent. I extemporise. I do things because I fear punishment or crave reward. I play to the rules even when the rules are insane. A bit of high blood pressure comes with the territory. Sometimes this is really useful, but I am trying hard to get out of the habit of always working this way and to adopt some other approaches sometimes.

There are many other lessons that our educational systems teach us beyond the subject matter – I won’t even begin to explore what we learn from sitting in rows, staying quiet and listening to an authority figure tell us things but, suffice it to say, I haven’t retained much knowledge of grammar, calculus, geography or technical drawing, but I am still unlearning attitudes and beliefs that such bizarre practices instilled in me.

Assessment is good. Assessment tells us how we are doing, where we need to try new things, different approaches, as well as what we are doing right. Assessment is a vital part of the learning process, whether we do it ourselves or get feedback from others (both is best). But assessment should not be the goal. Assessment is part of the process.

Accreditation is good too. Accreditation tells the world that we can do what we claim we can do. It is important that there are ways to verify to others that we are capable (most obviously in the case of people on whom others depend greatly, such as surgeons, bus drivers and university professors). Except in cases where the need to work under enormous pressure in unnatural conditions is a prerequisite (there are some such occasions), I would just prefer that we relied on authentic evidence rather than this frighteningly artificial process that tells us very little about how people actually perform in the task domain that they are learning in.

The biggest problem comes when we combine and systematise assessment and accreditation into an industrialised, production-line approach to education, losing sight of the real goals. There are many other ways to do this that are less harmful or even positively useful (e.g. portfolios, evidence-based assessment, even vivas when done with care and genuine dialogue) and many are actually used in higher education. We just need more of them to redress the balance a bit.

Gin, Television, and Social Surplus

http://community.brighton.ac.uk/jd29/weblog/42189.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1397

Clay Shirky on typically brilliant form, here talking about cognitive surplus and what we do with it.

I love his rough calculation that the whole of Wikipedia, in all its language variants and including discussions, edits, lines of code and so on, amounts to around 100 million hours of thought. Coincidentally, that is the amount of time US viewers spend watching adverts on TV every weekend. That’s a lot of cognitive surplus just ripe for engaging in participative activities. He observes that the Internet-connected world spends around a trillion hours watching TV each year. If just one percent of that time shifted towards producing and sharing on the Internet, it would be equivalent to 100 Wikipedia-sized projects per year. And, of course, that is exactly what is happening, probably at a higher rate than that.

Let’s now imagine that one percent of that one percent could be turned to replacing our current processes of higher education. That’s one Wikipedia a year. Meanwhile the Internet continues to grow at a phenomenal rate, but still slightly less than a quarter of the world’s population have access to it. That’s a lot of growth potential – even a quarter of one percent would be a whole lot of brain power. We are just at the start of this revolution and have barely scratched the surface in terms of searching, filtering, connecting, aggregating and interacting with all of that content and all of those people. Assuming other things remain fairly equal and we don’t all vaporise or vanish down a hole of recession, it is hard to see how this cannot completely change higher education as we now know it.
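Shirky’s back-of-the-envelope numbers are easy to check. A quick sketch, using only the rough figures quoted above (100 million hours for the whole of Wikipedia, a trillion TV-watching hours a year):

```python
# Back-of-the-envelope check of Shirky's cognitive-surplus arithmetic.
# Both figures are the rough estimates quoted above, nothing more precise.
wikipedia_hours = 100_000_000            # ~total human effort behind Wikipedia
tv_hours_per_year = 1_000_000_000_000    # ~annual TV hours, connected world

one_percent = tv_hours_per_year * 0.01
wikipedias_per_year = one_percent / wikipedia_hours
print(wikipedias_per_year)               # 100.0 Wikipedia-sized projects a year

# One percent of that one percent -- the share imagined for higher education:
print(one_percent * 0.01 / wikipedia_hours)  # 1.0 Wikipedia a year
```

The arithmetic is trivial, which is rather the point: even tiny shifts in where the surplus goes amount to enormous absolute quantities of effort.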

Created: Mon, 23 Mar 2009 11:17:04 GMT

Medieval wikis and blogs

http://community.brighton.ac.uk/jd29/weblog/41639.html

I was reading Norton's 'Readings in the History of Education' the other day (love my iPhone) because I am intrigued about how and, more especially, why our current university systems came to be. The book is full of wonderful gems, but a couple of things seemed particularly interesting:

1: The glosses

Readings were given from texts that, over the centuries, were adapted through the use of glosses – essentially marginalia which, in some cases, greatly exceeded the volume of the original text on the page. Often they were wrong, misguided, or poorly thought through, yet they were read with the same emphasis as the original text.

Wikipedia on parchment?

2: the travelling scholars

Equally interesting, before universities were formed, scholars would travel to learn from masters, wherever they might be. 

"in those days the school followed the teacher, not the teacher the school."

I particularly like this excerpt from Abelard's autobiography:

 "...I betook myself to a certain wilderness previously known to
me, and there on land given to me by certain ones, with the
consent of the Bishop of the region, I constructed out of reeds
and straw a sort of oratory in the name of the Holy Trinity
where, in company with one of our clergy, I might truly chant to
the Lord: "Lo I have wandered far off, and have remained in the
wilderness."

As soon as Scholars learned this they began to gather from every
side, leaving cities and castles to dwell in the wilderness, and
in place of their spacious homes to build small tabernacles for
themselves, and in place of delicate food to live on herbs of the
fields and coarse bread, and in place of soft couches to make up
[beds of] straw and grass, and in place of tables to pile up
sods."

I feel a certain sympathy with this approach (I am in the icy wastes of the Canadian Prairies because of the remarkable people here) but the main thing that bears mention is that it was individuals, not institutions, that attracted scholars.

Oddly like blogs?

It is interesting that, like scholars of old, some of the bloggers I like most (most notably Stephen Downes but, to an extent, most of my favourites) are often concerned more with the discovery and interpretation of other texts than with creating new ones. Of course, it is all much faster, easier, more networked but maybe a bit less impressive – John of Salisbury spent twelve years at Paris and at Chartres following his preferred masters (including Abelard), whereas twelve minutes is about all I can take of most blogs.

Why universities?

My interest in Norton's book stemmed from a concern that has bothered me for a long time that we are driven by the exigencies of the form of our buildings, the physics of everyday life and, maybe more, from the history that drove us there, to learn in ways that would make little sense if we were to start anew without such constraints.

Back then, for instance:

  • If you wanted to learn from a master, you used to have to travel to be with them. More masters, more travellers, more students. Not only that, more lawyers (oddly seen as a benefit way back then). This was very good for towns and cities and the scholars were given many privileges such as freedom from tax, their own law courts and so on, partly to encourage them to come and partly to give them the freedom to study, learn and think.
  • If you wanted to read (or, more often, hear) the great works of human knowledge (well, mainly Aristotle and the Bible) you had to go somewhere that there were copies of those books to be read. 
  • It made sense for many people to gather round a single lecturer as there were not enough books to go round.

These and maybe some other circumstances of course led to aggregation and the formation of universities that attracted more people to them like planets forming from dust or snowballs rolling down hills. Many similar contingent facts caused the formation of the strange, archaic and arcane system we use today. It was not always a direct path and ideas evolved and died along the way. Sadly the student domination of the university at Bologna was lost and we now all follow the Parisian, master-led system but, perhaps luckily, bad lecturers are no longer stoned (at least with stones).

We don't need to work that way any more (and, incidentally, it is bizarre that we reinforce this pattern with learning management systems). As we increasingly turn to learning from and with those we acknowledge as great in the online world, beyond the boundaries of universities, we are slowly reinventing the medieval pre-university system, with bells, whistles and some centuries of innovation, invention and discovery to improve things, of course. Wikipedia becomes our glosses, blogs become the reed-and-straw-built oratories and we gather round, despite the online discomfort, to listen to the wise. David Wiley has gone one step further down this road and offered personal certifications for those who attended his open course. Who needs universities?

Systems often develop more because of their history and context than because they are a good idea and universities, with their long and relentless history, are proof of this. There have been some big changes and innovations from time to time – the Humboldtian model and the open universities are particular milestones, but they really just embellished the existing deeper models. At Athabasca University (where I mostly work) everything I teach is online and so are all the texts I use. I can teach to anyone, anywhere there is an Internet connection, any time. To maintain the traditions and processes of medieval universities is odd in the extreme and yet many of them are still there – convocations and silly gowns, deans, professors, doctors, degrees… Why? OK, I know we have to rub shoulders with the medievals and won't be accepted as serious scholars unless we do, but it is kind of crazy and a terrible waste of time, money and energy.

So why keep universities?

I think that universities do have some important roles to play still, beyond their credentialing function (one that evolved quite late in the day, incidentally). I love the fact that universities still have some of those inherited privileges from their forebears. We still need a system that gives uninterrupted space to think and freedom from the fear of getting it wrong or pursuing the ridiculous or arcane. We still need the space for young and old to discover the richness of our civilisations and their artefacts.

Is the university as we know it the best space for this, especially one that is online? I suppose one good thing is that a course at an open university like Athabasca gives people the excuse and the licence to make the space in their lives to learn and the resources from which to do so (though why does it always have to take place in units of around 100 hours?). It would be nice if we could make more opportunities for people to also hang out with the scholars. In medieval universities, for instance, lunch was an opportunity for masters to check that students had learnt the lessons of the day, and for students to question and debate issues that arose. Maybe comments on blogs fulfill some of that role – certainly the dialogue can get quite rich around some posts (if you want to comment on mine, by the way, it's probably best to go to the version at http://me2u.athabascau.ca/elgg/jond/weblog/ which allows comments from anyone – sadly they seem to be disabled for visitors at the University of Brighton site) and Twitter, Facebook, etc. can fill in some gaps here and there. However, we can go further. I think that we need to make our spaces more sociable, and give much greater value and recognition to sociability, provide opportunities for serendipity and enable cross-disciplinary fertilisation. It is worth remembering what it was that the early universities were trying to achieve. It wasn't just about economies of scale and academic freedom, but it was an opportunity for building knowledge, engendered by the drawing together of scholars with shared purposes and a passion for learning.

Mules and paper

http://community.brighton.ac.uk/jd29/weblog/41453.html

A colleague of mine recently asked me if I knew of any research comparing traditional paper-based correspondence courses with online courses. I replied that it was a ridiculous question to ask and that anyone hoping to get anything more than trivial and/or useless answers to it is doomed to failure. I struggled for an analogy. The best I could come up with was this: if you had to choose a form of transport, would you rather have a mule or would you prefer to be able to select one or more options from any and every form of transport yet invented, including the mule? 

There is a deeply mistaken perception that is rife in the world that the computer (or the connected computer) is a single technology, like the television. It is not. The computer is a universal tool, a universal medium and a universal environment. It can be many things, serially and at once. At its most extreme it could be every thing: I am rather charmed by the irritatingly persuasive but probably unprovable argument that, given everything we know about the universe, we are almost certainly living in a computer simulation. But let's ignore that because it probably doesn't make any difference at all as to how we should live or learn…

I wouldn't suggest that online platforms are the best solution for every distance learning need but it seems pretty obvious that the success of any given learning opportunity depends on what you do with it. This makes comparisons pretty hard to make, unless you are grossly misusing online technologies, as different technologies afford different opportunities and there is no doubt that all but the most appalling online learning experiences will be significantly different in form, style and quality from those delivered via paper. The more technologies that are available, the greater the range of options as to how you can go about enabling learning to occur. Given that fact, arbitrarily limiting yourself to a single technology when there are thousands or millions of others available that offer an indefinitely large number of different opportunities in terms of pace, medium, level of interaction, convenience, learner control, pedagogy, engagement, etc, etc, etc, seems positively perverse.

There might be more sensible questions we could ask, however, that might let us make more informed decisions about which technologies to use. One of them would be 'what is the minimum effort and cost that we can get away with in the time available to provide an acceptable learning experience for the largest possible number of learners with the smallest possible up-front capital investment?' Another might be 'how can we cater for the small number of learners without broadband Internet access?' The answers to those questions might lead us towards a paper-based solution under some circumstances though I would still often argue against paper in both those cases.

As technologies improve, I think that paper is becoming increasingly irrelevant. It is expensive, environmentally harmful, unreliable, very limited in its media capabilities, bulky, anti-social, weak in accessibility, and (funnily enough, given paper aeroplanes) inflexible.

Interestingly, I would once have used most of those arguments against computers.

I would still generally rather take paper if I were going into the wilderness, down to the beach, or getting in the bath. I would also rather start a fire with paper than my iPhone and it is much better at mopping up spilt coffee than my Mac. If I had to choose between reading a book on an old cathode ray tube and paper, I would of course choose paper. However, if the choice were between my iPhone and paper, I would and usually do choose the iPhone, unless there are big high quality images or some layout (e.g. some poetry, some tables) that would be hard to look at on the small screen, or simple technical incompatibility gets in the way. As cheap, robust, flexible, high definition, untethered and light displays become more available and projection becomes the norm even for basic mobile phones, those exceptions will be much rarer.

Research in learning technologies is hard for many many reasons, but one of them is that it generally looks at what has been, seldom what is, and very rarely what will be. However, the research question that really interests me (and that, I suppose, distinguishes my interests from the purely educational side of e-learning) is not 'how should we learn, taking advantage of our current technologies' but 'how should we learn, taking advantage of next year's technologies?' There's a further question about how we should try to shape those developments that is also worth asking, which does require us to look at our current technologies with a critical eye, and it certainly helps to know what we have been doing to help to figure out what we will be doing, but the essence of our enquiry has to be focused on the future.

Next year (or the year after, or the one after that…), paper will come into the equation as the best means of learning about as often as mules come into the equation as the best form of transport. It will make as much sense to use it as it does now to choose cuneiform on clay tablets as the best means of delivery for our courses. 

The only real difficulty that I can see with this is in deciding at what point it becomes economically unviable to continue to use paper as a mainstream technology. I think we are close to that point now. So let's stop making fruitless comparisons. Let's figure out what we like about paper technology and make sure that we don't lose it. Then let's move on. 

ps – my thoughts on online vs face-to-face learning are an entirely different matter altogether. More on that another time.

pps. – and, of course, it is quite valid to attempt to answer questions about issues faced by those who have always used paper in the past and who are now struggling with how to teach online.

In games, brains work differently when playing vs. a human

http://community.brighton.ac.uk/jd29/weblog/41219.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1394

Reporting on a study comparing functional MRI scans of people playing the generalised form of the Prisoner’s Dilemma. Half were told they were playing against a machine, half were told they were playing a human. In fact, all were playing a machine. For this particular game there should be no difference in their behaviours, if they were playing the game logically, whether the other party were a machine or a human. But there was. Parts of the brain that are more active in trying to comprehend another person’s mental state were busier in those who believed they were playing against a human than in those who thought they were playing a machine.
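The “no difference” claim follows from the game’s payoff structure: against any fixed opponent move, defection pays more, so a purely logical player should behave identically whether the other party is human or machine. A minimal sketch (the payoff values are the conventional textbook ones, not figures from the study):

```python
# One round of the Prisoner's Dilemma with conventional payoffs (T > R > P > S).
# "C" = cooperate, "D" = defect. The point: for EITHER possible opponent move,
# defecting yields a strictly higher payoff, so a purely logical player's
# choice should not depend on who (or what) the opponent is.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (reward, R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # mutual defection (punishment, P)
}

for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]
print("defection dominates against any opponent")
```

That players’ brains (and often their choices) nonetheless differ depending on whom they believe they face is exactly what makes the study’s result interesting.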

This seems to point at something very important about how we learn with others and why it matters to learn things together. One of the distinctive and important roles of a teacher (who may be a peer, an expert, a pedagogue, etc) is to help learners to think differently. When we adjust our responses according to who we think we are talking to, we create mental models of that person and, in doing so, open opportunities to change how we think – to be like them. If that propensity is stronger when interacting with a real person than with a machine, then it presents a good case for real people in online learning. It is not that it is impossible to learn on our own (mediated through technologies like computers and books), but this gives some useful supporting evidence as to why it is often much harder than when we interact with a real person.

The study tells us little about how our brains work when we are talking to many people at once but that would seem to be a fruitful area for further study. From my perspective, it would be particularly interesting to find out whether these brain areas are more or less active when interacting with the collective or the network, where it is much harder to identify another’s mental state because you (usually) don’t know who you are talking to. I would hypothesise that groups, networks and collectives would form a continuum of activity between inter-personal interaction and interaction with a machine, but it may be subtler than that. For instance, in some circumstances it is possible that those sociable areas of the brain might be more active when trying to work out the mental states of a whole group than when trying to do it for an individual.

It would also be interesting to explore ways that we can compensate for this weakness in systems without people to talk with. Would it help to include exercises that require us to think like other people (scenario construction, play-acting, poetry etc)? Or could we find ways to fool people that they were talking to a real person? I guess this would ideally involve some kind of Turing-Prize-winning AI, but maybe there are halfway houses. For instance, an FAQ system where questions unanswered by an automated engine are passed to real people might be sufficiently human to have the right effect.

On a similar note, it would also be intriguing to analyse differences between questions asked of peers and questions asked of experts. If we are really trying to think like the person we are talking to, then we might expect different kinds of question, which would suggest that we are opening up new opportunities to think like that person. The same would apply in group learning, where we might be dealing with multiple models and accepting or rejecting them as part of the learning process. There’s a theory or two in here somewhere!

Created: Tue, 10 Feb 2009 13:06:00 GMT

gpeerreview – Google Code

http://community.brighton.ac.uk/jd29/weblog/41119.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1392

This is mighty cool. Mighty mighty cool…

“What is GPeerReview?

GPeerReview is a command-line tool that makes it simple to write a review of someone’s work and digitally sign them together.

How does it work?

1. First, you read someone’s paper.

2. Next, write a review. (The review is just a simple text file that contains a few scores and your opinions about the paper.)

3. Use GPeerReview to sign the review. (It will add a hash of the paper to your review, then it will use GPG to digitally sign the review.)

4. Send the signed review to the author. If the author likes the review, he/she will include it with his/her list of published works.

5. Prospective employers or other persons can easily verify that the reviews are valid.

Why?

* Peer reviews give credibility to an author’s work.

* Journals and conferences can use this tool to indicate acceptance of a paper.

* Researchers can also give credibility to each other by reviewing each others’ works.

* This enables researchers to publish first, and review later.

* It meshes seamlessly with existing publication venues. Even the credibility of works that have already been published can be enhanced by obtaining additional peer reviews.

* A decentralized social-network of reviewers and papers is naturally formed by this process. The structure of this network reflects that of the research community.”
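Step 3 of the workflow quoted above, binding a review to one exact version of a paper by embedding its hash before signing, can be sketched roughly like this. The file names and field label here are hypothetical, and the real tool delegates the actual signature to GPG:

```python
# A minimal sketch of GPeerReview's steps 2-3: embed a hash of the paper
# in the review text, so that the signed review vouches for one exact
# version of the work. (Field label and file names are illustrative;
# the real tool hands the combined text to GPG for the signature itself.)
import hashlib

def hash_paper(path: str) -> str:
    """Return a hex digest uniquely identifying one version of the paper."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def bind_review(review_text: str, paper_path: str) -> str:
    """Append the paper's hash; this combined text is what gets signed."""
    return f"{review_text}\npaper-sha256: {hash_paper(paper_path)}\n"

# Usage (paths are illustrative):
#   combined = bind_review(open("review.txt").read(), "paper.pdf")
#   ...then pipe `combined` to `gpg --clearsign` to produce the signed review
```

Because the hash is inside the signed text, a verifier can recompute the paper’s digest and confirm both that the review is genuine and that it refers to the paper as reviewed, not some later revision.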

Created: Thu, 05 Feb 2009 08:56:45 GMT

A GCSE module in 90 minutes, including 30 minutes of basketball

http://community.brighton.ac.uk/jd29/weblog/39700.html

http://www.guardian.co.uk/education/2009/jan/30/gcses-schools

A school in the UK has been testing spaced learning (a technique based on findings about memory formation from neuroscientific research) to condense a four-month GCSE science module into 90 minutes – 20 minutes of intensive narrated PowerPoint, 10 minutes of basketball, repeated three times, with great success. In fact, over a quarter of students did better in the tests using this method than they did after subsequently taking the traditional four-month module. The suggestion is that an entire GCSE could be passed by most students with just a few days of study and, strikingly, that further study might actually be harmful in a significant number of cases.

Ignoring things like the Hawthorne effect and assuming these results are meaningful, there are two main conclusions to be drawn here. The first is positive: that spaced learning works pretty well and that we can learn a lot from neuroscience. The second is appalling: that GCSEs (qualifications usually taken by English students at the age of 16) are almost totally useless as a means of gauging knowledge and understanding. Of course, we suspected that already.

Tests and exams are so embedded in our educational systems that we sometimes think they tell us something useful about the effectiveness of teaching and learning strategies. Alas, they tell us little. What they do tell us is that, somehow, a particular instance of a particular intervention may have helped some people to pass the test. If we get enough similar interventions in enough contexts to help identify a pattern, then we can start to say with some assurance whether a particular kind of intervention might help some people pass some kinds of test, and we might even be able to generalise a little about shared characteristics of such people, which might in turn help us to tailor our teaching for different learners to pass tests more effectively. Whether the test tells us anything useful or not, however, remains a significant question.

In this particular context, there is some evidence that spaced learning may be an effective approach to passing some GCSEs. But even here there are some nasty issues: the fact that many of the same students actually did worse after following this process and then studying for a further four months suggests that:

  1. whatever they learnt was not persistent and/or
  2. that what they learnt later reduced their ability to pass the exam.

If the former is true, spaced learning may have its uses but they are pretty limited. If the latter is true then, given the advantages conferred by having already had a successful go at the tests, either it is a sign of some truly appalling teaching or, more likely, it suggests that the students carried on learning and subsequently knew too much to pass. This sounds bizarre, but I have some anecdotal evidence for it: I can remember looking through model exam questions and answers for a GCSE-equivalent computing course with my son a few years ago and being horrified that he was being penalised for knowing too much on many of the questions, which frequently ignored complexities and ambiguities in favour of repeating what the book (sometimes absolutely wrongly) stated. For instance, one that stands out in my poor memory is that markers were explicitly told to penalise students for stating (correctly) that TCP and IP are protocols, while rewarding the incorrect answer of TCP/IP (which is actually a suite of protocols). A student with curiosity and an interest in the subject who had explored even a little further than the book would therefore have received lower marks than those who had memorised just what was needed. It is not surprising, therefore, that a relatively surface approach would be more successful in such instances. Knowing a little of the right kind of facts to answer test questions would, at least sometimes, be more useful than actually understanding the subject.

So, in the context of exam-passing at least, spaced learning is either useless in the long term, or part of the reason for its success is that it emphasises surface-level memory skills at the expense of depth of real learning. Interesting, but not revolutionary.

There are some occasions in life when this kind of learning can be useful (I'd like to try spaced learning as a means of learning to play a song, for instance) but not enough to warrant its wholesale adoption. More significantly, I think that it's yet another damning indictment of tests/exams as the primary driver and means of evaluating the success of our educational system. There are huge opportunities to rethink what we are assessing and how we do it, and we must work on these urgently. Assessment is such a driver in our systems that, if we do it wrong, we run a big risk of setting inauthentic goals and encouraging weak learning strategies that must be unlearnt as we enter real life.

Jinni

http://community.brighton.ac.uk/jd29/weblog/39632.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1391

Jinni claims to help you find movies and TV shows matching your taste to watch online. I spent a few minutes training it and it's not that great yet, but I do have eclectic tastes, which might mess with its algorithms a bit, and the system is obviously still growing.

What is interesting about it is its combination of expert opinions/classifications, machine intelligence (they talk of a movie genome that uses a rich ontology, akin to the music genome that led to Pandora) and collaborative filtering. It is clearly trying to marry the top down and bottom up in an interesting way. The model they seem to be using allows for the bottom-up to become more prominent as time goes by. I suspect that, as the user-base grows and the cold-start problem lessens, this might turn out to be quite useful.
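To make the idea concrete, here is a minimal sketch of that kind of hybrid: expert-assigned genome vectors provide a content-based score, user ratings provide a collaborative score, and the blend shifts toward the crowd as ratings accumulate. All names, data and weighting choices here are my own invention for illustration, not Jinni's actual algorithm.

```python
from math import sqrt

# Expert-assigned genome vectors (top-down): attribute -> strength.
GENOME = {
    "MovieA": {"dark": 0.9, "witty": 0.2},
    "MovieB": {"dark": 0.8, "witty": 0.3},
    "MovieC": {"dark": 0.1, "witty": 0.9},
}

# User ratings (bottom-up): user -> {movie: rating in [0, 5]}.
RATINGS = {
    "u1": {"MovieA": 5, "MovieB": 4},
    "u2": {"MovieA": 4, "MovieB": 5, "MovieC": 1},
}

def cosine(a, b):
    """Cosine similarity between two sparse attribute vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def content_score(user, movie):
    """Mean genome similarity of the candidate to movies the user liked."""
    liked = [m for m, r in RATINGS.get(user, {}).items() if r >= 4]
    if not liked:
        return 0.0
    return sum(cosine(GENOME[movie], GENOME[m]) for m in liked) / len(liked)

def collab_score(user, movie):
    """Mean rating of the movie by other users, scaled to [0, 1]."""
    others = [r[movie] for u, r in RATINGS.items()
              if u != user and movie in r]
    return (sum(others) / len(others)) / 5.0 if others else 0.0

def hybrid_score(user, movie):
    # Weight shifts toward collaborative filtering as ratings accumulate,
    # so expert opinion dominates during the cold start and the crowd
    # gradually takes over. The shrinkage constant 5 is arbitrary.
    n = sum(1 for r in RATINGS.values() if movie in r)
    w = n / (n + 5.0)
    return (1 - w) * content_score(user, movie) + w * collab_score(user, movie)
```

With the toy data above, `hybrid_score("u1", "MovieB")` comes out well ahead of `hybrid_score("u1", "MovieC")`, since u1's liked films share MovieB's genome and other users rated it highly.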

In its combination of sophisticated (and apparently recursive) algorithms and human input it is a fine example of a collective application. The use of two distinct strata of human input (the experts and the rest of us) gives an extra twist and a potentially richer dynamic than the usual fare.

Its use of an ontology offers benefits of parcellation as well as a richer set of ratings than the usual ‘this is good’ approach. In addition to the usual movie metadata, the main divisions are ‘experience’ and ‘story’, with each aspect subdivided into many other subtypes. The ‘experience’ aspect is particularly interesting, parallel in some respects to my own CoFIND system’s use of qualities, albeit in a more structured and less user-led form. The structure serves a purpose, though, allowing them to automate tagging once the system has been trained. If it works, this might help to overcome the problem of spiralling complexity and everlasting cold starts that have proved to be a stumbling block for CoFIND.

I look forward to seeing how this develops.
Created: Mon, 26 Jan 2009 18:10:57 GMT