Mules and paper

http://community.brighton.ac.uk/jd29/weblog/41453.html

A colleague of mine recently asked me if I knew of any research comparing traditional paper-based correspondence courses with online courses. I replied that it was a ridiculous question to ask and that anyone hoping to get anything more than trivial and/or useless answers to it is doomed to failure. I struggled for an analogy. The best I could come up with was this: if you had to choose a form of transport, would you rather have a mule or would you prefer to be able to select one or more options from any and every form of transport yet invented, including the mule? 

There is a deeply mistaken perception that is rife in the world that the computer (or the connected computer) is a single technology, like the television. It is not. The computer is a universal tool, a universal medium and a universal environment. It can be many things, serially and at once. At its most extreme it could be every thing: I am rather charmed by the irritatingly persuasive but probably unprovable argument that, given everything we know about the universe, we are almost certainly living in a computer simulation. But let's ignore that because it probably doesn't make any difference at all as to how we should live or learn…

I wouldn't suggest that online platforms are the best solution for every distance learning need, but it seems pretty obvious that the success of any given learning opportunity depends on what you do with it. This makes comparisons very hard to draw, unless you are grossly misusing online technologies: different technologies afford different opportunities, and there is no doubt that all but the most appalling online learning experiences will be significantly different in form, style and quality from those delivered via paper. The more technologies that are available, the greater the range of options for enabling learning to occur. Given that fact, arbitrarily limiting yourself to a single technology when there are thousands or millions of others available that offer an indefinitely large number of different opportunities in terms of pace, medium, level of interaction, convenience, learner control, pedagogy, engagement, etc, etc, etc, seems positively perverse.

There might be more sensible questions we could ask, however, that might let us make more informed decisions about which technologies to use. One of them would be 'what is the minimum effort and cost that we can get away with in the time available to provide an acceptable learning experience for the largest possible number of learners with the smallest possible up-front capital investment?' Another might be 'how can we cater for the small number of learners without broadband Internet access?' The answers to those questions might lead us towards a paper-based solution under some circumstances, though I would still often argue against paper in both those cases.

As technologies improve, I think that paper is becoming increasingly irrelevant. It is expensive, environmentally harmful, unreliable, very limited in its media capabilities, bulky, anti-social, weak in accessibility, and (funnily enough, given paper aeroplanes) inflexible.

Interestingly, I would once have used most of those arguments against computers.

I would still generally rather take paper if I were going into the wilderness, down to the beach, or getting in the bath. I would also rather start a fire with paper than my iPhone and it is much better at mopping up spilt coffee than my Mac. If I had to choose between reading a book on an old cathode ray tube and paper, I would of course choose paper. However, if the choice were between my iPhone and paper, I would and usually do choose the iPhone, unless there are big high quality images or some layout (e.g. some poetry, some tables) that would be hard to look at on the small screen, or simple technical incompatibility gets in the way. As cheap, robust, flexible, high definition, untethered and light displays become more available and projection becomes the norm even for basic mobile phones, those exceptions will be much rarer.

Research in learning technologies is hard for many, many reasons, but one of them is that it generally looks at what has been, seldom what is, and very rarely what will be. However, the research question that really interests me (and that, I suppose, distinguishes my interests from the purely educational side of e-learning) is not 'how should we learn, taking advantage of our current technologies?' but 'how should we learn, taking advantage of next year's technologies?' There's a further question about how we should try to shape those developments that is also worth asking, which does require us to look at our current technologies with a critical eye, and it certainly helps to know what we have been doing to help to figure out what we will be doing, but the essence of our enquiry has to be focused on the future.

Next year (or the year after, or the one after that…), paper will come into the equation as the best means of learning about as often as mules come into the equation as the best form of transport. It will make as much sense to use it as it does now to choose cuneiform on clay tablets as the best means of delivery for our courses. 

The only real difficulty that I can see with this is in deciding at what point it becomes economically unviable to continue to use paper as a mainstream technology. I think we are close to that point now. So let's stop making fruitless comparisons. Let's figure out what we like about paper technology and make sure that we don't lose it. Then let's move on. 

 

ps – my thoughts on online vs face-to-face learning are an entirely different matter. More on that another time.

pps – and, of course, it is quite valid to attempt to answer questions about issues faced by those who have always used paper in the past and who are now struggling with how to teach online.

In games, brains work differently when playing vs. a human

http://community.brighton.ac.uk/jd29/weblog/41219.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1394

Reporting on a study comparing functional MRI scans of people playing the generalised form of the Prisoner’s Dilemma. Half were told they were playing against a machine, half were told they were playing a human. In fact, all were playing a machine. For this particular game there should be no difference in their behaviours, if they were playing the game logically, whether the other party were a machine or a human. But there was. Parts of the brain that are more active in trying to comprehend another person’s mental state were busier in those who believed they were playing against a human than in those who thought they were playing a machine.
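
To see why a purely logical player's choice shouldn't depend on the opponent, here is a toy sketch using the canonical one-shot payoffs (T > R > P > S). This is only an illustration of the dominance argument, not the generalised game actually used in the study.

```python
# Toy illustration (not the study's actual game): with canonical one-shot
# Prisoner's Dilemma payoffs, defection is the best response whatever the
# other party does, so a purely logical player's choice should not depend
# on whether the opponent is believed to be a human or a machine.

PAYOFF = {  # (my move, their move) -> my payoff, with T > R > P > S
    ("cooperate", "cooperate"): 3,  # R: reward for mutual cooperation
    ("cooperate", "defect"): 0,     # S: sucker's payoff
    ("defect", "cooperate"): 5,     # T: temptation to defect
    ("defect", "defect"): 1,        # P: punishment for mutual defection
}

for their_move in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda mine: PAYOFF[(mine, their_move)])
    print(f"If they {their_move}, my best response is to {best}")
# Both lines print 'defect': the logic is the same regardless of who the
# opponent is believed to be, which is what makes the observed difference
# in brain activity interesting.
```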

This seems to point at something very important about how we learn with others and why it matters to learn things together. One of the distinctive and important roles of a teacher (who may be a peer, an expert, a pedagogue, etc) is to help learners to think differently. When we adjust our responses according to who we think we are talking to, we create mental models of that person and, in doing so, open opportunities to change how we think – to be like them. If that propensity is stronger when interacting with a real person than with a machine, then it presents a good case for real people in online learning. It is not that it is impossible to learn on our own (mediated through technologies like computers and books), but this gives some useful supporting evidence as to why it is often much harder than when we interact with a real person.

The study tells us little about how our brains work when we are talking to many people at once but that would seem to be a fruitful area for further study. From my perspective, it would be particularly interesting to find out whether these brain areas are more or less active when interacting with the collective or the network, where it is much harder to identify another’s mental state because you (usually) don’t know who you are talking to. I would hypothesise that groups, networks and collectives would form a continuum of activity between inter-personal interaction and interaction with a machine, but it may be subtler than that. For instance, in some circumstances it is possible that those sociable areas of the brain might be more active when trying to work out the mental states of a whole group than when trying to do it for an individual.

It would also be interesting to explore ways that we can compensate for this weakness in systems without people to talk with. Would it help to include exercises that require us to think like other people (scenario construction, play-acting, poetry etc)? Or could we find ways to fool people into believing that they were talking to a real person? I guess this would ideally involve some kind of Turing-Prize-winning AI, but maybe there are halfway houses. For instance, an FAQ system where questions unanswered by an automated engine are passed to real people might be sufficiently human to have the right effect.
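
For what it's worth, such a halfway house could be very simple indeed. The sketch below is purely hypothetical (crude keyword matching, invented questions and helper names), but it shows the shape of the idea: answer automatically when there is a confident match, and pass everything else to a person.

```python
# A hypothetical sketch of the FAQ 'halfway house': try an automated match
# first, and hand the question to a human only when no good match exists.
# The matching is deliberately crude and the data is invented.

FAQ = {
    "How do I reset my password?": "Use the 'forgotten password' link on the login page.",
    "Where do I submit my assignment?": "Upload it through the course submission page.",
}

def keyword_overlap(a: str, b: str) -> float:
    """Fraction of words in question a that also appear in question b."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa), 1)

def forward_to_human(question: str) -> str:
    # Placeholder: in a real system this would queue the question for tutors.
    return f"Your question has been passed to a tutor: {question!r}"

def answer(question: str, threshold: float = 0.5) -> str:
    score, best = max((keyword_overlap(question, q), q) for q in FAQ)
    if score >= threshold:
        return FAQ[best]                 # confident automated answer
    return forward_to_human(question)    # fallback: a real person replies

print(answer("How can I reset my password?"))
print(answer("Why does my brain light up differently?"))
```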

On a similar note, it would also be intriguing to analyse differences between questions asked of peers and questions asked of experts. If we are really trying to think like the person we are talking to, then we might expect different kinds of question, which would suggest that we are opening up new opportunities to think like that person. The same would apply in group learning, where we might be dealing with multiple models and accepting or rejecting them as part of the learning process. There’s a theory or two in here somewhere!
Created:Tue, 10 Feb 2009 13:06:00 GMT

gpeerreview – Google Code

http://community.brighton.ac.uk/jd29/weblog/41119.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1392

This is mighty cool. Mighty mighty cool…

“What is GPeerReview?

GPeerReview is a command-line tool that makes it simple to write a review of someone’s work and digitally sign them together.

How does it work?

1. First, you read someone’s paper.

2. Next, write a review. (The review is just a simple text file that contains a few scores and your opinions about the paper.)

3. Use GPeerReview to sign the review. (It will add a hash of the paper to your review, then it will use GPG to digitally sign the review.)

4. Send the signed review to the author. If the author likes the review, he/she will include it with his/her list of published works.

5. Prospective employers or other persons can easily verify that the reviews are valid.

Why?

* Peer reviews give credibility to an author’s work.

* Journals and conferences can use this tool to indicate acceptance of a paper.

* Researchers can also give credibility to each other by reviewing each others’ works.

* This enables researchers to publish first, and review later.

* It meshes seamlessly with existing publication venues. Even the credibility of works that have already been published can be enhanced by obtaining additional peer reviews.

* A decentralized social-network of reviewers and papers is naturally formed by this process. The structure of this network reflects that of the research community. “
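
For the curious, the underlying mechanics are not exotic. The sketch below is not the gpeerreview tool itself, just a rough illustration of the same idea: take a SHA-256 hash of the paper, append it to the review, and clearsign the result with the standard gpg command. The file names and the review format are invented.

```python
# Rough illustration of the gpeerreview idea, NOT the tool itself: bind a
# review to a specific paper via a hash, then sign it with GPG. Requires a
# GPG key; gpg will prompt for the reviewer's passphrase.

import hashlib
import subprocess

def hash_file(path: str) -> str:
    """Return the SHA-256 digest of the reviewed paper."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def sign_review(review_path: str, paper_path: str, out_path: str) -> None:
    with open(review_path, "r", encoding="utf-8") as f:
        review = f.read()
    review += f"\npaper-sha256: {hash_file(paper_path)}\n"
    # Clearsign the review text (read from stdin) and write the signed copy.
    subprocess.run(
        ["gpg", "--clearsign", "--output", out_path],
        input=review.encode("utf-8"),
        check=True,
    )

# Example (hypothetical file names):
# sign_review("review.txt", "paper.pdf", "review.txt.asc")
```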
Created:Thu, 05 Feb 2009 08:56:45 GMT

A GCSE module in 90 minutes, including 30 minutes of basketball

http://community.brighton.ac.uk/jd29/weblog/39700.html

http://www.guardian.co.uk/education/2009/jan/30/gcses-schools

A school in the UK has been testing spaced learning (a technique based on neuroscientific findings about memory formation) to condense a four-month GCSE science module into 90 minutes (20 minutes of intensive narrated PowerPoint followed by 10 minutes of basketball, repeated three times), with great success. In fact, over a quarter of students did better in the tests using this method than they did after subsequently taking the traditional four-month module. The suggestion is that an entire GCSE could be passed by most students with just a few days of study and, strikingly, that further study might actually be harmful in a significant number of cases.

Ignoring things like the Hawthorne effect and assuming these results are meaningful, there are two main conclusions to be drawn here. The first is positive: that spaced learning works pretty well and that we can learn a lot from neuroscience. The second is appalling: that GCSEs (qualifications usually taken by English students at the age of 16) are almost totally useless as a means of gauging knowledge and understanding. Of course, we suspected that already.

Tests and exams are so embedded in our educational systems that we sometimes think they tell us something useful about the effectiveness of teaching and learning strategies. Alas, they tell us little. What they do tell us is that, somehow, a particular instance of a particular intervention may have helped some people to pass the test. If we get enough similar interventions in enough contexts to help identify a pattern, then we can start to say with some assurance whether a particular kind of intervention might help some people pass some kinds of test, and we might even be able to generalise a little about shared characteristics of such people, which might in turn help us to tailor our teaching for different learners to pass tests more effectively. Whether the test tells us anything useful or not, however, remains a significant question.

In this particular context, there is some evidence that spaced learning may be an effective approach to passing some GCSEs. But even here there are some nasty issues: the fact that many of the same students actually did worse after following this process and then studying for a further four months suggests that:

  1. whatever they learnt was not persistent, and/or
  2. what they learnt later reduced their ability to pass the exam.

If the former is true, spaced learning may have its uses but they are pretty limited. If the latter is true then, given the advantages conferred by having already had a successful go at the tests, it is either a sign of some truly appalling teaching or, more likely, it suggests that the students carried on learning and may subsequently have known too much to pass.

This sounds bizarre, but I have some anecdotal evidence for it: I can remember looking through model exam questions and answers for a GCSE-equivalent computing course with my son a few years ago and being horrified that he was being penalised for knowing too much on many of the questions, which frequently ignored complexities and ambiguities in favour of repeating what the book (sometimes absolutely wrongly) stated. One example that stands out in my poor memory is that markers were explicitly told to penalise students for stating (correctly) that TCP and IP are protocols, while rewarding the incorrect answer of TCP/IP (which is actually a suite of protocols). A student with curiosity and an interest in the subject who had explored even a little further than the book would therefore have received lower marks than those who had memorised just what was needed. It is not surprising, therefore, that a relatively surface approach would be more successful in such instances. Knowing a little of the right kind of facts to answer test questions would, at least sometimes, be more useful than actually understanding the subject.

So, in the context of exam-passing at least, spaced learning is either useless in the long term, or part of the reason for its success is that it emphasises surface-level memory skills at the expense of depth of real learning. Interesting, but not revolutionary.

There are some occasions in life when this kind of learning can be useful (I'd like to try spaced learning as a means of learning to play a song, for instance) but not enough to warrant its wholesale adoption. More significantly, I think that it's yet another damning indictment of tests/exams as the primary driver and means of evaluating the success of our educational system. There are huge opportunities to rethink what we are assessing and how we do it, and we must work on these urgently. Assessment is such a driver in our systems that, if we do it wrong, we run a big risk of setting inauthentic goals and encouraging weak learning strategies that must be unlearnt as we enter real life.

 

 

Jinni

http://community.brighton.ac.uk/jd29/weblog/39632.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1391

Jinni claims to help you find movies and TV shows matching your taste to watch online. I spent a few minutes training it and it's not that great yet, but I do have eclectic tastes, which might mess with its algorithms a bit, and the system is obviously still growing.

What is interesting about it is its combination of expert opinions/classifications, machine intelligence (they talk of a movie genome that uses a rich ontology, akin to the music genome that led to Pandora) and collaborative filtering. It is clearly trying to marry the top-down and the bottom-up in an interesting way. The model they seem to be using allows the bottom-up to become more prominent as time goes by. I suspect that, as the user-base grows and the cold-start problem lessens, this might turn out to be quite useful.
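
To make that top-down/bottom-up marriage concrete, here is a toy sketch (invented titles and ratings, certainly not Jinni's actual algorithm) that scores unseen titles by overlap with expert 'genome' tags plus other users' ratings, letting the collaborative part carry more weight as the user base grows.

```python
# Toy hybrid recommender: blend expert-assigned descriptors (top-down) with
# collaborative filtering over user ratings (bottom-up). All data invented.

GENOME = {  # expert-assigned descriptors
    "Solaris": {"slow", "cerebral", "space", "melancholy"},
    "Alien":   {"tense", "space", "horror"},
    "Amelie":  {"whimsical", "romantic", "feel-good"},
}
RATINGS = {  # user -> {title: rating out of 5}
    "ann": {"Solaris": 5, "Alien": 4},
    "bob": {"Solaris": 4, "Amelie": 2},
    "me":  {"Solaris": 5},
}

def tag_score(title, user):
    """Overlap between a title's tags and the tags of titles the user liked."""
    liked = [t for t, r in RATINGS[user].items() if r >= 4]
    liked_tags = set().union(*(GENOME[t] for t in liked))
    return len(GENOME[title] & liked_tags) / len(GENOME[title])

def crowd_score(title, user):
    """Mean rating of the title by everyone except the target user."""
    others = [r[title] / 5 for u, r in RATINGS.items() if u != user and title in r]
    return sum(others) / len(others) if others else 0.0

def recommend(user="me"):
    unseen = [t for t in GENOME if t not in RATINGS[user]]
    n_raters = len(RATINGS) - 1
    w = n_raters / (n_raters + 5)   # crowd weight grows with the user base
    return sorted(unseen, reverse=True,
                  key=lambda t: (1 - w) * tag_score(t, user) + w * crowd_score(t, user))

print(recommend())  # -> ['Alien', 'Amelie'] for this toy data
```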

In its combination of sophisticated (and apparently recursive) algorithms and human input it is a fine example of a collective application. The use of two distinct strata of human input (the experts and the rest of us) gives an extra twist and a potentially richer dynamic than the usual fare.

Its use of an ontology offers benefits of parcellation as well as a richer set of ratings than the usual ‘this is good’ approach. In addition to the usual movie metadata, the main divisions are ‘experience’ and ‘story’, with each aspect subdivided into many other subtypes. The ‘experience’ aspect is particularly interesting, parallel in some respects to my own CoFIND system’s use of qualities, albeit in a more structured and less user-led form. The structure serves a purpose, though, allowing them to automate tagging once the system has been trained. If it works, this might help to overcome the problem of spiralling complexity and everlasting cold starts that have proved to be a stumbling block for CoFIND.

I look forward to seeing how this develops.
Created:Mon, 26 Jan 2009 18:10:57 GMT

49 Amazing Social Media, Web 2.0 And Internet Stats

http://community.brighton.ac.uk/jd29/weblog/39582.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1390

The figures speak for themselves and are pretty much what I would have expected on the whole, but one or two caused me to do a double-take. This one surprised me: YouTube’s bandwidth costs per day are about $1,000,000. That’s $365m per year on handling 13 hours of uploaded video every minute and well over 100m videos viewed every day. I guess it doesn’t seem that expensive when you think of it that way.

Created:Sat, 24 Jan 2009 19:46:50 GMT

OpenSocial, OpenID, and OAuth: Oh, My!

http://community.brighton.ac.uk/jd29/weblog/39386.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1389

A terrific talk by Joseph Smarr of Plaxo. This video explains the technologies behind social sites very clearly. It’s an hour long but, if you’re interested in developing social applications and you’re not sure where to begin (or even worse, you *are* sure but haven’t heard of these standards) then it’s a great introduction.
Created:Wed, 14 Jan 2009 06:22:08 GMT

TagCrowd – make your own tag cloud from any text

http://community.brighton.ac.uk/jd29/weblog/39309.html

Full story at: http://jondron.net/cofind/frshowresource.php?tid=5325&resid=1388

A simpler, more primitive, but very usable and less over-technologised system than Wordle that takes some plain text (or the HTML from ANY web page) and turns it into a tag cloud. I saw Wordle when it was relatively young (months, not years, ago!) and it was slightly more like this, though even then it had some novel output options and was less developer-focused than TagCrowd. TagCrowd generates very clear, legible, standards-compliant HTML/CSS but little else. They profess a desire to build an API, but it has none yet. Even so, sometimes simple is beautiful. A nice little system.
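
The core of what such a tool does is easy to picture. The snippet below is a back-of-the-envelope sketch, not TagCrowd's actual code: count word frequencies, drop stopwords, and emit plain HTML with font sizes scaled by frequency.

```python
# Back-of-the-envelope tag cloud generator (not TagCrowd's implementation):
# frequency counting plus simple inline CSS font scaling.

import re
from collections import Counter

STOPWORDS = {"the", "and", "a", "an", "of", "to", "in", "is", "it", "that"}

def tag_cloud(text: str, top_n: int = 30) -> str:
    """Return a simple HTML tag cloud for the given plain text."""
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    counts = Counter(words).most_common(top_n)   # [(word, frequency), ...]
    biggest = counts[0][1]                       # highest frequency, for scaling
    spans = [
        f'<span style="font-size:{100 + 150 * n // biggest}%">{w}</span>'
        for w, n in sorted(counts)               # alphabetical, like most clouds
    ]
    return '<div class="cloud">' + " ".join(spans) + "</div>"

# Example (hypothetical file name): print(tag_cloud(open("article.txt").read()))
```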
Created:Sun, 11 Jan 2009 10:32:53 GMT

James Paul Gee on games, social media and education

http://community.brighton.ac.uk/jd29/weblog/39250.html

 http://www.edutopia.org/james-gee-games-learning-video

A marvellous video from Edutopia featuring James Paul Gee, in which he presents some very persuasive arguments for games and social media in education. More importantly, he challenges how school education is done in the US (although there are local differences, this is much the same as it is done in most of the world when you get down to basics, and pretty much the same as much of university education, especially in the sciences) and offers some ways out. Not much is new in what he has to say, but he says it very well. Enjoy!