Study shows Facebook spreads nonsense more effectively than fact

An interesting side-effect of the way Facebook relentlessly and amorally drives the growth of its network, no matter the cost: stupidity thrives at the expense of useful knowledge.

This study looks at how information and misinformation spread in a Facebook network, finding that the latter has far more long-term staying power and thus, thanks to EdgeRank and the reification of communication, continues to spread and grow while more ephemeral factual pieces of news disappear from the stream. I suspect this is because actual news has a sell-by date, so people move on to the next story. Misinformation of the sort studied (conspiracy theories, etc.) has a more timeless and mythic quality that is only loosely connected with facts or events, but it has high emotional impact and is innately interesting (if true, the world would be a much more surprising place), so it can persist without becoming any more or less relevant. It doesn’t have to spread fast or even garner much interest at first, because it persists in the network. All it needs to do is wait around for a while – the Matthew Effect and Facebook’s algorithms see to the rest.
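To make that mechanism concrete, here is a toy simulation of my own – nothing to do with the study’s actual model, and the parameters are entirely made up. A ‘news’ item whose intrinsic appeal decays daily competes with a ‘myth’ whose appeal never does, while the feed amplifies whatever already has engagement:

```python
import random

def simulate(days=120, trials=500, decay=0.9, boost=0.05):
    """Each day an item is reshared with probability
    appeal * (1 + boost * accumulated_shares): the feed amplifies
    whatever already has engagement (a crude Matthew effect)."""
    totals = [0.0, 0.0]                    # [news, myth]
    for t in range(trials):
        rng = random.Random(t)
        appeal = [0.5, 0.1]                # news starts far more appealing
        shares = [0, 0]
        for _ in range(days):
            for i in (0, 1):
                if rng.random() < appeal[i] * (1 + boost * shares[i]):
                    shares[i] += 1
            appeal[0] *= decay             # news has a sell-by date...
            # ...while the myth's appeal stays constant: timeless and mythic
        totals[0] += shares[0]
        totals[1] += shares[1]
    return totals[0] / trials, totals[1] / trials

# With these made-up parameters, the news burns bright and dies while
# the myth, never more than mildly interesting, quietly overtakes it.
print(simulate())
```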

There is not much difference between interest in scientific and anti-scientific articles at the start. There is a wave of activity for the first 120 minutes after posting, then a second one 20 hours later (a common pattern). But then the fun starts…

It’s over the long term that serious differences were observed. While the science news had a relatively short tail, petering out quickly, conspiracy theories tended to gain momentum more slowly but had a much longer tail. They stuck around for a longer period of time, meaning they could reach far more people.

Then there’s another problem with the way Facebook works – the much-discussed echo-chamber effect. This effect is far more pronounced on Facebook than in other networks, with algorithms favouring content from people and groups you regularly interact with. So if you share, Like or even click on conspiracy theories a lot, you’re more likely to be shown them in future, reinforcing the misinformation rather than challenging it.


Address of the bookmark: http://www.alphr.com/science/1002377/study-shows-facebook-spreads-nonsense-more-effectively-than-fact

Brain Based Learning and Neuroscience – What the Research Says!

Will Thalheimer provides a refreshing look at the over-hyping of (and quite pernicious lies about) neuroscience and brain-based learning. As he observes, neuroscience is still barely out of diapers in terms of actually usable results for educators, and those researching in the field have no illusions that it is anywhere close yet (though they are very hopeful). When it comes to learning practice, what the research says is pretty close to nothing.

I am a little sceptical about whether neuroscience will ever be really valuable in education. This is not to say it is valueless – far from it. We have already had some useful insights into memory and have a better idea of some of the things that reduce or increase the effectiveness of brain functioning (sleep, exercise, etc), as well as a clearer notion of the mechanisms behind learning. Such things are good to know and can lead to some improvements in learning. The trouble is, though, that most researchers in the area are doing reductive science – seeking repeatable mechanisms and processes that underlie phenomena we see. This is of very little value when dealing with complex adaptive systems and emergence. Stuart Kauffman demonstrates that there are two main reasons reductive explanations fail to give us any help at all with understanding emergent systems: epistemological emergence and ontological emergence. Epistemological emergence means that it is impossible in principle to predict emergent features from constituent parts. Ontological emergence means that completely different kinds of causality occur in and between emergent phenomena than in and between their constituent parts, so knowledge of how the constituent parts work has no bearing at all on higher levels of causality in emergent phenomena. It’s a totally different kind of knowledge.

Knowing how the brain works in education is useful in much the same way that knowing about the movements of water molecules in clouds is useful in meteorology. There are insights to be gained, explanations even, but they are of relatively little practical value in predicting the weather, let alone in predicting the precise shape of a specific cloud. Worse, most of the time in education we don’t have a very precise idea of what kind of cloud shape we are seeking. In fact, when we act as though we do (learning objectives and their associated assessments), we usually miss a great deal of the important stuff.

But it is worse than that. Those of us concerned with education are not just predicting or explaining phenomena, but orchestrating them. You can no more extrapolate how to teach from knowing how the brain works than you can extrapolate how to paint a masterpiece from knowing what paint is composed of. They are not even in the same family of phenomena. This doesn’t mean that a painter cannot learn useful things about paint that assist the process – how fast it dries, its colour-fastness, its viscosity, and so on – and such knowledge does open up potential avenues for designing new kinds of paint. But we still need to know what to do with the paint once we have it. So, yes, brain science has value in education. Just not that much.

Address of the bookmark: http://www.willatworklearning.com/2016/01/brain-based-learning-and-neuroscience-what-the-research-says.html

Social Influence Bias: A Randomized Experiment

Fascinating article from 2013 on an experiment on a live website in which the experimenters manipulated rating behaviour by giving content an early upvote or downvote. An early upvote had a very large influence on future voting, increasing the likelihood of further positive ratings by nearly a third (32%) and raising final ratings by 25% on average. Interestingly, downvotes did not have the same effect: users tended to correct them, so they made very little overall difference. Topics and prior relationships also made some difference.

This accords closely with many similar studies and experiments, including a social navigation study I performed about a decade ago, in which participants clicked on a treasure map; the twist was that they had to try to guess where, on average, most other people would click. About half the subjects could see where others had already clicked; the other half could not. The participants were aware that the average was taken from those who could not see where others had clicked. The click patterns of the two sets were radically different…

[Image: Mob effects in social navigation]

On closer analysis, of those who could see where others had clicked, around a third followed what others had done (as this recent experiment suggests), around a third followed a similar pattern to the ‘blind’ participants, and around a third actively chose an option because others had not done so. On the face of it, this last behaviour was a bit bizarre given the conditions of the contest, though it is quite likely that these participants assumed just such a bias would occur and acted accordingly.

One thing that might be useful, though very difficult, would be to try to weed out the herd followers and downgrade their ratings. StackExchange tries to do something like this by giving more weight to those who have shown expertise in the past, but it has not fully sorted out the problem of the super-influential who have accumulated a lot of good karma by gaming the system, nor the networks that form within it and lead to bias (a problem shared by the less sophisticated but also quite effective Reddit). At the very least, it might be helpful to delay showing aggregate feedback until a certain amount of time has passed or a threshold number of ratings has been reached.
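A minimal sketch of that delay idea, assuming nothing about any real system’s internals – the class, names and threshold here are all mine and purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class DelayedRating:
    """Raters vote 'blind' until a minimum number of independent votes
    has accumulated, so early votes cannot anchor later ones."""
    reveal_threshold: int = 30          # hide the aggregate until this many votes
    votes: list = field(default_factory=list)

    def vote(self, value: int) -> None:
        assert value in (-1, 1)         # simple up/down voting
        self.votes.append(value)

    def visible_score(self):
        """Return None (show nothing) until enough independent votes exist."""
        if len(self.votes) < self.reveal_threshold:
            return None
        return sum(self.votes)

# Usage: the UI shows nothing until 30 votes are in, then the aggregate.
item = DelayedRating()
for v in [1, 1, -1, 1]:
    item.vote(v)
print(item.visible_score())   # None: still collecting independent opinions
```

A threshold on the number of votes, rather than a fixed time window, has the advantage that low-traffic items are not revealed before enough independent opinions have been gathered.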

One thing is certain, though: simple aggregated ratings that are fed back to prospective raters (including those voting in elections) are almost purpose-built to make stupid mobs. As several people have shown, including Surowiecki and Page, crowds are normally only wise when they do not know what the rest of the crowd is thinking. 
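A toy illustration of that point, with made-up numbers rather than anything from Surowiecki or Page: compare the average guess of a fully independent crowd with one in which each guesser blends their own estimate with the running public average.

```python
import random

def crowd_error(n=500, truth=100.0, noise=30.0, conformity=0.7, seed=0):
    """One run: independent guessers vs guessers who blend their private
    estimate with the running public average (a crude herding model)."""
    rng = random.Random(seed)
    independent, social = [], []
    for _ in range(n):
        private = rng.gauss(truth, noise)      # each person's own noisy guess
        independent.append(private)
        public = sum(social) / len(social) if social else private
        social.append(conformity * public + (1 - conformity) * private)

    def mean_err(xs):
        return abs(sum(xs) / len(xs) - truth)

    return mean_err(independent), mean_err(social)

# Averaged over many runs, the independent crowd's mean lands far closer
# to the truth: an unlucky early guess keeps propagating through the herd.
runs = [crowd_error(seed=s) for s in range(100)]
print(sum(r[0] for r in runs) / 100, sum(r[1] for r in runs) / 100)
```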

ABSTRACT

Our society is increasingly relying on the digitized, aggregated opinions of others to make decisions. We therefore designed and analyzed a large-scale randomized experiment on a social news aggregation Web site to investigate whether knowledge of such aggregates distorts decision-making. Prior ratings created significant bias in individual rating behavior, and positive and negative social influences created asymmetric herding effects. Whereas negative social influence inspired users to correct manipulated ratings, positive social influence increased the likelihood of positive ratings by 32% and created accumulating positive herding that increased final ratings by 25% on average. This positive herding was topic-dependent and affected by whether individuals were viewing the opinions of friends or enemies. A mixture of changing opinion and greater turnout under both manipulations together with a natural tendency to up-vote on the site combined to create the herding effects. Such findings will help interpret collective judgment accurately and avoid social influence bias in collective intelligence in the future.

Address of the bookmark: http://www.sciencemag.org/content/341/6146/647.full