Scraping Google Scholar to write your PhD literature chapter

Gephi diagram of data created by Bibnet

This looks really excellent – it scrapes Google Scholar, starting with a search that reveals work you already know about and that you think is significant. From those search results it generates an exportable Gephi map of authors, subject/disciplinary areas and links between them. Basically, it automatically (well – a little effort and a bit of Google Scholar/Gephi competence is needed) maps out connected research areas and authors, mined from Google Scholar, including their relative significance and centrality, shaped to fit your research interests. Doing this manually, as most researchers do, takes a really long time, and it is incredibly easy to miss significant authors and connections. This looks like a fantastic way to help build a literature review, and great scaffolding for exploring a research area. I see endless possibilities and uses. Of course, it is only as good as the original query, and only as good as Google Scholar’s citation trail, but that’s an extremely good start, and it could be iterated many times to refine the results further. The code for the tool, Bibnet, is available through GitHub.
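For the curious, the general approach is easy to sketch. The toy example below is not Bibnet’s actual code – the paper data and field names are invented for illustration – but it shows the broad shape of the pipeline once the Google Scholar results are in hand: build a weighted co-authorship graph with the networkx library, attach a simple centrality score to each author, and write a GEXF file that Gephi opens directly.

```python
# Toy Bibnet-style pipeline (illustrative only; the scraped data is invented):
# papers from a Google Scholar search -> co-authorship graph -> Gephi file.
from itertools import combinations

import networkx as nx

# Hypothetical scraped results; real data would come from Google Scholar.
papers = [
    {"title": "Social navigation in e-learning", "authors": ["A. Smith", "B. Jones"]},
    {"title": "Collective intelligence online", "authors": ["B. Jones", "C. Lee"]},
    {"title": "Stigmergy and the web", "authors": ["A. Smith", "C. Lee", "D. Patel"]},
]

G = nx.Graph()
for paper in papers:
    # Link every pair of co-authors; repeated collaborations add edge weight.
    for a, b in combinations(paper["authors"], 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Degree centrality as a crude proxy for an author's significance in the network.
for author, score in nx.degree_centrality(G).items():
    G.nodes[author]["centrality"] = score

# GEXF is one of the formats Gephi imports directly.
nx.write_gexf(G, "literature_map.gexf")
```

In Gephi, the edge weights and the per-node centrality attribute can then drive edge thickness and node size, which is essentially the kind of map shown in the diagram above.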

Address of the bookmark: https://mystudentvoices.com/scraping-google-scholar-to-write-your-phd-literature-chapter-2ea35f8f4fa1#.y43s1qg4l

Multiclick

This is great fun and quite fascinating – do try it out. You get to click on a rectangle, then see where other people have clicked – many thousands of them.

This system is incredibly similar to part of an experiment on collective social navigation behaviour that I performed over ten years ago, albeit at a much smaller scale and graphically a little coarser, and I deliberately asked people to click where they thought most other people would click. What’s interesting is that, though I only had a couple of hundred participants overall, and only just over a hundred got this view, the heat map of this new system is almost exactly the same shape as mine, though the nuances are better defined here thanks to the large numbers involved.

In my experiment (the paper was called ‘On the stupidity of mobs’) this was the control case: the other (experimental) subjects got to see where others in their group had previously clicked. They did not see the clicks of the control group and did not know how later subjects might behave, so finding the most popular point was not as trivial as it sounds. I was expecting stupid behaviour in those who could see where others had clicked, but it was not quite so simple. It appeared that people reacted in three distinctly different ways to seeing the clicks of others: about a third followed the herd (as anticipated), about a third deliberately avoided the herd (not expected), and about a third continued to make reasoned decisions, apparently uninfluenced by others, much as those without such cues. Again, I had not expected this. I should have expected it, of course. Similar issues were well known in the context of weighted lists such as Google Search results or reviews on Amazon, where some users deliberately seek less highly rated items or ignore list order in an attempt to counter perceived bias, and I had seen – but not well understood – similar effects in earlier case studies with other, more practically oriented social navigation systems. People are pretty diverse!

I wonder whether the researchers here are aiming for something similar. The system does offer the opportunity to try again later (not immediately), so they could in theory analyse the influence of others on the results in a similar way. I’d love to see those results.
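As a purely illustrative aside, those three response types are simple to model. The sketch below is my own toy simulation, not the experiment’s actual model: the one-third proportions are the rough splits described above, and the click distributions are invented assumptions. Followers click near the centroid of earlier clicks, avoiders aim for the opposite region, and independents ignore everyone else.

```python
# Toy simulation of the three observed response types (illustrative only;
# proportions and click models are assumptions, not the experiment's data).
import random

WIDTH, HEIGHT = 100, 60

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

clicks = [(WIDTH / 2, HEIGHT / 2)]  # seed click so the first subject has a cue

def hot_spot():
    # Centroid of prior clicks stands in for "where the herd has clicked".
    xs, ys = zip(*clicks)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def next_click():
    hx, hy = hot_spot()
    kind = random.random()
    if kind < 1 / 3:    # follower: clicks near the hot spot
        x, y = hx + random.gauss(0, 3), hy + random.gauss(0, 3)
    elif kind < 2 / 3:  # avoider: clicks roughly opposite the hot spot
        x, y = (WIDTH - hx) + random.gauss(0, 3), (HEIGHT - hy) + random.gauss(0, 3)
    else:               # independent: ignores the others entirely
        x, y = random.uniform(0, WIDTH), random.uniform(0, HEIGHT)
    return clamp(x, 0, WIDTH), clamp(y, 0, HEIGHT)

for _ in range(10_000):
    clicks.append(next_click())

# Crude textual heat map: bucket the clicks into a 10 x 6 grid of counts.
grid = [[0] * 10 for _ in range(6)]
for x, y in clicks:
    grid[min(5, int(y / HEIGHT * 6))][min(9, int(x / WIDTH * 10))] += 1
for row in grid:
    print(" ".join(f"{n:5d}" for n in row))
```

Even this crude mixture produces a central hot spot ringed by a diffuse scatter, which is roughly the shape both heat maps show.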

Address of the bookmark: http://boltkey.cz/multiclick/

The Bonus Effect – Alfie Kohn

Alfie Kohn in brilliant form once again, reaffirming his place as the most eloquent writer on motivation this century, this time taking on the ‘bonus effect’ – the idea that giving rewards makes those rewards themselves more desirable while simultaneously devaluing the activity leading to them. It seems that, though early research was equivocal, more recent studies show that this is real:

“When people are promised a monetary reward for doing a task well, the primary outcome is that they get more excited about money. This happens even when they don’t meet the standard for getting paid. And when a reward other than money is used — raffle tickets for a gift box, in this case — the effect is the same: more enthusiasm about what was used as an incentive.”

Also:

“The more closely a reward is conditioned on how well one has done something, the more that people come to desire the reward and, as earlier research has shown, the more they tend to lose interest in whatever they had to do to get the reward.”

As Kohn summarizes:

‘If the question is “Do rewards motivate people?” the answer is “Sure — they motivate people to get rewards.”’

We have long known that performance-related pay is a terrible idea, and that performance-related bonuses achieve the precise opposite of their intended effects. This is a great explanation of still more of the reasons behind those empirical findings.

As it happens, Athabasca University operates just such a system, flying in the face of five decades of research that shows unequivocally that it is positively self-defeating. It’s bad enough when used to drive workers on a production line. For creative and problem-solving work, it is beyond ridiculous. Of course, as Kohn notes, exactly the same dynamic underlies most of our teaching too:

“If we try to justify certain instructional approaches by saying they’ll raise test scores, we’re devaluing those approaches while simultaneously elevating the importance of test scores. The same is true of education research that uses test results as the dependent variable.”

The revolution cannot come soon enough.

Address of the bookmark: http://www.alfiekohn.org/blogs/bonus/