Teens unlikely to be harmed by moderate digital screen use

The results of quite a large study (120,000 participants) appear to show that ‘digital’ screen time correlates, on average, with increased well-being in teenagers up to a certain point, after which the correlation becomes mildly negative (though not remotely as bad as, say, that of skipping breakfast). There is a mostly implicit assumption, or at least speculation, that the effects are in some way caused by the use of digital screens, though I see no strong signs of any serious attempt to demonstrate that in this study.

While this accords with common sense – if not with the beliefs of a surprising number of otherwise quite smart people – I am always highly sceptical of studies that average out behaviour, especially for something as remarkably vague as engaging with technologies that are related only insofar as they involve a screen. This is especially the case given that screens themselves are incredibly diverse – there’s a world of difference between the screens of an e-ink e-reader, a laptop, and a plasma TV, for instance, quite apart from the infinite range of possible different ways of using them, devices to which they can be attached, and activities that they can support. It’s a bit like doing a study to identify whether wheels or transistors affect well-being. It ain’t what you do, it’s the way that you do it. The researchers seem aware of this. As they rightly say:

“In future work, researchers should look more closely at how specific affordances intrinsic to digital technologies relate to benefits at various levels of engagement, while systematically analyzing what is being displaced or amplified,” Przybylski and Weinstein conclude. 

Note, though, the implied belief that there are effects to analyze. This remains to be shown. 

Address of the bookmark: https://www.eurekalert.org/pub_releases/2017-01/afps-tut011217.php

Moral panic: Japanese girls risk fingerprint theft by making peace-signs in photographs / Boing Boing

As Cory Doctorow notes, the fact that this headline singles out Japanese girls as being particularly at risk – and that this framing is its appeal – is much more disturbing than the fact that someone figured out how to lift fingerprints, usable to defeat biometric authentication systems, from photos taken with an ‘ordinary camera’ at a considerable distance (3 metres). He explains the popularity of the news story thus:

I give credit to the news-hook: this is being reported as a risk that young women put themselves to when they flash the peace sign in photos. Everything young women do — taking selfies, uptalking, vocal fry, using social media — even reading novels! — is presented as a) unique to young women (even when there’s plenty of evidence that the trait or activity is spread among people of all genders and ages) and b) an existential risk to the human species (as in, “Why do these stupid girls insist upon showing the whole world their naked fingertips? Slatterns!”)

The technical feat intrigued me, so I found a few high-res scans of pictures of Churchill making the V sign, taken at roughly that distance by professional photographers under various lighting conditions, on very good medium or large format film cameras (5″x4″ press cameras were most common in that era, though some might have been taken on smaller formats and/or cropped) with excellent lenses. While, on the very best, with cross-lighting, a few finger wrinkles and creases were partly visible, there was no sign of a single whorl, and nothing like enough detail for even a very smart algorithm to figure out the rest. So, with a tiny fraction of that resolution, I don’t think you could just lift an image from the web, a phone, or even a good compact camera to steal someone’s fingerprints unless the range were much closer and you were incredibly lucky with the lighting conditions and focus. That said, a close-up selfie using an iPhone 7+, with focus on the fingers, might well work, especially if you used burst mode to get slightly different images (I’m guessing you could mess with bas relief effects to bring out the details). You could also do it if you deliberately set out to: with something like a good 400mm-equivalent lens on a large-sensor camera (APS-C or larger), in bright light, at low ISO, cross-lit, with high resolution, good focus, and a small aperture, there would probably be enough detail.
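For what it’s worth, some thin-lens arithmetic roughly bears this out. The sketch below uses entirely my own assumed numbers (fingerprint ridges repeating roughly every 0.5 mm, and ~20 px/mm as a working threshold, since dedicated fingerprint scanners capture about 500 ppi) to estimate how many pixels per millimetre each kind of camera puts on a finger at 3 metres:

```python
# Back-of-envelope check, using a thin-lens model. All numbers are my
# own assumptions: fingerprint ridges repeat roughly every 0.5 mm, and
# dedicated fingerprint scanners capture ~500 ppi (about 20 px/mm), so
# anything far below that has no hope of recovering ridge detail.

def pixels_per_mm_on_subject(focal_mm, distance_mm, sensor_width_mm, sensor_px):
    """Approximate pixels per mm at the subject plane."""
    magnification = focal_mm / (distance_mm - focal_mm)  # subject-to-sensor scale
    return (sensor_px / sensor_width_mm) * magnification

# A 400mm lens on a 24 MP APS-C body (sensor ~23.6 mm wide, 6000 px across)
tele = pixels_per_mm_on_subject(400, 3000, 23.6, 6000)

# A typical phone camera: ~4 mm focal length, ~6.17 mm wide, 4000 px across
phone = pixels_per_mm_on_subject(4, 3000, 6.17, 4000)

print(f"telephoto: {tele:.1f} px/mm, phone: {phone:.2f} px/mm")
```

On these assumed numbers, the telephoto rig lands around 40 px/mm, comfortably above fingerprint-scanner resolution, while the phone manages under 1 px/mm at that range, nowhere near enough to recover ridges.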

Address of the bookmark: https://boingboing.net/2017/01/12/moral-panic-japanese-girls-ri.html

Intrinsic and Extrinsic Motivation

A short article from Lisa Legault that summarizes self-determination theory (SDT) and its findings very succinctly and clearly. It’s especially effective at highlighting the way the spectrum of extrinsic-to-intrinsic motivation works (including the integrated/identified/introjected continuum), and in describing the relationships between autonomy, competence, and relatedness. Nothing new here, nothing inspirational, just a useful resource to point people at so they can learn about the central tenets of SDT.

Address of the bookmark: https://www.researchgate.net/profile/Lisa_Legault/publication/311692691_Intrinsic_and_Extrinsic_Motivation/links/5856e60d08ae77ec37094289.pdf

Bridge champion who played her cards right (From The Argus)

Sad news for the world of bridge, about which I know almost nothing apart from what Sandra Landy, several-times world and European bridge champion, occasionally shared with me, and sad news for me. I learned today that she died last week, at the age of 78.

Though she never persuaded me to take up bridge, Sandra was a great influence on both my computing and my teaching careers. Firstly, she created (though was at the time no longer leading) the innovative and well-respected MScIS from which I graduated in the early 90s at the University of Brighton. On the course, she taught me Cobol, and supervised my project. At the end of the course, it was because of her recommendation and support that I became a principal software technician and, later, academic support manager for the university’s IT department and, some years later (again with her enthusiastic support and encouragement) became a lecturer, leading me fairly directly to my current career. She used to live down the road from me in a huge house in Hove (which was convenient when she wanted me to fix her computers!).

Sandra was an incredibly intelligent woman, a force of nature whose influence on the teaching of computing at the University of Brighton, and beyond, was vast. Her subject knowledge was immense, her curiosity intense. She delivered the first ever lecture on the first ever computing degree in the UK (at Brighton) in 1964, when I was just a toddler, and had played a major role in getting it off the ground in the first place. She was an intellectual powerhouse with a strong will, a clarity of vision, and a total lack of fear in critiquing anything and anyone, including herself. In fairness, as a result, she intimidated a lot of staff and students at the university, but she and I always got on famously. We amused and entertained each other.

She had a marvellously dry sense of humour and a wonderfully rich, cigarette-sanded voice that could charm the birds off a tree as easily as it could leave strong people quivering like jelly. Suffice to say, she usually got her way, and her way was usually a very good one, but she was as compassionate as she was passionate. She listened as intently as she spoke and, if an idea made sense to her (after she had challenged it, of course!), she would lend it her full and considerable support. Quite a lot of the more disruptive innovations I was able to bring in during my time in a support role at Brighton were only possible because Sandra stood behind me and barged through any objections. Her indelible stamp on the computing courses at the University of Brighton gave them a very distinctive character, an enviable mix of rigour and humanity, that persisted long after she retired. The world is a poorer place without her.

Address of the bookmark: http://www.theargus.co.uk/news/15015130.Bridge_champion_who_played_her_cards_right/

Original page

TEL MOOC from Athabasca University

Starts today…

Course Description

Teachers who want to learn more about teaching with technology will find this Massive Open Online Course (MOOC), Introduction to Technology-Enabled Learning (TEL), informative and engaging. Using up-to-date learning design and simple, accessible technology, the course runs on an easy-to-use learning platform available via the Internet. The course is designed for teachers who want to build on their knowledge and practice in teaching and learning with technology. It will run over five weeks and requires approximately three to five hours of time each week. Designed to accommodate teachers’ busy schedules, the course offers flexibility with options for learning the content. You will learn from readings, videos, discussions with other participants and instructors, meaningful exercises, quizzes and short assignments. Certification is available for those who wish to complete all required exercises and quizzes.

Address of the bookmark: https://www.telmooc.org/

Original page

DeepDyve – Your Personal Research Library

‘Like Spotify for academic articles’, the slogan says. It gives access to a claimed 10,000 paywalled academic journals for $40USD/month ($30/month for a year). The site correctly claims that you can therefore get a whole year’s access to all these journals for the cost of about 10 individual articles in an average paywalled journal, so it seems like a pretty good deal for any researcher outside academia needing to access more than a handful of papers in closed journals a year.

I had a quick browse, and here are my initial observations:

  • There’s a fairly decent selection of many of the more significant profiteering journals, albeit some that I read regularly are not there (including, interestingly, some paywalled but not-for-profit publications). It’s worth noting that, unlike Spotify for music or Netflix for movies, it can be a serious problem if a required paper is not available to a researcher. Some is better than none, but I don’t think 10,000 journals is anything like enough to make this truly compelling or disruptive.
  • The access is a bit variable – not all of it is full-text, and there appear to be some notable limitations on what you can do with at least some of the papers (a limit on the pages you can print per month is a warning sign – this is at best a rental model, the equivalent of streaming).
  • The site seems a bit flaky – the search doesn’t work very well, and sometimes fails altogether, and it seems to lose session state very easily – but it’s mostly a modern, easy-to-use system.
  • There are some useful browser add-ins etc that make it easy to hook in things like Scholar.

It’s not up there with a good university library. Not even close. Athabasca University Library, for instance, gives access to and indexes about 65,000 journals, albeit including a number that are open-access already. But AU library also gives access to a host of physical books and journals, and a very large number of online books, loads of conference proceedings, an excellent group of skilled information professionals to provide help with finding what you need, and plenty more.  Our undergraduate students get all of that for 6 months, as well as any textbooks needed for their courses, and their course materials, for a grand total of $180CAD ($30/month), paid as a standard resources fee. We do run this at a substantial loss (costs to us were, according to the last set of figures I saw, over $250/student, mainly thanks to immoral textbook pricing) but, even so, $40USD a month for a fraction of the services seems extremely steep to me. 

It would be unfair, though, to call this pricing predatory: I expect the company has been fleeced by the publishers just like everyone else. DeepDyve is just filling a market niche left by the truly predatory publishers that steal publicly funded research, then hold it to ransom in closed, legislatively locked containers to sell back to those that produced it (and others), lining their pockets with obscenely huge profits all the way down the line. DeepDyve reduces the costs for some people, and that’s OK, but it’s hardly a solution to the bigger problem, and may actually bolster a fundamentally corrupt status quo, because it provides an ongoing revenue stream to publishers that might otherwise be bypassed by ‘grey’ sources (if you want papers from paywalled journals and books, mail the author!) or ‘pirate’ sites like Sci-Hub or Academic Torrents. The correct answer to the problem is for all of us to stop publishing in closed, profiteering, exploitative journals, and so to stop letting them steal from us in the first place.



Address of the bookmark: https://www.deepdyve.com/howitworks

Original page

Alfie Kohn: "It’s bad news if students are motivated to get A’s" – YouTube

A nice one-minute summary of Alfie Kohn’s case against grades at www.youtube.com/watch?v=EQt-ZI58wpw

There’s a great deal more Kohn has to say on the subject that is worth reading, such as at http://www.alfiekohn.org/article/case-grades/ or http://www.alfiekohn.org/article/grading/ or an interview at http://www.education.com/magazine/article/Grades_Any_Good/

From that interview, this captures the essence of the case pretty well:

“The research suggests three consistent effects of giving students grades – or leading them to focus on what grade they’ll get. First, their interest in the learning itself is diminished. Second, they come to prefer easier tasks – not because they’re lazy, but because they’re rational. After all, if the point is to get an A, your odds are better if you avoid taking intellectual risks. Third, students tend to think in a more superficial fashion – and to forget what they learned more quickly – when grades are involved.

To put it positively, students who are lucky enough to be in schools (or classrooms) where they don’t get letter or number grades are more likely to want to continue exploring whatever they’re learning, more likely to want to challenge themselves, and more likely to think deeply. The evidence on all of these effects is very clear, and it seems to apply to students of all ages.

As far as I can tell, there are absolutely no benefits of giving grades to balance against these three powerful negative consequences – except that doing so is familiar to us and doesn’t take much effort.”


Note: if this video shows up as a blank space in your browser, then your security settings are preventing embedding of untrusted content in a trusted page. This video is totally trustworthy, so look for the alert to override it, typically near the address bar in your browser.

Address of the bookmark:

Wisdom of the Confident: Using Social Interactions to Eliminate the Bias in Wisdom of the Crowds

A really interesting paper on making crowds smarter.  I find the word ‘confident’ in the title a bit odd because it seems (and I may have misunderstood) that the researchers are actually trying to measure independent thinking rather than confidence. As far as I can tell, this describes a method for separating sheep (those more influenced by others) from goats (those making more independent decisions), at least when you have a sequence of decisions/judgments to work with. The reason it bothers me is that sheep can be confident too (see the US election or Brexit, for example).

We know that crowds can be wise if and only if the agents in the crowd are unaware of the decisions of other agents. If there’s a feedback loop (more accurately, I believe, if there is an insufficiently delayed feedback loop) then you wind up with stupid mobs, driven by preferential attachment and similar dynamics. This is a big problem in many political systems that allow publication of polls and early results. However, some people are, for one reason or another, less influenced by the crowd than others. It would be useful to be able to aggregate their decisions while ignoring those that simply follow the rest, in order to achieve wiser crowds. That’s what the method described here seeks to do.
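To make the intuition concrete, here is a toy simulation (my own sketch, not the paper’s model): agents estimate a quantity whose true value is 100. Independent agents use only their private, unbiased signal; herding agents pull their guesses heavily toward the running average of earlier public guesses, which here includes one confident early outlier.

```python
# Toy simulation: a crowd estimates a quantity with true value 100.
# Independent agents rely only on a private, unbiased signal; herding
# agents pull their guess toward the running average of all earlier
# guesses, which here includes one confident early outlier (150).
import random

random.seed(1)
TRUE_VALUE = 100.0

def crowd_estimate(n_agents, herd_weight):
    """Mean estimate of n_agents; herd_weight in [0, 1] is how strongly
    each agent copies the running average of earlier public guesses."""
    guesses = [150.0]  # a loud, wrong early guess that everyone can see
    for _ in range(n_agents):
        own = random.gauss(TRUE_VALUE, 20)       # private unbiased signal
        public = sum(guesses) / len(guesses)     # visible running average
        guesses.append(herd_weight * public + (1 - herd_weight) * own)
    return sum(guesses[1:]) / n_agents           # exclude the seed itself

independent = crowd_estimate(2000, herd_weight=0.0)
herding = crowd_estimate(2000, herd_weight=0.8)
print(f"independent crowd: {independent:.1f}, herding crowd: {herding:.1f}")
```

With no herding, the crowd mean lands very close to the true value; with heavy herding, the early outlier keeps the whole crowd biased long after it should have washed out – exactly the stupid-mob dynamic that early poll results can trigger.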

The paper is more concerned with describing its model than with describing or analyzing the experiment itself, which is a pity, as I’d like to know more about the populations used and the tasks performed, and whether it really is discriminating confident from independent behaviour. I’ve also done some work in this area and have written about how useful it would be to automatically identify independent thinkers, and to use their captured behaviour instead of that of the whole crowd to make decisions, but I have never implemented that because, in real life, it is quite hard to do. In this experiment, it seems quite possible that the ‘independent’ people might simply have been those that knew more about the domain. That’s great if we are using a sequence of captured data from the same domain (in this case, lengths of country borders), because we get results from those that know rather than those that guess. But it won’t transfer when the domain changes even slightly: knowing the length of the Swiss border might not be a good predictor of knowing, say, the length of the Nigerian border, though I guess it might improve things slightly, because those that care about such things would be better represented in the sample.

It would take a fair bit of evidence, I suspect, to identify someone as a context-independent independent thinker but, given enough time, it could be done, it would be well worth doing, and this model might provide the means to do it. I’d like to see it applied in a real context. There are less lengthy and privacy-invading alternatives. For instance, we might capture both a rating/value/judgement/whatever and some measure of confidence. Some kinds of prediction market capture that sort of data and, because of the personal stake involved, might achieve better results when we do not have a long history of data to analyze. Whether and to what extent confidence is related to independence, and whether the results would be any better, remain to be discovered, of course – there’s a good little research project to be done here – but it would be a good start.
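As a trivial illustration of that alternative (my own hypothetical example, nothing from the paper), the aggregation itself is just a weighted mean over (estimate, confidence) pairs:

```python
# Hypothetical illustration: aggregate (estimate, confidence) pairs with
# a confidence-weighted mean instead of inferring independence from a
# long behavioural history. All numbers below are invented.

def confidence_weighted_mean(judgements):
    """judgements: iterable of (estimate, confidence) pairs, confidence > 0."""
    total = sum(c for _, c in judgements)
    return sum(e * c for e, c in judgements) / total

# Invented estimates of a border length in km, with stated confidences
estimates = [(1900, 0.9), (1500, 0.3), (2100, 0.7), (800, 0.1)]
print(confidence_weighted_mean(estimates))  # pulled toward confident answers
```

With these invented numbers the weighted mean comes to 1855 km, much closer to the confident estimates than the unweighted mean of 1575 km; the open question, as above, is whether stated confidence actually tracks independence or knowledge.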

Address of the bookmark: https://arxiv.org/abs/1406.7578

Setapp – Netflix-style rental model for apps for Mac

Interesting. For $10USD/month, you get unlimited access to the latest versions of what is promised to be around 300 commercial Mac apps. Looking at the selection so far (about 50 apps), these appear to be of the sort that usually appear in popular app bundles (e.g. StackSocial), in which you can buy apps outright for a tiny fraction of the list price (quite often at a 99% reduction). I have a few of these already, for which I paid an average of a dollar or two apiece, albeit bundled with a bunch of useless junk that I did not need or already owned, so perhaps it’s more realistic to say they averaged more like $10 apiece. Either way, they can already be purchased for very little money, if you have the patience to wait for the right bundle to arrive. So why bother with this?

The main advantage of Setapp’s model is that you always get the latest version, unlike bundle purchases, which often nag you to upgrade to the next version at a far higher price than you paid almost as soon as you get them. It is also nice to have on-demand access to a whole library at any time: apps will probably turn up in a cheap pay-what-you-want bundle within a few months anyway, but only rarely when you actually need them. I guess there is a small advantage in the curation service, but there are plenty of much better and less inherently biased ways to discover tools that are worth having.

The very notable disadvantage is that you never actually own the apps – once you stop subscribing, or the company changes its conditions or goes bust, you lose access to them. For ephemerally useful things like disk utilities, conversion tools, and so on, this is no great hassle but, for things that save files in proprietary formats or rely on a cloud service (many of them), it would be a massive pain. As there is (presumably) some mechanism for updating and checking licences, it might be an even bigger pain if you happen to be on a plane or out of network range when either the app checks in or the licence is renewed. I don’t know which method Setapp uses to verify that you have a subscription but, one way or another, lack of network access at some point in the proceedings could really screw things up. When (with high probability) Setapp goes bust, you will be left high and dry. Also, I’m guessing that I would be unlikely to want more than a dozen or so of these in any given year, so each would cost me about $10 per year at the best of times. Though that might be acceptable for a major piece of software on which one’s livelihood depends, for the kind of software currently on show it’s quite a lot of money, notwithstanding the convenience of being able to pick up a specialist tool at no extra cost when you need it.

This is a fairly extreme assault on software ownership, but closed-source software of all varieties suffers from the same basic problem: you don’t really own the software that you buy. Unlike use-once objects like movies or books, software tends to be of continuing value. The obvious solution is to avoid closed source altogether and go for open source right the way down the stack: that’s always my preference. Unfortunately, there are still commercial apps that I find useful enough to pay for and, unfortunately, software decays. Even if you buy something outright that does the job perfectly, the surrounding ecosystems (the operating system, network, net services, etc) will most likely render it useless or positively dangerous at some point. There are also some doubly annoying cases where companies stop supporting versions, lose databases, or get taken over by other companies, so software that you once owned and paid for is suddenly no longer yours (Cyberduck, I’m looking at you). Worst of all are those that depend on a cloud service over which you have no control at all and that will almost certainly go bust, get taken over, be subject to cyberattack or government privacy breaches, be unavailable when you need it, or change its terms and conditions at some point to your extreme disadvantage. Though there may be a small niche for such things, and the immediate costs are often low enough to be tempting, as a mainstream approach to software provision it is totally unsustainable.


Address of the bookmark: https://setapp.com/

Pebble dashed


Pebble made my favourite smart watches. They were somewhat open, and the company understood the nature of the technology better than any of the mainstream alternatives. Well, at least it used to, until it started turning the watches into glorified fitness trackers, which is probably why the company is now being purchased by Fitbit.

So, no more Pebble and, worse, no more support for those that own (or, technically, paid for the right to use) a Pebble. If it were an old-fashioned watch I’d grumble a bit about reneging on warranties, but it would not prevent me from being able to use it. Thanks to the cloud service model, the watch will eventually stop working altogether:

“Active Pebble watches will work normally for now. Functionality or service quality may be reduced down the road. We don’t expect to release regular software updates or new Pebble features.”

Great. The most expensive watch I have ever owned has a shelf life of months, after which it will likely not even tell the time any more (this has already occurred on several occasions when it has crashed while I have not been on a viable network). On the bright side (though note the lack of promises):

“We’re also working to reduce Pebble’s reliance on cloud services, letting all Pebble models stay active long into the future.”

Given that nearly all the core Pebble software is already open source, I hope that this means they will open source the whole thing. This could make it better than it has ever been. Interesting – the value of the watch would be far greater without the cloud service on which it currently relies. 


Address of the bookmark: https://www.kickstarter.com/projects/597507018/pebble-2-time-2-and-core-an-entirely-new-3g-ultra/posts/1752929