The ultimate insomnia cure: new GDPR legislation soothingly read by Peter Jefferson

The BBC’s Shipping Forecast is one of the great binding traditions of British culture, and has been many a Brit’s lullaby since time immemorial (i.e. long before I was born). Though I never once paid attention to its content in all the decades I heard it, eleven years after leaving the country I could still probably recite the majority of the 31 sea areas surrounding the British Isles from memory.

For as long as I can recall, the gently soothing voice of the Shipping Forecast was that of Peter Jefferson (apparently he retired in 2009, after 40 years) who, in this magnificently somnolent rendering, immortalizes excerpts from the General Data Protection Regulation that has recently come into force in the EU. My eyelids start drooping about 30 seconds in.

 

Address of the bookmark: https://blog.calm.com/relax/once-upon-a-gdpr

Originally posted at: https://landing.athabascau.ca/bookmarks/view/3327075/the-ultimate-insomnia-cure-new-gdpr-legislation-soothingly-read-by-peter-jefferson

Black holes are simpler than forests and science has its limits

Mandelbrot set (Wikipedia, https://en.wikipedia.org/wiki/Mandelbrot_set)

Martin Rees (UK Astronomer Royal) takes on complexity and emergence. This is essentially a primer on why complex systems – as he says, accounting for 99% of what’s interesting about the world – are not susceptible to reductionist science despite being, at some level, reducible to physics. As he rightly puts it, “reductionism is true in a sense. But it’s seldom true in a useful sense.” Rees’s explanations are a bit clumsy in places – for instance, he confuses ‘complicated’ with ‘complex’ once or twice, which is a rookie mistake, and his example of the Mandelbrot Set as ‘incomprehensible’ is not convincing and rather misses the point about why emergent systems cannot be usefully explained by reductionism (it’s about different kinds of causality, not about complicated patterns) – but he generally provides a good introduction to the issues.
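As an aside, and to make that point about the Mandelbrot Set concrete: the whole endlessly intricate object is generated by a single, trivially simple rule – iterate z → z² + c and ask whether the orbit stays bounded – which is precisely why a complicated pattern from a simple formula is not the same thing as emergence in a complex adaptive system. A minimal sketch (my own illustration, not from Rees’s article):

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c appears to belong to the Mandelbrot Set.

    The entire set comes from this one rule: iterate z -> z*z + c
    and check whether |z| ever escapes beyond 2.
    """
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# A crude ASCII rendering: an endlessly complicated pattern, an utterly simple cause.
for y in range(10, -11, -1):                      # imaginary axis, 1.0 down to -1.0
    print("".join("#" if in_mandelbrot(complex(x / 30, y / 10)) else "."
                  for x in range(-60, 21)))       # real axis, -2.0 to about 0.7
```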

These are well-trodden themes that most complexity theorists have addressed in far more depth and detail, and that usually appear in the first chapter of any introductory book in the field, but it is good to see someone who, from his job title, might seem to be an archetypal reductive scientist (he’s an astrophysicist) challenging some of the basic tenets of his discipline.

Perhaps my favourite works on the subject are John Holland’s Signals and Boundaries, which is a brilliant, if incomplete, attempt to develop a rigorous theory to explain and describe complex adaptive systems, and Stuart Kauffman’s flawed but stunning Reinventing the Sacred, which attempts (with very patchy success) to bridge science and religious belief but which, in the process, brilliantly and repeatedly proves, from many different angles, the impossibility of reductive science explaining or predicting more than an infinitesimal fraction of what actually matters in the universe. Both books are very heavy reading, but very rewarding.

Address of the bookmark: https://aeon.co/ideas/black-holes-are-simpler-than-forests-and-science-has-its-limits

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2874665/black-holes-are-simpler-than-forests-and-science-has-its-limits

Amazon helps and teaches bomb makers

Amazon’s recommender algorithm works pretty well: if people start to gather together ingredients needed for making a thermite bomb, Amazon helpfully suggests other items that may be needed to make it, including hardware like ball bearings, switches, and battery cables. What a great teacher!
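For anyone curious about the mechanics, this is, at heart, just ‘frequently bought together’ co-occurrence counting. Here is a minimal sketch in Python (my own toy illustration, not Amazon’s actual algorithm): it surfaces whatever items most often share a basket, with no notion whatsoever of what those items are for.

```python
from collections import defaultdict
from itertools import combinations

# Toy purchase histories: each basket is a set of items bought together.
baskets = [
    {"magnesium ribbon", "iron oxide", "ball bearings"},
    {"magnesium ribbon", "iron oxide", "battery cables"},
    {"iron oxide", "aluminium powder", "switches"},
    {"garden gloves", "seed trays", "compost"},
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def also_bought(item, top_n=3):
    """Return the items most frequently bought alongside `item`."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: -kv[1])
    return [name for name, _ in ranked[:top_n]]

# The algorithm knows nothing about chemistry or intent; it only sees patterns.
print(also_bought("iron oxide"))
```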

It is disturbing that this seems to imply that there are enough people ordering such things for the algorithm to recognize a pattern. However, it would seem remarkably dumb for a determined terrorist to leave such a (figuratively and literally) blazing trail behind them, so it is just as likely to be the result of a very slightly milder form of idiot, perhaps a few Trump voters playing in their backyards. It’s a bit worrying, though, that the ‘wisdom’ of the crowd might suggest uses of and improvements to some stupid kids’ already dangerous backyard experiments that could make them way more risky, and potentially deadly.

Building intelligent systems is not too hard, as long as the activity demanding intelligence can be isolated and kept within a limited context or problem domain. Computers can beat any human at Go, Chess, or Checkers. They can drive cars more safely and more efficiently than people (as long as there are not too many surprises or ethical dilemmas to overcome, and as long as no one tries deliberately to fool them). In conversation, as long as the human conversant keeps within a pre-specified realm of expertise, they can pass the Turing Test. They are even remarkably better than humans at identifying, from a picture, whether someone is gay or not. But it is really hard to make them wise. This latest fracas is essentially a species of the same problem as that reported last week of Facebook offering adverts targeted at haters of Jews. It’s crowd-based intelligence, without the wisdom to discern the meaning and value of what the crowd (along with the algorithm) chooses. Crowds (more accurately, collectives) are never wise: they can be smart, they can be intelligent, they can be ignorant, they can be foolish, they can even (with a really smart algorithm to assist) be (or at least do) good; but they cannot be wise. Nor can AIs that use them.

Human wisdom is a result of growing up as a human being, with human needs, desires, and interests, in a human society, with all the complexity, purpose, meaning, and value that it entails. An AI that can even come close to that is at best decades away, and may never be possible, at least not at scale, because computers are not people: they will always be treated differently, and have different needs (there’s an interesting question to explore as to whether they can evolve a different kind of machine-oriented wisdom, but let’s not go there – SkyNet beckons!). We do need to be working on artificial wisdom, to complement artificial intelligence, but we are not even close yet. Right now, we need to be involving people in such things to a much greater extent: we need to build systems that informate, that enhance our capabilities as human beings, rather than ones that automate and diminish them. It might not be a bad idea, for instance, for Amazon’s algorithms to learn to report things like this to real human beings (though there are big risks of error, reinforcement of bias, and some fuzzy boundaries of acceptability that it is way too easy to cross) but it would definitely be a terrible idea for Amazon to preemptively automate prevention of such recommendations.

There are lessons here for those working in the field of learning analytics, especially those that are trying to take the results in order to automate the learning process, like Knewton and its kin. Learning, and that subset of learning that is addressed in the field of education in particular, is about living in a human society, integrating complex ideas, skills, values, and practices in a world full of other people, all of them unique and important. It’s not about learning to do, it’s about learning to be. Some parts of teaching can be automated, for sure, just as shopping for bomb parts can be automated. But those are not the parts that do the most good, and they should be part of a rich, social education, not of a closed, value-free system.

Address of the bookmark: http://www.alphr.com/politics/1007077/amazon-reviewing-algorithms-that-promoted-bomb-materials

Original page

 

Update: it turns out that the algorithm was basing its recommendations on things used by science teachers and people who like to make homemade fireworks, so this is nothing like as sinister as it at first seemed. Nonetheless, the point still stands. Collective stupidity is just as probable as collective intelligence, possibly more so, and wisdom can never be expected from an algorithm, no matter how sophisticated.

Analytic thinking undermines religious belief while intelligence undermines social conservatism, study suggests

‘Suggests’ is the operative word in the title here. The title is a sensationalist interpretation of an inconclusive and careful study, and I don’t think this is what the authors of the study mean to say at all. Indeed, they express caution in numerous ways, noting small effect sizes, lack of proof of causality, large overlaps between groups, and many other reasons for extremely critical interpretation of the evidence:

“We would like to warn readers to resist the temptation to draw conclusions that suit their ideological worldviews,” Saribay told PsyPost. “One must not think in terms of profiles or categories of people and also not draw simple causal conclusions as our data do not speak to causality. Instead, it’s better to focus on how certain ideological tendencies may serve psychological needs, such as the need to simplify the world and conserve cognitive energy.”

This is suitably cautious and very much at odds with the title of the PsyPost article.

The study itself finds some confirmatory evidence that, in the US (and only in the US):

  •     Religion may be embedded more in Type 1 intuitions relative to politics.
  •     Processing liberal political arguments may require cognitive ability.
  •     Religious belief should be predicted uniquely by analytic cognitive style.
  •     Conservatism should be uniquely predicted by cognitive ability.

It is important to note, however, that ‘prediction’ in this instance has a very precise statistical meaning: a modest association between these factors, slightly better than chance, not a causal connection one way or the other. The study simply adds a little more weight to an already fairly substantial body of evidence that cognitively challenged people, especially those more inclined to intuition than to reason (the two are statistically correlated), are somewhat more likely to be drawn both to religion and to right wing politics. Much as I would like it to imply the inverse – that intelligence and rationality are a cure for religion and right wing beliefs – there is absolutely nothing in this research to suggest that.
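To put a number on what ‘slightly increased odds’ looks like, here is a toy simulation (my own illustration, with made-up numbers, not data from the study): an effect of the small size typical in this literature is easily ‘significant’ with a large sample, yet the two groups overlap so heavily that knowing someone’s score tells you almost nothing about which group they belong to.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical 'analytic cognitive style' scores for two groups, differing by
# a small effect size (Cohen's d of about 0.2) – an illustration only.
group_a = rng.normal(loc=0.0, scale=1.0, size=n)
group_b = rng.normal(loc=0.2, scale=1.0, size=n)

# The mean difference is real, and easily 'significant' with samples this large...
print("mean difference:", round(group_b.mean() - group_a.mean(), 3))

# ...but the best possible guess at group membership from a score alone
# is barely better than a coin toss.
threshold = (group_a.mean() + group_b.mean()) / 2
accuracy = ((group_b > threshold).mean() + (group_a <= threshold).mean()) / 2
print("best-guess classification accuracy:", round(accuracy, 3))  # roughly 0.54
```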

Part of the motivation for the study is the researchers’ observation of the growing antagonism to intelligence, expertise, evidence, and truth that is revealed in Trump’s victory, Brexit, ISIL, man-made climate change denial, and so on. While such evils are no doubt fuelled and sustained by (not to put too fine a point on it) stupid people in search of simple solutions to complex problems, it would be foolish (stupid, even) and highly inaccurate to suggest that all (or even a majority) of those exhibiting such attitudes and beliefs are stupid, or driven by intuition rather than reason, or both. As the study’s authors rightly observe, the value of this study is its contribution to understanding some of the complexity of the problem and should not be used to extrapolate exactly the same kind of simplified caricatures that cause it in the first place:

“…a more balanced understanding can only be reached via continued empirical research. Human beings may sometimes benefit from cognitive simplification of a complex and at times scary world of constant change and uncertainty. It does seem that certain aspects of religion and conservative ideology serve to deal with this, in slightly different ways. This is the direction that evidence points to thus far. However, researchers of course must resist this very need to simplify the world beyond a certain level.”

The original study can be found at http://www.sciencedirect.com/science/article/pii/S019188691730226X

Address of the bookmark: http://www.psypost.org/2017/09/analytic-thinking-undermines-religious-belief-intelligence-undermines-social-conservatism-study-suggests-49655

Original page

E-Learn 2017, Vancouver, 17-20 October – last day of cheaper registration rates

Today is the final day to get the discount rate if you are planning on coming to E-Learn in Vancouver this year (US$455 today vs US$495 from tomorrow onwards).

It promises to be quite a big event this year, with an estimated 900+ concurrent sessions, 100+ posters, and three lunchtime SIGs (including a new one on sustainable learning technologies), not to mention some fine keynotes and networking events.  Annoyingly, it clashes with ICDE in Toronto this year but, IMHO, E-Learn is a better conference for those working and researching in online education, and it’s a much better location. I may be a little biased, being both a resident of Vancouver and local co-chair of the conference, but there are some very good reasons I chose to be both those things!

I have attended almost all E-Learn (and its predecessor, WebNet) conferences for nearly 20 years now because it tends to attract some great people, provides an excellently diverse and blended mix of technical and pedagogical perspectives, gives plentiful chances to engage with both early-career researchers and those at the top of the field, usually picks great locations, is well-organized, and focuses solely on adult online learning (mainly higher education but also some from industry, healthcare, government, etc). The acceptance rate (1-in-3 to 1-in-4) is high enough to attract diverse papers that can be off the wall and interesting (especially from younger researchers who don’t know what’s impossible yet so sometimes achieve it), but low enough to exclude utter rubbish. If that kind of thing interests you, this is the conference for you!

I hope to see you there.

Address of the bookmark: https://www.aace.org/conf/elearn/registration/


Making the community the curriculum | Dave Cormier

The always wonderful Dave Cormier is writing a book (open, of course) about rhizomatic learning and, as you might expect given Dave’s eclectic and rich range of skills (from uber-tech-guru to uber-learning-guru), not to mention his cutting-edge knowledge (this is someone so far ahead of trends that he actually invented the term ‘MOOC’), it’s brilliant stuff. Though it is a work in progress and still a bit raw in places, there are clues that this is not your common or garden e-book right from the opening chapter, Why we work together – cheating as learning, which introduces the radical idea that people are pretty good at helping other people to learn while, in the process, learning themselves. Other chapters are equally charmingly named: Learning in a Time of Abundance, Five tips for slackers for keeping track of digital stuff, and One person’s guide to evaluating educational technologies. What comes through most strongly in this is a vision of where we are going – where we must be going – in a world of increasing connection and increasingly connective technologies. In all, it provides an extremely practical, achievable, and pragmatic way of going about that without breaking everything in sight, very well grounded in theory, and very entertainingly (and very clearly) presented.

I’d not noticed this work in progress till now and am very glad that I found it. Highly recommended reading for anyone in education or edtech, or anyone who is simply interested in learning, in how technology changes us, and in how to manage that change. I really look forward to seeing the finished or, at least, the published product. My sense is that this will always be an evolving book because that’s pretty much the nature of the beast, and so it will continue to be relevant for a long time to come.

Address of the bookmark: https://davecormier.pressbooks.com/

Original page

SpyStudent: hidden wireless video live transmission camera

“Who does not know the problems with the driving test or studies testing? You have not time to learn and have more important things to do! And suddenly, the date for the exam or test in a few days. If your exam is important to you and you do not know what you should do otherwise, then you are right with us! Do not despair, we have something for you that can help you!”

This fabulous offer from dron.si (who knew that my surname was a thing in Slovenia?) allows you, for a mere €374.17, to be the proud owner of the SpyStudent kit, a camera, mic, earpiece, and wireless transmission system made for “those who do not like to learn for the test”.

You can easily and undetectably shove the various bits of transmitter down your underpants, assuming you weigh more than 300 kilos, you wear exceptionally baggy clothing, and you have no fear of numerous forms of radiation in your nether regions. You might be a little challenged to find a way to shove the ‘wireless spy earpiece’ that, from the picture, seems to be made for elephants, down into your ear canal, let alone ever hope to get it out again (I hope it holds its charge!) but that’s part of the fun.  Anyway, I am sure you can put up with a little inconvenience for a device that enables you (with the aid of your BMW-owning accomplice outside the building who you really hope knows the answers) to cheat on any exam or test with impunity.

Obviously, exam invigilators have never seen microphones before so you’re fine on that count, and they never bother to look for people muttering into their shirts, holding up exam papers to their chests, or tilting their heads as though listening to large black objects shoved into their ears. And exams are normally taken in open fields, so the range won’t be a problem.

The man in the illustration is taking a break from his more usual activities of molesting small children/terrorism/voting for Trump, to enjoy cheating on his driving test. Sadly, he also cheated on the ‘holding your pen’ test, so it’s not going to end well:

child molester/exam cheat/international terrorist

 

Don’t forget this crucial advice, though…

You go into the examination room and you try to keep quiet as if nothing had.

I hope nothing had.

Address of the bookmark: http://www.dron.si/en/brezzicne-ip-kamere/i_424_hidden-wireless-video-live-transmission-camera-spystudent-button-camera-power-pack

Original page

Professor Jon Dron | Beyond Busy

An interview with me by Graham Allcott, author of the bestselling How to be a Productivity Ninja and other books, for his podcast series Beyond Busy, and as part of the research for his next book. In it I ramble a lot about issues like social media, collective intelligence, motivation, technology, education, leadership, and learning, and Graham makes some incisive comments and asks some probing questions. The interview was conducted on the landing of the Grand Hotel, Brighton, last year.

Address of the bookmark: http://getbeyondbusy.com/e/35495d7ba89876L/?platform=hootsuite

Original page

SCIS makes a great showing at HCI 2017, Vancouver

 

Ali Dewan presenting at HCI 2017

I had the pleasure to gatecrash the HCI 2017 conference in Vancouver today, which gave me the chance to see Dr Ali Dewan present three excellent papers in a row (two with his name on them) on a variety of themes, as well as a great paper written and presented by one of our students, Miao-Han Chang.

Miao-Han Chang presenting

Both did superb jobs of presenting to a receptive crowd. Ali got particular acclaim from the audience for the first work he presented (Combinatorial Auction based Mechanism Design for Course Offering Determination, by Anton Vassiliev, Fuhua Lin & M. Ali Akber Dewan) for its broad applicability in many areas beyond scheduling courses.

Athabasca, and especially the School of Computing and Information Systems, has made a great showing at this prestigious conference, with contributions not just from Ali and Miao-Han, but also from Oscar (Fuhua) Lin, Dunwei Wen, Maiga Chang and Vive Kumar. Kurt Reifferscheid and Xiaokun Zhang also had a paper in the proceedings but were sadly not able to attend to present it.

 

Jon Dron and Ali Dewan at HCI 2017

Jon and Ali at the Vancouver Conference Centre after Ali’s marathon presentation stint. I detect a look of relief on Ali’s face!

 

Ali Dewan presenting

Papers

  • Combinatorial Auction based Mechanism Design for Course Offering Determination
    Anton Vassiliev, Fuhua Lin, M. Ali Akber Dewan, Athabasca University, Canada
  • Enhance the Use of Medical Wearables through Meaningful Data Analytics
    Kurt Reifferscheid, Xiaokun Zhang, Athabasca University, Canada
  • Classification of Artery and Vein in Retinal Fundus Images Based on the Context-Dependent Features
    Yang Yan, Changchun Normal University, P.R. China; Dunwei Wen, M. Ali Akber Dewan, Athabasca University, Canada; Wen-Bo Huang, Changchun Normal University, P.R. China
  • ECG Identification Based on PCA-RPROP
    Jinrun Yu, Yujuan Si, Xin Liu, Jilin University, P.R. China; Dunwei Wen, Athabasca University, Canada; Tengfei Luo, Jilin University, P.R. China; Liuqi Lang, Zhuhai College of Jilin University, P.R. China
  • Usability Evaluation Plan for Online Annotation and Student Clustering System – A Tunisian University Case
    Miao-Han Chang, Athabasca University, Canada; Rita Kuo, New Mexico Institute of Mining and Technology, United States; Fathi Essalmi, University of Kairouan, Tunisia; Maiga Chang, Vive Kumar, Athabasca University, Canada; Hsu-Yang Kung, National Pingtung University of Science and Technology, Taiwan

Computer science students should learn to cheat, not be punished for it

This is a well thought-through response to a recent alarmist NYT article about cheating among programming students.

The original NYT article is full of holy pronouncements about the evils of plagiarism, horrified statistics about its extent, and discussions of the arms races, typically involving sleuthing by markers and ever more ornate technological fixes that are always one step behind the most effective cheats (and one step ahead of the dumber ones). This is a lose-lose system. No one benefits. But that’s not the biggest issue with the article. Nowhere does the NYT article mention that the problem is largely caused by the fact that we in academia typically tell programming students to behave in ways that no programmer in their right mind would ever behave (disclaimer: the one programming course that I currently teach, very deliberately, does not do that, so I am speaking here as an atypical outlier).

As this article rightly notes, the essence of programming is re-use of code. Although there are certainly egregiously immoral and illegal ways to do that (even open source coders normally need to religiously cite their sources for significant uses of code written by others), applications are built on layer upon layer upon layer of re-used code, common subroutines and algorithms, snippets, chunks, libraries, classes, components, and a thousand different ways to assemble (in some cases literally) the code of others. We could not do programming at all without 99% of the code that does what we want it to do being written by others. Programmers knit such things together, often sharing their discoveries and improvements so that the whole profession benefits and the cycle continues. The solution to most problems is, more often than not, to be found in StackExchange forums, Reddit, or similar sites, or in open source repositories like Github, and it would be an idiotic programmer that chose not to (very critically and very carefully) use snippets provided there. That’s pretty much how programmers learn, a large part of how they solve problems, and certainly how they build stuff. The art of it is in choosing the right snippet, understanding it, fitting it into one’s own code, selecting between alternative solutions and knowing why one is better (in a given context) than another. In many cases, we have memorized ways of doing things so that, even if we don’t literally copy and paste, we repeat patterns (whole lines and blocks) that are often identical to those that we learned from others. It would likely be impossible to even remember where we learned such things, let alone to cite them. We should not penalize that – we should celebrate it. Sure, if the chunks we use are particularly ingenious, or particularly original, or particularly long, or protected by a licence, we should definitely credit their authors. That’s just common sense and decency, as well as (typically) a legal requirement. But a program made using the code of others is no more plagiarism than Kurt Schwitters was a plagiarist of the myriad found objects that made up his collages, or than a house builder is a plagiarist of the bricks from which a house is built.
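For what it’s worth, the ‘common sense and decency’ part is cheap to do. A hedged sketch of the kind of thing I mean (the snippet and URL below are invented for the example, not a real source): when a programmer adapts a solution found in a forum answer, a one-line comment is usually all the citation that is needed.

```python
# Adapted from a community forum answer on chunking iterables
# (hypothetical source, e.g. https://stackoverflow.com/a/XXXXXXX, CC BY-SA);
# modified to yield lists rather than tuples and to drop any padding value.
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of up to `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

print(list(chunked(range(7), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```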

And, as an aside, please stop calling it ‘Computer Science’. Programming is no more computer science than carpentry is woodworking science. It bugs me that ‘computer science’ is used so often as a drop-in synonym for programming in the popular press, reinforced by an increasing number of academics with science-envy, especially in North America. There are sciences used in computing, and a tiny percentage of those are quite unique to the discipline, but that’s a minuscule percentage of what is taught in universities and colleges, and a vanishingly small percentage of what nearly all programmers actually do. It’s also worth noting that computer science programs are not just about programming: there’s a whole bunch of stuff we teach (and that computing professionals do) about things like databases, networks, hardware, ethics, etc. that has nothing whatsoever to do with programming (and little to do with science). Programming, though, especially in its design aspects, is a fundamentally human activity that is creative, situated, and inextricably entangled with its social and organizational context. Apart from in some research labs and esoteric applications, it is normally closer to fine art than it is to science, though it is an incredibly flexible activity that spans a gamut of creative pursuits analogous to a broad range of arts and crafts, from poetry to music to interior design to engineering. Perhaps it is most akin to architecture in the ways it can (depending on context) blend art, craft, engineering, and (some) science, but it can be analogous to pretty much any creative pursuit (universal machines and all that).

Address of the bookmark: https://thenextweb.com/dd/2017/05/30/lets-teach-computer-science-students-to-cheat/#.tnw_FTOVyGc4

Original page