Unintelligent machines

In 2012 there were roughly 100 million lines of code in an average car, a number that has been rising rapidly for decades and is no doubt significantly higher now. If you printed out 100 million lines of code, it would consume approximately 1.8 million pages of text, or a stack of paper approaching 200 metres in height. Assuming a text coverage of about 5%, if you were using (say) an HP inkjet printer that uses 35g of ink per thousand pages, it would take over 60kg of ink to print, and you would make your way through thousands of ink cartridges (which, at about CAD$40 apiece, would set you back a few hundred thousand dollars). On a 20ppm printer, it would take over 60 days of continuous printing, not allowing for time between cartridge changes, paper refills, and so on, nor for the fact that the printer would need to be replaced every day or two as it reached the end of its useful life. To be fair, much of that would be duplicate code, well-tested libraries, and standard functions; a lot of it is involved in stuff like entertainment systems, USB readers, and other non-critical systems; and, of those 100 million lines, ‘only’ around 10 million are actually involved in systems that make the vehicle do its thing.
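For anyone who wants to play with the assumptions, here is a minimal sketch of that back-of-envelope arithmetic in Python. The lines-per-page, sheet thickness, and cartridge yield are my own guesses rather than figures from the article, so treat the outputs as order-of-magnitude estimates at best.

```python
# Back-of-envelope estimate of what printing 100 million lines of code would involve.
# Every parameter here is an assumption for illustration, not a figure from the article.

lines_of_code = 100_000_000
lines_per_page = 55            # assumed lines of monospaced text per printed page
sheet_thickness_mm = 0.1       # assumed thickness of a sheet of office paper
ink_g_per_1000_pages = 35      # HP's quoted usage at ~5% coverage, as above
cartridge_yield_pages = 360    # assumed page yield of a single cartridge
cartridge_price_cad = 40       # as above
pages_per_minute = 20          # as above

pages = lines_of_code / lines_per_page
stack_height_m = pages * sheet_thickness_mm / 1000
ink_kg = pages * ink_g_per_1000_pages / 1000 / 1000
cartridges = pages / cartridge_yield_pages
cost_cad = cartridges * cartridge_price_cad
days_printing = pages / pages_per_minute / 60 / 24

print(f"~{pages:,.0f} pages, a stack roughly {stack_height_m:.0f} m high")
print(f"~{ink_kg:.0f} kg of ink in ~{cartridges:,.0f} cartridges (~CAD${cost_cad:,.0f})")
print(f"~{days_printing:.0f} days of non-stop printing at {pages_per_minute} ppm")
```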

But wow.

The industry average for bugs varies between 15 and 50 defects per thousand lines of code. Microsoft reckon they have that down to 0.5 per thousand which, as anyone who has ever used Microsoft software will no doubt agree, is still way too high. I think that it might result from a peculiarly rosy definition of ‘defect’, and it certainly doesn’t include code behaviours that are entirely intentional but horribly wrong. But let’s assume that they are being open and truthful about it and that this really is a realistic defect rate. In that case, in the 10 million lines of code that make the vehicle work, there will be roughly 5,000 defects, a good number of which will definitely cause security holes, some of which might be positively dangerous in their own right. Most of those vehicles are wirelessly connected and updated over the air, and there has been a significant increase in in-vehicle networking over the years (Cisco are becoming big players here), so the opportunities for system-level bugs and vulnerabilities are growing all the time. Meanwhile, the human side of the Internet continues to explode, and so the opportunities and tools available to script kiddies expand at an exponential rate.
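For what it’s worth, here is the same sort of crude arithmetic for the defect estimates. The defect densities are the figures quoted above; the assumption that defects are spread uniformly across the code is mine, and is certainly too simple:

```python
# Expected defects in the ~10 million lines of code that actually drive the vehicle,
# at various published defect densities (defects per thousand lines of code).
# Assumes defects are spread uniformly, which is certainly an over-simplification.

critical_loc = 10_000_000
defect_rates_per_kloc = {
    "industry low (15/KLOC)": 15,
    "industry high (50/KLOC)": 50,
    "Microsoft's claimed rate (0.5/KLOC)": 0.5,
}

for label, rate in defect_rates_per_kloc.items():
    print(f"{label}: ~{critical_loc / 1000 * rate:,.0f} defects")
```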

The average car weighs around 1500kg and can easily travel at 140kph. Just saying.

I’m not particularly worried about intelligent machines becoming our robot overlords. We’ve really got a long way to go before we even know what such a thing is, let alone how to make one and, by the time we do get there, we’ll know how to augment ourselves so that we are at least a match for them. But unintelligent machines are another matter.

XKCD on voting software


Address of the bookmark: https://www.codeinstitute.net/blog/much-code-cars/

Originally posted at: https://landing.athabascau.ca/bookmarks/view/3478214/unintelligent-machines

You Can Learn Everything Online Except for the Things You Can't

cookies (public domain, https://flic.kr/p/owGwDH)

A Wired magazine article from Rhett Allain that is big on metaphor (courses are the chocolate chips, the cookie is the on-campus experience) but very small on critical thinking. What it does highlight, though, is the failure of imagination lurking in much online and in-person learning discussion and literature, and I give credit to Allain for recognizing the obvious elephant in the room: that education is about learning to be, not about learning to do/learning stuff. As he puts it, “the whole cookie is about becoming more mature as a human. It’s about leveling up in the human race”. I couldn’t agree more. What we explicitly teach and what students actually learn are utterly different things, and our own little contributions are at best catalysts, at worst minor diversions. To simply compare the chocolate chips is a variant on the McNamara Fallacy, and well done to Allain for pointing this out in a mainstream publication. Where I profoundly disagree is the bizarre notion that colleges somehow bake better cookies, or that cookies are the only (or even the best) medium in which to embed chocolate chips.

Allain’s confusion is shared by a great many professional educators and educational researchers so, assuming he is not a professional researcher in the field, his ignorance is forgivable. If we are being persnickety, there is no such thing as either online or in-person learning: learning is something that is done by people (individually and collectively) and it resides in both people and the environments/objects they co-create and in which they live. It is not done online or in-person. It is done in the connections we make, in our heads and between one another. 

It is fair to observe that there are huge differences between online and on-campus learning. There is no doubt that removing people from the rest of the human race and shoving a bunch of them who share an interest in learning into one concentrated space does result in some interesting and useful side effects, and it does lead to a distinctive set of benefits. When done well (admittedly rarely) it gives people time to dream, time to explore, time to do nothing much apart from reflect, discover, connect, talk, and grow. For kids who have lived dependent lives in schools and their homes this can be a useful transition phase. So, yes, there are things learned in physical colleges that are not the same as things learned in other places. But that’s a trite truism. There are things learned in pubs, on planes, while swimming, in fields, etc, etc, etc that are distinctive too.

There is equally no doubt that those that don’t go to college can and do get at least the same diversity and richness in their learning experience: it’s just a different set of things that result from the complex interactions and engagements with where they happen to be and who they happen to know. Being less removed from the rest of life and the community has its own benefits, situating learning in different contexts, enabling richer connections between all aspects of human life. The online folk have (innately) much more control of their learning experience and, on the whole, therefore need to work harder to make the most of the environments they are in – it doesn’t come in a neat, self-contained, packaged box. But to suggest that it is any the less rich and meaningful is to do online learners a deep disservice. My own institution, Athabasca University, doesn’t have online learners. We just have learners, who live somewhere, in communities and in regions, among people and places that matter to them. We provide another (online) place to dwell but, unlike a traditional campus-based institution, it’s not an either/or alternative: our online place coexists with and extends into myriad other physical places, that reach back into it and enrich it as much as we reach out and enrich them. At least, that’s how it works when we do it right.

Analogies and metaphors can be useful jumping-off points for understanding things, and I’m OK with the cookie idea because it emphasizes the intimate relationship between teaching and learning. A more useful analogy, though, might be to compare and contrast online vs in-person learning with the experiences of those who watch movies on a home theatre via Netflix, YouTube, Amazon Prime, Mubi, etc vs those who watch movies at the cinema. There’s a great deal to be said for the cinema – the shared experience, the feeling of belonging to a crowd and, of course, the big benefits of being able to hang out with fellow movie-goers before and after the movie. There’s also the critical value of the rituals, and the simple power of the event. I love going to movie theatres. On the other hand, if you have a decent enough rig at home (technologies matter) there’s also a lot to be said for the control (stop when you need a break, rewind to catch things you missed or want to see again, adjust the volume to your needs, eat the food you want, drink what you wish, etc), the vast choice (tens of thousands of movies rather than a handful), the flexibility (when you want, with whom you want, at a pace to suit you), the focus (no coughing, chatting, phone-using idiots around you, etc), the diversity and range of social connectedness (from looking up reviews on IMDB to chatting about it on social media or with others in the room), and the comfort of watching movies at home.

Can one replace the other? Not really. Is one better than the other? It depends. I’m glad I don’t have to make a final binary choice in the matter, and I think that’s how we should think about online and in-person teaching. I don’t mean that a single institution should offer alternative online and in-person routes: that’s way too limiting, like only getting movies from one organization. I mean that education can and should be a distributed experience, chosen by the learners (with guidance if they wish), not tied to one place and one method of learning. Just as I can watch YouTube, Netflix, Mubi, Crave, Amazon Prime, Apple, or whatever, as well as go to any one of several movie theatres nearby (not to mention open-air movie events etc), so should I be able to choose my ways to learn.

Disclaimer: this is not a perfect metaphor by any means. Perhaps it would be fairer to compare watching a live play with watching streaming TV, and it certainly doesn’t begin to capture the significant differences in engagement, interaction, activity, and creativity involved in the educational processes compared with ‘passive’ watching of entertainment. But it’s still better than chocolate chip cookies.

Address of the bookmark: https://www.wired.com/story/you-can-learn-everything-online-except-for-the-things-you-cant

Originally posted at: https://landing.athabascau.ca/bookmarks/view/3469833/you-can-learn-everything-online-except-for-the-things-you-cant

DT&L2018 spotlight presentation: The Teaching Gestalt

The teaching gestalt presentation slides (PDF, 9MB)

This is my Spotlight Session from the 34th Distance Teaching & Learning Conference, held in Madison, Wisconsin, on August 8th, 2018. Appropriately enough, I did this online and at a distance, thanks to my ineptitude at dealing with the bureaucracy of immigration. Unfortunately, my audio died as we moved to the Q&A session so, if anyone who was there (or anyone else) has any questions or observations, do please post them here! Comments are moderated.

The talk was concerned with how online learning is fundamentally different from in-person learning, and what that means for how (or even whether) we teach, in the traditional formal sense of the word.

Teaching is always a gestalt process, an emergent consequence of the actions of many teachers, including most notably the learners themselves, which is always greater than (and notably different from) the sum of its parts. This deeply distributed process is often masked by the inevitable (thanks to the physics of traditional classrooms) dominance of an individual teacher in the process. Online, the mask falls off. Learners invariably have both far greater control and far more connection with the distributed gestalt. This is great, unless institutional teachers fight against it with rewards and punishments, in a pointless and counter-productive effort to sustain the level of control that is almost effortlessly attained by traditional in-person teachers – a level of control that is purely a consequence of solving problems caused by the needs of physical classrooms, not the needs of learners. I describe some of the ways that we deal with the inherent weaknesses of in-person teaching, especially those relating to autonomy and competence support, and observe how such pedagogical methods are a solution to problems caused by the contingent side effects of in-person teaching, not to learning in general.

The talk concludes with some broad characterization of what is different when teachers choose to let go of that control.  I observe that what might have been Leonardo da Vinci’s greatest creation was his effective learning process, without which none of the rest of his creations could have happened. I am hopeful that now, thanks to the connected world that we live in, we can all learn like Leonardo, if and only if teachers can learn to let go.

Scholarly publishing is broken. Here’s how to fix it

An article for Aeon by Jon Tennant on the heinous state of affairs that gives unscrupulous publishers profit margins that put Apple to shame while hiding publicly funded research from the public that pays for it. It is a shamefully broken system that stands in the way of human progress. It has to change.

The ground this (open access) article goes over is much the same as the ground many of us have been tilling for many years, but it’s well expressed, and good to see it aired in a non-academic (though intellectually vigorous) journal like Aeon. It winds up with a set of six recommendations for things that all academics can do to improve our lot, which all make sense to me:

  1. Sign, and commit to, the Declaration on Research Assessment, and demand fairer evaluation criteria independent of journal brands. This will reduce dependencies on commercial journals and their negative impact on research.
  2. Demand openness. Even in research fields such as global health, 60 per cent of researchers do not archive their research so it is publicly available, even when it is completely free and within journal policies to do so. We should demand accountability for openness to liberate this life-saving knowledge.
  3. Know your rights. Researchers can use the Scholarly Publishing and Academic Rights Coalition (SPARC) Author Addendum to retain rights to their research, instead of blindly giving it away to publishers. Regain control.
  4. Support libraries. Current library subscription contracts are protected from public view by ‘non-disclosure clauses’ that act to prevent any price transparency in a profoundly anti-competitive practice that creates market dysfunction. We should support libraries in renegotiating such contracts, and in some cases even provide support in cancelling them, so that they can reinvest funds in more sustainable publishing ventures.
  5. Help to build something better. On average, academics currently spend around $5,000 for each published article – to get a PDF and some extra sides. A range of different studies and working examples exist that show the true cost of publishing an article can be as low as $100 using cost-efficient funding schemes, community buy-in, and technologies that go a step further than PDF generation. We can do better.
  6. Use your imagination. What would you want the scholarly communication system to look like? What are all the wonderful features you would include? What can you do to help turn a vision into reality?


Address of the bookmark: https://aeon.co/ideas/scholarly-publishing-is-broken-heres-how-to-fix-it

Originally posted at: https://landing.athabascau.ca/bookmarks/view/3388817/scholarly-publishing-is-broken-here%E2%80%99s-how-to-fix-it

Mindfulness Meditation Impairs Task Motivation but Not Performance

Sadly published behind a paywall (but, happily, also available at SciHub) this is a fascinating sequence of studies from Hafenbrack & Vohs that, firstly, appears to demonstrate that mindfulness (meditative practice) actively reduces motivation to perform a wide range of cognitively taxing or repetitive tasks, then shows that (despite this, and contrary to what might be expected) the loss of motivation has little or no effect on performance. The studies seem well-designed and well integrated, involving a very wide range of participants across continents, a wide variety of activities, and lots of good meta-analysis to pull them together. I’ll talk later on about why there are nonetheless very good reasons to be cautious about wholeheartedly accepting the results but, even with my provisos, the fact that such an effect can be seen at all is really interesting. Hafenbrack & Vohs hypothesize (and provide evidence) that, because mindfulness tends to result in improvements in many other areas of cognition and performance, the overall effect is more or less neutral on actual task performance. The researchers partly explain this by citing other studies showing meditation-mediated improvements in empathy, reading comprehension, resilience to unpleasant images, resistance to distraction, negotiation effectiveness, and health indicators, though they only attempt to measure reduced levels of distraction in their own experiments. They conclude that the effects of mindfulness on performance are complex and nuanced, and that employers and organizers of meditation/mindfulness sessions should therefore look carefully into their timing in the context of the working day.

What I find especially interesting about this study are the suggested reasons that mindfulness might impair motivation, which provided the initial justification for the research:

  1. mindfulness tends to focus on valuing the ‘now’ rather than dwelling on the future.
  2. mindfulness aims to reduce arousal (though, interestingly, many forms of it used in the Western mainstream actually seem to stoke the ego).

I’m not totally convinced by (1), inasmuch as it seems to me that the researchers believe that motivation is about desiring (or wishing to avoid) a future state, which is only partly true, notably in the case of simple extrinsic motivation (especially when externally regulated, as in these studies). It is far less likely to be true in the case of intrinsic motivation. Indeed, high levels of intrinsic motivation tend to be very focused on the ‘now’ and, in many cases, can result in a state of flow that can be very closely akin to mindfulness. For me, for instance, playing music can (sometimes) be a highly meditative pursuit with very little future focus. The same can (sometimes) be true when I get into the flow of most things, from writing to sailing to playing with my grandson. It can even be true for at least a couple of the tasks the researchers used to test their theory, when actively chosen by people as fun things to do rather than given to them as part of a research study.

That mindfulness may reduce arousal, and so in turn reduce motivation, is more believable though, again, it depends very much upon context whether that affects task performance positively or negatively. Sticking with a music theme, some pieces require intense concentration, physical effort, and mental agility, especially when learning a technically demanding new piece so, though it is usually bad to be tense when attempting such things, excessive relaxation might not be too great either. However, other kinds of music require you only to be at one with the sound and the instrument, and being in a relaxed mental state is really good for that. It is not a coincidence that many religious rituals involve music-making – especially that involving repetitive rhythmic sounds, chants, and drones – because it can lift you to exactly that detached, calm, yet spiritually heightened state of mindfulness. I find that the most fun and rewarding musical activities tend to be those which combine both modes at once – things like counterpoint or blues, in which the patterns are relatively easy to learn but remain infinitely rich in their expression. This hints that the real world of motivation, and other mental states, is way more complex than experiments like this suggest. I use music as a fairly unequivocal example, but similar diversity lies even in mundane bureaucratic form-filling, and certainly in complex creative behaviours like teaching or research.

Methodological concerns

One central problem with both hypotheses on reasons for reduced motivation, and with the researchers’ discussion and conclusions, is that they mistakenly assume motivation to be one thing – a very behaviourist orientation that looks at simple effects and ignores their complex causes – when, in reality, it is many things, often all at once, and the kind and strength of motivation varies enormously from one context to the next, often on a minute-by-minute basis, typically changing as a direct result of performance on a given task as well as other extrinsic factors. This study almost completely conceals such diversity.

Another problem is that the seemingly innocuous term ‘participants’ subtly and all too easily shifts from its actual meaning (the averaged behaviours of a particular group of people) to ‘all people’ in the description of the results, the discussion, and the conclusion. It’s like saying ‘ripe bananas are yellow’ because (on average) if you examine any given square centimetre of a ripe banana in a batch, the chances are that it will mostly be yellow. This is despite the fact that virtually no bananas are wholly yellow, some are mainly red, a lot are partly green, and many are mainly black or brown. It bothers me that the consequent leap from ‘on average, people tend to be less motivated to perform researcher-imposed tasks after meditation’ to ‘meditation impairs task motivation’ is huge and unwarranted, especially in the absence of a truly plausible (or at least generalizable) model of why this might be so. In fairness, this is exactly the same form of flawed inductive thinking used in the vast majority of experimental studies in education, sociology, psychology, and related disciplines the world over. Knowing average tendencies can be extremely useful in all sorts of ways but to slip from ‘on average, X’ to ‘X’, as this and countless other studies do, is dangerous and counter-productive, especially when combined with a slip from ‘is’ to ‘ought’ when suggesting ways the research can be applied. This kind of experimental study is equally bad at discovering reasons for those averages because (unlike less fuzzy sciences) the range of possible inputs and outputs is vast, and highly interconnected: there’s irreducible complexity in the whole thing. Such studies can be at least partly saved by including rich qualitative information and analysis, but there’s none of that here. Smedslund offers a far subtler and more thorough critique of this kind of psychological experiment, which I highly recommend anyone engaged in such studies should read.

Another way of interpreting the results

I like this paper and, for all my concerns, have found much that is thought-provoking within it. However, the simplistic implied behaviourist model of motivation and the lack of qualitative information that would help to better interpret the results do raise more questions than they answer. It also strikes me that, rather than drawing conclusions about ways to change the behaviour of people in organizations, it would make far more sense, and have far more lasting value, to look at ways to change the tasks expected of them so that they are (ideally) better aligned with intrinsic motivation or (where that is difficult or impossible) are less externally regulated. Assuming at least a glimmer of truth in these findings (and there is more than a glimmer), I would hypothesize that, under such circumstances, mindfulness would be highly beneficial. Like so many things relating to human activity, it ain’t what you do, it’s the way (and the where, and the when, and the why) that you do it that matters most.

Address of the bookmark: https://www.sciencedirect.com/science/article/pii/S074959781630646X

Originally posted at: https://landing.athabascau.ca/bookmarks/view/3365691/mindfulness-meditation-impairs-task-motivation-but-not-performance

The ultimate insomnia cure: new GDPR legislation soothingly read by Peter Jefferson

The BBC’s Shipping Forecast is one of the great binding traditions of British culture that has been many a Brit’s lullaby since time immemorial (ie. long before I was born). Though I never once paid attention to its content in all the decades I heard it, eleven years after leaving the country I could still probably recite the majority of the 31 sea areas surrounding the British Isles from memory. 

For as long as I can recall, the gently soothing voice of the Shipping Forecast was Peter Jefferson (apparently he retired in 2009, after 40 years) who, in this magnificently somnolent rendering, immortalizes excerpts from the General Data Protection Regulation that has recently come into force in the EU. My eyelids start drooping about 30 seconds in.


Address of the bookmark: https://blog.calm.com/relax/once-upon-a-gdpr

Originally posted at: https://landing.athabascau.ca/bookmarks/view/3327075/the-ultimate-insomnia-cure-new-gdpr-legislation-soothingly-read-by-peter-jefferson

Black holes are simpler than forests and science has its limits

Mandelbrot set (Wikipedia, https://en.wikipedia.org/wiki/Mandelbrot_set)

Martin Rees (UK Astronomer Royal) takes on complexity and emergence. This is essentially a primer on why complex systems – as he says, accounting for 99% of what’s interesting about the world – are not susceptible to reductionist science despite being, at some level, reducible to physics. As he rightly puts it, “reductionism is true in a sense. But it’s seldom true in a useful sense.” Rees’s explanations are a bit clumsy in places – for instance, he confuses ‘complicated’ with ‘complex’ once or twice, which is a rookie mistake, and his example of the Mandelbrot Set as ‘incomprehensible’ is not convincing and rather misses the point about why emergent systems cannot be usefully explained by reductionism (it’s about different kinds of causality, not about complicated patterns) – but he generally provides a good introduction to the issues.

These are well-trodden themes that most complexity theorists have addressed in far more depth and detail, and that usually appear in the first chapter of any introductory book in the field, but it is good to see someone who, from his job title, might seem to be an archetypal reductive scientist (he’s an astrophysicist) challenging some of the basic tenets of his discipline.

Perhaps my favourite works on the subject are John Holland’s Signals and Boundaries, which is a brilliant, if incomplete, attempt to develop a rigorous theory to explain and describe complex adaptive systems, and Stuart Kauffman’s flawed but stunning Reinventing the Sacred, which (with very patchy success) attempts to bridge science and religious belief but that, in the process, brilliantly and repeatedly proves, from many different angles, the impossibility of reductive science explaining or predicting more than an infinitesimal fraction of what actually matters in the universe. Both books are very heavy reading, but very rewarding.

Address of the bookmark: https://aeon.co/ideas/black-holes-are-simpler-than-forests-and-science-has-its-limits

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2874665/black-holes-are-simpler-than-forests-and-science-has-its-limits

Amazon helps and teaches bomb makers

Amazon’s recommender algorithm works pretty well: if people start to gather together ingredients needed for making a thermite bomb, Amazon helpfully suggests other items that may be needed to make it, including hardware like ball bearings, switches, and battery cables. What a great teacher!

It is disturbing that this seems to imply that there are enough people ordering such things for the algorithm to recognize a pattern. However, it would seem remarkably dumb for a determined terrorist to leave such a (figuratively and literally) blazing trail behind them, so it is just as likely to be the result of a very slightly milder form of idiot, perhaps a few Trump voters playing in their backyards. It’s a bit worrying, though, that the ‘wisdom’ of the crowd might suggest uses of and improvements to some stupid kids’ already dangerous backyard experiments that could make them way more risky, and potentially deadly.

Building intelligent systems is not too hard, as long as the activity demanding intelligence can be isolated and kept within a limited context or problem domain. Computers can beat any human at Go, Chess, or Checkers. They can drive cars more safely and more efficiently than people (as long as there are not too many surprises or ethical dilemmas to overcome, and as long as no one tries deliberately to fool them). In conversation, as long as the human conversant keeps within a pre-specified realm of expertise, they can pass the Turing Test. They are even, remarkably, much better than humans at identifying, from a picture, whether someone is gay or not. But it is really hard to make them wise. This latest fracas is essentially a species of the same problem as that reported last week of Facebook offering adverts targeted at haters of Jews. It’s crowd-based intelligence, without the wisdom to discern the meaning and value of what the crowd (along with the algorithm) chooses. Crowds (more accurately, collectives) are never wise: they can be smart, they can be intelligent, they can be ignorant, they can be foolish, they can even (with a really smart algorithm to assist) be (or at least do) good; but they cannot be wise. Nor can AIs that use them.

Human wisdom is a result of growing up as a human being, with human needs, desires, and interests, in a human society, with all the complexity, purpose, meaning, and value that it entails. An AI that can even come close to that is at best decades away, and may never be possible, at least not at scale, because computers are not people: they will always be treated differently, and have different needs (there’s an interesting question to explore as to whether they can evolve a different kind of machine-oriented wisdom, but let’s not go there – SkyNet beckons!). We do need to be working on artificial wisdom, to complement artificial intelligence, but we are not even close yet. Right now, we need to be involving people in such things to a much greater extent: we need to build systems that informate, that enhance our capabilities as human beings, rather than that automate and diminish them. It might not be a bad idea, for instance, for Amazon’s algorithms to learn to report things like this to real human beings (though there are big risks of error, reinforcement of bias, and some fuzzy boundaries of acceptability that it is way too easy to cross) but it would definitely be a terrible idea for Amazon to preemptively automate prevention of such recommendations.

There are lessons here for those working in the field of learning analytics, especially those that are trying to take the results in order to automate the learning process, like Knewton and its kin. Learning, and that subset of learning that is addressed in the field of education in particular, is about living in a human society, integrating complex ideas, skills, values, and practices in a world full of other people, all of them unique and important. It’s not about learning to do, it’s about learning to be. Some parts of teaching can be automated, for sure, just as shopping for bomb parts can be automated. But those are not the parts that do the most good, and they should be part of a rich, social education, not of a closed, value-free system.

Address of the bookmark: http://www.alphr.com/politics/1007077/amazon-reviewing-algorithms-that-promoted-bomb-materials



Update: it turns out that the algorithm was basing its recommendations on things used by science teachers and people that like to make homemade fireworks, so this is nothing like as sinister as it at first seemed. Nonetheless, the point still stands. Collective stupidity is just as probable as collective intelligence, possibly more so, and wisdom can never be expected from an algorithm, no matter how sophisticated.

Analytic thinking undermines religious belief while intelligence undermines social conservatism, study suggests

‘Suggests’ is the operative word in the title here. The title is a sensationalist interpretation of an inconclusive and careful study, and I don’t think this is what the authors of the study mean to say at all. Indeed, they express caution in numerous ways, noting small effect sizes, lack of proof of causality, large overlaps between groups, and many other reasons for extremely critical interpretation of the evidence:

“We would like to warn readers to resist the temptation to draw conclusions that suit their ideological worldviews,” Saribay told PsyPost. “One must not think in terms of profiles or categories of people and also not draw simple causal conclusions as our data do not speak to causality. Instead, it’s better to focus on how certain ideological tendencies may serve psychological needs, such as the need to simplify the world and conserve cognitive energy.”

This is suitably cautious and very much at odds with the title of the PsyPost article.

The study itself finds some confirmatory evidence that, in the US (and only in the US):

  • Religion may be embedded more in Type 1 intuitions relative to politics.
  • Processing liberal political arguments may require cognitive ability.
  • Religious belief should be predicted uniquely by analytic cognitive style.
  • Conservatism should be uniquely predicted by cognitive ability.

It is important to note, however, that ‘prediction’ in this instance has a very precise meaning, implying slightly increased odds of correlation between these factors, not that there is a causal connection one way or the other. The study simply adds a little more weight to an already fairly substantial body of evidence that cognitively challenged people, especially those more inclined to intuition than to reason (the two are statistically correlated), are somewhat more likely to be drawn both to religion and to right wing politics. Much as I would like it to imply the inverse – that intelligence and rationality are a cure for religion and right wing beliefs – there is absolutely nothing in this research to suggest that.

Part of the motivation for the study is the researchers’ observation of the growing antagonism to intelligence, expertise, evidence, and truth that is revealed in Trump’s victory, Brexit, ISIL, man-made climate change denial, and so on. While such evils are no doubt fuelled and sustained by (not to put too fine a point on it) stupid people in search of simple solutions to complex problems, it would be foolish (stupid, even) and highly inaccurate to suggest that all (or even a majority) of those exhibiting such attitudes and beliefs are stupid, or driven by intuition rather than reason, or both. As the study’s authors rightly observe, the value of this study is its contribution to understanding some of the complexity of the problem and should not be used to extrapolate exactly the same kind of simplified caricatures that cause it in the first place:

“…a more balanced understanding can only be reached via continued empirical research. Human beings may sometimes benefit from cognitive simplification of a complex and at times scary world of constant change and uncertainty. It does seem that certain aspects of religion and conservative ideology serve to deal with this, in slightly different ways. This is the direction that evidence points to thus far. However, researchers of course must resist this very need to simplify the world beyond a certain level.”

The original study can be found at http://www.sciencedirect.com/science/article/pii/S019188691730226X

Address of the bookmark: http://www.psypost.org/2017/09/analytic-thinking-undermines-religious-belief-intelligence-undermines-social-conservatism-study-suggests-49655


E-Learn 2017, Vancouver, 17-20 October – last day of cheaper registration rates

Today is the final day to get the discount rate if you are planning on coming to E-Learn in Vancouver this year (US$455 today vs US$495 from tomorrow onwards).

It promises to be quite a big event this year, with an estimated 900+ concurrent sessions, 100+ posters, and three lunchtime SIGs (including a new one on sustainable learning technologies), not to mention some fine keynotes and networking events.  Annoyingly, it clashes with ICDE in Toronto this year but, IMHO, E-Learn is a better conference for those working and researching in online education, and it’s a much better location. I may be a little biased, being both a resident of Vancouver and local co-chair of the conference, but there are some very good reasons I chose to be both those things!

I have attended almost all E-Learn (and its predecessor, WebNet) conferences for nearly 20 years now because it tends to attract some great people, provides an excellently diverse and blended mix of technical and pedagogical perspectives, gives plentiful chances to engage with both early-career researchers and those at the top of the field, usually picks great locations, is well-organized, and focuses solely on adult online learning (mainly higher education but also some from industry, healthcare, government, etc). The acceptance rate (1-in-3 to 1-in-4) is high enough to attract diverse papers that can be off the wall and interesting (especially from younger researchers who don’t know what’s impossible yet so sometimes achieve it), but low enough to exclude utter rubbish. If that kind of thing interests you, this is the conference for you!

I hope to see you there.

Address of the bookmark: https://www.aace.org/conf/elearn/registration/
