These are the slides from my keynote at the University of Ottawa’s “Scaffolding a Transformative Transition to Distance and Online Learning” symposium today. In the presentation I discussed why distance learning really is different from in-person learning, focusing primarily on the fact that the two are the motivational inverse of one another. In-person teaching methods evolved in response to the particular constraints and boundaries imposed by physics, and consist of many inventions – pedagogical and otherwise – that are counter-technologies designed to cope with the consequences of teaching in a classroom, many of which are not altogether wise. Many of those constraints do not exist online, and yet we continue to do very similar things, especially those that control and dictate what students should do, as well as when and how they should do it. This makes no sense, and is actually antagonistic to the natural flow of online learning. I offered a few simple ideas and prompts for thinking about how to go more with the flow.
The presentation was only 20 minutes of a lively and inspiring hour-long session, which was fantastic fun and provided me with many interesting questions and a chance to expand further on the ideas.
It’s all solid stuff that supports much of what I and many others have written about the value of belongingness and social interaction in learning but, like much research in fields such as psychology, education, sociology, and so on, it makes some seemingly innocuous but fundamentally wrong assertions of fact. For instance:
“Those who were instructed to strike up a conversation with someone new on public transport or with their cab driver reported a more positive commute experience than those instructed to sit in silence.”
What, all of them? That seems either unbelievably improbable, or the result of a flawed methodology, or a sign of far too small a sample size. The paper itself is inaccessibly paywalled so I don’t know for sure, but I suspect this is actually just a sloppy description of the findings. It is not the result of bad reporting in the Quartz article, though: it is precisely what the abstract of the paper itself claims. The researchers make several similar claims, such as “Those who were instructed to strike up a hypothetical conversation with a stranger said they expected a negative experience as opposed to just sitting alone.” Again – all of them? If that were true, no one would ever talk to strangers (which anyone who has ever stood in a line-up in Canada knows to be not just false but Trumpishly false), so this is either a very atypical group or a very misleading statement about group members’ behaviours. The findings are likely, on average, correct for the groups studied, but that is not the way they are written.
The article is filled with similarly dubious quotes from distinguished researchers and, worse, pronouncements about what we should do as a result. Often the error is subtly couched in (accurate but misleadingly ambiguous) phrasing like “The group that engaged in friendly small talk performed better in the tests.” It is all too easy to carelessly read that as ‘all of the individuals in the group performed better than all of those in the other groups’, rather than as ‘on average, the collective group entity performed better than another collective group entity’, which is what was actually meant (and which is far less interesting). From there it is an easy – but dangerously wrong – step to claim that ‘if you engage in small talk then you will experience cognitive gains.’ It is natural to want to extrapolate a general law from averaged behaviours, and in some domains (where experimental anomalies can be compellingly explained) it makes sense, but it is wrong in most cases, especially when applied to complex systems like, say, anything involving the behaviour of people.
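The gap between the group-level claim and the individual-level reality is easy to demonstrate with a toy simulation. The numbers below are entirely made up for illustration (hypothetical test scores for two invented groups of 50 people); the point is only that one group’s average can comfortably exceed the other’s while plenty of individuals in the “better” group still score below the other group’s mean:

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical test scores: the "small talk" group is drawn from a
# distribution with a slightly higher mean, but the same spread.
small_talk = [random.gauss(72, 10) for _ in range(50)]
silence = [random.gauss(68, 10) for _ in range(50)]

# The group-level claim: one *average* exceeds the other.
print(f"small-talk group mean: {mean(small_talk):.1f}")
print(f"silence group mean:    {mean(silence):.1f}")

# The individual-level reality: many people in the "better" group
# nonetheless score below the other group's average.
below = sum(1 for score in small_talk if score < mean(silence))
print(f"{below} of 50 small-talkers scored below the silent group's mean")
```

With overlapping distributions like these, the count on the last line is never zero: the averaged finding tells you nothing certain about any particular person.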
It’s a problem because, like most in my profession, I regularly use such findings to guide my own teaching. On average, results are likely (but far from certain) to be better than if I did not use them, but definitely not for everyone, and certainly not every time. Students do tend to benefit from engagement with other students, sure. It’s a fair heuristic, but there are exceptions, at least sometimes. And the exceptions aren’t just a statistical anomaly. These are real people we are talking about, not average people. When I do teaching well – nothing like enough of the time – I try to make it possible for those who aren’t average to do their own thing without penalty. I try to be aware of differences and cater for them. I try to enable those who wish it to personalize their own learning. I do this because I have never in my entire life knowingly met an average person.
Unfortunately, our educational systems really don’t help me in my mission because they are pretty much geared to cater for someone who probably doesn’t exist. That said, the good news is that there is a general trend towards personalized learning that figures largely in most institutional plans. The bad news is that (as Alfie Kohn brilliantly observes) what is normally meant by ‘personalized’ in such plans is not its traditional definition at all, but instead ‘learning that is customized (normally by machines) for students in order that they should more effectively meet our requirements.’ In case we might have forgotten, personalization is something done by people, not to people.
Further reading: Todd Rose’s ‘The End of Average’ is a great primer on how to avoid the average-to-the-particular trap and many other errors, including why learning styles, personality types, and a lot of other things many people believe to be true are utterly ungrounded, along with some really interesting discussion of how to improve our educational systems (amongst other things). I was gripped from start to finish and keep referring back to it a year or two on.