On the Misappropriation of Spatial Metaphors in Online Learning | OTESSA Journal

This is a link to my latest paper, published in the closing days of 2022. The paper started as a couple of blog posts that I turned into an article; it nearly made an appearance in the Distance Education in China journal before a last-minute regime change in the editorial staff led to it being dropped, and it was then picked up by the OTESSA Journal after I shared it online, so you might have seen some of it before. My thanks to all the many editors, reviewers (all of whom gave excellent suggestions and feedback that I hope I’ve addressed in the final version), and online commentators who have helped to make it a better paper. Though it took a while, I have really enjoyed the openness of the process, which has been quite different from any that I’ve followed in the past.

The paper begins with an exploration of the many ways that environments are both shaped by and shape how learning happens, both online and in person. The bulk of the paper then presents an argument to stop using the word “environment” to describe online systems for learning. Partly this is because online “environments” are actually parts of the learner’s environment, rather than vice versa. Mainly, it is because of the baggage that comes with the term, which leads us to (poorly) replicate solutions to problems that don’t exist online, in the process creating new problems that we fail to solve adequately because our ways of thinking and acting are so stuck in the metaphors on which they are based. My solution is not particularly original, but it bears repeating. Essentially, as sketched in code after the list below, it is to disaggregate the services needed to support learning so that:

  • they can be assembled into learners’ environments (their actual environments) more easily;
  • they can be adapted and evolve as needed; and, ultimately,
  • online learning institutions can be reinvented without all the vast numbers of counter-technologies and path dependencies inherited from their in-person counterparts that currently weigh them down.
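To make that disaggregation a little more concrete, here is a minimal, purely hypothetical sketch. The service and class names are my own invention for illustration, not anything specified in the paper; the point is only the shape: learning services as composable pieces that a learner assembles into their own environment, rather than a monolithic “environment” that contains the learner.

    from typing import Iterable, List, Protocol

    class LearningService(Protocol):
        """Any independently hosted capability a learner might draw on."""
        name: str
        def describe(self) -> str: ...

    class DiscussionService:
        name = "discussion"
        def describe(self) -> str:
            return "threaded conversation, hosted wherever the learner or a community chooses"

    class PortfolioService:
        name = "portfolio"
        def describe(self) -> str:
            return "a learner-owned record of work, independent of any one institution"

    class PersonalEnvironment:
        """The learner's actual environment: whatever services they choose to assemble."""
        def __init__(self) -> None:
            self._services: List[LearningService] = []

        def add(self, service: LearningService) -> None:
            self._services.append(service)

        def swap(self, name: str, replacement: LearningService) -> None:
            # Individual services can be adapted or replaced without rebuilding the whole.
            self._services = [replacement if s.name == name else s
                              for s in self._services]

        def inventory(self) -> Iterable[str]:
            return (s.describe() for s in self._services)

    # The learner, not the institution, decides what belongs in the environment.
    mine = PersonalEnvironment()
    mine.add(DiscussionService())
    mine.add(PortfolioService())
    for description in mine.inventory():
        print(description)

In this toy model, each service stands alone and can evolve or be swapped out, and the boundary of the “environment” is drawn by the learner who assembles the pieces, not by any single application.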

My own views have shifted a little since writing the paper. I stick by my beliefs that 1) it is a mistake to think of online systems as generally analogous to the physical spaces that we inhabit, and 2) a single application, or suite of applications, should not be seen as an environment as such (at most, as in some uses of VR, it might be seen as a simulation of one). However, there are (shifting) boundaries that can be placed around the systems that an organization and/or an individual uses, for which the metaphor may be useful: at the very least it can describe the extent to which we are inside or outside such a boundary, and it might frame the various kinds of distance that may exist within it and from it. I’m currently working on a paper that expands on this idea a bit more.

Abstract

In online educational systems, teachers often replicate pedagogical methods, and online institutions replicate systems and structures used by their in-person counterparts, the only purpose of which was to solve problems created by having to teach in a physical environment. Likewise, virtual learning environments often attempt to replicate features of their physical counterparts, thereby weakly replicating in software the problems that in-person teachers had to solve. This has contributed to a vicious circle of problem creation and problem solving that benefits no one. In this paper I argue that the term ‘environment’ is a dangerously misleading metaphor for the online systems we build to support learning, one that leads to poor pedagogical choices and weak digital solutions. I propose an alternative metaphor of infrastructure and services that can enable more flexible, learner-driven, and digitally native ways of designing systems (including the tools, pedagogies, and structures) to support learning.

Full citation

Dron, J. (2022). On the Misappropriation of Spatial Metaphors in Online Learning. The Open/Technology in Education, Society, and Scholarship Association Journal, 2(2), 1–15. https://doi.org/10.18357/otessaj.2022.2.2.32

Originally posted at: https://landing.athabascau.ca/bookmarks/view/16550401/my-latest-paper-on-the-misappropriation-of-spatial-metaphors-in-online-learning

Tim Berners-Lee: we must regulate tech firms to prevent ‘weaponised’ web

TBL is rightfully indignant and concerned about the fact that “what was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms.” The Web, according to Berners-Lee, is at great risk of degenerating into a few big versions of CompuServe or AOL sucking up most of the bandwidth of the Internet, and most of the attention of its inhabitants. In an open letter, he outlines the dangers of putting so much power into hands that either see it as a burden or actively exploit it for evil.

I really really hate Facebook more than most, because it aggressively seeks to destroy all that is good about the Web, and it is ruthlessly efficient at doing so, regardless of the human costs. Yes, let’s kill that in any way that we can, because it is actually and actively evil, and shows no sign of getting any nicer. I am somewhat less concerned that Google gets 87% of all online searches (notwithstanding the very real dangers of a single set of algorithms shaping what we find), because most of Google’s goals are well aligned with those of the Web. The more openly people share and link, the better it gets, and the more money Google makes. It is very much in Google’s interest to support an open, highly distributed, highly connected Web, and the company is as keen as everyone else to avoid the dangers of falsehoods, bias, and the spread of hatred (which are among the very things that Facebook feeds upon), and, thanks to its strong market position and careful hiring practices, it is more capable of doing so than pretty much anyone else. Google rightly hates Facebook (and others of its ilk) not just because it is a competitor, but because it removes things from the open Web, probably spreads lies more easily than truths, and so reduces Google’s value.

I am somewhat bothered that the top 100 sites (according to Wikipedia, based on Alexa and SimilarWeb results) probably get far more traffic than the next few thousand put together, and that the long tail pretty much flattens to approximately zero beyond that. However, that’s an inevitable consequence of the design of the Web (it’s a scale-free network subject to power laws), and ‘approximately zero’ may actually translate to hundreds of thousands or even millions of people, so it’s not quite the skewed mess that it seems. It is, as TBL observes, very disturbing that big companies with big pockets purchase potential competitors and stifle innovation, and I agree that (like all monopolies) they should be regulated, but there’s no way they are ever going to get everything or everyone, at least without the help of politicians and evil legislation, because it’s a really long tail.
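A toy calculation illustrates why ‘approximately zero’ on a long-tail chart can still mean a lot of people. The totals, site count, and Zipf exponent below are assumptions chosen purely for illustration, not real measurements:

    # Illustrative only: a toy Zipf-like traffic model, not real data.
    T = 1_000_000_000_000   # assumed total visits across the whole Web in some period
    s = 1.0                 # assumed Zipf exponent; real estimates vary by study
    N = 10_000_000          # assumed number of ranked sites

    norm = sum(1 / r ** s for r in range(1, N + 1))

    def visits(rank: int) -> float:
        """Estimated visits for a site at a given rank under this toy model."""
        return T * (1 / rank ** s) / norm

    for rank in (1, 100, 100_000):
        print(f"rank {rank:>7,}: ~{visits(rank):,.0f} visits")

With these made-up numbers, a site at rank 100,000, which would be indistinguishable from zero on a plot dominated by the top ten, still receives on the order of several hundred thousand visits.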

It is also very interesting that even the top 10 – according to just about all the systems that measure such things – includes the unequivocally admirable and open Wikipedia itself, and also Reddit, which, though now straying from its fully open model, remains excellently social and open. In different ways, both give more than they take.

It is also worth noting that there are many different ways to calculate rank. Moz.com (based on the Mozscape web index of 31 billion domains and 165 billion pages) has a very different view of things, for instance, in which Facebook doesn’t even make it onto the domains listing, and is way below WordPress and several others in the popular pages list, which is a direct result of its being a closed and greedy system. Quantcast’s perspective is somewhat different again, albeit focused only on US sites, which are a small but significant portion of the whole.

Most significantly, and to reiterate the point because it is worth making, the long tail is very long indeed. Regardless of the dangers of a handful of gigantic platforms casting their ugly shadows over the landscape, I am extremely heartened by the fact that, now, over 30% of all websites run on WordPress, which is both open source and very close to the distributed ideal that TBL espouses, allowing individuals and small communities to stake their claims, make a space, and link (profusely) with one another, without lock-in, central control, or inhibition of any kind. That 30% puts any one of the big monoliths, including Facebook, very far into the shade. And, though WordPress’s nearest competitor (Joomla, also open source) accounts for a ‘mere’ 3% of all websites, there are hundreds if not thousands of similar systems, not to mention a huge number of websites (50% of the total, according to W3Techs) that people still roll for themselves.

Yes, the greedy monoliths are extremely dangerous and should, where possible, be avoided, and it is certainly worth looking into ways of regulating their activities, nationally and internationally, as many governments are already doing and should continue to do. We must ever be vigilant. But the Web continues to grow and to diversify regardless of their pernicious influence, because it is far bigger than all of them put together.

Address of the bookmark: https://www.theguardian.com/technology/2018/mar/11/tim-berners-lee-tech-companies-regulations

Originally posted at: https://landing.athabascau.ca/bookmarks/view/3105535/tim-berners-lee-we-must-regulate-tech-firms-to-prevent-weaponised-web