Here are my slides from my presentation at the Innovate Learning Summit yesterday. It’s not world-shattering stuff – just a brutal attack on proctored, unseen written exams (PUWEs, pronounced ‘pooies’), followed by a description of the rationale, process, benefits, and unwanted consequences behind the particular portfolio-based approach to assessment employed in most of my teaching. It includes a set of constraints that I think are important to consider in any assessment process, grouped into pedagogical, motivational, and housekeeping (mainly relating to credentials) clusters. I list 13 benefits of my approach relating to each of those clusters, which I think make a pretty resounding case for using it instead of traditional assignments and tests. However, I also discuss outstanding issues, most of which relate to the external context and expectations of students or the institution, but a couple of which are fairly fundamental flaws (notably the extreme importance of prompt, caring, helpful instructor/tutor engagement in making it all work, which can be highly problematic when it doesn’t happen) that I am still struggling with.
The authors of a recent paywalled article in MIS Quarterly summarize their findings in another restrictive and normally paywalled site, the Washington Post. At least the latter gives some access – I was able to read it without forking out $15, and I hope you can too. Unfortunately I don’t have access to the original paper (yet) but I’d really like to read it.
The authors examined the web browsing history of nearly 200,000 US adults, and looked at differences in diversity and polarization related to use of Reddit, Twitter, and Facebook, correlating it with political leanings. What they found will surprise few who have been following such issues. The headliner is that Facebook is over five times more polarizing for US conservatives than for liberals, driving them to far more partisan news sites, far more of the time. Interestingly, though, those using Reddit visited a far more diverse range of news sites than expected, and tended towards more moderate sites than usual: in fact, the sites were a claimed 50% more moderate than what they would typically read. Furthermore, and just as interesting to me, Twitter seemed to have little effect either way.
The authors blame this on the algorithms – that Facebook preferentially shows posts that drive engagement (so polarizing issues naturally bubble to the top), while Reddit relies on votes for its emphasis, so presenting a more balanced view. In the Washington Post article they have little to say about Twitter, apart from noting that it wants to be more transparent about its algorithms (though nothing like as transparent as Reddit). But I don’t think the algorithms are the whole story, and I think I know why Twitter showed no net effect.
Algorithms vs structure
You could certainly look at it from an algorithmic perspective. There is no doubt that different algorithms do lead to different behaviours. Facebook and Twitter both make use of hidden algorithms to filter, sort, and alter the emphasis of posts. In Twitter’s case this is a relatively recent invention. It started out with a simpler, time-based sort order, and it has become a much less worthwhile site since it began to emphasize posts it thinks individuals want to see. I don’t like it, and I am very glad to hear that it intends to revert to providing greater control to its users (what Judy Kay calls scrutable adaptation). Reddit’s algorithms, on the other hand, are entirely open and scrutable, as well as being intuitive and (relatively) simple. It is important to remember that none of these sites are entirely driven by computer algorithms, though: all have rules, conditions of use, and plentiful supplies of humans to enforce them. Reddit has human moderators but, unlike the armies of faceless paid moderators employed by Twitter and Facebook to implement their rules, you can see who the moderators are and, if you put in the effort and feel so inclined, you could become one yourself.
However, though algorithms do play a significant role, I think that the problem is far more structural, resulting from the social forms each system nurtures. These findings accord very neatly with the distinction that Terry Anderson and I have made between nets (social systems formed from connections between individuals) and sets (social systems that form around interests or shared attributes of their users). Facebook is the archetypal exemplar of the network social form; Reddit is classically set-oriented (as the authors put it ‘topic based’); Twitter is a balanced combination of the two, so the effects of one cancel out the effects of the other (on average). It’s all shades of grey, of course – none are fully one or the other (and all also support group social forms), and none exist in isolation, but these are the dominant forms in each system.
Networks – more specifically, scale-free networks – have a natural tendency towards the Matthew Effect: the rich get richer while the poor get poorer. You can see this in everything from academic paper citations to the spread of diseases, and it is the essence of any human social network. Their behaviours are enormously dependent on highly connected influencers. They are thus naturally inclined to polarize, and it would happen without the algorithms. The algorithms might magnify or diminish the effects but they are not going to stop them from happening. To make things worse, when such networks are taken online, it is not just current influence that matters, because posts are persistent, and continue to have an influence (potentially) indefinitely, whether the effect is good or bad (though seldom if it is somewhere in between).
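The rich-get-richer dynamic is easy to demonstrate. Here is a minimal sketch in Python (my own toy illustration, nothing to do with the paper) of preferential attachment, the mechanism that produces scale-free networks: each newcomer links to an existing node with probability proportional to that node’s current degree, and a handful of hubs inevitably emerge without any algorithm curating anything:

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, seed=42):
    """Grow a network in which each new node links to an existing node
    with probability proportional to that node's current degree."""
    rng = random.Random(seed)
    # 'stubs' lists each node once per edge endpoint, so a uniform
    # draw from it is automatically degree-proportional.
    stubs = [0, 1]  # start with two connected nodes
    for newcomer in range(2, n_nodes):
        target = rng.choice(stubs)
        stubs.extend([newcomer, target])
    return Counter(stubs)  # maps node -> degree

degrees = preferential_attachment(5000)
mean = sum(degrees.values()) / len(degrees)
print(f"mean degree: {mean:.1f}, max degree: {max(degrees.values())}")
```

Run it and the mean degree hovers around 2, while the best-connected node ends up with a degree many times that: the Matthew Effect in a dozen lines.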
There are plenty of sets that are also highly partisan. However, they are quite self-contained and are thus containable, either because you simply don’t bother to join them or because they can easily be eliminated: Reddit, for instance, recently removed r/the_donald, an extreme right-wing subreddit for particularly rabid supporters of Trump, for its overwhelmingly violent and hateful content. Also, on a site such as Reddit, there are so many other interesting subreddits that even the hateful stuff can get a bit lost in the torrent of other news (have you seen the number of subreddits devoted to cats? Wow). And, to a large extent, a set-based system has a natural tendency to be more democratic, and to tend towards moderate views. Reddit’s collective tools – karma, votes, and various kinds of tagging – allow the majority (within a given subreddit) to have a say in shaping what bubbles to the top whereas, in a network, the clusters that form around influencers inevitably channel a more idiosyncratic, biased perspective. Sets are intentional, nets are emergent, regardless of algorithms, and there are patterns to that emergence that will occur whether or not they are further massaged by algorithms. Sets have their own intractable issues, of course: flaming, griefing, trolling, sock-puppeting and many more big concerns are far greater in set-based systems, where the relatively impersonal and often anonymous space tends to suck the worst of humanity out of the woodwork.
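By way of contrast with the influencer-driven dynamics of networks, consider the ‘hot’ ranking formula from Reddit’s formerly open-sourced codebase (reproduced here from memory, so treat the constants as approximate). The vote margin counts only logarithmically while recency counts linearly, so the whole community’s votes, not a single well-connected poster, decide what bubbles to the top, and nothing dominates for long:

```python
from datetime import datetime, timezone
from math import log10

# Reddit's reference epoch (from its once-public codebase)
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(ups, downs, posted):
    """Hot score: logarithmic in vote margin, linear in recency.
    The 10th net vote matters as much as votes 100 through 1000."""
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (posted - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)
```

Under this scoring, a fresh post with 100 net votes outranks a day-old post with 1,000: the day costs about 86,400/45,000 ≈ 1.9 points, more than the single point that an extra order of magnitude of votes buys. Vast popularity buys very little staying power.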
I would really like to see the researchers’ results for Twitter. I hypothesize that the reason for its apparent lack of effect is that the set-based features (that depolarize) counterbalance the net-based features (that polarize) so the overall effect is null, but that’s not to say that it has no effect: far from it. People are going to be seeing very different things than they would if they did not use Twitter – both more polarized and more moderate, but (presumably) a bit less in between the two. That’s potentially very interesting, especially as the nuances might be quite varied.
Are networks necessarily polarizing?
Are all online social networking systems evil? No. I think the problem emerges mainly when a system is an undifferentiated, large-scale, general-purpose social networking system, especially when it uses algorithmic means to massage what members see. There are not many of those (well, not any more). There are, however, very many vertical social networks, or niche networks that, though often displaying the same kinds of polarization problem on a smaller scale, are far less problematic because they start with a set of the people who share attributes or interests that draw them to the sites. People are on Facebook because other people are on Facebook (a simple example of Metcalfe’s Law). People on (say) ResearchGate are there because they are academics and researchers – they go elsewhere to support the many other facets of their social lives. This means that, for the most part, niche networks are only part of a much larger environment that consists of many such sets, rather than trying to be everything to everyone. Some are even deliberately focused on kindness and mutual support.
Could Facebook shift to a more set-oriented perspective, or at least develop more distinct and separate niches? I doubt it very much. The whole point of Facebook is and always has been to get more people spending more time on the site, and everything it does is focused on that one goal, regardless of consequences. It sucks, in every way, pulling people and content from other systems, giving nothing back, and it thrives on bias. In fact, it is not impossible that it deliberately nurtures the right-wing bias it naturally promotes, because it wishes to avoid being regulated. Without the polarization that drives engagement, it would lose money and users hand over fist, and there are bigger, more established incumbents than Facebook in the set space (YouTube, at least). Could it adjust its algorithms to reduce the bias? Yes, but it would be commercial suicide. Facebook is evil and will remain so because its business model is evil. For more reasons than I can count, I hope it dies.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/6960862/echoes-and-polarization-in-facebook-reddit-and-twitter-its-not-just-about-the-algorithms
I’ve been thinking for some time that I need to investigate Chromebooks – at least, ever since Chrome OS added the means to run Android and Linux apps alongside Chrome web apps. I decided to get one recently because I was going on a camping trip during which I’d be required to do some work, and the (ridiculously many) machines I already had were all some combination of too limited, too unreliable, too fragile, too heavy, too power-hungry, too buggy, or too expensive to risk in a muddy campsite. A Chromebook seemed like a good compromise. I wanted one that was fairly cheap, had a good battery life, was tough, could be used as a tablet, and that was not too storage-limited, but otherwise I wasn’t too fussy. One of the nice things about Chromebooks is that, notwithstanding differences in hardware, they are all pretty much the same.
After a bit of investigation, I noticed that an Asus C234 Flip with an 11.6″ screen was available at BestBuy for about $400, which seemed reasonable enough, based on the advertised specs, and even more reasonable when they lopped $60 off the price for Labour Day. Very unusually, though, the specs on the site were literally all that I had to go on. Though there are lots of Flip models, the C234 is completely unknown, apparently even to Asus (at least on its websites), let alone to reviewers on the web, which is why I am writing this! There’s no manual with it, not even on the machine itself, just a generic leaflet. Following the QR code on the base of the machine leads to a generic not-found page on the Asus site. Because it looked identical to the better-known Flip C214, I thought BestBuy must have made a labelling mistake, but the model number is clearly printed in two places on the base. Despite the label it is, in fact, as I had guessed and eventually confirmed by circuitous means, identified by Asus themselves as a Flip C214MA, albeit with 64GB of storage rather than the more common 32GB and a very slightly upgraded Intel Celeron N4020 CPU instead of an N4000. This model lacks the option of a stylus that is available for many C214 models (pity – that seemed very nice). It was not quite the cheapest Chromebook to fit the bill, but I trust Asus a lot and have never regretted buying one of their machines over 20 years or more of doing so quite frequently. They really know how to design and build a great computer, and they don’t make stupid compromises even on their cheapest machines, be they PCs, tablets, phones, or netbooks. Great company in a sea of dross.
The C234 comes with only 4GB RAM, which means it can get decidedly sluggish when running more than a handful of small apps, especially when some of them are running under Linux, but it is adequate for simple requirements like word processing, light photo editing, audio recording, web browsing, email, webinars, etc: just the use cases I had in mind, in fact. The 64GB of storage is far less than I’d prefer but, I calculated, should be fine for storing apps. I assumed (wrongly) that any data I’d need locally could be kept on the 256GB SDXC card that I bought to go with it so I was – foolishly – not too concerned. It turns out that Android apps running under ChromeOS that can save data to the SD card are few and far between, and ChromeOS itself is barely aware of the possibility although, of course, most apps can read files from just about anywhere so it is not useless. Unfortunately, the apps that do not support it include most video streaming services and Scribd (which is my main source of ebooks, magazines, and audiobooks) – in other words, the ones that actually eat up most space. The physical SD slot is neat – very easy to insert and difficult (but not too difficult) to remove, so it is not going to pop out unexpectedly.
The computer has two full-spec USB-C ports that can be used for charging the device (45W PD, and not a drop less), as well as for video, external storage, and all the usual USB goodness. It has one USB-A 3.0 socket, and a 1/8″ combo mic/headphone socket that can take cellphone headsets or dedicated speakers/microphones. The wifi and bluetooth are both mainstream and modern-ish: adequate for everything I currently own, but maybe not for everything I might buy next year. There is a plastic tab where a stylus is normally stored but, buyer beware, if the detailed model number doesn’t end in ‘S’ then it does not and cannot support a stylus: no upgrade path is available, as far as I can tell. Wifi reception is very good (better than my Macbook Pro), but there is no WiFi6. There’s no cellular modem, which is a pity, but I have a separate device to handle that. It does have a Kensington lock slot, which I guess reflects how it might be used in some schools where students have to share machines. Going back to the days when I used to manage university computer labs, I would have really liked these machines: they are very manageable. A Kensington lock isn’t going to stop a skilled thief for more than a couple of seconds but, as part of a security management strategy, they fit well.
The battery life is very good. It can easily manage 11-12 hours between charges from its 50WH battery, and could almost certainly do at least a couple more hours if you were not stretching its capabilities or using the screen on full brightness (I’m lazy and my eyesight is getting worse, so I tend to do both). It charges pretty quickly – I seldom run it down completely, so the longest I’ve needed to keep it plugged in after it dropped below 20% has been a couple of hours. It uncomplainingly charges from any sufficiently powerful USB-C charger.
As a laptop the Flip feels light in the hand (it weighs in at a little over a kilogram) but, as a tablet, it is pretty heavy and unwieldy and the keyboard cannot be detached. This is a fair compromise. Most of the time I use it as a laptop so I’d rather have a decent keyboard and a battery that lasts, but it is not something you’d want to hold for too long in the kind of orientations you might with an iPad or e-reader. Its 360 degree screen can fold to any intermediate angle so it doesn’t need a separate stand if you want to perch it on something, which is handy in a tent: while camping, I used it in both (appropriately) tented orientation and wrapped over a big tent pocket so that it was held in place by its own keyboard.
Video and audio
The touch screen is OK. At 1366×768 resolution and with a meagre 162 pixels per inch it is not even HD, let alone a retina display. It is perfectly adequate for my poor eyesight, though: fairly bright, acceptable but not great viewing angles, very sharp, and not glossy (I hate glossy screens). I’d much rather have longer battery life than a stunning display so this is fine for me. Viewing straight-on, I can still read what’s on the screen in bright sunshine and, though it lacks a sensor to auto-adjust the brightness, it does have an automatic night-time mode (that reddens and dims the display) that can be configured to kick in at sunset, and there are keyboard keys to adjust brightness. The generic Intel integrated GPU chip works, but that’s all I can say of it. I’d certainly not recommend it for playing graphics-intensive games or using Photoshop, and don’t even think about VR or compiling big programs because it ain’t going to happen.
The speakers, though, are ridiculously quiet: even pumped up to full volume, a little rain on the tent made them inaudible, and they are quite tinny. I’m guessing that this may have a bit to do with its target audience of schoolkids – a lack of volume might be a good thing in a classroom. The speakers are down-facing, so the sound does benefit from the machine sitting on a table or desk, but not a lot. The headphone volume is fine and it plays nicely with bluetooth speakers. It has a surprisingly large array of 5 microphones, scattered quite widely, that do a pretty good job of echo cancellation and noise reduction, providing surprisingly good sound quality (though not exactly a Blue Yeti).
It has two cameras, one 5K device conventionally placed above the screen when used in laptop mode, the other on the same surface as the keyboard, in the bottom right corner when typing, which is weird until you remember it can be used in tablet mode, when it becomes a rear-facing camera. Both cameras are very poor and the rear facing one is appalling (not even 1K resolution). They do the job for video conferencing, but not much else. That’s fine by me: I seldom need to take photos with my notebook/tablet and, if I want better quality, it handles a Logitech webcam very happily.
The keyboard is a touch smaller than average, so it takes a bit of getting used to if you have been mostly using a full-sized keyboard, but it is quite usable, with plenty of travel in the keys and, though each keypress is quite tactile so you know you have pressed it, it is not clicky. It is even resistant to spilt drinks or a spot or two of rain. Having killed a couple of machines this way over the past thirty years or so (once by sneezing), I wish all keyboards had this feature. The only things I dislike about it are that it is not backlit (I really miss that) and that the Return key is far too small, bunched up with a load of symbol keys and easily missed. Apart from that, it is easy to touch type and I’d say it is marginally better than the keyboard on my Macbook Pro (2019 model). The keys are marked for ChromeOS, so they are a bit fussy and it can be hard to identify which of the many quote marks is the one you want, because they are slightly differently mapped in ChromeOS, Android, and Linux. On the other hand I’m not at all fond of Chrome OS’s slightly unusual keyboard shortcuts so it’s nice that the keys tell you what they can do, even though it can be misleading at times.
The multi-touch screen works well with fingers, though it could be far more responsive when using a capacitive stylus: the slow speed of the machine really shows here. Unless you draw or write really slowly, you are going to get broken lines, whether using native Chrome apps, Android, or Linux. I find it virtually unusable when used this way.
The touchpad is buttonless and fine – it just works as you would expect, and its conservative size makes it far less likely to be accidentally pressed than the gigantic glass monstrosity on my Macbook Pro. I really don’t get the point of large touchpads positioned exactly where you are going to touch them with your hand when typing.
There is no fingerprint reader or face recognition, though it mostly does unlock seamlessly when it recognizes my phone. It feels quite archaic to have to enter a password nowadays. You can get dongles that add fingerprint recognition and that work with Chromebooks, but that is not really very convenient.
The machine is made to be used by schoolkids, so it is built to suffer. The shell of the Flip is mostly made of very sturdy plastic. And I do mean sturdy. The edges are rubberised, which feels nice and offers quite a bit of protection. Asus claim it can be dropped onto a hard floor from desk height, and that the pleasingly textured covering hides and prevents scratches and dents. It certainly feels very sturdy, and the texture feels reassuring in the hand, with a good grip so that you are not so likely to drop it. It doesn’t pick up fingerprints as badly as my metal-bodied or conventional plastic machines. Asus say that the 360 degree hinges should survive 50,000 openings and closings, and that the ports can suffer insertion of plugs at least 5,000 times. I believe them: everything about it feels well made and substantial. You can stack 30kg on top of it without it flinching. For the most part it doesn’t need its own case. I felt no serious worries throwing this into a rucksack, albeit that it is neither dust nor water resistant (except under the keyboard). Asus build it to the American military’s MIL-STD 810G spec, which sounds impressive though it should be noted that this is not a particular measure of toughness so much as a quality control standard to ensure that it will survive the normal uses it is designed for. It’s not made for battlefields, boating, or mountaineering, but it is made to survive 11-year-olds, and that’s not bad.
It’s not unattractive but nor is it going to be a design classic. It is just a typical, old-fashioned, fairly nondescript and innocuous small laptop, that is unlikely to attract thieves to the same extent as, say, a Microsoft Surface or Macbook Pro. It has good old fashioned wide bezels. I realize this is seldom considered a feature nowadays, but it is really good for holding it in tablet mode and helps to distinguish the screen from the background. It feels comfortable and familiar. In appearance, it is in fact highly reminiscent of my ancient Asus M5N laptop from 2004, that still runs Linux just fine, albeit without a working battery, with only 768MB of RAM and with, since only recently, a slightly unreliable DVD drive – Asus really does make machines that last.
The machine is fanless so it is quite silent: I love that. Anything that moves inside a computer will break, eventually, and fans can be incredibly annoying even when they do work, especially after a while when dust builds up and operating system updates put more stress on the processor. If things do break, then the device has a removable panel on the base, which you can detach using a single standard Phillips screwdriver, and Asus even thoughtfully provide a little thumbnail slot to prise it up. Through this you can access important stuff like storage and RAM, and the whole machine has a modular design that makes every major component easily replaceable – so refreshing after the nightmares of trying to do any maintenance on an Apple device. Inside, it has a dual core Celeron of some kind that can be pushed up to 2800 MHz – an old and well-tried CPU design that is not going to win any performance prizes but that does the job pretty well. From my tech support days I would be a bit bothered leaving this with young and inquisitive kids – they really like to see how things work by doing things that would make them not work. I lost a couple of lab machines to a class of kids who discovered the 240/110v switch on the back of old PCs.
It does feel very sluggish at the best of times after using a Macbook Pro – apps can take ages to load, and there can be quite a long pause before it even registers a touch or a keypress when it is running an app or two already – but it is less than a tenth of the price, so I can’t complain too much about that. It happily runs a full-blown DBMS and web server, which addresses most of my development needs, though I’d not be keen on running a full VM on the device, or compiling a big program.
There are no Asus apps, docs, or customizations included. It is pure, bare-bones, unadulterated Chrome OS, without even a default Asus page to encourage registration. This is really surprising. Eventually I found the MyAsus (phone) app for Android on Google’s Play store, which is awful but at last – when I entered the serial number to register the machine – it told me what it actually was, so I could go and find a manual for it. The manual contains no surprises and little information I couldn’t figure out for myself, but it is reassuring to have one, and very peculiar that it was not included with the machine. This makes me suspect that BestBuy might have bought up a batch of machines that were originally intended for a (large) organization that had laid down requirements for a bare-bones machine. This might explain why it is not listed on the Asus site.
I may write more about ChromeOS at some later date – the main reason I got this device was to find out more about it – but I’ll give a very brief overview of my first impressions now. ChromeOS is very clever, though typical of Google’s software in being a little clunky and making the computer itself a little bit too visible: Android suffered such issues in a big way until quite recently, and Android phones still feel more like old fashioned desktop computers than iPhones or even Tizen devices.
Given that it is primarily built to run Chrome apps, it is surprisingly good at running Android apps – even VPNs – though integration is not 100% perfect: you can occasionally run into trouble passing parameters from a Chrome app to Android, for instance; some Android apps are unhappy about running on a laptop screen; and not all understand the SD card very well. Chrome apps run happily without a network, so you are not tied to the network as much as with other thin-client alternatives like WebOS.
It also does a really good job of running and integrating Linux apps. They all run in a Debian Linux container, so a few aspects of the underlying machine are unavailable and it can be a little complex when you want to use files from other apps or peripherals, but it is otherwise fully featured and close to native Linux in performance. The icons for Linux apps appear in the standard launcher like any other app and, though there is a little delay when launching the first Linux app while it starts the container, once you have launched one then the rest load quickly. You do need a bit of Linux skill to use it well – command line use of apt is non-negotiable, at least, to install any apps, and integrating both Android and ChromeOS file systems can be a little clunky. Linux is still a geek option, but it makes the machine many times more useful than it would otherwise be. There’s virtually nothing I’d want to do with the machine that is constrained by software, though the hardware creates a few brick walls.
Altogether, integration between the three operating systems is remarkably good, but the seams show now and then, such as in requiring at least two apps for basic settings (ChromeOS and Android), with a handful of settings only being available via the Chrome browser, or in not passing clipboard contents to the Linux terminal command line (though you can install an x-terminal that works fine). I’ve hit quite a few small problems with the occasional app, and a few Android apps don’t run properly at all (most likely due to screen size issues rather than integration issues) but overall it works really well. In fact, almost too well – I have a few of the same apps in both ChromeOS and Android versions so sometimes I fail to notice that I am using the glitchier one until it is too late.
Despite the underlying Linux foundations, it is not super-stable and crashes in odd ways when you stretch it a little, especially when reconnecting on a different network, but it is stable enough for most standard uses that most people would run into, and it reboots really quickly. Even in the few weeks I’ve had it, it seems to have become more stable, so this is a moving target.
Updates come thick and fast, but it is a little worrying that Google’s long term commitment to ChromeOS seems (like most of their offerings) shaky: the Web app store is due to close at some point soon and there are some doubts about whether it will continue to offer long term support for web apps in general, though Android and Linux support makes that a lot less worrying than it might be. Worst case would be to wipe most traces of ChromeOS and simply partition the machine for Linux, which would not be a bad end-of-life option at all.
The biggest caveat, though, is that you really need to sell your soul (or at least more of your data than is healthy) to Google to use this. Without a Google account I don’t think it would work at all, but at the very least it would be crippled. I trust Google more than I trust most other big conglomerates – not because they are nice but because their business model doesn’t depend on directly selling my data to others – but I do not love their fondness for knowing everything about me, nor that they insist on keeping my data in a banana republic run by a reality TV show host. As much as possible the apps I use are Google-free, but it is virtually impossible to avoid using the Chrome browser that runs many apps, even when you have a friendlier alternative like Vivaldi that would work just as well, if Google allowed it. In fairness, it is less privacy-abusive than Windows, and more open about it. MacOS is not great either, but Apple are fiercely aggressive in protecting your data and don’t use it for anything more than selling you more Apple goodies. Linux or BSD are really the only viable options if you really want to control your data or genuinely own your software nowadays.
This was a great little machine for camping. Though water and dust were a concern, given the low price I wasn’t too worried about treating it roughly. It was small and light, and it performed well enough on every task that I threw at it. It’s neither a great laptop nor a great tablet, but the fact that it performs both tasks sufficiently well without the ugliness and hassles of Windows or the limitations of single OS machines is very impressive.
Since returning from camping I have found myself using the machine a lot more than I thought I might. My Macbook Pro is pretty portable and its battery life is not too bad, but it is normally plugged in to a big monitor and a whole bunch of disk drives, so I can’t just pick it up to move around the house or down to the boat without a lot of unplugging and, above all, disk ejection (which, thanks to Apple’s increasingly awful implementation of background indexing that has got significantly worse with every recent release of OSX, can often be an exercise in deep frustration), so I rarely do so unless I know I will be away from the desk for a while. I love that I can just pick the Flip up and use it almost instantly, and I only need to charge it once every couple of days, even when I use it a lot. I still far prefer to use my Macbook Pro for anything serious or demanding, my iPad or phone for reading news, messaging, drawing, etc, and a dedicated ebook reader for reading books, but the fact that this can perform all of those tasks reasonably well is useful enough that it is fast becoming my default mobile device for anything a cellphone doesn’t handle well, such as writing anything of any length, like this (which is all written using the Flip).
In summary, the whole thing is a bit of a weird hybrid that shows its seams a bit too often but that can do most things any tablet or PC can do, and then some. It does a much better job than Windows of combining a ‘real’ PC with a tablet-style device, mainly because (thanks to Android) it does the tablet far better than any Windows PC and, thanks to Linux, it is almost as flexible as a PC (though, bearing in mind that Windows now does Linux reasonably well, it is not quite in the same league). The low spec of the machine does create a few brick walls: I am not going to be running any VMs on it, nor any graphics-intensive, memory-intensive, or CPU-intensive tasks but, for well over 90% of my day-to-day computing needs, it works just fine.
I’m now left wondering whether it might be worthwhile to invest in one of the top-of-the-line Google Chromebooks to cater for my more advanced requirements. They are beautiful devices that address nearly all the hardware limitations of the C234 very well, that are at least a match for mid-to-high-end Windows and Mac machines in performance and flexibility, and they come at a price to match: really not cheap. But I don’t think either I or ChromeOS is quite ready for that yet. MacOS beats it hands down in terms of usability, speed, reliability, consistency, and flexibility, despite Apple’s deeply tedious efforts to lock MacOS down in recent years (trying to triple-boot to MacOS, Windows, and Linux is an exercise in frustration nowadays) and despite not offering a touch screen option. If Apple goes further down the path of assuming all users are idiots then I might change my mind but, for now, its own operating system is still the best available, and a Mac runs Windows and Linux better than almost any equivalently priced generic PC. I would very seriously consider a high-end Chromebook, though, as an alternative to a Windows PC. It is inherently more secure, far less hassle to maintain, and lets you get to doing what you want to do much faster than any Windows machine. Unless you really need a bit of hardware or software that only runs under Windows – and there are very few of those nowadays – I can think of few reasons to prefer Windows.
Where to buy (current advertised price $CAD409): https://www.bestbuy.ca/en-ca/product/asus-flip-c234-11-6-touchscreen-2-in-1-chromebook-intel-celeron-n4020-64gb-emmc-4gb-ram-chrome-os/14690262
This stunningly brilliant and passionate essay by my friend, mentor and inspiration, Karamjit Singh Gill (far too formal – he is Ranjit to me) is, sadly, occasioned by the death of his old friend, one of the most insightful, compassionate, humane commentators on technology of the past century, Mike Cooley. I knew Mike only slightly, as a result of the occasional dinner conversation or guest appearance at a conference some 20 years or so ago, but he made a big impression on me, not least because Ranjit – my guru – introduced him to me as his own guru and inspiration. Mike was one of those people who could talk off the cuff with greater eloquence, sharpness, and persuasiveness than most people can write after multiple revisions. He was funny, he was wise, he was charming. His book, Architect or Bee, has been an inspiration to generations, including to me – mundanely, his distinction between auto-mation and infor-mation is an incredibly useful one but, much more significantly, it is his resolute valorization and glorification of the human and the humane that most persists and that continues to shape my own understanding of the world. These are qualities that Ranjit shares in spades, and it is mainly through Ranjit that his attitudes, perceptions, and beliefs have rubbed off a little on me.
Characteristically, Ranjit does not take the obvious or easy path of providing a conventional obituary or summary of achievements, but instead offers his own profound and far-reaching insights, in a moving tribute to the impact Mike had on his own understanding. It’s a beautiful piece, interweaving poetry and analysis into a seamless whole that offers both a cutting and urgent critique of the inhumane, debasing patterns of the machine, and a path towards hope and redemption that, far from rejecting technology, embraces it as something that can help us to become more human, more filled with wisdom and wonder, more connected. There are, however, huge risks of sliding down “the slippery slope of calculation to judgment”, which Ranjit analyzes in depth. Mike’s poem, with which Ranjit begins the essay, summarizes the gist of it well:
We create devices and then they create us
Narcissus-like, we gaze into a pool of technology and see ourselves
We acquiesce in our own demise, setting out as participants
and metamorphosing into victims
(Cooley 2013)
This is not a piece to be skip-read – it is full of subtlety and grace, and there is something to savour in almost every sentence.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/6848608/strange-affair-of-man-with-the-machine
This is a fascinating article reporting on a couple of research studies by the author (The Wisdom of Partisan Crowds and Networked collective intelligence improves dissemination of scientific information regarding smoking risks) that – contrary to what you might expect if you follow Eli Pariser’s line of reasoning on filter bubbles – show partisan crowds can in fact be pretty wise, converging on more nuanced, more tolerant, less biased views when left to their own devices to discuss the issues about which they are partisan. Rather than amplifying their biases, they actually become less partisan. This happens (apparently reliably and predictably) when – and only when – networks are egalitarian: when there are no clear leaders or privileged voices. When they become more centralized, i.e. when prominent influencers connect to many others, they turn into echo chambers that amplify the influencers’ biases and intolerant views. The fairly startling and heartwarming takeaway is that greater equity leads to greater tolerance and wisdom, even when the groups themselves started out with highly partisan views.
Centola’s discoveries help to explain some of the big issues we see in large-scale social networks, with a relatively small number of hubs linking a much larger number of people together and thus amplifying the biases in the ways Centola describes. To split a hair, though technically accurate, I’m not sure about the wisdom of using the term ‘centralization’ to describe this: it is totally about network centrality in the hubs, but ‘centralization’ implies a deliberate hierarchy to me (to centralize implies someone doing the centralization), which is not how it works. It is still a distributed network, after all, just one that (on average) follows a power law distribution. However, as Centola tentatively suggests, knowing this provides us with a potential lever to disrupt the harmful effects of echo chambers. The trick, he claims, is not to eliminate the echo chambers, but to do what we can to increase the equity within them. This, as it happens, aligns fairly well with Pariser’s recent rather fuzzily formulated and weakly justified call for ‘online parks’. I look forward to reading Centola’s new book on the subject, due out in January.
How might we use this knowledge?
I think there may be great potential for social media designers to use this knowledge to take the big influencers down a few notches. Indeed, using a very different theoretical basis, I did something rather similar myself when I developed my old CoFIND system (a social bookmarking system using the dynamics of evolutionary and stigmergic systems to evolve structure) in the late 90s and early 2000s. Like others working in the field, I had noticed that a really big problem with my evolving system was that popular resources and fuzzy tags (which I called ‘qualities’ – they were scalar rather than binary categories) tended to stay that way: it was a scale-free network with a long, long tail. My solution was to give new tags and resources a novelty weighting that brought them up to equal prominence with the most viewed/ranked, a weighting that could be topped up by their being used/ranked, but that decayed if they were not used. Initially I made the decay rate constant, which was stupid: if the system was not used for a week or two, there would literally be nothing left to see, and it was really hard to tune it right so that new things didn’t stick around too long if they were not popular. Later, I made the decay proportional to the overall rate of use of the system or niche within it, so it tuned itself: when the system was used a lot, new resources and fuzzy tags didn’t stick around for long but, in less popular systems, they would fall more slowly. The idea behind it was to provide a means for things to ‘die’ in the system for lack of feeding, and to make things that were really no use starve pretty quickly. New resources would have a chance to compete but, if they were not used and rated, they would decay quite rapidly – relative to system use – and drop down into the backwaters of the system where few would ever visit.
Later (or maybe it was earlier – my memory is vague) I slightly randomized the initial weighting to introduce a bit of serendipity and to reduce the rewards of gaming it.
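For the curious, the mechanism might be sketched something like this. To be clear, this is a minimal reconstruction of the idea as described above, not the original CoFIND code: the class name, constants, and method signatures are all my own illustrative assumptions.

```python
import random

class NoveltyWeightedItem:
    """A resource or fuzzy tag competing for prominence.

    Illustrative sketch only: new items start near the prominence of
    the most popular ones, get topped up when used or ranked, and
    decay at a rate proportional to overall system activity.
    """

    def __init__(self, boost=1.0, jitter=0.1):
        # The novelty boost, slightly randomized for serendipity and
        # to blunt the rewards of gaming it.
        self.weight = boost + random.uniform(-jitter, jitter)

    def reinforce(self, amount=0.1):
        # Being used or ranked tops the weight back up.
        self.weight += amount

    def decay(self, system_activity, rate=0.01):
        # Decay proportional to overall use of the system (or niche),
        # so the mechanism tunes itself: busy systems cull unused
        # novelty quickly; quiet ones let it linger.
        self.weight = max(0.0, self.weight - rate * system_activity)
```

The self-tuning decay is the key design choice: a fixed decay rate either starves a quiet system of content or lets dross linger in a busy one, whereas tying it to activity keeps selection pressure roughly constant.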
In fairness, my mechanism was a bit of a sky-hook of the sort the intelligent design nincompoops invoke when trying to find a role for supernatural beings in evolutionary systems. In natural ecosystems, though novelty can sometimes be beneficial when it allows an organism to occupy an unclaimed niche or to out-compete an incumbent, novelty has no innate value of its own. If it did, it would have evolved from the bottom up, certainly not from the top down. However, I reasoned that I was defining the physics of the system so as to influence its behaviour in the direction I wanted to go (to help people to help one another to learn) and thus could legitimately make novelty a positive selection factor without departing from my general principle of letting evolution and stigmergy do all the work. I was also very aware that the system had to be at least minimally useful and, if I had allowed evolution to do all the work (which I did try, once), given the widespread availability of other well-designed social bookmarking systems, no one would ever use it in the first place: the whole system would have been an evolutionary dead-end.
I think the principles I followed could be used for pretty much any social network. If we think of the algorithms that choose what, how, where, and in what order things are displayed, as the physics of the social system, then it is quite legitimate to tune the physics to make the network more equitable and egalitarian, while still retaining the filter bubbles that draw people to them. The big question that remains to me, though, is whether anyone would want to use it. I suspect that this kind of flattened social network may thrive in some niches. It would probably be really useful in academia, research communities, and other vertical markets, for instance, where the set social form is equal to or more dominant than the network social form, but it might not be a great competitor to Facebook, LinkedIn, Twitter, and other commercial social networks, precisely because of the awful role they play in forming and sustaining identities, and cultivating an exaggerated sense of belonging. Social networks naturally gravitate towards a long-tail distribution so, if we suppress that, they might not form particularly well, if at all. It would be really interesting to try, though.
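To make the idea of tuning the physics concrete, here is one minimal sketch of the principle. All of it is my own illustration – the function names, the damping exponent, and the follower-count discount are assumptions, not a description of any real platform’s algorithm:

```python
def display_rank(raw_engagement, author_followers, damping=0.5):
    """Score an item for display, discounting the author's existing
    prominence so hubs don't simply compound their own reach.

    damping=0 reproduces the usual rich-get-richer ordering;
    higher values flatten the long tail toward more equal exposure.
    """
    return raw_engagement / (1 + author_followers) ** damping

def feed(items, damping=0.5):
    # items: (label, engagement, author_followers) triples, returned
    # in display order under the damped physics.
    return sorted(items, key=lambda i: display_rank(i[1], i[2], damping),
                  reverse=True)
```

The single `damping` parameter is the lever: it leaves the filter bubble intact (people still see the topics that drew them in) while reducing the centrality of the hubs within it.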
Originally posted at: https://landing.athabascau.ca/bookmarks/view/6840452/why-social-media-make-us-more-polarized-and-how-to-fix-it-scientific-american
Immediately following its newly announced (and typically self-serving and cynical) initiative to uplift climate science on its site, Facebook showed its dedication to the cause by removing hundreds of climate change activist, indigenous, and social justice groups (and their posts) from the site.
Facebook claimed it was a ‘random accident’ when challenged. Oops. Silly them.
It is good that, however misguided and deliberately quiet it may be on problems in which it plays a significant role, Facebook is at least paying some attention to righting wrongs it has helped to create, however meagre and paltry its lip service might be. However, as the article states, “If Facebook actually want to address the climate crisis, not censoring environmental activism, being stooges for gas companies, and allowing conspiracies to spread seems like a good place to start. Which is maybe why it hasn’t take[n] those steps in the first place.”
Originally posted at: https://landing.athabascau.ca/bookmarks/view/6669125/facebook-conducts-mass-censorship-of-climate-activists
This CBC report is one of many dozens of articles in the world’s press highlighting one rather small but startling assertion in a recent OECD report on the effects of Covid-19 on education – that the ‘lost’ third of a year of schooling in many countries will lead to an overall lasting drop in GDP of 1.5% across the world. Though it contains many more fascinating and useful insights that are far more significant and helpful, the report itself does make this assertion quite early on and repeats it for good measure, so it is not surprising that journalists have jumped on it. It is important to observe, though, that the reasoning behind it is based on a model developed by Hanushek and Woessman over several years, and an unpublished article by the authors that tries to explain variations in global productivity according to the amount and – far more importantly – the quality of education: that long-run productivity is a direct consequence of the cognitive skills (or knowledge capital) of a nation, which can be mapped directly to how well and how much the population is educated.
As an educator I find this model, at a glance, to be reassuring and confirmatory because it suggests that we do actually have a positive effect on our students. However, there may be a few grounds on which it might be challenged (disclaimer: this is speculation). The first and most obvious is that correlation does not equal causation. The fact that countries that do invest in improving education consistently see productivity gains to match in years to come is interesting, but it raises the question of what led to that investment in the first place and whether that might be the ultimate cause, not the education itself. A country that has invested in increasing the quality of education would, normally, be doing so as a result of values and circumstances that may lead to other consequences and/or be enabled by other things (such as rising prosperity, competition from elsewhere, a shift to more liberal values, and so on). The second objection might be that, sure, increased quality of education does lead to greater productivity, but that it is not the educational process that is causing it, as such. Perhaps, for instance, an increased focus on attainment raises aspirations. A further objection might be that the definition of ‘quality’ does not measure what they think it measures. A brief skim of the model used suggests that it makes extensive use of scores from the likes of TIMSS, PIRLS and PISA, standardized test approaches used to compare educational ‘effectiveness’ in different regions that embody quite a lot of biases, are often manipulated at a governmental level, and, as I have mentioned once or twice before, are extremely dubious indicators of learning: in fact, even when they are not manipulated, they may indicate willingness to comply with the demands of the powerful more than learning (does that improve GDP? Probably). Another objection might be that absence of time spent in school does not equate to absence of education.
Indeed, Hanushek and Woessman’s central thesis is that it is not the amount but the quality of schooling that matters, so it seems bizarre that they might fall back on quantifying learning by time spent in school. We know for sure that, though students may not have been conforming to curricula at the rate desired by schools and colleges, they have not stopped learning. In fact, in many ways and in many places, there are grounds to believe that there have been positive learning benefits: better family learning, more autonomy, more thoughtful pedagogies, more intentional learning community forming, and so on. Out of this may spring a renewed focus on how people learn and how best to support them, rather than maintaining a system that evolved in mediaeval times to support very different learning needs, and that is so solidly packed with counter technologies and so embedded in so many other systems that have nothing to do with learning that we have lost sight of the ones that actually matter. If education improves as a result, then (if it is true that better and more education improves the bottom line) we may even see gains in GDP. I expect that there are other reasons for doubt: I have only skimmed the surface of the possible concerns.
I may be wrong to be sceptical – in fairness, I have not read the many papers and books produced by Hanushek and Woessman on the subject, I am not an economist, nor do I have sufficient expertise (or interest) to analyze the regression model that they use. Perhaps they have fully addressed such concerns in that unpublished paper and the simplistic cause-effect prediction distorts their claims. But, knowing a little about complex adaptive systems, my main objection is that this is an entirely new context to which models that have worked before may no longer apply and that, even if they do, there are countless other factors that will affect the outcome in both positive and negative ways, so this is not so much a prediction as an observation about one small part of a small part of a much bigger emergent change that is quite unpredictable. I am extremely cautious at the best of times whenever I see people attempting to find simple causal linear relationships of this nature, especially when they are so precisely quantified, especially when past indicators are applied to something wholly novel that we have never seen before with such widespread effects, especially given the complex relationships at every level, from individual to national. I’m glad they are telling the story – it is an interesting one that no doubt contains grains of important truths – but it is just an informative story, not predictive science. The OECD has a bit of a track record on this kind of misinterpretation, especially in education. This is the same organization that (laughably, if it weren’t so influential) claimed that educational technology in the classroom is bad for learning. There’s not a problem with the data collection or analysis, as such. The problem is with the predictions and recommendations drawn from it.
Beyond methodological worries, though, and even if their predictions about GDP are correct (I am pretty sure they are not – there are too many other factors at play, including huge ones like the destruction of the environment that makes the odd 1.5% seem like a drop in the bucket) then it might be a good thing. It might be that we are moving – rather reluctantly – into a world in which GDP serves as an even less effective measure of success than it already is. There are already plentiful reasons to find it wanting, from its poor consideration of ecological consequences to its wilful blindness to (and causal effect upon) inequalities, to its simple inadequacy to capture the complexity and richness of human culture and wealth. I am a huge fan of the state of Bhutan’s rejection of GDP, which it has replaced with the GNH happiness index. The GNH makes far more sense, and is what has led Bhutan to be one of the only countries in the world to be carbon positive, as well as being (arguably but provably) one of the happiest countries in the world. What would you rather have, money (at least for a few, probably not you), or happiness and a sustainable future? For Bhutan, education is not for economic prosperity: it is about improving happiness, which includes good governance, sustainability, and preservation of (but not ossification of) culture.
Many educators – and I am very definitely one of them – share Bhutan’s perspective on education. I think that my customer is not the student, or a government, or companies, but society as a whole, and that education makes (or should make) for happier, safer, more inventive, more tolerant, more stable, more adaptive societies, as well as many other good things. It supports dynamic meta-stability and thus the evolution of culture. It is very easy to lose sight of that goal when we have to account to companies, governments, other institutions, and to so many more deeply entangled sets of people with very different agendas and values, not to mention our inevitable focus on the hard methods and tools of whatever it is that we are teaching, as well as the norms and regulations of wherever we teach it. But we should not ever forget why we are here. It is to make the world a better place, not just for our students but for everyone. Why else would we bother?
Originally posted at: https://landing.athabascau.ca/bookmarks/view/6578662/skills-lost-due-to-covid-19-school-closures-will-hit-economic-output-for-generations-hmmm
This article from teachonline.ca draws from a report by JISC (the UK academic network organization) to provide 5 ‘principles’ for assessment. I put the scare quotes around ‘principles’ because they are mostly descriptive labels for trends and they are woefully non-inclusive. There is also a subtext here – that I do understand is incredibly hard to avoid because I failed to fully do so myself in my own post last week – that assessment is primarily concerned with proving competence for the sake of credentials (it isn’t). However, given these caveats, most of what is written here makes some sense.
Principle 1: authentic assessment. I completely agree that assessment should at least partly be of authentic activities. It is obvious how that plays out in applied disciplines with a clear workplace context. If you are learning how to program, for instance, then of course you should write programs that have some value in a realistic context and it goes without saying that you should assess the same. This includes aspects of the task that we might not traditionally assess in a typical programming course such as analysis, user experience testing, working with others, interacting with StackOverflow, sharing via GitHub, copying code from others, etc. It is less obvious, though, in the case of something like, say, philosophy, or history, or Latin, or, indeed, any subject that is primarily found in academia. Authentic assessment for such things would probably be an essay or conference presentation, or perhaps some kind of argument, most of the time, because that’s what real life is like for most people in such fields (whether that should be the case remains an open issue). We should be wary, though, of making this the be-all and end-all, because there’s a touch of behaviourism lurking behind the idea: can the student perform as expected? There are other things that matter. For instance, I think that it is incredibly important to reflect on any learning activity, even though that might not mirror what is typically done in an authentic context. It can significantly contribute to learning but it can also reveal things that may not be obvious when we judge what is done in an authentic context, such as why people did what they did or whether they would do it the same way again. There may also be stages along the way that are not particularly authentic, but that contribute to learning the hard skills needed in order to perform effectively in the authentic context: learning a vocabulary, for example, or doing something dangerous in a cut-down, safe environment.
We should probably not summatively assess such things (they should rarely contribute to a credential because they do not demonstrate applied capability), but formative assessment – including of this kind of activity – is part of all learning.
Principle 2: accessible and inclusive assessment. Well, duh. Of course this should be how it is done. Not so much a principle as plain common decency. Was this not ever so? Yes it was. Only an issue when careless people forget that some media are less inclusive than others, or that not everyone knows or cares about golf. Nothing new here.
Principle 3: appropriately automated assessment. This is a reaction to bad assessment, not a principle for good assessment. There is a principle that really matters here but it is not appropriate automation: it is that assessment should enhance and improve the student experience. Automation can sometimes do that. It is appropriate for some kinds of formative feedback (see examples of non-authentic learning above) but very little else, which, in the context of this article (that implicitly focuses on the final judgment), means it is a bad idea to use it at all.
Principle 4: continuous assessment. I don’t mind this one at all. Again, the principle is not what the label claims, though. The principle here is that assessment should be designed to improve learning. For sure, if it is used as a filter to sort the great from the not great, then the filter should be authentic which, for the most part, means no high stakes, high stress, one-chance tests, and that overall behaviours and performance over time are what matters. However, there is a huge risk of therefore assessing learning in progress rather than capability once a course is done. If we are interested in assessing competence for credentials, then I’d rather do it at the end, once learning has been accomplished (ignoring the inconvenient detail that this is not a terminal state and that learning must always undergo ever-dynamic renewal and transformation until the day we die). Of course, the work done along the way will make up the bulk of the evidence for that final judgment but it allows for the fact that learning changes people, and that what we did early on in the journey seldom represents what we are able to do in the light of later learning.
Principle 5: secure assessment. Why is this mentioned in an article about assessment in the digital age? Is cheating a new invention? Was it (intentionally) insecure before? This is just a description of how some people have noticed that traditional forms of assessment are really dumb in a context that includes Wikipedia, Google, and communications devices the size of a peanut. Pointless, and certainly not a new principle for the Digital Age. In fairness, if the principles above are followed in spirit as well as in letter, it is not likely to be a huge issue but, then, why make it a principle? It’s more a report on what teachers are thinking and talking about.
The summary is motherhood and apple pie, albeit that it doesn’t entirely fall out from the principles (choice over when to be assessed, or peer assessment, for instance, are not really covered in the principles, though they are very good ideas).
I’m glad that people are sharing ideas about this but I think that there are more really important principles than these: that students should have control over their own assessment, that it should never reward or punish, that it should always support learning, and so on. I wrote a bit about this the other day, and, though that is a work in progress, I think it gets a little closer to what actually matters than this.
Originally posted at: https://landing.athabascau.ca/bookmarks/view/6531701/how-assessment-is-changing-in-the-digital-age-five-guiding-principles-teachonlineca
A simple article on a simple idea, which is to introduce brakes and/or circuit breakers to popular social media platforms in order to slow down viral posts to a speed that sysadmins can handle. Such posts can have deadly consequences and are often far from innocently made. The article mentions cases such as the Plandemic video (a fabric of lies and misinformation intended to discourage mask use and distancing) that received 8 million views in a week before being removed by all major social platforms, or a video funded by ‘dark’ money called America’s Frontline Doctors pushing hydroxychloroquine as a Covid-19 treatment hitting 20 million views on Facebook in just 12 hours, through targeted manipulation of algorithms and deliberate promotion by influential accounts. It would take a large army of human police to identify and contain every instance of that kind of malevolent post before it hit exponential growth, so some kind of automated brake is needed.
Brakes (negative feedback loops and delays) are a good idea. They are a fundamental feature of complex adaptive systems, and of cybernetic systems in general. You have a great many of them in your own body, they exist from the level of ecosystems down to cellular organelles, and from human organizations to cities to whole cultures they serve the critical function of maintaining metastability. If everything happened at once, there’s a fair chance that nothing would happen at all. But it has to be the right amount of delay. Too little and the system flies off into chaos, never reaching even an approximately stable state. Too much and it either oscillates unstably between extremes or, if taken too far, destroys/stops the system altogether. Positive feedback loops must be balanced by negative feedback loops, and vice versa. Any boundaried entity in a stable complex adaptive system has evolved (or, in human systems, may have been designed) to have the right amount of delay in the context of the rest of the system. It has to be that way or the system would not persist: when delays change, so do systems. This inherent fragility is what the bad actors are exploiting: they have found a way to bypass the usual delays that keep societies stable. But what is ‘right’ in the context of viral posts, which are part of a much larger ecosystem that contains within it bad actors hidden among legitimate agents? Clearly it has to respond at least nearly as fast as the positive feedback loop itself is growing, or it will be too late, which seems to imply mechanization must be involved. The algorithm, such as the one described in the article, might not need to be too complex. Some kinds of growth can be stunted through tools like downvotes, reports of abuse, and the like, and most social technologies have at least a bit of negative feedback built in.
However, it is seldom in the provider’s interest to make that as powerful as the positive feedback for all sorts of reasons, many quite legitimate – we don’t have a thumbs-down option on the Landing, for instance, because we want to accentuate the positive to help foster a caring community, and down-voting motives are not always clear or pure.
However, a simple rule-driven system alone would probably be a bad idea. There are times when rapid, exponential, positive feedback loops should be allowed to spread in order to keep the system intact: in real disasters, for example, where time and reach are of the essence in spreading a warning, or in outpourings of support for victims of such disasters. There are also perfectly innocuous viral posts – indeed, they are likely the majority. At least, therefore, humans should be involved in putting their feet on the brakes because such things are beyond the ken of machines and will likely remain so. Machines cannot yet (and probably never will) know what it means to live as a human being in a human society – they simply don’t have a stake in the game – and even the best of AIs are really bad at dealing with novel situations, matters of compassion, or outliers because they don’t have (and cannot have) enough experience of the right kind, or the imagination to see things differently, especially when people are deliberately trying to fool them. On the other hand, humans have biases which, as often as not, are part of the problem we are trying to solve, and can themselves be influenced in many ways. This seems to me to be a perfect application for crowd wisdom. If automated alerts – partly machine-determined, partly crowd-driven – are sent to many truly randomly selected people from a broad sample (something like Mechanical Turk, but less directed), and those people have no way of knowing what the others are deciding, and if each casts a vote whether to trigger the brakes, it might give us the best of both worlds. This kind of thing spreads through networks of people so it is appropriate that it can be destroyed by sets of people.
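The machine-plus-crowd combination might be sketched roughly like this. Everything here – the function names, the doubling threshold, the jury size, and the quorum – is an illustrative assumption, not a real platform’s mechanism:

```python
import random

def growth_is_viral(view_counts, threshold=2.0):
    """Flag a post whose hourly views keep (roughly) doubling.

    view_counts: views per hour, oldest first. The threshold is an
    arbitrary illustrative value; a real platform would tune it.
    """
    if len(view_counts) < 3:
        return False
    ratios = [b / a for a, b in zip(view_counts, view_counts[1:]) if a > 0]
    return bool(ratios) and min(ratios) >= threshold

def crowd_approves_brake(jurors, post, quorum=0.5):
    """Ask a random, mutually invisible sample of users whether to
    slow the post down; a simple majority pulls the brake."""
    sample = random.sample(jurors, min(25, len(jurors)))
    votes = sum(1 for juror in sample if juror(post))
    return votes / len(sample) > quorum

def maybe_brake(post, view_counts, jurors):
    # The machine detects the exponential curve; humans decide whether
    # this particular outbreak is malign or a legitimate emergency.
    return growth_is_viral(view_counts) and crowd_approves_brake(jurors, post)
```

The division of labour matters: the rate detector keeps pace with exponential growth, while the randomly sampled, mutually invisible jurors supply the judgment that innocuous or genuinely urgent posts should be let through.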
Originally posted at: https://landing.athabascau.ca/bookmarks/view/6530631/how-social-media-platforms-could-flatten-the-curve-of-dangerous-misinformation
This is a commentary by Rob Beschizza at Boing Boing on a New York Times article describing how the far-right is exploiting Facebook with ruthless efficiency. At least, that’s one way to look at it. Another, as Beschizza notes, is:
“…that Facebook’s cultivation of these audiences is intentional, simply because a Democratic congress and president would present a more potent threat to Facebook than Trump and his cronified GOP ever will. It’s no secret that Zuckerberg is more concerned with conservative critics than progressive ones, a concern often cast as fear but could just as well be because that’s who he and his team wants to please. The right will carp but it knows it rules Facebook from the inside out. Only the left talks seriously about breaking it up.”
That’s an interesting take on things. It would not surprise me if it were true but, if this is what Facebook is doing, it is most likely doing so through deliberate failure to dampen virality rather than through more obvious algorithm tuning. It would not be a moral compass holding it in check – it has no moral compass – but plausible deniability.
I’ve said it before and I will keep saying it: DO NOT USE FACEBOOK. Stop it. Really. If you must use it, use it in a special isolated tab in Firefox (you can get the plugin here), or use a different browser solely for that purpose. Then get out of it as soon as you can. As for Whatsapp, Instagram, or any other tool, device, or app that Facebook owns, just say no (or, if you must use them, NEVER use Facebook itself). It bugs the hell out of me that my avoidance of Facebook means I am unable to see a lot of useful things people have witlessly shared on this closed and malicious platform, and I am so sad that the formerly great Whatsapp and Instagram apps, despite the contracts under which they were sold that were meant to prevent exactly such abuse, are now just slimy Facebook tentacles. However, I refuse to willingly feed my identity to the Devil that has tried (with too much success) to destroy the Web, and that feeds – and feeds on – the darkness in people’s souls for its own profit. Full disclosure: I do have a Facebook account, as well as Instagram and Whatsapp accounts, but they are for research purposes. Sometimes you have to take risks in order to learn.
The original article notes some caveats, including that the massive disparity between far-right posts and those of the rest of the world has only been demonstrated in public posts, and that it might include a fair number of legitimate hate-shares. The latter can be significant: I am not sure whether I am typical, but I certainly share at least as many articles of which I disapprove as those that I like. Whether this is a good thing or not is very much up for debate (e.g. see here, here, here, and here).
Originally posted at: https://landing.athabascau.ca/bookmarks/view/6494864/facebook-is-a-parallel-universe-of-lies-and-minisformation-crafted-to-deliver-the-election-to-trump