Why Social Media Make Us More Polarized, and How to Fix It – Scientific American

This is an extremely fascinating article reporting on a couple of research studies by the author (The Wisdom of Partisan Crowds and Networked collective intelligence improves dissemination of scientific information regarding smoking risks) that – contrary to what you might expect if you follow Eli Pariser’s line of reasoning on filter bubbles – show that partisan crowds can in fact be pretty wise, converging on more nuanced, more tolerant, less biased views when left to their own devices to discuss the issues about which they are partisan. Rather than amplifying their biases, they actually become less partisan. This happens (apparently reliably and predictably) when – and only when – networks are egalitarian: when there are no clear leaders or privileged voices. When networks become more centralized, i.e. when prominent influencers connect to many others, they turn into echo chambers that amplify the influencers’ biases and intolerant views. The fairly startling, and heartwarming, takeaway is that greater equity leads to greater tolerance and wisdom, even when the groups themselves start out with highly partisan views.

Centola’s discoveries help to explain some of the big issues we see in large-scale social networks, in which a relatively small number of hubs link a much larger number of people together and thus amplify biases in the ways Centola describes. To split a hair, though, I’m not sure about the wisdom of using the term ‘centralization’ to describe this: though technically accurate – it is all about network centrality in the hubs – ‘centralization’ implies a deliberate hierarchy to me (to centralize implies someone doing the centralizing), which is not how it works. It is still a distributed network, after all, just one that (on average) follows a power law distribution. However, as Centola tentatively suggests, knowing this provides us with a potential lever to disrupt the harmful effects of echo chambers. The trick, he claims, is not to eliminate the echo chambers, but to do what we can to increase the equity within them. This, as it happens, aligns fairly well with Pariser’s recent, rather fuzzily formulated and weakly justified, call for ‘online parks’. I look forward to reading Centola’s new book on the subject, due out in January.
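
As a rough, hypothetical illustration of the structural difference involved (my sketch, not Centola’s), the snippet below uses the networkx library to compare a preferential-attachment network, which grows hubs, with a random network of similar density, which stays roughly egalitarian:

```python
# A toy comparison (not from Centola's studies) of hub formation in a
# preferential-attachment network versus a similarly dense random network.
import networkx as nx

n = 1000
hubby = nx.barabasi_albert_graph(n, 3)      # rich-get-richer ties -> a few big hubs
flat = nx.gnp_random_graph(n, 6 / (n - 1))  # random ties at a similar mean degree

for name, g in (("preferential attachment", hubby), ("random (egalitarian)", flat)):
    degrees = sorted((d for _, d in g.degree()), reverse=True)
    top_share = sum(degrees[: n // 100]) / sum(degrees)  # ties held by the top 1% of nodes
    print(f"{name}: max degree {degrees[0]}, top 1% of nodes hold {top_share:.1%} of ties")
```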

How might we use this knowledge?

I think there may be great potential for social media designers to use this knowledge to take the big influencers down a few notches. Indeed, using a very different theoretical basis, I did something rather similar myself when I developed my old CoFIND system (a social bookmarking system using the dynamics of evolutionary and stigmergic systems to evolve structure) in the late 90s and early 2000s. Like others working in the field, I had noticed that a really big problem with my evolving system was that popular resources and fuzzy tags (which I called ‘qualities’ – they were scalar rather than binary categories) tended to stay that way: it was a scale-free network with a long, long tail. My solution was to give new tags and resources a novelty weighting that brought them up to equal prominence with the most viewed/ranked, a weighting that could be topped up by their being used/ranked but that was decremented if they were not. Initially I made the decay rate constant, which was stupid: if the system was not used for a week or two, there would literally be nothing left to see, and it was really hard to tune it so that new things didn’t stick around too long if they were not popular. Later, I made the decay proportional to the overall rate of use of the system (or niche within it), so it tuned itself: when the system was used a lot, new resources and fuzzy tags didn’t stick around for long but, in less popular systems, they would fade more slowly. The idea behind it was to provide a means for things to ‘die’ in the system for lack of feeding, and to make things that were really no use starve pretty quickly. New resources would have a chance to compete but, if they were not used and rated, they would decay quite rapidly – relative to system use – and drop down into the backwaters of the system where few would ever visit. Later (or maybe it was earlier – my memory is vague) I slightly randomized the initial weighting to introduce a bit of serendipity and to reduce the rewards of gaming it.
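
A minimal sketch of that use-proportional decay, reconstructed from memory rather than from the original CoFIND code (all names and constants here are hypothetical):

```python
import random

DECAY_FACTOR = 0.001  # fraction of weight lost per unit of system activity (hypothetical)

def initial_weight(max_current_weight: float) -> float:
    # New tags/resources enter at roughly equal prominence with the most
    # viewed/ranked, lightly randomized for serendipity and to blunt gaming.
    return max_current_weight * random.uniform(0.9, 1.1)

def reinforce(weights: dict, item: str, amount: float = 0.1) -> None:
    # Items are 'fed' each time they are used or rated.
    weights[item] = weights.get(item, 0.0) + amount

def decay_all(weights: dict, uses_this_period: int) -> None:
    # Decay is proportional to overall activity: busy systems starve unused
    # items quickly; quiet systems let them fade slowly.
    loss = DECAY_FACTOR * uses_this_period
    for item in weights:
        weights[item] = max(0.0, weights[item] * (1 - loss))
```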

In fairness, my mechanism was a bit of a sky-hook of the sort the intelligent design nincompoops invoke when trying to find a role for supernatural beings in evolutionary systems. In natural ecosystems, though novelty can sometimes be beneficial when it allows an organism to occupy an unclaimed niche or to out-compete an incumbent, novelty has no innate value of its own. If it did, it would have evolved from the bottom up, certainly not from the top down. However, I reasoned that I was defining the physics of the system so as to influence its behaviour in the direction I wanted to go (to help people to help one another to learn) and thus could legitimately make novelty a positive selection factor without departing from my general principle of letting evolution and stigmergy do all the work. I was also very aware that the system had to be at least minimally useful and, if I had allowed evolution to do all the work (which I did try, once), given the widespread availability of other well-designed social bookmarking systems, no one would ever use it in the first place: the whole system would have been an evolutionary dead-end. 

I think the principles I followed could be used for pretty much any social network. If we think of the algorithms that choose what, how, where, and in what order things are displayed as the physics of the social system, then it is quite legitimate to tune that physics to make the network more equitable and egalitarian, while still retaining the filter bubbles that draw people to them. The big question that remains for me, though, is whether anyone would want to use it. I suspect that this kind of flattened social network may thrive in some niches. It would probably be really useful in academia, for instance, in research communities, and in other vertical markets where the set social form is equal to or more dominant than the network social form, but it might not be a great competitor to Facebook, LinkedIn, Twitter, and other commercial social networks, precisely because of the awful role they play in forming and sustaining identities and cultivating an exaggerated sense of belonging. Social networks naturally gravitate towards a long-tail distribution so, if we suppress that, they might not form particularly well, if at all. It would be really interesting to try, though.
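
To make the idea of tuning the physics concrete, here is one hypothetical way a feed-ranking function might discount raw popularity by the author’s reach, so that hubs get no built-in amplification. This is my own sketch, not anything Centola (or any existing platform) proposes:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_followers: int  # a crude proxy for the author's network centrality
    engagements: int       # likes/shares/replies so far
    age_hours: float

def egalitarian_score(post: Post, damping: float = 1.0) -> float:
    # Engagement per follower rewards resonance rather than reach: a post
    # from a small account that delights its readers can outrank a middling
    # post from a hub. 'damping' sets how strongly hubs are discounted.
    reach_penalty = (1 + post.author_followers) ** damping
    freshness = 1 / (1 + post.age_hours)  # simple recency decay
    return (post.engagements / reach_penalty) * freshness

# A middling post from a million-follower hub vs a resonant post from a small account:
print(egalitarian_score(Post(1_000_000, 5000, 2.0)))  # ~0.0017
print(egalitarian_score(Post(200, 40, 2.0)))          # ~0.066 - the small account wins
```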

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6840452/why-social-media-make-us-more-polarized-and-how-to-fix-it-scientific-american

Facebook Conducts 'Mass Censorship' of Climate Activists

https://earther.gizmodo.com/facebook-follows-up-vow-to-fight-climate-change-with-ma-1845139884

Immediately following its newly announced (and typically self-serving and cynical) initiative to uplift climate science on its site, Facebook showed its dedication to the cause by removing hundreds of climate change activist, indigenous, and social justice groups (and their posts) from the site.

Sigh.

Facebook claimed it was a ‘random accident’ when challenged. Oops. Silly them.

It is good that Facebook is at least paying some attention to righting wrongs it has helped to create, however misguided its efforts, however deliberately quiet it may be about problems in which it plays a significant role, and however meagre its lip service might be. However, as the article states, “If Facebook actually want to address the climate crisis, not censoring environmental activism, being stooges for gas companies, and allowing conspiracies to spread seems like a good place to start. Which is maybe why it hasn’t take[n] those steps in the first place.”

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6669125/facebook-conducts-mass-censorship-of-climate-activists

Skills lost due to COVID-19 school closures will hit economic output for generations (hmmm)

This CBC report is one of many dozens of articles in the world’s press highlighting one rather small but startling assertion in a recent OECD report on the effects of Covid-19 on education: that the ‘lost’ third of a year of schooling in many countries will lead to a lasting drop in GDP of 1.5% across the world. Though the report contains many more fascinating and useful insights that are far more significant and helpful, it does make this assertion quite early on and repeats it for good measure, so it is not surprising that journalists have jumped on it. It is important to observe, though, that the reasoning behind it is based on a model developed by Hanushek and Woessman over several years, and on an unpublished article by the authors, that tries to explain variations in global productivity according to the amount and – far more importantly – the quality of education: the claim is that long-run productivity is a direct consequence of the cognitive skills (or knowledge capital) of a nation, which can be mapped directly to how well and how much the population is educated.
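
For what it is worth, the knowledge-capital argument is usually expressed as a cross-country growth regression of roughly the following form; this is my paraphrase of the general approach, not the authors’ exact specification:

```latex
% A rough sketch of a knowledge-capital growth regression (my paraphrase,
% not Hanushek and Woessman's exact specification):
g_c = \alpha + \beta\, S_c + \gamma \ln(y_{c,0}) + \varepsilon_c
% where, for each country c:
%   g_c      long-run growth rate of GDP per capita
%   S_c      average cognitive skills (e.g. standardized test scores)
%   y_{c,0}  initial GDP per capita (a conditional-convergence term)
```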

As an educator I find this model, at a glance, to be reassuring and confirmatory, because it suggests that we do actually have a positive effect on our students. However, there are a few grounds on which it might be challenged (disclaimer: this is speculation).

The first and most obvious is that correlation does not equal causation. The fact that countries that invest in improving education consistently see matching productivity gains in the years to come is interesting, but it raises the question of what led to that investment in the first place, and whether that might be the ultimate cause, not the education itself. A country that invests in increasing the quality of education would normally be doing so as a result of values and circumstances that may lead to other consequences and/or be enabled by other things (such as rising prosperity, competition from elsewhere, a shift to more liberal values, and so on).

The second objection might be that, sure, increased quality of education does lead to greater productivity, but it is not the educational process as such that is causing it. Perhaps, for instance, an increased focus on attainment raises aspirations.

A further objection might be that their definition of ‘quality’ does not measure what they think it measures. A brief skim of the model suggests that it makes extensive use of scores from the likes of TIMSS, PIRLS, and PISA: standardized test approaches used to compare educational ‘effectiveness’ in different regions that embody quite a lot of biases, are often manipulated at a governmental level, and, as I have mentioned once or twice before, are extremely dubious indicators of learning. In fact, even when they are not manipulated, they may indicate willingness to comply with the demands of the powerful more than learning (does that improve GDP? Probably).

Another objection might be that absence of time spent in school does not equate to absence of education. Indeed, Hanushek and Woessman’s central thesis is that it is not the amount but the quality of schooling that matters, so it seems bizarre that they might fall back on quantifying learning by time spent in school. We know for sure that, though students may not have been conforming to curricula at the rate desired by schools and colleges, they have not stopped learning. In fact, in many ways and in many places, there are grounds to believe that there have been positive learning benefits: better family learning, more autonomy, more thoughtful pedagogies, more intentional learning community forming, and so on. Out of this may spring a renewed focus on how people learn and how best to support them, rather than maintaining a system that evolved in mediaeval times to support very different learning needs, and that is so solidly packed with counter-technologies, and so embedded in so many other systems that have nothing to do with learning, that we have lost sight of the parts that actually matter. If education improves as a result, then (if it is true that better and more education improves the bottom line) we may even see gains in GDP.

I expect that there are other reasons for doubt: I have only skimmed the surface of the possible concerns.

I may be wrong to be sceptical – in fairness, I have not read the many papers and books produced by Hanushek and Woessman on the subject, I am not an economist, nor do I have sufficient expertise (or interest) to analyze the regression model that they use. Perhaps they have fully addressed such concerns in that unpublished paper, and the simplistic cause-effect prediction distorts their claims. But, knowing a little about complex adaptive systems, my main objection is that this is an entirely new context, to which models that have worked before may no longer apply; even if they do, there are countless other factors that will affect the outcome in both positive and negative ways, so this is not so much a prediction as an observation about one small part of a small part of a much bigger emergent change that is quite unpredictable. I am extremely cautious at the best of times whenever I see people attempting to find simple, linear, causal relationships of this nature, especially when they are so precisely quantified, when past indicators are applied to something wholly novel with such widespread effects, and when the relationships involved are complex at every level, from the individual to the national. I’m glad they are telling the story – it is an interesting one that no doubt contains grains of important truths – but it is an informative story, not predictive science. The OECD has a bit of a track record of this kind of misinterpretation, especially in education. This is the same organization that (laughably, if it weren’t so influential) claimed that educational technology in the classroom is bad for learning. There’s not a problem with the data collection or analysis, as such. The problem is with the predictions and recommendations drawn from them.

Beyond methodological worries, though, even if their predictions about GDP are correct (I am pretty sure they are not – there are too many other factors at play, including huge ones like the destruction of the environment, which makes the odd 1.5% seem like a drop in the ocean), it might be a good thing. It may be that we are moving – rather reluctantly – into a world in which GDP serves as an even less effective measure of success than it already is. There are already plentiful reasons to find it wanting, from its poor consideration of ecological consequences, to its wilful blindness to (and causal effect upon) inequalities, to its simple inadequacy to capture the complexity and richness of human culture and wealth. I am a huge fan of the state of Bhutan’s rejection of GDP, which it has replaced with the GNH happiness index. The GNH makes far more sense, and it is what has led Bhutan to be one of the only countries in the world to be carbon positive, as well as being (arguably, but with evidence to back the claim) one of the happiest countries in the world. What would you rather have: money (at least for a few, probably not you), or happiness and a sustainable future? For Bhutan, education is not for economic prosperity: it is about improving happiness, which includes good governance, sustainability, and the preservation (but not ossification) of culture.

Many educators – and I am very definitely one of them – share Bhutan’s perspective on education. I think that my customer is not the student, or a government, or companies, but society as a whole, and that education makes (or should make) for happier, safer, more inventive, more tolerant, more stable, more adaptive societies, as well as many other good things. It supports dynamic meta-stability and thus the evolution of culture. It is very easy to lose sight of that goal when we have to account to companies, governments, other institutions, and to so many more deeply entangled sets of people with very different agendas and values, not to mention our inevitable focus on the hard methods and tools of whatever it is that we are teaching, as well as the norms and regulations of wherever we teach it. But we should not ever forget why we are here. It is to make the world a better place, not just for our students but for everyone. Why else would we bother?

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6578662/skills-lost-due-to-covid-19-school-closures-will-hit-economic-output-for-generations-hmmm

How Assessment is Changing in The Digital Age – Five Guiding Principles | teachonline.ca

This article from teachonline.ca draws from a report by JISC (the UK academic network organization) to provide 5 ‘principles’ for assessment. I put the scare quotes around ‘principles’ because they are mostly descriptive labels for trends, and they are woefully non-inclusive. There is also a subtext here – one that I do understand is incredibly hard to avoid, because I failed to fully avoid it myself in my own post last week – that assessment is primarily concerned with proving competence for the sake of credentials (it isn’t). Given those caveats, though, most of what is written here makes some sense.

Principle 1: authentic assessment. I completely agree that assessment should at least partly be of authentic activities. It is obvious how that plays out in applied disciplines with a clear workplace context. If you are learning how to program, for instance, then of course you should write programs that have some value in a realistic context, and it goes without saying that you should assess the same. This includes aspects of the task that we might not traditionally assess in a typical programming course, such as analysis, user experience testing, working with others, interacting with StackOverflow, sharing via GitHub, copying code from others, etc. It is less obvious in the case of something like, say, philosophy, or history, or Latin, though, or, indeed, in any subject that is primarily found in academia. Authentic assessment for such things would probably be an essay or conference presentation, or perhaps some kind of argument, most of the time, because that’s what real life is like for most people in such fields (whether that should be the case remains an open issue). We should be wary, though, of making this the be-all and end-all, because there’s a touch of behaviourism lurking behind the idea: can the student perform as expected? There are other things that matter. For instance, I think that it is incredibly important to reflect on any learning activity, even though that might not mirror what is typically done in an authentic context. It can significantly contribute to learning, and it can also reveal things that may not be obvious when we judge what is done in an authentic context, such as why people did what they did, or whether they would do it the same way again. There may also be stages along the way that are not particularly authentic but that contribute to learning the hard skills needed in order to perform effectively in the authentic context: learning a vocabulary, for example, or doing something dangerous in a cut-down, safe environment. We should probably not summatively assess such things (they should rarely contribute to a credential, because they do not demonstrate applied capability), but formative assessment – including of this kind of activity – is part of all learning.

Principle 2: accessible and inclusive assessment. Well, duh. Of course this should be how it is done. Not so much a principle as plain common decency. Was this not ever so? Yes it was. Only an issue when careless people forget that some media are less inclusive than others, or that not everyone knows or cares about golf. Nothing new here.

Principle 3: appropriately automated assessment. This is a reaction to bad assessment, not a principle for good assessment. There is a principle that really matters here, but it is not appropriate automation: it is that assessment should enhance and improve the student experience. Automation can sometimes do that. It is appropriate for some kinds of formative feedback (see the examples of non-authentic learning above) but for very little else, which, in the context of this article (which implicitly focuses on the final judgment), means it is a bad idea to use it at all.

Principle 4: continuous assessment. I don’t mind this one at all. Again, the principle is not what the label claims, though. The principle here is that assessment should be designed to improve learning. For sure, if it is used as a filter to sort the great from the not great, then the filter should be authentic which, for the most part, means no high stakes, high stress, one-chance tests, and that overall behaviours and performance over time are what matters. However, there is a huge risk of therefore assessing learning in progress rather than capability once a course is done. If we are interested in assessing competence for credentials, then I’d rather do it at the end, once learning has been accomplished (ignoring the inconvenient detail that this is not a terminal state and that learning must always undergo ever-dynamic renewal and transformation until the day we die). Of course, the work done along the way will make up the bulk of the evidence for that final judgment but it allows for the fact that learning changes people, and that what we did early on in the journey seldom represents what we are able to do in the light of later learning.

Principle 5: secure assessment. Why is this mentioned in an article about assessment in the digital age? Is cheating a new invention? Was it (intentionally) insecure before? This is just a description of how some people have noticed that traditional forms of assessment are really dumb in a context that includes Wikipedia, Google, and communications devices the size of a peanut. Pointless, and certainly not a new principle for the Digital Age. In fairness, if the principles above are followed in spirit as well as in letter, it is not likely to be a huge issue but, then, why make it a principle? It’s more a report on what teachers are thinking and talking about.

The summary is motherhood and apple pie, albeit that it doesn’t entirely fall out from the principles (choice over when to be assessed, or peer assessment, for instance, are not really covered in the principles, though they are very good ideas).

I’m glad that people are sharing ideas about this but I think that there are more really important principles than these: that students should have control over their own assessment, that it should never reward or punish, that it should always support learning, and so on. I wrote a bit about this the other day, and, though that is a work in progress, I think it gets a little closer to what actually matters than this.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6531701/how-assessment-is-changing-in-the-digital-age-five-guiding-principles-teachonlineca

How social media platforms could flatten the curve of dangerous misinformation.

A simple article on a simple idea, which is to introduce brakes and/or circuit breakers to popular social media platforms in order to slow down viral posts to a speed that sysadmins can handle. Such posts can have deadly consequences and are often far from innocently made. The article mentions cases such as the Plandemic video (a tissue of lies and misinformation intended to discourage mask use and distancing) that received 8 million views in a week before being removed by all major social platforms, or a video funded by ‘dark’ money, called America’s Frontline Doctors, that pushed hydroxychloroquine as a Covid-19 treatment and hit 20 million views on Facebook in just 12 hours through targeted manipulation of algorithms and deliberate promotion by influential accounts. It would take a large army of human police to identify and contain every instance of that kind of malevolent post before it hit exponential growth, so some kind of automated brake is needed.

Brakes (negative feedback loops and delays) are a good idea. They are a fundamental feature of complex adaptive systems, and of cybernetic systems in general. Your own body contains a great many of them; they exist from the level of ecosystems down to cellular organelles and, from human organizations to cities to whole cultures, they serve the critical function of maintaining metastability. If everything happened at once, there’s a fair chance that nothing would happen at all. But it has to be the right amount of delay. Too little and the system flies off into chaos, never reaching even an approximately stable state. Too much and it either oscillates unstably between extremes or, if taken too far, destroys or stops the system altogether. Positive feedback loops must be balanced by negative feedback loops, and vice versa. Any boundaried entity in a stable complex adaptive system has evolved (or, in human systems, may have been designed) to have the right amount of delay in the context of the rest of the system. It has to be that way or the system would not persist: when delays change, so do systems. This inherent fragility is what the bad actors are exploiting: they have found a way to bypass the usual delays that keep societies stable. But what is ‘right’ in the context of viral posts, which are part of a much larger ecosystem that contains bad actors hidden among legitimate agents? Clearly the brake has to respond at least nearly as fast as the positive feedback loop itself is growing, or it will be too late, which seems to imply that mechanization must be involved. The algorithm, such as the one described in the article, might not need to be too complex. Some kinds of growth can be stunted through tools like downvotes, reports of abuse, and the like, and most social technologies have at least a bit of negative feedback built in. However, it is seldom in the provider’s interest to make that as powerful as the positive feedback, for all sorts of reasons, many quite legitimate – we don’t have a thumbs-down option on the Landing, for instance, because we want to accentuate the positive to help foster a caring community, and down-voting motives are not always clear or pure.
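
To illustrate the mechanism (my toy model, not the article’s), here is a minimal circuit breaker that trips when a post’s hourly growth in views exceeds a threshold, pausing amplification for a human-review window; all the numbers are hypothetical:

```python
# A toy viral-growth circuit breaker: views compound exponentially until the
# hourly growth trips the breaker, which pauses amplification for review.
GROWTH_PER_HOUR = 1.8    # while amplified, views multiply by this each hour
TRIP_THRESHOLD = 10_000  # new views per hour that trips the breaker (hypothetical)
REVIEW_HOURS = 6         # how long amplification is paused for human review

views, paused_until = 100.0, -1
for hour in range(24):
    if hour < paused_until:
        print(f"hour {hour:2d}: {int(views):>9,} views (breaker open, under review)")
        continue
    new_views = views * (GROWTH_PER_HOUR - 1)
    views += new_views
    tripped = new_views > TRIP_THRESHOLD
    if tripped:
        paused_until = hour + 1 + REVIEW_HOURS
    print(f"hour {hour:2d}: {int(views):>9,} views" + (" -> breaker trips" if tripped else ""))
```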

However, a simple rule-driven system alone would probably be a bad idea. There are times when rapid, exponential, positive feedback loops should be allowed to spread in order to keep the system intact: in real disasters, for example, where time and reach are of the essence in spreading a warning, or in outpourings of support for the victims of such disasters. There are also perfectly innocuous viral posts – indeed, they are likely the majority. At the least, therefore, humans should be involved in putting their feet on the brakes, because such things are beyond the ken of machines and will likely remain so. Machines cannot yet (and probably never will) know what it means to live as a human being in a human society – they simply don’t have a stake in the game – and even the best of AIs are really bad at dealing with novel situations, matters of compassion, or outliers, because they don’t have (and cannot have) enough experience of the right kind, or the imagination to see things differently, especially when people are deliberately trying to fool them. On the other hand, humans have biases which, as often as not, are part of the problem we are trying to solve, and which can themselves be influenced in many ways. This seems to me to be a perfect application for crowd wisdom. If automated alerts – partly machine-determined, partly crowd-driven – are sent to many truly randomly selected people from a broad sample (something like Mechanical Turk, but less directed), and those people have no way of knowing what the others are deciding, and each casts a vote on whether to trigger the brakes, it might give us the best of both worlds. This kind of thing spreads through networks of people, so it is appropriate that it can be destroyed by sets of people.
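
A minimal sketch of such a crowd-triggered brake, under my own assumptions about how the votes are gathered (a truly random jury, voting independently and anonymously, with a simple-majority threshold):

```python
import random

def crowd_circuit_breaker(user_ids: list, vote_fn, jury_size: int = 100,
                          threshold: float = 0.5) -> bool:
    # Draw a random jury; each member votes without seeing anyone else's
    # decision; a simple share-of-votes threshold applies the brakes.
    jury = random.sample(user_ids, jury_size)
    votes_to_brake = sum(1 for uid in jury if vote_fn(uid))
    return votes_to_brake / jury_size >= threshold

# Hypothetical usage: in a real system, vote_fn would ask a human to review
# the flagged post. Here it is simulated with a 70% 'slow it down' rate.
users = list(range(1_000_000))
simulated_vote = lambda uid: random.random() < 0.7
print(crowd_circuit_breaker(users, simulated_vote))  # usually True
```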

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6530631/how-social-media-platforms-could-flatten-the-curve-of-dangerous-misinformation

Facebook is a 'parallel universe' of lies and misinformation crafted to deliver the election to Trump

This is a commentary by Rob Beschizza at Boing Boing on a New York Times article describing how the far right is exploiting Facebook with ruthless efficiency. At least, that’s one way to look at it. Another, as Beschizza notes, is:

“…that Facebook’s cultivation of these audiences is intentional, simply because a Democratic congress and president would present a more potent threat to Facebook than Trump and his cronified GOP ever will. It’s no secret that Zuckerberg is more concerned with conservative critics than progressive ones, a concern often cast as fear but could just as well be because that’s who he and his team wants to please. The right will carp but it knows it rules Facebook from the inside out. Only the left talks seriously about breaking it up.”

That’s an interesting take on things. It would not surprise me if it were true but, if this is what Facebook is doing, it is most likely doing it through a deliberate failure to dampen virality rather than through more obvious algorithm tuning. It would not be a moral compass that holds it in check – it has no moral compass – but plausible deniability.

I’ve said it before and I will keep saying it: DO NOT USE FACEBOOK. Stop it. Really. If you must use it, use it in a special isolated tab in Firefox (you can get the plugin here), or use a different browser solely for that purpose. Then get out of it as soon as you can. As for WhatsApp, Instagram, or any other tool, device, or app that Facebook owns, just say no (or, if you must use them, NEVER use Facebook itself). It bugs the hell out of me that my avoidance of Facebook means I am unable to see a lot of useful things people have witlessly shared on this closed and malicious platform, and I am so sad that the formerly great WhatsApp and Instagram apps, despite the contracts under which they were sold that were meant to stop exactly such abuse, are now just slimy Facebook tentacles. However, I refuse to willingly feed my identity to the Devil that has tried (with too much success) to destroy the Web, and that feeds – and feeds on – the darkness in people’s souls for its own profit. Full disclosure: I do have an account, as well as Instagram and WhatsApp accounts, but they are for research purposes. Sometimes you have to take risks in order to learn.

The original article notes some caveats, including that the massive disparity between far-right posts and those of the rest of the world has only been demonstrated in public posts, and that it might include a fair number of legitimate hate-shares. The latter can be significant: I am not sure whether I am typical, but I certainly share at least as many articles of which I disapprove as those that I like. Whether this is a good thing or not is very much up for debate (e.g. see here, here, here, and here).

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6494864/facebook-is-a-parallel-universe-of-lies-and-minisformation-crafted-to-deliver-the-election-to-trump

Evaluating assessment

A group of us at AU have begun discussions about how we might transform our assessment practices in the light of the far-reaching AU Imagine plan and principles. This is a rare and exciting opportunity to bring about radical and positive change in how learning happens at the institution. Hard technologies influence soft ones more than vice versa, and assessments (particularly when tied to credentials) tend to be among the hardest of all technologies in any pedagogical intervention. They are therefore a powerful lever for change. Equally, and for the same reasons, they are too often the large, slow, structural elements that infest systems and stunt progress and innovation.

Almost all learning must involve assessment, whether it be of one’s own learning, or provided by other people or machines. Even babies constantly assess their own learning. Reflection is assessment. It is completely natural, and it only gets weird when we treat it as a summative judgment, especially when we add grades or credentials to the process, thus normally changing the purpose of learning from achieving competence to achieving a reward. At best this distorts learning, making it seem like a chore rather than a delight; at worst it destroys it, even (and perhaps especially) when learners successfully comply with the demands of assessors and get a good grade. Unfortunately, that’s how most educational systems are structured, so the big challenge for all teachers must be to eliminate, or at least massively reduce, this deeply pernicious effect. A large number of the pedagogies that we most value are designed to solve problems that are directly caused by credentials. These pedagogies include assessment practices themselves.

With that in mind, before the group’s first meeting I compiled a list of some of the main principles that I adhere to when designing assessments, most of which are designed to reduce or eliminate the structural failings of educational systems. The meeting caused me to reflect a bit more. This is the result:

Principles applying to all assessments

  • The primary purpose of assessment is to help the learner to improve their learning. All assessment should be formative.
  • Assessment without feedback (teacher, peer, machine, or self) is judgement, not assessment, and it is pointless.
  • Ideally, feedback should be direct and immediate or, at least, as prompt as possible.
  • Feedback should only ever relate to what has been done, never the doer.
  • No criticism should ever be made without also at least outlining steps that might be taken to improve on it.
  • Grades (with some very rare minor exceptions where the grade is intrinsic to the activity, such as some gaming scenarios or, arguably, objective single-answer quizzes with T/F answers) are not feedback.
  • Assessment should never ever be used to reward or punish particular prior learning behaviours (e.g. use of exams to encourage revision, grades as goals, marks for participation, etc).
  • Students should be able to choose how, when and on what they are assessed.
  • Where possible, students should participate in the assessment of themselves and others.
  • Assessment should help the teacher to understand the needs, interests, skills, and gaps in knowledge of their students, and should be used to help to improve teaching.
  • Assessment is a way to show learners that we care about their learning.

Specific principles for summative assessments

A secondary (and always secondary) purpose of assessment is to provide evidence for credentials. This is normally described as summative assessment, implying that it assesses a state of accomplishment when learning has ended. That is a completely ridiculous idea. Learning doesn’t end. Human learning is not in any meaningful way like programming a computer or storing stuff in a database. Knowledge and skills are active, ever-transforming, forever actively renewed, reframed, modified, and extended. They are things we do, not things we have.

With that in mind, here are my principles for assessment for credentials (none of which supersede or override any of the above core principles for assessment, which always apply):

  • There should be no assessment task that is not in itself a positive learning activity. Anything else is at best inefficient, at worst punitive/extrinsically rewarding.
  • Assessment for credentials must be fairly applied to all students.
  • Credentials should never be based on comparisons between students (norm-referenced assessment is always, unequivocally, and irredeemably wrong).
  • The criteria for achieving a credential should be clear to the learner and other interested parties (such as employers or other institutions), ideally before it happens, though this should not forestall the achievement and consideration of other valuable outcomes.
  • There is no such thing as failure, only unfinished learning. Credentials should only celebrate success, not punish current inability to succeed.
  • Students should be able to choose when they are ready to be assessed, and should be able to keep trying until they succeed.
  • Credentials should be based on evidence of competence and nothing else.
  • It should be impossible to compromise an assessment by revealing either the assessment or solutions to it.
  • There should be at least two ways to demonstrate competence, ideally more. Students should only have to prove it once (though they may do so in many ways and many times, if they wish).
  • More than one person should be involved in judging competence (at least as an option, and/or on a regularly taken sample).
  • Students should have at least some say in how, when, and where they are assessed.
  • Where possible (accepting potential issues with professional accreditation, credit transfer, etc), students should have some say over which competencies are assessed, and over their weighting and/or outcomes.
  • Grades and marks should be avoided except where mandated elsewhere. Even then, all passes should be treated as an ‘A’ because students should be able to keep trying until they excel.
  • Great success may sometimes be worthy of an award – e.g. a distinction – but such an award should never be treated as a reward.
  • Assessment for credentials should demonstrate the ability to apply learning in an authentic context. There may be many such contexts.
  • Ideally, assessment for credentials should be decoupled from the main teaching process, because of risks of bias, the potential issues of teaching to the test (regardless of individual needs, interests and capabilities) and the dangers to motivation of the assessment crowding out the learning. However, these risks are much lower if all the above principles are taken on board.

I have most likely missed a few important issues, and there is a bit of redundancy in all this, but this is a work in progress. I think it covers the main points.

Further random reflections

There are some overriding principles and implied specifics in all of this. For instance, respect for diversity, accessibility, respect for individuals, and recognition of student control all fall out of or underpin these principles. It implies that we should recognize success, even when it is not the success we expected, so outcome harvesting makes far more sense than measurement of planned outcomes. It implies that failure should only ever be seen as unfinished learning, not as a summative judgment of terminal competence, so appreciative inquiry is far better than negative critique. It implies flexibility in all aspects of the activity. It implies, above and beyond any other purpose, that the focus should always be on learning. If assessment for credentials adversely affects learning then it should be changed at once.

In terms of implementation, while objective quizzes and their cousins can play a useful formative role in helping students to self-assess and to build confidence, machines (whether implemented by computers or rule-following humans) should normally be kept out of credentialling. There’s a place for AI but only when it augments and informs human intelligence, never when it behaves autonomously. Written exams and their ilk should be avoided, unless they conform to or do not conflict with all the above principles: I have found very few examples like this in the real world, though some practical demonstrations of competence in an authentic setting (e.g. lab work and reporting) and some reflective exercises on prior work can be effective.

A portfolio of evidence, including a reflective commentary, is usually going to be the backbone of any fair, humane, effective assessment: something that lets students highlight successes (whether planned or not), that helps them to consolidate what they have learned, and that is flexible enough to demonstrate competence shown in any number of ways. Outputs or observations of authentic activities are going to be important contributors to that. My personal preference in summative assessment is to judge success only against the intended (including student-generated) and/or harvested outcomes, not against mandated assignments. This gives flexibility, it works for every subject, and it provides unequivocal and precise evidence of success. It’s also often good to talk with students, perhaps formally (e.g. in a presentation or oral exam), in order to tease out what they really know and to give instant feedback. It is worth noting that, unlike written exams and their ilk, such methods are actually fun for all concerned, albeit that the pleasure comes from solving problems and overcoming challenges, so it is seldom easy.

Interestingly, there are occasions in traditional academia where these principles are, for the most part, already widely applied. A typical doctoral thesis/dissertation, for example, often comes quite close (especially in more modern professional forms that put more emphasis on recording the process), as do some student projects. We know that such things are a really good idea, and that they lead to far richer, more persistent, more fulfilling learning for everyone. We do not do them ubiquitously for reasons of cost and time. It does take a long time to assess something like this well, and it can take more time during the rest of the teaching process, thanks to the personalization (real personalization, not the teacher-imposed form popularized by learning analytics aficionados) and extra care that it implies. It is an efficient use of our time, though, because of its active contribution to learning, unlike a great many traditional assessment methods such as teacher-set assignments (minimal contribution) and exams (negative contribution). A lot of the reason for our reluctance, though, is the typical university’s schedule and class timetabling, which makes everything pile on at once in an intolerable avalanche of submissions. If we really take autonomy and flexibility on board, it doesn’t have to be that way. If students submit work when it is ready to be submitted, if they are not all working in lock-step, and if the work is one of love rather than compliance, then assessment is often a positively pleasurable task, and it is naturally staggered. Yes, it probably costs a bit more time in the end (though there are plenty of ways to mitigate that, from peer groups to pedagogical design) but every part of it is dedicated to learning, and the results are much better for everyone.

Some useful further reading

This is a fairly random selection of sources that relate to the principles above in one way or another. I have definitely missed a lot. Sorry for any missing URLs or paywalled articles: you may be able to find downloadable online versions somewhere.

Boud, D., & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399-413. Retrieved from https://www.jhsph.edu/departments/population-family-and-reproductive-health/_docs/teaching-resources/cla-01-aligning-assessment-with-long-term-learning.pdf

Boud, D. (2007). Reframing assessment as if learning were important. Retrieved from https://www.researchgate.net/publication/305060897_Reframing_assessment_as_if_learning_were_important

Cooperrider, D. L., & Srivastva, S. (1987). Appreciative inquiry in organizational life. Research in organizational change and development, 1, 129-169.

Deci, E. L., Vallerand, R. J., Pelletier, L. G., & Ryan, R. M. (1991). Motivation and education: The self-determination perspective. Educational Psychologist, 26(3/4), 325-346.

Hussey, T., & Smith, P. (2002). The trouble with learning outcomes. Active Learning in Higher Education, 3(3), 220-233.

Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A’s, praise, and other bribes (Kindle ed.). Mariner Books. (this one is worth forking out money for).

Kohn, A. (2011). The case against grades. Educational Leadership, 69(3), 28-33.

Kohn, A. (2015). Four Reasons to Worry About “Personalized Learning”. Retrieved from http://www.alfiekohn.org/blogs/personalized/ (check out Alfie Kohn’s whole site for plentiful other papers and articles – consistently excellent).

Reeve, J. (2002). Self-determination theory applied to educational settings. In E. L. Deci & R. M. Ryan (Eds.), Handbook of Self-Determination research (pp. 183-203). Rochester, NY: The University of Rochester Press.

Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. Guilford Publications. (may be worth paying for if such things interest you).

Wilson-Grau, R., & Britt, H. (2012). Outcome harvesting. Cairo: Ford Foundation. http://www.managingforimpact.org/sites/default/files/resource/outome_harvesting_brief_final_2012-05-2-1.pdf.

The perfect Ploughman’s Lunch

A very acceptable ploughman’s lunch, served in the ancient town of Lewes.

Life in Canada is, generally speaking, wonderful compared with life in the UK but, as well as the loved ones I miss, there are a few everyday things that I yearn for from the old country, most often connected with food or drink. I am reminded on this sunny day in late August of the meal that I miss most: the Ploughman’s Lunch. I almost wrote ‘the traditional Ploughman’s Lunch’, but Ploughman’s Lunches are not really traditional, though they are loosely based on what rural workers ate and drank for centuries. The term and the dish were invented by a cheese marketing board in the late 1950s, and enthusiastically promoted for sale in pubs – usually with a glass of ale or cider – as a more substantial and satisfying alternative to traditional pies and sandwiches, but one just as easy for inexpert pub staff with no proper kitchen to prepare. They have been almost ubiquitous in English pubs for my entire life. I know of a handful of pubs in Metro Vancouver that purport to serve them but, without exception, even when they make a good job of other British dishes like bangers and mash, toad in the hole, Shepherd’s Pie, and so on, they are not even close to the real thing and are almost guaranteed to disappoint. A couple of ‘British’ pubs in Victoria get pretty close, with something you might find in a mediocre chain pub in the UK, but none do it right.

So here, in case any pub owners in Vancouver or the surrounding area ever get to read it, is how you do it right…

Two or three really thickly cut slices or wedges of crusty bread (but not the sort of crust that will break your teeth). Not toast (though very light toasting may be OK if the bread is not completely fresh), not thin-sliced bread, no fancy flavours or additions, definitely not wraps or flatbreads. The perfect bread is an English cottage loaf or similar, wholemeal or otherwise. Sourdough is acceptable, a baguette works if it is really fresh, or a country loaf inspired by French, Italian, or similar traditions can substitute well. A nice roll – ciabatta or similar – may do at a pinch (that is what is in the image above, and it was quite pleasant). Not optional.

A small tub of soft (but not too soft) cultured butter (never margarine, never uncultured butter – almost all butter served in the UK is cultured, but it is not the default here in Canada). A handful of foil packets of Anchor butter or similar are acceptable and commonplace (see image above). Not optional.

A substantial wedge of sharp, well-aged Cheddar or similar hard cheese. Shouldn’t be the shrink-wrapped supermarket variety unless you cannot find anything better. Stilton is a good alternative/addition (as in the image above). Not optional.

A wedge or two of Melton Mowbray pork pie (the best) or other meat product such as the sausage in the image above. Optional.

A generous, thick-cut slice or two of slightly chewy baked ham. Never, ever, ever, substitute the pasty gelatinous mechanically recovered slices that come in plastic boxes at your local supermarket, or turkey slices (though a good chunk of smoked turkey from a delicatessen works well), or thin-sliced charcuterie meats, pastrami, salami, etc. Optional.

A dob of intensely hot Colman’s English mustard or similar. Only needed (in tiny quantities) if you are having the ham, pork pie, sausage, etc. Don’t substitute Dijon, German, grainy, or other mild, vinegary alternatives unless you really can’t stand the intensity of proper English mustard.

A pickled onion or two. Not optional. I prefer strong pickled onions but medium strength will do. Do not substitute cocktail onions, or mild pickled onions, and especially do not even consider substituting dill pickles or gherkins. How could you even think of such a thing? I’m looking at you, Vancouver pubs.

Plenty of proper, dark brown, crunchy, chunky, richly flavoured Branston pickle (not the weird light-brown goop that tends to be sold in many Canadian stores – many supermarkets now stock the real thing, made by Crosse & Blackwell, usually in the British part of the ‘ethnic food’ section). Heinz, Waitrose, or Marks & Spencer Ploughman’s pickle will do at a pinch. Not optional. Do not substitute chutney or other sweet goo, especially if flavoured with cinnamon or other strong, fragrant spices. A really good, sharp, crunchy mango chutney, though, with a not-too-sweet sauce might be OK if you really hate Branston.

Some good, sharp, bright, chunky, crunchy piccalilli. Optional (good if you are having the ham or pork pie).

A fairly plain, leafy salad with lettuce, tomatoes, some red or other mild onion, maybe some cucumber, perhaps a sprig or two of parsley as a garnish. Go light on the dressing, if you use any at all. Mayonnaise may be provided on the side. Not optional. Do not substitute coleslaw, exotic leaves, potato salad, caesar salad, etc. Keep it simple, keep the ingredients distinct.

A few other ingredients may be added or, sometimes, substituted to taste, such as a Scotch egg, a gherkin or two, perhaps some coleslaw, maybe a British banger or similar sausage, maybe a boiled or pickled egg, possibly some black pudding or liver sausage/pate, perhaps a bit of game pie instead of pork pie; certainly an alternative or additional hard or semi-hard cheese or two (Caerphilly, Red Leicester, Wensleydale, Cheshire, Gloucester, etc); perhaps a slice or two of thickly cut cold roast beef (horseradish optional, otherwise mustard) or cold roast lamb (with mustard or mint sauce), or maybe a chicken or turkey drumstick (lovely with Branston) instead of ham; perhaps a chutney (as well as, not instead of, Branston), maybe some pickled beetroot, perhaps some watercress, radish, cress, celery, etc in the salad.

Never serve any ingredients hot, though the bread can benefit from being a little warm.

Don’t overdo it. The best Ploughman’s lunches tend to keep things fairly simple, with two or three main proteins, in chunks or wedges or thick slices, good bread, a simple salad, a pickled onion, and Branston or Ploughman’s Pickle, with only a smattering of signature embellishments to complement the main centrepieces.

Absolutely essential, and not optional, it must be paired with the right drink…

The perfect accompaniment is real English bitter, pulled from a barrel (cask), never from a keg, bottle, or can, served at cellar temperature (not warm, not room temperature, certainly not cold, just a little cool), with the lightest ring of froth, not completely flat but with no visible bubbles (the texture of velvet), and no more than 4% alcohol. A mild or IPA will do just as well, though remember that, in England, a really strong IPA is around 4.5%. You could substitute Guinness or similar stout if you wish. If that’s not possible, a cask-style nitro can (Kilkenny is the most common brand sold here) is better than nothing, though not ideal. Avoid anything with bubbles. If you don’t like beer, scrumpy (the real stuff, never fizzy, always cloudy, always dry), or a proper French cider or similar will do. Beware the alcohol content of real scrumpy, if you can find it here: you can drink it as easily as orange juice, and it hardly tastes alcoholic at all, but it will flatten you faster than hard liquor. Red wine is acceptable. If you don’t want alcohol, a glass of real lemonade is not a bad substitute, or perhaps a jug of water with a slice of cucumber or mint, or maybe a lime juice cordial or lemon barley water. Avoid anything sweet or fizzy or very strongly flavoured, unless you are sure the flavour will complement the dish (red wine is normally good because it cuts through the fatty, protein rich ingredients, much like the pickle components of the meal).

Serve it on a large, plain, white china plate, with a knife, fork, and cloth napkin. Use a wooden platter if you must, but only if you are a stockbroker, social media influencer or advertising executive. Do not use slate, stone, or fancy porcelain.

Assemble and eat the ingredients in any order or combination you like. Experiment with different combinations. Use your fingers for most of it (including pickled onions, chunks of cheese, pie, meat, sausage, etc as well as the bread). Expect things to get messy. You’ll probably need that cloth napkin.

Eat it in a leafy grassy pub garden on a lazy sunny, but not too hot, day, if possible surrounded by trees, hedges, or a crumbling brick wall. A babbling brook helps. If possible, sit at an untreated wooden bench. Beware of sparrows. In times of covid, eating inside is inadvisable anyway but, if you must, find a sheltered nook.

Do not, under any circumstances, add TV screens, piped music, or music to dance to. A little live light jazz, folk, or classical music is acceptable if you can still hear the bees buzzing in the garden. If you can add the very lightest hint of cigarette and/or cigar smoke in the air, that’s a plus.

Do not expect to tip your server, do not expect your server to ask if you are still working on it, do not expect your server to clear away your plate while you are still chewing. In fact, do not necessarily expect a server at all: you might have to order and pick it up from the bar, along with your beer. This is fine.

Add perfect company, relax, and enjoy. If you finish it before you’ve had time to order a second pint then you are eating too fast, drinking too slow, or there’s something wrong with the portion size. Take your time. This is a meal to be savoured, not devoured.

Kafkaesque and Orwellian technology design

I am much indebted to the Romanian legal system for the examples it repeatedly provides of hard (rigid, inflexible, invariant) technologies enacted by human beings without the inconvenience, lack of accountability, or cost of actual machinery. I have previously used examples from two cases in which Romanian mayoral candidates were elected to office despite being dead (here, and – though the link seems dead and unarchived so I cannot confirm it – here). This, though, is the best example yet. Despite the moderately compelling evidence he provided to the court that he is alive (he appeared in person to make his case), the court decided that Constantin Reliu, 63, is, in fact, still dead, upholding its earlier decision on the subject. This Kafkaesque decision has had some quite unpleasant consequences for Reliu, who cannot get official documents, access to benefits, and so on as a result. Romania is, of course, home to Transylvania and legends of the undead. Reliu is maybe more unalive than undead, though I guess you could look at it either way.

The misapplication of hard technology

The mechanical application of rules, laws, and regulations is rarely a great idea. One advantage of human-enacted hard technologies over those that are black-boxed inside machines, though, is that, on the whole and notwithstanding the Romanian legal system, the workings of the machine are scrutable and can more easily be adapted. Even when deliberations occur (intentionally or not) in camera, the mechanism is clear to participants, albeit that it is rare for all participants to be equally adept in implementing it.

Things are far worse when such decisions are embedded in machines, as a great many A-level students in the UK are discovering at the moment. Though the results are appalling and painful in every sense – the algorithm explicitly reinforces existing inequalities and prejudices, notably disadvantaging racial minorities and poorer students – it is hard not to be at least a little amused by, say, the fact that an 18-year-old winner of the Orwell Prize for her dystopian story about the use of algorithms to sort students according to socio-economic class had her own A-level mark (in English) reduced by exactly such an algorithm for exactly such a reason. Mostly, though, such things are simply appalling and painful, with little redeeming irony apart from the occasional ‘I never thought leopards would eat MY face‘ moment. Facebook, to pick an easy target, has been unusually single-minded in its devotion to algorithms that divide, misinform, demean, and hurt, since its very beginnings. The latest – misinforming readers about Covid-19 – has had direct deadly consequences though, arguably, its role in electing the antichrist to the US presidency was way more harmful.

The ease with which algorithms can and, often, must be embedded in code is deeply beguiling. I know because I used to make extensive use of them myself, with the deliberate intent of affecting the behaviour of people who used my software. My intentions were pure: I wanted to help people to learn, and had no further hidden agendas. And I was aware of at least some of the dangers. As much as possible, I tried to move the processing from the machine to the minds of those using it and, where I could not do that, I tried to make the operation of my software as visible, scrutable, and customizable as possible (why do we so often use the word ‘transparent’ when we mean something is visible, by the way?). This also made my apps far more difficult to use – softness in technologies always demands more active thought and harder work from users. Nonetheless, my apps were made to affect people because – well – why else would there be any point in doing it?
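For concreteness, here is a minimal sketch of the sort of thing I mean by visible and customizable (all names are hypothetical; this illustrates the design principle, not code from any of my actual systems): the weights sit in plain view, belong to the user, and every score carries its own explanation.

    # Minimal sketch of a scrutable, user-customizable ranking step.
    # All names are hypothetical illustrations of the design principle.
    from dataclasses import dataclass

    @dataclass
    class RankingPreferences:
        # Visible, user-editable weights rather than a hidden black box
        popularity_weight: float = 0.3
        recency_weight: float = 0.5
        novelty_weight: float = 0.2

    def score(item, prefs):
        """Return a score AND a human-readable account of how it was made."""
        parts = {
            "popularity": prefs.popularity_weight * item["views"],
            "recency": prefs.recency_weight * item["freshness"],
            "novelty": prefs.novelty_weight * item["novelty"],
        }
        explanation = " + ".join(f"{k}={v:.2f}" for k, v in parts.items())
        return sum(parts.values()), explanation  # the why travels with the what

    # A user who wants serendipity simply turns the novelty weight up:
    total, why = score({"views": 10, "freshness": 0.9, "novelty": 0.7},
                       RankingPreferences(novelty_weight=0.8))
    print(total, "because", why)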

Finding the right balance

The Landing (my most recent major software project) is, on the face of it, a bit of an exception. It is arguably fortunate that some of my early plans for it, involving algorithmic methods like collaborative filtering and social navigation, failed to come to fruition, especially as one of the main design principles on which the Landing was based was to make the site as neutral and malleable as possible. It was supposed to be by and for its users, not for any other purpose or person, not even (like an LMS) to embed the power structures of the university (though these can emerge through path dependencies in groups). However, it is impossible to avoid this kind of shaping altogether. The Landing has quite a few structural elements that are determined by algorithms – tag clouds, recommended content, social network mining for ‘following’ recommendations, and so on – but it also embodies values in its very design. Its menu system, for instance, is based on work Terry Anderson and I did that split the social world into networks, groups, and sets, and it is meant to affect how people engage. It has a whole bunch of defaults, from default permissions to default notification settings, that are consciously intended to shape behaviour. When it does not do that kind of shaping, though, things can be much worse. The highly tool-centric and content-neutral design that puts the onus on the individual person to make sense of it is one of the reasons it is a chaotic jumble that is difficult to use, for instance.
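Even the humblest of those algorithmic elements embodies choices. A tag cloud, for example, typically uses logarithmic scaling, which deliberately flattens the advantage of the most popular tags – a value judgement hidden in a single line of arithmetic. Here is a generic sketch of the standard approach (not the Landing's actual Elgg code):

    # A generic log-scaled tag-cloud sizing function (the usual approach;
    # not the Landing's actual implementation). Log scaling is itself a
    # value judgement: it damps the advantage of the most popular tags.
    import math

    def tag_sizes(tag_counts, min_size=10, max_size=32):
        """Map tag frequencies to font sizes on a logarithmic scale."""
        lo = math.log(min(tag_counts.values()))
        hi = math.log(max(tag_counts.values()))
        span = (hi - lo) or 1.0  # avoid divide-by-zero when counts are equal
        return {
            tag: min_size + (max_size - min_size) * (math.log(n) - lo) / span
            for tag, n in tag_counts.items()
        }

    # A tag used 40x more often is nowhere near 40x bigger:
    print(tag_sizes({"learning": 120, "networks": 30, "beer": 3}))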

We need some hardness in our technologies – constraint is vital to creation, and many things are better done by machines – but each individual's needs for hardening differ from pretty much everyone else's. Hardness in machines lets us do things that are otherwise impossible, and makes many things easier, quicker, more reliable, and more consistent. This can be a very good thing, but it is just as easy – and almost inevitable – to harden things that would be better done by people, that actively cause harm, or that should be adapted to individual needs. We are all different; one size does not fit all.

Openness and control

It seems to me that a fundamental starting point for dealing with the wrong kind of hardness is knowing what is being hardened and how, and being capable of softening it if necessary. This implies that:

  • openness is essential: we must be able to see what these things are doing;
  • the ability to make changes is essential: we must be able to override or modify what they do (both requirements are sketched below).
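A minimal sketch of what those two requirements might look like in code (hypothetical names throughout; an illustration, not any real platform's implementation):

    # Minimal sketch of an open, overridable filtering pipeline.
    # Hypothetical names throughout -- an illustration of the two
    # requirements above, not any real platform's code.

    def too_long(post):  return len(post) > 280
    def all_caps(post):  return post.isupper()

    class OpenFeedFilter:
        def __init__(self):
            # Openness: the rules are ordinary, named, inspectable functions
            self.rules = {"too_long": too_long, "all_caps": all_caps}

        def explain(self, post):
            """Show exactly what each rule decided about a post."""
            return {name: rule(post) for name, rule in self.rules.items()}

        def override(self, name, rule=None):
            """Control: users can drop or replace any rule they dislike."""
            if rule is None:
                self.rules.pop(name, None)
            else:
                self.rules[name] = rule

        def allows(self, post):
            return not any(rule(post) for rule in self.rules.values())

    f = OpenFeedFilter()
    print(f.explain("FREE BEER"))  # {'too_long': False, 'all_caps': True}
    f.override("all_caps")         # the user softens the machine
    print(f.allows("FREE BEER"))   # True

The point is not the trivial rules but the shape: every rule is named and inspectable, and nothing stops the user from removing or replacing one.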

Actually messing with algorithms is complex, and it's usually complicated, which is an unholy mix. It can also be dangerous: at best breaking the machine, at worst making it more harmful than ever. The fact that we can scrutinize and make changes to our tools does not mean that we should, nor that we are actually able to exert any meaningful amount of control, unless we have the skills, time, energy, and mandate to do so. Moreover, there are often reasons we should not do so: for instance, a lot of crowd-based systems would not work at all if individual users could adjust how they operate, modified software can be used to cause deliberate harm, and so on. It seems to me, though, that having such problems is far preferable to not knowing how we are affected, and not being able to fix it. Our technologies must be open, and they must be controllable, if we are not to be lost in the mire of counter-technologies, Monkeys' Paws, and malicious machines that increasingly define our lives today.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6368257/kafkaesque-and-orwellian-technology-design

I am disgusted, outraged, furious, sickened, and irritated by this perverse research (actually, no I am not at all)

https://www.pnas.org/content/114/28/7313

I am really not at all offended in any way by this well-conducted, clearly reported, very interesting research into what the authors describe as ‘moral contagion’. The actual article title is ‘Emotion shapes the diffusion of moralized content in social networks’, by Brady et al., from 2017. If the research is valid (it seems solid), I should probably get quite a few more retweets of this bookmark than usual when it gets posted to Twitter. The findings are fascinating, and help to partly explain the success of awful, awful, awful people and their ideas in social media such as Twitter, Faecesbook, and the like.

Abstract

Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call “moral contagion.” Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them. Our results highlight the importance of emotion in the social transmission of moral ideas and also demonstrate the utility of social network methods for studying morality. These findings offer insights into how people are exposed to moral and political ideas through social networks, thus expanding models of social influence and group polarization as people become increasingly immersed in social media networks.
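The headline number compounds multiplicatively, which is worth making concrete (this is my reading of the abstract's 20%-per-word figure; the paper's full model carries more caveats):

    # Compounding the reported ~20% diffusion boost per moral-emotional
    # word (my reading of the abstract; see the paper for the full model).
    def diffusion_multiplier(words, per_word_factor=1.2):
        return per_word_factor ** words

    for k in (0, 1, 3, 5):
        print(k, "words ->", round(diffusion_multiplier(k), 2), "x")
    # 0 -> 1.0x, 1 -> 1.2x, 3 -> 1.73x, 5 -> 2.49x

By that reckoning, the five moral-emotional words in this post's title ought to be worth roughly a 2.5× boost when it hits Twitter. We shall see.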

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6362693/i-am-disgusted-outraged-furious-sickened-and-irritated-by-this-perverse-research-actually-no-i-am-not-at-all