How social media platforms could flatten the curve of dangerous misinformation.

Faecesbook

A simple article on a simple idea: introduce brakes and/or circuit breakers to popular social media platforms in order to slow viral posts to a speed that sysadmins can handle. Such posts can have deadly consequences and are often far from innocently made. The article mentions cases such as the Plandemic video (a fabric of lies and misinformation intended to discourage mask use and distancing), which received 8 million views in a week before being removed by all the major social platforms, and a video funded by ‘dark’ money called America’s Frontline Doctors pushing hydroxychloroquine as a Covid-19 treatment, which hit 20 million views on Facebook in just 12 hours through targeted manipulation of algorithms and deliberate promotion by influential accounts. It would take a large army of human police to identify and contain every instance of that kind of malevolent post before it hit exponential growth, so some kind of automated brake is needed.
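To make the scale problem concrete, here is a minimal sketch (in Python, with entirely made-up numbers for doubling time and moderator throughput) of why linear human review cannot keep pace with exponential resharing:

```python
# Illustrative only: a post that roughly doubles its reach every hour versus
# a fixed pool of human moderators. All figures here are invented.
shares = 1                        # one malevolent post starts to spread
reviewed = 0
HUMAN_REVIEWS_PER_HOUR = 1_000    # hypothetical moderation throughput

for hour in range(1, 25):
    shares *= 2                          # exponential growth in reach
    reviewed += HUMAN_REVIEWS_PER_HOUR   # human effort grows only linearly
    print(f"hour {hour:2d}: shares={shares:>10,}  reviewed={reviewed:>7,}")

# By hour 12 the post has ~4,000 shares and the reviewers are comfortably
# ahead; by hour 24 it has ~16.8 million while they have cleared only 24,000.
```

The exact numbers do not matter; the crossover is inevitable once growth is exponential and review capacity is not.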

Brakes (negative feedback loops and delays) are a good idea. They are a fundamental feature of complex adaptive systems, and of cybernetic systems in general. Your own body contains a great many of them; they exist at every level from ecosystems down to cellular organelles, and in human organizations, cities, and whole cultures they serve the critical function of maintaining metastability. If everything happened at once, there’s a fair chance that nothing would happen at all. But it has to be the right amount of delay. Too little and the system flies off into chaos, never reaching even an approximately stable state. Too much and it either oscillates unstably between extremes or, if taken too far, stops or destroys the system altogether. Positive feedback loops must be balanced by negative feedback loops, and vice versa. Any boundaried entity in a stable complex adaptive system has evolved (or, in human systems, may have been designed) to have the right amount of delay in the context of the rest of the system. It has to be that way or the system would not persist: when delays change, so do systems. This inherent fragility is what the bad actors are exploiting: they have found a way to bypass the usual delays that keep societies stable.

But what is ‘right’ in the context of viral posts, which are part of a much larger ecosystem containing bad actors hidden among legitimate agents? Clearly the brake has to respond at least nearly as fast as the positive feedback loop itself is growing, or it will be too late, which seems to imply that mechanization must be involved. The algorithm, such as the one described in the article, might not need to be too complex. Some kinds of growth can be stunted through tools like downvotes, reports of abuse, and the like, and most social technologies have at least a bit of negative feedback built in. However, it is seldom in the provider’s interest to make that as powerful as the positive feedback, for all sorts of reasons, many quite legitimate – we don’t have a thumbs-down option on the Landing, for instance, because we want to accentuate the positive to help foster a caring community, and down-voting motives are not always clear or pure.
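I have not reproduced the article’s actual algorithm here, but as an illustration of how little complexity such a brake might need, here is a sketch of a sliding-window rate limiter that trips when a post’s share rate exceeds a threshold. The class name, window, and limit are my own placeholders, not anything the article specifies:

```python
import time
from collections import deque

class ShareCircuitBreaker:
    """Illustrative brake: if a post is reshared faster than some limit,
    trip the breaker and divert further reshares into a review queue,
    introducing exactly the kind of deliberate delay discussed above."""

    def __init__(self, window_seconds=3600, max_shares_per_window=10_000):
        self.window = window_seconds
        self.limit = max_shares_per_window   # hypothetical threshold
        self.share_times = deque()           # timestamps of recent shares
        self.tripped = False

    def record_share(self, now=None):
        """Return True if the share may propagate immediately,
        False if it should be held back (the brake is on)."""
        if now is None:
            now = time.time()
        self.share_times.append(now)
        # Forget shares that have slid out of the window.
        while self.share_times and self.share_times[0] < now - self.window:
            self.share_times.popleft()
        # Negative feedback: growth beyond the limit slows further growth.
        if len(self.share_times) > self.limit:
            self.tripped = True
        return not self.tripped
```

A real platform would of course need per-post state at enormous scale, a way to reset the breaker, and defences against gaming, but the core negative feedback loop really is this small.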

However, a simple rule-driven system alone would probably be a bad idea. There are times when rapid, exponential, positive feedback loops should be allowed to spread in order to keep the system intact: in real disasters, for example, where time and reach are of the essence in spreading a warning, or in outpourings of support for the victims of such disasters. There are also perfectly innocuous viral posts – indeed, they are likely the majority. At the very least, therefore, humans should be involved in putting their feet on the brakes, because such judgements are beyond the ken of machines and will likely remain so. Machines cannot yet (and probably never will) know what it means to live as a human being in a human society – they simply don’t have a stake in the game – and even the best of AIs are really bad at dealing with novel situations, matters of compassion, or outliers, because they don’t have (and cannot have) enough experience of the right kind, or the imagination to see things differently, especially when people are deliberately trying to fool them. On the other hand, humans have biases which, as often as not, are part of the problem we are trying to solve, and which can themselves be influenced in many ways. This seems to me to be a perfect application for crowd wisdom. If automated alerts – partly machine-determined, partly crowd-driven – are sent to many truly randomly selected people from a broad sample (something like Mechanical Turk, but less directed), and those people have no way of knowing what the others are deciding, and each casts a vote on whether to trigger the brakes, it might give us the best of both worlds. This kind of thing spreads through networks of people, so it is fitting that it can be stopped by sets of people.
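As a sketch of that crowd-wisdom step (assuming, purely for illustration, a helper ask_reviewer() that puts the question to one person and returns their yes/no answer; the jury size and threshold are likewise my own guesses):

```python
import random

def crowd_brake_vote(candidate_reviewers, post_id, ask_reviewer,
                     jury_size=25, threshold=0.6):
    """Poll a small, randomly drawn, mutually invisible jury on whether
    to trigger the brake for a flagged post. Reviewers never see one
    another's votes, so their judgements stay independent."""
    if not candidate_reviewers:
        return False                       # no jury, no brake
    jury = random.sample(candidate_reviewers,
                         k=min(jury_size, len(candidate_reviewers)))
    votes_to_brake = sum(1 for reviewer in jury
                         if ask_reviewer(reviewer, post_id))
    return votes_to_brake / len(jury) >= threshold   # True => apply the brake
```

The independence is the point: a random, disconnected sample is much harder to manipulate than the network through which the post itself is spreading.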

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6530631/how-social-media-platforms-could-flatten-the-curve-of-dangerous-misinformation

