Kafkaesque and Orwellian technology design

[Image: Death certificate of undead Romanian]

I am much indebted to the Romanian legal system for the examples it repeatedly provides of hard (rigid, inflexible, invariant) technologies enacted by human beings without the inconvenience, lack of accountability, or cost of actual machinery. I have previously used examples from two cases in which Romanian mayoral candidates were elected to office despite being dead (here, and – though the link seems dead and unarchived so I cannot confirm it – here). This, though, is the best example yet. Despite the moderately compelling evidence he provided to the court that he is alive (he appeared in person to make his case), the court decided that Constantin Reliu, 63, is, in fact, still dead, upholding its earlier decision on the subject. This Kafkaesque decision has had some quite unpleasant consequences for Reliu, who cannot get official documents, access to benefits, and so on as a result. Romania is, of course, home to Transylvania and legends of the undead. Reliu is maybe more unalive than undead, though I guess you could look at it either way.

The misapplication of hard technology

The mechanical application of rules, laws, and regulations is rarely a great idea. One advantage of human-enacted hard technologies over those that are black-boxed inside machines, though, is that, on the whole and notwithstanding the Romanian legal system, the workings of the machine are scrutable and can more easily be adapted. Even when deliberations occur (intentionally or not) in camera, the mechanism is clear to participants, although it is rare for all participants to be equally adept at implementing it.

Things are far worse when such decisions are embedded in machines, as a great many A-level students in the UK are discovering at the moment. Though the results are appalling and painful in every sense – the algorithm explicitly reinforces existing inequalities and prejudices, notably disadvantaging racial minorities and poorer students – it is hard not to be at least a little amused by, say, the fact that an 18-year-old winner of the Orwell Prize for her dystopian story about the use of algorithms to sort students according to socio-economic class had her own A-level mark (in English) reduced by exactly such an algorithm for exactly such a reason. Mostly, though, such things are simply appalling and painful, with little redeeming irony apart from the occasional ‘I never thought leopards would eat MY face’ moment. Facebook, to pick an easy target, has been unusually single-minded in its devotion to algorithms that divide, misinform, demean, and hurt since its very beginnings. The latest – misinforming readers about Covid-19 – has had direct deadly consequences, though, arguably, its role in electing the antichrist to the US presidency was way more harmful.
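
To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch (not the actual Ofqual model, whose details are more involved) of why standardizing individual grades against a school's past results reinforces existing inequality.

```python
# Hypothetical sketch only: cap an individual's grade at roughly the level
# the school has achieved historically. Identical students end up with
# different grades depending on where they happen to study.

def moderated_grade(teacher_grade: float, school_history: list[float]) -> float:
    """Limit a student's grade to the school's historical ceiling."""
    historical_ceiling = max(school_history)
    return min(teacher_grade, historical_ceiling)

# Two students with the same teacher-assessed grade, different schools:
print(moderated_grade(90, [85, 88, 91, 93, 95]))  # historically high-achieving school: 90 stands
print(moderated_grade(90, [55, 58, 61, 63, 65]))  # historically low-achieving school: capped at 65
```

However the details are tuned, any model of this general shape judges students by the past performance of people who merely resemble them, which is precisely the inequality-reinforcing behaviour described above.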

The ease with which algorithms can and, often, must be embedded in code is deeply beguiling. I know because I used to make extensive use of them myself, with the deliberate intent of affecting the behaviour of people who used my software. My intentions were pure: I wanted to help people to learn, and had no further hidden agendas. And I was aware of at least some of the dangers. As much as possible, I tried to move the processing from the machine to the minds of those using it and, where I could not do that, I tried to make the operation of my software as visible, scrutable, and customizable as possible (why do we so often use the word ‘transparent’ when we mean something is visible, by the way?). This also made my applications far more difficult to use – softness in technologies always demands more active thought and hard work from users. Nonetheless, my apps were made to affect people because – well – why else would there be any point in doing it?
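
As a purely illustrative sketch (not any of my actual software, and with entirely hypothetical names and weights), this is roughly what ‘visible, scrutable, and customizable’ can mean in code: the machine shows its working alongside its judgement, and the user can change the rules it follows.

```python
# Illustrative sketch only: a recommender whose scoring is visible to and
# adjustable by the person using it, rather than hidden inside the machine.
from dataclasses import dataclass, field


@dataclass
class ScrutableRecommender:
    # Soft defaults: anyone can inspect and change these weights.
    weights: dict = field(default_factory=lambda: {
        "recency": 0.5,      # newer items score higher
        "popularity": 0.3,   # items many peers used score higher
        "topic_match": 0.2,  # items matching declared interests score higher
    })

    def score(self, item: dict) -> tuple[float, str]:
        """Return a score and a human-readable account of how it was reached."""
        total = sum(self.weights[k] * item.get(k, 0.0) for k in self.weights)
        explanation = " + ".join(
            f"{self.weights[k]} * {k}({item.get(k, 0.0)})" for k in self.weights
        )
        return total, explanation


rec = ScrutableRecommender()
item = {"recency": 0.9, "popularity": 0.4, "topic_match": 0.1}
print(rec.score(item))           # the machine's judgement, with its working shown
rec.weights["popularity"] = 0.0  # ...which the user can simply switch off
print(rec.score(item))
```

The cost is exactly the one just described: every exposed weight is one more thing the user has to think about.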

Finding the right balance

The Landing (my most recent major software project) is, on the face of it, a bit of an exception. It is arguably fortunate that some of my early plans for it, involving algorithmic methods like collaborative filtering and social navigation, failed to come to fruition, especially as one of the main design principles on which the Landing was based was to make the site as neutral and malleable as possible. It was supposed to be by and for its users, not for any other purpose or person, not even (like an LMS) to embed the power structures of the university (though these can emerge through path dependencies in groups). However, it is impossible to avoid this kind of shaping altogether. The Landing has quite a few structural elements that are determined by algorithms – tag clouds, recommended content, social network mining for ‘following’ recommendations, etc – but it also embodies values in its very design. Its menu system, for instance, is based on work Terry Anderson and I did that split the social world into networks, groups, and sets, and is meant to affect how people engage. It has a whole bunch of defaults, from default permissions to default notification settings, that are consciously intended to shape behaviour. When it does not do that kind of shaping, though, things can be much worse. The highly tool-centric and content-neutral design that puts the onus on the individual person to make sense of it is one of the reasons it is a chaotic jumble that is difficult to use, for instance.
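
To illustrate how even the gentlest defaults embody values (a sketch with hypothetical names and settings, not the Landing's real configuration): most people never change a default, so whatever the designer chooses effectively becomes policy.

```python
# Hypothetical defaults for a social site. None of these is neutral: each one
# is a stance on how people ought to engage, and most users will never change it.
from typing import Optional

DEFAULTS = {
    "post_access": "logged_in_users",  # vs "public" or "just_me": a position on openness
    "notify_on_comment": True,         # nudges people back into conversations
    "show_tag_cloud": True,            # surfaces whatever the crowd already talks about
}


def create_post(text: str, access: Optional[str] = None) -> dict:
    """Apply the site's default access level unless the author overrides it."""
    return {"text": text, "access": access or DEFAULTS["post_access"]}


print(create_post("Hello, world"))  # inherits the designer's values, not the author's
```

Whether that counts as helpful shaping or unwanted hardness depends entirely on whose needs the default happens to fit.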

We need some hardness in our technologies – constraint is vital to creation, and many things are better done by machines – but each individual’s needs for technology hardening are different from pretty much everyone else’s. Hardness in machines lets us do things that are otherwise impossible, and makes many things easier, quicker, more reliable, and more consistent. This can be a very good thing, but it is just as easy – and almost inevitable – to harden some things that would be better done by people, or that actively cause harm, or that should be adapted to individual needs. We are all different; one size does not fit all.

Openness and control

It seems to me that a fundamental starting point for dealing with the wrong kind of hardness is knowing what is being hardened and how, and being capable of softening it if necessary. This implies that:

  • openness is essential: we must be able to see what these things are doing;
  • the ability to make changes is essential: we must be able to override or modify what they do.

Actually messing with algorithms is complex, and it’s usually complicated, which is an unholy mix. It can also be dangerous, at best breaking the machine and at worst making it more harmful than ever. The fact that we can scrutinize and make changes to our tools does not mean that we should, nor that we are actually able to exert any meaningful amount of control, unless we have the skills, time, energy, and mandate to do so. Moreover, there are often reasons we should not do so: for instance, a lot of crowd-based systems would not work at all if individual users could adjust how they work, modified software can be used to cause deliberate harm, and so on. It seems to me, though, that having such issues is far preferable to not knowing how we are affected, and not being able to fix it. Our technologies must be open, and they must be controllable, if we are not to be lost in the mire of counter-technologies, Monkey’s Paws, and malicious machines that increasingly define our lives today.
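
As a minimal sketch of what those two requirements might look like in practice (all names here are hypothetical, and a real platform would need far more than this), openness means the steps can be listed and inspected, and control means any of them can be switched off or replaced:

```python
# Illustrative only: a feed whose filtering pipeline is open (each step can be
# listed) and controllable (a step can be removed or swapped for your own).
from typing import Callable, Dict, List, Optional

Step = Callable[[List[dict]], List[dict]]


class OpenFeed:
    def __init__(self) -> None:
        # Named, visible steps rather than a single opaque ranking function.
        self.steps: Dict[str, Step] = {
            "remove_blocked": lambda items: [i for i in items if not i.get("blocked")],
            "boost_clicks": lambda items: sorted(items, key=lambda i: -i.get("clicks", 0)),
        }

    def explain(self) -> List[str]:
        """Openness: show what the feed is doing."""
        return list(self.steps)

    def override(self, name: str, step: Optional[Step]) -> None:
        """Control: switch a step off (None) or swap in one of your own."""
        if step is None:
            self.steps.pop(name, None)
        else:
            self.steps[name] = step

    def run(self, items: List[dict]) -> List[dict]:
        for step in self.steps.values():
            items = step(items)
        return items


feed = OpenFeed()
print(feed.explain())                # ['remove_blocked', 'boost_clicks']
feed.override("boost_clicks", None)  # I would rather see things in the order they arrived
```

The dangers noted above remain: a badly chosen override can break the feed or make it worse. The point is only that the choice, and the knowledge of what is being chosen, belongs to the person using it.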

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6368257/kafkaesque-and-orwellian-technology-design

I am a professional learner, employed as a Full Professor and Associate Dean, Learning & Assessment, at Athabasca University, where I research lots of things broadly in the area of learning and technology, and I teach mainly in the School of Computing & Information Systems. I am a proud Canadian, though I was born in the UK. I am married, with two grown-up children, and three growing-up grandchildren. We all live in beautiful Vancouver.
