Instagram uses 'I will rape you' post as Facebook ad in latest algorithm mishap

Another in a long line of algorithm fails from the Facebook stable, this time from Instagram…

"I will rape you" post from Instagram used for advertising the service

This is a postcard from our future when AI and robots rule the planet. Intelligence without wisdom is a very dangerous thing. See my recent post on Amazon’s unnerving bomb-construction recommendations for some thoughts on this kind of problem, and how it relates to attempts by some researchers and developers to use learning analytics beyond its proper boundaries.


Address of the bookmark: https://www.theguardian.com/technology/2017/sep/21/instagram-death-threat-facebook-olivia-solon


Bigotry and learning analytics

Unsurprisingly, when you use averages to make decisions about individual people, those decisions reinforce biases. This is exactly the basis of bigotry, racism, sexism and a host of other well-known evils, so programming such bias into analytics software is beyond a bad idea. This article describes how algorithmic systems are used to help make decisions about things like bail and sentencing in courts. Though race is not explicitly taken into account, correlates like poverty and acquaintance with people who have police records are included. In a perfectly vicious circle, the system's own decisions feed back into the data it learns from, so it reinforces its biases over time. To make matters worse, this particular system uses secret algorithms, so there is little accountability and not much of a feedback loop to correct them when they are wrong.
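
To see why excluding race does not make such a score race-blind, here is a minimal, entirely hypothetical sketch (not the system described in the article): a toy population in which a "risk" score is built only from proxy variables that happen to correlate with group membership. The group attribute never enters the score, yet one group is still flagged far more often.

```python
# Hypothetical illustration only: a score that never sees the protected
# attribute but uses correlated proxies still reproduces the disparity.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: 'group' is the protected attribute the score never uses.
group = rng.integers(0, 2, n)

# Proxies correlated with group membership (e.g. poverty, police contacts
# in one's social network), plus noise.
poverty = rng.normal(loc=0.8 * group, scale=1.0, size=n)
police_contacts = rng.poisson(lam=1.0 + 1.5 * group, size=n)

# A naive "risk" score built only from the proxies.
risk = 0.6 * poverty + 0.4 * police_contacts

# High-risk decisions (e.g. deny bail) above a fixed threshold.
high_risk = risk > np.quantile(risk, 0.7)

for g in (0, 1):
    rate = high_risk[group == g].mean()
    print(f"group {g}: flagged high-risk {rate:.1%} of the time")
```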

This matters to educators because it is very similar to what much learning analytics does (there are exceptions, especially when the analytics are used solely for research purposes). It looks at past activity, however that is measured, compares it to more or less discriminatory averages or similar aggregates of other learners' past activity, and then attempts to guide the future behaviour of individuals (teachers or students) based on the differences. This latter step is where things can go badly wrong, but there would be little point in doing it otherwise. The better examples inform rather than adapt, allowing a human intermediary to make decisions, but that is exactly what the algorithmic risk assessment described in the article does too, and it is just as risky. The worst examples attempt to guide learners directly, sometimes adapting content to suit their perceived needs. This is a terribly dangerous idea.
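
As a rough, hypothetical sketch of that vicious circle (not any particular learning analytics product), consider a toy model in which an adaptive system steers below-average learners toward "remedial" content that, in this model, slightly depresses their subsequently measured activity. A small initial difference between two cohorts widens over successive rounds of adaptation.

```python
# Toy simulation of a feedback loop: adaptation based on deviation from the
# cohort average amplifies an initially small gap between two cohorts.
import numpy as np

rng = np.random.default_rng(1)
n_learners, n_rounds = 1000, 20

# Two cohorts with a small, arbitrary initial difference in measured activity.
cohort = rng.integers(0, 2, n_learners)
activity = rng.normal(loc=10.0 - 0.5 * cohort, scale=2.0, size=n_learners)

def gap():
    return activity[cohort == 0].mean() - activity[cohort == 1].mean()

print(f"initial gap: {gap():.2f}")

for _ in range(n_rounds):
    below_average = activity < activity.mean()
    # "Adaptation": below-average learners are steered to remedial content,
    # which in this toy model depresses their next measured activity.
    activity = activity - 0.5 * below_average + rng.normal(0, 0.3, n_learners)

print(f"gap after {n_rounds} rounds of adaptation: {gap():.2f}")
```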

Address of the bookmark: http://boingboing.net/2016/05/24/algorithmic-risk-assessment-h.html