Thursday 21 September 2017

Theresa May's speech is just the latest in politicians wilfully misunderstanding the internet

As is so often the case, the Daily Mail started it. After the Parsons Green attack last week, the newspaper wasted no time in allocating blame. A day after the tube bombing, the Mail's front-page headline read: WEB GIANTS WITH BLOOD ON THEIR HANDS. This isn't a new line of argument for the paper, which labelled Google "the terrorist's friend" after the Westminster attack in March.

As I wrote in the magazine back in April, the government (with the aid of particular papers) consistently uses the threat of terrorism to challenge tech giants and thus justify extreme invasions of our online privacy.

This year, Amber Rudd condemned WhatsApp's privacy-protecting encryption practices, the Snoopers' Charter passed with little fanfare, the Electoral Commission suggested social media trolls should be banned from voting, and now - just today - Theresa May has threatened web giants with fines if they fail to remove extremist content from their sites within just two hours.

No one can disagree with the premise that Google, YouTube, and Facebook should remove content that encourages terrorism from their sites - and it is a premise designed to be impossible to disagree with. What we can argue against is the disproportionate reactions of the government and the Mail, which seem to blame terrorism solely on our online freedoms, work against rather than with tech giants, and wilfully misunderstand the internet in order to push through ever more extreme acts of surveillance and censorship.

It is right for May to put pressure on companies to go "further and faster" in tackling extremism - as she is due to say to the United Nations General Assembly later today. Yet she is demanding artificially intelligent solutions that don't yet exist and placing an arbitrary two-hour time frame on company action.

In April, Facebook faced scrutiny after a video in which a killer shot a grandfather remained on the site for two hours. Yet Facebook actually acted within 23 minutes of the video being reported; the delay arose because none of its users flagged the content until one hour and 45 minutes after it had been uploaded. It is impossible for Facebook's moderators to trawl through everything uploaded to the site (100 million hours of video are watched on Facebook every day), and the AI solutions May and other ministers demand don't yet exist. (Incidentally, the fact that the video was removed within two hours didn't stop it being downloaded and widely shared across other social media sites.)
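To see how both figures can be true at once, here is a back-of-the-envelope timeline (the exact removal moment is my inference from the reported numbers, not a figure from the source):

```python
from datetime import timedelta

# Figures reported in the case above; the upload time is normalised to zero.
upload = timedelta(0)
first_flag = upload + timedelta(hours=1, minutes=45)  # first user report arrives
removal = first_flag + timedelta(minutes=23)          # Facebook's response time

print(removal)  # 2:08:00 - roughly the "two hours" the video stayed up,
                # even though moderators acted 23 minutes after being told
```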

As Jamie Bartlett, director of the Centre for the Analysis of Social Media at Demos, told me after a home affairs committee report accused Facebook, Twitter, and YouTube of "consciously failing" to tackle extremism last year:

“The argument is that because Facebook and Twitter are very good at taking down copyright claims they should be better at tackling extremism. But in those cases you are given a hashed file by the copyright holder and they say: ‘Find this file on your database and remove it please’. This is very different from extremism. You’re talking about complicated nuanced linguistic patterns each of which are usually unique, and are very hard for an algorithm to determine.”
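Bartlett's distinction is easy to make concrete. Below is a minimal Python sketch (the digest set and the filter keywords are invented for illustration) of why the copyright case automates cleanly while the extremism case does not: the first is an exact hash lookup, the second a guess about meaning.

```python
import hashlib

# Hypothetical digest a copyright holder might supply ("find this exact
# file and remove it"). This one happens to be the SHA-256 of b"test".
KNOWN_INFRINGING = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def should_remove(uploaded: bytes) -> bool:
    """Copyright-style takedown: an exact digest lookup, trivially automated."""
    return hashlib.sha256(uploaded).hexdigest() in KNOWN_INFRINGING

def naive_extremism_filter(text: str) -> bool:
    """Extremism has no digest to match; a keyword list is the crude stand-in,
    and it flags news reports and counter-speech as readily as propaganda."""
    return any(word in text.lower() for word in ("bomb", "attack"))

print(should_remove(b"test"))                           # True: exact match
print(naive_extremism_filter("Police foil bomb plot"))  # True: a false positive
```

A real system would use perceptual hashing and trained classifiers rather than these toys, but the asymmetry Bartlett describes survives: one problem has a ground-truth fingerprint to match against, the other doesn't.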

At least May is in good company. Last November, health secretary Jeremy Hunt argued that it was up to tech companies to reduce the teenage suicide rate, helpfully suggesting "a lock" on phone contracts, referring to image-recognition technology that didn't exist, and misunderstanding the limitations of algorithms designed to limit abuse. And who can forget Amber Rudd's comment about the "necessary hashtags"? In fact, our own Media Mole had a round-up of blundering statements made by politicians about technology after the Westminster attack, and as a bonus, here's a round-up of Donald Trump's best quotes about "the cyber".

But in all seriousness, the government has to acknowledge the limits of technology in ending online radicalisation.

And not only do we need to understand limits - we need to impose them. Even if total censorship of extremist content were possible, would it be desirable to entrust this power to tech giants?

As I wrote back in April: "When we ignore these realities and beg Facebook to act, we embolden the moral crusade of surveillance. We cannot at once bemoan Facebook’s power in the world and simultaneously beg it to take total control. When you ask Facebook to review all of the content of all of its billions of users, you are asking for a God."
