While regulators have become proficient at identifying Islamic State content, many are less adept at tackling white nationalist sentiment. With far-right attacks on the rise, can big tech keep up?
by Morgan Meaker / March 27, 2019
Brenton Tarrant’s trajectory as a terrorist started in the trenches of white-supremacist internet conspiracy and ended with him driving to Christchurch, New Zealand’s Al Noor Mosque in a car with a yellow air freshener and a passenger seat full of guns.
The attacks that followed left 50 people dead. Moments before the massacre, Tarrant released a coded manifesto full of memes and in-jokes directed at the users of the online forum 8chan. Conversation there celebrated the massacre as if it were a hilarious joke, revelling in the killer’s choice of music in the car (apparently the “remove kebab” song about ethnic cleansing in the Balkans).
Even after the massacre in New Zealand, 8chan’s /pol/ board remains a place where white supremacists continue to openly debate how to accelerate a race war, and which Nazi symbols you can wear in public.
The continuing existence of this content points to a sharp double standard in how different forms of extremism are tolerated online.
One of Silicon Valley’s most vocal critics, tech journalist Kara Swisher, said at last year’s Web Summit conference: “When it comes to ISIS, most of these tech companies will remove and monitor this content … when it comes to Charlottesville, they let it flourish.”
Her reference to the Charlottesville “Unite the Right” rally in August 2017, at which a car was deliberately driven into a crowd of peaceful counter-protestors, killing 32-year-old Heather Heyer, speaks to a threat which many are still struggling to address.
Not long ago, Islamic State supporters did use mainstream social media networks like Twitter to spread their messages. But Linda Schlegel, a counter-terrorism consultant with the Konrad-Adenauer-Foundation, a political association and think tank in Berlin, says crackdowns pushed discussions onto closed channels such as Telegram, using groups that participants could only join via a special link.
“When the rise of Islamic State began, content was not removed quickly and effectively, but over the course of the last years vast improvements have become evident,” she says over email. “Twitter accounts from IS supporters are now taken down very fast and Facebook too has stepped up its policies regarding extremist content and deletes reported content efficiently.”
Online, extremist far-right content has long existed…