
How the authorities are failing to tackle far-right terror online

While regulators have become proficient at identifying Islamic State content, many are far less effective at tackling white nationalist sentiment. With far-right attacks on the rise, can big tech keep up?

March 27, 2019
Neo-Nazis assemble for a demonstration in Dresden, Germany. Photo: PA

Brenton Tarrant’s trajectory as a terrorist started in the trenches of white-supremacist internet conspiracy and ended with him driving to the Al Noor Mosque in Christchurch, New Zealand, in a car with a yellow air freshener and a passenger seat full of guns.

The attacks that followed left 50 people dead. Moments before the massacre, Tarrant released a coded manifesto full of memes and in-jokes directed at the users of online forum 8chan. Conversation there celebrated the massacre as if it were a hilarious joke, revelling in the killer’s choice of music in the car (apparently the “remove kebab” song about ethnic cleansing in the Balkans).

Even after the massacre in New Zealand, 8chan’s /pol/ board remains a place where white supremacists openly debate how to accelerate a race war and which Nazi symbols can be worn in public.

The continuing existence of this content points to a sharp double standard in how different forms of extremism are tolerated online.

One of Silicon Valley’s most vocal critics, tech journalist Kara Swisher, said at last year’s Web Summit conference: “When it comes to ISIS, most of these tech companies will remove and monitor this content … when it comes to Charlottesville, they let it flourish.”

Her reference to the Charlottesville “Unite the Right” rally in August 2017, at which a car was deliberately driven into a crowd of peaceful counter-protesters, killing 32-year-old Heather Heyer, speaks to a threat many are still struggling to address.

Not long ago, Islamic State supporters did use mainstream social media networks like Twitter to spread their messages. But Linda Schlegel, a counter-terrorism consultant with the Konrad-Adenauer-Foundation, a political foundation and think tank in Berlin, says crackdowns pushed discussions onto closed channels such as Telegram, using groups that participants could only join via a special link.

“When the rise of Islamic State began, content was not removed quickly and effectively, but over the course of the last years vast improvements have become evident,” she says over email. “Twitter accounts from IS supporters are now taken down very fast and Facebook too has stepped up its policies regarding extremist content and deletes reported content efficiently.”

Online, extremist far-right content has long existed on a par with Islamic State propaganda. Back in 2016, a report by extremism expert J. M. Berger noted how on Twitter, “American white nationalist movements … outperform ISIS in nearly every social metric, from follower counts to tweets per day.”

Yet in the three years that followed, tech companies focused their efforts on Islamic extremism. Three months after Berger’s report, the Global Internet Forum to Counter Terrorism (GIFCT) was launched by Facebook, Microsoft, Twitter and YouTube.

Together, the tech giants created a shared database of 100,000 “hashes”, or unique digital “fingerprints”, assigned to known terrorist content. This enabled the platforms to first identify, then automatically remove, terrorist content at scale.
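How such a system works is easy to sketch. The snippet below is a minimal illustration in Python, assuming an exact-match scheme built on SHA-256; the function names and the database entry are invented for the example. The real GIFCT database is proprietary, and production systems rely on perceptual hashing (of the kind pioneered by Microsoft’s PhotoDNA) so that re-encoded or lightly edited copies of a file still match.

```python
import hashlib

# Hypothetical shared database of fingerprints of known terrorist content.
# Illustrative only: a plain cryptographic hash like SHA-256 catches only
# byte-identical copies, whereas the platforms' perceptual hashes also
# survive re-encoding, cropping and watermarking.
SHARED_HASH_DATABASE = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(file_bytes: bytes) -> str:
    """Compute a digital 'fingerprint' of an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()

def should_block(file_bytes: bytes) -> bool:
    """Reject an upload whose fingerprint matches known content."""
    return fingerprint(file_bytes) in SHARED_HASH_DATABASE
```

The appeal of the approach is that once one platform has fingerprinted a piece of content, every member of the consortium can block re-uploads automatically, without each moderating team reviewing it afresh.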

But when Julia Ebner, a research fellow at London’s Institute for Strategic Dialogue, looked into the initiative last year, she found that the “overwhelming majority” of the hashes were related to Islamic State content. In other words, the database was tailored to just one kind of terrorism.

Ebner told Prospect that tech companies have focused “almost exclusively” on jihadist online content. “For example, the overwhelming majority of automated takedown mechanisms for violent images and videos applies to ISIS and other international Islamist terrorist groups only,” she says.

A Facebook spokesperson said the company did not have “anything to share on this point” and the GIFCT did not respond to a request for comment.

Tech platforms are always eager to avoid the accusations of censorship that come with banning extreme-right accounts. But when bans do take place, research by the New York-based Anti-Defamation League has found that extreme-right users simply move to other public platforms, such as the social network Gab, used by American Robert Bowers before he killed 11 people in a synagogue in Pittsburgh.

While Islamic State propaganda has been pushed into closed communities, extreme-right networks can exist more openly because technology companies are not under the same regulatory pressure to crack down on the content. Hashing technology was, however, used by Facebook to block 1.2 million uploads of the New Zealand attack video, which was originally live-streamed to Tarrant’s profile.

The disparity in political will between the two forms of terrorism is evident in the money each receives. In 2018, the British Home Office proudly announced the development of a tool that could “automatically detect 94 per cent of Islamic State propaganda with 99.995 per cent accuracy.” Trained using 1,000 Islamic State videos, the technology cost the government £600,000, according to the BBC.
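Those headline figures are easier to interpret with a rough calculation. The sketch below is illustrative only, and assumes that “99.995 per cent accuracy” refers to the false-positive rate on innocuous uploads, an interpretation the announcement did not spell out:

```python
# Rough, illustrative arithmetic on the Home Office's published figures.
# Assumption: "99.995 per cent accuracy" is read here as a 0.005 per cent
# false-positive rate on non-extremist uploads; the metric was not
# precisely defined in the announcement.
uploads = 1_000_000                  # a hypothetical batch of innocuous videos
false_positive_rate = 1 - 0.99995    # 0.005 per cent
detection_rate = 0.94                # share of IS propaganda the tool catches

wrongly_flagged = uploads * false_positive_rate
print(f"Of {uploads:,} innocuous uploads, roughly {wrongly_flagged:.0f} "
      f"would be wrongly flagged for human review.")

propaganda_items = 1_000             # hypothetical number of IS videos uploaded
caught = propaganda_items * detection_rate
print(f"Of {propaganda_items:,} propaganda videos, about {caught:.0f} "
      f"would be detected automatically.")
```

On those assumptions, around 50 videos in every million innocuous uploads would be flagged in error and need human review, broadly in line with contemporaneous reporting of the tool.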

But when contacted by Prospect, the Home Office could not point to any equivalent technology it had developed to tackle extreme-right terrorist content online, saying only that “The Government takes the threat from all forms of terrorism and extremism seriously and is working with our international partners and tech companies to tackle extremist propaganda. This includes working with UK tech companies to develop technology and innovative solutions to identify and remove terrorist content online.”

“I have not seen any evidence that technology using AI and other data science techniques are being deployed to counter the extreme right in the same way that it is applied to jihadism,” says Bharath Ganesh, a researcher at Oxford University’s Internet Institute.

When J. M. Berger released another report in 2018, he also found that the fight against online far-right extremism had not enjoyed the same resources as the effort against Islamic State content, for political reasons. “The task of crafting a response to the alt-right is considerably more complex and fraught with landmines, largely as a result of the movement’s … proximity to political power,” he wrote.

Berger’s focus is the US, but this debate is relevant to Britain, too. After the New Zealand attack, former head of UK counter-terrorism Mark Rowley said in the Sunday Times that counter-extremism commissioner Sara Khan “needs resources and the authority to lead.” Days later, a former Home Office specialist, speaking anonymously, told the BBC that the far-right threat in the north of England was not being taken seriously.

To crack down on extreme-right content would require defining it. But a definition would potentially expose similarities between extreme-right discussions online and far-right elements in European governments. In the UK, this bleed between the two is embodied by Stephen Lennon.

Styling himself as Tommy Robinson, Lennon acts as a bridge between extreme-right conspiracies (such as the “great replacement” theory promoted in Tarrant’s manifesto) and establishment politics, in his role as adviser to UKIP, which has eight representatives in the European Parliament and one in the House of Lords.

Oxford’s Ganesh describes this as an online double standard: political expression comes second to security where Muslim communities are concerned, while freedom-of-speech concerns are prioritised above protecting minorities from extreme-right violence.

“What we need to challenge is the fact that the extreme right does not face the same kind of automated scrutiny despite the prescience of the threat,” he says.

Edit 27/03: This article has been revised to show that the “overwhelming majority” of the hashes assigned to terrorist content were related to the Islamic State. The initial 90 per cent figure was an estimate and was meant figuratively.