Technology

"A threat to society": How social media platforms are failing to keep pace with the rise in far-right extremism

Social media sites including Facebook, Twitter and YouTube have long been aware of the increasing presence of white nationalism on their platforms. It's time to do more to tackle it—or risk being complicit in hateful ideology

May 10, 2019
Members of the far-right Proud Boys shout at a group of counter-protestors at a march in Canada. Photo: PA

White nationalism is thriving online and social media sites have failed to take the problem seriously. Whether it's Tommy Robinson whipping up hatred towards Muslims or lesser-known commentators inciting hostility against equality campaigners, the situation has reached a tipping point.

Most white nationalists have links to the far right and claim their mission is to ensure the survival and prosperity of the white race. In the UK, figures show that extreme far-right activity is increasing, particularly online. According to the BBC, far-right referrals to Prevent, part of the government’s counter-terrorism strategy, rose by 36 per cent in 2017-18.

As Joe Mulhall, a senior researcher at UK-based campaign group HOPE not hate, explains: “White nationalism and the far-right continue to pose a threat to society. By definition, this usually means a belief in nationalism (exceptionalism) of either a race or country rather than mere patriotism. Coupled with this is a belief that the nation (either geographic or racial) is in decay or crisis and radical action is required to halt or reverse it.”

He pointed to the Christchurch attacks as one way that white nationalism manifests itself offline, but added: “At the less extreme end, this threat manifests at a community and street level, such as hate crime, or at a political level with unfair or racist legislative agendas that oppress minority communities.”

White nationalist groups appear to be highly effective at organising themselves on social media in order to promote their ideas. US-based organisation the Southern Poverty Law Center calculated that the number of white nationalist groups increased by almost 50 per cent in 2018. It has categorised many of these collectives as hate groups—yet people are joining them in record numbers.

Social media sites including Facebook, Twitter and YouTube have long been aware of the increasing presence of white nationalism on their platforms but appear reluctant to clamp down on it. In March, Facebook made the first meaningful attempt to tackle the issue by announcing a ban on white nationalist content.

The move has been praised in many quarters as a positive step, although doubts remain about whether the ban will be effective in the long term. Around the time that the ban was announced, the company’s CEO, Mark Zuckerberg, wrote on his official Facebook account: “I believe we need a more active role for governments and regulators.”

“By updating the rules for the Internet, we can preserve what's best about it—the freedom for people to express themselves and for entrepreneurs to build new things—while also protecting society from broader harms.”

The company’s ban should be welcomed—even if it’s not entirely clear yet how it will work, said Mulhall. “The practicalities of the new policy are more complex. However, the really important thing is that they implement the policy earnestly and that it wasn’t merely a PR stunt in the wake of the New Zealand attacks.”

Facebook cannot tackle the issue alone, however. YouTube hosts a considerable amount of white nationalist content and its hate-speech policies do not specifically prohibit it. Instead, the company claims that content promoting violence and hatred based on nationality and race is removed.

But channels such as Red Ice TV, which promotes white nationalism to thousands of subscribers with videos like “They want you dead white man” and “Europe is not white,” continue to operate freely on the site.

Mulhall said YouTube’s white nationalism problem stands out when compared to other mainstream platforms. “[It] is a key platform for the far-right and its algorithms have played and continue to play a role in radicalising young people towards far-right politics,” he said.

“For years far-right figures have made a living off their YouTube content and the damage caused by the distribution of hate content via their platform is hard to quantify. While we welcome their recent steps to limit the harmful impact of Stephen Yaxley-Lennon [the birth name of Tommy Robinson], they have not gone anywhere near far enough.”

Both Facebook and YouTube were approached for comment on this article but did not respond.

Meanwhile, on Twitter, Richard Spencer, one of the world’s most prominent white nationalists, has amassed a huge following. Twitter has previously stated that it has no plans to ban him, so Prospect asked the site to clarify its official position on white nationalist content and whether there were any plans to prohibit it.

A Twitter spokesperson made no specific reference to white nationalism but instead said:

“Twitter’s Hateful Conduct policy prohibits people from promoting violence, attacks or making threats to other people on the basis of protected categories such as race, ethnicity, and national origin, among others. We have also expanded this policy to ban the use of hateful symbols and imagery in profile images or headers. When we identify content that violates these rules, we take aggressive enforcement action.”

Ultimately, if social media sites are genuinely serious about preventing hate speech they should begin by banishing all traces of white nationalism from their platforms—or risk being complicit in spreading its hateful ideology.