Politics

After Christchurch, we all need to address the normalisation of online hate speech

Niche violent and Islamophobic content is increasingly prominent even on mainstream platforms. To confront the legacy of the Christchurch attack will require deeper conversations about how the modern internet works

March 18, 2019
The killer's apparent "manifesto" isn't just filled with hate speech—it's shot through with online "humour". Photo: PA

Last week New Zealand became the site of a new, and very 2019, form of terror attack. A man in his twenties drove, armed, to two mosques and killed 49 people while they were praying. His victims included women and infants.

The shooter also had a camera strapped to his body, and apparently streamed the attack live on the internet. I will not post, nor will I describe, what happens in the video. What is worth noting, however, is the way in which he streamed the attack—something akin to a first-person shooter game, of the kind that is extremely popular on gaming platforms like Discord and Twitch.

It’s a similarity that hints at a wider and extremely troubling relationship with online spaces. The killer’s behaviour has been linked to mainstream Islamophobia, with comparisons made to figures like Tommy Robinson.

The shooter’s “manifesto,” which he appears to have uploaded online hours before the attack, is laced with Islamophobic rhetoric. It features comments about Muslims “outbreeding” white Europeans, reinforcing the familiar trope of the “Great Replacement,” a conspiracy theory with Islamophobic and antisemitic connotations that imagines a deliberate plot to displace white Christian populations.

Aside from the fact that the killer targeted mosques, ammunition magazines found at the scene of the attack had Islamophobic tropes written on them in white marker, including references to the Rotherham grooming scandal and to the Crusades.

Look a little closer, though, and you’ll see something more complex, and certainly more sinister, being communicated. While the manifesto appears to lay out some of the killer’s intentions using justifications typical of the far-right, it also uses a particular kind of “humour”: jokes formed by layers upon layers of irony, and in-jokes developed in particular corners of far-right online spaces.

The manifesto describes its author as an ethno-nationalist, yet it also includes references to Candace Owens—the African-American conservative commentator associated with the US movement Turning Point, which recently launched a chapter in the UK. More notably, it references the YouTuber PewDiePie, one of the platform’s biggest stars, who has recently gained the support of right-wing figures like Infowars pundit Paul Joseph Watson and, even more bizarrely, Ukip, in his quest to attain more YouTube subscribers than his rival, an Indian entertainment company called T-Series.

Unless you are “extremely online”—i.e., you spend most of the day on Twitter, or most of your time in the weird back corners of the internet—the manifesto will be not only confusing but also purposefully misleading. As the New York Times’ Charlie Warzel puts it: “the Christchurch shooting feels different, in part due to its perpetrator’s apparent familiarity with the darkest corners of the internet.”

https://twitter.com/heyfools/status/1106395631663370240

What is particularly disturbing about this online culture is its size and anonymity. Online, someone with malicious motives can use the culture of memes, “shitposting,” trolling and doxxing—publishing someone’s personal details publicly—to incite hatred while staying invisible. Anonymous and under the radar, these figures can amplify far-right messages to other users while avoiding detection.

If posts attributed to the Christchurch shooter are indeed his, then it seems he was not only influenced by right-wing content online, but also helped fuel that network. On the message board 8chan, where he apparently trailed the attack, some users discussed alleged “Muslim reactions” to the killings, while another suggested it was part of a Jewish conspiracy.

Despite what we might like to believe, these ideas are not limited to obscure message boards. As a 2018 report from Data & Society showed, platforms like Twitter and YouTube host a network of right-wing pundits whose voices have been amplified to the point of shaping how YouTube’s recommendation algorithm behaves.

As a result, right-wing content is now a staple of internet culture, so much so that even watching the most mundane of YouTube videos can lead you to anti-Muslim content in a matter of minutes.

As we learn more about the Christchurch shooter’s online profile, one thing that should be explored is not only the influence his acts will have in niche right-wing spaces, but how those spaces go on to change the platforms we all use every day.

Understanding the motivations behind the Christchurch mosque attack therefore requires departing from the way we currently understand right-wing extremism. It’s likely that right-wing pundits and personalities, some of whom make a handsome living out of anti-Muslim sentiment, will be keen to distance themselves from the attack and its perpetrator. Indeed, Candace Owens has already threatened court action against those accusing her of influencing him.

The question of how anti-Muslim sentiment appears in the media is an important one. But we must not turn away from a similar conversation about how mainstream online spaces increasingly host hate speech. To confront the legacy of the Christchurch attack will require deeper conversations about how the modern internet works, how we distinguish authentic threats in a subculture that communicates through irony, and how much we have to change as a culture to protect vulnerable people in our new digital age.