
When the internet becomes unknowable

Tools for studying social media are being shut down. It’s the last thing researchers—and democracy—need
November 1, 2023

Early on in the horrific war in Israel and Gaza, a new media reality became clear: real-time information on social media is less reliable than ever. X, the social network formerly known as Twitter and the most popular platform for breaking news, apparently no longer has either the capability or the will to combat disinformation. Footage of firework celebrations in Algeria has been presented as evidence of Israeli strikes on Hamas, videogame graphics have been passed off as reality and a clip from the Syrian war has been recycled and amplified on X as though it were new. 

Recent decisions taken by the platform’s owner, Elon Musk, are complicating the problem. On Twitter, a blue tick signified that a user’s identity had been validated. It wasn’t a perfect system, but it helped with finding trustworthy sources. Under Musk, the platform has removed blue ticks from journalist accounts and offered them to virtually anyone willing to pay $8 per month for a premium subscription. These accountholders share revenue with X when their content goes viral, incentivising them to share engaging content whether or not it’s true, and the algorithm gives their posts more weight in users’ feeds.

Additionally, under Musk, X has cut the size of most of its teams, especially “trust and safety”, the department responsible for making sure content posted to the network is accurate and not harmful. That team has reportedly been reduced from 230 staff to roughly 20. While a voluntary system called Community Notes allows X users to flag and potentially debunk inaccurate content, users have complained that these notes can take days to appear—if they ever do.

While X’s performance has been so poor that European commissioner Thierry Breton has announced a probe into the platform’s handling of misinformation during the Israel–Hamas war, a greater misinformation crisis is unfolding. Simply put, the journalists, activists and scholars who study misinformation on social platforms no longer have the tools to do their jobs, or a safe environment to work in.

Researchers began taking digital misinformation and disinformation seriously as a force in politics in 2016, when the Brexit campaign in the UK and the Trump campaign in the US both featured prominent deceptions in digital spaces. Studying the 2016 US campaigns, a team at Harvard led by my colleague Yochai Benkler concluded that the most influential disinformation was not always stories made up from whole cloth, but propaganda that amplified some facts and framings at the expense of others. While stories about teens in eastern Europe writing pro-Trump political fiction got widespread coverage, more important were stories from right-wing blogs and social media accounts, amplified within a right-wing media ecosystem and ultimately by mainstream media, if only to refute them. 

Benkler’s analysis, and the analysis of many others, relied on information from Twitter’s API (Application Programming Interface), a stream of data from the platform accessible to scholars, journalists, activists and any other interested parties. In March this year, seeking a new source of revenue, Twitter announced that research access to the API would now start from $42,000 a month, putting it out of reach of most researchers. Other platforms—notably Reddit, which was also popular with academic researchers—followed suit. 

Facebook and Instagram have historically been far more protective of their APIs, but some insight into content on these platforms was provided via CrowdTangle, a tool developed by activists to see how their content performed on social media. Facebook (whose parent company, Meta, owns Instagram) bought the tool in 2016, and it was run by one of its founders, Brandon Silverman, until he left in 2021 amid an acrimonious atmosphere. In 2022, Meta stopped taking new applications to use the tool, and users reported that the project seemed starved of resources, with bugs left unfixed.

Losing the tools to study social media—no longer allowing outside researchers to determine, for example, whether or not X was doing an adequate job removing disinformation—would be problematic enough. But another set of barriers has made researchers’ jobs yet more difficult. 

In July 2023, X filed a lawsuit against the Center for Countering Digital Hate (CCDH), a nonprofit that researches the spread of extremist speech on digital platforms and campaigns for tougher oversight. CCDH had reported that hate speech targeting minority communities had increased since Musk purchased the platform in October 2022. X’s CEO, Linda Yaccarino, has termed CCDH’s accusations false, and the suit seeks unspecified damages. It is difficult to see the action as anything but an attempt to silence research about the platform. When the world’s wealthiest man makes it clear that he will sue, it substantially raises the stakes for criticising his favourite plaything.

But an angry Musk is not the only powerful individual targeting disinformation researchers. The US House judiciary chairman, Republican congressman Jim Jordan, has been seeking information from scholars who have studied the amplification of falsehoods on digital platforms. These requests, aimed at university professors, demand years’ worth of communications in the expectation of exposing a “censorship regime” involving the researchers and the US government. Such requests are costly for institutions to comply with, and they add to the emotional burden on scholars, who often face harassment once their alleged role in “censoring” social media is reported.


This constellation of factors—increasing disinformation on some platforms, the closure of tools used to study social media, lawsuits and subpoenas targeting disinformation researchers—suggests we may face an uphill battle in the near future to understand what happens in the digital public sphere. That’s very bad news as we head into 2024, a year that features key elections in countries including the UK, Mexico, Pakistan, Taiwan, India and the US.

Elections in Taiwan are of special interest to China, and journalists report that Taiwan has been flooded by disinformation portraying the US as a threat to the territory. One story claimed that the Taiwanese government would send 150,000 blood samples to the US so America could engineer a virus to kill Chinese people. The goal of these stories is to encourage Taiwanese voters to oppose alliances with the US and push for closer ties to mainland China. Taiwanese NGOs are developing fact-checking initiatives to combat false narratives, but are also affected by reduced access to information on social media.

The prime minister of India, Narendra Modi, has enacted legislation to combat fake news on social media, and it seems likely that these new laws will target government critics more effectively than Modi’s supporters. The 2024 US presidential election, meanwhile, is shaping up to be a battle of the disinformation artists. Serial liar Donald Trump, who made more than 30,000 false or misleading claims in his four years in office, is competing not only against the incumbent Joe Biden, but against anti-vaccine crusader Robert F Kennedy Jr, who was banned from Instagram for medical disinformation before having his account restored when he became a presidential candidate.

If there is any hope for our ability to understand what really happens on social media next year, it may come from the European Union, where the Digital Services Act demands transparency from platforms operating on the continent. But enforcement actions are slow, and wars and elections are fast by comparison. The surge of disinformation around Israel and Gaza may point to a future in which what happens online is literally unknowable.