In the winter of 2011-12, public enthusiasm for social media arguably reached its zenith. Less than a decade after Facebook emerged from a Harvard dorm room, the Arab Spring suggested a new form of politics in which groups of people, loosely organised on free digital platforms such as Facebook, Twitter and YouTube, could unseat dictators.
The actual importance of social media in building these movements is fiercely debated by scholars, and there’s evidence that Facebook repeatedly took down pages set up by Egyptian organisers for violating its policy insisting on the use of real names. And yet Facebook in particular, and social media more broadly, enjoyed a victory lap. Protesters in Tahrir Square, celebrating the ousting of Hosni Mubarak in February 2011, held up signs thanking Facebook for its role in their revolution.
From that lofty peak, public opinion of social media has descended into a deep trough. In 2016, suspicions that the Brexit vote was swayed by misinformation spread on social media were reinforced by Cambridge Analytica’s overblown claims that its systems could manipulate public opinion via social media targeting. Later that year, Donald Trump’s firehose of untruths on Twitter, from alleging Ted Cruz’s father was linked to Lee Harvey Oswald before the assassination of JFK to misrepresenting his academic achievements at business school, helped elevate him to power.
Attempts to combat misinformation on social media during Covid and the 2020 US presidential election, for instance through the use of factcheckers on Facebook and Instagram, led to a conservative counterreaction in the name of free speech and the rise of explicitly political social media platforms like Truth Social and X under Elon Musk. It is possible to draw a straight line from these events through to today and the extraordinary sight of Trump threatening genocide against Iran and then picking an online fight with the pope.
Just as some of us worry that social media has coarsened our political climate and damaged democracy, at least as many worry that it is harmful to mental health, especially that of young people. Psychologist Jonathan Haidt has tapped into a deep well of parenting anxieties with his bestselling 2024 book, The Anxious Generation, which connected smartphones and social media to anxiety, depression, disordered eating and suicidality. Many researchers believe Haidt’s writing is alarmist, pointing out that most teenagers don’t appear to be negatively affected by social media, though a small minority are. Social psychologist Candice Odgers has commented that: “Hundreds of researchers, myself included, have searched for the kind of large effects suggested by Haidt. Our efforts have produced a mix of no, small and mixed associations.”
But some policymakers have taken Haidt’s arguments to heart, with Australia introducing a ban in December 2025 on under-16s having social media accounts—a policy which so far seems to have had little effect in actually keeping teens away—and similar legislation under discussion around the world, including in the United Kingdom.
For the last decade, societies have been trying to figure out how to hold these powerful companies responsible for alleged harms not just to individuals but also to democracy, given the ease with which misinformation, disinformation and extremist views can be disseminated. For years, the big platform companies have seemed invulnerable, a position enhanced by the alliance between tech company CEOs and Trump. But three court cases in individual US states—California, New Mexico and Massachusetts—may have opened a new way to hold platforms responsible, circumventing a defence that has shielded them for years: Section 230 of the 1996 Communications Decency Act.
In California, attorneys for a 20-year-old plaintiff known as “KGM” successfully argued that YouTube and Instagram had been designed to be addictive, and were therefore responsible for damage to her mental health. Meta and Google deny this and have appealed the ruling. In New Mexico, the state attorney general won the first round of a case against Meta, persuading the court that it operated Instagram in a way that directly endangered children and misled consumers about the safety of the platform. Meta intends to appeal this verdict too. And in Massachusetts, a state judge ruled that Section 230, which has provided social media companies with protection from a broad class of liabilities, did not apply in cases where Meta was being accused of “designing a social media platform that capitalises on the developmental vulnerabilities of children”.
In these cases, the plaintiffs didn’t go after platforms over harassing or problematic content posted by users—where the liability of social media platforms is typically limited by Section 230—but over how the services were designed and operated. In KGM v Meta, lawyers for the plaintiff argued that social media companies had created a technology as dangerous and addictive as gambling machines or tobacco. It was not specific pieces of content that harmed KGM, her attorneys argued, but the overall design of these services, which made it hard for her to break away and led to her depression, anxiety and body dysmorphia.
The comparison to tobacco is especially terrifying for Meta and Google, the defendants in the suit. In the 1990s, more than 40 US states began litigation against US tobacco companies, seeking to recover the substantial costs faced by state healthcare systems due to lung cancer and other smoking-linked illnesses. States ultimately received more than $200bn in a 1998 “Master Settlement Agreement”, a huge amount for the time.
Meta and Google will appeal the California court’s decision, and, as Stanford legal scholar Evelyn Douek told me, “It’s not obvious that the ruling will survive. And that’s not even getting to the First Amendment issues here—because even if Section 230 were repealed tomorrow, the First Amendment would still require important limitations on the kinds of liability you could impose on platforms.”
The legal landscape around social media platforms is shifting dramatically and rapidly. Understanding how platforms like Instagram, YouTube and TikTok went from a position of apparent invulnerability to one where they could end up being treated like tobacco or asbestos means going back to the earliest days of the world wide web and the history of Section 230—the “26 words that created the internet”.
On 3rd July 1995, Time magazine published a cover story that almost destroyed the nascent consumer internet. The world wide web—Tim Berners-Lee’s invention that made the internet vastly more user-friendly and image-filled—had barely entered public consciousness when a research paper broke the surprising news that the internet was full of pornography.
Marty Rimm, an electrical and computer engineering undergraduate at Carnegie Mellon University in Pittsburgh, published a paper in the Georgetown Law Journal claiming that 83.5 per cent of images shared in Usenet discussion groups were pornographic. While the paper was an impressive piece of showmanship, it had serious shortcomings. Rather than facing review by Rimm’s computing peers, the paper was vetted by law students with no expertise in quantitative research or the nascent field of internet studies. Internet scholars subsequently pointed out flaws that should have been caught: most of Rimm’s analysis covered explicitly adult bulletin board services, and only nine of the 11,576 websites he analysed (less than 0.08 per cent) contained anything that could be considered pornography.
Rimm gave a Time reporter exclusive access to a preprint of his study and was rewarded with a cover story titled “On a Screen Near You: Cyberporn”, which reported that pornography on the internet was “popular, pervasive and surprisingly perverse”. The cover image showed a young boy, face washed out by the light of a computer monitor, agog at the “surprisingly perverse” content he had stumbled upon. Time, in turn, gave an exclusive on the story to ABC’s Ted Koppel, anchor of a hugely influential news show, Nightline, which had a Christian conservative leader on to discuss the story’s implications.
The deeply flawed story had significant consequences both for Rimm—who, after facing an online backlash, changed his name and disappeared from public view for decades—and for the story’s author, Philip Elmer-DeWitt, who received so much criticism that he left the technology beat for 12 years. But it inspired Nebraska senator James Exon to propose legislation, the Communications Decency Act, banning “indecent” content, including any “comment, request, suggestion, proposal, [or] image” that could be viewed by anyone under 18. Liability under the act applied to anyone who posted—or transmitted—the content. Exon lobbied fellow senators for the bill by showing off his “blue book”, a binder filled with pornographic images from the web, presumably printed by an aide, as the 75-year-old was not known to be tech savvy.
Two members of the House of Representatives who were better with technology—Republican Chris Cox and Democrat Ron Wyden—saw a way of both making Exon’s legislation, which sailed through the Senate, less awful and overcoming a problematic court ruling on platform liability from the year before. In 1995, a New York judge ruled that the online service Prodigy was responsible for a defamatory statement posted by an anonymous user on its platform. Because Prodigy maintained content guidelines prohibiting harassment and violations of community standards, the judge reasoned, it should be treated as a publisher; CompuServe, which had no such guidelines, had escaped liability in a similar case in 1991.
Wyden and Cox saw a dangerous implication of the Prodigy decision: to avoid liability for what their users posted, platforms could simply choose not to moderate online spaces. Cox was a user of both Prodigy and CompuServe and saw how an unmoderated space could quickly spin out of control. He and Wyden wrote a “good Samaritan” clause for the Communications Decency Act that made clear that online platforms would not face liability as publishers just because they took steps to clean up their communities. To ensure these protections had teeth, they crafted “the 26 words that created the internet”, Section 230(c)(1): “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The idea is simple: when a user publishes on a platform like Prodigy or CompuServe, the user, not the platform, is the publisher—so anyone seeking damages for defamation or other harms must sue the user, not the platform.
The implications of Section 230 were not immediate: most legal observers believed that the whole of the Communications Decency Act was so wildly unconstitutional that it would not survive review by the US Supreme Court. They were right. In June 1997, the court ruled that the internet was entitled to the same free-speech protections as the printed press, and that the prohibitions on indecency that applied to radio broadcasts were inappropriate for the internet, since someone would have to take “a series of affirmative steps” to access explicit online materials. While the vast majority of the Communications Decency Act was struck down, Section 230 remained: it was deemed “severable” from the rest of the unconstitutional statute, and has survived subsequent court challenges.
It’s hard to imagine the contemporary internet without Section 230 and, depending on how you feel about the current state of internet culture, that could be a good or a bad thing. The internet before 1996 was a hodgepodge of standalone websites launched by people competent enough to run them. Universities often provided webserver space for their students, but could easily sanction users who behaved poorly.
A first wave of internet companies—including Tripod.com, where I was employed—began exploring the idea of allowing users to create their own homepages hosted on the company’s servers, in exchange for the right to sell ads on those pages. Before Section 230, this was an extremely risky proposition: if someone used a webpage to attack or harass someone, our company might be forced out of business by the resulting lawsuit. We looked at various ways we could review the millions of webpages we hosted to identify legal risks, before concluding that doing so was impossible—and that it might subject us to more legal liability if we actively tried. After Section 230, that all changed: our responsibility was to forward a legal threat to the relevant user and, if they didn’t tell us they would fight the legal action, to take down their page.
Section 230 didn’t just enable our webpage hosting business: it made possible a slew of businesses that define the contemporary internet. YouTube would have faced similar challenges in determining whether any of the billions of videos it hosts could lead to a lawsuit. Wikipedia has cited the protections from “intermediary liability”—meaning a platform’s liability for the behaviour of its users—as crucial for building encyclopaedia articles from the work of anonymous and pseudonymous contributors.
Yet Section 230 can yield unexpected outcomes. In 1995, days after the Oklahoma City bombing killed 168 people, posts by an anonymous troll on AOL made it appear that Kenneth Zeran, a Seattle-based TV producer and artist, was marketing tasteless shirts celebrating the attack. Zeran sought damages from AOL, but in 1997 the courts, citing Section 230, ruled that the law made AOL immune to his claim. A few years later, the actress Chase Masterson, who starred on Star Trek: Deep Space Nine, discovered a profile on Matchmaker.com that claimed to be her; when unsuspecting suitors interacted with the profile, her harasser sent them her home address and phone number. She sued the company behind Matchmaker, but her case too was thwarted by 230. Similar cases of sexual harassment led legal scholars Danielle Citron and Benjamin Wittes to advocate for a change to 230 under which “bad Samaritans”—companies that use 230 as a shield for obviously bad or negligent behaviour, like hosting nonconsensual sexual imagery, aka “revenge porn”—would be denied immunity.
Sexual harassment is not the only undesirable behaviour that’s found a defender in Section 230. After American graduate student Taylor Force was murdered in Israel by a member of Hamas, his father sued Facebook, arguing that content promoted by its algorithms was partially responsible for radicalising his son’s killer. The US Second Circuit ruled against him, citing Section 230 as shielding the platform from liability, and the Supreme Court declined to hear an appeal. (Supreme Court decisions that absolved Twitter from liability for amplifying terrorist content in 2023 did not address Section 230.)
Despite its power in these cases, Section 230 has never been a bulletproof defence for platforms. The original law explicitly denies 230 protection to federal intellectual property claims, federal criminal liability and electronic privacy violations. And the shelter offered by the law has weakened over time. In 1998, the Digital Millennium Copyright Act gave platforms a parallel safe harbour for copyright—an area Section 230 never covered—but only if they complied with a notice-and-takedown regime designed to curb infringement. Two decades later, a package of laws designed to combat sex trafficking, known as SESTA/FOSTA, removed Section 230 protections from businesses knowingly facilitating sex trafficking. Backpage, a popular site used by sex workers to advertise—and often accused of benefitting from sex trafficking—was shut down, while Craigslist removed categories of adult ads popular with sex workers and their clients.
While the protections offered by 230 were limited by Congress and the courts, it took on new life as the bête noire for politicians wanting to strike a blow against the power of the platforms. In January 2020, in an interview with the New York Times, presidential candidate Joe Biden responded to a question about falsehoods on Facebook, declaring, “Section 230 should be revoked, immediately should be revoked, number one.” Asked to elaborate, he explained that he felt that Facebook needed to have editors and to take responsibility for content circulating on the platform, and that Mark Zuckerberg and his company should face “civil liability” in the same way that the New York Times did.
In December the same year, Donald Trump, smarting from his recent election defeat, threatened to veto a crucial defence spending bill if Congress didn’t overturn Section 230, arguing that the law was “a serious threat to our national security and election integrity”. The threat was part of a larger case he sought to make that social media companies unfairly discriminated against right-wing views generally and his campaign in particular.
There was not much Trump and Biden agreed on as presidential rivals. But by 2020, frustration with the power of platforms like Facebook was so widespread that both Republicans and Democrats were sticking it to Silicon Valley billionaires. Neither threat amounted to much: Trump made good on his veto, only for Congress to override it within days, and Biden never made a sustained effort to revoke the law. Instead, 2020 was the year social media became an immensely attractive target for politicians from across the ideological spectrum.
For US politicians, regulating technology giants like Google and Meta has an obvious downside: these companies are huge economic success stories, major employers and pillars of the domestic economy. The tech industry, defined broadly, has provided one third of the country’s economic growth in the past decade. For European regulators, the calculus is less complex as they don’t experience the same economic benefits. The European Union, capable of setting policy for 27 nations representing nearly 14 per cent of global GDP, might have the heft to shape the behaviour of tech companies in a way most nations do not.
EU regulators took aim at tech platforms with 2016’s General Data Protection Regulation, a package of laws designed to give users more control over how platforms collect and use personal information. For many in the EU, the main consequence has been a ubiquitous cookie consent banner that users click through as quickly as possible. But EU fines on tech companies had exceeded €5bn by early 2025, including a €1.2bn fine levied on Meta by Irish regulators in 2023 for transferring European users’ data to the US with insufficient protections. While €1.2bn is a figure that might catch the attention of any corporation, it’s roughly half a per cent of Meta’s 2025 global revenues.
Introduced in 2022, the EU Digital Markets Act and the Digital Services Act are designed to force platforms to be more transparent to researchers and regulators, and to ensure fair online marketplaces and some interoperability between different messaging tools such as Slack and WhatsApp. The fines have kept coming too: €2.95bn levied on Google last year for favouring its own services in its ad markets, and €1.8bn on Apple in 2024 for abusing its position of power through its app store rules.
These fines are big enough to cause trade tensions between the US and Europe, with the US threatening tariffs to “combat digital service taxes, fines, practices, and policies that foreign governments levy on American companies”. (Given the Trump administration’s tendency to threaten tariffs in response to virtually any slight, real or imagined, it’s hard to know how seriously the EU should take this.) As for the companies themselves, they seem to view European regulations as a speed bump—they might briefly slow a company down, but they aren’t causing major changes in direction.
It’s worth noting that the EU regulations haven’t focused explicitly on the problems most widely cited as cause for concern over social media: addiction, the effects on mental health, political polarisation and the impact of widespread misinformation and disinformation on democracy. But other countries have acted. Australia has banned under-16s from several platforms, citing the exposure of young people to violent, misogynistic and other dangerous content. Neither users nor their parents face fines for violating the ban; instead, platforms can be fined up to A$49.5m (£26m) for serious, sustained failure to block young users. Beyond the countries considering similar legislation, including the UK and Canada, the US state of Nebraska has passed a law restricting platforms from exposing young people to features, such as infinite scroll, that are believed to contribute to social media addiction.
Nebraska’s social media law won’t be enforced until July 2026, and is likely to face court challenges. The strong protections for freedom of expression under US law make it difficult for states to restrict access to speech, even when the goal is to protect child safety. In the US, children have First Amendment rights to access information, even information that might be harmful to them: a 2011 Supreme Court decision (Brown v Entertainment Merchants Association) overturned a California law that had banned the sale of violent video games to children without parental consent. Between Section 230’s shield for platforms and young people’s rights to access information, it has been difficult to make a case under US law for restricting social media access.
This explains why the recent decisions in California and New Mexico are making platform companies so nervous—they bypass these protections and focus on the products’ design and the intent behind it, on what these companies and their algorithms do, or are prepared to do, to keep people scrolling. Meta, Google and the companies that settled with KGM out of court—Snapchat parent company Snap Inc and TikTok—are bracing themselves for a deluge of similar lawsuits. While the damages awarded to KGM—$4.2m in compensatory and punitive damages from Meta, $1.8m from Google—are modest, multiply those numbers by thousands of plaintiffs and they add up to significant liabilities.
If states follow the tobacco playbook, those numbers could increase by orders of magnitude, as state governments could seek to transfer costs for treating mental-health issues on to the platforms. Internationalise that strategy, and the payments could easily become an existential threat. The highest court in the state of Massachusetts has allowed a similar case to advance, explicitly rejecting Meta’s defence on Section 230 grounds, with Justice Dalila Wendlandt writing, “The claims do not seek to impose liability on Meta for information provided by third parties. Instead, the claims allege harm stemming from Meta’s own conduct, either by designing a social media platform that capitalises on the developmental vulnerabilities of children or by affirmatively misleading consumers about the safety of the Instagram platform… We decline Meta’s invitation to read [Section 230] immunity so broadly.”
While the California case focused on the idea that social media systems are dangerous products, the case brought against Meta by the New Mexico attorney general Raúl Torrez alleged that the company made a series of decisions that traded child safety for profitability: failing to enforce policies that could have prevented adults from grooming children on Instagram, and making such abuse impossible to detect. Asked by the attorney general to review internal Meta documents released as part of the case, child safety expert Brian Levine highlighted a document from 2020 indicating that, every day, 500,000 under-18 Instagram accounts experienced what Meta internally termed “inappropriate interactions with children”—and that was just for accounts in the English-speaking world.
Experts at the New Mexico trial argued that Meta could have taken actions to reduce these inappropriate interactions, including quickly flagging cases where adults contacted multiple people under 18 to whom they had no existing connections. Instead, another internal document suggested a “17 strike” policy for accounts suspected of sexual solicitation. According to Levine’s testimony, “The safety options they had available, some of which they did later, are not advanced computer science. And even if they were, this is one of the most capable companies in the world… In the end, I don’t believe they made decisions that were independent of revenue, that put kids first, and they often just decided to put their heads under the sand.”
Meta has been ordered to pay $375m in damages to New Mexico for violating the state’s consumer protection laws, but the trial is not yet over. Meta says it disagrees with the verdict and will appeal. In a second phase, New Mexico’s attorney general will argue at a bench trial—in front of a judge, not a jury—that Meta has created a “public nuisance”, meaning a systemic danger to the health and safety of residents, by running itself so irresponsibly. This second trial could lead to injunctive relief, where the judge orders the platform to make changes, such as introducing stringent age verification, to limit harmful behaviours.
This question of injunctive relief sounds a simple one: if social media has tools and policies harming people, let’s prevent it from using these tools! But figuring out what the specific harms of social media are, and how best to address them, will be anything but simple.
The KGM case in California reflected a growing public consensus that social media is dangerous, especially for teenagers. But the debate over Jonathan Haidt’s book reveals that the science is far from settled. One of the most cited studies on the topic, by Oxford researchers Amy Orben and Andrew Przybylski, looked at data on more than 350,000 adolescents and came to the surprising conclusion that the effects of social media overall were extremely modest: “The association we find between digital technology use and adolescent well-being is negative but small, explaining at most 0.4% of the variation in well-being.”
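To get a rough sense of scale (a back-of-the-envelope translation, assuming the 0.4 per cent refers to the share of variance explained, R², in the study’s models), the implied correlation is:

$$ |r| = \sqrt{R^2} = \sqrt{0.004} \approx 0.06 $$

a value most social scientists would describe as a very weak association.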
In other words, while heavy use of social media is slightly associated with teenagers being unhappy, other unremarkable factors—being left-handed, wearing glasses—had stronger associations with unhappiness. The 2019 paper is known in academia as “the potato paper” because it found that eating a diet high in potatoes had a similarly negative effect to being a heavy social media user.
Haidt and his camp have spent many months re-analysing Orben and Przybylski’s data and concluded that, while the overall negative effects of social media may be modest, they may be more serious for young women. As it happens, Orben and Przybylski somewhat agree: in a 2022 paper, they identified narrow developmental windows during which social media use is more strongly correlated with unhappiness later in life—heavy use by girls aged 11-13, by boys at 14-15 and by both sexes at age 19 seems to have more negative impacts than at other ages.
For those persuaded that social media is dangerous, this new study further supports cases such as KGM’s. But a better reading is this: the science is unsettled around the effects of social media, and there is unlikely to be as clear a correlation between its use and effects on mental health as there is between smoking and lung cancer. No one seems to benefit from being a heavy smoker, while some people do benefit from social media use—making important personal and professional connections for adults, finding social acceptance as adolescents—and many other people are largely unaffected.
In regulating social media, it’s going to be especially important to consider the ways in which young people experiencing stress elsewhere in their lives—from parental abuse or neglect, to exploring sexual orientation or gender identity, to bullying—might turn to social media for help and support. As Odgers wrote in her critique of Haidt’s book, “When associations over time are found, they suggest not that social media use predicts or causes depression, but that young people who already have mental-health problems use such platforms more often or in different ways from their healthy peers.”
All this raises questions about whether the sorts of remedies being proposed by governments all over the world will work as anticipated. Haidt has argued, “It’s as though we sent Gen Z to grow up on Mars when we gave them smartphones in the early 2010s in the largest uncontrolled experiment humanity has ever performed on its own children.”
In truth, we are now carrying out similar experiments by banning mobile phones from schools and blocking children from social media. This time, however, we may be able to study the effects rather than trying to intuit results after the fact. Orben is working with a team in Bradford, West Yorkshire, where thousands of teenagers will have their social media use limited during the day and blocked at night; half of the participating teens will serve as a control group with unrestricted access. This sort of experiment—a randomised controlled trial—is the gold standard for social science, and it might reveal whether blocking teens from social media helps them or harms them. More likely it will reveal what we are coming to suspect: it’s complicated.
Nathan Matias, a Cornell University social scientist who helps communities design their own experiments, and who has authored social media studies with Orben, worries that lawsuits might lead us to jump to conclusions about what remedies might work best. He told me: “In theory, lawsuits can make products safer by creating financial incentives for safety innovation. But I worry that that history will repeat and that we will jump out of Big Tech’s frying pan into the fire of untested, so-called safety products that enrich the same [venture capitalists] who got us into the mess.” He and Orben have advocated for platforms to work with researchers to allow much faster research on their effects and on the impact of changes to the platforms. “We need to speed up harm detection and solution testing to quickly understand and prevent harm, rather than wait for the next lawsuit. And if pundits and policymakers impose sweeping changes on young people without evidence, we need to hold them just as accountable as tech firms if those changes fail the next generation.”
Parents and children who believe they’ve been harmed by social media—and the lawmakers who seek their votes—may not be willing to wait for careful analysis of solutions. A flood of lawsuits that have figured out how to breach the Section 230 defence is likely to prompt responses from platforms, even if those responses are not all well researched and carefully tested.
For defenders of online speech—and I include myself in that camp—the good news from these court cases is that it may be possible to hold powerful platform companies responsible for any poor, negligent or cynical behaviour that causes harm without eliminating an often misunderstood law that, on balance, has protected valuable online speech. The internet has benefited enormously from growing up on commercial platforms in the US, where there are strong protections for speech, particularly political speech.
However, those First Amendment protections are worth very little without strong protections against defamation claims—a popular tool for silencing uncomfortable speech. Section 230 matters because free speech is meaningless if people do not have a place to speak; without protection from intermediary liability, few platforms would be brave enough to offer that space.
With protection from intermediary liability still in place, we can take on some of the key questions about social media that we face as a society. How many platform companies have been negligent around sexual exploitation of children and how do we now hold them responsible? And can society address the harms of social media to teenagers, however widespread they may be, without making these problems worse?