It might have happened 10,000 miles away, but Australia’s ban on social media for under-16s has sparked new conversations in the UK about how we can better protect children from the harms of the internet. The debate has led to the UK government launching its own consultation on banning under-16s from using social media, though concerns remain about exactly how these measures could reshape children’s online habits. Unsurprisingly, the Online Safety Act—and its impact since first being enforced in 2025—is central to this debate. With new protections governing access to certain content online, has it moved the dial on online safety? What has worked well? And, crucially, what needs to change?
In partnership with VodafoneThree, former Prospect magazine editor Alan Rusbridger brought together industry experts, campaigners and policymakers for a roundtable to tackle these questions head-on.
Nicki Lyons, chief corporate affairs and sustainability officer, VodafoneThree, set the scene for the conversation: “We believe online safety is critical and digital literacy is essential for life. We want to work together for lasting change, but we also want to advocate for the positives that are out there and to help young people experience the internet in the same way that many of us do, with the right safety, restrictions and accessibility.
“Children’s mental health and wellbeing can suffer if they’re able to access content that isn’t safe or suitable for them,” she continued. “There needs to be a long-term solution.”
Progress but not fast enough
Held some four months after the UK’s online safety codes came into effect, the conversation revealed just how complex administering them had become for regulators. “The Act sets out over 100 harms to be targeted, and we’re dealing with some deep-seated problems that have built up over a number of decades,” explained one participant close to the rollout of the new rules.
The first wave of enforcement, they explained, has been focused on stopping access to the worst kinds of harms, including pornography, with the introduction of mandatory age verification on websites: “The top ten sites now have age gates on, which means it’s much harder for children to access—three quarters of traffic to porn sites now goes through those gates, which feels like an important step forward.”
But despite this progress, campaigners and child safety experts are pushing for greater ambition and faster implementation. “We need to do better, we want to do better—but we have to admit that we’re failing,” warned one campaigner. “It’s a very complex Act and it needs to be implemented properly, but it is taking a long time,” said another policy expert.
Risk assessments were raised as a key example of this in action. While Ofcom has been able to require certain platforms to carry these out, the Act doesn’t ensure platforms take steps to address the risks identified. “You potentially have a gulf between what platforms know is happening on their services and what Ofcom has mandated they do about it,” suggested one participant.
Looking beyond compliance with safety by design
Regulation that addresses harm as it occurs is important, but participants agreed it can only take us so far. Regulators, campaigners, experts and industry alike believe that safety cannot be bolted on at the end. It has to be baked into platforms from the start, a concept known as safety by design. As one participant argued, “if we don’t have that, we’ll always be working against the tide.”
This mindset is especially important when thinking about content recommendation algorithms, which experts argued were pushing young people towards repetitive, harmful content that they may not even have looked for. “We need a prohibition on engagement-based algorithms, which are actually a form of manipulation,” said one expert—citing the consistency of pushed messages and gradual desensitisation as a real risk to children.
However, not everyone agrees on how far safety by design can take us in the current climate. While all agreed it would be preferable to see big tech prioritise safety from day one, there was also an understanding that, in many cases, retrofitting safety measures is the more realistic option. Lyons, however, pushed for a more ambitious mindset: “There’s clearly more that can be done. With the pace of technological innovation, how do we make sure safety by design exists at the heart of the internet? How do we make content providers do the right thing, especially when there are age limits in place?”
From staying safe to learning how to thrive online
With social media bans being considered around the world, the group warned that the realities of the internet cannot be avoided forever. Preparing young people for the risks and equipping them to understand the benefits of the internet was an important focus for several participants looking to improve the UK curriculum.
One participant argued that we must not pitch media literacy merely as “keeping yourself safe online”: “We need to be teaching children how to navigate this world they are growing up in, give them the skills of critical analysis, of challenging and weighing up what is true or false.”
Indeed, the internet has many benefits to offer young people. “It’s about helping them access the right parts of the internet where the educational content and opportunity sits,” Lyons added, “where someone who’s lonely can connect with peers in a positive way, or where young people can access the world’s news and information.”
As one campaigner suggested, children themselves have a role to play in how we reshape their experiences online. “We need to be building trust in data security and involving schools, educators and parents, but we also need to make sure we’re getting children’s voices and experiences out there, so we know where the changes need to be made.”
AI chatbots and the next frontier for online safety
Looking to the future, one issue emerged above all others: the new, poorly understood challenges created by AI chatbots. Indeed, one campaigner warned that anthropomorphic chatbots are starting to replicate components of human friendships and even provide guidance and support for some children.
The session was held a day before the Secretary of State for Science, Innovation and Technology, Liz Kendall, said she is “especially worried” about the risk of young people forming unhealthy relationships with generative AI tools. The group agreed that regulators need to act quickly to support young people, and prevent these relationships from doing harm.
One participant went further, calling for radical constraints on the most harmful apps, arguing that the time to act is now: “This is deception-based technology, and I do think some things need to be banned, at least for children. We need legislation with real teeth that is willing to block platforms that don’t comply, and a shift in how we think about freedom. Because protecting people, especially children, is not authoritarian—it’s the opposite.”
Since then, there has been a climbdown by X over Grok AI, with a policy change and the deployment of new safeguards for the chatbot following a large-scale public, regulatory and government response. But proactive, preventative action remains key.
A safer internet is a competitive advantage
Drawing together the discussion themes, participants argued that children’s online safety is not just a moral responsibility, but an economic opportunity for the UK. If Britain can match the ambition of the Online Safety Act with bolder implementation, genuine safety by design and investment in media literacy, could it reduce pressure on public services and shape a new wave of responsible digital technology?
Recent NSPCC research suggests UK businesses could unlock up to £3bn in additional revenue by prioritising children’s safety online. As one contributor put it, “The UK needs to be confident enough now to make the argument both in terms of the economic benefits of a safer online world and the economic benefits of innovative safety products that actually keep everyone, including children, safer online”. The prize for getting child online safety right is not only less harm and healthy social development, but a stronger digital economy.
This conversation took place under the Chatham House Rule