Technology companies are facing mounting pressure to strengthen protections for young users online after MPs rejected a proposed blanket ban on social media access for under-16s.
The Information Commissioner’s Office (ICO) and communications regulator Ofcom have written to several major platforms demanding clearer safeguards for children.
Companies including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube have been given until the end of April to outline what steps they are taking to improve age verification and prevent online grooming.
Regulators have also instructed platforms to explain how they are addressing potentially harmful algorithms and how updates are rolled out to users, with Ofcom calling for an “end to product testing on children”.
Ofcom Demands Stronger Age Verification And Child Safety Measures
Alongside Ofcom’s request, the ICO has contacted TikTok, Snapchat, Facebook, Instagram, YouTube and X, formerly Twitter, seeking details on how their age-verification systems protect children online.
The intervention comes after a Conservative-led proposal to introduce a nationwide social media ban for under-16s failed in the House of Commons. The measure was defeated by 307 votes to 173.
Although ministers initially opposed the idea, the government has now launched a consultation on the possibility of introducing such a ban, without committing to supporting it.
Australia became the first country to introduce a social media ban for children when its policy came into force in December last year.
Ofcom’s research found that age restrictions on platforms were frequently not enforced. Despite a minimum age of 13 on many sites, 72% of children aged between eight and 12 were still accessing services intended for older users.
Ofcom chief executive Dame Melanie Dawes said: “There is a gap between what tech companies promise in private, and what they’re doing publicly to keep children safe on their platforms.
“Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose, on services they can’t realistically avoid.
“That must now change quickly, or Ofcom will act.”
Regulators Threaten Enforcement Action Under Online Safety Rules
ICO chief executive Paul Arnold also warned that stronger action from industry was needed.
“With ever-growing public concern, the status quo is not working and industry must do more to protect children,” he said.
Ofcom confirmed it will publish a report in May detailing how the platforms responded to its requests. The regulator will also release research examining how the Online Safety Act has affected children’s online experiences during its first year.
If the responses fail to satisfy regulators, Ofcom said it “will be ready to take enforcement action”, and could also tighten its rules.
The ICO added that it had already contacted some of the “highest risk services” and warned that “further regulatory action” could follow if improvements are not made.
Mr Arnold said: “Our message to platforms is simple: act today to keep children safe online.
“There’s now modern technology at your fingertips, so there is no excuse not to have effective age assurance measures in place.”
Platforms Respond As Child Safety Campaigners Welcome Action
The renewed pressure on technology firms was welcomed by the Molly Rose Foundation, a charity created in memory of Molly Russell, a 14-year-old who died after viewing harmful content on social media.
The organisation said Ofcom was “turning up the heat on reckless tech firms and their dangerous products which continue to cause daily harm to children”.
A YouTube spokesperson said the company had spent more than a decade developing products specifically for younger audiences, and that the platform had been “designed to provide age-appropriate, high-quality experiences”.
“We are surprised to see Ofcom move away from a risk-based approach, particularly given that we routinely update them and other regulators on our industry-leading work on youth safety,” the spokesperson added.
Meta, the company behind Facebook and Instagram, said it had already implemented several measures requested by regulators, including artificial intelligence tools that estimate a user’s age from their activity, as well as facial age recognition technology.
“We also place teens in Teen Accounts, which offer built-in protections that limit who can contact them, the content they see, and the time they spend on our apps,” the company said.
A Roblox spokesperson said the platform remained in “regular dialogue” with Ofcom on protecting its users and had introduced more than 140 safety features over the past year, including mandatory age checks for access to chat features.
“While no system is ever perfect, we continue to strengthen protections designed to keep players safe and look forward to demonstrating our efforts in our ongoing dialogue with Ofcom,” the spokesperson said.
