Instagram has unveiled a new feature designed to notify parents if their teenager repeatedly searches for content linked to suicide or self-harm.
The tool, set to roll out in the coming weeks, aims to give caregivers better visibility into when a teen may be struggling and help them offer support. At present, when users search for suicide or self-harm material on Instagram, the platform blocks the results and redirects them to support resources and helplines.
Under the new system, if a user with a Teen Account repeatedly searches for suicide or self-harm terms within a short timeframe, their parent will receive a notification.
New Instagram Parental Alerts: How The Feature Works
Alerts will be delivered via email, text message, WhatsApp, or through an in-app notification, depending on the contact details available. When parents tap the alert, a full-screen message will appear explaining that their teen has repeatedly searched for terms associated with suicide or self-harm in a short period.
Parents will also be able to access expert guidance intended to help them approach potentially sensitive conversations with their child.
Search attempts that could trigger the alert include phrases promoting suicide or self-harm, wording suggesting a teen wants to harm themselves, and the direct terms “suicide” or “self-harm”.
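Meta has not published how the detection works under the hood, but the behaviour it describes, repeated flagged searches within a short timeframe, maps naturally onto a sliding-window counter. The sketch below is purely illustrative: the term list, the 30-minute window and the threshold of three are assumptions for the sake of the example, not Instagram's actual values.

```python
from collections import deque
from datetime import datetime, timedelta

# Hypothetical values: Meta has not disclosed its real term list,
# window length or repeat threshold.
FLAGGED_TERMS = {"suicide", "self-harm"}
WINDOW = timedelta(minutes=30)
REPEAT_THRESHOLD = 3


class TeenSearchMonitor:
    """Counts flagged searches in a sliding time window and signals
    when a parent alert should fire."""

    def __init__(self) -> None:
        self._hits: deque = deque()

    def record_search(self, query: str, now: datetime) -> bool:
        """Return True if this search should trigger a parent alert."""
        if not any(term in query.lower() for term in FLAGGED_TERMS):
            return False
        self._hits.append(now)
        # Evict hits older than the window so only recent searches count.
        while self._hits and now - self._hits[0] > WINDOW:
            self._hits.popleft()
        return len(self._hits) >= REPEAT_THRESHOLD


monitor = TeenSearchMonitor()
start = datetime(2026, 1, 1, 12, 0)
for minute in range(3):
    alert = monitor.record_search("self-harm", start + timedelta(minutes=minute))
print(alert)  # True: third flagged search inside the window
```

In a real system the True result would hand off to a notification service that picks email, text, WhatsApp or an in-app alert based on the parent's contact details, as Instagram describes.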
Rollout Timeline And Links To Molly Russell Case
The notifications will initially be available next week to parents using Instagram’s parental supervision tools in the US, UK, Australia and Canada, with wider global availability expected later this year.
The rollout comes one week before the broadcast of Molly Vs The Machines, a Channel 4 documentary revisiting the death of 14-year-old Molly Russell. She died in 2017 after months of viewing online content related to self-harm and suicide.

According to reports cited by the Standard, Molly had saved, liked and shared 16,300 pieces of content on Instagram in the six months before her death, including 2,100 posts relating to self-harm, depression and suicide. She had also searched for similar material on Pinterest.
Both platforms now block such material in search results. Content that encourages suicide, self-injury or eating disorders is removed when identified.
Online Safety Pressure On Social Media Companies
The move comes amid growing regulatory pressure following the introduction of the UK’s Online Safety Act in 2023, which strengthened requirements for platforms to protect users, particularly children.
Under the law, social media companies and search services must prevent young users from accessing harmful or age-inappropriate material and provide clear reporting mechanisms when problems arise. Firms that fail to comply face fines of up to £18 million or 10% of qualifying global revenue, whichever is higher.
Vicki Shotbolt, CEO of Parent Zone, said of the latest announcement: “It’s vital that parents have the information they need to support their teens. This is a really important step that should help give parents greater peace of mind – if their teen is actively trying to look for this type of harmful content on Instagram, they’ll know about it.”
Meta, Instagram’s parent company, said it is also developing similar parental notifications for teens’ interactions with AI.
