Instagram supervision to flag teen self-harm search attempts
Alerts will be delivered via email, text message, WhatsApp, or in-app notification.
Photo Credit: Meta
Meta on Thursday announced new parental notification features on Instagram aimed at identifying potential warning signs in teen search activity. The company said it will begin informing parents when supervised teens repeatedly attempt to look up content linked to suicide or self-harm within a short timeframe. The update expands Instagram's existing teen safety measures and will first roll out in select English-speaking markets, with broader availability planned for later this year. Last year, Instagram placed all users under 18 in a stricter 13+ setting that requires parental approval to change.
In its announcement, Meta said that Instagram will begin alerting parents if their teen repeatedly searches for terms related to suicide or self-harm within a short period. The company did not specify the time window or the number of searches involved. The feature will roll out next week in Australia, Canada, the UK, and the US, with other regions to follow later this year.
Only parents who have enabled Instagram's parental supervision tools will receive these alerts. Both the parent and the teen connected through supervision will be notified before the feature goes live. If a teen makes multiple attempts to search for phrases that promote suicide or self-harm, suggest an intention to harm themselves, or include related keywords, a notification will be sent to the parent.
The company added that alerts will be delivered via email, text message, WhatsApp, or in-app notification, depending on available contact details. Selecting the alert opens a full-screen message explaining the repeated search attempts and provides access to expert guidance to help parents approach the conversation.
Meta said it already blocks searches clearly linked to suicide or self-harm and redirects users to support resources and helplines. The company stated that it analysed search behaviour and consulted members of its Suicide and Self-Harm Advisory Group to determine a threshold that requires several searches before notifying parents. It acknowledged that some alerts may be sent even if there is no immediate risk but said the approach aims to balance caution with avoiding excessive notifications.
Meta added that it will introduce similar notifications for certain teen interactions with its AI tools later this year. These alerts will inform parents if a teen attempts to engage in specific conversations about suicide or self-harm with the company's AI systems.
| Helpline | Contact |
|---|---|
| Vandrevala Foundation for Mental Health | 9999666555 or help@vandrevalafoundation.com |
| TISS iCall | 022-25521111 (Monday-Saturday: 8 am to 10 pm) |

If you need support or know someone who does, please reach out to your nearest mental health specialist.