
OpenAI Plans Stricter Protections for Teens, Expands Privacy for Adult Users

OpenAI aims to implement a privacy protection system in ChatGPT, similar to the privilege system used by lawyers and doctors.


OpenAI also stated that adult users will be able to steer the AI models with more freedom

Highlights
  • OpenAI is also working on an age-prediction system
  • Stricter safeguards apply to ChatGPT users suspected to be under 18
  • Users will have the chance to prove their age if mistakenly tagged

OpenAI announced several new measures on Tuesday to protect teenagers and children using ChatGPT and its other products. At the same time, the San Francisco-based artificial intelligence (AI) firm said it will adopt a “Treat our adult users like adults” approach for users aged 18 and above. This means the chatbot could provide information related to self-harm as long as it is presented as being for “educational purposes”. The company also hinted that it is working on a privilege system for adult users, which would prevent even OpenAI employees from accessing user conversations.

OpenAI Shares Plans to Tackle Safety vs Privacy Problem

In a post, the AI giant stated that it is focusing on safety for teenagers over privacy and freedom. The trade-off means that users under the age of 18 will face stronger monitoring and restrictions when it comes to responses. OpenAI is building an age-prediction system that will automatically estimate the age of the user based on how they interact with ChatGPT.

When ChatGPT identifies a user as a minor, it will automatically shift to the under-18 experience, which comes with stricter refusal rates, parental controls, and other safeguards. The company had previously detailed these mechanisms.

While OpenAI admits that its system can sometimes misjudge a user's age, the company said it will default users to the safer experience whenever the system is in doubt, in order to “play it safe.” Users who are mistakenly flagged will have the option to prove their age.

“In some cases or countries, we may also ask for an ID; we know this is a privacy compromise for adults, but believe it is a worthy trade-off,” the post added.

Notably, OpenAI says that if a teenager discusses suicidal ideation with the chatbot, the company's system will first attempt to contact the user's parents and, if that is not possible, will contact the authorities. The new system is likely a response to the case of a teenager who died by suicide after reportedly receiving assistance from ChatGPT.

For adults, OpenAI is taking a different approach. Describing its policy as “Treat our adult users like adults,” the AI firm said that adult users will get more freedom to steer the AI models the way they want. This means users can ask ChatGPT to respond in a flirtatious manner, or even provide information about suicide, as long as that is to “help write a fictional story.”

OpenAI highlighted that this freedom will not apply to queries that seek to cause harm or undermine anyone else's freedom, and safety measures will still apply broadly as they currently do.

Additionally, the ChatGPT maker is also developing an advanced security system to keep user data private. Since users are discussing increasingly personal and sensitive topics with the chatbot, the company said that some level of protection should be applied to these conversations.

Calling it similar to the privilege system a person gets when they talk to a lawyer or doctor, OpenAI said, “We have decided that it's in society's best interest for that information to be privileged and provided higher levels of protection.”

The company explained that there will be exceptions to this privilege. Presenting an example, it said that when a conversation includes a threat to someone's life, plans to harm others, or societal-scale harm, these conversations will be flagged and escalated to human review.



Akash Dutta
Akash Dutta is a Chief Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food. More


© Copyright Red Pixels Ventures Limited 2025. All rights reserved.