OpenAI aims to implement a privacy protection system in ChatGPT, similar to the legal privilege that protects conversations with lawyers and doctors.
OpenAI also stated that adult users will be able to steer the AI models with more freedom
OpenAI announced several new measures on Tuesday to protect teenagers and children using ChatGPT and its other products. At the same time, the San Francisco-based artificial intelligence (AI) firm said it will adopt a “Treat our adult users like adults” approach for users aged 18 and above, meaning the chatbot can provide information about self-harm as long as it is framed as being for “educational purposes”. The company also hinted that it is working on a privilege system for adult users that will prevent even OpenAI employees from accessing user conversations.
In a post, the AI giant said that, for teenagers, it is prioritizing safety over privacy and freedom. The trade-off means users under the age of 18 will face stronger monitoring and tighter restrictions on responses. OpenAI is building an age-prediction system that automatically estimates a user's age based on how they interact with ChatGPT.
When ChatGPT determines that a user is a minor, it will automatically shift to the under-18 experience, which comes with stricter refusals for sensitive content, parental controls, and other safeguards that the company has previously detailed.
OpenAI admits that the system may sometimes misjudge a user's age. In such cases, the company said it will “play it safe” and default users to the more restrictive experience whenever there is doubt, although users will get an option to prove their age.
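The decision logic OpenAI describes — default to the restricted experience whenever the age estimate is uncertain, with age verification as an override — can be sketched roughly as below. This is purely illustrative: the function, threshold, and data structure are assumptions for the sketch, not OpenAI's implementation.

```python
from dataclasses import dataclass

ADULT_AGE = 18
CONFIDENCE_THRESHOLD = 0.9  # assumption: below this, the estimate counts as "in doubt"

@dataclass
class AgeEstimate:
    predicted_age: int   # age inferred from how the user interacts with ChatGPT
    confidence: float    # the model's confidence in that estimate, 0.0 to 1.0

def select_experience(estimate: AgeEstimate, verified_adult: bool = False) -> str:
    """Route a user to the adult or the under-18 experience.

    Mirrors the policy described above: when the system is unsure, it
    defaults to the safer under-18 experience, and a user can override
    that default by proving their age (for example, with an ID).
    """
    if verified_adult:
        return "adult"
    if estimate.predicted_age >= ADULT_AGE and estimate.confidence >= CONFIDENCE_THRESHOLD:
        return "adult"
    # Doubtful or clearly underage: play it safe.
    return "under_18"

print(select_experience(AgeEstimate(25, 0.95)))                       # adult
print(select_experience(AgeEstimate(20, 0.60)))                       # under_18 (in doubt)
print(select_experience(AgeEstimate(16, 0.99), verified_adult=True))  # adult (age proven)
```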
“In some cases or countries, we may also ask for an ID; we know this is a privacy compromise for adults, but believe it is a worthy trade-off,” the post added.
Notably, OpenAI says that if a teenager discusses suicidal ideation with the chatbot, the company's system will first attempt to contact the user's parents and, if that is not possible, may contact the authorities. The new system is likely a response to the case of a teenager who died by suicide after reportedly receiving assistance from ChatGPT.
For adults, OpenAI is taking a different approach. Describing its policy as “Treat our adult users like adults,” the AI firm said adult users will get more freedom to steer the AI models the way they want. This means users can ask ChatGPT to respond in a flirtatious manner, or even to provide instructions about suicide, as long as the request is to “help write a fictional story.”
OpenAI highlighted that this freedom will not apply to queries that seek to cause harm or undermine anyone else's freedom, and safety measures will still apply broadly as they currently do.
Additionally, the ChatGPT maker is developing an advanced security system to keep user data private. Since users are discussing increasingly personal and sensitive topics with the chatbot, the company said a higher level of protection should apply to these conversations.
Comparing it to the privilege a person gets when speaking to a lawyer or doctor, OpenAI said, “We have decided that it's in society's best interest for that information to be privileged and provided higher levels of protection.”
The company explained that there will be exceptions to this privilege. As an example, it said that conversations involving a threat to someone's life, plans to harm others, or potential societal-scale harm will be flagged and escalated for human review.
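In other words, conversations are privileged by default and only a narrow set of categories breaks that privilege. A minimal sketch of that rule, assuming hypothetical category names and a simple flagging function (none of which are OpenAI's), might look like this:

```python
# Illustrative sketch only: the exception categories come from the article,
# but the function, names, and flow are hypothetical, not OpenAI's code.
EXCEPTION_CATEGORIES = {
    "threat_to_life",        # a threat to someone's life
    "plans_to_harm_others",  # plans to harm other people
    "societal_scale_harm",   # potential harm at a societal scale
}

def handle_conversation(detected_categories: set[str]) -> str:
    """Decide whether a conversation stays privileged or is escalated.

    By default a conversation is treated as privileged, so not even
    employees can read it; only the exception categories above are
    flagged and escalated to human review.
    """
    flagged = detected_categories & EXCEPTION_CATEGORIES
    if flagged:
        return f"escalated to human review: {sorted(flagged)}"
    return "privileged: no internal access"

print(handle_conversation({"medical_question"}))      # privileged
print(handle_conversation({"plans_to_harm_others"}))  # escalated
```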