OpenAI recently said it plans to introduce parental controls for ChatGPT before the end of this month.
The company behind ChatGPT has also revealed it’s developing an automated age-prediction system designed to determine whether a user is under 18 so that it can offer an age-appropriate experience with the popular AI-powered chatbot.
In cases where the system is unable to predict a user’s age, OpenAI may ask for ID so that it can offer the most suitable experience.
The plan was shared this week in a post by OpenAI CEO Sam Altman, who noted that ChatGPT is intended for people 13 years and older.
Altman said that a user’s age will be predicted based on how they use ChatGPT. “If there is doubt, we’ll play it safe and default to the under-18 experience,” the CEO said. “In some cases or countries, we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.”
Altman said he wanted users to engage with ChatGPT in the way they want, “within very broad bounds of safety.”
Elaborating on the issue, the CEO noted that the default version of ChatGPT is not particularly flirtatious, but said that if a user asks for such behavior, the chatbot will respond accordingly.
Altman also said that the default version should not provide instructions on how someone can take their own life, but added that if an adult user is asking for help writing a fictional story that depicts a suicide, then “the model should help with that request.”
“‘Treat our adult users like adults’ is how we talk about this internally; extending freedom as far as possible without causing harm or undermining anyone else’s freedom,” Altman wrote.
But he said that when a user is identified as under 18, flirtatious talk and discussion of suicide will be excluded across the board.
Altman added that if a user who is under 18 expresses suicidal thoughts to ChatGPT, “we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.”
OpenAI’s move toward parental controls and age verification follows a high-profile lawsuit filed against the company by a family alleging that ChatGPT acted as a “suicide coach” and contributed to the suicide of their teenage son, Adam Raine. The teenager reportedly received detailed advice about suicide methods over many interactions with the chatbot.
It also comes amid growing public and regulatory scrutiny of the risks AI chatbots pose to vulnerable minors, including mental health harms and exposure to inappropriate content.
