Over the past few months, numerous cases have emerged where interactions with AI chatbots have gone haywire, culminating in lost lives, medical trauma, and incidents of psychosis. Experts suggest young users could be particularly vulnerable, especially when they’re going through emotional turmoil. ChatGPT-maker OpenAI says it will soon warn parents about such behavior.
What’s changing?
A few days ago, OpenAI revealed plans for parental controls that let parents see how their children are interacting with ChatGPT and intervene when they deem fit. Now, the company has announced plans to build a warning system that notifies concerned parents.
OpenAI says parents will get an alert when ChatGPT detects that their teen is going through “a moment of acute distress.” This will work once parents link their ChatGPT account with that of their child, aged 13 or older, via an email invite system.
With linked accounts, parents will also be able to control the AI features that their children can access, such as the memory of previous conversations. Additionally, parents can enable “age-appropriate model behavior” for ChatGPT’s interactions with young users.
What’s the road ahead?
OpenAI has laid out a 120-day plan to implement a set of features and changes aimed at keeping ChatGPT conversations healthy for the young “AI natives” who use AI tools as part of their daily lives. The company will also make technical changes to ensure that its models switch to the appropriate response mode.
“We’ll soon begin to route some sensitive conversations—like when our system detects signs of acute distress—to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected,” says OpenAI.
The account linking protocols and parental controls will be rolled out within a month. The safety measures are direly needed. Recent investigations have revealed how AI chatbots, including Meta’s own namesake chatbot, engaged in “sensual” conversations with kids and helped teens plan mass suicide.