What’s happened? OpenAI has changed how ChatGPT handles delicate conversations when people turn to it for emotional support. The company has updated its Model Spec and its default model, GPT-5, to improve how ChatGPT responds in sensitive conversations involving psychosis or mania, self-harm and suicide, and emotional reliance on the assistant.
This is important because: emotional reliance on AI is a real phenomenon, in which users form one-sided attachments to chatbots.
- OpenAI estimates that about 0.15% of weekly active users show signs of emotional attachment to the model, and 0.03% of messages point to the same risk.
- That same 0.15% figure applies to users whose conversations include explicit indicators of suicidal planning or intent, while 0.05% of messages contain indicators of suicidal ideation or intent.
- Scaled to ChatGPT’s massive user base, that’s over a million people forming emotional ties with AI.
- OpenAI reports significant improvements after the update: undesirable responses in these domains fell by 65–80%, and emotional-reliance-related failures dropped by about 80%.
Nadeem Sarwar / Digital Trends
How it’s improving: The updated version introduces new rules around mental health safety and real-world relationships, ensuring the AI responds compassionately without pretending to be a therapist or a friend.
- OpenAI worked with more than 170 mental-health experts to reshape model behavior, add safety tooling, and expand guidance.
- GPT-5 can now detect signs of mania, delusions, or suicidal intent, and responds safely by acknowledging feelings while gently directing users to real-world help.
- A new rule ensures ChatGPT doesn’t act like a companion or encourage emotional dependence; it reinforces human connection instead.
- The model can now prioritize trusted tools or expert resources when those align with user safety.
Why should I care? Emotional and ethical questions don’t just concern adults forming attachments with chatbots; they also touch on how AI interacts with kids who may not fully understand its impact.
- If you have ever confided in ChatGPT during a rough patch, this update is about ensuring your emotional safety.
- Going forward, ChatGPT will be more attuned to emotional cues and will point users toward real-world support rather than trying to replace it.
