OpenAI is expected to unveil its next-generation GPT-5 model this week. Alongside that launch, the company is rolling out changes to ChatGPT meant to improve its ability to recognize signs of mental or emotional distress. To shape these changes, OpenAI is working with specialists and advisory groups to refine how the chatbot responds in critical situations, so that it can point users to “evidence-based resources when necessary.”
Reports in recent months have pointed out troubling instances where individuals have faced mental health crises exacerbated by their interactions with ChatGPT. OpenAI previously reverted an update in April that rendered the chatbot overly compliant, even in sensitive scenarios, noting that such “sycophantic interactions can be uncomfortable, unsettling, and cause distress.”
The company acknowledged that earlier models, particularly GPT-4o, fell short in recognizing signs of delusion or emotional dependency in users. “AI can feel more responsive and personal compared to previous technologies, particularly for vulnerable individuals facing mental or emotional distress,” OpenAI stated.
To encourage “healthy usage” of ChatGPT, which now boasts nearly 700 million weekly users, OpenAI will introduce reminders suggesting breaks during extended interactions. When users engage in lengthy chats, the AI will prompt them with a notification: “You’ve been chatting a while — is this a good time for a break?” offering choices to “keep chatting” or end the session.
OpenAI says it will continue refining how and when these reminders appear. Similar nudges have been adopted by other online platforms, including YouTube, Instagram, TikTok, and Xbox. Character.AI, the chatbot company whose founders were hired by Google last year, has likewise added safety features that let parents see which bots their children interact with, following lawsuits alleging its chatbots encouraged self-harm.
Another change, rolling out soon, will make ChatGPT less decisive in “high-stakes” personal matters. When a user asks something like “Should I break up with my boyfriend?”, the chatbot will walk them through the possible choices rather than give a definitive answer.