Artificial Intelligence and Mental Health: An Emerging Issue
OpenAI has released new data exposing the scale of mental health–related conversations occurring on ChatGPT, revealing that more than a million people discuss suicide or acute emotional distress with the chatbot every week.
According to the report published on Monday, approximately 0.15% of ChatGPT’s 800 million weekly active users show “explicit signs of possible suicidal planning or intent.” That small percentage still represents a staggering number of people: 0.15% of 800 million works out to more than a million users per week turning to ChatGPT to discuss suicidal thoughts.
The company also disclosed that a similar proportion of users demonstrate heightened emotional attachment to ChatGPT, while hundreds of thousands display communication patterns consistent with psychosis or mania during their weekly chats.
Although OpenAI emphasized that such cases are “extremely rare,” the platform’s vast user base means that the issue cannot be overlooked.
OpenAI’s Efforts to Respond Responsibly
The findings were released alongside OpenAI’s broader efforts to make ChatGPT more responsive and sensitive to users experiencing mental health crises. The company stated that its latest improvements were developed in collaboration with more than 170 mental health professionals, who analyzed and provided feedback on how the model interacts with emotionally distressed individuals.
These experts observed that the latest version of ChatGPT now responds “more appropriately and consistently” compared to earlier iterations. OpenAI explained that this update was driven by the need to ensure that the chatbot handles sensitive conversations with greater empathy, care, and precision.
Past Incidents and Legal Challenges
Concerns surrounding the relationship between AI and mental health have grown rapidly in recent months. Several reports have warned that AI chatbots may unintentionally worsen users’ conditions by validating or reinforcing harmful thought patterns.
For OpenAI, the stakes are particularly high. The company currently faces a lawsuit filed by the parents of a 16-year-old boy who allegedly shared suicidal thoughts with ChatGPT before taking his own life. In addition, the attorneys general of California and Delaware have warned OpenAI that it must strengthen its protections for young users, concerns that could directly affect the company’s ongoing restructuring.
Enhancements in GPT-5’s Safety Features
As part of the same announcement, OpenAI shared performance data for its newest model, GPT-5, which reportedly shows major improvements in addressing mental health–related conversations.
The company claims that GPT-5 provides 65% more desirable responses to mental health prompts compared to its predecessor. In safety evaluations involving suicide-related discussions, GPT-5 achieved 91% compliance with OpenAI’s internal behavioral standards, up from 77% in the previous GPT-5 release.
Additionally, OpenAI said that GPT-5 maintains its safety guardrails more effectively during extended conversations, addressing one of the critical shortcomings of earlier versions.
Expanded Safety Testing and Parental Controls
OpenAI announced that it is expanding its baseline safety testing to include benchmarks for emotional dependency and non-suicidal mental health emergencies. This will allow the company to better monitor, measure, and refine the model’s safeguards.
To protect younger users, OpenAI is also introducing new parental controls and an age prediction system designed to automatically identify minors using ChatGPT and apply stricter safety settings.
Even as it continues to update its technology, OpenAI acknowledged that such rapid advancement involves complex trade-offs around sustainability and ethical responsibility: an ongoing balance between innovation and accountability.
Balancing Innovation and Responsibility
Despite the improvements, OpenAI admits that undesirable responses still occur in some instances. The company continues to make older, less-safe models such as GPT-4o available to paying subscribers, which raises concerns about consistency in user protection across different versions.
Earlier this month, CEO Sam Altman claimed that OpenAI had “managed to mitigate the serious mental health issues” associated with ChatGPT, though he offered no evidence at the time. The data released on Monday appears to support his statement, even as it underscores the continued challenges in addressing the emotional impact of AI interactions.
Support Resources
If you or someone you know is struggling with suicidal thoughts, help is available:
- 988 Suicide & Crisis Lifeline (U.S.): call or text 988, or dial 1-800-273-8255
- Crisis Text Line (U.S.): text HOME to 741741 for immediate support
- Outside the U.S.: Visit the International Association for Suicide Prevention (IASP) for a global directory of mental health resources.