OpenAI has announced new updates to ChatGPT designed to improve how the AI system responds to users experiencing mental health distress, including signs of self‑harm, psychosis, or emotional dependence on artificial intelligence.
The company said the update, applied to its default GPT‑5 model, trains ChatGPT to better recognize sensitive conversations and respond to them safely. Working with mental health experts across 60 countries, OpenAI says it focused on three key areas: severe mental health symptoms such as psychosis or mania; self‑harm and suicide; and emotional reliance on AI, meaning cases where users express an unhealthy attachment to the chatbot.
These changes are part of OpenAI’s broader effort to integrate real‑world clinical expertise into model evaluation and safety testing. The company said its “Global Physician Network” includes more than 170 psychiatrists, psychologists, and primary care practitioners, who helped define desired AI behavior, evaluate model responses, and craft appropriate interventions.
Earlier this month, we updated GPT-5 with the help of 170+ mental health experts to improve how ChatGPT responds in sensitive moments—reducing the cases where it falls short by 65-80%. https://t.co/hfPdme3Q0w
— OpenAI (@OpenAI) October 27, 2025
Data shows both progress and concern
In an extensive data disclosure, OpenAI estimated that around 0.15% of ChatGPT’s 800 million weekly active users, or roughly 1.2 million people, have conversations that include explicit indications of potential suicidal intention or planning. Additionally, about 0.07% of users, or 560,000 individuals per week, show possible signs of psychosis or mania, while around 1.2 million users exhibit heightened emotional attachment to the chatbot.
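As a rough sanity check, the absolute figures follow directly from the percentages applied to the 800 million weekly active users cited above (the user base is OpenAI’s own estimate; the arithmetic below is simply a back‑of‑envelope verification):

```latex
% Back-of-envelope check against the reported 800 million weekly active users
\[
0.0015 \times 8\times10^{8} = 1.2\times10^{6}
\quad \text{(explicit indications of suicidal intention or planning)}
\]
\[
0.0007 \times 8\times10^{8} = 5.6\times10^{5}
\quad \text{(possible signs of psychosis or mania)}
\]
```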
While these percentages are small, experts highlighted their real‑world implications.
“Even though 0.07% sounds small, that’s still hundreds of thousands of people,” said Dr. Jason Nagata, a University of California professor who studies technology use among young adults.
OpenAI acknowledged that such conversations are “extremely rare,” but admitted that even a tiny share represents a significant number of people. The company said it is taking the risks “very seriously” and has committed to continuing collaboration with independent clinicians to monitor model performance.
Reducing unsafe model responses
Quantitatively, OpenAI reports major safety improvements. In internal and third‑party evaluations involving over 1,000 challenging mental‑health‑related conversations, the new GPT‑5 model produced 65–80% fewer undesired responses across all categories compared with previous versions.
For conversations related to self‑harm or suicide, undesired responses dropped by 52% relative to GPT‑4o. On similar tests covering psychosis or mania, undesired responses fell by 39%, while responses reflecting emotional over‑reliance were 80% less likely to fail compliance benchmarks.
The new model is also trained to gently discourage users from seeing ChatGPT as a substitute for human connection. For example, it reinforces that “real people can offer care that goes beyond words on a screen” and reminds users to reach out to trusted friends, family, or mental health professionals.
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
Now that we have…

— Sam Altman (@sama) October 14, 2025
Behind the improvements
To improve its safeguards, OpenAI used a five‑step process: defining the categories of potential harm, measuring them through evaluations and real‑world data, validating its approach with experts, fine‑tuning the models, and then iterating.
The company developed detailed taxonomies, or guides that define what safe, empathetic, and appropriate responses look like in various mental health situations. These are used to train and test models before deployment and to measure progress over time.
Before releasing GPT‑5, OpenAI performed structured offline evaluations focused on rare, high‑risk scenarios. According to these tests, GPT‑5 achieved over 90% compliance with desired behaviors, compared with around 77% in prior models.
Calls for transparency and accountability
While OpenAI’s improvements have been praised for enhancing user safety, some experts warn that the company’s massive reach amplifies the stakes. Professor Robin Feldman of the University of California Law School noted that chatbots “create the illusion of reality,” and even with warnings, “a person who is mentally at risk may not be able to heed those warnings.”
Critics also point to the lack of independent auditing and the potential for AI tools to unintentionally deepen distress among vulnerable users. This concern has already gained legal attention: a California couple recently sued OpenAI for allegedly contributing to their son’s suicide after interactions with ChatGPT.
Looking ahead
OpenAI said it will continue refining its safety systems and evaluation methods in future model releases. The company also plans to add emotional reliance and non‑suicidal mental health emergencies to its standard baseline safety testing, emphasizing that these updates are part of its evolving responsibility as AI use expands globally.
Although the company acknowledges more work is needed, OpenAI says the improvements mark a meaningful step toward ensuring ChatGPT provides empathetic, clinically informed support while steering people toward real‑world help when they need it most.

Disclaimer: All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that despite the nature of much of the material created and hosted on this website, HODL FM is not a financial reference resource, and the opinions of authors and other contributors are their own and should not be taken as financial advice. If you require advice, HODL FM strongly recommends contacting a qualified industry professional.