OpenAI has shared a concerning statistic: more than one million people every week use ChatGPT to talk about suicide, self-harm, or deep emotional distress. That figure represents roughly 0.15% of its weekly active users, and it shows that the chatbot has become more than a tool for productivity or curiosity; it is also being used by people in emotional crisis.
Key Highlights
| Topic | Details |
|---|---|
| Weekly Active Users | Over 800 million |
| Users Discussing Suicide or Self-Harm | Over 1 million (0.15%) weekly |
| Users Showing Psychosis or Mania Signs | Around 0.07% weekly |
| Improvement in Sensitive Response Accuracy | 65–80% better after new training |
| Experts Involved | 170+ mental health professionals collaborated |
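As a rough sanity check on the table (this is simple arithmetic applied to the reported figures, not a calculation OpenAI itself published), the percentages and the headline user counts are consistent with one another:

$$
0.0015 \times 800{,}000{,}000 \approx 1{,}200{,}000
\qquad\text{and}\qquad
0.0007 \times 800{,}000{,}000 \approx 560{,}000
$$

In other words, "0.15% of weekly active users" and "over one million people each week" are two ways of describing the same estimate, and the 0.07% figure corresponds to several hundred thousand users showing possible signs of psychosis or mania each week.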
Why This Matters
The data highlights a growing trend — people are increasingly turning to AI chatbots for emotional support. Many users might see ChatGPT as a non-judgmental listener that is available 24/7. While this shows how integrated AI has become in daily life, it also raises serious ethical and safety concerns.
AI systems are not designed to replace human empathy or mental health care. Relying on a chatbot during moments of suicidal thought can be dangerous, as the system might not always understand the severity of a situation or offer the right kind of help.
OpenAI’s Response
In reaction to these findings, OpenAI has taken several steps to improve how ChatGPT handles sensitive conversations. The company says it has:
- Trained the model to better detect signs of distress, suicidal intent, or self-harm.
- Reduced harmful or incomplete responses by up to 80%.
- Collaborated with more than 170 mental health experts to guide improvements.
- Introduced new internal policies to ensure safer and more compassionate responses to crisis-related prompts.
These updates aim to make ChatGPT better at recognising when a user might be in emotional danger and at encouraging them to seek help.
Concerns and Challenges
Even with improvements, challenges remain. Detecting emotional distress in text can be complex — tone, context, and cultural differences can affect how messages are interpreted. Critics say that while OpenAI’s data transparency is commendable, AI still lacks the emotional intelligence and real-world understanding needed for such sensitive topics.
Another concern is dependency. Some users may start turning to AI instead of real people, which can isolate them further. The balance between offering help and encouraging human connection is delicate — and essential.
The Bigger Picture
As ChatGPT and similar AI tools become part of everyday life, they are no longer just digital assistants — they are emotional companions for millions. This shift shows both the power and the risk of artificial intelligence.
The fact that over one million users each week discuss suicide on ChatGPT highlights a silent mental health crisis happening online. It also points to the potential of technology, if guided responsibly, to offer early support and possibly even save lives.
Final Thoughts
OpenAI’s disclosure marks an important step toward acknowledging the real-world emotional use of AI. It reminds us that while machines can talk, only humans can truly listen, understand, and care.
If you or someone you know is struggling with suicidal thoughts or emotional distress, it’s important to reach out to trusted people or professional counsellors. You are not alone, and help is available.