New data released by the tech giant shows 0.15% of users send messages including "explicit indicators of potential suicide planning or intent."
ChatGPT, a chatbot powered by a large language model, has around 800 million weekly active users, according to OpenAI’s chief executive, Sam Altman.
This new data comes after an LBC investigation discovered Grok, Elon Musk’s ChatGPT rival, gives out detailed suicide instructions and methods when asked by users.
When a user mentions taking their own life to ChatGPT, it directs them to a crisis helpline.
However, OpenAI has admitted "in some rare cases, the model may not behave as intended in these sensitive situations."
In a recent blog post, OpenAI said it reviewed over 1,000 "challenging self-harm and suicide conversations" and found GPT-5, the chatbot’s latest model, produced the “desired” behaviour 91% of the time.
Extrapolated across ChatGPT’s vast user base, that failure rate would still leave tens of thousands of people receiving an “undesirable” response that could worsen their mental health.
"ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards," OpenAI said.
"Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations."
