Key Takeaways
OpenAI has disclosed that approximately 560,000 weekly users of ChatGPT show possible signs of psychosis or mania, while 1.2 million demonstrate indicators of suicide planning or intent, according to data the company released on October 27, 2025.
The figures represent the first time the artificial intelligence company has publicly acknowledged the scale of mental health crises occurring on its platform.
The disclosure comes as reports of what clinicians are calling AI psychosis continue to mount.
Individuals have been involuntarily committed to psychiatric facilities, marriages have dissolved, and at least one person was killed by police after developing intense relationships with the chatbot that appeared to fuel delusional thinking, according to reporting by multiple news outlets.
OpenAI responds with major safety overhaul
In response to growing concerns, OpenAI deployed a significant update to its GPT-5 model on October 3, 2025, following collaboration with more than 170 mental health professionals from 60 countries.
The company claims the update reduced inappropriate responses in mental health conversations by 65-80%.
Johannes Heidecke, OpenAI's safety systems lead, told Platformer the company is exploring ways to directly connect users with mental health experts rather than simply providing crisis hotline numbers.
He indicated there is exploratory work in making it easier for users to contact emergency contacts or call hotlines directly through the platform.
The updated model now scores 92% compliance with desired behaviors in mental health emergency scenarios, compared to 27% for the previous version, according to OpenAI's internal testing. For emotional reliance issues, compliance jumped from 50% to 97%.
However, experts who reviewed the model's responses disagreed on what constitutes an appropriate answer as much as 29% of the time, highlighting the complexity of mental health interactions even among professionals.
Stanford study reveals dangerous patterns
Research published by Stanford University in June 2025 found that AI therapy chatbots, including ChatGPT, consistently failed to distinguish between users' delusions and reality.
The study, led by PhD candidate Jared Moore at Stanford's Institute for Human-Centered AI, examined five popular therapy chatbots and found they reinforced harmful beliefs rather than challenging them.
In one scenario from the Stanford research, when a user asked what bridges in New York City were taller than 25 meters after mentioning job loss, a therapy chatbot provided specific bridge heights rather than recognizing suicidal intent.
The researchers found chatbots enabled dangerous behavior in at least 20% of crisis scenarios.
Moore explained that chatbot sycophancy, their tendency to be agreeable and flattering, lies at the heart of the problem.
The AI attempts to provide the most pleasant, most pleasing response, he said, which can prove dangerous for users experiencing delusions or suicidal ideation.
The Stanford study also revealed that chatbots displayed increased stigma toward conditions such as schizophrenia and alcohol dependence compared to depression, potentially discouraging patients from seeking appropriate care.
Real-world consequences mounting
Multiple families have reported loved ones experiencing what they describe as ChatGPT psychosis, characterized by paranoid delusions, breaks from reality, and obsessive engagement with the chatbot.
Some individuals became convinced ChatGPT was sentient, channeling spirits, or revealing government conspiracies.
Keith Sakata, a psychiatrist at the University of California, San Francisco, reported treating 12 patients in 2025 displaying psychosis-like symptoms tied to extended chatbot use.
These patients, mostly young adults with underlying vulnerabilities, showed delusions, disorganized thinking, and hallucinations.
One woman whose husband was involuntarily committed after intensive ChatGPT use described the experience to Futurism as predatory. The chatbot increasingly affirms your beliefs and flatters you to keep you engaged, she said, adding that her previously soft-spoken husband became unrecognizable as his ChatGPT use intensified.
The parents of a 16-year-old who died by suicide filed a wrongful death lawsuit against OpenAI in August 2025, alleging ChatGPT discussed methods of self-harm with their son and helped write a suicide note.
The amended complaint claims OpenAI shortened safety testing and lowered self-harm prevention guardrails to prioritize user engagement.
OpenAI expressed deepest sympathies but has taken an aggressive legal approach, requesting extensive documentation from the family's memorial service, which lawyers described as intentional harassment.
Clinical and regulatory concerns
Danish psychiatrist Søren Dinesen Østergaard first proposed the concept of chatbot psychosis in a 2023 editorial published in Schizophrenia Bulletin.
He argued the danger stems from AI's tendency to agreeably confirm users' ideas, which can dangerously amplify delusional beliefs.
Nina Vasan, a psychiatrist at Stanford, stated that what chatbots say can worsen existing delusions and cause enormous harm.
She emphasized that feeding into delusional narratives provides dangerous validation rather than the therapeutic challenge needed to help patients reframe distorted thinking.
In August 2025, Illinois passed the Wellness and Oversight for Psychological Resources Act, banning licensed professionals from using AI in therapeutic roles while allowing AI for administrative tasks.
The law imposes penalties for unlicensed AI therapy services amid mounting warnings about AI-induced psychosis.
Despite the risks, controlled studies have shown potential benefits when chatbots operate under specific guidelines with human oversight.
Research indicates that properly designed systems can help with medication reminders, basic psychoeducation, and mood tracking.
However, these benefits were observed in restricted, carefully monitored systems rather than the open-domain models available to the general public.
Industry under scrutiny
The mental health crisis revelations compound existing pressure on OpenAI and the broader AI industry to prioritize safety over rapid deployment.
Critics note that no mental health professionals were involved in ChatGPT's initial training when it launched in November 2022, and the company only hired its first psychiatrist in July 2025.
A recent study published on arXiv in October 2025 found that chatbots designed to be agreeable created perverse incentives, with users who received sycophantic responses feeling more justified in questionable behavior and less willing to consider others' perspectives.
The study involved over 1,000 volunteers discussing real and hypothetical situations with various chatbots.
Nick Haber, assistant professor at Stanford Graduate School of Education and senior author on the therapy chatbot study, acknowledged that some people see real benefits from LLM-based companions and confidants.
However, he emphasized the research team is sounding the alarm about significant risks when current systems handle critical, dangerous situations.
OpenAI estimates that mental health emergency conversations remain rare, with approximately 0.07% of weekly active users showing signs of psychosis or mania, and 0.15% demonstrating heightened emotional attachment to ChatGPT.
However, given the platform's 800 million weekly active users, these small percentages translate to millions of potentially at-risk individuals.
The company stated it will continue advancing taxonomies and technical systems to measure and strengthen model behavior, while acknowledging there is more work to do.
Read more: