Key takeaways
"The operators want engagement — they want your attention," said Dr. Stephan Taylor, chair of the Department of Psychiatry for the University of Michigan Medical School.
"They do have this tendency to kind of suck people in, because of how rewarding and addictive it can be when you have some entity that responds to you as if they're a conscious, sentient individual."
According to OpenAI, the company that operates ChatGPT, approximately 0.07% of users show possible signs of "mental health emergencies related to psychosis or mania."
With 800 million users per week, this translates to more than half a million people globally who may be talking to ChatGPT during a crisis.
Adolescents increasingly turn to AI for emotional support
The concern extends particularly to young people.
A nationally representative study by RAND published in JAMA Network Open found that approximately one in eight U.S. adolescents and young adults aged 12 to 21 use AI chatbots for mental health advice.
The study surveyed 1,058 youth between February and March 2025.
"There has been a lot of discussion that adolescents were using ChatGPT for mental health advice, but to our knowledge, no one had ever quantified how common this was," said Ateev Mehrotra, professor at the Brown University School of Public Health and a coauthor of the study.
Among those who used chatbots for mental health advice, 66 percent engaged at least monthly, and more than 93 percent reported that they found the advice helpful.
However, researchers caution that young people may be particularly vulnerable to the limitations and potential harms of these technologies.
Experts warn of unpredictable and potentially harmful responses
Dr. Tracy Juliao, a practicing mental health provider and University of Michigan-Flint educator, expressed concern that AI operators prioritize profit over user wellbeing.
"Often it becomes profit and not looking at what is in the best interest of the individuals who are using the product," she said.
The American Psychological Association issued a health advisory in November 2025 warning about the use of generative AI chatbots for mental health support.
"We are in the midst of a major mental health crisis that requires systemic solutions, not just technological stopgaps," said Arthur C. Evans Jr., CEO of the American Psychological Association.
"While chatbots seem readily available to offer users support and validation, the ability of these tools to safely guide someone experiencing crisis is limited and unpredictable."
The APA advisory emphasized that "due to the unpredictable nature of these technologies, do not use chatbots and wellness apps as a substitute for care from a qualified mental health professional."
Taylor also shared concerns that AI services can feed into the delusions of someone experiencing a mental health episode.
"By delusions, we mean false beliefs about the world that are not just kind of average false beliefs — but they're ones that are very much like around paranoia, or maybe they're a particular idea that a person has a special mission and will save the world," he said.
A study from Brown University published in October 2025 found that chatbots systematically violate ethical standards established by organizations like the American Psychological Association, including inappropriately navigating crisis situations and creating a false sense of empathy with users.
Human connection remains irreplaceable
Brian Babbitt, CEO of North Country Community Mental Health, emphasized the importance of maintaining human relationships.
"I think some of it is inevitable, but i do think that people really have to be cognizant of that," he said. "There is no replacement for that human connection."
Psychologists also warn that younger users could be more likely to act on irresponsible advice from a chatbot, potentially putting their own safety or that of others at risk.
The RAND study found that while AI-based mental health advice offers low cost, immediacy, and perceived privacy, engagement with generative AI raises concerns, especially for users with intensive mental health needs.
OpenAI announced in October 2025 that it had worked with over 170 mental health experts to improve ChatGPT's responses in sensitive conversations, claiming to have reduced undesired responses by 65-80%.