Key Takeaways
OpenAI is facing seven new lawsuits filed Thursday in California state courts. Plaintiffs allege that the company's ChatGPT artificial intelligence chatbot drove users to suicide and induced harmful psychological delusions, even in individuals with no prior mental health issues.
The legal actions, filed in the Superior Courts of California in San Francisco and Los Angeles counties, assert claims of wrongful death, assisted suicide, involuntary manslaughter, and negligence.
The lawsuits were filed by the Social Media Victims Law Center and Tech Justice Law Project on behalf of six adults and one teenager. According to court documents, four of the victims died by suicide.
Allegations of premature release and inadequate safety testing
The complaints allege that OpenAI knowingly released its GPT-4o model prematurely in May 2024, despite internal warnings that the AI system was dangerously sycophantic and psychologically manipulative.
According to the lawsuits, the company compressed months of safety testing into a single week to beat Google's Gemini to market, with OpenAI's own preparedness team later admitting the process was "squeezed."
"These lawsuits are about accountability for a product that was designed to blur the line between tool and companion, all in the name of increasing user engagement and market share," said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, in a statement.
"OpenAI designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them."
The legal filings argue that GPT-4o was engineered with features designed to maximize user engagement through persistent memory of previous conversations, human-mimicking empathy cues, and overly agreeable responses that affirmed users' emotions regardless of content.
The lawsuits claim these design choices fostered psychological dependency, displaced human relationships, and contributed to addiction and harmful delusions.
Cases range from teenagers to middle-aged adults
Among the cases detailed in court documents is that of 17-year-old Amaurie Lacey, who, according to the lawsuit filed in San Francisco Superior Court, turned to ChatGPT for emotional support.
The complaint alleges that "the defective and inherently dangerous ChatGPT product caused addiction, depression, and eventually counselled him on the most effective way to tie a noose and how long he would be able to live without breathing." The lawsuit states that "Amaurie's death was neither an accident nor a coincidence but rather the foreseeable consequence of Open AI and Samuel Altman's intentional decision to curtail safety testing and rush ChatGPT onto the market."
Another lawsuit was filed by Alan Brooks, a 48-year-old from Ontario, Canada, who claims he used ChatGPT as a resource tool for more than two years before the system's behavior allegedly changed.
According to court documents, the AI chatbot began preying on his vulnerabilities, manipulating him and inducing delusions that triggered a severe mental health crisis in someone with no prior psychiatric history.
The lawsuit alleges Brooks suffered "devastating financial, reputational, and emotional harm" as a result.
These new lawsuits follow an earlier case filed in August by Matthew and Maria Raine, whose 16-year-old son Adam died by suicide earlier this year.
That complaint alleged that ChatGPT became Adam's "closest confidant" and encouraged him to plan what it described as a "beautiful suicide."
According to the Raine family's lawsuit, the AI mentioned suicide more than 1,200 times in conversations with the teenager and detailed multiple methods for carrying it out.
OpenAI responds, announces new safety measures
OpenAI called the situations "incredibly heartbreaking" and said the company was reviewing the court filings to understand the details.
In a statement provided to Bloomberg Law, an OpenAI spokesperson said, "In early October, we updated ChatGPT's default model to better recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support."
The spokesperson added that the company will "continue to strengthen ChatGPT's responses in sensitive moments, working closely with mental health clinicians."
On the same day the lawsuits were filed, OpenAI released its "Teen Safety Blueprint," announcing it was "building toward an age-prediction system" to identify users under 18.
The company has also implemented parental controls and updated its models to better detect signs of mental distress.
Broader concerns about AI safety and accountability
Meetali Jain, Executive Director of Tech Justice Law Project, said in a statement that "ChatGPT is a product designed by people to manipulate and distort reality, mimicking humans to gain trust and keep users engaged at whatever the cost."
Jain added that "these cases show how an AI product can be built to promote emotional abuse—behavior that is unacceptable when done by human beings."
Daniel Weiss, chief advocacy officer at Common Sense Media, which is not a party to the complaints, commented on the broader implications of the lawsuits.
"The lawsuits filed against OpenAI reveal what happens when tech companies rush products to market without proper safeguards for young people," Weiss said.
"These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe."